NOT VALID constraints
Nikolay: Hello, hello, this is
Postgres.FM.
My name is Nikolay, Postgres.AI, and as usual my co-host is
Michael, pgMustard.
Hi, Michael.
Michael: Hello, Nikolay.
Nikolay: So there is a topic I wanted to discuss, and it's widely used by grown-up projects. In tiny, new projects this topic is usually skipped.
This topic is related to my favorite
area in Postgres administration.
It's zero downtime changes.
First of all, schema changes.
We all know that it's a mistake to create an index without the word CONCURRENTLY on a live system, because it's not an online operation. There is such a term, online operation. Without CONCURRENTLY, index creation blocks other sessions which deal with the same table, and that's bad.
CREATE INDEX CONCURRENTLY is good: it takes roughly 2 times longer because it scans the table twice, but if it succeeds, we have an index and nobody was blocked, which is great.
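For example, a minimal sketch (the table and column names are hypothetical):

    -- builds the index without blocking concurrent reads and writes
    CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);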
And we discussed many times that there are many changes which are tricky. For example, if you just add a column, you can block others if you yourself are blocked by, for example, a vacuum running in transaction ID wraparound prevention mode. And in many cases there are tricks where, instead of a single step, you do multiple steps, but achieve the same result in a truly online fashion. Let's talk about some details here.
First of all, how do we introduce new constraints on existing columns? First of all, check constraints and foreign keys.
Michael: Yeah, should we talk about what we'd do if we didn't have this feature? If we didn't have the ability to do it in multiple steps, if we couldn't set it NOT VALID first, what would we do? Or would we live without the constraint?
I think it's worth discussing what
kind of lock is taken and
why we need this feature or why
this feature was implemented
in the first place.
Nikolay: Yeah, let's talk about this. Imagine we already have a column and we want to change something about it. For example, changing the default is easy, because the default is only about the future. Postgres doesn't need to check the existing rows and doesn't need to change anything in them; it's only a metadata change. So ALTER TABLE ... ALTER COLUMN ... SET DEFAULT is super fast, once the lock is acquired.
This lock acquisition topic is important for almost any ALTER. In some cases you need an exclusive lock; in some cases a weaker one. I remember differences. For example, if you alter a table to add it to a publication, to enable logical replication, it's not that bad. But if we do ALTER TABLE ... ALTER COLUMN ... SET DEFAULT, it's quite bad: it's fast, but you need an exclusive lock. If you cannot acquire the lock, you are blocking others.
But this is a simple example. It's super fast, nothing to worry about in terms of the operation itself taking long. Lock acquisition might take long, right? We need to take care of it: we need a low lock_timeout and retry logic, and we need to be ready to use them.
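A minimal sketch of that approach (the table, column, and timeout value are hypothetical):

    -- fail fast instead of queueing behind long transactions
    -- and blocking everyone else
    SET lock_timeout = '2s';
    ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'new';
    -- on "canceling statement due to lock timeout", retry,
    -- ideally with some backoff and jitter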
Again, as my comment in the beginning said, we are specifically talking about projects which have grown already and have significant load, say more than 1,000 TPS. Other projects also experience this, but they just don't notice, because traffic is too low. Right.
Michael: Yeah, and we did a whole episode on the general case here. I think we did one on zero-downtime migrations, where we went into detail on the general case, didn't we?
Nikolay: Yeah, I agree. If you are interested in this topic, please listen to that episode as well, because today we are going to go deeper into a specific, narrow area, which is actually not that narrow. If you take any project and three years of its lifetime, these things definitely happen.
Adjustment of a default is relatively easy; we just need to remember this lock acquisition issue. But let's go further. For example, we created a table, it has... Oh, by the way, another comment. Folks who prefer JSON, JSONB, or a very flexible schema would say it doesn't matter: we keep everything in text, no additional constraints, let the application take care of everything.
And I understand this position, but I fully disagree with it, because database constraints are a very powerful mechanism to ensure the data is clean and of high quality. Right? So we...
Michael: And I think there are even extensions that help you validate schemas within JSONB columns. Right. And I think they're using constraints.
Nikolay: But anyway, the constraint capabilities in the database system are very powerful. And when you shift these to application logic, you should be 100% sure that in the future you won't write a second application which deals with the same database, or with different parts of it, in different code. Or use some UI which allows people to change data and bypass those checks that are left in your application. So constraint validation should be done closer to the data, and that is the database system.
Okay, default: easy. Next, let's jump to the main case: NOT NULL. NOT NULL is a tricky constraint because, in my opinion, it should not exist. We have the check constraint, which is much more powerful; it's a superset of NOT NULL. It includes it, because we can say: check, this column IS NOT NULL. That's it. Easy.
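For example, a sketch with hypothetical names:

    ALTER TABLE users
      ADD CONSTRAINT users_email_not_null CHECK (email IS NOT NULL);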
But NOT NULL also exists because it's in the SQL standard. I think it was created before check constraints were added to the standard. I guess. It's my guess; I might be wrong.
Michael: I'm not even sure check constraints are definitely in the standard.
Nikolay: Check constraints should be in the standard.
Michael: Okay.
Nikolay: I'm not 100% sure, but it would be very strange if not. But obviously, why do we have NOT NULL constraints at all? We have check constraints, right?
And it's interesting that if you define a primary key, it requires a not null constraint. It needs it. It requires two things.
Michael: It's inherent.
Nikolay: Yeah. A unique constraint and a not null constraint.
Fortunately, with the Postgres 12 optimization, we can forget about this distinction: we can build a check constraint with NOT NULL, the actual not null constraint will then be defined efficiently on top of our check constraint, and the primary key will use it. But that's an implementation detail, right? Before 12 it was a nightmare.
I remember discussing with very experienced DBAs how we could introduce not null constraints. NOT NULL doesn't support the two-phase approach we're discussing today: you cannot define a not null constraint first as NOT VALID and then validate it. It's not possible.
Before 12, it always led to a full table scan. I remember an experienced DBA said: you know what, between us, we just adjust the system catalogs, because we know there are no records violating this constraint, so we accept the risk and just do it. That was around Postgres 8.0 or something. But this is an ugly hack, actually.
Risky if you don't know
Michael: exactly what you're doing.
Nikolay: Well, it's not super ugly, not super risky, but fortunately since Postgres 12 we don't need it: we can define a check constraint in this two-phase manner, and then the not null constraint will be implicitly defined very fast, without a full table scan.
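A sketch of the whole sequence, with hypothetical names (the SET NOT NULL shortcut works on Postgres 12 and newer):

    -- phase 1: instant, existing rows are not scanned
    ALTER TABLE users
      ADD CONSTRAINT users_email_not_null
      CHECK (email IS NOT NULL) NOT VALID;
    -- phase 2: full scan, but without blocking reads and writes
    ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;
    -- skips the table scan, because the validated CHECK constraint
    -- already proves there are no NULLs
    ALTER TABLE users ALTER COLUMN email SET NOT NULL;
    -- the helper CHECK constraint can now be dropped
    ALTER TABLE users DROP CONSTRAINT users_email_not_null;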
So, for example, we create a check constraint and define some logic in it. For example, not null, or something else; for example, we can say all integer values in this column should be positive.
Michael: Yeah, well, it's super flexible, isn't it? You can pretty much define whatever you want via any query in a check constraint. It's
Nikolay: really cool. Not any query, unfortunately. There is a huge limitation.
Michael: Can you refer to columns from multiple tables?
Nikolay: You need to stay inside the same table's context; you cannot reference other tables, unfortunately.
Michael: But it's pretty powerful,
right?
Nikolay: Of course.
Michael: You can reference multiple columns within the same table. But yeah, the normal use cases are like: this number cannot be negative, or greater
Nikolay: than something, or odd. Or something. This number should
Michael: be proper. Yeah. Or, I've not seen it in the wild, but maybe a certain maximum value or something.
Nikolay: Yeah. Yeah, ranges for many, many things. It's good practice, because this is how you ensure data has high quality. Yes. And that's it.
But imagine we have a billion-row table, which should not happen: you should have partitioning in place and not allow a table to grow beyond a few dozen million rows, maybe. But it happens all the time. In grown-up projects, I see billion-plus-row tables all the time.
So we define this constraint, and what does Postgres do? If we just say ALTER TABLE blah blah blah, Postgres acquires a lock to adjust the metadata in the system catalogs, and then, of course, it goes and checks existing rows to see if there is any violation. If there is, it rolls back. And this is super bad, because for 1 billion rows it takes a lot of time.
Michael: Full table scan. And while it's doing that, it's blocking any new writes.
Nikolay: Exactly. Not only writes, SELECTs as well, because it holds an access exclusive lock.
Michael: Yeah, true.
Nikolay: So it's super bad. This is where some engineers get a feeling like: they promised us that SELECTs are not blocked, what's happening here? Well, they are not blocked by regular writes, but you are changing the rules right now. Rules meaning you're changing the schema. So this is not simple, right?
So you acquire the lock, you start reading, and it can take a lot of time: minutes, maybe more, depending on disk capabilities and the actual table size. Even SELECTs are blocked. This is super bad. That's why, for CHECK constraints (not for NOT NULL; for NOT NULL we don't have it), we have the ability to say NOT VALID. We say ALTER TABLE blah blah blah, ADD CONSTRAINT, and then we specify NOT VALID. It's a flag.
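Something like this (hypothetical names):

    ALTER TABLE orders
      ADD CONSTRAINT orders_amount_positive
      CHECK (amount > 0) NOT VALID;  -- fast: existing rows are not scanned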
By the way, terrible naming.
Michael: Yeah.
Or confusing.
If you had to guess what it meant without knowing, I think you wouldn't get the details right.
Nikolay: Right. Why is it terrible? Because this transaction is fast: it doesn't check the existing volume of data at all. It just defines the constraint as if the table was very small, like empty. Super fast. Only lock acquisition is needed. Again, we need lock_timeout and retries. This is the rule.
Michael: I think that's not the bit that would have confused me. The thing that would have confused me is that it will be checking new rows and updates.
Nikolay: Exactly, yes.
Michael: That's what's confusing, right?
Nikolay: You, okay, you brought it up before. It's okay. Yeah, but I mean, the mechanics are: we create the constraint as if the table's empty, and that's it. And it's marked NOT VALID in the table definition if you use backslash-d in psql, and you say: oh, this is a NOT VALID constraint, I don't care.
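In psql it looks something like this (a sketch, continuing the hypothetical example):

    \d orders
    ...
    Check constraints:
        "orders_amount_positive" CHECK ((amount > 0)) NOT VALID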
And I remember I was preparing my first complex transformation of a 4-byte primary key to an 8-byte primary key. It was a very complex operation, like 7 steps, and each step was reversible. And I remember, for the sake of reversibility... it was not about check constraints, it was about foreign keys, but it's quite similar there.
You can also say NOT VALID for a foreign key. And I left NOT VALID foreign keys on some shadow table or old table or something, and I thought they wouldn't be used. It was a huge mistake; it led to some bug. Fortunately, I quickly realized: okay, NOT VALID doesn't mean it's not working. It's still working. It means it's applied to all new writes: INSERTs and UPDATEs, not DELETEs. In the case of a CHECK constraint, DELETEs don't matter, right, because there is nothing to check. Yeah. Yeah.
But in the case of foreign keys, DELETEs also matter, right? Yeah, true. Because if you DELETE the referenced row...
Michael: The referencing row.
Nikolay: Yeah, the referencing row becomes an orphan.
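A sketch with hypothetical names:

    ALTER TABLE orders
      ADD CONSTRAINT orders_user_id_fkey
      FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;
    -- new changes on both tables are enforced immediately;
    -- only the existing rows in orders are skipped until validation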
So for all new writes, and let's say "writes" aggregates all changes of data here. You define a check constraint, let's say NOT NULL, you try to insert a null, and it says: okay, violation of the NOT VALID constraint. How does that sound? Not good. It's confusing.
So I think a better name could be... I had an idea yesterday when we were discussing this with one client; I forgot. What would be better? NOT VERIFIED, maybe, right? Not verified.
Michael: Yeah, it's better. Naming things is notoriously difficult, right? But yeah, we've got this now and we're stuck with it. I think the distinction between the types of writes is quite important, though. I think I would have gone away and thought: oh, if I change that column, it will validate it. But I wouldn't have realized that if I changed a
Nikolay: different column.
Or no columns at all. As you remember, my favorite example: UPDATE ... SET id = id, doing nothing logically, but physically it produces a new tuple. The new tuple is also checked against this NOT VALID constraint,
Michael: and it
Nikolay: complains: violation.
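For example (a sketch; the table and values are hypothetical):

    -- logically a no-op, but MVCC produces a new tuple,
    -- and that tuple is checked against the NOT VALID constraint:
    UPDATE posts SET id = id WHERE id = 42;
    -- ERROR: new row for relation "posts" violates check constraint ...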
Michael: Yeah, so in case that wasn't clear: that tuple has a violation in it, and because we haven't validated yet, it didn't fail before; but then we try to write that tuple again with the invalid column. So yeah, "not verified" feels good, actually; that makes sense. But I don't think the side effects of leaving a constraint in a NOT VALID state are obvious. Or at least they weren't to me when I first came across it.
Nikolay: But despite the naming, I think the idea that it's already working for new writes is great, because it gives you the opportunity to say: okay, we are now covered for all future writes, all future data. Now let's take care of existing data.
First of all, we need to...
I would not go straight to AlTER TABLE
VALIDATE CONSTRAINT, which
scans the table in non-blocking
way.
This is the key.
It doesn't block anyone.
It just scans it.
And if everything is clear, it
updates system catalogs.
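That second phase is a single command (continuing the hypothetical example from above):

    -- long scan on a big table, but reads and writes continue
    ALTER TABLE orders VALIDATE CONSTRAINT orders_amount_positive;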
Again, I think a lock will be needed, maybe, right? But only at the end. I'm not sure if retry logic is needed inside this. It's an interesting question.
Michael: I looked it up. There's a really good note in the ALTER TABLE docs. It says validation acquires only a SHARE UPDATE EXCLUSIVE lock on the table being altered, for a check constraint. And if the constraint is a foreign key, then a ROW SHARE lock is also required on the table referenced by the constraint.
Nikolay: Which makes sense. Yeah, yeah, yeah. It might fail; for example, again, if autovacuum is running in transaction ID wraparound prevention mode, I think it can be blocked by it. Fortunately, we can retry multiple times. Yes, it will read the whole table, a disk I/O-intensive operation, but it's already much better.
The key is that we don't block anyone. If we succeed, it's great. But we might not succeed if there are nulls, or, for a different check constraint, rows which violate it. In this case, before doing this second step, there should be an additional step in between: we should take care of violations ourselves. Either we delete those rows or we update them, setting some value, right?
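A sketch of such a cleanup (hypothetical names; how to fix violations is domain-specific):

    -- how many existing rows violate?
    SELECT count(*) FROM users WHERE active IS NULL;
    -- fix them; on a large table, do this in batches
    UPDATE users SET active = false WHERE active IS NULL;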
Michael: Quick question: why not do that before
Nikolay: introducing the constraint? Yeah, good question.
And I think, since the NOT VALID constraint already covers all future writes, I would prefer to first introduce the constraint, then clean up, taking care of violations in one way or another (it's very domain-specific), and then VALIDATE CONSTRAINT. Why? Because this way we don't introduce any gaps where new violations can be added.
If we move the cleanup step before, if we have it as the very first step, then, depending on the logic of the application, but in the general case: after we're done and before we start creating the constraint, some new writes might happen and violate it once again, right? And this is bad.
Michael: So yeah, I guess it depends how many we've got. But if we've got loads, then we're going to end up with a NOT VALID constraint for quite a while, while we clean up quite a lot of data. My feeling would be, I guess both might be the solution. Get an idea of how many you've got in your existing data, work out a plan for fixing those, and deal with the ones that you knew about at that time. So there's some snapshot from yesterday, we checked which rows were in violation, we've got this script to fix them all up, and we've run it. Now maybe we check again, see if there are any new ones, or maybe then you add the NOT VALID constraint, because at that point you're only going to have a day's worth, or however long it takes your organization to deal with those. So it could be that you do it before and then in between.
Nikolay: Yes, unless we don't want to scan the table too many times, because the cleanup already needs to read the data, and this VALIDATE CONSTRAINT step, the final step, also needs to. But maybe we should do it before, as the very first step, then introduce the constraint, do it again, and then validate. Maybe this is the ideal sequence.
But regardless of that, if we just blindly say: okay, we introduce the constraint, then clean up, then validate, this may lead to not understanding that the NOT VALID constraint is already applied to new writes. This may lead to late code fixes. And if the code, application code or whatever, continues to perform new writes which violate the constraint, this is a problem, because it will bring more and more violations, right?
But there is another problem, too, exactly what you brought up. Imagine we perform an update changing another column. For example, if it's some post or something, we just edit the title. And there is some additional column which we want to be NOT NULL. We had some NULLs, and we want to convert them to, I don't know, false or 0 or something, because we finally realized we don't consider NULLs as zeros or empty strings. So we have a bunch of rows which still have NULLs, and we've already introduced the constraint. Then a user goes and tries to, for example, edit the title, and cannot, because the constraint is already working. When the title is edited, Postgres, as we know (it's MVCC), copies the whole row; the physical row is called a tuple, right? And the new tuple violates the NOT VALID constraint. So editing the title is broken. It means we must clean up as the first step. Accept gaps.
Michael: So this is why we always
need to validate the constraint
eventually.
When I say eventually, I mean at
the end of this process and
ideally as quickly as possible
to avoid issues.
Nikolay: Do you see how deep the topic is? Yeah, it's interesting.
Michael: For sure.
Nikolay: Unexpectedly, right?
Michael: What do you see in the wild? Do you see people trying to do this almost instantaneously? Because if we run these steps back to back, it's still much, much better than introducing the constraint without NOT VALID.
Nikolay: Honestly, I think I saw systems, maybe smaller projects, and maybe it was because of me, where we made a mistake and did the cleanup as the second step, not having it before the first ALTER TABLE. It should be before the first ALTER TABLE, definitely. And we just didn't notice this bug. I mean, it's a bug in the application, not a bug in Postgres: a user tries to edit a title, but because of some additional column's violation, it fails. What's happening here? Now I'm more cautious about it, so I would put it in first place.
Michael: So just to clarify, the first step: we manually run, we manually
Nikolay: check for violations.
Michael: Right, not the automatic validation. Yeah, exactly. So you know what I mean? We don't use the validation step to check whether
Nikolay: it's actually... The first step is to get rid of violations even before you introduce the constraint. Then you introduce the NOT VALID constraint, then ideally once again get rid of violations, because maybe we had a gap, maybe new ones came in, and then VALIDATE CONSTRAINT.
Michael: Actually, I think I'm wrong there. I think we'd be better off checking for violations, fixing them, and checking for violations again. And if you find any, don't go ahead with your plan, because you've still got a problem at the application level with bad data being inserted. So probably don't even get started, because you need to go back and stop it.
Nikolay: This algorithm needs to be visualized already.
Michael: Yeah, yeah. But do you see what I mean? Because otherwise we're going to have errors.
Nikolay: Yes, exactly. But how does it change things? Okay: clean up, create the constraint with NOT VALID... while you are shaking your head.
Michael: I'm talking about leaving a gap at the beginning. So: clean up the existing data, wait a while.
Nikolay: Waiting is a good idea, yeah. See if you need to clean up again. First, fix your application, so it doesn't produce wrong data anymore. Second, clean up. One more step: wait. Like days, weeks, I don't know. Then introduce... no need for a second cleanup... introduce the constraint. Why not? Ah, okay, a second cleanup. Why not
Michael: just check?
Nikolay: Why not check without cleanup? Okay, I mean, yeah.
Michael: It's just a minor step, it doesn't take long. One more scan of the table, though.
Nikolay: Okay, this is the full-fledged version. One more time. Fix the application code. Clean up. Check, and clean up again if needed. But if cleanup is needed again the second time, it means the code is not fixed yet. This is great. Then, ALTER TABLE ... ADD CONSTRAINT ... NOT VALID. And then, what do you think, do you need a new cleanup? Maybe not anymore, right?
Nikolay: Yeah.
Well,
Michael: because at this point, if you just validate it, if you
needed cleanup, it will fail.
Nikolay: Yeah, it's similar to just checking.
Yeah.
Yeah.
Yeah.
So yeah, I agree I let's remove clean up between to ALTER TABLE
steps and but have to Clean up plus additional check and some
some time between them.
Yeah, that's great.
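A consolidated sketch of the sequence just agreed on (hypothetical names; every ALTER needs a low lock_timeout plus retries):

    -- 0. fix the application so it stops producing violating rows
    -- 1. clean up existing violations (batched, domain-specific)
    UPDATE users SET active = false WHERE active IS NULL;
    -- 2. wait, then check again; new violations mean the app is not fixed
    SELECT count(*) FROM users WHERE active IS NULL;
    -- 3. fast; covers all new writes from now on
    ALTER TABLE users
      ADD CONSTRAINT users_active_not_null
      CHECK (active IS NOT NULL) NOT VALID;
    -- 4. long scan, but only a SHARE UPDATE EXCLUSIVE lock
    ALTER TABLE users VALIDATE CONSTRAINT users_active_not_null;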
The final question I have here: could we just have ALTER TABLE CONCURRENTLY and have less headache when doing these things? What do you think? For comparison, CREATE UNIQUE INDEX CONCURRENTLY creates a similar situation: it's not just an index, it's basically a constraint. And if we have duplicates, it will fail. What we need to do in this case is manually clean up the invalid index (it will be shown as INVALID) and then make a new attempt from scratch.
Michael: You said index there, but did you mean constraint?
Nikolay: Well, the constraint will be created implicitly. There's a subtle difference between unique indexes and unique constraints; let's not go there. And I know places in Postgres where this confusion violates logic. I have a patch proposed 5 years ago or so; nobody took care of it, it's okay.
Anyway, when we create a unique index, logically we are building a constraint. So if we have duplicates, this operation will fail, leaving an invalid index behind, because this operation is also two-phase: a single line of code, but two-phase, right? We need to drop that index, again CONCURRENTLY, and then, after cleaning up duplicates, start from scratch.
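A sketch with hypothetical names:

    -- fails if duplicates exist, leaving an INVALID index behind
    CREATE UNIQUE INDEX CONCURRENTLY users_email_unique ON users (email);
    -- clean up, deduplicate, then retry
    DROP INDEX CONCURRENTLY users_email_unique;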
How about here? We could say ALTER TABLE CONCURRENTLY: ALTER TABLE blah blah, ADD CONSTRAINT ... CHECK ... CONCURRENTLY. If there are problems, it would fail, and we'd need to clean up and try again. I don't know.
Michael: I don't understand the reason... I've never really thought about this before, but why does CREATE INDEX CONCURRENTLY, if it fails, leave the index behind? Why can't it fully tidy up after itself?
Nikolay: Because it's not transactional.
Michael: But if it's not transactional already, why can't it do the DROP INDEX? Why can't it know what it left behind?
Nikolay: Yeah, it's a good question, actually. And it could do DROP INDEX CONCURRENTLY. Yeah. It's a good question. I will think about this.
Michael: There's probably a reason,
right?
People would have thought of that.
So I imagine there is a good reason,
but I don't know it.
Nikolay: Maybe it's just easier
to implement, that's it.
Michael: Maybe, but that would
be a good feature if it's not
really difficult.
Nikolay: Yeah, the user should be okay with just the error message.
Michael: Yeah, exactly: it didn't work, you're back where you started, even if it went through several transitions.
Nikolay: Fix your duplicates and try again. But if you don't use explicit naming for your indexes, there is a risk that you make 10 attempts and don't notice that you have 9 invalid indexes left behind, because you didn't check the table definition. This is a risk here. If you use explicit naming, of course it will say an index with such a name already exists, right? Or no? Even if the existing index is invalid?
Michael: I actually don't know if that happens when it's invalid. I assume so.
Nikolay: I think, yeah, the name will be taken. It must be, yeah. Yeah.
So I always prefer explicit naming for indexes, just to control the naming schema, to understand which columns are indexed, and so on. But also because of this: to have better control over my attempts. And if it's CONCURRENTLY, it's always CONCURRENTLY, almost always.
Yeah, so anyway, back to this ALTER TABLE ... ADD CONSTRAINT, a check constraint, CONCURRENTLY. Why not? It could just do the 2 phases: first phase NOT VALID, second phase validation. If validation fails, whether to clean up or not is a separate topic now; you detached this topic.
Michael: I prefer clean up, but
yeah.
Nikolay: But wouldn't it be so great?
Michael: Yeah.
I feel like CONCURRENTLY is coming
up in every episode these
days.
It's good.
Nikolay: Great, great.
Yeah.
So, ALTER TABLE CONCURRENTLY.
Michael: I think it'd be a great
feature.
Nikolay: It's not easy to implement, I'm sure.
Michael: And again, non-transactional. It's another one of those things that would have...
Nikolay: It means that there is a promise that Postgres has transactional DDL, but when you start working with it, it's not quite so. In real life, we don't have fully transactional DDL, because CREATE INDEX CONCURRENTLY is not transactional. And when you split a change into several steps, each one of them is transactional, which is great, but sometimes you cannot reverse the whole thing, and overall you can lose something along the way.
Michael: Yeah, I think it's fair to say we do have transactional DDL in Postgres, but it's not practical in larger projects with high load, because you just can't afford the downtime that comes as a result of that transactional DDL. So in medium-to-heavy-load, large enough setups, we don't have transactional DDL, or it's not practical; we can't practically use it. So these practical solutions are really valuable, I think.
The other approach seems to be, yeah, the complete table-rewrite type of approach, like we talked about last week. Or there's another project, and it's definitely a whole other topic, but have you come across pgroll? I think it's been brought up a couple of times on the podcast.
Nikolay: Yes, in the past. A full rewrite of a table.
Michael: Yeah, exactly.
I think that's another solution to this, right?
Nikolay: Just to introduce constraints? Thank you, no.
Michael: I don't know if that's what they do for this, but I
think that's how they handle most schema changes.
Nikolay: I understand that.
Michael: It's heavy-handed, but it does also solve this problem.
Nikolay: Yeah, yeah, yeah. Now I remember: when we discussed pg_squeeze, we discussed changing the order of columns. That makes sense there, and of course CONCURRENTLY... not CONCURRENTLY.
Michael: I guess one final solution, again super heavy-handed: you could logically replicate to an instance that already has the constraints you want in place. A huge amount of work.
Nikolay: When we just want a constraint, we only need to read everything to ensure there are no violations. It's not a big change, it's just adding a restriction, so we don't need to change data.
Michael: Yeah, it shouldn't be.
Nikolay: Anyway, I think that's it; that's what I wanted to discuss. Anything else?
Michael: Yes, a super useful feature. I'm hoping more people become aware of it now.
Nikolay: You mean ALTER TABLE CONCURRENTLY, which doesn't exist?
Michael: No, I mean ALTER TABLE ... ADD CONSTRAINT ... NOT VALID.
Nikolay: NOT VALID. With a little bit confusing naming, so watch out: new writes are already being checked.
Michael: Yeah, I'll include some links to the documentation, because that explains it pretty well as well.
Nikolay: Great, great.
Thank you.
Michael: Merci Nikolay, thanks so
much.
Nikolay: See you soon.
Bye bye.
Michael: See you soon. Bye