Nikolay: Hello, hello, Postgres.FM.

My name is Nik, Postgres.AI, and
as usual my co-host is

Michael, pgMustard.

Hi, Michael.

Michael: Hi, Nik.

How's it going?

Nikolay: I'm very good.

How are you?

Michael: Yeah, very good also.

Nikolay: It's again this time of
the year, right?

When it's obviously the right moment
to discuss something.

We make a mystery of it, but people
already saw the title.

And I'm pretty sure you won't hide
it in the title, right?

This elephant in the room.

Michael: I was going to call the
episode Christmas for Postgres

users.

Nikolay: Big box of gifts.

Michael: Exactly.

Nikolay: Cool.

Okay, let's talk about it.

Michael: Well, I think for anybody
brand new to Postgres or like

relatively new to the show, we've
done these episodes each year

when the major releases come out
and Postgres has got a really

good track record for many many
years now of doing a single major

release each year around this time
of year and that's the only

release in the year that will include
like new functionality,

breaking changes, you know very
semantic version style major

version.

Nikolay: No, it's not SemVer
at all. And for many years we have
had a dispute inside my team:
SemVer or not SemVer.

I noticed, by the way, Tiger Data,
Timescale, they appreciate the
Postgres approach: only 2 numbers,
not 3, because original SemVer
is 3 numbers.

And I understand the logic behind
3-number versioning, but I like
the Postgres approach much more.

It's much more simplified and so
on.

Michael: Cool.

Well, breaking changes will only
ever go into a major version,

but you're right, even the major
version, we would still cut

a new major version even if it
was only new features, even if

it didn't have breaking changes.

So that's a good point.

So yeah, you're right.

Nikolay: Yeah, yeah, yeah.

It's only 2 numbers, but yeah,
major version, minor version.

Simplicity.

Michael: Yes.

Nikolay: And major version changes
every year.

Michael: Major version every year,
minor versions at least every

quarter, sometimes more often if
they need to do quick security

fixes.

So yeah, this is the time of year
where we get new features and

the only time of the year that
we get new features in core Postgres.

So as such there tend to be a lot
of them, so we've got a lot

of things we could be choosing,
no chance we're going to talk

about them all today, but the idea
is to talk about some of our

favorites, some of the kind of
categories of things that are

coming, things that we've been
kind of involved with or emotionally

attached to or wanting for many
years, all sorts.

Where would you like to start?

Nikolay: Well, first of all, maybe
we should just mention the

features which we already discussed
in depth.

And sometimes we had a whole episode
for some features which
are going to be out.

Right, so for example, skip scans
in B-tree, right?

Michael: Yeah, that's a huge one.

And we had a great chat with Peter
Geoghegan, didn't we?

All about them.

Nikolay: Right, right.

And the TL;DR version of it: take an
index on two columns,
for example.

It will sometimes be able to support
searches on only the
second column.

So the rule we got used to
over time — that a column must be in
the first place
if we need searches on it, or that we must
create an additional index only on that
second column —

now it's sometimes not true, and
things become more complicated

when we make decisions on indexing,
right?

This is my perception, oh, like
it's even more complicated now.

Michael: Yeah, I would add a little
bit that it's about the efficiency

of scanning that index as well.

So previously we could scan a full
index for things that didn't

have filters on the leading
columns — it just

had no chance of being efficient,
or it was very unlikely to be

efficient, depending on the exact
case. Now Postgres can efficiently

scan it, skipping across
the first column if it doesn't have

high cardinality.

Let's say I think you gave the
example a while ago of a Boolean.

So if we had only true and false,
we could jump to the true and

check that for the thing we want
to filter on, and then skip

the rest of the true values, then
jump to the false values and pick
up again there.

So it's a much, much more efficient
way of being able to scan

indexes that aren't perfect for
our query.
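For illustration, a minimal sketch of the kind of case skip scan can help with (table and index names are made up):

```sql
-- A two-column index whose leading column has very low cardinality.
CREATE TABLE orders (
    archived    boolean NOT NULL,
    customer_id int     NOT NULL,
    payload     text
);
CREATE INDEX orders_archived_customer_idx ON orders (archived, customer_id);

-- Before Postgres 18, a filter on the second column alone usually meant a full
-- index or sequential scan; with skip scan, the B-tree can jump between the few
-- distinct values of "archived" and probe customer_id within each group.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```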

Nikolay: For performance it's excellent
news.

Excellent.

I'm just thinking about the decision-making
process when we have

a lot of tables and complex workload
and we need to choose proper

index set.

And I'm already, as you know, I
shifted my mind totally to full

automation because of full self-driving
idea, full self-driving

Postgres.

And I see several good attempts
to create index advisor mechanisms.

1 more is from Konstantin Knizhnik;
there is a blog post Neon
published this week.

And there is also the pganalyze
Index Advisor.

I was just thinking how this new
feature will affect those tools.

Michael: Interesting to see.

The good news for users though,
this is 1 of those features that

if you upgrade you might just see
certain queries get faster

without having to add new indexes.

If you haven't thoroughly gone
through all of your queries, optimizing

them with perfect indexes already,
there's a good chance that

some of your existing indexes will
better serve queries that
you haven't optimized.

Nikolay: So if in the past we could
consider something like this
as a mistake, now, like, Postgres
is more foggy in this area.

Michael: Much more.

Yeah.

Nikolay: Good, good.

Michael: So yeah, this is 1 of
those cool features that I think

loads of people are going to benefit
without even necessarily

knowing that they're going to,
or...

Nikolay: As usual, my advice remains
to verify everything with
real testing, with proper datasets
and settings of the Postgres planner
and work_mem, which is not a planner
setting.

And just verify everything and
so on.

Cool.

Michael: Testing and

Nikolay: so on.

And maybe, maybe we will be able
to drop some indexes additionally

because of this feature, who knows?

It's yet to be discovered, because
we don't like a lot of indexes

due to many reasons, and 1 of them
will be discussed
later, I'm pretty sure.

What else we already discussed?

Michael: So the 1 we've discussed —
I think we've had at least
2, maybe 3 episodes on it — was whether
BUFFERS should be on by default.

Nikolay: Ah, okay, I didn't get it,
now I get it.

Yes, this is our favorite, and
this is big news for this podcast

actually.

Right.

This is why in January I said this —
I think it was committed in January,
quite a while ago, or it was already
obvious that it was going
to be committed.

So we spent 2 years, actually —
this is my position, my point
of view — we spent 2 years advertising
using BUFFERS inside
EXPLAIN when you do EXPLAIN ANALYZE,
and finally, I feel like,
I feel slightly sad because
we don't need this anymore.

I actually had a t-shirt with this,
EXPLAIN (ANALYZE, BUFFERS), with BUFFERS

written in bold.

So yeah, after Postgres
18 hits your Postgres servers,
BUFFERS is not needed anymore.

You just say EXPLAIN ANALYZE and
you get BUFFERS.

Yeah, I had some slight shift.

I know the pgMustard website has a
recommendation on how to properly

collect plans, execution plans.

And I shifted my mind already,
focus shifted to SETTINGS, WAL,

and VERBOSE.

Great.

Yeah.

So now, for older Postgres versions,
we write EXPLAIN ANALYZE,

BUFFERS, WAL, SETTINGS, VERBOSE.

Then it will be minus 1 word.
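So the habit change looks roughly like this (the query is just a placeholder):

```sql
-- Up to Postgres 17: BUFFERS (and friends) have to be requested explicitly.
EXPLAIN (ANALYZE, BUFFERS, WAL, SETTINGS, VERBOSE) SELECT count(*) FROM pg_class;

-- From Postgres 18: EXPLAIN ANALYZE reports buffers by default, so it's one word less.
EXPLAIN (ANALYZE, WAL, SETTINGS, VERBOSE) SELECT count(*) FROM pg_class;
```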

But it feels still like there is
a room for improvement here.

And we had this discussion.

So let's just refer to it and move
on.

I think that's what got me confused,
because we also had a couple

of episodes about UUID.

Maybe 1 of them was without you
actually.

Michael: It was.

Nikolay: Yeah, this uuidv7 work
started a couple of years

ago when Kirk Wolak joined the Postgres
hacking sessions where Andrey

Borodin and I try to cook something
online on the Postgres TV YouTube
channel.

And he joined with the idea to
implement ULID originally.

Then it shifted to uuidv7,
but then we needed to wait.

I mean, Postgres project needed
to wait for quite some time because

RFC was not polished and Peter
Eisentraut's position was like,

we should release only when it's
polished because who knows,

maybe something is going to change.

So I remember like, yeah, very
conservative.

Everyone already has it.

Various NPM modules, Python, Google
projects, they all already

have uuidv7 support, even
though RFC is not finalized.

But Postgres, I think, took a very
conservative position.

Maybe it's good for a database, because
it should be conservative,
right?

Michael: Well, and we need to support
it for 5 years, right?

Like, whatever we implement,
we're saying we're supporting
that version for 5 years, even if
you manage to deprecate it in the
next version, which is also nearly
impossible.

Normally you need to deprecate,
give warnings that it's being

deprecated, then actually remove
it in a later version.

So I really understand that and
I think also Postgres did support,

like you've said this multiple
times, Postgres does support uuidv7

in older versions of Postgres.

You just have to generate them
either outside of the database

or with your own function.

You've done it even within Postgres.

Nikolay: Yeah, this is what we
do already for a couple of years.

And we have a how-to on doing it in PL/pgSQL
or just with pure SQL.

It was part of my how-tos.

Yeah, it's already aged.

And you can combine it with partitioning,
TimescaleDB or whatever.

But this time it's just, actually,
it's basically syntactic
sugar, honestly.

Michael: It's a nice function,
right?

Nikolay: Since it's official we
can expect it will be more widely

used.

Yes, and it was discussed a couple
of times on Hacker News, and
people raised concerns about security,
the ability to guess.

I don't get those concerns because
it's really hard to guess

anything from those values.

You can extract the timestamp, and unlike
the recommendations from the original
RFC authors, Postgres decided to
provide a function, uuid_extract_timestamp,

to get it from a uuidv7
value, which I find useful, for

example, for implementing partitioning.

So anyway, this is a great feature.

We discussed it in depth.

Let's not lose time.

But it's not like we have a data
type, we just have a couple

of functions to generate and to
extract timestamp, that's it.

But data type remains just UUID,
it's version agnostic.
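A quick sketch of what that looks like in Postgres 18 (table name is made up):

```sql
-- Generate a time-ordered UUID and pull the embedded timestamp back out.
SELECT uuidv7();
SELECT uuid_extract_timestamp(uuidv7());

-- The column type stays plain, version-agnostic uuid:
CREATE TABLE events (
    id      uuid  PRIMARY KEY DEFAULT uuidv7(),
    payload jsonb
);
```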

Michael: Yeah, we already, we do
have a data type, we already

have the perfect data type, which
is we've had it for many years,

now we have nice functions around
it too.

Nikolay: Now it's just in core,
and everything in core is great.

Just use it, use it.

It's better than version 4 for
performance reasons and also convenient.

I still find it helpful to keep
a timestamp like created_at separately,

although you lose additional bytes;
it's a matter of comparing
what's better
for you.

Just rely on the ID value, if it's
uuidv7, or have it separate.

I think it's worth a separate blog
post to compare pros and cons

for this, actually.

It's an interesting topic for developers,
I think.

What else?

We also discussed 1 feature a few
times, which I'm listed as
an author of, right?

But I, like, vibe coded it originally.

And it was --no-policies support
for pg_dump and pg_restore.

I know there is an issue which
some other folks were fixing, because
it didn't eliminate comments.

Policies can have comments, you
know.

So if you specify --no-policies —
I saw there was additional
work following up this patch.

And yeah, it's a pity that I missed it.

But in my defense, I must admit that other --no-something options in

pg_dump and pg_restore also forgot about comments.
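For reference, the option looks roughly like this (database and file names are placeholders):

```bash
# Skip row-level security policies when dumping and when restoring.
pg_dump --no-policies -Fc -d mydb -f mydb.dump
pg_restore --no-policies -d mydb_copy mydb.dump
```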

And this work was done with Claude 3.7 in January, I think.

It was committed later by Tom Lane with some improvements.

But when I say vibe coded — it was originally vibe coded, but eventually,

before sending the patch, I always review everything line by line

and polish it myself, and we're using multiple LLMs.

So I think it's not strictly speaking vibe coding.

We call it vibe-hacking, but it's just good for prototyping to

use some help of LLM if you don't write C code every day.

And I think it unlocks the power for people who are not C coders.

They can implement things and present patches.

But obviously, if you do it without looking into code and send

some vibe-coded, purely vibe-coded patch, it's a terrible idea.

And inside my company, everyone is using LLMs, AI, to code, but

right now it's prohibited to start coding on your own without thinking

about LLMs, and it's also prohibited not to take responsibility

for the final result.

If it's some BS code, it's on your shoulders; you cannot blame AI.

Right.

And if you send something, some proposal, open a pull request or

merge request, or send patches, obviously you are responsible

for what you are sending, right?

Michael: Yeah, well, I think it's quite rude as well, isn't it?

Because when you send a proposal, a PR, a merge

request, anything, you're asking other people to review it, and

I think the minimum you owe them is that you've reviewed it yourself

when you ask other people to review something.

And that didn't used to be an issue because by writing something,

you are reviewing it in a way.

But now that people have options that involve them not even writing

it, I think it is a step worth being insistent on.

Nikolay: If you don't write C code all the time, but you

want to implement something, go take AI to help you, but then

take another AI to review it and explain every line — explain

every line to you — so you fully understand the

final result and you are happy with it, and you have tests and

documentation.

By the way, small tip: start with writing documentation first.

Usually it's boring for people to think about documentation

and tests; with AI it's slightly more productive. And the Postgres test system

is Perl.

So it's really not easy to deal with if you've never

written Perl, or already forgot it after 20 years, for example,

or something.

Anyway, I think AI is great.

It's unlocking creativity here, but the final result

is fully on your shoulders.

And also, before this episode, I checked a couple of articles

about Postgres 18 and I found terrible articles.

Terrible articles.

I'm already building a list of companies

who produce BS blog posts, obviously LLM generated.

And this is like not okay, in my opinion.

Do you mind if I name the companies?

Michael: Not at all.

I find it quite frustrating.

And I think again, it's rude, right?

You're taking a shortcut and publishing something that you hope

will be read by many, many people, taking many, many more hours

to read it than you took to write it.

I just think it's...

Nikolay: Yeah.

In the past, I saw a lot of BS articles about Postgres on the

website called MinervaDB.xyz, something like

this.

This time a stormatics.tech blog post has a lot of wrong information.

Michael: Well, yeah, I guess this goes back to your age-old

thing of always verify, right? But sadly now we need to make sure

we're also verifying things from what we considered maybe in

the past to be reputable companies.

Nikolay: Actually, let me correct myself. That stormatics.tech

article — I'm in the context of, you know, building

some police bot for blog posts about Postgres.

That blog post was about the optimization of moving pg_wal to a different

disk, and it had a lot of mistakes.

For example, mentioning undo in WAL, and a few others.

But this time, about Postgres 18, we have a blog post published

on dev.to, devtips, and it's clearly LLM generated,

because it says logical replication gets full DDL support.

Well, no, it does not.

This is 1 of the pains we still have.

Obviously LLM generated.

So I think before we consider problems with patches that are vibe

coded — purely vibe coded, not vibe hacked, as I said —

we have a bigger problem of misinformation being spread in

blog posts. So yeah, what do you think, would it help

to have some tool to quickly verify, to do some fact-checking?

So, AI checking AI, and so on.

Michael: The challenge is false positives and false negatives

right?

Like if an AI is capable of coming up with this stuff, well who's

to say an AI wouldn't also rubber stamp it?

So there's the not catching

the ones that are AI written.

And then there's the issue of catching or flagging

blog posts that were human written
but that it thinks are AI, and accusing

those of being AI written.

I think this is an unsolved problem,
isn't it?

Doesn't education have this exact
same issue with coursework and

things?

I think

Nikolay: you are right.

Humans also hallucinate.

I hallucinated on this very podcast
sometimes.

I think we should say, not AI checking
AI; we should say, I have

this information.

It can be a blog post or some report.

Internally, we actually have
such tools already, because we

do a lot of consulting. We write
some reports and we need to

do fact-checking,
because sometimes you think

this is it, this is how
Postgres works, but it worked

like that 10 years ago and it has already
changed.

So we need to dig deeper and do
some fact-checking all the time.

Because for example, I was wrong
recently when I said changing

your primary key from int4
to int8 is going to produce

a lot of bloat.

I was completely wrong.

And in that case, I was wrong very
deeply.

It turned out there is a table
rewrite mechanism similar to

VACUUM FULL.

Nice.

And the table comes out of it — like
ALTER TABLE ... ALTER COLUMN ... —

fresh in the end, and all indexes are freshly

rebuilt.

And sometimes it's acceptable to
have this downtime maintenance

window.

Michael: Yeah of course completely
offline operation but no bloat.

Nikolay: So yeah, I sometimes don't
trust myself, and this fact-checking

tool — we could use it. We use it internally,

again. I am thinking of presenting
it as an external tool, so

we could check blog posts we suspect
are not accurate.

Michael: I think that, well, accuracy
is slightly different from

LLM generated, and I think it depends
what you want.

Do you want accurate posts or do
you want human-written posts?

Those are subtly different.

Nikolay: I don't care about who
wrote it, I want accurate posts.

Michael: Yes, so I wasn't even
talking about inaccuracies.

I was talking about inaccuracies
in the checking.

So like, even if you assume the
article is completely accurate

and human-written, an LLM checker
could still say, we think this

has some hallucinations
in it — you know, hallucinating

the hallucination. So I don't have
much hope in this, other than

trust-based.

You know, if you as a company start
publishing posts that you

haven't checked, whether with humans
or if it has inaccuracies

in it, I'll forgive a few.

But if you're doing it consistently and there's loads of made

up stuff, I'm gonna stop reading your stuff and I'm gonna stop

sharing it and you know there's...

Nikolay: So an LLM can be considered like an amplifier of your

state.

If you are not accurate, you produce a lot of bad pieces of advice,

wrong things.

It just amplifies it, right, if you don't check information.

To connect the bits a little and finalize this AI discussion,

I also find it interesting that sometimes hallucinations are

really helpful.

I had several hallucinations which led to the idea, actually

this missing feature would be great to have.

Well, this is great hallucination.

Let's have DDL in Postgres 18.

Already too late, right?

There is a work in progress, unfortunately.

And by the way, I recently checked it — the support

of DDL in logical replication.

It's not like we discussed it, right?

I have low expectations that it will be implemented very soon,

unfortunately.

And it's

Michael: a lot of work, isn't it?

And there's probably lower hanging fruit like sequences and things.

Nikolay: Yes, this is also a work in progress.

And I think, is it already going to be in Postgres 19 next year?

I'm not sure.

But sometimes, when I'm just working, writing down some ideas, I'm getting

some great ideas. Like, we had the case with corruption and we

needed to fix it with old backups and so on.

So we needed to apply pg_resetwal.

The idea was we need to recover from some backups and they were

broken, so LLM suggested to use pg_resetwal with a system identifier

option which doesn't exist.

So I ended up implementing that option and proposing a patch.

Unfortunately, it was not finished before Postgres 18, but maybe

it will be finished before Postgres 19 and we will have this

option.

Yeah, so there is discussion about this.

Michael: That's funny.

Nikolay: Yeah, and a few things like this.

So sometimes, so this is like, interesting thing to have LLMs,

but you must fact check and sometimes you think, oh, this is

a good idea and go implement it.

So why not?

Michael: Yeah.

All right.

Well, while we're on topic of corruption,
another thing we've

discussed a few times is checksums
and encouraging people to

enable them.

And we're getting them, they're
going to be on by default now

from 18 onwards.

Nikolay: I think it's long overdue.

Michael: Yeah, it was, but great.

Nikolay: Yeah, it's a great, great
thing. Minus 1 recommendation
we will have in the future for
our clients. And RDS had it for
ages, right?

Checksums are enabled there by
default.

Google Cloud also. Yeah, yeah, just a
great thing.

Michael: Yeah, I think quite a few
of the hosting providers do,
but each of them has to. It's another
1 where any new hosting
provider that comes along has
to know to do that, or
their customers don't get it by
default.

Now they don't even have to know,
and they'll get it on by default,
which I think is great.

Yeah.
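A quick way to see what you have, and the opt-out, roughly (the initdb opt-out flag name is my recollection of the 18 change, worth double-checking):

```bash
# Check whether the running cluster has data checksums enabled.
psql -Atc 'SHOW data_checksums;'

# Postgres 18 initdb enables checksums by default; opting out at init time:
initdb --no-data-checksums -D /path/to/data
```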

Another 1 we've discussed a few
times is pg_upgrade getting or

preserving the statistics, the
optimizer statistics, across a
version upgrade.

Nikolay: This is a huge pain, and
especially because, unlike

previous topics, this is not solved
in managed Postgres providers.

I didn't see any of them, I mean,
major providers like RDS, Cloud

SQL, others.

They always put it on the shoulders
of users to run ANALYZE after a
major upgrade.

And we had outages because of that.

So now, I think, in the future —
not now, in the future.

You cannot use it before your old
cluster is already on 18.

So we'll only be able to benefit from it
for the first time
in 1 or 2 years, when you
upgrade from 18 to a newer version.

But anyway, this is great news,
so I feel really happy that this

is solved finally.

And this is solved in a very elegant
way, because now you can
dump statistics. So it's not only
for upgrades, you can use it
in other areas, because, you
know, pg_restore also
doesn't care about it.

You restore and you need to run
ANALYZE manually, and also vacuum,
actually, you need to run vacuum.

Now the recommendation after
pg_restore, after restoring
from logical backups, aka dumps —
logical backups is fine
as a name for dumps, but not just
"backups" —

so when you restore from them, we
always said you need to run
VACUUM ANALYZE.

Now it's just VACUUM, to have
visibility maps created sooner.

Michael: Your index only scans,
yeah.

Nikolay: Well, if you forget vacuum,
it's not a big deal.

Of course, autovacuum will take care
of it. But lacking statistics
is a big danger.

Now you can dump statistics, restore
statistics.

This is great. I think we can benefit
from this sooner than from
having it in upgrades, because once
you're already running Postgres
18, you can start using it.

Michael: By the way, I think I
think we might owe at least 1

cloud provider an apology.

Do you know who implemented this
feature?

3 guys, all from AWS.

So I think at least 1 cloud provider
does deserve credit for

implementing this feature, and
they've actually implemented it

in a way that everybody benefits
which is cool.

Nikolay: Right, but why do we need
to apologize?

Michael: Well because you said
all of the cloud providers have

just shoved this onto users and
didn't do anything about it.

Nikolay: I think yes, we can apologize
easily, not a problem.

And this is great.

I knew this feature is from RDS
team.

This is great.

But still, if you run upgrade on
RDS, it has 16 points, 16 steps.

Michael: Yeah, yeah, yeah.

Nikolay: The official documentation on how
to upgrade using pg_upgrade
has 19 steps.

RDS has 16 steps.

And 1 of them, the last 1, or 1 of the
couple of last ones, is that you
need to run ANALYZE.

And people tend to forget, or just
say, okay, maybe it's not super
important.

And it happens.

It's manual.

And so I stand my ground.

They don't automate it, they put
it on the shoulders of users.

But of course, now they need to
wait until 18 and then 19, and
then it will be fully automated,
which is great.

Yeah, I heard the reasons why they
don't automate it: because it
can take time, prolonging the downtime
window.

And there is an official approach,
analyze in stages, which I
think is a mistake, honestly.

Because in OLTP cases, we'd better
get final statistics sooner
and do it inside the maintenance window.

Right?

Michael: Yeah.

I just think if you care about
the downtime, you should probably

be working out how to do a 0 downtime
upgrade in some way, shape,

or form, and then you don't have
to worry about this problem

anyway.

So it's more, I think if you can
take the downtime, why not just

do the full analyze?

Nikolay: Exactly.

And also in this area — pg_upgrade,
I think, did you see that it
got parallelization, the --jobs option, for...

Michael: Yes, I did, yeah.

Nikolay: This is also great news
for those who care about upgrades.

Everyone should care about upgrades,
right?

Michael: Yeah, especially when
we're talking about a new major

version.

Nikolay: But anyway, like, am I
right that statistics can be

dumped and restored, even not in
the context of upgrades?

Michael: If they can,
then actually people can already
benefit from this, because you can
use...

Nikolay: Yeah, there is a dump option
for statistics.

So once we are...

Because this is discussed in the
context of upgrades only, but

again, it's not about only upgrades.

Once your server is running Postgres
18, you can already start

benefiting: adjust your dump
scripts to also dump statistics,

and then you can just restore them,
and that's it.

I also noticed that vacuumdb has,
in Postgres 18, received
the option --missing-stats-only,
to compute only missing optimizer
statistics.

This is interesting, because vacuumdb
also has --jobs,
to move faster if you
have many, many cores.

Right, you can say...

Michael: I think, yeah, I think
we need the missing statistics

because currently extended statistics
aren't dumped and aren't

preserved.

So If you create, you know, we
talked about correlations between

2 columns.

If you create those, those are
not preserved.

So I suspect that got added so
that we don't have
to still run a full analyze.

Otherwise we would have to — well, if we've
used CREATE STATISTICS, if
we've used the extended statistics.

Nikolay: Yeah, that's interesting.

But also, like, to be careful
with vacuumdb --jobs —
I love it.

Because if you have increased
default_statistics_target,
or added a lot of statistics to
a lot of tables and so on, it's
great to have parallelized processing
here, right?

But unfortunately, until version
19 — so next year only — partitioned
tables will still be in trouble,
because vacuumdb doesn't take
proper care of them. It only cares
about partitions themselves,
but the root table is not processed,
and if you process it with ANALYZE,
it will process the partitions once
again anyway. So there is a problem
here.

Michael: We have a new feature
for that too in 18.

Nikolay: No it should be in 19.

Michael: We have an analyze only
for the parent partition.

Nikolay: No, no, no. ANALYZE ONLY
for the parent partition is in the works.

I think maybe it's committed already,
but it's not in Postgres 18.

Laurenz Albe did it.

I noticed because I also wanted
to do it but it was only recently

committed I think.

There is "allow VACUUM and ANALYZE to
process partitioned tables without
processing their children."

This is great but this is not vacuumdb
stuff.

Michael: Yeah, okay, but we can do
ANALYZE ONLY and then the table,
the parent name. You're right, it's
a separate part.

Nikolay: Analyze-only we had always,
I mean for ages:
vacuumdb --analyze-only. This is
what we do after an upgrade, I know
it because this was our recipe
until we hit this very problem,
when partitioned tables — the root
tables — lacked statistics and
some queries suffered from it.

Michael: No, but if you did it
on the parent partition, it would

also gather statistics on all of
the child partitions as well.

Nikolay: There are many things
here.

There is a single threaded SQL
command, right?

Analyze, or vacuum analyze, doesn't
matter.

They are single

Michael: thread,

Nikolay: right?

And there is also vacuumdb, which
has a -j, aka
--jobs option, which helps
you parallelize.

The problem is, when you run vacuumdb -j
60 --analyze-only,
root tables of partitioned tables
will lack statistics after it.

And this is solved after Postgres
18 beta 1 was out.

So we need to wait until 19.

Michael: So that would be in 19,

Nikolay: sure.

We cannot both move fast and take care
of partitioned tables properly yet,
unfortunately.

Yeah.

Michael: So then you need to
do a separate task.

Nikolay: Right.

So in Postgres 18, first of all,
statistics can be dumped, which

is great.

pg_upgrade is not in trouble anymore.

Okay.

But sometimes we still need to
rebuild statistics.

In Postgres 18, also, vacuumdb
received --missing-stats-only.

And vacuum, the single-threaded SQL
command, received the ability to
take care of partitioned tables properly.

This is what happened.

Michael: Not just vacuum, but also
analyze.

Nikolay: Yeah.

Vacuum, well, yeah, vacuum analyze
and analyze.

Michael: So vacuum's a bit pointless.

I don't think there's any point
vacuuming a parent partition

because it's not going to have
any data in it.

Nikolay: But it needs statistics.

Vacuum, analyze, vacuum.

Michael: Exactly, it does need
statistics separately from its

children.

But in the past — in version 17, for
example, or 16, 15 — to get statistics
on the parent partition, I'd have to run
analyze on the parent table, which
would also gather, regather,
statistics for all of the child
tables, which is a lot.
There's so much more work.

Nikolay: So what happened: we say
vacuumdb -j is great because
we have 96 cores, let's go.

-j 96, let's go.

But then, oh, actually, root tables
in the partitioned case, they
lack statistics.

We need to run ANALYZE on them.

But when you run ANALYZE on a root
table, it recalculates statistics
for all partitions once again.

Now in 18, we have the ability to skip
that and say, okay, vacuumdb
-j, move really fast.

And then additionally run ANALYZE ONLY
on partitioned tables, only
for the root tables, skipping the partitions
themselves, because
vacuumdb already did them.

So now we have a full recipe for
Postgres 18 for how to recalculate
statistics in the whole database and
move really fast.

When you're inside a maintenance
window, for example, you
need vacuumdb -j, and then separately
you need to take care of
the root tables of partitioned tables,
skipping the partitions themselves.

Right?

This will be the fastest.
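A rough sketch of that Postgres 18 recipe (database, core count, and table names are placeholders):

```bash
# Step 1: recompute statistics for all tables, in parallel.
vacuumdb --analyze-only --jobs=96 -d mydb

# Step 2: vacuumdb doesn't yet cover the parents of partitioned tables,
# so analyze those separately, skipping their partitions (new ONLY syntax in 18).
psql -d mydb -c 'ANALYZE ONLY public.my_partitioned_table;'
```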

In Postgres 19, vacuumdb will take
care of this and we will need

to get rid of this second step.

Michael: Yeah, I didn't check the
commit, but it's possible.

I said --missing-stats-only might
be for CREATE STATISTICS — it
might be for extended statistics — but
it might also be for parent
partitions.

That could be another place you're
missing stats.

Yeah.

I haven't checked.

Nikolay: Anyway, 19 will be excellent
and we will have all the

pieces of the puzzle ready and
we can move fast.

I honestly think VACUUM
and ANALYZE could have some
option — I mean the SQL commands —
they could have some option to say,
I want additional workers.

You know, vacuumdb is good, but in
the managed Postgres case, like RDS,
you cannot easily run vacuumdb,
because you need an instance
running in the same region
or something.

It's a terrible idea
to run it from a laptop, right?

Because of connection issues and so
on, you will lose it.

So you need an EC2 instance next
to it, like some maintenance
host, and from there you can...

It's not convenient, right?

It would be great to have that and
say, I want like 16 jobs or 96 jobs,
and let's go full speed, we are
inside a maintenance
window.

Makes sense, right?

Michael: I think so, yeah.

Yeah I'm thinking also you might
always want to have more, if

you've got, you know, regularly
scheduled jobs, you might always

be able to give it more.

You might even want a configuration
parameter like: always give
my vacuums at least, you know,
up to 3 workers or something.

Anyway, getting into autovacuum
territory here.

Nikolay: Yeah, yeah, yeah.

Let's maybe jump in between some
other pieces.

There are actually so many great
things.

I like the release overall, a lot
of stuff.

And for example, I like the idea
that now regular — not CHECK, but
NOT NULL — constraints can
be defined as NOT VALID.

As we discussed a few times, NOT VALID is
terrible naming, because
they will be checked for new writes,
but they are not checked
for existing rows.

I think we had all the pieces
for this before already, because
NOT NULL — I think since Postgres 12 —
can rely on
CHECK (... IS NOT NULL) constraints.

There is NOT NULL and there is
the CHECK constraint.

For primary keys, we needed actual
NOT NULL
constraints, right?

But implicitly we could already
use regular CHECK
constraints with NOT NULL and then
do some dance around it.

Now it's becoming easier to
redefine primary keys without
thinking about all these nuances.

I like this.
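A minimal sketch of how that looks in 18 (table and constraint names are made up; exact spelling worth checking against the ALTER TABLE docs):

```sql
-- Add the NOT NULL constraint without scanning existing rows; new writes are checked.
ALTER TABLE accounts
    ADD CONSTRAINT accounts_email_not_null NOT NULL email NOT VALID;

-- Later, validate existing rows without a long blocking lock.
ALTER TABLE accounts VALIDATE CONSTRAINT accounts_email_not_null;
```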

Things are polished now in many
areas.

Like we just discussed with
statistics, and this NOT NULL one now.

I like this.

Maturity of many pieces, many functionalities.

Michael: It feels like a release
chock full of things that have
come up for real — real users have
hit these issues and people have come up
with fixes for them.

I know that's almost
the definition of a maturing
product, but it doesn't feel like
there's that big a divide between
hackers working on things that
hackers want to work on versus
users hitting problems and actually
wanting to solve problems.
It feels very, very
tightly done, which is nice.

Nikolay: Yeah, yeah. Another thing
I wanted to mention is the
infamous number of fast-path
slots for the Lock Manager, right?

We had only 16.

Yeah, we had issues in multiple
customer databases.

Several companies suffered.

And I remember Jeremy Schneider —
was it him who wrote it? — saying
this was the year of the Lock Manager.

Michael: Yes, that was his post.

Nikolay: Yeah, it was maybe a couple
of years ago, right?

Or maybe, yeah, 2023, I think.

And now, yeah — I raised this last
year on hackers, that we need
to make this adjustable, probably,
like expose it as a setting.

And Postgres 18 finally has this
solved, but in a different way.

I think it was Tomas Vondra who worked
on it mainly, right?

And instead of adding 1 more GUC
setting — we already have almost
300, right?

Too many.

Instead of that, now it takes into
account the max_locks_per_transaction
setting, right?

I think.

And it automatically adjusts based
on that.

So if you have a lot of cores,
and if you tune your Postgres,
you have likely raised that setting already.

So the number of these slots for fast-path
checks in the Lock Manager
mechanism will automatically
rise.

And we checked: in 1 case, we had
the limit of 16 hitting replicas.

Sometimes it's hitting your primary
workloads, sometimes replica
workloads.

In that case, we had 16 by default,
right, before 18, but now,
due to that setting, which is already
adjusted, we
will have 64.
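Two things worth looking at here, as a sketch: the setting the slot count is now derived from, and how many of your locks actually take the fast path.

```sql
-- Postgres 18 sizes per-backend fast-path slots from this setting
-- (it used to be a hard-coded 16 per backend).
SHOW max_locks_per_transaction;

-- How many currently held locks went through the fast path vs. the main lock table.
SELECT fastpath, count(*) FROM pg_locks GROUP BY fastpath;
```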

And honestly, in half an hour
there's a scheduled benchmark
I need to conduct.

But unfortunately, I remember early
implementations of the idea

of raising this threshold were
not super successful in terms

of helping with this performance
cliff.

So I'm not sure.

This should be very carefully tested
in each particular case,

because there are several variants
of this cliff.

Michael: Yeah, well if I trust
anybody around performance cliff,

I do trust Tomas to have looked
into it.

Nikolay: So I'm optimistic.

As we learned from our podcast
episode, we have different definitions

of performance cliff.

Michael: True.

There is 1 huge feature in Postgres
18 we haven't talked about

yet, which is asynchronous I/O.

Nikolay: Yeah, well, this is the elephant
in the room, honestly.

And I must admit, I lack experience.

So I cannot tell a lot about this.

I know this is a big work led by
Andres Freund, right?

Yeah.

So yeah, I remember following this
work.

It's a lot.

I hope we will find good results
in some benchmarks and in actual
systems we help manage, but I don't
have such experience yet.

Looking forward to it, honestly.

Michael: Yeah, it'll be good to
benchmark it because I think

a lot of the normal benchmark type
workloads, you know, pgbench,

sysbench, that kind of thing, won't
see benefits from this work.

They're lots and lots of
small queries, and this is much
better at large scans, right?

Like if you're benefiting from
gathering a lot at once, you're

gonna see bigger differences when
you're getting more blocks

than 1 block, for example.

So yeah, I think it'll be interesting
in real world workloads

to see how big a difference.

I think it's another 1 of those
things where you could see significant
improvements to how fast vacuums
run on some of your largest
tables, or how fast — I actually don't
know exactly which
things have made it in, but I think
bitmap heap scans definitely
have, and vacuum definitely has,
and there are a few others. But as
we see more and more of those get
used in future versions of
Postgres, I think we're going to
see the benefits of this without
having to change anything. Again,
it's another 1 of those features
that has managed to be committed
on by default, and with quite
high — maybe high
is not necessarily the right word —
but it hasn't been shipped with
really conservative defaults,
I don't think.

That's how I'd phrase it.
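The knobs to look at, roughly (defaults as I understand them; io_uring availability depends on platform and build options):

```sql
-- Postgres 18 asynchronous I/O settings.
SHOW io_method;   -- 'worker' by default; 'io_uring' (Linux, if built with it) or 'sync'
SHOW io_workers;  -- number of I/O worker processes when io_method = 'worker'
```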

Nikolay: Yeah.

I would also pay attention to additional
CLI tools this version

is bringing.

Because in Postgres 17 I somehow
overlooked it, and in our
discussion we didn't touch it, but
later — not I noticed,
somebody pointed it out — this big
new tool appeared, pg_createsubscriber,

converting a physical
standby to a logical replica,

which is great, absolutely.
This means we can throw
out a lot of our own automation,
which is great.

I always love to do that.

It's official and it's working.

I'm not sure about managed Postgres
setups yet because I think

they must expose it via their APIs
and CLIs because it needs

to be run on the server and we
don't have access to it.

But in self-managed setups, it's
already been available since last
year.

I mean, if your production is
running Postgres 17.

Yeah.
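A rough sketch of the invocation (hosts, directories, and object names are placeholders; run it on the standby host against its data directory, and check the reference page for the full list of options):

```bash
pg_createsubscriber \
  --pgdata=/var/lib/postgresql/17/standby \
  --publisher-server='host=primary.example.com port=5432 dbname=mydb' \
  --database=mydb \
  --publication=pub_mydb \
  --subscription=sub_mydb
```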

But also, this release, Postgres
18, gets the extension pg_logicalinspect.

Well, it's not CLI, it's an extension.

I'm not sure what's inside in detail,
but I would like to explore
it, because logical replication
is quite complex, and inspecting
logical snapshots sounds like a really good idea.

And also there is extension pg_overexplain.

I'm jumping between CLI tools and some extensions, I know.

But this is a good thing, I think, to understand.

I remember how it was born.

Robert Haas did it, right?

Yeah.

And I think maybe you can tell more about this because it's closer

to your field, dealing with EXPLAIN plans.

Michael: Yeah, I think, firstly, kudos for an awesome name.

How good a name is pg_overexplain

Nikolay: but over tune

Michael: Yeah. But yeah, so I think it does 2 things that are

quite important. 1 is it's the first customer of the ability to

extend EXPLAIN in an extension. So I think it's a good example

for people if they want to add more features to EXPLAIN in the

future — which, you know, over the last 4 or 5 versions

we've had quite a few new parameters added to EXPLAIN, and I suspect

there's some amount of belief that maybe some of those could

have been extensions for a while first, to see how popular they

were, before adding them for everybody.

So first and foremost, Robert made EXPLAIN extensible,

and then overexplain is an example of that, but it also shows more

information than a user would normally want from EXPLAIN — information that

would generally be useful for hackers, like people working on

the Postgres source code or optimizer, trying to understand why

it's making certain decisions while they're working on a new

feature.

So for example the behavior around disabled nodes.

So, you know, when you're testing why isn't

Postgres using my index, and you want to do enable_seqscan = off

so that you completely disable — well, you try to discourage — sequential

scans, so that if it could possibly use your index it will.

In the past that used to work by adding a massive cost constant

to sequential scans so that they'd be discouraged but not impossible.

That's changed — I know Robert actually was happy to be the

person working on that 1 as well — and it's now done on the

basis of counting the number of disabled nodes as the first tiebreak

and then cost as the second tiebreak, which is a really neat implementation.

I've actually blogged about that. But overexplain, instead

of telling you which nodes are disabled within the EXPLAIN plan,

will give you a counter of the number of disabled nodes so

far. That's useful for hackers, but it's really confusing as

a user, because you can see the disabled nodes

counter being 1 all the way up and you have to kind of look back

to see what the first 1 was. Luckily, I think it was David

Rowley who kind of backed that out and made it more user-friendly.

But the point is, EXPLAIN is designed for users.

The primary user of EXPLAIN and EXPLAIN ANALYZE are people trying

to speed up slow queries.

Nikolay: But- Do you mean human
users or do you

Michael: mean- I

Nikolay: mean human users.

Michael: I mean human users and
what I mean is not Postgres hackers,

not people working on the optimizer.

So overexplain gives them a bit
more information about exactly

what the optimizer is doing at
each stage.

And I just don't think it's going
to be useful for users.
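For the curious, it's a loadable module rather than a CREATE EXTENSION thing; as I understand it, it adds DEBUG and RANGE_TABLE options to EXPLAIN, roughly:

```sql
LOAD 'pg_overexplain';

-- Extra, hacker-oriented detail about planner internals:
EXPLAIN (DEBUG) SELECT * FROM pg_class WHERE relname = 'pg_class';
EXPLAIN (RANGE_TABLE) SELECT * FROM pg_class WHERE relname = 'pg_class';
```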

Nikolay: Yeah.

Let's also mention, moving to the macro
level: we mentioned that pg_stat_statements
has a couple of improvements.

Especially for Java applications, or
applications which tend to set
application_name to some,
like, session ID, blah, blah, blah —
SET commands could pollute pg_stat_statements.

We always said: set pg_stat_statements.track_utility
to off, so you
don't track SET commands.

Now in Postgres 18 it's not a problem
anymore.

The parameter is just, I think,
$1, right?

Should be.

Yeah.

So yeah, basically everything is
just 1 entry: SET
application_name to something.

So normalization is implemented
for SET commands.
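So instead of one pg_stat_statements entry per distinct value, you get one normalized entry, roughly:

```sql
-- Before 18 (with track_utility on), each of these created its own entry:
--   SET application_name = 'worker-1a2b3c'
--   SET application_name = 'worker-9f8e7d'
-- In 18 they collapse into a single normalized entry: SET application_name = $1
SELECT query, calls
FROM pg_stat_statements
WHERE query ILIKE 'set application_name%';
```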

And also something is improved
in the area of tracking parallel

activity, right?

Michael: Yes, there's like some
extra columns.

Nikolay: Not enough columns in
pg_stat_statements.

Michael: I have actually seen a
conversation since then discouraging

more columns.

Interesting that that 1 got in.

I wouldn't have thought that would
be the 1 that snuck in last.

Talking of new columns, there are
also some new ones on
pg_stat_all_tables and similar views for vacuum
tracking — vacuum and analyze
time spent — which look really useful.
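Something like this, for example (column names as I recall them from the 18 docs, worth verifying):

```sql
SELECT relname,
       total_vacuum_time,
       total_autovacuum_time,
       total_analyze_time,
       total_autoanalyze_time
FROM pg_stat_user_tables
ORDER BY total_autovacuum_time DESC
LIMIT 10;
```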

Nikolay: Yeah, yeah, I like this,
and I think it's worth working
on moving information from logs
to SQL. It's easier to monitor
it that way than dealing with logs.

I can't wait until errors will
be tracked properly in Postgres.

There is work in progress in this
area.

I hope it will end up being added
to 19.

It's not there yet, and maybe it
won't make it, but I think errors
should be tracked as well, properly.

Counters at least, like something.

Yeah, because now we need to go
to logs all the time.

Although, actually, I
learn all the time.

For the last 20 years I've been learning
Postgres, and sometimes I feel like I'm
lagging.

Development is faster than my learning
curve.

Yeah.

So I just learned that there is a view, pg_stat_database_conflicts or something,

which exposes some types of errors.

For example, it can tell you that a query was canceled because

a tablespace was moved.

It also can tell you that it was canceled because of certain

timeouts, but not statement timeout, not transaction timeout,

not idle-in-transaction timeout.

Due to lock timeouts, for example.

Lock timeouts, yes, but not statement timeout, not transaction

timeout, not... I don't know.

And a few more, like replication conflicts.

So it tracks some counters, but the choice is really strange.

And for query cancellations due to very common reasons, like statement

timeout, unfortunately, you need to parse logs right now, or use an extension

called logerrors.

Not super popular, but very convenient.

Unfortunately, available almost nowhere.

So only in the self-managed case, I guess.

But I hope this idea of logerrors will be finally implemented

in the future in Postgres itself and yeah.

Okay, good.

We touched on the future a little bit as well.

Michael: I had 1 more favorite I wanted to give a shout out to.

Obviously it's EXPLAIN related being me.

Nikolay: Right.

Michael: I think there's a really big improvement that has gone

a bit under the radar for many, which is that actual row counts are

going to be reported as decimals now. This is most important

when you have loops, lots of loops — say it reports you've

got 100,000 loops.

Pardon me?

Nikolay: 1.5 rows, right?

Michael: 1.5 per loop is way more useful than it rounded

to 1 or 2, but it's even more important when you are around 0.5

rows per loop or below — anything 0 to 1 is particularly interesting,

and below 0.5 is very, very important, because it will round to 0, and

no matter what you multiply 0 by, you're gonna think that that's

returning 0 rows, and that's really, really not true when you've

got a hundred thousand loops.

Nikolay: Yeah, yeah, the mistake accumulates and multiplies.

Michael: Yeah, exactly. Okay, once you get past

the looping operation, you will see the actual number of rows,

but you don't know what's happened in between.

Now you don't have to worry about that.

It will report it as a decimal and you can multiply that.
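An abbreviated, made-up plan line to show the difference:

```
-- Postgres 17 and earlier: rounded to an integer, so this looks like nothing came back
Index Scan using idx_orders_customer on orders (actual time=0.010..0.011 rows=0 loops=100000)

-- Postgres 18: fractional per-loop counts, so 0.4 rows/loop * 100000 loops ≈ 40000 rows total
Index Scan using idx_orders_customer on orders (actual time=0.010..0.011 rows=0.40 loops=100000)
```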

Nikolay: Let me add 1 more, because this is great — actually, because

I feel a connection to so many features — and 1 more, yeah, this is one

where Andrey Borodin worked, and others, many others actually.

He worked on it for quite some years, and finally amcheck is getting

support for GIN indexes.

Yeah.

This is big because we do need it.

We do need it because we need to use amcheck often.

Michael: I actually missed that in the release notes as well.

Nikolay: It always worked for B-tree, to check indexes for corruption

when you, for example, perform an OS upgrade or other stuff.

You need to check for corruption from time to time.

But GIN indexes — we couldn't do it.

Well, usually, in self-managed cases — yeah, only in self-managed

cases — I think we used a patched version.

So we tested that patch.

It took several years for the patch to mature and

get rid of false positives and false negatives, I think.

Especially false negatives when it's missing.

I always get confused.

But it was false positives as well.

Yeah, but now it's great.

It's there.

So it's time to use it.

Extend your amcheck actions to a GIN.

Yeah.
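A sketch of what that looks like (index names are placeholders; the GIN function name is as I recall it from the 18 amcheck docs):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;

-- B-tree checks we already had:
SELECT bt_index_check('my_btree_index'::regclass, true);

-- New in 18 for GIN:
SELECT gin_index_check('my_gin_index'::regclass);
```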

And if you don't have amcheck actions, you must.

I mean, you should, not must, you should.

Because amcheck is a great tool.

I think maybe it should be renamed because we use it mostly to

check indexes for corruption.

But yeah, anyway, maybe it's a good name actually because it's

unique.

But I think for large Postgres databases, everyone should check

for corruption from time to time for various types of corruption

and amcheck is a great tool for it.

Michael: Is it access method?

Nikolay: Access method check, yeah.

Michael: Yeah, makes sense.

All right, yeah, good one.

Thanks so much, Nikolai.

Catch you next week.

Nikolay: Thank you, enjoy it.
