Why isn't Postgres using my index?

Nikolay: Hello, hello, this is
PostgresFM episode, I don't remember

the number, and today... I'm Nikolay
and this is Michael.

Hi, Michael.

Michael: Hello, Nikolay.

I think 85.

Nikolay: Well, honestly, it's
already at this point, it

doesn't matter.

Michael: When we get to 100, it
will matter.

Nikolay: Yeah, remind me, wake
me up.

Yeah, at 100.

So today I'm in the passenger seat
and Michael is going to talk

about, I will join of course, but
from the passenger seat, about

why the planner behaves not as
expected, not choosing index I

have prepared and how to troubleshoot
that, right?

Michael: Yeah, yeah, this was my
suggestion.

So it's something I see a bunch
and I got reminded again yesterday.

I happened to be on LinkedIn
of all social networks.

It's surprisingly one of the slightly
nicer social networks to

be on these days, which I would
have been shocked by a few years

ago.

But I saw a post from Brent Ozar,
who I remember from my Microsoft

SQL Server days, and has been recently
getting into Postgres

things, which is cool.

I think it would be a great addition
to the community.

And he posted what I thought was
going to be a bug post, and

he cheekily said, why isn't Postgres
using my functional index?

And it turned out it was actually
a Stack Exchange question that

he had posted, and he had included
links to an Aurora database,

which, not Postgres, but Postgres
compatible, which is fun.

And he was genuinely asking why
isn't it using my index?

And I noticed it had only just been posted. Sometimes on LinkedIn you see posts from 3 weeks ago, but this was only posted about 20 or 25 minutes earlier.

I thought, 'Oh, you know, there are no answers to this yet. This is something I generally know about; I've written a whole blog post on it.'

So maybe I can help here.

I had a look.

And it just reminded me that this is a remarkably common issue for beginners, for a bunch of reasons, but even some quite expert database users sometimes get confused as to why Postgres isn't using an index they have, or think they have.

And for some reason, Postgres either
can't use it, and they're

expecting it to be able to, or
doesn't think it will be faster

and is choosing a different plan.

So, yeah, I've seen this a bunch
of times.

I feel like there are probably about 10 different reasons it could be, and I think people don't realize quite how many reasons there could be for this.

So I thought it might make an interesting
discussion.

Nikolay: Yeah, well, yeah, I remember
this struggle.

Now I know how to troubleshoot
it, but I remember the struggle

I had many times in the past.

It's not easy sometimes to understand
why a planner behaves in

one way, not as you expected.

And sometimes it upsets you a lot and you start thinking, oh, it's stupid.

I've also seen people start blaming Postgres a lot after such events.

They think, oh, this planner... I want control.

This is also where you probably start wanting hinting, which Postgres lacks, right?

Well, it has it, but it's not standard.

Michael: Yeah, I think we probably
will get to that.

You said you now know how to solve it. Out of interest, what's the first thing you'll do?

Nikolay: Well, the first thing I'll do is try to understand... okay, this is the moment. Usually I don't pay a lot of attention to it, but this is the exact moment when I start paying attention to the cost metrics in the explain analyze buffers plan.

So the idea is I try to understand,
is it really a huge number

or like it's relatively small or
tiny?

And then I think, okay, based on
my experience, and here actually

it helps if you have some experience,
right?

Based on my experience, if my index
was used, what would be this

cost, right?

And then, like, basically I already
explained my algorithm.

So, okay, I see some cost.

I wanted to say actual cost, but
the word actual we must reserve

for execution, not for planning,
right?

But this is the cost you see in
the root of this tree of query

execution.

Actually, we don't need to analyze
here, right?

We just say, explain and it's enough.

Michael: For the total cost that
you're talking about, I guess

you're talking about, they're called
startup cost and total cost.

I guess you're talking about the
total cost on the root node.
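
As a rough illustration (hypothetical table and illustrative numbers), the two values after cost= in EXPLAIN output are the startup cost and the total cost, and the planner compares plans by the total cost of the root node:

```sql
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- Index Scan using orders_customer_id_idx on orders  (cost=0.42..8.44 rows=1 width=120)
--                                                           ^startup  ^total
```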

Nikolay: In the root, yeah, yeah.

We just have the plan, we see that the index we hoped would be used is not present there, and we see a different index or maybe a sequential scan, and we think, oh, what's happening here?

And so I look at the total cost, and probably also check the node where the data access happens, the access method, and the cost there as well.

It's also important.

And then I think, oh, if my index
was used, what would be the

cost?

And since I'm experienced, I'm
sometimes already understanding

what's happening.

But if I have doubts, I just start playing with these rough knobs: enable_seqscan, enable_indexscan, enable blah blah, to put a penalty on specific access methods, right?

Or rather, on specific steps.

Michael: Yeah, scan types.

Nikolay: Yeah, right, scan types.
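
A minimal sketch of that kind of experiment, with a hypothetical table; SET LOCAL keeps the penalty scoped to one transaction:

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- doesn't forbid seq scans, just makes them look very expensive
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
ROLLBACK;                        -- the setting disappears with the transaction
```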

And in this case, very often I
start seeing what I expected and

I see, oh, like, this would be
the cost.

And I see the difference.

And sometimes it's very small.

So we were very close to choosing my plan, but it was not chosen just because...

And then super often it's a new system, and I ask people, what random_page_cost do you have?

Because it's a sequential scan, our index is not used.

Oh, and random_page_cost is 4.

Recently, some folks on Amazon got a consultation with me, and this is exactly what happened.

I asked, what is your random_page_cost?

They said 4.

Remember, Crunchy Bridge, after
listening to us, changed it to

1.1, and they also published good
benchmarks, and it's still

in my to-do list to revisit those benchmarks, because I had a tendency to set random_page_cost to 1, but they said 1.1 is actually better according to their benchmarks. They had some interesting methodology, but it's slightly off topic.

So, for those listeners who don't know what random_page_cost is: it's how the planner thinks about random access to data.

And by default, it expects you to be using magnetic, rotational disks.

And this is definitely not normal in 2024.

And I don't know if Aurora and RDS still have 4.

That cluster may have been created long ago, but if they still have this default, that's another question for them.

We had the question about CPU and
green color in performance

insights last time.

But today we have a new question.

We need to check.

I haven't created clusters in a long time.

Yeah.

Michael: Last time I checked, most of them were still at 4.

There were a couple of good exceptions, like ScaleGrid and now Crunchy Bridge, but there aren't many that have tuned it, as of the last time I checked, which was admittedly quite a while ago.

Nikolay: This is super strange. And also, Postgres community: it's time to revisit this default, and we need it mostly for new clusters.

You shouldn't think too much about old clusters; they already have their postgresql.conf in place and everything set there.

So, change it.

Michael: And I would say, why are we optimizing for the few people that are still running magnetic disks instead of the vast majority who are running on SSDs?

It's time to change the default, I think.

Nikolay: I might be, again, taking too much time for a passenger, but I just feel that maybe some people don't understand what we are talking about.

I will be very brief.

So there is seq_page_cost and random_page_cost.

seq_page_cost is the cost of sequential data access, and it's 1.

If you change it, you change the scale; I have seen cases where people change it.

But it's 1, it's our baseline.

And random_page_cost by default in Postgres is 4.

It expects that random data access is 4 times more expensive than sequential page access, which is not so if, for example, all your data fits in memory and is cached, or, if it doesn't fit, if you have non-rotational disks: SSD, NVMe SSD, or something like that, modern disks.

And most new databases have modern disks, of course.

And Aurora, I think they have very
good storage, so obviously

it's not rotational.

I think maybe I'm wrong, actually,
right?

Should not be rotational.

It's very slow.

I mean, throughput is terrible,
right?

Okay.

So, and why does it matter here?

Because if you have the default settings of 1 and 4, 1 for sequential and 4 for random access, random access is 4 times more expensive.

The planner quite often thinks,
oh, sequential scan is not that

bad.

I would prefer sequential scan
over index access if I need to

fetch a lot of rows, because it
seems to be cheaper for me.

Once you shift it to 1.1, which is normal these days, this is the common understanding.

And Crunchy Bridge have good benchmarks.

Let's attach the link.

So in this case, you tell the planner that sequential access and random access are almost the same.

And the index scan starts winning.

Yeah.
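
A sketch of trying this out, with a hypothetical table; the session-level SET lets you compare plans before changing anything cluster-wide:

```sql
SHOW random_page_cost;                               -- 4 by default
EXPLAIN SELECT * FROM events WHERE user_id = 123;    -- plan under the default

SET random_page_cost = 1.1;                          -- this session only
EXPLAIN SELECT * FROM events WHERE user_id = 123;    -- does the index scan win now?

-- If you are convinced, apply it cluster-wide:
-- ALTER SYSTEM SET random_page_cost = 1.1;
-- SELECT pg_reload_conf();
```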

Michael: Yeah.

Or at least it's only a 10% penalty instead of a 300% penalty, which is a huge difference once you start returning a lot of rows for a single query.

Yeah.

Which happens quite often.

There's quite a lot there.

Anyway, I'm very happy for you to do a bunch of the talking here.

It's a pretty common topic.

I like that we've gone this deep so quickly, but I think it's worth going back and saying this is a great avenue. Like you mentioned, looking at the costs, even non-experts can use that to their advantage, right?

You can, if you have access. I didn't, when I was looking into Brent's problem; it was a read-only user, so I couldn't do enable_seqscan off, which is the easiest way if you're getting a sequential scan.

It's basically telling the planner that sequential access is tons more expensive than it really is, and therefore, if it has any ability to use the index, no matter how expensive that would be, it will then use it.

If it can't, though... enable_seqscan = off doesn't actually prevent the planner from using seq scans.

It just penalizes them; it makes them look more expensive.

So you might be in a case where Postgres cannot use the index: a bunch of these cases are Postgres can't use the index for some reason.

Or you've forgotten that you're in an environment that doesn't have the index.

Like that is a surprisingly common one...

Nikolay: ...and that it's a valid index.

Michael: Yeah.

Well, yeah.

I haven't seen that one come up as often.

But quite often people are in the middle of debugging something and someone's dropped that index, or they're on staging and it doesn't have the index while production does.

So it is worth double-checking that the index exists, but by using enable_seqscan = off, you can quickly check: is it a costing issue?

Now it's trickier if you're in the case you mentioned, where it's using one index and not the one you're expecting; you can't use that trick.

You can disable different scan types, but you can't tell it not to use a specific index.

I have seen a trick for that, though, which I wasn't familiar with before, but Haki Benita shared it in a blog post.

He does a begin transaction, drops the index, the one that it was using, explains the query, and then rolls back, which means you can try it and see: if that index didn't exist, would it pick my index?

Which is quite a nice trick as well.
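
A sketch of that trick with hypothetical names; note that DROP INDEX takes a heavy lock on the table until the rollback, so this is for a clone or a quiet environment, not a busy production server:

```sql
BEGIN;
DROP INDEX orders_created_at_idx;   -- the index the planner currently prefers
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;  -- which plan does it pick now?
ROLLBACK;                           -- the index is back, nothing was really dropped
```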

So trying to find out, are you
in the case where Postgres can't

use the index or you're in the
case where it's choosing not to?

The enable parameters are a really nice way of finding that out quite quickly, normally.

So yeah, cool that you start there
as well, even if you're eyeballing

the costs.

The one thing I would add on that
is, I think explain analyze can

be helpful.

So if you've run that already,
or if your query actually runs

and doesn't time out, then the
explain analyze part is helpful

because you get the rows returned, and I think when the cost estimates are off, sometimes it's random_page_cost, but quite often it's about the number of rows Postgres expects to be returned versus the number of rows that are actually returned.

So, in fact, maybe you don't need analyze, because if you know the data and the query you're running, if you're expecting only a few rows to be returned and you see hundreds of thousands in the estimate, you've got a pretty good clue right there that you're in a case where it's overestimating the cost.
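
Roughly what that looks like, with hypothetical, illustrative output; plain EXPLAIN shows only the estimate, while EXPLAIN ANALYZE adds the actual row counts if the query is safe to run:

```sql
EXPLAIN SELECT * FROM orders WHERE status = 'pending';
-- Seq Scan on orders  (cost=0.00..25000.00 rows=450000 width=120)              <- estimated rows

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE status = 'pending';
-- Seq Scan on orders  (cost=... rows=450000 ...) (actual time=... rows=137 loops=1)
-- A big gap between estimated and actual rows points at stale or missing statistics.
```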

Nikolay: Or underestimating maybe.

But yeah, underestimating wouldn't
lead to the index not being used,

but anyway, it can lead to wrong
plans.

And yeah, that's a good point.

So, this is reason number one, let's say.

The first reason is that the costs are slightly off.

But this is already the most difficult case, probably.

There are easier cases.

I just excluded them from my mind because they are easy.

If the cost is off, and indeed we check the planned rows and actual rows, and the mismatch is huge, either it's outdated stats (probably somebody is blocking autovacuum, or something else we need to check), or maybe a lack of stats.

Sometimes we don't have stats at all.

It happens.

For example, if you created a functional index but you didn't run analyze yet.

Michael: That was the case yesterday.
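
A sketch of that situation with hypothetical names: statistics for the expression itself are only gathered when the table is analyzed after the index exists.

```sql
CREATE INDEX users_lower_email_idx ON users (lower(email));
ANALYZE users;  -- without this, the planner has no statistics for lower(email) yet
EXPLAIN SELECT * FROM users WHERE lower(email) = 'someone@example.com';
```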

Nikolay: In this case, explain analyze with buffers is also good.

I agree with you.

But sometimes you have
a query which lasts hours.

So, in this case, we need to
downgrade to just a regular explain.

Michael: Well, I'm obviously a big fan of explain analyze with buffers, like you, but this is one of those cases where I don't think you can know, maybe you can never know, which case you're in from a single execution plan; we often need at least two.

We need the before and the after of changing one of these parameters.

The first one is not going to tell us that much.

It tells us it's not using the index, but it doesn't tell us why it's not using the index.

We can get some clues, like row estimates being off, or the cost numbers, but they're only clues.

It's only by changing something and running it again that we can see the difference when it is using the index.

Nikolay: That's why database branching
and experimentation with

iterations matters so much because
if you already calculated the

statistics, but then you start
having questions, what if I did

something different?

You want to reset and go a different
route, different path, right?

In this case, being able to iterate,
like reset in a few seconds,

run it again, check this idea,
check that idea.

That's why branching and fast cloning
matters so much, right?

Exactly.

This is, yeah, because otherwise it's a one-way ticket.

So you already calculated statistics and that's it.

Michael: Yeah, true.

Actually, you can't go back.

You can solve your problem, but
you can't go back.

Nikolay: And sometimes, okay, you solve the problem, but you cannot explain in detail what happened.

But this doesn't build confidence
in your team if you cannot

explain, right?

And to explain, you need to go
back.

Yeah.

And understand better.

Michael: Yeah.

I like that you're using explain
in a different context now.

Nikolay: Okay.

But it's related, right?

It's related.

Michael: It is.

It really is.

And then you can analyze what you're
going to do next.

Nikolay: Right.

Analyze what is overused.

Michael: Yeah.

Well, that's exactly what happened yesterday as well.

Well, there's an interesting thing here.

I jumped to conclusions a little bit.

I did spot that it was probably an estimation error, but I thought it might also be a data type casting issue, which is the kind of thing we haven't covered in depth, actually.

We covered it in my...

Nikolay: A mismatched query, right?

Let's call it a mismatched query, maybe.

Michael: Yeah.

So, basically, if your query can't
be answered by the index you

have.

So, like, the typical example of this is a function.

For example, if you're running lower() on a field, but you only have the plain field indexed.

And you might think logically that it could use the index.

If, for example, in your... date.

Nikolay: Huh? Converting from timestamp to date, for example; it's very common.

You convert a timestamp to a date and expect that an index on the timestamp will work.

It won't.

Right?

Because it's for the timestamp, not for your expression.
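
A sketch of that mismatch, with hypothetical names; the index is on the raw timestamp, so a predicate on an expression over it cannot use it, while a range predicate on the bare column can:

```sql
CREATE INDEX orders_created_at_idx ON orders (created_at);

-- Not index-friendly: the predicate is on created_at::date, not on created_at.
EXPLAIN SELECT * FROM orders WHERE created_at::date = '2024-01-15';

-- Index-friendly rewrite: keep the column bare and use a half-open range instead.
EXPLAIN SELECT * FROM orders
WHERE created_at >= '2024-01-15' AND created_at < '2024-01-16';
```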

Michael: So, yeah.

So, in some cases, Postgres handles
some of these data type conversions,

like text to varchar, which I wasn't
sure about yesterday.

But in some cases, it doesn't.

So, it depends on whether it supports
that.

But in general, that's the kind
of thing that can prevent the

use of an index that you're expecting
to be used, or like the

operator not being supported.

There are other cases around the index type. So let's say you've used a B-tree, because that's what most of us are using most of the time.

One example is using ILIKE on a text field.

So again, a case-insensitive search operator is not going to use it.

The B-tree index only supports certain operators.

And if you're using an operator that the index type doesn't support, like a greater-than on a hash index, for example, or anything other than equality on a hash index, it won't use the index, kind of obviously.
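
As a sketch (hypothetical table), one common way around the ILIKE case is a trigram index from the pg_trgm extension, which does support LIKE and ILIKE pattern searches; for exact case-insensitive matches an expression index on lower() also works:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX users_name_trgm_idx ON users USING gin (name gin_trgm_ops);
EXPLAIN SELECT * FROM users WHERE name ILIKE '%smith%';

-- Alternative for exact case-insensitive matches:
-- CREATE INDEX users_name_lower_idx ON users (lower(name));
-- SELECT * FROM users WHERE lower(name) = lower('Smith');
```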

So I think there are a few quite simple cases of why it can't use the index, and then there's a bunch that are a bit more complicated, along the lines you were talking about, which is what I've just classified as: Postgres doesn't think it will be faster.

So yeah, we've said expensive and
cheap a few times, but those

costs, while they're in an arbitrary
unit, the idea is for them

to estimate how fast the query
will be, or how slow it will be.

The higher the cost, the slower
it would be.

That's the idea.

It's not the only measure that
Postgres could have tried to optimize

for.

It could have tried to optimize
for IO.

And that's correlated.

But it's not the same thing.

It optimizes for speed.

Which is, yeah, interesting and
kind of leads us to what could

have gone wrong.

So I actually think there's a whole case we haven't covered, though: a whole simple case of, it doesn't think it would be faster, and it's correct.

That's a case that catches people
out a bunch.

Like you're trying to force it
to use an index, but it's doing

the faster route already.

Nikolay: For example, if you read 95% of your table, a sequential scan, filtering out on the fly those 5% that are not needed, might be much faster than walking the B-tree, right?

Even if you balance random_page_cost and the index scan cost, it still might be better to use a sequential scan in reality.

So...

Michael: Yeah, for sure.

And because of how sequential scans work, because of how efficient they are, and the fact they don't need to read data from multiple places, if your data fits really tightly on very few pages it could be a lot faster, and not even just at percentages as high as 95%, to be honest.

For small tables, or ones without very wide rows, I've seen it be as low as 30% or so where a sequential scan is still faster.

So I suspect you've come up with
a contrived example of it being

even lower.

Nikolay: Right, so in this case, I have various simple, very basic questions.

And I think in many cases we should just start from this question when we consider the performance of a query.

We should think about how much underlying data we have, and next, at a very high level, how many rows are returned.

Then we can unfold this and say, okay, if it's just one row, was it an aggregate?

Do we need to analyze a lot of rows for real, or do we just return...

Last week I had a case, people
complained about very slow, like

very bad behavior and they used
explain with buffers, they saw

like some gigabytes of data and
so on.

An ORM or GraphQL was involved, I don't remember, but they knew that they only needed a small number of rows.

And then we just saw that, like,
oh, it's bad, it's bad, it's

bad.

And the question is, how many rows are we actually returning?

And they were also like, we created all the indexes, the indexes are here, but somehow it's still sequential scans.

But how many rows?

And it turned out it was like 10,000 rows or so, maybe 100,000 or 25,000 rows.

I don't know, a lot of rows returned.

Do you really need them, or did you just forget the limit?

Maybe the limit is applied on the client side, right?

It's terrible.

And then I said immediately... again, maybe off-topic, but I'm a big fan of an approach in larger projects, not in tiny projects where you care about everything as your pet.

You know this concept, pets versus cattle, right?

Michael: Yeah.

Nikolay: Yeah, so it's usually about virtual machines or real machines, for a big fleet of infrastructure.

In the case of workload, if the workload is complex, I'm a big fan of dealing with queries like cattle as well.

So when I saw this (it was a new client), when I saw that they had this problem, returning too many rows unexpectedly, I immediately said, stop here.

This is just one example of the problem.

Let's return to a high-level, top-down analysis using pg_stat_statements, because it has the rows metric, and let's see the whole picture: how many other queries behave similarly and return too many rows.

But this is exactly when it happens, when an index is not used: because you're requesting too many rows here, right?

And the selectivity is not good.

It's very weak.
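
A sketch of that top-down check, assuming pg_stat_statements is installed: which normalized queries return the most rows per call.

```sql
SELECT queryid,
       calls,
       rows,
       rows / NULLIF(calls, 0) AS rows_per_call,
       left(query, 60)         AS query_preview
FROM pg_stat_statements
ORDER BY rows_per_call DESC NULLS LAST
LIMIT 20;
```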

Michael: Yeah, and Postgres is
going to do exactly what you asked

it to do.

If you want all of the rows, it's
not going to say, let me give

you the first 25 and see if that's
enough.

It's going to say, nope, here you
go.

Here's all of them.

Nikolay: Unless it's a special case, which is slightly uncertain, right?

It's something new for us.

Michael: Right, good point.

I haven't actually considered the
new index types.

Nikolay: Yeah.

Anyway.

No more certainty in the SQL world.

Michael: Yeah.

But yeah, there's another case
as well.

A big one is if you're selecting
a high proportion of the table,

chances are it's actually faster
to do a sequential scan.

Possibly a different index might
serve your query better.

I haven't seen that one as often.

But another case I've seen really
often is small tables.

So even if you're only selecting one row in a table that has fewer than 100 rows, which might not be that common in production, though I've seen quite a few reference tables that don't have very many rows, or, more commonly, developer databases on people's local machines where they just have a tiny bit of data for a new feature.

Even if you have the perfect index,
Postgres won't choose to

use it when you don't have much
data just because it's so...

Because all the data is on one page
and it can just simply look

it up very, very quickly, very,
very easily.

So small tables are the other exception
that I see more often

because of dev boxes.

Nikolay: You know exactly how easily I can get impatient here, trying to interrupt you, right?

This is my favorite topic.

Probably I will not add anything here.

And listeners who have followed us for a long time already know what I would say here, right?

Because you should always try to deal with full-size databases.

Michael: Yeah, or at least like
a

Nikolay: bigger,

Michael: large datasets.

If you're going to have to, yeah.

But it trips people up.

Nikolay: Unless you're a big fan of the hypothetical approach: HypoPG and hypothetical indexes and partitioning.

So that also would work, but in a limited way.
Michael: Would it though?

Because if you don't have any data,
I don't think...

Nikolay: Ah, no, sorry.

It was only discussed, not implemented.

And there was another project, which is like: let's export statistics from production, import them into a lower environment, and pretend we have a lot of data.

Michael: That's a cool idea.

Nikolay: Yes, it's a cool idea.

Maybe it's already there...

HypoPG has been developed, so partitioning was definitely covered, but statistics maybe not; there was another project, developed in Japan, I guess, and it's not super popular, but the idea is cool.

I think it's really good, but I expect some limitations of this idea, obviously.

If you can afford testing on full-size databases, this is the best way; you just need to make it cheap.

And here I stop, yeah.

Michael: But yeah, this is the more interesting case.

So right, once you're past those two, of it actually being faster, if it's wrong, if Postgres is not estimating correctly, that's when we get into the more interesting parts: like you mentioned, stale statistics, or not even having statistics that are relevant or needed.

So analyze is your friend here, on the tables in question.

We've also got a couple of other
tools.

We can increase the sample that
Postgres will do, if you've got

an oddly distributed-

Nikolay: Default statistics target.

Yeah, some people-

Michael: Not even-

Nikolay: For one column.

Michael: Yeah, exactly.

You can do it globally, but per
column...

Nikolay: But 100, the default of 100, is also quite a lot.

It might not be enough if you have a high insert rate and are constantly working on the edge of new data, and the distribution of values in the new data is very different compared to the archived data.

So there are things there, definitely.

Michael: That's one of the defaults
I'm happiest with, actually.

I think it's a pretty good sweet
spot because increasing it,

if you increase it globally, you
increase the time for analyze

to run globally.

And I think that has knock-on effects
for things like your downtime

for doing certain types of major
upgrades and things.

So I can see...

Nikolay: Major upgrades should be zero downtime.

And if you've jumped from 100 to
1,000, yes, it will probably

increase analyze time maybe two or
three times, but not 10 times.

It's not linear.

Michael: Sure, but I haven't seen it cause huge problems globally.

But yeah, if you've got a column with a skewed distribution, increasing it for that column can really help.
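
A sketch of doing that for a single column (hypothetical names); the new target only takes effect after the next analyze:

```sql
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 1000;  -- default is 100
ANALYZE orders;
```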

And then the final one I've got on my list was multi-column statistics.

Yeah.

Nikolay: Which can only be for columns inside one table; it cannot, unfortunately, be for two different columns in two tables.

It would be interesting to have that as well, maybe. Sometimes one table fully depends on another, right?

I don't know.
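
A sketch of the multi-column (extended) statistics Michael mentioned, with hypothetical columns that depend on each other; they are also populated by analyze:

```sql
CREATE STATISTICS orders_city_country_stats (dependencies, ndistinct)
    ON city, country FROM orders;
ANALYZE orders;
```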

Michael: Yeah. We do have a whole episode on hints, but I think this is where the hints discussion really comes into play.

It's like, what are your options
when we're out of tools within

Postgres to give the planner as
much information as we can about

the distribution and the stats.

Nikolay: Right.

Bugs also happen.

So bugs might happen, I mean planner bugs, or some not-yet-developed things, some easy stuff which is not there yet; the Postgres planner is not super powerful.

It's quite powerful, but compared to SQL Server, for example, the code base is much smaller, and the number of engineering hours invested into the SQL Server planner is much bigger than into Postgres.

So of course it's evolving, but still, some things might not yet be developed, and sometimes people notice quite simple things.

Sometimes you think, oh, it's so obvious, why is it not here?

And as an example, it's quite unrelated, but it still makes the point.

Last week, transaction_timeout was committed by Alexander Korotkov.

Many thanks to him.

And transaction_timeout was my idea.

And when I came to this idea, I was thinking, am I stupid?

Like, am I missing something?

The whole world still hasn't raised this.

I searched, and I didn't see a discussion: why is transaction_timeout still not present in PostgreSQL, after so many years of development?

And then a few folks validated that it's a good idea, actually, and then it got support.

So if you spend enough time with Postgres, with the query planner, you might indeed see cases which are not yet well developed.

Or just simply bugs.

Sometimes bugs happen as well.

Michael: Yeah.

I've spoken to several customers
that have long histories with

DB2, Oracle, and they're surprised
by some things, but they also

talk about some issues with, like,
especially on the Oracle side,

about kind of patchwork solutions.

So sometimes more code in these optimizers is not necessarily better, because it's like patch fix on patch fix on patch fix, and it leads to quite nasty, weird behaviors.

So I do admire the elegance of the simplicity, but equally, I think you're right: people I know that know these things well, or have used SQL Server or DB2 in the past, definitely say nice things about those planners.

Nikolay: Well, my opinion is not really based on my own experience.

My last experience outside Postgres was very long ago.

I mean, real experience, when you develop something for a long time.

It was more than 15 years ago.

So I'm just translating other folks'
opinions, but they seem

reasonable, these opinions.

And they are experienced guys who
actually came to Postgres and

say, you know, like, let's just
admit the planner can be much,

much better.

We need to continue developing
things, improving things.

And in this case, let's advertise
a recent blog post from David

Rowley.

I've noticed that many items there were his contribution.

Michael: Yeah, but also many from
other people too.

Nikolay: Yeah, so the post is about what's new in the planner in PostgreSQL 16.

Let's unfold it, and so on.

It's great.

It's also in my to-do list to inspect
deeper and maybe do some

tests.

So it's a good post.

Michael: And the thing I love most about it was inspiring people to upgrade.

Postgres 16 came out a few months ago now.

We've had at least one, I think two now, minor patches.

Nikolay: February? I have no idea.

Michael: I already know at least one company that has upgraded

as a direct result of that blog
post coming out, just to see

if some of their queries improved
as a result.

Cool.

Nikolay: Cool.

Michael: Good.

So yeah, thank you to David.

Nikolay: This is important to share,
by the way, because other

people think like, maybe it's still
too early and so on.

And you know, I recently was approached with just the question: do you know anything bad about Postgres 16?

I said no, actually; it's not Postgres 9.3.

No, 9.3 was terrible.

I remember.

Michael: I think I've got a new
answer for you that I think is

more on brand.

I think you should say yes, but
they're also true in 15, 14,

13, 12, 11 and 10.

Nikolay: Okay.

Yeah.

Michael: Anyway, I know what you
mean.

Did you mean like you don't have
anything bad in 16 that's new

in 16?

Nikolay: Nobody has bumped into problems and complained that it's bad.

A few companies have upgraded, maybe not huge installations, but still.

It has a good reputation so far.

And of course, the risks are still
there, and tomorrow we might

see some problems, but these risks
already go down, down, and

down over time, right?

And I mean, it's important to share: we upgraded, and everything is good in our case.

It doesn't mean it will be good in every case, but it builds up the level of confidence over time, and more people will be upgrading.

Nice one.

Okay, good.

Michael: Did you have anything else you wanted to add?

Nikolay: I always have a lot of
things, but I think we're out

of time.

Let's not consume a lot of time.

Of course, it's good if you run
more or walk your dog more, listening

to us longer, but still, there
are other things to do.

So thank you so much, and thank you for the support.

I haven't followed it really closely, but we see comments on YouTube, for example; it's good.

Please share your experience as well, so other people also see it, and we can highlight some comments, not only about Red Bull, right?

Or beer.
Michael: Free advert.

I'm teasing.

Yeah, I think over a thousand people
have listened to the last

few episodes, which is pretty cool.

Yeah.

So thank you all.

Nikolay: Good.

Okay.

Bye bye.
