BRIN indexes

Nikolay and Michael discuss BRIN indexes — how are they different to the default B-Tree index, when are they useful, and how they've improved in PostgreSQL 14.

Hello, and welcome to Postgres FM, a weekly show about all things PostgreSQL.

I'm Michael, founder of pgMustard.

And this is my co-host Nikolay, founder of Postgres AI.

Hey Nikolay.

How are you doing today?

Hello.

Doing great.

Just returned to California from a long trip, long flight but doing great.

How are you?

I am also well just recovering from our hottest day ever in the UK yesterday.

I was tired of movies and tried to watch some news and so on.

And the BBC was talking about these weather records 95% of the time.

Right.

So yeah.

Wonderful.

So today you chose the topic.

What is it we're gonna be talking about and why?

We are going to talk about various kinds of indexes, particularly BRIN indexes, that is, block range indexes.

Because recently we had a few materials published, which I think is good.

Also a discussion on Hacker News.

So it's probably a good time to discuss various aspects, including our own experience with them.

Yeah, absolutely.

I'm very interested in this.

And as you mentioned, there was a recent blog post by Paul Ramsey at Crunchy Data.

And I was lucky enough to be able to see a talk at Postgres London by Thomas Vondra about some of the improvements.

Oh, and also you were there.

Yeah.

Let me repeat the thoughts I expressed last time: I think all conferences should start recording.

As for my own talks, I'm only going to attend conferences which record.

I think it's very important to present to a wider audience, because not everyone can travel. It's great if you present to an in-person audience and have a good connection to them and follow-up discussions after your talk.

But if you also have recorded material, it's there forever. It's published, you can refer to your previous materials, and you can also see mistakes you've made and fix them later.

I saw how other conferences outside the Postgres community did this for many, many years.

And it's also beneficial for the conference itself, because this cloud of materials accumulated over the years attracts even more attention to the conference and increases its value.

So I think conferences which don't record shouldn't exist anymore, at some point.

This is my strong belief.

So, as I said in San Jose, I made an announcement that I'm not going to present anymore if a conference doesn't provide recordings.

So I'm glad that you saw Thomas Vondra, right?

You were there?

Yes.

And it was,

and you saw it, but we only have slides.

We don't have video, unfortunately.

Right.

Yes, good point.

He was great and he published his slides.

I'm not actually a hundred percent sure it's the exact slides from
Postgres London, but I think he's given this talk a couple of times.

So it's from one of them.

I hope he will give it one more time online.

I've already asked to invite him to the Postgres TV open talk series.

So I hope we will have recording if he accepts.

It'll be great.

Wonderful.

Anyone listening that will be on Postgres TV, if we get it.

Right.

So we have Postgres FM, Postgres TV, quite easy to remember.

Right.

And since we've discussed it: everyone, please subscribe and help us grow.

What can help? Subscribing, of course.

And also sharing links to Postgres FM, to particular episodes or just to the Postgres FM webpage, in your social networks and in groups where you discuss Postgres or engineering in general. It would be very helpful for us to grow.

So please do it.

Brilliant.

Well, so block range indexes.

So we have our default index type.

If I just create an index in Postgres, it'll create a B-tree, which is the default, a great default, and has got a lot of benefits.

It's brilliant.

But we do have other options and one of those is BRIN.

Yeah.

Speaking of B-tree, I think it's the default in every database system.

And this is my favorite question, but also a very tricky question, when you interview some engineer. You feel this is quite a good engineer, but if you pull this question out of your set of prepared questions, like what's a B-tree, let's discuss the structure and balancing, in many, many cases the interview is over, unfortunately.

So I think every engineer should know it, but there are many, many engineers who unfortunately don't know what a B-tree is.

B-tree, of course, is the de facto standard for database systems.

Right.

But it's not enough in many cases, in terms of performance. A B-tree can degrade over time, quite a lot, or it can be a big one, right? In terms of size occupied.

And when you talk about size, it's not only disk space occupied, but also part of the buffer pool, the cache state.

So if an index is huge, it means your buffer pool needs to keep more pages, more buffers, in the pool, right?

I just wanted to provide some remarks about B-tree. Also a meta remark: we have got very good feedback.

Thank you to everyone for the feedback. It's very important for us to hear what you think about our show and the ideas.

And a couple of people mentioned that we cover quite basic material, right? Let's do something more hardcore. But I'm a hundred percent sure that there are many, many people who need the basic material.

So I think we need to continue to talk about some basics, but sometimes try to go deeper.

Of course.

Right.

So that's why I talk about B-tree: if we jump straight to BRIN, well, B-tree is the default, so it's important to understand how it works.

Well, and BRIN is quite advanced, I would say.

I think a lot of people can go a long way using Postgres without a sensible case for using BRIN, or any need to know about it, for a long time.

And you already mentioned a few of the ways B-tree can get cumbersome.

And I think the big one here is just raw size.

You know, if you've got a very, very large table and you're indexing a column on it, it can be a very, very large index.

And index bloat.

Yep, absolutely.

And then the other thing that got brought up by Paul Ramsey in his blog post, which I hadn't actually considered before, was that there can be a difference in write overhead.

So B-tree has a higher write overhead on average than some other index types.

So that's one final downside, sometimes.

Yeah.

This famous index write amplification.

One of the reasons Uber went from Postgres to MySQL.

Right?

So each time we update a row, all indexes need to be updated, unless it's a HOT update.

So it's a big problem.

Of course.

So if you have just one index, it's just one additional update on top of the heap update.

But if we have 10 indexes, the overhead becomes bigger and bigger.

That's why you need to get rid of unused indexes and redundant indexes, right?

Yeah, but the one thing I hadn't considered was that different index types could have massively different overheads to one another.

And that,

Well, it's very common, for example, to have issues with updating GIN; there are special options like fastupdate and so on.

So GIN updates are very heavy, expensive, while B-tree is like a medium level of overhead in terms of updates.

But BRIN is definitely light, super low.

Do you think it's fair to say that the biggest advantage of BRIN indexes is their size? They're at least an order of magnitude, often two orders of magnitude, smaller than a B-tree index.

Well, right.

So if you don't know about B-tree, you should definitely start with Wikipedia as soon as possible if you consider yourself an engineer.

Right.

But what is a BRIN index?

It's quite a simple structure. In B-tree pages we have links to tuples, but BRIN entries are not for each individual tuple; they are for ranges.

So we describe that this page in the heap has tuples with values starting from this number up to this number, for an ID or timestamp or something. Everything in between is there.

Of course, multiple pages can be referenced in this way: by default 128 pages per range, but it can be adjusted.
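To make that concrete, here is a sketch of creating a BRIN index; the events table and all names here are made up for illustration:

```sql
-- Hypothetical append-only table
CREATE TABLE events (
  id bigserial PRIMARY KEY,
  created_at timestamptz NOT NULL,
  payload text
);

-- Default BRIN: one min/max summary per 128 heap pages
CREATE INDEX events_created_at_brin ON events USING brin (created_at);

-- The range size is adjustable via the pages_per_range storage parameter
CREATE INDEX events_created_at_brin_small ON events
  USING brin (created_at) WITH (pages_per_range = 16);
```

A smaller pages_per_range makes the index a bit bigger but lets it filter out heap pages more precisely.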

So what's interesting here is that this index directly dictates that you should think about physical layout, which B-tree doesn't directly do.

Right.

So of course we know that if, for example, you have some value or range, B-tree says those tuples are located in those heap pages.

And if, for example, we have a thousand tuples and each one is in a separate page, a thousand pages, it's called sparse storage.

You can run the CLUSTER command and reorganize your heap, so the storage is ordered according to this index.
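As a sketch (table and index names are hypothetical), clustering by an existing B-tree index looks like this; note that CLUSTER takes an exclusive lock and rewrites the table:

```sql
-- Rewrite the heap in the physical order of the chosen B-tree index
CLUSTER events USING events_created_at_idx;

-- Postgres remembers the index, so later runs can simply be:
CLUSTER events;

-- Refresh planner statistics (including the correlation statistic)
ANALYZE events;
```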

Right.

And in this case, they will probably all go to a few pages.

So fetching such tuples will be extremely fast compared to the sparse storage.

But a BRIN index directly says: I'm useful only if your tuples in the heap are stored sequentially, according to the ID or timestamp.

So in this case, for this particular range, only a few pages will be referenced, right?

The way I've heard it described is that it's all about correlation.

And if the rows are exactly in order, a hundred percent correlated, it's optimal.

And you can get really good performance there, because the index can be used to filter out the vast majority of the blocks from being scanned.

And it only has to look at a small number.

But as it gets less correlated, performance degrades.

And if there are some exceptions, and they make the ranges wider than they ideally should be, it can degrade quite quickly.

And overlapping.

Also overlapping.

Right.

So, okay. Yeah, not overlapping exactly, but the same range can be used many times, because more than 128 pages are referenced for that range.

So it's not good. But how can we check the physical storage aspects?

There is a hidden column called ctid. You cannot create a column called ctid, right, because it's a system name; it's reserved.

But what is ctid? It's two numbers.

One number is the page number and the second number is the offset inside the page.

The first number is the most important, and there is a trick to extract it, for example to count the distinct number of pages for particular rows.

It's easy.

You can convert it to a point, I mean (x, y), right, and then you can take only the first coordinate.

And in this case you will get only the page.

This is how you can extract the page number from ctid.
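The trick just described can be sketched like this, against a hypothetical table t:

```sql
-- ctid prints as (page, offset); casting through text to point
-- exposes the page number as the point's first coordinate
SELECT ctid, (ctid::text::point)[0]::bigint AS page_number
FROM t
LIMIT 10;

-- Count how many distinct heap pages a set of rows is spread over
SELECT count(DISTINCT (ctid::text::point)[0]) AS pages
FROM t
WHERE id BETWEEN 1 AND 1000;
```

The fewer distinct pages a given range of ids touches, the denser (and more BRIN-friendly) the storage.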

So you can always check your data.

With this trick you can check the ctids, or just the page numbers, and see exactly whether you have this correlation, right?

So say you have a sequential ID, or a timestamp filled by now(), or better in this case clock_timestamp(), because now() is generated only at the beginning of the transaction.

So if you have some batch insert of a thousand rows, all of them will have the same now() value, but clock_timestamp() will generate a timestamp for each row inserted, and you will have different values, right?

And in this case, you can select the ctid, convert it to a point and get the first coordinate, alongside id or created_at.

And you can see if there is correlation. I think you can even apply some functions to prove that the correlation is strong.
Right?

Maybe it's a good exercise.
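One way to do the exercise, assuming a table t with a sequential id: compare the heap page number with the column value, for example with the corr() aggregate, or just look at what ANALYZE already computed:

```sql
-- Correlation between physical page order and id; close to 1.0
-- means the heap order tracks the column order well
SELECT corr((ctid::text::point)[0], id) FROM t;

-- Postgres keeps a similar per-column statistic after ANALYZE
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 't' AND attname = 'id';
```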

But the cases I hear about, where people are actually using these in the real world, tend to be cases where you're pretty sure it's correlated already, because you've been inserting timestamped data in order. You know, maybe it's a sensor reporting data, and you never update it.

You never delete it.

It's all in order and you can

Delete is okay.

Only update matters.
Only update matters.

Good point.

Good point.

Yep.

So you can check for sure, but equally that's the case where it's most likely to be relevant, at least until the latest version.

I think those cases were pretty much the only good use case for BRIN when you had

exactly.

Yeah, let me describe it once more. As I've said, I'm trying to describe some basics. So once you've learned what ctid is, I recommend you create some table with a surrogate primary key, some sequentially generated number, some sequence.

And then, for example, we have a row where ID is one and we see that it's inside page zero, with some offset.

Okay.

But then, I recommend you execute: update this table, set ID equal to ID, where ID equals one, and see what happens with the ctid.

This is very interesting, because it may be unexpected for you: you will see that the ctid value changes. Sometimes the page is the same, if there is space inside the page, but you see how a tuple, which is a physical row version, is generated; a new tuple is always generated.

Always, even if you logically didn't change anything; ID equals ID means we don't do anything, right?

But you see how a new tuple is generated.

That's why updates can shuffle your data, right?

The correlation can get worse over time.
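The exercise, sketched against a hypothetical table t:

```sql
-- Before: note the ctid of the row
SELECT ctid, id FROM t WHERE id = 1;

-- A logically no-op update...
UPDATE t SET id = id WHERE id = 1;

-- ...still produces a new physical tuple: the ctid has changed,
-- possibly pointing at a different heap page
SELECT ctid, id FROM t WHERE id = 1;
```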

Yes.

Let's use an example: I have some really old data, and maybe I found out the sensor was incorrect or had a bug in it, and some of the data needs to be updated. If I update some of that old data, what Postgres will really do is insert new rows at the bottom of the heap.

And then mark

Or in some pages where there is a space.

Yes.

Good point.

Yeah, good.

But not in the same pages.

Yes.

But then that reduces the correlation of that table.

Exactly.

Brilliant.

Or, like, we have some import of old data we were missing.

Not update; insert can be a problem as well, if we insert old data late, a postponed insert, right?

Delete is not a problem.

I don't foresee a problem with delete, but postponed inserts and updates, yes.

And also, you know what's probably a problem: repacking. If you run pg_repack, it allows you to achieve the same effect as CLUSTER but without downtime, not blocking for long.

Right?

So if some DBA, for example, didn't notice that there is a BRIN index, which requires this correlation with physical storage, and decided to perform repacking using pg_repack, clustering according to some other column or index, you will get a very different physical storage, and the correlation will not be good at all for your BRIN index.

Interesting.

I thought pg_repack would help.

Have you heard of pg_squeeze?

That's one I've I'm aware of that's an alternative to pg_repack.

pg_squeeze, I've read about it, checked it, but I didn't use it yet.

It's an interesting idea to use logical decoding instead of the pg_repack approach with substitution of the table.

pg_repack's idea with a delta table is really interesting, right?

pg_repack writes all changes to some delta table, and then sometimes it's a challenge to apply them,

because the changes from this delta table need to be applied in a single transaction.

So if a lot of changes are accumulated, it can be a problem.

But if you use repacking with the cluster option, using a different index: for example, you may have, I don't know, some name column, right?

Alphabetical.

And you want to reorganize your heap.

You need to present the data with pagination, ordered by name, and you think this is the most useful use case for you.

So you decided to reorganize the heap, so the data in the heap is ordered according to name.

So there is correlation with name, not with ID

or the created_at timestamp, and your BRIN index in this case will not perform well.

Unless it's a BRIN on name.

Right.

But I've actually thought of a problem with delete as well.

Once you've deleted those rows in, let's say, the middle of your heap, and it's vacuumed later, new inserts are now gonna go in the middle.

Yeah.

So

now we've got a problem again, but, anyway,

Good point.

This is interesting, cause I think it slightly takes us onto the improvements that Thomas was talking about in

Yeah,

14.

because we discussed this.

Yeah.

So.

So, just to draw some conclusions before we discuss the improvements in Postgres 14.

It means that before Postgres 14, BRIN can be used only if you definitely have an append-only pattern for your table: a log-like table, where you log some events or some telemetry data from somewhere and so on.

Right, in this case BRIN can be used.

Or it degrades quickly and you need to reindex regularly.

Like, I guess there are some use cases where you update data and then reindex.

I guess you still have some benefits.

But it degrades quickly, and maybe there aren't the benefits. Maybe you're better

Hold on, can a reindex help?

Like clustering can help.

So you need to reorganize the heap, but, yeah.

Maybe repacking and then reindexing.

Maybe, maybe. This is interesting.

Do we need reindexing? But yeah, it's light anyway.

Yes.

It's very light to build.

That use case is popular, you know; lots of people have logging tables.

Lots of people do have this.

Yeah, but the problem is, I tried at least three times over the last few years, since BRIN popped up, since it was created, and never decided to go with BRIN.

Never.

Interesting.

I didn't find it useful. Okay, the size is quite small, but on large volumes of data it always performed worse than B-tree for me, much worse.

Both materials we mentioned, and we will provide links in the description, say that for point searches, when you need only one row or a couple of rows, BRIN is not attractive.

Maybe my cases were closer to this, or I needed only 20 rows for pagination and so on. But when you need to find many, many rows, like thousands of rows, BRIN can outperform B-tree. I just never saw it.

When I need to decide what I should use, I do experiments.

I'm a huge fan of experimenting; I'm building a company on top of the idea of experimenting. Always, always, always: for learning, for making decisions.

It's the best case if you can experiment with production data, or data similar to production, without PII, without personal data. But I never decided to use BRIN, because I saw it as less performant than B-tree, even for insert-only workloads.

I'm the same.

I've only ever seen B-tree outperform BRIN in terms of raw query performance.

I think there's a good example in the Crunchy Data blog post where, in lab conditions, BRIN can outperform B-tree,
perform,

in the real

Let me make a disclaimer.

Last time I was the bad cop who criticized a lot of various stuff, and I'm going to continue.

I see value in criticism, and I'm going to be polite, maybe sometimes not very polite, because I often have quite a strong opinion.

Right.

But I hope nobody will be offended.

It's just for improving things, not for damage.

Right.

And I can also sometimes be very wrong, and I quickly admit it if I see evidence that I'm wrong. But in this case, I want to criticize both materials, from Paul Ramsey and from Thomas Vondra.

I cannot understand how we can talk about performance and physical layout and so on, and provide plans without buffers.

It's a huge mistake. Simply a huge mistake, because we discuss how BRIN can be better in terms of performance than B-tree,

and talk about some timing, which has a lot of fluctuations.

Right.

And that's not reliable. At the very least you need to run it multiple times and take some average if you talk about timing.

And Paul Ramsey's blog post also makes some conclusions based on just a single data point each,

for example, 10 rows, 100 rows, 1,000 rows, 10,000 rows.

Okay.

We see BRIN is better.

I'm not convinced, because I saw different results on huge volumes. And also, a million rows, seriously, a million rows is nothing today.

Test on at least a billion rows.

Right.

So sorry if I'm being offensive, but I just want everyone to have better materials.

It's great to discuss performance, but don't do plans with just explain analyze; do plans with explain (analyze, buffers).

We will see IO.

IO is most important.

All indexes exist to reduce the number of IO operations.

An index is all about reducing IO. With any index, instead of making a lot of buffer hits and reads, we want to have a few buffer hits and reads to fetch one row or a thousand rows.

We don't want to have a million hits and reads when we need only a thousand rows.

But when you talk timing, I don't understand where this timing comes from.

When I see buffers, I understand: oh, this timing comes from huge numbers of buffer reads and hits.
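In other words, always request buffers in the plan. A sketch, reusing the hypothetical events table and BRIN index from earlier:

```sql
-- BUFFERS shows how many 8 kB pages were hit in the buffer pool
-- or read from disk; timing alone fluctuates from run to run
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM events
WHERE created_at >= now() - interval '1 day';

-- A BRIN plan typically appears as:
--   Aggregate
--     -> Bitmap Heap Scan on events
--          -> Bitmap Index Scan on events_created_at_brin
-- with "Buffers: shared hit=... read=..." lines per node
```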

Yeah, really good point.

And I think actually there's a couple more points. The reason I always used to think, until the blog post and until a couple of things that Thomas said, that BRIN couldn't outperform B-tree is because it only has the ability to do bitmap heap scans, or bitmap scans in general.

And a B-tree can also do a bitmap scan.

So the only slight advantage is that you have to build the bitmap if you're using a B-tree.

So there's slight overhead there for the B-tree, but it's also efficient.

It's looking at exactly the blocks it needs, whereas the,

When you say you need to build it, you mean the executor, which will do it, right?

Sorry, yes. Not the user.

No, I was thinking: the executor has to do it for the B-tree index, but it doesn't have to for the BRIN index. But with a BRIN index, I think you'd always get false positives back that you have to filter out.

Like, you're always gonna get some rows in those blocks that you're gonna then have to get rid of.

So I didn't understand how it could be more efficient. But the main advantage, I always thought, was the size.

So instead of having a multi-gigabyte index that also then takes up multiple gigabytes in the cache and all those things,

you could have an index that's normally in the kilobytes, or even, for very large tables, in the megabytes.

So I always saw that as the main advantage, rather than performance.

You know, for example, if we just created the table, filled it with data and forgot to run vacuum, and I see the Crunchy blog post does this, in this case you will have a plan with a bitmap index scan, right?

And in this case, I can imagine that BRIN will be efficient if you have a huge table and you need to find and fetch a lot of rows.

But I would definitely look, first of all, at the type of operation, like bitmap index scan.

And also I would look at buffer hits,

I just remembered.

Yeah, I agree.

One last thing that's a bit unfair with the Crunchy Data blog post, and I know it makes it a slightly fairer comparison, but it's unfair in the real world, is that they also disabled index-only scans, because BRIN doesn't support them.

To make it a more apples-to-apples comparison, that's disabled. But in the real world, you might sometimes get an index-only scan, especially for data that

Yeah.

Michael, Michael, Michael, you are supposed to be the good cop here.

I didn't agree to this.

Why, why, okay.

Okay.

Yes, but yeah.

Those are all really good points.

I'm really happy that we've brought those up.

I actually do think they're good content, and I'm glad that they've sparked a discussion here.

The other thing, if we're on the more advanced side: I do think BRIN has improved a lot in the last version.

I didn't know that. Well, I wasn't aware of how much it had until Thomas's talk.

The one thing I really wanted to bring people's attention to is a new operator class, which I'm aware is slightly advanced.

By default, BRIN indexes still behave pretty much exactly as they did in Postgres 13 and 12, I believe.

But you can create the index slightly differently. There are a couple of different new operator classes, but the one I'm particularly excited about is minmax-multi. My understanding of it, and this might be flawed, is that instead of only maintaining a single minimum and maximum for each block range, it can maintain multiple minimums and maximums.

Now, the big advantages of that, well, two big advantages: it can support different types of correlation.

So if your data is, maybe not the timestamp example, but, well, actually the timestamp example is great:

if we inserted some old data, we would now be able to have two min-maxes, or probably not two, probably lots more than that. We could have the new data and the old data and a big gap in the middle, where Postgres knows it doesn't have any data for that big gap.

So at worst they degrade much less badly, but I think it's a huge, huge improvement that could support a lot of different types of correlation as well.
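The new opclass has to be requested explicitly when creating the index. A hedged sketch, using the timestamptz variant of the minmax_multi operator class from Postgres 14 and the hypothetical events table from earlier:

```sql
-- Opt in to multiple min/max summaries per block range (Postgres 14+)
CREATE INDEX events_created_at_brin_multi ON events
  USING brin (created_at timestamptz_minmax_multi_ops);

-- How many values each range summary may keep is tunable
CREATE INDEX events_created_at_brin_multi_64 ON events
  USING brin (created_at timestamptz_minmax_multi_ops)
  WITH (values_per_range = 64);
```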

And I'm interested why, well, I think it could be so useful that it should be the new default for BRIN, but I do understand that defaults are difficult to change and you might not want to affect existing users.

To me it sounds like a game changer.

And I'm looking forward to testing it once again.

Unfortunately, I still don't have any real big production with Postgres 14, but once I have it, for me it's a reset of my opinion about BRIN indexes.

Because, as I've said, in the past I made decisions not to go with it.

Next time I would probably not have spent time, not wasted time, double-checking, but now I definitely will double check for the cases when this correlation is not perfect,

and we have some not intensive, but occasional updates or deletes or postponed inserts, as we discussed.

It sounds interesting.

And, to be fair, Paul Ramsey's article doesn't describe these improvements, but it talks about the pages_per_range option that was already available before.

And you can try to play with this parameter and see how performance in your particular case is affected, negatively or positively. It also mentions the pgstattuple extension, to check the physical layout of pages and see what's happening under the hood.

This is very, very good.

I mean, reminding people about these capabilities is helpful if you run your own experiments.

So a good, good thing.
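For reference, a minimal sketch of both (table and index names are hypothetical):

```sql
-- pgstattuple reports the physical state of the heap:
-- live vs dead tuples, free space, and so on
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('events');

-- And compare index sizes, which is BRIN's headline advantage
SELECT pg_size_pretty(pg_relation_size('events_created_at_brin'));
```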

But Thomas Vondra, discussing the improvements, also shows some good examples, and I've noticed that these operator classes look like they're available for various data types, right?

And UUID is there. This is interesting, because UUID is usually considered random; the physical distribution is awful, right?

And BRIN for UUID is not good, because it's not ordered.

I think there are some UUID versions that are, yeah.

Okay.

But I'm wondering if that's why they've been included.

Actually, we use it. I now remember we use something like that in our tool.

We use some almost-ordered ones.

Right.

But many versions are not, like UUID v3 and before.

I don't remember these things; they don't look ordered at all.

And it is interesting to check how the new improved BRIN index can perform on this data type.

So, I mean, it should be interesting not only for cases which don't have a fully append-only pattern, but also for these kinds of data types.

Yeah, in person actually, Thomas made a really good point that this is a really good area for new people who want to contribute to Postgres to explore.

He has a few ideas for improving these further, and indexing in general is quite an isolated part of the code base.

You don't have to understand everything about Postgres to be able to improve things, and BRIN indexes especially, they're isolated.

They're not the default, so they're not as contentious.

So I think there's a few good reasons why this would be a good area to get involved in.

And I think Thomas mentioned being willing to help people out if they do want to get started on that kind of thing.

So that would be

a good first issue, right?

Right.

The label.

If Postgres used GitHub or GitLab, this label would be there on the issue, but no,

definitely a topic for another day.

For another day.

Yes.

Oh yeah.

Yeah.

We have several topics like that.

Yeah.

Good, good.

So I find these two materials useful, for reminding you that you can use this or that in your experiments. But as I've said, I think everyone should experiment with their own data and queries.

If you need to make a decision, don't trust blog posts blindly; experiment with your data and your queries. Even if you don't have data yet, you can generate some, as close as possible to how you think the future will look, and queries too.

And then you can start experimenting.

My big piece of advice there is always to make sure you have roughly the right number of rows.

Everything else matters a bit, but the raw number of rows, in the right order of magnitude, makes a big, big difference to which query plans you're gonna see and to overall performance.

So if you do nothing else, please at least insert an appropriate number of rows for testing.

And I need to add here: the number of rows matters on two sides. First, the number of rows stored in the table, and second, the number of rows you need for your query.

Sometimes there is no limit in the query, and then your data is growing.

And this is a big mistake many people often make: not limiting the result set.

So you need to think how many rows your users will need.

They probably won't need a million rows if you present the results on some page in a mobile or web app, right?

So you need to think about limits and pagination and so

Absolutely.

I think we're done.

Yeah.

Good.

Nothing left on this topic.

At least in our heads; maybe some people have some additional thoughts.

I would love to see them in the comments, or on Twitter probably, right?

And well, I think we probably could talk about it for a bit more. Like, the new bloom operator class seemed really interesting.

And the minmax-multi thing is also configurable.

Oh yeah, I

I'm also conscious of time.

And I think that is probably verging a little bit advanced, but there's some really cool things happening in BRIN.

If you haven't considered them for a while, please do, and upgrade if you can, to Postgres 14.

To Postgres, what else? I think,

15 already.

Right.

If you're brave, um,

For testing, you don't need to be brave.

It's not production; testing can happen elsewhere.

Right?

So I'm not saying upgrade your production to 15 beta 2; it's probably

yeah.

Brave is not quite the right word, is it?

Awesome.

Well, thank you everybody for joining us.

And yeah, send us your feedback, let us know what you'd like
discussed and yeah, thank you Nikolay, hope you have a good week.

Thank you, Michael.

See you next week.

See you next week.

Bye.
