BUFFERS by default

Nikolay and Michael discuss BUFFERS — what they are, how they can be very useful for query optimisation, and whether they should be on by default (spoiler alert, we think they should).

Michael: Hello, and welcome to Postgres FM, a weekly show about all things PostgreSQL.

I am Michael, founder of pgMustard.

This is my co-host Nikolay, founder of Postgres AI.

Hey Nikolay.

How are you doing today?

Nikolay: Hello, doing great.

How are you?

Michael: I am doing well.

Thank you very much.

Today, we are gonna talk about buffers.

And I know this is a topic you care a lot about and
I've enjoyed reading your opinions on them in the past.

So, really excited to talk about why they're important.

Maybe we can start with what they are.

Nikolay: You know what, let's start slightly off topic.

SQL can be written in upper case or lower case, or maybe there are many more options.

I prefer lower case, because I write a lot of SQL code; I write SQL code much more than code in any other language I've used over the last many years.

And so I don't like to scream and use all caps at all, but when I type BUFFERS, I enjoy typing it in uppercase.

Because, yeah, because it's so important to use them.

Michael: So just to check, do you write explain, open bracket, analyze, comma, and then caps lock on BUFFERS?

And then you
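
For reference, the command being described, written out in full (the table and column here are just placeholders):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM my_table WHERE id = 1;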

Nikolay: I mean, I still use upper case when I explain things to people.

And when you embed small parts of SQL in regular text, it makes sense to still use upper case.

That's probably why I remember it. But BUFFERS, a couple of times per week I explain to other people how important it is to use it.

So this is the one I enjoy typing in uppercase.

Michael: That's so funny.

I think I mix my use of lowercase and uppercase all the time in blog posts.

It's really difficult to know.

Sometimes it's not clear that you're talking about a keyword or code, and sometimes the code formatting is not great.

So yeah, especially when it's inline, I do sometimes use capitals, just to show that I'm talking about the keyword, not just a normal word in the sentence. But I'm very inconsistent.

While we're talking about consistency of how we write things: do you always write Postgres, or sometimes PostgreSQL?

Do you have, like, a rule on which one you use?

Nikolay: 90% Postgres, just for 8 letters instead of

Michael: just for the shortness.

I find myself doing the same, probably similar 90%.

I tend to use PostgreSQL if it's a super formal use.

So maybe if I'm talking about a version number in a formal setting, I might say PostgreSQL, but I don't have any better rule.

Nikolay: Yeah, I keep trying to pull us off topic, you know. It's so bad that Postgres is eight letters, not seven, because in California you can have a custom driver's license plate, and it's limited to seven letters.

So imagine being in the car with a license plate saying Postgre, without the S.

Michael: Well, that would be pretty funny.

Yeah, there are people out there that would see that as a hate crime, I think.

Nikolay: Mm-hmm

Michael: anyway.

Yeah.

So back to a shorter word.

Buffers.

So, in case anybody's not sure what we're talking about: this is a measure of, well, I guess it's not quite strictly this, but it's a rough measure of IO in the query.

So in terms of the number of blocks being read or written by various parts of the query. And it shows up in multiple places.

I'm aware of it being in explain, so explain analyze mostly, but also now plain explain as of recent versions, and of course as columns in pg_stat_statements, in terms of telling us how much IO different queries are doing. Are there other places that it shows up?
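
As a rough sketch, the buffer columns in pg_stat_statements can be queried like this to find the most IO-heavy queries (the ordering here is just one possible choice):

    SELECT queryid, calls, shared_blks_hit, shared_blks_read
    FROM pg_stat_statements
    ORDER BY shared_blks_hit + shared_blks_read DESC
    LIMIT 10;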

Nikolay: Yeah, well, let's explain explain a little bit, very briefly, because it's sometimes very confusing for new people. Explain is just to check what the planner thinks about future query execution, but it does not execute the query.

It only shows what the planner thinks right now, for the given data, statistics, and Postgres parameters.

Explain analyze is for execution.

Buffers makes sense when we have execution.

So explain analyze. Or, I think since Postgres 13, it also makes sense for the planning stage as well.

Right?

Because the planner can use some buffers to do its work.

Right.

Tricky question.

I don't remember a hundred percent.
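
A minimal sketch of what this looks like on Postgres 13 and later: plain EXPLAIN with BUFFERS reports the buffers used during planning (table name and numbers are illustrative):

    EXPLAIN (BUFFERS)
    SELECT * FROM my_table WHERE id = 1;
    -- ...
    -- Planning:
    --   Buffers: shared hit=103 read=11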

Michael: Well, I guess it's always been
possible for the planning stage to read data.

But I guess we've not had the ability to ask it how much it's doing before.

Nikolay: Since Postgres 13 it's possible, I guess.

Right.

So for the planner stage, it also shows how many buffers were hit, read, and so on.

Right. But what I also wanted to say: there are many confusing places in the database field in general, and in Postgres particularly. For example, there is also the analyze keyword, which is absolutely another thing; it's a command to collect statistics.

Well, it's not a hundred percent far from getting the plan, because if you run analyze on a table, you can fix the plan, for example, because you will have fresh statistics. But it can be confusing, because analyze after explain means a very different thing than analyzing a table.

Right?

So it's like basics for people who are starting with Postgres.
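
To make the two meanings concrete (the table name is hypothetical):

    ANALYZE my_table;                          -- collects planner statistics for the table
    EXPLAIN ANALYZE SELECT * FROM my_table;    -- actually executes the query and shows the real plan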

Michael: Yeah, absolutely.

So probably today we're only gonna be talking about analyze in the context of the explain parameter.

Yeah.

Nikolay: All right.

Michael: Cool.

So, is there anything in particular you wanted to make sure we covered? Where did you wanna start with this?

Nikolay: Maybe we should discuss what 1,000 buffers hit or read means, right?

Over many years I've been working with various engineers, and the most interesting in this case are backend engineers, who are the authors, directly or indirectly, of SQL.

They either write SQL directly or they use some ORM or something else that generates SQL. But they have the biggest influence on the result.

And I noticed that most of them don't understand, or they may understand, but they don't feel what a thousand buffer hits is.

What is it?

Michael: Yeah.

Awesome.

So when we're talking about this, we're saying: let's say I've done explain analyze on a query that's a bit slower than I'm expecting it to be.

And because I've been told I should always use buffers by some helpful people down the years, maybe they listened to a podcast, they're now using buffers. And they see, under let's say an index scan, shared hit equals 500, read equals 500.

So in total we've got 500 blocks that are shared hits and 500 blocks that are shared reads. This in total is a thousand blocks, and each one of these is eight kilobytes.

The hits being from the Postgres buffer cache, and the reads being from, well, maybe from disk, but maybe from the operating system cache.

Unfortunately, we don't know which one.
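
In the plan output, that example would appear as a fragment something like this (node and numbers illustrative):

    Index Scan using my_index on my_table ...
      Buffers: shared hit=500 read=500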

Nikolay: By the way, it would be so great to see that. For macro analysis, for pg_stat_statements, we have an additional extension, pg_stat_kcache, which can show you real disk IO.

But for explain, we don't have anything.

It would be so good to somehow hack it.

Michael: We have, I've forgotten the actual wording for it. Is it IO timing?

So we can show IO timing

Nikolay: Yeah.

Michael: with that keyword.

Nikolay: The track_io_timing parameter in Postgres. But it won't show you the number of buffers, which is the amount of work.
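
For reference, a minimal sketch of enabling it for a session (setting it with SET needs superuser; it can also be set in postgresql.conf):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table;
    -- plan nodes then include lines like:
    --   I/O Timings: read=12.345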

Somebody mentioned this phrase on Twitter; we had yet another discussion about buffers in explain on Twitter.

And somebody mentioned "the amount of work".

This is exactly it; this is a great description of this information. And timing is not the amount of work.

It's the duration of work.

Why we are interested in the amount of work, we'll discuss later.

Right.

It's maybe more interesting than timing. But first of all, I double-checked explain buffers without analyze.

It makes sense.

I have Postgres 14, but I believe it's since Postgres 13, when explain got this planning stage, and I see buffer hits and reads there.

So, is 1,000 buffers big or not that big? How to feel it?

Because developers, in their mind, they may understand: okay, one buffer is eight kibibytes.

By the way, your article is old school. It says kilobytes, the old-school way, not kibibytes. Because it's not 1,000, it's 1,024, but that's another off topic.

So we have a block size of eight kibibytes.

Is it big to have 500 hits and 500 reads for buffers, in total?

Michael: My arithmetic is awful at this kind of thing.

That's one of the reasons why, in the tool we make, we display that for people, to try and make it easier.

Nikolay: So we take the number of blocks, multiply by eight, and divide by 1,000 to get megabytes, roughly. Of course, 1,000 blocks is eight megabytes.

It's not that big, it's quite a small number, but it depends.
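
That arithmetic can be done in SQL itself, using the server's reported block size (assuming the default 8 kB blocks):

    SELECT pg_size_pretty(1000 * current_setting('block_size')::bigint);
    -- 8000 kB, i.e. roughly 8 megabytes for 1,000 buffers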

Of course, if you just need to read a very small row consisting of a couple of numbers, it's probably too much to read that for a tiny row with two columns.

So it's not that big. But what I'm trying to say is that you're absolutely right: converting to bytes encourages engineers to imagine how big this data volume is.

So if they hear that to read this couple of rows we needed to deal with, even hitting, not reading, hitting a gigabyte, it makes them think: oh, something not optimal is happening here.

I should find a better way to improve this query, for example, to have a better index option or something like that.

But there is a trick here when we talk about reads. For example, okay, 1,000 reads of buffers can be converted to eight megabytes. But 1,000 hits can be tricky, because some buffers can be hit multiple times in the buffer pool.
Michael: Well, yeah, so I think this is contentious.

I've chosen to mostly ignore this, and if we get some double counting, then actually, in some ways, Postgres is doing duplicate

Nikolay: Right.

Michael: there is some duplicate work going on, and if we are using it as a measure of work done,

Nikolay: Yeah.

Michael: that works even with the double counting.

Nikolay: It's, I also think it's okay.

So we can have much less data stored in memory, but if we need to hit one and the same buffer multiple times, we still count it the same way as if they were separate buffers.

And we just need to understand how much work we need to do.

We can imagine cases when the buffer hits, converted to bytes, show an amount of work so big that it exceeds the buffer pool size, maybe.

Right.

Michael: Well, it could even exceed the
amount of data you have in the database.

Like it's totally possible.

Nikolay: Theoretically.

Michael: Well, I saw an example, I think it was from test data. Did you see the blog post by Ryan Lambert on H3 indexes?

It was a few weeks back, and it was really interesting to me.

They're a type of geospatial index, and in one of his example query plans, he was doing an aggregation on a lot of the data, I believe.

And it was doing something like 39 gigabytes of buffers total.

And that's a lot, right?

But it really shocked him, because he knew his data set was smaller than that.

Nikolay: 39 gigabytes of buffers, or buffer hits?

Michael: Buffers total.

Nikolay: So buffers, work, total. Because if we say some number of bytes of data, it feels like storage, not an amount of work to be done.

Michael: Yeah.

Good point.

Nikolay: So, I mean, it can lead to confusion much more easily compared to the case when we mention hits and reads all the time: buffer hits, buffer reads.

So I think, if we convert to bytes, we shouldn't omit these action words.

Michael: That's a good, interesting point.

Do you mean like hits and reads, or do you mean we should make sure we still mention that they're buffers?

Nikolay: Yeah.

I mean, if we say a number of bytes of buffers, we can provoke the confusion of thinking about it as the number of bytes stored in memory.

But if we keep mentioning hits and reads, we avoid this confusion. Maybe it's just my opinion.

Michael: Yeah.

Yeah.

Nikolay: Your post also mentions other types of buffers: not only shared buffers, but also local and temp. And additional confusion can arise there, because local buffers are used for temporary tables, and temp buffers are used for some other operations.

It's interesting that it can lead to confusion, but I found that most of the time we just work with shared buffers when we optimize a query. And let's discuss why it's more interesting to focus on buffers than just on timing.

Michael: Yes.

I did a blog post on this recently.

I'll link it up in the show notes.

This is something I think I learned mostly from listening to you speak in the past, but people that are super experienced in Postgres performance work do often tell me that they focus a lot on buffers at the start.

And it took me a while to really work out why that was.

But the super important part is that timings alone, so if we just get explain analyze and don't ask for buffers, there are a few slight issues.

One is we can ask for the same query plan a hundred times and get a hundred different durations.

You might get some slightly slower ones, some slightly faster ones, then mostly around the same time, but it's different each time.

That's one flaw.

Nikolay: Especially if we're not alone on this server,

Michael: Yeah.

Nikolay: And we almost always are not alone.

Michael: Yeah, really good point.

So conversely, why is it different for buffers?

The number of shared hits might change and the number of shared reads might change, but in combination, unless you change something else, chances are, if you run the same query a hundred times, those two numbers summed together will come to the same total each time.

So that is a more consistent number than timings, even if the individual numbers change.

That leads on to issue number two, which is: if you're looking at timings, the first time you run a query, the data might not be cached.

And as you run it a few more times, yeah, exactly, it might or might not be, but you don't necessarily know without the buffers information.

So timings can fluctuate quite a lot based on cache state. Again, with buffers, whilst the number of hits and reads would change, the sum of those two won't change depending on the state of the cache.

And then the third one I pointed out in this blog post doesn't come up as much, but I think it's quite important: the Postgres query planner is not trying to minimize the number of buffers.

What it's trying to do is minimize the amount of time.

And sometimes it will pick a plan that is inefficient in terms of buffers if that could make it faster.

The most obvious example of this, I think, maybe the only one, I'm not sure, is through parallelism.

So if it can spin up multiple workers to do the work quicker and sequentially scan through the entire table, maybe it'll choose to do that, even though on a pure efficiency play you might have been able to do less work on a single worker.

So, yeah, I don't see many examples of that, but it does feel like a flaw of looking at timings alone.

Nikolay: Yeah, exactly.

I agree with all points.

Also, if you think about time: of course you want to minimize it, this is your final goal. But indeed, if you check the query on a clone, for example, which has different hardware, maybe even a different file system and so on, it makes you think about timing. You deal with time, and it doesn't match production, and you think: oh, it's not possible, we need the same level of machine, and so on.

But then the process becomes very expensive, while it's still possible to keep the process cheap.

You just need to focus on buffers, forget about timing for a bit, and optimize based on the amount of work.

And if we focus on buffer numbers, of course we also focus on row numbers, but it's more logical. You have rows, but you don't understand how many row versions were checked and how many dead tuples were removed.

Explain doesn't show it, but buffers can help you understand the amount of work to be done.

And this is exactly what optimization should be about, because any index is a way to reduce IO.

Right?

To just reduce the amount of work. Instead of a sequential scan, for example, on a large table, where we need to read a lot of pages, an index helps us read a few pages and reach the target quicker.

So an index is a way to reduce the amount of work, and that's why timing is also reduced.

It's a consequence.

So when you analyze a query and optimize it, why deal with consequences instead of the core of optimization, the amount of work, the buffers?

Michael: I think I completely agree with you, but I do have a couple of questions.

I think things really click for people when they see that, when they didn't have an index, a sequential scan read 500 megabytes of data, maybe, and then when they add an index, it's able to look up the exact same row in 24 kilobytes or something.

Nikolay: Right.

Right.

And instead of seeing how timing was reduced and thinking "oh, good", we see how buffers are reduced and understand why timing was also reduced.

We see the reason for this reduction of timing.

Michael: Exactly.

I think there's a risk that people see an index scan and think: oh, the index is magic, that's why it's fast.

It's like, oh no, it's not magic.

It just lets you look it up much more efficiently and therefore faster.

So I'm completely with you on that.

But where I lose you a little bit is that there are expensive operations that don't report buffers.

So, for example, a sort in memory, or some aggregations; maybe these would count as CPU-intensive rather than IO. And maybe that's far less often the bottleneck, but we don't get any buffers reported for them if they're done in memory.

I like getting both timing and buffers and using them in combination.

Nikolay: Yeah, of course we still have other information in the plan, so we can understand: okay, IO was quite low, four buffer hits and that's it, but we have a hundred milliseconds; what's happening here, right?

Of course that happens sometimes, but quite rarely, you agree? Most often, the reason for a slow query is a lot of IO happening under the hood.

Right.

Michael: Well, even with the sort case, right?

Why is the sort taking so long? It's because you are sorting a million rows. And if you could instead sort the first 10 that you need, maybe you're paginating or something, and you can get those ordered from an index, you're gonna massively reduce the IO and therefore not need to sort as many rows in the first place.

So even when it's not the bottleneck, I think it's often the solution. Even if you speed up that sort of a million rows, it's still gonna be a lot, lot slower than only fetching and sorting ten.

Nikolay: Yeah, we also may think about efficiency in the following way.

Okay, we need to return 25 rows or 10 rows. How many buffers were involved in the whole query?

And the buffer numbers, importantly, are cumulative. So you can look at the root of the query plan and see the total number for everything included underneath.

So the question will be: how many buffers were involved to return our 10 rows?

If it's 10 buffers, it's quite good.

If it's one, it's excellent.

It means we had a scan of just one buffer, one page, and all the rows happened to be present in this page.

So few buffers is good to return 10 rows. A thousand, already not so good, right?

We discussed that it's just eight megabytes, but to return 10 rows it's probably not that efficient.

But also, two slightly deeper comments related to explain.

It's interesting.

As I mentioned, for pg_stat_statements we have the pg_stat_kcache extension, unfortunately not available on almost all managed Postgres services like RDS, but available for all people who manage Postgres themselves.

This excellent extension adds information about CPU and real disk IO, and for CPU it can even distinguish user and system CPU time, and also context switches.
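
For those who run Postgres themselves, a rough sketch of using the extension (column names differ between pg_stat_kcache versions, so treat this query as an assumption to check against the extension's documentation):

    CREATE EXTENSION pg_stat_kcache;  -- requires pg_stat_statements to be installed first
    SELECT query, user_time, system_time, reads, writes
    FROM pg_stat_kcache_detail
    LIMIT 5;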

Excellent. But for explain, we don't have it. A simple idea: we could still get this information if we have access to /proc on the host, knowing the process ID.

Even if we have parallel workers, we can extract their process IDs, and we could take very interesting information about the real disk IO that happened.

And also CPU; you mentioned CPU-intensive work.

It could be present in explain somehow, like an additional extension or something, or maybe some hacked Postgres for non-production environments?

I think it's quite an interesting area to explore, to improve the observability of single-query analysis.

Right.

And it can be helpful to see that, like, it's very CPU-intensive work; IO was low, that's why the query was slow.

You just see how much CPU was spent, or something like this.

And of course, real disk IO is also interesting to see.

And another thing: I lack the ability to understand the second-best and third-best plans in explain, you see. Because the planner makes the decision based on virtual cost, something abstract, right?

Which, of course, can be tuned according to parameters like random_page_cost, seq_page_cost, and so on.

You can tune costs, but the planner never thinks about which actual CPU is used. It doesn't think about it.

And how many gigabytes of memory we have? It doesn't think about it.

Michael: Well, it has, so it does factor those into the costs, right?

Like, it does have cpu_tuple_cost and things like that.

But I think I know what you mean. It doesn't factor in the actual server.

Nikolay: The planner doesn't know what hardware we have.

Michael: Yeah.

Yeah, sure.

Nikolay: And we can even fool the planner, and we do it for query optimization in non-production environments.

So when, for example, on production we have almost a terabyte of RAM, on non-production we don't want to pay for it; we have, I dunno, 32 gigabytes of RAM, and the buffer pool is much smaller than on production.

It's not a problem.

The planner doesn't even look at the shared_buffers setting value at all. It only looks at effective_cache_size.

So you can say we have a terabyte of memory, and we set it to three fourths of that, the usual approach.

So you trick the planner, and it behaves exactly like on production, choosing the same plan.
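
A minimal sketch of the trick, assuming production has about a terabyte of RAM and the usual three-quarters rule of thumb:

    -- on the small clone, per session:
    SET effective_cache_size = '768GB';
    -- the planner now costs plans as if a production-sized cache existed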

But what I'm saying is, sometimes we see: okay, the planner thinks this is the best option to execute the query, based on cost, which depends on statistics and our settings.

But we see a lot of IO happening; the buffers option shows it.

Why? What else did the planner have on the plate?

We don't see it.

Unfortunately. I've heard Mongo has this capability, to explain and provide the second option as well.

So what do we usually do? We apply a trick.

We say: okay, we had a bitmap scan here; set enable_bitmapscan to off and try to check what the other option was.

So we put a penalty on bitmap scans, and we see the second possible option.

Probably the second; we are not sure, but this is a trick.
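
The trick looks like this (the query itself is illustrative):

    SET enable_bitmapscan = off;  -- heavily penalize bitmap scans for this session
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE status = 'active';
    RESET enable_bitmapscan;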

Michael: Well, that's what I wanted to ask.

I think it's a really difficult problem. I've not looked into it myself, but what do we mean by second best?

Do we mean a second-best plan that's sufficiently different?

What if it did a bitmap scan

Nikolay: with slightly worse cost.

Michael: Yeah.

So I understand what you mean, but I think we might end up with not quite what we wanted.

So if we actually want to see what this would do with an index scan of the same table, maybe disabling bitmap scans is the perfect way to go.

But what if the second-best plan Postgres could have chosen would've been a bitmap scan of a different index?

Would we want to see that?

Nikolay: Right.

Good point.

Michael: Like, what if it just changed the join order a little bit, or the index scan direction? There are so many minor variations.

Nikolay: Yes.

I agree, I agree.

But my intent is to understand what the other options were, several of them maybe, to understand their costs and their buffers, their IO as well, in comparison.

Sometimes costs can be only slightly different while, in some edge case, buffers are drastically fewer.

So we start thinking: maybe we need to adjust our settings for the planner. For example, random_page_cost, default 4, should go down towards seq_page_cost, which is 1. And this comes exactly from understanding the second option.
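
For example, a session-level sketch of that adjustment (the exact value is a judgment call; on fast disks people often go much lower than the default):

    SET random_page_cost = 1.1;  -- default is 4; seq_page_cost defaults to 1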

Okay, maybe you're right.

Maybe there are many options in between, so this "second" is maybe tenth already, I don't know.

But this is what I lack in explain, two things: real physical operations, like CPU and IO, disk, real disk, and also the second, third, other options.

What were their costs?

Right.

Michael: Yeah.

Nikolay: So it would be good. You mentioned somewhere that it's already too complex, too complicated, to read explain.

It requires a lot of experience, but it still lacks many interesting things, in my opinion.

Michael: I think this is such an interesting trade-off though, right?

And this takes us onto the last topic I did wanna make sure we discussed.

I think there's a trade-off between being useful for people that are new to Postgres versus being useful for super experienced people.

And I'm not sure exactly where we should be drawing that line, or where the people in charge should be drawing that line.

And we've talked for quite a while about defaults and what should be on by default.

So explain itself is fairly simple, but explain analyze, once we have timings, the extra, let's say, penalty of also asking for buffers, maybe even verbose and other parameters, but definitely buffers, based on our whole conversation today: should that be on by default?

So when anybody asks for explain analyze, they also get those buffer statistics, even if they don't know about them and don't ask for them.

You can turn them off, maybe, if you're an advanced user and you know you don't want them for some reason, but you have the ability to shape what beginners ask for.

So if they're reading some guide from three years ago that says use explain analyze, then they'll get buffers on by default.

Nikolay: That's very important.

Yeah.

Do you have some stats about your users? How many of them have buffers included?

Michael: Yeah.

Last time I checked, it was 95% that do include buffers. But 95%,

Nikolay: that's

Michael: well, 95% also include verbose; not the exact same 95%, but almost the same.

Nikolay: Because, I guess, your documentation suggests it, right?

Michael: It's not just that we suggest it; our tool does not support the text format of explain.

So automatically, if somebody tries to get explain analyze output and paste it into pgMustard, it will tell them we need, at minimum, format JSON.

But by that point, we are also saying: please ask for explain (analyze, buffers, verbose).

Nikolay: Right.

So you propose to use it; that's why they use it.

If you check the publicly available plans on explain.depesz.com, for example, or dalibo.com, I'm sure more than 50% will be without buffers, unfortunately, because this is the default behavior.

And, interestingly enough, there is a consensus, based on what I saw in the hackers mailing list; I didn't see big objections.

It looks like people think that it should be on by default, but still, the patch needs review.

Actually, right now there are several iterations already.

Let's include the link.

Also, if someone can do a review, it would be a great help for the community, because I think we should have buffers enabled by default.

I hope we convinced people, right? That buffers should be used.

We said that it's sometimes important to convert the numbers to bytes, to have a feeling of how big it is.

We discussed some lacking features of explain that are probably tricky to develop, but would still be good to have.

And also we discussed that it's possible to run explain analyze, with buffers of course, on a different environment than production.

And in this case, I also would like to mention our tool, Database Lab Engine, and an additional chatbot that can be run in Slack. It's called the Joe bot, and it also converts buffers to bytes.

And it allows you to have a very good workflow for SQL optimization where you don't touch production.

Michael: Really cool.

If, let's say, the timing is in milliseconds, it even estimates how much faster that would be on production too.

Right?

Nikolay: Yeah.

Well, this is tricky.

This option is experimental; it's very tricky to develop.

We still don't consider it the final version.

But it's not really needed; people are fine with just seeing buffers and a different timing, because of a different file system, different state of caches, and so on. But buffers, if we have this shift in mind to focus on buffers when performing optimization, this is a perfect place to play with queries.

And Database Lab Engine also provides the ability to create an index without disturbing production and your colleagues.

This is very important.

And to see if it is helpful to reduce the amount of work, so buffer numbers, and therefore to reduce timing at the end of the day.

So I recommend checking this out on postgres.ai, and of course pgMustard for understanding plans.

Maybe that's it, right?

So we discussed everything we wanted, right?

Michael: Yeah.

So the final thing is: if you, or anybody you know, any of your friends, are able to review Postgres patches, please, please, please do check out this one.

At the moment, the way Postgres development works, there's a new version of Postgres due out at the back end of this year, Postgres 15, and that's already frozen.

Yeah, so that's already past feature freeze.

So even if we do manage to get this committed soon, at best it will come out in just over a year's time, even if it makes it into Postgres 16. So these things can take years.

So don't expect fast results, but if you can, that'll be wonderful.

Thank you.

Nikolay: The current commitfest closes on July 31st.

So, like, in five days. So,

Michael: So get your skates on,

Nikolay: Right.

But there will be one more commitfest, definitely.

A few more, actually, for Postgres. A few, of course.

Okay, good.

It was interesting, I

Michael: I hope so.

Nikolay: I hope everyone likes our podcast.

We need your help, please.

Like, subscribe, and please, please share the links in your social networks and in groups where you discuss Postgres, database engineering, and so on.

Thank you everyone for listening.
