Intro to query optimization
Michael: Hello, and welcome to Postgres FM,
a weekly show about all things Postgres.
I am Michael, founder of pgMustard.
This is Nikolay, founder of Postgres.AI.
Hey Nikolay, what are we talking about today?
Nikolay: Hello, hello.
Let's talk about query optimization.
I think this is maybe the most interesting topic in the
area of Postgres in general, though I've found not everyone
is interested in it. But let's talk about it anyway.
Michael: Yeah, it's also, I guess, a topic quite close to both of our hearts.
We've spent many years looking at this.
So hopefully we have some interesting things to add.
Nikolay: But let's set some boundaries.
Let's distinguish analysis of the workload as a whole,
where we try to find the best candidates for optimization,
versus single-query optimization.
Let's talk about the second subtopic.
Michael: I've heard you differentiate between macro
performance analysis and micro performance analysis in the past.
So macro being system level, and, I guess, we're not
talking about that today; we're going to look more at micro.
Once you've worked out there is a problematic query,
and you know which one it is, how do you go from that to:
what can I do about it?
Nikolay: Right.
How to understand whether it's good or bad in terms
of execution, and how to read the query plan.
The command EXPLAIN is the main tool here, right?
Let's talk about this.
Michael: Is there anything else before we dive into EXPLAIN?
Are there any other parts of it that we might need to cover as well?
Nikolay: Well, the things that EXPLAIN doesn't cover.
For example, it won't tell you CPU utilization, user CPU, system CPU.
It won't tell you physical disk I/O, how many operations actually
hit the disk, because Postgres doesn't see that directly; Postgres
works only with the file system cache.
So the read operations Postgres reports are not necessarily from
disk; they may be from the page cache.
So it would be good to get this additional information inside
EXPLAIN somehow, like pg_stat_kcache extends pg_stat_statements.
It would be good to have something that extends EXPLAIN that way,
but I'm not aware of such a thing existing.
Michael: Nor me, I'm not aware either, but I think we get clues
about them in EXPLAIN, don't we? We see some timing that can't
be explained otherwise.
Nikolay: You mean timing, or...?
Michael: Sorry, no. I mean, let's say, you mentioned CPU performance.
If we have an operation that's not doing much I/O
but it is taking a long, long time,
that's a clue that there might be something else going on.
Nikolay: Right, right.
So in general, even if we run just one query, theoretically it
might make sense to use things like perf and flame graphs for
that one query execution.
That would augment the information you can extract from
EXPLAIN (ANALYZE, BUFFERS).
But let's just discuss the basics first.
Michael: Sounds good. So possibly EXPLAIN versus EXPLAIN ANALYZE
is a good place to start.
With EXPLAIN we get the query plan, and that normally returns
really quickly, roughly in the planning time of the query.
With EXPLAIN ANALYZE it actually runs the query and returns
performance data, so we can see how much time was spent, and,
if we ask for BUFFERS, how much I/O was done, and all sorts of
other things as well.
It also lets us compare things: EXPLAIN tells us how many rows
were expected to be returned at each stage, and with EXPLAIN
ANALYZE we get the actual number of rows returned at each stage,
and comparing the two can be really useful.
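As a minimal sketch of that difference (the table name here is hypothetical, not from the episode):

```sql
-- Plan only: nothing is executed; we see estimated costs and row counts.
EXPLAIN
SELECT * FROM orders WHERE customer_id = 42;

-- Executes the query: adds actual timings and row counts per node,
-- and, with BUFFERS, the number of data blocks touched.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
```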
Nikolay: Right, right, absolutely.
And we also discussed some time ago that EXPLAIN shows only one plan.
Sometimes you want to see multiple plans,
like the second-best candidate and so on.
Otherwise you have to do some tricks to try to guess which plans
the planner had on its plate when choosing.
Michael: That's a really good point, and maybe even a good place
to start in terms of using EXPLAIN.
The first thing you probably notice when you're looking at EXPLAIN
for the first time is a lot of cost numbers.
These are in an arbitrary unit that gives you an idea of how
expensive each operation is: as the cost numbers go up, Postgres
thinks it'll take longer to execute, but they're not an estimate
of milliseconds; they're not in any real unit.
You can then use some parameters, like enable_seqscan, to affect
those costs.
So you could maybe try to get the second-best plan by making the
current plan very expensive.
If your query is currently doing a sequential scan and you want to
see if it could use an index and it's just choosing not to, you can
disable seq scans.
Well, it doesn't actually disable seq scans; it just makes them
incredibly expensive.
So you might still see one in the plan.
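A sketch of the trick Michael describes, with the same hypothetical table; enable_seqscan is a real planner parameter:

```sql
SET enable_seqscan = off;  -- adds a huge cost penalty, doesn't forbid the scan
EXPLAIN
SELECT * FROM orders WHERE customer_id = 42;
-- If a Seq Scan still appears, its estimated cost will be inflated
-- by a very large constant (around 1e10 in many Postgres versions),
-- a hint that no other path exists for this query.
RESET enable_seqscan;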
Nikolay: This trick is very helpful when two plans are very close
to each other in terms of overall cost.
If you disable seq scans, or for example index scans, and see a
different plan whose cost is very close, it gives you the idea
that we are on the edge, right?
We either crossed it recently because the data is changing, or we
are about to cross it.
And that's kind of dangerous.
But first of all, I would like to mention that a plan is a tree,
right? Cycles are not possible, which is important, because you
could imagine them being possible. But no. I mean...
Michael: I think in the simplest cases, yes, but with CTEs you can
get some strange things; you can refer to the same CTE more than
once, for example.
But yeah, in the simplest cases.
Nikolay: But when it's already executed, it's still a tree, right?
Oh, interesting, by the way. Yes.
Anyway, roughly speaking it's a tree, and when it's printed,
it's a tree. But loops are possible inside, of course.
And an important thing to understand is that the regular metrics,
like costs, rows, and timing, are shown per iteration of a loop,
while buffers are a sum of everything, and that can be confusing
sometimes, right?
Michael: Yeah, well, let's go back to the tree.
I think that's really important.
A few things that aren't obvious when you're first looking at plans:
logically, it's happening almost backwards, so the first node that
you see in the tree is the last one to be executed.
Nikolay: It grows from the leaves to the root.
Michael: Exactly.
So kind of outside-in, kind of right to left, a little bit.
And there are also some really important statistics, especially
when you use EXPLAIN ANALYZE, right at the bottom: summary metrics
like execution time and planning time.
Nikolay: Oh, they're printed separately from the tree, right?
Michael: And trigger time, and just-in-time compilation time.
Each of these things can be dominant sometimes; they can be where
all of the time is going.
If you have a really long tree, the general recommendation is to
start kind of right to left, but I'd also say check that bottom
section, because you might not have to look through the entire
tree if you find out that your query is spending 90% of its time there.
Nikolay: An interesting additional point here is that planning time
can sometimes be very, very big.
I had a case when inspection of a path for a merge join led to a
huge scan during planning, and disabling merge joins helped, but it
was not at all obvious in the beginning, because if you don't
notice that the planning time is suddenly huge, in seconds, it
comes as a surprise.
So checking planning time is also useful.
Michael: Yeah, same for time spent in triggers, and time spent in
just-in-time compilation as well, for some analytical queries.
You might consider something a relatively simple query that should
be quite fast, but if the costs are overestimated a lot, sometimes
just-in-time compilation kicks in and spends several seconds
thinking it's saving you time, while the overall query is only a
few milliseconds. That's suboptimal.
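A sketch of how this shows up and what can be done about it; jit and jit_above_cost are standard settings, and the numbers below are illustrative:

```sql
-- EXPLAIN ANALYZE prints a JIT section in its summary when JIT fired,
-- along the lines of:
--   JIT:
--     Functions: 42
--     Timing: Generation ..., Inlining ..., Optimization ..., Total: 950 ms
-- If that total dominates an otherwise fast query, options include:
SET jit = off;                -- disable JIT for this session
SET jit_above_cost = 500000;  -- or raise the threshold (default 100000)
```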
Nikolay: Right.
And also, since Postgres 13, if you include the BUFFERS option,
it'll show you the buffers used for planning as well, right?
Michael: Yes.
And actually, one other thing on planning time before we move on:
auto_explain doesn't include planning time, so you can't spot
planning-time issues with auto_explain.
Other than, I think we discussed this once, years ago: it's
probably the only use case for logging the query duration alongside
the auto_explain output; you could diff the two, and the difference
is probably planning time.
But yeah, it's a limitation. As you say, it's not super common that
planning time is the dominant issue, but when it is, it can easily
be 90-plus percent.
Nikolay: Right.
It's going to be unexpected; this is the danger of it, right?
Michael: Yeah.
So: right to left, kind of inside out, and start at the bottom,
checking the main statistics.
You mentioned briefly that some of the statistics are per loop.
Loops are quite a confusing topic when you're first getting used
to plans, and especially if there are, say, 10,000 loops, you could
easily miss it: one of the statistics looks quite small, but once
you multiply it by 10,000, it can be really big.
Examples are of course the costs and the timing, but also things
like rows removed by filter.
Sometimes people look out for those numbers; if it says one, that's
a per-loop average, and 10,000 of those is suddenly not insignificant.
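A hypothetical worked example of that per-loop averaging:

```sql
EXPLAIN ANALYZE
SELECT * FROM orders o JOIN items i ON i.order_id = o.id;
-- Hypothetical inner side of a nested loop in the output:
--   ->  Index Scan using items_order_id_idx on items i
--         (actual time=0.010..0.012 rows=1 loops=10000)
--         Rows Removed by Filter: 1
-- rows and "Rows Removed by Filter" are per-loop averages, so the real
-- totals are about 1 x 10000 = 10,000 rows returned and another 10,000
-- filtered out. Buffers, by contrast, are already summed over all loops.
```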
Nikolay: Right.
And those averages can be rough; there's a rounding error that can
be present there.
Michael: Well, especially around zero and one, because some of the
numbers are integers.
So, for anybody that's wondering: exactly, it's rounded to the
nearest integer, and if it's less than 0.5 it gets rounded to zero,
which doesn't necessarily mean that there are zero rows, which can
be problematic.
Nikolay: Also, you know, we're discussing things that many talks
and many articles already cover well.
I think this podcast is not going to replace them; we are trying to
highlight problems which can be tricky in the beginning, right?
And one more thing: I think we should mention tools as well, right?
First of all, explain.depesz.com is the oldest one and still very,
very popular, maybe still the most popular.
Then explain.dalibo.com, which is PEV2, greatly improved, very good.
And of course pgMustard, which is commercial, and which you develop.
Worth checking all of them; they are good, with pros and cons for
each.
But what I think is important to understand is the meta level.
When we talk about single-query analysis, we should say: okay, this
is our plan; better if it was collected with execution, and best if
it was collected with BUFFERS, and we will discuss the overhead a
little bit later, right?
But when you ask someone to help with optimization, of course the
first question will be to show the query itself.
Sometimes: show me two plans; if there was some change and we want
to understand why this change influenced the plan, we have two plans.
But we need to have the query; it's a must-have.
But additionally, I think it's very important to get the Postgres
settings, like enable_seqscan, random_page_cost, seq_page_cost, all
the cost settings, and work_mem as well, even though work_mem is
not among the planner settings.
Postgres settings are organized in groups: if you select * from
pg_settings you can see the group names, and there is a whole group
for planner settings.
work_mem is not there, but work_mem influences planner decisions:
if you change work_mem, the plan can be different.
So right now my rule is: let's take the planner settings plus
work_mem, and maybe something else; I'm interested to see if
something else should be there as well.
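A sketch of that lookup; in pg_settings the planner settings sit under the "Query Tuning" category:

```sql
-- Planner-related settings, plus work_mem, which is categorized
-- elsewhere but still influences plan choice.
SELECT name, setting, category
FROM pg_settings
WHERE category LIKE 'Query Tuning%'
   OR name = 'work_mem'
ORDER BY category, name;
```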
So when we ask someone to help, we need to present the plan, the
query, and the planner settings.
And I also believe the schema is important to present: what tables
and indexes we had, and probably the statistics as well.
This is the whole picture for analyzing what we had.
Imagine if all these tools collected these things automatically;
how great it would be to jump in, wanting to help someone with
optimization, and see the whole picture: the query, the plan, the
settings, the schema (not the whole schema, only the part the query
deals with), and also the statistics.
Maybe statistics are kind of tricky, but they are the basis for
planner decisions; they define which plan will be chosen.
Oh, and of course the Postgres version as well, because on a
different version the plan can be different; different nodes can be
present depending on the version.
So what do you think about this big whole picture?
I understand that none of the tools collect all this information,
and none of them require users to present it, but it would be great
to store it in history, for example.
Michael: Yeah, there are some really interesting tools that do
some of that, but not all of it.
There's a tool that started as a MySQL tool, but they've added
Postgres support, called EverSQL, that asks for things like the
query and the schema, and then does some static analysis, which is
super interesting.
There are tools like pganalyze: it's a monitoring tool, and for at
least a couple of years now it's been doing more ad hoc query
analysis via EXPLAIN visualizations, and it has access to a lot of
that information already, by the nature of being a monitoring tool.
But I think there's also a natural trade-off between the amount of
information you gather and the overhead of doing so.
Once you know a certain query is a problem, you're willing to pay a
higher overhead, because you only need to gather that information
once; whereas if you want to do this all the time, for every query,
there's a higher ongoing overhead.
So I think there's some tension there.
Nikolay: We could do some hashing, for example, to track that the
statistics didn't change and the planner settings didn't change;
we could just check that automatically with a hash.
Michael: Yeah, I think there are some super cool things here, but
there are also a couple of different environments, right?
So in production, one key thing, and statistics is a great example:
production's not the same as staging.
Like, we can make all of the data the same, we can...
Nikolay: It depends.
Michael: Sorry, yeah: production might not be the same as staging,
and therefore a statistics problem may not show up.
And if you're doing some development work, it's really tricky to
reproduce all of those things.
And I'd also push back that for some of the most obvious problems,
maybe even the schema doesn't matter; maybe even the query doesn't
matter.
If you see that somebody's doing a sequential scan of 10 million
rows, probably in parallel, with a filter that returns just one of
those rows, then even without the query we can tell that an index...
Nikolay: ...is worth suggesting, right?
Michael: Exactly.
So there's a bunch of cases where you can give some pretty sensible
advice without any of that extra information, but definitely, as it
gets more complex, more of those things become useful.
Even in the index case, you still need context from the customer in
terms of trade-offs: if this is a super high-write table, you might
be less inclined to add an index than if it's not, or the customer
may have certain requirements.
There's always an "it depends", right?
Whenever you're giving this kind of advice, you have to be careful
that for different customers, different things would be sensible.
I guess, except for...
Nikolay: Well, I understand that collecting all these pieces I just
described requires a lot of effort; that's why it should be
automated.
But imagine if everything was collected automatically inside some
tool we used in an organization, and stored historically.
You understand: okay, we optimized this query, we tried to optimize
that query, and we know the whole context.
When we ask for help, we have all the pieces, and if an expert
comes to help us, all the pieces are present.
That's it; it would be much easier to help, right?
Michael: Yeah.
And I think there are some interesting projects in this area.
Have you come across the one by Percona?
They're doing a kind of replacement for...
Nikolay: Well, it's about both micro and macro analysis.
pg_stat_monitor, right?
Michael: Yes, but I think they do things like plan...
Nikolay: A replacement for pg_stat_statements, or no?
Michael: Yes, but with additions.
I think they let you track query plans per query, so you could, for
example, see if a plan has changed; that kind of thing.
With relatively low overhead, I think you start to get a bit more
of that information, so when an expert comes along, hopefully this
is something already installed and already...
Nikolay: This is an old, big discussion.
There is an ongoing discussion on the pgsql-hackers mailing list:
someone proposed adding a plan ID to pg_stat_statements, triggering
the discussion one more time.
And this would be great, of course.
We know that each query registered in pg_stat_statements might have
multiple plans, depending on the parameters used.
So when you optimize a query, there's a very important thing I
missed in my list: the parameters you used, right?
Because different parameters may lead the planner to choose a
different plan, right?
So it's very, very important: we cannot just say we optimized a
query; we must say we optimized a query for some parameters, and we
need to think about the variations we should expect in production
and check them too, not just a single case, right?
And this is tricky, by the way.
Michael: Yeah.
If anybody's wondering, the simplest example of this: let's say you
have a column where 99% of the data is a single value, and the
other 1% is millions of unique values.
If you search for one of the unique values, you might get an index
scan.
If you search for the value that makes up 99% of the table, then
you should get a sequential scan; that would be the optimal plan.
So that's the classic case.
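A minimal, hypothetical reproduction of that kind of skew:

```sql
CREATE TABLE events (id bigserial PRIMARY KEY, status text);
INSERT INTO events (status)
SELECT CASE WHEN g % 100 = 0 THEN 'rare-' || g ELSE 'ok' END
FROM generate_series(1, 1000000) AS g;   -- 99% 'ok', 1% unique values
CREATE INDEX ON events (status);
ANALYZE events;

EXPLAIN SELECT * FROM events WHERE status = 'rare-4200';  -- likely an index scan
EXPLAIN SELECT * FROM events WHERE status = 'ok';         -- likely a seq scan
```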
Nikolay: Right, this is the classic example, and even setting
enable_seqscan to off might not help you avoid a seq scan in some
cases.
And a couple of times in my optimization work I've had the case
where somebody provided me a query without parameters, and I
checked the table and thought: okay, what's the worst case?
And I started to optimize for the worst case and made bad
decisions, because that worst case was never used in production.
So it's a very, very interesting topic.
I definitely want to find some approach for when we don't know the
parameters but can guess somehow.
For example, some additional tool analyzing statistics could say:
take this set of parameters, and this, and this; this is the most
typical case, this is some kind of worst case; and try to optimize
for those.
That would be great.
Michael: So, yes, agreed.
And I think there are some tools for this; for example, auto_explain.
A very old tool, but one thing it does really well is spit out the
exact query that caused the slow plan.
And that's one way of getting at least the extreme versions of the
parameters.
Nikolay: Or just the slow log, if you have
log_min_duration_statement at 500 or 100 milliseconds, which is
good, or at least a second or two, which is not as good but also
fine.
Then you have examples of parameters which trigger slow execution,
but you don't see the good parameter sets, which are not registered
in this slow log.
When I say slow log, I mean a part of the single Postgres log,
because Postgres has just one log; that's a different discussion,
maybe.
With log_min_duration_statement enabled you see the examples with
durations, but auto_explain is even better, because you get the plan.
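A sketch of both approaches; the parameters are real, the values illustrative, and auto_explain has to be loaded (for example via shared_preload_libraries) before its settings take effect:

```sql
-- Log the text of statements slower than 500 ms:
ALTER SYSTEM SET log_min_duration_statement = '500ms';
-- With auto_explain loaded, also log their plans:
ALTER SYSTEM SET auto_explain.log_min_duration = '500ms';
ALTER SYSTEM SET auto_explain.log_analyze = on;
ALTER SYSTEM SET auto_explain.log_buffers = on;
SELECT pg_reload_conf();
```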
Michael: Yeah.
Is this a good time to talk about overhead?
Nikolay: Yeah, let's talk about overhead.
So, you run EXPLAIN ANALYZE versus running the query without any
observability tooling, and EXPLAIN ANALYZE is observability
tooling: it adds a lot of detail about query execution and planner
decisions, right?
But you can just run the query and see some timing, and then run
EXPLAIN ANALYZE and see a different timing, right?
You had a blog post on this topic, about auto_explain; auto_explain
is also a related question here.
Michael: Yeah, Andres had a really good blog post on the observer
effect.
I think there are three cases. EXPLAIN ANALYZE can be quite
accurate, taking roughly the same amount of time as running the
query through your client.
It can be too high, where it's adding overhead.
And it can be too low: for the case where lots of data is being
transmitted, it doesn't transmit that data, so it can even be
faster than a query that actually returns the data.
So there are three cases, two of which are bad.
They're quite rare in my experience, and especially on modern
hardware they don't show up that often.
And also they're not that problematic when you're actually looking
for the problem: if there's a relatively universal overhead added
and you're still looking for the slowest part, it's still probably
in the same place.
But yeah, let's explain why it happens: in order to measure timing,
there is some overhead...
Nikolay: I would split it into three parts; sorry for interrupting.
First, when we say EXPLAIN, we just see the planner's decision; we
don't execute the query, so there's nothing to discuss in terms of
overhead here, right?
Well, there's the cost of the planning work, but it's not overhead;
we need it anyway.
But when we add ANALYZE, there is overhead: we really execute the
query, but we need to measure things, to see how many rows were
collected in each node, everything like that, and timing as well.
Then we can also say BUFFERS; this is additional overhead.
And we can also set track_io_timing, which is a Postgres setting;
you can set it dynamically, I guess, right?
Then you see I/O timing additionally printed by EXPLAIN ANALYZE.
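A sketch of those three layers, reusing the hypothetical events table from above:

```sql
EXPLAIN SELECT count(*) FROM events;                     -- plan only
EXPLAIN ANALYZE SELECT count(*) FROM events;             -- + per-node rows, timing
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM events;  -- + block counts

-- track_io_timing is a server setting (superusers can change it per
-- session); it adds "I/O Timings" lines to the BUFFERS output:
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM events;
```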
Right.
And this like three pieces of overhead.
Right?
What do you think about each of them?
Michael: Yes.
Well, as you mentioned, I did do a blog post on this, because I saw
quite a few places where people would really warn against
auto_explain with timing on.
There's a really strong warning against it in the Postgres docs,
and there are multiple monitoring tools that tell you, if you have
auto_explain on, to make sure you have timing off.
Nikolay: At the same time, I've observed very heavily loaded
systems, serving like a hundred thousand transactions per second,
where it's enabled.
Michael: Same. I was coming across customers that had it on...
Nikolay: Doesn't it depend on the hardware? On the CPU?
Michael: Yes.
So there's a tool that ships with Postgres that lets you check;
I've forgotten what it's called. Is it pg_test_timing?
Nikolay: Something like that.
It's in the binary directory, in the standard packages.
Michael: Yeah, I'll find it and link to it.
But basically, my understanding is: if you have pretty fast system
clock lookups, the overhead can be hard to measure, but if you have
a slow system clock, then it can be extremely easy to measure.
And I think Andres's blog post deliberately picks a system with a
slow system clock in order to show that it can add hundreds of
percent of overhead.
But when I was looking at it on an OLTP workload, a very, very
simple pgbench OLTP workload, I was basically unable to measure it.
I think I got a 2% overhead from adding all of the parameters, and
it was basically...
Nikolay: You know, this is my old idea, and a couple of times we
implemented it in a company: when we deal with many hosts, it would
be good, when we set up a host, to have a set of micro-benchmarks
checking things like disk and CPU limits.
We can use pgbench for that, or fio for disk; in a previous life we
used Bonnie++, I remember.
And a micro-benchmark checking timing overhead would also be great
there.
In the cloud, sometimes we might have two virtual machines of the
same class, same type, and yet they behave differently.
So it would be good to check it every time we set up a machine.
Michael: Yep.
Well, this is your age-old thing; this is what you're dedicating
your professional life to: experiment.
If you're intrigued as to what it would be on your system, it might
be different for you for some reason: maybe hardware reasons, maybe
workload reasons; there might be some specific way that it's bad
for you.
It's very difficult to provide general advice, and the advice you
read online will generally be cautious, especially in the Postgres
documentation; they're going to be cautious by default, because
they don't want to give advice that one person will find
horrifically awful, even if the majority would find it fine.
Nikolay: Right.
Back to these three classes of overhead: I guess the first class
comes from the ANALYZE part of EXPLAIN ANALYZE, the second is
track_io_timing, and the third is BUFFERS; let's postpone that one
a little.
The first two are both related to the overhead of how working with
the clock is organized.
But the difference is that inside EXPLAIN ANALYZE they are both at
work, while track_io_timing is also at work if you have
pg_stat_statements, because it's registered there as well; regular
query execution, without running EXPLAIN, is also affected.
So if working with the clock is slow, track_io_timing can add some
penalty when you use pg_stat_statements, right?
Michael: Same with auto_explain; auto_explain runs on every...
well, there is a parameter where you...
Nikolay: There is sampling, and it has existed for a long time; I
didn't realize it exists.
For the slow log there have been sampling capabilities since
Postgres 13, I guess, but for auto_explain, as you told me, it has
existed for many years already.
It's great. So you can auto_explain only, like, 1% of everything.
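A sketch of both sampling knobs; auto_explain.sample_rate is long-standing, and slow-log sampling via log_min_duration_sample and log_statement_sample_rate arrived around Postgres 13:

```sql
-- Plan-log only ~1% of qualifying statements:
ALTER SYSTEM SET auto_explain.sample_rate = 0.01;

-- Slow-log sampling: statements above log_min_duration_sample are
-- sampled; those above log_min_duration_statement are always logged.
ALTER SYSTEM SET log_min_duration_sample = '100ms';
ALTER SYSTEM SET log_statement_sample_rate = 0.01;
SELECT pg_reload_conf();
```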
Michael: If you want to be cautious at first, yeah, you can sample
a really small percentage.
But naturally, for OLTP it's probably fine, because you're probably
running the same queries over and over, and you don't need loads of
examples of them to optimize.
Nikolay: There is also a possible observer effect from just
logging: if writing to the logs is slow, for example because the
disk where you log is not very fast, that can also be a problem.
But that's a slightly different topic.
The third part: BUFFERS. What do you think about the overhead from
BUFFERS?
Michael: Well, you were the first to tell me that it's worth
looking into, but I wasn't able to measure it.
Nikolay: There should definitely be a difference: if you just run
EXPLAIN ANALYZE many times, everything is cached, and then with
EXPLAIN (ANALYZE, BUFFERS) some difference should be there, I'm
sure.
But it's still a hundred percent worth having BUFFERS inside
EXPLAIN ANALYZE, as we discussed separately for a whole half hour,
right?
Michael: Yes, a previous episode.
Actually, I don't know if this is to do with you, but I think
explain.depesz.com deserves some praise, because I noticed today or
yesterday that it now asks for EXPLAIN (ANALYZE, BUFFERS).
So that's quite a...
Nikolay: Well, in my opinion, when we analyze a query, we should
not do it on production, as usual; we should do it in a special
environment, which should be a clone of production.
And the best way to have a clone is using Database Lab Engine,
which we develop.
There, of course, you are in a slightly different situation: maybe
the hardware is different, maybe you have less memory, a different
state of the caches, and maybe a different file system, as in the
case of Database Lab Engine, because it uses ZFS by default.
And there you should focus on buffers.
The final goal is timing, but inside the process we focus on
buffers and on reducing I/O numbers; and not just buffers, maybe
rows too, which is also an important logical metric to keep in mind.
If you reduce I/O, you reduce timing.
This is the secret of optimization everyone should understand, in
my opinion.
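A sketch of what focusing on buffers looks like in practice, again with the hypothetical table from earlier; the numbers are illustrative:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM events WHERE status = 'ok';
-- Hypothetical excerpt from the output:
--   Buffers: shared hit=1520 read=8040
-- hit  = 8 KiB blocks found in Postgres shared buffers
-- read = blocks requested from the OS (page cache or disk)
-- An optimization that cuts these counts (a better index, a
-- narrower scan) is what ultimately cuts the timing.
```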
Michael: Yes, couldn't agree more.
And if anybody disagrees, we can refer you to an earlier episode;
I'm guessing episode two. It was quite an early one.
Nikolay: Right, right. Good.
So what else should we discuss in terms of starting to work with
EXPLAIN?
Michael: Yeah, well, we might be close to time, you know; I wonder
if we should save it for another time.
Is there anything else that we have to mention?
Nikolay: Well, we didn't discuss particular nodes, like the various
types of joins and so on.
Of course it requires time to learn, and of course there is
documentation.
And I see different people present talks named "Explaining EXPLAIN";
it's like the default name for such talks, so more than one person
has presented one.
I think all of those talks are worth checking out.
Michael: Yeah, my favorite is the one by Josh Berkus; I'll make
sure to link it up.
He did a really good one. It's old, but I listened to it again last
year and it's still perfectly relevant.
There have been some new parameters since, and sure, it doesn't
cover everything.
Nikolay: New nodes; parallelization was added since then.
Michael: Yeah, but equally...
Nikolay: And JIT compilation, which should probably be disabled for OLTP.
Michael: Yeah.
I've also done two talks: one to try and cover the beginner stuff
that he doesn't cover at the beginning, and one for the more
advanced stuff that he doesn't get to, because in an hour you can
only do so much.
So I've done two talks trying to cover either side of that, without
redoing the "explaining EXPLAIN" part itself.
So yeah, maybe I'll link those up as well.
Oh, and we also have a glossary.
Nikolay: Oh, the glossary is great, yes.
It's a good thing, right? Good.
So I hope this was helpful for some folks.
Let's wrap it up.
Michael: Yeah, I hope so too, fingers crossed.
And also, feel free to reach out; I think this is the kind of topic
that we love and find very interesting, and I'm definitely very
happy to help people with this kind of thing.
Nikolay: Since we're asking for topics, we can ask right here once
again: we are open.
We have a list of dozens of ideas, but we react to feedback; if
someone asks for a topic, we will prioritize it in our list,
definitely, and we will try to discuss it soon.
And, as usual, thank you to everyone who is providing feedback;
it's very, very important.
We receive it quite often, at least once every couple of days, and
it's a great feeling, I would say.
And also, please, as usual, subscribe everywhere you can, and
please share in your social networks and working groups.
Michael: Absolutely.
Thank you so much. Thanks, everyone, and thanks, Nikolay.
Nikolay: Thank you.
Bye bye.