Query macro analysis intro

Nikolay and Michael discuss query macro analysis — at the whole system level as opposed to an individual query.

Michael: Hello, and welcome to Postgres FM, a weekly show about all things PostgreSQL.

I'm Michael, founder of pgMustard.

And this is my co-host, Nikolay, founder of Postgres.AI.

Hey Nikolay, what are we going to be talking about today?

Nikolay: Hi, Michael.

A few weeks ago we discussed what I call query micro analysis.

When we have one query and we want to understand how it works, how it will be executed in production.

Let's talk today about the big picture: how we can analyze the whole workload and find its worst parts.

Michael: Yeah.

So if that was micro performance analysis, this is, I've heard you call it, macro performance analysis.

So maybe there's nothing wrong, but we want to be able to see at a glance.

Are there any big issues?

Is there a spike on some metrics, something like that? Maybe it's monitoring related, maybe it's a review of something in the past, that kind of thing.

Nikolay: Right.

There are many goals we can have in this analysis. For example, there are complaints like "the database is slow", right?

We hear it quite often.

"The database is slow", from application developers and others, I don't know, like SREs, and so on.

And we want to identify the parts of the workload which behave the worst. This is one thing.

Or we just want to optimize resource consumption to prepare for future growth, and we don't want to spend more on hardware.

So again, we want to find the worst-behaving parts of the workload and optimize them. There are many different cases.

Michael: Yeah. So it could be an application developer telling us, it could be customers reporting that there are issues, and maybe we want to find out which part is slow. Or maybe we want to show them that there isn't an issue, you know, that we can't see anything on the database side.

Nikolay: All right.

One of the cases I like especially is when we perform workload analysis as a whole during various kinds of preparations, various kinds of testing before we deploy. This is also interesting: there, we can try to understand whether all parts behave well, or whether there are not-so-well-behaving parts we should optimize before we deploy. But let's start with, maybe, the historical aspects of it. Like 15 years ago or so, we didn't have pg_stat_statements, which right now is the de facto standard extension for macro analysis.

And there is consensus in the community that pg_stat_statements should be enabled in all Postgres installations. By default this extension is not installed, but everyone should consider installing it, because it has very, very small overhead.

But it's the place where you probably want to start your macro analysis.

Understanding how the workload behaves. Before pg_stat_statements, we had only logs, and the idea was: okay, we log all slow queries, for example queries whose execution takes longer than one second.

And I remember when I was briefly, just for one week, a user of MySQL and switched to Postgres, I got confirmation that my choice was right when I found that in Postgres I could go below one second. I mean log_min_duration_statement: we can log queries which take more than, for example, a hundred milliseconds. In MySQL it was not possible; one second was the lowest value. They have fixed it since, and you can now go below one second, but at that time, around 2005-2007, you couldn't.
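For illustration, a minimal sketch of the setting being described; the 100 ms threshold is just the example value from the discussion:

```sql
-- Log every statement that runs longer than 100 ms (value in milliseconds).
ALTER SYSTEM SET log_min_duration_statement = '100ms';
-- Reload the configuration; no restart is needed for this parameter.
SELECT pg_reload_conf();
```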

So we log all slow queries, and then we can parse the logs. I remember we had a tool called pgFouine, written in PHP. Then pgBadger was created, written in Perl. It's much better in terms of performance, it has many more features, it's more robust, and it's still being developed. By the way, yesterday they released version 12, I think I saw it in the news, right? So pgBadger 12: more and more features. Right now it can work with auto_explain and many other things.

So the idea was: let's parse those queries and remove parameters from them, and aggregate them. This process is called query normalization in pg_stat_statements source-code terminology. Then we show the worst query groups according to some metric.

But the problem with this approach is that it shows only the tip of the iceberg. We might have many more queries which are not visible in logs but which produce the most load. Sometimes like 90% of the load is produced by queries that fall under the log_min_duration_statement threshold.

So some DBAs used an approach like: let's enable logging of all queries, with durations, for a few minutes, and collect the logs. And some, including myself, found a way to store logs in memory. It was quite risky, so I used it only a couple of times. We create a drive in memory and put logs there, but it's only, I don't know, half a gigabyte, because memory is expensive, and we do very aggressive rotation so we don't let Postgres logs saturate this small disk. Since it's memory, we can afford logging a lot, and we can set log_min_duration_statement to zero, meaning: let's log all queries.
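A rough sketch of the Postgres side of this trick, assuming a small in-memory (tmpfs) mount at a hypothetical path /pg_logs_ram has already been created at the OS level:

```sql
-- Point the logging collector at the in-memory directory
-- (logging_collector itself must already be on; it needs a restart).
ALTER SYSTEM SET log_directory = '/pg_logs_ram';
-- Rotate aggressively so the tiny in-memory disk never fills up;
-- old files still need external cleanup.
ALTER SYSTEM SET log_rotation_size = '64MB';
-- Log every statement, with its duration.
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();
```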

Michael: That's the big downside: logging does have an overhead if you're logging excessively. That seems to be a good argument for... well, that's why we don't have to anymore.

Right.

Nikolay: Right.

Yes.

Logging overhead may be very big, and it's not noticeable until the point when the drive you write logs to becomes saturated in terms of write disk I/O. Then everything goes down, and it's not fun. This is one of the worst observer effects I've had in big production cases.

And it was very painful.

So I don't recommend going down to zero with log_min_duration_statement blindly and without proper preparation. But anyway, right now we consider this an outdated approach, because we have pg_stat_statements.

Right.

And we don't need to log all queries anymore. We still need to log some queries, because pg_stat_statements doesn't have examples; it reports aggregated, normalized queries, I call them query groups. pgBadger provides a few examples, and this is very important, because in each query group you might have different cases: one and the same abstract query without parameters might behave very differently in terms of execution plan, depending on the parameters.

So when you have already identified the queries you want to improve, you always need examples. And this is tricky: getting good examples is a big, not yet solved task.

I think it's a good task for machine learning and so on.

I'm very interested in this area. If some of our listeners are also interested in it, please, let's talk, because I think it's a very interesting area to automate, to allow us to improve more queries in less time.

Right.

Michael: But at the moment we can patch it together via a mixture of pg_stat_statements and logging.

Nikolay: Yeah, we can combine logging and pg_stat_statements.

Also, query ID helps in very recent Postgres versions. Before query ID, we used libpg_query from Lukas Fittl, right? It gives an additional ID, and the good thing about this library is that if you apply it to an already normalized query, it will produce the same fingerprint as for the non-normalized raw query.

But anyway, if you use logs, you can find examples. Another source of query examples is pg_stat_activity. But in most cases where there was a lack of DBA involvement, I saw that track_activity_query_size, or whatever it's called, I always forget... there is a parameter which sets the maximum length of the pg_stat_activity.query column, and by default it's only 1024. Right? And it's not enough: ORMs, or humans, can create much bigger queries these days. So we want to put like 10k there or something.

The overhead is very small, so I always recommend increasing it. But increasing it requires a restart. This is the main problem.
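A sketch of that change; 10k is the ballpark value from the discussion:

```sql
-- Raise the maximum length of pg_stat_activity.query
-- from the default 1024 bytes to 10 kB.
ALTER SYSTEM SET track_activity_query_size = 10240;
-- This parameter only takes effect after a server restart.
```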

Michael: Yeah.

So it's one of those ones you have to do at the beginning, normally, isn't it?

Nikolay: Yeah.

Yeah, yeah, yeah.

So this parameter should be increased, and if we increase it, we have more opportunity to get samples of queries from pg_stat_activity. Then we can join, like, match them with pg_stat_statements data, and then logs are probably not that needed, unless you use auto_explain. Because auto_explain is also very useful, and again, your article about its overhead and how to measure it, and the idea that sometimes it's not that big, is good. And maybe you want this too, because in this case you see plans exactly as they were during execution, because plan flips happen as well.
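A minimal sketch of that matching on Postgres 14 and later, where pg_stat_activity exposes query_id (loading pg_stat_statements makes sure query IDs are computed):

```sql
-- Grab raw query examples from pg_stat_activity for each normalized
-- entry in pg_stat_statements, matched by query ID.
SELECT s.queryid,
       s.calls,
       left(a.query, 200) AS example_query
FROM pg_stat_statements AS s
JOIN pg_stat_activity   AS a ON a.query_id = s.queryid
WHERE a.state = 'active';
```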

Michael: Yeah, I was going to ask, actually. We've talked about the overhead of these things a little bit; you mentioned the overhead of pg_stat_statements is low. I've seen some people mention that they've tried to benchmark it and struggled...

Nikolay: Struggled in which way?

Michael: Struggled to measure the overhead on a normal workload. Have you seen any benchmarks of it?

Nikolay: I haven't. I trust the accumulated experience, but we can do it; this is not the most difficult benchmark in the Postgres ecosystem.

Of course, it's good if you can benchmark with your own workload. The question is how to reproduce a workload in a reliable way, so each run is the same, or very close. Then we can use various metrics which the database provides even without pg_stat_statements.

For example, from pg_stat_database, if we enable track_io_timing. By the way, usually we should enable it; of course, as we also discussed, there are cases, on some hardware, where it can be expensive, so it's worth checking. But if we enable it, we can run the workload and check these timing numbers.
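For example, a before-and-after check on pg_stat_database; blk_read_time and blk_write_time are only populated when track_io_timing is on:

```sql
-- Enable I/O timing (check its overhead on your hardware first).
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- Cumulative transaction counts and I/O timing for this database;
-- sample it before and after a benchmark run and diff the numbers.
SELECT xact_commit, xact_rollback,
       blk_read_time, blk_write_time
FROM pg_stat_database
WHERE datname = current_database();
```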

We can also check throughput, and especially latency, from the application side. If we use the application called pgbench, which, I don't know why, Ubuntu ships in the server package rather than the client package, it reports all latencies. Or sysbench, anything.

Michael: Yeah, what I meant when I said "struggled" was that the variance of each run is larger than any overhead. You can't say the overhead is zero, right? Because it's definitely doing some work, but it's not necessarily measurable.

Nikolay: It should be a few percent.

Michael: I think it might even be lower than that for some OLTP workloads.

Nikolay: Maybe. Well, benchmarking is an area we probably need to discuss separately one day, but my general advice is this: by default, many benchmark tools, and pgbench is no exception here,

don't do regular load testing. They do an edge case of load testing called stress testing: let's load it to a hundred percent, by default.

That's why I usually suggest finding a spot in terms of TPS (you can control TPS in pgbench): loading your system, in terms of, for example, CPU or disk, to something like 25%, between 25 and 50, emulating normal days of your production. Because if it's above that, you should already be thinking about upgrading, or about serious optimization. In this case you should check TPS and latencies and compare runs; the variance should be the same.

If the variance differs between runs, I don't know why; something is wrong. I would like to see the concrete case.

Michael: Yeah.

Well, back onto pg_stat_statements. Now that we have it, there are a few things to mention. It's not on by default, unless you're on a cloud provider; they often do have it on by default. So people do need to load it if they don't already. I come across quite a lot of customers who don't; if they're self-managing, they're not even aware it's a thing. So there probably are a bunch of people out there who don't have it on, even though the experienced people...

Nikolay: have it.

Michael: Yeah, I think so.

But then again, there are a few default settings there that I think can be improved. There's, like... is it 5,000 statements by default, or 5,000 unique...

Nikolay: Yeah, you're talking about the pg_stat_statements.max parameter, which as I remember is 5,000 by default. Usually it's enough, but in some cases it's not, when your queries are very volatile in terms of structure. Not in terms of parameters, because pg_stat_statements removes parameters during query normalization, but in terms of structure: if you just swap two columns in your query, it's already considered two different cases, two different entries in pg_stat_statements. And pg_stat_statements.max is 5,000 by default, so you can increase it, but as I remember, only up to 10,000 maximum.
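A sketch of raising it; check the documentation of your Postgres version for the exact allowed range:

```sql
-- Track up to 10,000 normalized query entries instead of the default 5,000.
ALTER SYSTEM SET pg_stat_statements.max = 10000;
-- Takes effect only after a server restart.
```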

I don't remember exactly. But why do we care? Because pg_stat_statements has metrics which just grow over time, cumulative metrics like total time: total_exec_time and total_plan_time, since they were split in Postgres 13, as I remember. In this case we need two snapshots to analyze. Those two snapshots give us two numbers, and the difference between the two numbers is what happened during our period of observation. Snapshotting is absolutely needed.
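A minimal sketch of the two-snapshot approach; the snapshot table name is made up for illustration:

```sql
-- Snapshot 1: remember the counters and when we took them.
CREATE TABLE pgss_snap AS
SELECT now() AS captured_at, userid, dbid, queryid, calls, total_exec_time
FROM pg_stat_statements;

-- ...after the observation period, diff a fresh read against the snapshot.
SELECT cur.queryid,
       cur.calls - snap.calls                     AS calls_delta,
       cur.total_exec_time - snap.total_exec_time AS exec_ms_delta
FROM pg_stat_statements AS cur
JOIN pgss_snap AS snap USING (userid, dbid, queryid)
ORDER BY exec_ms_delta DESC
LIMIT 10;
```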

Otherwise, there's the manual approach: reset statistics often and then use only the final snapshot, thinking that everything started from zero. It has downsides. For example, filling the view with new entries has overhead as well: if you check the source code, there is a lock when an entry is added, and it might take dozens or sometimes like a hundred milliseconds. It can be noticeable for the whole workload. So it's better not to reset very often, in my experience. And anyway, it's not practical to reset them; you lose information as well.

Then the question is how often we have evictions of rare queries, and what the drift of our query set is. And you can see it: you can compare two snapshots, for example one hour apart, and see which queries are new and which disappeared from the list. This difference indicates the eviction speed. And I've noticed that, for example, some Java applications use SET application_name with something very unique, including maybe some process ID or something. pg_stat_statements cannot normalize so-called utility commands, and SET is a utility command, so all these SET queries are considered separate entries. In this case, you might want to turn off pg_stat_statements.track_utility. Right.

And in this case you don't lose much, because I haven't had cases where we needed to analyze the speed of SET commands. Well, maybe it might happen, but in my experience, not yet. So it's better to just turn it off; it's on by default.
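The corresponding change, as a sketch:

```sql
-- Stop counting utility commands (SET, BEGIN, VACUUM, ...) as separate entries.
ALTER SYSTEM SET pg_stat_statements.track_utility = off;
SELECT pg_reload_conf();
```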

Michael: That makes loads of sense.

Talking about the snapshot comparisons, I think that must be how the cloud providers all do it, and a lot of the dashboards you'll see in RDS, Google Cloud SQL, and a bunch of other ones as well. Or even open source; talking of tools that have had releases recently, PgHero came out with version three, I think.

Nikolay: Wow.

Michael: Quite recent.

Nikolay: Good, interesting.

It's a very lightweight and good tool, like for small teams. I enjoy it.

Michael: Yeah, and based on pg_stat_statements again. So it's the basis for lots of these. But back to the cloud providers: the way they get historic data is by taking these snapshots, rolling them, and comparing them to each other, not by letting the counters roll forever; they don't want to reset once a year, for example. If that makes sense.

Nikolay: Right. So why do we care about this eviction speed and the churn of the entry list? Because we want to analyze the whole workload, right? And in this case, if we disable track_utility, of course we won't see the utility part, but if we consider it small, we can do it, and we will have more real queries in our pg_stat_statements. And we will have 5,000 entries by default; it's quite a big number.
but the second place.

Cutoff can happen is monitoring system or cloud?

I don't know.

Like they usually store like 500 on thousand.

They don't take everything because it's expensive
to store everything in monitoring system.

You need to, if you, if you want snapshots samples of statements, snapshots
every for example, Imagine how, how many records you need to store.

If you, every, every minute you store 5,000 entries from producer.

So usually they also cut off and making
decisions what, what to remove, what to leave.

we usually think about which metrics are the most important

Michael: Yeah. So actually, I think you already mentioned it briefly, but total time is my favorite. I know we've...

Nikolay: Mine as well, but I saw people who prefer not total time; actually, in my team there are such people. They prefer, for example, average time: mean_exec_time, or mean_exec_time plus mean_plan_time, because we probably want to combine them, since the total includes both planning and execution time... I mean, okay, tautology, sorry.

Michael: We call it total time, by summing the two, but there's no such column...

Nikolay: Total is already used in a different context. Well...

Michael: But then, the problem... this goes back to our conversation about logs versus pg_stat_statements, though. I guess it's the reason for you as well, but I'd be interested. The reason I prefer total time is you could easily have your biggest performance...

Nikolay: Yeah.

Sorry, you understand why total is used twice here, right? Because total is the sum of all timings. I mean, there is total_exec_time and total...

Michael: And total_plan_time. Yep.

Nikolay: "Total total"... no, it's not good. Like "total whole"... how to name it?

Michael: Yeah, well, they don't have one, but we can sum those at the query level, right? We can sum the two of them if that's what we care about. But, sorry, what I guess I was trying to say was this.

Our biggest performance problem could easily be a relatively fast query which has a really low average, sorry, mean time. It could be 20 milliseconds on average, but it's getting run so many times, and maybe it's still not optimal; maybe it could be running in sub one millisecond. That could be our biggest performance opportunity, and by looking at total time, total execution time plus total planning time, we could see it rise to the top of our query list. It could be line number one. Whereas if we're looking at average time, there are so many queries that only run a couple of times but take a few seconds each; they'd be way above it.
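A sketch of that ordering against pg_stat_statements on Postgres 13 and later; note that total_plan_time stays at zero unless pg_stat_statements.track_planning is enabled:

```sql
-- Top 10 query groups by total time spent, planning plus execution.
SELECT queryid,
       calls,
       total_exec_time + total_plan_time AS total_ms,
       mean_exec_time
FROM pg_stat_statements
ORDER BY total_ms DESC
LIMIT 10;
```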

Nikolay: This is an interesting topic: which metric is more important. By the way, the lack of words here indicates that the topic is quite complex, right? I mean, English doesn't have enough words to cover it... I'm joking, of course. But it's interesting: for the combination of both exec and plan time, some word should exist, and we probably already use some word somewhere. So: total versus average, or mean, time.
In, in my, like I came to conclusion like this.

If our primary goal is resource optimization, if we want to prepare for
future growth, we want to pay less for cloud uh, resources or hardware.

Total time is our front because this is well,
of course it includes some wait time as well.

For example, we have a lot of contention and some
queries are, some sessions are blocked by other sessions.

It's also contributes to total time, but resource consumption
probably it's not that much because waiting is quite cheap usually.

Right.

But if we forget about this a little bit: total plan and exec time, if we combine them, is our time spent processing the workload. If we know that we analyzed everything, this is how much work Postgres did. We can even take the total time and divide it by the observation duration, and we will understand how much time we spend every second. I call it a metric measured in seconds per second; my favorite metric. If, for example, it's one second per second, it means that roughly one...

Michael: Cool.

Nikolay: ...one core could process this. It's very rough, we forget about a lot of context here, of course, but it gives a feeling of our workload. If we need 10 seconds per second to process it, it's quite a serious workload already; we probably need some beefy server here.
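A sketch of that calculation, using pg_stat_statements_info.stats_reset (available since Postgres 14) as the start of the observation window:

```sql
-- Average "seconds of query processing per second of wall-clock time"
-- since the last pg_stat_statements reset.
SELECT sum(s.total_exec_time + s.total_plan_time) / 1000.0
       / extract(epoch FROM now() - i.stats_reset) AS seconds_per_second
FROM pg_stat_statements AS s
CROSS JOIN pg_stat_statements_info AS i
GROUP BY i.stats_reset;
```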

As for average time, these numbers are most useful when we have a goal like: let's optimize for the best user experience.

Michael: Yeah, so I guess that's our 50th percentile, isn't it? No, it's not, it's not. Even then, I don't even like mean for those, because I'd much rather look at a P95 or something, and look at it client-side, not database-side.

Nikolay: Well, it depends. But sometimes, as you said, we have very rarely executed but quite important queries. For example, it can be some kind of analytics, not analytics exactly, but some aggregation and so on. The average is terrible, and we do want to improve it, because we know that the users who look at those numbers, who use these queries, are important: for example, our internal team analyzing something, or, I don't know, finance people or something. Some kind of more analytical workload, not necessarily analytical, but I hope you understand. Right.

Michael: I understand.

So, to give you an example: when I was at a payments company, we had daily batch payments. We had a deadline to submit a file, like a 10:00 PM UK time deadline, and the job literally had to finish before then. As this job got longer and longer, it got closer to that deadline, and it forced some work. So maybe that wouldn't have shown up if we'd looked at total time. But I also think those kinds of issues often crop up without you having to do this macro analysis work, because somebody's telling you about them.

Nikolay: Yeah.

So if we decide to order by mean time, sometimes at the top we see something and say: oh, it's fine that it executes for a minute, because it's some crunch job and nobody cares; if it's just a SELECT, for example, with no locks involved, and it lasts one minute, it's not a big deal. So we probably want to exclude some queries from a top ordered by mean time, every time.

So I usually really interested in each entry from the top, that's
why I also prefer total time, but I see people use meantime,

successfully caring about mostly users, not about service.

So roughly total time is for infrastructure teams for optimize for service.

While meantime was probably interesting to
application development teams and for humans.

Right?

So very, very roughly uh, and.

There is also calls, an important metric, right? Why do we discuss which metric to choose? Because when you build good monitoring, you need to choose several metrics and build a dashboard consisting of multiple top-N charts: top N by total time, top N by mean time, top N by calls, for example.

Why calls?

For the database itself, it's probably not that important: the most frequent queries might not produce the biggest load, of course. If there are a lot of very, very fast queries, I would check context switches, for example, and think about how busy the CPUs are in this area.

But I've noticed that sometimes we want to reduce the frequency of some of the most frequent queries, because the overhead on the application side is terrible. This is an unusual approach, because people optimizing a workload or database often think only about the database. But I had cases when, by taking the top three ordered by calls and just reducing their frequency, we could throw out 50% of our application nodes. Can you imagine the benefit of it?

Michael: The cost saving, right. And just to give an example from the application side: I guess that would be one way of spotting potential N+1 issues, where if it's the same query getting executed over and over again, that's the kind of thing it could point to.
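A sketch for spotting those most frequent query groups:

```sql
-- Top 10 query groups by call count; high calls with tiny mean time
-- often hints at N+1 patterns on the application side.
SELECT queryid,
       calls,
       mean_exec_time,
       left(query, 100) AS query_sample
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;
```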

Nikolay: Right, right.

So it's interesting.

I think I don't understand all the aspects here, and I think we lack good documentation on how to use pg_stat_statements; so many angles, so many derived metrics as well. But I would like to finalize the discussion of metrics.

I also wanted to mention the I/O metrics: shared buffer hits and shared blocks. They're called shared blocks read and hit, right? Let me check: shared_blks_hit, shared_blks_read, also dirtied and written. But if we consider only...

Michael: Temp.

Nikolay: Well, local and temp in addition, but if we discuss everything, we will need too much time. Right.

So many aspects, but I wanted to mention only a few, like hit and read: shared_blks_hit and shared_blks_read. The interesting thing here is that sometimes a monitoring system decides that read alone is enough, because it's the slowest operation. Well, as we already discussed a few times, Postgres doesn't see the actual disk...

Michael: Like the operating system cache versus the...

Nikolay: Yeah. So this "read" is from the page cache: maybe it's a disk read, but maybe not; we don't know. Usually a monitoring system says: okay, ordering by shared_blks_read is the most interesting. But I had cases, at least twice, when I really needed pg_stat_statements' shared_blks_hit, to find the query group whose work with the buffer pool, Postgres shared buffers, was the most intensive.

If you don't have it in monitoring, you need to start sampling pg_stat_statements yourself, writing some scripts on the fly; it's not fun at all. I think most experienced DBAs have something in their toolset, and, for example, pgCenter can sample it. You can use it as an ad hoc tool if you don't have this in monitoring and you have a problem right now. But I also suspect most tools lack a top-N-by-hits view.
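An ad hoc version of that angle, as a sketch:

```sql
-- Query groups hammering the buffer pool hardest: "read" comes from the
-- OS page cache or disk, "hit" from Postgres shared buffers.
SELECT queryid,
       shared_blks_hit,
       shared_blks_read,
       shared_blks_dirtied,
       shared_blks_written
FROM pg_stat_statements
ORDER BY shared_blks_hit DESC
LIMIT 10;
```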

So, those angles. And there are the new WAL metrics: like, let's find the queries which generate the most WAL data. What are they called? Let me check, I have the list here. It's called...

Michael: Full page...

Nikolay: Yeah: wal_records, wal_fpi, full-page images, and wal_bytes.

Michael: Yeah.

Nikolay: Three metrics added in Postgres 13. I didn't see them in any monitoring yet. Oh, maybe our pgwatch2 Postgres.ai edition has them already.

Right.

Michael: Yeah, I remember this was added to EXPLAIN in version 13. Was it added to pg_stat_statements at the same...

Nikolay: At the same time, yes.

Mm-hmm, and it's so good.

Like, order by them: we definitely want to reduce WAL generation, because reducing it will have a very positive effect both on our backup subsystem and on replication, both logical and physical. So we do want to produce fewer WAL records, or fewer WAL bytes, or full-page writes as well.
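A sketch using the Postgres 13+ WAL columns of pg_stat_statements:

```sql
-- Which query groups generate the most WAL?
SELECT queryid,
       wal_records,
       wal_fpi,                           -- full-page images
       pg_size_pretty(wal_bytes) AS wal_size
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 10;
```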

Yeah. I never used it yet; I hope I will soon. I mean, I know that it's there, but I haven't used it myself yet.

Michael: Yeah.

Talking about this has given me an idea as well. We talked a while back about buffers, and one of the things we do on a per-query basis is look at the total sum of all of the buffers. And I know that doesn't make tons of sense, summing dirtied buffers plus temp buffers plus local plus shared.

Nikolay: We can call it a...

Michael: But yeah, exactly, some kind of measure of work done. Summing all of those, then ordering by that and looking at the top 10 queries by total I/O.

Nikolay: It's a smart idea. Each I/O operation has some cost, and if we find the queries which involve the most I/O operations, it's of course a good angle for our analysis. Yeah.
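A sketch of that combined ordering, summing the buffer counters as a rough measure of work done:

```sql
-- Rough per-group I/O: all shared, local and temp buffer traffic combined.
SELECT queryid,
       shared_blks_hit + shared_blks_read + shared_blks_dirtied
         + shared_blks_written
         + local_blks_hit + local_blks_read + local_blks_dirtied
         + local_blks_written
         + temp_blks_read + temp_blks_written AS total_buffers
FROM pg_stat_statements
ORDER BY total_buffers DESC
LIMIT 10;
```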

What else?

We mentioned that we deal with the page cache when we look at I/O, but sometimes we do want to order by real physical disk I/O. Right? And there is such an opportunity.

For those who manage Postgres themselves, it's called pg_stat_kcache, an additional extension to pg_stat_statements; an extension to an extension, I would say. It provides very good things like disk reads and writes, real physical disk reads and writes, and also CPU. Sometimes you want to find the queries that generate the most load on your CPU, and it even distinguishes system and user CPU time. It's good. Yeah. And also context switches. So it's a very useful extension if you care about resource consumption, you want to prepare for growth, and you want to do some capacity planning, and before that you want to optimize.
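A sketch against the pg_stat_kcache_detail view, assuming the extension is installed; exact column names vary between versions (2.2+ prefixes them with plan_/exec_), so treat these as illustrative:

```sql
-- Top queries by physical disk reads and CPU usage, per pg_stat_kcache.
-- Column names follow pg_stat_kcache 2.2+; check \d pg_stat_kcache_detail
-- on your version.
SELECT query,
       exec_reads,        -- bytes actually read from disk (not page cache)
       exec_writes,       -- bytes actually written to disk
       exec_user_time,    -- CPU time spent in user space
       exec_system_time   -- CPU time spent in kernel space
FROM pg_stat_kcache_detail
ORDER BY exec_reads DESC
LIMIT 10;
```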

Michael: And I think you've said before, but did
you say it's not available on most managed services?

Nikolay: No, I only know the Yandex managed service; they install it by default. I'm not aware of any others, so yeah.

Also, there is another way to analyze workload that we didn't cover today at all: wait event analysis. This is what RDS, for example, provides as a starting point for workload analysis. I think it came from the Oracle world: active session history analysis. So yes, someday let's discuss it and compare it with the traditional analysis we discussed today.

Michael: Yes.

And for anybody that is aware of ASH and wants it for Postgres, there is one; I've heard it called ASH, yeah, PASH as well. Yeah.

Nikolay: But it's only a Java client application which does sampling from pg_stat_activity. It can only be used as an ad hoc tool if you're in...

Michael: Same,

Nikolay: Yeah.

Michael: Same as ASH, right? Isn't it?

Nikolay: Well, you can install, for example, the pg_wait_sampling extension, and immediately, in our pgwatch2 Postgres.ai edition, you will have graphs similar to Performance Insights in RDS. I think Google also implemented it; I'm not a hundred percent sure, but I think they did. Also, pgCenter, which I mentioned earlier, is a good ad hoc tool; it has wait event sampling as well. But let's discuss it some other day.
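A sketch of what the extension exposes, assuming pg_wait_sampling is installed (it ships a pg_wait_sampling_profile view of accumulated wait-event samples):

```sql
-- Which wait events dominate across the whole instance?
SELECT event_type, event, sum(count) AS samples
FROM pg_wait_sampling_profile
GROUP BY event_type, event
ORDER BY samples DESC
LIMIT 10;
```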

Michael: We also have an episode on monitoring that people can go check out if they want a deeper discussion on...

Nikolay: And micro analysis.

It's good to distinguish the two. Sometimes you already have a query; you just need to go inside it and understand what's happening: why is it so slow? But sometimes you have no idea where to start; the database is slow, everything is bad. In this case, macro query analysis is definitely worth conducting.

So yeah. Okay, sorry, about 40 minutes again. It's longer than we wanted, as usual.

Let's thank all our listeners who provide feedback; this week it was excellent as well. A lot of...

Michael: Yeah.

We had a lot of great suggestions.

It's been really good.

Thank you.

Nikolay: Yeah.

This, this drives us.

Thank you so much.

Michael: Yeah, really appreciate it.

Well, thanks again, Nikolay.

I hope you have a good week, and see you next week.

Nikolay: As final words: like, share, share, share. Thank you. Bye.

Michael: Take care. Bye.
