pg_ash
Nikolay: Hello, hello, this is PostgresFM.
My name is Nik, PostgresAI, and as usual with me is Michael,
pgMustard.
Hello, Michael.
Michael: Hello, Nik.
Nikolay: How are you?
What's new?
Michael: I'm good.
Not as much new as you, I think.
Wrote a blog post recently, been working on some little improvements here and there, doing some research in the background as to what people want and need.
But yeah, you've got some much more exciting news, I think.
Nikolay: It's small, but maybe good.
I don't know; I'm experimenting a lot lately with AIs as well, constantly learning.
It's crazy times.
People are reconsidering workflows, obviously, how we work.
So it's easier right now to ship some stuff.
All the ideas you've had for a long time are quite easy to implement now, if you just shift from coding to engineering: organize the design and the verification of all the details, like benchmarks, and focus more on architecture and so on.
It's a great time to have helpers, and honestly Opus 4.6 is quite good, even better than 4.5.
So yeah.
And the others are also good; I use other models for reviews as well, including Gemini and GPT, of course.
Anyway, I wanted to discuss this small toy tool I just created over the weekend.
Today I'm releasing version 1.2 and finally publishing it.
I have good feedback from a couple of folks, not only from my team but also external ones.
It's already working on our production, and it gives me the same feeling I had when I proposed transaction_timeout for Andrey to implement during our Postgres live hacking sessions on YouTube: the feeling of, why doesn't this thing exist yet?
Michael: Yeah, do you want to say what it is?
Nikolay: Yeah, pgsentinel, active session history.
I believe that with all the new Postgres setups and the growth of existing ones, millions of database clusters, wait event analysis has huge potential and is heavily underappreciated.
Performance Insights in RDS is great, and other tools like pg_wait_sampling are great too.
But there is huge potential for more tools, and for more people to understand what this is.
I think a lot of non-experts don't understand what it is, and I hope they are going to explore it soon, because it's a great way to troubleshoot database issues.
You know, like in Brendan Gregg's books and presentations: he says that if somebody asks him to pick a single Linux performance analysis tool for the console, it would be iostat.
Because iostat shows disk metrics, right?
Latency, throughput, queue size.
And it also shows a little bit about CPU.
If you can have only 1 tool to quickly understand as much as possible, this is it.
Of course, he also mentions sar; sar is great, and so on.
But it's the same thing here.
If you choose just 1 approach or tool that gives you as much understanding as possible during troubleshooting, like why the database is slow, it's going to be wait event analysis, or active session history analysis.
We had a separate episode about it, maybe a couple of them.
And I have a strong feeling it's hugely underappreciated.
When I talk to people who are not Postgres experts: if they are on RDS, this is the default tool for them, but outside of RDS they don't know what it is.
And even with RDS Performance Insights, or Cloud SQL Query Insights, or whatever it's called, there's still a huge lack of use of it all the time.
I think it should be central in all troubleshooting.
This should be the starting point.
Michael: So.
Okay.
So why?
Okay, go on.
Nikolay: Yes.
So I've been a big fan of pg_wait_sampling for many years.
pg_wait_sampling is an extension which gives you very precise sampling, every 10 milliseconds, internally.
But you need an additional tool, like monitoring, to export that data for analysis.
It's hard to analyze it inside, because you still need some persistent storage: the history it keeps doesn't cover a lot, it's just some tail, and you need to export it.
But it gives you 10-millisecond sampling.
It's very precise.
The problem with pg_wait_sampling, which we had in our monitoring stack for a long time, is availability: a lot of customers don't have pg_wait_sampling.
It's present only in Cloud SQL.
I wish it were installed everywhere, but no.
Unfortunately, that's the reality, so slowly we moved our attention away from it and shifted to what we call lazy sampling, where our monitoring tool collects the wait event data every 15 seconds.
And it kind of works; it's enough if the database is huge.
15-second sampling is already good enough there.
We cannot do 1-second sampling externally, because the frequency is too high and the observer effect is too much: we are pulling this data into an external tool, connecting to the instance on Supabase or RDS or anywhere.
So there is a feeling that this approach (not lazy, rather infrequent) is not okay.
We need something that samples internally, something we can afford.
This was the number 1 reason why I thought we need something else.
Reason number 2: there are more and more cases where databases are small and it's hard to justify full-fledged monitoring for them, because you need to pay basically at least 100 bucks for that monitoring setup.
If your database is tiny and you pay 50 bucks for it, some pet project, there's no way you will decide to spend a few hundred bucks per month on monitoring for it.
But you still need this, because if something happened a couple of hours ago, you need to troubleshoot and investigate.
And Postgres internally has everything, right?
It just doesn't have memory.
For wait events, there are 2 columns, wait_event_type and wait_event, in pg_stat_activity.
But nobody is sampling them by default.
And there was also a third reason: the understanding that current monitoring tools are going to change, to provide information in a better form for LLMs during troubleshooting.
That's how I came to the idea: okay, let's implement something.
And this something must have a low footprint in terms of storage and observer effect.
It must work everywhere, and it must be LLM-friendly in terms of how the information is provided.
Not a lot of JSON, not something where you need to write PromQL all the time, but some compact form you can feed to your AI agent and ask it to explain what's happening here and where to dig further.
And this is how pg_ash was created.
Because of the requirement that it must work everywhere, I couldn't create it as an extension.
So I call it an anti-extension.
Remember we discussed pg_index_pilot, which is automation for recreating indexes; it was created in the same fashion.
Also, as a little sneak peek: we talked with some Supabase folks, and they seem to understand this as well.
And there is a new project called pg-flight-recorder, which I hope we will discuss separately; it's actually very similar to my pg_ash, but it covers more.
So we are thinking about how to maybe unite our efforts; it's an ongoing discussion.
But yeah, it samples pg_stat_statements, pg_stat_io, SLRU stats, and so on, all that stuff.
Michael: Oh, interesting.
Nikolay: So there is an older project called pg_profile.
Andrey Zubkov created it, and it has existed for quite some time.
It's great; it's like AWR from Oracle, but for Postgres, right?
It records all the statistics inside the database too, but it's an extension.
And since it's an extension, it's not available on most managed Postgres platforms.
That's why I like the idea of an anti-extension: it's just pure SQL and PL/pgSQL, that's it.
Michael: So installation is just a case of running an SQL script
that you've got, you know, creates a schema, some functions.
Nikolay: Yeah, exactly.
Yeah, yeah.
So, I think this...
Michael: Which you could bundle as an extension, right?
But the problem then becomes...
Nikolay: Let me be transparent.
Why not?
Like, yesterday we discussed it, and there is an idea that yes, it can be both an extension and a non-extension, just a bunch of...
There is an idea, okay, this could be...
I honestly have some thought that maybe Postgres could have a concept of packages or something.
There is trusted language... trusted language extensions.
Michael: Yes, exactly.
Nikolay: Yes, trusted language extensions, exactly.
So David Ventimiglia, I hope I pronounced the last name right (I apologize, David, if I got it wrong), from Supabase, who is developing pg-flight-recorder, said...
Michael: Makes sense.
Nikolay: Yes.
So he said this is the direction he's thinking about: packaging it as a TLE, a trusted language extension.
Because TLEs are available on Supabase and on RDS, I think, right?
It's a straightforward concept.
But in some cases we need to think about all platforms, and we want maximum flexibility.
So I'm thinking, in future versions, to package it both ways: pure SQL installation or TLE installation, why not?
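For reference, packaging a plain SQL script as a trusted language extension with pg_tle could look roughly like this; pg_ash doesn't ship this today, so treat it as a sketch of the idea rather than the actual packaging:

```sql
-- Sketch only: wrapping the pg_ash install script in a TLE via pg_tle.
-- Requires the pg_tle extension; the body below is just a placeholder.
SELECT pgtle.install_extension(
  'pg_ash',                          -- extension name
  '0.1',                             -- version
  'Active session history sampler',  -- description
  $_pg_ash_$
    -- the contents of the pg_ash SQL installation script would go here
  $_pg_ash_$
);

CREATE EXTENSION pg_ash;
```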
Michael: Yeah, absolutely.
It's just a bit sad that extensions
were...
The idea of having an extension
framework was to make distribution
easier and now we've gone full
circle and it makes it harder
because everyone's on managed service
providers that pick which
extensions are available.
So yeah, it's just a bit of a shame.
Nikolay: This sentiment you just shared, it's so deep in me.
I remember 2005, 2006, 2007, when it was all "Postgres is extensible", and then the extensions concept was created by Dimitri Fontaine, right?
Michael: Oh, really?
Nikolay: Yeah, yeah.
He created extensions as his thing; I hope I'm not mistaken.
I can hallucinate often; it happens because I'm overloaded with information.
And then everyone was excited: Postgres is so extensible.
But we should blame managed Postgres platforms, starting with Heroku and RDS and all the others, for extensions becoming an unextensible concept in practice.
Because it's in their hands to decide which extensions to add.
They have their own reasons, obviously: every time they add some extension, they need to support it and take some responsibility for it being secure and so on.
I understand that.
But extensions became unextendable.
Every time I thought, oh, I want to implement this in Postgres, I said: no way am I going to make this an extension.
Of course, the concept of trusted language extensions is great, but it's still not adopted everywhere.
It's limited, right?
Michael: Yeah. So this is not an extension, although it could be, but it's not to start with, and it installs a couple of tables and some functions. What does it do exactly?
Nikolay: Yeah, let's talk about what exactly it does.
The idea is: let's just collect wait event samples from pg_stat_activity, with query IDs, and keep some history.
It's quite simple.
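What a single sample boils down to is roughly this kind of query against pg_stat_activity; this is illustrative only, since the real sampler in pg_ash encodes the result differently, as discussed later:

```sql
-- One wait event sample: active backends grouped by database, wait event, and query ID.
-- query_id is populated on Postgres 14+ when compute_query_id is on
-- (or when pg_stat_statements is loaded).
SELECT now()                             AS sampled_at,
       datid,
       coalesce(wait_event_type, 'CPU')  AS wait_event_type,  -- NULL means running on CPU
       coalesce(wait_event, 'CPU')       AS wait_event,
       query_id,
       count(*)                          AS sessions
FROM pg_stat_activity
WHERE state = 'active'
  AND backend_type = 'client backend'
  AND pid <> pg_backend_pid()
GROUP BY datid, wait_event_type, wait_event, query_id;
```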
I didn't want...
So the problem is that we need some help to invoke the function which will do the sampling, right?
And I don't want any Lambda functions or anything external.
I want everything in Postgres, but I cannot build a background worker or anything like that, because I don't want to create an extension.
pg_cron was chosen as a dependency, and pg_cron is present everywhere.
I remember I was driving at that time, brainstorming with ChatGPT; unfortunately, Claude's voice abilities are not as good as ChatGPT's.
So I was brainstorming, and we both agreed that pg_cron was going to be a problem because, like any cron, it has only 1-minute precision.
I was driving, I reached my destination, and I started to doubt.
I always doubt when working with AI, and with humans, actually; it's not only about AI, humans can also make mistakes.
So I was in doubt and I said: let's verify, check the documentation, double-check with me.
And thankfully, starting with pg_cron 1.5, per-second precision is available.
And that was the moment: okay, I'm going to build this, because I need per-second precision.
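Scheduling the sampler with pg_cron's second-level precision could look like this; the function name is a hypothetical stand-in, but the schedule syntax is what pg_cron 1.5+ supports:

```sql
-- pg_cron >= 1.5 accepts an interval of 1-59 seconds instead of a classic cron mask.
-- pg_ash.take_sample() is a hypothetical name for the sampling function installed by the script.
SELECT cron.schedule('pg_ash_sample', '1 second', $$SELECT pg_ash.take_sample()$$);

-- To stop sampling later:
SELECT cron.unschedule('pg_ash_sample');
```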
Michael: So is 1 second the minimum
though?
Nikolay: It's the minimum, yeah.
Michael: But I was going to ask
why 1 second as the default and
now it makes sense.
Nikolay: Yes. Then I was thinking: okay, we are going to write a lot.
We need to write the query ID and the wait event.
So how do we optimize storage?
There are 2 big problems.
First, how much we are going to spend in terms of bytes if we just keep writing.
And second, we know Postgres, MVCC, and bloat issues; so if you implement some ring buffer for your storage...
Anyway, let's jump to the solution.
I knew the proper solution from Skype: PgQ, SkyTools PgQ, 3 partitions and rotation.
So I just created partitions for, logically, yesterday, today, and tomorrow.
Yesterday is a read-only partition, fully filled: 24 x 60 x 60 records, right?
Actually no; we need to discuss whether we store 1 row per wait event pair or store it differently, so let's return to this point.
Then we have the current partition, today, and tomorrow is truncated, because TRUNCATE is super efficient, as we all know, right?
No deletes, no updates; inserts and truncate, that's it.
So this is how we implemented it.
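A minimal sketch of that yesterday/today/tomorrow rotation, with assumed names (the real pg_ash schema may differ; the payload layout is discussed a bit later):

```sql
-- Three physical tables used in rotation; only INSERT and TRUNCATE are ever issued,
-- so there are no dead tuples and therefore no bloat.
CREATE SCHEMA IF NOT EXISTS pg_ash;

CREATE TABLE pg_ash.samples_0 (
  captured_at integer    NOT NULL,
  datid       oid        NOT NULL,
  payload     smallint[] NOT NULL
);
CREATE TABLE pg_ash.samples_1 (LIKE pg_ash.samples_0 INCLUDING ALL);
CREATE TABLE pg_ash.samples_2 (LIKE pg_ash.samples_0 INCLUDING ALL);

-- Daily rotation: the table that will serve as "tomorrow" is simply truncated.
CREATE OR REPLACE FUNCTION pg_ash.rotate() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  EXECUTE format('TRUNCATE pg_ash.samples_%s',
                 (extract(epoch FROM now())::bigint / 86400 + 1) % 3);
END;
$$;
```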
In the current version we have visibility only for yesterday and today.
And you can export it to your monitoring tool; we are going to implement compatibility with this extension, I mean anti-extension, in our monitoring stack.
But there is also a plan (maybe by the time you listen to this podcast I will already have implemented it) to have roll-up storage for the longer term.
I plan to have 1 year of history, with maybe 1-hour or 1-minute precision, it depends.
It's not a lot of data.
So this is how we solved the MVCC problem: we just borrowed an idea from good old Skype, from 20 years ago.
I still think it's a great idea.
Michael: So no bloat.
So that solves the bloat issue,
yeah exactly.
Nikolay: Fully.
And second, how to optimize the storage itself.
We had several ideas; I actually decided, but with a lot of benchmarks and many iterations, we chose the idea to store 1 row per second.
So it's a timestamp and a database ID... actually, not exactly 1 row.
If you have multiple logical databases, it will be as many rows as there are databases currently active, in terms of workload, at this very second.
So at this very moment.
Michael: 1 row per database per
second.
Nikolay: But only those which have active sessions right now.
Michael: Okay.
Nikolay: Yeah.
I thought about it.
If we have 1,000 databases, that sounds insane, but it's unlikely that all of them have active sessions right now.
It might happen if you have a huge machine and a lot of things going on; but then, if you have a huge machine, maybe you can afford a thousand rows per second.
Maybe not, it depends; there is room for improvement here.
But anyway, right now it's 1 row per second if only 1 database is active.
And then we encode everything, but how?
The first thought was JSON or JSONB.
I proposed correlated arrays, and it was proven to me that in this case they are better than JSONB: 2 arrays, so 2 columns.
And then I proposed encoding everything into 1 array, and benchmarks proved that's better still.
So we encode the wait event, for example IO, or for example LWLock lock_manager, our favorite; then the number of active sessions present, for example 5; and then the 5 query IDs, right?
So 7 numbers, and then the next wait event, and so on.
And so we have a bunch of numbers.
And I didn't want to use 8-byte integers.
I wanted to use 2-byte integers, because that should be enough.
So we created 2 dictionaries.
One encodes all wait events.
Actually, if you write "LWLock lock_manager" every time, it's a lot of bytes; why?
We can encode it.
In the last couple of major versions there is pg_wait_events, a wait events dictionary in Postgres, which is just a static list of them.
But I wanted to support all versions (this thing supports Postgres 14+), so that's why we created our own dictionary, to cover older versions too, yeah.
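Roughly how the encoding he describes could be laid out, with assumed names and shapes (the real pg_ash definitions aren't quoted in the episode):

```sql
-- Dictionary 1: wait events mapped to 2-byte codes. This is what makes the tool work
-- on Postgres 14+, where the pg_wait_events view doesn't exist yet.
CREATE TABLE pg_ash.wait_event_dict (
  code       smallint PRIMARY KEY,
  event_type text NOT NULL,   -- e.g. 'LWLock'
  event_name text NOT NULL    -- e.g. 'lock_manager'
);

-- Dictionary 2: 8-byte query IDs mapped to 2-byte codes.
CREATE TABLE pg_ash.queryid_dict (
  code     smallint PRIMARY KEY,
  query_id bigint   NOT NULL UNIQUE
);

-- Each per-second row in the rotated sample tables carries a flat smallint array:
--   [wait_event_code, n_sessions, queryid_code_1 .. queryid_code_n,
--    next_wait_event_code, n_sessions, ...]
-- e.g. 5 sessions waiting on a single event take 7 numbers, as described above.
```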
Michael: 1 thing I didn't understand
is you mentioned not using
8-byte integers.
Does that include, because query
IDs by default are 8-byte, right?
Nikolay: Yeah, query IDs are 8 bytes, and that's why we have 2 dictionaries: 1 to encode wait events, and a second to encode query IDs, mapping them to 2-byte integers.
That's it.
And we also solved the problem that this dictionary could grow without limits.
We solved it, so it cannot grow unbounded; we keep it short.
I know how we solved it; I remember, because I proposed the solution.
By the way, I forgot to say: I created this using 3 AIs plus me, so a team of 4 worked on it.
1 AI was the engineer, 1 was focusing on benchmarks, and another was focusing only on quality, documentation, and reviews.
The benchmark engineer also focused on reviews, but with a specific goal: storage efficiency, performance, low observer effect, and so on.
And we worked through many iterations, like crazy, and I did all of it from Telegram; the whole thing was coded via Telegram.
So I didn't touch anything.
Michael: But you reviewed it, right?
Nikolay: I reviewed... well, I didn't review the code.
Michael: Really?
Nikolay: Really. I reviewed only the results of benchmarks.
I cross-checked results from the 2 AI engineers; they work.
The code, I actually trust this code.
It works in my production already.
Why should I look at this code if I know that it was very thoroughly tested, benchmarked, and reviewed many times by AI?
Michael: Oh yeah, that's the why
for me.
But yeah, anyway, it's interesting
how different, yeah.
Nikolay: I understand your question,
but in this case I think
it's quite solid.
I actually checked the code and
it follows my style.
Michael: Wait, you just said you
didn't check the code.
Nikolay: I checked pieces of the code, just to make sure.
Before I decided not to look into the code, I made sure it's written according to our style guide, and I liked what I saw.
The style, how we write PL/pgSQL, I have already very well established, because PL/pgSQL has been the most popular language for me over the last 10 years, so I wrote a lot in it myself.
So once I made sure it was producing good code, I didn't look at the final version of everything.
But I know it was thoroughly tested in many aspects.
So back to storage.
This is how we encode, and how we solved the unbounded growth of the query ID dictionary; my idea, actually.
I'm excited, because you can engineer solutions thinking about algorithms and data structures, and you don't need to code everything: AI is coding.
So I decided to have 3 tables here too, one pair for each day, that's it.
And the same rotation: we have a dictionary for today, we have a dictionary for tomorrow.
AI raised a doubt, because query IDs might not match across days.
But we don't care: if the same query ID receives a different encoded ID tomorrow, it doesn't matter, because we don't work with this data directly.
We work with it through interface functions which expose it to the user, and the user doesn't see the encoded numbers at all.
Michael: Yeah, and once you look
them up, then it's the same
query ID anyway.
Okay, yeah, that makes sense, great.
Nikolay: And it cannot grow unbounded anymore, right?
Yeah.
And I guess for the rollup approach we will also need a separate dictionary.
That's it.
Which might be bigger, but...
Michael: Wait, why?
Nikolay: Because I want to have
history for 1 year at least,
or maybe half a year.
Michael: Good, okay, fine.
Nikolay: But it will already be compacted, and not all query IDs will go there, only the most important ones: the most popular, or those participating in spikes.
Michael: Yeah, so let me make sure I've understood: we've got 3 main tables, only 2 of which contain data at any time, a day's worth of data each, like all of yesterday's data and then today's data up until now.
So it's 1-and-a-bit days of data at any point in time.
And that's fine, because the main point of this is that something was slow recently, like we had an incident 10 minutes ago, or an hour ago, or over the weekend.
In fact, over the weekend is interesting, but normally it's something fairly recent we want to look into, within a couple of days.
Nikolay: Yeah, I think maybe we should allow configuring how many days of raw data are stored.
Anyway, this approach with 3 is from PgQ, as I said: 3 partitions in rotation.
You cannot see the day before yesterday with it, until we implement the rollup approach for longer-term storage.
And I forgot to mention how much data this is.
Per row, if there are 5 backends active, it's roughly 100 bytes only.
I also forgot to mention that, knowing about alignment padding, I decided to encode timestamps too.
So it's 4 bytes instead of 8, right?
A 4-byte Unix timestamp, but shifted: it starts at January 1st of this year, so it will be enough almost until the end of the century.
And the database ID is 4 bytes; it's an OID, so it was already 4 bytes.
So 8 bytes plus the encoded data, roughly 100 bytes if you have 5 active sessions.
And if you have a more loaded machine, with, for example, 50 active sessions on average, it will produce only 30 to 50 megabytes per day, which is absolutely acceptable for us.
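The back-of-envelope numbers behind that, using the figures from the episode as assumptions rather than measurements:

```sql
-- ~100 bytes/row at 5 active sessions, 86,400 rows/day for a single active database.
SELECT pg_size_pretty(86400 * 100::bigint)  AS per_day_at_5_sessions,   -- about 8 MB/day
       pg_size_pretty(86400 * 450::bigint)  AS per_day_at_50_sessions,  -- about 37 MB/day, in the 30-50 MB range
       round(2 ^ 31 / (365.25 * 24 * 3600)) AS years_covered_by_4_byte_ts;  -- ~68 years from the chosen epoch
```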
I also forgot to mention that you cannot use it on replicas.
This is a downside.
We forgot to mention it in the README, but I thought about it before we implemented it.
It's obvious, because we need to write.
But it's fine, because we see more and more clusters, even bigger ones, coming to our consulting which don't have replicas.
And they run...
Michael: Or only HA replicas.
Nikolay: Yeah, this is a good point.
Actually, yes: they recognize HA is needed, but they are okay to live with just 1 node, somehow, for quite a long time; they live with it, and it's so unusual.
I still think the classic 3-node setup is much better.
But reality is showing a different thing, because cloud resources, as we've discussed multiple times, are already so reliable that people are okay with it.
And also, with Postgres' performance, you may not need read replicas for quite some time, because they add complexity.
So that's why I thought: okay, this is only for the primary, and it's enough.
And I'm satisfied with the results: 50 megabytes per day for a loaded cluster is great, and I'm thinking maybe we'll keep 7 days of raw data, or maybe 14.
Michael: Yeah, I could easily see that; at least the weekend thing makes sense to me, right?
You come in on a Monday and it turns out there was some blip on Saturday.
It would be a shame if that data's gone.
Nikolay: I agree, yeah.
So, we have a bunch of functions which allow you to see what's happening.
Give me an overview of the last few hours, for example, or of the previous day: which top wait events happened, and which query IDs participated in them.
And it joins with pg_stat_statements, if it's available, to present some high-level, macro-level statistics for each query ID participating in those wait events.
We also implemented visualization with bars.
It can be monochrome, or even colorful in psql if you do a little trick.
And I enjoy it: you basically see Performance Insights right inside psql.
It's quite fun.
You can, for example, get an overview of the last 24 hours, then zoom into a specific area and understand which queries are involved.
Of course it requires some typing work, but since it's organized quite straightforwardly, you can let your LLM do it, right?
If you connect it using some read-only user role and let it troubleshoot, it can quickly find the bottlenecks.
Is it an I/O-bound workload, where you need faster disks or more memory?
Or is there heavyweight lock contention, and which queries participate in it?
It's quite good for this kind of troubleshooting.
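The exact function names aren't spelled out in the episode, so the calls below are hypothetical placeholders (check the pg_ash README for the real interface); the read-only role part is plain Postgres:

```sql
-- Hypothetical interface calls, just for flavor:
-- SELECT * FROM pg_ash.report(now() - interval '24 hours', now());  -- overview with text bars
-- SELECT * FROM pg_ash.query_report(1234567890123456789, now() - interval '1 hour', now());

-- Letting an LLM agent troubleshoot safely: a read-only role
-- (the schema name pg_ash is an assumption here).
CREATE ROLE ash_reader LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA pg_ash TO ash_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_ash TO ash_reader;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA pg_ash TO ash_reader;
```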
Michael: Yeah, definitely macro level stuff.
But I saw there was even some micro level abilities as well,
like looking at a specific query's waits by the query ID.
Nikolay: That's pretty cool, yeah.
You can ask, for example, for a specific query ID: which wait events?
It mimics a little of what we have in full-fledged monitoring, because we have all of this there.
Here you can do the same kind of thing, just in textual form.
For this query ID, what were the wait events during the last hour, or the previous day?
And for this wait event type, or a specific wait event, what are the top 5 query IDs and their macro-level characteristics?
It's still macro level actually, because a lot of aggregation is happening here, but you can jump to the micro level, a specific query ID, see plans, and start working with that.
So there is a bridge here, right?
Yeah, there is a bridge.
So anyway, it's early days.
I'm releasing it this week.
Let's see how it works; is it useful or not?
I just wanted to add that this thing feels to me like...
Oh yeah, 1 more thing.
This is poor man's monitoring, self-monitoring, right?
Michael: I think it's not...
I don't think so, but yeah, sure.
Nikolay: It's quite straightforward.
This trick with pg_cron... I forgot to mention, by the way, that if we sample every second, pg_cron by default writes a log record to a table for every run.
Michael: Oh, yeah.
Yeah.
Nikolay: Yeah, so I think pg_cron can be improved here.
For example, I would turn logging off for this specific job.
I don't need logging here.
I just need to see errors, and I don't need errors to be stored in a table with full ACID guarantees; the Postgres log itself would be enough for me to troubleshoot this.
Anyway, there's potential for pg_cron to be slightly improved for these high-frequency jobs.
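The workarounds that exist in pg_cron today are global rather than per-job, which is exactly the improvement he's asking for; sketched here for completeness:

```sql
-- Option 1: stop recording every run in cron.job_run_details (a global setting, affects all jobs;
-- on managed platforms use the provider's parameter settings instead of ALTER SYSTEM).
ALTER SYSTEM SET cron.log_run = off;
SELECT pg_reload_conf();

-- Option 2: keep the logging, but purge old run details regularly (approach from the pg_cron docs).
SELECT cron.schedule('purge-cron-history', '0 12 * * *',
  $$DELETE FROM cron.job_run_details WHERE end_time < now() - interval '3 days'$$);
```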
But what I wanted to say is: this is a self-contained, self-monitoring system.
Even without an LLM, you can write some analysis inside Postgres so it produces reports and analyzes what's happening.
Maybe it's time to increase shared_buffers, for example, because we see a lot of IO / DataFileRead waits or something.
Or maybe we see a lot of time spent on heavyweight locks, so we need to signal that the workload must be redesigned to avoid that contention, right?
But there is a trade-off: should we monitor from inside or outside?
I think there are pros and cons.
We already discussed it, right?
Michael: We did.
I had 1 last question, though.
By the way, the reason I disagree is that I think it's actually just another tool, right? Another tool in the tool belt.
If you self-manage, or use a provider that doesn't have wait sampling, this is a valuable tool in its own right, regardless of monitoring.
Especially since, as you mentioned, with the overhead of monitoring externally you'd only get, let's say, 15-second frequency, or something of that order of magnitude.
Getting 1-second frequency is an upgrade, not a poor man's solution.
Nikolay: Yeah, it's not 10 milliseconds.
Michael: It's true, good point,
good point.
So the last thing I wanted to ask, because you always ask our guests this: on the license front, you've chosen a really permissive license, Apache 2.0.
Why?
Nikolay: Why not?
It can be embedded into other things.
And honestly, let me be transparent here: I even think it will contribute to my business, because if we have this everywhere, it's easier for us to explain which problems exist and what should be done about them, to improve database health, and to support people, like we do in Postgres.
Right now we're not in a very good position, because Cloud SQL (and this is one of the not-so-many but quite great things they have) offers pg_wait_sampling, which is super great.
But RDS has its own proprietary tool, Performance Insights (Database Insights, as they call it now), which is great if you're only on AWS.
But if you think about interoperability, about being able to work with any Postgres, it's some proprietary API, and it's not transparent.
And it's also an external dependency.
I always think: okay, how good is it?
Is it trustworthy?
Maybe there are bugs; I don't see the code, right?
I cannot know.
RDS quality is great, let's agree on that, but still, sometimes you doubt how exactly it works.
This thing is fully transparent, works everywhere, and that's it.
Michael: Yeah, nice.
All right.
Thanks so much, Nikolay.
Nikolay: Yeah, thank you.
So yeah, everyone is welcome to
try and contribute and fork anything.
Michael: Yeah, I will stick it
in the show notes, obviously.
Nikolay: Right.
Michael: Make it nice and easy.
Nikolay: Thank you.
Michael: Yeah.
Cool.
All right, take care.
Catch you in a bit.