Michael: Hello and welcome to Postgres FM, a show about all things

PostgreSQL.

I am Michael, founder of pgMustard and I'm joined as usual by

Nik, founder of PostgresAI.

Hey Nik!

Nikolay: Hi Michael, how are you?

Michael: I am doing okay, thank you.

How are you?

Nikolay: Recovering from some bad flu but all good.

Michael: Yeah, good to have you back.

And yeah, so what are we talking about today?

Nikolay: Let's talk about metadata in its various meanings, including comments, maybe more, right? Comments on database objects, comments inside queries, inside PL/pgSQL stored procedures and functions, just broadly: what kind of metadata does it make sense to store, and how? Pros and cons, I don't know.

Michael: Yeah, I think so.

And also some side effects of having it in there or use cases.

I was actually quite surprised we hadn't talked about it already.

I saw a recent blog post by Markus Winand on Modern SQL. It was called SQL comments, but it was mostly about query-level comments, I think.

Nikolay: It doesn't have a date, and by default I assume all the posts there are quite old, but maybe it's new. It's a very small post though, right?

Michael: Yeah, it's small, but it's one of those Modern SQL posts where he has these nice visualizations of which databases have supported which syntax since which versions, and I always enjoy those.

Nikolay: Me too, but practically, I rarely leave the Postgres ecosystem. So it's just: okay, good to know Postgres is good here as well, and here, and here, and that's it.

Michael: Occasionally you come across cases where Postgres doesn't support some syntax that another database does. It's not common, but I have been surprised a few times, thinking: oh, interesting, Oracle or SQL Server have some new syntax, even in the standard, that we don't yet have. So it isn't often, and it is really nice to see especially the dark green ticks: fully compliant, no little subtext about how it deviates from the standard or anything else. But that post was about query-level comments, so single-line and multi-line comments.

Nikolay: Yeah, in standard SQL there is the double-hyphen comment style. And it's interesting to see that MySQL and MariaDB have issues there: they require whitespace after the two hyphens.

Yeah.

Okay.

But yeah, Postgres supports that
because it's standard and also

it supports C-style comments.

Exactly.

/* ... */

Michael: Yeah.

Which, crucially, are good for multi-line comments, aren't they? So you can have the opening slash-star and then start writing.

Yeah, and also,

Nikolay: I try to use them all
the time because they are predictable.

If you use SQL-standard comments, you might have issues when, for example, the line endings are stripped from your query. In that case it's quite a messed-up situation, because everything after the first double hyphen becomes a comment.
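A sketch of that hazard (table and column names here are made up):

```sql
-- Original, written on two lines:
SELECT id, email FROM users  -- only active accounts
WHERE active;

-- If a tool collapses the newlines, the server receives:
--   SELECT id, email FROM users  -- only active accounts WHERE active;
-- and the WHERE clause silently disappears into the comment.

-- The C-style form survives the same transformation:
SELECT id, email FROM users  /* only active accounts */
WHERE active;
```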

Michael: Yeah that's a really good
point and I wasn't going to

bring this up till later because
it felt like a really minor

detail, but I was relatively surprised
to read that psql strips

out the single line, the standard
comments before even sending

them to the server, whereas it
doesn't for multi-line.

So yeah, exactly, client-side. So if you want that information server-side for some reason, it needs to be one of the C-style comments.

Nikolay: Yeah.

And we do have situations where
we appreciate comments, for example,

coming to pg_stat_statements.

Let's start with pg_stat_activity,
first of all.

Comments for queries can be super
useful to pass some, I don't

know, like trace ID, like origin,
even URL sometimes.

Like you can indicate which part
of your application generated

this query or participated in generation, right?

There are some libraries for different
languages.

I remember one for Ruby on Rails.

It's called marginalia.

I don't know.

Michael: It would be from the margins, I think, like when you write little notes in the margins of a book.

Yeah.

Nikolay: Yeah.

So it's very useful.

It can add automatically generated comments to the queries coming from the Ruby on Rails ORM, and it's helpful to trace, to analyze, and to quickly find where a query is coming from. And obviously we can see them in pg_stat_activity.
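A marginalia-style annotation looks roughly like this (the exact key-value format and names here are illustrative, not marginalia's guaranteed output):

```sql
-- The ORM appends context to the generated query:
SELECT id, email FROM users WHERE active
  /*application:web,controller:accounts,action:index*/;

-- While it runs, another session can see the comment as-is:
SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'active';
```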

The downside, obviously, is that it increases the size of the query, and by default in pg_stat_activity the query column has only 1,024 characters, controlled by the track_activity_query_size setting.

We usually recommend bumping it roughly 10x.

Yeah.

Because we have memory for it, let's do it.

Because queries tend to get bigger and bigger over the years, right? So 1,024 is not enough.

And of course, this comment is put in front of the query, so unfortunately you might not be able to see how the SQL itself is written. It starts with SELECT, and sometimes ORMs put a lot of columns there. With the comment plus the column list, you might see that the query is truncated and you don't see the FROM clause at all. And seeing the FROM clause is super essential.

There are opinions that SQL's SELECT-first order is the wrong way around, that it should be reorganized to start with FROM, because that is where execution conceptually starts. But it is what it is.

And yeah, so if you have a huge, helpful comment, it might bite you here, because your query gets truncated sooner and you don't see it. But it is what it is, right?

So my recommendation: comments are super helpful here, whether from this library or one you write your own, just to trace the origin of the query and so on. You just need to bump your track_activity_query_size to get a bigger limit.

Unfortunately, it requires a restart.

That's the downside of that change.
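For example, a roughly 10x bump from the 1,024-byte default could look like this:

```sql
-- track_activity_query_size is measured in bytes (default 1024)
ALTER SYSTEM SET track_activity_query_size = '10kB';
-- The change takes effect only after a server restart; then verify:
SHOW track_activity_query_size;
```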

Michael: I actually don't know
for sure, but I was looking at

the marginalia documentation briefly
just before this and noticed

that the comments were going at
the end of the query.

So it might be that they've deliberately
done that.

Nikolay: In this case you don't
see it in pg_stat_activity

maybe, right?

Michael: Yeah, good point.

Maybe it's even worse, yeah.

Nikolay: And what would be great, actually... I don't know, this is strange, but I recently implemented something in our monitoring. I was hearing it's difficult, but then I just took Claude Code and implemented it, and it was quite successful. It's an approach I saw in other systems.

So we have a mode in our Grafana dashboards that you can switch: you can see the whole query, or you can see the query with the less important parts stripped. I consider the column list a less important part, and I replace it, not with three separate dots, but with a single symbol, the Unicode ellipsis character. This way you have more useful information, and comments, I think, are also stripped in this mode.

Right?

Makes sense.

But yeah, depending on the situation, comments can be helpful for various observability activities, to connect some dots.

And it's interesting how comments in queries, coming in front of...

Oh, by the way, another big downside of having a comment in front of a query: in our old checkup tool we had a so-called mid-level analysis. The high level is the whole workload, all metrics for the whole workload from pg_stat_statements; of course it's not really the whole workload, by default it's only the top 5,000 normalized queries. The lowest level is an individual normalized query, right? The mid-level is what we call first-word analysis.

So we just look at which word comes first. Okay, SELECT. Or is it UPDATE? There is a trick with WITH, because a CTE can combine multiple statements in one query. But usually it's quite helpful to understand how many queries, in terms of calls or overall timing or some other metric, are SELECTs versus UPDATEs and DELETEs. You can get stats for writes from tuple statistics, but analyzing this at the query level is not straightforward.

So this is what we did.
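A first-word breakdown over pg_stat_statements can be sketched roughly like this (a simplification, assuming the Postgres 13+ column names; real query text needs more careful parsing):

```sql
SELECT upper(split_part(ltrim(query), ' ', 1)) AS first_word,
       sum(calls)           AS calls,
       sum(total_exec_time) AS total_time_ms
FROM pg_stat_statements
GROUP BY 1
ORDER BY total_time_ms DESC;
```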

And then I remember comments destroying this analysis. Of course, it's easy to fix, right? You can just skip over comments. But it's interesting: by adding some observability helpers, you can sometimes break other observability tools.

Michael: Yeah.

The other thing I wanted to make sure we mentioned in this area

was how pg_stat_statements works

Nikolay: because it

Michael: does denormalize.

So yeah, so comments, I think it's a quite good trade-off actually.

I think I quite like the decision they've made so they denormalize

So you if you have the same?

Query, but with 2 different comments, they will count as the

same query.

You'll get the same query ID They'll be grouped together, But

only the first 1 that gets stored under that query ID

Nikolay: What?

Unchanged

Michael: it's so it's not unchanged
with the comment exactly

Nikolay: For example, imagine we have a simple SELECT with blah blah blah, where, I don't know, email equals some value, or lower(email) equals some value. And you decide to put this email in a comment. This is how your PII leaks to pg_stat_statements, but only the first occurrence. That's weird.

So.

Yeah.

Michael: I don't know if I've seen it. Have you seen PII in comments?

Nikolay: That's interesting.

Imaginary situation, it's possible,
right?

Michael: Sure.

Nikolay: Yeah, it's definitely possible.

For example, we can say: okay, this is the user with this email acting here. And it leaks to pg_stat_statements. Everything else is stripped, the query is normalized, we don't see parameters, but for comments we see the first occurrence. This is weird.
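A sketch of the effect (the emails here are invented):

```sql
-- Run at different times by the application:
SELECT * FROM users WHERE email = 'alice@example.com' /* acting user: alice@example.com */;
SELECT * FROM users WHERE email = 'bob@example.com'   /* acting user: bob@example.com */;

-- Both collapse into one normalized entry with one queryid; the
-- stored text keeps only the FIRST occurrence's comment, roughly:
--   SELECT * FROM users WHERE email = $1 /* acting user: alice@example.com */
SELECT queryid, calls, query FROM pg_stat_statements;
```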

Michael: Yeah.

It's weird but I quite like, imagine,
I was thinking what's the

alternative?

Either they don't show comments
at all, or they have to store

loads of copies.

Nikolay: Yeah, there are pros and
cons here.

What I would like to see, and it's still an unresolved problem: I would like the ability to pass comments as key-value pairs, comma-separated or whitespace-separated. For example, you say: application ID is this, component ID is this, correlation ID, many things, URL. And then to be able to get aggregated metrics based on those dimensions.

So, for example, I know my application consists of various components; I pass the component ID in comments, and then for this particular query I want to see how many calls overall are coming from this component ID versus that one. This type of analysis would be super powerful: basically, custom dimensions for pg_stat_statements.

I know this was discussed for pg_stat_kcache, and the consensus was that it should be in pg_stat_statements, but that was many years ago and I don't know how it ended. There is definitely a desire for this kind of analysis, though I imagine it could be quite expensive if implemented poorly. But yeah, this is what people want.

They come with questions like: okay, we have a monolith, in terms of code, with many teams working on different parts. How do we identify which part is most expensive in terms of CPU usage, for example, or in time spent by the database processing it? This is where that kind of analysis would be super helpful.

Michael: Do you see people doing it, like you could for example

have them connect with different roles and that would then be

logged?

Nikolay: Yes, this is one way, an indirect one. I just pictured a very flexible approach with many dimensions, but you can downgrade to this: let different parts of the application connect as different database users, and use the fact that pg_stat_statements records the user ID.

The downside of this approach is that you need to think about how to manage pools in PgBouncer, for example, because different users mean you need to set different quotas and pool sizes, right? This can be quite inflexible.

If you want a single quota for all users, how? Maybe it's a question for PgBouncer; maybe it's actually possible, maybe not. That's another question, right?

I do think it's good practice to separate your workload into different segments, with each segment working under a different database user. But there is also the management overhead of maintaining various limits and so on.

Michael: Yeah.

It's

Nikolay: ...I forgot where I was going with that. So it's an interesting question; maybe some of our listeners have a clear picture of what best practice would be here. Please leave a comment somewhere.

Michael: Yeah.

Or even just what people are doing, what you're doing in practice,

it'd be good to hear what solutions people have come up with.

Nikolay: What's possible right now, I think: say you have a quite high track_activity_query_size, like 10k, and I see people go even further, like 30k. You use comments from marginalia or something, and you have already started to appreciate performance insights or wait event analysis; we've talked about that a lot, right? In this case, you can start...

You can recognize different wait events and count active sessions, segmenting them by wait event type and wait event. And you can bring this knowledge about dimensions into the analysis and start saying: okay, we usually have this many sessions waiting on I/O, and among them, 90% are coming from that part of our application, according to the comments.

This is quite powerful. And for this you don't need to change pg_stat_statements or how Postgres works; it's possible right now already.
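A sampling query along those lines might look like this (the /*application:...*/ format is an assumed marginalia-style convention, not a Postgres feature):

```sql
SELECT wait_event_type,
       wait_event,
       substring(query FROM '/\*application:([a-z_]+)') AS app_component,
       count(*) AS active_sessions
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY 1, 2, 3
ORDER BY active_sessions DESC;
```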

Michael: Yeah, as long as two parts of your application aren't doing the same query.

Yeah, that would be fun.

Nikolay: They can do the same query,
but they put different comments.

Michael: Yeah, but in pg_stat_statements.

Not in pg_stat_statements.

Nikolay: No, no, I'm talking about
wait event analysis.

Michael: Sorry.

Nikolay: And I'm actually talking about what we call the lazy approach: sampling pg_stat_activity. It cannot be super frequent because there is overhead. For example, every second or every 5 seconds you sample pg_stat_activity, and you get the raw query from there, including comments, as-is.

If you start using pg_wait_sampling, which is great in terms of sampling rate, by default it samples every 10 milliseconds, you lose the raw query and the comments, and you have the same problem as with pg_stat_statements: these dimensions are not available at the pg_wait_sampling level.

So anyway, this is super interesting
observability topic, I think.

And what do you think about comments
which, I don't know, which

are put inside PL/pgSQL functions,
for example?

Michael: What for?

Is it to describe behavior like
almost like code comments?

Nikolay: Yeah, it can be an explanation of what the function does, and of every piece. If you look at Postgres code, for example, it's very well commented. The comments are very thorough; they can be huge. Sometimes you open some .c file and there's a huge comment at the beginning explaining what's happening.

Yeah.

Is it a good idea to put this into function bodies?

Michael: What's the downside?

I think I generally err on the
side of commenting things.

I like comments.

Although, you brought up AI already: I do find some of the LLM commenting excessive at times. Maybe not excessive in the sense of large comments; it's more that there are comments at too many stages.

I do like the Postgres style: huge comment blocks describing a whole area, then lots of code, rather than a comment on each line describing what that line does. But I tend to find that the databases I've seen over the years aren't commented as well, on average, as people's applications.

Maybe I'm looking at the wrong projects, but personally, before knowing the downsides, I would think it's a good idea. Are there downsides I don't know about, though?

Nikolay: First of all, I agree
with you.

If a comment just explains what the next line does, it's quite silly, low-value, right? But if it captures some knowledge and decision-making, how a decision was made, which trade-offs were taken, that is super valuable.

And right now I think comments make even more sense, because sometimes when we engineer something and involve AI, we have a roadmap and an intention, and maybe the first version is not the final implementation of everything. So having a TODO comment, right? TODO, FIXME. It's a meme comment, right? But these days I feel a shift here: it makes sense to comment future intentions more often, because next time we will revisit this with AI as well, and we will probably improve in the same direction we originally wanted.

So preserving context in comments now makes a lot of sense.

It's not always worth putting it as a comment right inside the function body, because we might end up with a huge plan inside the function body, which doesn't feel right, and it consumes a lot of stored bytes. So maybe big comments should go into a separate document adjacent to the code, in the same place where we store the function in Git, for example. Maybe it's better documented separately, right?

But when you do something and say, okay, we do this now, but we plan to extend it to this and this, I like these TODO comments, because they live in the same place, and next time, when AI or you read this, you understand: okay, this is what we planned here. So the TODO/FIXME style makes more sense now, because we explain it and we plan to fix it later. Why not?

Michael: But that was always true.

If you work in a team, if you work
in a style that's iterative,

like any kind of agile process,
any kind of extreme programming,

that kind of let's do the minimum
version and then let's iterate,

that's always been true hasn't
it?

Nikolay: But it has also always been true that there is a lot of dead code, and a lot of such comments with very little chance of ever being acted on. You write TODO, FIXME, you leave the comment, but you never return, because of capacity. Now it's much easier to return and actually fix things, because we have AI.

Michael: So

Nikolay: capacity changed, right?

And you think: okay, actually, let's explain all the things in a comment right here, and we know we will revisit it, if this code survives and we don't drop it entirely because of some different understanding of the product or something. We actually will improve it. I'm starting to believe in this, unlike in the pre-AI era, when I knew nobody would have the capacity to work on it, because everyone is busy, there's too much of everything, and so on.

And this is great, actually.

So comments are good.

If you don't leave a comment when some weird decision was made, the code is hard to understand (why is it like this?), and then we are in trouble. An inline comment is great because AI won't miss it: it's reading this part, the comment is right there, all clear. But again, if it's some long document, it's better to offload it somewhere else.

And we're slowly moving to the topic we definitely wanted to discuss: database object-level comments.

Michael: Oh, before we do, can
I do 1 more for query level comments?

Nikolay: Yeah.

I just thought

Michael: I'd forgotten about this until recently: this is how pg_hint_plan puts hints in. And I think it's true for other databases too, not just Postgres. It's fascinating to me that this is the method we've chosen, but it makes sense, right? If we don't have hints at the database level, how else could we get them in at the query level, other than putting them in a structured format inside a comment? And until reading the psql thing, I didn't know for sure why it was in a multi-line comment, other than for readability. I found it really interesting that it uses that syntax, probably because it gets stripped less often. So that seems to be another big use case for query-level comments.

Nikolay: PSQL doesn't strip C-like
comments?

Michael: Doesn't strip the C style.

Nikolay: That's interesting.

Michael: Yeah, so you can use pg_hint_plan
with psql without issues.
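For example, with the pg_hint_plan extension loaded, a hint rides inside a C-style comment (the table and index names here are made up):

```sql
/*+ IndexScan(users users_email_idx) */
SELECT * FROM users WHERE email = 'someone@example.com';
```

Since psql leaves /* */ comments intact, the hint reaches the server and the planner extension can act on it.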

Nikolay: Yeah, so back to functions. My approach is to have good comments explaining intentions, context, maybe plans, but if it's a huge document, it should be offloaded. But also, for each function we can create a separate metadata piece: we can say COMMENT ON FUNCTION function_name IS '...'. Have you seen how many variations the COMMENT ON statement has in Postgres?

Michael: Yes, and I think I might start using this more: I didn't realize you could add comments to indexes.

Yeah.

That's really cool, in the sense that sometimes someone shows you they've got these 16 indexes, but they don't know why certain ones were added. Wouldn't it be cool if you could just check the comments to find out?

Nikolay: On constraints, on sequences,
isn't it like fascinating?

Michael: I knew tables, I knew
columns, I knew like general objects

you could put comments on them
but I didn't know there were so

Nikolay: many options.

That's already hacking. On access methods? Too much, too deep. Yeah, so it's cool. 44 lines of variants there.
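A few of the variants, with hypothetical object names:

```sql
COMMENT ON TABLE users IS 'Registered accounts; one row per person.';
COMMENT ON COLUMN users.status IS 'Lifecycle state: pending, active, or banned.';
COMMENT ON INDEX users_email_idx IS 'Supports the login lookup; added for the signup flow.';
COMMENT ON FUNCTION normalize_email(text) IS 'Lowercases and trims; used by signup and login.';
COMMENT ON CONSTRAINT users_email_key ON users IS 'Emails must be unique across tenants.';
-- Setting a comment to NULL removes it:
COMMENT ON TABLE users IS NULL;
```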

And a funny thing: in 2005 or 2006, when we created our first social network, it was Postgres plus PHP. Not long ago, I stumbled upon an email with the first review of my code, from someone with real experience. And the big criticism was the lack of comments at the database object level. Can you imagine, 20-plus years ago? And I remember how protective and defensive I was.

Michael: Oh wow.

Nikolay: Yeah.

Yeah.

But it's a good thing. And my point right now is that it has always been a good thing to have some approach to comments in your project, because sometimes they stop being valuable, right? You can have a comment on a table, but also on each column, and I remember trying to enforce this rule in multiple teams on different projects. Let's do it! After that review, because I eventually agreed it's a good thing to have. It's just a lot of metadata.

But I also remember seeing: okay, column id, comment: this is our ID. What comment can you put on such a column? Sometimes there is no extra meaning, right? It's a super simple column.

But right now I think it's valuable to think about, and this is engineering-level work, this is what humans should think about, maybe brainstorming with AI: what should we really document in database object comments? Right now it's so easy. When we build something, there is no excuse anymore not to write tests for CI, because this is something AI does quite well; you just need to control it. And not just coverage. You should go deeper: coverage is a super simple metric. We have 80-plus percent, but what does it really mean? We should cover edge cases, corner cases, really test things, right?

And the same goes for documentation. Comments are our documentation; they're part of the project documentation. Tables should have comments, functions should have comments, columns as well. Of course, if there's nothing to say about some simple column, it can be skipped.

But there should be some rule, and AI should help maintain good comments, so that later, when you add features or do refactoring, there is great context. And when you work with the database through all those MCP servers and APIs, if the database can describe itself, that's a great thing to have, right? Instead of guessing the meaning of a column from its name alone, you have a comment. That's great.

So now I think there's no excuse to avoid this powerful tool and have everything documented.
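This self-description is queryable: a tool (or an LLM behind an MCP server) can read the comments back with the built-in description functions (the table name here is hypothetical):

```sql
SELECT a.attname                          AS column_name,
       col_description(c.oid, a.attnum)   AS column_comment,
       obj_description(c.oid, 'pg_class') AS table_comment
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relname = 'users'
  AND a.attnum > 0
  AND NOT a.attisdropped;
```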

Michael: I think you raised some interesting things.

I do think that about being strict with it on ID columns you make a perfect point: there's no point. But the place I've seen this be super useful is reporting queries. It can be quite complex to make sure you are summing the right columns, or that a column means what you think it means. This revenue number: what does it include, what doesn't it include? That's super relevant when you're trying to report on things. And maybe sometimes only the data person knows that, but if they can put it in comments on the schema, then once they hire a team, those people can know it too. And nowadays, when an LLM is writing the reporting queries, it's got a better chance of getting that right instead of guessing.

Exactly, exactly.

Nikolay: And the ideal schema: two columns. An id column of type UUID, and we should have a comment: never put UUID version 4 here, always UUID version 7. And a second column, data, of type JSONB.

It's a joke, just in case.

Michael: Yeah, yeah.

Sounds like Mongo.

Nikolay: And then a comment that you extend all the time as the schema of that JSONB grows, explaining what's inside.

Anyway, one of the interesting cases we discussed recently was a project that was originally a monolith, but they split the database into several pieces. When you do this, you need to abandon some foreign keys, because you cannot have foreign keys between two clusters, between two primaries, right? And I remember we discussed that maybe we should maintain some fake, imaginary foreign keys and define them in comments. It was just an idea, right? Because who would enforce the rule, and would anyone ever write those comments? I don't know, but it's possible. So you have a column in one cluster with a comment saying that values here should match the values of that column in that cluster, and periodically the application or some additional tooling checks this.
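Such an imaginary foreign key could be recorded like this (the names and wording are invented):

```sql
COMMENT ON COLUMN orders.user_id IS
  'Logical FK: values must match users.id in the accounts cluster. '
  'Not enforced by Postgres; verified periodically by an external job.';
```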

Anyway, tests and comments are super cheap to write these days. There should be some rule enforced in every project to make them rich.

Michael: Yeah, it's not only that they're cheaper, though; they've always been valuable. I'm a big fan of tests. I love making changes to things and knowing that we haven't introduced any regressions, that all the previous bugs now have tests which mean we can't reintroduce them or something similar. I'm a big fan of comments too, but I think their value might even be going up in this new world. It's not just that they're just as valuable and cheaper to add; they might be even more valuable. I would be super scared letting an AI make changes to an application that doesn't have good test coverage these days. The value of tests, for me, is going up even higher, because I have less trust that people have properly reviewed things. So do you see where I'm coming from, that the same goes for comment quality? Obviously, getting AI to add these things is one thing, but then you need to check that there is a reasonable comment, that it documents what you think the column is or does.

Yeah, I can see that.

The 1 thing I was going to ask
you though is what do you think

about index comments?

Do you use them?

Honestly,

Nikolay: I don't remember ever using them. But it makes sense to document why we created an index, right?

Michael: I think so. And maybe, because we re-index sometimes for maintenance, I was even thinking: when was it added, who added it, what for? There might be some interesting metadata

Nikolay: in there.

That's interesting. And not only indexes. I remember cases when I thought: oh damn, I wish we could figure out when something, some table or index, was really created in Postgres.

Michael: Function.

Nikolay: Yeah. If you, for example, establish a rule that when you create something, or recreate or rebuild an index, you document it in a comment, why not? It's an interesting idea, actually.

Probably I should borrow it for
our pg_index_pilot project, which

re-indexes automatically.

Michael: Yeah, so for a lot of
people using ORMs, they'll have

the source control of this, right?

Like they can look up when was
this first created and hopefully

that comes with a commit...

Nikolay: Or looking at logs. Yeah, not easy usually.

Michael: Looking at logs.

Nikolay: Logs, if you log DDL. But how long do people store those? If it's a serious project, usually there's something like Elastic storing them for quite a long time, but not forever. I agree, it's a lot.

Yeah.

Michael: Indexes could easily have been created years ago, and people wouldn't have logs going back that far.

Nikolay: Yeah, and actually this is also interesting. In pg_index_pilot, we of course have a couple of tables where we store such metadata and the whole history of rebuilds and so on.

Nice.

Yeah, and this is interesting: to think through the pros and cons of storing some metadata in a comment versus having a specific table and storing it there.

The pros and cons are not obvious to me. Of course, the comment is closer to the object, so it's easier to consume. But you don't have history, for example; with an additional table you do, though you need to maintain it and so on. Another pro of storing metadata separately is permissions: sometimes you want to store data that you don't want regular users to see, for example. It's a nuance very specific to your goals, right? But yeah.

The last thing we wanted to mention is this blog post from Andrei Lepikhov: an interesting idea to use security labels as metadata storage. It's quite elegant, I think, and we discussed it before. The idea is that we need some metadata storage, but instead of creating a table and writing it there... In that case it was pgEdge, I think, so it was probably related to a multi-master solution and bidirectional logical replication, something like that. So the idea was: let's use security labels, which come from the integration with SELinux and similar security mechanisms, and benefit from the fact that you can put anything there. Unlike a comment, which is a single comment per database object, there you can have multiple metadata pieces per object, one per label provider, so it's a one-to-many relationship, which is interesting. And putting some custom data there, why not, actually?
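The statement itself is ordinary DDL; with a label provider loaded, it might look like this (the provider name and the JSON payload are hypothetical; Postgres only accepts labels that a registered provider validates):

```sql
SECURITY LABEL FOR my_provider
  ON TABLE users
  IS '{"replication_role": "source", "owner_team": "billing"}';
```

Unlike COMMENT ON, each provider can attach its own label to the same object.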

Michael: Yeah, so it's called security
labels, but I guess you

could just think of them as labels?

Nikolay: Yeah. So it's interesting. I never thought about this, and maybe there are different use cases where you can benefit from it, if you need specific metadata for specific consumers of a particular database object.

Michael: Yeah, many options.

Nikolay: Yeah. Anyway, comments should be used more in the AI era. Table-level, index-level, I like it a lot. I've never used them, but I'm going to think about it.

Michael: Yeah, and even if you're somewhere that isn't using AI stuff all the time, I don't know how many of those places there are these days, I think this is useful anyway, even just for teams collaborating: comments are good for communication generally.

Good.

Alright, nice one Nikolay.

Thanks so much for this and catch
you next time.

Nikolay: Have a great week, bye
bye.
