Michael: Ho ho ho hello and welcome
to Postgres.FM.

My name is Michael, I'm a founder of pgMustard, and I'm joined as always by Nik from PostgresAI. Merry Christmas, Nik, and happy new year.

Nikolay: Yeah, you too, and to all our audience, which, as we just counted across all the platforms, is roughly 10,000 people.

So it's impressive.

Michael: Yeah, it's kind of wild,
but yes, thank you everyone

for another good year.

Nikolay: Yeah.

So obviously we wanted to do some overview of this year, 2025, for Postgres.

I think it's a great year for Postgres, because a lot happened and a lot is still happening. And the vector of development of the product, the communities around it, the companies, the whole ecosystem... it's astonishing, right?

Michael: Yeah, absolutely. Feels like it's going from strength to strength, and yeah, as you say, another good year.

Nikolay: Yeah, so where to start? Let's maybe start with some technical topics first. Postgres 18, obviously, is one of them, and we had a whole episode about that. What do you remember? What are the achievements of Postgres 18 which shine?

Michael: Yeah, good question. I guess one thing we didn't have when Postgres 18 came out and we did the episode was a little bit of hindsight. We didn't know how it was going to go.

With a major version, you're always
a tiny bit worried there's

going to be a big issue in it.

And once you get through the release
candidates and the beta

without that, that either means
there isn't a big issue or it

hasn't been found in testing yet.

So now, with the benefit of a bit of hindsight, I think it's gone pretty well, hasn't it? Like, I mean, touch wood, it feels like not only has it gone down well, but people have been reporting some performance improvements, and people have actually been upgrading.

I'm actually seeing people, in our case, submitting version 18 query plans to the product.

So they're using it on production
and already looking into performance

things with it.

So they've not only upgraded, but
have had it in production for

a little bit.

So yeah, it seems like people are upgrading, which isn't always the case when a new major version first comes out, and seeing some wins, which is cool.

Nikolay: Yeah, I also feel the pace of upgrades is improving from year to year. I think managed Postgres providers have polished the procedure and make new Postgres versions available sooner than in previous years. So it's obviously being noticed, right?

But in terms of Postgres 18, I remember that native support for UUID version 7 somehow made it to the short list of always-mentioned things in every overview.

And it's great because, again, it started during Postgres TV hacking sessions online. I'm super proud of this. It finally made it, after a couple of years of waiting for the RFC standard to be finalized.

Postgres was very conservative compared to others. It's great that this thing made it. It's not a huge thing, but it's obviously very useful and helpful for people.
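For reference, a minimal sketch of what this looks like, assuming Postgres 18's uuidv7() function (the table is purely illustrative, and if I recall correctly uuid_extract_timestamp() supports version 7 UUIDs in 18):

```sql
-- Postgres 18: generate a time-ordered UUID natively
SELECT uuidv7();

-- Used as a primary key default, inserts land mostly at the
-- right edge of the index, unlike random uuidv4() values
CREATE TABLE events (
    id   uuid PRIMARY KEY DEFAULT uuidv7(),
    body text
);

-- The embedded millisecond timestamp can be extracted back out
SELECT uuid_extract_timestamp(uuidv7());
```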

Michael: Not a reason to upgrade
though, right?

Nikolay: No, no.

Among reasons, I wanted to mention
a specific one.

I know we have customers who are working on upgrades and have already made a decision to upgrade to Postgres 18 much sooner than usual, just because the LockManager lightweight lock contention is fully solved. And I have a series of articles exploring this in detail, showing benchmarks: if you upgrade, you just raise the max_locks_per_transaction parameter, which requires a restart, unfortunately, but it's easier than suffering from lightweight LockManager contention, you know? And that's it, it's solved.

So this work also took a few years to make it into a final release. Finally we have it, and this is definitely one of the reasons to upgrade.
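As a sketch of what that looks like in practice (4096 is just an illustrative value; in Postgres 18 the per-backend fast-path lock slots scale with this setting):

```sql
-- Check the current value (the default is 64)
SHOW max_locks_per_transaction;

-- Raise it so hot relations and their indexes fit into
-- fast-path slots, avoiding LWLock:LockManager contention
ALTER SYSTEM SET max_locks_per_transaction = 4096;

-- The change only takes effect after a server restart
```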

Now, as for other reasons: I also noticed, as we discussed, AIO.

That's a

Michael: huge one for people.

I actually haven't seen many... I think a few people have blogged kind of theoretically about the performance improvements that could be possible there, but I haven't seen anybody publish anything saying they upgraded and saw a big improvement due to AIO. But I would say it has driven a lot of interest in the major version. It could be that it is proving really good for performance, but more than that, it's a good marketing feature: it has driven a lot of interest in the upgrade, and people are actually considering it.
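For context, the knob behind this is the new io_method setting in Postgres 18 (a sketch; check that your platform's build actually supports io_uring before relying on it):

```sql
-- Postgres 18 asynchronous I/O: pick the implementation
SHOW io_method;   -- 'worker' is the default in 18

-- On Linux builds compiled with io_uring support:
ALTER SYSTEM SET io_method = 'io_uring';
-- requires a restart; 'sync' restores the pre-18 behavior
```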

Nikolay: Yeah, I saw it not in an upgrade, but I saw it in... well, it's actually my blog post... Maxim's and my blog post about pgBackRest.

And I suddenly noticed that before, single-threaded pg_basebackup could only do 300-400 megabytes per second, but in Postgres 18 it's a gigabyte per second on fast disks. It was a big surprise to me, and it's also one of the reasons, I think, to consider upgrading sooner. Oh yeah, we can dive into technical details in another episode, but let's move on.

What's next?

What is big in 2025, in your opinion, in terms of technical stuff?

Michael: On just the technical side? That's an interesting question. I've got a lot actually in and around the community, but on the technical side, I think one of the bigger things we saw was more efforts... actually, maybe that's community still. Do you consider the efforts to empower and grow more hackers, like more actual Postgres developers, people working on the Postgres codebase, technical or more community?

Nikolay: Well, it's less technical. Technical... since you mentioned the word hackers, let's use this word in a different meaning. Remember the US Treasury hack?

Michael: Oh yeah.

Nikolay: This is quite technical. There was a case when it was not so good, right? And Postgres is obviously used everywhere, so vulnerabilities happen from time to time. Technically, Postgres has a great process for fixing them and issuing a new set of minor releases which close a vulnerability, but still, Postgres is very widely used. I think this year Postgres made it into more technical media because of that.

And it was mentioned in the context of, okay, hackers from China hacked US infrastructure and Postgres was involved. So I think this is also kind of a trend. It's slightly less technical, but there is a technical aspect to it, and obviously it means that you always need to keep up with minor upgrades; patches should be applied promptly. Because bad things happen, and it's better to keep Postgres up to date in terms of minor versions. Right, so this is what I remember about this.

Michael: Nice. I have one more technical one. It feels like it was a big year for announcements, at least in the sharding space. We got at least 3 big new projects kicked off to try and shard Postgres, specifically for OLTP.

Nikolay: Yes, exactly. So we had an episode with Lev Kokotov about PgDog. We had an episode with Sugu about Multigres, which is developed inside Supabase. And yeah, these are 2 things which are purely open source, and there is a non-open-source product PlanetScale is developing, right?

So, yeah.

Michael: I think they've said they're gonna open source it, but

it's all private at the moment.

Nikolay: Yeah.

Yeah.

We have 2 open source things... actually, my team has already tested almost all of them in terms of latency overhead.

Michael: Is Multigres testable already?

Nikolay: I've heard kind of yes, I cannot say for sure.

There is something there already.

So I will let them announce all the things, but it's definitely

being developed quite rapidly.

You can go and check pull requests.

What I like about open source is that it's very transparent, right? So you know what's already happening and the vector in which it's being developed, in both cases. This is the great power of open source. Yeah, yeah.

Michael: True, and it's interesting how they're choosing slightly different trade-offs. I think it will be fascinating in the coming years, kind of 3, 4 years' time. I'd be really interested to see who's using which systems, what the trade-offs are, which kinds of systems benefit most from one approach versus the other. Does the license matter? I think PgDog and Multigres have gone different directions on the licensing front. Language, too. Yeah, exactly.

So it will be interesting, but this felt like the big year. I've written down that it was May when PgDog was first announced, June when Multigres was first announced, and August when Neki was first announced. So quite short, you know, if you consider how long it's been since we had a new potential sharding solution, and then 3 come along in a few months. It feels like the timing was right for that to happen now.

Nikolay: So yeah, there are companies which do need it. And there is big demand, including from companies who don't need it. With sharding, more people think they need it than actually need it. But the lack of a reliable and mature sharding solution was always a weak side of the Postgres ecosystem. We had Citus, we had PgCat, we had SPQR. And none of them was publicly known to be used in big OLTP projects at huge scale, like Vitess is: for Vitess we have GitHub, Slack, Pinterest, and so on. Here we didn't have such stories. We still don't have them yet, but I'm 100% sure we will have them soon, just because of this year's development.

Michael: Yeah, the big projects that sharded Postgres, almost all of them, or at least the ones that have talked about it publicly, did so application-side. So they have sharded, but it's all on their own heads, basically, or, you know, using a framework, not a...

Nikolay: Oh yeah, we had the episode on 100 terabytes of Postgres. It was great. And of the 2 or 3 guests, I think all of them use basically application-side sharding. Yeah. But at least one of them has good articles about this. I think Notion, right?

Notion.

Michael: And Figma did too.

Nikolay: Okay, great.

So, you can do sharding yourself, of course, right?

But it's good to have some system.

And obviously we are going to have multiple systems which will

be used in big companies soon.

Michael: I think the Vitess thing is really interesting as well, though. You mentioned more people want sharding than need sharding, but I think there is a really attractive thing for early-stage startups: having to tell all their investors they're going to be the biggest thing ever, they probably have to believe that as well. And for some of them it is true, right? Some of them are going to need it. We talked about one last week where they should have partitioned a year ago, but they only launched a year ago.

So there are these startups that need a small database at the beginning, and they're going to need sharding within a couple of years because they're going to scale that fast. So for them the story, and it used to be, let's say, PlanetScale with Vitess, or just Vitess in general, was: you can start small with us, and then you can have a sharded setup when you're ready. It was the same story with Mongo many years ago. It was this kind of, I'm going to grow to web scale. Do you remember that phrase?

So yeah, it will be nice. And I think that kind of transitions onto 2 things I had on my list. One was: I think MySQL's loss is Postgres' gain this year. There's been quite a lot, firstly PlanetScale coming into the Postgres ecosystem, but also some stories more recently around a lot of the MySQL team getting laid off. So momentum there has been kind of shifting and slowing for a while, since the Oracle acquisition, maybe longer.

But I think Postgres is ready to take up the slack, and the work on sharding is part of that story, because I don't think we had a good answer to Vitess until now (and maybe arguably still don't), but it feels promising that one of these 3, if not more than one, will prove to be good.

Nikolay: Yeah, I agree with you. And that's why I actually mentioned that the stories matter. We have stories published about successful sharding of Postgres to great scale with the DIY approach, but we need more case studies. And I'm looking forward to the next couple of years, to hear about successful use of the new sharding solutions. So it's going to be great. This year laid a great foundation, basically an answer, so that Postgres won't be considered a very limited OLTP database system where you'd be scared to hit some walls. All right, so this is a great year for this topic. I like it a lot.

Okay, let's move on. Maybe to less technical stuff. I think there is a big movement in terms of startups and Postgres companies: pure Postgres companies, and less pure but also very Postgres-related companies.

For example, let's start with... there is bad news and good news. The bad news first, I think: what do you think about Yugabyte? I think Yugabyte is struggling. It's not pure Postgres, but it showed some path, also a solid solution, right? But I think it's struggling. I see many people have left. And I also think, given the topic we just discussed, that native sharding solutions which let multiple Postgres instances achieve great OLTP scale and automation are basically going to say Yugabyte is not needed anymore.

Michael: Yeah, I honestly don't know. And when we said more people want sharding than need it, I think the same is true for distributed. I categorize them slightly differently in my mind, sharded and distributed, and I think maybe the difference is the idea of writing to multiple primaries in different availability zones.

It feels like this: you pay the tax on latency, and for some it's 100% worth it. But for a lot of people that trade-off isn't worth it. And I think a lot more people think they want multi-master, or whatever the appropriate phrase is for that... multi-primary.

Nikolay: "Multi-master" not so much, "multi-primary" is fine these days. Again, I'm joking. Interesting.

Michael: Yeah, but yes. So my main point is: for the kind of financial institutions or big banks that need it, it's hugely valuable, but I think most companies don't need that, and the trade-offs aren't worth it. But I have no idea how they're doing. And also it's kind of Postgres-compatible, right? Do you consider it Postgres?

Nikolay: Well, yeah, you're right. You're right. It's more Postgres-compatible than CockroachDB, but it's definitely not pure Postgres. Anyway, I think the solutions we just discussed are going to meet the needs of people who start with just one small Postgres more naturally than migrating to other systems.

Although at the same time, we must admit that, for example, AlloyDB feels good. I have customers who migrated to AlloyDB, and I can mention Gadget, for example; we had an episode with Gadget's CTO, Harry. We just found out that our monitoring system works well with AlloyDB, and AlloyDB also has bloat and similar things to take care of. It's interesting, so it feels like Postgres, although a lot is rewritten. Also this year, I think it's important to mention Amazon DSQL. Although it's not open source at all, it also plays in the same area for enterprises: not limited, distributed, multi-region, everything.

And they are quite active in terms of development, I think, although it's not my area at all. I'm a pure Postgres guy, and I consider all of these "friends of Postgres", so to speak; some of them are open source, some are not. For me it's great to have them around, but I think pure Postgres should be the majority in terms of choices.

Michael: Yeah, and each of the hyperscalers has their version of this product, right? This kind of distributed, Postgres-compatible thing, for whatever definition of Postgres-compatible they're going with. But this year there have still been significant contributions from these hyperscalers to pure Postgres.

And I think there's a kind of alternative timeline where we wouldn't be seeing the likes of AWS and Microsoft, and even a little bit Google Cloud (lots and lots of companies, but including those hyperscalers), actually contributing back to pure Postgres rather than just making their own proprietary forks. I really hope it continues, but this year it certainly feels like a lot of the contributions, a lot of the improvements in Postgres 18, a lot of the community efforts came from these big companies. And I would include EDB in that as well, in terms of big companies continuing to invest in...

Nikolay: pure Postgres.

Microsoft HorizonDB, this is fresh news. I've already lost track of this enterprise Postgres topic; there are so many options. I think pure Postgres is a player in enterprise, and it's going to stay a player in enterprise. Although we have Aurora, AlloyDB, HorizonDB, DSQL, who else? EDB, everything.

Michael: Yeah, but all of them also offer a separate product that is much, much closer to pure Postgres. Right. So AWS has RDS, Google has Cloud SQL... I actually always forget what Microsoft calls theirs, but every single one of them has one that looks just like Postgres, that acts just like Postgres. Yeah, it's something like "for PostgreSQL" or something.

Nikolay: We have customers there, but I don't even remember. I touch Microsoft much less than the others.
much less than others.

Michael: Microsoft is good at a lot of things, but naming products is not one of them.

Nikolay: Well, Google is a champion.

Michael: Yeah, fair.

Nikolay: So anyway, we have so many flavors of Postgres, right? And obviously there is interesting competition in the area of sharding, and there is all this competition which didn't start this year, it started much earlier, but now the enterprise Postgres topic is definitely becoming kind of a red ocean, right? This is absolutely a red ocean; every big company is playing there. Every big cloud company is playing there.

Michael: Yeah, that's an interesting point, because if you're talking about it being highly competitive, normally what you'd expect to see in that case would be prices starting to come down. And I don't think we've seen that. I still think they're charging quite a premium for it. So if it was truly a red ocean... I don't know, maybe I'm just not seeing the enterprise contract negotiations. Maybe that's where the prices are coming down.

Nikolay: Well, yeah, maybe you're right. I think there are segments there, and in the segment of big enterprise Postgres there are different methods to cut prices, all the normal cloud ways, you know, like savings plans and so on. But they don't rush to bring prices down for Postgres on one machine. Obviously, there is a big premium.

In terms of technical trends this year, not touching AI yet, but maybe moving closer to it: I think branching is slowly, very slowly, making its way to becoming a commodity. It's not there yet at all, I think. Branching is still very rare; not everyone has it.

Michael: I feel like there are 2 trends, right? There's branching and there's also cloning, and these are very similar topics which are kind of converging; some platforms and some systems offer cloning.

Nikolay: Well, some

Michael: Some platforms offer thin cloning, but mostly it's thick cloning, and that's still valuable for some amount of experimentation and test runs.

Nikolay: Yes, it was always there, since the very beginning.

Michael: And Heroku even offered it.

Nikolay: Right. But if it's thick... well, the tooling is improving there for sure. But I'm a big fan of thin cloning, and we have our own product, DBLab, for database branching and thin cloning. And I think before this year, Neon was there, DBLab was there, and that's it. Now we see Timescale and Tigris Data playing there.

Now we see Timescale, Tigris data
playing there.

Michael: That happened this year.

Nikolay: Their understanding of its role for AI and experiments is great. This is exactly how I've seen it for many years already. And their CTO Mike just posted an article yesterday about how Replit is doing a similar thing. So, experimentation at scale, in isolated environments; to make it fast and cheap, you need copy-on-write. This way you can have a lot of ideas for how to improve your database. Not everything can be verified on a thin clone, but a lot of ideas can be verified in isolated Postgres instances, and database branching is great for that.

For example, definitely everything related to query optimization at the macro level; we discussed it many times: working with plans, verifying ideas. This is absolutely needed. Otherwise you become too slow, too expensive, and you are limited.

And on the topic of self-driving Postgres: just yesterday I noticed a big trend, people started talking about autonomous Postgres and self-driving Postgres. I have a post, someone from EDB mentioned something, and I also posted. Automated experimentation and branching is an essential part of this vision. And we already have pieces, with companies starting to implement this.

So this has worked at GitLab with DBLab for many years already, so I'm happy this topic is becoming more and more popular. The future of self-driving Postgres is still foggy, but we have components which are already becoming quite clear, and we can see how to achieve very high levels of automation.

Michael: So, did you see, in this topic area of branching, thin cloning, and copy-on-write, there was a recent post by Radim from Boring SQL talking about a feature in Postgres 18 that I'd missed: you can create a database within Postgres and specify a strategy, and that can use copy-on-write if your file system supports...

Nikolay: Yeah, yeah, I also missed it. I remember the discussion on the hackers mailing list, but I somehow overlooked it. So it's already in Postgres 18, are you sure?

Michael: Yeah.

Wow.

I haven't verified it myself, but this blog post says so, yeah.

Nikolay: It has a limited scope of use. I think if you control all your experimentation environments very well, you can definitely use it. Basically, you have one single Postgres instance, and each CREATE DATABASE, if it's based on copy-on-write, is fast and cheap. It brings you an isolated logical database for experimentation, but it's not for everything. For example, in the case of DBLab, you can perform a major upgrade of a clone. Sure. And test it. Here that will not be possible, because it's a single instance. Although for working with ideas like verifying indexes or tuning, this will work for sure. This is great.
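A minimal sketch of the feature being discussed, assuming the file_copy_method setting from Postgres 18 (database names are illustrative, and the underlying filesystem must support copy-on-write cloning, e.g. Btrfs, XFS with reflink, or APFS):

```sql
-- CREATE DATABASE strategies exist since Postgres 15;
-- Postgres 18 adds file_copy_method, which lets the
-- FILE_COPY strategy clone files copy-on-write
SET file_copy_method = clone;

CREATE DATABASE experiment_1
    TEMPLATE my_template_db
    STRATEGY = file_copy;
```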

Michael: Yeah.

Yeah.

And baked into Postgres already, that's super cool.

Nikolay: Yeah, well, that's cool. I need to revisit this; somehow I overlooked it as well. Great, great.

I think what we'll see in the future is more tools which automate experimentation for performance optimization. Instead of throwing something at an LLM and sitting with ideas without knowing which of them are correct and which are completely wrong, you will throw it at some tool, and the LLM will generate the ideas. Well, this is actually already happening. We just don't have all the pieces connected yet, right?

But ideas are verified on thin clones, right? Yeah, and the user receives the ideas which were considered during the brainstorm phase, then the ideas which didn't work, and the ideas which worked best, proposed to be applied in production.

Michael: Yeah, I personally think this is a topic that has got a lot of legs for the coming years, but I don't think we've seen that much yet; I think we've only seen the beginnings of this topic.

Nikolay: Yeah, and some of the pieces, basically the building blocks, we have. The whole solution is yet to be built. I agree.

Michael: Did you see much progress
on the vector search side

of things this year?

Nikolay: Oh yeah, I remember I started the year looking at turbopuffer. We also had an episode about it. So it confirms it's a great, successful year for Postgres, but also for the Postgres.FM podcast, because we had great episodes covering areas which are important for users, right? So turbopuffer was great, and this approach with vectors in S3; and you know S3 also now has a vector type. I revisited it recently. It's outside of Postgres, but I started looking at it because I noticed that clients who come to us using Postgres also use it. And I also noticed it's used in Cursor; that's how I know about turbopuffer.

So a combination of Postgres plus turbopuffer: I saw a trend similar to, you know, Elastic, Postgres plus Elastic. Something similar is happening. And I thought it's great in terms of price, and the technology is great. And S3 also has vectors now.

But I think the use cases for all of these approaches are still to be understood, and pgvector is going to stay. I wish it was in core. I don't see how it's possible now, but...

Michael: I've heard conversations about it. I haven't been looking closely, though, so I might have missed them: have there been conversations about doing something in core, like, formally?

Nikolay: All beginnings of such conversations were just interrupted. This index type is not normal, because it returns different results, right? Due to its approximate nature, it's basically not deterministic. And this brings new challenges for being considered a core thing. But in my opinion, it must be in core to be developed properly and be considered something like a super standard. Although pgvector is already a de facto standard, thanks to all the managed platforms supporting it as well. So all people start there.

I wanted to say this: I don't see anyone who has solved the 1 billion vectors problem yet. All those who claim they solved it, I think they are lying.

Michael: It's like the namespacing, right? What do they call it?

Nikolay: They say, you know it's bad, but you basically need to, not cluster, partition it, right? So it's not one index, it's multiple indexes. Key spaces, namespaces, call it whatever you like.
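A sketch of that pattern with pgvector (the table, dimensions, and partition count are illustrative, and the extension is assumed to be installed):

```sql
-- One logical collection, many physical indexes: partition by
-- namespace so no single HNSW index has to cover everything
CREATE TABLE docs (
    tenant_id bigint NOT NULL,
    id        bigint NOT NULL,
    embedding vector(768)
) PARTITION BY HASH (tenant_id);

CREATE TABLE docs_0 PARTITION OF docs
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
-- ...repeat for remainders 1 to 3

-- HNSW indexes are built per partition, so each one stays small
CREATE INDEX ON docs_0 USING hnsw (embedding vector_l2_ops);
```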

But what it means is that you don't have a single index which supports good OLTP latencies, meaning definitely much faster than 1 second for a single query, while covering 1 billion vectors. Nobody does it. Including S3, right? S3 also has a maximum of, I don't remember, 50 million or however many vectors in one index.

So anyway, this is an unsolved problem, and it's unsolved, I think, everywhere, not only inside the Postgres ecosystem. So it's still a big problem: how to have a single index. But why do we need a single index? Maybe we don't need it. Maybe we can always split it. PgDog also showed how to combine sharding and pgvector, right? So you can have multiple Postgres instances and so on.

Questions about price and so on can also pop up, and turbopuffer is not open source. Again, we have great open source tooling already, and if you combine it with sharding, maybe you are fine, but I haven't seen a single 1-billion-scale index which would work.

Michael: — Why does it matter?

Like, if I have a partitioned table, that doesn't have a single index on it.

— Right.

— So why does it matter?

Nikolay: Complexity. If you talk about regular indexes, having a few billion records in a table is not a problem at all. And latency will be amazing if you have an index-only scan, or just an index scan, if you don't have these rows filtered out, right? But when it comes to vectors, if you have 1 billion vectors and you try to cover them with a single index, build time will be terrible for HNSW, latency will be terrible, so it's not OLTP. It doesn't meet OLTP standards, which we know are 100, 200 milliseconds maximum, because of human perception.

Michael: Yeah, but not for search.

Nikolay: For search included.

Michael: Search is one of those use cases where people are willing to... okay, fine.

Nikolay: No, no, no.

Are you okay if you Google something
and you need to wait 10

seconds?

You're not okay.

Michael: Yeah, it's a good point, actually. I think Google is probably the proof that it's important for search to be fast.

Nikolay: Of course. Because that's what they've

Michael: always focused

Nikolay: on.

People need it everywhere. You have a mobile app, you type something, you want it to move very fast. If it's below 100 milliseconds, it doesn't feel slow. That's great. So we have human perception, which defines what fast means.

Michael: You don't need to convince
me that performance matters

and it's good, but I was just thinking
that might

Nikolay: be.

Search is considered part of the application. It's not like we have a separate system... okay, in some cases. If it's a lawyer who needs to pull some lawsuits from history, related to the topic they are working on right now, they can wait a minute. No problem.

No problem Or if it's Like bi report
this actually means we overlooked

1 trend as well Analytical thing
right so yeah market iceberg

and and DuckDB and how to bring
all this and Postgres couple

of a few companies played good
game and And we are acquired

Michael: Yeah, I would say this was huge news, let's say last year. And then I think what happened this year, possibly as a result, was that one of the biggest analytical database companies in the world acquired what was one of the most promising companies in this space. I actually had that on my list to cover; I was thinking it's more the community side of things, but I guess it's technical, because it probably came about because of their progress on the analytics side.

Nikolay: Yeah, so obviously Crunchy Data started to work on the analytical side of Postgres, which was always considered weak, had great achievements, and got acquired for a quarter of a billion.

And Neon, although they didn't play a lot with analytics; they did something with DuckDB, as I saw, they tried things here and there, but still it was purely an OLTP thing, right? With branching and super fast point-in-time recovery and serverless, everything. And got acquired by Snowflake for a billion.

Michael: Databricks.

Nikolay: Oh, Databricks, yes.

Snowflake and Crunchy, Databricks and Neon, right.

And as we see, they both work to bring combined solutions, analytics plus OLTP and everything. We could consider this from a different angle too, but this is also a game to conquer the enterprise Postgres topic, right? Just coming at it from the analytical perspective first. And it's interesting.

I like what's happening. And I also like that some people don't like not being in an open source environment, and more talent is available.

Michael: Oh, interesting.

Nikolay: To be hired, because of these acquisitions.

Yeah, of course it's natural, because both Snowflake and Databricks, I don't consider them pro-open-source companies at all.

And we know Neon, I haven't checked for a few weeks, but Neon

stopped showing any commits on GitHub since July, right?

Michael: Yeah, that's the tricky thing with acquisitions like

this, because if you read the posts about what the plan is and

what they're announcing is going to happen, and then watch for

5 years, quite often those things don't align.

So it's really difficult to assess these things within, you know, 6 months of them happening, as to why it happened.

Is this a good thing for the community?

Is it a bad thing for the community?

It's really hard to tell.

It could be the best thing ever, or it could be awful.

And it's really hard to tell this close to it, which of those

is gonna be the case.

Nikolay: Yeah, in this context, I would also like to mention a couple of companies where I like what's happening as well. Like, you can see some negative sides, but you can see a lot of positive sides everywhere.

For example, PlanetScale came to the Postgres ecosystem. And they resurrected the old topic, which some people tried to bring life into: let's use local NVMe disks. That's great, like, absolutely great. And I also like what they do a lot, in a lot of areas.

I don't like their position with open source because I don't

see what open source they produce at all.

They just use Vitess, which was created before PlanetScale and

that's it.

Michael: And maintain it, I think, yeah.

Nikolay: Well, yeah, yeah, that's of course, but what else?

So anyway, I like a lot of what they do, and they brought an absolutely different kind of view on things which the Postgres ecosystem somehow missed. So it's kind of like the blood of the MySQL ecosystem was merged into the Postgres ecosystem, and this is great.

Like, different points of view and so on. And this feels really great, and it's this year's achievement, I think, in terms of Postgres ecosystem development.

And another is, of course, Supabase, which is pure open source. They bet heavily on open source, and multi-model, and OrioleDB, and all things. And this is their strategy, I think, to do things open source. And their growth is...

Michael: Well, and not just, I would say not just open source, but with permissive licenses. I think that's incredible. It's something that people don't really fully appreciate, how important it was for PostgreSQL to have not just an open source license, but a really permissive open source license.

Nikolay: Yeah.

Like Apache, MIT.

Michael: Exactly.

Exactly.

And.

Nikolay: Postgres license.

Michael: Yeah.

All of them extremely permissive.

And I mean, they're not copyleft.

They're not AGPL.

Like, they're actively trying to encourage collaboration, and allowing people to use it for commercial purposes is, is...

Nikolay: You know, a common misconception here: GPL and AGPL, they are more open source than these permissive licenses, right?

But somehow people avoid them.

Because they cannot build commercial
cloud things on top of them.

Michael: I'm not trying to get into an argument as to what's more or less open source. I'm saying that the fact that it's permissive, I think, is...

Nikolay: is...

Freedom, yeah.

Michael: Yeah, and I think it's
good for everybody, it's good

for the rest

Nikolay: of us.

Unprotected freedom, this is how
you can see it.

Because somebody can clone and
create commercial software based

on that, easy.

Michael: Well, and I think it's
about betting on the long term.

Like, I think it's about saying
commercial things will come and

go, but this will still be here.

Nikolay: It's off topic,

Michael: yes.

I apologize.

Nikolay: I wanted to credit them for that.

Michael: To wrap it up,

Nikolay: I just wanted to mention that Supabase growth is absolutely

crazy.

And why?

Because a lot of tools which are also growing like crazy basically bring a lot of development activity, even from non-developers, vibe coding, right?
Many platforms grow, grow, grow.

And Postgres became, it's obvious this year, Postgres is the default database for vibe coding. This is 100%, like, as of the end of 2025 it's so. And Supabase's growth, and Neon mentioning that 80% or something of databases created are created because of vibe coding and AI agents and so on.

Michael: On Neon, yeah. What about Replit? I actually didn't finish reading it. Did you see the post that Mike was responding to? Is that Postgres?

Nikolay: Yeah well yeah it's Postgres yeah so Postgres everywhere.

Michael: Yeah nice.

I would say the thing about Supabase and PlanetScale and people

like that, they're also amazing at marketing and I think that's

underrated in the Postgres ecosystem in terms of being good for

the project.

They've kind of brought some of that energy of early MongoDB, where people are actually excited to use it, developers want to use it, they think it's an easy thing to do.

You know what I mean?

It's that kind of like, it's cool all of a sudden.

And I don't know if I've been here long enough to remember the last time it was actually cool.

Nikolay: I don't know, yeah. Let's mention our friend Franck Pachot, who left the Postgres world, but he says not fully, we also had an episode, right? Yeah, yeah. And joined MongoDB, so that's also interesting. The last time I used MongoDB was 12 years ago, I don't know. The last time I listened to them was 2018, at VLDB in Los Angeles.

Michael: But if you go back, if you look at startups that are

12 years old, a lot of them will be on Mongo, like, or 11 years

old, 10 years old.

And I think we're going to see this, like, startups that started

now, like last year, this year, next year, a lot of them are

gonna be on Postgres, and I think that's really healthy for the

ecosystem.

Nikolay: Yeah, yeah, yeah. And also challenging, because people don't realize what it is.

Michael: Fair.

Michael: What do you mean, the vibe coders?

Nikolay: Right, so I think there are challenges in the area of explaining basics. I think there's demand for understanding simple basics. Like, for us, vacuum is basic, but maybe it's not basics for them. Relational things are basics. Sometimes I see different platforms go at different levels there. Some, like Supabase, expose the whole of Postgres, you can play with tables and so on. Some, like Gadget, for example, expose the concept of indexes, but they don't let you connect to Postgres directly. So more on the vibe side. So there are different levels here. It's so cool, you can choose how deep you go. But if you go deep and you don't understand, you need... I think demand for good education on relational databases and Postgres is growing because of all these activities.

That's on one hand. On the other hand, demand for good maintenance, operational tools, practices, and methodologies is also growing, because imagine how many databases are created every day now. A year ago it was much, much less, right? Like, the number of Postgres databases is exploding this year. Literally.

Michael: But don't you think their shelf life is also reducing?

Nikolay: Of course, most of them won't survive. Yeah. We're just experimenting, vibe coding, like, throwing it away later. But some of them will survive.

They need a path for good health.

That's why I think self-driving is also a new trend which is going to stay. And it's great to have examples of previous attempts, such as Oracle Autonomous; we can learn from them. And others.

Yeah, okay.

We'll definitely discuss this topic more than once in the future, I'm quite sure. Well, again, the trend is obvious, and the trend started here in 2025.

Michael: Okay, wait, well, but yeah, I think it would be interesting to see how important it turns out to be in 2026. My opinion is

actually that the ones that survive
can then be given a bit more

attention.

So, like, it's not that... I think it's still difficult to get a business up and running and to a huge scale. The ones that do it are still incredible, and there still aren't that many.

I don't see that there are gonna be tens of thousands more of those really successful huge companies. But they might get there quicker and with fewer technical resources in the beginning. So they'll need support, for sure. But I don't see the argument that there's going to be an order of magnitude more of them.

Nikolay: My team feels it really well. Imagine enterprises and regular startups, they are different, right? And there is a new breed, AI startups. These guys move really fast. They like database expertise even more. They are very often very smart, but they are different from regular startups, and I feel it just from the perspective of how work with databases is organized.

So this trend will only grow in the future. It started this year, I think. We noticed it this year, but it will not disappear.

I think this is something new.

Michael: And you don't think they'll
hire, like, you think they

won't hire for those, like, specialties
once they grow?

Nikolay: I think they will, but they move so fast. You can hire, but then what? Like, it's too late. Like you mentioned, this case about partitioning, right? With regular methods, this new hire with absolutely great knowledge will come, but it will be too late. It should already have been done a year ago.

Michael: But it's not too late.

Success generally brings enough
money to solve these problems.

Like hire a good consultancy to
help out.

Nikolay: I'm not joking.

I heard let's migrate to a different
database system maybe.

Michael: Okay, so you think for Postgres it's survival?

Nikolay: It's a challenge.

Yeah, yeah.

So, like, if a database cannot scale without too much attention and too much manual work, it won't survive its hypergrowth. We had this term, hyperscalers. AI is going to bring a new level of hyperscaling. So we need to adjust.

Thank you so much.

Let's say happy Christmas and happy new year to everyone. And let's continue the new year with all these trends and good Postgres health, good Postgres tools, good open source. It's great to be in this ecosystem and this community, in the broader meaning.

And I really enjoy making this
podcast with you, Michael, and

also happy new year and Merry Christmas.

Michael: Absolutely, likewise.

Thank you everybody.

Thank you Nik.

Some kind things our listeners have said