[00:00:00] Michael: Hello and welcome to PostgresFM. I'm Michael, from pgMustard. I'm back after a couple of weeks off. Thank you so much for holding the fort, Nikolay. This is Nikolay from Postgres.AI. How are you doing?

[00:00:10] Nikolay: Much better than the last two weeks. Thank you for returning. I was thinking maybe that's it, you decided to stop this after one year of doing it.

[00:00:19] Michael: Yeah. Unlike Americans, us Europeans do take holidays, and two weeks off must be almost all of your annual leave over there, I guess. But yeah, we like our summer holidays here.

[00:00:29] Nikolay: I don't know what you're talking about. I have three spots in my Calendly today, I've noticed, so people can still schedule calls even if it's a holiday. Today is July 4th, but I still have some work to do, probably. So this is episode number 53. Today is July 4th, and our first episode was published last year, on July 5th.

[00:00:52] Michael: Well, yeah. So this is our one-year anniversary, and we decided we're gonna do a mailbag episode. We've got a lot of requests and suggestions and things from people over the year, probably more than either of us was expecting. So thank you, everybody. But it does mean we have struggled to keep up with the suggestions, and we've got a few that probably aren't full episodes, or that we can give a quick couple of opinions on, or maybe even just a quick answer.

So I think the idea is for us to go through some of those this week. So it's gonna be a bit of a mixed bag. Hopefully a few of you that have asked those questions will get something out of it, and we can probably follow up with a few of them in more detail at some point.

[00:01:34] Nikolay: Right. It's like a photo dump on Instagram. By the way, the software we are using just told me that the subscription has expired, meaning that exactly one year ago you subscribed. Right. Congratulations, you need to pay again. Okay. Let's start.

Question 1: wal_log_hints

[00:01:53] Michael: Wonderful. So the first question I had on the list was: what is the effect of wal_log_hints = on after bulk deletes? Why is the next SELECT slow, and why does it generate tons of WAL? This one came in via Twitter.

[00:02:10] Nikolay: Yeah, that's strange. I saw this question, and it's strange, because I don't think there's a connection here. Maybe I'm wrong, but wal_log_hints is needed to propagate hint bits to the WAL and to replicas. One of its key purposes is failover. For example, Patroni supports it: if a failover happens and there is some deviation, the old primary can be slightly in the future compared to the new primary. In this case, with the regular approach, we need to rebuild it. But if hint bits are WAL-logged, Patroni can apply pg_rewind, or we can apply pg_rewind, and make a standby out of the former primary much faster.

But speaking of bulk deletes, I think the problem is not in this setting. Maybe I'm wrong again, but I think the problem is in the bulk deletes themselves. If you delete a lot of tuples and then read from the same pages: indexes don't have version information, so an index scan, checking the heap records in the table, might find that these records are already dead, just deleted recently. And there is also an in-place micro-vacuum mechanism: when you SELECT something, Postgres might decide to do a little bit of cleanup in that page right now, and when that happens, it leads to additional WAL writes, of course. So I think the problem is, maybe, again maybe I'm wrong: if wal_log_hints is off, then on replicas we might see a SELECT leading to writes, because we don't have the hint bits.

There's an article, an old article on the okmeter blog, we can attach it. If you don't have wal_log_hints, then on replicas you just SELECT something and Postgres decides to update the hint bits, and this causes some writes on the replicas. But this is different. So I think the author of this question deals with bulk deletes, and the problem is the bulk deletes and a lack of control over autovacuum behavior. What we need is to split the deletes into batches. We need to control it, and we need to make sure that dead tuples are cleaned up more aggressively, more actively, by autovacuum workers, or we can VACUUM manually ourselves.

Right? So this is what I think in this case. But maybe I'm wrong. Yeah.
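A hedged sketch of the batched-delete approach Nikolay describes; the table, column names, and batch size here are hypothetical, and the loop would be driven from the application or a script:

```sql
-- Delete in limited batches instead of one huge DELETE, so autovacuum
-- (or a manual VACUUM between batches) can keep up with dead tuples.
-- Assumes a hypothetical table "events" with primary key "id".
DELETE FROM events
WHERE id IN (
    SELECT id
    FROM events
    WHERE created_at < now() - interval '90 days'
    LIMIT 10000
);

-- Repeat until 0 rows are affected; optionally, between batches:
VACUUM events;
```

Smaller batches also keep transactions short, which helps replication lag and lock contention.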

[00:04:44] Michael: Yeah, well, I'll respond to the person on Twitter, so hopefully they can let us know more detail if they've got it. For anybody else wondering, this setting is off by default, so it's not something you need to worry about unless you've turned it on.

[00:04:57] Nikolay: Yeah. If I'm wrong, it would be great to have some reproduction and explore it. And it would be interesting to explore it with additional tooling.

[00:05:06] Michael: Could it be that the SELECT is setting hint bits and therefore causing a load of full-page writes? Whereas, if you were just doing a SELECT afterwards without this setting on, you wouldn't have caused any WAL. So you're causing a bunch of full-page writes that you wouldn't have done in the previous case.

[00:05:26] Nikolay: Maybe, actually. Yeah, interesting. We can explore this with pageinspect, for example. We can delete, then inspect the pages before our SELECT and after our SELECT, and the hint bits will be visible there. So pageinspect could help to understand the behavior. Yeah.
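A minimal sketch of the pageinspect approach, assuming a scratch table named t (a hypothetical name) and the appropriate privileges:

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Inspect tuple infomask flags on page 0 of table "t".
-- Hint bits such as HEAP_XMIN_COMMITTED (0x0100) and
-- HEAP_XMAX_COMMITTED (0x0400) live in t_infomask.
SELECT lp, t_xmin, t_xmax,
       (t_infomask & 256)  <> 0 AS xmin_committed_hint,
       (t_infomask & 1024) <> 0 AS xmax_committed_hint
FROM heap_page_items(get_raw_page('t', 0));

-- Run this before and after the SELECT in question to see
-- which hint bits that SELECT ended up setting.
```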

[00:05:48] Michael: Great suggestion. Should we move on?

[00:05:50] Nikolay: Yeah, let's do it.

Question 2: reading PostgreSQL source code

[00:05:51] Michael: All right. So: how to get started reading the Postgres source code? Maybe a PostgreSQL-style C reference guide to consult, for non-C programmers. So, yeah, any advice for people first getting started? I saw on the Postgres wiki that there's a really good, slightly oddly titled page called "So, you want to be a developer?"

I actually thought it was really good advice in general, from everything I've seen. So I'll link that up in the show notes. But how about from your perspective?

[00:06:20] Nikolay: Well, I think reading the source code is not the major challenge. If you want to be a developer, the major challenge is different: it's dealing with discussions, and with other people, and convincing them that your vision is better. So, defending your thoughts and opinions.

But reading the source code requires no C coding experience, at least in the beginning. It requires just understanding English, because there are good comments and there are good README files. It's enough to understand what's happening; the code is covered with good comments, definitely.

And I think this is the main thing. I usually just read it. Maybe the challenge is finding the proper places in the source code, because it's huge. You need to find the proper places. I usually use git grep. Sometimes I use GitLab or GitHub search, but it's not super convenient.

Also, there is Sourcegraph, or something like it, which helps you to navigate. If you have a function, you can quickly... I have a browser extension for that; I can quickly jump to the definition of a function and see all its calls. So it's basic tooling for navigation in code, regardless of language.

It helps to navigate a little bit. But I also think there is an opportunity here, and actually a small secret to be revealed: there is an opportunity to have a good tool to talk to the source code, right? Like "explain to me", I mean some LLM or something. Just explain to me how this is working. And based on the comments, and not only the comments but the code itself, this tool could explain, show you, mention particular files and even line numbers, depending on the version, of course. Oh, also, a very good thing is to read the two internals books: one from Hironobu Suzuki, one from Egor Rogov, and particularly Rogov's book about Postgres internals. It has links to the source code: "check this file, we described something here in terms of how Postgres works, and you can find it in this file". It has these links alongside, which is very convenient. This is my overview of the problem.

[00:08:43] Michael: Nice. I agree with you, by the way. And I've found GitHub search to be surprisingly good, just for getting to the places where the comments are good.

[00:08:52] Nikolay: They improved it recently, a year or two ago.

[00:08:54] Michael: Nice.

Question 3: isolation levels

[00:08:55] Michael: Next one. So this is quite a long one, but it's basically about isolation levels: their uses in different scenarios, battle-tested strategies and insights, performance trade-offs, edge cases to consider at scale with replication, sharding, et cetera. There was some interesting Jepsen analysis that they linked to, and: has this type of behavior, or another one similar to it, affected you or your clients in any significant way?

[00:09:23] Nikolay: Well, I'm very much a guy focused on mobile and web apps, and we usually work at the default level, READ COMMITTED, at which you might see inconsistency even within a single transaction. Oh, by the way, we also usually tend to have small transactions, sometimes single-statement transactions.

Right? And actually, being a DBA, I hate it when people use an explicit BEGIN/COMMIT block for a single statement. It doesn't make sense; it increases round-trip time and affects performance, sometimes not significantly, but sometimes a lot. So first of all, isolation levels matter a lot when you have multiple statements in a single transaction.

And there you can have anomalies. But it's interesting to understand: usually we have asynchronous replicas, and we have this problem when reads are not yet propagated. If someone, for example, adds a comment and refreshes the page, and we go to a replica on the second request, we don't see the comment which was just added by this user. So we implement something like "stick to the primary", and so on. A similar effect can be seen with just a single node, the primary node, because if you have multiple statements inside a transaction, you read something, then you read something else, and at READ COMMITTED each statement sees newly committed transactions.

So these are anomalies. Or some concurrent session deleted something: in your transaction you read something, you read it again, and you don't see it anymore, right? And this is okay, I mean, we got used to it. We just keep it in mind, and when we design our transactions consisting of multiple statements, we understand that this might happen.
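The non-repeatable read Nikolay describes can be sketched with a hypothetical accounts table:

```sql
-- Session 1 (default READ COMMITTED):
BEGIN;
SELECT balance FROM accounts WHERE id = 1;  -- suppose this returns 100

-- Meanwhile, session 2 commits a change:
--   UPDATE accounts SET balance = 50 WHERE id = 1; COMMIT;

-- Back in session 1, the same statement now sees the new value:
SELECT balance FROM accounts WHERE id = 1;  -- now returns 50
COMMIT;
```

Under REPEATABLE READ, both SELECTs in session 1 would see the snapshot taken at the first statement, so both would return 100.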

[00:11:12] Michael: So I was gonna ask, actually: what proportion of your clients, uh, your customers, sorry, that's a confusing word here, even change the default here? I haven't seen many, or I haven't heard of many.


[00:11:23] Nikolay: Well, first of all, we all change it when we use pg_dump, for example. pg_dump works in REPEATABLE READ, because we need a snapshot. We want the tables to be in a consistent position; we want to deal with a single snapshot. We don't want reading different tables to break a foreign key, for example. And if we run pg_dump in multiple sessions, with -j, the number of jobs, set to four, for example, to move faster, in this case the snapshots are synchronized: all workers of pg_dump will work with a single snapshot. We can do it in our application too.

It's not difficult, actually, to specify a snapshot in a REPEATABLE READ transaction. So a REPEATABLE READ transaction is needed there. It's good, and we all implicitly use it when we run pg_dump. But as for explicit use, I saw only two big use cases. One use case is when people understand very well what they do.

And they move very carefully to a higher level, understanding that when you move to REPEATABLE READ, and especially to SERIALIZABLE, you are going to start getting occasional deadlocks and serialization errors, and some slowness. But the second case is more interesting and probably happens more often: some new developers who...

...don't understand yet the problems of moving to an upper level. I mean, sometimes we need it; pg_dump is one of the cases, for example, or some billing system. Sometimes we do need a higher level to avoid these anomalies. But another approach is that people just decide: oh, we want to be in a very consistent state.

"Let's start with the SERIALIZABLE level right away." I saw it more than once. And in this case, the second thing they do is start complaining about Postgres performance.
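The snapshot-synchronization trick mentioned for pg_dump is available to applications too; a minimal sketch (the snapshot identifier shown is just an example of the format returned):

```sql
-- Session 1: open a REPEATABLE READ transaction and export its snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- e.g. returns '00000004-0000003C-1'

-- Session 2: adopt that snapshot before running any query,
-- so both sessions see exactly the same data.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000004-0000003C-1';
-- ... consistent reads across both sessions ...
COMMIT;
```

The exported snapshot stays valid only while session 1's transaction remains open.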

[00:13:14] Michael: Yeah, so that's the big trade off, right? Yeah,

[00:13:17] Nikolay: Even with few users. Yes, even with few users. Of course, for mobile and web apps, the lowest level in Postgres, READ COMMITTED, should be the default. But with an understanding of its role.

[00:13:33] Michael: This is the big thing I've seen: I think when people first learn about transactions, it leads them to assume at least REPEATABLE READ behavior. So I think it does catch developers out when they're first learning it, which is understandable. There's a good blog post on this by a former colleague of mine, actually, Lawrence Jones, who I know from GoCardless, that I'll link up.

So, yeah, he writes about that in detail, and I think it's a good one to share with juniors on your team if they're learning about this.

[00:14:03] Nikolay: Right. So my comparison is to replicas: synchronous and asynchronous, or semi-synchronous. It's different, but in terms of anomalies it's quite similar. I mean, users see anomalies, and with asynchronous replicas they also have anomalies, and we need to deal with them. But why, if you think performance doesn't matter: say we have five replicas. Let's make all of them synchronous, right? And no anomalies anymore, they all have the same data. And let's go to SERIALIZABLE, synchronously.

[00:14:39] Michael: Yeah. Oh, and in multiple regions around the world as well, right? Yeah.

[00:14:45] Nikolay: Of course, with big network complexity between them. Right, right. And SERIALIZABLE. Good luck.

By the way, this is probably one of the topics we should dive deeper into and explain many, many cases. And I'm refreshing my memory from time to time; I've spent most of my life at the default level, so sometimes I go deeper and find things that are new to me. So I would like to explore this and discuss it.

[00:15:16] Michael: Nice. yeah. Wonderful.

[00:15:18] Nikolay: Mm-hmm.

Question 4: data encryption

[00:15:19] Michael: Encryption in Postgres. That's all we got on this one.

[00:15:22] Nikolay: Data encryption can be different: at rest and in transit, right? Two big areas. And I think... well, security is something we need to deal with, but I'm not a big fan of exploring all of it. Usually I prefer to just check what the best approach is, and so on. And encryption, of course, is a good thing, but compression is also a good thing, right? And sometimes they go together, but not always. And encryption in transit should be enabled, for example, especially if you work in a cloud, but it should be enabled at the protocol level: TLS.
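As an illustrative sketch of enabling TLS at the protocol level (host names, file paths, and the subnet are hypothetical):

```
# postgresql.conf: enable TLS on the server
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file  = 'server.key'

# pg_hba.conf: accept connections from this subnet only over TLS
hostssl  all  all  10.0.0.0/8  scram-sha-256

# Client side: require TLS and verify the server certificate
# psql "host=db.example.com dbname=app sslmode=verify-full sslrootcert=root.crt"
```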

[00:16:04] Michael: So, okay, I see where you're coming from now. I assume they were actually talking about... the place I see this come up most in Postgres is because we don't have encryption at rest at the database level. So, like...

[00:16:17] Nikolay: And compression also.

[00:16:19] Michael: Yeah, so there's the common argument that you can encrypt the disks.

[00:16:25] Nikolay: Using which key? Provided by you, or by the cloud itself? If provided by the cloud, then how good is it?

[00:16:36] Michael: Well, I tend to trust the cloud, at least the major cloud providers, more than I trust myself. But you may not. The other thing that somebody brought up to me recently, which makes a lot of sense, is that you've also got to worry about your backups. If you've just encrypted at the disk level, and you've got a Postgres backup stored somewhere it can be restored from, and that's not encrypted, there are other things to consider.

[00:17:07] Nikolay: And usually people store backups in object storage, which makes total sense because of its durability. Not in terms of accessibility: the high availability of S3, for example, is lower than that of EBS volumes in AWS. But the durability in terms of data loss is insane.

The data won't be lost. But also, usually these buckets are not surrounded by any network protections. I mean, these buckets are available from anywhere; you just need the proper keys and you can access them, from different regions, from different customer accounts, and so on. It's interesting.

Like, it's, it's, it's interesting. But, , of course they usually provide a lot of things to, for encryption, and, Sometimes it's challenging to use, uh, your own keys. For example, it might, uh, break some processes like, , if you want to repeatable, uh, Ries for chunk upload in some clouds. if you use their keys, it's fine, but if you use customer managed keys, it might not work and you need to retry whole file.

Which might be one gigabyte in Postgres. Well, everything is stored... all data and indexes are split into one-gigabyte files, but if you compress them, it'll probably be 300-400 megabytes, depending on the data, and so on. In this case, I think it means that if you want to use your own keys, you probably need to perform retries of whole files, and that's not super efficient.

So there are many challenges there. And also, if you encrypt... this is a security topic, so it's very big. For example, one of the problems: encryption can work in both ways, and sometimes encryption is used against the data owner.

You know, like ransomware, which encrypts all your data, and then they say they will give you the key only if you pay. So how do you protect against that? And sometimes people see a bigger danger in this area. I mean, not avoiding encryption, but you probably need to store backups in two clouds, for example.

Because one cloud account can be stolen; you've lost access to everything. So there are two big risks. One risk is leaking data, a very big risk, but losing data is a different risk, and encryption can be used against you in that second case.

So, yeah.

[00:19:46] Michael: Is it worth mentioning a couple of third-party tools?

[00:19:50] Nikolay: Yeah.


[00:19:51] Michael: Like, well, not tools necessarily, but I think both CYBERTEC and EDB have non-core solutions here for that.

[00:20:01] Michael: Either in production, like transparent data encryption from CYBERTEC.

[00:20:05] Nikolay: Oh, it's different. It's at an upper level. It's not encrypting the disk; it's encrypting the data inside Postgres.


[00:20:13] Michael: Which then results in even your backups being encrypted.

[00:20:18] Nikolay: To me, as a Postgres user, I think it's a very good feature we should all have. But I remember some discussions about it, and some dead end in development. I mean, somehow it's not in core; it cannot be brought to core yet, and so on. I don't remember the details, unfortunately, but in my opinion it should be in Postgres as a feature. It would be great.

[00:20:41] Michael: Yeah, I'll link to the ones that I'm aware of in the show notes, just in case anybody does want or need this and wasn't aware of them. I think we've probably got time for one more. I'm scared that the one I'm gonna pick is a bit big, but do you have any on the list that you wanted to make sure we talked about?

Question 5: migration from other DBMSs

[00:20:56] Nikolay: No, but I can choose a couple. Let's see... Well, migration from other databases, like Oracle to Postgres, and also from non-relational ones, like Couchbase or Cassandra to Postgres. It's a big topic, but unfortunately, in this case, I'm not a big expert in migration.

The main migration I did was from MySQL to Postgres, many, many years ago, in my own project. But my colleagues did a lot of work migrating from Oracle and so on, and I understand the process. Of course, it usually depends on how much code you have on the database side.

[00:21:28] Michael: And by that you mean like procedures and functions,

[00:21:31] Nikolay: Right, right. Migrating the schema is quite easy, but if you have a lot of PL/SQL or T-SQL code, you need to rewrite it, and this will probably take a lot of time. Converting the schema is also a challenge, but it's solvable; there are automation tools. You will deal with some...

...issues, which are probably easy to fix. The key is to test a lot, as usual: experiments, right? And the second biggest challenge, after this server-side code, is if you want a near-zero-downtime migration. In this case "migration" is used in its proper meaning, I think, not in the weird meaning of changing the schema. Well, we also change the schema, but really we change the engine underneath the schema.

Right. But yeah, in this case you need some replication solution, a logical replication solution.

[00:22:23] Michael: Like change data capture type thing.

[00:22:25] Nikolay: Yeah.

[00:22:26] Michael: Actually, this is something I have at least secondhand experience of. There was a cross-company working group in France doing a lot of migrations from Oracle, from SQL Server, from Sybase. Postgres has become really popular in France amongst huge organizations, and they wrote...

[00:22:49] Nikolay: Government organizations as well?

[00:22:50] Michael: Yeah, exactly, lots of large organizations, including government agencies. And they collaborated on a guide, but it was all in French. I'd spent a while with one of them, so I ended up translating a few of the chapters into English.

So a few of them are thanks to me, and a few of them are thanks to other people. I'll link that up as well, because there's basically a tried-and-tested formula for doing these. A lot of consultancies make a lot of their money helping people do this, so you can get help from others. But yeah, it's not a small project, because a lot of these databases do encourage using stored procedures and functions.

So yeah, if you have a lot of those, don't expect it to be a small project.

[00:23:35] Nikolay: What about NoSQL database systems to Postgres?

[00:23:39] Michael: I haven't seen much of it, to be honest, but hopefully we will in the coming years.

[00:23:44] Nikolay: I saw cases, but I don't remember any big questions about it. Just do it, that's it.

[00:23:51] Michael: I guess it's quite simple, right? I guess, by definition, it doesn't have much schema.

[00:23:56] Nikolay: If it's MongoDB, there is a new project called FerretDB, which speaks the Mongo protocol. So that's one way: just use some extension or some project on top, which will help. But if it's Cassandra or Couchbase, I don't know. You can use JSON as usual, right? There will be difficulties. Definitely there will be difficulties.
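Document-style data from a NoSQL system maps naturally onto Postgres's jsonb; a small hypothetical sketch:

```sql
-- A document-style table, similar to what a NoSQL migration might start with.
CREATE TABLE docs (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body jsonb NOT NULL
);

INSERT INTO docs (body)
VALUES ('{"user": "alice", "tags": ["pg", "migration"]}');

-- A GIN index makes containment queries fast:
CREATE INDEX ON docs USING gin (body);

SELECT id FROM docs WHERE body @> '{"user": "alice"}';
```

Over time, frequently queried fields can be pulled out into regular columns to benefit from the relational model.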

[00:24:20] Michael: Yeah, well, if anybody does have experience with that, let us know. But I imagine those are much simpler projects.


[00:24:27] Nikolay: With complex queries to JSON documents in some NoSQL JSON-like storage, it can be easy to convert them as is, but you probably won't have good performance, and also consistency. And why do you migrate to Postgres then? You won't benefit from its strong sides: the strong relational data model, ACID, and so on. Probably you should move some parts of your data to the relational model.

[00:24:56] Michael: Maybe some of it, yeah.

[00:24:58] Nikolay: Yes. And in this case, it's kind of like building something from scratch.

[00:25:04] Michael: Okay, good point. Actually, performance is a big part of some of these migrations. Once you've done the schema, once you've done the code, they sometimes get stuck, and you see it come up on the mailing lists quite a lot. People saying: this used to take one second in Oracle and now it's 20 seconds in Postgres.

What can I do about it? You know, you only get the squeaky wheels; you only hear about the queries that are slower, of course, because they're the ones holding the project up. But it's quite common: if you've got thousands of different queries going through your system, a few of them are probably gonna be slower in Postgres than in Oracle without some tuning.


[00:25:38] Nikolay: Well to, to be fully fair, SQL Sovereign Oracle. Optimization code is much more complex, sophisticated, bigger than Postgres, but on the other side, Oracle has hints and, uh, tendency to use them quite a lot. Manual control on plan of, of plans. But also like SQL Server, if you compare performance with, sometimes I saw, saw some works, uh, SQL servers winning.

Quite obviously, quite often. But, , it does mean that in we cannot make things work well and it's improving also. Right? So, so I dunno, like, I, I understand that it's not that simple that like, it's easy to improve. well, code base is huge. Uh, some methods for query optimization used in commercial big database systems are very interesting. And, many of the sgu doesn't have yet. So

[00:26:38] Michael: That's a good point, actually. Sometimes it doesn't have them natively, like an index skip scan, for example. But...

[00:26:46] Nikolay: Loose index scan, you need to implement it yourself manually, and other systems have it. And we got transaction control in stored procedures, not functions, in Postgres only in version 11 or 12, when was it?

[00:27:00] Michael: Maybe even 11. Yeah, exactly.

[00:27:03] Nikolay: Well, those guys have had it for many years. So, I mean, we should not be like "we are the best". We are the best in many areas, but not in all, right? And understanding the pros and cons is important. Not being like "it's all good". No, sometimes it's not that good, and improvements are needed, so workarounds are needed, like the loose index scan.

[00:27:30] Michael: All I meant is it can be a big part of those migrations; they can get stuck at that point.

[00:27:35] Nikolay: Right. And, well, I can say Postgres has so many things. Usually we can say: okay, maybe not in the way you got used to, but in a different way we can achieve very good performance, definitely. And you can put very mission-critical systems under big TPS, with a lot of data, a lot of TPS, and so on and so on. So it's possible to build big, reliable systems using Postgres.

[00:28:01] Michael: Nice. And I think this is another one of those where we could do a whole episode.

[00:28:05] Nikolay: Yeah, migration is a huge topic. Again, I'm not an expert, but I can explore it and say some things, definitely. Well, maybe one small last thing and that's it. What do you think?

[00:28:14] Michael: Sounds good. Yeah, go on then, one more.

[00:28:17] Nikolay: Okay.

Question 6: latest failover best practices

[00:28:18] Nikolay: Latest failover best practices.

[00:28:21] Michael: Oh, you picked a nice small one then.

[00:28:23] Nikolay: Failover best practices. Failover is when things go wrong, right? What do you do when a failover happens? The answer is: if you are well prepared, you shouldn't do anything. And this is the key, right? I remember times when no good failover systems existed, and there also was a discussion about autofailover in Postgres; in my opinion, it should have been built into Postgres, but so far it doesn't seem to be happening at all. But we have Patroni and others. The recommendation is to use Patroni, or another system which follows well-developed consensus algorithms, right, like Raft. But if you use repmgr, I have bad news for you: split brain is very frequent. In large systems, or in cases when you have many clusters, it's very likely. So migrate from it, this is the key, and just use Patroni, for example. Or, I don't know... Patroni is the obvious winner right now.

[00:29:33] Michael: Are there any particularly good guides or books on it that you would recommend?

[00:29:38] Nikolay: Well, just the documentation. There are some tricks there, but it's worth a separate episode, probably. But one of the things, for example, you should understand: we mentioned asynchronous replicas, so if you have an asynchronous replica, during failover you might have data loss. Patroni defines, by default, if I remember correctly, 10 megabytes, mebibytes, of data that might be lost in case of failover. So this should be understood. Or you need to start using quorum commit and so on, so a commit goes to at least two nodes, and Patroni will choose the best one, and probably you won't lose any data during failover.
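A hedged sketch of the relevant Patroni settings (the values are illustrative, not recommendations, and the exact defaults should be checked against your Patroni version):

```
# patroni.yml (fragment)

# Cap how much WAL, in bytes, a replica may lag behind and still be
# promoted; up to this much data could be lost on failover.
maximum_lag_on_failover: 1048576

# Synchronous replication managed by Patroni: commits must reach
# the configured number of standbys before returning.
synchronous_mode: true
synchronous_node_count: 2
```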

Actually, no. I suspect maybe the author of this question meant something different: not failover, but switchover.

[00:30:26] Michael: Or maybe high availability? I think it might be the same as the one above, which is also a huge topic.

[00:30:33] Nikolay: Perhaps, yeah. Because there are also best practices for how to perform a switchover, to avoid long downtime and not lose data. It's not rocket science, but there are some tricks there. Let's promise to explore both of these areas in the future.

[00:30:50] Michael: Sounds good. Well, thanks everybody for these questions and suggestions. Thank you, Nikolay.

[00:30:56] Nikolay: Thank you for being back and continuing this, because I was in fear that you wanted to stop it.

[00:31:05] Michael: You're crazy. I said I'll be back.

[00:31:08] Nikolay: Okay. And by the way, don't watch all those draft recordings in Riverside. Just delete them, they are all bad.

[00:31:18] Michael: That's so funny. All right. Take care.

[00:31:20] Nikolay: Thank you everyone. Thank you, Michael. Bye.

[00:31:22] Michael: Cheers. Bye.

Some kind things our listeners have said