[00:00:00] Michael: Hello, and welcome to PostgresFM, a weekly show about all things PostgreSQL. I am Michael, founder of pgMustard. This is my cohost Nikolay, founder of Postgres.ai. Hey, Nikolay, what are we gonna be talking about today?

[00:00:11] Nikolay: your time to announce

[00:00:13] Michael: Well, yeah, so.

Anybody following you on Twitter probably saw your tweet this week. It got a couple of hundred retweets, a lot of likes. And it was a summary of tips for beginners, you said, right? So, 10 tips for people that are new to Postgres.

So we thought we'd discuss that.

[00:00:32] Nikolay: New probably means people who have used Postgres for a few years already. But, like, you know, you can be a developer, a backend developer, or even a frontend developer. Sometimes you use some ORM, Rails, Django or something, maybe touch some SQL sometimes, but consider it some kind of black box, a relational black box, you know?

But at some point, of course, it's worth understanding at least something, to scratch the surface and understand what's happening under the hood, just to avoid bad surprises, like: why is my delete so slow, or why is it behaving like that? Like, I deleted a lot of data, but disk space consumption is still the same, right? Or why bloat accumulates so quickly, and so on.

[00:01:23] Michael: Yeah, so I think you're right. I think some of these will be useful to people, even people that have been using Postgres for years. But I also think some of them will be really useful to people that have only picked it up recently, or whose organization is migrating to Postgres from another database. So I think there's a whole range of people that are new to Postgres.

And even I enjoyed reading these and, you know, thought, oh yeah, I do need to look into that, or, I've never installed that extension, or something. So yeah, even for me, there were a few things.

[00:01:51] Nikolay: Right. But as you very correctly noticed, all of these things we've already discussed. So each one of those 10 tips has a link, or can have a link, to a previous episode for a deeper dive, right?

[00:02:11] Michael: Yeah, absolutely. I think some of them aren't the exact same topic, so I'm interested to ask you a couple of questions on a few of them. But yeah, I'll include links to episodes for people that are maybe newer to our podcast and haven't listened to every single episode we've done.

[00:02:24] Nikolay: Right. And from there you can go to very beautiful blog posts, articles, talks, books, and so on and so on. You can use this episode as an entry point, and other episodes as deeper entry points to specific topics. And from there you can learn a lot. So it's kind of like a tree, right?

[00:02:43] Michael: Yeah. Yeah, absolutely. Right. So I mean, we don't have that long to discuss 10 things. Should we get started?

[00:02:49] Nikolay: Yeah, let's,

Tip 1: tuples are physical versions of rows
---

[00:02:50] Michael: So the first one on our list is all about tuples. Do you want to give the summary of this one?

[00:02:56] Nikolay: Yeah. So, I don't know why everyone continues mixing, I know not everyone, but many people continue mixing up rows and tuples. Tuples are physical versions of rows. It's the MVCC model, the multi-version concurrency control model Postgres uses, and you can feel them if you look into the secret columns, not secret, but invisible columns:

ctid, xmin, xmax, and we discussed it, right? So ctid is the physical location of this tuple. Again, a tuple is a physical version of a row, so if you change it, the physical location usually also changes. Well, in most cases it changes. And ctid consists of two parts: a page number and an internal address inside the page.

So like I've said, my usual trick, if you need to extract only the page number, for example: you convert it to the point data type and then address its X or Y component. So X is the page, Y is the offset, and then you can extract it and convert it back to a number. And that's it.

It's needed because this tid data type, I think, lacks some operators or functions. By the way, maybe that's a small addition that could be a first contribution, if you want to try writing some C code, it's quite simple. So this ctid, it's interesting, and one of my favorite tricks to surprise people is saying: update table set id equals id where id equals one.

And you suddenly see that ctid, this hidden column, changes its value even if logically you didn't do anything, because Postgres copies the whole tuple each time you change it. In the best case, it can happen inside the same page, and so you don't receive index updates, you don't need to update all indexes.

In the worst case, it goes to a different page and all indexes should be updated, unfortunately, each receiving new entries. And xmin, xmax, it's like date of birth and date of death, in terms of transaction IDs. We discussed it, right? We have episodes about MVCC, about bloat, and so on, but it's all related.
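
For reference, here is a minimal sketch of what Nikolay describes; the table is hypothetical, and the ctid values you see will vary:

```sql
-- hypothetical demo table
CREATE TABLE t (id int PRIMARY KEY, payload text);
INSERT INTO t VALUES (1, 'hello');

SELECT ctid, xmin, xmax, * FROM t;   -- e.g. ctid = (0,1)

UPDATE t SET id = id WHERE id = 1;   -- logically a no-op...

SELECT ctid, xmin, xmax, * FROM t;   -- ...but ctid changed: a new tuple (row version) was written

-- the point-cast trick to extract page number and offset from ctid
SELECT (ctid::text::point)[0]::bigint AS page_number,
       (ctid::text::point)[1]::bigint AS item_offset
FROM t;
```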

[00:05:10] Michael: I think the episode we discussed this most in was our other, more beginner-focused episode, which was How to become a DBA. So I'll link up that one for sure. I thought it was a great tip. I had nothing to add. I think it's quite simple as well, and you've gone into good detail.

[00:05:25] Nikolay: Right, but I'm quite sure this should be clear not only to DBAs; this knowledge should be clear to everyone who works with Postgres, because even if these columns are hidden, you will inevitably deal with bloat, with slowdowns, with some slow delete, some slow update, a massive update, for example, and so on. And understanding this starts with understanding what a tuple is.

[00:05:50] Michael: Yeah, I completely agree. It's gonna be coming up again in a couple of the later tips as well.

[00:05:54] Nikolay: Because DBAs usually don't write most of the code. Most of the code is written by developers, and if developers don't understand that update set id equals id leads to a physical action, quite an expensive one, they will continue writing it, right? If they understand it, they will avoid it, because they understand what's happening under the hood.

[00:06:15] Michael: I agree. Number two.

[00:06:17] Nikolay: number two.

Tip 2: always use EXPLAIN (ANALYZE, BUFFERS)
---

[00:06:18] Michael: Number two is one that's close to my heart as well, isn't it? Always use BUFFERS when checking EXPLAIN ANALYZE.

[00:06:24] Nikolay: Yeah, we mentioned it so many times. If it's not your first episode, you definitely know it and there's nothing to add, but if it's your first episode, just remember: BUFFERS should always be inside the parentheses with ANALYZE. When you write EXPLAIN ANALYZE, ANALYZE should be in parentheses, comma, BUFFERS.

Don't use plain EXPLAIN ANALYZE, use EXPLAIN (ANALYZE, BUFFERS), always. No exceptions.

[00:06:47] Michael: I was gonna add to it, especially if you are brand new to using EXPLAIN: you might have to ask a colleague or somebody online for help. I would also recommend, when you're getting the explain plan, to use VERBOSE and SETTINGS as well. They're two more parameters. VERBOSE because it gives the people reading it more information about your schema, as well as output columns and things like that.

So more information for people that aren't as familiar with the query that you're working with. And SETTINGS because if you've changed anything locally, that will show up, or if you've configured the database differently from how production is configured, or something like that, those things will show up and they'll help people avoid wasting time when helping you. They might spot an obvious issue in there.
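
As a rough illustration of what these options look like in practice (the table and query here are made up, the EXPLAIN options are standard):

```sql
-- hypothetical table, just so the statements below are runnable
CREATE TABLE orders (id bigserial PRIMARY KEY, customer_id int, created_at timestamptz);

-- the baseline recommendation: always include BUFFERS with ANALYZE
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

-- when sharing a plan with others, VERBOSE and SETTINGS add useful context
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, SETTINGS)
SELECT * FROM orders WHERE customer_id = 42;
```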

[00:07:33] Nikolay: Yeah, yeah. So deviations from default settings. But I also think, okay, VERBOSE will make it verbose; if you use JSON format, it'll be even more verbose, and so on, but more difficult for humans to consume. But I also think sometimes it makes sense to turn off costs, for example, if you just care about execution in some cases and you want narrower output to share with your colleagues.

In some cases it makes sense to remove costs. But of course costs are very important to understand what the planner is doing.

[00:08:10] Michael: As a beginner, I wouldn't be doing that, just in case somebody helping you can...

[00:08:14] Nikolay: Right. But if you have VERBOSE, it's so wide and long and so on. I just see sometimes people share plans with COSTS OFF because of the place where they share them; if they don't do it, it's hard to read the plan if it's textual.

[00:08:32] Michael: I kind of disagree. BUFFERS, for example, in JSON format, buffers add 12 keys per node. So I mean, that's a lot more than VERBOSE is adding.

[00:08:42] Nikolay: Well, okay, okay. I just noticed quite experienced guys do it, COSTS OFF. I also started to use it sometimes, especially if you already collected a plan without execution using just EXPLAIN. There is no reason to repeat the same information in subsequent plans, right, if it's the same plan. But sometimes, of course, you add an index and the plan changes, so if you don't use costs, then it's hard to compare and understand why the planner chose this, what the planner thinks about the cost of particular nodes inside the plan.

Okay. Okay. I agree with you, but sometimes still like, okay,

[00:09:18] Michael: Let's move on.

[00:09:19] Nikolay: move on.

[00:09:20] Michael: so always use buffers no matter what else we've just said. Use buffers.

Tip 3: throw away pgAdmin
---

[00:09:23] Michael: Number three, you've said throw away pgAdmin. Start with...

[00:09:28] Nikolay: Yeah, I said that many years ago, and I keep saying the same.

[00:09:31] Michael: So, why, tell us why.

[00:09:34] Nikolay: Many, many reasons, and where to start? Okay, this is my personal opinion, the same opinion I have, for example, about Windows, but I understand that some people like Windows, so it's okay. But in this case, I see that it's not only my opinion; every person I talk to says the same.

But pgAdmin occupies a very important place. It pretends to be the official Postgres UI, a graphical UI, right? But it's a separate project. It's not developed by the same people, it has a different release cycle and so on, and I think it's advertised unfairly. And it forms wrong expectations, because the quality of this product is not good. That's it. It has a lot of issues, UX issues first of all.

[00:10:25] Michael: I think you're right. I think there are better free and even open source graphical user interfaces available. I think you're right also because it comes with a couple of the officially linked distributions. So if you download Postgres for Mac or for Windows, pgAdmin comes bundled, which does lend itself to believing it's official.

But equally, I was looking at the Stack Overflow survey results recently, and I was seeing that even though Postgres has gone ahead of MySQL in terms of popularity, in terms of usage even for professional developers, when people are learning to code it's still a long way behind MySQL. And I was actually thinking that one of the reasons is, I think it is useful for beginners to have a graphical interface, in a lot of cases.

So I can see the temptation to bundle one, but yeah, I think this is good advice and there are good alternatives, which is really good.

[00:11:18] Nikolay: Yeah, there are a lot of alternatives, and just check opinions, don't trust me. Check reviews and opinions from others, and you'll see so many people don't like pgAdmin, especially when they've already seen something better. There are some better tools.

That's if you want a UI. But the main advice from me is: learn the console, psql and tmux and vi, and well, maybe even without vi, some people just use nano and that's it, vi is not for them.

psql is something worth learning. Even if you spend more of your time in a UI, you'll still benefit from understanding some psql tricks, because a CLI is better for programmers: you can code something in it, you can make your colleagues repeat the very same steps you did, right?

Because it'll be a script.
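
For anyone taking this advice, a few psql basics worth knowing (the table name and script file below are just examples):

```sql
-- psql meta-commands, typed at the psql prompt (not SQL statements)
\x auto          -- expanded display when rows are too wide for the screen
\timing on       -- print execution time after every query
\d+ orders       -- describe a table, including size and storage details
\e               -- edit the current query buffer in your $EDITOR
\watch 5         -- re-run the previous query every 5 seconds
\i report.sql    -- run a script file, so colleagues can repeat the exact same steps
```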

[00:12:06] Michael: Well, more than that. It's also Postgres quality, it's also shipped with Postgres itself.

[00:12:11] Nikolay: Yes. A lot of people, not the same people, but a lot of the people who work on the Postgres engine itself, they also sometimes work on psql. And it has so many features. It has the same release cycle, the same quality, same everything. psql is definitely a native tool; pgAdmin is not a native tool, it pretends to be native, but it's not.

[00:12:34] Michael: One counter to this I have heard from a few people, who also said they prefer other interfaces, is that they still use pgAdmin for its PL/pgSQL debugger. I don't know of another tool that has one built in.

[00:12:49] Nikolay: I think with tmux and vi you can still have a debugger, it's a hardcore approach. I personally don't use a debugger, honestly. I just use print-lining everywhere somehow. I've used debuggers in many languages and environments, quite often trying to pretend they work, but somehow print-lining is always with you.

Always. And you need to know how to do it right. In PL/pgSQL, there are approaches to doing it right. For example, you can RAISE DEBUG, right? Or RAISE LOG, RAISE INFO, RAISE NOTICE...

[00:13:21] Michael: Mm-hmm.

[00:13:22] Nikolay: ...RAISE EXCEPTION, although that's an error, but it's called an exception, and you can catch them.

In that case you will deal with subtransactions, also an interesting thing, I have an article about subtransactions warning about their issues under heavy load. But print-lining always works. Maybe less convenient, maybe a little bit more time consuming in some cases, but it's good.
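
A minimal sketch of the print-line style Nikolay mentions, using the RAISE levels above (the function and the orders table are hypothetical):

```sql
CREATE OR REPLACE FUNCTION process_order(p_order_id bigint) RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
  n int;
BEGIN
  RAISE DEBUG 'processing order %', p_order_id;  -- visible only when client_min_messages / log_min_messages allow it
  SELECT count(*) INTO n FROM orders WHERE id = p_order_id;
  RAISE NOTICE 'found % matching rows', n;       -- sent to the client by default
  IF n = 0 THEN
    -- an error; catching it in a BEGIN ... EXCEPTION block uses a subtransaction
    RAISE EXCEPTION 'order % not found', p_order_id;
  END IF;
END;
$$;
```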

So, I respect debuggers, obviously, a debugger is a great thing, it's professional level, right? But still, I personally don't use it, so I cannot recommend it. But to be fair, pgAdmin also has a schema diff component, for example. I looked into it, it's quite interesting code, and you shouldn't expect magic there.

I mean, for schema diff, it'll produce some DDL to convert one schema to another, without context. So you might have issues if, for example, you renamed a column versus dropped and added a column: data can be lost silently, you should be careful with these steps. And also, of course, it'll be DDL written in a non-zero-downtime fashion.

And I don't see anyone who has implemented zero-downtime diffs; in terms of DDL, it's a hard topic. But some other UIs also have a component for diffs. Also, pgAdmin has a good component to work with GIS systems and present some data on a map, I just saw people using this quite well. I personally don't do it, but yes, it has some good features. I think DBeaver also has that already, right?

[00:15:02] Michael: That's actually, when you look at comparison discussions online between the two, or between various GUIs, the one that comes up most often for praise when it comes to displaying graphical information is DBeaver. That's what a lot of people love about it.

[00:15:19] Nikolay: So, bottom line: I personally don't like pgAdmin at all. I think it should be replaced everywhere. But I'm not alone, and if you don't trust my opinion here, just check other reviews. That's it.

[00:15:32] Michael: let's move on.

Tip 4: enable as much logging as you can afford
---

[00:15:33] Nikolay: Next is, oh, settings, logging settings, right? So, logging is super important, and usually you realize it when you need it: you find yourself working with default logging, something happened, and you don't see it, you don't see the data.

So you should enable logging of checkpoints, of autovacuum. I recommend logging everything, of course. If you have a lot of tiny tables and autovacuum is constantly touching them, writing, like, okay, I just processed a one-megabyte table, it's maybe spamming your logs, but it's still usually worth understanding the details, and so on.

I usually recommend enabling logging for all autovacuum runs, and temp files, I also prefer logging all temp files. And log_lock_waits, which is based on deadlock_timeout, which is one second by default. Some people tend to increase it. It's for deadlock detection, but it also indirectly defines when the event of being blocked by another session will be logged, right?

This is disabled by default, so if some locking issues are happening, you don't see them in the logs, unfortunately, and that's bad. I prefer seeing everything here. So just enable it, and every time some session waits more than one second, by default, that deadlock_timeout, you will see it.

You will see details about this case, and it'll help you improve your application and so on. And also log_min_duration_statement. Well, sometimes it's okay to set it to zero, but very rarely, because... yeah, yeah, yeah, to log all queries with their duration.

[00:17:10] Michael: I'm shaking my head.

[00:17:11] Nikolay: Well, there is an opinion that we should have it.

For example, if you have nginx, right? It's normal to log all requests to an HTTP server, I mean nginx, for example, an HTTP server: we log all requests. Why, with a database, can we not log all requests? Because usually, especially if you use journald, and also if you put your logs on some magnetic device, well, it'll be sequential writes.

It's not that bad usually, but still, you can do the math: how many IOPS and megabytes per second you will need, right? But yeah, the observer effect is no joke. I put very important production systems down a few times myself, multiple times, and it was a super expensive experiment and the lessons learned were very hard.

So yeah, I wouldn't do it on serious production, I mean setting it to zero. But trying to go down, definitely, yes.

[00:18:21] Michael: Well, and setting it to something is the key here, I think.

[00:18:24] Nikolay: Not minus one, you mean

[00:18:26] Michael: Yeah, something positive. But there's a value that anybody can afford here, right? Like, there's...

[00:18:31] Nikolay: five seconds. One second.

[00:18:33] Michael: it would be, yeah, right? So like if you can't afford to do that, you've got bigger problems.

[00:18:38] Nikolay: Right, because if everything is slow, there won't be so many queries to...

[00:18:43] Michael: You can't have that many.

[00:18:45] Nikolay: Right. But I think it's super fast. Still, you need to learn your observer effect, understand the capabilities of wherever your logs go, and maybe test it and understand: okay, we are okay to log maybe 10 log entries per second normally, for example, and maybe a hundred during peak times; it's okay for us.

Or maybe we should adjust settings and raise some bars, like log_min_duration_statement or log_autovacuum_min_duration, we can raise them a little bit, and so on. Yeah.
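
Pulling the settings from this tip together, a possible starting point might look like this; the parameter names are real, but the thresholds are only examples of something you can afford, not recommendations:

```sql
ALTER SYSTEM SET log_checkpoints = on;                -- on by default from Postgres 15
ALTER SYSTEM SET log_autovacuum_min_duration = 0;     -- log every autovacuum run
ALTER SYSTEM SET log_temp_files = 0;                  -- log every temp file created
ALTER SYSTEM SET log_lock_waits = on;                 -- logged after deadlock_timeout (1s by default)
ALTER SYSTEM SET log_min_duration_statement = '1s';   -- something positive; never 0 on a busy production system
SELECT pg_reload_conf();
```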

[00:19:17] Michael: Wonderful. Should we move on?

[00:19:19] Nikolay: Yeah.

Tip 5: install pg_stat_statements
---

[00:19:20] Michael: So it's quite related, still looking at performance, but also a bit more logging here. We've got: install pg_stat_statements, which is a must, you've said. And if you can, a few more extensions: pg_stat_kcache, pg_wait_sampling, auto_explain, you even mentioned pgsentinel here as well, but with a few extra decisions on settings for those last few.

[00:19:41] Nikolay: Right, right. So yeah, pg_stat_statements should be installed everywhere, everywhere. The overhead is very low. We had an episode about it, of course. And the other extensions, these pieces of advice are mostly for people who manage Postgres themselves, because cloud providers usually provide something extra on top of pg_stat_statements.

For example, RDS provides Performance Insights, so you don't need pg_wait_sampling, but they don't provide anything similar to pg_stat_kcache, which gives you the physical angle for query analysis, unfortunately. So you cannot answer who is responsible for high CPU consumption, which query is responsible for high CPU consumption.

You can only answer this indirectly; with pg_stat_kcache you can answer it directly. So yeah, these extensions are good things. But auto_explain is everywhere, right? auto_explain is everywhere.
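
For anyone setting up pg_stat_statements for the first time, a minimal sketch (the top-queries query assumes Postgres 13+ column names):

```sql
-- pg_stat_statements must be preloaded; this overwrites the setting,
-- so append instead of replace if you already preload other libraries
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- restart Postgres, then in each database where you want the view:
CREATE EXTENSION pg_stat_statements;

-- top 10 queries by total execution time
SELECT queryid, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```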

[00:20:34] Michael: It is now. Yeah, we had an episode on that as well,

where I claimed that it wasn't on Heroku, but then the next day it was available.

[00:20:41] Nikolay: Yeah, yeah, I remember this. And, worth reminding, you have an article saying that sometimes it's okay to enable the timing option for auto_explain. And of course we want buffers there also. Both have some overhead, and your post was about how, unlike the common opinion, sometimes this overhead is not huge and you can enable them both.

Probably always,

[00:21:07] Michael: And also never with a threshold of zero milliseconds.

[00:21:10] Nikolay: Oh, right, right. So again, it's only for slower queries. But zero is fine if you use sampling.

And it was you who told me that sampling in auto_explain has existed for many years already.

Unlike the slow query logging, which is log_min_duration_statement based. Now, in modern Postgres, we have sampling capabilities for both, and these things kind of compete.

If you choose auto_explain, you don't need log_min_duration_statement, because... well, we discussed this, let's keep the details for those who are interested, and we have a link to the episode where we dive deeper into this topic.
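
A sketch of what auto_explain with sampling can look like; the parameter names are real auto_explain settings, the values are only illustrative:

```sql
LOAD 'auto_explain';                                          -- load it in this session so the parameters below are recognized
ALTER SYSTEM SET session_preload_libraries = 'auto_explain';  -- load it for new sessions (shared_preload_libraries also works)
ALTER SYSTEM SET auto_explain.log_min_duration = 0;           -- consider every query...
ALTER SYSTEM SET auto_explain.sample_rate = 0.01;             -- ...but log plans for only ~1% of them
ALTER SYSTEM SET auto_explain.log_analyze = on;
ALTER SYSTEM SET auto_explain.log_buffers = on;
ALTER SYSTEM SET auto_explain.log_timing = on;                -- per-node timing; measure the overhead before enabling in production
SELECT pg_reload_conf();
```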

[00:21:54] Michael: Perfect.

[00:21:54] Nikolay: Let's move on.

Tip 6: run experiments on realistic data sets (use thin cloning and branching)
---

[00:21:55] Nikolay: So my opinion is that all development should happen on larger data sets. It's not worth wasting time developing on small databases, on empty databases; just take a bigger, realistic size, a thin clone, and develop using it. Of course, you should remove PII, personally identifiable information like phone numbers, SSNs, emails and so on, because some auditors won't be happy if you use a production clone for development. But not always; sometimes people are okay with cloning. In many projects it's fine, because you might work with just public data, for example.

But my point is, I see people tend to develop with small or fake, mocked databases, and then they are surprised that some queries work slowly or don't work at all.

[00:22:54] Michael: Yeah, exactly. I think people are surprised not only that those can be slower, but also that it can affect planner decisions quite so drastically. I think people understand that join algorithms depend on data sizes, but they don't always realize that scan operations can depend on them as well, and all sorts of planner choices.

And it's not even just query performance, right, it's also spotting locking issues. If you have a tiny table, you might not spot something that's taking an exclusive lock that's gonna cause you downtime if you apply it in production. So I think it can help you spot other kinds of issues as well, by doing this.

I think it's a good tip.

[00:23:31] Nikolay: Right. But even aside from performance, if you present your feature to your product manager or project manager, or some QA specialist, or other people, not necessarily engineers, and it has some random data, you don't have good page navigation.

Search doesn't work, because you just generated a few rows, that's it.

[00:23:54] Michael: Mm-hmm.

[00:23:54] Nikolay: It's also not working well, compared to the case when you have a production clone with PII removed, or you generated more realistic data sets, and you can share a clone quickly using Database Lab, for example, with database branching, cheap and fast.

You can share it and they can independently test it. It's very good. It unblocks many processes inside the development process that aren't obvious in the beginning. But I also wanted to say that the idea of using large databases sometimes conflicts with another idea: I want to take my laptop and go on vacation, or work on a plane where the internet is slower.

So people, of course, want to have the database on their laptops, on their working machines, sometimes to avoid the need for an internet connection. And I understand that, of course. And in this case, if you want one terabyte, well, today we probably already have disks with one terabyte.

Sometimes you need subsetting, of course, so you need to minimize. But still, you can work with more realistic data, and if you can have one terabyte on disk but clone it 10 times without extra cost, in terms of money and time, this is how things should work. And it's worth noticing: we have copy-on-write everywhere already.

We have it in containers, it's copy-on-write at the file level, layers, overlay2, like this. When you create a container based on some image, it doesn't copy everything, because it's copy-on-write. And we also have Git. Git is also copy-on-write at the file level. You create a branch, it doesn't copy the whole repository, right? It just virtually creates a copy, but you share the same files among all branches, and they stay the same until you change some file, right? Same here: you need to develop applying copy-on-write to databases. And this is exactly what we implemented, trying to help people.

[00:25:55] Michael: Yeah, you've spent a lot of time thinking about this. We did a whole episode on it. I think the database branching episode is the best one to share here, so I'll do that.

[00:26:02] Nikolay: Good. Yeah. Agreed.

[00:26:04] Michael: Nice. so we've got a few more, but luckily some of these are shorter.

Tip 7: make sure data checksums are enabled
---

[00:26:07] Michael: Make sure data checksums are enabled. So this is about avoiding corruption, and we've got a whole episode on that, right? Anything additional to add there?

[00:26:16] Nikolay: Well, unfortunately, clusters created a few years ago probably don't have them enabled, and I think Postgres itself still has this off by default, because it needs to be done at initdb time, when you create the cluster. Of course they have overhead, everything has overhead, but this overhead is worth having, because without them risks are high, and you need to implement a lot of additional procedures to control your corruption risks. And somebody, by the way, asked: what if you have checksums at a lower level, for example if you have ZFS?

[00:26:49] Michael: Mm-hmm.

[00:26:50] Nikolay: Like Database Lab, because it uses ZFS as well. In this case, probably you don't need data checksums? I think data checksums are at a higher level, they protect not only from hardware issues.

For example, when data is corrupted, data checksums say: okay, I expected to read something, but I see this page has something else, so we have corruption. I think it also protects from particular, maybe not very likely, but some bugs in Postgres itself, or in the file system itself, because something at a higher level feels safer to me.

You created some content in shared buffers, you have a checksum, and then everything, Postgres itself, the file system, the operating system, and then hardware, all of them are under control. If you do it at a lower level, you have some layers of your system which bypass that checksumming, right? Of course, it's already good if you have checksumming at the

file system level, definitely good, but it feels not as complete. Corruption can happen in many, many places.
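
For context, checking whether a cluster has checksums is a one-liner; enabling them afterwards happens outside SQL:

```sql
SHOW data_checksums;
-- 'off' means the cluster was created without them; they can be chosen at initdb time
-- (initdb --data-checksums) or enabled later, with the cluster shut down, using the
-- pg_checksums tool (available since Postgres 12)
```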

[00:28:03] Michael: Yeah. Remembering this is for beginners, I guess there are a couple of other things to mention, I think we covered both in the episode, I can't remember. One is that you could do a smoke test to see if you've definitely got some corruption, for example in some of your indexes, using amcheck.

And you can also,

[00:28:19] Nikolay: Use amcheck for heap as well. If you just open the documentation, this was a surprise for me a couple of weeks ago: amcheck is officially for heap as well, because

it has... yes, there is pg_amcheck, which has a parallelization option, dash j. You can use multiple jobs and move faster, of course, using more cores and putting more pressure on your disk I/O system. And you can say heapallindexed, some flag, heap, all indexes.

I remember it. And in this case, if all your tables have unique keys or primary keys, which cover the whole table, a primary key covers the whole table, the heap will be checked as well.

[00:29:00] Michael: Oh, great.

[00:29:01] Nikolay: In this case, you don't need pg_dump to /dev/null anymore, because, it turned out, I recently started researching how to run pg_dump to /dev/null in parallel, and it turned out not to be so trivial. And then someone, well, Andrey Borodin, told me that pg_amcheck can be used for checking everything, both indexes and heap, and that's great.
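
A sketch of what those checks look like at the SQL level; the table and index names are hypothetical, and verify_heapam is only available from Postgres 14:

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;

-- verify a B-tree index, and also confirm every heap tuple is represented in it
SELECT bt_index_parent_check('orders_pkey', heapallindexed => true);

-- verify heap pages of a table directly (amcheck's verify_heapam, PG 14+)
SELECT * FROM verify_heapam('orders');

-- the pg_amcheck command-line tool wraps these checks for the whole cluster and can
-- run them in parallel, e.g. pg_amcheck --all --jobs=8 --heapallindexed
```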

[00:29:26] Michael: Nice. The other thing I wanted to mention is that some backup tools have the option to check for corruption at backup time, which is quite nice if you haven't already.

[00:29:36] Nikolay: But is it checking for corruption in the database or in the backups? Corruption can happen in many places; this is the main rule of corruption, it can happen everywhere.

[00:29:46] Michael: I think it checks it at the database level, but I'm, I could be wrong.

[00:29:50] Nikolay: It should use something existing, like amcheck. Worth looking into it, I don't know, it's interesting. Mm-hmm.

[00:29:58] Michael: Move on.

[00:29:58] Nikolay: Yeah.

Tip 8: tune autovacuum to run frequently and move faster
---

[00:29:59] Nikolay: Number eight is: tune autovacuum to run frequently and to make it move faster.

Yes, defaults are bad. Default settings are not good enough for you in most modern cases, although recently, a few years ago, the defaults were adjusted: cost limit, cost delay. autovacuum_vacuum_cost_delay went down from 20 milliseconds to 2, so 10 times more quota in terms of CPU and especially I/O, and now a single worker can move faster. There are so many things inside autovacuum tuning; basically, you need to make it move faster, because by default it's throttled. So, throttling, and also the frequency of runs: by default, vacuuming happens when 10%, 20% of tuples are affected. It's too infrequent, too conservative. For OLTP, you should use something like 1%, half a percent. So make it run 10 times more frequently, or even 20 times more frequently. And so on.

[00:31:06] Michael: Yep. Completely agree. We've got a whole episode on vacuum. I think that's worth people checking out if they aren't aware of this. But yeah, the best analogy I ever heard was, vacuum's a bit like exercise, if it hurts, you're not doing it often enough.

[00:31:19] Nikolay: Right. And I usually say: if you, for example, update, delete, insert, or roll something back, you produce dead tuples. You put some pressure on, and if you have 96 cores, 128 cores, all your cores can do that, right? They can do writes and produce dead tuples.

So why do you have only three autovacuum workers? Three workers are not enough to clean up after all those guys, right? You should allocate at least 10% of your cores, like 10 workers or maybe 20 workers, to clean up faster, because a lot of work is being done and a lot of cleanup work needs to be done.

[00:31:59] Michael: Yeah. And the bigger your tables get, the worse the default settings get for them, in terms of those scale factors.

[00:32:04] Nikolay: right.

[00:32:05] Michael: so yeah.

[00:32:06] Nikolay: And if you partitioned but didn't change the autovacuum workers number, it's still three even though you partitioned, well, that's not good either. You cannot benefit from having a lot of cores, maybe. So yeah, it's an interesting topic we definitely discussed, but this should be done by everyone, because defaults are not enough in most cases.
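
To make the tip concrete, one possible shape of this tuning; the parameter names are real, the values are illustrative OLTP starting points rather than universal recommendations:

```sql
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01;   -- default is 0.2, i.e. 20% of the table
ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.01;  -- default is 0.1
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';    -- the modern default; it was 20ms before Postgres 12
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;     -- raise the work quota per cycle (default follows vacuum_cost_limit = 200)
ALTER SYSTEM SET autovacuum_max_workers = 10;             -- default is 3; changing this requires a restart
SELECT pg_reload_conf();
```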

[00:32:27] Michael: Agreed.

Tip 9: query optimization will eventually be more important than configuration tuning
---

[00:32:30] Nikolay: Next topic, query optimization.

[00:32:31] Michael: Yeah, this is probably my favorite. I actually looked back and we've done two episodes on this one: the introduction to query optimization, and we called the other one 102 as well. So yeah, obviously I liked your tip here: query optimization will eventually be more important than config tuning.

[00:32:48] Nikolay: Yeah, config tuning roughly once per year is enough. Query optimization should be done every time you make a lot of changes in the application, so if you tend to have a couple of releases per day, you need a lot of optimization activity.

[00:33:03] Michael: Yeah. I've seen quite a few people end up throwing money at this problem instead, you know, instead of quickly tuning a few queries, they scale up the database and...

[00:33:12] Nikolay: People think configuration tuning is something like a silver bullet. No. You change the infrastructure level quite infrequently, for example, you upgrade your VM or move to different infrastructure, upgrade your operating system. In this case, you need to revisit Postgres settings, but usually you've done it with an 80/20 approach:

you did a good enough job, and it's not worth spending time fine-tuning further and further, it won't pay off. But if you have very frequent releases, you have big instability in this area. Fortunately, it's usually not as wide as configuration tuning. Configuration tuning is usually global:

you changed work_mem, the default work_mem, it affects all queries. In query tuning, it's usually very narrow: okay, a couple of tables, several queries working with those tables. So you can scale this process and involve more people in it, especially if they all have independent clone databases. I won't stop saying this, because it's obvious: everyone should have their own full-size database.

[00:34:21] Michael: Yeah. And again, going back to previous tips, pg_stat_statements makes this a lot easier as well.

[00:34:27] Nikolay: Yeah, yeah. Well, for top-down analysis, yes, I agree. Okay.

Tip 10: indexes need to be rebuilt, unfortunately, since their health declines over time
---

[00:34:31] Nikolay: And the last tip is about indexes. I find it interesting that people don't realize that index health declines over time. Even in modern Postgres, even with all the B-tree optimizations that happened in Postgres 13 and 14, if you have Postgres 15, the latest major version, even with it, over time index health will decline, because indexes receive a lot of updates.

They cannot rebalance in all cases, right? They still accumulate some bloat. It's much better with newer Postgres, much better than before; Postgres 12 is much worse than Postgres 15 in this area. But still, you need to reindex occasionally, from time to time, and you should control bloat using bloat estimate queries.

They will produce numbers with some error, sometimes noticeable. Sometimes I saw 20, 30% error: it says 50%, but it has only 20% bloat. It's okay. I mean, we can reindex, but reindexing is a heavy operation and it requires two things: disk I/O, and it also requires some brief locking.

And it can be blocked by autovacuum running in transaction ID wraparound prevention mode. So yeah, you need retries, you need to be careful, and usually you need to schedule this additional work for some weekend or nighttime. But it should be automated, and I think it's already obvious that we need better tooling. It's a shame that cloud providers don't provide additional tools to automate this work, because almost everyone needs it. This was another popular tweet that got a lot of likes, when I criticized cloud providers, because progress went in the wrong direction, in my opinion: they implemented backups, provisioning, replication, auto-failover, and then jumped to very exotic topics like serverless. And they left us alone with all these needs, basically: query analysis, index maintenance,

some of the autovacuum-related work, bloat analysis, repacking, and so on and so on. And monitoring from cloud providers has very slow progress as well, you know, on the down-to-earth topics, not the exotic ones. It's great that we have a lot of progress on the frontier of exotic topics, but we still need to

close some gaps in administration, and I think this is one of them: index maintenance, also unused indexes, redundant indexes, invalid indexes. Sorry, I cannot stop here.
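
The index maintenance Nikolay describes boils down to statements like these; the index name is hypothetical, and REINDEX ... CONCURRENTLY needs Postgres 12 or newer:

```sql
-- rebuild a bloated index without taking long exclusive locks
REINDEX INDEX CONCURRENTLY orders_customer_id_idx;

-- a cancelled or failed concurrent (re)index leaves an invalid index behind; find those:
SELECT indexrelid::regclass AS invalid_index
FROM pg_index
WHERE NOT indisvalid;
```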

[00:37:11] Michael: We've got a whole episode on this for anybody new to this topic; I'll link that up. But yeah, I do wanna give a shout-out: the work in Postgres 13 on deduplication, and I think in 14 on bottom-up deletion, is wonderful work and has massively reduced this problem. But it still exists. And it does mean, if you're upgrading, like, I've seen quite a few people lately

doing upgrades from 11 to 15 or 11 to 14. So for those people, especially if you are upgrading in a way that preserves your bloat, it's definitely worth looking into reindexing as part of, well, after your upgrade, because you might get significant savings and speedups.

[00:37:51] Nikolay: Okay. You mean reindex all indexes to get them into better shape, because these optimizations will work only after we rebuild the index, right?

[00:37:59] Michael: will the optimizations work only afterwards? I'm not sure, but

[00:38:03] Nikolay: Some kinds of optimization definitely will work only after you create a new index, so you need to rebuild it, yes.

[00:38:10] Michael: But yeah, the damage is already done, basically, so it's going back to the start, and then it won't degrade as quickly in the new version, but you'll also get that initial boost.

[00:38:21] Nikolay: But the main idea: you need to reindex anyway, even on newer versions, and I don't think this need will go away in the future, because looking at other systems, like SQL Server for example, they also have this need.

Index maintenance means we rebuild indexes sometimes. Of course, index maintenance also means we clean up unused indexes, redundant indexes, but also we rebuild indexes whose health is not so good,

meaning bloat accumulated. This is normal.

[00:38:50] Michael: Nice. So these were your 10 tips. I've actually got a bonus one I wanted to add.

Bonus tip
---

[00:38:55] Michael: I think not all beginners realize that the Postgres documentation and release notes are as good as they are. And even the source code, the comments in the source code are really accessible, even to beginners. So those things are your friends when you're trying to work out why something's happening or what the truth should be.

So yeah, documentation, release notes, and source code comments are all brilliant.

[00:39:15] Nikolay: And the README files in the source code as well. In some directories we have README files, and they're quite good, plain English, explaining things that are sometimes not explained in the documentation.

[00:39:27] Michael: Yeah. Completely agree.

[00:39:29] Nikolay: Good, I also completely agree. I think it was good, I think it was helpful for some people. Please share your opinion, and don't forget to share this episode with colleagues you think might benefit from listening. And till next time, thank you.

[00:39:45] Michael: Much appreciated. Take care. Thank you.

[00:39:47] Nikolay: Bye bye.

Some kind things our listeners have said