[00:00:00] Nikolay: Hello hello, this is Postgres FM, episode number 54. And today it's my turn to announce the title, although we mixed everything up after vacation, so Michael chose this topic. I wouldn't have chosen it, but I need to announce it: connection poolers.

[00:00:18] Michael: Yeah, well, I think this is a really important topic for Postgres in general, so I'm keen to talk about it. But I'm also aware that I need to get back my reputation for picking topics that you find boring. So that was my main goal here.

[00:00:33] Nikolay: it's not as boring as others.

[00:00:35] Michael: Oh damn.

[00:00:36] Nikolay: Yeah, and I think it's not only important for Postgres, it's important for any database system, and it also matters for how applications work with a database system.

[00:00:48] Michael: Yeah. So should we talk a little bit about why first, before getting into some of the details?

[00:00:53] Nikolay: Well, I think today we don't need that, because we already have pools on the application side.

[00:00:59] Michael: Yeah, so the question is: why do we need something on the Postgres side in addition? And I actually, I...

[00:01:05] Nikolay: Let's start from the application side. Why do we need it there? Because creating a connection is very expensive.

[00:01:14] Michael: Yeah, there are a few different types of overhead, aren't there? In terms of latency, in terms of server resources, and in combination. I guess at the beginning you could argue you don't need one, if you only have a few users, maybe five users of your little application,

[00:01:31] Nikolay: If you have just a hundred users, for example, working simultaneously,

[00:01:36] Michael: Then you don't need anything. No application pooler, no database pooler. Continue happily with your simple setup, and probably don't do anything.

[00:01:45] Nikolay: If you have a hundred users and 90 of them have a very slow connection, for example some home internet in California, and they work remotely, running the application over their home internet and connecting to your database somewhere, and you have only, like, eight cores, you probably already need PgBouncer or something.

[00:02:10] Michael: Good point, actually. Slow queries as well: even if it's not necessarily a slow connection, really long-running queries are an interesting point too. But yeah, in fact it's worth mentioning that a lot of application frameworks come with poolers by default, right? So even if you don't do anything, there's a chance you're using one.

[00:02:32] Nikolay: Right. Because we know Postgres is slow, right?

I mean, connection creation is slow, not that Postgres is slow. Let me apologize: Postgres is very fast, but connection creation is slow.

[00:02:45] Michael: Yeah. And until recently, the overhead of each connection was relatively high in terms of...

[00:02:52] Nikolay: Until Postgres 14. Right.

[00:02:56] Michael: So is there a little bit of a... what's the advice? Is there a standard before that version and a different standard afterwards? Does it just change the threshold?

[00:03:05] Nikolay: Yeah, like some kind of rule, you know: take your number of cores and multiply by 2, 3, 4, 5, and this should be your max_connections. Don't go above it. For example, if you have an Intel server with 96 cores or 128 cores, you probably shouldn't go above 500.

But that was before Postgres 14. Postgres 14 has improved work with snapshots and connection scalability, and probably you can go higher. Well, before, I saw people go to 1,000, 2,000, I saw 3,000, with servers of like 96 cores. I told them it doesn't feel good. They said: but we are fine.
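That rule of thumb can be sketched as a quick calculation; the 2x to 5x multipliers are the heuristic from the discussion above, not a hard limit:

```shell
# Pre-Postgres-14 heuristic: cap max_connections at roughly 2-5x the core count.
cores=$(getconf _NPROCESSORS_ONLN)
echo "cores: $cores"
echo "suggested max_connections ceiling: $((cores * 2)) to $((cores * 5))"
```

For a 96-core box this gives a ceiling between 192 and 480, in line with the "don't go above 500" advice.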

I said, okay, let's just run pgbench in its silly default behavior, stress testing, where it tries to max out and consume resources. And if you have an additional thousand idle connections, you will see how the overhead affects you. And it's easy.

It's a test: you run pgbench, you see TPS and latencies. In this case TPS is usually the main metric. Then add a thousand idle connections, and you see like a 20-30% penalty. I don't remember the details, but it was something like that. So this is the price you are paying constantly: you make your server do additional work
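That experiment can be sketched like this; the database name `bench`, the connection counts, and the TPS numbers at the end are illustrative assumptions, not measurements:

```shell
# 1) Baseline: pgbench in its default stress mode (assumes a database "bench"):
# pgbench -i -s 100 bench
# pgbench -c 16 -j 16 -T 60 bench          # note the reported tps
# 2) Open ~1000 extra idle connections, e.g. psql sessions that just sit there:
# for i in $(seq 1000); do sleep 3600 | psql bench >/dev/null & done
# 3) Rerun the same pgbench run and compare. With placeholder numbers:
tps_base=12000; tps_idle=9500              # placeholders, not measured
echo "penalty: $(( (tps_base - tps_idle) * 100 / tps_base ))%"   # -> penalty: 20%
```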

it could avoid. And it's interesting that in that particular case they resisted installing PgBouncer, because they also said: we have a Java application, we have a connection pooler. It had some interesting name, I don't remember. There is a pooler on the Java side, on the application side, so we don't need it.

But you know the problem, right? It's not only when your application runs very far away and the connection is slow. That's one of the cases, but usually in a good project this is not a problem; usually the application code runs at least in the same region as your database. The problem is different.

When they scale stateless nodes, they add more and more nodes and forget to... okay, they say: we can scale, let's multiply the number of application nodes by two. But they don't decrease the pool sizes by two. So more idle connections are created, because active connections don't change when they just add nodes, right?

[00:05:36] Michael: Not immediately, no. Yeah.

[00:05:37] Nikolay: Yeah. Well, they can grow over time, or if there is some marketing campaign they can spike; of course, load can spike. But if they just add more application nodes with the same usage, in terms of users asking the database to do some work, these scalability efforts by those responsible for application nodes lead to a significant increase in the number of idle connections.

And this is how you can end up having 2,000 connections or so. And then you try to convince them to decrease pools, and they also resist: it doesn't feel safe for us. And okay, this is the time when we probably need a connection pooler on the database side.

[00:06:21] Michael: Yeah. Awesome. So you've already mentioned PgBouncer. From my experience, that feels like very much the de facto standard, and it has been for a long time. I actually looked it up. Do you know which year it was first released?

[00:06:35] Nikolay: Well, I suspect it was around 2006, 2007, or so.

[00:06:40] Michael: Yeah, great guess. 2007, that's the date I saw.

[00:06:44] Nikolay: I remember Asko Oja and Marko Kreen, maybe I pronounce the names wrong. I invited them to a conference in 2007 or 2008, the developers of Skytools and so on, from Skype. Skype was hot in terms of usage at that time, because it was a big company with goals like: we need scalability to a billion users. So it was impressive. Not only PgBouncer, but of course PgBouncer is probably the most successful product they created.

[00:07:17] Michael: Yeah, and still to this day pretty much the standard. It does seem, though, that in recent years we've had a proliferation, dare I say, of other tools.

[00:07:28] Nikolay: There is a new generation of poolers. It's related to many reasons, and one of the reasons is that PgBouncer became a true Postgres-style product: it can take five years if you have major functionality proposed. And, uh, so...

[00:07:43] Michael: Well, I guess to defend it a tiny bit, I feel like PgBouncer is at a similar stability level to Postgres. I feel like issues are as rare, and if you need something that is safe, that has been battle-tested for years, PgBouncer for me is still the number one choice. Now, there are obviously newer ones that people have developed for themselves in extremely high-throughput environments, and they are clearly working for their case. But if I needed one tomorrow and didn't have the resources to go and test all the others properly, I would still pick PgBouncer.

[00:08:23] Nikolay: And which mode would you put it in?

[00:08:26] Michael: Well, okay, so let's get into that, because it's not free, right? There are downsides to having a pooler.

[00:08:33] Nikolay: No, maybe let's first mention, like we discussed, that there is a new generation. Let's mention some names.

[00:08:39] Michael: Yeah. Okay, great.

[00:08:40] Nikolay: Because first in this generation, I think, was Odyssey, from the Yandex team. It was several years ago, and I know exactly how they decided to create it. It's also written in C, and the idea was: our pull requests are not accepted fast enough in PgBouncer, and also there are some things we would do differently. I think they use threads, because PgBouncer is a single process, very similar to a Postgres process, and I personally bumped into the issue of single-CPU usage more than once, and it's very painful. You don't expect it if you don't monitor this single process. Usually, after you pass like 10,000 TPS, maybe 15,000, a single PgBouncer process is not enough. And at that time the SO_REUSEPORT feature wasn't supported by PgBouncer, so you needed to run PgBouncer on different ports, and then you needed to teach your applications to load balance, basically. Or some people put HAProxy, for example, as an additional layer.

And this all sounds not good. So SO_REUSEPORT is a very good feature. You know it, right?

[00:09:47] Michael: No, but it makes sense.

[00:09:49] Nikolay: So SO_REUSEPORT is a Linux feature which allows multiple processes to listen on the same port. A few years ago PgBouncer finally started to support it, and now you can just run multiple PgBouncer processes configured to listen on the same port, and Linux will decide how to balance between them.
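In PgBouncer itself this is a single setting; the address and port below are assumptions, and you simply start several pgbouncer processes with the same config:

```ini
; pgbouncer.ini fragment: with so_reuseport enabled, several pgbouncer
; processes can bind the same port and the Linux kernel balances between them.
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
so_reuseport = 1
```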

So you can go beyond 10,000, 20,000 TPS, and more and more, and utilize all the cores you want to utilize. If you run PgBouncer on the same machine, you of course take some resources from Postgres. This is also an interesting topic we should probably touch: where to run it. Because when we say closer to Postgres, it might be on the same machine or on a different machine.

[00:10:35] Michael: A lot of what I hear is that putting it on a different machine is smart, as long as it's close to the database.

[00:10:40] Nikolay: Some people say we lack structure in our podcast, and I now see why, because we jump, jump, jump. Well, let's return to this topic as well. But mentioning new players: Odyssey, right? It's quite interesting, it has really interesting features. I remember they presented it at PGCon a few years ago, and so on. And it's also quite battle-tested, in, I think, thousands of databases already. But I also saw complaints about some bugs. You are right, a pooler should be very reliable. It's like the network: if some issue happens, it affects everything, and it's a global incident. But I think in many cases I'd say it's already battle-proven, polished, and so on. This I cannot say about the newer players. I think you need several years of active usage. And of course we have a chicken-and-egg problem, because if people don't trust it, they don't use it, but they need to use it to start trusting it.

But, uh, okay, so among the new players there is PgCat, right?

[00:11:43] Michael: Yeah, I saw a really good blog post from the team at Instacart about adopting PgCat, so that's a huge deployment using it. I'll link that one up as well.

[00:11:52] Nikolay: And it's written in Rust, if I'm not mistaken.

So it's...

[00:11:57] Michael: And is that from the team at PostgresML?

[00:12:00] Nikolay: right.

[00:12:00] Michael: yeah. Great.

[00:12:01] Nikolay: And I especially like it, although I've never used it yet; it's high on my to-do list, to try when I finally have free time. Or probably I will try to use it in some projects I have. Why is it on that list?

Because in February I created an issue in their GitHub repository with an idea. I've noticed these guys implement features very fast; they are developing very quickly. Impossible for, for example, PgBouncer, absolutely impossible, it would take a few years. So I asked them: in the community we'd like a good feature, mirroring.

So from this connection pooler, from this middleware, we want it to receive requests, send them to the main server and pass back the result, but additionally send them to another server and ignore the responses. It's a very good thing for testing. And besides reliability, the key metric for me for any connection pooling software is latency overhead.

And it should be tested in a proper way. So, in my favorite way: instead of running pgbench in default mode, when it tries to maximize everything, you limit TPS, to have the same CPU usage and overall resource usage. For example, 25% up to 50% CPU usage, a normal case for a loaded production system.

And then you just compare latencies with one middleware and another. Ideally a pooler should add very little, like below one millisecond. One millisecond is already quite big, I mean.
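This measurement style can be sketched with pgbench's rate-limit flag `-R`; the ports, client counts, and database name are assumptions:

```shell
# Fix the transaction rate so both runs do identical work, then compare latency:
# direct to Postgres:
# pgbench -c 16 -j 8 -R 5000 -T 120 -P 10 -p 5432 bench
# through the pooler:
# pgbench -c 16 -j 8 -R 5000 -T 120 -P 10 -p 6432 bench
# At the same TPS, the latency difference is the pooler's overhead.
# With placeholder latencies of 0.41 ms and 0.92 ms it would be:
awk 'BEGIN { printf "overhead: %.2f ms\n", 0.92 - 0.41 }'   # -> overhead: 0.51 ms
```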

[00:13:46] Michael: Especially on OLTP, yeah, which is I guess what we're mostly talking about.

[00:13:49] Nikolay: Also, I wonder what will happen with mirroring. It's very interesting, and I just don't have capacity right now.

But I'm very curious, and I think this is a super interesting feature. So they implemented it in like a month.

[00:14:01] Michael: Yeah.

[00:14:02] Nikolay: It's already merged, already there, but I haven't tested it. If anyone needs similar functionality: it can give you something similar to blue-green deployments, realistic testing next to production. You create a database clone, like a second primary,

and you can just promote it at some point. And at the same time you start mirroring. Some queries will fail, but it doesn't matter if you have a lot of queries, big numbers. This testing will be much better than attempts to replay logs or something.

[00:14:32] Michael: Yeah. When you said it was super interesting, I thought you were trying to segue to the other one that popped up recently, being Supavisor.

[00:14:42] Nikolay: It should have been a "Supabalancer", but it's somehow called Supavisor. "Supervisor" is a name I would use for things like Patroni, for example.

[00:14:53] Michael: I think that's actually part of their roadmap

[00:14:55] Nikolay: Ah, maybe. Yeah. Well, yeah, it's very ambitious. And it's written in Elixir, right?

Yeah. And there is also Crunchy proxy and...

[00:15:05] Michael: Yeah, I was gonna ask you about that. It looked like Crunchy proxy hadn't been worked on; last I saw, it was like a beta from 2017 or...

[00:15:12] Nikolay: Abandoned, maybe. I never touched it, never tried it, I don't know. Odyssey I tried, and PgCat, as I said, I'm definitely going to try. And I use PgBouncer a lot. Ah, also there is RDS Proxy, but we're not going to discuss proprietary software on this podcast, right? Unless it's a special event.

[00:15:32] Michael: I am not sure I have...

[00:15:33] Nikolay: But RDS Proxy, they have an interesting feature. We need to look at proprietary things sometimes, because they can give you insights if you develop your own tool. It has a super interesting feature. When they started to develop it, I mean AWS RDS, they started to develop it for Aurora Global Database, a multi-region setup.

The secondary region is read-only, right, and you put RDS Proxy there, and local connections are constantly just reading. But what if they want writes? Our primary is in a different region, so what to do? So RDS Proxy can receive a write, then go to the primary instead and perform this write,

wait until this write is propagated to the local standby server, and then read from it. Maybe not exactly from it; I may be mixing some details, but it's a very interesting concept of, like, inverted load balancing. Instead of saying "oh, this is a write, let's go to that node", no, no, we always go to our node,

but there we have some magic to perform the write and wait until it propagates. Interesting concept.

[00:16:44] Michael: Yeah, indeed. so I was gonna ask you a question around this. Do you know, like, why are there so many of these projects? they seem to all have similar goals, right? Like, we want something that PG Bouncer doesn't support. We've got a couple of extra requirements. Why not work together?

[00:16:58] Nikolay: Because cathedral versus bazaar, as usual. It's normal for open source to have many, many competing attempts, because people have different views. If it was a single corporation, of course it would immediately... well, sometimes competition is provoked inside corporations as well,

like two teams competing, but not ten teams competing, right? But usually, in the case of the corporate, cathedral model, we have a roadmap already defined, approved by management and so on. Here we have many teams with different views, different needs, trying to fulfill those needs. I think it's similar to what we had with auto-failover and with backup tools.

A few leaders will survive and probably remain, I...

[00:17:48] Michael: Yeah, I've not been around long enough to know that there were loads of different backup options.

[00:17:52] Nikolay: What about auto-failover? What about replication systems? Before replication went to core in Postgres 9.0, it was the opinion of Postgres developers that replication should always be outside. Can you imagine?

So we had Slony, then the same Skype guys created Londiste. I used both, and both were kind of painful. Also Bucardo and many others.

And then it went to core, right?

Backup systems are slightly different, but we now have obvious leaders: WAL-G and pgBackRest.

[00:18:24] Michael: And still Barman.

[00:18:27] Nikolay: Barman, yes, and many others, but the leaders are these two. Barman is much less popular, in my opinion, at least around me. Of course, I'm...

[00:18:35] Michael: Well, I see most still using pgBackRest.

[00:18:39] Nikolay: Maybe because it's more popular, maybe, but I have a lot of WAL-G cases as well. And WAL-E is, I think, already out of consideration, and some others too. And about auto-failover: it was also said that auto-failover should be outside of Postgres. Okay, it's still outside, and we have an obvious leader, Patroni, and many attempts to change it. But in this case, I think there is one big leader, and that's it, right?

So here I also expect we have a long-term leader, PgBouncer, and many attempts to compete, these attempts from recent years, and I'm not sure what the result will be. And of course this puts pressure on PgBouncer as well, because I observe it very closely.

[00:19:28] Michael: Well, yeah, you sent me a pull request that seems to be making progress. So there seem to be some signs that PgBouncer may speed up a little bit, or may get some of these improvements.

[00:19:41] Nikolay: Prepared statements, yeah, for transaction mode, right.

[00:19:47] Michael: Yeah. This is where we fork off, right? Transaction mode is the default, right? And it's what most people use, as far as I've seen.

[00:19:56] Nikolay: Honestly, I don't remember the default. Let's start with session mode, because it's easier; it's the simplest mode. You hold your session, you are always connected to the same backend. A backend means a Postgres process, reached through this pool, and it never changes; the context never changes, and so on. Good. Then you say: I'm going to disconnect. Okay. It's also beneficial, by the way. You know why, right? To fight idle connections, for example.

[00:20:28] Michael: pardon me.

[00:20:29] Nikolay: To fight idle connections, for example. So we don't need to keep a lot of idle connections to the database; we disconnect faster when not needed, for example.

[00:20:39] Michael: Yeah, but the benefit of session mode, I mean, obviously it has more overhead, but I always thought the benefit was that you get the session-level features, like prepared statements. So...

[00:20:50] Nikolay: But prepared statements can be implemented for transaction mode as well. So...

[00:20:56] Michael: They haven't been, in poolers.

[00:20:58] Nikolay: There are already three pull requests in the repository, so hopefully it'll be there soon.

[00:21:04] Michael: Mm-hmm.

[00:21:05] Nikolay: I don't remember, but I think some of them already support it. I might be mistaken. I remember discussions, but I...

[00:21:10] Michael: I think at least one of them was started partly to support it; like, that was one of the main features they wanted, but I can't remember which one.

[00:21:21] Nikolay: So transaction mode is the best. Why? Because we can reuse a backend to serve some other requests, some other transactions, between transactions in the same session. So you connect, and one transaction happens on one backend, a Postgres backend.

Then we have some inactivity. For example, it's not idle-in-transaction, it's an idle session, regular idle: we went off to do something in application code or something else, and during this time the backend can be used by other sessions, by other transactions. How is it called, multiplexing or something?

So the backends remain idle less of the time; they are busy most of the time in this case.

[00:22:10] Michael: Yeah, so

[00:22:11] Nikolay: efficient,

[00:22:12] Michael: Higher utilization of our cores, of our resources.

[00:22:17] Nikolay: Right, right. And statement mode is kind of strange, because you can switch to a different backend inside a single transaction, and that doesn't sound good. Well, in some cases it probably will suit, but in general it's not safe.

[00:22:32] Michael: Yeah, I've not seen a project that used it. I suspect there is a use case, but...

[00:22:36] Nikolay: Just for completeness, I guess. Right. So transaction mode is what we usually want for best efficiency, but in some cases session mode also makes sense.
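In PgBouncer the mode discussed above is a config choice; a minimal fragment, with illustrative pool sizes:

```ini
; pgbouncer.ini fragment: pool_mode is one of session | transaction | statement
[pgbouncer]
pool_mode = transaction
default_pool_size = 20    ; server connections kept per user/database pair
max_client_conn = 2000    ; client connections pgbouncer itself will accept
```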

For example, PgBouncer is also responsible for working with slow clients. Postgres has already generated the result; we shouldn't keep the Postgres backend busy just while we transfer data.

It's better to transfer it from PgBouncer, and the backend can do something else,

[00:23:08] Michael: Or be ready, at least be available.

[00:23:10] Nikolay: Right. So yeah, transaction mode plus prepared statements is like the sweet spot many people want, because prepared statements of course help with performance as well.

[00:23:23] Michael: I saw a really good write-up recently from JP Camara. They felt that they'd often read "oh, if you get to a certain number of connections, you should just use PgBouncer", and the advice was quite limited, as if there were no downsides. And they've gone through a very thorough blog post of all the downsides they've come across.

And I'll share that in the show notes, because...

[00:23:48] Nikolay: What do you remember

[00:23:49] Michael: do

[00:23:49] Nikolay: downsides? What's interesting?

[00:23:51] Michael: Well, I think these ones were the biggest; let me have a quick look: lock timeouts, statement timeouts.

[00:23:59] Nikolay: Application name is usually challenging. I remember from the old days that PgBouncer hides the application name and IP address, or something like that; you need to do special tricks to keep them. But in general, the pros outweigh the cons if you have a lot of QPS.

[00:24:18] Michael: But one of the good points they made is something I think you've often talked about: having timeouts for things. So in general you time things out very quickly, but then you can override that from time to time for maintenance tasks. But if you've set it, you can't override it anymore

if you're in transaction mode, unless you connect around the pooler. So if you want to go through PgBouncer, you can no longer set a longer timeout for these maintenance operations. So I think that was a really good point that I hadn't
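That failure mode can be sketched like this; the table name and timeout value are hypothetical, and the session is assumed to go through PgBouncer in transaction mode:

```shell
# Inside one psql session connected to PgBouncer (transaction mode, port 6432):
# psql -p 6432 mydb
#   SET statement_timeout = '60min';  -- applies to whichever backend served this
#   VACUUM VERBOSE big_table;         -- may run on a DIFFERENT backend: SET is lost
# Workarounds: use SET LOCAL inside a single transaction (where the command can
# run in a transaction), or connect around the pooler for maintenance tasks.
echo "session-level SET is unreliable under transaction pooling"
```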

[00:24:51] Nikolay: Yeah.

[00:24:51] Michael: elsewhere.

[00:24:53] Nikolay: Yeah. But there are also features. For example, a good feature is that PgBouncer can give you an understanding of how many QPS you have and the average latencies, because Postgres doesn't have that in its internal statistics. It's strange, but there is only TPS, and that's it. Even latencies are not recorded unless you deal with pg_stat_statements, which is limited because it has a maximum number of tracked queries, right?

But PgBouncer constantly writes it to the logs: TPS, QPS, latencies. And it also has internal statistics, implemented in an interesting way: you can connect to it with psql and say SHOW STATS, SHOW HELP, SHOW SERVERS, SHOW CLIENTS, and so on. And when you want to join it, when I want, for example, to take one piece of information and join it with something else,

there is no SQL there, only SHOW commands. So usually you need to export it to CSV, then import it into a normal Postgres and work with it there. But there you can also find QPS, TPS, and latencies, and this is very good to monitor.
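A sketch of working with those stats; the admin console port and user are assumptions, and the averaging at the end uses the cumulative `total_query_count` and `total_query_time` counters that SHOW STATS reports, with made-up values:

```shell
# Interactive look at PgBouncer's aggregate stats:
# psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW STATS;'
# Only SHOW commands exist on the admin console, no SQL, so to join with other
# data export CSV and load it into a normal Postgres:
# psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer --csv -c 'SHOW STATS;' > stats.csv
# Average query time from the cumulative counters (made-up numbers, microseconds):
total_query_time=500000000; total_query_count=1250000
echo "avg query time: $(( total_query_time / total_query_count )) us"   # -> avg query time: 400 us
```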

[00:26:01] Michael: Stupid question, but one thing that sprang to mind: does that also then measure failed queries? You know, with pg_stat_statements you only get successful queries, right?

[00:26:10] Nikolay: Yeah, that's a good question. I don't know, actually. From the pooler's point of view, a query which was canceled, failed, produced an error, it still consumed, like, a second of time, and should still contribute to the average, right? I think it should count it, but it's worth checking. I don't know. Very good question, actually.

I just remember I was curious: okay, we have TPS, we can see it from pg_stat_database, but how many QPS? Like, on average, how many queries are in one transaction? And I usually found myself checking the logs for this information.

To understand our workload. So this is a benefit. But there is also pause and resume. I think it's undervalued functionality, and I see other poolers also plan to implement it. You can, for example, restart your server, performing a minor upgrade without downtime at all. You can issue PAUSE. By the way,

it's tricky to issue PAUSE. When you issue PAUSE to PgBouncer, what it does: first of all, all new incoming requests to run a query have to wait, and it starts waiting itself for all ongoing queries to complete. And I feel it lacks some additional options, because I don't want to wait forever.

What if a query lasts an hour?

[00:27:36] Michael: Yeah,

[00:27:36] Nikolay: Well, of course we have statement_timeout, but maybe

maybe we don't. So I would like to wait, but no more than some number of seconds. Give ongoing queries some chance to complete, for example, but no more than two seconds, three seconds, or so, because others are already waiting, right?

[00:27:56] Michael: Yep.

[00:27:58] Nikolay: Okay, this PgBouncer cannot do, but you can do it yourself: you can terminate all long-running queries in parallel, and in this case PAUSE will succeed and return control to you. In this situation you can restart Postgres while it's paused, and then say RESUME. And users notice only some spike in latency,

and that's it. Kind of an almost zero-downtime minor upgrade or restart. It also helps with the restart itself; we shouldn't forget that. A Postgres restart can take a long time because of the checkpoint, the so-called shutdown checkpoint. If you tuned your checkpoints, for example increased max_wal_size significantly, the shutdown checkpoint might take a lot of time.

In this case you should issue an explicit CHECKPOINT before you attempt to restart or shut down Postgres, and then it will be much faster, because the shutdown checkpoint will have almost nothing to do; your explicit checkpoint already did the work. So you need to engineer this anyway, right?

Because it's not easy to use in the general case under load. But then, after you restart, you say RESUME, and that's great. Also, you can substitute the Postgres node if your PgBouncer is running on a different node, or you can reroute it or something. It can even be a different Postgres major version.
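The whole sequence can be sketched as a small runbook; hosts, ports, the 3-second grace period, and the admin user are assumptions, and error handling is omitted:

```shell
# Near-zero-downtime restart behind PgBouncer (sketch; run on the DB host):
# 1) Terminate long-running queries so PAUSE does not wait forever:
# psql -p 5432 -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity
#   WHERE state = 'active' AND pid <> pg_backend_pid()
#   AND now() - query_start > interval '3 seconds';"
# 2) Make the shutdown checkpoint cheap:
# psql -p 5432 -c 'CHECKPOINT;'
# 3) Pause the pooler (new queries queue up), restart Postgres, resume:
# psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'PAUSE;'
# pg_ctl restart -D "$PGDATA" -m fast
# psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'RESUME;'
echo "clients only observe a latency spike between PAUSE and RESUME"
```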

[00:29:22] Michael: Yeah, that's a good point. I think I saw this as one of the main goals of the Supavisor project. For them it's not so much about major version upgrades, or even minor version upgrades; it was about changing resources. So if you wanted to increase CPU... I think a lot of these providers have done some clever stuff behind the scenes to be able to resize

disk or resize memory, different things like that. But yeah, it's like middleware, isn't it? It could almost be a little queue for a while, even if it's only for a second or two.

Um, so yeah, very clever.

[00:30:02] Nikolay: Right. So, in summary: PgBouncer is the king, but who knows what will happen next, because we see some candidates around with interesting ideas and implementations.

Let's see; it should be an interesting competition.

[00:30:18] Michael: Last question, because I know it's a quick one: should this be in core?

[00:30:23] Nikolay: Of course. I know a guy who tried to implement it. I remember reading very long threads, but it's obviously very hard to convince people on the details. And also, well, I think Postgres will have it right after threads.

[00:30:46] Michael: Okay.

[00:30:47] Nikolay: So that's it, and then an internal pooler, that's it. But remember, if you implement it inside, you lose the benefits of running it outside, because this rerouting, for example, is a good reason. There are pros and cons to running it on the same node, very close to Postgres, or on a different node.

[00:31:07] Michael: Yeah.

[00:31:08] Nikolay: yeah.

[00:31:09] Michael: Yeah.

Makes sense. Awesome.

[00:31:11] Nikolay: Good. Thank

you

[00:31:12] Michael: so much, Nicola.

[00:31:13] Nikolay: I hope it was requested by users and I hope

[00:31:16] Michael: Yeah.

[00:31:16] Nikolay: it, is interesting to someone.

[00:31:19] Michael: This was requested.

[00:31:20] Nikolay: Great. Next time my choice, I will work hard on choosing new topic.

[00:31:25] Michael: Nice. Take care

[00:31:27] Nikolay: Aye.

[00:31:28] Michael: Bye.
