Nikolay and Michael discuss auditing — why we might want it, what the options are, and even an idea for a future solution...

Michael: [00:00:00] Hello, and welcome to Postgres FM, a weekly show about all things PostgreSQL. I'm Michael, founder of pgMustard, and this is my co-host Nikolay, founder of Postgres.ai. Hey, Nikolay, what are we talking about today?

Nikolay: Hi, Michael. Your turn — you tell me. But don't choose boring topics, please.

Michael: Guilty. So today I have chosen auditing. You claimed the last topic I chose was boring, so I thought I'd go for the absolute most boring topic I could imagine. But on the flip side, I actually think this is quite commonly needed — not for everybody, not in all cases, but it comes up often enough, and we've had two listener requests for this now. So it's pretty cool to see these starting to cluster into things people are interested in. So, yeah, auditing. And by that we mean — well, I guess it can cover a few different use cases, but specifically the thing people seem to be interested in is the different options they have for seeing who's changed what, and when.

And these [00:01:00] fall into a few different categories, but we have a couple of really common options in Postgres and a couple of interesting options.

Nikolay: Right. Before we discuss options, let me tell you that I consider this topic even more boring than transaction ID wraparound a couple of weeks ago, because it's related to security. Due to my professional activities, I must deal with a lot of security, auditing, and other such things.

All the time — compliance. But I don't like it. I like to build something, and, well, we can consider building an auditing system, right? So it's entertaining to consider options. But in general, security is not my favorite topic, so my opinions might not be advanced, because I sometimes try to avoid it and use other people's help.

So don't just take my word for it — I can be wrong, and so on.

Michael: Well, that probably transitions into the options. I think a lot of [00:02:00] the development in this area has been out of necessity — people needing this, not out of passion, not out of, you know, wanting to contribute to open source or anything like that.

Nikolay: Requirements. If you are a big company thinking about an IPO, you already might have requirements from external auditors, or compliance — various compliance processes. Yeah.

Michael: Yep. If you're in a regulated industry, perhaps — there are all of those things. But even so, people also sometimes want to use these same solutions for feature-based things, though those tend to be on the less interesting side. So I can see why there aren't loads of projects around this from hobby developers.

Nikolay: Right. So I think if you are a smaller company, a smaller project, but you think it might happen [00:03:00] to you, or some requirements may arise in the future — so not yet, or already — I recommend checking the document Crunchy Data prepared with the United States Defense Information Systems Agency. It's a Security Technical Implementation Guide — STIG. It's a huge list of various items — requirements, basically. Each requirement has a severity level, so you can start with the most important, critical ones. It's impressive.

So if you want to reach a good level, it's a good document to check and maybe to use. But in general, I agree with you — sometimes we implement this as a feature. For example, if you implement trigger-based auditing, it can also be your way to restore data, right? If you save the old value when someone deletes something, it's a way to restore it if needed. Right? To manually restore some [00:04:00] wrongly deleted parts of data. But if we switch to options, my recommendation is to draw a matrix: three horizontals, three verticals — nine cells overall. It'll help to understand the use case and the options, and to choose.

First of all, there are three big types of events that can be logged or remembered for auditing purposes. DDL changes, of course — schema changes. Then DML, in terms of modification of data: INSERT, UPDATE, DELETE, and COPY. By the way, COPY is not usually reading — it can be, COPY TO stdout or to a file — but sometimes it's a massive data upload to the database. And the final, third one is access to data, [00:05:00] using SELECT — or COPY, or a WITH statement, which is also like a SELECT. So these cases are already interesting. And the three basic big options are: using logs — the Postgres log, you write to Postgres logs — and then two options I would call logical-level: either triggers, or logical decoding to send events to some place. And for SELECTs, we can quickly exclude the second and third options, because we cannot see SELECTs in triggers, and we cannot see SELECTs in the logical decoding stream — in WAL, we don't have SELECTs. Well, there are SELECTs that can trigger some WAL writes, but that's a different story in general; we should not think about that here. So for SELECTs, the only option is to use something like pgAudit, which is quite a popular extension.

Michael: Yeah. So on that note — the document you mentioned from Crunchy Data is incredibly impressive, but it's also [00:06:00] incredibly long; I think it's 130 pages or something. When they announced the public version, they gave a shout-out to pgAudit in it, and I did a quick search on the document: pgAudit is mentioned about 140 times in a 135-page document.

So that goes to show how important it is — as an extension, not as something that's part of Postgres core — to running Postgres, in their opinion, in a very secure manner. So it's incredibly important, incredibly robust, and has a lot of development history by the team at 2ndQuadrant originally, who are now part of EDB, and the team at Crunchy Data as well — at least one person there.

So it does have real Postgres experts behind it, but it's not part of Postgres core, which is interesting.

Nikolay: Right? But the document is huge. You can check only the tip of the iceberg first — [00:07:00] take the most important items, and then go down to the main body and so on. Take it in steps, right? It's possible because, as I said, all items have a severity property — criticality or something like that. And it's a big question: do we need to take care of access to data, or do we just need to track data changes? Because in both requests from our listeners, the discussion was about data changes. If we need to log access to data, we should probably choose pgAudit or something similar.

We cannot do it with triggers.

Michael: Yeah, exactly. The more common use case I see is that you only need to track the changes [00:08:00] rather than access. But accessing of data is one of the big security incidents these days, right? Somebody who shouldn't be reading a lot of data, reading a lot of data, is a big issue.

So I suspect that is why pgAudit is so popular. So yeah, just to recap on—

Nikolay: If somebody shouldn't read some data, just revoke access to those parts of data for that user. That's it. And distinguish users, and check regularly that permissions are properly configured, right? Then you don't need to log access to data which cannot be read by some database user, because it's not possible.
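A minimal sketch of what Nikolay describes — role, table, and column names here are purely illustrative:

```sql
-- Hypothetical support role that may read customer contact
-- details but nothing from the payments table:
CREATE ROLE support_ro NOLOGIN;
REVOKE ALL ON TABLE payments FROM support_ro;
GRANT SELECT (id, name, email) ON TABLE customers TO support_ro;

-- And the regular permission check he mentions:
SELECT grantee, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'support_ro';
```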

Michael: Yeah, but even if they should have access to it, what if they're doing an inordinate amount — a not-usual amount? What if it's a support person who should be checking customer accounts—

Nikolay: This is, by the way, interesting. Yeah, I [00:09:00] wish we had a very simple way to log massive reads. Sometimes I think we should, for example, have alerts if some massive reads are happening for some user. But how to do it? Probably even from pg_stat_statements, because it tracks SELECTs as well, right?

And it tracks them per user as well, and so on. And we have the number of blocks read there, right? So we can notice that some COPY, for example, happened. If we distinguish users, we can quickly see: OK, we have plus one in terms of calls for this query, for this user.

Or we can do it in logs, for example — even with auto_explain with buffers enabled. Well, it seems to me like quite a common task, but not well solved — or maybe I just don't know. As I said in my disclaimer at [00:10:00] the beginning, I'm not an expert here. So—
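The per-user reads check Nikolay sketches could look something like this — a rough query, assuming the pg_stat_statements extension is installed:

```sql
-- Top statements by blocks read from disk, broken down by user.
-- Run periodically and alert on unusual jumps.
SELECT r.rolname,
       s.calls,
       s.shared_blks_read,
       left(s.query, 60) AS query
FROM pg_stat_statements s
JOIN pg_roles r ON r.oid = s.userid
ORDER BY s.shared_blks_read DESC
LIMIT 10;
```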

Michael: Yeah. Well, let's go back — a very interesting topic, but probably not one we can cover; neither of us necessarily knows how. But pgAudit is very much in that log category, right? The only way it does things is via logging. It's highly customizable, but by default it does log a lot.

So it needs to be customized, basically. As for the others — I see people using triggers quite a lot, for different use cases, not necessarily just the security ones. But I don't see much of the third option you mentioned, so I'll be interested to hear more about that.
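A sketch of the kind of narrowing Michael describes — the settings are from pgAudit's documentation, the values are just illustrative:

```sql
-- pgAudit must be preloaded (restart required):
ALTER SYSTEM SET shared_preload_libraries = 'pgaudit';

-- After the restart, narrow what gets logged. The default
-- pgaudit.log = 'none' logs nothing; 'all' is usually far too much.
ALTER SYSTEM SET pgaudit.log = 'write, ddl';  -- skip 'read' unless required
ALTER SYSTEM SET pgaudit.log_relation = on;   -- log each relation touched
SELECT pg_reload_conf();
```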

Nikolay: Logical. Well, I don't want to forget to mention it. With pgAudit we have an issue, by the way. Maybe it's not common, but I see it as an obvious issue, and I had it in my practice. So pgAudit is logging everything to the Postgres log — everything [00:11:00] that is configured — and only a superuser can change its settings, of course, right?

So regular users cannot change settings. But what if we want to audit all actions of our DBAs, for example — which is quite a common task? DBAs usually have superuser — they need it — and they can override the settings.

Michael: Yeah. Well, this is another thing for security, right? If you're doing this for security, what are the loopholes? What's your exposure with each of these? It's a good question. I don't know.

Nikolay: You might have alerts configured for each change that happened. This is one way. So if someone decides to override it, it's very visible to other people. Or you might—

Michael: an alarm.

Nikolay: Yes. Or — there is an interesting small extension from OnGres called noset, and it disables setting particular settings: [00:12:00] you cannot change them without changing the config.

So it's also an interesting approach. I know about this extension — I never used it, but I think maybe it's something Postgres should have. For example, prohibiting changes of statement_timeout — that's a different story, but still — or prohibiting changes of pgAudit settings completely.

It would be a good and nice feature in the engine, in the core.

Michael: But it has to be changeable by somebody, right? You have to be able to configure it.

Nikolay: Well — change it in the config and send a signal, or restart the server. And if it's something that you don't want to allow... For example, one hour ago I reviewed an Ansible playbook for a major upgrade of Postgres, based on — first we switch from physical to logical replication, and when we upgrade Postgres, we want the old cluster to be set to a read-only state.

So there is [00:13:00] a setting — default_transaction_read_only, something like that; I always forget setting names — and you can set it in the configuration on the primary, so it won't accept read-write queries at all. But unfortunately, any user can change it and still insert something into the old cluster, and that's not good.

I would have noset there and prohibit it, so you could only change it if you have access to the files. I don't know — maybe it's an additional topic here, right? But I see some problem here with pgAudit as well, right?

Michael: Well, similar problem with triggers, right? People can remove them or disable them for a while.

Nikolay: Oh, well, in this case we should have DDL logged. But yeah, you can first set log_statement to none, and... but it should be alerted and investigated — who decided [00:14:00] to keep silent and do some bad things, right? So, before we switch to triggers, one thing to mention about logging overall, in general: I don't like dealing with logs at all, because to do it properly, if you have many servers, you need centralized storage with a lot of power. The best system I was dealing with is Splunk, but you need a lot of money, because it's commercial software — it's a huge system. Usually people use Kibana, and it's not a perfect interface.

So there are many things you need to think about — a lot of stuff, PII also, and so on and so on. But in general, if you consider pgAudit, you need to think about what you will do with a lot of logs. And also, of course, in the current Postgres implementation it may be a bottleneck in terms of performance, especially if you forget to have a good [00:15:00] disk where you store logs.

Sometimes it's some magnetic disk — which is good for sequential writes, actually — but in terms of IOPS, you may hit some ceilings. So dealing with logs is not big fun.

Michael: And I think the advice I've seen, which seems sensible, is: only log what you have to. I think some people are tempted to turn it on and log everything, and that feels really bad, genuinely. Researching for this, I've seen several blog posts that say it's fine, and I just don't agree.

Nikolay: You can test it easily. Just create a pgbench database with a hundred million rows or something, and then unleash unlimited TPS — regular settings, don't use -R (capital R) — and compare it with log_statement = all. You will see a huge drop in TPS.

Michael: Yep.

Nikolay: A very big drop. And then try to configure [00:16:00] logging_collector and compare it again.

Also an interesting thing: if Postgres logs go through syslog or journald, it can be a very big limiting factor. I don't know current systems, but I explored it like four or five years ago and found that if you use syslog, performance may be limited by syslog itself. So yeah, if you enable a lot of logging, you probably should configure logging_collector and check throughput — ensure that everything is fine and you will have the capacity for it. So capacity planning is needed here, definitely.
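The experiment Nikolay describes can be sketched roughly like this — scale, client counts, and database name are illustrative, and it should only be run against a disposable instance:

```shell
pgbench -i -s 1000 bench          # ~100M rows in pgbench_accounts

pgbench -c 16 -j 4 -T 60 bench    # baseline TPS, no statement logging

psql -d bench -c "ALTER SYSTEM SET log_statement = 'all';"
psql -d bench -c "SELECT pg_reload_conf();"
pgbench -c 16 -j 4 -T 60 bench    # compare TPS with full statement logging

# Then repeat with logging_collector = on (needs a restart) to see
# how the collector changes the picture versus stderr/syslog.
```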

Michael: Yep — whichever option you go with. Definitely. Even on the trigger side as well, right?

Nikolay: Right, right.

Michael: I saw some advice that put some numbers to it — but if you have a write-heavy table—

Nikolay: Right, amplification happens if you go with triggers. I like triggers, because [00:17:00] they give you flexibility, and also you deal with SQL all the time, so you can check what you have inside your database. You don't need to go outside — because dealing with logs means dealing with either Kibana or a lot of shell scripting, awk, and so on, a lot of such stuff. But dealing with SQL is good. A lot of data? Maybe we have Timescale there — it's partitioned, so we can insert and store a lot. Great. But of course it has write amplification, because if, for example, before you wrote a kilobyte, now you write probably a couple of kilobytes instead of one.

Michael: Yep.

Nikolay: And deletes — you deleted, then vacuum cleaned up the dead tuples. But if you need to keep all the data, it's kept in the database, so the database only grows. But as I've said, the good thing here is that it gives you the option to restore. And old [00:18:00] Postgres — recall that originally Postgres had time travel in core; it was removed.

So MVCC was implemented using some time travel features. When you deleted, the data was still present, and you could jump to a point in the past. But now it's gone.

Michael: it was removed for performance reasons, I believe.

Nikolay: Right, right. But it's so easy to implement. A few days ago we had a discussion with Hannu Krosing, and Peter Zaitsev raised this topic, mentioning that it should be easy to set up this shadow table, probably, which keeps all deleted data — or maybe also old versions when you update data. And it's actually like 10 lines of code, because you can use JSON to convert everything into one value, so you don't think about schema. So easy. I don't know, maybe I'm wrong.

Michael: I think I saw the conversation, and I [00:19:00] think Peter's suggestion seemed to be for UX reasons — you know, "as a user I'd like to be able to specify this", and maybe for a time-limited period. I think it was like: can we keep the history for 30 days, for example? It's rare that you—

Nikolay: That's interesting — to have some cleanup process.

Michael: Exactly. If you don't have to keep it forever — which I think was the original design — then I imagine some of the trade-offs become less bad. But yeah, probably a different topic. We've had a request for that one as well, actually — temporal tables.

Nikolay: Yeah. Well, that's an interesting topic. But if we don't think about retention and some complex capabilities here, we're talking about a couple of lines versus 10 or 20 lines of code. It's not a big deal. It's quite a good question for a DBA interview, right?

Write a trigger which can be attached to any table and stores what was deleted — so we can always know when it was deleted, and who deleted it, which user. It's also possible, [00:20:00] if needed, to have some procedure to restore it with a couple of actions. So it should be easy to restore.

So, not a big problem. And we can use the same approach for auditing. So triggers write old data, new data, who did it, when it happened. But one thing here in terms of performance: most articles I saw talk about triggers FOR EACH ROW. For this task, it's probably better to use FOR EACH STATEMENT.

And since we have transition tables, we can access all the old data and new data, so we can save it with less overhead — it can be easier for Postgres to save everything in one query, actually an additional query from the trigger.
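A minimal sketch of the statement-level, JSON-based approach discussed here — the table and function names are made up, and `accounts` stands in for any table you want to track:

```sql
CREATE TABLE audit_log (
    logged_at timestamptz NOT NULL DEFAULT now(),
    relname   text        NOT NULL,
    operation text        NOT NULL,
    db_user   name        NOT NULL DEFAULT current_user,
    row_data  jsonb       NOT NULL
);

CREATE FUNCTION audit_deletes() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- One INSERT for the whole statement, via the transition
    -- table (Postgres 10+):
    INSERT INTO audit_log (relname, operation, row_data)
    SELECT TG_TABLE_NAME, TG_OP, to_jsonb(o)
    FROM old_rows o;
    RETURN NULL;  -- ignored for AFTER ... FOR EACH STATEMENT
END $$;

CREATE TRIGGER audit_accounts_delete
    AFTER DELETE ON accounts
    REFERENCING OLD TABLE AS old_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION audit_deletes();
```

Because the rows are stored as `jsonb`, the same trigger function works unchanged on any table, which is the schema-independence point made below.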

Michael: And based on the blog post you sent me, I think that was added in version 10, so everyone should have it now, because everybody's on a supported version of Postgres.

Nikolay: It's a very old feature already. Time flies.

Michael: Yeah. Well, so [00:21:00] some of these blog posts were even older — I think one of them... so that was 2017, right? Roughly five years ago. And 2007 — I saw one of the trigger-based solutions from around then. So that must have been—

Nikolay: This is probably around the time when I first wrote it myself, for some project.

Michael: Yeah, and that's an interesting point about the implementation. Another big thing that seemed to come up time and again: a lot of people implement this in the simplest way they can — trying to do a single audit table for all tables. Now, there are trade-offs, obviously, depending on exactly what you're trying to do.

But it is interesting how simple this can be — you can roll your own extremely easily. And there are some people bundling it up: I saw Supabase [00:22:00] bundled it as an extension recently — supa_audit, I think they called it. So that's really cool; it's freely licensed, everything they do is. Oh, and worth noting pgAudit is free too — it's PostgreSQL-licensed. So it's really cool that that's available to—

Nikolay: Yeah. Well, I don't see big problems with the single-table approach. I know we will attach the beautiful article from CYBERTEC with a comparison table, and in the cons column it says the single audit table has write amplification, and also that audit search queries might need some JSON skills.

Yes, but I don't see the difference with multiple tables here, because if you have, for example, an additional column in this audit table, you can put the relation name there, right? And have an index on it. So you may not see a difference between dealing with multiple tables or a single table. And of course this table, in my opinion — for large systems — should be partitioned. [00:23:00]
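The single partitioned table with a relation-name column might be sketched like this — column and partition names are illustrative:

```sql
CREATE TABLE audit_trail (
    logged_at timestamptz NOT NULL DEFAULT now(),
    relname   text        NOT NULL,
    db_user   name        NOT NULL DEFAULT current_user,
    row_data  jsonb       NOT NULL
) PARTITION BY RANGE (logged_at);

CREATE TABLE audit_trail_2023_01 PARTITION OF audit_trail
    FOR VALUES FROM ('2023-01-01') TO ('2023-02-01');

-- One index makes per-table searches cheap, so a single table
-- behaves much like per-table shadow tables:
CREATE INDEX ON audit_trail (relname, logged_at);

SELECT logged_at, db_user, row_data
FROM audit_trail
WHERE relname = 'accounts'
  AND logged_at >= now() - interval '7 days'
ORDER BY logged_at DESC;
```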

Michael: Yeah.

Nikolay: If you have Timescale, you can control what to do with old partitions — chunks, they call them chunks, right? So a lot of things to do. I think this is actually a perfect use case for Timescale. I wonder if they already have a blog post about it.


Michael: I didn't see one, but yeah, it would make sense. In the Supabase one they—

Nikolay: It's time series, right? We have an event of a data change, we have a timestamp, who did it — perfect.

Michael: It's append-only by definition, right? You shouldn't be deleting things out of an audit log.

Nikolay: Right. But you can compress older chunks, and so on.

Michael: The CYBERTEC table is great. I do see their point on — if the whole point of a single table for all is that you don't want to change its schema for each one, then I do see the advantages, if you want to index individual columns and not have to worry about the issues with GIN indexes, for—

Nikolay: [00:24:00] Well, I made this mistake. I had a couple of systems, at least, where these shadow tables were individual for each table we wanted to track for deletes. It was not for audit; it was for data recovery purposes. And it was actually not convenient, because each time you change the schema, you need to change the schema of the shadow table as well.

So I even thought about having an event trigger to help me — and the engineers — not forget about the shadow tables. So it depends; it may be convenient in some cases. But these days I would definitely prefer JSON and maximum flexibility, fully independent of schema.

I don't see any issues. If I need an index, I can use a GIN index, or I can have B-tree indexes for specific paths in the JSON if needed. It's not that different from having everything in separate columns, [00:25:00] mirroring the schema of the original table, and then indexes. I don't see a big problem at all. And JSON is beautiful. So that's why I would go this path here.
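Both indexing styles Nikolay mentions can be sketched against a hypothetical `jsonb` column `row_data` on the audit table:

```sql
-- A GIN index for ad-hoc containment searches...
CREATE INDEX ON audit_log USING gin (row_data jsonb_path_ops);
-- ...matched by queries like:
SELECT * FROM audit_log WHERE row_data @> '{"customer_id": 42}';

-- ...or a plain B-tree on one extracted path, just as you would
-- index a regular column:
CREATE INDEX ON audit_log ((row_data->>'customer_id'));
SELECT * FROM audit_log WHERE row_data->>'customer_id' = '42';
```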

Michael: Cool. Oh, while we're on a tangent, there was a really good blog post—

Nikolay: Let me press on Timescale a little bit. A few days ago I was reading about their bottomless approach — I really liked it. It's only for their cloud offering: you can decide what to do with old data, and it can go to S3. So, bottomless — interesting. And in this case you can implement this auditing, and the data goes to S3 for lower-cost storage and so on.

It's like bottomless Postgres, you know?

Michael: Yeah, super interesting when you have such a specific use case and you know that old data is not going to be accessed as often, and performance on it doesn't matter as much. Really cool.

Nikolay: And, off topic — I also was researching what people do [00:26:00] with branching in Neon and PlanetScale and so on. And I googled something and made a mistake: I wrote "bottomless brunching" — with a "u", not branching, but brunching.

Michael: Brunching

Nikolay: There is such a thing as a bottomless brunch, you know.

Michael: Normally that involves a lot of alcohol, right?

Nikolay: Yeah, yeah. So bottomless mimosas and so on. We can make some memes here, definitely,

Michael: Oh goodness.

Nikolay: about these new cloud versions of Postgres.

Michael: Yeah. Speaking of which — so the two big options I see compared all the time are logging via pgAudit — very scalable, very customizable—

Nikolay: Available almost everywhere — all clouds. Mm-hmm.

Michael: Exactly — everyone I checked supported it. Every cloud, all of their managed services, all supported it. But if you want, you can implement something via triggers.

If [00:27:00] the trade-offs are OK for you, it can be really simple — much easier to do per object, but equally, you don't have to. And then there's the third. I want to make sure, while we're on the cloud topic — it feels to me like that's what's driving this maybe-alternative logical approach.

Nikolay: Right. So, logical — based on logical replication or logical decoding — and you send events either to a different Postgres, maybe with Timescale, maybe Hydra, this new Postgres which is like an open-source Snowflake — we can attach links to the Postgres TV episodes about it, right? Or it can be something like Snowflake, or ClickHouse, or anything like Vertica — whatever you have there — or, I don't know, BigQuery, Redshift, anything, right?

Or maybe even not SQL at all — many opportunities here. And the good thing is there's no write amplification. But the bad thing: [00:28:00] you cannot use logical on a secondary. Well, we already discussed that SELECTs are not possible to track, so it's only about data changes — but the use of logical slots can happen only on the primary.

And it has some risks, of course — like running out of disk space. Fortunately, in fresher versions of Postgres — it was added in 13, I think — you can specify a maximum threshold for slot size, so you're protected here. And of course, setup is harder.

But a lot of flexibility, and there's no write amplification — almost; there is a small overhead of writing additional data to WAL. And there are limitations: for example, you should have primary keys — though maybe not, by the way, for the sake of auditing. Anyway, a lot of interesting things.
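The slot plus the safety cap discussed here can be sketched with the built-in test_decoding plugin — the slot name is made up, and `wal_level = 'logical'` (restart required) is assumed:

```sql
-- The cap on slot size (Postgres 13+):
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
SELECT pg_reload_conf();

SELECT pg_create_logical_replication_slot('audit_slot', 'test_decoding');

-- Peek at decoded changes without consuming them:
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes('audit_slot', NULL, NULL);
```

A real pipeline would use a proper output plugin or the pgoutput protocol and ship the events off-box; this only shows the slot mechanics.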

[00:29:00] Also interesting: if you check the pgAudit documentation, they immediately say why it's better than log_statement. For example, if you have dynamic SQL and you accessed or changed something in a table whose name is created dynamically — concatenated from several parts — then with log_statement = all, or log_min_duration_statement = 0, you cannot grep or search for it. But with pgAudit, you have the normal relation name; you can search. And if we talk about logical decoding — well, everything is there, right? The user is there, right?

Everything is there. You can decode and know who did what — except SELECTs, of course. So data modifications, DDL — everything is there.

Michael: Question: does it have that same downside you mentioned [00:30:00] for shadow tables? I'm cheating and using the CYBERTEC table — the excellent table they have — but they've got: "typically requires some schema changes and extra care when the schema evolves."

Nikolay: Yeah. Well, if you have schema changes — if you use logical replication, each schema change will break it.

Michael: Yeah.

Nikolay: In current versions of Postgres. There is ongoing work to improve this, but the publisher schema should match the subscriber schema, right? And if you use this, you need to take care of DDL.

For example, in pglogical there is a special function which you need to use as a wrapper for all your DDL. So there are limitations here, definitely. You are right.

Michael: But it's an interesting area — I very much credit the CYBERTEC team for that. But yeah, it's super interesting, and it feels like we've got a few really good options. Depending on our use case, it'd be very surprising [00:31:00] if one of those doesn't work on some level.

So hopefully that's given people plenty to think about. I did have one, uh, slightly less serious question for you. Have you seen the PG audit logo?

Nikolay: No — I mean, I'm sure I saw it, but what about it?

Michael: I don't know — I can't really work it out. I think it's just got, like, a golf visor on the Postgres elephant.

Nikolay: Let me, let me check it.

Michael: is that what audit people wear?

Nikolay: Uh, okay. It, it has green. Yeah, yeah, yeah. I'm not sure. Never thought about it.

Michael: So weird. Um, but yeah, I, it

Nikolay: It doesn't look serious, right? It's a serious extension with a not-serious-at-all logo.

Michael: Extremely.

Nikolay: Before we finish, let me put the fourth option on the table, which I think may be the winning one.

So imagine if we could observe all queries on the server — not installing triggers, [00:32:00] not writing anything to the log — and send them somewhere, like a log collector, using UDP, to different servers. Somehow we send everything, all details about each query executed — even not-yet-finished ones; we can send information about a query which has only just started.

Of course, we could do it by observing pg_stat_activity, but it has a limited query column — by default its length is—

Michael: Like 5,000 characters or

Nikolay: 1024 by default — track_activity_query_size; I never remember. But here it's not limited, so we can see everything, and we can send it, and everything comes almost for free. Imagine such an approach. It's possible with modern Linux, and it's called eBPF.
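For reference, the pg_stat_activity limit mentioned here is a server setting; raising it is a sketch like this (the value is just an example, and a restart is required):

```sql
ALTER SYSTEM SET track_activity_query_size = 32768;  -- bytes; default 1024
-- ...restart the server, then verify:
SHOW track_activity_query_size;
```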

Michael: Oh, interesting.

Nikolay: Yes. We had a couple of episodes recently on Postgres TV [00:33:00] about this particular thing — but it was about monitoring and query performance troubleshooting, so observability. But nothing prevents us from using this exact approach for auditing purposes. And I think it has obvious pros and cons. The con is complexity — you need to write it and deal with it. But it can be very, very low overhead, because we don't need to sample it, and we don't need to go to the execution plan here. We just need information about the query, with parameters, and who did it.

That's it. And it can cover SELECTs, right?

Michael: Some of the same downsides that pgAudit solved, though — I know it's a very contrived example, but I would guess, for example, the piecing together of the object name in the pgAudit README—

Nikolay: Oh yeah, you are right. Mm-hmm.

Michael: But other than that, it does sound really interesting.

Nikolay: Yes. And the Postgres log — it's a single [00:34:00] file, and there is a lack of flexibility in maintaining that file, and of course the overhead of writing it to disk. Here we could use the network and send this somewhere, and filter, and so on. But you're right, in terms of dynamic SQL.

But you're right, in terms of dynamic sequel.

Michael: But yeah — I don't know enough to know how common an issue that is. Go—

Nikolay: Actually, probably we can extract relations somehow — well worth checking. I think it's possible to extract relations and to have additional tags: what database objects are involved, which user initiated it, and so on. So I wouldn't—

Michael: That might solve it.

Nikolay: Yeah. Well, I'm not aware of any work in this direction — I just came up with this idea like 10 minutes before we started.

So [00:35:00] it's a fresh idea, but I'm quite sure many people have already thought about it. It's obvious — eBPF is the future of observability, so maybe it'll be the future of auditing.

Michael: That would be exciting — if me forcing you to talk about a boring subject actually comes to some good.

Nikolay: Well, it's about building something, so it's becoming interesting.

Michael: Of course. Awesome. Anything else? Any last comments or thoughts?

Nikolay: I think that's it.

Michael: Nice one. Well, thank you everybody, and thank you, Nikolay. See you next week.

Nikolay: Thank you. Bye. See you.
