<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: alexey.zh</title>
    <description>The latest articles on DEV Community by alexey.zh (@alzhi_f93e67fa45b972).</description>
    <link>https://dev.to/alzhi_f93e67fa45b972</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2662567%2F21a8587a-f22f-4f4b-acf6-e0c17d9ee873.jpeg</url>
      <title>DEV Community: alexey.zh</title>
      <link>https://dev.to/alzhi_f93e67fa45b972</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alzhi_f93e67fa45b972"/>
    <language>en</language>
    <item>
      <title>A Long Story about how I dug into the PostgreSQL source code to write my own WAL receiver, and what came out of it</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sat, 18 Apr 2026 03:36:23 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/a-long-story-about-how-i-dug-into-the-postgresql-source-code-to-write-my-own-wal-receiver-and-what-1648</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/a-long-story-about-how-i-dug-into-the-postgresql-source-code-to-write-my-own-wal-receiver-and-what-1648</guid>
      <description>&lt;p&gt;Some thoughts are unpredictable.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;strong&gt;"I wonder how &lt;code&gt;pg_receivewal&lt;/code&gt; works internally?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the outside, it sounds almost innocent. Really, what could possibly be wrong with that? Just ordinary engineering curiosity. I will take a quick look,&lt;br&gt;
understand the general structure, satisfy my curiosity, and then go on living peacefully.&lt;/p&gt;

&lt;p&gt;But then, for some reason, this happens:&lt;br&gt;
you are already building PostgreSQL from source, digging into &lt;code&gt;receivelog.c&lt;/code&gt;, comparing the behavior of your little creation with the original step by&lt;br&gt;
step, arguing with &lt;code&gt;fsync&lt;/code&gt;, looking at &lt;code&gt;.partial&lt;/code&gt; files like old friends, and suddenly discovering that you are writing&lt;br&gt;
your own &lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;WAL receiver&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In short, everything started quite normally and with absolutely no signs of anything serious.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why PostgreSQL in the First Place
&lt;/h2&gt;

&lt;p&gt;I have been using PostgreSQL as the main DBMS in almost all of my projects for a long time - both personal and work-related. And the longer you&lt;br&gt;
work with it, the more clearly you understand: this is not just a "good database". This is a system designed by people with a very&lt;br&gt;
serious engineering culture.&lt;/p&gt;

&lt;p&gt;When you read notes, discussions, and articles from PostgreSQL developers, you quickly notice how deeply they think through&lt;br&gt;
changes, trade-offs, new features, and behavior in complex scenarios. After such materials, I usually&lt;br&gt;
had a mixed feeling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;admiration&lt;/li&gt;
&lt;li&gt;respect&lt;/li&gt;
&lt;li&gt;and a slight feeling that I had once again glimpsed work at a level far beyond my reach&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PostgreSQL gives you everything you need out of the box for backups and continuous WAL archiving. Including&lt;br&gt;
&lt;code&gt;pg_receivewal&lt;/code&gt; - the utility that eventually set everything in motion for me.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Exactly &lt;code&gt;pg_receivewal&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Because it is a very good utility. And good utilities are especially dangerous: they make you want to understand exactly how they&lt;br&gt;
are built.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pg_receivewal&lt;/code&gt; continuously receives WAL segments, can work in synchronous and asynchronous replication modes, and in general&lt;br&gt;
looks fairly straightforward. From a distance.&lt;/p&gt;

&lt;p&gt;Up close, it turns out that there are quite a few subtle things there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how the main loop starts&lt;/li&gt;
&lt;li&gt;how connection drops are survived&lt;/li&gt;
&lt;li&gt;how restart is performed&lt;/li&gt;
&lt;li&gt;at what point &lt;code&gt;.partial&lt;/code&gt; becomes a complete WAL file&lt;/li&gt;
&lt;li&gt;how timeline switching is handled&lt;/li&gt;
&lt;li&gt;where and when important &lt;code&gt;fsync&lt;/code&gt; calls must happen&lt;/li&gt;
&lt;li&gt;what to do so that it is reliable, not slow, and not embarrassing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, as usual: a simple utility with a decent amount of engineering accuracy hidden around it.&lt;/p&gt;
&lt;h2&gt;
  
  
  A Few Words About Other Good Solutions I Looked at With Respect and Envy
&lt;/h2&gt;

&lt;p&gt;Before writing something of my own, of course, I spent a lot of time looking at already existing solutions.&lt;/p&gt;

&lt;p&gt;I use two of them at work for continuous archiving of our most critical databases.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;pgBackRest&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgBackRest&lt;/code&gt; is, without exaggeration, an engineering tank. Everything in its source code is impressive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;logging&lt;/li&gt;
&lt;li&gt;testing&lt;/li&gt;
&lt;li&gt;architectural discipline&lt;/li&gt;
&lt;li&gt;incremental and differential backups&lt;/li&gt;
&lt;li&gt;support for large installations&lt;/li&gt;
&lt;li&gt;attention to edge cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And, of course, validation by the community and by time.&lt;/p&gt;

&lt;p&gt;When you read the code of this tool, you catch yourself thinking: yes, this is what a product&lt;br&gt;
written by people who know what they are doing looks like.&lt;br&gt;
And then you open your own repository and immediately become humble.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;Barman&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;I like Barman for a different reason.&lt;br&gt;
It does not try to magically solve everything in the world.&lt;br&gt;
It is, essentially, a very understandable orchestrator around standard PostgreSQL tools: &lt;code&gt;pg_receivewal&lt;/code&gt; and &lt;code&gt;pg_basebackup&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It has a quality that I value a lot: &lt;strong&gt;a simple and reliable model&lt;/strong&gt;.&lt;br&gt;
Not "everything at once", but careful automation around already existing, proven tools.&lt;/p&gt;

&lt;p&gt;This also strongly influenced how I started thinking about my own tool.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Go, If I Had to Look at So Much C
&lt;/h2&gt;

&lt;p&gt;I decided to write my tool in Go.&lt;/p&gt;

&lt;p&gt;The reasons are fairly ordinary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;recently, I have really enjoyed writing in this concise language&lt;/li&gt;
&lt;li&gt;simplicity and a UNIX background&lt;/li&gt;
&lt;li&gt;it is convenient for writing network and system-level things&lt;/li&gt;
&lt;li&gt;concurrency is handled well in it&lt;/li&gt;
&lt;li&gt;it fits cloud-native scenarios very naturally&lt;/li&gt;
&lt;li&gt;and, importantly, it is still a little harder to accidentally shoot yourself in the foot with a grenade launcher&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But there is an important nuance: to understand PostgreSQL, I had to seriously dig into C code.&lt;/p&gt;

&lt;p&gt;And here I want to separately say something I formulated for myself a long time ago:&lt;br&gt;
&lt;strong&gt;C is, in my opinion, both the most difficult and the most brilliant language at the same time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have not spent as much time on any other language trying to understand its semantics.&lt;br&gt;
Syntax is nothing - semantics are everything. Pointers alone look like a simple concept, but&lt;br&gt;
hide a whole chain of icebergs underneath. There was even a time when I was building a C compiler, with a preprocessor,&lt;br&gt;
an assembler, and PE32 output (&lt;code&gt;*.exe&lt;/code&gt;). I played with that for a long time; it was a fascinating experience and time happily spent.&lt;/p&gt;

&lt;p&gt;The C language is so direct, so honest, and so close to the metal that it becomes scary. It feels like&lt;br&gt;
it is very easy to make six sextillion mistakes in it just while opening a file and taking a breath. One pointer going the wrong way -&lt;br&gt;
and that is it, hello, a new form of humiliation. Segmentation Fault becomes a kind of spell that must not be said out loud, lest you&lt;br&gt;
summon it.&lt;/p&gt;

&lt;p&gt;With all that said, I cannot say that I know C.&lt;br&gt;
Honestly, I probably know about three percent of it. And even that only on a good day.&lt;/p&gt;

&lt;p&gt;But even those three percent were extremely useful to me.&lt;br&gt;
Without them, I would not have been able to read PostgreSQL properly: to separate real logic from my own delusions,&lt;br&gt;
follow the control flow, and at least roughly understand why everything here is arranged this way and not another.&lt;/p&gt;

&lt;p&gt;So formally I wrote the tool in Go, but in practice this project also became my way of touching C a little more deeply -&lt;br&gt;
and of gaining even more respect for the people who have been writing such systems in it for years.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Beginning: Compiling PostgreSQL, Debugging, and the First Signs of Recklessness
&lt;/h2&gt;

&lt;p&gt;To understand the implementation details at all, I had to go into the PostgreSQL source code.&lt;/p&gt;

&lt;p&gt;I had to learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build PostgreSQL from source&lt;/li&gt;
&lt;li&gt;run it in debug mode&lt;/li&gt;
&lt;li&gt;attach a debugger&lt;/li&gt;
&lt;li&gt;watch how calls flow&lt;/li&gt;
&lt;li&gt;understand what happens inside the replication loop&lt;/li&gt;
&lt;li&gt;establish the relationship between components and functions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here I got a surprise: all of this turned out to be less scary than I had imagined. PostgreSQL built, &lt;code&gt;pg_receivewal&lt;/code&gt;&lt;br&gt;
started, the debugger attached to the process, and this immediately gave me the dangerous confidence that "well,&lt;br&gt;
now I will definitely figure this out quickly".&lt;/p&gt;

&lt;p&gt;Of course, I did not figure it out.&lt;/p&gt;

&lt;p&gt;The first thing I did was, like a true amateur, add the most aggressive tracing possible. I logged everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;function entries&lt;/li&gt;
&lt;li&gt;exits&lt;/li&gt;
&lt;li&gt;variable values&lt;/li&gt;
&lt;li&gt;branches&lt;/li&gt;
&lt;li&gt;important calls&lt;/li&gt;
&lt;li&gt;and sometimes, it seemed, the mere fact that the universe existed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At first, it seems very clever. Then you have gigantic logs, you no longer understand whether you are reading the system or whether it is slowly&lt;br&gt;
breaking your mind, and the realization comes: &lt;strong&gt;many logs do not mean much understanding&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But at this stage, the overall picture started to emerge. I began to understand how entities are connected, where the WAL receiving&lt;br&gt;
loop starts, how errors are survived, what happens to &lt;code&gt;.partial&lt;/code&gt;, and at which moments decisions are made about completing a segment.&lt;br&gt;
I discovered libraries, very well-written and years-polished file-handling functions, and many more insanely cool things&lt;br&gt;
to file away for later.&lt;/p&gt;

&lt;p&gt;And at some point I could not resist: enough watching, time to write.&lt;/p&gt;
&lt;h2&gt;
  
  
  The First Prototype: "I Will Just Reproduce &lt;code&gt;pg_receivewal&lt;/code&gt;"
&lt;/h2&gt;

&lt;p&gt;I had a very naive idea: not to invent anything new, but simply to reproduce the behavior of&lt;br&gt;
&lt;code&gt;pg_receivewal&lt;/code&gt; as closely as possible.&lt;/p&gt;

&lt;p&gt;In theory, it sounds wonderful.&lt;br&gt;
In practice, it means that you voluntarily sign up for weeks of studying:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exactly how the streaming loop starts&lt;/li&gt;
&lt;li&gt;how it reacts to connection drops with the database&lt;/li&gt;
&lt;li&gt;what a correct restart should look like, from which file and from which offset inside it&lt;/li&gt;
&lt;li&gt;when a &lt;code&gt;.partial&lt;/code&gt; file can be considered complete&lt;/li&gt;
&lt;li&gt;how timeline changes are handled&lt;/li&gt;
&lt;li&gt;where you misunderstood something&lt;/li&gt;
&lt;li&gt;and where you no longer understand anything at all, but continue out of stubbornness&lt;/li&gt;
&lt;/ul&gt;
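&lt;p&gt;The restart point alone is a nice illustration: the receiver has to turn the last flushed LSN back into a segment file name and a byte offset inside it. Here is a sketch of that arithmetic - my own illustration, not the actual &lt;code&gt;pgrwl&lt;/code&gt; code, assuming the default 16 MiB &lt;code&gt;wal_segment_size&lt;/code&gt;:&lt;/p&gt;

```go
package main

import "fmt"

const walSegSize = 16 * 1024 * 1024 // default wal_segment_size: 16 MiB

// walSegmentName builds the canonical 24-hex-digit WAL file name
// (timeline, "log" id, segment) the way PostgreSQL composes it.
func walSegmentName(tli uint32, lsn uint64) string {
	segsPerXLogID := uint64(0x100000000 / walSegSize) // 256 segments per 4 GiB "log" id
	segNo := lsn / walSegSize
	return fmt.Sprintf("%08X%08X%08X", tli, segNo/segsPerXLogID, segNo%segsPerXLogID)
}

// walSegmentOffset is the byte position inside that segment to resume writing from.
func walSegmentOffset(lsn uint64) uint64 { return lsn % walSegSize }

func main() {
	lsn := uint64(0x16D45D90) // i.e. LSN 0/16D45D90
	fmt.Println(walSegmentName(1, lsn)) // 000000010000000000000016
	fmt.Println(walSegmentOffset(lsn))  // 13917584
}
```

&lt;p&gt;Getting exactly this mapping wrong is how a restart quietly resumes from the wrong place.&lt;/p&gt;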

&lt;p&gt;My first more-or-less stable prototype appeared after a couple of weeks. And those were very fun weeks. At times I&lt;br&gt;
felt like a researcher and a super-cool mega-hacker, at other times - like someone who had crawled into an aircraft engine to repair it,&lt;br&gt;
without a license, armed only with someone else's notes.&lt;/p&gt;

&lt;p&gt;But there is one thing I really want to point out: PostgreSQL code is surprisingly pleasant to read. Good comments, competent&lt;br&gt;
decomposition, respect for the reader and colleagues. Even if you yourself understand about twenty percent, it is still clear that in front of you is very&lt;br&gt;
strong engineering work.&lt;/p&gt;
&lt;h2&gt;
  
  
  When You Realize That Simply Receiving WAL Is Only the Beginning
&lt;/h2&gt;

&lt;p&gt;When the prototype finally worked, the joy did not last long.&lt;/p&gt;

&lt;p&gt;Because I already understood: &lt;strong&gt;receiving WAL is only half the job&lt;/strong&gt;. And then the usual engineering carnival begins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;compression&lt;/li&gt;
&lt;li&gt;encryption&lt;/li&gt;
&lt;li&gt;uploading to S3&lt;/li&gt;
&lt;li&gt;uploading to SFTP&lt;/li&gt;
&lt;li&gt;cleaning up old files&lt;/li&gt;
&lt;li&gt;monitoring&lt;/li&gt;
&lt;li&gt;external scripts&lt;/li&gt;
&lt;li&gt;cron&lt;/li&gt;
&lt;li&gt;more scripts&lt;/li&gt;
&lt;li&gt;and then scripts that fix the previous scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And I have never liked this universe of external glue. Because it almost always looks like it was written&lt;br&gt;
at night under the threat of a production incident, and then everyone was afraid to touch it. And all of it smells bad and looks disgusting.&lt;/p&gt;

&lt;p&gt;Scripts around WAL archiving are often fragile, non-obvious, poorly tested, and live on faith that "it somehow&lt;br&gt;
works". And in critical things, I wanted exactly the opposite.&lt;/p&gt;

&lt;p&gt;I wanted the main program itself to manage the archive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;to know what can already be compressed&lt;/li&gt;
&lt;li&gt;to know what still cannot be deleted&lt;/li&gt;
&lt;li&gt;to understand when a file can be sent to remote storage&lt;/li&gt;
&lt;li&gt;and not to try to make such decisions through a layer of suspicious bash magic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So management components began to appear around the WAL receiver:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one receives the log&lt;/li&gt;
&lt;li&gt;another archives and encrypts&lt;/li&gt;
&lt;li&gt;a third sends files to S3 or SFTP&lt;/li&gt;
&lt;li&gt;a fourth handles retention and automatic cleanup&lt;/li&gt;
&lt;li&gt;a fifth collects metrics and monitors process state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And at that point, the project stopped being "just a utility". It started turning into a small system where coordination,&lt;br&gt;
order, and the absence of internal fights between components mattered.&lt;/p&gt;
&lt;h2&gt;
  
  
  About Base Backup: I Did Not Want To, but Curiosity Won
&lt;/h2&gt;

&lt;p&gt;Initially, I had no intention of implementing base backup at all.&lt;/p&gt;

&lt;p&gt;The reason is simple: the replication protocol is single-threaded. For small databases, that is fine. For large ones - not so rosy anymore.&lt;br&gt;
If a backup takes ten hours every ten hours, that is, to put it mildly, not always convenient.&lt;/p&gt;

&lt;p&gt;Multi-threaded approaches usually require the tool to live next to the database itself. And I wanted exactly the opposite: to remotely&lt;br&gt;
collect WAL and make backups from databases located anywhere - in the cloud, on virtual machines, in Kubernetes - and at the same time not&lt;br&gt;
require sidecar containers or any special infrastructure changes from them.&lt;/p&gt;

&lt;p&gt;But then the thing that happens to many technical projects happened:&lt;br&gt;
I did not plan this functionality, and then it simply became interesting.&lt;/p&gt;

&lt;p&gt;In the end, I did implement streaming base backup. It does not claim to be a universal solution for huge&lt;br&gt;
installations, but for databases around 200 GiB it turned out to be quite practical. A couple of hours for a nightly job is already a reasonable&lt;br&gt;
scenario.&lt;/p&gt;

&lt;p&gt;So it turned out not to be a "superweapon", but an honest working tool in a clear niche.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why I Did Not Go Deeper Into Incremental Backups
&lt;/h3&gt;

&lt;p&gt;Of course, I also looked at incremental / differential backups.&lt;/p&gt;

&lt;p&gt;But there you quickly understand an unpleasant thing: taking an incremental backup is not victory yet. You then have to&lt;br&gt;
assemble it back correctly. And that means a completely different level of complexity begins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;either write your own analogue of &lt;code&gt;pg_combinebackup&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;or very carefully depend on an external tool&lt;/li&gt;
&lt;li&gt;or drown in the number of edge cases and incompatibilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point I honestly looked at the task and decided that I already had enough problems without it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pgBackRest&lt;/code&gt; does such things in a truly well-thought-out way. But reproducing that level is not "built over a couple of&lt;br&gt;
weekends on enthusiasm". It is large, heavy engineering work for years. So I consciously stopped at a simpler&lt;br&gt;
model: reliable base backup for small and medium production environments.&lt;/p&gt;

&lt;p&gt;Without claims to world domination. Just a working, predictable thing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture: The Moment When You Are No Longer Writing a Utility but Coordinating Chaos
&lt;/h2&gt;

&lt;p&gt;As soon as you have several background processes, it immediately becomes clear that the main difficulty is no longer WAL as&lt;br&gt;
such, but making sure this whole household does not fight with itself.&lt;/p&gt;

&lt;p&gt;You need to be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;not start a backup if another one has not finished yet&lt;/li&gt;
&lt;li&gt;not start archiving if it is already running&lt;/li&gt;
&lt;li&gt;not delete something that may still be needed&lt;/li&gt;
&lt;li&gt;handle errors correctly&lt;/li&gt;
&lt;li&gt;carefully stop background processes&lt;/li&gt;
&lt;li&gt;keep the system in a predictable state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here I had to seriously think about patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;job queue&lt;/li&gt;
&lt;li&gt;worker pool&lt;/li&gt;
&lt;li&gt;supervisor&lt;/li&gt;
&lt;li&gt;pipes&lt;/li&gt;
&lt;li&gt;task lifecycle management&lt;/li&gt;
&lt;li&gt;safe shutdown&lt;/li&gt;
&lt;li&gt;goroutine coordination&lt;/li&gt;
&lt;/ul&gt;
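&lt;p&gt;The "do not start a job that is already running" rules boil down to a small guard around each job class. A minimal sketch of the idea - illustrative names, not the actual &lt;code&gt;pgrwl&lt;/code&gt; code; &lt;code&gt;TryLock&lt;/code&gt; needs Go 1.18 or newer:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// jobGuard gives each background job class its own lock, so a new run is
// skipped, not queued, while the previous one is still in flight.
type jobGuard struct{ mu sync.Mutex }

// runExclusive executes fn only if no other run of this job is active.
// It reports whether fn actually ran.
func (g *jobGuard) runExclusive(fn func()) bool {
	if g.mu.TryLock() {
		defer g.mu.Unlock()
		fn()
		return true
	}
	return false // previous run still active: skip this tick
}

func main() {
	var backup jobGuard
	ran := backup.runExclusive(func() { fmt.Println("backup started") })
	fmt.Println("ran:", ran)
}
```

&lt;p&gt;The important property is that an overlapping tick is skipped rather than queued: a backup that is already hours in does not accumulate a pile of pending runs behind it.&lt;/p&gt;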

&lt;p&gt;At some point I realized that I was no longer "writing a WAL receiver". I was assembling a gearbox. And if even one gear&lt;br&gt;
shifts a little, all of this will either start screaming or silently break. And silently breaking software is the worst kind of software.&lt;br&gt;
At the same time, the main task was to make sure the main WAL receiving process was not affected by "noisy neighbors".&lt;/p&gt;
&lt;h2&gt;
  
  
  Streaming Large Files: Another Source of Creativity
&lt;/h2&gt;

&lt;p&gt;There is another pleasant task as well: transferring large backup files to remote storage.&lt;/p&gt;

&lt;p&gt;When a database weighs, for example, 300 GiB, you quickly understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you do not want to save everything locally, and often it is not convenient&lt;/li&gt;
&lt;li&gt;you cannot pull it all into memory&lt;/li&gt;
&lt;li&gt;you also do not want to write a crooked intermediate scheme, because you will have to maintain it yourself later&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So you need a proper streaming pipeline: read the data, transform it on the way, and immediately send it further - without&lt;br&gt;
intermediate garbage, without extra storage, without special effects.&lt;/p&gt;

&lt;p&gt;Here Go was useful again. It has good primitives for streaming processing. Although the presence of primitives, of course, does not&lt;br&gt;
stop you from making design mistakes for a very long time.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;code&gt;fsync&lt;/code&gt;: The Most Subtle Part and My Own Little Nervous Breakdown
&lt;/h2&gt;

&lt;p&gt;If I had to choose what drained the most blood from me, the winner is obvious: &lt;code&gt;fsync&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is the place where you first think: "well, this part is simple". And then you discover that you have been staring at&lt;br&gt;
the &lt;code&gt;receivelog.c&lt;/code&gt; source code for several hours with the expression of a person who has voluntarily entered a very strange stage of life.&lt;/p&gt;

&lt;p&gt;The problem here is that it is easy to be wrong in both directions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;call &lt;code&gt;fsync&lt;/code&gt; too often - everything slows down&lt;/li&gt;
&lt;li&gt;call it too rarely - later you may look at the result very sadly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So it is either slow or shameful. Quite a rich choice, to put it mildly.&lt;/p&gt;

&lt;p&gt;I had to literally compare the behavior of my implementation with &lt;code&gt;pg_receivewal&lt;/code&gt; step by step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;where exactly synchronization happens&lt;/li&gt;
&lt;li&gt;at what moment&lt;/li&gt;
&lt;li&gt;why exactly there&lt;/li&gt;
&lt;li&gt;which scenarios must force &lt;code&gt;fsync&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;and how to do neither too much nor too little&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, the key points turned out to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;fsync&lt;/code&gt; after finishing writing a segment&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fsync&lt;/code&gt; when renaming &lt;code&gt;.partial&lt;/code&gt; to the final WAL file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fsync&lt;/code&gt; on keepalive if the server requests a reply&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fsync&lt;/code&gt; on errors in the receiving loop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then the truly fun part began: integration checks. I ran two receivers simultaneously (&lt;code&gt;pg_receivewal&lt;/code&gt;, &lt;code&gt;pgrwl&lt;/code&gt;), generated&lt;br&gt;
WAL, compared the resulting files byte by byte, measured timing differences down to the millisecond, and tried to remove&lt;br&gt;
everything unnecessary.&lt;/p&gt;

&lt;p&gt;That also brought me to logging: in places like this, you begin to understand that it can be either a helper or a quiet&lt;br&gt;
saboteur. For example, you do not need to parse attributes if the logging level does not require it; extra CPU cycles&lt;br&gt;
can be spent on more useful things.&lt;/p&gt;

&lt;p&gt;In the end, I managed to achieve very similar behavior and complete matching of the resulting WAL files over the same interval. And&lt;br&gt;
the small timing difference remained only where it is normal: two daemons cannot be started in the exact same&lt;br&gt;
physical microsecond, no matter how hard you try.&lt;/p&gt;

&lt;p&gt;In the fight against slowness, I even quickly wrote a small utility that injects&lt;br&gt;
a timing &lt;code&gt;defer&lt;/code&gt; into EVERY function, measuring the runtime of each call. Not the most precise measurement,&lt;br&gt;
but, as practice showed, it helps quickly identify especially hot functions, and then point&lt;br&gt;
the profiler, debugger, and so on at them. My tracing looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FUNCTION                            CALLS  TOTAL_NS     TOTAL_SEC
--------                            -----  --------     ---------
storecrypt.Put                      70     23061361400  23.06
receivesuperv.uploadOneFile         35     11606918000  11.61
fsync.Fsync                         106    8813968000   8.81
xlog.processOneMsg                  4481   6818721600   6.82
xlog.processXLogDataMsg             4481   6814495400   6.81
xlog.CloseWalFile                   35     6561511500   6.56
xlog.closeAndRename                 35     6559979000   6.56
fsync.FsyncFname                    70     6525596900   6.53

.....500 more lines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Metrics: Because I Wanted to See Whether It Was Still Alive or Already Dead
&lt;/h2&gt;

&lt;p&gt;Over time, I also added metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;number of files&lt;/li&gt;
&lt;li&gt;archive size&lt;/li&gt;
&lt;li&gt;number of errors&lt;/li&gt;
&lt;li&gt;transferred bytes&lt;/li&gt;
&lt;li&gt;state of background tasks&lt;/li&gt;
&lt;li&gt;deleted files&lt;/li&gt;
&lt;li&gt;general runtime statistics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I even made a Grafana dashboard. Not the most beautiful one in the world, but useful enough to tell at a glance whether everything is still&lt;br&gt;
alive or it is already time to get nervous.&lt;/p&gt;

&lt;p&gt;It was important to me to make metrics free if they are disabled. So wherever possible, I used the&lt;br&gt;
noop approach: if observability is not needed, the system should not pay for it.&lt;/p&gt;
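&lt;p&gt;The noop approach is simply an interface with an empty implementation behind it. A sketch (illustrative types, not the actual &lt;code&gt;pgrwl&lt;/code&gt; metrics):&lt;/p&gt;

```go
package main

import "fmt"

// Metrics is the only interface the hot path ever sees.
type Metrics interface {
	AddBytes(n int)
}

// promMetrics would wrap a real Prometheus counter; plain counting stands in here.
type promMetrics struct{ bytes int }

func (m *promMetrics) AddBytes(n int) { m.bytes += n }

// noopMetrics makes every call an empty, inlinable method: when observability
// is switched off, the receiver pays essentially nothing for it.
type noopMetrics struct{}

func (noopMetrics) AddBytes(int) {}

func newMetrics(enabled bool) Metrics {
	if enabled {
		return new(promMetrics)
	}
	return noopMetrics{}
}

func main() {
	m := newMetrics(false) // metrics disabled: all calls are no-ops
	m.AddBytes(8192)
	fmt.Printf("%T\n", m)
}
```

&lt;p&gt;The hot path calls &lt;code&gt;m.AddBytes&lt;/code&gt; unconditionally either way; the decision is made once, at construction time, not on every write.&lt;/p&gt;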

&lt;h2&gt;
  
  
  Logging: Where I Also Realized I Still Have a Long Way to Go
&lt;/h2&gt;

&lt;p&gt;Logging had its own coming-of-age story.&lt;/p&gt;

&lt;p&gt;At first, I logged everything. Because, as everyone knows, any person who has deeply entered a complex system for the first time&lt;br&gt;
starts with the phrase: "I will just add more logs and understand everything".&lt;/p&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;Many logs are not understanding. They are just many logs.&lt;/p&gt;

&lt;p&gt;Good logging is when, at the moment of a problem, logs really help you understand what is going on, and do not turn into&lt;br&gt;
an additional source of noise and despair.&lt;/p&gt;

&lt;p&gt;I have not yet managed to make this part as good as I would like. The current result is normal, but&lt;br&gt;
not exemplary. And in this sense, &lt;code&gt;pgBackRest&lt;/code&gt; still remains for me an example of a very smart and thoughtful approach: you can see&lt;br&gt;
how much discipline and engineering care went specifically into diagnostics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration Tests: The Hardest and Most Important Part
&lt;/h2&gt;

&lt;p&gt;One of the most difficult and at the same time most necessary parts of the whole project is integration testing.&lt;/p&gt;

&lt;p&gt;Because a daemon that depends on another daemon is already not the easiest object to test. And if you&lt;br&gt;
also want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;start PostgreSQL&lt;/li&gt;
&lt;li&gt;generate WAL&lt;/li&gt;
&lt;li&gt;stop processes&lt;/li&gt;
&lt;li&gt;make a backup&lt;/li&gt;
&lt;li&gt;restore the database&lt;/li&gt;
&lt;li&gt;compare the state before and after&lt;/li&gt;
&lt;li&gt;run failure scenarios&lt;/li&gt;
&lt;li&gt;check compatibility and correctness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then life starts playing in especially bright colors.&lt;/p&gt;

&lt;p&gt;I settled on this approach: simple shell scripts that start the test environment in a container,&lt;br&gt;
populate the database, perform actions, then restore everything and check the result.&lt;br&gt;
I also really did not want to drag a ton of dependencies like testcontainers into the project.&lt;/p&gt;

&lt;p&gt;In the end, it turned out like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shell scripts&lt;/li&gt;
&lt;li&gt;docker compose&lt;/li&gt;
&lt;li&gt;matrix in GitHub Actions&lt;/li&gt;
&lt;li&gt;isolated scenarios&lt;/li&gt;
&lt;li&gt;without unnecessary heavy magic where understandable mechanics are enough&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is how I got tests for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;comparison with &lt;code&gt;pg_receivewal&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;backup/restore&lt;/li&gt;
&lt;li&gt;uploading to S3 and SFTP&lt;/li&gt;
&lt;li&gt;correctness of WAL files&lt;/li&gt;
&lt;li&gt;stopping and restarting&lt;/li&gt;
&lt;li&gt;different failure scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And honestly, integration tests are what give me the main confidence in releases. Not one hundred percent, of course. One hundred&lt;br&gt;
percent in such things is promised either by madmen or by marketers. But good, engineering-honest confidence - yes.&lt;/p&gt;

&lt;p&gt;Unit tests, of course, also exist. But for me, integration checks are the main criterion&lt;br&gt;
that all of this is not only nicely written (well, not everywhere), but actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Came Out of It
&lt;/h2&gt;

&lt;p&gt;Over time, from the fairly harmless desire to "just see how &lt;code&gt;pg_receivewal&lt;/code&gt; works", a tool grew that now has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;streaming WAL receiver&lt;/li&gt;
&lt;li&gt;archiving&lt;/li&gt;
&lt;li&gt;compression&lt;/li&gt;
&lt;li&gt;encryption (streaming AES-256-GCM)&lt;/li&gt;
&lt;li&gt;uploading to S3 (streaming, +multipart)&lt;/li&gt;
&lt;li&gt;uploading to SFTP&lt;/li&gt;
&lt;li&gt;retention and automatic cleanup&lt;/li&gt;
&lt;li&gt;metrics&lt;/li&gt;
&lt;li&gt;logging (mostly zero-cost)&lt;/li&gt;
&lt;li&gt;base backup&lt;/li&gt;
&lt;li&gt;configuration through a file and environment variables&lt;/li&gt;
&lt;li&gt;controlled shutdown&lt;/li&gt;
&lt;li&gt;unit and integration tests&lt;/li&gt;
&lt;li&gt;behavior comparison with &lt;code&gt;pg_receivewal&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;documentation with diagrams and examples&lt;/li&gt;
&lt;li&gt;as many usage examples as possible (standalone/docker-compose/k8s)&lt;/li&gt;
&lt;li&gt;a Helm chart (quite simple, but working)&lt;/li&gt;
&lt;li&gt;website (in progress, but at least now it is clear how this is done and that it is possible)&lt;/li&gt;
&lt;li&gt;a set of patterns and libraries for further reuse in Go projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, as usually happens, the project long ago stopped being what it seemed to be at the beginning.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Planned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;improve metrics, remove what is unnecessary, add what is needed, build a truly useful and beautiful dashboard&lt;/li&gt;
&lt;li&gt;improve logging quality, make it consistent, think through levels more carefully, preserve zero-cost semantics&lt;/li&gt;
&lt;li&gt;add new capabilities for base backup - around fine-tuning retention periods&lt;/li&gt;
&lt;li&gt;a huge amount of space for refactoring and documentation&lt;/li&gt;
&lt;li&gt;add even more integration tests - a V2 of the test suite is planned&lt;/li&gt;
&lt;li&gt;add every "breaking" scenario to the tests that my imagination can produce&lt;/li&gt;
&lt;li&gt;make the website properly, right now it is just a copy of the documentation&lt;/li&gt;
&lt;li&gt;create a user guide (because it is simply interesting)&lt;/li&gt;
&lt;li&gt;and much more&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Took Away From This
&lt;/h2&gt;

&lt;p&gt;Perhaps the main result is not that I wrote yet another tool.&lt;/p&gt;

&lt;p&gt;The main result is something else:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I understood PostgreSQL much more deeply&lt;/li&gt;
&lt;li&gt;I gained even more respect for C, although I know about a miserable three percent of it&lt;/li&gt;
&lt;li&gt;I saw how difficult it is to reproduce even a small part of the behavior of a well-made system utility&lt;/li&gt;
&lt;li&gt;and once again I became convinced that high-quality code written by others is the best way to quickly cure yourself of excessive
self-confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because it is one thing to look at an architecture from the outside and admire it.&lt;br&gt;
It is quite another to try to reproduce even part of that logic yourself without falling apart along the way.&lt;/p&gt;

&lt;p&gt;And yes. If it ever seems to you that the thought&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"maybe I should also write some utility for PostgreSQL?"&lt;/strong&gt;&lt;br&gt;
sounds like a good idea for a couple of quiet weekends -&lt;/p&gt;

&lt;p&gt;I have two pieces of news for you.&lt;/p&gt;

&lt;p&gt;The first: the idea really is interesting.&lt;br&gt;
The second: you most likely will not have quiet weekends anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/current/app-pgrwl.html" rel="noopener noreferrer"&gt;pg_receivewal Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/postgres/postgres/blob/master/src/bin/pg_basebackup/pg_receivewal.c" rel="noopener noreferrer"&gt;pg_receivewal Source Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/current/protocol-replication.html" rel="noopener noreferrer"&gt;Streaming Replication Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/current/continuous-archiving.html" rel="noopener noreferrer"&gt;Continuous Archiving and Point-in-Time Recovery&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-ARCHIVING-WAL" rel="noopener noreferrer"&gt;Setting Up WAL Archiving&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pgbackrest.org/" rel="noopener noreferrer"&gt;pgBackRest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pgbarman.org/" rel="noopener noreferrer"&gt;Barman&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repository: &lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>go</category>
    </item>
    <item>
      <title>SQL-First PostgreSQL Migrations Without the Magic</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sun, 12 Apr 2026 14:29:11 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/sql-first-postgresql-migrations-without-the-magic-22b0</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/sql-first-postgresql-migrations-without-the-magic-22b0</guid>
      <description>&lt;p&gt;If you work with PostgreSQL long enough, you start noticing a pattern: migration tools often become more complicated than the schema changes they are supposed to manage.&lt;/p&gt;

&lt;p&gt;Some tools invent their own DSL.&lt;br&gt;
Some hide behavior in config files.&lt;br&gt;
Some couple migrations to an ORM.&lt;br&gt;
Some force a directory layout that looks neat in a demo but awkward in a real project.&lt;/p&gt;

&lt;p&gt;And then there is the simpler question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why can’t PostgreSQL migrations just stay plain SQL?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is the idea behind &lt;strong&gt;&lt;a href="https://github.com/hashmap-kz/gopgmigrate" rel="noopener noreferrer"&gt;gopgmigrate&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It is a SQL-first migration tool for PostgreSQL that keeps the core workflow boring in the best possible way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;write normal &lt;code&gt;.sql&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;organize them however you want&lt;/li&gt;
&lt;li&gt;run them in order&lt;/li&gt;
&lt;li&gt;track what was applied&lt;/li&gt;
&lt;li&gt;support rollbacks&lt;/li&gt;
&lt;li&gt;support repeatable migrations&lt;/li&gt;
&lt;li&gt;make non-transactional migrations explicit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No YAML. No hidden DSL. No ORM lock-in. No magic comments.&lt;/p&gt;

&lt;p&gt;Just SQL files and a clear naming convention.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why this approach matters
&lt;/h2&gt;

&lt;p&gt;A migration file should be easy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read in a code review&lt;/li&gt;
&lt;li&gt;open in your editor&lt;/li&gt;
&lt;li&gt;run directly with &lt;code&gt;psql&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;troubleshoot at 2 AM&lt;/li&gt;
&lt;li&gt;keep using even if you stop using the tool&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last point matters more than many teams realize.&lt;/p&gt;

&lt;p&gt;A good migration format should outlive the tool that executes it. Your schema history is long-term infrastructure. It should not depend on a framework-specific abstraction that becomes painful to migrate away from later.&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;gopgmigrate&lt;/strong&gt;, the migration files remain usable as ordinary SQL. The tool adds safety and structure on top, but it does not take ownership of your database change process.&lt;/p&gt;
&lt;h2&gt;
  
  
  What gopgmigrate does
&lt;/h2&gt;

&lt;p&gt;At a high level, the workflow is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scan a directory tree recursively for SQL migration files&lt;/li&gt;
&lt;li&gt;Sort them globally by revision&lt;/li&gt;
&lt;li&gt;Compare them with the migration history stored in PostgreSQL&lt;/li&gt;
&lt;li&gt;Apply only what is pending&lt;/li&gt;
&lt;li&gt;Record hashes and metadata for auditability&lt;/li&gt;
&lt;li&gt;Support rolling back the last applied migrations&lt;/li&gt;
&lt;li&gt;Re-run repeatable scripts only when their content changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That gives you a clean PostgreSQL migration workflow with a small mental model.&lt;/p&gt;
&lt;h2&gt;
  
  
  The naming convention is the API
&lt;/h2&gt;

&lt;p&gt;One of the nicest design choices in gopgmigrate is that the file name itself declares the migration behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfqb5lsh6ang4np4l2d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfqb5lsh6ang4np4l2d3.png" alt=" " width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000001-create-users-table.up.sql
0000001-create-users-table.down.sql
0000003-fn-get-users.r.up.sql
0000004-vacuum-users.notx.up.sql
0000005-refresh-stats.rnotx.up.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is refreshingly explicit.&lt;/p&gt;
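&lt;p&gt;The convention is also easy to parse mechanically. As a rough illustration of the idea (my own Go sketch, not gopgmigrate's actual parser, and the regular expression is an assumption), extracting the revision and modifiers from a file name could look like this:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"regexp"
)

// migration holds the behavior encoded in a file name.
type migration struct {
	version    string
	name       string
	repeatable bool
	noTx       bool
	direction  string // "up" or "down"
}

// migrationRe matches names like "0000003-fn-get-users.r.up.sql".
// This pattern is an illustration, not the tool's real parser.
var migrationRe = regexp.MustCompile(`^(\d+)-(.+?)\.(r|notx|rnotx)?\.?(up|down)\.sql$`)

func parse(fname string) (migration, error) {
	m := migrationRe.FindStringSubmatch(fname)
	if m == nil {
		return migration{}, fmt.Errorf("not a migration file: %s", fname)
	}
	return migration{
		version:    m[1],
		name:       m[2],
		repeatable: m[3] == "r" || m[3] == "rnotx",
		noTx:       m[3] == "notx" || m[3] == "rnotx",
		direction:  m[4],
	}, nil
}

func main() {
	examples := []string{
		"0000001-create-users-table.up.sql",
		"0000003-fn-get-users.r.up.sql",
		"0000005-refresh-stats.rnotx.up.sql",
	}
	for _, f := range examples {
		mg, err := parse(f)
		if err != nil {
			panic(err)
		}
		fmt.Printf("rev=%s repeatable=%v notx=%v dir=%s\n",
			mg.version, mg.repeatable, mg.noTx, mg.direction)
	}
}
```

&lt;p&gt;The point is that everything a reviewer needs to know about a migration's behavior is recoverable from the name alone.&lt;/p&gt;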

&lt;h3&gt;
  
  
  Versioned migrations
&lt;/h3&gt;

&lt;p&gt;These run once in order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000002-add-roles-table.up.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Rollbacks
&lt;/h3&gt;

&lt;p&gt;Rollback files are separate and predictable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000002-add-roles-table.down.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Repeatable migrations
&lt;/h3&gt;

&lt;p&gt;Useful for functions, views, triggers, or other SQL objects you may want to refresh when the file changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000003-fn-get-users.r.up.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Non-transactional migrations
&lt;/h3&gt;

&lt;p&gt;Some PostgreSQL operations cannot run inside a transaction, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;VACUUM&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CREATE INDEX CONCURRENTLY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DROP INDEX CONCURRENTLY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;some forms of &lt;code&gt;REINDEX&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ALTER SYSTEM&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are made explicit in the file name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000004-vacuum-users.notx.up.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if a migration is both repeatable and non-transactional:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000005-refresh-stats.rnotx.up.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a small detail, but it solves a real operational problem: the migration behavior is visible before you open the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real projects are not flat folders
&lt;/h2&gt;

&lt;p&gt;A lot of migration tools quietly assume every team wants the same directory structure.&lt;/p&gt;

&lt;p&gt;Reality is messier.&lt;/p&gt;

&lt;p&gt;Some teams want to split:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;schema&lt;/li&gt;
&lt;li&gt;data&lt;/li&gt;
&lt;li&gt;functions&lt;/li&gt;
&lt;li&gt;maintenance&lt;/li&gt;
&lt;li&gt;environment-specific files&lt;/li&gt;
&lt;li&gt;release-based groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why I like that gopgmigrate does &lt;strong&gt;not&lt;/strong&gt; force a rigid directory layout.&lt;/p&gt;

&lt;p&gt;You can organize migrations by concern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;migrations/
  schema/
  data/
  functions/
  no-transaction/
  down/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or by release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;migrations/
  v1.0.0/
  v1.1.0/
  down/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or however your team naturally thinks about database changes.&lt;/p&gt;

&lt;p&gt;The only rule is that version ordering remains global.&lt;/p&gt;

&lt;p&gt;That is a practical compromise: freedom in layout, predictability in execution.&lt;/p&gt;
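&lt;p&gt;The "free layout, global ordering" rule is easy to sketch: collect every versioned file regardless of its directory, then sort by the numeric revision alone. The Go snippet below is my own illustration of that idea, not the tool's code:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"path/filepath"
	"sort"
	"strconv"
	"strings"
)

// revision extracts the leading numeric prefix of a migration file name,
// e.g. 2 from "schema/0000002-add-roles-table.up.sql".
func revision(path string) int {
	base := filepath.Base(path)
	num, _, _ := strings.Cut(base, "-")
	n, _ := strconv.Atoi(num)
	return n
}

// sortGlobally orders migrations by revision, ignoring directory layout.
func sortGlobally(paths []string) []string {
	out := append([]string(nil), paths...)
	sort.Slice(out, func(i, j int) bool {
		return revision(out[j]) > revision(out[i])
	})
	return out
}

func main() {
	files := []string{
		"data/0000004-seed-roles.up.sql",
		"schema/0000001-create-users-table.up.sql",
		"functions/0000003-fn-get-users.r.up.sql",
		"schema/0000002-add-roles-table.up.sql",
	}
	// Prints the files in revision order, regardless of folder.
	for _, f := range sortGlobally(files) {
		fmt.Println(f)
	}
}
```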

&lt;h2&gt;
  
  
  Why SQL-first migrations are still the best default
&lt;/h2&gt;

&lt;p&gt;There is a reason SQL-first tools keep appealing to engineers who work close to PostgreSQL.&lt;/p&gt;

&lt;p&gt;PostgreSQL already has a powerful language for schema and data changes. It is called SQL.&lt;/p&gt;

&lt;p&gt;When a tool stays out of the way, you get a few concrete advantages:&lt;/p&gt;

&lt;h3&gt;
  
  
  Better reviewability
&lt;/h3&gt;

&lt;p&gt;A migration diff is just SQL. Reviewers do not have to mentally decode a framework abstraction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better portability
&lt;/h3&gt;

&lt;p&gt;You can run the file with &lt;code&gt;psql&lt;/code&gt;, a database IDE, automation scripts, or CI jobs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better debugging
&lt;/h3&gt;

&lt;p&gt;When something fails, you are looking at the actual statement PostgreSQL rejected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better longevity
&lt;/h3&gt;

&lt;p&gt;Your migration history remains useful years later, even if your application stack changes.&lt;/p&gt;

&lt;p&gt;That makes SQL-first migration tooling especially attractive for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;platform teams&lt;/li&gt;
&lt;li&gt;backend teams with multiple services&lt;/li&gt;
&lt;li&gt;teams that avoid ORM-heavy workflows&lt;/li&gt;
&lt;li&gt;projects with long-lived PostgreSQL databases&lt;/li&gt;
&lt;li&gt;teams that want plain operational ownership&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Safety features that matter in practice
&lt;/h2&gt;

&lt;p&gt;Simple does not mean naive.&lt;/p&gt;

&lt;p&gt;For a migration tool to be usable in production, it needs a few guardrails. gopgmigrate includes some of the right ones:&lt;/p&gt;

&lt;h3&gt;
  
  
  Advisory locking
&lt;/h3&gt;

&lt;p&gt;This helps prevent concurrent migration runs from stepping on each other.&lt;/p&gt;
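&lt;p&gt;PostgreSQL provides this via &lt;code&gt;pg_advisory_lock(bigint)&lt;/code&gt;: the first session that takes a given key proceeds, and every other runner blocks until the lock is released. A minimal sketch of the pattern (my illustration; the key name and derivation are assumptions, not gopgmigrate's internals):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// lockKey derives a stable bigint advisory-lock key from a name,
// so every runner of the same tool contends on the same lock.
func lockKey(name string) int64 {
	h := fnv.New64a()
	h.Write([]byte(name))
	return int64(h.Sum64())
}

func main() {
	key := lockKey("gopgmigrate")
	// In a real runner you would execute, on one session held for the
	// duration of the migration batch:
	//   SELECT pg_advisory_lock($1);    -- blocks concurrent runners
	//   ... apply pending migrations ...
	//   SELECT pg_advisory_unlock($1);
	fmt.Printf("SELECT pg_advisory_lock(%d);\n", key)
}
```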

&lt;h3&gt;
  
  
  Transactional safety by default
&lt;/h3&gt;

&lt;p&gt;Most PostgreSQL DDL can run inside a transaction, and that is the safe default.&lt;/p&gt;

&lt;h3&gt;
  
  
  Explicit non-transactional mode
&lt;/h3&gt;

&lt;p&gt;Instead of hiding exceptions, the tool makes them obvious in the filename.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hash-based change detection
&lt;/h3&gt;

&lt;p&gt;This is particularly useful for repeatable migrations. If the content changes, the tool knows it should re-apply the script.&lt;/p&gt;
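&lt;p&gt;The mechanism is simple to picture: hash the file content and compare it with the hash recorded in the history table. A rough Go sketch (assuming SHA-256; the tool's actual algorithm may differ):&lt;/p&gt;

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// contentHash returns a hex digest of a migration file's content.
func contentHash(content []byte) string {
	sum := sha256.Sum256(content)
	return hex.EncodeToString(sum[:])
}

// needsReapply decides whether a repeatable migration should run again:
// only when its content no longer matches the recorded hash.
func needsReapply(content []byte, recorded string) bool {
	return contentHash(content) != recorded
}

func main() {
	v1 := []byte("CREATE OR REPLACE VIEW active_users AS SELECT * FROM users;")
	recorded := contentHash(v1)

	fmt.Println(needsReapply(v1, recorded)) // unchanged content: no re-run
	v2 := append(v1, []byte(" -- edited")...)
	fmt.Println(needsReapply(v2, recorded)) // changed content: re-run
}
```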

&lt;h3&gt;
  
  
  History tracking
&lt;/h3&gt;

&lt;p&gt;Applied migrations are recorded in a history table, along with metadata such as hash and timing-related details.&lt;/p&gt;

&lt;p&gt;That is the kind of boring reliability you want from migration tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example CLI workflow
&lt;/h2&gt;

&lt;p&gt;The CLI is intentionally straightforward.&lt;/p&gt;

&lt;p&gt;Apply pending migrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gopgmigrate migrate &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dirname&lt;/span&gt; ./migrations &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--connstr&lt;/span&gt; postgres://user:pass@localhost:5432/mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Preview without applying:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gopgmigrate migrate &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dirname&lt;/span&gt; ./migrations &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--connstr&lt;/span&gt; postgres://user:pass@localhost:5432/mydb &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dry-run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rollback the last migration count:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gopgmigrate rollback-count 2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dirname&lt;/span&gt; ./migrations &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--connstr&lt;/span&gt; postgres://user:pass@localhost:5432/mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use environment variables in CI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PGMIGRATE_DIRNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./migrations
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PGMIGRATE_CONNSTR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres://user:pass@localhost:5432/mydb

gopgmigrate migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the kind of interface that works well in local development, CI pipelines, containerized jobs, and release automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this fits especially well
&lt;/h2&gt;

&lt;p&gt;I think gopgmigrate is especially appealing in a few scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. PostgreSQL-first teams
&lt;/h3&gt;

&lt;p&gt;If your team understands PostgreSQL and prefers direct SQL over framework migration layers, this fits naturally.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Teams with mixed migration types
&lt;/h3&gt;

&lt;p&gt;Schema changes, data fixes, repeatable view/function refreshes, and non-transactional maintenance are all first-class cases here.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Repos with real structure
&lt;/h3&gt;

&lt;p&gt;If your migration directory stopped being a cute flat demo folder a long time ago, recursive scanning and flexible layouts are genuinely useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. CI/CD and automation
&lt;/h3&gt;

&lt;p&gt;The CLI is simple enough to drop into pipelines without teaching your delivery system a new configuration language.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Engineers who dislike lock-in
&lt;/h3&gt;

&lt;p&gt;Your migration files stay plain SQL. That is a strong long-term property.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I like most about this design
&lt;/h2&gt;

&lt;p&gt;The best tools often win not because they do more, but because they make fewer damaging decisions for you.&lt;/p&gt;

&lt;p&gt;gopgmigrate seems built around a healthy principle:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the tool should manage execution, not redefine how SQL migrations ought to exist.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your files remain readable&lt;/li&gt;
&lt;li&gt;your shell workflows still work&lt;/li&gt;
&lt;li&gt;your database knowledge stays relevant&lt;/li&gt;
&lt;li&gt;your migration history does not become framework glue&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In database tooling, that is a strong design choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;There are plenty of PostgreSQL migration tools out there. Many are good. But a lot of them drift toward abstraction for its own sake.&lt;/p&gt;

&lt;p&gt;If what you want is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL migrations&lt;/li&gt;
&lt;li&gt;plain SQL files&lt;/li&gt;
&lt;li&gt;explicit rollbacks&lt;/li&gt;
&lt;li&gt;repeatable migrations&lt;/li&gt;
&lt;li&gt;non-transaction support&lt;/li&gt;
&lt;li&gt;advisory locking&lt;/li&gt;
&lt;li&gt;transactional safety&lt;/li&gt;
&lt;li&gt;hash-based change detection&lt;/li&gt;
&lt;li&gt;flexible directory layouts&lt;/li&gt;
&lt;li&gt;clean CLI usage&lt;/li&gt;
&lt;li&gt;minimal ceremony&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then &lt;strong&gt;gopgmigrate&lt;/strong&gt; is worth a look.&lt;/p&gt;

&lt;p&gt;It takes a very practical path: keep migrations human-readable, keep behavior explicit, and keep the tool small enough that you can trust what it is doing.&lt;/p&gt;

&lt;p&gt;That is a solid direction for database change management.&lt;/p&gt;

&lt;p&gt;If you find gopgmigrate useful, consider giving the repo a star on GitHub. It helps more people discover the project.&lt;/p&gt;

&lt;p&gt;Repository: &lt;a href="https://github.com/hashmap-kz/gopgmigrate" rel="noopener noreferrer"&gt;https://github.com/hashmap-kz/gopgmigrate&lt;/a&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>go</category>
      <category>database</category>
    </item>
    <item>
      <title>Finding Hidden Bottlenecks in Go Apps: A Lazy, Hacky, and Bruteforce Method</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Thu, 02 Apr 2026 07:23:37 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/finding-hidden-bottlenecks-in-go-apps-a-lazy-hacky-and-bruteforce-method-3dhb</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/finding-hidden-bottlenecks-in-go-apps-a-lazy-hacky-and-bruteforce-method-3dhb</guid>
      <description>&lt;p&gt;When developing &lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;pgrwl&lt;/a&gt; - a PostgreSQL WAL receiver - performance is a critical concern.&lt;/p&gt;

&lt;p&gt;Every part of the program must be predictable. There should be no hidden bottlenecks.&lt;/p&gt;

&lt;p&gt;But what about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;typos that silently degrade performance?&lt;/li&gt;
&lt;li&gt;missing tests that fail to catch inefficiencies?&lt;/li&gt;
&lt;li&gt;slow logic introduced "just for now"?&lt;/li&gt;
&lt;li&gt;accidental O(n^2) behavior?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These issues are often hard to detect.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;For instance, you may concatenate a huge template inside a loop when it could be built once outside of it. And this will work fine, until heavy load reveals the problem.&lt;/p&gt;

&lt;p&gt;Of course you should profile CPU and memory usage, and there are a lot of great tools for that, but sometimes it's not enough.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Bruteforce Idea
&lt;/h3&gt;

&lt;p&gt;A lazy way to measure the whole picture is to trace each function's execution time and the total number of times it is called.&lt;/p&gt;

&lt;p&gt;Yeah, that won't help you inspect every loop, nasty condition, or memory leak... But it can help a LOT to find the most heavily loaded functions as fast as possible, and then profile them deeply with more advanced tools.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;I wrote a KISS &lt;a href="https://github.com/hashmap-kz/gotrackfunc" rel="noopener noreferrer"&gt;library&lt;/a&gt; called &lt;code&gt;gotrackfunc&lt;/code&gt; that injects timing into every single function in the whole project with one CLI command.&lt;/p&gt;




&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;gotrackfunc&lt;/code&gt; injects timing code into all functions&lt;/li&gt;
&lt;li&gt;Run your program&lt;/li&gt;
&lt;li&gt;Apply load&lt;/li&gt;
&lt;li&gt;Stop execution&lt;/li&gt;
&lt;li&gt;Analyze the report&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's rough and primitive, but it works!!!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You MUST use version control, of course, so you can revert&lt;br&gt;
all these changes afterwards.&lt;/strong&gt;&lt;/p&gt;
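&lt;p&gt;Conceptually, the injected code is just a defer-based timer at the top of every function. A hand-written sketch of the same idea (my illustration; the real tool generates this automatically and writes its measurements to &lt;code&gt;gotrackfunc.log&lt;/code&gt;):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	calls = map[string]int{}
	total = map[string]time.Duration{}
)

// track records one call of fn; use it as: defer track("pkg.Fn")().
func track(fn string) func() {
	start := time.Now()
	return func() {
		mu.Lock()
		defer mu.Unlock()
		calls[fn]++
		total[fn] += time.Since(start)
	}
}

// slowPut simulates an instrumented function doing some work.
func slowPut() {
	defer track("storecrypt.Put")()
	time.Sleep(10 * time.Millisecond) // pretend work
}

func main() {
	for range make([]int, 3) {
		slowPut()
	}
	// Print a tiny call/duration report, one line per function.
	for fn, n := range calls {
		fmt.Printf("%-20s calls=%d total=%s\n", fn, n, total[fn])
	}
}
```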


&lt;h3&gt;
  
  
  Usage And Example Output
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# execute in directory of your project&lt;/span&gt;
gotrackfunc ./...

&lt;span class="c"&gt;# run your app (a gotrackfunc.log will produced)&lt;/span&gt;
go run main.go

&lt;span class="c"&gt;# make a report (turn gotrackfunc.log into readable form)&lt;/span&gt;
gotrackfunc summarize
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this example I've found that my Put() function is the slowest&lt;br&gt;
part of the whole thing. So I can inspect it, refactor, optimize,&lt;br&gt;
write more unit tests and integration tests, and measure again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FUNCTION                            CALLS  TOTAL_NS     TOTAL_SEC
--------                            -----  --------     ---------
storecrypt.Put                      70     23061361400  23.06
receivesuperv.uploadOneFile         35     11606918000  11.61
fsync.Fsync                         106    8813968000   8.81
xlog.processOneMsg                  4481   6818721600   6.82
xlog.processXLogDataMsg             4481   6814495400   6.81
xlog.CloseWalFile                   35     6561511500   6.56
xlog.closeAndRename                 35     6559979000   6.56
fsync.FsyncFname                    70     6525596900   6.53
receivesuperv.performUploads        2      3036884600   3.04
receivesuperv.uploadFiles           1      3034237400   3.03
fsync.FsyncFnameAndDir              35     435023800    0.44
xlog.WriteAtWalFile                 4481   208340500    0.21
cmd.mustInitPgrw                    1      201671300    0.20
xlog.NewPgReceiver                  1      201671300    0.20
xlog.SyncWalFile                    1      48771200     0.05
codec.Flush                         35     42261100     0.04
xlog.OpenWalFile                    36     16938600     0.02
xlog.createFileAndTruncate          36     11396900     0.01
storecrypt.ListInfo                 4      9813800      0.01
receivesuperv.performRetention      2      4906900      0.00
conv.ToUint64                       4482   2664000      0.00
receivesuperv.filterFilesToUpload   2      2647200      0.00
fsx.FileExists                      35     2628000      0.00
xlog.XLogSegmentOffset              8963   2083300      0.00
conv.Uint64ToInt64                  4517   1656300      0.00
cmd.loadConfig                      1      1563600      0.00
config.MustLoad                     1      1563600      0.00
config.mustLoadCfg                  1      1563600      0.00
pipe.CompressAndEncryptOptional     35     1278200      0.00
receivemetrics.AddWALBytesReceived  4481   1121600      0.00
codec.Close                         35     1001200      0.00
xlog.sendFeedback                   3      690700       0.00
xlog.findStreamingStart             1      608600       0.00
receivemode.Init                    1      608600       0.00
storecrypt.NewLocal                 1      522500       0.00
xlog.GetSlotInformation             2      522500       0.00
shared.SetupStorage                 1      522500       0.00
xlog.parseReadReplicationSlot       2      522500       0.00
cmd.mustInitStorageIfRequired       1      522500       0.00
codec.NewWriter                     35     504900       0.00
storecrypt.fullPath                 37     504000       0.00
xlog.XLogFileName                   36     42900        0.00
shared.InitOptionalHandlers         1      0            0.00
config.IsLocalStor                  3      0            0.00
jobq.Start                          1      0            0.00
receivemetrics.IncJobsExecuted      4      0            0.00
receivemetrics.IncWALFilesReceived  35     0            0.00
xlog.IsPartialXLogFileName          2      0            0.00
receivemetrics.ObserveJobDuration   4      0            0.00
xlog.ScanWalSegSize                 1      0            0.00
cmd.needSupervisorLoop              1      0            0.00
shared.getWriteExt                  1      0            0.00
storecrypt.transformsFromName       35     0            0.00
config.checkBackupConfig            1      0            0.00
receivemode.NewReceiveModeService   1      0            0.00
conv.ParseUint32                    2      0            0.00
storecrypt.isSupportedWriteExt      1      0            0.00
receivemode.NewReceiveController    1      0            0.00
receivemetrics.IncJobsSubmitted     4      0            0.00
config.checkMode                    1      0            0.00
storecrypt.encodePath               35     0            0.00
config.expandEnvsWithPrefix         1      0            0.00
config.checkLogConfig               1      0            0.00
xlog.IsPowerOf2                     1      0            0.00
aesgcm.NewChunkedGCMCrypter         1      0            0.00
shared.NewHTTPSrv                   1      0            0.00
xlog.XLogSegmentsPerXLogId          72     0            0.00
xlog.IsValidWalSegSize              1      0            0.00
config.checkMainConfig              1      0            0.00
config.checkStorageConfig           1      0            0.00
logger.Init                         1      0            0.00
receivesuperv.NewArchiveSupervisor  1      0            0.00
middleware.Middleware               6      0            0.00
xlog.existsTimeLineHistoryFile      1      0            0.00
xlog.IsXLogFileName                 2      0            0.00
config.IsExternalStor               1      0            0.00
receivesuperv.log                   83     0            0.00
cmd.App                             1      0            0.00
xlog.parseShowParameter             2      0            0.00
config.expand                       1      0            0.00
jobq.log                            8      0            0.00
xlog.updateLastFlushPosition        37     0            0.00
receivemetrics.IncWALFilesUploaded  35     0            0.00
strx.HeredocTrim                    1      0            0.00
cmd.checkPgEnvsAreSet               1      0            0.00
storecrypt.NewVariadicStorage       1      0            0.00
middleware.Chain                    1      0            0.00
xlog.SetStream                      1      0            0.00
jobq.Submit                         4      0            0.00
config.String                       1      0            0.00
config.validate                     1      0            0.00
jobq.NewJobQueue                    1      0            0.00
config.checkReceiverConfig          1      0            0.00
xlog.calculateCopyStreamSleepTime   3      0            0.00
middleware.SafeHandlerMiddleware    3      0            0.00
xlog.NewStream                      1      0            0.00
conv.Uint32ToInt32                  1      0            0.00
xlog.XLByteToSeg                    36     0            0.00
cmd.initMetrics                     1      0            0.00
xlog.GetShowParameter               2      0            0.00
shared.log                          1      0            0.00
xlog.log                            107    0            0.00
config.Cfg                          3      0            0.00
receivesuperv.filterOlderThan       2      0            0.00
storecrypt.decodePath               70     0            0.00
storecrypt.supportedExts            70     0            0.00
xlog.CurrentOpenWALFileName         37     0            0.00
config.checkStorageModifiersConfig  1      0            0.00
xlog.GetStartupInfo                 1      0            0.00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This approach is rough, primitive - and surprisingly effective.&lt;/p&gt;

&lt;p&gt;Sometimes, brute force wins.&lt;/p&gt;




&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;WAL receiver project: &lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Tracking library: &lt;a href="https://github.com/hashmap-kz/gotrackfunc" rel="noopener noreferrer"&gt;https://github.com/hashmap-kz/gotrackfunc&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>go</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Backup Is Not Enough: A PostgreSQL Recovery Story</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Tue, 31 Mar 2026 13:57:06 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/backup-is-not-enough-a-postgresql-recovery-story-26cd</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/backup-is-not-enough-a-postgresql-recovery-story-26cd</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This experiment is designed to &lt;strong&gt;test and validate the pgrwl tool in&lt;br&gt;
real conditions&lt;/strong&gt;: &lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of synthetic examples, we simulate a real-world failure and&lt;br&gt;
verify that recovery actually works end-to-end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s do something slightly uncomfortable.&lt;/p&gt;

&lt;p&gt;We're going to simulate a database crash and recovery.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Think disk failure. Whole server gone.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And then bring it back - byte for byte - as if nothing happened.&lt;/p&gt;

&lt;p&gt;Not "some" data.&lt;br&gt;
Not "close enough".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Everything.&lt;/strong&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  The Myth of "Backups"
&lt;/h1&gt;

&lt;p&gt;Most people think:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I have a backup, so I'm safe."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's... half true.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;base backup&lt;/strong&gt; is just a snapshot - a frozen picture of your&lt;br&gt;
database at one moment.&lt;/p&gt;

&lt;p&gt;But databases don't sit still.&lt;/p&gt;

&lt;p&gt;Every insert, update, delete - all of that happens &lt;strong&gt;after&lt;/strong&gt; your&lt;br&gt;
backup.&lt;/p&gt;

&lt;p&gt;So where does that data live?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-&amp;gt; In WAL (Write-Ahead Log)&lt;/strong&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  The Real Rule
&lt;/h1&gt;

&lt;p&gt;If you remember one thing from this post, let it be this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Recovery = Base Backup + WAL&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without WAL:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your backup is outdated&lt;/li&gt;
&lt;li&gt;your data is incomplete&lt;/li&gt;
&lt;li&gt;your recovery is a lie&lt;/li&gt;
&lt;/ul&gt;
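
&lt;p&gt;In stock PostgreSQL, this rule is exactly what archive recovery encodes. A minimal sketch with a plain &lt;code&gt;cp&lt;/code&gt;-based &lt;code&gt;restore_command&lt;/code&gt; (the archive path is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# postgresql.conf on the node being restored
restore_command = 'cp /mnt/wal-archive/%f "%p"'

# plus an empty recovery.signal file in $PGDATA,
# which puts the server into archive recovery mode
&lt;/code&gt;&lt;/pre&gt;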


&lt;h1&gt;The Experiment&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Warning: this is not intended to be run in a production environment.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: for simplicity, both the database and the backup tool run on the same machine. In production, never store backups on the same host as the database.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll prove this with a handful of simple shell commands.&lt;/p&gt;

&lt;p&gt;Note: environment variables are omitted for brevity.&lt;/p&gt;

&lt;p&gt;The full working script is attached at the end of the article.&lt;/p&gt;


&lt;h1&gt;Step 1 --- Build a Database From Nothing&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Initializing PostgreSQL cluster..."&lt;/span&gt;
initdb &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; trust &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-local&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;trust &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;trust &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

log &lt;span class="s2"&gt;"Starting PostgreSQL..."&lt;/span&gt;
pg_ctl &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/pg.log"&lt;/span&gt; start &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
wait_for_postgres

log &lt;span class="s2"&gt;"Creating physical replication slot: &lt;/span&gt;&lt;span class="nv"&gt;$REPL_SLOT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; postgres &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;ON_ERROR_STOP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"select pg_create_physical_replication_slot('&lt;/span&gt;&lt;span class="nv"&gt;$REPL_SLOT&lt;/span&gt;&lt;span class="s2"&gt;');"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

log &lt;span class="s2"&gt;"Creating test database: &lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
createdb &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We start from zero.&lt;/p&gt;
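
&lt;p&gt;The replication slot created above is what keeps the server from recycling WAL segments the receiver has not yet consumed. You can inspect it at any time:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;psql -d postgres -c "select slot_name, active, restart_lsn from pg_replication_slots;"
&lt;/code&gt;&lt;/pre&gt;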


&lt;h1&gt;Step 2 --- Start Capturing WAL&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Writing pgrwl configuration..."&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "main": {
    "listen_port": 7070,
    "directory": "&lt;/span&gt;&lt;span class="nv"&gt;$WAL_ARCHIVE_DIR&lt;/span&gt;&lt;span class="sh"&gt;"
  },
  "receiver": {
    "slot": "&lt;/span&gt;&lt;span class="nv"&gt;$REPL_SLOT&lt;/span&gt;&lt;span class="sh"&gt;",
    "no_loop": true
  },
  "log": {
    "level": "debug",
    "format": "text",
    "add_source": false
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;log &lt;span class="s2"&gt;"Starting pgrwl receiver..."&lt;/span&gt;
pgrwl daemon &lt;span class="nt"&gt;-m&lt;/span&gt; receive &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/pgrwl-receive.log"&lt;/span&gt; 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Start &lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;pgrwl&lt;/a&gt; in &lt;code&gt;receive&lt;/code&gt; mode.&lt;/p&gt;
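
&lt;p&gt;A quick sanity check: if pgrwl follows the &lt;code&gt;pg_receivewal&lt;/code&gt; convention, the segment currently being streamed should appear in the archive with a &lt;code&gt;.partial&lt;/code&gt; suffix:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ls "$WAL_ARCHIVE_DIR"
# e.g. 000000010000000000000001.partial while a segment is still open
&lt;/code&gt;&lt;/pre&gt;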


&lt;h1&gt;Step 3 --- Take a Base Backup&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Creating base backup..."&lt;/span&gt;
pgrwl backup &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is your snapshot in time.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The backup is taken over the PostgreSQL replication protocol, so no additional tools are required.&lt;/em&gt;&lt;/p&gt;
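
&lt;p&gt;For comparison, the stock tooling takes the same kind of snapshot over the same protocol (the destination path is illustrative; WAL is excluded because the receiver archives it separately):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;pg_basebackup -D /tmp/base-backup -Ft -X none -c fast
&lt;/code&gt;&lt;/pre&gt;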


&lt;h1&gt;Step 4 --- Populate DB&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Initializing pgbench data (scale=10 ~ about 1 million rows in pgbench_accounts)..."&lt;/span&gt;
pgbench &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; 10 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

log &lt;span class="s2"&gt;"Running pgbench workload..."&lt;/span&gt;
pgbench &lt;span class="nt"&gt;-c&lt;/span&gt; 4 &lt;span class="nt"&gt;-j&lt;/span&gt; 2 &lt;span class="nt"&gt;-t&lt;/span&gt; 200 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;None of this data exists in the base backup - it lives ONLY in WAL.&lt;/p&gt;
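
&lt;p&gt;A rough way to see how much WAL the cluster has generated so far (total since &lt;code&gt;initdb&lt;/code&gt;, so an upper bound on what recovery must replay):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;psql -d postgres -Atc \
  "select pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_insert_lsn(), '0/0'));"
&lt;/code&gt;&lt;/pre&gt;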


&lt;h1&gt;Step 5 --- Save the Truth&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Dumping cluster state before destruction..."&lt;/span&gt;
pg_dumpall &lt;span class="nt"&gt;--quote-all-identifiers&lt;/span&gt; &lt;span class="nt"&gt;--restrict-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/before.sql"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This dump becomes our ground truth.&lt;br&gt;
After restore + WAL replay, we expect the cluster to match this state.&lt;/p&gt;


&lt;h1&gt;Step 6 --- Delete Everything&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Stopping PostgreSQL and pgrwl receiver..."&lt;/span&gt;
stop_postgres
stop_pgrwl_receive

log &lt;span class="s2"&gt;"Removing original PGDATA to simulate data loss..."&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The database is gone.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No tables.&lt;/li&gt;
&lt;li&gt;No data.&lt;/li&gt;
&lt;li&gt;No second chances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only backup and WAL remain.&lt;/p&gt;


&lt;h1&gt;Step 7 --- Restore the Base Backup&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Restoring PGDATA from base backup..."&lt;/span&gt;
pgrwl restore &lt;span class="nt"&gt;--dest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;chmod &lt;/span&gt;0750 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; postgres:postgres &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# recovery.signal tells PostgreSQL to start in archive recovery mode.&lt;/span&gt;
&lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;/recovery.signal"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We are back to snapshot state only.&lt;/p&gt;


&lt;h1&gt;Step 8 --- Replay History&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Starting pgrwl restore server..."&lt;/span&gt;
pgrwl daemon &lt;span class="nt"&gt;-m&lt;/span&gt; serve &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/pgrwl-serve.log"&lt;/span&gt; 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;span class="nv"&gt;PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;/postgresql.conf"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
restore_command = 'pgrwl restore-command --serve-addr=127.0.0.1:7070 %f %p'
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;log &lt;span class="s2"&gt;"Starting restored PostgreSQL cluster..."&lt;/span&gt;
pg_ctl &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/postgres-restored.log"&lt;/span&gt; start &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

wait_for_postgres
wait_until_out_of_recovery
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We start pgrwl in &lt;code&gt;serve&lt;/code&gt; mode to back the &lt;code&gt;restore_command&lt;/code&gt;, then start the cluster. PostgreSQL begins replaying WAL.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;replaying history&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Every insert.&lt;br&gt;
Every update.&lt;br&gt;
Every commit.&lt;/p&gt;

&lt;p&gt;Reconstructed from WAL.&lt;/p&gt;
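
&lt;p&gt;While the server is still in recovery you can watch the replay position advance:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# the replay LSN is only meaningful while pg_is_in_recovery() returns true
psql -d postgres -Atc "select pg_is_in_recovery(), pg_last_wal_replayed_lsn();"
&lt;/code&gt;&lt;/pre&gt;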


&lt;h1&gt;Step 9 --- Did It Work?&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"Dumping cluster state after recovery..."&lt;/span&gt;
pg_dumpall &lt;span class="nt"&gt;--quote-all-identifiers&lt;/span&gt; &lt;span class="nt"&gt;--restrict-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/after.sql"&lt;/span&gt;

log &lt;span class="s2"&gt;"Comparing dumps..."&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;diff &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/before.sql"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/after.sql"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/dump.diff"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log &lt;span class="s2"&gt;"SUCCESS: restored cluster matches original state"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"before: &lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/before.sql"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"after : &lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/after.sql"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"diff  : &lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/dump.diff (empty)"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo
  echo&lt;/span&gt; &lt;span class="s2"&gt;"FAIL: restored cluster differs from original state"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"See diff: &lt;/span&gt;&lt;span class="nv"&gt;$WORKDIR&lt;/span&gt;&lt;span class="s2"&gt;/dump.diff"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If there is no diff:&lt;/p&gt;

&lt;p&gt;We recovered &lt;strong&gt;every single transaction&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not approximately. Not logically. &lt;strong&gt;Exactly&lt;/strong&gt;.&lt;/p&gt;


&lt;h1&gt;Mental Model&lt;/h1&gt;

&lt;p&gt;Think Git:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backup = commit&lt;/li&gt;
&lt;li&gt;WAL = commits after&lt;/li&gt;
&lt;li&gt;recovery = replay commits&lt;/li&gt;
&lt;/ul&gt;


&lt;h1&gt;Final Thought&lt;/h1&gt;

&lt;p&gt;If you don't understand WAL, you don't understand PostgreSQL recovery.&lt;/p&gt;


&lt;h1&gt;Using a Docker Environment for Integration Tests&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/pgrwl/pgrwl/tree/master/test/integration/environ" rel="noopener noreferrer"&gt;Integration Tests&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-Eeuo&lt;/span&gt; pipefail

&lt;span class="c"&gt;# setup docker-compose env&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /tmp
git clone https://github.com/pgrwl/pgrwl.git
&lt;span class="nb"&gt;cd &lt;/span&gt;pgrwl/test/integration/environ
make restart

&lt;span class="c"&gt;# exec into container&lt;/span&gt;
docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pg-primary bash

&lt;span class="c"&gt;# run tests&lt;/span&gt;
su - postgres
&lt;span class="nb"&gt;cd &lt;/span&gt;scripts/tests
bash 011-basic-flow.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h1&gt;Full Script&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-Eeuo&lt;/span&gt; pipefail

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Simple 'Point In Time Recovery' tutorial with pgrwl&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# What this script demonstrates:&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;#   1. Start a fresh PostgreSQL cluster&lt;/span&gt;
&lt;span class="c"&gt;#   2. Start pgrwl in WAL receiver mode&lt;/span&gt;
&lt;span class="c"&gt;#   3. Take a base backup&lt;/span&gt;
&lt;span class="c"&gt;#   4. Generate more data AFTER the base backup&lt;/span&gt;
&lt;span class="c"&gt;#   5. Save a logical dump of the final database state&lt;/span&gt;
&lt;span class="c"&gt;#   6. Destroy PGDATA (simulate disaster)&lt;/span&gt;
&lt;span class="c"&gt;#   7. Restore from the base backup&lt;/span&gt;
&lt;span class="c"&gt;#   8. Replay archived WAL files&lt;/span&gt;
&lt;span class="c"&gt;#   9. Compare the restored database with the original state&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Main idea:&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;#   A base backup is only a snapshot at one point in time.&lt;/span&gt;
&lt;span class="c"&gt;#   All changes made after that snapshot live in WAL.&lt;/span&gt;
&lt;span class="c"&gt;#   To recover to the latest committed transaction, we need BOTH:&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;#     - the base backup&lt;/span&gt;
&lt;span class="c"&gt;#     - the WAL generated after the backup&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Configuration&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

&lt;span class="nv"&gt;PGDATA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/pgdata"&lt;/span&gt;
&lt;span class="nv"&gt;WAL_ARCHIVE_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/wal-archive"&lt;/span&gt;
&lt;span class="nv"&gt;PGRWL_CONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/pgrwl-config.json"&lt;/span&gt;

&lt;span class="nv"&gt;DBNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bench"&lt;/span&gt;
&lt;span class="nv"&gt;REPL_SLOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"pgrwl_v5"&lt;/span&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PGHOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PGPORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"5432"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PGUSER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"postgres"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PGPASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"postgres"&lt;/span&gt;

&lt;span class="nv"&gt;PGRWL_RECEIVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nv"&gt;PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Small helper functions&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s1"&gt;'\n[%s] %s\n'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s1"&gt;'+%F %T'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$*&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

die&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: &lt;/span&gt;&lt;span class="nv"&gt;$*&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="o"&gt;}&lt;/span&gt;

wait_for_postgres&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  log &lt;span class="s2"&gt;"Waiting for PostgreSQL to accept connections..."&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;_ &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 120&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    if &lt;/span&gt;pg_isready &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGHOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGPORT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-U&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGUSER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      return &lt;/span&gt;0
    &lt;span class="k"&gt;fi
    &lt;/span&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;1
  &lt;span class="k"&gt;done
  &lt;/span&gt;die &lt;span class="s2"&gt;"PostgreSQL did not become ready in time"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

wait_until_out_of_recovery&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  log &lt;span class="s2"&gt;"Waiting for PostgreSQL to finish recovery..."&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;_ &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 120&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    if &lt;/span&gt;psql &lt;span class="nt"&gt;-d&lt;/span&gt; postgres &lt;span class="nt"&gt;-Atqc&lt;/span&gt; &lt;span class="s2"&gt;"select pg_is_in_recovery()"&lt;/span&gt; 2&amp;gt;/dev/null | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s1"&gt;'^f$'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      return &lt;/span&gt;0
    &lt;span class="k"&gt;fi
    &lt;/span&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;1
  &lt;span class="k"&gt;done
  &lt;/span&gt;die &lt;span class="s2"&gt;"PostgreSQL did not finish recovery in time"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

stop_postgres&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;log &lt;span class="s2"&gt;"Stopping PostgreSQL..."&lt;/span&gt;
    pg_ctl &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; immediate stop &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
  &lt;/span&gt;&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

stop_pgrwl_receive&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PGRWL_RECEIVE_PID&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;log &lt;span class="s2"&gt;"Stopping pgrwl receiver..."&lt;/span&gt;
    &lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_RECEIVE_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
    wait&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_RECEIVE_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;&lt;span class="nv"&gt;PGRWL_RECEIVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

stop_pgrwl_serve&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;log &lt;span class="s2"&gt;"Stopping pgrwl restore server..."&lt;/span&gt;
    &lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
    wait&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;&lt;span class="nv"&gt;PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

cleanup&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  stop_pgrwl_receive
  stop_pgrwl_serve
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;trap &lt;/span&gt;cleanup EXIT

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 0. Start from a clean state&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Cleaning up old processes and files..."&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;pkill &lt;span class="nt"&gt;-9&lt;/span&gt; postgres &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
sudo &lt;/span&gt;pkill &lt;span class="nt"&gt;-9&lt;/span&gt; pgrwl &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/pgrwl-basic"&lt;/span&gt;

log &lt;span class="s2"&gt;"Preparing work directory: /tmp/pgrwl-basic"&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/pgrwl-basic"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WAL_ARCHIVE_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 1. Create and start a fresh PostgreSQL cluster&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Initializing PostgreSQL cluster..."&lt;/span&gt;
initdb &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; trust &lt;span class="nt"&gt;--auth-local&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;trust &lt;span class="nt"&gt;--auth-host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;trust &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;/postgresql.conf"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
listen_addresses      = '*'

# Settings required for WAL streaming / archiving style workflows
wal_level                = replica
max_wal_senders          = 10
max_replication_slots    = 10
wal_keep_size            = 64MB

# Durability settings
fsync                    = on
synchronous_commit       = on
full_page_writes         = on

# Basic logging settings
log_directory            = '/tmp/pgrwl-basic'
log_filename             = 'pg.log'
log_lock_waits           = on
log_temp_files           = 0
log_checkpoints          = on
log_connections          = off
log_destination          = 'stderr'
log_error_verbosity      = 'DEFAULT' # TERSE, DEFAULT, VERBOSE
log_hostname             = off
log_min_messages         = 'WARNING' # DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, PANIC
log_timezone             = 'Asia/Aqtau'
log_line_prefix          = '%t [%p-%l] %r %q%u@%d '
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;log &lt;span class="s2"&gt;"Starting PostgreSQL..."&lt;/span&gt;
pg_ctl &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/pgrwl-basic/pg.log"&lt;/span&gt; start &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
wait_for_postgres

log &lt;span class="s2"&gt;"Creating physical replication slot: &lt;/span&gt;&lt;span class="nv"&gt;$REPL_SLOT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; postgres &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;ON_ERROR_STOP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"select pg_create_physical_replication_slot('&lt;/span&gt;&lt;span class="nv"&gt;$REPL_SLOT&lt;/span&gt;&lt;span class="s2"&gt;');"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

log &lt;span class="s2"&gt;"Creating test database: &lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
createdb &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 2. Configure and start pgrwl in receive mode&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Writing pgrwl configuration..."&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "main": {
    "listen_port": 7070,
    "directory": "&lt;/span&gt;&lt;span class="nv"&gt;$WAL_ARCHIVE_DIR&lt;/span&gt;&lt;span class="sh"&gt;"
  },
  "receiver": {
    "slot": "&lt;/span&gt;&lt;span class="nv"&gt;$REPL_SLOT&lt;/span&gt;&lt;span class="sh"&gt;",
    "no_loop": true
  },
  "log": {
    "level": "debug",
    "format": "text",
    "add_source": false
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;log &lt;span class="s2"&gt;"Starting pgrwl receiver..."&lt;/span&gt;
pgrwl daemon &lt;span class="nt"&gt;-m&lt;/span&gt; receive &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/pgrwl-receive.log"&lt;/span&gt; 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;span class="nv"&gt;PGRWL_RECEIVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;

&lt;span class="c"&gt;# Give the receiver a moment to connect and begin streaming.&lt;/span&gt;
&lt;span class="nb"&gt;sleep &lt;/span&gt;3

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 3. Take a base backup&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Creating base backup..."&lt;/span&gt;
pgrwl backup &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 4. Generate data AFTER the base backup&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# This is the important part.&lt;/span&gt;
&lt;span class="c"&gt;# If we recover only from the base backup, these changes would be lost.&lt;/span&gt;
&lt;span class="c"&gt;# They survive only because the WAL receiver captures the WAL stream.&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Initializing pgbench data (scale=10 ~ about 1 million rows in pgbench_accounts)..."&lt;/span&gt;
pgbench &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; 10 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

log &lt;span class="s2"&gt;"Running pgbench workload..."&lt;/span&gt;
pgbench &lt;span class="nt"&gt;-c&lt;/span&gt; 4 &lt;span class="nt"&gt;-j&lt;/span&gt; 2 &lt;span class="nt"&gt;-t&lt;/span&gt; 200 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DBNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 5. Save the final logical state before disaster&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# This dump becomes our ground truth.&lt;/span&gt;
&lt;span class="c"&gt;# After restore + WAL replay, we expect the cluster to match this state.&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Dumping cluster state before destruction..."&lt;/span&gt;
pg_dumpall &lt;span class="nt"&gt;--quote-all-identifiers&lt;/span&gt; &lt;span class="nt"&gt;--restrict-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/before.sql"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 6. Force PostgreSQL to emit final WAL and let receiver catch up&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Forcing checkpoint and WAL switch..."&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; postgres &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;ON_ERROR_STOP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"checkpoint;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
psql &lt;span class="nt"&gt;-d&lt;/span&gt; postgres &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;ON_ERROR_STOP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"select pg_switch_wal();"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

&lt;span class="c"&gt;# Give pgrwl time to receive the last WAL segment(s).&lt;/span&gt;
&lt;span class="nb"&gt;sleep &lt;/span&gt;3

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 7. Simulate disaster&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Stopping PostgreSQL and pgrwl receiver..."&lt;/span&gt;
stop_postgres
stop_pgrwl_receive

log &lt;span class="s2"&gt;"Removing original PGDATA to simulate data loss..."&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 8. Restore the base backup&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Restoring PGDATA from base backup..."&lt;/span&gt;
pgrwl restore &lt;span class="nt"&gt;--dest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;chmod &lt;/span&gt;0750 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; postgres:postgres &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# recovery.signal tells PostgreSQL to start in archive recovery mode.&lt;/span&gt;
&lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;/recovery.signal"&lt;/span&gt;

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 9. Start pgrwl in serve mode for restore_command&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Starting pgrwl restore server..."&lt;/span&gt;
pgrwl daemon &lt;span class="nt"&gt;-m&lt;/span&gt; serve &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGRWL_CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/pgrwl-serve.log"&lt;/span&gt; 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;span class="nv"&gt;PGRWL_SERVE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;/postgresql.conf"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
restore_command = 'pgrwl restore-command --serve-addr=127.0.0.1:7070 %f %p'
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 10. Start restored PostgreSQL and let it replay WAL&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Starting restored PostgreSQL cluster..."&lt;/span&gt;
pg_ctl &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PGDATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/pgrwl-basic/postgres-restored.log"&lt;/span&gt; start &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null

wait_for_postgres
wait_until_out_of_recovery

&lt;span class="c"&gt;###############################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Phase 11. Dump restored state and compare&lt;/span&gt;
&lt;span class="c"&gt;###############################################################################&lt;/span&gt;

log &lt;span class="s2"&gt;"Dumping cluster state after recovery..."&lt;/span&gt;
pg_dumpall &lt;span class="nt"&gt;--quote-all-identifiers&lt;/span&gt; &lt;span class="nt"&gt;--restrict-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/after.sql"&lt;/span&gt;

log &lt;span class="s2"&gt;"Comparing dumps..."&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;diff &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/pgrwl-basic/before.sql"&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/pgrwl-basic/after.sql"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/pgrwl-basic/dump.diff"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log &lt;span class="s2"&gt;"SUCCESS: restored cluster matches original state"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"before: /tmp/pgrwl-basic/before.sql"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"after : /tmp/pgrwl-basic/after.sql"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"diff  : /tmp/pgrwl-basic/dump.diff (empty)"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo
  echo&lt;/span&gt; &lt;span class="s2"&gt;"FAIL: restored cluster differs from original state"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"See diff: /tmp/pgrwl-basic/dump.diff"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
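&lt;p&gt;A side note on what actually lands in &lt;code&gt;$WAL_ARCHIVE_DIR&lt;/code&gt;: the segment files follow PostgreSQL's standard naming scheme, &lt;code&gt;TTTTTTTTXXXXXXXXYYYYYYYY&lt;/code&gt; (timeline, high LSN word, segment number, 8 hex digits each). A sketch, not part of the script above, deriving the name for LSN &lt;code&gt;0/3000060&lt;/code&gt; with the default 16 MiB segment size:&lt;/p&gt;

```shell
# Sketch: derive a WAL segment file name from a timeline and an LSN,
# mirroring PostgreSQL's XLogFileName() logic (default 16 MiB segments).
timeline=1
lsn_hi=0               # high 32 bits of LSN "0/3000060"
lsn_lo=$((0x3000060))  # low 32 bits
seg_size=$((16 * 1024 * 1024))
segs_per_logid=$(( 0x100000000 / seg_size ))   # 256 segments per "log id"
segno=$(( lsn_hi * segs_per_logid + lsn_lo / seg_size ))
segname=$(printf '%08X%08X%08X' \
  "$timeline" $((segno / segs_per_logid)) $((segno % segs_per_logid)))
echo "$segname"   # 000000010000000000000003
```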



</description>
      <category>postgres</category>
      <category>go</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>PostgreSQL Streaming WAL Archiver and a backup tool in Go (pgrwl)</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sat, 28 Mar 2026 08:13:15 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/postgresql-straming-wal-archiver-in-go-pgrwl-g91</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/postgresql-straming-wal-archiver-in-go-pgrwl-g91</guid>
      <description>&lt;p&gt;A &lt;strong&gt;production-grade, cloud-native PostgreSQL WAL archiving system&lt;/strong&gt; designed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;streaming WAL to S3 with compression, encryption, and retention&lt;/li&gt;
&lt;li&gt;Kubernetes-native PostgreSQL backup workflows&lt;/li&gt;
&lt;li&gt;zero data loss and reliable Point-in-Time Recovery (PITR)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;WAL receiver (replication protocol)&lt;/li&gt;
&lt;li&gt;Continuous WAL streaming&lt;/li&gt;
&lt;li&gt;Backup to S3 (MinIO, AWS, etc.)&lt;/li&gt;
&lt;li&gt;Backup to SFTP (backup server)&lt;/li&gt;
&lt;li&gt;WAL compression (gzip, zstd-ready)&lt;/li&gt;
&lt;li&gt;WAL encryption (AES-GCM)&lt;/li&gt;
&lt;li&gt;WAL retention management&lt;/li&gt;
&lt;li&gt;WAL monitoring and observability&lt;/li&gt;
&lt;li&gt;Kubernetes &amp;amp; container ready&lt;/li&gt;
&lt;li&gt;Helm chart support&lt;/li&gt;
&lt;li&gt;YAML / JSON / ENV config&lt;/li&gt;
&lt;li&gt;Lightweight single binary&lt;/li&gt;
&lt;li&gt;Structured logging&lt;/li&gt;
&lt;li&gt;Integration tests (containerized)&lt;/li&gt;
&lt;li&gt;Unit tests&lt;/li&gt;
&lt;li&gt;Backup automation (streaming basebackup)&lt;/li&gt;
&lt;li&gt;Continuous backup for PostgreSQL&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Key Capabilities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Streaming WAL
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Uses PostgreSQL replication protocol&lt;/li&gt;
&lt;li&gt;Supports synchronous WAL streaming&lt;/li&gt;
&lt;li&gt;Enables zero data loss setups&lt;/li&gt;
&lt;/ul&gt;
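&lt;p&gt;A zero-data-loss setup means the primary waits for the receiver to confirm WAL flush before acknowledging commits. A minimal &lt;code&gt;postgresql.conf&lt;/code&gt; sketch; the &lt;code&gt;application_name&lt;/code&gt; value &lt;code&gt;pgrwl&lt;/code&gt; is an assumption, check what the receiver actually reports:&lt;/p&gt;

```ini
# postgresql.conf on the primary (sketch)
synchronous_commit        = on       # with a sync standby, waits for remote WAL flush
synchronous_standby_names = 'pgrwl'  # must match the receiver's application_name (assumed)
```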

&lt;h3&gt;
  
  
  Storage Backends
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;S3-compatible storage&lt;/li&gt;
&lt;li&gt;SFTP backup servers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Compression + Encryption
&lt;/h3&gt;

&lt;p&gt;The processing pipeline is derived from the file name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;000000010000000000000001.gz.aes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;compress -&amp;gt; encrypt -&amp;gt; upload&lt;/li&gt;
&lt;li&gt;download -&amp;gt; decrypt -&amp;gt; decompress&lt;/li&gt;
&lt;/ul&gt;
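In other words, the restore side can peel transforms off the filename right to left, mirroring the compress-then-encrypt order on upload. A sketch of that dispatch logic, not pgrwl's actual code:

```shell
# Sketch: derive the reverse pipeline from a file's suffixes.
# "x.gz.aes" was compressed then encrypted, so restore must
# decrypt first, then decompress.
restore_pipeline() {
  f=$1
  steps=""
  while :; do
    case "$f" in
      *.aes) steps="$steps decrypt";    f=${f%.aes} ;;
      *.gz)  steps="$steps decompress"; f=${f%.gz}  ;;
      *)     break ;;
    esac
  done
  echo "$f:$steps"
}

restore_pipeline 000000010000000000000001.gz.aes
# → 000000010000000000000001: decrypt decompress
```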




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PostgreSQL
   | (replication protocol)
WAL Receiver
   |
Local FS (fsync)
   |
Uploader (S3 / SFTP)
   |
Retention manager
   |
HTTP server (restore_command)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Continuous Backup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;real-time WAL streaming&lt;/li&gt;
&lt;li&gt;safe off-site storage&lt;/li&gt;
&lt;li&gt;full PITR support&lt;/li&gt;
&lt;li&gt;near-zero RPO&lt;/li&gt;
&lt;/ul&gt;
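&lt;p&gt;On the restore side, PITR is driven by PostgreSQL's standard recovery settings. A sketch of the additions to the restored cluster's &lt;code&gt;postgresql.conf&lt;/code&gt;; the target timestamp is an example value, not from this article:&lt;/p&gt;

```ini
# postgresql.conf on the restored cluster (sketch)
restore_command        = 'pgrwl restore-command --serve-addr=127.0.0.1:7070 %f %p'
recovery_target_time   = '2026-03-28 12:00:00+00'  # example timestamp
recovery_target_action = 'promote'
```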




&lt;h2&gt;
  
  
  Kubernetes Ready
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;run as StatefulSet&lt;/li&gt;
&lt;li&gt;works with StatefulSets / CNPG / Virtual Machines&lt;/li&gt;
&lt;li&gt;deploy via Helm&lt;/li&gt;
&lt;li&gt;GitOps-friendly&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Configuration Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;listen_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7070&lt;/span&gt;
  &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wals&lt;/span&gt;
&lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;slot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pgrwl_v5&lt;/span&gt;
&lt;span class="na"&gt;log&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trace&lt;/span&gt;
  &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;text&lt;/span&gt;
  &lt;span class="na"&gt;add_source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;integration tests with real PostgreSQL containers&lt;/li&gt;
&lt;li&gt;end-to-end WAL validation&lt;/li&gt;
&lt;li&gt;unit-tested components&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why pgrwl?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;simple deployment (single binary)&lt;/li&gt;
&lt;li&gt;production-grade reliability&lt;/li&gt;
&lt;li&gt;cloud-native design&lt;/li&gt;
&lt;li&gt;built for Kubernetes and containers&lt;/li&gt;
&lt;li&gt;secure and efficient WAL handling&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Contribute
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Star the repo (&lt;a href="https://github.com/pgrwl/pgrwl" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Open issues (&lt;a href="https://github.com/pgrwl/pgrwl/issues" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl/issues&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Suggest improvements&lt;/li&gt;
&lt;li&gt;Submit PRs (&lt;a href="https://github.com/pgrwl/pgrwl/blob/master/CONTRIBUTING.md" rel="noopener noreferrer"&gt;https://github.com/pgrwl/pgrwl/blob/master/CONTRIBUTING.md&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;pgrwl is a &lt;strong&gt;lightweight, powerful, production-ready WAL archiving solution&lt;/strong&gt; that brings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;streaming&lt;/li&gt;
&lt;li&gt;security&lt;/li&gt;
&lt;li&gt;automation&lt;/li&gt;
&lt;li&gt;observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;to PostgreSQL backups.&lt;/p&gt;

</description>
      <category>postgresql</category>
      <category>kubernetes</category>
      <category>go</category>
    </item>
    <item>
      <title>Patch-based, environment-aware Kubernetes deployments using plain YAML and zero templating</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Wed, 25 Jun 2025 14:18:53 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/patch-based-environment-aware-kubernetes-deployments-using-plain-yaml-and-zero-templating-5gh1</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/patch-based-environment-aware-kubernetes-deployments-using-plain-yaml-and-zero-templating-5gh1</guid>
      <description>&lt;p&gt;Meet &lt;a href="https://github.com/kubepatch/kubepatch" rel="noopener noreferrer"&gt;kubepatch&lt;/a&gt; — a simple tool for deploying Kubernetes manifests using a patch-based approach.&lt;/p&gt;

&lt;p&gt;Unlike tools that embed logic into YAML or require custom template languages, &lt;code&gt;kubepatch&lt;/code&gt; keeps your &lt;strong&gt;base manifests clean and idiomatic&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple&lt;/strong&gt;: No templates, DSLs, or logic in YAML, zero magic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictable&lt;/strong&gt;: No string substitutions or regex hacks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safe&lt;/strong&gt;: Only native Kubernetes YAML manifests - readable, valid, untouched&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layered&lt;/strong&gt;: Patch logic is externalized and explicit via JSON Patch (RFC 6902)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Declarative&lt;/strong&gt;: Cross-environment deployment with predictable, understandable changes&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠 Example
&lt;/h2&gt;

&lt;p&gt;Given a base set of manifests for deploying a basic microservice &lt;br&gt;
&lt;a href="https://github.com/kubepatch/kubepatch/tree/master/examples" rel="noopener noreferrer"&gt;see examples&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:5000/restapiapp:latest"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;patches/prod.yaml&lt;/code&gt; might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;myapp-prod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deployment/myapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/replicas&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/template/spec/containers/0/image&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:5000/restapiapp:1.21"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/template/spec/containers/0/env&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RESTAPIAPP_VERSION&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LOG_LEVEL&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/template/spec/containers/0/resources&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;512Mi"&lt;/span&gt;
        &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64m"&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
  &lt;span class="na"&gt;service/myapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/ports/0/nodePort&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30266&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;patches/dev.yaml&lt;/code&gt; might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;myapp-dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deployment/myapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/template/spec/containers/0/image&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:5000/restapiapp:1.22"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/template/spec/containers/0/env&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RESTAPIAPP_VERSION&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LOG_LEVEL&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
  &lt;span class="na"&gt;service/myapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/ports/0/nodePort&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30265&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the appropriate patch set based on the target environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubepatch patch &lt;span class="nt"&gt;-f&lt;/span&gt; base/ &lt;span class="nt"&gt;-p&lt;/span&gt; patches/dev.yaml | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rendered manifest may look like this (note that all labels are set and all patches are applied):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30265&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-dev&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RESTAPIAPP_VERSION&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LOG_LEVEL&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:5000/restapiapp:1.22&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Manual Installation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Download the latest binary for your platform from
the &lt;a href="https://github.com/kubepatch/kubepatch/releases" rel="noopener noreferrer"&gt;Releases page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Place the binary in your system's &lt;code&gt;PATH&lt;/code&gt; (e.g., &lt;code&gt;/usr/local/bin&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Installation script
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="s1"&gt;'[:upper:]'&lt;/span&gt; &lt;span class="s1"&gt;'[:lower:]'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;ARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/x86_64/amd64/'&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/\(arm\)\(64\)\?.*/\1\2/'&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/aarch64$/arm64/'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://api.github.com/repos/kubepatch/kubepatch/releases/latest | jq &lt;span class="nt"&gt;-r&lt;/span&gt; .tag_name&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/kubepatch/kubepatch/releases/download/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/kubepatch_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ARCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.tar.gz"&lt;/span&gt; |
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzf&lt;/span&gt; - &lt;span class="nt"&gt;-C&lt;/span&gt; /usr/local/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/local/bin/kubepatch
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Package-Based Installation (suitable for CI/CD)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Debian
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; curl
curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://github.com/kubepatch/kubepatch/releases/latest/download/kubepatch_linux_amd64.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; kubepatch_linux_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Alpine Linux
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apk update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; bash curl
curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://github.com/kubepatch/kubepatch/releases/latest/download/kubepatch_linux_amd64.apk
apk add kubepatch_linux_amd64.apk &lt;span class="nt"&gt;--allow-untrusted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ✨ Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  JSON Patch Only
&lt;/h3&gt;

&lt;p&gt;Patches are applied using &lt;a href="https://tools.ietf.org/html/rfc6902" rel="noopener noreferrer"&gt;JSON Patch&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/replicas&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every patch is minimal, explicit, and easy to understand. No string manipulation or text templating involved.&lt;/p&gt;
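
&lt;p&gt;Any RFC 6902 operation can be expressed the same way. For instance, a patch that removes a field looks like this (a generic illustration, not an example from the kubepatch docs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- op: remove
  path: /spec/template/spec/containers/0/resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;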

&lt;h3&gt;
  
  
  Plain Kubernetes YAML Manifests
&lt;/h3&gt;

&lt;p&gt;Your base manifests are 100% pure Kubernetes objects - no logic, no annotations, no overrides, no preprocessing. This&lt;br&gt;
ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy editing&lt;/li&gt;
&lt;li&gt;Compatibility with other tools&lt;/li&gt;
&lt;li&gt;Clean Git diffs&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Cross-Environment Deploys
&lt;/h3&gt;

&lt;p&gt;Deploy to &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, or &lt;code&gt;prod&lt;/code&gt; just by selecting the right set of patches. All logic lives in patch files, not&lt;br&gt;
your base manifests.&lt;/p&gt;
&lt;h3&gt;
  
  
  Common Labels Support
&lt;/h3&gt;

&lt;p&gt;Inject common labels (like &lt;code&gt;env&lt;/code&gt;, &lt;code&gt;team&lt;/code&gt;, &lt;code&gt;app&lt;/code&gt;), including deep paths like pod templates and selectors.&lt;/p&gt;
&lt;h3&gt;
  
  
  Env Var Substitution (in Patch Values Only)
&lt;/h3&gt;

&lt;p&gt;You can inject secrets and configuration values directly into patch files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/spec/template/spec/containers/0/env&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PGPASSWORD&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${IAM_SERVICE_PGPASS}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Strict env-var substitution (prefix-based) is only allowed inside patches - never in base manifests.&lt;/p&gt;
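
&lt;p&gt;For the &lt;code&gt;${IAM_SERVICE_PGPASS}&lt;/code&gt; placeholder above to be substituted, the variable presumably just needs to be set in the environment of the shell that invokes &lt;code&gt;kubepatch&lt;/code&gt;, e.g. (the secret value here is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;export IAM_SERVICE_PGPASS='s3cr3t'
kubepatch patch -f base/ -p patches/dev.yaml | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;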

&lt;h2&gt;
  
  
  Feedback
&lt;/h2&gt;

&lt;p&gt;Have a feature request or issue? Feel free to &lt;a href="https://github.com/kubepatch/kubepatch/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt; or submit a PR!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>go</category>
      <category>devops</category>
    </item>
    <item>
      <title>Apply Kubernetes Manifests Atomically With Rollback</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sat, 21 Jun 2025 06:57:27 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/kubectl-atomic-apply-apply-kubernetes-manifests-atomically-with-rollback-ipm</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/kubectl-atomic-apply-apply-kubernetes-manifests-atomically-with-rollback-ipm</guid>
      <description>&lt;p&gt;&lt;code&gt;katomik&lt;/code&gt; - Atomic Apply for Kubernetes Manifests with Rollback Support.&lt;/p&gt;

&lt;p&gt;Applies multiple Kubernetes manifests with &lt;strong&gt;all-or-nothing&lt;/strong&gt; guarantees. Like &lt;code&gt;kubectl apply -f&lt;/code&gt;, but transactional:&lt;br&gt;
if any resource fails to apply or become ready, all previously applied resources are rolled back automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hashmap-kz/katomik" rel="noopener noreferrer"&gt;GitHub Repo →&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf3wo90u6wmlp7gtjav7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf3wo90u6wmlp7gtjav7.png" alt="Image description" width="602" height="624"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Atomic behavior&lt;/strong&gt;: Applies multiple manifests as a unit. If anything fails, restores the original state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server-Side Apply&lt;/strong&gt; (SSA): Uses &lt;code&gt;PATCH&lt;/code&gt; with SSA to minimize conflicts and preserve intent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status tracking&lt;/strong&gt;: Waits for all resources to become &lt;code&gt;Current&lt;/code&gt; (Ready/Available) before succeeding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback support&lt;/strong&gt;: Automatically restores previous state if apply or wait fails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recursive&lt;/strong&gt;: Like &lt;code&gt;kubectl&lt;/code&gt;, supports directories and &lt;code&gt;-R&lt;/code&gt; for recursive traversal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;STDIN support&lt;/strong&gt;: Use &lt;code&gt;-f -&lt;/code&gt; to read from &lt;code&gt;stdin&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Manual Installation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Download the latest binary for your platform from
the &lt;a href="https://github.com/hashmap-kz/katomik/releases" rel="noopener noreferrer"&gt;Releases page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Place the binary in your system's &lt;code&gt;PATH&lt;/code&gt; (e.g., &lt;code&gt;/usr/local/bin&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Installation script
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="s1"&gt;'[:upper:]'&lt;/span&gt; &lt;span class="s1"&gt;'[:lower:]'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;ARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/x86_64/amd64/'&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/\(arm\)\(64\)\?.*/\1\2/'&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/aarch64$/arm64/'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://api.github.com/repos/hashmap-kz/katomik/releases/latest | jq &lt;span class="nt"&gt;-r&lt;/span&gt; .tag_name&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/hashmap-kz/katomik/releases/download/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/katomik_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ARCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.tar.gz"&lt;/span&gt; |
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzf&lt;/span&gt; - &lt;span class="nt"&gt;-C&lt;/span&gt; /usr/local/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/local/bin/katomik
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Homebrew installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap hashmap-kz/homebrew-tap
brew &lt;span class="nb"&gt;install &lt;/span&gt;katomik
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply multiple files atomically&lt;/span&gt;
katomik apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/

&lt;span class="c"&gt;# Read from stdin&lt;/span&gt;
katomik apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &amp;lt; all.yaml

&lt;span class="c"&gt;# Apply recursively&lt;/span&gt;
katomik apply &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ./deploy/

&lt;span class="c"&gt;# Set a custom timeout (default: 5m)&lt;/span&gt;
katomik apply &lt;span class="nt"&gt;--timeout&lt;/span&gt; 2m &lt;span class="nt"&gt;-f&lt;/span&gt; ./manifests/

&lt;span class="c"&gt;# Process and apply a manifest located on a remote server&lt;/span&gt;
katomik apply &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/user/repo/refs/heads/master/manifests/deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Example Output
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# katomik apply -f test/integration/k8s/manifests/

┌───────────────────────────────────────┬──────────────┐
│               RESOURCE                │  NAMESPACE   │
├───────────────────────────────────────┼──────────────┤
│ Namespace/katomik-test                │ (cluster)    │
│ ConfigMap/postgresql-init-script      │ katomik-test │
│ ConfigMap/postgresql-envs             │ katomik-test │
│ ConfigMap/postgresql-conf             │ katomik-test │
│ Service/postgres                      │ katomik-test │
│ PersistentVolumeClaim/postgres-data   │ katomik-test │
│ StatefulSet/postgres                  │ katomik-test │
│ ConfigMap/prometheus-config           │ katomik-test │
│ PersistentVolumeClaim/prometheus-data │ katomik-test │
│ Service/prometheus                    │ katomik-test │
│ StatefulSet/prometheus                │ katomik-test │
│ PersistentVolumeClaim/grafana-data    │ katomik-test │
│ Service/grafana                       │ katomik-test │
│ ConfigMap/grafana-datasources         │ katomik-test │
│ Deployment/grafana                    │ katomik-test │
└───────────────────────────────────────┴──────────────┘

+ watching
| Service/grafana                       katomik-test Unknown
| Deployment/grafana                    katomik-test Unknown
| StatefulSet/postgres                  katomik-test InProgress
| StatefulSet/prometheus                katomik-test InProgress
+ watching

✓ Success
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd test/integration/k8s
bash 00-setup-kind.sh
katomik apply -f manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔒 Rollback Guarantees
&lt;/h2&gt;

&lt;p&gt;On failure (bad manifest, missing dependency, timeout, etc.):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Existing objects are reverted to their exact pre-apply state.&lt;/li&gt;
&lt;li&gt;New objects are deleted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This guarantees your cluster remains consistent - no partial updates.&lt;/p&gt;




&lt;h2&gt;
  
  
  Flags
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Flag&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;-f&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;File, directory, or &lt;code&gt;-&lt;/code&gt; for stdin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;-R&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Recurse into directories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;--timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Timeout to wait for readiness&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Feedback
&lt;/h2&gt;

&lt;p&gt;Have a feature request or issue? Feel free to &lt;a href="https://github.com/hashmap-kz/katomik/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt;&lt;br&gt;
or submit a PR!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cicd</category>
      <category>go</category>
    </item>
    <item>
      <title>Streaming PostgreSQL Backups with pgrwl: Now with Time &amp; Count-Based Retention!</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sun, 15 Jun 2025 14:41:16 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/streaming-postgresql-backups-with-pgrwl-now-with-time-count-based-retention-1m69</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/streaming-postgresql-backups-with-pgrwl-now-with-time-count-based-retention-1m69</guid>
      <description>&lt;p&gt;A new release of &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;&lt;code&gt;pgrwl&lt;/code&gt;&lt;/a&gt; just shipped, a cloud-native WAL receiver and backup agent for PostgreSQL — and it comes with a new feature: &lt;strong&gt;streaming basebackups&lt;/strong&gt; with &lt;strong&gt;automated retention&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's New
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;feat(basebackup): time/count based retention&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgrwl&lt;/code&gt; can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;strong&gt;basebackups on a schedule&lt;/strong&gt; (via built-in cron),&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream backups directly to remote storage&lt;/strong&gt; (S3, SFTP, etc),&lt;/li&gt;
&lt;li&gt;And &lt;strong&gt;automatically enforce retention policies&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Retain the last &lt;strong&gt;N&lt;/strong&gt; backups (e.g. &lt;code&gt;count=3&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Or retain all backups for a certain &lt;strong&gt;duration&lt;/strong&gt; (e.g. &lt;code&gt;days=7&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This makes it easier than ever to keep your PostgreSQL cluster safely backed up &lt;strong&gt;without relying on external scripts&lt;/strong&gt; or schedulers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Streaming Basebackup?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pgrwl&lt;/code&gt; performs a &lt;em&gt;streaming basebackup&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses the PostgreSQL replication protocol,&lt;/li&gt;
&lt;li&gt;Streams backup with &lt;strong&gt;optional compression and encryption&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;Uploads to remote storage &lt;strong&gt;as the backup progresses&lt;/strong&gt; — no need for temporary local copies!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design drastically reduces time-to-storage and fits perfectly into Kubernetes-native workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Retention Policies in Action
&lt;/h2&gt;

&lt;p&gt;Here's a typical config for scheduled backups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;backup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;3&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;  &lt;span class="c1"&gt;# every day at 3AM&lt;/span&gt;
  &lt;span class="na"&gt;retention&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;count&lt;/span&gt;          &lt;span class="c1"&gt;# count-based retention&lt;/span&gt;
    &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;             &lt;span class="c1"&gt;# keep last 3 backups&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After each successful backup, &lt;code&gt;pgrwl&lt;/code&gt; automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lists available backups from storage,&lt;/li&gt;
&lt;li&gt;Sorts them by timestamp,&lt;/li&gt;
&lt;li&gt;Deletes old ones beyond the retention threshold — &lt;strong&gt;clean and simple&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
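
&lt;p&gt;A duration-based policy is configured the same way. The field names below are a sketch extrapolated from the &lt;code&gt;days=7&lt;/code&gt; example above; check the pgrwl docs for the exact schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;backup:
  schedule: "0 3 * * *"   # every day at 3AM
  retention:
    type: time            # time-based retention (hypothetical field value)
    days: 7               # keep backups from the last 7 days
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;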




&lt;h2&gt;
  
  
  Example: Kubernetes StatefulSet
&lt;/h2&gt;

&lt;p&gt;A typical setup consists of &lt;strong&gt;two StatefulSets&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Receiver&lt;/strong&gt; — continuously streams and archives WALs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup&lt;/strong&gt; — schedules full basebackups and handles retention.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;On restore, the receiver pod can switch to &lt;strong&gt;serve mode&lt;/strong&gt;, exposing archived WALs over HTTP to your PostgreSQL’s &lt;code&gt;restore_command&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
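
&lt;p&gt;In serve mode, the PostgreSQL side can then fetch archived WAL segments over HTTP. A sketch of such a &lt;code&gt;restore_command&lt;/code&gt; (the host, port, and URL path here are illustrative, not the actual pgrwl endpoint):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# postgresql.conf; %f expands to the WAL file name, %p to the target path
restore_command = 'curl -sf -o "%p" "http://pgrwl-receiver:7070/wal/%f"'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;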




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;For teams running PostgreSQL in Kubernetes (or other containerized environments), backup tooling is often a headache:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No clear story for WAL + basebackup orchestration,&lt;/li&gt;
&lt;li&gt;Manual scripts with little observability,&lt;/li&gt;
&lt;li&gt;Poor retention handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;pgrwl&lt;/code&gt; aims to &lt;strong&gt;solve all of that&lt;/strong&gt;, with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streaming basebackups&lt;/li&gt;
&lt;li&gt;Streaming WAL receiver&lt;/li&gt;
&lt;li&gt;Pluggable storage&lt;/li&gt;
&lt;li&gt;Built-in compression/encryption&lt;/li&gt;
&lt;li&gt;Clean Kubernetes integration&lt;/li&gt;
&lt;li&gt;Minimal dependencies&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;Get started here 👉 &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;https://github.com/hashmap-kz/pgrwl&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you have feedback, issues, or want to contribute, we’d love to hear from you! 🙌&lt;/p&gt;

</description>
      <category>devops</category>
      <category>postgres</category>
      <category>go</category>
      <category>backup</category>
    </item>
    <item>
      <title>Introducing relimpact: Fast Release Impact Analyzer for Go Projects</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sun, 08 Jun 2025 17:02:16 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/introducing-relimpact-fast-release-impact-analyzer-for-go-projects-46im</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/introducing-relimpact-fast-release-impact-analyzer-for-go-projects-46im</guid>
      <description>&lt;h1&gt;
  
  
  relimpact
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Release Impact Analyzer for Go projects — catch breaking API changes, docs updates &amp;amp; important file diffs — fast.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hashmap-kz/relimpact" rel="noopener noreferrer"&gt;GitHub Repo →&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;In modern Go projects, it's too easy for accidental API changes or subtle documentation edits to sneak through pull requests or release processes unnoticed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;relimpact&lt;/code&gt;&lt;/strong&gt; is a lightweight CLI tool that helps you understand &lt;strong&gt;what really changed&lt;/strong&gt; between two Git refs — with clean, structured, human-friendly reports.&lt;/p&gt;

&lt;p&gt;Use it in CI pipelines, release PRs, or locally before tagging new versions.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✨ Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🔍 &lt;strong&gt;API Diff&lt;/strong&gt; — Track breaking public API changes (structs, interfaces, functions, constants, variables).&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Docs Diff&lt;/strong&gt; — Section-aware Markdown diff to highlight meaningful content changes.&lt;/li&gt;
&lt;li&gt;🗂️ &lt;strong&gt;Other Files Diff&lt;/strong&gt; — Group file changes by extension (.sh, .sql, .json, etc.) to surface migrations and auxiliary files.&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;Designed for Release PR reviews&lt;/strong&gt; — Quickly see the real impact of changes.&lt;/li&gt;
&lt;li&gt;🖋️ &lt;strong&gt;Markdown Reports&lt;/strong&gt; — Ready to paste into GitHub Releases, Slack, or changelogs.&lt;/li&gt;
&lt;li&gt;⚙️ &lt;strong&gt;Works in GitHub Actions, GitLab CI, or locally&lt;/strong&gt; — Integrates easily.&lt;/li&gt;
&lt;li&gt;🔒 &lt;strong&gt;No server required&lt;/strong&gt; — Pure CLI tool.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🚀 Quickstart
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Run on a GitHub PR:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;relimpact &lt;span class="nt"&gt;--old&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1.0.0 &lt;span class="nt"&gt;--new&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;HEAD &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; release-impact.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example Output:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwpsq0nci95z4x6cz7j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwpsq0nci95z4x6cz7j0.png" alt="Image description" width="800" height="2312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expanded Sections&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykd1zn32tqj13x7105lm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykd1zn32tqj13x7105lm.png" alt="Image description" width="800" height="892"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PR Comment Generated&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhw4a78p9ylfsow3jv1c7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhw4a78p9ylfsow3jv1c7.png" alt="Image description" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ GitHub Action Integration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Release Impact on PR&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;master&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;synchronize&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;reopened&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;release-impact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Generate Release Impact Report&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fetch-depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Determine previous tag&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prevtag&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;git fetch --tags&lt;/span&gt;
          &lt;span class="s"&gt;TAG_LIST=$(git tag --sort=-version:refname)&lt;/span&gt;
          &lt;span class="s"&gt;PREV_TAG=$(echo "$TAG_LIST" | head -n2 | tail -n1)&lt;/span&gt;
          &lt;span class="s"&gt;echo "Previous tag: $PREV_TAG"&lt;/span&gt;
          &lt;span class="s"&gt;# Fallback to first tag if no previous&lt;/span&gt;
          &lt;span class="s"&gt;if [ -z "$PREV_TAG" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;PREV_TAG=$(echo "$TAG_LIST" | head -n1)&lt;/span&gt;
            &lt;span class="s"&gt;echo "Fallback to first tag: $PREV_TAG"&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;
          &lt;span class="s"&gt;echo "prev_tag=$PREV_TAG" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Determine new ref&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;newref&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;if [ "${{ github.event_name }}" = "pull_request" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;echo "new_ref=${{ github.event.pull_request.head.sha }}" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;
          &lt;span class="s"&gt;else&lt;/span&gt;
            &lt;span class="s"&gt;echo "new_ref=HEAD" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;

      &lt;span class="c1"&gt;# Cache restore for old ref&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cache API snapshot (old ref)&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache/restore@v4&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache-old&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.cache/relimpact-api-cache&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;relimpact-api-${{ steps.prevtag.outputs.prev_tag }}&lt;/span&gt;
          &lt;span class="na"&gt;restore-keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;relimpact-api-&lt;/span&gt;

      &lt;span class="c1"&gt;# Cache restore for new ref&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cache API snapshot (new ref)&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache/restore@v4&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache-new&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.cache/relimpact-api-cache&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;relimpact-api-${{ steps.newref.outputs.new_ref }}&lt;/span&gt;
          &lt;span class="na"&gt;restore-keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;relimpact-api-&lt;/span&gt;

      &lt;span class="c1"&gt;# Run your relimpact-action (this runs SnapshotAPI and writes cache)&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashmap-kz/relimpact-action@main&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;old-ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.prevtag.outputs.prev_tag }}&lt;/span&gt;
          &lt;span class="na"&gt;new-ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.newref.outputs.new_ref }}&lt;/span&gt;
          &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;release-impact.md&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;RELIMPACT_API_CACHE_DIR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.workspace }}/.cache/relimpact-api-cache&lt;/span&gt;

      &lt;span class="c1"&gt;# Cache save for old ref — only if not already restored&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Save API snapshot cache (old ref)&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.cache-old.outputs.cache-hit != 'true'&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache/save@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.cache/relimpact-api-cache&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;relimpact-api-${{ steps.prevtag.outputs.prev_tag }}&lt;/span&gt;

      &lt;span class="c1"&gt;# Cache save for new ref — only if not already restored&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Save API snapshot cache (new ref)&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.cache-new.outputs.cache-hit != 'true'&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache/save@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.cache/relimpact-api-cache&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;relimpact-api-${{ steps.newref.outputs.new_ref }}&lt;/span&gt;

      &lt;span class="c1"&gt;# Upload the release impact report&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload Release Impact Report&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;release-impact-${{ github.run_id }}-${{ github.run_attempt }}&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;release-impact.md&lt;/span&gt;

      &lt;span class="c1"&gt;# Post release impact to PR comment&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Post Release Impact to PR&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event_name == 'pull_request'&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;marocchino/sticky-pull-request-comment@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;recreate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;release-impact.md&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
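&lt;p&gt;The "Determine previous tag" step above can be dry-run locally before wiring it into CI. A minimal sketch, using a throwaway repository with illustrative tag names:&lt;/p&gt;

```shell
# Reproduce the workflow's tag-selection logic in a throwaway repo.
# Tag names here are illustrative; requires git on PATH.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
git tag v1.0.0
git tag v1.1.0
TAG_LIST=$(git tag --sort=-version:refname)    # newest tag first
PREV_TAG=$(echo "$TAG_LIST" | head -n2 | tail -n1)
echo "$PREV_TAG"    # prints v1.0.0 (the second-newest tag)
```

&lt;p&gt;Note that with a single tag in the repository, &lt;code&gt;head -n2 | tail -n1&lt;/code&gt; already returns that tag, so the workflow's empty-variable fallback only fires when there are no tags at all.&lt;/p&gt;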






&lt;h2&gt;
  
  
  📦 Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Homebrew
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap hashmap-kz/homebrew-tap
brew &lt;span class="nb"&gt;install &lt;/span&gt;relimpact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Manual Download
&lt;/h3&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/hashmap-kz/relimpact/releases" rel="noopener noreferrer"&gt;Download latest release&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 How It Works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1️⃣ Go Source API Changes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Uses the Go type system &amp;amp; AST parsing:

&lt;ul&gt;
&lt;li&gt;Detects breaking changes: method signatures, removed fields, new/removed types, etc.&lt;/li&gt;
&lt;li&gt;Ignores formatting &amp;amp; comment noise.&lt;/li&gt;
&lt;li&gt;Based on &lt;code&gt;golang.org/x/tools/go/packages&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  2️⃣ Markdown Docs Changes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Section-aware diff of &lt;code&gt;.md&lt;/code&gt; files:

&lt;ul&gt;
&lt;li&gt;Headings added/removed.&lt;/li&gt;
&lt;li&gt;Link and image changes.&lt;/li&gt;
&lt;li&gt;Section word count diffs.&lt;/li&gt;
&lt;li&gt;Based on &lt;code&gt;goldmark&lt;/code&gt; parser.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
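&lt;p&gt;A rough approximation of the section-aware idea: compare the heading lists of two Markdown files. &lt;code&gt;relimpact&lt;/code&gt; uses the &lt;code&gt;goldmark&lt;/code&gt; parser internally; this &lt;code&gt;diff&lt;/code&gt; sketch with made-up file contents is only an illustration.&lt;/p&gt;

```shell
# Illustrative only: spot heading changes between two Markdown files.
# The real tool parses the AST; a line diff is the crude equivalent.
dir=$(mktemp -d)
printf '# Intro\n## Install\n' > "$dir/old.md"
printf '# Intro\n## Quickstart\n' > "$dir/new.md"
headings_diff=$(diff "$dir/old.md" "$dir/new.md" || true)
echo "$headings_diff"
```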

&lt;h3&gt;
  
  
  3️⃣ Other Files Changes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Groups changes by file type:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.sql&lt;/code&gt;, &lt;code&gt;.sh&lt;/code&gt;, &lt;code&gt;.json&lt;/code&gt;, &lt;code&gt;.yaml&lt;/code&gt;, &lt;code&gt;.conf&lt;/code&gt;, etc.&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;git diff --name-status&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
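&lt;p&gt;The grouping step above can be sketched with plain git: diff two refs with &lt;code&gt;--name-status&lt;/code&gt; and bucket changed files by extension. The repository contents here are illustrative, not &lt;code&gt;relimpact&lt;/code&gt;'s actual implementation:&lt;/p&gt;

```shell
# Sketch: bucket changed files by extension between two refs.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base
git tag base
printf 'CREATE TABLE t(i int);' > 001_init.sql
printf 'key: value\n' > app.yaml
git add .
git -c user.email=ci@example.com -c user.name=ci commit -q -m change
# "A\t<path>" per line; prefix each with its extension as a group key.
report=$(git diff --name-status base HEAD \
  | awk -F'\t' '{n=split($2,p,"."); print "."p[n]": "$1" "$2}')
echo "$report"
```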




&lt;h2&gt;
  
  
  Why Use It?
&lt;/h2&gt;

&lt;p&gt;Most release PRs include:&lt;/p&gt;

&lt;p&gt;✅ API changes&lt;br&gt;
✅ Doc updates&lt;br&gt;
✅ Migration scripts&lt;br&gt;
✅ Other important config tweaks&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;raw &lt;code&gt;git diff&lt;/code&gt; is noisy and hard to review&lt;/strong&gt;.&lt;br&gt;
&lt;code&gt;relimpact&lt;/code&gt; gives you a &lt;strong&gt;release-ready summary&lt;/strong&gt;, focusing on what's important.&lt;/p&gt;




&lt;h2&gt;
  
  
  📜 License
&lt;/h2&gt;

&lt;p&gt;MIT License. See &lt;a href="https://github.com/hashmap-kz/relimpact/blob/master/LICENSE" rel="noopener noreferrer"&gt;LICENSE&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;👉 Try it today: &lt;a href="https://github.com/hashmap-kz/relimpact" rel="noopener noreferrer"&gt;https://github.com/hashmap-kz/relimpact&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you found this useful, leave a ⭐ on GitHub — it helps others discover the project!&lt;/p&gt;

&lt;p&gt;Happy releasing 🚀&lt;/p&gt;

</description>
      <category>go</category>
      <category>devops</category>
      <category>ci</category>
    </item>
    <item>
      <title>High-Speed File Transfer to and from Kubernetes PVCs</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sat, 07 Jun 2025 11:33:39 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/high-speed-file-transfer-to-and-from-kubernetes-pvcs-54fc</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/high-speed-file-transfer-to-and-from-kubernetes-pvcs-54fc</guid>
      <description>&lt;p&gt;When working with Kubernetes, &lt;em&gt;Persistent Volume Claims&lt;/em&gt; (PVCs) are the backbone of many stateful workloads.&lt;br&gt;
However, transferring data to and from a PVC is surprisingly hard to do efficiently.&lt;/p&gt;

&lt;p&gt;Sure, there are plenty of tools that look like they can help - tools that mount PVCs locally, migrate data between PVCs,&lt;br&gt;
or provide backup/restore functionality.&lt;/p&gt;

&lt;p&gt;But when I needed a fast and reliable way to copy large files into and out of PVCs, especially for PostgreSQL WAL archiving workflows, I ran into a big roadblock:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kubectl cp&lt;/code&gt; became painfully slow - I gave up waiting on a 100 GiB copy&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl exec&lt;/code&gt; requires shell tools inside your container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And CSI-backed PVCs (like Ceph RBD) don't easily expose the underlying storage for direct access.&lt;/p&gt;

&lt;p&gt;That’s why I built &lt;a href="https://github.com/hashmap-kz/kubectl-syncpod" rel="noopener noreferrer"&gt;kubectl-syncpod&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is kubectl-syncpod?
&lt;/h2&gt;

&lt;p&gt;It’s a CLI tool that enables high-speed file transfers between your local machine and Kubernetes PVCs - even if the workload uses minimal container images (distroless, scratch) or CSI-backed block storage.&lt;/p&gt;

&lt;p&gt;Unlike &lt;code&gt;kubectl cp&lt;/code&gt;, it leverages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ A temporary helper Pod that mounts the PVC&lt;/li&gt;
&lt;li&gt;✅ An ephemeral SSHD server for fast and secure file transfer&lt;/li&gt;
&lt;li&gt;✅ A parallel SFTP client that transfers files concurrently&lt;/li&gt;
&lt;li&gt;✅ Automatic cleanup (no leftover Pods or Services)&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  About
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kubectl cp&lt;/code&gt; and &lt;code&gt;kubectl exec&lt;/code&gt; degrade in performance with a large set of files (~100 GiB).&lt;/li&gt;
&lt;li&gt;They also require additional tools (&lt;code&gt;tar&lt;/code&gt;, &lt;code&gt;sh&lt;/code&gt;) and fail with distroless/scratch images.&lt;/li&gt;
&lt;li&gt;Most importantly, they do not support concurrent reading/writing.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Upload local files/directories to a pod volume&lt;/li&gt;
&lt;li&gt;Download files from a pod volume to the local machine&lt;/li&gt;
&lt;li&gt;Safe overwrite protection&lt;/li&gt;
&lt;li&gt;Auto-rename remote directories&lt;/li&gt;
&lt;li&gt;Concurrent file transfer with worker pool&lt;/li&gt;
&lt;li&gt;Preserves directory structure&lt;/li&gt;
&lt;li&gt;Optional &lt;code&gt;chown&lt;/code&gt; of uploaded files inside the pod&lt;/li&gt;
&lt;li&gt;Fully based on SFTP + Kubernetes Exec API&lt;/li&gt;
&lt;/ul&gt;
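&lt;p&gt;The worker-pool idea behind concurrent transfer can be illustrated with nothing more than &lt;code&gt;xargs -P&lt;/code&gt;: fan independent file copies out across parallel workers. This is a sketch of the concept, not how the tool is implemented (it uses a Go worker pool over SFTP):&lt;/p&gt;

```shell
# Illustration of the worker-pool idea: copy a file list with up to
# 4 parallel workers. File names are throwaway placeholders.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/a" "$src/b" "$src/c"
ls "$src" | xargs -P 4 -I{} cp "$src/{}" "$dst/{}"
ls "$dst"    # a b c (copied concurrently)
```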
&lt;h3&gt;
  
  
  Typical Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ Download backups from a PVC for verification/restore&lt;/li&gt;
&lt;li&gt;✅ Sync files between PVCs and the local machine&lt;/li&gt;
&lt;li&gt;✅ Test PVC mount behavior&lt;/li&gt;
&lt;li&gt;✅ Prepare volume data in CI/CD pipelines&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Spins up a helper pod that mounts the PVC&lt;/li&gt;
&lt;li&gt;Runs an SSHD server with an ephemeral key&lt;/li&gt;
&lt;li&gt;A local SFTP client transfers the files&lt;/li&gt;
&lt;li&gt;Automatically cleans up the pod and service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;See also &lt;a href="https://github.com/hashmap-kz/kubectl-syncpod/tree/master/examples/k8s" rel="noopener noreferrer"&gt;Examples&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Homebrew
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap hashmap-kz/homebrew-tap
brew &lt;span class="nb"&gt;install &lt;/span&gt;kubectl-syncpod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Manual
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/hashmap-kz/kubectl-syncpod/releases/latest" rel="noopener noreferrer"&gt;Download the latest release&lt;/a&gt; and place it in PATH.&lt;/p&gt;

&lt;p&gt;Installation script for UNIX-like systems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(
set -euo pipefail

OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"
TAG="$(curl -s https://api.github.com/repos/hashmap-kz/kubectl-syncpod/releases/latest | jq -r .tag_name)"

curl -L "https://github.com/hashmap-kz/kubectl-syncpod/releases/download/${TAG}/kubectl-syncpod_${TAG}_${OS}_${ARCH}.tar.gz" |
tar -xzf - -C /usr/local/bin &amp;amp;&amp;amp; \
chmod +x /usr/local/bin/kubectl-syncpod
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Example Usage
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Upload with safe rename and chown
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl-syncpod upload &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; my-db &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--pvc&lt;/span&gt; postgres-data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--mount-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/postgresql/data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backups &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dst&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;pgdata-new &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allow-overwrite&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--owner&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"999:999"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Download the remote directory
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl-syncpod download &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; my-db &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--pvc&lt;/span&gt; postgres-data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--mount-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/postgresql/data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;pgdata-new &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dst&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backups-copy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;code&gt;kubectl cp&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;kubectl exec&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;
&lt;code&gt;kubectl-syncpod&lt;/code&gt; (SFTP mode)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Uses sidecar or helper pod&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Works with PVCs&lt;/td&gt;
&lt;td&gt;⚠️ Only if mounted in container&lt;/td&gt;
&lt;td&gt;⚠️ Manual path required&lt;/td&gt;
&lt;td&gt;✅ Helper pod mounts PVC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Requires tools in container (&lt;code&gt;tar&lt;/code&gt;, &lt;code&gt;sh&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌ (uses &lt;code&gt;sshd&lt;/code&gt; in helper pod)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supports &lt;code&gt;readOnlyRootFilesystem&lt;/code&gt; pods&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Works on &lt;code&gt;distroless&lt;/code&gt;/&lt;code&gt;scratch&lt;/code&gt; images&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Affects main application container&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Requires container to run as root&lt;/td&gt;
&lt;td&gt;Often yes&lt;/td&gt;
&lt;td&gt;Often yes&lt;/td&gt;
&lt;td&gt;❌ or configurable via helper pod spec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Safe for production workloads&lt;/td&gt;
&lt;td&gt;⚠️ Risky&lt;/td&gt;
&lt;td&gt;⚠️ Risky&lt;/td&gt;
&lt;td&gt;✅ (safe for read)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-cleans after sync&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supports concurrent transfers&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅ (parallel SFTP workers)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance on large file trees&lt;/td&gt;
&lt;td&gt;🐢 Slow&lt;/td&gt;
&lt;td&gt;🐢 Slow&lt;/td&gt;
&lt;td&gt;🚀 Fast (streaming + concurrency)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;kubectl-syncpod&lt;/code&gt; fills a small but important gap in the Kubernetes toolchain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👉 Efficient and fast file transfer to/from PVCs&lt;/li&gt;
&lt;li&gt;👉 No need to modify your main containers&lt;/li&gt;
&lt;li&gt;👉 Works great with CSI-backed volumes&lt;/li&gt;
&lt;li&gt;👉 Production-safe and automated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The project is open source and evolving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👉 &lt;a href="https://github.com/hashmap-kz/kubectl-syncpod" rel="noopener noreferrer"&gt;https://github.com/hashmap-kz/kubectl-syncpod&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you find it useful - contributions, feedback, and GitHub ⭐️ stars are very welcome!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>backup</category>
    </item>
    <item>
      <title>🔬 Internals of PostgreSQL WAL Streaming with pgrwl: A Cloud-Native Receiver Built in Go</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Thu, 05 Jun 2025 15:39:11 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/internals-of-postgresql-wal-streaming-with-pgrwl-a-cloud-native-receiver-built-in-go-16jh</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/internals-of-postgresql-wal-streaming-with-pgrwl-a-cloud-native-receiver-built-in-go-16jh</guid>
      <description>&lt;p&gt;PostgreSQL's Write-Ahead Logging (WAL) underpins everything from crash recovery to PITR and streaming replication. But &lt;strong&gt;streaming WALs safely and reliably&lt;/strong&gt;, especially into cloud or container-native environments, can be fragile.&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;&lt;code&gt;pgrwl&lt;/code&gt;&lt;/a&gt;: a modern WAL receiver written in Go that gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📆 WAL streaming to local/S3/SFTP&lt;/li&gt;
&lt;li&gt;🔐 On-the-fly encryption + compression&lt;/li&gt;
&lt;li&gt;🐳 Kubernetes-native workflows&lt;/li&gt;
&lt;li&gt;📊 Prometheus metrics + Grafana dashboards&lt;/li&gt;
&lt;li&gt;🧹 Automatic retention + cleanup&lt;/li&gt;
&lt;li&gt;⚙️ Safe WAL handling, with proper &lt;code&gt;fsync&lt;/code&gt; guarantees&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s explore how it works under the hood.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;🧪 &lt;strong&gt;Note&lt;/strong&gt;: &lt;code&gt;pgrwl&lt;/code&gt; is a &lt;strong&gt;research project&lt;/strong&gt;, inspired by PostgreSQL’s built-in tools like &lt;code&gt;pg_receivewal&lt;/code&gt;, and is&lt;br&gt;
based on careful reading of their &lt;strong&gt;official source code&lt;/strong&gt; and streaming behaviors.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pgrwl&lt;/code&gt; serves as a &lt;strong&gt;starting point&lt;/strong&gt; to explore more &lt;strong&gt;cloud-native&lt;/strong&gt;, container-friendly, and extensible&lt;br&gt;
implementations of these time-tested concepts.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  📆 The Problem
&lt;/h2&gt;

&lt;p&gt;For a cloud-native WAL streamer, these features are expected:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud uploads&lt;/strong&gt; (S3, SFTP)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Streaming compression&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Encryption&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Metrics and observability&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retention and cleanup&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pod/container integration&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of all, it should be integrated with &lt;strong&gt;modern DevOps pipelines&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ How &lt;code&gt;pgrwl&lt;/code&gt; Works
&lt;/h2&gt;

&lt;p&gt;The architecture captures the streaming and archival lifecycle:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bm4uf0ymbd71uvm78g9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bm4uf0ymbd71uvm78g9.png" alt=" " width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔍 Component Breakdown
&lt;/h2&gt;

&lt;h3&gt;
  
  
  📅 WAL Receiver
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Connects to a PostgreSQL replication slot&lt;/li&gt;
&lt;li&gt;Receives &lt;code&gt;XLogData&lt;/code&gt; messages via &lt;a href="https://github.com/jackc/pglogrepl" rel="noopener noreferrer"&gt;&lt;code&gt;pglogrepl&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Buffers segment into a &lt;code&gt;.partial&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the 16 MiB boundary is reached:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;fsync()&lt;/code&gt; the file&lt;/li&gt;
&lt;li&gt;Atomically rename to final segment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
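&lt;p&gt;The &lt;code&gt;.partial&lt;/code&gt;-then-rename pattern above can be sketched with coreutils: write under a temporary name, flush to disk, then rename atomically so readers never observe a half-written segment. (&lt;code&gt;pgrwl&lt;/code&gt; issues the &lt;code&gt;fsync&lt;/code&gt; in Go; the &lt;code&gt;sync&lt;/code&gt; command here is the closest shell stand-in.)&lt;/p&gt;

```shell
# Sketch of the atomic-segment pattern; segment name is illustrative.
dir=$(mktemp -d)
seg=000000010000000000000001
printf 'wal bytes' > "$dir/$seg.partial"
# Flush file data; per-file sync needs GNU coreutils >= 8.24,
# fall back to a global sync otherwise.
sync "$dir/$seg.partial" 2>/dev/null || sync
mv "$dir/$seg.partial" "$dir/$seg"    # atomic within the same filesystem
ls "$dir"
```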

&lt;h3&gt;
  
  
  🧠 Archive Supervisor
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Periodically triggers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WAL uploader&lt;/li&gt;
&lt;li&gt;WAL retention sweeper&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  ☁️ WAL Uploader
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Scans for complete segments&lt;/li&gt;
&lt;li&gt;Compresses + encrypts&lt;/li&gt;
&lt;li&gt;Streams directly to remote storage&lt;/li&gt;
&lt;li&gt;Deletes local copy on success&lt;/li&gt;
&lt;/ul&gt;
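&lt;p&gt;The compress-then-encrypt streaming idea can be approximated in the shell with &lt;code&gt;gzip&lt;/code&gt; piped into &lt;code&gt;openssl&lt;/code&gt;. This is an illustration of the pipeline shape, not &lt;code&gt;pgrwl&lt;/code&gt;'s in-process implementation; the cipher choice and passphrase are placeholders:&lt;/p&gt;

```shell
# Compress a segment, encrypt the stream, then round-trip it back
# to verify nothing was lost. Requires gzip and openssl (>= 1.1.1
# for -pbkdf2); "pass:demo" is a placeholder passphrase.
work=$(mktemp -d)
printf 'wal segment bytes' > "$work/segment"
gzip -c "$work/segment" \
  | openssl enc -aes-256-ctr -pbkdf2 -pass pass:demo -out "$work/segment.gz.enc"
openssl enc -d -aes-256-ctr -pbkdf2 -pass pass:demo -in "$work/segment.gz.enc" \
  | gunzip > "$work/roundtrip"
cmp "$work/segment" "$work/roundtrip" && echo "round-trip OK"
```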

&lt;h3&gt;
  
  
  🧹 WAL Retainer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lists remote storage&lt;/li&gt;
&lt;li&gt;Applies time-based retention&lt;/li&gt;
&lt;li&gt;Deletes expired segments&lt;/li&gt;
&lt;/ul&gt;
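&lt;p&gt;Time-based retention can be sketched against a local directory; &lt;code&gt;pgrwl&lt;/code&gt; applies the same idea to remote storage listings. GNU &lt;code&gt;find&lt;/code&gt; and &lt;code&gt;touch&lt;/code&gt; are assumed, and the 7-day window is illustrative:&lt;/p&gt;

```shell
# Retention sketch: delete archived files older than 7 days.
arch=$(mktemp -d)
touch -d '10 days ago' "$arch/old_segment"    # backdated mtime (GNU touch)
touch "$arch/new_segment"
find "$arch" -type f -mtime +7 -delete
ls "$arch"    # only new_segment survives
```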




&lt;h2&gt;
  
  
  🧪 Integrity Testing: Byte-for-Byte Fidelity
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/hashmap-kz/pgrwl/tree/master/test/integration/environ/scripts/tests" rel="noopener noreferrer"&gt;&lt;code&gt;pgrwl&lt;/code&gt;&lt;/a&gt; is tested to ensure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Received WALs match PostgreSQL originals&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.partial&lt;/code&gt; files are never uploaded&lt;/li&gt;
&lt;li&gt;Segment naming and metadata are preserved&lt;/li&gt;
&lt;/ul&gt;
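&lt;p&gt;The fidelity guarantee boils down to a byte-level comparison between what PostgreSQL wrote and what the receiver archived. A minimal version of that assertion, with placeholder directories standing in for the two WAL archives:&lt;/p&gt;

```shell
# Byte-for-byte check between two archive directories; the segment
# name and payload are illustrative.
a=$(mktemp -d)
b=$(mktemp -d)
printf 'segment payload' > "$a/000000010000000000000001"
cp "$a/000000010000000000000001" "$b/"
cmp "$a/000000010000000000000001" "$b/000000010000000000000001" && echo identical
```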

&lt;p&gt;Related articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/alzhi_f93e67fa45b972/stream-postgresql-wals-with-zero-data-loss-in-mind-5e17"&gt;Stream PostgreSQL WALs With Zero Data Loss In Mind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/alzhi_f93e67fa45b972/testing-postgresql-wal-streamers-for-byte-level-fidelity-a15"&gt;Testing PostgreSQL WAL Streamers for Byte-Level Fidelity&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔄 Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🛡️ WAL archiving for disaster recovery (PITR)&lt;/li&gt;
&lt;li&gt;🔐 Encrypted WAL pipelines&lt;/li&gt;
&lt;li&gt;🐳 Kubernetes integration&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🤝 Contribute &amp;amp; Collaborate
&lt;/h2&gt;

&lt;p&gt;You can help by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suggesting features&lt;/li&gt;
&lt;li&gt;Filing issues&lt;/li&gt;
&lt;li&gt;Improving the CLI and UX&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start here: &lt;a href="https://github.com/hashmap-kz/pgrwl/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22" rel="noopener noreferrer"&gt;Contributing&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📂 Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;hashmap-kz/pgrwl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker: &lt;code&gt;ghcr.io/hashmap-kz/pgrwl&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Docs: In-repo README &amp;amp; examples&lt;/li&gt;
&lt;li&gt;License: MIT&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Stream safely, back up smart. 🌊&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>go</category>
      <category>devops</category>
      <category>backup</category>
    </item>
    <item>
      <title>Testing PostgreSQL WAL Streamers for Byte-Level Fidelity</title>
      <dc:creator>alexey.zh</dc:creator>
      <pubDate>Sat, 31 May 2025 14:36:40 +0000</pubDate>
      <link>https://dev.to/alzhi_f93e67fa45b972/testing-postgresql-wal-streamers-for-byte-level-fidelity-a15</link>
      <guid>https://dev.to/alzhi_f93e67fa45b972/testing-postgresql-wal-streamers-for-byte-level-fidelity-a15</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Verifying that WAL streamers preserve exact database state — bit by bit.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🧭 Context
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/alzhi_f93e67fa45b972/stream-postgresql-wals-with-zero-data-loss-in-mind-5e17"&gt;previous post&lt;/a&gt;, we explored the motivations behind building &lt;strong&gt;&lt;code&gt;pgrwl&lt;/code&gt;&lt;/strong&gt;, a PostgreSQL WAL receiver designed for zero data loss (RPO=0) scenarios in containerized environments. We covered its architecture, features like compression/encryption, and its suitability for Kubernetes-based disaster recovery.&lt;/p&gt;

&lt;p&gt;This follow-up post focuses on testing — specifically validating that &lt;code&gt;pgrwl&lt;/code&gt; produces WAL archives that are &lt;strong&gt;byte-for-byte identical&lt;/strong&gt; to PostgreSQL’s official tool (&lt;code&gt;pg_receivewal&lt;/code&gt;) and that it supports full PITR (Point-in-Time Recovery) after abrupt system crashes.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Intro
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Write-Ahead Logs (WALs)&lt;/strong&gt; are at the heart of PostgreSQL’s crash recovery and replication capabilities. But what happens when we replace the native WAL receiver (&lt;code&gt;pg_receivewal&lt;/code&gt;) with a third-party tool like &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;&lt;code&gt;pgrwl&lt;/code&gt;&lt;/a&gt;? Can we &lt;strong&gt;trust&lt;/strong&gt; it to preserve data integrity byte-for-byte?&lt;/p&gt;

&lt;p&gt;This post dives into a &lt;strong&gt;golden test&lt;/strong&gt; designed to answer that question — by simulating real-world PostgreSQL workloads, abrupt crashes, and full recovery workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All Bash scripts shown here are simplified examples to illustrate the core logic. The full implementation with deep technical details and automation scripts is available in the &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;pgrwl GitHub repository&lt;/a&gt;. This post focuses on explaining the &lt;strong&gt;primary test goal&lt;/strong&gt;, rather than every integration nuance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hashmap-kz/pgrwl/blob/master/test/integration/environ/scripts/tests/001-fundamental.sh" rel="noopener noreferrer"&gt;Integration Test Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Goal
&lt;/h2&gt;

&lt;p&gt;To verify that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pgrwl&lt;/code&gt; can reliably stream WALs during active writes.&lt;/li&gt;
&lt;li&gt;The restored database is &lt;strong&gt;identical&lt;/strong&gt; to its pre-crash state.&lt;/li&gt;
&lt;li&gt;WAL files produced by &lt;code&gt;pgrwl&lt;/code&gt; match those produced by &lt;code&gt;pg_receivewal&lt;/code&gt; &lt;strong&gt;bit-for-bit&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️ Tools Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL 16+&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pg_receivewal&lt;/code&gt; — the official WAL receiver.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;&lt;code&gt;pgrwl&lt;/code&gt;&lt;/a&gt; — a WAL receiver with encryption, compression, and pluggable storage backends.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pg_dumpall&lt;/code&gt;, &lt;code&gt;pgbench&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Bash for orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧪 Test Procedure: Step-by-Step
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;We simulate a live system, insert tons of data, kill everything mid-flight, and then recover from base backup + WALs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1. Start PostgreSQL
&lt;/h3&gt;

&lt;p&gt;Initialize a clean cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;initdb -D /tmp/pgdata
pg_ctl -D /tmp/pgdata -l logfile start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Launch WAL Receivers
&lt;/h3&gt;

&lt;p&gt;Run both in parallel (in background):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_receivewal --slot=test_slot -D /tmp/pgwal_pg ...
pgrwl --mode=receive -c config.yml ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Take a Base Backup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_basebackup \
  --pgdata="/tmp/base_backup" \
  --wal-method=none \
  --checkpoint=fast \
  --progress \
  --no-password \
  --verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Simulate Real Workload
&lt;/h3&gt;

&lt;p&gt;Insert timestamps every second:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -c 'CREATE TABLE ticks(ts TIMESTAMPTZ DEFAULT now());'
while true; do psql -c 'INSERT INTO ticks DEFAULT VALUES;'; sleep 1; done &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;pgbench&lt;/code&gt; to add load:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pgbench -i -s 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create 100 tables in parallel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in $(seq 1 100); do
  psql -c "CREATE TABLE t_$i AS SELECT * FROM generate_series(1, 10000) AS g(id);" &amp;amp;
done
wait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Capture Golden Snapshot
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_dumpall &amp;gt; /tmp/before.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kill the &lt;code&gt;ticks&lt;/code&gt; inserter.&lt;/p&gt;
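&lt;p&gt;One clean way to do that is to capture the loop's PID when launching it in step 4. A sketch, with a &lt;code&gt;sleep&lt;/code&gt; loop standing in for the &lt;code&gt;psql&lt;/code&gt; inserter:&lt;/p&gt;

```shell
# Background loop standing in for: while true; do psql -c '...'; sleep 1; done &
while true; do sleep 1; done &
INSERTER_PID=$!

# Stop the inserter once the golden snapshot is captured
kill "$INSERTER_PID"
wait "$INSERTER_PID" 2>/dev/null || true
```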

&lt;h3&gt;
  
  
  6. Simulate Crash
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pkill -9 postgres || true
pg_ctl -D /tmp/pgdata -m immediate stop
rm -rf /tmp/pgdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. Restore from Base + WALs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp -r /tmp/base_backup /tmp/pgdata
touch /tmp/pgdata/recovery.signal
echo "restore_command = 'pgrwl restore-command --serve-addr=127.0.0.1:7070 %f %p'" &amp;gt;&amp;gt; /tmp/pgdata/postgresql.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 Rename all &lt;code&gt;*.partial&lt;/code&gt; WALs to their final names before restarting PostgreSQL.&lt;/p&gt;
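&lt;p&gt;A minimal sketch of that rename step, demonstrated on a throwaway directory. In the real test the directory would be the receiver's archive, e.g. &lt;code&gt;/tmp/pgwal_pgrwl&lt;/code&gt;:&lt;/p&gt;

```shell
# Promote in-progress segments to their final names so that
# restore_command can serve them during recovery
WAL_DIR=$(mktemp -d)
touch "$WAL_DIR/000000010000000000000003.partial"

for f in "$WAL_DIR"/*.partial; do
  [ -e "$f" ] || continue        # no .partial files present
  mv -- "$f" "${f%.partial}"     # strip the .partial suffix
done
```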

&lt;h3&gt;
  
  
  8. Restart PostgreSQL
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_ctl -D /tmp/pgdata -l logfile start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for recovery to complete.&lt;/p&gt;
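&lt;p&gt;Rather than sleeping for a fixed time, poll until &lt;code&gt;pg_is_in_recovery()&lt;/code&gt; flips to &lt;code&gt;f&lt;/code&gt;. A generic wait loop; the &lt;code&gt;psql&lt;/code&gt; probe is shown as a comment, and a file-based probe keeps the sketch runnable:&lt;/p&gt;

```shell
# Retry a probe command once per second until it succeeds or the timeout hits
wait_until() {            # usage: wait_until <timeout-seconds> <command...>
  local t=$1; shift
  until "$@"; do
    t=$((t - 1))
    [ "$t" -gt 0 ] || return 1
    sleep 1
  done
}

# Real probe (assumes psql points at the restored cluster):
#   wait_until 60 sh -c '[ "$(psql -Atc "SELECT pg_is_in_recovery()")" = f ]'

# Runnable stand-in: wait for a marker file that appears shortly
MARKER=$(mktemp -u)
( sleep 2; touch "$MARKER" ) &
wait_until 10 test -e "$MARKER"
```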

&lt;h3&gt;
  
  
  9. Validate Database Consistency
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_dumpall &amp;gt; /tmp/after.sql
diff -u /tmp/before.sql /tmp/after.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Expect: No differences.&lt;/p&gt;

&lt;p&gt;Also verify that the &lt;code&gt;ticks&lt;/code&gt; table contains the latest inserted row, confirming no data loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Compare WAL Files
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;diff -r /tmp/pgwal_pg /tmp/pgwal_pgrwl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Expect: Identical content and filenames.&lt;/p&gt;

&lt;h3&gt;
  
  
  📉 Post-Crash: Retest on New Timeline
&lt;/h3&gt;

&lt;p&gt;Restart both WAL streamers and verify that they resume correctly on the new timeline created by the crash-and-recovery cycle.&lt;/p&gt;

&lt;p&gt;Then rerun the diff.&lt;/p&gt;

&lt;h3&gt;
  
  
  🧠 What This Test Proves
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;WALs received by &lt;code&gt;pgrwl&lt;/code&gt; are valid and byte-identical to the official ones.&lt;/li&gt;
&lt;li&gt;PostgreSQL can recover from &lt;code&gt;pgrwl&lt;/code&gt;'s archived WALs up to the latest committed transaction.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔬 Bonus: Add Compression and Encryption
&lt;/h2&gt;

&lt;p&gt;Add this to the config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;compression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;algo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gzip&lt;/span&gt;
&lt;span class="na"&gt;encryption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;algo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aesgcm&lt;/span&gt;
  &lt;span class="na"&gt;pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${PGRWL_ENCRYPT_PASS}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 WALs will no longer match byte-for-byte (they’re transformed), but recovery should still work identically.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Testing WAL archiving isn’t just about receiving files — it’s about trust.&lt;/em&gt;&lt;br&gt;
This golden test validates &lt;code&gt;pgrwl&lt;/code&gt; as a reliable WAL receiver with byte-level fidelity and advanced features&lt;br&gt;
like encryption and compression.&lt;/p&gt;

&lt;p&gt;📦 Check out the code: &lt;a href="https://github.com/hashmap-kz/pgrwl" rel="noopener noreferrer"&gt;github.com/hashmap-kz/pgrwl&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  🙌 Get Involved
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgrwl&lt;/code&gt; is an open-source project built for the PostgreSQL community — and your feedback matters!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🐞 Found a bug? &lt;a href="https://github.com/hashmap-kz/pgrwl/issues" rel="noopener noreferrer"&gt;Open an issue&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;💡 Have an idea or feature request? We'd love to hear it.&lt;/li&gt;
&lt;li&gt;🧪 Want to improve WAL testing coverage? Run
the &lt;a href="https://github.com/hashmap-kz/pgrwl#integration-testing" rel="noopener noreferrer"&gt;integration tests&lt;/a&gt; or add your own cases.&lt;/li&gt;
&lt;li&gt;🔧 Found a rough edge or an unclear doc? Contributions are always welcome.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start by starring ⭐ the repo, trying it out in your own cluster, and sharing what you learn.&lt;/p&gt;

&lt;p&gt;Let’s build better PostgreSQL backup tooling — together.&lt;/p&gt;

</description>
      <category>backup</category>
      <category>go</category>
      <category>postgres</category>
    </item>
  </channel>
</rss>
