Recently, I deployed a small CSS change for this blog. It was a simple tweak, just shifting a few pixels, but after hitting git push, that inexplicable tension settled over me again, as if I had deployed a critical banking system. Somewhere inside, the question of "what if" started swirling.
This feeling is familiar to me; I've experienced it after every deploy for 20 years. Automatically, my hand reaches for the tail -f /var/log/nginx/access.log command, and I open the Cloudflare dashboard in my browser to check cache hit ratios and error logs. Even if everything appears fine, I remain vigilant for a while longer.
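That ritual can be scripted. Here's a minimal sketch of such a post-deploy smoke check; the URL is a placeholder, the log path is the one from above, and the 5xx counter assumes nginx's default combined log format (status code in the ninth field):

```shell
#!/usr/bin/env sh
# Hypothetical post-deploy smoke check: a scripted version of the manual ritual.
# LOG and URL are placeholders, not my actual setup.

LOG="${LOG:-/var/log/nginx/access.log}"
URL="${URL:-https://example.com/}"

# Count 5xx responses in combined-log-format lines read from stdin
# (the status code is the 9th whitespace-separated field).
count_5xx() {
    awk '$9 ~ /^5[0-9][0-9]$/ { n++ } END { print n+0 }'
}

smoke_check() {
    # Does the site answer with a success status at all?
    if curl -fsS -o /dev/null "$URL"; then
        echo "site up"
    else
        echo "site DOWN" >&2
    fi
    # Any server errors among the last 200 requests?
    echo "recent 5xx responses: $(tail -n 200 "$LOG" 2>/dev/null | count_5xx)"
}
```

It doesn't replace watching the dashboards, but running it right after the push answers the first two "what if" questions immediately.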
Symptoms of That "What If" Feeling
This tension that arises after a deploy is a situation many of us are familiar with. Sometimes it manifests as a minor twitch, other times as a mild paranoia lasting hours. There are even times when I wake up in the middle of the night with an urge to check, wondering, "Did I forget something?"
I don't experience this only with large projects. Even on my own VPS, where I manage over 13 Docker containers, I feel it after a simple configuration change. Despite everything being automated, a part of me still thinks, "What if, just maybe?"
Past Painful Experiences and Triggers
At the root of this "what if" feeling are, I believe, the painful experiences we've had in the past. Those moments are deeply etched in our brains and are triggered with every deploy. For me, some of these triggers are very clear.
On my own VPS, I experienced this feeling most intensely on April 28th. I had deployed a new container, and the next morning the Pipeline-health monitor sent a "DEGRADED" email. The system was choked, with kcompactd pinned at 92% CPU; it couldn't even accept SSH connections. The helplessness of that moment, and the hours of debugging that followed, explain where this tension comes from.
⚠️ Docker Disk Fire
Once, again on my own VPS, I experienced a Docker disk fire. The disk filled to 100% due to 33 GB of build cache and 23 GB of unused images. All my applications went down instantly, requiring urgent intervention. Such incidents are among the most significant reasons that reinforce that 'what if' feeling after a deploy.
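A guard against this kind of disk fire can be as simple as a scheduled prune that fires once usage crosses a threshold. A hypothetical sketch (the 85% threshold is illustrative, not my actual value; `disk_pct` relies on GNU df):

```shell
#!/usr/bin/env sh
# Hypothetical disk-fire guard: prune Docker build cache and unreferenced
# images before the disk hits 100%. Intended to run from cron or a timer.

# Usage of a filesystem as a bare integer percentage (GNU df).
disk_pct() {
    df --output=pcent "${1:-/}" | tail -1 | tr -dc '0-9'
}

# Prune once usage passes the given threshold (default: an illustrative 85%).
prune_if_full() {
    threshold="${1:-85}"
    if [ "$(disk_pct /)" -gt "$threshold" ]; then
        docker builder prune --force      # drop the build cache
        docker image prune --all --force  # drop images no container references
    fi
}
```

A 33 GB build cache doesn't appear overnight; a guard like this keeps it from ever becoming an emergency.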
There were also times when my Astro build consumed 2.5 GB of RAM, pushing the system's 7.6 GB RAM to its limits and causing an OOM (Out Of Memory) error. Or the pain of deleting directories inside _work/_temp on a GitHub Actions runner... All these scenarios have repeatedly shown me that a system can react unexpectedly. That's why, no matter how prepared I am, that meaningless stress lingers with me for a while.
The Balance of Risk and Control
This situation is, at its core, a reflection of risk management and the need for control. Not knowing what consequences the systems we build and operate might have once they go live is what creates the stress. Even with robust tests, automation, and monitoring tools, the production environment always holds its own surprises.
Striking this balance, finding a path between the desire for fast deploys and the goal of risk-free ones, is often challenging. Sometimes we trade away certain checks to gain speed and pay the price later; on the other hand, trying to perfect everything slows the process down.
My Coping Mechanisms
Over the years, I've developed my own methods to cope with this stress. While it hasn't completely disappeared, I've managed to reduce its impact. Automation and comprehensive monitoring are at the forefront of these methods.
I've set up automatic deploy processes with GitHub Actions. Every change is automatically pushed to production after passing tests. With Prometheus and Grafana, I monitor every corner of the system, and with Alertmanager, I receive instant notifications for anomalies. For pipeline reliability, I've specifically implemented preflight resource guards; these check if system resources are sufficient before a deploy.
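As an illustration, a preflight resource guard can look roughly like this. The thresholds are made up for the example, and it assumes a Linux host with /proc/meminfo:

```shell
#!/usr/bin/env sh
# Hypothetical preflight guard: fail the pipeline early if the host lacks
# headroom for a deploy. Thresholds below are illustrative placeholders.

# Available memory in MiB (MemAvailable in /proc/meminfo is reported in kB).
mem_available_mib() {
    awk '/^MemAvailable:/ { print int($2 / 1024) }' /proc/meminfo
}

# Free disk on a mount point in MiB (POSIX df -Pk reports 1024-byte blocks).
disk_free_mib() {
    df -Pk "${1:-/}" | awk 'NR==2 { print int($4 / 1024) }'
}

preflight() {
    min_mem="${1:-1024}"   # require at least 1 GiB of available RAM
    min_disk="${2:-5120}"  # require at least 5 GiB of free disk
    if [ "$(mem_available_mib)" -lt "$min_mem" ]; then
        echo "preflight: not enough memory" >&2; return 1
    fi
    if [ "$(disk_free_mib /)" -lt "$min_disk" ]; then
        echo "preflight: not enough disk" >&2; return 1
    fi
    echo "preflight: ok"
}
```

Wired in as the first pipeline step, a failing guard aborts the run before anything touches production, which is exactly when you want to find out.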
💡 Small and Frequent Deploys
Instead of large, monolithic deploys, I prefer small, atomic changes. This narrows the scope of a potential problem and makes rolling back much easier. When an issue arises, it becomes much simpler to pinpoint what changed.
Rollback mechanisms are vitally important to me. When a deploy is found to be problematic, I need to be able to revert to the previous stable version with a single command. This sense of security somewhat alleviates that initial moment of stress. Furthermore, I'm not ashamed to make mistakes. Last month, when I wrote sleep 360 and got OOM-killed, I told myself, "this too was a lesson," and switched to a polling-wait mechanism. Learning from my self-created problems helps me be more careful in the next deploy.
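The polling-wait that replaced that sleep 360 can be sketched roughly like this; the health-check command at the bottom is a placeholder for whatever signals "the deploy is healthy":

```shell
#!/usr/bin/env sh
# Hypothetical polling-wait: instead of a fixed `sleep 360`, poll a readiness
# check and give up after a timeout. Returns as soon as the check passes.

wait_for() {
    timeout="${1:-360}"   # seconds before giving up
    interval="${2:-5}"    # seconds between polls
    shift 2
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if "$@"; then
            echo "ready after ${elapsed}s"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "timed out after ${timeout}s" >&2
    return 1
}

# Example (placeholder endpoint):
#   wait_for 360 5 curl -fsS http://localhost:8080/healthz
```

The difference matters: a fixed sleep holds resources for the worst case every time, while polling returns as soon as the system is actually ready.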
The "It Happens" Philosophy and Acceptance
Ultimately, a certain amount of risk and uncertainty is inherent in this line of work. There's no such thing as a perfect system; there can always be a vulnerability, a bug, or an unexpected interaction. Accepting this truth, embracing the "it happens" philosophy, reduces the pressure on me.
Of course, this is not a state of complacency. On the contrary, it constantly pushes me to build better, more resilient, and more secure systems. There are times when I implement kernel module blacklists (like algif_aead for CVE-2026-31431) as part of CVE mitigation; this is also part of the job. I learn from every mistake, every problem, and enter the next deploy better prepared.
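For illustration, a module blacklist of that kind is typically a small modprobe.d fragment; the file name here is just an example, and the syntax follows standard modprobe configuration conventions:

```
# /etc/modprobe.d/blacklist-algif_aead.conf  (example file name)
# Prevent on-demand loading of the AF_ALG AEAD interface.
blacklist algif_aead

# `blacklist` alone only blocks alias-based autoloading; the `install` line
# also blocks explicit modprobe requests for the module.
install algif_aead /bin/false
```
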
ℹ️ Self-Hosted Runner Economics
To avoid exceeding GitHub Actions quotas, I use a self-hosted runner on my own VPS. This both reduces costs and gives me more control. However, it also brings its own operational overhead. Every decision has a trade-off.
This constant state of vigilance has, I suppose, become a part of my profession. Perhaps this situation is a source of motivation that drives us to build better systems. It's not about striving for perfection, but a continuous cycle of improvement and learning.
Do you also have similar experiences after a deploy? How do you cope with that "what if" feeling? What's the first thing you do after a deploy? I'd love if you could share in the comments; perhaps we can learn from each other.