Mubarak Alhazan for AWS Community Builders

Originally published at poly4.hashnode.dev

Cooking Without Burning: My DevOps Doings in the Past Few Years

DevOps, much like cooking, is all about balance. Too little automation, and the process stays raw; too much change without control, and something catches fire. Over the past few years, I’ve spent countless hours in the kitchen, experimenting with tools, tweaking workflows, and sometimes cleaning up after the inevitable smoke. Each deployment, like a dish, taught me a lesson: precision, timing, and preparation make all the difference.


From Code to Cloud

My journey into DevOps began at OSCAFEST 2022, where I attended multiple sessions on DevOps and cloud-native technologies. Listening to the speakers talk about automation, scalability, and continuous delivery opened up a new world for me. I became fascinated by what happens beyond the code: how software is deployed, monitored, and kept running smoothly.

L-R: Michael Balli, Nader Dabit, Paul Ibeabuchi and I at OSCAFEST 2022

At the time, I was a frontend engineer. Inspired by what I learned at OSCAFEST, I decided to take the first step by deploying the frontend applications I built at work. That experience sparked a deeper interest in understanding the full deployment process, and soon I found myself exploring other aspects of DevOps.

Since then, I’ve worked on multiple DevOps projects that have shaped my perspective on software delivery and reliability. Each project presented unique challenges, ranging from automating complex AWS infrastructures to managing deployments on bare servers.

In this article, I’m shining the spotlight on some of the most interesting projects that tested my problem-solving skills, deepened my understanding of infrastructure, and helped me appreciate the craft of building systems that work.

Automating AWS with Terraform

At one point, our AWS infrastructure at my current company was managed entirely through the console: VPCs, EC2 instances, load balancers, security groups, and so on were all created and updated manually. It worked, but it was messy. There were inconsistencies across environments, occasional missing configurations, and the constant risk of someone making a change in production that wasn’t mirrored elsewhere.

That pain led us to adopt Terraform for Infrastructure as Code. The first step was to replicate our existing setup so we could version and reproduce it easily. Afterwards, I began organising it into modular components (networking, frontend services, and backend services), each reusable across environments.

This modular approach completely changed how we handled deployments. Instead of manually provisioning resources, we could spin up or tear down entire environments with a single command. It eliminated configuration drift and brought consistency across development, staging, and production environments.

To make testing faster and safer, I integrated LocalStack, a local AWS emulator. This allowed us to validate Terraform changes and experiment confidently before applying them to live resources.
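As a rough illustration of what that validation loop can look like (the bucket name and endpoint below are assumptions based on LocalStack defaults, not our actual setup), a boto3 client can be pointed at the local endpoint to smoke-test what Terraform just created:

```python
import boto3

# LocalStack exposes AWS-compatible APIs on localhost:4566 by default;
# dummy credentials are fine because nothing touches a real account.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# After running `terraform apply` against LocalStack, confirm the expected
# bucket (hypothetical name) actually exists before promoting the change.
buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]
assert "my-app-assets" in buckets, "Terraform change did not create the bucket"
print("LocalStack smoke test passed:", buckets)
```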

The result was a leaner, more predictable workflow that saved time, reduced human error, and gave us consistent, reproducible environments across the board.

Deploying on Bare Server

When I joined this particular team, the company was facing a tough challenge: skyrocketing cloud costs. The dollar-to-naira exchange rate had become a major burden, and even after applying several AWS cost-optimisation strategies, we still weren’t hitting the company’s cost targets. That reality pushed us to make a bold decision to move away from AWS and deploy on a local cloud provider that billed in naira. It meant giving up the scalability and managed services AWS offered, but because our business operated in a B2B model with predictable growth, the trade-off was viable.

With only raw servers available, I had to design a production-ready deployment that was secure, automated, and maintainable. We ran two main layers: the backend and the database, each on separate servers.

For the backend layer, I containerised all services using Docker to ensure consistency and easier updates. I configured Nginx as a reverse proxy to route traffic across the microservices and set up SSL using Let’s Encrypt, which provided free certificate issuance and automatic renewal (free is important, given our cost-saving goals). You can read the detailed SSL implementation in this article.

To automate deployments, I created a GitHub Actions pipeline that built Docker images, pushed them to a private Amazon ECR repository (which was practically free at our usage level), and redeployed them on the server whenever a new release was made. I documented the complete workflow in this article.
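The pipeline itself lives in GitHub Actions YAML, but the server-side redeploy step boils down to: authenticate to ECR, pull the new image, and swap the running container. A hedged Python sketch of that step, with placeholder region, image, and container names, assuming boto3 and the Docker CLI are available on the host:

```python
import base64
import subprocess

import boto3

# Placeholder image URI; in practice this comes from the release tag.
ECR_IMAGE = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/backend:latest"

# Exchange AWS credentials for a temporary Docker registry login.
ecr = boto3.client("ecr", region_name="eu-west-1")
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
registry = auth["proxyEndpoint"]

# Log in, pull the freshly pushed image, and replace the running container.
subprocess.run(["docker", "login", "-u", user, "-p", password, registry], check=True)
subprocess.run(["docker", "pull", ECR_IMAGE], check=True)
subprocess.run(["docker", "rm", "-f", "backend"], check=False)  # ignore if not running yet
subprocess.run(
    ["docker", "run", "-d", "--name", "backend", "--restart", "unless-stopped", ECR_IMAGE],
    check=True,
)
```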

Of course, deployment alone wasn’t enough. We needed monitoring, something AWS CloudWatch had previously handled for us. This time, I manually set up Prometheus to track database performance, server metrics, and resource utilisation. The metrics were visualised in Grafana dashboards, and I configured alerts to trigger Slack and email notifications when thresholds were breached.
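Most server metrics come from stock exporters, but for anything custom the prometheus_client library makes it easy to expose a scrape endpoint. A minimal sketch, with the metric name, port, and update interval as assumptions for illustration:

```python
import shutil
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical custom gauge; Prometheus scrapes it from http://<host>:9200/metrics.
disk_free_bytes = Gauge(
    "app_server_disk_free_bytes",
    "Free disk space on the app server root volume",
)

if __name__ == "__main__":
    start_http_server(9200)  # port is an assumption, not a standard exporter port
    while True:
        disk_free_bytes.set(shutil.disk_usage("/").free)
        time.sleep(15)
```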

For database reliability, we used Acronis for daily backups. This required installing backup agents on the database server and syncing data to the Acronis dashboard.

On the security side, I implemented least-privilege principles at both the security group and server firewall levels. This ensured that access to the app and database servers was tightly controlled and auditable.
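On the firewall side, the idea is simply deny by default with narrow allow rules. A hypothetical sketch of that policy applied with ufw through a small Python helper (the CIDR range, private IP, and database port are placeholders, not our actual topology):

```python
import subprocess

# Deny everything inbound, then allow only SSH from a trusted range and the
# database port from the app server's private IP. All values are placeholders.
RULES = [
    ["ufw", "default", "deny", "incoming"],
    ["ufw", "allow", "from", "203.0.113.0/24", "to", "any", "port", "22", "proto", "tcp"],
    ["ufw", "allow", "from", "10.0.1.10", "to", "any", "port", "5432", "proto", "tcp"],
]

for rule in RULES:
    subprocess.run(rule, check=True)

subprocess.run(["ufw", "--force", "enable"], check=True)
```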

In the end, we were able to cut infrastructure costs by more than half, with the added advantage of paying locally in naira, protecting the business from foreign exchange volatility.

More importantly, the experience reminded me that no solution fits all contexts. AWS is usually my go-to platform because of its maturity and range of services, but this project forced me to look in a different direction. It was like a cook realising that not every dish needs the same spice; sometimes you need to reach for something unexpected to get the right flavour 😅. This project was my ghetto DevOps moment: hands-on and challenging, but full of learning.

If you’re a Nigerian business looking for a cloud provider that bills in naira, I’d genuinely recommend Nobus; their support team is excellent.

Migrating Infrastructure Across AWS Accounts

While working at a consultancy firm, I was assigned to a project that required moving a client’s entire cloud deployment from our company’s AWS account to the client’s own AWS account, all with minimal downtime. The migration was part of a new contract. The main challenge was that much of the infrastructure hadn’t been fully codified with Infrastructure as Code (IaC), which meant every migration step had to be carefully planned and executed.

We began with the database layer. The client’s data was stored in DynamoDB, and we decided to use the S3 export-import method for the migration. This approach was cost-effective and efficient for the dataset size we were dealing with. To avoid disrupting the live environment, we scheduled the migration outside active business hours, and the entire process was completed smoothly.
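For reference, the export half of that approach can be driven from boto3 roughly as follows; the table ARN and bucket name are placeholders, and the source table needs point-in-time recovery enabled for exports to work:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

# Kick off a full export of the table to S3. ARN and bucket are placeholders.
export = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:eu-west-1:111111111111:table/orders",
    S3Bucket="orders-migration-export",
    ExportFormat="DYNAMODB_JSON",
)
print("Export started:", export["ExportDescription"]["ExportArn"])

# On the destination account, the exported data can later be pulled in with
# the import_table API, pointing at the same S3 bucket and prefix.
```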

Next was the backend layer, which ran on AWS Lambda. For this, we wrote a Python script using Boto3 to automate copying function configurations and code from the source account to the destination account.
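A simplified sketch of that kind of copy is shown below; the profile names, destination role ARN, and function name are placeholders, and it assumes zip-packaged functions rather than container images:

```python
import urllib.request

import boto3

# Separate sessions for the two accounts (profile names are placeholders).
source = boto3.Session(profile_name="consultancy").client("lambda")
destination = boto3.Session(profile_name="client").client("lambda")

DEST_ROLE_ARN = "arn:aws:iam::222222222222:role/lambda-exec"  # placeholder


def copy_function(name: str) -> None:
    # get_function returns the configuration plus a pre-signed URL to the code zip.
    fn = source.get_function(FunctionName=name)
    config = fn["Configuration"]
    code_zip = urllib.request.urlopen(fn["Code"]["Location"]).read()

    destination.create_function(
        FunctionName=config["FunctionName"],
        Runtime=config["Runtime"],
        Role=DEST_ROLE_ARN,
        Handler=config["Handler"],
        Code={"ZipFile": code_zip},
        Timeout=config["Timeout"],
        MemorySize=config["MemorySize"],
        Environment=config.get("Environment", {"Variables": {}}),
    )


copy_function("orders-api")  # hypothetical function name
```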

Then came the frontend migration, which turned out to be the most challenging part of the entire process. The frontend stack combined S3 (for hosting), CloudFront (for distribution), and Route 53 (for DNS management), but I couldn’t find a clear, end-to-end guide on migrating this exact stack. So, I had to piece together best practices from multiple AWS resources, carefully sequencing the migration of S3 buckets, CloudFront distributions, and DNS records to prevent service interruption. When the migration was finally complete, I documented the entire process in an article so the next person would find it easier. You can read that detailed walkthrough here.
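To give a concrete taste of one step in that sequence: the final DNS cutover repoints the Route 53 alias record at the new account's CloudFront distribution once it is serving the content. A hedged boto3 sketch, with the hosted zone ID, domain, and distribution domain as placeholders:

```python
import boto3

route53 = boto3.client("route53")

# Repoint the alias record at the new CloudFront distribution.
# Zone ID, record name, and distribution domain are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Cut over to the distribution in the new AWS account",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        # CloudFront aliases always use this fixed hosted zone ID.
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d1234abcd.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```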

Documenting the DevOps Process

Across every team I’ve worked with, one thing that has remained consistent is my commitment to documentation. While many engineers see documentation as an afterthought, I’ve always treated it as a core part of engineering. It is a way to make complex systems understandable and sustainable. Over time, I’ve become known as the person who ensures things are written down, organised, and easy to follow.

My motivation has always been easy onboarding, knowledge sharing, and reducing dependency on any single engineer. I’ve seen how teams can slow down or lose context when crucial setup steps or troubleshooting processes live only in someone’s head. Good documentation turns individual know-how into collective knowledge.

My approach varies based on the type of content. For technical references that evolve frequently, such as configuration steps, I prefer GitHub README files, where updates can easily follow version control. For broader, long-form guides like deployment workflows, architecture decisions, or troubleshooting procedures, I use Confluence, which provides better structure and discoverability for team-wide access.

Documentation is something I do for myself and for others. It helps me think clearly, ensures the next person can build faster, and makes sure that when systems scale, the knowledge behind them scales too.

Reflections: Growth Beyond the Pipeline

Looking back at these projects, I see more than just deployments, configurations, or scripts; I see growth. Each challenge pushed me to think beyond technical correctness and focus on building systems that serve real business needs: solutions that are resilient, cost-conscious, and adaptable to change.

If there’s one lesson I’ve learned and want to leave you with, it’s that DevOps isn’t about fancy tools; it’s about making sure the kitchen runs smoothly even when no one’s watching the stove.

Thank You for Reading

You can follow me on LinkedIn and subscribe to my YouTube Channel, where I share more valuable content.

What’s a project you’re most proud of or learned the most from? I’d love to hear from you in the comments.
