
Deepak Poudel

How AWS saved me a lot of headaches in my job

I recently deployed my first solo project for my job at Digo and I’ve never been more grateful for AWS.

Digo is a business management platform where automation is one of the core features. Internally, we call these automations "workflows". As Digo has grown and more demanding customers have come onto the system, we started noticing huge CPU and memory spikes on our server caused by workflows. Our server was a monolith, and it was time to implement a new workflow engine and move it into a separate service.
Implementation
This was my first independent feature and I was very excited, and also a bit nervous! I made sure not to overlook anything. It took weeks of designing, discussing, and redesigning the new workflow engine before we finally arrived at a performant solution that met all our business needs. I learned a lot from my mentor during this process.

I had a solid view of the feature and how I was going to implement it. But one thought kept nagging at me: how was I going to deploy it?
Deployment Problems
I had very little experience with deploying software to production. All of our infrastructure is deployed on AWS. During my dev cycles, I spun up a free-tier EC2 instance for early testing and progress demos. I was comfortable with Linux, having done a ground-up installation of Arch Linux on my previous laptop, but deploying software on Linux and managing it as a server was an entirely new topic for me. With the help of several articles on the internet, I somehow got the workflow engine running on EC2. But I dreaded every new deployment. Looking back, my problems boiled down to this:
- I didn't have deployment automation. Every time I had to deploy, I'd SSH into the EC2 instance, pull from master, and run the server by hand.
- My server kept crashing. Initially I didn't use a process manager like pm2 or a container runtime like Docker, so every time the server crashed, I'd SSH in and restart it. Later I switched to pm2, which restarted the process for me automatically.
- I didn't have a proper logging library, so I had to dig through Linux log files whenever the process crashed.
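On the logging point, even a tiny structured logger that emits one JSON line per event makes log searching far easier than grepping raw text files. Here is a minimal sketch of the idea (the `createLogger` function and the `workflow-engine` name are illustrative, not from our codebase; in practice a library like pino or winston does this better):

```javascript
// Minimal structured-logging sketch: every log entry is a single JSON line,
// which log tooling (e.g. CloudWatch Logs) can filter field-by-field.
function createLogger(service) {
  function write(level, msg, meta) {
    const entry = JSON.stringify({
      time: new Date().toISOString(),
      level,
      service,
      msg,
      ...meta, // extra context, e.g. { workflowId: "wf-1" }
    });
    // Errors go to stderr, everything else to stdout.
    (level === "error" ? console.error : console.log)(entry);
    return entry; // returned so callers can inspect the emitted line
  }
  return {
    info: (msg, meta = {}) => write("info", msg, meta),
    error: (msg, meta = {}) => write("error", msg, meta),
  };
}
```

A nice side effect: pm2 captures stdout/stderr automatically, so these JSON lines land in pm2's log files instead of scattered Linux system logs.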

Deploying features for testing during development has many benefits. The feature itself benefits because we get continuous feedback early in the development cycle. QA, managers, and the rest of the team benefit because they can try out the workflow engine and get familiar with its features. But it was painful for me because of my limited deployment experience. I wish I had spent some time learning the basics of DevOps and AWS early on, but I had decided to focus my time on implementing the new workflow engine rather than learning DevOps.
Resolving Issues with AWS services
Development was done and the feature was approaching release, so it was finally time to learn DevOps. I had knowledge-transfer sessions with my mentor, read more articles, and explored AWS. Having already been burned once, I understood the importance of DevOps, and I took courses at AWS Academy as well.

A few weeks before the production release, I addressed the problems I had faced earlier. These are the AWS services I used:
- Docker and Amazon Elastic Container Service (Amazon ECS) to delegate all my server management responsibilities. I no longer had to worry about managing Linux servers, scaling the infrastructure, or server availability, because Amazon ECS handled it all for me.
- ECS task definition files to define the infrastructure blueprint for each environment (staging, production, etc.).
- Amazon CloudWatch Logs to stream my containers' logs.
- Amazon DynamoDB to delegate all my database management responsibilities.
- AWS Systems Manager Parameter Store to manage my environment secrets.
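To make this concrete, a trimmed-down ECS task definition might look something like the sketch below. Every name, ARN, and value here is a placeholder rather than our real configuration; the point is that the container image, CPU/memory sizing, CloudWatch log streaming, and a secret pulled from Parameter Store all live together in one versioned file:

```json
{
  "family": "workflow-engine-staging",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "workflow-engine",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/workflow-engine:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 3000 }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/workflow-engine",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "staging"
        }
      },
      "secrets": [
        {
          "name": "DB_CONNECTION_STRING",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/workflow-engine/staging/db-connection-string"
        }
      ]
    }
  ]
}
```

Keeping one such file per environment (staging, production, etc.) means the infrastructure blueprint lives in the repo next to the code it runs.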

With these in place, I no longer had to manage my servers and database by hand. As for deployment, I set up deployment pipelines using GitHub Actions and AWS CodeDeploy.
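As a rough illustration of what such a pipeline looks like, here is a hypothetical GitHub Actions workflow. Our actual pipeline used CodeDeploy; this sketch shows the simpler rolling-update pattern via the `amazon-ecs-deploy-task-definition` action instead, and all names (cluster, service, secrets, region) are placeholders:

```yaml
# Hypothetical deploy pipeline: build the image, push it to ECR,
# then register and roll out a new ECS task definition revision.
name: deploy-workflow-engine
on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr

      - name: Build and push image
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/workflow-engine:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/workflow-engine:${{ github.sha }}

      - name: Deploy new task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: task-def.json
          service: workflow-engine
          cluster: digo-cluster
          wait-for-service-stability: true
```

Compared with SSH-ing in and pulling from master, every deploy is now triggered by a push, reproducible, and rolled out by ECS with health checks.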

Learning all of this was easier than I imagined. Use cases differ and you have to choose the best option for your situation, but using these AWS services has been a good decision for the workflow engine at Digo: we have had no server downtime since deployment.

I made many mistakes before finally following DevOps best practices, but they gave me the kind of insights you only get after failing. There are tons of high-quality resources online, including AWS Academy, which I benefited from the most.

Thanks for reading! If you have any questions, you can reach out to me on LinkedIn or in the comments below.
