Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying code to remote servers, streamlining software delivery. Having built CI/CD pipelines for many applications in real business environments, I’ve made mistakes and seen colleagues make them too — each providing valuable lessons.
These experiences build up the expertise of any DevOps engineer, so I wanted to share what I’ve learned.
Making these mistakes can ruin your projects, disrupt your application environment, or cause your projects to drag on far longer than they should.
Based on my experience, here’s what you should do — or watch out for — when setting up your CI/CD pipeline. These tips are vendor-agnostic, meaning they apply whether you’re using GitHub Actions, Jenkins, TravisCI, AmplifyCI, CircleCI, or others.
Let’s explore these common mistakes and how to avoid them.
One thing I won’t dwell on, since it’s widely understood: use environment variables or secrets in your pipeline instead of hard-coding passwords, SSH keys, connection strings, and other sensitive details.
Take a Snapshot of Your Server Before You Begin
If you’re planning to make changes to your server environment — such as adding or deleting files — especially if you’re new to this, it’s crucial to take a snapshot of your server before setting up your CI/CD pipeline. Snapshots are quick and straightforward to create, and they provide an easy way to restore your server to its previous state if something goes wrong.
I once witnessed a colleague accidentally delete our server environment and critical system files while configuring a CI/CD pipeline to sync code changes. This could have been easily avoided with a simple snapshot.
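For instance, on AWS you could snapshot the instance’s EBS volume from the CLI before starting (the volume ID below is a placeholder; other cloud providers have equivalent commands):

```shell
# Placeholder volume ID -- look yours up with `aws ec2 describe-volumes`.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Before CI/CD pipeline setup"
```

If a deployment goes wrong, you can create a new volume from this snapshot and restore the server to its pre-pipeline state.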
Set Up Your SSH Key Properly
To connect to your remote server where you’ll deploy your application, you’ll need both a private key and a public key. You can either create a new key pair specifically for your pipeline or use the existing public key you already use to access your instance. Either option works, but you’ll need to copy the contents of your private key file and paste them into GitHub Secrets.
Here are two important things to keep in mind:
Avoid Opening Your Key File as a Text File
When copying your private key, display its contents in your terminal and copy it from there, especially if you’re using a Windows system. Opening the file directly in a text editor can introduce formatting issues. Use the following commands to display the private key file content:
cat key.pem
for Linux, or:
type key.pem
for Windows.
Copy the Entire Key Content, Including Headers
Ensure you copy the entire content of the key file, including the header and footer lines:
-----BEGIN RSA PRIVATE KEY-----
and
-----END RSA PRIVATE KEY-----
Paste the complete content into your GitHub Secrets box without any extra spaces.
Additionally, it’s better to append the key content to a file using commands like echo or cat rather than copying and pasting it manually into a text editor. For example, if you need to add a new SSH public key to your ~/.ssh/authorized_keys
file, using the command line approach is more reliable and less error-prone. Here’s how to do it:
echo "ssh-rsa ***your key content***" >> ~/.ssh/authorized_keys
or using cat:
cat ~/.ssh/rsa_key.pub >> ~/.ssh/authorized_keys
This method reduces the risk of errors compared to manual editing.
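As a sketch of how the secret is then used, a GitHub Actions step might write the key to a file at deploy time (the secret name SSH_PRIVATE_KEY is a placeholder; disabling strict host key checking is a convenience here, not a recommendation for production):

```yaml
- name: Deploy over SSH
  env:
    SSH_KEY: ${{ secrets.SSH_PRIVATE_KEY }}   # hypothetical secret name
  run: |
    mkdir -p ~/.ssh
    printf '%s\n' "$SSH_KEY" > ~/.ssh/deploy_key
    chmod 600 ~/.ssh/deploy_key
    ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=no \
      bitnami@${{ secrets.YOUR_SERVER_IP }} 'echo connected'
```

Writing the key with printf and chmod 600 avoids the permission and formatting problems that come from pasting it by hand.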
Carefully Review Your Deployment Path
It’s crucial to thoroughly review and understand the deployment path where your code or files will be affected, especially when deleting or syncing files on your server. If you’re resyncing files to your server, create a dedicated folder where your code files will be resynced, and always double-check the path before testing the pipeline.
I once worked on a project where a colleague mistakenly resynced files directly into the home directory (/home/bitnami/). The code landed at the top level of the user’s environment and inadvertently deleted other essential folders, including our ~/.ssh/ directory, environment paths, and other critical files.
This led to significant work to regain SSH access to the server and recreate the SSH public and private keys. Since we had done extensive configurations on the server, starting from scratch would have been far more stressful and time-consuming.
Here is the kind of pipeline step that caused it:
- name: Upload new files
  run: |
    rsync -avz --no-times --delete-after --exclude '.git' ./ bitnami@${{ secrets.YOUR_SERVER_IP }}:/home/bitnami
The step above syncs directly into the home directory (/home/bitnami), and --delete-after removes everything in the destination that isn’t in your repository, including the ~/.ssh directory that holds your authorized keys.
To avoid such scenarios, always keep a snapshot of your server as a backup.
Here are the high-level steps to recreate SSH keys for your server if you find yourself in a similar situation:
Prerequisite: Ensure you can connect to the server via SSH through an alternative method, such as browser-based sessions.
Generate an SSH RSA Key Pair:
Use your terminal to generate a new SSH RSA key pair.
Add the Public Key Content to Your Server:
Append the public key file content (.pub) to your ~/.ssh/authorized_keys
file. It’s recommended to use the echo or cat commands, as discussed earlier:
echo "ssh-rsa ***your key content***" >> ~/.ssh/authorized_keys
or
cat ~/.ssh/rsa_key.pub >> ~/.ssh/authorized_keys
Add the Private Key to Your Local Machine or Pipeline Secrets:
Store the private key content on your local machine or copy it to the secrets environment of your pipeline.
Test the New Key:
Attempt to connect to your server using the new SSH key. It should work now.
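The steps above can be sketched as follows (the key name deploy_key and the directory layout are illustrative; in practice step 2 runs on the server via your alternative session):

```shell
# Sketch of the key-recovery steps; names and paths are illustrative.
set -e
keydir=$(mktemp -d)

# Step 1: generate a new RSA key pair (empty passphrase for brevity).
ssh-keygen -t rsa -b 4096 -f "$keydir/deploy_key" -N "" -q

# Step 2: append the public key to authorized_keys on the server,
# then lock down permissions (sshd refuses overly open files).
mkdir -p "$keydir/.ssh"
cat "$keydir/deploy_key.pub" >> "$keydir/.ssh/authorized_keys"
chmod 700 "$keydir/.ssh"
chmod 600 "$keydir/.ssh/authorized_keys"

# Steps 3-4: store deploy_key in your pipeline secrets, then verify:
#   ssh -i deploy_key bitnami@<your-server-ip>
```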
By following these steps and being meticulous about your deployment paths, you can avoid costly mistakes and ensure smoother operations.
Use Server Configuration Management Tools for Your Environment
In one of the challenging experiences I mentioned earlier, we could have saved ourselves a lot of stress if we had set up our environment using configuration management tools like Ansible, Chef, or Puppet.
These tools would have allowed us to easily replicate the same configuration on another server when we lost SSH access to the previous one.
Instead of struggling to regain access, we could have simply spun up a new server and run the configuration playbook or cookbook to restore our setup.
Although DevOps engineers typically don’t create configuration scripts for a single server, it’s still a best practice to do so. It’s not only important but also incredibly useful for recreating your server configuration in various scenarios.
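As a minimal sketch of what this looks like in Ansible (the host group, package list, and file paths are assumptions for illustration), a playbook like this lets you rebuild a lost server with a single command:

```yaml
# server-setup.yml -- hypothetical playbook capturing the server's setup.
- hosts: webservers
  become: true
  tasks:
    - name: Install required packages
      apt:
        name: [nginx, git, rsync]
        state: present

    - name: Ensure the deployment directory exists
      file:
        path: /home/bitnami/app
        state: directory
        owner: bitnami

    - name: Authorize the deploy key
      authorized_key:
        user: bitnami
        key: "{{ lookup('file', 'deploy_key.pub') }}"
```

Running ansible-playbook -i inventory server-setup.yml against a fresh instance restores the same configuration, instead of you re-doing it by hand.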
Build Fast, Fail Fast, and Enable Detailed Monitoring
Building your code quickly, testing it promptly, and making necessary changes is essential. This approach enables you to deploy more often, reducing context switching, which is a best practice in DevOps. Regular deployments ensure that code is tested in staging and production as soon as possible.
Detailed monitoring of your builds and deployments allows you to quickly spot issues and address them directly, minimizing guesswork. Trust me, this will save you a significant amount of time.
Some Additional Useful Tips
Build Once: Build your code once, run your tests, and deploy that same artifact to staging and production if it passes. Avoid building the code separately for each stage, as separate builds can introduce inconsistencies. Store your artifacts in a registry or artifact store such as Docker Hub, Amazon ECR, or S3, and version them so that the code you deploy is exactly what you built and tested and will behave consistently.
Code, Build, and Deploy Frequently: Frequent coding, building, and deployment are at the heart of DevOps. This approach ensures that mistakes and errors are spotted and corrected quickly, and it provides immediate feedback from both testing teams and customers.
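As a rough sketch of the build-once idea (the registry URL, image name, and deploy.sh script are placeholders), a workflow can tag one image with the commit SHA and promote that exact tag through each stage:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push once, tagged by commit SHA
        run: |
          docker build -t registry.example.com/myapp:${{ github.sha }} .
          docker push registry.example.com/myapp:${{ github.sha }}

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy the exact artifact that was built and tested
        run: ./deploy.sh registry.example.com/myapp:${{ github.sha }} staging
```

Because every stage references the same immutable tag, what reaches production is byte-for-byte what passed your tests.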
These are the tips I have for you. I have personally experienced how these practices can save you time, improve your DevOps experience, prevent unnecessary mistakes, and help you quickly remediate any errors that do occur.
Conclusion
By implementing these best practices, you’ll streamline your DevOps workflow, minimize costly errors, and enhance your deployment efficiency. Embrace these tips to improve your DevOps experience, ensuring faster, more reliable, and consistent software delivery.
Please share any additional tips you might have, or let me know if I missed something important!
Comments
Thank you for writing this article—I really enjoyed reading it! I wanted to ask for your advice on a practice we're using at my workplace. Currently, we maintain a separate container registry for each environment to address security concerns. However, promoting images from one environment to the next requires copying the image from one registry to another, which feels inefficient. Do you have any recommendations for a better approach?
Inefficient in what ways? Please share. My understanding is that it’s good enough to build once, test at the various stages, then promote the image to production if it passes. I’d like more context on your concerns.
I was referring to the practice of copying the same container image across multiple registries. On one hand, it seems more efficient to maintain a single registry that houses all the images for every environment. However, I can see a potential security concern, where lower environments might gain access to production images, and vice versa. That said, if the same container image is being used across environments, does it really matter from a security perspective?
I acknowledge the efficiency of using a single registry, but it's important to consider the security concerns. Sharing the same container image across multiple environments (development, staging, and production) can compromise production integrity.
I recommend using separate registries where possible or, at minimum, ensuring that the production environment is highly secure and isolated. If using a single registry, apply strict access controls to prevent unauthorized access between environments and maintain security boundaries. Use two environments at minimum.
Yeah, I kind of talked myself out of considering a single repository while I was typing my response, LOL
This is really insightful! How would CI/CD configurations differ if you're handling a multi-tier application with several dependencies? Would love to see a follow-up post on that topic!
Definitely, I’ll share a post on that soon. Thank you.