
Shahin Sheikh

How I deployed my first project for my DevOps portfolio: CI/CD during development vs CI/CD while live

CI/CD during development vs CI/CD now that the project is live

CI/CD setup during development

First, I set up Jenkins. I didn't want to start Jenkins manually every time I booted my Raspberry Pi, so I wrote a custom systemd daemon service for it.

[Unit]
Description=Jenkins
After=network.target

[Service]
Type=simple
User=<username>
Group=users
ExecStart=/home/<username>/jenkins/jdk-21.0.5/bin/java -Dhudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true -DJENKINS_HOME=/home/<username>/jenkins/.jenkins-config-new -jar /home/<username>/jenkins/jenkins.war
Restart=always

[Install]
WantedBy=multi-user.target

Then I enabled it with systemctl and it worked fine. journalctl -xeu jenkins.service helped me get the initial admin password when Jenkins started for the first time.

Second, I installed Docker and docker-compose on my Pi and used them to spin up the containers.

Third, at first I checked out from a local repo that I had created with git init --bare on my HDD, but I later shifted to GitHub. That explains the -Dhudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true flag in the daemon service.
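The local-remote setup is just a bare repository that anything, Jenkins included, can clone from and push to. A minimal sketch using throwaway temp directories (the paths here are placeholders, not my actual HDD layout):

```shell
#!/bin/bash
set -e

# Create a bare repo to act as the "remote" on disk. mktemp keeps this
# self-contained; substitute your HDD path in real use.
tmp=$(mktemp -d)
git init --bare "$tmp/project.git"

# Clone it the way a Jenkins job would check it out.
git clone "$tmp/project.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "you"

echo "hello" > README
git add README
git commit -q -m "first commit"
git push -q origin HEAD   # lands back in the bare repo on the HDD
```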

Fourth, I built a custom database image from the MySQL generic binaries instead of pulling the ready-made image from Docker Hub. The reason is that I won't be changing anything later, and at the time I didn't want to use the env lines in the compose file. All I wanted was for the DB container to spin up fully ready, which is why I customized it to my own liking.
Here is the entrypoint script for the custom DB image I made.

#!/bin/bash

bin/mysqld --initialize-insecure --user=mysql --bind-address=0.0.0.0
bin/mysqld_safe --user=mysql &

# wait for mysqld_safe to start
for((i=20; ; --i)); do
    if((i == 0)); then
        echo "Starting Database...";
        break;
    fi

    echo -ne "Starting Database in ...${i} sec [/] \r";
    sleep 0.2;
    echo -ne "Starting Database in ...${i} sec [-] \r";
    sleep 0.2;
    echo -ne "Starting Database in ...${i} sec [|] \r";
    sleep 0.2;
    echo -ne "Starting Database in ...${i} sec [-] \r";
    sleep 0.2;
    echo -ne "Starting Database in ...${i} sec [\\] \r";
    sleep 0.2;


done

bin/mysql -u root --skip-password  -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '<password>';"
bin/mysql -u root -p<inline_password> -e "UPDATE mysql.user SET Host = '%' WHERE User = 'root' AND Host = 'localhost';"
bin/mysql -u root -p<inline_password> -e "FLUSH PRIVILEGES;"
bin/mysql -u root -p<inline_password> -e "create database Users;"

echo "Creating importance filler table"
bin/mysql -u root -p<inline_password> -D Users -e "create table if not exists importance(class varchar(1) primary key not null, xp_gain double not null);"
echo "Filling importance table"
bin/mysql -u root -p<inline_password> -D Users -e "insert into importance values('S', 10);"
bin/mysql -u root -p<inline_password> -D Users -e "insert into importance values('A', 8);"
bin/mysql -u root -p<inline_password> -D Users -e "insert into importance values('B', 6);"
bin/mysql -u root -p<inline_password> -D Users -e "insert into importance values('C', 4);"
bin/mysql -u root -p<inline_password> -D Users -e "insert into importance values('D', 2);"
bin/mysql -u root -p<inline_password> -D Users -e "insert into importance values('E', 1);"


# start node exporter
#./node_exporter --web.listen-address=:5000

# to keep the container up and alive
echo "DB server online..." | tee stay.txt
tail -f stay.txt
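The fixed 20-tick countdown works, but it is a guess about how long mysqld_safe needs. A sketch of a retry helper that polls an actual readiness command instead — demonstrated here with `true` and `false`; in the entrypoint the probe would be something like `bin/mysqladmin ping`, which is an assumption about the layout, not something shown above:

```shell
#!/bin/bash

# Retry a probe command up to N times, 0.2s apart, instead of sleeping for a
# fixed countdown. Returns 0 as soon as the probe succeeds.
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    if "$@" >/dev/null 2>&1; then
      echo "ready after ${i} attempt(s)"
      return 0
    fi
    sleep 0.2
  done
  echo "gave up after ${tries} attempts" >&2
  return 1
}

# Demo probes: `true` succeeds immediately, `false` never does.
wait_for 5 true  && ok=yes  || ok=no
wait_for 2 false && bad=yes || bad=no
```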

Here I hit a snag: the DB wasn't accepting connections. After scraping the web I found out that the mysql.user table restricts the root account's Host to localhost, which denies connections coming from anywhere else, so I ran the queries below and it started working.

bin/mysql -u root -p<inline_password> -e "UPDATE mysql.user SET Host = '%' WHERE User = 'root' AND Host = 'localhost';"
bin/mysql -u root -p<inline_password> -e "FLUSH PRIVILEGES;"

CI/CD Workflow during development

I made or changed code, pushed it to the repo, and the Jenkins pipeline I created built the images and ran docker compose up within the pipeline itself. Then I checked the result and repeated the whole process.

CI/CD Setup and Workflow as it is live

I was planning to set up some kind of CI/CD for the live deployment too, since I want to push out updates there, but my original plan was tedious and time-consuming: set up Jenkins there, have every code push trigger a Jenkins build, and yada yada... I got introduced to GitHub Actions while searching for things like "How to trigger a Jenkins job when I push code to a GitHub repo". Man, I loved Actions and implemented it in my repo.

I currently have two branches: main, which holds my project, and website, which holds my personal website http://shahin.zita.click/Shahin. Each branch has its own workflow file under .github/workflows/. I used the condition if: github.ref == 'refs/heads/<branch_name>' so that only the workflow for the pushed branch runs.
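For illustration, a stripped-down sketch of what one of those branch-gated workflow files might look like — the file name, job layout, and step names here are placeholders, not my actual config:

```yaml
# .github/workflows/deploy-main.yaml  (hypothetical name)
name: build-main
on:
  push:
    branches: [main, website]

jobs:
  build:
    # Only run for pushes to main; the website branch has its own workflow file.
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: actions/checkout@v3
      - name: Compile in the utility container
        run: docker run --rm -v $PWD:/build-app:rw schd1337/portfolioapp:compiler
```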

Snag: Actions offers many VM images, so I chose ubuntu-24.04-arm since my project is built for ARM. But for some reason, when I compiled the code in Actions, built and pushed the image to my Docker Hub repo, and then tried to pull it back onto my EC2 instance or my Raspberry Pi, it was not working. After some searching I concluded it was nothing but environment inconsistency, so I came up with a utility-container idea: a container that solely compiles the code and outputs the generated binary for the next step, where a new image is built from those binaries and pushed to Docker Hub for the pull. After that it worked fine.

docker run --rm -v $PWD:/build-app:rw schd1337/portfolioapp:compiler

I used a volume mount with rw permissions along with uses: actions/checkout@v3, and it works like a charm.
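I haven't shown the compiler image itself; here is a sketch of how such a utility image could be assembled — the base image, toolchain, and build command below are assumptions for illustration, not the actual contents of schd1337/portfolioapp:compiler:

```dockerfile
# The toolchain lives in the image; the source is volume-mounted at /build-app
# and the produced binary lands back on the host through the same mount.
FROM debian:bookworm
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /build-app
# Replace with whatever actually builds the project (make, mvn package, go build, ...)
ENTRYPOINT ["make"]
```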

So now if I want a small change, I just make it locally and push to GitHub; Actions compiles the code and pushes the image, and then in the next job a simple SSH into the EC2 instance, using the secrets stored in Actions, applies the yaml files.

    - name: SSH into EC2 and deploy
      run: |
            echo "${{ secrets.EC2_KEY }}" | tee $PWD/ec2.pem
            chmod 600 $PWD/ec2.pem
            ssh -o "StrictHostKeyChecking=no" -i "ec2.pem" admin@ec2-13-235-247-44.ap-south-1.compute.amazonaws.com "git clone https://github.com/AstroKabutar/SoloLeveling.git"
            ssh -o "StrictHostKeyChecking=no" -i "ec2.pem" admin@ec2-13-235-247-44.ap-south-1.compute.amazonaws.com "kubectl apply -f SoloLeveling/kubernetes/dbdeploy.yaml"
            ssh -o "StrictHostKeyChecking=no" -i "ec2.pem" admin@ec2-13-235-247-44.ap-south-1.compute.amazonaws.com "kubectl apply -f SoloLeveling/kubernetes/project-upskill.yaml"
            ssh -o "StrictHostKeyChecking=no" -i "ec2.pem" admin@ec2-13-235-247-44.ap-south-1.compute.amazonaws.com "kubectl apply -f SoloLeveling/kubernetes/ingress.yaml"
            ssh -o "StrictHostKeyChecking=no" -i "ec2.pem" admin@ec2-13-235-247-44.ap-south-1.compute.amazonaws.com "rm -rf SoloLeveling"
      shell: bash    

Problem: Even though this works well, I still have to manually SSH into the EC2 instance to delete pods, so I am debating whether to go for a simple kubectl delete followed by apply, or, even better, a small bash script using awk to delete the pods from the Actions SSH step itself. Either way I will update the post once I reach a conclusion, and explain why.

EDIT: I updated my workflow to run an additional bash script that gets the pod names and deletes them, and all I have to do is wait until it's done.

kubectl get pods -n<namespace> | awk 'NR == 2 {print $1}' | xargs kubectl delete pods -n<namespace>
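Note that with `NR == 2` the one-liner only grabs the first pod; if a deployment ever runs more than one pod, `NR > 1` picks up every row after the header. A self-contained sketch of the parsing step, run here against canned `kubectl get pods` output rather than a live cluster:

```shell
#!/bin/bash

# Extract pod names from `kubectl get pods` output: skip the header row,
# keep the first column of every remaining row.
pod_names() {
  awk 'NR > 1 {print $1}'
}

# Canned sample standing in for real kubectl output.
sample='NAME                       READY   STATUS    RESTARTS   AGE
dbdeploy-5f6d8-abcde       1/1     Running   0          3h
project-upskill-7b9c-xyz   1/1     Running   0          3h'

names=$(printf '%s\n' "$sample" | pod_names)
echo "$names"
# Live usage (not run here):
#   kubectl get pods -n <namespace> | pod_names | xargs kubectl delete pods -n <namespace>
```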

Conclusion

From what I have seen in companies and in my own project, for offshore deployment scenarios it's best to have Jenkins do the work, and a separate repo or branch will suffice; when that code is approved for the next stage, like prod or pre-prod, it gets merged to main and Actions takes over for the EKS cluster, EC2, or whatever else. It makes the work easy, and using Terraform (IaC) is the best way to keep track of changes and to make them without manually clicking through the console, which is far too prone to human error.
