Introduction
Deploying applications to Amazon EKS across multiple environments like Dev, Test, Pre-Prod, and Prod requires a robust CI/CD pipeline to ensure reliability, security, and scalability. This blog details how to implement a CI/CD pipeline using Jenkins and GitHub Actions with industry best practices. The pipeline will include scanning, testing, and approval gates for deploying to EKS clusters in a secure and efficient manner.
Overview of the Pipeline
The CI/CD pipeline consists of the following stages:
- Source Code Management: Code hosted on GitHub with a branching strategy.
- Build and Test: Application build, unit testing, and integration testing.
- Containerization: Build Docker images and push to Amazon Elastic Container Registry (ECR).
- Static Analysis and Security Scans: Perform vulnerability scanning on Docker images and code.
- Continuous Deployment to EKS: Deploy to respective environments with environment-specific configurations.
- Monitoring and Rollback: Implement monitoring and rollback strategies for production.
Tools Used
- Jenkins: For CI/CD orchestration.
- GitHub Actions: To handle Git-based CI/CD triggers and workflows.
- Amazon EKS: Kubernetes service for hosting the application.
- Amazon ECR: Docker image registry.
- kubectl and Helm: For Kubernetes deployments.
- Trivy: Container image security scanning.
- SonarQube: Static code analysis.
- Prometheus and Grafana: Application and infrastructure monitoring.
Best Practices for CI/CD
- Environment Segregation:
  - Maintain separate EKS clusters or namespaces for Dev, Test, Pre-Prod, and Prod.
  - Use ConfigMaps and Secrets for environment-specific configurations (see the ConfigMap/Secret sketch after this list).
- Branching Strategy:
  - Use feature, develop, release, and main branches to control code flow.
  - Automate deployments for develop (Dev), release (Test/Pre-Prod), and main (Prod).
- Security and Compliance:
  - Enable vulnerability scanning for code and container images.
  - Perform static analysis and use Infrastructure as Code (IaC) scanning tools like Checkov or Terrascan.
- Approval Gates:
  - Enforce manual approvals before deploying to Pre-Prod or Prod.
- Automated Testing:
  - Include unit, integration, and end-to-end tests in the pipeline.
- Observability:
  - Ensure proper monitoring and alerting for deployments using Prometheus and Grafana.
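To make the environment-segregation point concrete, here is a minimal sketch of a ConfigMap and Secret for the Dev namespace. The resource names, keys, and values (my-app-config, my-app-secrets, DB_HOST, and so on) are illustrative assumptions, not part of the pipeline described above.

# Minimal sketch: environment-specific configuration for the "dev" namespace.
# Resource names, keys, and values are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: dev
data:
  LOG_LEVEL: "debug"
  DB_HOST: "dev-db.internal"
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
  namespace: dev
type: Opaque
stringData:
  DB_PASSWORD: "change-me"   # supply from a secrets manager in a real pipeline

Each environment (test, pre-prod, prod) would carry its own copy of these objects, so the same container image can be promoted unchanged from one environment to the next.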
Detailed CI/CD Workflow
Step 1: GitHub Actions for Continuous Integration
Workflow File: .github/workflows/ci.yml
name: CI Pipeline
on:
  pull_request:
    branches:
      - develop
      - release/*
      - main
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'   # setup-java@v3 requires a distribution
          java-version: 11
      - name: Build Application
        run: |
          ./gradlew build
      - name: Run Unit Tests
        run: |
          ./gradlew test
      - name: Static Code Analysis
        uses: SonarSource/sonarcloud-github-action@v1.8
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
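The workflow above stops at build, test, and static analysis. As a hedged sketch of how the containerization and image-scanning stages from the overview could also run in GitHub Actions, the job below builds the image, scans it with Trivy, and pushes it to ECR. The job name, repository URI, AWS account ID, secret names, and the availability of Trivy on the runner are all assumptions you would adapt.

  # Sketch of an additional job under "jobs:"; names and credentials are assumptions.
  build-image:
    needs: build-test
    runs-on: ubuntu-latest
    env:
      ECR_REPO: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app   # assumed repository
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Log in to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build Image
        run: docker build -t $ECR_REPO:${{ github.sha }} .
      - name: Scan Image with Trivy
        run: |
          # Fail the job on HIGH/CRITICAL findings.
          # Assumes Trivy is installed on the runner (e.g. via its install script or the aquasecurity/trivy-action).
          trivy image --exit-code 1 --severity HIGH,CRITICAL $ECR_REPO:${{ github.sha }}
      - name: Push Image
        run: docker push $ECR_REPO:${{ github.sha }}

Tagging with ${{ github.sha }} also lines up with the image-tagging guidance in the implementation details below.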
Step 2: Jenkins for Continuous Deployment
Jenkinsfile for Deployment
pipeline {
    agent any
    environment {
        AWS_REGION = 'us-west-2'
        ECR_REPO = '123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app'
        EKS_CLUSTER = 'my-eks-cluster'
        NAMESPACE = 'dev' // Change namespace per environment
    }
    stages {
        stage('Checkout Code') {
            steps {
                git branch: 'develop', url: 'https://github.com/your-repo.git'
            }
        }
        stage('Build Docker Image') {
            steps {
                script {
                    docker.build("${ECR_REPO}:${BUILD_NUMBER}")
                }
            }
        }
        stage('Push to ECR') {
            steps {
                script {
                    sh """
                        aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ECR_REPO}
                        docker push ${ECR_REPO}:${BUILD_NUMBER}
                    """
                }
            }
        }
        stage('Security Scanning') {
            steps {
                sh """
                    trivy image --severity HIGH ${ECR_REPO}:${BUILD_NUMBER}
                """
            }
        }
        stage('Deploy to EKS') {
            steps {
                script {
                    sh """
                        aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER}
                        kubectl set image deployment/my-app my-app=${ECR_REPO}:${BUILD_NUMBER} -n ${NAMESPACE}
                    """
                }
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
Implementation Details
- Image Tagging: Use the Git SHA or build number as the Docker image tag so each build is uniquely identifiable.
- Manual Approvals:
  - In GitHub Actions, protect higher environments with environment protection rules (required reviewers) so a deployment job pauses until it is approved (see the workflow sketch after this list).
  - In Jenkins, use the input step for manual gating.
- Kubernetes Configurations:
  - Use ConfigMaps for environment-specific settings.
  - Use kubectl to set the image in deployments dynamically for the respective namespaces.
- Security Scanning:
  - Use SonarCloud for static code analysis in GitHub Actions.
  - Use Trivy for container image vulnerability scanning in both GitHub Actions and Jenkins.
- Monitoring:
  - Integrate Prometheus and Grafana to monitor deployed applications and provide visibility into the pipeline's health.
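As a hedged illustration of the manual-approval point, the job below targets a GitHub environment named pre-prod; the approval itself is configured on that environment (required reviewers in the repository settings), not in the workflow file. The environment name, upstream job name, secret names, and the cluster/region (reused from the Jenkinsfile above) are assumptions.

  # Sketch of a gated deployment job; approval comes from required reviewers on the "pre-prod" environment.
  deploy-pre-prod:
    needs: build-image   # assumed name of the image build job
    runs-on: ubuntu-latest
    environment: pre-prod
    env:
      ECR_REPO: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app   # assumed repository
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Deploy to EKS
        run: |
          aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
          kubectl set image deployment/my-app my-app=$ECR_REPO:${{ github.sha }} -n pre-prod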
Step 3: Environment Promotion
- For promoting from Dev → Test → Pre-Prod → Prod, use Jenkins pipelines with approval gates:
stage('Approval for Promotion') {
    steps {
        input "Approve deployment to Test environment?"
    }
}
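One common way to drive this promotion with the Helm chart under helm/charts/my-app (see the folder structure below) is a values file per environment. The sketch below is a hypothetical values-test.yaml; the file name, keys, and hostname are assumptions.

# helm/charts/my-app/values-test.yaml (assumed file; one per environment)
replicaCount: 2
image:
  repository: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app
  tag: ""        # injected by the pipeline, e.g. the Git SHA or build number
resources:
  requests:
    cpu: 250m
    memory: 256Mi
ingress:
  enabled: true
  host: my-app.test.example.com   # assumed hostname

After the approval gate, the pipeline would then run something like helm upgrade --install my-app helm/charts/my-app -f helm/charts/my-app/values-test.yaml --set image.tag=<tag> -n test.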
Monitoring and Rollback
- Prometheus Alerts: Configure Prometheus to send alerts for application failures or performance issues (a sample alerting rule is sketched below).
- Grafana Dashboards: Monitor CPU, memory, and error rates for your application.
- Rollback Strategy: Use Helm for versioned deployments:
  helm rollback <release-name> <revision-number>
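For instance, a minimal Prometheus alerting rule for a rising error rate might look like the sketch below. The metric name (http_requests_total), labels, and threshold are assumptions and depend on how the application exposes metrics.

# Minimal sketch of a Prometheus alerting rule; metric names and thresholds are assumptions.
groups:
  - name: my-app-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{app="my-app", status=~"5.."}[5m]))
            / sum(rate(http_requests_total{app="my-app"}[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "my-app error rate above 5% for 10 minutes"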
Folder Structure for Codebase
repo/
├── .github/
│ └── workflows/
│ └── ci.yml
├── Jenkinsfile
├── src/
│ └── main/
│ └── Application Code
├── helm/
│ └── charts/
│ └── my-app/
├── Dockerfile
└── README.md
Sample Application for Demo
For testing purposes, you can use a simple Python Flask application. Add the following app.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, EKS!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
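To connect this sample to the kubectl set image command used in the Jenkinsfile, a minimal Deployment for it could look like the sketch below. The namespace, labels, replica count, and initial image tag are assumptions.

# Minimal sketch of the Deployment targeted by "kubectl set image deployment/my-app my-app=...".
# Namespace, labels, replica count, and the initial tag are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
          ports:
            - containerPort: 5000   # the Flask app listens on port 5000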
Conclusion
This blog covered how to build a CI/CD pipeline to deploy applications to Amazon EKS using Jenkins and GitHub Actions. Following these best practices keeps your pipeline secure, efficient, and scalable, and the combination of scanning, automated tests, and monitoring makes deployments far more reliable.
Feel free to adapt the architecture and pipeline to your organization's requirements.