DEV Community

Sourav kumar
How Your Microservice Actually Gets Deployed on Kubernetes (Model 1 vs Model 2)

If you’re building or operating microservices on Kubernetes (especially EKS), one of the biggest architectural decisions is:

Where do application code, Helm charts, and deployment configurations live — and how do they flow to production?

In real production environments, two models dominate:

1. Model 1 — Same Repo (Service-Owned Deployments)
2. Model 2 — Hybrid GitOps (Platform-Driven)

This guide explains both end-to-end: repositories, ECR usage, CI/CD, versioning, folder structures, deployment flow, and when to use each.

Model 1 — App + Helm in the SAME Repository

Each microservice owns everything needed to run it.
One repo per service contains:

  • Application source code
  • Dockerfile
  • Helm chart
  • Environment values
  • CI/CD pipeline

Deployment is usually CI-driven (not pure GitOps).

Typical Repository Structure

payment-service/
├── src/                          # Application source code
│   ├── main.py / index.js / App.java
│   ├── config/
│   └── modules/
│
├── tests/                        # Unit & integration tests
│   ├── unit/
│   └── integration/
│
├── Dockerfile                    # Container build instructions
├── .dockerignore                 # Files excluded from image build
│
├── helm/                         # Helm chart for this service
│   ├── Chart.yaml                # Chart metadata
│   ├── values.yaml               # Default configuration
│   ├── values-dev.yaml           # Dev environment overrides
│   ├── values-staging.yaml       # Staging overrides
│   ├── values-prod.yaml          # Production overrides
│   ├── .helmignore               # Files Helm should ignore
│   │
│   ├── templates/                # Kubernetes manifest templates
│   │   ├── _helpers.tpl          # Template helper functions ⭐
│   │   ├── deployment.yaml       # Deployment resource
│   │   ├── service.yaml          # Service resource
│   │   ├── ingress.yaml          # Ingress resource
│   │   ├── serviceaccount.yaml   # ServiceAccount (if needed)
│   │   ├── hpa.yaml              # Horizontal Pod Autoscaler
│   │   ├── configmap.yaml        # ConfigMap
│   │   ├── secret.yaml           # Secret (optional)
│   │   ├── poddisruptionbudget.yaml
│   │   ├── networkpolicy.yaml    # Network security rules
│   │   ├── NOTES.txt             # Post-install info
│   │
│   └── charts/                   # Subcharts (dependencies)
│       └── (empty or dependencies)
│
├── scripts/                      # Optional helper scripts
│   └── deploy.sh
│
├── README.md                     # Service documentation
├── .gitignore
│
└── .github/
    └── workflows/
        └── ci-cd.yaml            # GitHub Actions pipeline

Deep Explanation of Important Helm Files

Chart.yaml — Chart Metadata: defines the Helm chart's identity.

apiVersion: v2
name: payment-service
description: Helm chart for Payment Service
type: application
version: 1.2.0          # Chart version
appVersion: "1.2.0"     # Application version

👉 version = chart version
👉 appVersion = container app version

values.yaml — Default Configuration: the base config used unless overridden.

replicaCount: 2

image:
  repository: <ecr-url>/payment-service
  tag: "latest"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

Environment-Specific Values Files: Override defaults for each environment.

values-dev.yaml

replicaCount: 1

image:
  tag: "dev"

resources:
  requests:
    cpu: 100m
    memory: 128Mi

values-prod.yaml

replicaCount: 4

image:
  tag: "1.2.0"

resources:
  requests:
    cpu: 500m
    memory: 512Mi
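Helm merges values files left to right, with `-f` files and `--set` flags overriding `values.yaml`. Combining the defaults above with `values-prod.yaml`, the effective production configuration would be roughly this sketch:

```yaml
# Effective values for prod after merging values.yaml + values-prod.yaml
replicaCount: 4                           # overridden by values-prod.yaml

image:
  repository: <ecr-url>/payment-service   # from values.yaml
  tag: "1.2.0"                            # overridden by values-prod.yaml
  pullPolicy: IfNotPresent                # from values.yaml

service:
  type: ClusterIP                         # from values.yaml
  port: 80

resources:
  requests:
    cpu: 500m                             # from values-prod.yaml
    memory: 512Mi
```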

templates/_helpers.tpl (VERY IMPORTANT)

Reusable template functions.

Used for:

  • Naming conventions
  • Labels
  • Chart metadata
  • DRY templates

Example:

{{- define "payment-service.fullname" -}}
{{ .Release.Name }}-payment-service
{{- end }}
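Templates consume the helper via `include`, so every resource is named consistently. A fragment of `templates/deployment.yaml` might look like this (a sketch, not the chart's actual template):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "payment-service.fullname" . }}
  labels:
    app: {{ include "payment-service.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
```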

Container Image Storage (ECR)

Each service has its own ECR repository:

123456789.dkr.ecr.ap-south-1.amazonaws.com/payment-service

ECR stores Docker images only — not Helm charts.

Complete Deployment Flow (Model-1, CI-Driven Helm on EKS)

1️⃣ Developer Pushes Code Change

A change is pushed to the service repository:
Examples:

  • Business logic update
  • New environment variable
  • Resource changes
  • Bug fix
  • Dependency update

Git event triggers CI.

Developer → Git push → main branch

2️⃣ CI Pipeline Starts

Triggered by the push event (GitHub Actions in this example).
The pipeline checks out the code and begins the build.
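A minimal GitHub Actions workflow for this model could look like the following sketch — the ECR URL, region, and service name are placeholders, and it assumes AWS credentials are already configured (e.g. via OIDC); a real pipeline would also run tests:

```yaml
name: ci-cd
on:
  push:
    branches: [main]

env:
  ECR_URL: 123456789.dkr.ecr.ap-south-1.amazonaws.com   # placeholder
  IMAGE_TAG: ${{ github.sha }}

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to ECR
        run: |
          aws ecr get-login-password --region ap-south-1 \
            | docker login --username AWS --password-stdin "$ECR_URL"

      - name: Build and push image
        run: |
          docker build -t "$ECR_URL/payment-service:$IMAGE_TAG" .
          docker push "$ECR_URL/payment-service:$IMAGE_TAG"

      - name: Deploy with Helm
        run: |
          helm upgrade --install payment-service ./helm \
            --set image.tag="$IMAGE_TAG" \
            -f helm/values-prod.yaml
```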

3️⃣ Build Container Image

Docker image is created from the Dockerfile.

docker build -t payment-service:1.5.0 .

Tagging usually includes:

  • Semantic version
  • Git SHA
  • Build number
  • Timestamp

Example:

payment-service:1.5.0
payment-service:sha-7f3a9c
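The tag derivation can be sketched in shell. `VERSION` and `GIT_SHA` are assumed inputs that a real pipeline would read from the release tag and `git rev-parse --short HEAD`:

```shell
# Hypothetical tag derivation for a CI job
VERSION="1.5.0"            # assumed: taken from the release tag
GIT_SHA="7f3a9c1d"         # assumed: normally $(git rev-parse --short HEAD)
IMAGE="payment-service"

SEMVER_TAG="${IMAGE}:${VERSION}"
SHA_TAG="${IMAGE}:sha-${GIT_SHA:0:6}"

echo "$SEMVER_TAG"   # payment-service:1.5.0
echo "$SHA_TAG"      # payment-service:sha-7f3a9c
```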

4️⃣ Authenticate to AWS ECR

Pipeline logs into ECR:

aws ecr get-login-password \
 | docker login --username AWS --password-stdin <ecr-url>

5️⃣ Push Image to ECR

docker push <ecr-url>/payment-service:1.5.0

Now ECR stores the new image.
👉 This is the artifact Kubernetes will run.

6️⃣ Pipeline Prepares Deployment

Helm chart references the new image tag.
This may happen in two ways:

🔹 Method A — Dynamic Override (most common)

helm upgrade payment-service ./helm \
  --set image.tag=1.5.0 \
  -f helm/values-prod.yaml

🔹 Method B — Update the Values File

The pipeline edits the values file with the new tag and commits the change.
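Method B can be sketched with sed (a real pipeline might prefer yq; the file path here is illustrative, not the repo's actual layout):

```shell
# Sketch: rewrite the image tag in a values file (Method B)
NEW_TAG="1.5.0"

# Illustrative values file standing in for helm/values-prod.yaml
cat > /tmp/values-prod.yaml <<'EOF'
image:
  repository: <ecr-url>/payment-service
  tag: "1.4.0"
EOF

# Replace the tag line in place
sed -i "s/^\([[:space:]]*tag:\).*/\1 \"${NEW_TAG}\"/" /tmp/values-prod.yaml

grep 'tag:' /tmp/values-prod.yaml
```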

7️⃣ Helm Upgrade / Install Executes

Pipeline runs Helm:

helm upgrade --install payment-service ./helm \
  -f helm/values-prod.yaml

Helm does NOT run containers.
👉 It generates Kubernetes manifests and sends them to the cluster.

8️⃣ Kubernetes API Server Receives New Spec

Deployment resource updated:

spec:
  template:
    spec:
      containers:
        - image: <ecr-url>/payment-service:1.5.0

Changing the image tag changes the Pod template, which gives it a new template hash.

9️⃣ Deployment Controller Triggers Rolling Update

Kubernetes detects a change in Pod template.

It creates a new ReplicaSet.

Old ReplicaSet → payment-service-abc123
New ReplicaSet → payment-service-def456

🔟 Scheduler Assigns Pods to Nodes

New Pods are created but start in the Pending state.

Scheduler selects nodes based on:

  • Available CPU/memory
  • Node selectors
  • Taints/tolerations
  • Affinity rules

1️⃣1️⃣ Kubelet Starts Pod on Selected Node

The kubelet (the agent running on each node) receives the instruction:

👉 “Run this container image”

⭐ THIS IS THE CRITICAL PART

🔥 How the Cluster Pulls the Image from ECR

1️⃣2️⃣ Node Needs Image Locally

Container runtime (containerd/Docker) checks:

👉 Is this image already cached on the node?

✔️ If YES → use the cached image
❌ If NO → pull from the registry

1️⃣3️⃣ Authentication to ECR

In EKS, nodes are allowed to pull from ECR via IAM.

How authentication happens:

👉 Node IAM Role (Instance Profile) has permission:

ecr:GetAuthorizationToken
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:BatchGetImage
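In IAM terms, these actions appear in a policy attached to the node role — roughly the following sketch, which matches what AWS's managed AmazonEC2ContainerRegistryReadOnly policy grants:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```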

1️⃣4️⃣ Kubelet Requests Auth Token

AWS returns temporary registry credentials (ECR authorization tokens are valid for 12 hours).

1️⃣5️⃣ Container Runtime Pulls Image Layers

Image downloaded layer by layer from ECR.

ECR → Internet/VPC Endpoint → Node

Large images take longer.

🔐 Important: Private Cluster Scenario

If nodes have no internet:

👉 Use VPC Interface Endpoint for ECR.

1️⃣6️⃣ Image Stored Locally on Node

After download:

/var/lib/containerd/

Future Pods may reuse it.

1️⃣7️⃣ Container Created & Started

Runtime launches container process.

Now Pod enters:

ContainerCreating → Running

1️⃣8️⃣ Health Checks Begin

Kubernetes executes:

  • Readiness probe
  • Liveness probe
  • Startup probe

If the liveness probe fails, the container is restarted; if the readiness probe fails, the Pod is removed from Service endpoints.
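In the chart's deployment.yaml, probes might be configured like this sketch (the endpoint path, port, and timings are illustrative, not values from this service):

```yaml
readinessProbe:
  httpGet:
    path: /healthz        # illustrative health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 15

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30    # allows up to 150s for slow startup
  periodSeconds: 5
```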

1️⃣9️⃣ Service Starts Routing Traffic

Once the Pod is Ready:

  • It is added to the Endpoints list
  • The load balancer can send it traffic

2️⃣0️⃣ Rolling Update Completes

Old Pods gradually terminated.

Zero-downtime deployment achieved.

🏆 Complete Visual Flow

Code change pushed
        ↓
CI pipeline triggered
        ↓
Build Docker image
        ↓
Push image to ECR
        ↓
Helm upgrade executed
        ↓
Kubernetes Deployment updated
        ↓
New ReplicaSet created
        ↓
Pods scheduled to nodes
        ↓
Node authenticates to ECR
        ↓
Image pulled to node
        ↓
Container started
        ↓
Health checks pass
        ↓
Service routes traffic

Team Collaboration

Developers can modify:

  • Environment variables
  • Resource limits
  • Feature configs

DevOps/Platform team enforces:

  • PR reviews
  • Branch protection
  • Deployment policies

Advantages of Model 1

✔ Simple to implement
✔ Fast delivery
✔ Single source of truth
✔ Easy debugging
✔ Ideal for small–mid teams

Limitations

⚠ Harder governance at scale
⚠ Risk of inconsistent configs
⚠ Limited multi-cluster control
⚠ Not ideal for hundreds of services

Model 2 — Hybrid GitOps (Enterprise Standard)

Separates application ownership from runtime control.

  • Developers own code
  • Platform team owns deployments

Deployments are GitOps-driven (Argo CD / Flux).

Architecture Components

Application Repository (Service Team)

Contains:

  • Source code
  • Dockerfile
  • Sometimes Helm chart templates

🏗️ Platform / GitOps Repository

Contains:

  • Environment configs
  • Helm values per environment
  • ArgoCD applications
  • Cluster definitions

App Repository Structure

payment-service/
├── src/
├── Dockerfile
├── tests/
└── helm/ (optional)

Deployment (GitOps) Repository Structure

k8s-platform-config/
├── dev/
│   └── payment-service/
│       └── values.yaml
├── staging/
│   └── payment-service/
│       └── values.yaml
├── prod/
│   └── payment-service/
│       └── values.yaml
└── argocd-apps/
    └── payment-service.yaml
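The file under argocd-apps/ is an Argo CD Application resource. With the chart in the app repo and the values in this platform repo, an Argo CD 2.6+ multi-source definition could look roughly like this (repo URLs, branches, and namespaces are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: default
  sources:
    # Assumed app repo containing the Helm chart
    - repoURL: https://github.com/example/payment-service.git
      targetRevision: main
      path: helm
      helm:
        valueFiles:
          - $values/prod/payment-service/values.yaml
    # Assumed platform repo, referenced as "$values" above
    - repoURL: https://github.com/example/k8s-platform-config.git
      targetRevision: main
      ref: values
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```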

ECR Usage

Same as Model 1:

👉 ECR stores container images only.

Some enterprises also store Helm charts in ECR as OCI artifacts, but this is optional.

Deployment Flow — End-to-End

1️⃣ Developer Pushes Code

To app repository.

2️⃣ CI Builds and Pushes Image to ECR

docker build -t payment-service:2.1.0 .
docker push <ecr-url>/payment-service:2.1.0

3️⃣ Deployment Repo Updated

Image tag updated in environment config:

image:
  repository: <ecr-url>/payment-service
  tag: 2.1.0

This update may be:

  • Manual PR
  • Automated bot
  • Promotion pipeline

4️⃣ GitOps Tool Detects Change

Argo CD / Flux watches the repo.

5️⃣ Automatic Deployment

GitOps controller syncs cluster state.

👉 No direct kubectl or helm from CI.

🔄 Versioning in Model 2

  1. Application Version: Container image tag.
  2. Deployment Version: Tracked by Git commits in deployment repo.

Environment Promotion

Promotion is controlled separately per environment:

Dev → Staging → Prod

Often gated by PR approvals.
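Promotion is just a Git change in the deployment repo — conceptually, copying the tag that passed staging into prod. The sketch below simulates that with local files (a real flow commits the change and opens a PR; paths are illustrative):

```shell
# Sketch: promote the staging image tag to prod in a GitOps repo checkout
mkdir -p /tmp/gitops/staging/payment-service /tmp/gitops/prod/payment-service
printf 'image:\n  tag: "2.1.0"\n' > /tmp/gitops/staging/payment-service/values.yaml
printf 'image:\n  tag: "2.0.0"\n' > /tmp/gitops/prod/payment-service/values.yaml

# Read the tag currently deployed in staging
STAGING_TAG=$(sed -n 's/^[[:space:]]*tag: "\(.*\)"/\1/p' \
  /tmp/gitops/staging/payment-service/values.yaml)

# Write it into the prod values file (a real flow commits this and opens a PR)
sed -i "s/^\([[:space:]]*tag:\).*/\1 \"${STAGING_TAG}\"/" \
  /tmp/gitops/prod/payment-service/values.yaml
```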

Governance Benefits

Platform team controls:

  • Resource policies
  • Security standards
  • Network rules
  • Secrets integration
  • Multi-cluster routing

Developers cannot accidentally break production infrastructure.

Advantages of Model 2

✔ Scales to hundreds of services
✔ Strong governance
✔ Full audit trail
✔ Safer production changes
✔ Multi-environment consistency
✔ True GitOps workflows

Trade-offs

⚠ More complex setup
⚠ Requires platform team
⚠ Slower initial development
⚠ Additional repositories to manage

Model 1 vs Model 2 — Side-by-Side

  • Deployment trigger: CI pipeline (Model 1) vs GitOps controller (Model 2)
  • Ownership: service team owns everything vs platform team owns deployments
  • Governance: harder at scale vs strong and centralized
  • Audit trail: CI logs vs Git history in the deployment repo
  • Best fit: small–mid teams vs large organizations

When to Use Each Model

Model 1 — Best For

  • Startups
  • Small–mid teams
  • Rapid development
  • Few services
  • CI-based deployments

Model 2 — Best For

  • Large organizations
  • Platform engineering culture
  • Many microservices
  • Multi-cluster setups
  • Compliance requirements
  • Production at scale

Final Takeaway

There is no universally “best” model.

👉 Organizations typically evolve:

Startup → Model 1
Growing company → Hybrid approaches
Enterprise → Model 2 (GitOps)

Understanding both gives you the ability to design systems for any scale.
