Originally published on devopsstart.com. When production breaks, every second counts—here is how to use kubectl set image for a precise and rapid rollback.
## Introduction
You've just deployed a new container image to production, and almost immediately, monitoring alerts start screaming. Latency is spiking, error rates are through the roof, and your customers are experiencing service degradation. In these high-pressure moments, a fast, reliable rollback mechanism is critical. While Kubernetes offers robust rollout and rollback capabilities via kubectl rollout undo, there are specific scenarios where kubectl set image can provide a quicker, more direct path to recovery, especially when you know exactly which image version you need to revert to.
This tip focuses on leveraging kubectl set image for urgent rollbacks. You'll learn when this command is most effective, how to accurately identify the correct previous image tag, and how to execute the command to quickly stabilize your application.
## Understanding `kubectl set image`
The kubectl set image command is primarily designed to atomically update the image of one or more specific containers within a Kubernetes resource. It typically targets Deployments, StatefulSets, DaemonSets, or ReplicationControllers. When executed, it modifies the resource's Pod template to point to the new image tag, which then triggers a new rolling update.
While kubectl set image is frequently used for forward deployments (e.g., updating v1.1.9 to v1.2.0), its direct nature makes it exceptionally well-suited for rapid rollbacks. When you specify a previous, stable image, Kubernetes initiates a new rollout toward that desired state. This behavior differentiates it from kubectl rollout undo, which inherently steps back through the deployment's recorded history, revision by revision.
Here’s a common example of how kubectl set image is used to update an image:
```bash
kubectl set image deployment/my-app my-container=my-registry/my-app:v1.2.0 -n production
```

Expected output:

```
deployment.apps/my-app image updated
```

This command updates `my-container` within the `my-app` Deployment in the `production` namespace to use the `v1.2.0` image from `my-registry`.
## The Urgent Rollback Scenario
Consider this scenario: your my-app:v1.2.0 release introduced a critical bug that bypassed your staging environment checks. You pushed it to production an hour ago, and now, critical alerts are firing, indicating significant application failures. You need to revert to the last known good image, let's say my-app:v1.1.9, immediately.
Why might `kubectl set image` be preferred over `kubectl rollout undo` in such a situation?

- **Directness and precision:** If you know the exact stable image tag you need to revert to, `kubectl set image` offers an explicit, unambiguous command that lands you directly on the intended stable state.
- **Bypassing unhealthy revisions:** If multiple faulty deployments occurred after your last stable one (e.g., you tried `v1.2.0`, then `v1.2.1-hotfix`, and both failed), a plain `kubectl rollout undo` steps back through these potentially problematic revisions one at a time. `kubectl set image` lets you jump directly to the known-good `v1.1.9` without traversing the unstable intermediate states.
- **Forced redeploy (edge cases):** In rare cases you may want fresh pods even though the image tag is unchanged, for example because of node-local image caching or other inconsistencies. Note that running `kubectl set image` with the *same* image is a no-op, since the Pod template does not change; for this scenario, `kubectl rollout restart deployment/my-app` (combined with an `imagePullPolicy` of `Always` if a genuine re-pull is needed) is the appropriate tool. For more on debugging common Kubernetes issues, refer to our article on Troubleshooting CrashLoopBackOff in Kubernetes.
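For comparison, `kubectl rollout undo` can also skip straight to a specific revision via `--to-revision`, but it requires knowing a revision number rather than an image tag. The sketch below only *prints* the two rollback commands side by side (it does not run them), using this article's example deployment, container, and image names:

```shell
#!/bin/sh
# Print (not execute) the two rollback approaches, using the example
# names from this article. Review the output before running anything.
DEPLOY="deployment/my-app"
NS="production"

# Direct: jump straight to a known-good image tag.
echo "kubectl set image $DEPLOY my-container=my-registry/my-app:v1.1.9 -n $NS"

# History-based: requires the revision number (see 'kubectl rollout history').
echo "kubectl rollout undo $DEPLOY --to-revision=3 -n $NS"
```

Since `--to-revision` also skips intermediate revisions, the practical difference often comes down to what you know under pressure: a trusted image tag favors `kubectl set image`, a trusted revision number favors `kubectl rollout undo`.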
## Identifying the Previous Image Tag
The critical first step for a kubectl set image rollback is accurately identifying the last known good image tag. You can achieve this by inspecting your deployment's revision history:
- **Check rollout history:**

  This command provides a concise summary of your deployment's revision history, showing the change recorded at each step:

  ```bash
  kubectl rollout history deployment/my-app -n production
  ```

  **Expected output:**

  ```
  deployment.apps/my-app
  REVISION  CHANGE-CAUSE
  1         <none>
  2         my-container: my-registry/my-app:v1.1.8
  3         my-container: my-registry/my-app:v1.1.9
  4         my-container: my-registry/my-app:v1.2.0
  ```

  From this output, if `v1.2.0` (revision 4) is currently causing issues, then `v1.1.9` (revision 3) is your immediate rollback target. Note that `CHANGE-CAUSE` is only populated when a change cause was recorded at deploy time (via the deprecated `--record` flag or the `kubernetes.io/change-cause` annotation); otherwise it shows `<none>`.
- **Describe a specific revision (optional verification):**

  To be absolutely certain about the container images used in a particular revision, you can describe it in detail. This is a good verification step before initiating a rollback:

  ```bash
  kubectl rollout history deployment/my-app --revision=3 -n production
  ```

  **Expected (truncated) output:**

  ```
  deployment.apps/my-app with revision #3
  Pod Template:
    Labels:  app=my-app
             pod-template-hash=54c9c76...
    Containers:
     my-container:
      Image:       my-registry/my-app:v1.1.9
      Port:        8080/TCP
      Environment: <none>
  ...
  ```

  This confirms that `my-registry/my-app:v1.1.9` was indeed the image used for revision 3, making it a reliable rollback target.
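Under pressure, it is easy to misread the history table. As a minimal sketch (assuming the `CHANGE-CAUSE` column contains `container: image` entries, as in the example above), the previous revision's image can be extracted mechanically. The history output is inlined here as a string so the parsing logic can be shown without a cluster:

```shell
#!/bin/sh
# Sample 'kubectl rollout history' output, inlined for illustration.
history_output='REVISION  CHANGE-CAUSE
1         <none>
2         my-container: my-registry/my-app:v1.1.8
3         my-container: my-registry/my-app:v1.1.9
4         my-container: my-registry/my-app:v1.2.0'

# Walk the rows; 'prev' tracks the latest image seen, 'img' the one before it.
# Rows without an image column (header, '<none>') are skipped via NF >= 3.
prev_image=$(printf '%s\n' "$history_output" |
  awk 'NF >= 3 { img = prev; prev = $3 } END { print img }')

echo "$prev_image"   # my-registry/my-app:v1.1.9
```

Against a live cluster you would pipe `kubectl rollout history deployment/my-app -n production` into the same `awk` filter, but treat the result as a suggestion to verify, not an answer to trust blindly.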
## Executing the `kubectl set image` Rollback
Once you have identified the precise desired image tag (e.g., my-registry/my-app:v1.1.9 in our example), executing the rollback is straightforward and immediate:
```bash
kubectl set image deployment/my-app my-container=my-registry/my-app:v1.1.9 -n production
```

Expected output:

```
deployment.apps/my-app image updated
```
Upon execution, Kubernetes will immediately initiate a new rolling update. It will begin replacing the currently failing v1.2.0 pods with new ones running the specified stable v1.1.9 image.
You can monitor the progress of this new rollout using the following command:
```bash
kubectl rollout status deployment/my-app -n production
```

Expected output during rollout:

```
Waiting for deployment "my-app" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "my-app" rollout to finish: 1 old replicas are pending termination...
deployment "my-app" successfully rolled out
```
Once the rollout is complete, your application should be consistently running the stable v1.1.9 image, and your monitoring alerts should ideally begin to subside as service is restored.
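After `rollout status` reports success, it is worth confirming that the Deployment's Pod template actually carries the intended image. Against a live cluster this is a one-line `kubectl get ... -o jsonpath`; the sketch below runs the same field extraction against sample manifest text so the logic can be shown without a cluster:

```shell
#!/bin/sh
# Live-cluster version of this check (shown as a comment only):
#   kubectl get deployment/my-app -n production \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
# Simulated below against a fragment of 'kubectl get -o yaml' output.
manifest='
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: my-registry/my-app:v1.1.9'

# Grab the value of the first 'image:' key in the template.
image=$(printf '%s\n' "$manifest" | awk '$1 == "image:" { print $2; exit }')

echo "$image"   # my-registry/my-app:v1.1.9
```

If the printed image does not match your intended rollback target, the `set image` command likely targeted the wrong container name or namespace.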
## Important Considerations

- **Rolling update behavior:** This `kubectl set image` method performs a rolling update, so your application must tolerate a brief period in which both the old (problematic) and new (stable) pod versions run concurrently. In practice this means ensuring backward and forward compatibility for APIs and data schemas.
- **Image immutability:** Always prefer immutable image tags (e.g., `v1.1.9`, `v1.2.0`, or a `sha256:abcdef...` digest) over mutable tags like `latest`. An immutable reference guarantees that it always resolves to the exact same image content, which is fundamental for reliable, reproducible rollbacks.
- **Auditing and history:** `kubectl set image` creates a new revision in the Deployment's rollout history, so the rollback action itself is automatically recorded, providing a clear audit trail of changes.
- **Stateful workloads:** For StatefulSets, exercise particular caution when changing image versions. If the faulty release ran database schema migrations or otherwise altered persisted state, rolling back the image alone may not restore a working system. Always understand the data implications before relying on an image rollback.
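Building on the immutability point, a rollback can be pinned to a digest rather than a tag; note that digest references use `@` instead of `:`. The digest below is a placeholder (not a real image), and the sketch prints the command for review rather than executing it:

```shell
#!/bin/sh
# Placeholder digest -- substitute the real digest of your known-good image
# (e.g., from your registry UI or 'docker inspect').
DIGEST="sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"

# Digest-pinned rollback command, printed for review rather than run.
echo "kubectl set image deployment/my-app my-container=my-registry/my-app@$DIGEST -n production"
```

Pinning by digest removes even the small risk that a tag was re-pushed between the original deploy and the rollback.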
## Conclusion

When a problematic image release throws production into disarray, reaction time is paramount. While `kubectl rollout undo` is a valuable tool, `kubectl set image` provides a direct, efficient, and precise alternative for reverting to a specific, known-good image. This capability can significantly reduce Mean Time To Recovery (MTTR) by allowing you to bypass multiple failing revisions and jump straight to stability. By understanding your deployment history and precisely targeting the last stable image, you can turn a production fire into a brief, well-controlled incident.