Container image promotion across environments - YAML

Davide 'CoderDave' Benvegnù

When you use containers for your application, one of the things you need to think about is how to move (aka promote) the container images you generate across different environments.

In this series, I will explore different ways to do so... with the help of Azure DevOps.

Intro

In the first article of the series we explored the "Base Registry" approach to promote a Container image across different environments.

In the second part we got rid of the additional registry and we used a Build Artifact.

In this third and final part we will use the same approach as the second one, but implemented 100% with YAML Pipelines (or, more appropriately, Multi-stage Pipelines).

Before starting

While the Multi-stage Pipeline editor should already be the default experience at the time of writing, if you can't see it, make sure it is enabled in the preview features.

In this article I will use some YAML code snippets to go through the examples.

You can download the complete YAML Pipeline definition file here.

If you want the FULL definition, which includes all the build steps for a .NET Core application, you can download it here.

Ok, let's start

Pipeline structure

As I have mentioned, we are going to use a Multi-stage Pipeline, the YAML-based version. As the name says, we can have multiple Stages inside a single pipeline definition, and each Stage is dedicated to a specific function.

In this example, building on the previous articles, I'm going to build the application and the image, and deploy to 3 environments (Dev, Uat, and Prod).
Therefore, we are going to use 4 stages.

stages:

- stage: CI
  displayName: 'CI stage'
  jobs:
  #JOBS HERE

- stage: CDDev
  displayName: 'CD stage for DEV'
  jobs:
  #JOBS HERE

- stage: CDUat
  displayName: 'CD stage for UAT'
  jobs:
  #JOBS HERE

- stage: CDProd
  displayName: 'CD stage for PROD'
  jobs:
  #JOBS HERE

I'm going to explain later why I decided to use different stages for different environments instead of a single "CD" stage with multiple jobs for the different environments.

The CI - aka Building the image

As in the last article, we will start by building the application and creating our container image from it, with everything our application needs to function properly.

We will again use the "Artifact way" to publish our Image.

To do so, we start by including in the CI Stage all the tasks that are responsible for building/compiling/publishing our application:

- stage: CI
  displayName: 'CI stage'
  jobs:
  - job: Build
    displayName: 'Build for MyProject'
    steps:
    # Add all the tasks you need to build your project
    - task: WhateverTask@1
      displayName: 'Build MyProject'
      # all the parameters you need

When we have that, we need to create the container image and, as we did last time, export it in TAR format:

- task: Docker@2
  displayName: 'Build MyProject image'
  inputs:
    repository: '$(ImageName)'
    command: build
    Dockerfile: MyProject/Dockerfile
    tags: $(Build.BuildId)

- task: Docker@2
  displayName: 'Save image to TAR'
  inputs:
    repository: '$(ImageName)'
    command: save
    arguments: '--output $(build.artifactstagingdirectory)/$(ImageName).image.tar $(ImageName):$(Build.BuildId)'
    addPipelineData: false

Those commands are exactly like the ones we used in the previous article, but in their YAML representation.

The "Save image to TAR" step uses the docker save command to export the container image we have just created into a .tar file.

I have defined variables for the names so I can reuse them across the different environments.

Variables can be specified inside the YAML or in the UI. In this case I decided to include them in the YAML, so I placed this before the stages definition (that way they can be accessed from within any of the stages and jobs):

variables:
  ImageName: 'myprojectreportexecutor'
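Note that later steps also reference per-environment registry-name variables such as $(ContainerRegistryNameDev); those would live in the same block. A sketch of the full block, with placeholder registry hostnames:

variables:
  ImageName: 'myprojectreportexecutor'
  # Placeholder FQDNs, one registry per environment; replace with your own
  ContainerRegistryNameDev: 'myprojectdev.azurecr.io'
  ContainerRegistryNameUat: 'myprojectuat.azurecr.io'
  ContainerRegistryNameProd: 'myprojectprod.azurecr.io'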

Also, note that I have to specify the output path for the command, and I use the $(build.artifactstagingdirectory) system variable to store the exported image in the folder the Azure DevOps service uses as its base path.

As soon as I have my image saved as a "normal" file, I can use it as an Artifact. In the previous article we used the Build Artifact object; in this example we will instead use the new Pipeline Artifact object (which is available only in YAML Pipelines, not in Classic ones). To do so, I have to add a Publish Pipeline Artifact step:

- task: PublishPipelineArtifact@1
  displayName: 'Publishing Image as Pipeline Artifact'
  inputs:
    path: $(build.artifactstagingdirectory)
    artifact: 'ContainerImage'

Pipeline Artifacts are stored in the Azure DevOps service directly. Unlike Build Artifacts, at the time of writing it is not possible to select your own fileshare for saving them.

More info about the difference between Pipeline and Build Artifacts here

Nothing too difficult here. This step just takes the content of the $(build.artifactstagingdirectory) folder, zips it, and makes it available to subsequent steps/jobs/stages.

The artifact parameter represents the name you want to give the artifact on the system. It is an optional parameter, but I'd advise always naming your artifacts so they are easier to reference later on.
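(As an aside: deployment jobs, which we will meet shortly, download Pipeline Artifacts automatically. If you ever need the artifact in a regular job, an explicit download step does it; a minimal sketch:)

- task: DownloadPipelineArtifact@2
  displayName: 'Download the image artifact (only needed in regular jobs)'
  inputs:
    artifact: 'ContainerImage'
    path: '$(Pipeline.Workspace)/ContainerImage'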

And then the CD - aka deploy time

Ok, now we have our container image, and we have created a Pipeline Artifact with it. It's time to fill in the other stages.

Before starting, it's important to understand a few things here.

Deployment jobs

In the CI part of our pipeline we used one (or more) jobs.

A job is a generic "collection of tasks", and it is usually associated with Continuous Integration. Why? Because every job automatically downloads the source code from the repo associated with the pipeline.

We don't want our code to be re-downloaded before each deployment, do we?
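(Strictly speaking, you can suppress that checkout in a regular job with an explicit step, as in the minimal sketch below, but deployment jobs give us more.)

steps:
- checkout: none  # suppresses the automatic source download in a regular job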

For this reason, Azure Pipelines makes available another type of job, specialized in deployment operations: the deployment job.

This particular job not only doesn't download the code from the repo, but it automatically downloads any Pipeline Artifact published in any previous stage!

Strategies

Another peculiarity of the deployment jobs is that they support different deployment strategies. At the time of writing they are runOnce, rolling and canary.

I will not cover them all in this post (more information here); we are going to use runOnce which, as the name says, executes every step only once.

Lifecycle hooks

Every deployment strategy has some "lifecycle hooks" which basically orchestrate the deployment operations: preDeploy, deploy, etc. (more info here).

Again, I'm not going to cover them in detail in this post. I will use the deploy hook because it is the only one which automatically downloads the artifacts.
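To give a sense of where the hooks sit, here is a minimal runOnce skeleton with echo placeholders (remember, only the deploy hook downloads artifacts automatically):

strategy:
  runOnce:
    preDeploy:
      steps:
      - script: echo "runs before the deployment; no automatic artifact download"
    deploy:
      steps:
      - script: echo "Pipeline Artifacts from previous stages are downloaded automatically here"
    routeTraffic:
      steps:
      - script: echo "shift traffic to the updated version"
    postRouteTraffic:
      steps:
      - script: echo "run health checks after traffic is routed"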

The deployment steps

Ok, now that we have a much clearer (hopefully) idea about those topics, let's take a look at the top of the deployment stage.

I will focus on the DEV environment, because the other ones will be pretty much the same.

- stage: CDDev
  displayName: 'CD stage for DEV'
  jobs:
  - deployment: Dev
    displayName: 'Deploy to DEV'
    environment: MyProject-DEV
    strategy:
      runOnce:
        deploy:
          steps:

As you can see, and as I mentioned before, here I'm using the runOnce strategy with the deploy hook.

By default, this stage will be executed only if the previous stage (the CI) has been successful.

Wait a minute. What is that "environment"?

Environments are another new concept available only in YAML pipelines. They allow you to define and map actual environments to use as your deployment targets.

At the time of writing, they support only Kubernetes and Virtual Machines as specific resource types, but you can also create one with no resources. The reason for doing so is that you still get a full history of every deployment to that specific environment.

Also, you can now require approvals for deploying to a specific environment, and those approvals are defined on the environment object.

Now, as in the previous example, we have three different environments: Dev, Uat, and Prod. The first thing to do is to create them in Azure DevOps.

If you don't manually create the environments in the Azure DevOps UI, they will be created automatically the first time the pipeline is executed.

Now that we have the environments ready, we can continue.

First step: we need to restore the image from the .tar file we created in the build.

- task: Docker@2
  displayName: 'Load Image from Tar'
  inputs:
    command: load
    arguments: '--input $(Pipeline.Workspace)/ContainerImage/$(ImageName).image.tar'

Note that the input path uses the $(Pipeline.Workspace) system variable: it represents the folder where the Pipeline Artifacts are downloaded. The full path is composed of the base directory, the name of the artifact we chose in the CI Publish Pipeline Artifact step, and finally the file name (so something like $(Pipeline.Workspace)/ContainerImage/myprojectreportexecutor.image.tar). Once again, I have defined variables for the names so I can reuse them across the different environments.

Next we need to tag the image differently, to add the name of the registry. This is because to push an image to a certain registry, you need the image's full name in the form "registryName/ImageName", where registryName is the fully qualified domain for anything other than Docker Hub (for Azure Container Registry, it would be something like myregistryname.azurecr.io).

- task: Docker@2
  displayName: 'ReTag Image with ACR Name - BuildId'
  inputs:
    containerRegistry: MyProjectACRdev # This comes from the Service Connections
    repository: '$(ImageName)'
    command: tag
    arguments: '$(ImageName):$(Build.BuildId) $(ContainerRegistryNameDev)/$(ImageName):$(Build.BuildId)'

The use of variables is optional, but once again I recommend it: it just makes everything easier to automate and templatize.

The containerRegistry parameter value needs to match the name of the Service Connection, defined in the project Settings, that maps to your Azure Container Registry (or any other Docker registry).

Last step: we need to push the image to the new registry.

- task: Docker@2
  displayName: 'Push Image to ACR'
  inputs:
    containerRegistry: MyProjectACRdev
    repository: '$(ImageName)'
    command: push
    tags: $(Build.BuildId)

As in the previous step, here too we can reference Build variables.

And we are done for Dev!

Now we can replicate the same process for the other environments, just changing the source and target registries (and of course the environment name in the Stage), as in the sketch below.
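For example, the UAT stage could look roughly like this (a sketch, assuming a Service Connection named MyProjectACRuat and an environment named MyProject-UAT; both names are placeholders):

- stage: CDUat
  displayName: 'CD stage for UAT'
  dependsOn: CDDev
  jobs:
  - deployment: Uat
    displayName: 'Deploy to UAT'
    environment: MyProject-UAT
    strategy:
      runOnce:
        deploy:
          steps:
          # same Load/ReTag/Push steps as DEV, swapping in the UAT
          # Service Connection and registry variable, e.g.:
          - task: Docker@2
            displayName: 'ReTag Image with ACR Name - BuildId'
            inputs:
              containerRegistry: MyProjectACRuat
              repository: '$(ImageName)'
              command: tag
              arguments: '$(ImageName):$(Build.BuildId) $(ContainerRegistryNameUat)/$(ImageName):$(Build.BuildId)'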

Ideally, when pushing to the environment-specific registry, you should have a mechanism to notify your target host service (App Service, AKS, Container Instances, etc.) of the new image so the deployment can be executed.

One last thing: as I mentioned before, you probably want to set some Approvals for deploying to Uat and Prod. This can be done in the Environments section.

Multiple Stages vs Single Stage for deployment

As I have mentioned before, I decided to create 4 stages (1 for CI and 3 for CD).

This is necessary because I want to use Approvals.

In fact, approvals are applied at the Environment level, and the approval engine of Azure Pipelines requires that "the execution of a run pauses before entering a stage that uses the environment. Users configured as approvers must review and approve or reject the deployment." (from the official documentation here)

This means that even though the Environment parameter belongs to the Deployment Job, it is not that job which waits for the approval. Instead, the Stage containing that Deployment Job is the one that waits.

This also means that if you have multiple Deployment Jobs in a single Stage, each associated with a different environment and each environment having its own approval, you would need to approve all the deployments just for the stage to run, which defeats the purpose.

If, instead, you have multiple Stages, one per environment, everything works great.

Let's run this

When you run the pipeline, the UI will be slightly different from what you are used to.

This is the main screen:

Pipeline execution

This instead is the execution log screen:

Pipeline execution in logs

I have added an Approval request for both Uat and Prod, so this is what happens when the deployment to Dev is successful:

Waiting for approval

And this is inside the logs:

Waiting for approval in logs

This is what you see when clicking on the "Review" button:

Approve or Reject

You obviously have the chance to Approve or Reject the deployment. If you reject, nothing will be deployed to that environment and the whole pipeline run will stop.

The approval is customizable; in my case I specified that at least one of the required approvers must approve in order to continue.

Last but not least, the Environments view:

Environment with deployments

It's pretty useful because you get an immediate picture of what is deployed in each environment at any given time.

Conclusion

This process is not widely used, mostly because people don't know about it, but it is definitely my favorite:

  • You have a 1:1 mapping between build and release
    • this means the image you build and the one you deploy are guaranteed to be the same.
  • You can directly reference the Build number, or any other parameter that comes from the Build, because your CI and CD pipelines are directly related.
  • You have full traceability.

In this article we achieved everything using the new Multi-stage Pipelines experience, which allows you to version and protect the YAML files (for example by setting up a branch policy).

Of course, if you take a look at the full YAML file, it's pretty long and not very readable. In fact, one of the things we usually recommend is to use Templates and link them in, instead of having everything in the same file.

I will cover this in a new post sometime soon, but if you want to take a look at it in the meantime, here is the documentation.
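As a teaser, a minimal sketch of what linking stage templates could look like (the file names and parameters here are placeholders, not from this article's repo):

# azure-pipelines.yml
stages:
- template: templates/ci-stage.yml

- template: templates/cd-stage.yml
  parameters:
    environmentName: 'MyProject-DEV'
    serviceConnection: 'MyProjectACRdev'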

Like, share and follow me 🚀 for more content:

📽 YouTube
Buy me a coffee
💖 Patreon
🌐 CoderDave.io Website
👕 Merch
👦🏻 Facebook page
🐱‍💻 GitHub
👲🏻 Twitter
👴🏻 LinkedIn
🔉 Podcast


Top comments (28)

Don Sartain

Thanks so much for writing this! I searched for a couple days trying to find a way to stash the docker image in the artifact so I could use it in the deploy phase after my ARM templates run.

I'm running into a problem though. The save command doesn't work because it says the reference doesn't exist. I know it exists because I can build and push to the Azure Container Registry just fine, but somehow the argument you use doesn't work for me.

Also, I added another Docker step to list the docker images, and that comes back empty.

Any suggestions?

Davide 'CoderDave' Benvegnù

That's interesting... Are you trying to save/list the image in a different stage? Or in another job or on another agent?

Don Sartain

Well, turns out I made a couple of mistakes. I had the command to list docker images as "Docker Images ls" instead of "Docker Images". I'm not sure why. Then because I set the container registry value on the build part (I looked at a couple of different examples sorting this out) it attached the ACR URL to the front of the repository name. I sorted it all out in the end. Usually it was just a matter of asking Google the right question, and in the right tone of voice, apparently. Thanks again for putting this together!

Davide 'CoderDave' Benvegnù

Glad you've been able to figure this out.

Always happy to help :)

BTW, not sure you've seen my YouTube channel, where I cover topics on DevOps, Azure DevOps and GitHub... Maybe you can take a look at it 😇 youtube.com/CoderDave

jobinjosem2020

Very nice explanation! BTW I have a question: would it be possible to control the CD part based on the branch used in the CI phase? I need the CI to build the image for all commits on all branches, but only the images from the master branch to be promoted to the UAT/Prod registry.

Davide 'CoderDave' Benvegnù

Thanks! Yes, what you ask is very common. In Classic Release Pipelines you can filter by Build branch in the trigger settings UI (see here docs.microsoft.com/en-us/azure/dev... ), while in YAML you can use a custom condition in the Stage which does the deployment (like this: docs.microsoft.com/en-us/azure/dev... )
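For example, a custom condition on the CD stage could look roughly like this (the branch name is just an example):

- stage: CDUat
  displayName: 'CD stage for UAT'
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))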

Fernando Auresco

First, thank you for the excellent article. I am trying to figure out how to migrate from classic Release to this new YAML, and I am mainly worried about whether this new model is "production/real life ready".

My major concern now is: how do you "redeploy" to the same environment? Sometimes this is necessary, and in classic Release there is a handy Redeploy button. What about rolling back to a previous version? In classic Release you just open the release and click redeploy for the environment you want and, as a plus, it warns you that you are rolling back because the current version in that environment is newer. Is that possible in a YAML multi-stage pipeline?

Another thing I like in classic Release is that we have a very clear indication of what version is currently deployed at each environment: looking at the Releases page you see in green which version is current for the specific environment. For YAML pipelines, is this easy to track using the new Environment feature?

Davide 'CoderDave' Benvegnù

First, thank you for the excellent article.

Thanks! Always Happy to help :)

worried if this new model is "production/real life ready"

It sure is, we have thousands of Enterprise clients using YAML rather than Classic. That said, you don't have to use the YAML pipelines if you don't want to... Classic ones will still be available :) But YAML ones have their own benefits.

in classic release there is a handy Redeploy button

You can do the same in YAML. Just expand the "Stage" and you can Re-run it (see below)

in classic release is that we have a very clear indication of what version is currently deployed at each environment by looking at the Releases page you see in green which version is the current for the specific environment. For YAML pipeline is this easy to track using the new Environment feature?

In YAML, as you've mentioned, you can do it in Environments. Take a look at my Environments Deep Dive video for more info on this.

Fernando Auresco

Awesome! This is what I was missing! I'll convert one of my deployments today. Thanks again!

Davide 'CoderDave' Benvegnù

No problem! Let me know how it goes 😀

Fernando Auresco

Sorry for taking so long to answer: it went very well. Actually, I now have several builds migrated to YAML and working just fine! :)
Once you have the YAML created it is very easy to replicate it to other repos or inherit from a common repo with predefined jobs and steps. Thanks a lot!

NagaShekar

Thank you so much for this article. It is very helpful and detailed. I have a quick question: why do we need to re-tag the image? Can you please explain that? Would you be able to cover how we deploy this image into a cluster?

Davide 'CoderDave' Benvegnù

We need to re-tag the images because when you push an image to a registry, the image full name format must be REGISTRYNAME/IMAGENAME:TAG.
We generate the image with no registry name, so it only has the IMAGENAME:TAG part. When we push to a registry, we need to re-tag it to add the REGISTRYNAME part.

To deploy the image, it would depend on what you are going to deploy to (i.e. Kubernetes, Docker Swarm, OpenShift, CloudFoundry, etc.) because each one has its own way, including different commands, toolsets, etc. And there may even be multiple ways to deploy to the same cluster (in K8s, for example, you can use the YAML definitions with the kubectl command, or something like Helm) :)
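For Kubernetes specifically, just as a hedged example, a deployment step with the KubernetesManifest task could look roughly like this (the service connection name and manifest path are placeholders):

- task: KubernetesManifest@0
  displayName: 'Deploy to Kubernetes'
  inputs:
    action: deploy
    kubernetesServiceConnection: MyProjectAKS   # placeholder Service Connection
    namespace: default
    manifests: 'manifests/deployment.yml'
    containers: '$(ContainerRegistryNameDev)/$(ImageName):$(Build.BuildId)'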

NagaShekar

Thank you Davide for the response. One thing I noticed when I initially built the image is that the image name gets prefixed with the container registry name. An example in my case would be quay.apps.ocp.com/cmdiscovery. Looks like this is happening when I log in to the container registry before building the image. Is this intended behavior or am I missing something?

Davide 'CoderDave' Benvegnù

Uhm, not sure... never seen that happening 😅 Perhaps they may have changed the way the task works... I will try to reproduce that behavior.

AYang

Thank you for your post. It works well, but I have a question about the last step, 'Push Image to ACR' (containerRegistry: MyProjectACRdev). Can I modify this step, using the same push task, to deploy the Docker image to an on-prem cluster? If not, what task can I use to achieve that?

Davide 'CoderDave' Benvegnù

When you work with container images, you still need to push them to a container registry before you can "deploy" them to a cluster. So you can change the registry in that command (for example, push to Docker Hub or another service rather than ACR), but you still have to do it before you can actually have the image on the cluster.

romina2001

Hi,

First of all, thank you for this great article.
I tried to see your entire YAML file ("You can download the complete YAML Pipeline definition file here."), but the link is broken. Can you please share the correct one?
Many thanks.

Davide 'CoderDave' Benvegnù

Sorry for the long wait. It is available now.

Gustav Gahm

Another great article! This describes more or less the pipeline I had already set up in my project. Awesome to get feedback that I was on the right track.

Davide 'CoderDave' Benvegnù

Thank you man, happy to know you've found it helpful.

Btw, I will soon have a video-explanation of it on my new YouTube channel, maybe you can take a look at it ;) youtube.com/channel/UCtiFg7r8WBBzC...

Davide 'CoderDave' Benvegnù

Not sure you've seen it already, but now the video with the explanation is out. Check it out here: youtube.com/watch?v=tG0O8vsO1LE

NagaShekar

Hi Davide. Can we templatize these steps so that they can be re-used across projects? I think that would be an awesome addition to what you have out here.

Davide 'CoderDave' Benvegnù

Definitely. I actually mention that at the end of the post, but I've forgotten to write the post about templates... Thanks for reminding me haha :D
