If you’ve been working with containers on Google Cloud, you probably know about the Google Container Registry (GCR).
However, GCR is deprecated, and Google recommends Artifact Registry, a more modern, secure, and flexible replacement.
🧠 What is Google Artifact Registry?
Google Artifact Registry is a universal artifact management service that allows you to store and manage different types of build artifacts — including Docker images, npm packages, Maven artifacts, and more — all in a single, centralized location.
Think of it as your private, Google-managed Docker Hub plus package registry.
Key Benefits:
- Centralized storage for all your build artifacts
- Native integration with Google Cloud IAM for fine-grained access control
- Fully managed and scalable
- Supports regional and multi-regional repositories
- Recommended replacement for Google Container Registry (GCR)
🧩 What You’ll Learn
In this tutorial, we’ll cover everything from building an image locally to deploying it via GKE using Google Artifact Registry.
Here’s the plan:
- Build a Docker image on your local desktop or Google Cloud Shell
- Run and test the image locally
- Create an Artifact Registry repository to store your Docker image
- Configure authentication for pushing images to Artifact Registry
- Tag and push the image to your Artifact Registry repository
- Verify the uploaded image in Artifact Registry
- Update your Kubernetes manifests to pull images from the registry
- Deploy to GKE and confirm that the image is successfully pulled from Artifact Registry
🛠️ Step-by-Step Overview
Step 1: Build a Docker Image
We’ll start by creating a simple Docker image on your local machine or Google Cloud Shell using a basic app (for example, Nginx or Node.js).
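As a minimal sketch using Nginx (the `myapp` directory, file names, and image tag below are illustrative, not prescribed by this tutorial):

```shell
# Create a tiny static site and a Dockerfile for it.
mkdir -p myapp
cat > myapp/index.html <<'EOF'
<h1>Hello from the Artifact Registry demo!</h1>
EOF
cat > myapp/Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EOF
# Build the image (requires a local Docker daemon).
if command -v docker >/dev/null 2>&1; then
  docker build -t myapp:latest myapp
fi
```

Any image works here; an Nginx-based one is convenient because it is small and easy to test with a browser.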
Step 2: Run and Test It Locally
Before pushing, make sure it works! Run the image locally using Docker and test it through a browser or curl.
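A quick smoke test might look like this, assuming the `myapp:latest` image from Step 1 and a free local port 8080:

```shell
# Run the container, probe it, then clean up (requires Docker).
if command -v docker >/dev/null 2>&1; then
  docker run -d --name myapp-test -p 8080:80 myapp:latest
  sleep 2                        # give Nginx a moment to start
  curl -s http://localhost:8080  # should print the page HTML
  docker rm -f myapp-test        # remove the test container
  tested=yes
else
  echo "Docker not found; skipping local smoke test"
  tested=no
fi
```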
Step 3: Create an Artifact Registry Repository
Go to the Google Cloud Console:
- Navigate to Artifact Registry > Repositories
- Click Create Repository
- Choose Format: Docker
- Select a Region
- Give it a name (e.g., my-docker-repo)
- Click Create
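If you prefer the CLI, the same repository can be created with gcloud; the repository name, region, and description below are the assumptions used throughout this tutorial:

```shell
REGION="asia-south1"     # substitute your region
REPO="my-docker-repo"    # substitute your repository name
if command -v gcloud >/dev/null 2>&1; then
  gcloud artifacts repositories create "$REPO" \
    --repository-format=docker \
    --location="$REGION" \
    --description="Docker images for the GKE tutorial"
fi
```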
Step 4: Configure Authentication
To push to Artifact Registry, you’ll need to authenticate Docker with Google Cloud:
```shell
gcloud auth configure-docker REGION-docker.pkg.dev
```
Replace REGION with your repository’s region (for example: asia-south1).
Step 5: Tag and Push the Image
Now, tag your local image with the Artifact Registry path and push it:
```shell
docker tag myapp:latest asia-south1-docker.pkg.dev/PROJECT_ID/my-docker-repo/myapp:1.0
docker push asia-south1-docker.pkg.dev/PROJECT_ID/my-docker-repo/myapp:1.0
```
Step 6: Verify in the Console
Go to Artifact Registry → Your Repository → Packages
You should now see your Docker image listed.
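You can also verify from the CLI; `PROJECT_ID` below is the same placeholder used elsewhere in this tutorial, so substitute your own project:

```shell
# List the Docker images stored in the repository.
REPO_PATH="asia-south1-docker.pkg.dev/PROJECT_ID/my-docker-repo"
if command -v gcloud >/dev/null 2>&1; then
  gcloud artifacts docker images list "$REPO_PATH"
fi
```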
Step 7: Update Kubernetes Deployment
In your Kubernetes manifest (deployment.yaml), update the image path:
```yaml
containers:
  - name: myapp
    image: asia-south1-docker.pkg.dev/PROJECT_ID/my-docker-repo/myapp:1.0
```
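For context, a minimal but complete deployment.yaml could look like the sketch below, written out with a heredoc; the labels, replica count, and port are illustrative assumptions, and `PROJECT_ID` remains a placeholder:

```shell
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: asia-south1-docker.pkg.dev/PROJECT_ID/my-docker-repo/myapp:1.0
          ports:
            - containerPort: 80
EOF
```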
Step 8: Deploy and Verify
Apply the manifest:
```shell
kubectl apply -f deployment.yaml
kubectl get pods
```
Then confirm the image is being pulled from Artifact Registry.
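One way to confirm is to inspect the pod events, which record where the image was pulled from; this sketch assumes kubectl is pointed at your GKE cluster and that the pods carry an `app=myapp` label:

```shell
# Look for "Pulled" events referencing your *.pkg.dev image path.
if command -v kubectl >/dev/null 2>&1; then
  kubectl describe pods -l app=myapp | grep -iE 'image|pulled' || true
fi
checked=done
```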
🎯 Final Thoughts
By the end of this guide, you’ve:
- Built and pushed Docker images securely
- Set up a modern, scalable artifact storage system
- Integrated Artifact Registry seamlessly with your Kubernetes workloads
Artifact Registry isn’t just a replacement for GCR — it’s a complete, future-ready solution for managing all your build artifacts under one roof.
🌟 Thanks for reading! If this post added value, a like ❤️, follow, or share would encourage me to keep creating more content.
— Latchu | Senior DevOps & Cloud Engineer
☁️ AWS | GCP | ☸️ Kubernetes | 🔐 Security | ⚡ Automation
📌 Sharing hands-on guides, best practices & real-world cloud solutions