Alain Airom
The Case of the Invisible Container: Fixing the ErrImageNeverPull Loop

Podman vs. Minikube: Why ‘Image Found’ Doesn’t Mean ‘Image Found’

Introduction

For a recent testing phase, I needed a simple, static web application packaged in a container image to push to my local Kubernetes cluster. This should have been the easiest part of the project — a 15-minute task at most, covering the sample app and image build. Instead, I plunged into a two-hour spiral of troubleshooting. My development environment is simple and common: the Podman container engine, managed via Podman Desktop, running against a local Minikube cluster. The core frustration? Fighting the single most stubborn Kubernetes status message: ErrImageNeverPull.

The base: the application

Let’s start with a simple piece of code, shown below, which is essentially a hello world:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Hello Node.js Concept App</title>
    <!-- Load Tailwind CSS -->
    <script src="https://cdn.tailwindcss.com"></script>
    <style>
        body {
            font-family: 'Inter', sans-serif;
            background-color: #f7f7f7;
            display: flex;
            flex-direction: column;
            align-items: center;
            justify-content: center;
            min-height: 100vh;
            margin: 0;
        }
        .terminal-green {
            color: #00ff00;
        }
    </style>
</head>
<body>

    <div class="max-w-xl w-11/12 bg-white p-8 md:p-10 rounded-xl shadow-2xl transition duration-300 hover:shadow-3xl border-t-4 border-emerald-500">

        <header class="text-center mb-8">
            <svg class="mx-auto w-12 h-12 text-emerald-600" fill="none" stroke="currentColor" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
                <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9.75 17L9.25 10L10.75 10L10.25 17L9.75 17Z"></path>
                <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 21a9 9 0 100-18 9 9 0 000 18z"></path>
                <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15.75 17L15.25 10L16.75 10L16.25 17L15.75 17Z"></path>
                <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10a2 2 0 100-4 2 2 0 000 4z"></path>
            </svg>
            <h1 class="text-3xl font-extrabold text-gray-800 mt-2">
                Minimal Service App
            </h1>
            <p class="text-sm text-gray-500 mt-1">
                Client-side simulation of a server application response.
            </p>
        </header>

        <div class="bg-gray-800 text-gray-200 p-4 rounded-lg shadow-inner mb-6 min-h-[120px]">
            <p class="font-mono text-sm mb-2 terminal-green">> Running main.js...</p>
            <pre id="output-message" class="font-mono text-base whitespace-pre-wrap"></pre>
        </div>

        <div class="flex justify-center">
            <button id="action-button" class="w-full py-3 px-6 bg-emerald-600 text-white font-semibold rounded-lg shadow-md hover:bg-emerald-700 transition duration-150 transform hover:scale-[1.01] focus:outline-none focus:ring-2 focus:ring-emerald-500 focus:ring-opacity-50">
                Generate Server Time
            </button>
        </div>

        <footer class="mt-8 pt-4 border-t border-gray-100 text-center text-xs text-gray-400">
            <p>Executed in the browser environment (JavaScript/HTML).</p>
        </footer>

    </div>

    <script>
        const outputMessage = document.getElementById('output-message');
        const actionButton = document.getElementById('action-button');

        function generateGreeting() {
            const now = new Date();
            const timeString = now.toLocaleTimeString('en-US', { hour: '2-digit', minute: '2-digit', second: '2-digit', hour12: false });
            const dateString = now.toLocaleDateString('en-US', { year: 'numeric', month: 'long', day: 'numeric' });

            // The main "Hello" message, 
            const message = `
=================================
[INFO] Hello, Node.js Concept!
[TIME] ${timeString}
[DATE] ${dateString}
[CORE] Process exited with status 0.
=================================
`;

            outputMessage.innerHTML = `<span class="terminal-green">${message}</span>`;
        }

        window.onload = function() {
            generateGreeting();
        };

        actionButton.addEventListener('click', generateGreeting);

    </script>
</body>
</html>

Serve the sample app locally and we get the desired output:

npm install -g http-server
http-server
http://127.0.0.1:8080/hello-app.html

Containerization of the application

Building a container image for the sample code above is quite straightforward.

  • Write a Dockerfile 🚛
FROM nginx:stable-alpine

# Remove the default Nginx welcome page index.html
RUN rm -rf /usr/share/nginx/html/*

COPY hello-app.html /usr/share/nginx/html/index.html

EXPOSE 80

  • Build an image of it 🏗️
podman build -t hello-app . # works the same with 'docker'
podman run -d -p 8080:80 --name hello-container hello-app

Accessing the application at http://localhost:8080 shows the same result.
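Before involving Kubernetes at all, a quick curl against the published port is a cheap sanity check (this assumes the `podman run -d -p 8080:80 ...` command above succeeded):

```shell
# Fetch the page from the running container and confirm our title made it
# into the image (rather than the default Nginx welcome page).
curl -s http://localhost:8080/ | grep -i "<title>"
```

If the Nginx welcome page comes back instead, the COPY step in the Dockerfile did not do what you expected.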

Deploy your application to Minikube

To follow my deployment process, I wrote a YAML file to deploy and expose the image on my Minikube cluster 🍇

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app-deployment
  labels:
    app: hello-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-nginx
        # CRITICAL FIX: Use the specified local image.
        image: localhost/final-app:latest
        # Since this is a local image, set the policy to Never
        # to prevent Kubernetes from trying to pull it from a remote registry.
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  type: NodePort
  selector:
    app: hello-web
  ports:
  - protocol: TCP
    port: 80 
    targetPort: 80 

I hoped that running the following command would leave everything set!

kubectl apply -f hello-app.yaml

I was wrong 😤
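At this point the Pod status told the whole story. These are the read-only commands I kept re-running to inspect the failure (the label matches the manifest above):

```shell
# The STATUS column showed the dreaded ErrImageNeverPull
kubectl get pods -l app=hello-web

# The Events section at the end of the describe output explains
# why the kubelet refused to start the container
kubectl describe pod -l app=hello-web | tail -n 20
```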

My Discovery of What Went Wrong

What should have been a straightforward deployment quickly turned into a relentless, two-hour battle. I found myself trapped in a frustrating loop: reading Minikube’s documentation, repeatedly building and removing container images, deleting and recreating the entire Minikube cluster, all while trying to push a simple static web app built with the Podman engine. After several failures and deep dives into the core issues, I finally cracked the problem; the explanation of the fix follows.

The Invisible Image Paradox

The root of the problem wasn’t the YAML, the network, or the container registry. It was a vicious combination of cross-runtime caching and persistent references inside the Minikube virtual machine:

  • The Tagging Trap: Podman tags images with localhost/ by default (e.g., localhost/final-app:latest). If the Kubelet couldn't find the exact full tag, it simply failed, ignoring the image that was technically present.
  • Stale Caching: Every failed Pod attempt left behind a corrupted, low-level reference to the image ID inside Minikube’s internal Docker daemon. This meant even when I successfully loaded a fresh image, the system refused to delete the old, broken link.
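The localhost/ prefix is worth internalizing. As a rough sketch (this mimics the behavior; it is not Podman’s actual name-resolution code), an image name with no registry component gets qualified with localhost/, and a kubelet running with imagePullPolicy: Never matches on the full string only:

```shell
# Rough sketch of short-name qualification: 'final-app:latest' and
# 'localhost/final-app:latest' are different strings to the kubelet.
qualify() {
  case "$1" in
    */*) printf '%s\n' "$1" ;;            # already has a registry/path component
    *)   printf 'localhost/%s\n' "$1" ;;  # unqualified short name gets the prefix
  esac
}

qualify "final-app:latest"               # -> localhost/final-app:latest
qualify "docker.io/library/nginx:alpine" # -> docker.io/library/nginx:alpine (unchanged)
```

So if the Deployment says `localhost/final-app:latest` but the daemon inside the VM holds `final-app:latest` (or vice versa), the kubelet treats the image as simply absent.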

The Winning Sequence

I exhausted every normal option: resetting the cluster, retagging the image, and using minikube image load. The final solution required a multi-step cleanup that forcefully cleared the corrupted state:

  • Container Clean-Up: using minikube ssh to stop and remove all lingering container references, which was the only way to release the locked image ID.

minikube ssh 'docker ps -aq | xargs docker stop | xargs docker rm'
  • Forced Image Deletion: using the nuclear option (-f) to finally remove the corrupted image from the Minikube daemon cache.

> I changed the app’s tag several times, the last time it was “final”!

eval $(minikube docker-env)   # point the docker CLI at Minikube's internal daemon
docker rmi -f localhost/final-app:latest
  • Manual File Transfer: To completely bypass the broken Podman-to-Minikube pipe, I manually saved the container to a .tar file on the host, copied it into the Minikube VM, and loaded it directly with the internal Docker client. This guaranteed a clean image transfer.
podman save final-app:latest -o app.tar
minikube cp app.tar /tmp/app.tar
minikube ssh "docker load -i /tmp/app.tar"

  • Final Manifest Fix: The last hurdle was realizing the successfully loaded image had a slightly different tag (final-app:latest), which had to be precisely matched in the Deployment YAML.
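A quick way to avoid this last trap is to ask the Minikube daemon what it actually holds, and then make the Deployment match (resource and container names as in the manifest above):

```shell
# List the exact repository:tag strings the kubelet can match against
minikube ssh "docker images | grep final-app"

# Point the existing Deployment at the tag that is really loaded
kubectl set image deployment/hello-app-deployment hello-nginx=localhost/final-app:latest
```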
The core issues were **Image Naming and Cross-Runtime Caching**:

| Problem                         | Cause                                                        | Solution                                                     |
| ------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| **`ErrImageNeverPull`**         | Kubernetes, running inside the Minikube VM (using Docker), couldn't find the image. | Setting `imagePullPolicy: Never` forced Kubernetes to stop looking online. |
| **`localhost/` Prefix**         | Podman automatically prefixes locally built images with `localhost/`. The image was loaded as `localhost/hello-static-app:latest` or `localhost/final-app:latest`. | The YAML file and the load commands had to be changed multiple times to match the exact `localhost/` prefix. |
| **Stale Cache & Locked Image**  | Previous failed Pods left behind low-level container references inside the Minikube Docker daemon. | Had to use `minikube ssh` to stop/remove all lingering containers and then use `docker rmi -f` to break the persistent image lock. |
| **Podman/Minikube I/O Failure** | Direct piping (`podman save` into `docker load`) and the simplified `minikube image load` command failed due to internal host configuration and tag search conflicts. | Manual transfer: `podman save` to a `.tar` file, `minikube cp` into the VM, then `docker load` inside the VM. |
| **Final Tag Mismatch**          | The successful load created an image named `localhost/final-app:latest`, which differed from the tag (`localhost/hello-static-app:latest`) in the initial YAML. | The final fix required updating the YAML to use the exactly matching tag: `localhost/final-app:latest`. |
The full, repeatable sequence is captured in the script below:

#!/bin/bash
# --- Configuration Variables ---
IMAGE_NAME="final-app"
IMAGE_TAG="latest"
FULL_IMAGE="${IMAGE_NAME}:${IMAGE_TAG}"           # final-app:latest (Podman tag)
FULL_KUBE_IMAGE="localhost/${IMAGE_NAME}:${IMAGE_TAG}" # localhost/final-app:latest (Kubernetes tag)
YAML_FILE="hello-app.yaml"
TAR_FILE="app.tar"

echo "--- 1. Cleaning up previous Minikube resources and local images ---"

# Delete any running application resources (Deployment, Service, Pods)
kubectl delete deployment hello-app-deployment --ignore-not-found=true
kubectl delete service hello-app-service --ignore-not-found=true
kubectl delete pod --all --ignore-not-found=true

# Ensure no hanging processes are referencing the image inside Minikube VM
echo "Stopping and removing all existing containers inside Minikube VM..."
minikube ssh 'docker ps -aq | xargs docker stop | xargs docker rm' 2>/dev/null

# Remove the old image from the Minikube Docker cache
echo "Force-deleting old image from Minikube VM cache..."
eval $(minikube docker-env)
docker rmi -f ${FULL_IMAGE} ${FULL_KUBE_IMAGE} 2>/dev/null
eval $(minikube docker-env -u) # Reset environment variables

echo "--- 2. Building and Tagging Image with Podman ---"
# Build the image using the simple tag (e.g., final-app:latest)
podman build -t ${FULL_IMAGE} .

echo "--- 3. Guaranteed Image Load into Minikube VM (Manual Transfer) ---"
# Export the image to a tarball file from the host Podman store
echo "Saving image to local tar file: ${TAR_FILE}"
podman save ${FULL_IMAGE} -o ${TAR_FILE}

# Copy the file into the Minikube VM's /tmp directory
echo "Copying ${TAR_FILE} to Minikube VM..."
minikube cp ${TAR_FILE} /tmp/${TAR_FILE}

# Use Minikube SSH to run Docker load inside the VM (ensuring it's in the Minikube daemon)
echo "Loading image from file into Minikube's Docker daemon..."
minikube ssh "docker load -i /tmp/${TAR_FILE}"

echo "--- 4. Applying Deployment Manifest ---"
# The YAML file is configured to use the 'localhost/final-app:latest' tag
kubectl apply -f ${YAML_FILE}

echo "--- 5. Verification ---"
echo "Waiting for Pod to start (may take a few seconds)..."
kubectl wait --for=condition=ready pod -l app=hello-web --timeout=120s

echo ""
echo "Deployment Complete! Status:"
kubectl get pods

echo ""
echo "--- Access URL ---"
# The service URL will be displayed.
minikube service hello-app-service

The Conclusion and the Takeaway

Local Kubernetes troubleshooting often requires you to think beyond the YAML and dive into the underlying daemon. If you hit persistent ErrImageNeverPull issues with local images, remember the manual save/copy/load trick—it's the only way to guarantee a pristine image transfer across different container runtimes.

Thanks for reading 🔟
