I found a critical CVE in a production container image last month. It had been there for five months. Here's the setup I built on OCI so that doesn't happen again.
How I Found Out the Hard Way
A few weeks ago I ran docker scout cves on a container image that had been running in production for months. I wasn't expecting anything — I'd scanned it when I first built it and it was clean. But the base image (ubuntu:22.04) had picked up a handful of CVEs since then, including one rated critical.
Nobody had checked. The image was built once, pushed once, and forgotten. The container was humming along fine, but it was running vulnerable libraries that we never would've shipped knowingly.
This is a workflow problem more than a tooling problem. The scanners exist. I just wasn't running them at the right points. So I spent a weekend wiring together a proper pipeline on OCI using what was already available — Docker Scout, OCIR's built-in scanning, and OCI Vault for secrets.
The Setup (Three Places to Catch Problems)
The idea is simple: scan before you push, scan after you push, and never put secrets in the image. Three checkpoints, each catching different things.
Docker Scout runs on my laptop and in CI — it catches CVEs before the image leaves my machine. OCIR scans again after the push, which catches anything Scout might miss and gives me a second opinion from Oracle's vulnerability database. OCI Vault handles secrets so I'm not baking API keys into environment variables like it's 2015.
Docker Scout — Catching CVEs Before They Leave My Machine
I'd been ignoring Docker Scout for a while, thinking it was just another scanning tool. It's actually pretty good. It comes built into Docker Desktop and the CLI, so there's no extra install.
# Scan a local image
docker scout cves my-api:latest
# Quick view — just critical and high severity
docker scout cves my-api:latest --only-severity critical,high
# Compare two image versions
docker scout compare my-api:v2 --to my-api:v1
# Get remediation recommendations
docker scout recommendations my-api:latest
The recommendations command is the one that made me a convert. Instead of just listing CVE IDs and making you figure out what to do, it tells you exactly which base image version fixes the problem:
Recommended fixes:

  Base image: golang:1.22-alpine → golang:1.22.4-alpine
    Fixes: CVE-2024-24790, CVE-2024-24789

  Base image: alpine:3.19 → alpine:3.20
    Fixes: 3 vulnerabilities
That saved me probably 30 minutes of Googling CVE IDs and cross-referencing which alpine version patched what. I just updated the FROM line and rebuilt.
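If you end up doing this regularly, the tag swap is scriptable. A minimal sketch — `bump_base` is a hypothetical helper of my own, not a Scout feature, and the image refs just mirror the Scout output above:

```shell
# Swap the base image ref Scout recommends into the Dockerfile, then rebuild
bump_base() {
  # usage: bump_base <Dockerfile> <old-ref> <new-ref>
  # only rewrites FROM lines, so the same string elsewhere is untouched
  sed -i.bak "s|^FROM $2|FROM $3|" "$1"
}

# bump_base Dockerfile golang:1.22-alpine golang:1.22.4-alpine
# docker build -t my-api:latest . && docker scout cves my-api:latest
```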
Putting Scout in CI
Running it locally is fine, but the real value is when it blocks bad images in CI automatically. Here's what I have in GitHub Actions:
# .github/workflows/security.yml
name: Container Security

on:
  push:
    branches: [main]
  pull_request:

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t my-api:${{ github.sha }} .

      - name: Docker Scout scan
        uses: docker/scout-action@v1
        with:
          command: cves
          image: my-api:${{ github.sha }}
          only-severities: critical,high
          exit-code: true  # Fail the build if critical/high CVEs found

      - name: Docker Scout recommendations
        if: always()
        uses: docker/scout-action@v1
        with:
          command: recommendations
          image: my-api:${{ github.sha }}
The important bit is exit-code: true. Without that flag, Scout just prints the results and the build happily continues. With it, any critical or high CVE fails the pipeline. I've had this block two PRs in the last month and both times it was a legitimate issue in the base image.
OCIR Scanning — The Second Pair of Eyes
OCIR has its own vulnerability scanner that runs against Oracle's database. It sometimes catches things Scout doesn't (different vulnerability feeds) and vice versa. I like having both.
Setting It Up
# Create a private repository with immutable tags
# (scanning itself is enabled via a scan target — see Scan Policies below)
oci artifacts container repository create \
  --compartment-id $COMPARTMENT_ID \
  --display-name "production-api" \
  --is-immutable true \
  --is-public false
The --is-immutable true flag is one I wish I'd known about earlier. It prevents anyone from overwriting a tag. So once v1.2.3 is pushed, that's it — nobody can push a different image with the same tag. Sounds obvious but I've been bitten by :latest being silently overwritten before.
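With immutable tags, every build needs a fresh, unique tag — re-pushing the same tag is exactly what's now forbidden. A hypothetical helper I could use to derive one from the release version plus the short commit hash:

```shell
# Derive a unique, immutable-friendly tag: <version>-<short-sha>
release_tag() {
  # usage: release_tag <version> <git-sha>
  printf '%s-%s\n' "$1" "$(printf '%s' "$2" | cut -c1-7)"
}

# docker tag my-api \
#   "iad.ocir.io/$TENANCY/production-api:$(release_tag v1.2.3 "$(git rev-parse HEAD)")"
```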
Push and Scan
# Tag and push
docker tag my-api:v1.2.3 iad.ocir.io/$TENANCY/production-api:v1.2.3
docker push iad.ocir.io/$TENANCY/production-api:v1.2.3

# Trigger a scan (or it runs automatically)
oci vulnerability-scanning container scan create \
  --compartment-id $COMPARTMENT_ID \
  --image-id <image-ocid>

# Check scan results
oci vulnerability-scanning container scan result list \
  --compartment-id $COMPARTMENT_ID \
  --image-id <image-ocid>
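The raw JSON from `result list` is verbose, so I skim it with jq. A sketch — the field names (`repository`, `highest-problem-severity`) are my recollection of the response shape, so check them against your own output:

```shell
# Reduce scan-result JSON to one "repo severity" line per entry
summarize_scans() {
  jq -r '.data[] | "\(.repository) \(.["highest-problem-severity"])"'
}

# oci vulnerability-scanning container scan result list \
#   --compartment-id $COMPARTMENT_ID | summarize_scans
```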
Scan Policies
You can also set up automated scan targets in Terraform so every push to specific repos gets scanned without anyone having to remember to do it:
# Terraform — scan policy
resource "oci_vulnerability_scanning_container_scan_recipe" "strict" {
  compartment_id = var.compartment_id
  display_name   = "strict-scan-recipe"

  scan_settings {
    scan_level = "STANDARD"
  }
}

resource "oci_vulnerability_scanning_container_scan_target" "production" {
  compartment_id           = var.compartment_id
  container_scan_recipe_id = oci_vulnerability_scanning_container_scan_recipe.strict.id
  display_name             = "production-registry-scan"

  target_registry {
    compartment_id = var.compartment_id
    type           = "OCIR"
    repositories   = ["production-api"]
  }
}
OCI Vault — Stop Putting Secrets in Environment Variables
This one is embarrassing to admit, but I've shipped containers with API keys in environment variables more times than I'd like. Not in the Dockerfile directly (I know better than that), but in docker-compose files that ended up in git, or in Kubernetes manifests that got copy-pasted around.
# WRONG — secrets baked into the image
ENV DATABASE_PASSWORD=hunter2
ENV API_KEY=sk-abc123
# WRONG — secrets in compose file committed to git
environment:
  - DATABASE_PASSWORD=hunter2
OCI Vault stores secrets encrypted under master keys held in an HSM. The container pulls them at startup. They never exist in the image, in your compose file, or in git.
Create Vault and Secrets
# Create a vault
oci kms management vault create \
  --compartment-id $COMPARTMENT_ID \
  --display-name "docker-secrets-vault" \
  --vault-type DEFAULT

# Create a master encryption key
oci kms management key create \
  --compartment-id $COMPARTMENT_ID \
  --display-name "docker-secrets-key" \
  --key-shape '{"algorithm": "AES", "length": 32}' \
  --endpoint $VAULT_MGMT_ENDPOINT

# Store a secret
echo -n "my-database-password" | base64 | \
  oci vault secret create-base64 \
    --compartment-id $COMPARTMENT_ID \
    --vault-id $VAULT_ID \
    --key-id $KEY_ID \
    --secret-name "prod-db-password" \
    --secret-content-content "$(cat -)"
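One easy mistake with `create-base64`: the payload has to be base64-encoded already (that's the `| base64 |` step above), and `echo` without `-n` would smuggle a trailing newline into the secret. A tiny hypothetical wrapper to keep that straight:

```shell
# Encode a secret value for --secret-content-content, with no trailing newline
encode_secret() {
  printf '%s' "$1" | base64
}

# oci vault secret create-base64 ... \
#   --secret-content-content "$(encode_secret 'my-database-password')"
```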
Pull Secrets at Deploy Time
For OCI Container Instances, I use an entrypoint script that fetches secrets from Vault before starting the app:
#!/bin/sh
# entrypoint.sh — fetch secrets from OCI Vault before starting the app
# Instance principal authentication (--auth instance_principal) means
# no API keys or CLI config files need to ship in the container

export DATABASE_PASSWORD=$(oci secrets secret-bundle get-secret-bundle-by-name \
  --auth instance_principal \
  --vault-id $VAULT_ID \
  --secret-name "prod-db-password" \
  --stage CURRENT \
  --query 'data."secret-bundle-content".content' \
  --raw-output | base64 -d)

export API_KEY=$(oci secrets secret-bundle get-secret-bundle-by-name \
  --auth instance_principal \
  --vault-id $VAULT_ID \
  --secret-name "prod-api-key" \
  --stage CURRENT \
  --query 'data."secret-bundle-content".content' \
  --raw-output | base64 -d)

# Start the application
exec /server
# The entrypoint needs a shell and the OCI CLI, so a distroless/static base
# won't work for this pattern — use a slim base that can carry both
FROM oraclelinux:9-slim
RUN microdnf install -y python3-pip && pip3 install oci-cli
COPY --from=builder /app/server /server
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
For OKE, I use the External Secrets Operator (ESO), which syncs secrets from OCI Vault into Kubernetes Secrets automatically:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: oci-vault
    kind: ClusterSecretStore
  target:
    name: app-secrets
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: "prod-db-password"
    - secretKey: API_KEY
      remoteRef:
        key: "prod-api-key"
The nice thing here is that when I rotate a secret in Vault, ESO picks it up within an hour and updates the Kubernetes secret. Pods get the new value on their next restart. No redeployment, no new image push, no PR to change a YAML file.
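To check that a rotation actually propagated without printing the secret to my terminal, I compare fingerprints of the Vault value and the synced Kubernetes value. `secret_fingerprint` is a hypothetical helper; kubectl returns secret data base64-encoded, hence the decode:

```shell
# Hash a base64-encoded secret read from stdin (e.g. kubectl jsonpath output)
secret_fingerprint() {
  base64 -d | sha256sum | cut -d' ' -f1
}

# kubectl -n production get secret app-secrets \
#   -o jsonpath='{.data.DATABASE_PASSWORD}' | secret_fingerprint
```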
The Full Flow
# 1. Build
docker build -t my-api:v1.2.3 .

# 2. Scan locally (fail fast)
docker scout cves my-api:v1.2.3 --only-severity critical,high --exit-code

# 3. Push to OCIR (triggers registry scan)
docker tag my-api:v1.2.3 iad.ocir.io/$TENANCY/production-api:v1.2.3
docker push iad.ocir.io/$TENANCY/production-api:v1.2.3

# 4. Deploy (secrets pulled from Vault at runtime)
oci container-instances container-instance create \
  --containers '[{
    "imageUrl": "iad.ocir.io/'$TENANCY'/production-api:v1.2.3",
    "environmentVariables": {
      "VAULT_ID": "'$VAULT_ID'"
    }
  }]' \
  ...
Two scan layers, immutable tags, no secrets in source control. None of this required buying a third-party security platform.
What I Learned
The tooling was never the problem. Docker Scout, OCIR scanning, OCI Vault — they were all available. I just wasn't using them consistently. The five-month-old CVE I found wasn't a failure of technology, it was a failure of workflow.
Now scanning happens automatically at two points (local/CI and registry), secrets are in Vault instead of YAML files, and image tags are immutable so nobody accidentally overwrites a production image. It took a weekend to set up. I should've done it a year ago.
Pavan Madduri — Oracle ACE Associate, CNCF Golden Kubestronaut. I write about containers, Kubernetes, and GPU infrastructure. GitHub | LinkedIn