DEV Community

Pavan Madduri

Running Docker Containers on OCI Without Kubernetes

I needed to run a container in the cloud. Not a microservices platform. Not a service mesh. Just one container, one port, accessible from the internet. Here's how OCI Container Instances turned out to be the right tool.


Why I Stopped Reaching for Kubernetes First

I'll be honest — I'm a Kubestronaut. I have all the CNCF Kubernetes certifications. My default muscle memory is kubectl apply -f for everything. But last month I needed to deploy a small Go API for a side project and I caught myself writing a Helm chart for a single container.

That felt ridiculous.

The API had two endpoints. It needed 256MB of RAM. There was no reason to stand up a control plane, configure node pools, set up an ingress controller, and maintain all of that just to serve JSON over HTTP.

I'd used OCI Container Instances before for a quick test and remembered it being dead simple. So I tried it for real this time.

What OCI Container Instances Actually Are

The closest analogy is docker run, but Oracle manages the host. You give it an image, tell it how much CPU and memory you want, point it at a subnet, and it runs. The container gets a real IP on your VCN. You can pull from OCIR, Docker Hub, or any registry that implements the standard Open Container Initiative image spec (an unfortunate acronym collision with Oracle Cloud Infrastructure).

I was surprised by the resource limits — you can go up to 64 OCPUs and 1TB of RAM on a single instance. Fargate caps out at 16 vCPUs. Cloud Run at 8. For most of my use cases that doesn't matter, but it's nice to know the ceiling is high if I need it later.

| Feature | OCI Container Instances | AWS Fargate | Cloud Run |
| --- | --- | --- | --- |
| Max vCPUs | 64 | 16 | 8 |
| Max memory | 1,024 GB | 120 GB | 32 GB |
| GPU support | Yes (A10, A100) | No | Yes (L4) |
| Cold start | ~2-3 s | 5-15 s | 2-8 s |
| Min billing | 1 second | 1 minute | 100 ms |

The GPU support is worth mentioning — you can run NVIDIA GPU containers without managing drivers or CUDA installs on the host. I haven't used this in production yet but I've tested it with a vLLM image and it worked without any changes to the Dockerfile.

The API I Deployed

Nothing fancy. A Go service with two endpoints — /health and /info. I chose Go because the final image is tiny (under 15MB with distroless) and it starts in milliseconds, which matters when you're paying per second.

// main.go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
    "time"
)

type HealthResponse struct {
    Status    string `json:"status"`
    Timestamp string `json:"timestamp"`
    Host      string `json:"host"`
    Region    string `json:"region"`
}

type InfoResponse struct {
    Service  string `json:"service"`
    Version  string `json:"version"`
    Runtime  string `json:"runtime"`
    Platform string `json:"platform"`
}

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        host, _ := os.Hostname()
        resp := HealthResponse{
            Status:    "healthy",
            Timestamp: time.Now().UTC().Format(time.RFC3339),
            Host:      host,
            Region:    os.Getenv("OCI_REGION"),
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(resp)
    })

    http.HandleFunc("/info", func(w http.ResponseWriter, r *http.Request) {
        resp := InfoResponse{
            Service:  "oci-docker-demo",
            Version:  os.Getenv("APP_VERSION"),
            Runtime:  "OCI Container Instances",
            Platform: "Oracle Cloud Infrastructure",
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(resp)
    })

    log.Printf("Starting server on :%s", port)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}

The Dockerfile is a straightforward multi-stage build. The builder compiles the binary, and the final image is distroless so there's almost nothing in it:

FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod main.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o server .

FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /app/server /server
EXPOSE 8080
USER nonroot:nonroot
ENTRYPOINT ["/server"]

Build and test locally:

docker build -t oci-docker-demo:v1 .
docker run -p 8080:8080 -e OCI_REGION=us-ashburn-1 oci-docker-demo:v1

# Test it
curl http://localhost:8080/health

Pushing to OCIR

OCIR is OCI's private container registry. Free for standard usage, which is nice. The login command is a bit verbose compared to Docker Hub — you need the tenancy namespace prefix — but it works the same way after that.

# Log in to OCIR
docker login iad.ocir.io -u '<tenancy-namespace>/<username>'

# Tag for OCIR
docker tag oci-docker-demo:v1 iad.ocir.io/<tenancy-namespace>/docker-demos/oci-demo:v1

# Push
docker push iad.ocir.io/<tenancy-namespace>/docker-demos/oci-demo:v1
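The image path convention — `<region-key>.ocir.io/<tenancy-namespace>/<repo>/<image>:<tag>` — is easy to mistype. A throwaway helper (entirely hypothetical, not part of any SDK) makes the shape explicit:

```go
package main

import "fmt"

// ocirImageURL assembles an OCIR image reference following the
// <region-key>.ocir.io/<tenancy-namespace>/<repo>/<image>:<tag>
// convention. Illustration only — there's no official helper for this.
func ocirImageURL(regionKey, namespace, repo, image, tag string) string {
	return fmt.Sprintf("%s.ocir.io/%s/%s/%s:%s",
		regionKey, namespace, repo, image, tag)
}

func main() {
	fmt.Println(ocirImageURL("iad", "mytenancy", "docker-demos", "oci-demo", "v1"))
	// iad.ocir.io/mytenancy/docker-demos/oci-demo:v1
}
```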

I ran Docker Scout on it before pushing, mostly out of habit:

docker scout cves oci-docker-demo:v1

Zero critical or high CVEs, which is expected with distroless. If you're using ubuntu or debian as your base, you'll probably see a few here.

Deploying — The Part That Surprised Me

This is where Container Instances won me over. One CLI command:

oci container-instances container-instance create \
  --compartment-id $COMPARTMENT_ID \
  --availability-domain "Uocm:US-ASHBURN-AD-1" \
  --display-name "docker-demo-api" \
  --shape "CI.Standard.A1.Flex" \
  --shape-config '{"ocpus": 1, "memoryInGBs": 2}' \
  --containers '[{
    "imageUrl": "iad.ocir.io/<tenancy>/docker-demos/oci-demo:v1",
    "displayName": "api",
    "environmentVariables": {
      "PORT": "8080",
      "OCI_REGION": "us-ashburn-1",
      "APP_VERSION": "1.0.0"
    },
    "resourceConfig": {
      "vcpusLimit": 1,
      "memoryLimitInGBs": 2
    }
  }]' \
  --vnics '[{
    "subnetId": "'$SUBNET_ID'",
    "isPublicIpAssigned": true
  }]'

I ran this and had a public IP with a working API in about 3 seconds. No joke. I spent more time writing the CLI command than waiting for it to deploy. Coming from Kubernetes where I'm used to waiting for nodes to scale up, load balancers to provision, and pods to pass readiness checks... this was refreshingly fast.

Terraform Version (for when this isn't just a side project)

I wouldn't run that CLI command manually every time in a real workflow. Here's the same thing in Terraform, which I'd use for anything that needs to be reproducible:

resource "oci_container_instances_container_instance" "demo" {
  compartment_id      = var.compartment_id
  availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
  display_name        = "docker-demo-api"

  shape = "CI.Standard.A1.Flex"
  shape_config {
    ocpus         = 1
    memory_in_gbs = 2
  }

  containers {
    image_url    = "iad.ocir.io/${var.tenancy_namespace}/docker-demos/oci-demo:v1"
    display_name = "api"

    environment_variables = {
      PORT        = "8080"
      OCI_REGION  = var.region
      APP_VERSION = "1.0.0"
    }

    resource_config {
      vcpus_limit         = 1
      memory_limit_in_gbs = 2
    }

    health_checks {
      health_check_type = "HTTP"
      port              = 8080
      path              = "/health"
      interval_in_seconds = 30
    }
  }

  vnics {
    subnet_id             = var.subnet_id
    is_public_ip_assigned = true
  }

  image_pull_secrets {
    registry_endpoint = "iad.ocir.io"
    secret_id         = oci_vault_secret.ocir_creds.id
    secret_type       = "VAULT"
  }
}

# The instance resource exposes the VNIC ID; look the VNIC up to get its
# public address (the original vnics[0].private_ip would return the
# private one despite the output's name).
data "oci_core_vnic" "demo_vnic" {
  vnic_id = oci_container_instances_container_instance.demo.vnics[0].vnic_id
}

output "container_public_ip" {
  value = data.oci_core_vnic.demo_vnic.public_ip_address
}

Adding a Load Balancer

Container Instances give you a public IP directly, but for anything with real traffic you probably want a load balancer in front. TLS termination, health checks, the usual. Here's the Terraform for that:

resource "oci_load_balancer_load_balancer" "api_lb" {
  compartment_id = var.compartment_id
  display_name   = "docker-demo-lb"
  shape          = "flexible"
  subnet_ids     = [var.public_subnet_id]

  shape_details {
    minimum_bandwidth_in_mbps = 10
    maximum_bandwidth_in_mbps = 100
  }
}

resource "oci_load_balancer_backend_set" "api_backend" {
  load_balancer_id = oci_load_balancer_load_balancer.api_lb.id
  name             = "api-backends"
  policy           = "ROUND_ROBIN"

  health_checker {
    protocol          = "HTTP"
    port              = 8080
    url_path          = "/health"
    interval_ms       = 10000
    return_code       = 200
  }
}

# The backend set does nothing until a backend is registered. Point it at
# the container instance's private VCN IP (supplied here as a variable).
resource "oci_load_balancer_backend" "api" {
  load_balancer_id = oci_load_balancer_load_balancer.api_lb.id
  backendset_name  = oci_load_balancer_backend_set.api_backend.name
  ip_address       = var.container_private_ip
  port             = 8080
}

resource "oci_load_balancer_listener" "https" {
  load_balancer_id         = oci_load_balancer_load_balancer.api_lb.id
  name                     = "https-listener"
  default_backend_set_name = oci_load_balancer_backend_set.api_backend.name
  port                     = 443
  protocol                 = "HTTP"

  ssl_configuration {
    certificate_ids          = [var.certificate_id]
    protocols                = ["TLSv1.2", "TLSv1.3"]
    server_order_preference  = "ENABLED"
  }
}

What It Actually Costs

I tracked the bill for a month. It was comically low:

| Setup | Monthly cost |
| --- | --- |
| OCI Container Instance (1 OCPU ARM, 2 GB) | ~$3.50 |
| OCI Load Balancer (flexible, 10 Mbps) | ~$12 |
| **Total** | **~$15.50/mo** |
| AWS Fargate equivalent | ~$35/mo |
| GCP Cloud Run equivalent | ~$25/mo (usage-based) |

ARM shapes on OCI are genuinely cheap. I was paying more for my morning coffee than for this API.

When I'd Still Use OKE

Container Instances aren't a Kubernetes replacement. They're for the cases where Kubernetes is more infrastructure than you need.

If I'm running 10+ services that talk to each other, need rolling deployments, RBAC, network policies, or auto-scaling — I'm using OKE. I work with Kubernetes daily and I'm not trying to avoid it.

But for a single API, a batch job, a webhook handler, or a quick prototype? Container Instances get me to production faster with less stuff to maintain. And the Docker workflow is the same — same Dockerfile, same docker build, same image. I just change where I deploy it.

Wrapping Up

I've been using Container Instances for about a month now for small services and side projects. The thing I keep coming back to is how little I think about infrastructure when using them. No node pools to right-size, no cluster upgrades to schedule, no ingress controllers to debug.

If you're on OCI and you haven't tried Container Instances yet, spend 10 minutes with it. You might realize, like I did, that half the containers you're running on Kubernetes don't actually need Kubernetes.


Pavan Madduri — Oracle ACE Associate, CNCF Golden Kubestronaut. I write about containers, Kubernetes, and GPU infrastructure. GitHub | LinkedIn
