ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

Our Real-World Apollo GraphQL Federation Setup with Rover 0.26 and Supergraph on Kubernetes

Apollo GraphQL Federation has become the go-to solution for building distributed GraphQL APIs across large teams, letting us break monolithic graphs into manageable subgraphs while presenting a unified schema to clients. When we migrated our production API to this architecture, we leaned heavily on Rover 0.26 for supergraph management and deployed the resulting supergraph to Kubernetes for scalability and reliability. Here’s how we did it, and the lessons we learned along the way.

Why Apollo Federation, Rover, and Kubernetes?

Before diving into the setup, let’s recap why this stack works for real-world production use:

  • Apollo Federation lets independent teams own their subgraphs without stepping on each other’s toes, with native support for cross-cutting concerns like auth and pagination.
  • Rover 0.26 is the latest stable CLI for managing Apollo graphs, with improved supergraph composition speed, better error messaging for schema conflicts, and native support for CI/CD pipeline integration.
  • Kubernetes gives us automated scaling, self-healing deployments, and easy rollbacks for our supergraph router, which serves all client traffic.

Prerequisites

You’ll need these tools to follow along:

  • Rover CLI 0.26 installed locally (verify with rover --version)
  • A running Kubernetes cluster (we used EKS, but any K8s distro works)
  • Docker for containerizing the Apollo Router
  • An Apollo Studio account with a graph ID and API key
  • At least two subgraphs with valid federation-compliant schemas (we’ll use example Users and Products subgraphs)
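Before going further, it's worth confirming the CLIs are actually on your PATH. A minimal sanity-check loop (kubectl stands in here for "access to a running cluster"):

```shell
# Report whether each required CLI is installed; nothing here modifies state.
for tool in rover docker kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found"
  fi
done
```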

Step 1: Define and Validate Subgraphs

Each subgraph must use Apollo Federation directives like @key, @external, and @requires to support entity resolution across subgraphs. Here’s a simplified Users subgraph schema — note that it also contributes the sellerId field to the Product entity, which the Products subgraph owns (Federation 2 lets multiple subgraphs contribute fields to the same entity):

type Query {
  user(id: ID!): User
}

type User @key(fields: "id") {
  id: ID!
  name: String!
  email: String!
}

type Product @key(fields: "id") {
  id: ID!
  sellerId: ID!
}
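For symmetry, the products.graphql referenced in the next step might look like this (the name and price fields are illustrative stand-ins, not from our real schema; the Users subgraph above contributes sellerId to the same Product entity):

```graphql
type Query {
  product(id: ID!): Product
}

type Product @key(fields: "id") {
  id: ID!
  name: String!
  price: Float!
}
```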

Validate each subgraph schema with Rover before composition:

rover subgraph check my-graph@prod --schema ./users.graphql --name users

Step 2: Compose the Supergraph with Rover 0.26

Rover 0.26’s supergraph compose command is the core of our workflow. First, create a supergraph.yaml file listing all subgraph endpoints or local schema files:

federation_version: 2
subgraphs:
  users:
    routing_url: https://users-api.example.com
    schema:
      file: ./users.graphql
  products:
    routing_url: https://products-api.example.com
    schema:
      file: ./products.graphql

Run composition to generate the supergraph schema:

rover supergraph compose --config ./supergraph.yaml > supergraph.graphql
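The generated supergraph.graphql is machine-oriented and not meant to be edited by hand: composition annotates the schema with join directives that tell the router which subgraph can resolve which fields. An abridged excerpt looks roughly like this (exact directive shapes vary with the federation version):

```graphql
enum join__Graph {
  PRODUCTS @join__graph(name: "products", url: "https://products-api.example.com")
  USERS @join__graph(name: "users", url: "https://users-api.example.com")
}

type User
  @join__type(graph: USERS, key: "id") {
  id: ID!
  name: String!
  email: String!
}
```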

Rover 0.26 adds faster conflict detection here: if two subgraphs define the same field with incompatible types, it will throw a clear error instead of silently generating a broken schema.
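As a hypothetical illustration of such a conflict: if the two subgraphs disagreed on a shared field's type, rover supergraph compose exits with a composition error instead of emitting a schema:

```graphql
# users.graphql
type Product @key(fields: "id") {
  id: ID!
  sellerId: ID!
}

# products.graphql — same field, incompatible type: composition errors out
type Product @key(fields: "id") {
  id: ID!
  sellerId: String!
}
```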

Step 3: Containerize the Apollo Router

The Apollo Router is the lightweight, high-performance runtime that serves your supergraph. We built a Docker image to package the router with our supergraph schema:

FROM ghcr.io/apollographql/router:v1.28.0

COPY ./supergraph.graphql /etc/router/supergraph.graphql
COPY ./router.yaml /etc/router/router.yaml

# Don't bake the API key into the image; inject APOLLO_KEY at runtime
# (our Kubernetes Deployment supplies it from a Secret).
ENV APOLLO_GRAPH_REF=my-graph@prod

EXPOSE 4000
# Point the router at the local supergraph file we copied in above;
# without --supergraph it would try to fetch the schema from Apollo Uplink.
CMD ["--config", "/etc/router/router.yaml", "--supergraph", "/etc/router/supergraph.graphql"]

Our router.yaml includes production settings like request timeout, CORS config, and auth header forwarding. Build and push the image to your container registry:

docker build -t my-org/supergraph-router:v1 .
docker push my-org/supergraph-router:v1
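Our exact router.yaml is environment-specific, but a minimal sketch covering the settings mentioned above looks like this (keys follow the Router 1.x configuration format; the origin and timeout values are placeholder assumptions):

```yaml
supergraph:
  listen: 0.0.0.0:4000

# Forward the client's Authorization header to every subgraph request
headers:
  all:
    request:
      - propagate:
          named: authorization

cors:
  origins:
    - https://app.example.com

traffic_shaping:
  router:
    timeout: 30s
```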

Step 4: Deploy to Kubernetes

We created three core K8s manifest files (the Service and Ingress share one):

1. ConfigMap for Supergraph Schema

apiVersion: v1
kind: ConfigMap
metadata:
  name: supergraph-schema
data:
  supergraph.graphql: |
    # paste your supergraph.graphql content here
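Rather than pasting the schema into the manifest by hand, you can render the ConfigMap from the composed file; --dry-run=client only generates YAML and never touches the cluster (the guard assumes kubectl and the Step 2 output are present):

```shell
# Render a ConfigMap manifest from the composed supergraph schema.
if command -v kubectl >/dev/null 2>&1 && [ -f ./supergraph.graphql ]; then
  kubectl create configmap supergraph-schema \
    --from-file=supergraph.graphql=./supergraph.graphql \
    --dry-run=client -o yaml > configmap.yaml
fi
```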

2. Deployment for Apollo Router

apiVersion: apps/v1
kind: Deployment
metadata:
  name: supergraph-router
spec:
  replicas: 3
  selector:
    matchLabels:
      app: supergraph-router
  template:
    metadata:
      labels:
        app: supergraph-router
    spec:
      containers:
      - name: router
        image: my-org/supergraph-router:v1
        ports:
        - containerPort: 4000
        env:
        - name: APOLLO_KEY
          valueFrom:
            secretKeyRef:
              name: apollo-secrets
              key: api-key
        volumeMounts:
        - name: schema-volume
          mountPath: /etc/router/supergraph.graphql
          subPath: supergraph.graphql
      volumes:
      - name: schema-volume
        configMap:
          name: supergraph-schema
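We also found it worth adding probes to the Deployment. The Apollo Router can expose a /health endpoint (enabled via the health_check section of router.yaml, which defaults to port 8088); assuming it listens on 0.0.0.0:8088, the container spec gains:

```yaml
        readinessProbe:
          httpGet:
            path: /health
            port: 8088
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8088
          initialDelaySeconds: 10
          periodSeconds: 30
```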

3. Service and Ingress

apiVersion: v1
kind: Service
metadata:
  name: supergraph-router-svc
spec:
  selector:
    app: supergraph-router
  ports:
  - port: 4000
    targetPort: 4000
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: supergraph-ingress
spec:
  rules:
  - host: graphql.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: supergraph-router-svc
            port:
              number: 4000

Real-World Lessons Learned

After running this setup in production for 6 months, here are our top takeaways:

  • CI/CD Integration is Critical: We run rover supergraph compose and rover subgraph check in every PR to catch schema conflicts before they reach production. Rover 0.26’s exit codes make this easy to integrate with GitHub Actions or GitLab CI.
  • Monitor Router Metrics: Use Apollo Studio’s built-in metrics or export Prometheus metrics from the router to track request latency, error rates, and subgraph health.
  • Handle Auth at the Router Level: We forward auth headers from the router to subgraphs, so we don’t have to implement auth logic in every subgraph. The Router’s support for Rhai scripts and custom plugins let us add header manipulation easily.
  • Roll Out Schema Changes Gradually: Use Apollo Studio’s schema checks together with Kubernetes rolling updates (or a canary Deployment) to roll out breaking changes without downtime.
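The CI integration point above can be sketched as a GitHub Actions job; this is a minimal illustration, not our production pipeline — the graph ref, file paths, and secret name are placeholders:

```yaml
name: schema-check
on: pull_request
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Rover
        run: |
          curl -sSL https://rover.apollo.dev/nix/v0.26.0 | sh
          echo "$HOME/.rover/bin" >> "$GITHUB_PATH"
      - name: Check subgraph schema against the production graph
        env:
          APOLLO_KEY: ${{ secrets.APOLLO_KEY }}
        # A non-zero exit code from rover fails the PR check automatically.
        run: rover subgraph check my-graph@prod --schema ./users.graphql --name users
```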

Conclusion

Combining Apollo Federation, Rover 0.26, and Kubernetes has let us scale our GraphQL API to handle 10k+ requests per second with 99.99% uptime. The workflow we’ve built is repeatable, easy to maintain, and lets our teams ship subgraph changes independently. If you’re looking to adopt federation in production, this stack is a proven choice.
