Sami Chibani
CI/CD in the Era of AI and Platform Engineering: A Deep Dive into Dagger CI (Part 3)

Part 3: From Scripts to a Platform: Your CI/CD Module Library

Let's be clear: Dagger doesn't eliminate the complexity of modern CI/CD systems. It tames that complexity into a testable, maintainable, and reusable system suitable for modern Platform Engineering practices.

In Part 1 we wrote pipelines as real code. In Part 2 we decoupled them from infrastructure. Our dagger-ci-demo module works. It builds, tests, and runs identically on a laptop and in CI. But it's a single module in a single repository. What happens when the organization grows?


The Growth Problem

Let's say you're at AcmeCorp. Your platform team adopted Dagger six months ago. The first project went well: a single .dagger/ module with build, test, and deploy functions. Then things accelerated.

New repositories appeared. The mobile team needs CI. The data team wants to containerize their pipelines. The infrastructure team is building internal tools. Each team creates their own .dagger/ module, and within weeks, you see the pattern:

  • Copy-paste proliferation. Every team has its own GCP authentication function. Some use service account keys. Some use OIDC. Some hardcode the project ID. None of them handle credential rotation.
  • Security drift. The data team's deploy function doesn't validate image signatures. The mobile team allows unauthenticated Cloud Run services in production. Nobody enforces the naming convention.
  • Maintenance burden. When GCP deprecates the old gcloud auth activate-service-account flow, someone has to find and update every copy. Nobody knows how many there are.

This is the exact same problem that Terraform modules and Helm charts solved for infrastructure. The solution is the same too: a module library.

The difference is that Dagger already has this built in.


The Daggerverse: Community Modules

Before building anything custom, check what already exists. Dagger is a community-driven project, and the community has published hundreds of modules for common use cases, all centralized in what's called the Daggerverse.

The Daggerverse is Dagger's public module registry. Any module published to a Git repository and indexed by Dagger Cloud becomes discoverable there. You can browse by category, search by keyword, and inspect the typed API of any module before installing it.

For example, if AcmeCorp runs on Google Cloud, a quick search reveals modules that already handle the GCP fundamentals:

| Module | What it does |
| --- | --- |
| gcp-auth | GCP authentication via OIDC or service account key |
| gcp-artifact-registry | Publish container images to Artifact Registry |
| gcp-cloud-run | Deploy services to Cloud Run |
| gcp-firebase | Deploy to Firebase Hosting |
| python-build | Build Python applications with pip/uv |
| angular | Build and test Angular applications |
| trivy | Container vulnerability scanning with Trivy |

These are real, production-tested modules. Each one has a typed API, documentation auto-generated from the code, and a version you can pin.

Full disclosure: I'm the author of most of the GCP modules listed above (gcp-auth, gcp-artifact-registry, gcp-cloud-run, gcp-firebase, python-build, angular). I'm recommending them not because they're mine, but because I built them to solve real problems I kept running into over several years of running production workloads on Google Cloud. They encode patterns and guardrails that my team and I needed — OIDC authentication that actually works across CI providers, Artifact Registry publishing with proper tagging, Cloud Run deployments with sane defaults. If you're on GCP, I genuinely think they'll save you time. If they don't fit your needs, fork them or build your own — that's the beauty of the module system.

You install them with a single command:

dagger install github.com/telchak/daggerverse/gcp-auth@gcp-auth/v0.2.1

This adds the module as a dependency in your dagger.json. You can then call its functions from your pipeline code via dag.gcp_auth(), or directly from the CLI:

# Test GCP authentication from your terminal using a service account key
dagger call -m github.com/telchak/daggerverse/gcp-auth@gcp-auth/v0.2.1 \
  gcloud-container \
  --credentials=file:./my-service-account-key.json \
  --project-id=my-project \
  with-exec --args="gcloud","auth","list" \
  stdout

The module system handles dependency resolution, version pinning, and cross-module type compatibility. You compose modules the same way you compose functions, by passing outputs as inputs:

# Authenticate
gcloud = dag.gcp_auth().from_service_account_key(key=sa_key, project=project)

# Publish (takes the authenticated container from gcp-auth)
image_uri = await dag.gcp_artifact_registry().publish(
    container=my_app, gcloud=gcloud, project_id=project,
    region="us-central1", repository="my-repo", image_name="api", tag="v1.0.0",
)

# Deploy (takes the image URI from artifact-registry)
url = await dag.gcp_cloud_run().service().deploy(
    gcloud=gcloud, service_name="api", image=image_uri, region="us-central1",
)

This is the first layer of the module system: public modules that handle generic, well-understood operations. They're open-source, community-maintained, and available to anyone.

But for most organizations, public modules alone aren't enough.


Two-Layer Module Architecture

A typical production setup has two layers of modules:

┌──────────────────────┐     ┌──────────────────────────┐     ┌──────────────────────┐
│  Daggerverse         │────▶│  Private Modules Repo    │────▶│  Your CI Pipelines   │
│  (public modules)    │     │  (org-specific layer)    │     │  (.dagger/ per repo) │
└──────────────────────┘     └──────────────────────────┘     └──────────────────────┘

 Generic operations          Security, compliance,            Project-specific
 (GCP auth, Docker,          naming, defaults,                build/test/deploy
  language builds)           org business logic               logic

Layer 1: Public modules handle generic operations: authenticate to GCP, push a container image, deploy to Cloud Run. They have no opinion about your organization's naming conventions, security policies, or environment structure.

Layer 2: Private modules wrap public modules with organization-specific logic. They encode your security requirements, enforce naming conventions, inject audit trails, and provide a curated interface that teams consume without needing to understand the underlying complexity.

This separation matters because:

  • Public modules evolve independently. When gcp-auth adds a new authentication method, you get it for free on the next version bump.
  • Private modules enforce your rules. Every deploy goes through your security checks, regardless of which team triggered it.
  • Teams consume the private layer only. They don't need to know which public modules are underneath, or how authentication works internally. They call dag.acme_deploy() and get a compliant deployment.
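The second bullet is easiest to see in plain Python. Here is an illustrative sketch of the kind of guardrail a private module encodes: validate inputs against org policy, then derive the canonical service name. The allowed values and naming scheme below are examples, not prescriptions:

```python
# Illustrative org policy as code (values are examples).
ALLOWED_REGIONS = ["europe-west1", "us-central1"]
ALLOWED_ENVIRONMENTS = ["staging", "production"]


def resolve_service_name(team: str, service: str, environment: str, region: str) -> str:
    """Validate inputs against org policy, then build the canonical name."""
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"Region {region!r} not allowed. Must be one of: {ALLOWED_REGIONS}")
    if environment not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"Environment {environment!r} not allowed. Must be one of: {ALLOWED_ENVIRONMENTS}")
    return f"acme-{team}-{service}-{environment}"


print(resolve_service_name("payments", "api", "staging", "europe-west1"))
# acme-payments-api-staging
```

Because the check lives in one place, tightening the policy, say removing a region, is a single change that every team picks up on its next version bump.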

Let's see what this looks like in practice for AcmeCorp. But first, a few principles that apply to both layers.


Module Development Best Practices

Whether you're building public Daggerverse modules or private organizational ones, the same practices apply. Dagger modules are consumed across SDKs (Python, Go, TypeScript, Java) and from the CLI, so the way you write them directly impacts the experience for every consumer.

Documentation is your API

This is the single most important practice. Every docstring, every Doc() annotation, every class description you write is automatically used to generate documentation across all Dagger SDKs and the CLI. When someone runs dagger call your-function --help, they see your docstrings. When someone browses your module on the Daggerverse, they see your Doc() annotations. When someone uses your module from Go or TypeScript, their IDE shows your descriptions as inline documentation.

from typing import Annotated

import dagger
from dagger import Doc, function, object_type


@object_type
class GcpAuth:
    """Authenticate to Google Cloud Platform.           ← Shows in module description

    Supports two authentication methods:
    - Service account key (local dev, testing)
    - OIDC / Workload Identity Federation (CI/CD)
    """

    @function
    async def from_oidc(
        self,
        token: Annotated[                              # ← Each parameter gets its own
            dagger.Secret,                             #   help text in CLI and IDE
            Doc("OIDC token from the CI provider "
                "(e.g. GitHub Actions ID token)"),
        ],
        provider: Annotated[
            str,
            Doc("Full Workload Identity Provider resource name "
                "(projects/{id}/locations/global/workloadIdentityPools/{pool}/providers/{provider})"),
        ],
    ) -> dagger.Container:
        """Authenticate using OIDC / Workload Identity. ← Shows in `dagger call --help`

        Best for CI/CD pipelines — no long-lived
        credentials to rotate or leak.
        """

Write docstrings as if they're the only documentation someone will ever read, because for most consumers, they are.
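This works because the Python SDK reads ordinary docstrings: nothing here is Dagger-specific syntax. A minimal standalone sketch (Dagger decorators omitted so it runs as plain Python):

```python
class GcpAuth:
    """Authenticate to Google Cloud Platform."""

    def from_oidc(self, token: str, provider: str) -> None:
        """Authenticate using OIDC / Workload Identity."""


# The same strings Dagger surfaces in `dagger call --help` are
# reachable through plain introspection:
print(GcpAuth.__doc__)            # Authenticate to Google Cloud Platform.
print(GcpAuth.from_oidc.__doc__)  # Authenticate using OIDC / Workload Identity.
```

A quick smoke test: if `help(GcpAuth)` reads the way you want consumers to see it, the generated CLI and SDK documentation generally will too.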

Single responsibility per module

Each module should do one thing well. gcp-auth handles authentication. gcp-cloud-run handles Cloud Run deployments. Don't create a gcp-everything module that does auth, registry, Cloud Run, and Firebase. Smaller modules are easier to test, version, and compose.

A good rule of thumb: if your module needs two unrelated sets of credentials, it's probably two modules.

Use field() for constructor dependencies

When a module needs an authenticated context or shared state, accept it as a constructor field rather than repeating it on every function:

from typing import Annotated

import dagger
from dagger import Doc, field, object_type


@object_type
class ArtifactRegistry:
    """Manage container images in Google Artifact Registry."""

    gcloud: Annotated[
        dagger.Container,
        Doc("Authenticated gcloud container from gcp-auth"),
    ] = field()

This makes composition natural. You wire up authentication once and pass it in:

gcloud = dag.gcp_auth().from_service_account_key(key=key, project=project)
registry = dag.artifact_registry(gcloud=gcloud)  # Authenticated for all calls

Return Dagger types, not strings

When possible, return Container, Directory, or File rather than strings. This enables chaining: the output of your module becomes the input of the next one. Return str only for terminal values (URLs, status messages) or when the consumer explicitly needs text output.

# Good: returns Container — composable with other modules
@function
async def build(self, source: ...) -> dagger.Container: ...

# Good: returns str — terminal value (a URL)
@function
async def deploy(self, image: ...) -> str: ...

Use dagger.Secret for sensitive data

Never accept credentials, tokens, or keys as plain str parameters. Dagger's Secret type ensures sensitive data is never logged, cached, or written to disk. The CLI handles secret injection transparently:

# From environment variable
dagger call deploy --token=env:MY_TOKEN

# From file
dagger call deploy --key=file:./service-account.json

Private functions stay private

Not every function in your module needs to be part of the public API. Dagger follows each SDK's language conventions for visibility. In Python, any method prefixed with an underscore (_) is treated as private and won't be exported by Dagger. It won't show up in dagger call --help, won't be callable from the CLI, and won't appear in the Daggerverse documentation:

@object_type
class AcmeDeploy:

    def _validate_and_resolve(self, team, service_name, environment, region):
        """Internal helper — not exported by Dagger."""
        ...

    def _authenticate(self, oidc_token, project):
        """Internal helper — not exported by Dagger."""
        ...

    @function
    async def cloud_run(self, ...) -> str:
        """Public function — exported and callable."""
        ctx = self._validate_and_resolve(...)
        gcloud = self._authenticate(...)
        ...

Use this to keep your public API clean while organizing internal logic into reusable helpers. In Go, unexported (lowercase) functions serve the same role. In TypeScript, omit the @func() decorator.

Pin your dependencies

Always pin module dependencies to a specific version tag, not @main or @latest. This ensures reproducible builds and prevents upstream changes from breaking your pipelines unexpectedly:

{"name": "gcp-auth", "source": "github.com/telchak/daggerverse/gcp-auth@gcp-auth/v0.2.1"}

Building the Private Module Layer

AcmeCorp's platform team creates a private repository, github.com/telchak/acme-dagger-modules, that contains organization-specific modules. These modules wrap the public Daggerverse modules and add AcmeCorp's business logic.

Repository Structure

telchak/acme-dagger-modules/
├── acme-backend/                   # Python/FastAPI build, test, lint, SBOM
│   ├── dagger.json
│   ├── pyproject.toml
│   └── src/acme_backend/
│       └── main.py
├── acme-frontend/                  # Angular build, test, lint, audit
│   ├── dagger.json
│   ├── pyproject.toml
│   └── src/acme_frontend/
│       └── main.py
├── acme-deploy/                    # Compliant deployment (Cloud Run + Firebase)
│   ├── dagger.json                 # + Trivy scanning, production branch gating
│   ├── pyproject.toml
│   └── src/acme_deploy/
│       └── main.py
└── tests/                          # Integration tests for all modules
    ├── dagger.json
    ├── pyproject.toml
    └── src/tests/
        └── main.py

Each module is a self-contained Dagger module with its own dagger.json, dependencies, and source code. They're versioned together via Git tags (e.g. acme-deploy/v1.2.0) or as a monorepo with a single version.
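Consuming repositories then pin the private layer exactly as they pin public modules. A hypothetical dagger.json dependency entry (module name invented for illustration) targeting that tag:

```json
{
  "name": "my-service",
  "dependencies": [
    { "name": "acme-deploy", "source": "github.com/telchak/acme-dagger-modules/acme-deploy@acme-deploy/v1.2.0" }
  ]
}
```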

Let's walk through the three core modules that cover AcmeCorp's full-stack pipeline: build, test, and deploy.

acme-backend: Python Build & Test

This module wraps the public python-build module with AcmeCorp's standards: base image policy, cache volume conventions, health check endpoint, and standard entrypoint configuration.

"""AcmeCorp Python backend builder — standardized, cached, production-ready."""

from typing import Annotated

import dagger
from dagger import DefaultPath, Doc, check, dag, function, object_type


# Approved base image: the only image allowed in production containers.
APPROVED_BASE_IMAGE = "python:3.13-slim"

# Minimum test coverage threshold enforced across all backend services.
MIN_COVERAGE_PERCENT = 80


@object_type
class AcmeBackend:
    """Build and test Python backends the AcmeCorp way.

    Enforces:
    - Organization-approved base images (python:3.13-slim)
    - Standardized cache volumes for pip
    - Health check endpoint convention (/health)
    - Cloud Run-compatible entrypoint and port
    - Minimum 80% test coverage
    - CycloneDX SBOM generation for vulnerability tracking
    """

    def _base_container(self, source: dagger.Directory) -> dagger.Container:
        """Standard Python container with source and cached deps."""
        return (
            dag.container()
            .from_(APPROVED_BASE_IMAGE)
            .with_workdir("/app")
            .with_directory("/app", source)
            .with_mounted_cache("/root/.cache/pip", dag.cache_volume("acme-pip"))
            .with_exec(["pip", "install", "-r", "requirements.txt"])
        )

    @function
    def build(
        self,
        source: Annotated[dagger.Directory, Doc("Python backend source directory")],
        port: Annotated[int, Doc("Application port")] = 8080,
    ) -> dagger.Container:
        """Build a production-ready FastAPI container.

        Uses the organization-approved base image and standardized
        entrypoint. Returns a Container that can be passed directly
        to acme-deploy.
        """
        return (
            self._base_container(source)
            .with_env_variable("PORT", str(port))
            .with_exposed_port(port)
            .with_label("org.opencontainers.image.vendor", "AcmeCorp")
            .with_entrypoint([
                "uvicorn", "src.main:app",
                "--host", "0.0.0.0", "--port", str(port),
            ])
        )

    @function
    @check
    async def test(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Python backend source directory"),
            DefaultPath("."),
        ],
        coverage: Annotated[bool, Doc("Enforce minimum coverage threshold")] = True,
    ) -> str:
        """Run the test suite with AcmeCorp conventions.

        Uses pytest with verbose output, short tracebacks, and coverage
        reporting. Fails if coverage drops below the org-wide threshold.
        """
        pytest_args = ["pytest", "-v", "--tb=short"]
        if coverage:
            pytest_args.extend([
                "--cov=src", "--cov-report=term-missing",
                f"--cov-fail-under={MIN_COVERAGE_PERCENT}",
            ])
        pytest_args.append("tests/")

        return await (
            self._base_container(source)
            .with_exec(pytest_args)
            .stdout()
        )

    @function
    @check
    async def lint(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Python backend source directory"),
            DefaultPath("."),
        ],
    ) -> str:
        """Run linting (ruff) on the source code."""
        return await (
            dag.container()
            .from_(APPROVED_BASE_IMAGE)
            .with_workdir("/app")
            .with_directory("/app", source)
            .with_exec(["pip", "install", "ruff"])
            .with_exec(["ruff", "check", "src/"])
            .stdout()
        )

    @function
    async def sbom(
        self,
        source: Annotated[dagger.Directory, Doc("Python backend source directory")],
    ) -> dagger.File:
        """Generate a CycloneDX SBOM for vulnerability tracking.

        Produces a Software Bill of Materials in CycloneDX JSON format,
        suitable for upload to Dependency-Track or similar platforms.
        """
        return (
            self._base_container(source)
            .with_exec(["pip", "install", "cyclonedx-bom"])
            .with_exec([
                "cyclonedx-py", "requirements",
                "--input", "requirements.txt",
                "--output", "/app/sbom.json",
                "--format", "json",
            ])
            .file("/app/sbom.json")
        )

Its dagger.json:

{
  "name": "acme-backend",
  "engineVersion": "v0.20.3",
  "sdk": { "source": "python" },
  "dependencies": [
    { "name": "python-build", "source": "github.com/telchak/daggerverse/python-build@python-build/v0.2.0" }
  ]
}

acme-frontend: Angular Build & Test

Same pattern for the frontend: wraps the public angular module with AcmeCorp's Node version policy, cache conventions, and build configuration.

"""AcmeCorp Angular frontend builder — standardized builds and testing."""

from typing import Annotated

import dagger
from dagger import DefaultPath, Doc, check, dag, function, object_type


# Approved Node.js version — pinned to LTS for security compliance.
APPROVED_NODE_VERSION = "20"


@object_type
class AcmeFrontend:
    """Build and test Angular frontends the AcmeCorp way.

    Enforces:
    - Organization-approved Node.js version (20 LTS)
    - Standardized npm cache volumes
    - Production build configuration with source map exclusion
    - Vitest-based testing via Angular's built-in test runner
    - Audit check for known vulnerabilities in dependencies
    """

    def _base_container(self, source: dagger.Directory) -> dagger.Container:
        """Standard Node container with source and cached deps."""
        return (
            dag.container()
            .from_(f"node:{APPROVED_NODE_VERSION}-slim")
            .with_workdir("/app")
            .with_directory("/app", source)
            .with_mounted_cache("/root/.npm", dag.cache_volume("acme-npm"))
            .with_exec(["npm", "ci"])
        )

    @function
    def build(
        self,
        source: Annotated[dagger.Directory, Doc("Angular project source directory")],
    ) -> dagger.Directory:
        """Build the Angular app for production.

        Returns a Directory containing the dist/ output,
        ready to be passed to acme-deploy for Firebase Hosting.
        """
        return (
            dag.angular()
            .build(
                source=source,
                configuration="production",
                node_version=APPROVED_NODE_VERSION,
                npm_cache=dag.cache_volume("acme-npm"),
            )
        )

    @function
    @check
    async def test(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Angular project source directory"),
            DefaultPath("."),
        ],
    ) -> str:
        """Run the Angular test suite."""
        return await (
            self._base_container(source)
            .with_exec(["npx", "ng", "test"])
            .stdout()
        )

    @function
    @check
    async def lint(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Angular project source directory"),
            DefaultPath("."),
        ],
    ) -> str:
        """Run Angular linting."""
        return await (
            self._base_container(source)
            .with_exec(["npx", "ng", "lint"])
            .stdout()
        )

    @function
    @check
    async def audit(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Angular project source directory"),
            DefaultPath("."),
        ],
    ) -> str:
        """Check npm dependencies for known vulnerabilities.

        Runs npm audit at the 'moderate' severity level. Fails if any
        vulnerabilities at moderate or higher are found.
        """
        return await (
            self._base_container(source)
            .with_exec(["npm", "audit", "--audit-level=moderate"])
            .stdout()
        )

Its dagger.json:

{
  "name": "acme-frontend",
  "engineVersion": "v0.20.3",
  "sdk": { "source": "python" },
  "dependencies": [
    { "name": "angular", "source": "github.com/telchak/daggerverse/angular@angular/v0.2.0" }
  ]
}

acme-deploy: Compliant Deployment

Required Google APIs — The deployment module needs these APIs enabled on each target GCP project (staging and production):

gcloud services enable \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  firebasehosting.googleapis.com \
  iam.googleapis.com

These cover Cloud Run deployment, container image storage in Artifact Registry, Firebase Hosting for the frontend, and IAM for service account authentication.

This module handles the full deployment lifecycle: vulnerability scanning, authentication, publishing to Artifact Registry, Cloud Run for backends, and Firebase Hosting for frontends. It wraps four public Daggerverse modules plus Trivy with organization-specific defaults, security checks, and production safeguards:

"""AcmeCorp deployment module — compliant, opinionated, simple."""

from typing import Annotated

import dagger
from dagger import DefaultPath, Doc, check, dag, function, object_type


ALLOWED_REGIONS = ["europe-west1", "us-central1"]
ALLOWED_ENVIRONMENTS = ["staging", "production"]

# Pin Trivy to a known safe version.
# Versions 0.69.4–0.69.6 were compromised in a supply chain attack (CVE-2026-33634).
TRIVY_VERSION = "0.69.3"

# Resource defaults enforced across all Cloud Run services.
CLOUD_RUN_DEFAULTS = {
    "min_instances": {"staging": 0, "production": 1},
    "max_instances": {"staging": 5, "production": 50},
    "cpu": "1",
    "memory": "512Mi",
    "concurrency": 80,
    "timeout": "300s",
}


@object_type
class AcmeDeploy:
    """Deploy services the AcmeCorp way.

    Wraps public daggerverse modules with org-specific defaults:
    - Enforces naming conventions (acme-{team}-{service}-{env})
    - Supports OIDC authentication (CI) and local ADC (developer laptops)
    - Deploys to the org's standard regions
    - Injects required labels and metadata for cost tracking and audit
    - Production services always authenticated (no public endpoints)
    - Git metadata (branch, commit SHA) attached to every deployment
    """

    def _validate_and_resolve(self, team: str, service_name: str, environment: str, region: str):
        """Shared validation and naming logic. Returns the full service name."""
        if region not in ALLOWED_REGIONS:
            msg = f"Region {region} not allowed. Must be one of: {ALLOWED_REGIONS}"
            raise ValueError(msg)
        if environment not in ALLOWED_ENVIRONMENTS:
            msg = f"Environment {environment} not allowed. Must be one of: {ALLOWED_ENVIRONMENTS}"
            raise ValueError(msg)

        return f"acme-{team}-{service_name}-{environment}"

    def _validate_production_branch(self, environment: str, git_branch: str) -> None:
        """Refuse production deployments from non-main branches."""
        if environment == "production" and git_branch != "main":
            msg = (
                f"Production deployment forbidden from branch '{git_branch}'. "
                f"Only the 'main' branch can deploy to production."
            )
            raise ValueError(msg)

    def _authenticate(
        self,
        project_id: str,
        oidc_request_token: dagger.Secret | None = None,
        oidc_request_url: dagger.Secret | None = None,
        gcloud_config: dagger.Directory | None = None,
    ) -> dagger.Container:
        """Authenticate to GCP — CI (OIDC) or local (ADC from host).

        In CI: pass oidc_request_token and oidc_request_url (from GitHub Actions).
        Locally: pass gcloud_config (your ~/.config/gcloud directory).
        """
        if gcloud_config:
            return dag.gcp_auth().gcloud_container_from_host(
                project_id=project_id,
                gcloud_config=gcloud_config,
            )

        if oidc_request_token and oidc_request_url:
            return dag.gcp_auth().gcloud_container_from_github_actions(
                workload_identity_provider="projects/123456/locations/global/workloadIdentityPools/github/providers/github-actions",
                project_id=project_id,
                oidc_request_token=oidc_request_token,
                oidc_request_url=oidc_request_url,
                service_account_email=f"ci-deployer@{project_id}.iam.gserviceaccount.com",
            )

        msg = "Provide either gcloud-config (local) or oidc-request-token + oidc-request-url (CI)"
        raise ValueError(msg)

    @function
    @check
    async def scan(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Backend source directory"),
            DefaultPath("."),
        ],
        port: Annotated[int, Doc("Application port")] = 8080,
    ) -> str:
        """Scan the built container for HIGH and CRITICAL CVEs.

        Builds the container from source using acme-backend, then runs
        Trivy against it. Fails if any vulnerabilities at HIGH severity
        or above are found. Pinned to a safe Trivy version.
        """
        container = dag.acme_backend().build(source=source, port=port)
        return await dag.trivy(version=TRIVY_VERSION).container(container).output(format="table")

    def _build_labels(
        self, team: str, environment: str,
        git_branch: str = "", git_sha: str = "",
    ) -> dict[str, str]:
        """Standard labels for cost tracking, audit, and ownership."""
        labels = {
            "team": team,
            "environment": environment,
            "managed-by": "dagger",
        }
        if git_branch:
            labels["git-branch"] = git_branch
        if git_sha:
            labels["git-sha"] = git_sha[:8]
        return labels

    @function
    async def cloud_run(
        self,
        source: Annotated[dagger.Directory, Doc("Backend source directory")],
        service_name: Annotated[str, Doc("Service name (without prefix)")],
        team: Annotated[str, Doc("Team name for naming and labels")],
        project_id: Annotated[str, Doc("GCP project ID to deploy to")],
        oidc_request_token: Annotated[dagger.Secret | None, Doc("ACTIONS_ID_TOKEN_REQUEST_TOKEN (CI)")] = None,
        oidc_request_url: Annotated[dagger.Secret | None, Doc("ACTIONS_ID_TOKEN_REQUEST_URL (CI)")] = None,
        gcloud_config: Annotated[dagger.Directory | None, Doc("Host gcloud config dir for local auth (~/.config/gcloud)")] = None,
        environment: Annotated[str, Doc("Target environment")] = "staging",
        region: Annotated[str, Doc("GCP region")] = "europe-west1",
        port: Annotated[int, Doc("Application port")] = 8080,
        repository: Annotated[str, Doc("Artifact Registry repository name (defaults to acme-{team})")] = "",
        git_branch: Annotated[str, Doc("Git branch (for audit labels)")] = "",
        git_sha: Annotated[str, Doc("Git commit SHA (for audit labels)")] = "",
        # NOTE: This flag is exposed here for demo/testing purposes only.
        # In a real production setup, IAM invoker checks should remain enabled
        # and access should be controlled via proper IAM bindings or a load balancer.
        disable_invoker_iam_check: Annotated[bool, Doc("Disable Cloud Run IAM invoker check (for demo/testing only)")] = False,
    ) -> str:
        """Build and deploy a backend service to Cloud Run with AcmeCorp compliance.

        Builds the container from source using acme-backend, scans for
        vulnerabilities, pushes to Artifact Registry, and deploys to
        Cloud Run — all in a single call. Enforces naming conventions,
        region whitelist, production branch gate, and access controls.

        Authentication: pass gcloud-config for local development, or
        oidc-request-token + oidc-request-url for CI (GitHub Actions).

        Production services are never publicly accessible — they require
        IAM authentication. Staging services allow unauthenticated access
        for testing convenience.
        """
        self._validate_production_branch(environment, git_branch)
        full_name = self._validate_and_resolve(team, service_name, environment, region)
        gcloud = self._authenticate(
            project_id=project_id,
            oidc_request_token=oidc_request_token,
            oidc_request_url=oidc_request_url,
            gcloud_config=gcloud_config,
        )
        labels = self._build_labels(team, environment, git_branch, git_sha)

        # Build the container from source
        container = dag.acme_backend().build(source=source, port=port)

        # Scan for vulnerabilities before shipping
        await dag.trivy(version=TRIVY_VERSION).container(container).output(format="table")

        # Publish to Artifact Registry
        image_uri = await dag.gcp_artifact_registry().publish(
            container=container,
            gcloud=gcloud,
            project_id=project_id,
            region=region,
            repository=repository or f"acme-{team}",
            image_name=service_name,
            tag=f"{environment}-latest",
        )

        # Deploy to Cloud Run with org-standard configuration
        url = await dag.gcp_cloud_run().service().deploy(
            gcloud=gcloud,
            service_name=full_name,
            image=image_uri,
            region=region,
            allow_unauthenticated=(environment != "production"),
            min_instances=CLOUD_RUN_DEFAULTS["min_instances"][environment],
            max_instances=CLOUD_RUN_DEFAULTS["max_instances"][environment],
            cpu=CLOUD_RUN_DEFAULTS["cpu"],
            memory=CLOUD_RUN_DEFAULTS["memory"],
            concurrency=CLOUD_RUN_DEFAULTS["concurrency"],
            timeout=CLOUD_RUN_DEFAULTS["timeout"],
            disable_invoker_iam_check=disable_invoker_iam_check,
        )

        return url

    @function
    async def firebase(
        self,
        source: Annotated[dagger.Directory, Doc("Frontend source directory")],
        service_name: Annotated[str, Doc("Service name for the hosting site")],
        team: Annotated[str, Doc("Team name")],
        project_id: Annotated[str, Doc("GCP project ID to deploy to")],
        oidc_request_token: Annotated[dagger.Secret | None, Doc("ACTIONS_ID_TOKEN_REQUEST_TOKEN (CI)")] = None,
        oidc_request_url: Annotated[dagger.Secret | None, Doc("ACTIONS_ID_TOKEN_REQUEST_URL (CI)")] = None,
        gcloud_config: Annotated[dagger.Directory | None, Doc("Host gcloud config dir for local auth (~/.config/gcloud)")] = None,
        environment: Annotated[str, Doc("Target environment")] = "staging",
        git_branch: Annotated[str, Doc("Git branch (for audit trail)")] = "",
    ) -> str:
        """Build and deploy a frontend to Firebase Hosting with AcmeCorp compliance.

        Builds the frontend from source using acme-frontend, then deploys
        to production channel ('live') or a preview channel matching the
        environment name. Production deploys are gated to the main branch only.

        Authentication: pass gcloud-config for local development, or
        oidc-request-token + oidc-request-url for CI (GitHub Actions).
        """
        self._validate_production_branch(environment, git_branch)
        self._validate_and_resolve(team, service_name, environment, region="europe-west1")

        # Authenticate to Firebase: use ADC credentials (local) or OIDC (CI).
        # gcp-firebase has its own auth — it doesn't use gcloud containers.
        firebase_auth: dict = {}
        if gcloud_config:
            credentials = dag.set_secret(
                "firebase_credentials",
                await gcloud_config.file("application_default_credentials.json").contents(),
            )
            firebase_auth["credentials"] = credentials
        elif oidc_request_token and oidc_request_url:
            gcloud = self._authenticate(
                project_id=project_id,
                oidc_request_token=oidc_request_token,
                oidc_request_url=oidc_request_url,
            )
            token_output = await gcloud.with_exec(["gcloud", "auth", "print-access-token"]).stdout()
            firebase_auth["access_token"] = dag.set_secret("firebase_access_token", token_output.strip())

        channel = "live" if environment == "production" else environment
        build_command = f"npm run build -- --configuration={environment}"

        if channel == "live":
            return await dag.gcp_firebase().deploy(
                project_id=project_id,
                source=source,
                build_command=build_command,
                deploy_functions=False,
                **firebase_auth,
            )

        return await dag.gcp_firebase().deploy_preview(
            project_id=project_id,
            channel_id=channel,
            source=source,
            build_command=build_command,
            **firebase_auth,
        )

Its dagger.json:

{
  "name": "acme-deploy",
  "engineVersion": "v0.20.3",
  "sdk": { "source": "python" },
  "dependencies": [
    { "name": "gcp-auth", "source": "github.com/telchak/daggerverse/gcp-auth@gcp-auth/v0.2.1" },
    { "name": "gcp-artifact-registry", "source": "github.com/telchak/daggerverse/gcp-artifact-registry@gcp-artifact-registry/v0.2.0" },
    { "name": "gcp-cloud-run", "source": "github.com/telchak/daggerverse/gcp-cloud-run@gcp-cloud-run/v0.3.0" },
    { "name": "gcp-firebase", "source": "github.com/telchak/daggerverse/gcp-firebase@gcp-firebase/v0.2.0" },
    { "name": "trivy", "source": "github.com/sagikazarmark/daggerverse/trivy@v0.6.0" },
    { "name": "acme-backend", "source": "../acme-backend" },
    { "name": "acme-frontend", "source": "../acme-frontend" }
  ]
}

What the private layer gives you

Compare what a developer writes without the private layer for a full-stack deploy across two GCP services:

# Without private modules — every team reinvents this (per service!)
gcloud = dag.gcp_auth().from_oidc(
    token=oidc_token,
    provider="projects/123456/locations/global/workloadIdentityPools/github/providers/github-actions",
    service_account="ci-deployer@acmecorp-staging.iam.gserviceaccount.com",
    project="acmecorp-staging",
)

# Backend: build, scan, push, deploy
backend = (
    dag.container().from_("python:3.13-slim")
    .with_workdir("/app")
    .with_file("/app/requirements.txt", backend_source.file("requirements.txt"))
    .with_exec(["pip", "install", "-r", "requirements.txt"])
    .with_directory("/app/src", backend_source.directory("src"))
    .with_env_variable("PORT", "8080").with_exposed_port(8080)
    .with_entrypoint(["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8080"])
)
# Vulnerability scan — easy to forget, and every team configures it differently
await dag.trivy().container(backend).output("table")
image_uri = await dag.gcp_artifact_registry().publish(
    container=backend, gcloud=gcloud, project_id="acmecorp-staging",
    region="europe-west1", repository="acme-backend", image_name="api",
    tag="staging-latest",
)
backend_url = await dag.gcp_cloud_run().service().deploy(
    gcloud=gcloud, service_name="acme-backend-api-staging", image=image_uri,
    region="europe-west1", allow_unauthenticated=True,
)

# Frontend: build, deploy
dist = (
    dag.container().from_("node:20-slim").with_workdir("/app")
    .with_directory("/app", frontend_source)
    .with_exec(["npm", "ci"])
    .with_exec(["npx", "ng", "build", "--configuration=production"])
    .directory("/app/dist/angular-frontend/browser")
)
token_output = await gcloud.with_exec(["gcloud", "auth", "print-access-token"]).stdout()
access_token = dag.set_secret("firebase_access_token", token_output.strip())
frontend_url = await dag.gcp_firebase().deploy_preview(
    project_id="acmecorp-staging", channel_id="staging", source=dist,
    access_token=access_token, skip_build=True,
)

With the private layer, the same full-stack deploy becomes:

# With private modules — two calls, compliance baked in everywhere
backend_url = await dag.acme_deploy().cloud_run(
    source=backend_source, service_name="api", team="backend",
    project_id="acmecorp-staging",
)
frontend_url = await dag.acme_deploy().firebase(
    source=frontend_source, service_name="web", team="frontend",
    project_id="acmecorp-staging",
)

Two calls instead of some forty lines. No authentication boilerplate, no naming convention to remember. The naming convention, region whitelist, authentication flow, label requirements, vulnerability scanning, production branch gating, Cloud Run resource defaults, SBOM generation — all of it is encoded once in the private modules and enforced everywhere. A new team member doesn't need to know which base image is approved, that containers are Trivy-scanned before shipping, or that production services must not be publicly accessible. The modules handle it.
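The compliance helpers this relies on (_validate_production_branch, _validate_and_resolve) are ordinary Python with no Dagger machinery. Here is a minimal, hypothetical sketch of such validators, assuming the region whitelist and the acme-&lt;team&gt;-&lt;service&gt;-&lt;environment&gt; naming convention that the examples suggest; the real AcmeCorp module may differ:

```python
# Illustrative sketch of policy-as-code validators like those acme-deploy
# runs before touching any cloud API. The whitelist values and naming
# convention are assumptions inferred from the examples, not the real module.

ALLOWED_REGIONS = ["europe-west1", "us-central1"]
ALLOWED_ENVIRONMENTS = ["staging", "production"]

def validate_production_branch(environment: str, git_branch: str) -> None:
    """Gate production deploys to the main branch."""
    if environment == "production" and git_branch != "main":
        raise ValueError(
            f"Production deployment forbidden from branch '{git_branch}'. "
            "Only the 'main' branch can deploy to production."
        )

def validate_and_resolve(team: str, service_name: str,
                         environment: str, region: str) -> str:
    """Enforce region/environment whitelists; return the full service name."""
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"Region {region} not allowed. Must be one of: {ALLOWED_REGIONS}")
    if environment not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"Unknown environment: {environment}")
    # Org-wide naming convention: acme-<team>-<service>-<environment>
    return f"acme-{team}-{service_name}-{environment}"
```

Because these checks raise before any container is built or any credential is exchanged, a forbidden region or an off-main production deploy fails in seconds, which is what makes policies like this cheap to test.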


Consuming Modules in Your Pipelines

With both layers in place, here's what a typical project's .dagger/ looks like at AcmeCorp:

Project Structure

dagger-ci-demo/
├── backend/                  # FastAPI application code
│   ├── src/
│   ├── tests/
│   └── requirements.txt
├── frontend/                 # Angular application code
│   ├── src/
│   └── package.json
├── .dagger/                  # Dagger module (CI/CD pipeline)
│   ├── dagger.json
│   ├── pyproject.toml
│   └── src/dagger_ci_demo/
│       └── main.py
└── .github/
    └── workflows/
        └── ci.yml            # GitHub Actions (just calls dagger)

dagger.json: Dependencies

The project only depends on the private layer. Public modules are transitive dependencies; the project never references them directly:

{
  "name": "dagger-ci-demo",
  "engineVersion": "v0.20.3",
  "sdk": { "source": "python" },
  "dependencies": [
    { "name": "acme-backend", "source": "github.com/telchak/acme-dagger-modules/acme-backend@v1.0.0" },
    { "name": "acme-frontend", "source": "github.com/telchak/acme-dagger-modules/acme-frontend@v1.0.0" },
    { "name": "acme-deploy", "source": "github.com/telchak/acme-dagger-modules/acme-deploy@v1.0.0" }
  ]
}

To install these modules, run dagger install for each dependency:

dagger install github.com/telchak/acme-dagger-modules/acme-backend@v1.0.0
dagger install github.com/telchak/acme-dagger-modules/acme-frontend@v1.0.0
dagger install github.com/telchak/acme-dagger-modules/acme-deploy@v1.0.0

Pipeline Code

This is where everything comes together. The project pipeline doesn't contain a single line of build logic, authentication boilerplate, or deployment configuration; it only orchestrates the private modules.

If you want to follow along, come back to the same dagger-ci-demo repository from Part 1, or run dagger init --sdk=python to scaffold a fresh module. Once you've installed the AcmeCorp modules above, replace the content of .dagger/src/dagger_ci_demo/main.py with the following:

"""CI/CD pipeline for dagger-ci-demo — powered by AcmeCorp modules."""

from typing import Annotated

import dagger
from dagger import Doc, dag, function, object_type


@object_type
class DaggerCiDemo:
    """Full-stack pipeline for the Angular + FastAPI product."""

    @function
    async def test(
        self,
        backend_source: Annotated[dagger.Directory, Doc("Backend source directory")],
        frontend_source: Annotated[dagger.Directory, Doc("Frontend source directory")],
    ) -> str:
        """Run all tests (backend + frontend)."""
        backend_result = await dag.acme_backend().test(source=backend_source)
        frontend_result = await dag.acme_frontend().test(source=frontend_source)
        return f"Backend:\n{backend_result}\n\nFrontend:\n{frontend_result}"

    @function
    async def lint(
        self,
        backend_source: Annotated[dagger.Directory, Doc("Backend source directory")],
        frontend_source: Annotated[dagger.Directory, Doc("Frontend source directory")],
    ) -> str:
        """Run linting on all source code."""
        backend_result = await dag.acme_backend().lint(source=backend_source)
        frontend_result = await dag.acme_frontend().lint(source=frontend_source)
        return f"Backend:\n{backend_result}\n\nFrontend:\n{frontend_result}"

    @function
    async def deploy(
        self,
        backend_source: Annotated[dagger.Directory, Doc("Backend source directory")],
        frontend_source: Annotated[dagger.Directory, Doc("Frontend source directory")],
        project_id: Annotated[str, Doc("GCP project ID to deploy to")],
        oidc_request_token: Annotated[dagger.Secret | None, Doc("ACTIONS_ID_TOKEN_REQUEST_TOKEN (CI)")] = None,
        oidc_request_url: Annotated[dagger.Secret | None, Doc("ACTIONS_ID_TOKEN_REQUEST_URL (CI)")] = None,
        gcloud_config: Annotated[dagger.Directory | None, Doc("Host gcloud config dir for local auth (~/.config/gcloud)")] = None,
        environment: Annotated[str, Doc("Target environment")] = "staging",
    ) -> str:
        """Test and deploy the full stack.

        1. Tests both apps (fails fast if anything breaks)
        2. Deploys backend to Cloud Run, frontend to Firebase Hosting
           (build, scan, push are handled internally by acme-deploy)

        Authentication: pass gcloud-config for local development, or
        oidc-request-token + oidc-request-url for CI (GitHub Actions).
        """
        # 1. Test first — fail fast
        await self.test(
            backend_source=backend_source,
            frontend_source=frontend_source,
        )

        # 2. Deploy both apps — build, scan, push, and deploy are all internal
        backend_url = await dag.acme_deploy().cloud_run(
            source=backend_source,
            service_name="api",
            team="product",
            project_id=project_id,
            oidc_request_token=oidc_request_token,
            oidc_request_url=oidc_request_url,
            gcloud_config=gcloud_config,
            environment=environment,
        )

        frontend_url = await dag.acme_deploy().firebase(
            source=frontend_source,
            service_name="web",
            team="product",
            project_id=project_id,
            oidc_request_token=oidc_request_token,
            oidc_request_url=oidc_request_url,
            gcloud_config=gcloud_config,
            environment=environment,
        )

        return f"Backend:  {backend_url}\nFrontend: {frontend_url}"

Read this pipeline again. There are zero dag.container() calls. No base image choices. No pip install. No npm ci. No Artifact Registry URIs. No Cloud Run flags. No Firebase channel logic. No Trivy configuration. No branch gating logic. The only project-specific value is the project_id, which the caller provides. Every other operational detail is encapsulated in the three private modules, and every project at AcmeCorp gets the same standards, the same security, the same compliance, without any effort from the project team.

Try It Locally

You can test the pipeline right away on your machine. The test and lint functions work without any cloud credentials:

# Run the test suite on both apps
dagger call test \
  --backend-source=./backend \
  --frontend-source=./frontend

# Run linting on both apps
dagger call lint \
  --backend-source=./backend \
  --frontend-source=./frontend

The deploy function requires GCP credentials. In CI, it uses OIDC tokens from GitHub Actions. For local development, you can pass your host's gcloud config directory instead:

# Deploy from your laptop using local ADC credentials
dagger call deploy \
  --backend-source=./backend \
  --frontend-source=./frontend \
  --project-id=acmecorp-staging \
  --gcloud-config=$HOME/.config/gcloud \
  --environment=staging

# Check that the module loads and all dependencies resolve
dagger functions

GitHub Actions Workflow

The workflow file stays minimal. It just calls dagger:

name: CI/CD

on:
  push:
    branches: [main]
  pull_request:

permissions:
  contents: read
  id-token: write

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - name: Lint
        uses: dagger/dagger-for-github@v8.4.1
        with:
          version: "0.20.3"
          verb: call
          args: lint --backend-source=./backend --frontend-source=./frontend

      - name: Test
        uses: dagger/dagger-for-github@v8.4.1
        with:
          version: "0.20.3"
          verb: call
          args: test --backend-source=./backend --frontend-source=./frontend

  deploy:
    if: github.ref == 'refs/heads/main'
    needs: check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - name: Deploy to staging
        uses: dagger/dagger-for-github@v8.4.1
        with:
          version: "0.20.3"
          verb: call
          args: >
            deploy
            --backend-source=./backend
            --frontend-source=./frontend
            --project-id=acmecorp-staging
            --oidc-request-token=env:ACTIONS_ID_TOKEN_REQUEST_TOKEN
            --oidc-request-url=env:ACTIONS_ID_TOKEN_REQUEST_URL
            --environment=staging

The entire CI configuration calls three Dagger functions: lint, test, and deploy. All the logic (base images, dependency installation, build steps, authentication, compliance, registry push, Cloud Run flags, Firebase channels) lives in typed, testable, version-pinned modules.

The Local Developer Experience

This is where Dagger's CLI really shines. A developer clones dagger-ci-demo, and without reading any documentation, they can discover every available pipeline function, run them individually, and even inspect intermediate build artifacts, all from their terminal.

Discovering what's available:

$ dagger functions
Name    Description
deploy  Test and deploy the full stack.
lint    Run linting on all source code.
test    Run all tests (backend + frontend).

Every function name and description comes directly from the Python docstrings we wrote. The Doc() annotations on parameters become --help text:

$ dagger call deploy --help
Test and deploy the full stack.

  1. Tests both apps (fails fast if anything breaks)
  2. Deploys backend to Cloud Run, frontend to Firebase Hosting
     (build, scan, push are handled internally by acme-deploy)

  Authentication: pass gcloud-config for local development, or
  oidc-request-token + oidc-request-url for CI (GitHub Actions).

USAGE
  dagger call deploy [arguments]

ARGUMENTS
      --backend-source Directory    Backend source directory (required)
      --frontend-source Directory   Frontend source directory (required)
      --project-id string           GCP project ID to deploy to (required)
      --oidc-request-token Secret   ACTIONS_ID_TOKEN_REQUEST_TOKEN (CI)
      --oidc-request-url Secret     ACTIONS_ID_TOKEN_REQUEST_URL (CI)
      --gcloud-config Directory     Host gcloud config dir for local auth (~/.config/gcloud)
      --environment string          Target environment (default "staging")

No wiki page. No Confluence document. No Slack thread asking "how do I deploy again?" The CLI is the documentation, and it's always in sync with the code because it's generated from it.

Running individual functions:

A developer working on the backend doesn't need to run the full pipeline. Since acme-backend, acme-frontend, and acme-deploy are already installed as dependencies in the project's dagger.json, they can call any dependency's functions by name:

# Run backend tests only (calling the project's own pipeline function)
dagger call test --backend-source=./backend --frontend-source=./frontend

# Lint just the backend (calling the installed dependency directly)
dagger call -m acme-backend lint --source=./backend

# Generate an SBOM for the backend
dagger call -m acme-backend sbom --source=./backend -o ./sbom.json

# Run the frontend dependency audit
dagger call -m acme-frontend audit --source=./frontend

The -m flag selects which module to call. When you use it with a dependency name (acme-backend), Dagger resolves it from your dagger.json, so there's no need for the full Git path. You can also use -m with a full path to call modules that aren't installed, which is useful for trying out a new daggerverse module before adding it as a dependency.

Inspecting build artifacts with terminal:

This is one of Dagger's most powerful features for local development. Any function that returns a Container can be chained with terminal to drop into an interactive shell inside the built container:

$ dagger call -m acme-backend build --source=./backend terminal
● Attaching terminal:
container: Container!
.from(address: "docker.io/library/python:3.13-slim@sha256:739e7213..."): Container!
withWorkdir /app
.withDirectory(
    ┆ ┆ path: "/app"
    ┆ ┆ source: Host.directory(path: "./backend"): Directory!
    ┆ ): Container!
.withMountedCache(
    ┆ ┆ path: "/root/.cache/pip"
    ┆ ┆ cache: cacheVolume(key: "acme-pip", ...): CacheVolume!
    ┆ ): Container!
withExec pip install -r requirements.txt
withEnvVariable PORT=8080
.withExposedPort(port: 8080): Container!
.withLabel(name: "org.opencontainers.image.vendor", value: "AcmeCorp"): Container!
.withEntrypoint(args: ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8080"]): Container!

dagger /app $

This means a developer can build the exact same container that will ship to Cloud Run, then poke around inside it: check installed packages, verify file layout, test imports, inspect environment variables. The container is identical whether you're on your laptop or in CI, because Dagger builds are hermetic.

The same trick works with any Container return type in the chain, even transitive dependencies you haven't installed directly. For example, to inspect the authenticated gcloud container that acme-deploy uses under the hood:

# Call a transitive dependency by its full Git path
$ dagger call -m github.com/telchak/daggerverse/gcp-auth@gcp-auth/v0.2.1 \
    gcloud-container \
    --credentials=file:./my-service-account-key.json \
    --project-id=acmecorp-staging \
    terminal

root@def456:/# gcloud auth list
         Credentialed Accounts
ACTIVE   ACCOUNT
*        ci-deployer@acmecorp-staging.iam.gserviceaccount.com
root@def456:/# gcloud config get-value project
acmecorp-staging

Chaining functions from the CLI:

Any Container or Directory return type can be chained further. This means you can compose operations from the CLI the same way you compose them in code:

# Build the backend, then run a custom command inside the result
dagger call -m acme-backend \
  build --source=./backend \
  with-exec --args="python","-c","import sys; print(sys.version)" \
  stdout

# Build the frontend dist, then list its contents
dagger call -m acme-frontend \
  build --source=./frontend \
  entries

Why this matters:

Every function the platform team writes (build, test, lint, sbom, audit, deploy) is immediately available as a CLI command with full documentation, tab completion, and composability. A developer who has never seen the codebase can run dagger functions, understand what's available, run dagger call test --help, understand what parameters are needed, and execute the pipeline, all without leaving the terminal. The same commands work identically in CI. There is no "it works on my machine" gap.


Testing Modules

Modules are code, and code should be tested. Each module repository should include integration tests that validate the public API:

"""Integration tests for AcmeCorp modules."""

import dagger
from dagger import dag, function, object_type


@object_type
class Tests:
    """Test suite for AcmeCorp private modules.

    Validates naming conventions, region whitelists, production branch
    gating, and module composition without hitting real GCP services.
    """

    @function
    async def test_region_whitelist_rejects_invalid(self) -> str:
        """Verify that regions outside the whitelist are rejected."""
        try:
            await dag.acme_deploy().cloud_run(
                source=dag.directory().with_new_file("requirements.txt", "fastapi\n"),
                service_name="test",
                team="platform",
                project_id="acmecorp-staging",
                region="asia-east1",  # Not in whitelist
            )
            return "[FAIL] Should have rejected region"
        except dagger.ExecError:
            return "[OK] Invalid region rejected"

    @function
    async def test_invalid_environment_rejected(self) -> str:
        """Verify that unknown environments are rejected."""
        try:
            await dag.acme_deploy().cloud_run(
                source=dag.directory().with_new_file("requirements.txt", "fastapi\n"),
                service_name="test",
                team="platform",
                project_id="acmecorp-staging",
                environment="development",  # Not staging/production
            )
            return "[FAIL] Should have rejected environment"
        except dagger.ExecError:
            return "[OK] Invalid environment rejected"

    @function
    async def test_production_deploy_requires_main_branch(self) -> str:
        """Verify that production deploys are blocked from non-main branches."""
        try:
            await dag.acme_deploy().cloud_run(
                source=dag.directory().with_new_file("requirements.txt", "fastapi\n"),
                service_name="test",
                team="platform",
                project_id="acmecorp-prod",
                environment="production",
                git_branch="feature/my-branch",
            )
            return "[FAIL] Should have blocked non-main production deploy"
        except dagger.ExecError:
            return "[OK] Production deploy blocked from non-main branch"

    @function
    async def test_backend_build_returns_container(self) -> str:
        """Verify that acme-backend build produces a container with the expected port."""
        source = (
            dag.directory()
            .with_new_file("requirements.txt", "fastapi\nuvicorn\n")
            .with_new_file("src/__init__.py", "")
            .with_new_file("src/main.py", (
                "from fastapi import FastAPI\n"
                "app = FastAPI()\n"
                "@app.get('/health')\n"
                "def health(): return {'status': 'ok'}\n"
            ))
        )
        container = dag.acme_backend().build(source=source)
        ports = await container.exposed_ports()
        port_numbers = [await p.port() for p in ports]
        if 8080 not in port_numbers:
            return f"[FAIL] Expected port 8080, got {port_numbers}"
        return "[OK] Backend build produces container with port 8080"

# Run tests locally
dagger call -m tests/ test-region-whitelist-rejects-invalid
dagger call -m tests/ test-production-deploy-requires-main-branch
dagger call -m tests/ test-backend-build-returns-container

For example, running the region whitelist test shows exactly how the module enforces compliance at the function level:

$ dagger call -m tests/ test-region-whitelist-rejects-invalid

✔ tests: Tests! 3.4s
✘ .testRegionWhitelistRejectsInvalid: String! 27.3s ERROR
✘ AcmeDeploy.cloudRun(
  ┆ source: Directory.withNewFile(path: "requirements.txt"): Directory!
  ┆ serviceName: "test"
  ┆ team: "platform"
  ┆ projectId: "acmecorp-staging"
  ┆ region: "asia-east1"
  ): String! 11.7s ERROR

ValueError: Region asia-east1 not allowed. Must be one of: ['europe-west1', 'us-central1']

The test deliberately passes a forbidden region (asia-east1) and verifies the module rejects it with a clear error. The branch gating test works the same way:

$ dagger call -m tests/ test-production-deploy-requires-main-branch

✔ tests: Tests! 0.0s
✘ .testProductionDeployRequiresMainBranch: String! 23.2s ERROR
✘ AcmeDeploy.cloudRun(
  ┆ source: Directory.withNewFile(path: "requirements.txt"): Directory!
  ┆ serviceName: "test"
  ┆ team: "platform"
  ┆ projectId: "acmecorp-prod"
  ┆ environment: "production"
  ┆ gitBranch: "feature/my-branch"
  ): String! 11.9s ERROR

ValueError: Production deployment forbidden from branch 'feature/my-branch'.
  Only the 'main' branch can deploy to production.

This is policy-as-code: region restrictions, naming conventions, and branch gating are all enforced by the module itself, not by documentation or code review.


Versioning Strategy

For a monorepo with multiple modules, use per-module Git tags:

telchak/acme-dagger-modules/
├── acme-backend/           → git tag: acme-backend/v1.0.0
├── acme-frontend/          → git tag: acme-frontend/v1.0.0
├── acme-deploy/            → git tag: acme-deploy/v1.2.0
└── tests/

Automate with conventional commits:

# "feat(acme-deploy): add Firebase Hosting support" → acme-deploy/v1.3.0 (minor)
# "fix(acme-deploy): handle empty labels"           → acme-deploy/v1.2.1 (patch)

Consuming teams pin to a version and upgrade on their own schedule:

{ "name": "acme-deploy", "source": "github.com/telchak/acme-dagger-modules/acme-deploy@v1.0.0" }

When the platform team releases v1.3.0, teams can upgrade by bumping the version in dagger.json. The typed API ensures that breaking changes are caught at development time, not in production.


Toolchains: Zero-Code Consumption

So far, every project at AcmeCorp writes a .dagger/ module: a few dozen lines of Python that orchestrate the private modules. That's already much simpler than raw CI scripts, but Dagger recently introduced a feature that eliminates even that: toolchains.

A toolchain is a Dagger module installed directly into your project's dagger.json with no SDK code required. You install it, and its functions become available via dagger call and dagger check immediately.

Official toolchains

Dagger already maintains a growing set of official toolchains for popular tools and frameworks, ready to install with zero configuration:

- pytest: Python test framework (dagger toolchain install github.com/dagger/pytest)
- jest: JavaScript testing with Jest (dagger toolchain install github.com/dagger/jest)
- vitest: test framework for Vite.js (dagger toolchain install github.com/dagger/vitest)
- eslint: JavaScript static analysis (dagger toolchain install github.com/dagger/eslint)
- prettier: multi-language code formatter (dagger toolchain install github.com/dagger/prettier)
- biomejs: web application formatter with Biome (dagger toolchain install github.com/dagger/biomejs)
- mochajs: JavaScript testing with Mocha (dagger toolchain install github.com/dagger/mochajs)
- bun: Bun JavaScript runtime (dagger toolchain install github.com/dagger/bun)

These official toolchains follow the single-tool-per-module philosophy: each integrates deeply with one specific tool, provides sensible defaults, and exposes checks that run via dagger check. You can mix and match them freely: install pytest for your backend and eslint + prettier for your frontend, and dagger check runs all of them.

But the real power of toolchains is that you can build your own, which is exactly what AcmeCorp's private modules become when you add @check and DefaultPath.

Making modules toolchain-ready

The key ingredients are the @check decorator and DefaultPath("."). Adding @check to an existing function makes it discoverable via dagger check. Adding DefaultPath(".") to the source parameter makes it optional. Dagger automatically passes the project's source directory when no --source flag is provided. Here's what it looks like in acme-backend:

from typing import Annotated

import dagger
from dagger import DefaultPath, Doc, check, dag, function, object_type

@object_type
class AcmeBackend:
    # ... build(), sbom() as before ...

    @function
    @check
    async def lint(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Python backend source directory"),
            DefaultPath("."),
        ],
    ) -> str:
        """Run linting (ruff) on the source code."""
        ...

    @function
    @check
    async def test(
        self,
        source: Annotated[
            dagger.Directory,
            Doc("Python backend source directory"),
            DefaultPath("."),
        ],
        coverage: Annotated[bool, Doc("Enforce minimum coverage threshold")] = True,
    ) -> str:
        """Run the test suite with AcmeCorp conventions."""
        ...

No separate wrapper functions needed. @check goes directly on the existing lint and test functions. They keep their str return type and remain callable via dagger call as before, but they're now also registered as checks.

acme-frontend follows the same pattern: lint, test, and audit all get @check + DefaultPath(".").

The zero-code project setup

Our dagger-ci-demo pipeline is powerful, but it still required writing Python code. What if a team just wants standard CI, with no custom pipeline logic at all?

Let's try it. In our dagger-ci-demo repository, we can strip everything back: remove the .dagger/ directory, remove the SDK, remove the dependencies, and instead use dagger toolchain install to add AcmeCorp modules directly as toolchains:

# Remove the existing module code
rm -rf .dagger/

# Reset dagger.json to a minimal config (no SDK, no dependencies)
echo '{"name": "dagger-ci-demo"}' > dagger.json

# Install AcmeCorp modules as toolchains
dagger toolchain install github.com/telchak/acme-dagger-modules/acme-backend@acme-backend/v1.0.0
dagger toolchain install github.com/telchak/acme-dagger-modules/acme-frontend@acme-frontend/v1.0.0
dagger toolchain install github.com/telchak/acme-dagger-modules/acme-deploy@acme-deploy/v1.0.0

This produces a dagger.json with no SDK, no Python, no .dagger/ directory:

{
  "name": "dagger-ci-demo",
  "engineVersion": "v0.20.3",
  "toolchains": [
    {
      "name": "acme-backend",
      "source": "github.com/telchak/acme-dagger-modules/acme-backend@acme-backend/v1.0.0",
      "pin": "2fc60326472c2141e7d6cc6cdfb3382f2d38b303"
    },
    {
      "name": "acme-frontend",
      "source": "github.com/telchak/acme-dagger-modules/acme-frontend@acme-frontend/v1.0.0",
      "pin": "2fc60326472c2141e7d6cc6cdfb3382f2d38b303"
    },
    {
      "name": "acme-deploy",
      "source": "github.com/telchak/acme-dagger-modules/acme-deploy@acme-deploy/v1.0.0",
      "pin": "f6d4de715efc78610f43900334eabd667d918b84"
    }
  ]
}

That's it. One file, no code. Now they can run:

# Call any function directly
$ dagger call acme-backend lint --source=./backend
$ dagger call acme-backend test --source=./backend
$ dagger call acme-backend build --source=./backend terminal

No Python. No .dagger/ directory. No pipeline code to maintain. The team gets AcmeCorp-compliant linting, testing, and building from a single JSON file.

Customizing toolchain defaults

Toolchains work out of the box, but you can override any argument's default value directly in dagger.json, without touching code. The customizations array lets you scope overrides to specific functions using the function field.

For example, say one team's backend doesn't follow the default coverage threshold and they want to target a specific function's argument:

{
  "name": "simple-api",
  "toolchains": [
    {
      "name": "acme-backend",
      "source": "github.com/telchak/acme-dagger-modules/acme-backend@v1.0.0",
      "customizations": [
        {
          "function": ["test"],
          "argument": "coverage",
          "default": "false"
        }
      ]
    }
  ]
}

The function field scopes the override. Here, only the test function's coverage argument is changed. Without a function field, the override targets the module's constructor arguments instead.

This also works for directory paths. When a module uses DefaultPath(".") to pick up your source automatically, you can redirect it to a subdirectory. Use the function field to scope the override to specific functions:

{
  "name": "monorepo-api",
  "toolchains": [
    {
      "name": "acme-backend",
      "source": "github.com/telchak/acme-dagger-modules/acme-backend@v1.0.0",
      "customizations": [
        {
          "function": ["test"],
          "argument": "source",
          "defaultPath": "/services/api"
        },
        {
          "function": ["lint"],
          "argument": "source",
          "defaultPath": "/services/api"
        }
      ]
    }
  ]
}

Each customization targets a specific function via the function field. Without it, the override applies to the module's constructor — not to individual functions. Here, both test and lint checks will automatically operate on ./services/api instead of the project root. No --source flag needed.
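Conceptually, the scoping rules work like this. The sketch below is an illustration of the resolution logic described above, not Dagger's actual implementation; the function names are hypothetical:

```python
# Conceptual model of how a toolchain "customizations" entry resolves an
# argument's default. A "function" field scopes the entry to those functions;
# an entry without one targets the module constructor (function_name=None).
def resolve_default(customizations, argument, function_name=None, builtin=None):
    for entry in customizations:
        scope = entry.get("function")  # e.g. ["test"], or absent for constructor
        in_scope = (scope is None and function_name is None) or (
            scope is not None and function_name in scope
        )
        if in_scope and entry.get("argument") == argument:
            # "default" overrides plain values; "defaultPath" overrides directories
            return entry.get("default", entry.get("defaultPath"))
    return builtin

customs = [{"function": ["test"], "argument": "coverage", "default": "false"}]
print(resolve_default(customs, "coverage", function_name="test"))  # false
print(resolve_default(customs, "coverage", function_name="lint", builtin="true"))  # true
```

The key point: an override scoped with `"function": ["test"]` never leaks into `lint`, and an unscoped entry never touches individual functions.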

You can also filter out checks you're not ready for using ignoreChecks with glob patterns:

{
  "name": "simple-api",
  "toolchains": [
    {
      "name": "acme-backend",
      "source": "github.com/telchak/acme-dagger-modules/acme-backend@v1.0.0",
      "ignoreChecks": [
        "test"
      ]
    }
  ]
}

This is useful for gradual adoption. A team can start with just linting and add test enforcement later. Run dagger check -l to see which checks are active after filtering.

Full-stack toolchains

Let's go back to our dagger-ci-demo repository and set it up as a full-stack toolchain project. Since our backend lives in ./backend and our frontend in ./frontend, we need to tell each toolchain's check functions where to find their source:

{
  "name": "dagger-ci-demo",
  "engineVersion": "v0.20.3",
  "toolchains": [
    {
      "name": "acme-backend",
      "source": "github.com/telchak/acme-dagger-modules/acme-backend@acme-backend/v1.0.0",
      "pin": "2fc60326472c2141e7d6cc6cdfb3382f2d38b303",
      "customizations": [
        {
          "function": ["test"],
          "argument": "source",
          "defaultPath": "/backend"
        },
        {
          "function": ["lint"],
          "argument": "source",
          "defaultPath": "/backend"
        }
      ]
    },
    {
      "name": "acme-frontend",
      "source": "github.com/telchak/acme-dagger-modules/acme-frontend@acme-frontend/v1.0.0",
      "pin": "2fc60326472c2141e7d6cc6cdfb3382f2d38b303",
      "customizations": [
        {
          "function": ["test"],
          "argument": "source",
          "defaultPath": "/frontend"
        },
        {
          "function": ["lint"],
          "argument": "source",
          "defaultPath": "/frontend"
        },
        {
          "function": ["audit"],
          "argument": "source",
          "defaultPath": "/frontend"
        }
      ]
    },
    {
      "name": "acme-deploy",
      "source": "github.com/telchak/acme-dagger-modules/acme-deploy@acme-deploy/v1.0.0",
      "pin": "f6d4de715efc78610f43900334eabd667d918b84",
      "customizations": [
        {
          "function": ["scan"],
          "argument": "source",
          "defaultPath": "/backend"
        }
      ]
    }
  ]
}

Each customizations entry uses the function field to target a specific check function. This tells acme-backend:test and acme-backend:lint to use ./backend; acme-frontend:test, acme-frontend:lint, and acme-frontend:audit to use ./frontend; and acme-deploy:scan to build and scan the backend container from ./backend. Now dagger check runs all checks from all three toolchains with the right source directories:

$ dagger check
✔ acme-backend:lint    (12.9s)  OK
✔ acme-backend:test    (15.2s)  OK
✔ acme-deploy:scan     (18.4s)  OK
✔ acme-frontend:lint   (58.0s)  OK
✔ acme-frontend:test   (62.0s)  OK
✔ acme-frontend:audit  (25.6s)  OK

When a check fails, Dagger surfaces the full error inline — no need to dig through logs. Here's what it looks like when acme-frontend:audit catches a real vulnerability in a transitive dependency:

$ dagger check
✔ acme-backend:lint    (0.1s)   OK
✔ acme-backend:test    (0.1s)   OK
✘ acme-frontend:audit  (26.2s)  ERROR
┇ .audit(source: context /tmp/dagger-ci-demo/frontend (exclude: [])
  ) ›
✘ withExec npm audit --audit-level=moderate  (2.3s)  ERROR
# npm audit report

undici  7.0.0 - 7.23.0
Severity: high
Undici has an HTTP Request/Response Smuggling issue
  - https://github.com/advisories/GHSA-2mjp-6q6p-2qxm
fix available via `npm audit fix --force`
Will install @angular/build@20.3.21, which is a breaking change
node_modules/undici
  @angular/build  21.0.0-next.0 - 21.2.3
  Depends on vulnerable versions of undici

2 high severity vulnerabilities
! exit code: 1

✔ acme-frontend:lint   (4.4s)   OK
✔ acme-frontend:test   (4.4s)   OK

Notice how the failing check doesn't block the others — all six run in parallel, and Dagger reports the full audit output so the team knows exactly what to fix. This is the value of @check: the module author defined the security audit once, and every project that installs the toolchain gets it automatically.

Six checks, three toolchains, zero lines of code. The CI workflow becomes even simpler:

# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: dagger/dagger-for-github@v8.4.1
        with:
          version: "0.20.3"
          verb: check

One step. One command. Every AcmeCorp standard enforced.

When to use toolchains vs. pipeline code

Both consumption models are valid. The right choice depends on how much customization the team needs:

|  | Toolchains | Pipeline code (.dagger/) |
| --- | --- | --- |
| Setup | dagger.json only, no SDK code | .dagger/ module with Python/Go/TS |
| Best for | Standard projects that follow org patterns | Custom orchestration, conditional logic |
| Checks | Automatic via dagger check | Define your own @check functions |
| Customization | customizations in JSON | Full SDK, compose modules in code |
| Deployment | Install acme-deploy as toolchain | Call dag.acme_deploy() in pipeline |

Most teams start with toolchains. When they need custom orchestration ("run tests in parallel, then deploy backend before frontend"), they graduate to a .dagger/ module that imports the private modules. The same modules power both paths.
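The "run tests in parallel, then deploy in order" shape is easy to see with plain asyncio. This is a shape-only sketch: the stub coroutines stand in for Dagger module calls like dag.acme_backend().test(...) that a real .dagger/ module would await instead:

```python
import asyncio

# Stand-ins for Dagger module calls (assumptions for illustration only).
async def backend_test() -> str:
    return "backend tests OK"

async def frontend_test() -> str:
    return "frontend tests OK"

async def deploy(target: str) -> str:
    return f"deployed {target}"

async def ci() -> list[str]:
    # 1. Both test suites run concurrently.
    results = list(await asyncio.gather(backend_test(), frontend_test()))
    # 2. Deploys run sequentially: backend before frontend.
    results.append(await deploy("backend"))
    results.append(await deploy("frontend"))
    return results

if __name__ == "__main__":
    print(asyncio.run(ci()))
```

This ordering guarantee (tests fan out, deploys serialize) is exactly what toolchains alone cannot express, and why a custom pipeline module earns its keep.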

Deploying with acme-deploy as a toolchain

Vulnerability scanning shouldn't wait until deploy time. In acme-deploy, we've promoted the Trivy scan from a private helper to a public @check function called scan. It builds the container from source using acme-backend, then runs Trivy against it:

@function
@check
async def scan(
    self,
    source: Annotated[
        dagger.Directory,
        Doc("Backend source directory"),
        DefaultPath("."),
    ],
    port: Annotated[int, Doc("Application port")] = 8080,
) -> str:
    """Scan the built container for HIGH and CRITICAL CVEs."""
    container = dag.acme_backend().build(source=source, port=port)
    return await dag.trivy(version=TRIVY_VERSION).container(container).output(format="table")

Install it as a third toolchain alongside acme-backend and acme-frontend:

dagger toolchain install github.com/telchak/acme-dagger-modules/acme-deploy@acme-deploy/v1.0.0

With the scan function's source customized to point at /backend:

{
  "name": "acme-deploy",
  "source": "github.com/telchak/acme-dagger-modules/acme-deploy@acme-deploy/v1.0.0",
  "customizations": [
    {
      "function": ["scan"],
      "argument": "source",
      "defaultPath": "/backend"
    }
  ]
}

Now dagger check catches CVEs alongside linting and tests:

$ dagger check
✔ acme-backend:lint    OK
✔ acme-backend:test    OK
✔ acme-frontend:lint   OK
✔ acme-frontend:test   OK
✔ acme-frontend:audit  OK
✔ acme-deploy:scan     OK    # ← vulnerability scanning before any deploy

The deployment functions (cloud_run, firebase) stay as regular functions — they're not checks, because they mutate external state. Deployments happen intentionally, with full orchestration, not as part of dagger check.

Implementing the deployment step in CI

To actually deploy, you need a CI workflow that calls acme-deploy's cloud_run and firebase functions after checks pass. This requires some GCP setup first.

Step 1: Set up Workload Identity Federation on GCP

Workload Identity Federation lets GitHub Actions authenticate to GCP without storing service account keys. Create a Workload Identity Pool and Provider tied to your GitHub repository:

# Create the Workload Identity Pool
gcloud iam workload-identity-pools create "github" \
  --location="global" \
  --display-name="GitHub Actions"

# Create the OIDC Provider for GitHub
gcloud iam workload-identity-pools providers create-oidc "github-actions" \
  --location="global" \
  --workload-identity-pool="github" \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
  --attribute-condition="assertion.repository_owner == 'your-github-org'"

# Create the CI service account
gcloud iam service-accounts create "ci-deployer" \
  --display-name="CI Deployer" \
  --project="acmecorp-staging"

# Grant Cloud Run permissions (deploy services, push to Artifact Registry)
gcloud projects add-iam-policy-binding "acmecorp-staging" \
  --member="serviceAccount:ci-deployer@acmecorp-staging.iam.gserviceaccount.com" \
  --role="roles/run.admin"

gcloud projects add-iam-policy-binding "acmecorp-staging" \
  --member="serviceAccount:ci-deployer@acmecorp-staging.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"

# Grant Firebase Hosting permissions
gcloud projects add-iam-policy-binding "acmecorp-staging" \
  --member="serviceAccount:ci-deployer@acmecorp-staging.iam.gserviceaccount.com" \
  --role="roles/firebase.admin"

# Allow ci-deployer to act as the Cloud Run runtime service account when deploying
gcloud projects add-iam-policy-binding "acmecorp-staging" \
  --member="serviceAccount:ci-deployer@acmecorp-staging.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"

# Link the service account to the Workload Identity Pool
# This allows GitHub Actions from your repo to impersonate ci-deployer
gcloud iam service-accounts add-iam-policy-binding \
  "ci-deployer@acmecorp-staging.iam.gserviceaccount.com" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/123456/locations/global/workloadIdentityPools/github/attribute.repository/your-github-org/dagger-ci-demo"

The last command is the critical link: it tells GCP that GitHub Actions workflows running in the your-github-org/dagger-ci-demo repository are allowed to impersonate the ci-deployer service account. Without it, the OIDC token exchange will fail with a permission denied error. The attribute.repository condition ensures that only workflows from your specific repository can authenticate — other repositories in the same GitHub organization cannot.

Step 2: No secrets required

No GCP credentials or identifiers to store in GitHub. The WIF provider, service account, and project mappings are all encoded in the acme-deploy module itself — that's the whole point of centralizing deployment logic. The only values the CI workflow passes are the GitHub Actions OIDC environment variables (ACTIONS_ID_TOKEN_REQUEST_TOKEN and ACTIONS_ID_TOKEN_REQUEST_URL), which are automatically available when the workflow has id-token: write permission.

Step 3: CI workflow with checks + deployment

# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

permissions:
  contents: read
  id-token: write  # Required for WIF OIDC token

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: dagger/dagger-for-github@v8.4.1
        with:
          version: "0.20.3"
          verb: check

  deploy:
    needs: check
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: dagger/dagger-for-github@v8.4.1
        with:
          version: "0.20.3"
          verb: call
          args: >-
            acme-deploy cloud-run
            --source=./backend
            --service-name=api
            --team=backend
            --project-id=acmecorp-staging
            --oidc-request-token=env:ACTIONS_ID_TOKEN_REQUEST_TOKEN
            --oidc-request-url=env:ACTIONS_ID_TOKEN_REQUEST_URL
            --environment=staging
            --region=europe-west1
            --git-branch=${{ github.ref_name }}
            --git-sha=${{ github.sha }}

The check job runs all toolchain checks (lint, test, audit, scan). The deploy job only runs on pushes to main, after checks pass. No google-github-actions/auth step needed — the acme-deploy module handles the OIDC token exchange internally via the gcp-auth Dagger module. GitHub Actions automatically exposes ACTIONS_ID_TOKEN_REQUEST_TOKEN and ACTIONS_ID_TOKEN_REQUEST_URL when the workflow has id-token: write permission. The gcp-auth module uses the oidc-token module to exchange these for a GCP-compatible OIDC token, then authenticates via Workload Identity Federation. Authentication stays inside the pipeline, not in the CI glue.

Trying it locally

For local deployment testing, pass your host's gcloud config directory instead of the OIDC tokens. The acme-deploy module detects which authentication method to use based on the arguments you provide:
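Conceptually, that selection looks like this. The sketch below is illustrative; the field and function names are assumptions, not acme-deploy's real API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical mirror of acme-deploy's auth-related arguments.
@dataclass
class DeployArgs:
    oidc_request_token: Optional[str] = None
    oidc_request_url: Optional[str] = None
    gcloud_config: Optional[str] = None  # path to the host's ~/.config/gcloud

def select_auth(args: DeployArgs) -> str:
    """Pick an auth strategy based on which arguments were provided."""
    if args.oidc_request_token and args.oidc_request_url:
        # CI path: exchange the GitHub OIDC token via Workload Identity Federation
        return "oidc"
    if args.gcloud_config:
        # Local path: mount the host's gcloud Application Default Credentials
        return "gcloud-config"
    raise ValueError("pass either the OIDC token env vars or --gcloud-config")
```

In CI you pass the two OIDC values; locally you pass --gcloud-config and nothing else.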

Artifact Registry setup: The cloud-run function pushes container images to Google Artifact Registry in the europe-west1 region by default. The target repository is named acme-{team} (e.g. acme-backend for --team=backend), but you can override it with the --repository flag. If the repository doesn't exist yet, create it first:

gcloud artifacts repositories create acme-backend \
  --repository-format=docker \
  --location=europe-west1 \
  --project=acmecorp-production \
  --description="Docker images for the backend team"

This creates a Docker-format repository in your GCP project. You only need to do this once per team/repository.

Cloud Run access for Firebase Hosting: In this setup, the frontend on Firebase Hosting communicates with the backend on Cloud Run through a Firebase Hosting rewrite. The rewrite proxies /api/** requests to your Cloud Run service, so the frontend uses relative URLs and never exposes the backend's address. However, Cloud Run services are private by default — they require IAM authentication for every request, including those from Firebase Hosting. Rather than granting public access (which may be blocked by your organization's IAM policies), you can disable the IAM invoker check using the --disable-invoker-iam-check flag on the cloud-run function. In a production setup, this would typically be handled by your infrastructure-as-code (Terraform, Pulumi, etc.) alongside network-level controls like VPC ingress rules.
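The /api/** rewrite described above is expressed in firebase.json roughly like this (a hedged example matching this article's service names; the config acme-deploy actually generates may differ):

```json
{
  "hosting": {
    "public": "dist",
    "rewrites": [
      {
        "source": "/api/**",
        "run": {
          "serviceId": "api",
          "region": "europe-west1"
        }
      }
    ]
  }
}
```

With this in place, the frontend calls /api/... on its own origin and Firebase Hosting proxies the request to the Cloud Run service.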

# Deploy backend to Cloud Run
dagger call acme-deploy cloud-run \
  --source=./backend \
  --service-name=api \
  --team=backend \
  --project-id=acmecorp-production \
  --gcloud-config=$HOME/.config/gcloud \
  --environment=production \
  --git-branch=main \
  --disable-invoker-iam-check

# Deploy frontend to Firebase Hosting
dagger call acme-deploy firebase \
  --source=./frontend \
  --service-name=web \
  --team=backend \
  --project-id=acmecorp-production \
  --gcloud-config=$HOME/.config/gcloud \
  --environment=production \
  --git-branch=main

Under the hood, acme-deploy calls gcp-auth's gcloud-container-from-host function, which mounts your local Application Default Credentials into the pipeline container. No service account keys needed — just run gcloud auth application-default login once on your machine. The checks (dagger check) don't require any GCP authentication at all.


Module Visibility with Dagger Cloud

Dagger Cloud provides a dashboard that automatically tracks every module used across your organization. When you enable module scanning on your GitHub repositories, Dagger Cloud indexes them and shows:

  • Which modules exist, their API documentation, and versions
  • Which pipelines depend on which modules
  • Pipeline traces linked to module function calls

This gives the platform team visibility into module adoption without any manual tracking, which is useful for knowing when it's safe to deprecate an old version or to identify teams that haven't upgraded yet.


Summary: The Module Architecture


Image generated with Google's Gemini "Nano Banana Pro"

                     ┌────────────────────────────────────┐
                     │          Daggerverse               │
                     │   gcp-auth, gcp-cloud-run,         │
                     │   gcp-firebase, python-build,      │
                     │   angular, trivy, ...              │
                     └──────────────┬─────────────────────┘
                                    │
                                    ▼
                     ┌────────────────────────────────────┐
                     │     Private Modules Repo           │
                     │   acme-backend, acme-frontend,     │
                     │   acme-deploy                      │
                     │                                    │
                     │   Encodes: security, compliance,   │
                     │   naming, defaults, audit,         │
                     │   vulnerability scanning           │
                     └──────────────┬─────────────────────┘
                                    │
                  ┌─────────────────┼─────────────────┐
                  ▼                 ▼                  ▼
          ┌──────────────┐  ┌──────────────┐  ┌──────────────┐
          │  Service A   │  │  Service B   │  │  Service C   │
          │              │  │              │  │              │
          │  .dagger/    │  │  dagger.json │  │  dagger.json │
          │  main.py     │  │  only        │  │  only        │
          │              │  │              │  │              │
          │  Custom      │  │  Toolchains  │  │  Toolchains  │
          │  pipeline    │  │  + checks    │  │  + checks    │
          │  code        │  │              │  │              │
          │  (SDK)       │  │  Zero code   │  │  Zero code   │
          └──────────────┘  └──────────────┘  └──────────────┘

           Full control:        Standard projects:
           compose modules      install as toolchains,
           in Python/Go/TS,     customize via JSON,
           custom orchestration  dagger check runs all

Two consumption paths, same modules. Teams that need custom orchestration (conditional deploys, parallel builds, multi-step workflows) write a .dagger/ pipeline module that imports the private modules via SDK code. Teams that follow the standard pattern install them as toolchains in dagger.json and get lint, test, and audit checks with zero code. Both paths enforce the same security, compliance, and naming rules.

The pattern scales: public modules handle the generic complexity (GCP auth, Trivy scanning, language builds), private modules enforce your rules (naming, branch gating, vulnerability gates, resource defaults), and project pipelines stay simple, whether they're written in Python or declared in JSON. When something changes (a new compliance requirement, a GCP API deprecation, a CVE severity policy update), you update one module and every pipeline benefits on their next version bump.


What's Next

In Part 4, we'll add AI to the mix by exploring Dagger's agentic capabilities. The modules we built today become the building blocks of a deterministic pipeline that an AI agent (Daggie) generates for you. And when that pipeline fails, coding agents (Monty for Python, Angie for Angular) analyze the error and post fix suggestions directly on the PR.

The pipeline stays fixed, deterministic, and fast. The AI writes it, reviews it, and fixes it. We'll also sketch how a spec-driven, agentic development platform could be built on top of these foundations.


The full source code for AcmeCorp's private modules is available at github.com/telchak/acme-dagger-modules. Clone it as a starting point for your own private module library.

Up Next: Part 4: The AI-Native CI/CD Stack: Agents, Modules, and Spec-Driven Development


This is Part 3 of a 4-part series. Follow for updates.

Tags: #cicd #dagger #gcp #platform-engineering #modules
