DevOps Start

Posted on • Originally published at devopsstart.com

Supply Chain Security Proxy: Move Beyond Vulnerability Scanning

Learn why relying solely on CVE scanning is a reactive strategy and how to implement a security proxy to proactively secure your software supply chain.

Vulnerability scanning is a reactive failure state, not a security strategy.

Most organizations treat Software Composition Analysis (SCA) as their primary defense against supply chain attacks. They plug in a scanner, wait for it to find a known CVE, and then assign a Jira ticket to a developer to update a library. This approach assumes that the vulnerability is already known and indexed in a database. It ignores the window of time between a malicious package upload and its discovery, and it does nothing to prevent zero-day supply chain attacks like dependency confusion or typosquatting.

If you rely solely on scanners, you are documenting how you were breached rather than preventing the attack. To secure the perimeter, you must implement a supply chain security proxy that controls the ingress of every byte of third-party code before it touches your build server.

The Detection Gap

Reliance on scanning creates a dangerous detection gap. When a malicious actor uploads a package to npm or PyPI that mimics a popular library (typosquatting), there is often a lag of several hours or even days before scanners flag that specific version. In a modern CI/CD pipeline, that package is pulled, built, and deployed to production in minutes. Your secrets are exfiltrated before the scanner alerts you.

Consider dependency confusion. An attacker discovers the name of an internal corporate package, such as corp-auth-lib. They upload a malicious package with the same name but a higher version number to the public npm registry. Without a security proxy, the build agent sees the higher version on the public registry and pulls it instead of the internal one. A scanner won't stop this because the package isn't vulnerable in the CVE sense; it is performing exactly as the attacker intended.

I have seen this play out in environments with over 500 microservices where the scan-and-fix treadmill became a full-time job for three engineers. They spent 40 hours a week chasing low-severity CVEs while the actual architectural hole (direct internet access for build agents) remained open.

By shifting focus from detecting a fire to controlling who enters the building, you eliminate entire classes of attacks. A security proxy acts as a mandatory checkpoint. If a package isn't on the allow list or fails a provenance check, it never enters the environment. This is the difference between a smoke detector and a locked door.

For those managing complex pipelines, this shift is similar to how you might secure Terraform PRs with an architecture firewall to prevent configuration drift. Instead of checking if the infrastructure is broken after the apply, you validate the intent before execution.

Balancing Velocity and Governance

The most common pushback from developers is that a security proxy kills velocity. The "Request a Package" workflow is often viewed as a bureaucratic nightmare. Developers argue that forcing every dependency through a manual approval process slows down feature delivery, especially during the inner loop of development, where npm install is critical for prototyping.

This argument is partially correct. If you implement a security proxy as a manual ticket system where a security officer must click Approve on every version bump, you create a bottleneck that developers will eventually bypass. They will use personal hotspots or tunnel out of the build environment just to get work done. The friction of a poorly implemented proxy is a security risk because it encourages shadow IT.

The solution is to automate the governance. A modern security proxy should be a policy engine, not a manual gate. For example, you can set a policy that allows any package that has been public for more than 30 days, has more than 1,000 downloads, and is signed by a trusted vendor. This allows 95% of requests to pass through automatically while flagging high risk, brand new packages for a quick human review. The goal is to move from Allow All to Automated Governance.

When Scanning Still Wins

There are specific contexts where a security proxy is overkill. For very small teams (under 10 engineers) or early-stage startups building a Proof of Concept (PoC), the operational overhead of maintaining a private registry like Artifactory or Nexus Repository 3 can outweigh the risk. At this scale, the attack surface is small and the priority is finding product-market fit, not building a SLSA Level 4 compliant supply chain.

Scanning also remains superior for identifying vulnerabilities in code you have already mirrored. A proxy prevents the ingress of bad code, but it cannot predict when a previously safe library is suddenly found to have a critical flaw. When Log4Shell hit, the problem wasn't that the library was newly introduced; it was that an existing, trusted library had a critical flaw. In that case, a proxy provides no protection for existing deployments. You still need a robust SCA tool to scan your Software Bill of Materials (SBOM) and identify where the vulnerable version is running.

For teams using fully managed serverless build environments where they have zero control over the network layer, a proxy is technically impossible to implement. These teams must rely on shift left scanning and strict dependency pinning in their lockfiles.

Implementing the Proxy Architecture

To move beyond scanning, you need a centralized gateway that acts as a policy enforcement point.

The Architectural Pattern

A supply chain security proxy sits between your build agents (GitHub Actions, GitLab Runners, Jenkins) and the public registries (Docker Hub, npm, PyPI). Instead of the build agent calling docker pull, it calls docker pull proxy.corp.com/image.
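As a sketch, this redirection can be baked into CI configuration so that no build agent ever talks to a public registry directly. The proxy hostname and repository paths below are hypothetical placeholders:

```yaml
# Hypothetical GitHub Actions steps that route every package manager
# through the corporate proxy. proxy.corp.com is a placeholder host.
steps:
  - name: Route npm through the security proxy
    run: npm config set registry https://proxy.corp.com/npm/

  - name: Route pip through the security proxy
    run: pip config set global.index-url https://proxy.corp.com/pypi/simple/

  - name: Pull base images via the proxy, never Docker Hub directly
    run: docker pull proxy.corp.com/library/node:18-alpine
```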

The proxy performs the following checks in order:

  1. Identity: Is the request coming from an authenticated build agent?
  2. Allow-list: Is this package/version approved for use in this project?
  3. Integrity: Does the checksum match the known good version?
  4. Provenance: Is there a signed attestation proving this was built in a trusted environment?
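Expressed as a conceptual policy file (the schema is illustrative, not any specific product's), the four ordered checks might look like:

```yaml
# Conceptual ordered checks applied to each inbound artifact request.
# Field names are illustrative; real proxies expose similar knobs.
ingress_checks:
  - identity:
      require: "authenticated-build-agent"   # e.g. mTLS or OIDC token
  - allow_list:
      scope: "per-project"
  - integrity:
      verify: "sha256-checksum"
  - provenance:
      require: "signed-attestation"          # e.g. SLSA provenance
on_failure: "BLOCK_AND_ALERT"
```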

Hardening the Image Pipeline

For container images, the proxy should integrate with Sigstore/Cosign. You don't trust the tag latest or even a specific version; you verify the signature.

```shell
# Verifying an image signature using Cosign v2.2.4
cosign verify --key cosign.pub ghcr.io/my-org/my-app:v1.2.0
```

If verification fails, the proxy blocks the pull. To take this further, enforce SLSA framework requirements. A SLSA attestation is a signed piece of metadata that tells you how the artifact was built. If the attestation shows the image was built on a developer's laptop rather than a hardened CI runner, the proxy rejects it.
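In the same conceptual policy style used later in this article, a provenance rule on the proxy could be sketched like this. The builder identity URL is a hypothetical placeholder:

```yaml
# Conceptual policy: reject images whose SLSA provenance does not
# name a trusted builder. The builder_id value is a placeholder.
policies:
  - pattern: "ghcr.io/my-org/*"
    require:
      attestation_type: "slsa-provenance"
      builder_id: "https://ci.corp.com/hardened-runner"
    action_on_failure: "BLOCK"
    reason: "Artifacts must be built on hardened CI runners, not laptops"
```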

Stopping Dependency Confusion

To kill dependency confusion, configure your proxy to use Virtual Repositories with strict resolution orders. In a tool like JFrog Artifactory v7.x, you create a virtual repository that aggregates a local (private) repo and a remote (public) repo.

Configure the resolution order so that the local repository is searched first. More importantly, implement Exclusion Patterns. If a package starts with corp-, the proxy must be configured to never check the public registry for that pattern.

```yaml
# Conceptual Proxy Policy for Dependency Resolution
policies:
  - pattern: "corp-*"
    action: "BLOCK_EXTERNAL"
    reason: "Internal packages must never be resolved from public registries"
  - pattern: "*"
    action: "ALLOW_EXTERNAL"
    condition: "age > 30d && downloads > 1000"
```

The Chain of Trust: Proxy to Admission Controller

The proxy is only the first half of the battle. The second half is ensuring that the Proxy-Approved status follows the artifact to the cluster. This is where the proxy integrates with a Kubernetes Admission Controller like Kyverno or OPA Gatekeeper.

The workflow:

  1. Ingress: Proxy pulls node:18-alpine, verifies the signature, and caches it.
  2. Attestation: The proxy (or a separate CI step) signs the image with a Security-Approved key.
  3. Deployment: A developer tries to deploy the image to GKE or EKS.
  4. Enforcement: The Admission Controller intercepts the request and checks for the Security-Approved signature.

If a developer tries to bypass the proxy by pointing their deployment to a public image on Docker Hub, the Admission Controller blocks the pod.

```yaml
# Example Kyverno Policy to enforce proxy-signed images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-proxy-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "proxy.corp.com/*"
          attestors:
            - entries:
                - keys:
                    # Cosign public keys are PEM-encoded, not SSH format
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      [your-proxy-public-key]
                      -----END PUBLIC KEY-----
```

This creates a complete chain of trust. The proxy ensures only vetted code enters the building, and the Admission Controller ensures only vetted code runs. If pods start failing in your cluster, standard Kubernetes troubleshooting will tell you whether the cause was a signature mismatch or a plain network failure.

The Quarantine Zone

Moving to a proxy requires a shift in how developers interact with dependencies. The most successful implementations use a Quarantine Zone. When a developer requests a new library, the proxy pulls it into a restricted, isolated mirror. It is then automatically scanned for malware and analyzed for suspicious signals, such as a package created 2 hours ago that tries to access /etc/shadow.

If the package passes the automated gauntlet, it is promoted to the Approved repository. This allows developers to get tools quickly while keeping the production build environment sterile.
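The quarantine pipeline can also be expressed as a conceptual config. The check names and thresholds below are illustrative assumptions, not a specific product's schema:

```yaml
# Conceptual quarantine-zone configuration (illustrative schema).
quarantine:
  trigger: "first-request-for-new-package"
  checks:
    - malware_scan: "required"
    - behavioral_analysis:
        flag_on: ["network-egress", "reads:/etc/shadow", "install-scripts"]
    - minimum_public_age: "48h"
  on_pass: "promote-to-approved-repo"
  on_fail: "hold-and-notify-security"
```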

Implement Dependency Pinning as a hard requirement. Using ranges like ^1.2.0 in your package.json (or >=1.2.0 in requirements.txt) is an invitation for disaster. The proxy should be configured to alert or block builds that do not use strict version pinning (for example, 1.2.3). This prevents stealthy updates where a vendor pushes a malicious version that fits within your range, bypassing your initial vetting.
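A proxy-side pinning rule could be sketched in the same conceptual policy schema used earlier; the rule and action names are illustrative:

```yaml
# Conceptual policy: fail builds whose manifests use version ranges.
policies:
  - manifests: ["package.json", "requirements.txt"]
    rule: "DENY_VERSION_RANGES"   # rejects ^, ~, >=, and wildcard specifiers
    action: "FAIL_BUILD"
    reason: "Exact pins (e.g. 1.2.3) are required for vetted dependencies"
```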

To maintain this at scale, integrate the request process into an Internal Developer Platform (IDP) built with Backstage, allowing developers to Request a Package via a UI form that triggers the automated quarantine pipeline.

Operationalizing the Proxy

Do not flip the switch for the entire company at once; you will break every build in the organization. Instead, follow this three-step rollout:

  1. Transparent Mode: Deploy the proxy and configure build agents to use it, but set all policies to Log Only. This provides a baseline of every dependency currently used across the org. You will likely find thousands of dependencies you didn't know existed.
  2. Caching Mode: Enable mirroring and caching. Ensure that if the public registry goes down, your builds still work. This provides immediate value to developers through faster builds and makes them allies in the security mission.
  3. Enforcement Mode: Start blocking the most dangerous patterns first (for example, dependency confusion patterns) before moving to strict signature verification and SLSA attestations.
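The three stages map naturally onto a single mode switch in the proxy configuration. A conceptual sketch, with a hypothetical log destination:

```yaml
# Conceptual rollout configuration: one mode flag, flipped per stage.
proxy:
  mode: "transparent"        # stage 1: log-only, nothing blocked
  # mode: "caching"          # stage 2: mirror and cache upstream registries
  # mode: "enforcing"        # stage 3: apply blocking policies
  logging:
    destination: "https://siem.corp.com/ingest"   # placeholder log sink
  enforcement:
    first_wave: ["dependency-confusion-patterns"]
```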

The operational cost of maintaining this infrastructure is non-trivial. You need high availability for your registry, as it is now a single point of failure for all deployments. Use a distributed storage backend and ensure your proxy is scaled horizontally across multiple availability zones.

Scanning is a useful tool for auditing, but it is a weak defense mechanism. By implementing a supply chain security proxy, you stop reacting to CVEs and start controlling your perimeter. You move the security boundary from the end of the pipe (the cluster) to the start of the pipe (the registry). When you combine a proxy with signature verification and a Kubernetes admission controller, you create a hardened pipeline where untrusted code simply cannot execute.
