Tom McNamara
I Replaced 500 Lines of cert-manager Config With a 150-Line Blueprint

The 3 AM Certificate Expiry Crisis

Have you experienced this? Your phone explodes with PagerDuty alerts. A production API is down. Not because of bad code. Not because of a DDoS attack. But because an internal service certificate had expired.

The irony? This happens even with "automated" certificate management via cert-manager, complete with 90-day rotation, monitoring, and runbooks.

This crisis is repeated all too often.

My experience: a 6-hour restoration process:

  1. Emergency cert renewal (45 minutes)
  2. Restarting pods to pick up new certs (30 minutes)
  3. Root cause analysis (2 hours)
  4. Writing a postmortem (1 hour)
  5. Updating runbooks so this "never happens again" (2 hours)

Three weeks later, a different service. Same issue. Different certificate.

What did I learn? Automating certificate issuance isn't the same as eliminating certificate problems.

The real question isn't "how do we rotate certs faster?"

It is: "Why are we using certificates at all for internal East-West traffic?"

Here's What "Automated" Certificate Management Actually Looks Like

cert-manager for internal mTLS compared with a Blueprint using Synchronous Ephemeral Encryption (SEE™)

Before: cert-manager

See full typical Cert Manager YAML gist

150+ lines of Certificate YAML

  • Issuer configuration
  • CertificateRequest management
  • Secret rotation scripts
  • Monitoring alerts for expiry
  • Runbooks for emergency renewal

Add it up: 200+ lines of config and roughly 30% of the platform team's time spent managing PKI.

After: Lane7 Blueprint

See full typical Blueprint pod YAML gist

# This is the entire deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-pod-1
spec:
  # ... standard Kubernetes deployment
  # Workload Security Proxy (WoSP) sidecar handles all networking, identity, and encryption

<150 lines total in Deployment YAML. Deploy in <30 minutes.
Secret credentials rotate with each communication session (as fast as once per minute). Identity credentials rotate daily (or more frequently for active services). Both must be valid for trust verification.

No certificates. No PKI. No 3 AM alerts.

The False Security Blanket of Automated PKI

Here's what I thought "automating certificates" meant:

  • ✅ Auto-renewal every 90 days
  • ✅ Automated distribution to pods
  • ✅ Monitoring for expiry
  • ✅ Zero manual intervention

Sounds like Zero Trust, right?

Wrong. It's just "Zero Touch" issuance.

The Problems That Automation Doesn't Solve:

1. The "Secret Zero" Problem
How do you securely deliver the first secret that fetches the certificate?
You bootstrap trust with... another secret. It's turtles all the way down.

2. Certificate Authority = Single Point of Failure
Your entire cluster trusts one CA. If that CA is compromised (or misconfigured, or has a bug), your entire security model collapses.

3. Every cert rotation is an entirely new pod identity
Every app loses its history of trusted interactions when a rotation occurs. Its former identity, and any trust attached to it, is completely discarded.

4. Handshake Overhead
Every connection requires:

  • Certificate validation
  • Chain verification
  • Revocation checking (OCSP/CRL)
  • Key exchange negotiation

This adds latency and CPU overhead to every single internal service call.

5. Static Identity Window
Even with 90-day rotation, that certificate sits on disk for 90 days. If an attacker gets it on Day 1, they have 89 days to use it.

6. Expiry Anxiety
No matter how good your automation, there's always the fear:

  • What if the rotation script fails?
  • What if the webhook times out?
  • What if the monitoring alert gets missed?

The Realization

I realized that "automated PKI" wasn't actually Zero Trust—it was just faster legacy security. PKI works well for root domains because a third party vets them; no such vetting happens for workloads submitting a CSR. In practice, PKI is used for fast TLS connections, not for trust.

The chain of trust ended at the Certificate Authority, not at the workload itself.

Zero Trust includes frequent verification of trust - not just once. I needed a way to verify the identity of the service at runtime without relying on files sitting on disk.

What If Identity Changed Every Day or Sooner?

I started researching alternatives to PKI certs for internal networking. Service meshes like Istio use SPIFFE/SPIRE, but they still rely on X.509 certificates, just with shorter lifespans. And they only assert trust rather than verify it, a meaningful difference for true Zero Trust, which requires verification.

That's when I thought about Cloud Native AMTD (Automated Moving Target Defense).

The concept is simple but radical:

Traditional security: Change the lock on the door every year.

AMTD: Change the location of the door every day and change the lock on the door every time two apps connect. But only trusted entrants can know where the door is and have the keys to the lock, just at the time they need it.

How This Works With Cryptographic Identities:

Instead of static certificates that sit on disk, what if there were two credentials, a cryptographic identity credential and a secret credential, that rotated continuously and were stored in the workload but verified externally?

  • Identities (with a verifiable chain of trust) rotate every 24 hours or sooner.
  • Secrets rotate as fast as once per minute.

If an attacker captures a secret at 10:00:00, it's useless at 10:01.

But how do you rotate credentials that fast without constant handshakes, without key vaults, or key insertions?

That's where CHIPS™ (Codes Hidden In Plain Sight) comes in.

Secret credential log entries showing rotation every minute in a live deployment, without a key exchange

In addition to automated secrets rotation, a Zero Trust identity credential also rotates daily (or sooner for active applications/services) without depending on a central control plane.

Both credentials must be valid to access either workload in the connection.

Lane7 Blueprints: The "Easy Button" for AMTD

After validating the AMTD approach provided by WoSP sidecars, we built Lane7 Blueprints, pre-configured Kubernetes deployments that implement Zero Trust networking without PKI, and protect applications with a Cloud Native AMTD.

What You Get:

Instead of stitching together cert-manager, Issuers, CertificateRequests, and rotation scripts, you get a working topology in one .zip bundle:

Available Blueprints:

  1. Bi-Pod (A → B relay)

    • Simple two-service communication
    • Perfect for API → Database
    • Deploy in 5 minutes
  2. Tri-Pod Fan-out (A → B, A → C)

    • Bifurcation pattern
    • Deploy in 8 minutes
  3. Tri-Pod Fan-In (A → C, B → C)

    • Aggregator pattern
    • Deploy in 10 minutes

Each Blueprint includes:

  • ✅ A basic Python app and Dockerfile (customizable business logic)
  • ✅ Pre-configured WoSP (Workload Security Proxy) sidecars
  • ✅ WoSP credentials and license (delivered securely)
  • ✅ All Kubernetes manifests (3 per pod in the blueprint)
  • ✅ Topology design that defines per-pod access authorization
  • ✅ Documentation
  • ✅ Everything you need to deploy immediately

How It Works (High Level):

Your Application:

# Your app just talks HTTP to localhost
response = requests.post('http://localhost:18001/api/data', json=payload)

# simple app with replaceable business logic

WoSP Sidecar:

  • Intercepts the localhost request
  • Encrypts with current rotating key (generated via CHIPS™)
  • Sends to remote service's WoSP sidecar
  • Remote WoSP decrypts (using its identical CHIPS-generated key)
  • Delivers to remote app's localhost

NOTE: The very first communication for a new connection between WoSPs includes an identity trust verification step.

Your app never knows encryption is happening.

Your app only sees traffic that is trusted.
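To make the sidecar pattern concrete, here's a toy Python sketch of the intercept-encrypt-relay flow. It is not the actual WoSP (which is an Envoy-based proxy, not a Python server); the peer address, port, and key handling are illustrative assumptions only.

# Toy sketch of the sidecar relay pattern -- NOT the actual WoSP.
# The session key would come from CHIPS(TM); here it's generated locally.
import os
from aiohttp import web, ClientSession
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SESSION_KEY = AESGCM.generate_key(bit_length=256)  # stand-in for a CHIPS key
PEER_URL = "http://remote-wosp:18002/ingress"      # hypothetical peer address

async def egress(request):
    """Accept the app's plaintext localhost request and relay it encrypted."""
    plaintext = await request.read()
    nonce = os.urandom(12)
    sealed = nonce + AESGCM(SESSION_KEY).encrypt(nonce, plaintext, None)
    async with ClientSession() as session:
        await session.post(PEER_URL, data=sealed)  # only ciphertext on the wire
    return web.json_response({"status": "accepted"})

app = web.Application()
app.add_routes([web.post("/api/data", egress)])
# web.run_app(app, host="127.0.0.1", port=18001)   # listen where the app expects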

The CHIPS™ and SEE™ Advantage:

CHIPS (Codes Hidden In Plain Sight) solves the key distribution problem. And Synchronous Ephemeral Encryption (SEE™) builds the secure communication channel (even across cluster boundaries):

  • Both workloads are configured to use the same CHIPS™ algorithm (delivered once, securely).
  • Seed material is collected on demand by Web Retriever when WoSPs run their CHIPS™ algorithm.
  • Each WoSP runs its algorithm at the start of a new connection (communication session) and generates an identical secret using a proven PQC crypto library.
  • The SEE™ protocol takes over from CHIPS™. Secrets are never passed or exchanged. They remain where they are generated and used as keys to encrypt or decrypt messages with their counterpart workload during the P2P session. The ephemeral keys exist for the session and vanish when the connection closes.
  • No key exchange over wire = no handshake, no Secret Zero problem
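As a rough mental model (the real CHIPS™ algorithm is proprietary and far more sophisticated), here's a minimal Python sketch of two parties independently deriving an identical key from pre-shared configuration plus dynamic material fetched on demand, with nothing crossing the wire. Every value below is a hypothetical placeholder:

# Illustrative sketch only -- not the actual CHIPS(TM) algorithm.
import hashlib
import hmac

def derive_session_key(shared_config: bytes, dynamic_material: bytes) -> bytes:
    """Both WoSPs run this with the same inputs and get the same key."""
    return hmac.new(shared_config, dynamic_material, hashlib.sha256).digest()

# Each pod holds the same pre-delivered configuration (delivered once,
# securely) and fetches the same dynamic material on demand (in a real
# deployment, via Web Retriever).
config = b"pre-delivered-algorithm-config"
material = b"seed-material-fetched-on-demand"

key_a = derive_session_key(config, material)  # Pod A's WoSP
key_b = derive_session_key(config, material)  # Pod B's WoSP
assert key_a == key_b                         # identical keys, no exchange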

What This Means for You:

Deploy a Blueprint:

# Bi-pod Blueprint (three manifests in each pod directory)
kubectl apply -f pod-1/ -f pod-2/

10 minutes later:

  • ✅ Zero Trust application network at Layer 7 is active
  • ✅ Unique ephemeral credentials for each pair of communicating workloads
  • ✅ Workload access protected by a Cloud Native AMTD
  • ✅ Data in transit secured without exposure of the session key
  • ✅ No certificates to manage
  • ✅ No CA to secure
  • ✅ No expiry to worry about

And you can sleep through the night.

But Wait—What About My Application Code?

Here's where Blueprints get really interesting.

You're not just getting pre-configured Kubernetes YAML. You're getting a working "secure by default" network of applications that you can customize!

The App Shell Feature

The app in a Blueprint is a working "shell" with security built-in. Just swap in your business logic.

Every Blueprint includes an HTTP Python application that handles all the WoSP integration:

  • ✅ HTTP server setup (aiohttp)
  • ✅ WoSP sidecar communication (localhost egress/ingress)
  • ✅ Message routing and queuing (async workers)
  • ✅ Error handling and retries
  • ✅ Background task orchestration

You don't write this code. It's already done.

The file header says it all:

# -----------------------------------------------------------------------------
# Hopr.co Blueprint Demo Application
#
# The business logic within this file is provided free of charge for 
# demonstration purposes. Users are encouraged to modify the "Business Logic" 
# sections to test their application logic with the WoSP.
# -----------------------------------------------------------------------------

What You DO Customize

The app is structured with clear separation:

Section 1: Business Logic (MODIFY THIS)

# ==============================================================================
# SECTION 1: BUSINESS LOGIC -- THIS SECTION CAN BE MODIFIED
# ==============================================================================

async def handle_post(request, my_name, is_initiator, queue):
    """
    Handles incoming POST requests.
    - Non-initiator pods: Adds the message to a queue to be forwarded.
    - Initiator pod: Receives the message, logs completion, and STOPS the chain.
    """
    # Your business logic here

async def initiator_task(my_name, queue):
    """A background task only for the first pod to start the relay race."""
    # Your business logic here

Section 2: WoSP Integration (DO NOT MODIFY)

# ==============================================================================
# SECTION 2: NETWORKING & COMMUNICATION (WoSP INTEGRATION) -- DO NOT MODIFY
# ==============================================================================

async def send_message(session, my_name, payload):
    """Sends a JSON payload to the local WoSP egress proxy."""
    # Infrastructure code - already done

async def client_worker(my_name, queue):
    """A background worker that pulls messages from the queue and sends them."""
    # Infrastructure code - already done

async def main():
    """Main application entry point."""
    # Infrastructure code - already done

The Separation of Concerns

What's handled for you (Section 2 - Don't Touch):

  • HTTP server creation (web.Application())
  • Async client sessions with timeouts
  • Message queuing (asyncio.Queue)
  • WoSP egress proxy communication (http://localhost:18001/)
  • Background worker orchestration
  • Graceful shutdown handling

What you customize (Section 1 - Modify Freely):

  • Message payload structure
  • Message processing logic
  • Initiation triggers and timing
  • Completion handling
  • Your application's data flow

Real Example: The Serial Relay (Baton) App

The default Bi-Pod Blueprint implements a relay race pattern—a message (the "baton") travels from Pod A → Pod B → back to Pod A, building a trail of who touched it.

Default Implementation:

# asyncio and aiohttp's web are imported at the top of the Blueprint app
# shell; shown here so the excerpt is self-contained.
import asyncio
from aiohttp import web

async def handle_post(request, my_name, is_initiator, queue):
    data = await request.json()
    baton_num = data.get('baton_number', 0)

    if is_initiator:
        # Initiator receives the baton back - cycle complete
        print(f"🏁 CYCLE COMPLETE: Relay baton #{baton_num} retired.")
        print(f"Trail: {data.get('trail', [])}")
        return web.json_response({
            "status": "accepted",
            "message": f"Cycle complete. Baton #{baton_num} retired."
        })
    else:
        # Non-initiator forwards the baton
        await queue.put(data)
        print(f"✅ Queued baton #{baton_num} for delivery.")
        return web.json_response({
            "status": "accepted",
            "message": f"Baton accepted for forwarding."
        })

async def initiator_task(my_name, queue):
    await asyncio.sleep(20)  # Initial delay
    baton_counter = 0

    while True:
        baton_counter += 1
        print(f"🎬 Starting relay cycle for baton #{baton_counter}")

        initial_payload = {
            "baton_number": baton_counter,
            "message": f"New relay from {my_name}",
            "trail": []  # Tracks which pods touched this baton
        }
        await queue.put(initial_payload)
        await asyncio.sleep(20)  # Send a new baton every 20 seconds

What this does:

  1. Initiator pod creates a "baton" message every 20 seconds
  2. Message travels through the chain: Pod A → Pod B → Pod A
  3. Each pod appends its name to the trail array
  4. When the baton returns to the initiator, the cycle completes
  5. Logs show: Trail: ['pod-1', 'pod-2', 'pod-1']

This demonstrates that the WoSP networking works. Now customize it for your use case.


Example 1: Turn It Into a Task Queue

Change two functions to build a distributed task processor:

async def initiator_task(my_name, queue):
    """Producer: Creates tasks for workers"""
    await asyncio.sleep(5)
    task_counter = 0

    while True:
        task_counter += 1

        # Get next work item (e.g., from database, S3, message queue)
        image_url = get_next_unprocessed_image()

        task_payload = {
            "task_id": task_counter,
            "task_type": "image_resize",
            "image_url": image_url,
            "target_size": "1024x768",
            "created_at": time.time(),
            "status": "pending"
        }

        print(f"📋 Creating task #{task_counter}: {image_url}")
        await queue.put(task_payload)
        await asyncio.sleep(2)  # Create tasks as needed

async def handle_post(request, my_name, is_initiator, queue):
    """Worker: Processes tasks and returns results"""
    data = await request.json()
    task_id = data.get('task_id', 0)

    if is_initiator:
        # Initiator receives completed task
        result_url = data.get('result_url')
        processing_time = data.get('processing_time')

        print(f"✅ Task #{task_id} complete!")
        print(f"   Result: {result_url}")
        print(f"   Processing time: {processing_time}ms")

        # Store result (database, S3, etc.)
        save_result(task_id, result_url)

        return web.json_response({
            "status": "accepted",
            "message": f"Task #{task_id} result saved."
        })
    else:
        # Worker processes the task
        print(f"🔧 Processing task #{task_id}: {data['image_url']}")

        start_time = time.time()
        result_url = resize_image(
            data['image_url'], 
            data['target_size']
        )
        processing_time = (time.time() - start_time) * 1000

        data['result_url'] = result_url
        data['processing_time'] = processing_time
        data['status'] = "completed"
        data['processed_by'] = my_name

        await queue.put(data)  # Send result back to initiator

        return web.json_response({
            "status": "accepted",
            "message": f"Task #{task_id} processed."
        })

You just built a distributed image processor with Zero Trust networking.

The WoSP infrastructure handles:

  • Encrypted communication between pods
  • 10-second credential rotation
  • No certificates, no PKI, no handshakes

You just wrote the resize_image() business logic.
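The example assumes three helpers that aren't part of the Blueprint: get_next_unprocessed_image(), resize_image(), and save_result(). Minimal stand-in stubs, just so the sketch runs end to end, might look like:

import time

def get_next_unprocessed_image():
    """Hypothetical stub: return the next image URL to process."""
    return f"https://example.com/images/{int(time.time())}.jpg"

def resize_image(image_url, target_size):
    """Hypothetical stub: pretend to resize and return a result URL."""
    return image_url.replace(".jpg", f"_{target_size}.jpg")

def save_result(task_id, result_url):
    """Hypothetical stub: persist the result (print instead of a DB write)."""
    print(f"💾 Saved result for task #{task_id}: {result_url}")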


Example 2: Turn It Into a Data Aggregator

Change the same functions to collect sensor data:

async def initiator_task(my_name, queue):
    """Coordinator: Initiates data collection rounds"""
    await asyncio.sleep(10)
    round_counter = 0

    while True:
        round_counter += 1

        collection_payload = {
            "round_id": round_counter,
            "sensor_ids": ["sensor_1", "sensor_2", "sensor_3"],
            "readings": [],
            "started_at": time.time()
        }

        print(f"📊 Starting data collection round #{round_counter}")
        await queue.put(collection_payload)
        await asyncio.sleep(60)  # Collect every minute

async def handle_post(request, my_name, is_initiator, queue):
    """Sensor nodes: Add readings and forward"""
    data = await request.json()
    round_id = data.get('round_id', 0)

    if is_initiator:
        # Coordinator receives complete dataset
        readings = data.get('readings', [])
        duration = time.time() - data['started_at']

        print(f"📈 Round #{round_id} complete!")
        print(f"   Collected {len(readings)} readings in {duration:.2f}s")

        # Analyze aggregated data
        avg_temp = sum(r['temperature'] for r in readings) / len(readings)
        print(f"   Average temperature: {avg_temp:.1f}°C")

        # Store in time-series database
        store_readings(round_id, readings)

        return web.json_response({
            "status": "accepted",
            "message": f"Round #{round_id} data stored."
        })
    else:
        # Sensor node adds its reading
        print(f"🌡️  Adding reading for round #{round_id}")

        reading = {
            "sensor_id": my_name,
            "temperature": get_temperature_reading(),
            "humidity": get_humidity_reading(),
            "timestamp": time.time()
        }

        data['readings'].append(reading)
        await queue.put(data)  # Forward to next sensor

        return web.json_response({
            "status": "accepted",
            "message": f"Reading added from {my_name}."
        })

You just built a distributed sensor network with secure, encrypted data collection.

Again, WoSP handles all the networking. You just wrote get_temperature_reading().
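As before, get_temperature_reading(), get_humidity_reading(), and store_readings() are hypothetical helpers you'd supply; simulated stand-ins could be as simple as:

import random

def get_temperature_reading():
    """Hypothetical stub: simulate a temperature sensor read."""
    return round(random.uniform(18.0, 26.0), 1)

def get_humidity_reading():
    """Hypothetical stub: simulate a humidity sensor read."""
    return round(random.uniform(30.0, 60.0), 1)

def store_readings(round_id, readings):
    """Hypothetical stub: write to a time-series DB (print for the demo)."""
    print(f"💾 Stored {len(readings)} readings for round #{round_id}")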


Example 3: Turn It Into an API Chain

Same pattern, different business logic:

async def initiator_task(my_name, queue):
    """Frontend: Accepts user requests"""
    # In a real app, this would be triggered by incoming HTTP requests
    # For demo, we'll simulate requests
    await asyncio.sleep(5)
    request_counter = 0

    while True:
        request_counter += 1

        user_request = {
            "request_id": request_counter,
            "user_id": "user_123",
            "action": "get_recommendations",
            "context": {
                "location": "San Francisco",
                "preferences": ["italian", "outdoor seating"]
            },
            "enriched_data": {}  # Services will populate this
        }

        print(f"🌐 Processing user request #{request_counter}")
        await queue.put(user_request)
        await asyncio.sleep(10)

async def handle_post(request, my_name, is_initiator, queue):
    """Service chain: Each service enriches the request"""
    data = await request.json()
    request_id = data.get('request_id', 0)

    if is_initiator:
        # Frontend receives enriched response
        enriched = data.get('enriched_data', {})

        print(f"✅ Request #{request_id} processed!")
        print(f"   User profile: {enriched.get('user_profile')}")
        print(f"   Recommendations: {enriched.get('recommendations')}")

        # Return to user
        return web.json_response({
            "status": "success",
            "data": enriched
        })
    else:
        # Each service adds its contribution
        print(f"⚙️  Service '{my_name}' processing request #{request_id}")

        # Enrich the request based on service type
        if "user-service" in my_name:
            data['enriched_data']['user_profile'] = get_user_profile(
                data['user_id']
            )
        elif "recommendation-service" in my_name:
            data['enriched_data']['recommendations'] = get_recommendations(
                data['enriched_data']['user_profile'],
                data['context']
            )

        await queue.put(data)  # Forward to next service

        return web.json_response({
            "status": "accepted",
            "message": f"Processed by {my_name}."
        })

You just built a microservices API chain with zero PKI overhead.
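Here too, get_user_profile() and get_recommendations() stand in for your real services; hypothetical stubs keep the sketch self-contained:

def get_user_profile(user_id):
    """Hypothetical stub: look up a user profile."""
    return {"user_id": user_id, "name": "Demo User", "tier": "premium"}

def get_recommendations(user_profile, context):
    """Hypothetical stub: return canned recommendations for the demo."""
    return [f"{p} spot in {context['location']}" for p in context["preferences"]]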


Blueprints Build Zero Trust App Networks: Same Infrastructure, Different Logic

Notice the approach?

The WoSP infrastructure (Section 2) never changes:

  • send_message() - sends to localhost:18001
  • client_worker() - pulls from queue and sends
  • main() - sets up HTTP server and workers

Only the business logic (Section 1) changes:

  • What data structure you send
  • What processing you do when you receive it
  • How you handle completion

The deployment stays the same:

kubectl apply -f pod-1/ -f pod-2/

The SDLC Reversal

Traditional cert-manager/PKI flow:

1. Dev writes application code from scratch
2. Dev configures app to read TLS certificates from mounted volumes
3. Dev adds certificate rotation handling (watch for cert updates)
4. DevOps installs cert-manager in cluster
5. DevOps writes Certificate resources for each service
6. DevOps configures ClusterIssuer and CA infrastructure
7. DevOps writes Deployments with volume mounts for certs
8. Debug cert-manager issues:
   - "Certificate not ready"
   - TLS handshake failures
   - Apps caching old certificates
   - Rotation timing problems
9. Set up monitoring for certificate expiry
10. Write runbooks for emergency cert renewal
11. Deploy (finally)
12. Ongoing: Respond to cert expiry alerts, troubleshoot rotation failures

Lane7 Blueprint flow:

1. DevOps deploys secure Blueprint (10 minutes)
   ✅ HTTP server: done
   ✅ WoSP integration: done
   ✅ Async workers: done
   ✅ Zero Trust networking: done
   ✅ Rotating credentials: done (every 10 seconds)
   ✅ No certificates, no PKI, no CA infrastructure

2. Dev modifies two functions in Section 1 (30 minutes)
   ✅ initiator_task() - what to send
   ✅ handle_post() - what to do when received

   Dev doesn't touch:
   ❌ TLS configuration
   ❌ Certificate management
   ❌ Volume mounts for certs
   ❌ Rotation logic

3. Redeploy (5 minutes)
   ✅ kubectl apply -f deployment.yaml

4. Done
   ✅ No ongoing cert maintenance
   ✅ No expiry alerts
   ✅ No 3 AM pages

The difference:

| Traditional (cert-manager)          | Lane7 Blueprints                 |
| ----------------------------------- | -------------------------------- |
| 12 steps to production              | 3 steps to production            |
| 2-4 days of work                    | 45 minutes of work               |
| Ongoing maintenance (8-10 hrs/week) | Zero maintenance                 |
| Dev writes TLS integration code     | Dev writes only business logic   |
| DevOps manages PKI infrastructure   | DevOps deploys and forgets       |
| Certs expire (even with automation) | Credentials rotate automatically |

The infrastructure is already there. Just add your logic.

Why This Matters

For Platform Teams:
You provide a secure, working application template. Developers don't need to:

  • ❌ Understand WoSP sidecar integration
  • ❌ Learn Envoy proxy configuration
  • ❌ Debug async HTTP communication patterns
  • ❌ Manage credentials or certificates
  • ❌ Figure out localhost:18001 routing

For Developers:
You get a known-good starting point. You don't need to:

  • ❌ Start from scratch with boilerplate
  • ❌ Learn Kubernetes networking internals
  • ❌ Worry about security (it's built-in)
  • ❌ Debug "why isn't my sidecar working?"

You just write:

  • initiator_task() - your business logic for creating work
  • handle_post() - your business logic for processing work

For the Business:

  • ✅ Faster time-to-market (deploy in hours, not weeks)
  • ✅ Consistent patterns (all apps use same Blueprint structure)
  • ✅ Reduced complexity (one way to do internal networking)
  • ✅ Better security (Zero Trust by default, no optional PKI to misconfigure)

What Teams Can Build

Using Blueprint app shells as templates, teams can build:

Data Processing:

  • ETL pipelines (extract → transform → load)
  • Stream processing (ingest → filter → aggregate)
  • Batch job orchestration (coordinator → workers → results)

Distributed Systems:

  • Task queues (producer → workers → collector)
  • Request routers (frontend → services → aggregator)
  • Event processing (source → handlers → sink)

Internal Services:

  • API relay chains (A → B → C enrichment)
  • Microservice orchestration (secure by default)
  • Service coordination patterns (leader → followers)

The development pattern is always:

  1. Clone the Blueprint app
  2. Modify Section 1 (business logic)
  3. Leave Section 2 (WoSP integration) untouched
  4. Deploy

From Baton to Production

The serial relay "baton" app demonstrates:

  • ✅ Messages flow through the chain correctly
  • ✅ WoSP sidecars encrypt/decrypt automatically
  • ✅ Credentials rotate every 10 seconds (check the logs)
  • ✅ No certificates needed
  • ✅ The infrastructure works

Now make it yours:

  • Replace "baton_number" with "task_id"
  • Replace "trail" with "processing_steps"
  • Replace the relay logic with your business logic
  • Keep the WoSP integration untouched

Deploy. Test. Ship.


Next: Let's look at the time savings this approach delivers in practice.

The Before/After Timeline

Before: Setting Up Internal mTLS with cert-manager

Week 1: Infrastructure Setup

Day 1-2: DevOps installs and configures cert-manager
- Install cert-manager via Helm
- Create internal CA (or configure external CA)
- Write ClusterIssuer resource
- Test certificate issuance
- Debug "Certificate not ready" errors

Day 3-4: DevOps writes per-service Certificate resources
- Certificate for Service A (DNS names, SANs, usages)
- Certificate for Service B
- Certificate for Service C...
- Configure renewal thresholds (renewBefore)
- Set up monitoring for certificate expiry

Day 5: DevOps writes Deployment YAMLs
- Volume mounts for TLS certificates
- Init containers to wait for certs (if needed)
- Environment variables pointing to cert paths

Week 2: Application Integration

Day 1-2: Dev modifies application code
- Read TLS cert and key from mounted volumes
- Configure HTTP client/server with TLS
- Add certificate rotation handling:
  * Watch certificate files for changes
  * Reload certs without downtime
  * Handle rotation failures gracefully

Day 3-4: Integration debugging
- "TLS handshake failure" errors
- Certificate validation issues
- Apps caching old certificates after rotation
- Timing problems (app starts before cert-manager issues cert)
- Volume mount permission issues

Day 5: Testing
- Test initial deployment
- Test certificate rotation
- Test certificate expiry scenarios
- Test CA rotation scenarios

Week 3: Production Deployment

Day 1-2: Deploy to production
- Roll out cert-manager configuration
- Deploy services with new TLS code
- Monitor for certificate issues

Day 3-5: Operational overhead setup
- Create monitoring dashboards for cert expiry
- Set up alerts (15 days before expiry, 7 days, 1 day)
- Write runbooks:
  * Emergency certificate renewal
  * CA rotation procedures
  * Troubleshooting TLS handshake failures

Ongoing (Forever):

  • Monitor certificate expiry alerts
  • Troubleshoot rotation failures
  • Respond to 3 AM pages when certs expire
  • Update runbooks after each incident
  • 8-10 hours/week managing PKI infrastructure

After: Setting Up Lane7 Blueprints

Week 1, Day 1 (Morning):

9:00 AM: DevOps downloads Bi-Pod Blueprint
9:05 AM: kubectl apply -f namespace.yaml
9:06 AM: kubectl apply -f secrets.yaml (pre-configured credentials)
9:07 AM: kubectl apply -f deployment.yaml
9:10 AM: ✅ Services running with Zero Trust networking
         ✅ Credentials rotating every 10 seconds
         ✅ No certificates, no PKI, no CA

Infrastructure: DONE

Week 1, Day 1 (Afternoon):

2:00 PM: Dev opens blueprint Python app
2:05 PM: Reads Section 1 comments (business logic zone)
2:10 PM: Modifies initiator_task() for their use case
2:25 PM: Modifies handle_post() for their processing logic
2:30 PM: Tests locally (app talks to localhost:18001)
2:45 PM: kubectl apply -f deployment.yaml (redeploy)

Business logic: DONE

Week 1, Day 2:

Testing, refinement, ship to production

Ongoing (Forever):

Nothing. Zero certificate maintenance.

Time saved per deployment: 4 days → 3 hours

Time saved per week: 8-10 hours → 0 hours

Peace of mind: Priceless

The Architecture: App + WoSP + Web Retriever

Lane7 Blueprints place three containers in each pod: your application, a novel sidecar proxy called the WoSP that carries the key security components, and Web Retriever, an open-source project for Envoy deployments.

WoSP (Workload Security Proxy)

An Envoy-based proxy that runs alongside your application container. The WoSP adds a Wasm filter to Envoy that abstracts identity and secrets management away from the app and eliminates the dependency on central identity managers like cert-manager.

What it does:

  • Listens on localhost for your app's HTTP requests
  • Encrypts traffic using current CHIPS-generated key
  • Routes to remote service's WoSP sidecar
  • Decrypts incoming encrypted traffic from remote WoSPs
  • Rejects incoming traffic that fails decryption
  • Delivers decrypted traffic to your app's localhost

What it doesn't do:

  • Modify your application code
  • Require changes to your app's configuration
  • Add external dependencies to your app

1. CHIPS™ (Codes Hidden In Plain Sight)

Patented cryptographic technology that generates ephemeral symmetric keys. What's special about CHIPS™ is that two instances of the algorithm generate identical keys when they run at nearly the same time.

Technical approach:

  • Dynamic elements are obtained by Web Retriever
  • The CHIPS algorithm uses the dynamic elements to produce a seed
  • Identical seed generation on both sides (no exchange needed)
  • Seeds are fed to a proven symmetric post-quantum crypto library for key generation

Why this works:

  • No key exchange = no handshake overhead
  • No Secret Zero problem (seed delivered once, securely)
  • Synchronized rotation = both sides always generate an identical key
  • Fast rotation is automatic and self-contained

2. Synchronous Ephemeral Encryption (SEE™)

A patented protocol that self-synchronizes session key generation and produces a bi-directional end-to-end encrypted communication channel (layer 7).

What it does:

  • Synchronizes key generation at both ends
  • Starts a secure communication session at both ends
  • Rejects access attempts that fail decryption at either end
  • Manages encryption/decryption
  • Integrates with Envoy's HTTP filter chain

Why SEE™:

  • There's no handshake to establish a session
  • Session key is not exposed outside the pod
  • There's no early termination to expose data
  • Isolated from application
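A sketch of the fail-closed behavior, under the same illustrative assumptions as the earlier toy sidecar (AES-GCM standing in for the proprietary SEE™ mechanics):

# Illustrative only: decryption failure means the traffic is rejected.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)  # stand-in for a CHIPS-derived key

def receive(sealed: bytes, key: bytes):
    """Decrypt an incoming message; reject (fail closed) on any tampering."""
    nonce, ciphertext = sealed[:12], sealed[12:]
    try:
        return AESGCM(key).decrypt(nonce, ciphertext, None)
    except InvalidTag:
        return None  # untrusted traffic never reaches the app

nonce = os.urandom(12)
sealed = nonce + AESGCM(key).encrypt(nonce, b"hello", None)
assert receive(sealed, key) == b"hello"
assert receive(sealed, AESGCM.generate_key(bit_length=256)) is None  # wrong key -> rejected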

3. Machine Alias ID (MAID™)

A rotating cryptographic identity credential that is derived from the history of a workload's activity.

What the MAID™ does:

  • Identifies each workload participating in a communication session
  • Meets the Zero Trust principle of frequent trust verification
  • Builds the chain of trust in the workload and not the CA
  • Prevents spoofing of the CHIPS/SEE processes

Why MAID™:

  • Verified trust of workload identity before a session begins
  • Eliminates PKI cryptographic key pairs, CSRs, or key managers
  • Insensitive to CA identity trust boundaries
  • Isolated from the application

The Flow:

Your App → localhost:18001 → WoSP Sidecar (layer 7)
                              ↓ (encrypt with CHIPS key)
                              Network
                              ↓ (encrypted traffic)
Remote WoSP Sidecar → localhost:8000 → Remote App (layer 7)
(decrypt with same CHIPS key)

Key insight: Your app thinks it's talking HTTP to localhost. The app network exists at the application layer. The other layers see encrypted traffic only. The identity and secret credentials rotate frequently. A Cloud Native AMTD protects the pods and data.

The magic: Both WoSP sidecars generate the same key at the same time, without ever exchanging keys over the network.

The WoSP Sidecar and Its Components

Lane7 Blueprints vs. cert-manager vs. Istio

| Feature             | cert-manager                | Istio (SPIFFE/SPIRE)                       | Lane7 Blueprints                              |
| ------------------- | --------------------------- | ------------------------------------------ | --------------------------------------------- |
| Identity Model      | X.509 certificates          | X.509 certificates                         | MAID credential                               |
| Rotation Frequency  | 90 days (default)           | 1-24 hours                                 | 500 sessions (default)                        |
| PKI Required        | Yes (CA infrastructure)     | Yes (SPIRE server)                         | No                                            |
| Secret Zero Problem | Yes (how to get first cert) | Yes (SPIRE agent bootstrap)                | No (self-generated seed)                      |
| Handshake Overhead  | High (TLS handshake)        | Medium (cached certs)                      | None (self-generated PQC keys)                |
| Config Complexity   | High (200+ lines)           | Very High (1000+ lines)                    | Low (150 lines)                               |
| Scope               | Cert issuance only          | Full service mesh                          | East-West encryption                          |
| Deployment Time     | 2-4 days                    | 1-2 weeks                                  | <30 minutes                                   |
| Ongoing Maintenance | 8-10 hrs/week               | 10-15 hrs/week                             | 0 hrs/week                                    |
| Learning Curve      | Steep                       | Very Steep                                 | Gentle                                        |
| Best For            | Public-facing services      | Complex microservices with full mesh needs | Internal Zero Trust service or app networking |

When to Use Lane7 Blueprints:

Good fit:

  • Internal K8s service-to-service communication
  • HTTP application networks and agentic AI systems
  • You want Zero Trust without PKI complexity
  • You want low cyber risk (high security)
  • You want micro-segmentation out of the box: only authorized routes work. This is Zero Trust by design.
  • You don't need full service mesh features (traffic management, observability, etc.)
  • You're US-based (EAR compliance requirement)
  • You want to deploy and forget

Not a good fit:

  • You need Layer 7 traffic management (use Istio)
  • You need built-in observability/tracing (use Linkerd)
  • You need ingress/egress control (use a full mesh)
  • You're outside the US (currently - international licensing coming)

This Isn't a Silver Bullet (And That's OK)

Before you rush to deploy, here are the limitations:

1. Protocols

Blueprints are pre-configured for HTTP (Bi-Pod, Tri-Pod, Fan-Out-In). You can combine, chain, or stack multiple Blueprints to build sophisticated service/application networks, but if you have other protocol needs, such as HL7, contact us to move those up on our roadmap.

Why: Simplicity and security. Each topology has been tested and validated.

2. East-West (Currently)

Lane7 is for internal service-to-service traffic at the present time. You'll have to wait for:

  • Ingress (external → cluster traffic)
  • Egress (cluster → external traffic)
  • North-South traffic management (multi-cluster blueprints coming soon).

Why: We focused on solving one problem extremely well: internal mTLS without PKI.

Workaround: Use Lane7 for East-West + your existing ingress controller for North-South.

Need a particular Blueprint Pattern that's not yet in our Lane7 catalog?
email us here: lane7@hopr.co

3. US-Only (EAR Compliance)

CHIPS™, SEE™, and MAID™ use advanced cryptographic technology that is subject to US Export Administration Regulations.

Why: WoSPs are US-built technology, and we're required to restrict delivery to US-based organizations. We are seeking an ENC assessment to lessen the restriction and increase Blueprint availability.

Current status: US-based organizations only.

Roadmap: International licensing in progress. Email us if you're outside US and interested — we're tracking demand.

Important: This is NOT ITAR (defense-specific). This is commercial export control (EAR). Similar to how many US crypto vendors start.

4. Not a Full Service Mesh

The WoSPs in each Lane7 Blueprint are built on Envoy, so certain capabilities can be inherited from Envoy:

  • Traffic splitting / canary deployments
  • Circuit breaking
  • Throttling
  • Built-in observability/tracing
  • Load balancing policies

Why: Service meshes are complex because they do everything. We do one thing: Zero Trust machine identity and encrypted communication without PKI.

Workaround: If you need full mesh features, use Istio or Linkerd. If you just need encrypted East-West traffic without the overhead, use Lane7.

5. Early Stage Product

You'll be in the first cohort of external users.

What this means:

  • ✅ We're responsive to feedback
  • ✅ We monitor deployments via observability plane
  • ✅ We offer support (15-min calls, email, docs)
  • ⚠️ You're helping us improve the product
  • ⚠️ Best for greenfield or isolated workloads initially

If you need battle-tested: Use Istio/Linkerd.

If you're OK being an early adopter: Welcome aboard.

The Questions Everyone Asks

"Why US-only? This seems arbitrary."

Not arbitrary—it's US law. WoSPs use cryptographic technology subject to Export Administration Regulations (EAR). We're required to restrict delivery to US-based organizations.

EAR ≠ ITAR. This is commercial export control, not defense-specific. Most US crypto vendors either:

  • (a) Go through export approval (takes 6-12 months)
  • (b) Use exemptions (limited use cases)
  • (c) Restrict to US initially (our approach)

We chose (c) to move fast and stay compliant.

If you're outside the US: Email us at lane7@hopr.co. We're tracking demand.


"How does CHIPS™ actually work? Sounds like magic."

Not magic—just a novel approach to key generation.

Simple explanation: Like TOTP (Google Authenticator), but event-based rather than time-based: a new connection between two pods triggers CHIPS/SEE to generate encryption keys.

Technical explanation:

  1. Both workloads have WoSPs configured to use the same CHIPS algorithm. There are many thousands of them!
  2. License confirms the validity of a WoSP
  3. Each workload runs identically configured WoSPs
  4. Credentials rotate based on the session activity of the App
  5. No key exchange over wire = no handshake needed

This is NOT novel cryptography. We're not inventing ciphers. It's a novel application of well-understood primitives (AES).

Validated: 3rd-party independent security analysis + ethical hacker review.


"What happens if credentials get out of sync?"

They're designed to be independent (not synchronized) as a security measure.

How it works:

  • MAID™ (identity) verified by observability plane
  • SEE™ (secrets) verified locally in WoSP
  • Two different verification points = no single point of compromise

An attacker capturing one credential cannot derive the other.

Fault tolerance:
The system handles temporary issues (network delays, transient failures) with automatic re-attempts. Connections fail closed (deny) rather than open.

This is different from PKI where compromising the CA compromises everything.


"Is this production-ready?"

Depends on your definition.

Current state:

  • ✅ We run this in production for some workloads
  • ✅ Pre-configured and tested Blueprints
  • ✅ Observability plane monitoring
  • ✅ Support available
  • ✅ Free Blueprints for local development and non-commercial use

Limitations:

  • ⚠️ Early stage (first external user cohort)
  • ⚠️ US-only (EAR)
  • ⚠️ Expanding to additional blueprints for non-HTTP protocols
  • ⚠️ Best for greenfield initially

If you need: Battle-tested, enterprise SLA → We have a paid tier for commercial use in a production environment

If you're OK with: Early adopter status, working with us to improve → Try it now


"What's the catch? Why is this free?"

Free Blueprints = Create demand and increase adoption

FREE tier:

  • 45-day license for each WoSP in the Blueprint
  • Pre-configured topologies and messaging patterns
  • Non-commercial use (proof of concept)
  • Intended for dev environments
  • US-only

PAID tier (coming soon):

  • 365-day licenses
  • Commercial use
  • Intended for production environment
  • Priority support
  • Enterprise scaling

The catch: We need early adopters to validate the product. You're not the product. We're building a business around the WoSP/Lane7 and using free tier to accelerate adoption.

When you need longer licenses or custom topologies → you become a paying customer.


"How is this different from Istio/Linkerd?"

Scope:

  • Istio/Linkerd = Full service mesh (traffic management, observability, security)
  • Lane7 = East-West Zero Trust communications (one problem solved extremely well)

Identity:

  • Istio/Linkerd = X.509 certs via SPIFFE/SPIRE (still PKI)
  • Lane7 = MAID™ with Zero Trust verification (no PKI)

Complexity:

  • Istio = 1000+ lines of config, control plane, sidecars, learning curve
  • Lane7 = 150 lines, all deployment manifests, deploy in <30 min

Use case:

  • If you need full mesh → Istio/Linkerd
  • If you just want internal mTLS without PKI overhead and complexity → Lane7

Not replacements. Different tools for different needs.


"Can I see the code?"

WoSP uses open-source components (Envoy, etc.), but CHIPS™, SEE™, and MAID™ are patented proprietary crypto licensed by Hopr. The Cloud Native AMTD they produce is also protected by US patent.

We've already open-sourced Web Retriever, but the other technologies will remain licensed and controlled due to:

  • EAR compliance requirements
  • Intellectual property protection

You get full access to:

  • Blueprint YAML configurations
  • Deployment manifests
  • Documentation on how everything works

Technical deep dive on WoSP/CHIPS coming in a follow-up post.

You Don't Have to Live Like This

Six months ago, our team spent 8-10 hours a week confronting complex configurations and YAML hell:

  • Monitoring expiry dates
  • Troubleshooting rotation failures
  • Responding to broken configs

Today: 0 hours per week.

Not because we got better at certificate management. Because we stopped using certificates for internal traffic entirely.

The Shift in Thinking

We don't have to manage PKI like it's 2015.

Zero Trust doesn't require certificates.

It requires verifiable identity trust. And there are better ways to establish identity than files sitting on disk for 90 days.

Cloud Native AMTD (Automated Moving Target Defense) from the WoSP has been a game changer. And now, with Lane7 Blueprints, you don't need to be an advanced DevOps engineer to quickly build an app network that is secure by default.

No PKI. No certificates. No Secret Zero. No 3 AM alerts.

Try It Yourself

The fastest way to see this in action is to deploy one of the two free Blueprints, the Bi-Pod Blueprint or the Quad-Pod Blueprint:

  1. Sign up (US-based orgs only): Lane7 Blueprints Catalog
  2. Download your Blueprint (pre-configured with credentials)
  3. Deploy:
    • Bi-Pod: kubectl apply -f pod-1/ -f pod-2/
    • Quad-Pod: kubectl apply -f pod-1/ -f pod-2/ -f pod-3/ -f pod-4/
  4. Watch the keys rotate every 60 seconds in the logs

Free 45-day license. Deploy in <30 minutes.

Technical documentation: https://docs.hopr.co

Questions? Email: support@hopr.co


Coming next: A technical deep dive on WoSP decentralized credential management and cryptographic innovations.
I'll explain how CHIPS™, SEE™, and MAID™ work together to secure apps, reject threat traffic, protect data in transit everywhere, and how this compares to Diffie-Hellman and SPIFFE/SPIRE.

If you want to be notified when that post drops, follow me here on Dev.to.

And if you deploy a Free Blueprint, I'd love to hear how it goes. Drop a comment below.

May your certificates never expire at 3 AM again. 🙂
