DEV Community

Parth Sarthi Sharma

Secrets Management for LLM Tools: Don’t Let Your OpenAI Keys End Up on GitHub 🚨

A practical guide to securing LLM API keys, embeddings, and vector store credentials.

TL;DR: If you're building with LLMs and you're not treating secrets as first-class infrastructure, you're already at risk.

Every week, we see:

  • OpenAI keys pushed to GitHub
  • API keys logged in CloudWatch
  • Secrets hardcoded in Streamlit demos that later go to production

LLM systems multiply secrets quickly. If you don’t design for this early, things get messy fast.

This is a production-ready blueprint for securing LLM systems properly.


The Problem: LLM Secrets Multiply Fast 🐰

One LLM integration turns into dozens of credentials:

1 LLM API key (OpenAI / Anthropic)
→ 3 embedding endpoints
→ 5 vector store connections (Pinecone / Weaviate)
→ 2 RAG databases
→ 10 external tools (SerpAPI, Wolfram, etc.)
→ 50 microservices
= 70+ secrets

The bigger your AI system gets, the larger your attack surface becomes.


1️⃣ Never Hardcode Secrets

❌ Wrong (guaranteed leak eventually)

# NEVER DO THIS
from openai import OpenAI

client = OpenAI(api_key="sk-123...")

Hardcoded secrets:

  • End up in git history
  • Get copied into logs
  • Leak via screenshots or stack traces

✅ Right: Runtime Environment Injection

# config.py

import os
from openai import OpenAI

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

client = OpenAI(api_key=OPENAI_API_KEY)

Principle:
Secrets should be injected at runtime, never committed to source code.
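One refinement worth making: fail fast when the variable is unset, instead of passing `None` into the client and getting a confusing authentication error later. A minimal sketch (`require_env` is a hypothetical helper, not part of any SDK):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# At startup this crashes immediately with a clear message,
# rather than failing mid-request with an opaque auth error:
# client = OpenAI(api_key=require_env("OPENAI_API_KEY"))
```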

2️⃣ Use Cloud-Native Secrets Managers

If you're in production, use a managed secrets service.

AWS Secrets Manager + Lambda Example

# lambda_function.py
import json
import boto3
from openai import OpenAI

def get_secrets():
    client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId="llm-prod/openai")
    return json.loads(secret["SecretString"])

def lambda_handler(event, context):
    secrets = get_secrets()
    client = OpenAI(api_key=secrets["OPENAI_API_KEY"])
    # LLM logic here


Benefits:

  • Centralized storage
  • IAM-based access control
  • Audit logs
  • Automatic rotation support
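One practical refinement: `get_secrets()` above calls Secrets Manager on every invocation, which adds latency and API cost. Caching at module scope lets warm Lambda containers reuse the fetched secret. A sketch of the pattern, with the boto3 call stubbed out so it runs anywhere:

```python
import json
from functools import lru_cache

def _fetch_secret_string() -> str:
    # In the Lambda above this would be
    # boto3.client("secretsmanager").get_secret_value(...)["SecretString"];
    # stubbed with a dummy payload so the caching pattern is runnable here.
    return json.dumps({"OPENAI_API_KEY": "sk-example"})

@lru_cache(maxsize=1)
def get_secrets() -> dict:
    """Fetch once per process; warm invocations reuse the cached result."""
    return json.loads(_fetch_secret_string())
```

Rotation caveat: a cached secret survives until the container is recycled, so pair this with an explicit `get_secrets.cache_clear()` (or a short TTL) if you rotate aggressively.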

Terraform for Secret Infrastructure

resource "aws_secretsmanager_secret" "llm_keys" {
  name = "llm-prod/openai"
  tags = {
    Environment = "Production"
    Team        = "AI"
  }
}

resource "aws_secretsmanager_secret_version" "llm_keys_version" {
  secret_id     = aws_secretsmanager_secret.llm_keys.id
  secret_string = jsonencode({
    OPENAI_API_KEY    = "sk-..."
    ANTHROPIC_API_KEY = "sk-ant-..."
    PINECONE_API_KEY  = "pxl-..."
  })
}


Infrastructure-as-Code ensures:

  • Repeatability
  • Auditability
  • No manual copy-paste secret management

One caveat: secret values passed to Terraform end up in the state file, so keep the real values out of version control (inject them via variables) and use an encrypted state backend.

3️⃣ Prefer Dynamic Credentials Over Static API Keys ⚡

Static API keys are long-lived and high risk.

Dynamic credentials reduce blast radius.

IAM Roles for Service Accounts (Kubernetes + AWS IRSA)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: llm-worker
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/llm-worker-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-worker
spec:
  template:
    spec:
      serviceAccountName: llm-worker
      containers:
        - name: llm-worker
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: llm-secrets
                  key: openai-key


Even better: eliminate API keys entirely where possible and use workload identity federation.


4️⃣ Secure CI/CD with OIDC (No Long-Lived AWS Keys)

Never store AWS credentials in GitHub secrets if you can avoid it.

Use OIDC federation instead:

name: Deploy LLM Pipeline

on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-llm-deploy
          aws-region: us-east-1

      - run: python deploy.py


This avoids:

  • Static AWS access keys
  • Manual credential rotation
  • CI secret sprawl

5️⃣ Agentic LLM Systems Need Scoped Secrets 🧠

When building multi-agent systems:

  • Each agent should have scoped credentials
  • Short-lived tokens preferred
  • No shared global API key across agents

Example pattern:

class LLMAgentSecrets:
    def __init__(self, sm_client):
        self.sm_client = sm_client

    def get_agent_secret(self, agent_id: str):
        secret_name = f"llm-agent-{agent_id}"
        secret = self.sm_client.get_secret_value(SecretId=secret_name)
        return secret["SecretString"]


Design for:

  • Isolation
  • Least privilege
  • Auditable access
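That naming convention also lets you enforce scoping in application code: an agent can only resolve the secret that matches its own ID. A hypothetical sketch (the dict stands in for a real secrets-manager client; the `llm-agent-<id>` convention follows the class above):

```python
class ScopedSecretStore:
    """Allow each agent to read only the secret in its own namespace."""

    def __init__(self, backend: dict):
        # backend stands in for a real secrets-manager client
        self._backend = backend

    def get(self, agent_id: str, secret_name: str) -> str:
        allowed = f"llm-agent-{agent_id}"
        if secret_name != allowed:
            raise PermissionError(
                f"Agent {agent_id!r} may not read {secret_name!r}"
            )
        return self._backend[secret_name]
```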

✅ Production Security Checklist

☐ No hardcoded secrets (git grep -i "sk-")
☐ Cloud secrets manager in use
☐ IAM roles preferred over static keys
☐ OIDC for CI/CD
☐ Secrets scanning enabled (TruffleHog, GitGuardian)
☐ Log sanitization in place
☐ Rotation policy defined (≤ 90 days)
☐ Audit logging enabled
☐ Least privilege enforced


Common Leak Vectors 🚫

| Leak Vector   | Detection                 | Prevention          |
|---------------|---------------------------|---------------------|
| Git commits   | `git log -p \| grep sk-`  | Pre-commit hooks    |
| Logs          | CloudWatch Insights       | Log scrubbing       |
| Docker images | Inspect image layers      | Multi-stage builds  |
| Memory dumps  | `/proc/[pid]/environ`     | Container hardening |
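The pre-commit and log-scrubbing defenses can start as a simple regex pass. A minimal sketch (real scanners like TruffleHog or GitGuardian add entropy checks and hundreds of provider-specific rules; the patterns below only approximate OpenAI- and Anthropic-style key shapes):

```python
import re

# Rough patterns for common LLM provider key shapes (assumptions,
# not official formats). Check "sk-ant-" before the generic "sk-".
KEY_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
]

def find_leaks(text: str) -> list[str]:
    """Return substrings that look like hardcoded API keys."""
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def redact(text: str) -> str:
    """Replace anything key-like before it reaches logs."""
    for pattern in KEY_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```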

Cost vs Risk 💰

Typical monthly cost for secure secrets management:

  • AWS Secrets Manager: ~$0.40 per secret
  • Secret scanning tools: modest monthly fee
  • OIDC: no additional cost

Compare that to the cost of a leak:

  • Revoking and rotating leaked keys
  • Service outages
  • Customer trust damage

Security is cheaper than cleanup.

Key Takeaways 🎯

  1. Dynamic > Static
  2. Inject at runtime, never commit
  3. Audit secret access
  4. Rotate regularly
  5. Scan continuously
  6. Apply least privilege everywhere

LLMs are powerful.

But API keys are still just credentials: treat them like production infrastructure.

Have you ever dealt with an exposed LLM API key in production? What happened?
Let’s discuss 👇
