
Aniefon Umanah


Encrypting Secrets in Production (Without Breaking Everything)

I just spent way more time than I'd like to admit adding encryption to a NestJS app that was already live. The kind of feature that sounds simple until you're staring at a database full of plaintext API keys wondering how to migrate them without taking the whole thing offline.

Here's what actually happened.

The "Oh Shit" Moment

We store webhook secrets for GitHub and API credentials for Twitter/LinkedIn/Dev.to. All sitting in a PostgreSQL jsonb column. Unencrypted.

Not ideal? Sure. Security vulnerability? Absolutely. Something anyone actually exploits in a small B2B SaaS? Probably not, but still.

The real trigger was adding more integrations. Each new platform meant more credentials, more API keys, more things that could leak. At some point you have to stop pretending you'll "add encryption later."

The Part Nobody Tells You About

Here's the thing about encrypting existing data: you can't just flip a switch. You have to handle:

  1. Existing plaintext data - Can't just encrypt in place, you'll break production
  2. Backward compatibility - App needs to read both formats during migration
  3. Column size - Encrypted data is bigger (way bigger)
  4. Zero downtime - Can't just take the DB offline on a Tuesday

Every tutorial shows you how to encrypt new data. Nobody shows you how to migrate the old stuff while keeping the lights on.

The Solution (That Actually Worked)

Built a custom TypeORM transformer that auto-detects whether data is encrypted:

import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';
import { ValueTransformer } from 'typeorm';

// 32-byte key from the ENCRYPTION_KEY env var (more on loading this later)
const key = Buffer.from(process.env.ENCRYPTION_KEY ?? '', 'base64');

export class EncryptedStringTransformer implements ValueTransformer {
  // Our format markers: iv:authTag:ciphertext, all base64
  // (a 12-byte iv encodes to 16 chars, a 16-byte auth tag to 24)
  private isEncrypted(value: string): boolean {
    const parts = value.split(':');
    return (
      parts.length === 3 &&
      parts[0].length === 16 &&
      parts[1].length === 24 &&
      parts.every((p) => /^[A-Za-z0-9+/]+={0,2}$/.test(p))
    );
  }

  to(value: string | null): string {
    if (!value) return '';
    // Check if already encrypted (has our format markers)
    if (this.isEncrypted(value)) return value;

    // Encrypt plaintext
    const iv = randomBytes(12);
    const cipher = createCipheriv('aes-256-gcm', key, iv);
    const encrypted = Buffer.concat([
      cipher.update(value, 'utf8'),
      cipher.final(),
    ]);
    const authTag = cipher.getAuthTag();

    // Format: iv:authTag:ciphertext (all base64)
    return `${iv.toString('base64')}:${authTag.toString('base64')}:${encrypted.toString('base64')}`;
  }

  from(value: string | null): string {
    if (!value) return '';
    // If not encrypted format, return as-is (backward compat)
    if (!this.isEncrypted(value)) return value;

    // Decrypt: split the three parts, verify the auth tag, recover plaintext
    const [ivB64, authTagB64, encryptedB64] = value.split(':');
    const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(ivB64, 'base64'));
    decipher.setAuthTag(Buffer.from(authTagB64, 'base64'));
    return Buffer.concat([
      decipher.update(Buffer.from(encryptedB64, 'base64')),
      decipher.final(),
    ]).toString('utf8');
  }
}

The magic: it reads both formats. Plaintext passes through untouched. Encrypted data gets decrypted. New writes always encrypt.

This meant I could:

  1. Deploy the transformer
  2. Run a script to UPDATE all records (triggers encryption)
  3. Zero downtime, zero breaking changes

What Actually Broke

Of course it wasn't that smooth.

Issue #1: Column length

Original column: VARCHAR(255)
Encrypted webhook secret: ~200 chars

Seems fine, right? Wrong. The migration failed on a few records because some secrets were already near the limit. Encrypted versions didn't fit.

Had to bump to VARCHAR(500) first, then migrate data.

Issue #2: The jsonb trap

Some credentials were nested in jsonb:

{
  "accessToken": "abc123",
  "refreshToken": "xyz789"
}

You can't use TypeORM transformers on jsonb fields. Had to:

  • Add a new TEXT column for encrypted credentials
  • Write custom serialization
  • Drop the old jsonb column
  • Rename the new one

All while keeping the entity interface identical so the rest of the codebase didn't notice.
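The custom serialization boils down to JSON-stringify, then encrypt into the same `iv:authTag:ciphertext` format the transformer uses. A minimal sketch (the function names and the random key are illustrative; in the app the key comes from `ENCRYPTION_KEY`):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

const key = randomBytes(32); // stand-in; the app loads this from ENCRYPTION_KEY

// Serialize a credentials object into an encrypted TEXT value
export function encryptJson(obj: Record<string, string>): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ct = Buffer.concat([
    cipher.update(JSON.stringify(obj), 'utf8'),
    cipher.final(),
  ]);
  return `${iv.toString('base64')}:${cipher.getAuthTag().toString('base64')}:${ct.toString('base64')}`;
}

// Reverse: decrypt, verify the auth tag, parse back to an object
export function decryptJson(value: string): Record<string, string> {
  const [iv, tag, ct] = value.split(':').map((p) => Buffer.from(p, 'base64'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return JSON.parse(
    Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8'),
  );
}
```

Wrap these in an entity getter/setter and callers still see a plain credentials object, which is what keeps the rest of the codebase oblivious.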

Issue #3: Environment variables

The encryption key comes from ENCRYPTION_KEY env var. Obvious, right?

Except in testing I kept getting decryption failures. Turns out the key was getting loaded after TypeORM initialized the transformers. Race condition.

Fix: Lazy-load the key in the transformer instead of at module init. Ugly but works.
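The fix is just a memoized getter that touches `process.env` on first use instead of at module load, so it no longer matters when TypeORM constructs the transformers. A sketch:

```typescript
let cachedKey: Buffer | null = null;

// Resolve the key lazily: the env var is read on first call,
// long after module init, then cached for every call after.
export function getEncryptionKey(): Buffer {
  if (!cachedKey) {
    const raw = process.env.ENCRYPTION_KEY;
    if (!raw) throw new Error('ENCRYPTION_KEY is not set');
    cachedKey = Buffer.from(raw, 'base64');
  }
  return cachedKey;
}
```

Inside the transformer, `key` becomes `getEncryptionKey()` at the encrypt/decrypt call sites. Ugly, but ordering-proof.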

The Things That Helped

  1. Testing with real prod data - Exported sanitized records, tested migration locally first
  2. Gradual rollout - Deployed transformer first, let it run for a day, then migrated data
  3. Monitoring - Added logs for every decryption failure, caught edge cases fast

What I'd Do Differently

Start with encryption from day one? Sure, but that's not useful advice when you're already live.

Real lesson: Build backward compatibility into your transforms from the start. Even if you're not encrypting yet, make your transformers detect and handle legacy formats. Future you will thank you.

Also: Don't underestimate column size requirements. Encrypted data is ~2-3x larger depending on encoding. Budget for it.
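For the `iv:authTag:ciphertext` format above you don't have to guess, since AES-GCM ciphertext is the same length as the plaintext and the rest is base64 expansion plus fixed overhead. A quick sizing helper:

```typescript
// Worst-case stored length for the iv:authTag:ciphertext format:
// base64 of a 12-byte iv, a 16-byte tag, the ciphertext, plus two colons.
export function encryptedLength(plaintextBytes: number): number {
  const b64 = (n: number) => Math.ceil(n / 3) * 4;
  return b64(12) + b64(16) + b64(plaintextBytes) + 2; // 16 + 24 + ct + ':' x2
}
```

A 255-byte secret comes out to 382 chars, so VARCHAR(500) leaves comfortable headroom; short secrets hit the 2-3x ratio because the fixed 42-char overhead dominates.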

The Bonus Problem: Activity Logging

While I was in there, I added an activity feed. Every action (draft created, account connected, credentials updated) gets logged with full context.

Why mention this? Because it nearly broke the encryption migration.

The activity logger runs in a DB transaction. If it fails, the whole operation rolls back. Which meant every credentials update that should have triggered encryption... didn't, because the activity log couldn't serialize the metadata.

Had to make activity logging async and fire-and-forget. If it fails, whatever, we lose an activity log entry. Better than blocking critical operations.
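The fire-and-forget shape is small: kick off the write outside the transaction, never await it on the critical path, and swallow its failures. A sketch, with `logActivity` as a stand-in for the real writer:

```typescript
// Fire-and-forget activity logging: runs outside the DB transaction
// and catches its own errors, so a failed log entry can never roll
// back the operation that triggered it.
export function logActivityAsync(
  logActivity: (event: string, meta: unknown) => Promise<void>,
  event: string,
  meta: unknown,
): void {
  // Deliberately not awaited by callers; failures are logged and dropped
  void logActivity(event, meta).catch((err) => {
    console.error(`activity log failed for "${event}":`, err);
  });
}
```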

The Result

~9,000 lines of changes. Most of it:

  • Migration files (careful, methodical schema changes)
  • Test fixtures (encryption breaks deterministic tests)
  • Activity logging infrastructure (the thing I didn't plan for)

Actual encryption code? Maybe 200 lines.

The rest is ceremony around not breaking production.


The real takeaway: Retrofitting security into a live system isn't about the encryption algorithm. It's about the 10 other things that break when you change how data is stored. The column sizes, the backward compatibility, the test fixtures, the race conditions.

Budget 10x the time you think it'll take. Then add logging so you can debug the things you didn't anticipate.


Tags: #nestjs #security #encryption #database #migration
