freerave

Posted on
I Built an AI-Powered .env Manager for VS Code — Here's What I Learned

Managing .env files is one of those things that sounds simple until you're on a team of 5, juggling 4 environments, and someone accidentally commits a Stripe secret key to GitHub at 2am.

I built DotEnvy — a VS Code extension that handles environment switching, secret detection, and encrypted backups. Version 1.6.0 just shipped, and it's the most technically interesting release yet. Let me walk you through what's inside.

Install it:


The Problem

Every project I've worked on has this folder structure:

```
.env
.env.development
.env.staging
.env.production
```

And every developer has their own "system" for switching between them — usually cp .env.staging .env and hoping they didn't forget. That's how secrets leak. That's how production breaks.

DotEnvy fixes this with one command: DotEnvy: Switch Environment.

See the new AI Secrets Scanner in action:

v1.6.0: The Big Changes

1. OS-Encrypted Secret Storage

The old version embedded the HMAC shared secret directly in the compiled bundle at build time:

```typescript
// ❌ v1.5.0 — secret baked into the .vsix
private readonly sharedSecret = process.env.EXTENSION_SHARED_SECRET || 'REPLACE_AT_BUILD_TIME';
```

Anyone could unzip the .vsix file (it's just a zip) and read it in plain text.

v1.6.0 stores it in VS Code's SecretStorage — OS-level encryption (Keychain on macOS, libsecret on Linux, Windows Credential Manager):

```typescript
// ✅ v1.6.0 — OS-encrypted, never in the bundle
public static async initialize(context: vscode.ExtensionContext): Promise<LLMAnalyzer> {
    if (!LLMAnalyzer.instance) {
        LLMAnalyzer.instance = new LLMAnalyzer(context);
    }
    await LLMAnalyzer.instance.loadSecret();
    return LLMAnalyzer.instance;
}

private async loadSecret(): Promise<void> {
    this.sharedSecret = await this.secrets.get('dotenvy.llm.sharedSecret');
}

public async setSharedSecret(secret: string): Promise<void> {
    await this.secrets.store('dotenvy.llm.sharedSecret', secret);
    this.sharedSecret = secret;
}
```

The user runs DotEnvy: Setup LLM Secret once. After that, every HMAC request is signed with the stored secret — never hardcoded anywhere.


2. The ML Feature Extractor — Carbon Copy of Python

The Railway backend uses a custom transformer model that takes a 35-element feature vector as input. The TypeScript extension was producing a 31-element vector and silently padding with zeros:

```typescript
// ❌ v1.5.0 — 31 features + 4 silent zeros
while (features.length < 35) { features.push(0); }
```

This meant the model was seeing garbage for features 32–35 on every single request. The fix was to rewrite featureExtractor.ts as an exact mirror of feature_extractor.py:

```typescript
// ✅ v1.6.0 — featureExtractor.ts mirrors feature_extractor.py exactly

// GROUP 2: Entropy (features 7-13)
f.push(shannonEntropy(secret) / 8.0);           // 7: normalized entropy  ← was /6.0 before!
f.push(ngramEntropy(secret, 2) / 8.0);          // 8: bigram entropy
f.push(ngramEntropy(secret, 3) / 8.0);          // 9: trigram entropy
f.push(compressionRatio(secret));               // 10: incompressibility proxy  ← was missing!
const maxRun = maxRunLength(secret);
f.push(maxRun / Math.max(1, len));              // 11: max repeat run
f.push(1.0 - maxRun / Math.max(1, len));        // 12: randomness score
f.push(localEntropyVariance(secret));           // 13: entropy variance  ← was missing!
```

The entropy normalization was also wrong — dividing by 6.0 instead of 8.0. Shannon entropy for a random string can exceed 6 bits/char, so the model was receiving clipped values.
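For reference, here's a minimal per-character Shannon entropy implementation — my own sketch, not DotEnvy's exact code. The maximum for arbitrary single-byte values is log2(256) = 8 bits/char, which is why 8.0 is the safe normalization divisor:

```typescript
// Per-character Shannon entropy in bits: -Σ p(c) · log2(p(c)).
// The maximum is log2(alphabet size), so 8.0 bounds any single-byte alphabet.
function shannonEntropy(text: string): number {
    if (text.length === 0) { return 0; }
    const counts = new Map<string, number>();
    for (const ch of text) {
        counts.set(ch, (counts.get(ch) ?? 0) + 1);
    }
    let entropy = 0;
    for (const count of counts.values()) {
        const p = count / text.length;
        entropy -= p * Math.log2(p);
    }
    return entropy;
}
```

A string of one repeated character scores 0, "abab" scores exactly 1 bit/char, and a long random base64 string approaches log2(64) = 6 — so clipping at 6.0 threw away exactly the signal that separates near-random secrets from everything else.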

Here's the localEntropyVariance function that was missing:

```typescript
// Mirrors _local_entropy_variance() in Python exactly
function localEntropyVariance(text: string, window = 8): number {
    if (text.length < window) { return 0.0; }

    const entropies: number[] = [];
    for (let i = 0; i <= text.length - window; i++) {
        entropies.push(shannonEntropy(text.slice(i, i + window)));
    }

    const mean = entropies.reduce((a, b) => a + b, 0) / entropies.length;
    const variance = entropies.reduce((a, b) => a + (b - mean) ** 2, 0) / entropies.length;
    const std = Math.sqrt(variance);

    // Low variance + high mean = consistently random = likely a real secret
    const consistency = 1.0 - Math.min(1.0, std / 2.0);
    const level       = Math.min(1.0, mean / 4.0);
    return (consistency + level) / 2.0;
}
```

A string like sk-proj-abc123XYZ has consistently high entropy across every 8-char window. A human-readable string like my-api-production-key has variable entropy. The model uses this to distinguish them.
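You can check this with a quick self-contained run — shannonEntropy is redefined here so the snippet stands on its own:

```typescript
// Per-character Shannon entropy in bits.
function shannonEntropy(text: string): number {
    if (text.length === 0) { return 0; }
    const counts = new Map<string, number>();
    for (const ch of text) { counts.set(ch, (counts.get(ch) ?? 0) + 1); }
    let entropy = 0;
    for (const count of counts.values()) {
        const p = count / text.length;
        entropy -= p * Math.log2(p);
    }
    return entropy;
}

// Same sliding-window logic as localEntropyVariance above.
function localEntropyVariance(text: string, window = 8): number {
    if (text.length < window) { return 0.0; }
    const entropies: number[] = [];
    for (let i = 0; i <= text.length - window; i++) {
        entropies.push(shannonEntropy(text.slice(i, i + window)));
    }
    const mean = entropies.reduce((a, b) => a + b, 0) / entropies.length;
    const variance = entropies.reduce((a, b) => a + (b - mean) ** 2, 0) / entropies.length;
    const std = Math.sqrt(variance);
    const consistency = 1.0 - Math.min(1.0, std / 2.0);
    const level       = Math.min(1.0, mean / 4.0);
    return (consistency + level) / 2.0;
}

// The machine-generated string scores higher: its 8-char windows are
// uniformly high-entropy, while the readable key's windows dip.
const secretLike = localEntropyVariance("sk-proj-abc123XYZ");
const readable   = localEntropyVariance("my-api-production-key");
```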


3. The Secrets Panel

The old flow was notification-based and capped at 5 results:

```typescript
// ❌ v1.5.0
for (const secret of secrets) {
    if (notified >= 5) { break; }   // just gave up after 5
    await vscode.window.showWarningMessage(...);
    notified++;
}
vscode.window.showWarningMessage(`Found ${secrets.length - 5} more secrets...`);
```

v1.6.0 opens a full WebviewPanel showing all secrets:

```typescript
// ✅ v1.6.0 — show everything
public static show(secrets: DetectedSecret[], extensionUri: vscode.Uri): SecretsPanel {
    const panel = vscode.window.createWebviewPanel(
        'dotenvy.secretsPanel',
        '🔍 DotEnvy — Secrets Scanner',
        vscode.ViewColumn.One,
        { enableScripts: true, retainContextWhenHidden: true }
    );
    return new SecretsPanel(panel, secrets);
}
```

The panel has live filter + search, confidence color-coding (red/orange/blue), and three actions per secret: 📍 View, 📥 Move to .env, 👁️ Not a Secret.
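The color-coding itself is just a threshold mapping over the model's confidence score. A sketch of the idea — the exact cutoffs here are illustrative assumptions, not DotEnvy's real values:

```typescript
type ConfidenceColor = "red" | "orange" | "blue";

// Illustrative thresholds — the real panel's cutoffs may differ.
function confidenceColor(confidence: number): ConfidenceColor {
    if (confidence >= 0.8) { return "red"; }    // high: almost certainly a secret
    if (confidence >= 0.5) { return "orange"; } // medium: worth a look
    return "blue";                              // low: probably noise
}
```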


4. The AI Feedback Loop

Every time a user clicks "Not a Secret" or "Move to .env", DotEnvy records it as a training sample:

```typescript
// FeedbackManager.ts
private static async record(
    secret: DetectedSecret,
    action: UserAction,    // 'confirmed_secret' | 'marked_false_positive'
    label: FeedbackLabel   // 'high' | 'medium' | 'low' | 'false_positive'
): Promise<void> {
    const features = FeatureExtractor.extract(
        secret.content,
        secret.context,
        variableName
    );  // 35-element vector

    const entry: FeedbackEntry = {
        id:                  crypto.randomUUID(),
        timestamp:           new Date().toISOString(),
        secret_value:        secret.content,   // already redacted: "sk-1****ef"
        context:             secret.context,
        user_action:         action,
        label,
        features,            // ← this is what the model learns from
        original_confidence: secret.confidence,
        sent:                false,
    };

    await FeedbackManager.save(entry);
    FeedbackManager.flush().catch((_err) => {
        // Silent fail — will retry on next event
    });
}
```

Entries are batched and sent to the Railway backend via HMAC-signed POST:

```typescript
// Flush sends up to 20 entries at a time
public static async flush(): Promise<void> {
    const pending = all.filter(e => !e.sent);
    if (pending.length === 0) { return; }

    for (let i = 0; i < pending.length; i += 20) {
        const batch = pending.slice(i, i + 20);
        await analyzer.sendFeedback(batch.map(e => ({
            secret_value:  e.secret_value,
            context:       e.context,
            features:      e.features,
            user_action:   e.user_action,
            label:         e.label,
        })));
        batch.forEach(e => { e.sent = true; });
    }
}
```

The model gets smarter with every correction. The /extension/feedback endpoint on the server calls model.train() with the new samples.
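HMAC signing with Node's built-in crypto module takes only a few lines. A sketch of what signing and verifying a request body could look like — the payload shape here is an assumption, not DotEnvy's actual wire format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Client side: sign the serialized request body with HMAC-SHA256.
function signBody(body: string, sharedSecret: string): string {
    return createHmac("sha256", sharedSecret).update(body).digest("hex");
}

// Server side: recompute the signature and compare in constant time,
// so the comparison itself doesn't leak timing information.
function verifySignature(body: string, signature: string, sharedSecret: string): boolean {
    const expected = signBody(body, sharedSecret);
    if (expected.length !== signature.length) { return false; }
    return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

Because both sides derive the signature from the body and the shared secret, a tampered payload fails verification without the secret ever crossing the wire.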


5. .dotenvyignore

.dotenvy/history/*.json files were triggering 50 false positives because they contain AES-256-GCM ciphertext — which looks exactly like a high-entropy secret.

The fix: .dotenvyignore — same syntax as .gitignore, file-change-aware cache:

```typescript
// dotenvyIgnore.ts — glob → RegExp conversion
private static globToRegex(glob: string): RegExp {
    let regexStr = '';
    let i = 0;

    while (i < glob.length) {
        const c = glob[i];
        if (c === '*') {
            if (glob[i + 1] === '*') {
                regexStr += glob[i + 2] === '/' ? '(?:.+/)?' : '.*';
                i += glob[i + 2] === '/' ? 3 : 2;
            } else {
                regexStr += '[^/]*';
                i++;
            }
        } else {
            regexStr += DotenvyIgnore.escapeRegex(c);
            i++;
        }
    }
    return new RegExp(regexStr + '$', 'i');
}
```
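Stripped of the class plumbing, the same conversion runs anywhere. A simplified standalone version — it handles only `*`, `**`, and literal characters (no `?`, character classes, or negation), and like the snippet above it anchors only at the end:

```typescript
// Simplified .gitignore-style matcher: *, **, and literal chars only.
function globToRegex(glob: string): RegExp {
    let regexStr = "";
    let i = 0;
    while (i < glob.length) {
        if (glob[i] === "*") {
            if (glob[i + 1] === "*") {
                // "**/" matches zero or more directories; bare "**" matches anything.
                regexStr += glob[i + 2] === "/" ? "(?:.+/)?" : ".*";
                i += glob[i + 2] === "/" ? 3 : 2;
            } else {
                regexStr += "[^/]*"; // single "*" never crosses a directory boundary
                i++;
            }
        } else {
            // Escape regex metacharacters in literal characters.
            regexStr += glob[i].replace(/[.+?^${}()|[\]\\]/g, "\\$&");
            i++;
        }
    }
    return new RegExp(regexStr + "$", "i");
}
```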

The default .dotenvyignore excludes:

```
.dotenvy/**
.dotenvy-backups/**
out/**
**/*.test.ts
**/*.spec.ts
docs/**
*.md
k8s/**
```

Right-click any file or folder in the Explorer → "DotEnvy: Ignore this path" — it adds the pattern instantly.


6. Centralized Logging

94 console.log/warn/error calls across 21 files → one Logger class with VS Code Output Channel:

```typescript
// Before
console.log(`[DotEnvy] History recorded for ${fileName}`);
console.error('Failed to load history:', error);

// After
logger.info(`History recorded for ${fileName}`, 'HistoryManager');
logger.error('Failed to load history', error, 'HistoryManager');
```

Output in VS Code:

```
2026-03-17 20:28:06.768 INFO  [Extension]    DotEnvy is now active!
2026-03-17 20:28:07.303 WARN  [LLMAnalyzer]  Shared secret not found in SecretStorage
2026-03-17 20:28:08.400 INFO  [extension]    LLM Service is online
```

LogLevel.DEBUG in development, LogLevel.WARN in production — auto-detected via context.extensionMode.
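The level-filtering pattern is straightforward. A minimal sketch — it writes to an array instead of a vscode.OutputChannel so it runs anywhere, and the class and format are illustrative, not DotEnvy's exact code:

```typescript
enum LogLevel { DEBUG = 0, INFO = 1, WARN = 2, ERROR = 3 }

// Minimal logger: messages below the active level are dropped.
// A real extension would write to a vscode.OutputChannel instead of an array.
class MiniLogger {
    public readonly lines: string[] = [];
    constructor(private level: LogLevel) {}

    private log(level: LogLevel, name: string, msg: string, component: string): void {
        if (level < this.level) { return; }
        this.lines.push(`${new Date().toISOString()} ${name.padEnd(5)} [${component}] ${msg}`);
    }

    public debug(msg: string, component: string): void { this.log(LogLevel.DEBUG, "DEBUG", msg, component); }
    public info(msg: string, component: string): void  { this.log(LogLevel.INFO,  "INFO",  msg, component); }
    public warn(msg: string, component: string): void  { this.log(LogLevel.WARN,  "WARN",  msg, component); }
    public error(msg: string, component: string): void { this.log(LogLevel.ERROR, "ERROR", msg, component); }
}
```

The win over scattered console.log calls isn't the formatting — it's that one constructor argument silences an entire category of output in production without touching 21 files.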


The Architecture

```
VS Code Extension (TypeScript)
    │
    ├── featureExtractor.ts   ← 35 features, mirrors Python exactly
    ├── llmAnalyzer.ts        ← HMAC signing, SecretStorage, circuit breaker
    ├── feedbackManager.ts    ← collects training data, sends to Railway
    ├── dotenvyIgnore.ts      ← .gitignore-style file exclusions
    └── SecretsPanel.ts       ← WebviewPanel with filter/search/actions
            │
            │ HMAC-SHA256 signed POST
            ▼
    Railway Backend (Python/FastAPI)
            │
            ├── /extension/analyze    ← returns confidence + reasoning
            ├── /extension/feedback   ← accepts training samples
            └── CustomLLM (Transformer)
                    └── feature_embeds = matmul(features, embedding_matrix)
```

What I Learned

1. Feature alignment matters more than model architecture. Feeding the wrong values to the right model produces worse results than feeding correct values to a simple heuristic. The 31→35 fix and the /6.0→/8.0 fix probably improved accuracy more than any model tuning would have.

2. vscode.SecretStorage is underused. It's OS-encrypted, persists across restarts, and requires zero dependencies. If you're building a VS Code extension that needs to store anything sensitive, use it.

3. Deleting code feels good. secretScanner.ts was 700 lines. SecretDetector.ts + 5 helper classes does the same thing with better separation of concerns and is actually testable. The 700-line version had accumulated debt that nobody wanted to touch.

4. .gitignore-style syntax is worth implementing properly. Users already know it, it's expressive enough for most cases, and the glob→regex conversion is about 50 lines of code.


Install


The feedback loop is the part I'm most excited about. Every "Not a Secret" click improves the model for the next user. It's a small flywheel, but it's running now.

If you try it and find false positives — that's the point. Click "Not a Secret" and help it learn.
