DEV Community

Luke Fryer

Posted on • Originally published at aipromptarchitect.co.uk

How we built a type-safe prompt engineering framework in TypeScript

Embedding raw LLM prompts in codebases is a massive architectural anti-pattern.

We used to write features like this across our Next.js backend:

// BAD ❌
const prompt = `You are a developer. Fix this code. We use React. Give me code only.`;
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: prompt }]
});

The problem? As our LLM interactions scaled, hallucination rates spiked. Different engineers structured their instructions differently, vital context was omitted, and output formatting was unpredictable.

We needed a strict API for LLMs.

The STCO Framework

Rather than writing unstructured paragraphs, we developed the STCO framework — a structured methodology that requires every prompt to be explicitly split into four components:

  1. System: The persona, role, and constraints.
  2. Task: The specific action required.
  3. Context: Background variables and environmental data.
  4. Output: The exact desired format.

Once engineers were required to populate these fields programmatically, our hallucination rates dropped by over 30%. However, we still lacked type safety.
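Concretely, the four components map onto a small TypeScript interface. Here is a minimal sketch of that shape — the package's actual exported types may differ slightly:

```typescript
// Minimal sketch of the STCO shape. The shipped package's exact
// interface may differ, but every prompt carries these four fields.
interface STCOPrompt {
  system: string;   // persona, role, and constraints
  task: string;     // the specific action required
  context: string;  // background variables and environmental data
  output: string;   // the exact desired format
}

const example: STCOPrompt = {
  system: 'You are a senior TypeScript reviewer.',
  task: 'Review the attached diff for type errors.',
  context: 'The project uses strict mode and ES2022 modules.',
  output: 'A bulleted list of findings.'
};
```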

Building @lukefryer4/stco-prompt-builder

To standardize this across our codebase, we built and open-sourced a lightweight TypeScript utility designed entirely around the STCO framework. It provides interfaces for type-safe prompting, a builder that interpolates the object into Markdown-headed sections, and static validators.

You can view the package on NPM and clone the source code on GitHub.

Step 1: Type-Safe Definitions

We defined a strict STCOPrompt interface. Now the TypeScript compiler raises an error — surfaced directly in your IDE — if an engineer forgets to provide the LLM context.

import { STCOPrompt } from '@lukefryer4/stco-prompt-builder';

const prompt: STCOPrompt = {
  system: 'You are a senior React developer and performance expert.',
  task: 'Refactor the provided component to reduce unnecessary re-renders.',
  context: 'We are using React 18, Next.js App Router, and TailwindCSS.',
  output: 'Provide the complete refactored code block with brief inline comments.'
};

Step 2: The Compilation Engine

Instead of hand-assembling template literals, the builder compiles the object into clean, machine-readable Markdown sections (e.g. ### System, ### Task) that OpenAI and Anthropic models respond well to.

import { buildPrompt } from '@lukefryer4/stco-prompt-builder';

// Compile the structured object into a single prompt string
const llmReadyString = buildPrompt(prompt);

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: llmReadyString }]
});
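If you want to see what that compilation step produces without installing the package, here is a hand-rolled sketch of the same idea. Note that `buildPromptSketch` is a hypothetical stand-in, not the package's API, and the `### Section` output format is assumed from the description above:

```typescript
interface STCOPromptShape {
  system: string;
  task: string;
  context: string;
  output: string;
}

// Hypothetical stand-in for buildPrompt, assuming the "### Section"
// Markdown-header format described above.
function buildPromptSketch(p: STCOPromptShape): string {
  return [
    `### System\n${p.system}`,
    `### Task\n${p.task}`,
    `### Context\n${p.context}`,
    `### Output\n${p.output}`,
  ].join('\n\n');
}

const compiled = buildPromptSketch({
  system: 'You are a senior React developer and performance expert.',
  task: 'Refactor the provided component to reduce unnecessary re-renders.',
  context: 'We are using React 18, Next.js App Router, and TailwindCSS.',
  output: 'Provide the complete refactored code block with brief inline comments.'
});
// compiled begins with "### System" and contains one header per section
```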

Advanced Heuristics

The open-source NPM package gives you a clean way to structure your prompts natively in TypeScript, but the real challenge is writing good STCO objects in the first place.

How do you know if your "System" definition is strong enough? How do you know if your "Context" is securely formatted?

For that, we use our visual A-F heuristic grader over at AI Prompt Architect. The web platform analyzes your prompt's structure and grades its effectiveness before you ever paste it into your codebase.

Start using Structured Prompts

If you are currently embedding messy paragraph strings directly into your backend code, I highly recommend transitioning to structured prompt architectures.

  1. npm install @lukefryer4/stco-prompt-builder
  2. Let us know how the STCO TypeScript implementation helps your code architecture!

If you want to read more about the research behind why this framework reduces latency and token hallucination, check out our A.I. Documentation.
