DEV Community

Lakshmi Sravya Vedantham

Type-safe LLM prompts in Rust: catching prompt bugs before they happen

Here's a bug I've seen in production more than once:

prompt = template.format(document=doc, language=lang, tone=tone)

And somewhere upstream, tone wasn't passed. With str.format you get a KeyError, but only at runtime, on whichever code path happens to hit it. And plenty of prompt pipelines use lenient templating instead (a .replace loop, string.Template.safe_substitute), which doesn't fail at all: the LLM gets a prompt with a literal {tone} in it. It either ignores it, makes something up, or returns garbage. You find out when a user complains.

Rust can do better. Here's the same thing in prompt-rs:

let prompt = PromptTemplate::new("Summarize {document} in {language} with a {tone} tone")
    .fill("document", &doc)
    .fill("language", "English")
    // forgot tone
    .build()?;  // Returns Err("Missing required variable(s): tone")

You get a Result. You handle it explicitly. The bug surfaces at the call site, not in production.
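What "handle it explicitly" looks like in practice: match on the Result at the call site and decide whether to log, fall back, or abort. A minimal std-only sketch of the pattern (build_stub is a hypothetical stand-in for a failing build(), not the crate's API):

```rust
// Stand-in for a build() that failed; the real prompt-rs call
// returns Result<String, PromptError>.
fn build_stub() -> Result<String, String> {
    Err("Missing required variable(s): tone".to_string())
}

fn main() {
    // The caller decides what a missing variable means, all before
    // any tokens are spent on an API call.
    match build_stub() {
        Ok(prompt) => println!("sending: {prompt}"),
        Err(e) => eprintln!("prompt bug caught at the call site: {e}"),
    }
}
```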

GitHub: LakshmiSravyaVedantham/prompt-rs


Install

[dependencies]
prompt-rs = "0.1"
serde_json = "1"  # for Chat serialization

Prompt templates

use prompt_rs::PromptTemplate;

let template = PromptTemplate::new("Summarize {document} in {language}");

// Inspect what variables the template expects
let vars = template.variables();
// HashSet { "document", "language" }

// Fill them in and build
let prompt = template
    .fill("document", &my_doc)
    .fill("language", "English")
    .build()?;
// "Summarize [doc content] in English"

If you miss a variable:

let result = template.fill("document", &my_doc).build();
// Err(MissingVariables("language"))

The error message tells you exactly which variable you forgot. No guessing.
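The diagnostic behind that error is nothing exotic: the names the template declares, minus the names that were actually filled, joined into one message. A std-only sketch of the idea (the function name is mine, not the crate's):

```rust
use std::collections::{HashMap, HashSet};

// Missing-variable diagnostic: declared names minus filled names,
// sorted so the error message is deterministic and readable.
fn missing_message(declared: &HashSet<&str>, filled: &HashMap<&str, String>) -> Option<String> {
    let mut missing: Vec<&str> = declared
        .iter()
        .filter(|v| !filled.contains_key(*v))
        .copied()
        .collect();
    missing.sort();
    if missing.is_empty() {
        None
    } else {
        Some(missing.join(", "))
    }
}

fn main() {
    let declared: HashSet<&str> = ["document", "language"].into_iter().collect();
    let mut filled = HashMap::new();
    filled.insert("document", "my doc".to_string());
    // Only "document" was filled, so "language" is reported missing.
    assert_eq!(missing_message(&declared, &filled), Some("language".into()));
    println!("ok");
}
```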


Chat messages

The Chat builder produces a Vec<Message> in the role/content shape OpenAI-style chat APIs expect (Anthropic's Messages API uses the same message shape, though it takes the system prompt as a separate top-level field):

use prompt_rs::chat::Chat;

let messages = Chat::new()
    .system("You are a concise technical writer")
    .user(&prompt)
    .build();

// Serialize directly — messages are serde::Serialize
let body = serde_json::json!({
    "model": "gpt-4o",
    "messages": messages,
    "max_tokens": 512
});

The Role enum serializes to lowercase strings ("system", "user", "assistant"), matching the wire format these APIs expect:

#[derive(serde::Serialize)]
#[serde(rename_all = "lowercase")]
pub enum Role {
    System,
    User,
    Assistant,
}
// a Message serializes as: { "role": "user", "content": "..." }

How it works internally

The template parser is 30 lines of plain Rust — no regex, no proc macros:

pub fn variables(&self) -> HashSet<&str> {
    let mut vars = HashSet::new();
    let mut rest = self.template.as_str();
    while let Some(start) = rest.find('{') {
        rest = &rest[start + 1..];
        if let Some(end) = rest.find('}') {
            let name = &rest[..end];
            if !name.is_empty() && name.chars().all(|c| c.is_alphanumeric() || c == '_') {
                vars.insert(name);
            }
            rest = &rest[end + 1..];
        } else {
            break;
        }
    }
    vars
}
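To experiment with that scan outside the crate, the same loop works as a free function. A std-only sketch (scan_variables is my name for it, not the crate's):

```rust
use std::collections::HashSet;

// Same scan as PromptTemplate::variables(), as a standalone function:
// walk the string, take each {...} pair, keep well-formed names only.
fn scan_variables(template: &str) -> HashSet<&str> {
    let mut vars = HashSet::new();
    let mut rest = template;
    while let Some(start) = rest.find('{') {
        rest = &rest[start + 1..];
        if let Some(end) = rest.find('}') {
            let name = &rest[..end];
            if !name.is_empty() && name.chars().all(|c| c.is_alphanumeric() || c == '_') {
                vars.insert(name);
            }
            rest = &rest[end + 1..];
        } else {
            break; // unmatched '{': stop scanning
        }
    }
    vars
}

fn main() {
    let vars = scan_variables("Summarize {document} in {language}, {a b}, {}");
    assert!(vars.contains("document") && vars.contains("language"));
    assert!(!vars.contains("a b")); // space: rejected
    assert_eq!(vars.len(), 2);      // empty {}: rejected too
    println!("ok");
}
```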

It scans for {...} pairs, keeps only names made of alphanumeric characters and underscores (so {a b} and an empty {} are skipped; note that a purely numeric name like {1} does pass the check, since digits count as alphanumeric), and deduplicates via HashSet. Then build() checks that every discovered variable has a value:

pub fn build(self) -> Result<String, PromptError> {
    let vars = self.template.variables();
    let missing: Vec<&str> = vars.iter()
        .filter(|v| !self.values.contains_key(*v))
        .copied().collect();

    if !missing.is_empty() {
        return Err(PromptError::MissingVariables(missing.join(", ")));
    }

    let mut result = self.template.template.clone();
    for (key, val) in &self.values {
        result = result.replace(&format!("{{{key}}}"), val);
    }
    Ok(result)
}
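One detail worth calling out: the format!("{{{key}}}") in the replace loop. Doubled braces are Rust's escape for a literal brace, so the three-brace cluster expands to a literal { plus the interpolated key plus a literal }. A quick standalone check (needle_for is my name for the expression):

```rust
// "{{" escapes to "{", "{key}" interpolates the variable, "}}" escapes to "}".
fn needle_for(key: &str) -> String {
    format!("{{{key}}}")
}

fn main() {
    assert_eq!(needle_for("tone"), "{tone}");
    // The needle is then a plain substring match for str::replace.
    assert_eq!(
        "a {tone} summary".replace(&needle_for("tone"), "friendly"),
        "a friendly summary"
    );
    println!("ok");
}
```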

The whole library is ~150 lines. You can read all of it in 5 minutes.


Why not a proc macro?

I considered making prompt! a compile-time macro so missing variables would be a compiler error rather than a runtime Result. That would be even better.

But it adds significant complexity (proc macros require a separate crate, the syntax gets weird), and most prompt templates are built from runtime data — you can't know at compile time what document will contain. So Result<String, PromptError> is the right boundary: compile time can't check it, but at least runtime checks it explicitly before the API call.


Part of a series

This is the third in a series of minimal Rust AI tools:

  1. nano-rag — RAG in 300 lines, no LangChain
  2. llm-bench — benchmark OpenAI vs Claude vs Groq
  3. prompt-rs — this one

All three follow the same philosophy: the smallest complete thing that does the real thing, with no magic hidden inside.

The code is at github.com/LakshmiSravyaVedantham/prompt-rs.
