Ali Malik
VS Code extension that compiles structured prompts — no AI calls, no API keys

What I Built

Pup is a VS Code extension that treats your prompt like source code — you fill a form, it compiles to whatever format your model prefers.

Pup builder view

The insight behind it: LLMs aren't all trained the same way, and the format of your prompt actually matters.

| Model | Preferred Format |
| --- | --- |
| Claude (Anthropic) | XML tags |
| GPT-4 / GPT-5 (OpenAI) | Markdown or JSON |
| Gemini (Google) | Markdown or structured JSON |
| Cursor / Copilot Chat | Markdown |

Pup compiles the same prompt into all four. One toggle. No rewriting.
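To make that concrete, here is a minimal sketch of the idea in TypeScript. The names and shapes are illustrative only, not Pup's actual API: the same sections compile to Markdown headings or XML tags depending on the target.

```typescript
// Illustrative sketch — not Pup's internals. One structured prompt,
// two of the four output formats.
type PromptSections = Record<string, string>;

function compileMarkdown(sections: PromptSections): string {
  // Each section becomes a Markdown heading followed by its body.
  return Object.entries(sections)
    .map(([name, body]) => `## ${name}\n\n${body}`)
    .join("\n\n");
}

function compileXml(sections: PromptSections): string {
  // Each section becomes a snake_case XML tag wrapping its body.
  return Object.entries(sections)
    .map(([name, body]) => {
      const tag = name.toLowerCase().replace(/\s+/g, "_");
      return `<${tag}>\n${body}\n</${tag}>`;
    })
    .join("\n");
}

const prompt: PromptSections = {
  Role: "You are a senior TypeScript reviewer.",
  Task: "Review the attached diff for correctness.",
};

console.log(compileMarkdown(prompt)); // "## Role" style output
console.log(compileXml(prompt));      // "<role>...</role>" style output
```

Same input, two renderings; that is the whole "one toggle, no rewriting" trick.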


How It Works

  1. Open the builder: Cmd/Ctrl+Shift+P, then run Pup: Open Builder
  2. Pick a preset — Bug Fix, New Feature, Refactor, Code Review, Research, Agentic Task, or Custom
  3. Fill the sections — Role, Context, Task, Constraints, Output Format
  4. Type @ inside any field to fuzzy-search and reference workspace files
  5. Copy or save — export as Markdown, XML, JSON, or plain text
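The @-mention lookup in step 4 is essentially a fuzzy filter over workspace paths. A classic subsequence matcher (my sketch, not Pup's actual implementation) captures the behavior:

```typescript
// Returns true when every character of `query` appears in `path`
// in order — the classic case-insensitive subsequence fuzzy match.
function fuzzyMatch(query: string, path: string): boolean {
  const q = query.toLowerCase();
  const p = path.toLowerCase();
  let i = 0;
  for (const ch of p) {
    if (ch === q[i]) i++;
    if (i === q.length) return true;
  }
  return q.length === 0;
}

// Hypothetical workspace file list for illustration.
const files = ["src/builder/view.ts", "src/tokens/estimator.ts", "README.md"];
console.log(files.filter((f) => fuzzyMatch("tokest", f)));
// → ["src/tokens/estimator.ts"]
```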

Saved prompts live in .prompts/ as .md files with embedded frontmatter. Right-click → Open Selected .md as Prompt to pick up exactly where you left off.
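A saved file looks roughly like this. The frontmatter key names here are guesses for illustration — check an actual file in `.prompts/` for the real keys:

```markdown
---
preset: bug-fix      # key names hypothetical, shown for illustration
format: xml
model: claude
---

## Role
You are a senior TypeScript reviewer.

## Task
Fix the null dereference in the parser module.
```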


A Few Details Worth Knowing

Live token counter. A char/word-blended estimator (±10–15%) shows context window usage in real time as you type — no per-model tokenizer required. Switch your target model and the usage bar updates.
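A blended estimator of this kind typically averages two rules of thumb: ~4 characters per token and ~0.75 words per token for English text. The coefficients below are those common heuristics, not Pup's actual internals:

```typescript
// Rough token estimate: average of a chars-based and a words-based
// heuristic. Coefficients are common rules of thumb, not Pup's own.
function estimateTokens(text: string): number {
  const chars = text.length;
  const words = text.trim() === "" ? 0 : text.trim().split(/\s+/).length;
  const byChars = chars / 4;    // ~4 chars per token
  const byWords = words / 0.75; // ~0.75 words per token
  return Math.round((byChars + byWords) / 2);
}

// Context-window usage as a percentage, capped at 100.
const usagePct = (text: string, contextWindow: number): number =>
  Math.min(100, (estimateTokens(text) / contextWindow) * 100);

console.log(estimateTokens("Fix the null-deref in src/parser.ts"));
```

Blending the two heuristics is what keeps the estimate inside that ±10–15% band without shipping a per-model tokenizer.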

Minimize hallucination toggle. Appends an <investigate_before_answering> directive that forces the model to read referenced files before responding. Surprisingly effective.
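The exact directive wording is Pup's own; something in this spirit gets appended to the compiled prompt (the tag name is real, the body text below is my paraphrase):

```xml
<investigate_before_answering>
  Before responding, open and read every file referenced in this prompt.
  Base your answer only on what those files actually contain.
</investigate_before_answering>
```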

Caveman mode. Appends a directive that strips filler from the model's reply — roughly 75% fewer output tokens. Great for high-volume iteration.

Zero outbound requests. No API calls inside the extension. No accounts. No telemetry. Your prompts never leave your machine.

Remote-ready. Works in Remote-SSH, Codespaces, dev containers, and virtual workspaces.


Install

```shell
# VS Code
code --install-extension alimalik.pup

# Cursor
cursor --install-extension alimalik.pup
```

Or search Pup in the Extensions panel (Cmd/Ctrl+Shift+X).

Marketplace listing


What I'm Looking For

This started as a personal workflow fix. I'm shipping it publicly because if I had this problem, others probably do too.

Feedback I'm specifically curious about:

  • Which output formats do you actually reach for?
  • What role presets are missing from your workflow?
  • What edge cases have you hit in the token estimator?

Source is on GitHub. PRs are welcome — new presets especially.


If it saves you time, a ⭐ on the repo helps others find it.

alimalikali / pup-prompt-engineering-toolkit

Building structured prompts (Markdown/JSON/XML/plaintext) for Cursor, Copilot, Claude, and ChatGPT.


Pup — A Prompt Engineering Toolkit

Build clean, structured prompts for Cursor, Copilot, Claude, ChatGPT and Gemini — without leaving your editor.



✨ Why Pup?

LLMs respond better to structured prompts — role, context, task, constraints, output format. Most developers freestyle into a chat window, get mediocre output, then iterate forever.

Pup is a form-based prompt compiler. Fill out sections → Pup compiles them into Markdown / JSON / XML / plain text. Save once, reuse forever. No accounts. No telemetry. No AI calls inside the extension.

Important

Zero outbound requests. Zero API keys. Zero telemetry. Your prompts never leave your machine. Pup is a pure compiler — your prompt, your model, your editor.


🎯 Why Format Matters

Different model vendors are trained against different prompt structures. Matching the format their training data used measurably improves output quality…
