Ever stared at your code editor at midnight, wishing your autocomplete could just write that regex for you? Yeah—me too. The promise of a generative AI assistant in your editor is irresistible: explain a bug, refactor a chunk, or generate tests at the speed of thought. Building one sounded like an epic weekend project… until I actually tried. Turns out, creating a TypeScript-powered coding assistant is a rabbit hole full of subtle traps, unexpected type battles, and plenty of “wait, why doesn’t that work?” moments.
The Dream vs. Reality
I went in thinking, "TypeScript + OpenAI API + VSCode extension = done." But the gulf between a quick prototype and something genuinely useful (and robust) was much wider than I expected. TypeScript gave me safety and tooling, but it also exposed every rough edge in the process—especially around code parsing, context management, and API wrangling.
Below, I’ll walk you through the gnarliest challenges, a few solutions, and some hard-earned wisdom—so you can skip the headaches I ran into.
1. Parsing User Code Isn’t Simple (ASTs and TypeScript’s Quirks)
To do anything intelligent, your assistant needs to understand the code you’re working on. That means parsing the current file into an Abstract Syntax Tree (AST), so you can extract functions, variables, context, and maybe surface errors to the LLM.
TypeScript’s compiler API gives you everything you need… and about 100x more. The docs are dense, and the API surface is huge. Here’s a barebones example that parses a file and grabs function names:
```typescript
import * as ts from "typescript";

// Assume 'code' is a string containing user source code
const code = `
function add(a: number, b: number) { return a + b; }
const subtract = (a: number, b: number) => a - b;
`;

// Create a SourceFile AST node
const sourceFile = ts.createSourceFile("example.ts", code, ts.ScriptTarget.Latest, true);

// Walk the tree and pull function names
function findFunctionNames(node: ts.Node, names: string[] = []): string[] {
  if (ts.isFunctionDeclaration(node) && node.name) {
    names.push(node.name.getText());
  }
  if (ts.isVariableStatement(node)) {
    node.declarationList.declarations.forEach(decl => {
      if (
        ts.isIdentifier(decl.name) &&
        decl.initializer &&
        ts.isArrowFunction(decl.initializer)
      ) {
        names.push(decl.name.getText());
      }
    });
  }
  ts.forEachChild(node, child => findFunctionNames(child, names));
  return names;
}

const functionNames = findFunctionNames(sourceFile);
console.log(functionNames); // ['add', 'subtract']
```
Key points:
- `ts.createSourceFile` gives you the AST for any TypeScript code.
- Traversing the tree is verbose and low-level. You'll write a lot of type guards (like `ts.isFunctionDeclaration`).
- Arrow functions in variables are not function declarations. You have to check both!
I spent a weekend tripping over subtle differences like this. If your coding assistant is supposed to “find all functions,” you need to support every shape users might write.
2. Streaming LLM Responses Without UI Jank
When you call an LLM (OpenAI, Claude, etc.) from your extension or backend, you’ll usually get a streaming response. That’s great for UX—users see code generate in real time. But handling streaming in TypeScript, especially with something like VSCode’s API, can get awkward.
Here’s how I ended up wiring a readable stream from OpenAI’s API into a VSCode editor:
```typescript
// Sketch: assumes Node 18+, whose built-in fetch returns a web ReadableStream
// (node-fetch returns a Node.js stream with no getReader method)
import * as vscode from 'vscode';

async function streamCompletion(prompt: string) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer YOUR_OPENAI_KEY`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: "gpt-4", // or another model
      messages: [{ role: "user", content: prompt }],
      stream: true
    })
  });
  if (!response.ok || !response.body) throw new Error("No stream in response");

  // Create a writable edit in the editor
  const editor = vscode.window.activeTextEditor;
  if (!editor) return;

  let result = '';
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // Decode chunk and append ({ stream: true } handles multi-byte
    // characters split across chunks)
    result += decoder.decode(value, { stream: true });
    // Update editor text (simplified)
    await editor.edit(editBuilder => {
      editBuilder.replace(editor.selection, result);
    });
  }
}
```
What’s tricky here:
- TypeScript’s type system won’t save you from async stream bugs. Race conditions are easy if you try to update the UI too often or don’t debounce.
- OpenAI streams are not just plain text—they’re Server-Sent Events, so you need to parse them properly in production (I’m skipping that here for clarity).
- Updating the editor from a stream without jank is hard; you’ll want to buffer or throttle updates.
I burned a lot of time fighting with VSCode’s extension API—sometimes the editor would flicker, or the partial result would overwrite itself in weird ways. Test with large completions, not just toy examples.
3. Keeping Context Windows in Check
Tokens are expensive. If you naively send the whole file and all the dependencies to the LLM each time, you’ll hit token limits (or your API bill will make you cry). But if you send too little, the model writes code that doesn’t fit your project.
You need to be smart about extracting relevant context. Here’s a quick-and-dirty function to grab the function around the user’s cursor:
```typescript
import * as ts from "typescript";

// Note: ts.isFunctionLike narrows to ts.SignatureDeclaration, so that's the
// type to use here (not FunctionLikeDeclarationBase).
function findEnclosingFunction(sourceFile: ts.SourceFile, position: number): ts.SignatureDeclaration | null {
  let matched: ts.SignatureDeclaration | null = null;
  function visit(node: ts.Node) {
    if (
      ts.isFunctionLike(node) &&
      node.pos <= position &&
      node.end >= position
    ) {
      matched = node; // keep recursing so the innermost match wins
    }
    ts.forEachChild(node, visit);
  }
  visit(sourceFile);
  return matched;
}
```
```typescript
// Usage:
const code = `
function foo() {
  function bar() {
    // cursor is here
  }
}
`;
const cursorPosition = code.indexOf("// cursor is here");
const sf = ts.createSourceFile("file.ts", code, ts.ScriptTarget.Latest, true);
const enclosingFn = findEnclosingFunction(sf, cursorPosition);
if (enclosingFn) {
  console.log(enclosingFn.getText()); // prints the inner 'bar' function
}
```
Pro tips:
- Always check for both arrow functions and classic declarations.
- Don’t just grab the nearest function—sometimes users want “the class” or “the module,” so you might need to walk up the AST.
- TypeScript's `pos` and `end` are absolute offsets in the file, not line/column.
Our team found that sending just the current function plus a summary of nearby symbols to the LLM gave much better results than blasting the whole file. It’s a balancing act—you need enough context for the AI to be smart, but not so much that you hit the ceiling.
Common Mistakes
1. Trusting TypeScript Types Too Much
TypeScript catches a ton of bugs, but when you’re parsing code or handling LLM output, you’re dealing with untyped data. If you assume the LLM always outputs valid TypeScript, you’ll hit weird runtime errors. Always validate and sanitize code before trying to execute or insert it.
2. Ignoring Edge Cases in Code Parsing
I learned this the hard way: real-world code is messy. Users mix JS and TS, decorate classes, use nonstandard syntax (experimental features), or include syntax errors. Don’t assume the code you’re parsing is valid or well-formed. Your assistant should fail gracefully—or better yet, help fix broken code.
3. Overcomplicating Context Windows
It’s tempting to send as much context as possible, but that’s a trap. You’ll blow past token limits and slow everything down. Start small (current function, maybe imports), and only expand if users request more. Be transparent about what context you’re sending.
Key Takeaways
- Parsing real-world TypeScript code is much harder than textbook examples—test with messy, real code.
- Streaming LLM responses into your editor requires careful handling to avoid UI glitches and race conditions.
- Don’t trust LLM output blindly—always validate and handle errors, because garbage in means garbage out.
- Sending the right context is more art than science; start with less and tune based on user feedback.
- TypeScript gives you power, but also exposes all the rough edges. Embrace the type system, but be ready to work around it.
Building a generative AI coding assistant in TypeScript will stretch your skills and patience, but it’s worth it. The best tools are the ones that sweat the details—so if you’re heading down this road, learn from my stumbles, and make your assistant smarter (and saner) than mine was on day one.
If you found this helpful, check out more programming tutorials on our blog. We cover Python, JavaScript, Java, Data Science, and more.