
Solved: I thought LLMs were good with TypeScript but I have had zero luck with them

🚀 Executive Summary

TL;DR: LLMs often fail at generating complex TypeScript because they lack a true type-checker and specific project context, leading to subtle errors like hallucinated types. The solution involves explicitly providing the LLM with relevant type definitions and interfaces, transforming it from a guessing engine into a more effective coding assistant.

🎯 Key Takeaways

  • Always provide ‘Just-in-Time Context’ by including relevant TypeScript interfaces directly in your prompt for single-shot requests to narrow the LLM’s scope.
  • For extended coding sessions, use ‘The Context Header’ by pasting a temporary llm-context.ts file containing all relevant types and function signatures at the start of your chat.
  • Leverage ‘RAG & IDE Integration’ tools like Cursor or Sourcegraph Cody, which automatically find and inject relevant code snippets from your codebase into LLM prompts for deep, targeted context.

Large Language Models often fail at generating complex TypeScript because they lack a true ‘type-checker’ and your project’s context. This guide provides practical, real-world strategies to give your LLM the context it needs to stop hallucinating types and start writing useful code.

So You Thought LLMs Were TypeScript Wizards? Let’s Talk.

It was 2 AM. The prod-api-gateway-01 deployment was failing, but the pipeline logs were maddeningly clean. Everything looked fine. The code, a simple data transformation function suggested by our friendly AI assistant, had passed all the local unit tests. Yet, in production, it was silently returning undefined for a nested object property, causing a downstream service to fall over. After an hour of pulling my hair out, I found the culprit: the LLM had confidently generated a type that was almost right, but it missed that one of the properties in a deeply nested generic was optional. It hallucinated a perfect world that didn’t exist in our actual types.ts file. That’s when I realized the problem isn’t the LLM; it’s how we’re using it.
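To make that failure mode concrete, here's a minimal reconstruction of the mismatch. All the names below are hypothetical stand-ins, not our actual types.ts:

```typescript
// What the LLM hallucinated: every nested property required.
type GuessedUser = { profile: { address: { city: string } } };

// What types.ts actually declared: the nested property is optional.
type RealUser = { profile: { address: { city?: string } } };

// The transform was written against the hallucinated shape, so it
// never guards the optional field:
function getCity(user: GuessedUser): string {
  return user.profile.address.city;
}

// Unit tests used fully-populated fixtures, so everything passed locally.
// In production, a payload crossed a service boundary as `any`, the
// compiler never saw the mismatch, and getCity quietly returned undefined.
const fromApi = JSON.parse('{"profile":{"address":{}}}'); // typed as any
console.log(getCity(fromApi)); // undefined; no compile error, no throw
```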

Why Your LLM is a Terrible TypeScript Intern

Let’s get one thing straight: Large Language Models are not compilers. They are not type-checkers. They are incredibly sophisticated token-prediction engines. When you ask an LLM to write TypeScript, it isn’t “thinking” about your type system. It’s making a statistical guess about what sequence of characters is most likely to follow your prompt, based on the mountain of public GitHub code it was trained on.

The problem is, it has zero context about your codebase. It doesn’t know about your custom IUserSession interface or the TLogEntry generic you’ve defined in utils/logging.ts. It’s just guessing. This lack of context is why it generates code that looks plausible but is subtly, catastrophically wrong.

Your job as an engineer is to stop treating it like a magic oracle and start treating it like a junior dev on their first day: you have to give it the documentation it needs to succeed. Here are the three ways we do it at TechResolve.

Fix #1: The Quick Fix – “Just-in-Time Context”

This is your bread and butter for quick, one-off questions. Never, ever ask the LLM to write a function without first giving it the shape of the data it will be working with. Don’t just describe it; show it the code.

Bad Prompt (What most people do):

"Write me a TypeScript function called 'getUserAuthLevel' that takes a user object and returns 'admin' if their role is 1, 'editor' if their role is 2, and 'viewer' otherwise."
Enter fullscreen mode Exit fullscreen mode

Good Prompt (What you should do):

"Given the following TypeScript interfaces:

interface UserProfile {
  firstName: string;
  lastName: string;
  avatarUrl?: string;
}

interface User {
  id: string;
  email: string;
  // Role is a numeric enum: 1=Admin, 2=Editor, 3=Viewer
  roleId: 1 | 2 | 3;
  profile: UserProfile;
}

Write a TypeScript function called 'getUserAuthLevel' that takes a 'User' object and returns a string: 'admin' if their roleId is 1, 'editor' if their roleId is 2, and 'viewer' otherwise."
Enter fullscreen mode Exit fullscreen mode

By providing the interfaces directly in the prompt, you’ve narrowed the LLM’s world from “all possible user objects on the internet” to just the one that matters. Its chances of giving you correct, type-safe code just went up by an order of magnitude.
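For reference, the kind of answer you can expect back looks something like this (one plausible output, not the only valid one; note how the narrowed context even lets the model return a literal union instead of a bare string):

```typescript
interface UserProfile {
  firstName: string;
  lastName: string;
  avatarUrl?: string;
}

interface User {
  id: string;
  email: string;
  // Role is a numeric enum: 1=Admin, 2=Editor, 3=Viewer
  roleId: 1 | 2 | 3;
  profile: UserProfile;
}

// Because roleId is the literal union 1 | 2 | 3, the compiler can check
// that every case is covered; no unexpected role values can sneak through.
function getUserAuthLevel(user: User): 'admin' | 'editor' | 'viewer' {
  switch (user.roleId) {
    case 1:
      return 'admin';
    case 2:
      return 'editor';
    default:
      return 'viewer';
  }
}
```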

Fix #2: The Scalable Fix – “The Context Header”

Pasting types for every single prompt gets old fast. When I’m starting a new feature, I create a temporary llm-context.ts file. I’ll copy all the relevant interfaces, type aliases, and function signatures for the task at hand into this single file.

Then, I start my chat session with the LLM by saying, “Use the following TypeScript context for all subsequent requests:” and paste the entire contents of that file. Now I can have a more natural conversation without re-pasting the context every single time.

Pro Tip: Don’t paste your entire codebase. Keep your context header lean and focused on the immediate task. LLMs have token limits, and feeding them irrelevant information just adds noise and costs more money. Focus on the data structures and function signatures that form the “contract” for your new code.
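For illustration, a lean context header might look something like this. The interface names reuse the examples from earlier in this post; treat them as hypothetical stand-ins for your own project's contract:

```typescript
// llm-context.ts: temporary scratch file, pasted once at the start of a session.
// Contains only the contract for the current task: data shapes in, signatures out.

export interface IUserSession {
  userId: string;
  token: string;
  expiresAt: number; // Unix epoch milliseconds
}

export type TLogEntry<T> = {
  level: 'info' | 'warn' | 'error';
  message: string;
  payload: T;
};

// Existing functions the new code must call; bodies deliberately omitted.
export declare function refreshSession(session: IUserSession): Promise<IUserSession>;
export declare function writeLog<T>(entry: TLogEntry<T>): void;
```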

Fix #3: The ‘Power User’ Option – RAG & IDE Integration

This is where things get serious. Retrieval-Augmented Generation (RAG) is the fancy name for a simple idea: tooling that automatically finds relevant context in your codebase and injects it into the prompt for you. Instead of you manually copy-pasting, these tools build a searchable index (a vector database) of your code.

When you ask a question, they perform a similarity search to find the most relevant code snippets and automatically add them to a “super-prompt” behind the scenes. This is the most effective method, as it gives the LLM deep, targeted context without manual effort.
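Conceptually, the retrieval step is simple. Here's a minimal sketch of the idea; embedText is a hypothetical stand-in for whatever embedding API a given tool wraps, and none of this is any specific product's actual internals:

```typescript
// Minimal RAG retrieval sketch: embed the question, rank indexed code
// chunks by cosine similarity, prepend the top hits to the prompt.

interface Chunk {
  file: string;
  text: string;
  vector: number[]; // precomputed embedding of `text`
}

// Hypothetical embedding call; in practice this is an API the tool wraps.
declare function embedText(text: string): Promise<number[]>;

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function buildSuperPrompt(question: string, index: Chunk[], topK = 3): Promise<string> {
  const queryVector = await embedText(question);
  const topChunks = index
    .map((chunk) => ({ chunk, score: cosineSimilarity(queryVector, chunk.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);

  const context = topChunks
    .map(({ chunk }) => `// From ${chunk.file}\n${chunk.text}`)
    .join('\n\n');

  return `Use the following TypeScript context:\n\n${context}\n\nQuestion: ${question}`;
}
```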

Tools like Cursor or Sourcegraph Cody are built on this principle. They aren’t just a chat window; they’re integrated directly into your development environment and are aware of your entire project.

Which Method Should You Use?

| Method | Best For | Effort Level |
| --- | --- | --- |
| 1. Just-in-Time Context | Quick, single-shot functions or bug fixes | Low |
| 2. The Context Header | Working on a single feature for an entire session | Medium |
| 3. RAG / IDE Tools | Daily, deep integration into your workflow | High (initial setup) |

It’s a Tool, Not a Replacement

At the end of the day, an LLM is a powerful pair programmer, but it’s one with perfect syntax knowledge and zero project memory. It’s on you, the senior engineer in the room, to provide the context and guidance. Stop asking it to guess, and start giving it the information it needs to do its job. Your production servers will thank you.

Stay sharp out there.

– Darian Vance



👉 Read the original article on TechResolve.blog


☕ Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance
