If you've tried to follow any AI coding discussion in the last six months, you've probably felt like everyone suddenly started speaking a dialect you never signed up to learn. "Vibe coding." "Agentic workflows." "Context windows." "Prompt engineering." The jargon is multiplying faster than JavaScript frameworks, and that's saying something.
Matt Pocock — who you might know from his TypeScript education work at Total TypeScript — apparently felt the same frustration. He's put together a dictionary-of-ai-coding repository on GitHub that attempts to explain AI coding jargon in plain English. It's been trending, and honestly, it's the kind of resource I wish had existed six months ago.
Why This Matters More Than You Think
Here's the thing: the AI coding space is moving so fast that terms get invented, redefined, and sometimes abandoned within weeks. I've been in meetings where three developers used the same term to mean three different things. That's not a terminology problem — that's a communication breakdown that leads to bad architecture decisions.
Consider how many developers are now interacting with AI tools daily. Whether you're using Cursor, GitHub Copilot, Claude Code, or any other AI-assisted coding tool, you're swimming in terminology that didn't exist two years ago. Having a shared vocabulary isn't just nice — it's necessary.
Some Terms Worth Actually Understanding
Let me walk through a few AI coding terms that I think every developer should internalize, not just recognize.
Context Window
This is the total amount of text (measured in tokens) that an AI model can "see" at once. Think of it like the model's working memory.
```python
# A simplified mental model of context windows (token counts are illustrative)
context_window = {
    "system_prompt": 500,            # instructions to the model
    "conversation_history": 3000,    # prior messages
    "current_code": 2000,            # the file you're working on
    "available_for_response": 2500,  # what's left for the AI to generate
}
# When you hit the limit, older context gets dropped.
# This is why AI "forgets" things in long conversations.
```
Why does this matter practically? Because when your AI coding assistant starts giving weird suggestions halfway through a session, it's probably not broken — it's lost context. Understanding this changes how you structure your interactions.
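To make the "working memory" idea concrete, here's a minimal sketch of how a client might drop the oldest conversation turns once a context budget is exceeded. The `estimate_tokens` heuristic and the message format are my own assumptions, not any particular tool's API:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Real tools use a proper tokenizer; this is just for intuition.
    return max(1, len(text) // 4)

def trim_to_budget(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the total fits the token budget."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > budget:
        kept.pop(0)  # oldest context goes first -- hence the "forgetting"
    return kept

history = [
    "system: you are a helpful coder",
    "user: " + "x" * 4000,  # one huge earlier message
    "user: fix the bug",
]
print(trim_to_budget(history, budget=100))  # → ['user: fix the bug']
```

Note that real assistants trim more carefully (system prompts are usually pinned), but the failure mode is the same: whatever gets dropped is simply gone from the model's view.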
Agentic Coding
This is where the AI doesn't just suggest code — it takes actions. It reads files, runs commands, creates branches, executes tests. The shift from "autocomplete on steroids" to "junior developer who never sleeps" is the agentic shift.
```javascript
// Non-agentic: AI suggests code inline
// You: "write a function to parse CSV"
// AI: here's a function (you copy-paste it)

// Agentic: AI takes autonomous actions
// You: "add CSV parsing to the data pipeline"
// AI:
//   1. reads your existing pipeline code
//   2. creates a new parser module
//   3. writes tests
//   4. runs the tests
//   5. fixes failures
//   6. commits the changes
```
I've been using agentic coding tools more heavily over the past few months, and the mental model shift is real. You stop thinking about writing code and start thinking about reviewing code. That's a fundamentally different skill.
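The core of every agentic tool is a loop: the model proposes an action, a harness executes it, and the result feeds back in as new context. Here's a toy version of that loop; `toy_model` and `execute` are stand-ins I invented for illustration, not any real tool's API:

```python
def toy_model(observation: str) -> str:
    # Stand-in for an LLM call: pick the next action based on the last result.
    return "done" if "tests passed" in observation else "run_tests"

def execute(action: str) -> str:
    # Stand-in for actually running a command in the repo.
    return "tests passed" if action == "run_tests" else ""

def agent_loop(max_steps: int = 5) -> list[str]:
    """Propose -> execute -> observe, until the model says it's done."""
    trace, observation = [], ""
    for _ in range(max_steps):
        action = toy_model(observation)
        trace.append(action)
        if action == "done":
            break
        observation = execute(action)
    return trace

print(agent_loop())  # → ['run_tests', 'done']
```

The `max_steps` cap matters: real agentic tools bound the loop (by steps, tokens, or cost) precisely because an autonomous loop with no exit condition can run away.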
Vibe Coding
Coined by Andrej Karpathy, this one describes the practice of building software by describing what you want in natural language and letting AI handle the implementation details. You're coding by vibes, not by syntax.
It sounds wild, but I've seen people build functional prototypes this way in hours. The catch? The code quality is often... questionable. Vibe coding is great for prototyping and terrible for production systems that need to be maintained.
Prompt Engineering vs. Prompt Design
I've noticed people using these interchangeably, but they're subtly different. Prompt engineering is the technical practice of crafting inputs to get specific outputs from a model. Prompt design is broader — it's about designing the entire interaction pattern, including system prompts, context management, and output formatting.
```yaml
# Prompt engineering (tactical)
prompt: |
  Convert this function to use async/await.
  Keep error handling. Return the same types.

# Prompt design (strategic)
system: |
  You are a code modernization assistant.
  Always preserve existing tests.
  Explain breaking changes before making them.
context:
  - existing_code: "./src/legacy/"
  - test_suite: "./tests/"
  - style_guide: "./.eslintrc"
output_format: "diff with inline comments"
```
The Meta-Problem: Jargon as Gatekeeping
Here's where I get a bit opinionated. The rapid proliferation of AI coding jargon has a real gatekeeping effect. When senior engineers casually throw around terms like "RAG pipeline," "few-shot prompting," and "temperature tuning" in standups, junior developers nod along while internally panicking.
That's why open, community-maintained resources like Matt Pocock's dictionary matter. They lower the barrier to entry. You don't need to take a course or read a paper — you just need a plain-English explanation you can reference in two minutes.
How to Actually Keep Up
A few practical strategies that have worked for me:
- Learn terms in context, not in isolation. Don't memorize definitions. Use an AI coding tool, hit a concept you don't understand, look it up, then keep going. The hands-on context makes it stick.
- Build a personal glossary. I keep a markdown file in my notes app. When I encounter a new term, I write down what I think it means, then verify. The act of writing it down is what cements it.
- Follow the tool changelogs. Cursor, Copilot, Claude Code — they all publish updates. Reading changelogs teaches you terminology naturally because the terms are attached to real features.
- Track your own tools. On a related note, privacy-focused analytics tools like Umami or Plausible can help you understand how developers interact with your projects and docs without invasive tracking — useful if you're building developer tools yourself.
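The personal-glossary habit above is easy to automate. Here's a minimal sketch (the file layout and `- **term**: definition` format are my own choices) that appends a term to a markdown glossary only if it isn't already defined:

```python
import tempfile
from pathlib import Path

def add_term(glossary: Path, term: str, definition: str) -> bool:
    """Append '- **term**: definition' unless the term is already defined."""
    existing = glossary.read_text() if glossary.exists() else ""
    if f"**{term}**" in existing:
        return False  # already there -- go verify the definition instead
    with glossary.open("a") as f:
        f.write(f"- **{term}**: {definition}\n")
    return True

glossary = Path(tempfile.mkdtemp()) / "glossary.md"
add_term(glossary, "context window", "total tokens a model can see at once")
add_term(glossary, "context window", "duplicate")  # skipped: already defined
print(glossary.read_text())
# → - **context window**: total tokens a model can see at once
```

The `False` return is the useful part: hitting an existing entry is your cue to re-read what you wrote and check whether your understanding has drifted.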
The Dictionary Approach Is Smart
What I appreciate about the dictionary-of-ai-coding repo is the format. It's not a tutorial. It's not a course. It's a reference. When you're in the middle of reading a blog post or sitting in a meeting and someone drops a term you don't know, you want a 30-second answer, not a 30-minute video.
The repo is open source, which means the community can contribute definitions and keep them updated as the terminology evolves. That's important because — and I cannot stress this enough — the definitions will change. "Agent" meant something different in AI circles twelve months ago than it does today.
My Advice: Don't Panic, But Don't Ignore It Either
If you're feeling overwhelmed by AI coding terminology, you're in good company. The field is genuinely moving fast, and nobody has it all figured out. But the key point is this: you don't need to know every term. You need to know the ones that affect your daily work.
Start with the basics: context windows, tokens, prompts, agents. Bookmark Matt's dictionary for when you hit something unfamiliar. And most importantly, don't let jargon stop you from actually using these tools.
The developers who'll thrive aren't the ones who can define every term perfectly. They're the ones who can ship code — with or without AI assistance — and communicate clearly about what they're doing. A shared vocabulary just makes that communication easier.