<devtips/>

Coding or Conversation? LLMs Are Our New Stack

Fast tools, flawed copilots, and prompts that lie. Let’s unpack this.

Software dev didn’t die; it just got weird.

“I don’t write code anymore; I negotiate with a machine that does.”

A few years ago, “software engineering” meant building systems line by line, debugging for hours, shipping features over sprints, and praying your CI didn’t break before the demo. Now? Half the job is prompting an LLM, validating what it says, and figuring out if that function it hallucinated will silently fail in prod.

The job hasn’t disappeared; it’s morphed. We’re still shipping software, but the tools, workflows, and even the definition of “developer” are in flux. The craft we knew is mutating into something more like architecture, curation, and AI babysitting: less “I build” and more “I collaborate with a smart, overconfident intern who never sleeps.”

This isn’t the end of engineering. But it is the end of pretending LLMs are just tools like any other. The way we think about design, debugging, team dynamics, and even our job titles is changing, and fast.

Let’s break down what’s really going on in the post-LLM dev world.

What I am covering:

  1. The broken pipeline. Code, test, deploy? Nah, just prompt, nudge, ship.
  2. Architects are the new full-stack devs. Why high-level thinking matters more than ever.
  3. LLMs: pair programmers with a god complex. Fast, confident, sometimes wrong.
  4. We’re debugging conversations now, not just code. Prompt failures are the new stack traces.
  5. Toolchains are mutating fast: blink and you’ll miss it. The rise of AI-native dev environments.
  6. The new role of engineers: filters, not fabricators. Your judgment is your job now.
  7. So what should we actually build now? Ideas worth building in the LLM era.
  8. Conclusion. Stop pretending LLMs are just autocomplete.
  9. Resources. Tools, links, and dev rabbit holes worth exploring.

1. The broken pipeline

Code, test, deploy? Nah, just prompt, nudge, ship.
Not long ago, software engineering followed a predictable, almost sacred sequence:
Plan → Design → Code → Test → Review → Deploy → Repeat.
It was clean. It was methodical. And it made sense, mostly.

Then LLMs kicked in the door.

Now? You throw a prompt at a model, get back code that probably works, try it, tweak the prompt, and repeat until it behaves. It’s like doing kung fu with autocomplete. The entire dev loop has been collapsed into a conversation.

LLMs don’t care about your clean architecture plans. They’ll write the whole stack in one go, skipping steps that took you days. Frontend? Backend? Database models? One well-aimed prompt and boom: you get an app scaffold that might’ve taken you a week. But also: it might explode on edge cases you didn’t even think of yet.

The SDLC (software development life cycle) has become more of a dance (there’s a code sketch of the loop after this list):

  • You prompt.
  • The model responds with something impressive but deeply suspicious.
  • You prod it with follow-ups.
  • You test it manually because unit tests still aren’t magically perfect.
  • Then you ship it and wait for it to break in staging.
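
In code, that dance is literally a loop. Here’s a minimal sketch, assuming the official OpenAI Python SDK; the model name and the run_tests helper are placeholders I made up, not a prescription:

```python
# Minimal prompt -> generate -> test -> nudge loop.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. run_tests is a hypothetical helper.
from openai import OpenAI

client = OpenAI()

def generate_until_green(task: str, run_tests, max_attempts: int = 3) -> str:
    """Ask the model for code, test it, feed failures back as follow-ups."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_attempts):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        code = reply.choices[0].message.content
        ok, failure = run_tests(code)  # hypothetical: returns (passed?, error text)
        if ok:
            return code
        # The "nudge": show the model its own output plus the failure, ask again.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": f"That failed with:\n{failure}\nFix it."})
    raise RuntimeError("No passing code after max_attempts -- time for a human.")
```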

You’re not just building code now. You’re managing an intelligent collaborator who:

  • Doesn’t sleep
  • Doesn’t ask for PTO
  • Occasionally lies with confidence
  • And somehow knows obscure Python quirks you forgot

We’re not following a pipeline anymore; we’re looping through conversations, pushing prototypes, and shipping fast, often before we fully understand what we’ve built.

The scary part? It’s working.


2. Architects are the new full-stack devs

Why high-level thinking matters more than ever
Remember when being a “full-stack dev” meant you could wrangle React, Node, Docker, and a little Postgres on a good day? That was already a lot. But in the LLM era, knowing the stack isn’t enough; you need to think above it.

Working with LLMs shifts your role from builder to architect. You’re not just writing components anymore. You’re designing workflows, system behaviors, and how the AI should think about solving the problem.

Here’s the twist: your prompt is the new interface.

  • The way you phrase a request determines structure, performance, and even edge-case safety.
  • LLMs don’t see your whole repo; they see what you feed them.
  • This makes context management and input design the secret weapons of modern devs.

If you prompt like a rookie, you’ll get spaghetti logic. Prompt like an architect, and the model starts to feel like a senior engineer with 1000 StackOverflow tabs open.

It’s like design patterns 2.0, except now you’re designing thought patterns.

This is where real engineering intuition shines:

  • Knowing what the model probably won’t understand
  • Structuring logic for AI interpretability
  • Designing clean, narrow tasks that reduce risk of hallucination
  • Asking: “What’s the smallest, dumbest job I can give the model so it doesn’t go rogue?”

This isn’t just fancy prompting. It’s system architecture, just with a less predictable compiler.
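
To make “context management” concrete: here’s a rough sketch of feeding the model a narrow, hand-picked slice of the repo instead of hoping it guesses. The character budget is a crude stand-in for real token counting:

```python
# Rough sketch: build a narrow prompt from hand-picked files instead of
# dumping the whole repo. Character budget is a stand-in for token counting.
from pathlib import Path

def build_context(task: str, relevant_files: list[str], budget_chars: int = 12_000) -> str:
    parts = [f"Task: {task}\n\nRelevant code:"]
    used = len(parts[0])
    for name in relevant_files:  # you, the architect, decide what the model sees
        snippet = f"\n--- {name} ---\n{Path(name).read_text()}"
        if used + len(snippet) > budget_chars:
            break  # stop before the context window overflows
        parts.append(snippet)
        used += len(snippet)
    return "".join(parts)

# Usage: prompt = build_context("Add role checks to the API", ["auth.py", "routes.py"])
```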

3. LLMs: Pair programmers with a god complex

Fast, confident, sometimes wrong
Imagine if your pair programming buddy could code in 40 languages, never needed coffee breaks, and responded instantly. Now imagine they also made things up sometimes, never apologized, and delivered bad answers with total confidence.

That’s what pairing with an LLM feels like.

They’re not your IDE; they’re a junior dev on performance enhancers.

LLMs can scaffold an app, generate boilerplate, optimize SQL, write API docs, even suggest architecture decisions. In minutes. You just feed it some context, maybe a few “you are a helpful X”-style prompts, and boom, it starts coding like it read every GitHub repo ever. (Because it kind of did.)

But here’s the catch:
They hallucinate. Hard.

An LLM might:

  • Invent non-existent Python functions
  • Use outdated libraries that haven’t been maintained since 2020
  • Combine patterns from five different StackOverflow answers into a franken-solution

And worst of all: it sounds right. You’ll only realize it lied after testing, or worse, when it’s in production and users are sending screenshots in panic.

This is why LLMs are not replacements; they’re unreliable overachievers. Like interns who try to rewrite your backend in Rust without telling anyone.

You still need to:

  • Know your language
  • Understand your system
  • Read the generated code like it’s a resume from a sociopath: impressive, but suspicious

Used right, they speed you up. But used lazily, they become productivity illusions.
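
Some of that suspicious-resume reading can even be automated. Here’s a small sketch using Python’s standard ast module to flag imports you never asked for, before you run anything; the allowlist is an illustration, not a security boundary:

```python
# Sketch: flag unexpected imports in generated code before running it.
# Catches syntax errors and hallucinated dependencies early.
import ast

ALLOWED = {"json", "typing", "dataclasses"}  # example allowlist for one task

def audit_generated(code: str) -> list[str]:
    try:
        tree = ast.parse(code)  # also catches plain syntax errors
    except SyntaxError as e:
        return [f"Syntax error: {e}"]
    warnings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        for name in names:
            if name and name not in ALLOWED:
                warnings.append(f"Unexpected import {name!r} -- verify it exists and is wanted")
    return warnings
```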


4. We’re debugging conversations now, not just code

Prompt failures are the new stack traces
You used to spend hours chasing null pointers, off-by-one errors, and sneaky race conditions. Now? You’re debugging why your prompt made the LLM write a while True loop that never ends.

This is the weird part of LLM-era engineering: you’re not just debugging programs anymore; you’re debugging language.

Welcome to prompt debugging.

You tweak a phrase, add a comment, remove a clarifying sentence… and suddenly the AI starts behaving. It’s like reverse-engineering the brain of an overconfident genie.

Bad output isn’t always the model’s fault. It’s often:

  • Vague instructions
  • Incomplete context
  • Ambiguous intent
  • Too much context, or not enough

And unlike traditional bugs, you don’t get a stack trace. You get… vibes.
You feel something went wrong. And you go spelunking through the prompt chain like a dungeon crawler looking for the cursed token.

Prompt engineering is real engineering.

It’s just built on inference, tone, precision, and knowing how the LLM “thinks.”
If that sounds soft? It’s not. It’s the difference between this:

“Write a function that checks user access.”
(LLM returns a function that only checks admin role.)

…and this:

“Write a Python function that accepts a user object and a list of roles, returning True if the user’s role is in the list, and False otherwise. Assume roles are strings.”
(LLM nails it.)
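
The precise prompt pins down a contract you can actually check. Something like this is what a correct generation should look like, plus the tiny test that would have caught the admin-only version (the FakeUser stub is mine, purely for illustration):

```python
# What a correct generation of the precise prompt should look like.
def has_access(user, roles: list[str]) -> bool:
    """Return True if the user's role is in the allowed list."""
    return user.role in roles

# The language-level test case: the admin-only version from the vague
# prompt fails the first assert.
class FakeUser:
    role = "editor"

assert has_access(FakeUser(), ["editor", "admin"]) is True
assert has_access(FakeUser(), ["admin"]) is False
```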

Precision in prompts = precision in output.
You’re now writing language-level test cases, not just code-level ones.

Real link:
OpenAI Cookbook (prompt engineering guides and examples): https://github.com/openai/openai-cookbook

5. Toolchains are mutating fast: blink and you’ll miss it

The rise of AI-native dev environments
You ever come back from a weekend and discover your editor has a new LLM plugin, your team adopted a new AI code review tool, and somehow everyone’s talking about “context windows” like it’s normal?

Yeah. That’s 2025.

The dev stack isn’t just evolving; it’s melting.

VSCode, once your comfy little text editor, is now a battleground of AI plugins:

  • GitHub Copilot is finishing your thoughts before you type.
  • Cursor is rewriting entire files on command.
  • Cody from Sourcegraph is explaining confusing legacy code like a calm TA.
  • Tabnine is still lurking, quietly autocomplete-ing everything.

And it’s not just IDEs. Your CI/CD pipeline? LLMs are suggesting fixes to failed jobs. PR bots review your commits and leave thoughtful comments (sometimes better than your teammates). Documentation gets autogenerated from code and vice versa.
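
Those PR bots are less magic than they sound, by the way. A bare-bones sketch, assuming the OpenAI Python SDK; a real bot chunks large diffs and posts inline comments through the Git host’s API:

```python
# Bare-bones "AI reviewer": pipe a git diff into a model, print its comments.
# Model name is a placeholder; real bots chunk diffs and post inline comments.
import subprocess
from openai import OpenAI

client = OpenAI()

def review_diff(base: str = "main") -> str:
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a strict code reviewer. Flag bugs, risky patterns, and missing tests."},
            {"role": "user", "content": diff[:30_000]},  # crude context cap
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(review_diff())
```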

The whole stack is becoming LLM-infused.

It’s no longer:

  • Code → Test → Deploy

It’s now:

  • Prompt → Generate → Validate → Comment → Refactor → Commit → Document → Ship → All done by five different AIs

The tooling ecosystem is adapting faster than we can build muscle memory. One month you’re writing YAML, the next month your AI agent is tweaking your Dockerfile because you forgot to set the right memory limits.

If you blink, your workflow is out of date.

What used to be dev tooling is now co-piloting.
The dev environment is no longer passive. It talks back.


6. The new role of engineers: filters, not fabricators

Your judgment is your job now
The most valuable thing you bring to the table in 2025 isn’t your ability to write code fast. It’s your ability to decide which code shouldn’t be written at all.

With LLMs generating full implementations in seconds, the job of the developer is shifting from creator to curator. You’re not the author of every line; you’re the filter, the gatekeeper, the “wait, this looks sketchy” person.

Think less like a coder, more like an editor.

The best engineers today:

  • Know which AI-generated code to keep, delete, or question
  • Break problems into smaller, AI-solvable chunks
  • Know when not to automate
  • Have the taste and intuition to know what “good code” still means

In the past, a dev’s value was often judged by how much they shipped. Now?
It’s how many bad generations they didn’t let reach main.

The AI will generate.

You decide whether it should ship.

This isn’t less engineering; it’s more responsibility, just at a higher level.
We’ve moved up the abstraction ladder. Instead of writing functions, you’re designing workflows. Instead of building features, you’re shaping behaviors.

And no, this doesn’t mean junior devs are obsolete. It means everyone needs to learn the difference between fast and correct, and between working and safe.

7. So what should we actually build now?

Ideas worth building in the LLM era
Now that the barrier to shipping code is almost gone, the real question isn’t “can we build it?” anymore; it’s “should we?”

LLMs let solo devs do what used to take teams. MVPs go from idea to demo in a weekend. Internal tools get whipped up over lunch. Hell, you can spin up a SaaS on a coffee break if you’ve got good prompts.

But that power comes with chaos. Because if everyone can build everything, a lot of people are going to build… garbage.

So what’s worth building?

1. AI-native apps
Tools that wouldn’t make sense without LLMs:

  • Context-aware coding agents
  • Smart doc generators
  • Natural-language dashboards
  • Chat-first CRMs
  • AI-assisted onboarding tools

2. Connectors and glue
LLMs are great at reasoning, but terrible at APIs. There’s gold in building bridges:

  • Zapier-for-AI workflows
  • Domain-specific wrappers for models
  • Reliable, secure input/output pipelines

3. UI for workflows nobody wants to type
Most LLMs live in chat interfaces, which sucks for actual work.
Building good UIs around model behavior is the next gold rush.

4. Guardrails and safety nets
The more code AI writes, the more we need smart systems to watch that code (see the toy gate after this list).

  • Linting for AI-generated code
  • Secure-by-default scaffolding
  • AI-driven test coverage analysis
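
As a taste of what that can look like, here’s the toy pre-merge gate mentioned above: it refuses generated Python containing a couple of obviously dangerous calls. A real gate would lean on proper tools like Bandit or Semgrep; treat this purely as a sketch:

```python
# Toy pre-merge gate for generated code: parse it, reject banned calls.
# Real guardrails use dedicated tools (Bandit, Semgrep, etc.).
import ast

BANNED_CALLS = {"eval", "exec"}  # extend to match your threat model

def gate(code: str) -> tuple[bool, list[str]]:
    try:
        tree = ast.parse(code)
    except SyntaxError as e:
        return False, [f"Does not parse: {e}"]
    problems = [
        f"Line {node.lineno}: call to {node.func.id}() is banned"
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in BANNED_CALLS
    ]
    return (not problems), problems

ok, problems = gate("eval(input())")
assert not ok and problems  # the gate catches it
```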

We’re not just building software anymore.
We’re building collaborative behaviors, decision pipelines, and systems that evolve as models do.

If your idea would’ve been killed by complexity two years ago?
Now’s a great time to revisit it.

8. Conclusion

Stop pretending LLMs are just autocomplete
If you’re still treating LLMs like smarter StackOverflow search boxes or autocomplete on steroids, you’ve already fallen behind.

These models aren’t just tools; they’re collaborators, interpreters, and sometimes chaotic creative partners. And yeah, they’re unreliable. But so are humans. The trick is understanding what they’re good at, when to trust them, and how to use your own judgment to make that output actually useful.

Software engineering isn’t dead; it just shapeshifted.

Your IDE talks back. Your job description is fuzzier. And your value comes from how well you design, filter, ask, and think, not how fast you can bang out code by hand.

So what now?

  • Start thinking in systems, not scripts
  • Treat prompt design like interface design
  • Learn to debug language, not just logic
  • Accept that your dev stack will keep mutating
  • And most importantly, stay curious; this is just the beginning

The future of engineering isn’t about writing perfect code.
It’s about being the human who knows what “perfect enough” actually means.

9. Resources

Want to go deeper into the LLM-dev rabbit hole? Start here.
Here’s a curated list of battle-tested resources, tools, and guides to help you level up as a dev in the LLM era:

OpenAI Cookbook
Prompt engineering tips, context tricks, embeddings, and more from the people who built the models
https://github.com/openai/openai-cookbook

LangChain Docs
If you’re building agents, workflows, or chaining tools together with LLMs, this is essential reading
https://docs.langchain.com

Simon Willison’s LLM blog
A fantastic real-world dev blog about building stuff with LLMs, complete with experiments and code
https://simonwillison.net/tags/llm

Papers with Code LLM Applications
Stay up to date with the latest academic and bleeding-edge projects that actually ship
https://paperswithcode.com/task/text-to-code

Cursor
VSCode-like editor designed around prompting and AI-native development
https://www.cursor.sh

Awesome ChatGPT Prompts (for devs)
An open-source collection of prompts curated for all kinds of dev workflows
https://github.com/f/awesome-chatgpt-prompts

You don’t need to master everything at once. Just start by asking better questions, debugging smarter prompts, and remembering: the most powerful tool in your stack is still your brain.
