There is a very boring announcement that engineering teams should take more seriously than most AI product launches.
AWS announced the end of support for Amazon Q Developer in the AWS Console mobile app. GitHub recently announced the upcoming deprecation of GPT-5.2 and GPT-5.2-Codex. Every week another AI product gets renamed, bundled, sunset, repriced, rate-limited, or quietly converted from “the future of software engineering” into “please migrate before June.”
This churn is an architecture problem.
The thing we pretend is stable — the AI tool — is often the most disposable part of the system.
The thing we treat as informal — the workflow around it — is usually the part that survives.
My take is simple: engineering teams should design AI-assisted workflows as if every specific assistant, model name, IDE integration, and hosted feature has a short half-life.
Because it does.
tools churn faster than habits
Developers love tools, but organizations run on habits.
A tool can be replaced in a procurement cycle. A habit gets embedded in onboarding docs, pull request norms, incident response, security review, and the weird tribal rules people only learn after breaking production once.
That is why AI tooling churn matters.
If a team uses an assistant to summarize logs, generate test cases, write migration plans, or review Terraform, the dangerous dependency is not only the vendor API. It is the assumption that the tool-shaped workflow will keep existing in the same form.
Today the workflow is:
- open IDE plugin
- select code
- ask model X
- paste result into PR
- hope the reviewer notices the spooky part
Tomorrow the IDE plugin is deprecated, model X is renamed, the context window changes, the pricing changes, the security team disables paste access, and the assistant now lives inside a chat tab with a different memory model.
The work did not disappear.
Only the interface did.
That is the annoying part. AI assistants are marketed like durable coworkers, but they currently behave more like SaaS features during a land grab. Some will become stable products. Many will not. They will keep changing faster than your engineering process should.
model names are not architecture
One smell I keep seeing is teams documenting workflows around specific model names.
“Use Claude X for refactors.”
“Use GPT Y for test generation.”
“Use Amazon Q for AWS questions.”
“Use Copilot for pull request summaries.”
That is fine as a preference. It is not fine as architecture.
A model name is a versioned implementation detail.
The architectural object should be the capability you need:
- propose a small refactor with constraints
- summarize an incident timeline from logs
- explain a cloud bill anomaly
- generate tests from observed behavior
- check a migration plan for rollback gaps
- classify dependency risk before merge
- produce a human-readable design review draft
Those capabilities can be routed through different assistants, models, prompts, policies, and environments over time. If the workflow is written around the capability, you can swap the tool. If it is written around the product surface, every vendor change becomes a tiny migration project.
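One way to make the replaceable part obvious is to let workflows depend on capability names and keep the vendor binding in a registry. A minimal sketch, with hypothetical names and a stubbed backend standing in for any real API:

```python
"""Capability-based routing: workflows call a named capability; a registry
maps each capability to whatever assistant currently backs it. All names
and the stub backend here are illustrative, not a real vendor API."""

from typing import Callable

# Workflows depend on capability names, never on branded product names.
CAPABILITIES: dict[str, Callable[[str], str]] = {}

def register(capability: str):
    """Register a backend for a capability. Swapping vendors means
    re-registering a backend, not rewriting every workflow."""
    def wrap(fn: Callable[[str], str]):
        CAPABILITIES[capability] = fn
        return fn
    return wrap

@register("summarize_incident_timeline")
def _current_backend(prompt: str) -> str:
    # Today: vendor A's API. Next quarter: vendor B, or an internal gateway.
    return f"[stubbed summary for: {prompt}]"

def run_capability(capability: str, prompt: str) -> str:
    if capability not in CAPABILITIES:
        raise KeyError(f"no backend registered for {capability!r}")
    return CAPABILITIES[capability](prompt)

print(run_capability("summarize_incident_timeline", "db failover at 02:14"))
```

The design choice is that deprecating a model touches one registration, and the onboarding doc can say "use the incident-summary capability" instead of naming this month's assistant.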
This is the same lesson we learned with cloud services, CI providers, observability tools, and message queues. The abstraction should not pretend the implementation does not matter, but it should make the replaceable part obvious.
With AI tooling, that replaceable part is very often the branded assistant.
the stable unit is the engineering contract
So what should be stable?
Not the model.
Not the chat UI.
Not the plugin.
Not the button named “auto mode.”
The stable unit should be the engineering contract around the workflow.
If your team uses AI to help with database migrations, the contract might be:
- the assistant can draft the migration plan
- the plan must include rollback steps
- it must identify locking risks
- it must include expected runtime and blast radius
- it must cite the schema diff it used
- a human owner must approve it before execution
- the final artifact lives in the repo, not inside chat history
That contract can survive a tool migration.
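A contract like that is easiest to keep honest when it is a check rather than a habit. A sketch of validating a migration-plan artifact against the contract, with illustrative field names (not a standard schema):

```python
"""Enforce the migration-plan contract as a schema check that runs the
same whether the draft came from assistant A, assistant B, or a human.
Field names are illustrative, not a standard."""

REQUIRED_FIELDS = {
    "rollback_steps",    # the plan must include rollback steps
    "locking_risks",     # it must identify locking risks
    "expected_runtime",  # expected runtime...
    "blast_radius",      # ...and blast radius
    "schema_diff_ref",   # it must cite the schema diff it used
    "approved_by",       # a human owner must approve it
}

def validate_migration_plan(plan: dict) -> list[str]:
    """Return a list of contract violations; empty means the plan passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - plan.keys())]
    if plan.get("approved_by") in (None, "", "assistant"):
        problems.append("plan needs a human approver")
    return problems

draft = {"rollback_steps": ["restore from snapshot"], "schema_diff_ref": "diff-123"}
print(validate_migration_plan(draft))
```

Because the check lives in the repo or CI, it survives any tool migration: the next assistant either produces an artifact that passes, or the gap is visible before execution.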
Maybe today it runs through one assistant. Next quarter it goes through another. Later it becomes a GitHub Action or an internal platform feature with model routing, policy checks, and audit logs.
Good. That is how useful automation grows up.
The mistake is letting the first convenient UI become the process.
approval gates are the real product
This is why I find the industry’s recent obsession with “autonomous coding” slightly funny.
The demo always wants to show the agent doing everything. The production system usually becomes interesting at the approval gates.
Who can let the agent modify infrastructure? Who can approve package upgrades? Which paths can it edit without review? When does it need a test run or security approval? What does it do when CI fails? Where is the audit trail?
That is the real product surface.
Not the cute animation where an agent opens twelve files and looks busy.
When a specific AI tool goes away, the team that built around approval gates, artifacts, and clear ownership can migrate. The team that built around vibes has to rediscover its process under pressure.
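Approval gates can themselves be data, which makes them auditable and portable across tools. A sketch under assumed action and role names:

```python
"""Approval gates as data: which agent actions require which human
sign-off before execution. Action and role names are made up."""

APPROVAL_POLICY = {
    # agent action            -> role that must approve it
    "modify_infrastructure":    "platform_owner",
    "upgrade_dependency":       "security_review",
    "edit_protected_path":      "code_owner",
}

def gate(action: str, approvals: dict[str, str]) -> bool:
    """Allow an agent action only if the required role has signed off.
    The approvals dict, stored with the change, doubles as the audit trail."""
    required = APPROVAL_POLICY.get(action)
    if required is None:
        return True  # unlisted actions fall through to normal review
    return required in approvals

assert gate("modify_infrastructure", {"platform_owner": "alice"})
assert not gate("upgrade_dependency", {})
```

When the agent behind the gate changes, the policy does not.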
This is also where senior engineers should focus their attention. Not on “which AI tool writes the best boilerplate this month?” That question decays quickly.
The better question is:
Which parts of our engineering process become safer, faster, or more observable if an AI can draft work but not silently change the contract?
keep the artifacts outside the assistant
If I had to give one practical rule, it would be this:
Never let the assistant be the only place where the work exists.
Prompts, generated plans, evaluations, test outputs, review notes, and operational decisions should end up somewhere durable when they matter: repository files, PR comments, design docs, tickets, incident timelines, runbooks, audit logs.
Chat history is not a system of record. It is a scratchpad with better autocomplete.
AI tools do not just disappear. They mutate. Memory formats change. Export behavior changes. Enterprise retention settings change. Context windows change. Integrations get rebuilt. A workflow that depends on “the assistant remembers” has amnesia scheduled for a future date.
For small personal tasks, who cares. Let the chat be messy.
For engineering work that affects production, compliance, money movement, customer data, or infrastructure, the artifact needs to survive the tool.
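Making the artifact durable can be as small as writing the generated output into the repo with its provenance. A sketch, with hypothetical paths and field names:

```python
"""Persist what the assistant drafted as a repo artifact with provenance,
so the chat session is disposable. Paths and fields are illustrative."""

import datetime
import json
import pathlib

def persist_artifact(kind: str, content: str, prompt: str, model: str) -> pathlib.Path:
    """Store a generated plan with enough metadata to audit it after the
    tool that produced it is gone."""
    record = {
        "kind": kind,
        "content": content,
        "prompt": prompt,  # the prompt is part of the record
        "model": model,    # logged as an implementation detail, not relied on
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    path = pathlib.Path("docs/ai-artifacts") / f"{kind}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record, indent=2))
    return path

p = persist_artifact("migration-plan", "1. add column ...", "draft a plan", "model-x")
print(p)
```

The file then goes through the same review, history, and retention as everything else in the repo.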
design for boring replacement
The healthiest AI-assisted engineering stacks will probably look less magical than the demos.
They will have boring properties:
- prompts versioned near the code they affect
- model and vendor configuration separated from workflow logic
- output schemas for important generated artifacts
- tests and policy checks around agent-written changes
- approval gates for destructive or expensive actions
- audit trails for who asked what and what changed
- fallback paths when a model or provider is unavailable
- enough documentation that a human can perform the workflow manually
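Two of those properties, configuration separated from workflow logic and fallback paths, can be sketched together. Provider, model, and capability names below are invented for illustration:

```python
"""Keep model and vendor configuration out of workflow logic: the
workflow reads a versioned config, so a deprecation is a config change,
not a code migration. The config layout here is hypothetical."""

import json

CONFIG = json.loads("""
{
  "capabilities": {
    "generate_tests":  {"provider": "vendor_a", "model": "model-x", "fallback": "model-y"},
    "draft_migration": {"provider": "vendor_b", "model": "model-z", "fallback": null}
  }
}
""")

def resolve(capability: str, available: set[str]) -> str:
    """Pick the configured model for a capability, falling back when the
    primary is deprecated or unavailable."""
    entry = CONFIG["capabilities"][capability]
    if entry["model"] in available:
        return entry["model"]
    if entry["fallback"] and entry["fallback"] in available:
        return entry["fallback"]
    raise RuntimeError(f"no available model for {capability}")

print(resolve("generate_tests", {"model-y"}))  # primary deprecated, fallback used
```

When a model is sunset, the change is one line in a reviewed config file, with the diff in version control.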
None of this is glamorous. That is why it is probably correct.
The teams that win with AI will not be the ones who bet the company on one assistant being magical forever. They will be the ones who turn useful AI behavior into replaceable workflow components.
That does not mean all AI tools are bad. Some are excellent. I use them constantly. The point is almost the opposite: because they are useful, we should stop treating them like toys.
Adult architecture assumes dependencies change.
the uncomfortable vendor lesson
Vendors are going to keep moving quickly because the market is still unstable. Model costs change, safety requirements change, partnerships change, enterprise controls change, and product teams are still figuring out what people actually use after the demo high wears off.
So yes, use the tools.
But do not confuse vendor velocity with platform stability.
If an assistant becomes central to your delivery process, ask the same boring questions you would ask about any other critical dependency: can we export the artifacts, switch providers, audit usage after an incident, and keep releasing when a model is deprecated or unavailable?
Boring questions are where production lives.
the punchline
The end of support for one Amazon Q surface is not the end of the world. A GitHub model deprecation is not a crisis. Most individual AI tooling changes are small.
But the pattern matters. AI developer tools are still in a fast-churn phase, and engineering teams should stop acting surprised when fast-churn things churn.
The durable investment is not memorizing this month’s assistant UI. It is building workflows where AI can help, humans can approve, artifacts can survive, and vendors can be swapped without turning delivery into archaeology.
The assistant is not the architecture.
The workflow is.
And if the workflow only works while one branded assistant exists in exactly its current shape, it is not a workflow yet.
It is a demo with a calendar invite for future pain.

