DEV Community

Markus

Posted on • Originally published at the-main-thread.com

AI-Assisted Development for Java Developers: The Specification Problem Is Back

AI-assisted development feels new.

The speed is new.
The fluency is new.
The confidence with which code appears in your editor is new.

The problem it exposes is not.

For more than sixty years, software engineering has struggled with one core challenge:

Translating human intent into machine execution.

AI did not solve this problem.
It reopened it — at scale.

If you are using GitHub Copilot, Claude, ChatGPT, or tools like IBM Bob in large Java systems, this matters more than you think.


The Real Issue: It’s Not About Code

In 1968, the NATO Software Engineering Conference described the “software crisis.” Projects were late, expensive, and unreliable.

The diagnosis was not “bad programmers.”

It was unclear intent.

Since then, we tried:

  • Waterfall
  • The V-Model
  • IEEE 830 specifications
  • UML
  • Agile
  • DevOps

Each wave tried to reduce ambiguity.

Each wave traded one failure mode for another.

Now AI enters the picture. And we are surprised that it also struggles with ambiguity?

AI generates code fast.
But it does not understand intent unless you make it explicit.

And unlike a junior developer, it does not hesitate when something is unclear. It guesses. Confidently.


1. Start With Constraints, Not Code

When you prompt an AI assistant, you are doing requirements engineering.

Loose prompts reproduce the oldest failure in our industry: underspecified intent.

Bad prompt:

“Refactor this service.”

Better prompt:

“Refactor this service to remove field injection, keep public APIs stable, and preserve current transaction behavior.”

Constraints are architecture.

Without them, correctness is luck.
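What the better prompt buys you is checkable. Here is a minimal sketch of the "after" state it describes, with hypothetical `OrderService` and `OrderRepository` names and plain Java standing in for the framework:

```java
public class ConstructorInjectionSketch {
    interface OrderRepository {
        boolean save(String orderId);
    }

    static class OrderService {
        // Constructor injection: the dependency is explicit and final,
        // unlike @Autowired field injection, which hides it from callers.
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        // Public API unchanged: same name, same signature, same behavior.
        public boolean place(String orderId) {
            return repository.save(orderId);
        }
    }

    public static void main(String[] args) {
        // A stub repository makes the service trivially testable.
        OrderService service = new OrderService(id -> true);
        System.out.println(service.place("A-42")); // prints: true
    }
}
```

Each constraint in the prompt maps to something reviewable here: the constructor makes the dependency explicit, and the unchanged `place` signature is what "keep public APIs stable" means in practice.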


2. Use AI for Mechanical Work, Not Architectural Judgment

History already taught us something important:

Design is contextual.

AI is excellent at:

  • Mechanical refactoring
  • API migrations
  • Pattern completion
  • Repetitive transformations

AI is bad at:

  • Long-term trade-offs
  • Domain boundaries
  • Organizational constraints
  • “Why” decisions

Senior developers use AI as a power tool.
Not as a decision engine.

Experience matters more now, not less.


3. Narrow the Scope or Invite Hallucination

Large, unconstrained requests fail for the same reason large requirement documents failed.

They mix:

  • Intent
  • Mechanism
  • Policy

Break work into small, reviewable units.

This already works for humans:

  • Small pull requests
  • Clear user stories
  • Focused tests

It also works for AI.

If you cannot describe the task in a few sentences without saying “and then…”, it is too big.

If you cannot explain how to verify the result, it is not ready for delegation.


4. Treat AI Output as a Draft

AI-generated code looks finished.

That is dangerous.

The V-Model taught us a painful distinction:

  • Verification: does it match the spec?
  • Validation: is the spec even correct?

AI can satisfy an implied spec perfectly — and still be wrong for your system.

Treat AI output like a pull request:

  • Is it understandable?
  • Is it testable?
  • Does it introduce hidden complexity?
  • Does it change behavior silently?

If it would not pass review from a human, it should not pass review from AI.
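One concrete way to run that review is a characterization test: pin today's behavior before accepting the draft, so a silent change fails loudly. The `slugify` helper below is a hypothetical stand-in for whatever legacy method you are refactoring:

```java
public class CharacterizationTest {
    // Hypothetical legacy helper an assistant has been asked to refactor.
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected)) {
            throw new AssertionError("got: " + actual + ", expected: " + expected);
        }
    }

    public static void main(String[] args) {
        // These cases encode current behavior, quirks included.
        check(slugify("Hello World"), "hello-world");
        // The trailing dash is a quirk — but it is today's contract.
        check(slugify("Hello, World!"), "hello-world-");
        System.out.println("behavior pinned");
    }
}
```

If the refactored version changes the quirky case, the test forces a conscious decision instead of a silent behavior change.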


5. Externalize Context or Lose It

Most teams already suffer from tribal knowledge.

AI makes this worse.

If context lives only in people’s heads, the assistant will guess.

Externalize:

  • Architecture Decision Records (ADRs)
  • Coding rules
  • Constraints
  • Sensitive areas
  • Things that must never change

This is not about writing more documentation.

It is about writing the right documentation.

Good context answers:

  • What must not change?
  • What should be preferred?
  • Where is this system fragile?

Noise is harmful.
Exhaustiveness is not clarity.
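What "the right documentation" can look like: a short ADR that answers those three questions directly. Everything below — the number, the decision, the fragile area — is an invented example:

```markdown
# ADR-014: Controllers never return JPA entities

Status: accepted

## What must not change
Entity classes stay out of controller signatures.

## What should be preferred
Map entities to dedicated response records in the service layer.

## Where this system is fragile
Several entities carry lazy relations; serializing them directly
has caused N+1 queries in production before.
```

A dozen records like this give an assistant more usable context than a hundred pages of exhaustive specification.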


6. Large Migrations Still Fail (Nothing Changed)

“AI couldn’t fix our migration.”

That sentence sounds modern. It is not.

Big-bang rewrites have failed for decades.

Why?

Because they delete undocumented knowledge:

  • Edge cases
  • Performance assumptions
  • Integration quirks
  • Historical trade-offs

AI does not fix that. It accelerates it.

Better approach:

  • Use AI for bounded, mechanical transformations.
  • Keep human judgment for structural redesign.
  • Blend modernization and selective rewriting.

Large systems evolve. They do not restart.


7. Agent Rules Are the New Specification

Traditional specs described what software should do.

Agent rules describe how work is allowed to happen.

Examples:

  • Do not change public APIs without approval.
  • Prefer existing domain types.
  • Avoid performance optimizations without benchmarks.
  • Do not introduce new frameworks.

These rules rarely appear in UML diagrams.
But they shape every real codebase.

Without explicit rules, AI will act inside invisible boundaries.

And invisible boundaries are always violated.
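In practice, such rules live in a plain file the assistant reads before acting. The file name and exact format vary by tool; this is an invented sketch of the rules listed above:

```markdown
# Agent rules for this repository

- Do not change public APIs without explicit approval in the task description.
- Prefer existing domain types over new DTOs or primitives.
- Do not add performance optimizations without a benchmark in the same change.
- Do not introduce new frameworks or build plugins.
- If a rule conflicts with the task, stop and ask instead of guessing.
```

Writing the rules down does not make them new. It makes boundaries that already existed visible — to the AI and to your team.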


8. Shift Effort From Writing Code to Verifying Intent

AI made code cheap.

Verification is now the bottleneck.

This changes the cost structure of software development.

In the past:
Writing code was slow.

Now:
Reviewing and validating code is slow.

If you cannot clearly explain how to verify the result, do not delegate it to AI.

Speed without verification is delayed failure.


9. Use AI to Explain Code You Already Trust

Here is a powerful pattern:

Ask AI to explain working, production code.

If the explanation is weak or unclear, the intent was never explicit.

That is not an AI problem.

That is a specification problem.

Used this way, AI becomes a diagnostic tool.
It exposes ambiguity before it causes damage.


10. Optimize for Cognitive Load, Not Velocity

AI increases output speed.

Human cognition does not scale the same way.

Fast generation + constant decision-making = mental overload.

Signs of trouble:

  • Accepting changes too quickly
  • Moving on before understanding
  • Growing gap between code and comprehension
  • Quiet architectural drift

Introduce pauses.

Separate generation from review.

Keep tasks small.

Protect attention.

Sustainable engineering beats high-speed chaos.


Where This Is Going: Intent-Based Programming

AI did not solve the hardest problem in software.

It solved syntax.

The deeper problem remains:

Expressing human intent precisely enough that machines can execute it safely.

We are moving toward a world where:

  • Intent is primary
  • Constraints are explicit
  • Code becomes implementation detail
  • Verification becomes central

Java developers are well positioned for this shift.

We have lived through:

  • Heavy specification eras
  • Agile corrections
  • Test-driven development
  • Contract-based systems

The discipline that kept large Java systems alive before AI is the same discipline that makes AI safe today.


The Long Version

This article is a condensed version of a deeper piece where I:

  • Connect AI development to the 1968 software crisis
  • Explore specification failures across decades
  • Break down agent rules in detail
  • Discuss cognitive load and burnout in depth
  • Analyze the shift from code-writing to intent-design

If you are working on large Java systems and want the full argument:

👉 Read the complete article on The Main Thread:
https://www.the-main-thread.com/p/ai-assisted-development-java-specification-intent


AI did not end the software crisis.

It made ambiguity faster.

And that means intent matters more than ever.
