Daniel Nwaneri

The Formula Was Exact. The Assumption Was Wrong. That's Not an AI Problem.

Your geology will always govern your geophysics.

My lecturer said it once. I wrote it down. I didn't fully understand it yet.

I do now.


What He Meant

We were studying Vertical Electrical Sounding at the Federal University of Technology Owerri. VES is how you read the earth without drilling it — you send current into the ground, measure how it returns, and infer what's down there from the resistivity curves. Clean method. Decades of field use. Textbook technique.

But the method assumes something. It assumes the layers beneath you are horizontal, homogeneous, well-behaved. The formula works perfectly under those conditions. Run your numbers, get your model, trust the output.
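To make that assumption concrete, here is a minimal sketch of the standard apparent-resistivity calculation for a Schlumberger electrode array, one common VES configuration. The function name and the example numbers are illustrative, not from the original field work; the formula itself is the textbook geometric-factor relation. The key point: the arithmetic is exact, but the result equals the true resistivity only when the subsurface really is horizontally layered and homogeneous.

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn_half, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array.

    ab_half: half-spacing of the current electrodes A and B (m)
    mn_half: half-spacing of the potential electrodes M and N (m)
    delta_v: measured potential difference (V)
    current: injected current (A)
    """
    # Geometric factor K for the Schlumberger configuration:
    # K = pi * ((AB/2)^2 - (MN/2)^2) / MN
    k = math.pi * (ab_half**2 - mn_half**2) / (2 * mn_half)
    # The formula is exact. Whether the number means anything depends on
    # the geology satisfying the horizontal-layer assumption.
    return k * delta_v / current

# Illustrative reading: AB/2 = 10 m, MN/2 = 1 m, 0.1 V at 1 A
rho_a = schlumberger_apparent_resistivity(10.0, 1.0, 0.1, 1.0)
```

Nothing in that computation warns you when the assumption breaks; the number comes back equally confident either way.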

Except Nigeria's basement complex isn't horizontal or homogeneous. It's fractured. Laterally variable. Full of structural surprises that don't announce themselves in your data. You can run perfect VES and still drill a dry borehole — not because the method failed, but because you trusted a reading without interrogating the ground it came from.

The geology is always prior. The geophysics only tells you what's there if you already understand the conditions under which it's operating. Skip that step and the output is confidently wrong.

I spent a semester learning this in a classroom. Then I spent six months at the Nigerian Geological Survey Agency watching it happen in the field — solid mineral surveys, real terrain, results that came back and either confirmed your model or told you your assumptions were off.

I didn't know I was learning how to work with AI.


The Same Failure, Different Surface

I've been building production systems in Port Harcourt for years now. Cloudflare Workers, RAG pipelines, MCP servers, edge infrastructure for users where bandwidth costs real money per request.

And I keep watching the same failure repeat, dressed in different code.

Developer gets AI-generated output. Output looks right. Tests pass. Ships to production. Fails — not catastrophically, but wrongly. Subtly. In ways that take days to trace.

The model wasn't broken. The formula was fine. The geology was different.

AI generates for the environment its training data came from — abundant compute, fast connections, forgiving infrastructure, users on fast networks in cities where the grid is reliable. That's the assumed geology. Most of the time, nobody states that assumption. Nobody interrogates it. The output arrives confident and gets treated as ground truth.

When I built a VPN service targeting Nigerian users, the AI-suggested architecture was technically correct and completely wrong. Correct for the assumed geology. Wrong for mine. The difference wasn't a bug you could find with a linter. It was a mismatch between what the model assumed about the world and what the world actually was.

That gap — between assumed geology and actual geology — is where production failures live.


The VES Lesson Nobody Teaches in CS

VES nearly died as a professional discipline in the 1980s and 1990s.

Not because the physics was wrong. The physics was exact. It nearly died because practitioners kept getting bad results — boreholes drilled on confident readings that came back dry. Clients stopped trusting the method. The reputation collapsed.

The post-mortem was brutal in its simplicity: the formula was exact. The assumption was wrong. Geophysicists had been applying a method that required horizontal homogeneity to terrain that wasn't horizontally homogeneous. The model was rigorous. The geology was inconvenient.

They fixed it — better field protocols, more explicit assumption-checking, ground-truthing before committing to a reading. The method recovered. But only after the field admitted that confident output isn't the same as correct output.

We are at that moment with AI.

The models are rigorous. The outputs are confident. And we are shipping to production without interrogating the geology — the actual environment, the actual users, the actual constraints — that the output will have to survive in.

Ben Santora has been stress-testing LLMs with logic puzzles designed to expose reasoning failures. His finding: most models are solvers, not judges. They produce an answer. They don't flag when the assumed conditions don't match the actual problem.

"Knowledge collapse happens when solver output is recycled without a strong, independent judging layer to validate it."

The judging layer is the geologist's job. It always was.


What Field Work Actually Trains

I did my industrial training at NGSA in 2015. Solid mineral surveys. Real field conditions. Mineralogy in the lab, VES in the terrain.

The thing fieldwork does that coursework doesn't is this: it makes the gap between model and ground visible in real time. You take your reading. You record your resistivity curves. You run your interpretation. Then you go back the next day and find out if the borehole hit water or came back dry.

That feedback loop — model, prediction, ground truth, reckoning — is what builds the instinct to hold your interpretation lightly. Not to distrust the method. To distrust the assumption.

When I ran my SEO audit agent against my own published content this month — seven URLs, seven FAILs — I wasn't surprised. I'd built the agent, I knew what it was checking, and I ran it on myself first because that's the only version of a demo I trust. The agent was right. Three freeCodeCamp tutorials had broken meta descriptions. Two DEV.to article titles were too long for Google to render cleanly.
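The kinds of checks involved can be sketched in a few lines. This is a hypothetical reconstruction, not the author's actual agent, and the thresholds are common SEO heuristics rather than documented Google limits:

```python
# Hypothetical sketch of an SEO audit check -- names and thresholds
# are illustrative heuristics, not the author's actual agent.

TITLE_MAX = 60          # titles longer than this often get truncated in results
DESC_RANGE = (50, 160)  # rough usable bounds for a meta description

def audit_page(title, meta_description):
    """Return a list of FAIL reasons for one page; an empty list means PASS."""
    failures = []
    if not meta_description or not meta_description.strip():
        failures.append("FAIL: missing or empty meta description")
    elif not (DESC_RANGE[0] <= len(meta_description) <= DESC_RANGE[1]):
        failures.append(f"FAIL: meta description length {len(meta_description)}")
    if len(title) > TITLE_MAX:
        failures.append(f"FAIL: title length {len(title)} > {TITLE_MAX}")
    return failures
```

The check is trivial; the habit of running it against your own pages before anyone else's is the part that transfers.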

That reflex — interrogate the output before you trust it — isn't something I learned from a JavaScript tutorial. It came from standing in terrain that didn't match the model and having to explain why.

The same thing happened when I shipped The Foundation's clipboard capture in February. The workflow looked right. I documented it, wrote the article, shipped to GitHub. It was broken — capturing only user messages, missing every AI response, missing everything above the visible viewport. I'd reviewed it at the same speed I built it. The geology was inconvenient. I didn't check.

Five days later I wrote publicly: "I launched The Foundation with big plans. But I underestimated the scope." Not in a GitHub issue. On DEV.to. The well came back dry. You say so.


The Reframe

The conversation about AI in software development keeps getting stuck on the wrong question.

Not: is the model capable?

The model is capable. That's not the problem.

The question is: does the model know your geology?

AI-generated code is optimised for an assumed environment. Plenty of RAM. Reliable connectivity. Users on fast networks. Infrastructure that forgives. Most of the time, nobody states this assumption — it's baked into the training data, invisible until the output meets terrain it wasn't built for.

The developers who catch this aren't necessarily the most experienced. They're the ones who learned — somewhere, from something — to name the geology before trusting the reading.

In the field: name the geology before you trust the reading.

In production: name the environment before you trust the output.

Same question. Different surface.


What Non-CS Backgrounds Actually Transfer

The argument I keep hearing: your background doesn't matter, code is code.

It's wrong. And it misses the point.

What transfers from geophysics isn't syntax knowledge. It's the prior question. The one you ask before you trust the output.

CS tracks teach you to evaluate whether the code is correct. They don't train the instinct to ask whether the assumed conditions match the actual ones. That instinct comes from fields where the gap between model and ground is visible, expensive, and immediately yours to own.

The guts come from somewhere. For some people it's painful production failures. For some it's a good mentor. For me it was a lecturer in Owerri who said one sentence I've never stopped thinking about.

Your geology will always govern your geophysics.

The model doesn't know your terrain. That's not a limitation to wait out. It's a gap you have to close yourself — every time, before you ship.
