Bao Ngo
AI gets you — it just can't remember your API field names

What can I do anyway...

AI is scary good, duh. Like, I describe a workflow in plain English — "check this spreadsheet, send a Slack if the number's off, log it to the database" — and it just... gets it? The intent, the order, the edge cases. That part still blows my mind every time. 🤯

But then. BUT THEN.

It writes channelId instead of channel_id. Or messages.write instead of chat:write. Or nests the JSON one level too deep. And your workflow 403s at 3am and you wake up to an angry Slack thread. Fun times. 😅

It's not like this happens everywhere. AI writes Python, SQL, whole apps — and it's terrifyingly accurate. But some things are just... arbitrary strings. An API field name isn't something you can reason about. It's channel_id because someone at Slack decided it's channel_id. No amount of intelligence helps you guess that.
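To make that concrete: Slack's real Web API method is `chat.postMessage`, the scope is `chat:write`, and the field is `channel`. Here's the kind of plausible-but-wrong guess I mean, next to what the API actually wants (the channel ID is made up):

```python
# Plausible-but-wrong: what a model often invents from first principles.
guessed = {
    "method": "messages.write",  # not a real Web API method
    "scope": "messages.write",   # the real scope is chat:write
    "payload": {"channelId": "C012AB3CD", "text": "number's off"},
}

# What Slack's chat.postMessage actually expects:
actual = {
    "method": "chat.postMessage",
    "scope": "chat:write",
    "payload": {"channel": "C012AB3CD", "text": "number's off"},
}

# Same intent, one arbitrary string apart — and a guaranteed error at runtime.
```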

When work depends on you, "close enough" isn't

And this is the key part — it's not about AI being bad. AI is incredible. But there's a difference between "AI helps me write code and I review it" and "AI generates a plan that runs autonomously while I sleep."

One of those can be 95% right and you're fine. The other one... you need the API contract to be exactly right or nothing works.

Will future models fix this? Honestly, probably yeah. Claude 4.7, GPT-5.5, whatever's next — I wouldn't be surprised if they nail exact field names way more consistently. Maybe this post ages like someone complaining about dial-up. 🙈

But I'm shipping today. "Wait for the next model" isn't really a strategy when you have users.

So here's what I did

In CFFBRW, when you write a workflow and the AI compiles it — we don't ask it to get every parameter name perfect. That's asking it to be a database. Instead, we let it describe what it wants to do. Natural language. Its comfort zone.

Then we have this thing called the Resolver that maps that to reality.

```
AI says:       "Send a message to #alerts in Slack"
Resolver maps:  slack.send_message → { channel: "#alerts" }
                (exact slug, exact param keys, from a real catalog)
```

The AI describes intent. The Resolver enforces the contract. It matches against a catalog of real actions and real field names — handles typos, handles naming drift, produces exact output. No AI involved in this step. Just code doing what code is good at.
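Here's a toy version of what a resolver like that can look like. The catalog entries and the matching threshold are made up, and the real CFFBRW Resolver is surely more involved — but the shape is the point: deterministic fuzzy matching against known-good names, using nothing fancier than the standard library.

```python
import difflib

# Hypothetical action catalog — illustrative entries, not CFFBRW's real one.
CATALOG = {
    "slack.send_message": {"channel", "text"},
    "sheets.read_range": {"spreadsheet_id", "range"},
    "db.insert_row": {"table", "values"},
}

def resolve(action: str, params: dict) -> tuple[str, dict]:
    """Map an AI-described action onto the exact catalog contract.

    Deterministic: fuzzy-match the action slug, then snap each param
    key to the catalog's exact field name. No AI involved in this step.
    """
    slugs = difflib.get_close_matches(action, list(CATALOG), n=1, cutoff=0.5)
    if not slugs:
        raise ValueError(f"unknown action: {action!r}")
    slug = slugs[0]
    fields = CATALOG[slug]
    fixed = {}
    for key, value in params.items():
        match = difflib.get_close_matches(key, list(fields), n=1, cutoff=0.5)
        if not match:
            raise ValueError(f"{slug}: no catalog field matches {key!r}")
        fixed[match[0]] = value
    return slug, fixed

# Naming drift snaps back to the contract:
resolve("slack.sendMessage", {"channelId": "#alerts", "text": "number's off"})
# → ("slack.send_message", {"channel": "#alerts", "text": "number's off"})
```

The output is exact because it can only ever be a string that exists in the catalog — the fuzzy part picks *which* real name, never invents one.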

The full pipeline (quick version)

This Resolver is stage 4 of 5. Every workflow compilation goes through:

  1. Enrich — load context and best practices into the AI prompt
  2. Screen — AI sanity-checks the markdown before compiling
  3. Compile — AI generates the execution plan (retries if validation catches stuff)
  4. Resolve — deterministic mapping to real API catalogs ← this one
  5. Validate — structural and semantic checks, contract enforcement, and retries, Instructor-style

AI handles 1-3. Stages 4-5 are pure logic. The plan that actually hits your APIs has been verified by code, not vibes.
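The "retries if validation catches stuff" part can be a plain loop. A hypothetical sketch (the function names are mine, not CFFBRW's): validation errors get fed back into the next model call until the plan passes or we give up.

```python
def compile_with_retries(call_model, validate, max_retries=3):
    """Ask the model for a plan; feed validation errors back until it passes.

    `call_model(feedback)` returns a candidate plan; `validate(plan)`
    returns a list of error strings (empty means valid). Hypothetical
    sketch of an Instructor-style retry loop — not CFFBRW's actual code.
    """
    feedback = None
    for _ in range(max_retries):
        plan = call_model(feedback)
        errors = validate(plan)
        if not errors:
            return plan
        feedback = "Fix these problems:\n" + "\n".join(errors)
    raise RuntimeError("plan failed validation after retries")

# Toy usage: the "model" gets it wrong once, then right.
attempts = iter([{"action": "messages.write"}, {"action": "chat.postMessage"}])
plan = compile_with_retries(
    lambda feedback: next(attempts),
    lambda p: [] if p["action"] == "chat.postMessage" else ["unknown action"],
)
# plan == {"action": "chat.postMessage"}
```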

And honestly — even when models get perfect at remembering field names, having a verification layer before you execute against real APIs with real credentials is just... good practice? Like, I don't think you ever want to skip that step. The Resolver isn't a hack for today's AI limitations. It's the design.

Anyway — if you want to see how this works end-to-end, the whole platform is exposed as an MCP server so any AI agent can compile and run workflows through this pipeline. The Resolver is invisible to the caller. It just works.


Building CFFBRW at cffbrw.com. Still learning as I go — would love feedback. ✌️
