TL;DR: AI coding assistants are amazing, until they confidently suggest APIs that don't exist. I wasted a week on Tauri 2 because of this. Here's what I learned.
My Tauri 2 Nightmare
Last month, I thought I'd be clever. Build a desktop app with Tauri 2, let Cursor + Sonnet 4.5 write most of the code. Ship fast. What could possibly go wrong?
Everything. Everything could go wrong.
Day 1: I asked Cursor to add file read permissions. It happily generated:
{ "tauri": { "allowlist": { "fs": { "all": true } } } }
Build failed. Huh?
Turns out Tauri 2 completely nuked the old allowlist system. They replaced it with something called "capabilities" that lives in a totally different directory. My AI had no clue.
Me: "No, this is v2. Please check the official docs."
Day 2: Cursor imported from @tauri-apps/api/fs. Module not found. Of course, v2 moved everything to @tauri-apps/plugin-fs.
Me: "No, this is v2. Please check the official API."
I swear I said that sentence fifty times.
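The actual fix, once I dug through the v2 docs myself, was one changed package name. A sketch, assuming @tauri-apps/plugin-fs is installed on the JS side and the matching tauri-plugin-fs is registered on the Rust side:

```typescript
// v1 (what the AI kept writing):
//   import { readTextFile } from "@tauri-apps/api/fs";
// v2: fs moved out of the core API into its own plugin package.
import { readTextFile, BaseDirectory } from "@tauri-apps/plugin-fs";

// Note: v1's `dir` option also became `baseDir` in v2.
const settings = await readTextFile("settings.json", {
  baseDir: BaseDirectory.AppConfig,
});
```

One import path. That's what a day of debugging bought me.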
Day 3: Maybe it's just Sonnet? I switched to Opus 4.5. Same hallucinations. Tried GPT 5.2 Codex, the supposedly "best coding model", and it still kept mixing v1 and v2 APIs. These are the top models. If they don't know it, nobody does.
Day 7: Still debugging. Config fields from v1 randomly mixed with v2. Deprecated APIs failing silently. Methods that straight-up didn't exist.
At some point I realized: I'm not coding with AI. I'm babysitting it.
Why Does This Keep Happening?
Look, I don't blame Cursor and the models. This is just how LLMs work, and it sucks sometimes.
The Cutoff Problem
Every LLM has a "knowledge cutoff": basically, the date its training data ends. Tauri 2 came out in October 2024. If your AI was trained before that? It literally doesn't know Tauri 2 exists.
But here's the annoying part: even if the cutoff is technically after the release, the training data is still mostly old stuff. The internet had years of Tauri 1 tutorials. Tauri 2 docs? Maybe a few months. Guess which one the AI learned better?
Everything Moves Too Fast
Frameworks these days ship breaking changes like it's going out of style:
- Tauri: v1 → v2, completely new permission system
- React: Class components → Hooks → Server Components
- Next.js: Pages Router → App Router
I saw a study that said LLMs suggest deprecated APIs somewhere between 25% and 38% of the time. That's... not great.
Bad Docs = Bad AI
AI learns from text. If a library has weak documentation, barely any Stack Overflow questions, and like three blog posts about it, the AI just makes stuff up. It sounds right. It looks right. It's completely wrong.
The Thing Nobody Talks About
Here's what I realized after that week from hell:
Your choice of libraries matters way more now than it used to.
Think about it. A library that's been around forever, with stable APIs and tons of community content? The AI knows it cold. But pick something new and shiny? You're on your own. The AI will try to help and just make things worse.
That's the tradeoff. You might pick a newer library for cool features, but you pay for it with constant corrections and debugging.
So I Built a Thing
After all that pain, I wanted a way to actually measure this stuff before I commit to a tech stack. So I built AI Era Stack.
It's free. You give it any GitHub project, and it scores it across 8 things that actually matter for AI coding:
- Coverage: is the latest version in the AI's training data, or is it too new?
- Language AI: some languages (TypeScript, Python) the AI knows cold. Others... not so much.
- AI Readiness: does it have types? An llms.txt file? Clear topics and a license?
- Documentation: good README? Docs folder? Examples? Changelog?
- Model Capability: how good is your AI at coding tasks in general?
- Adoption: stars, forks, downloads. Basically, is it popular enough that the AI learned it?
- Momentum: is it actively developed, or is it a ghost town?
- Maintenance: when you file an issue, does anyone respond?
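If you want to picture a result, here's a simplified sketch of the shape of one report. This is illustrative TypeScript, not the tool's literal output schema:

```typescript
// Illustrative only: a simplified shape for one report.
// Each dimension ends up as a normalized score.
interface StackReport {
  repo: string;            // e.g. "tauri-apps/tauri"
  coverage: number;        // is the latest release inside training data?
  languageAI: number;      // how well models know the language itself
  aiReadiness: number;     // types, llms.txt, clear topics and license
  documentation: number;   // README, docs folder, examples, changelog
  modelCapability: number; // how strong your model is at coding, period
  adoption: number;        // stars, forks, downloads
  momentum: number;        // is development active or a ghost town?
  maintenance: number;     // does anyone answer the issues?
}
```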
I Ran Tauri 2 Through It
After building this, I finally understood why my week sucked:
- Big version bump = the AI literally didn't know the new APIs
- Smaller community than Electron = way less training data
- Rust + Web hybrid = AI gets confused about the boundaries
Electron, which everyone clowns on for being "heavy", actually scored higher for AI friendliness. It's been stable for years. Tons of tutorials. The AI knows it inside out.
Ironic, right?
What I Do Now
Before picking any library these days, I check it on AI Era Stack. Quick sanity check. Takes 30 seconds.
Three things I look for:
- Coverage score: is the latest version in the training data?
- Compare with alternatives: maybe there's something similar the AI knows better.
- Know the tradeoff: sometimes "boring" saves you a week of debugging.
This isn't about avoiding new tech. I still use new stuff. I just go in with eyes open now.
Will This Get Better?
Probably? Eventually?
New stuff is coming that might help:
- llms.txt: a standard way for libraries to write docs AI can understand
- MCP servers: real-time context so the AI can look things up
- Skills: pre-built capabilities for AI agents
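Of the three, llms.txt is the easiest to picture. Per the proposal (llmstxt.org), it's just a markdown file at your site root: an H1 with the project name, a one-line blockquote summary, then sections of links to LLM-friendly docs. A made-up example (the library and URLs here are hypothetical):

```markdown
# MyLib

> MyLib does X. The current major version is 2; v1 APIs are deprecated.

## Docs

- [Quickstart](https://mylib.example.com/quickstart.md): install and first app
- [v2 migration guide](https://mylib.example.com/migration.md): what changed from v1
```

Imagine if every framework shipped one of these that said, loudly, "v1 is dead, here's v2." I'd have my week back.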
But honestly, we're not there yet. It's 2026 and I'm still typing "No, this is v2" more than I'd like.
Check It Out
aierastack.com: free, open source, takes 5 seconds.
Seriously, if you've ever had an AI confidently generate code for a library that doesn't work that way anymore, I feel you. Drop a comment, I'm curious what libraries burned you the worst.
Or hit me up on X: @AIEraStack
Happy coding. And good luck with your AI.