Mr. Lin Uncut
Three Weeks Of Failure Taught Me More Than Any AI Course


Morgan Stanley just put out a warning. "Most of the world isn't ready for what's coming in 2026."

They were talking about AI. The kind that doesn't wait for you to type a prompt. The kind that just runs.

I've been living inside that warning for the past two months.

The failure nobody talks about

Every morning, Jarvis, my AI agent, is supposed to pull my WHOOP data, analyze my sleep and recovery, and send me a coaching brief before I even look at my phone. Not generic health tips. Actual personalized coaching. If my sleep was bad, he tells me not to train hard. If I'm recovered, he pushes me. All of it delivered automatically.

Except for weeks, it kept breaking in ways I couldn't figure out at first.

The script was set to scan between 6 AM and 10 AM. If I woke up at noon, the whole thing missed the window. If I stayed up all night and went to sleep at 6 AM, it grabbed yesterday's data because technically I hadn't slept yet. If the WHOOP API token expired overnight, everything crashed with zero notification. Three completely different failure modes. None of them obvious until they hit me.

What I learned fixing those three problems was more valuable than the automation itself. I had to build real scenario handling. Not just "run the script at 6 AM" but: what if I woke up late? What if the token expired? What if the data hadn't refreshed yet? Layer by layer, I built a system that accounts for all of it.

Now it doesn't matter when I sleep, when I wake up, or what the API is doing. The system handles every scenario. Bulletproof.
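That "think in scenarios" shift can be sketched in a few lines of Python. This is a minimal illustration, not the actual pipeline: the record shape and the one-hour and twenty-hour thresholds are my assumptions.

```python
from datetime import datetime, timedelta, timezone

def pick_sleep_record(records, now):
    """Choose the most recent *finished* sleep, not just "today's" sleep.

    Covers the scenarios from the post:
    - all-nighter: the latest sleep hasn't ended yet, so skip it
    - data lag: WHOOP data often settles a while after waking, so
      treat a sleep that ended under an hour ago as not ready yet
    - stale data: a sleep that ended over 20 hours ago is yesterday's
    """
    finished = [r for r in records if r["end"] is not None]
    if not finished:
        return None  # still asleep, or no data at all
    latest = max(finished, key=lambda r: r["end"])
    if now - latest["end"] < timedelta(hours=1):
        return None  # too fresh: data probably hasn't refreshed
    if now - latest["end"] > timedelta(hours=20):
        return None  # too old: don't coach off yesterday's sleep
    return latest
```

The point is that the function keys off the data itself (when the sleep actually ended) rather than a clock-time window, so waking at noon or at 6 PM no longer breaks anything.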

Three weeks of failure taught me more than any course

Every failure forced me to understand the system deeper. The token expiry taught me to build auto healing. The timing issue taught me to think in scenarios, not schedules. The silent crashes taught me to build verification layers.

That's the pattern. You break it. You fix it. You understand it a level deeper than you did before.
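The auto-healing and verification ideas can be sketched together. This is a hypothetical wrapper, not the real pipeline: `step`, `refresh_token`, and `notify` are placeholder callables standing in for whatever the actual system uses.

```python
def run_with_healing(step, refresh_token, notify, retries=2):
    """Run one pipeline step with two of the lessons baked in:

    - auto-healing: on an auth failure, refresh credentials and retry
      instead of crashing
    - verification: an empty result is treated as a failure and reported,
      so nothing dies silently overnight
    """
    for attempt in range(retries + 1):
        try:
            result = step()
            if result is None:
                raise ValueError("step returned no data")
            return result
        except PermissionError:
            refresh_token()  # heal, then loop and retry the step
        except Exception as exc:
            notify(f"pipeline step failed: {exc}")  # never fail silently
            raise
    notify("auth retries exhausted")
    raise RuntimeError("could not recover from token expiry")
```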

Prompting is not what people think it is

An AI agent is a system where AI executes tasks autonomously, not just answers questions. Most people think building AI agents requires coding. I've built over 40 automated jobs, a self healing health pipeline, and a full agent stack with zero code written. All through natural language.

Here's why: prompting works at different levels. It's like learning a language. You can speak English at a basic level, but there is a massive gap between speaking and rapping, between speaking and writing poetry. When I started, I was basic. Now prompting is just how I think. I don't craft prompts. I communicate with a system.

And here's what people miss: through prompting, I also learned system design. I learned debugging. All without touching code. I just brute forced through failures until I understood what was actually happening underneath.

What I'd spend $500 on if I was starting from scratch

First: the best LLM. Right now, for me, that's Claude. People talk about hosting models locally for free, and yes, you can do that. But that's like riding a free bicycle when you could be driving a Ferrari. There is still a difference. When you're building agents that need to reason, execute, and recover from errors, the model quality gap is not subtle.

Second: a coding agent. Cursor, Claude Code, or Antigravity from Google. The brain needs hands. An LLM without an execution environment is just a very smart thing that cannot actually build anything. You need both. Without the brain and the hands, you are not building agents. You are chatting.

Everything else (API costs, hosting, tooling) comes after you have those two locked in.

The one domain I still own completely

Email, content, finance tracking, research. All automated. But high stakes financial decisions stay with me, 100%.

Not because AI isn't capable. It handles 80 to 90% of the analysis. But the final trigger, the actual decision to move money, stays human. Here's the reason: I use AI every single day, more than almost anyone, and I have watched it hallucinate with complete confidence. Stating things as fact that are just wrong. On a financial decision that actually matters, that error rate is not acceptable to me.

I've been genuinely angry at Jarvis many times. And it's funny now when I can see in the thinking output where it writes "Josh is very mad at me." But in the moment when it screwed up something important, it was not funny at all.

AI will close that gap. But I'm not pretending we're there yet.

Being ready is a decision, not a skill

I walked away from viral prank videos. Millions of views, brand deals, real income. And I went all in on AI. Not because it was safe. Because I could see what was coming and I wanted to be in the front row, not the last one standing.

Most people are not ready for what AI is already doing right now. Not tomorrow. Now. The jobs being replaced, the systems being automated, the roles being eliminated. It is already happening. It just hasn't been fully rolled out by the big companies yet.

The metaphor I keep coming back to is the frog in the pot. Cold water at first. Heat increases slowly. The frog doesn't notice until it's too late. That's where most people are. They read the headlines, they see the word AI, and they think it's still a future thing.

I'm the frog that keeps touching the water. I know exactly how hot it's getting.

It's the same as the early internet. In 2000, most people had no idea what they were looking at. The people who took it seriously and built real things on the front line created advantages that lasted decades. I think we are at that exact moment again, but at a scale most people cannot imagine yet.

Q&A

What does Jarvis actually do every day?

Morning health brief from WHOOP data, trending news, email drafting, finance alerts, content pipeline management, and automated builds. Most of it runs without me typing a single word.

Do you need to know how to code to build AI agents?

No. I have never written a line of code in my life. Everything I have built is through natural language prompting. The key is building systems that make the AI smarter over time, not just writing better prompts.

Why Claude specifically?

When you are building agents that need to reason, debug, and recover from errors, the gap between a good model and a great model is massive. Claude handles complex multistep tasks better than anything else I have used.

What is the biggest mistake people make when they start with AI?

They treat it like a search engine. You type a question, get an answer, close the tab. The real power comes when you give it memory, tools, and rules. When it becomes a system, not a chat session.
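One way to picture the difference between a chat session and a system is a toy agent loop. Everything here is illustrative: `ask_model` stands in for any LLM call, and the action format is an assumption.

```python
def agent_step(ask_model, memory, tools, rules):
    """One turn of a minimal agent loop: memory, tools, and rules.

    - rules: standing instructions sent with every request
    - memory: past results carried between turns (here, the last 10)
    - tools: functions the model can invoke instead of just answering
    """
    context = rules + memory[-10:]
    action = ask_model(context)  # e.g. {"tool": "echo", "args": {...}}
    if action.get("tool") in tools:
        result = tools[action["tool"]](**action.get("args", {}))
        memory.append({"tool": action["tool"], "result": result})
        return result
    memory.append({"answer": action.get("answer")})
    return action.get("answer")
```

A search-engine user throws the answer away; the loop above writes every result back into memory, which is what lets the next turn build on the last one.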

Is this actually replacing human roles in your business?

Yes. Tasks I used to pay people for are now automated. That is the honest answer.

The water is already hot

If this resonated, restack it. Someone in your network is still treating AI like Google. They need to read this.


Read the full story on Substack
