“Anyone can build a demo. Shipping AI to production is a completely different sport.”
This is a helpful breakdown, thanks! Some of this is just "working effectively with any remote API," but AI adds new failure patterns to the mix, along with costs that can spiral quickly and highly variable outputs.
I especially like how you highlight the AI-specific failure patterns and cost risks; it’s a thoughtful reminder that building with AI requires a different level of care and discipline.
This hits the real gap between demos and production. Treating AI as an unreliable subsystem (not a magic function) is such an important mindset shift.
The points on orchestration, cost guards, and observability feel especially hard-earned. Great read for anyone shipping real AI, not just experimenting.
Thank you, I really appreciate that perspective. I agree—seeing AI as an unreliable subsystem forces us to design better safeguards, and I think strong orchestration and observability will only become more critical as systems scale. I’m genuinely interested in how these ideas evolve in real production environments and would love to see where you take this next.
Hey Art! Great post and we were looking to do something very similar with Flywheel.
I learned this real fast when I kept getting non-200s back from Claude's API and found out they were in an outage, so now I have a check that reads their status page and caches the result. That way we can disable our AI functionality and tell our users why.
Curious what y'all are using to stream your messages? We're using SSE and it's been pretty great for us.
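For readers unfamiliar with the SSE approach mentioned here, a minimal, framework-agnostic sketch of the wire format: server-sent events are just `event:`/`data:` lines terminated by a blank line, so any WSGI/ASGI response can iterate over frames like these (the `token`/`[DONE]` naming is an assumption for illustration).

```python
def sse_frames(chunks, event="token"):
    """Yield SSE-formatted frames for each streamed text chunk."""
    for chunk in chunks:
        # Each frame: an "event:" line, a "data:" line, then a blank line.
        yield f"event: {event}\ndata: {chunk}\n\n"
    # A sentinel frame lets the client know the stream is complete.
    yield "event: done\ndata: [DONE]\n\n"
```

On the client side, the browser's built-in `EventSource` (or a fetch-based reader) consumes these frames as they arrive, which is what makes SSE a good fit for token-by-token model output.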
Another thing we haven't run into yet, but know we will, is token costs. Even testing is $$$.
Anyways, thanks for the post!
Thanks so much for the kind words — I really appreciate you sharing your experience. I completely agree that treating AI as a powerful but unreliable subsystem is the right mindset, and your approach to handling outages and communicating clearly with users is a solid, practical solution. The points you raised around streaming and token costs are especially interesting to me, and I’m excited to explore how those trade-offs will shape more resilient and cost-aware AI systems going forward.
Spot-on advice for anyone serious about shipping production AI web apps with LLMs—demos are easy, but this architecture nails the real challenges like orchestration, RAG pitfalls, and cost guards.
Great post, really informative 👍
Thanks.
Thanks for this. Will be embracing this for my design.
Glad you found it useful! 😊 Hope it inspires your design process and leads to something great—looking forward to seeing what you create.
Brilliant breakdown, Art. Your analogy between AI agents as microservices and agentic AI as a coordinated Kubernetes deployment is spot on.
I’m currently implementing these patterns in a project called REMI, but with a specific challenge: I’m operating on a hybrid infrastructure with physical partitions (sda5/sda7) for large-scale data recovery management. Moving from 'isolated agents' (standalone recovery scripts) to a truly 'Agentic Architecture' that manages persistent memory across hardware restoration processes is precisely where I am right now.
I found your take on 'Accumulated Damage' in senior engineering particularly relevant to the resilience needed when AI agents interact with bare metal and raw partitions.
I’ll reach out on Discord (lighthouse4661) to exchange some thoughts on how to scale these planners without losing system observability. Cheers!