PlantCareAI is a weather and context-aware AI plant advisor that gives personalized care guidance based on each plant's history and live local weather data. I built it solo after getting laid off in August 2025. Here's what I actually learned about building with AI when no one else is on the team.
What I built
Free users enter a plant name and their location and get AI care recommendations. Signed-in users get a personal plant dashboard where they can track their plants, set watering and care reminders, and add journal entries. The AI tailors its advice using each plant's context plus live weather data from OpenWeather.
The stack: Python/Flask, PostgreSQL (via Supabase), Tailwind CSS, plain JavaScript, Render for deployment, Cloudflare for DNS, and Anthropic's Claude Haiku as the primary LLM with Google Gemini as a fallback, orchestrated through LiteLLM. Authentication is OTP email-based. Payments are handled through Stripe. The whole thing is WCAG 2.1 AA accessible and PWA-friendly.
I own and operate all of it. But I want to be honest about what "built it solo" actually means when you work with Claude Code.
My honest role: product owner and technical director
My role on this project was as much product owner as programmer. Claude Code wrote the majority of the code. My job was to define what needed to be built, make every product and architecture decision, review what Claude Code produced, run the tests it generated, and decide what shipped.
That is not a lesser role. It is a different one, and I think it is worth being honest about what it looks like in practice, because I suspect a lot of people are doing something similar without quite knowing how to describe it.
The mental model that made the biggest difference: I treated Claude Code like a developer on my team. I was the product owner and technical lead. I owned the vision, the data model, the feature priorities, and the quality bar. Claude Code executed. I reviewed everything using Anthropic's code review tools before committing, and I pushed back when something did not meet the standard or did not match what I was trying to build.
Key principle
Directing Claude Code well requires knowing what you want clearly enough to describe it, understanding the output well enough to evaluate it, and having the judgment to know when to accept, revise, or reject what comes back. All three of those things draw on real experience.
The LLM fallback chain
One of the product decisions I am most proud of is the fallback chain for AI responses.
I specified a three-tier fallback: Claude Haiku first, then Google Gemini, then a set of hardcoded general plant care responses if both APIs fail. The requirement was that the app should never simply die because one provider has an outage. Claude Code implemented it using LiteLLM, and also wrote the pytest tests to verify the fallback behavior under simulated API failures. I ran the tests manually and reviewed each one before committing.
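For concreteness, here is a minimal sketch of what that chain can look like with LiteLLM. The model identifiers, timeout, and fallback string are illustrative, not PlantCareAI's actual values; the real implementation builds richer prompts from each plant's history.

```python
# Minimal sketch of a three-tier fallback chain, assuming LiteLLM and
# provider API keys in the environment. Models and advice text are illustrative.
import litellm

MODEL_CHAIN = [
    "anthropic/claude-3-haiku-20240307",  # tier 1: Claude Haiku
    "gemini/gemini-1.5-flash",            # tier 2: Google Gemini
]

FALLBACK_ADVICE = (
    "Check the top inch of soil before watering, and keep the plant out of "
    "direct midday sun until conditions stabilize."
)  # tier 3: hardcoded general guidance

def get_care_advice(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    for model in MODEL_CHAIN:
        try:
            response = litellm.completion(model=model, messages=messages, timeout=15)
            return response.choices[0].message.content
        except Exception:
            continue  # provider outage or error: fall through to the next tier
    return FALLBACK_ADVICE  # both APIs failed; degrade, never hard-error
```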
Reviewing tests I did not write taught me something useful: reading a test carefully enough to approve it requires understanding the behavior it is verifying. That is a different kind of comprehension than writing the test yourself, but it is not a shallow one.
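To show the kind of test I mean, here is a sketch building on the fallback code above. The module path `plantcare.ai` is hypothetical, and the real suite is more thorough; the shape of the test is the point.

```python
# Sketch of a fallback test: simulate both providers failing and assert the
# hardcoded tier-3 advice comes back instead of an exception.
import litellm

from plantcare.ai import FALLBACK_ADVICE, get_care_advice  # hypothetical module path

def test_falls_back_to_hardcoded_advice_when_both_providers_fail(monkeypatch):
    def simulated_outage(*args, **kwargs):
        raise RuntimeError("simulated provider outage")

    # Make every LLM call fail, regardless of provider.
    monkeypatch.setattr(litellm, "completion", simulated_outage)

    advice = get_care_advice("How often should I water a snake plant?")
    assert advice == FALLBACK_ADVICE  # the app degrades gracefully
```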
Shipping to production at scale
PlantCareAI launched in January 2026 as a free app. By April 2026 it had a live Premium tier with Stripe billing, webhook handling, and Stripe Tax configured for Texas sales tax compliance. Premium adds unlimited plant tracking, AI photo diagnosis, CSV and PDF export, custom themes, and Plant Stories, an AI-generated narrative of each plant's care history.
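For readers who have not wired Stripe webhooks before, here is a hedged sketch of the signature-verification step in Flask. The route, the event type handled, and the `grant_premium` helper are illustrative, not PlantCareAI's actual code.

```python
# Sketch of Stripe webhook handling in Flask: verify the signature, then act
# on the event. Endpoint path and helper names are hypothetical.
import os

import stripe
from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["STRIPE_WEBHOOK_SECRET"]

@app.post("/stripe/webhook")
def stripe_webhook():
    try:
        event = stripe.Webhook.construct_event(
            payload=request.data,
            sig_header=request.headers.get("Stripe-Signature", ""),
            secret=WEBHOOK_SECRET,
        )
    except (ValueError, stripe.error.SignatureVerificationError):
        abort(400)  # malformed payload or forged signature

    if event["type"] == "checkout.session.completed":
        grant_premium(event["data"]["object"]["customer"])  # hypothetical helper
    return "", 200
```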
The SEO surface is larger than that of most solo projects at this stage. The site has 48 indexed URLs across 23 landing pages, 3 hub pages, and 15 species guides, all with structured data markup for Article, HowTo, and FAQPage schema. Every page is available in English and Spanish, with hreflang implementation and SEO-translated slugs. An interactive watering decision tool lives at /tools/water-check and handles both houseplants and outdoor plants with separate decision logic for each.
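To make the bilingual setup concrete, here is a hedged sketch of generating hreflang alternates for a page with an SEO-translated slug. The slug mapping, URL scheme, and helper are hypothetical, not PlantCareAI's actual implementation.

```python
# Hypothetical sketch: emit hreflang alternates for a page whose Spanish URL
# uses a translated slug rather than a transliterated one.
SLUG_TRANSLATIONS = {
    "guides/snake-plant": "guias/lengua-de-suegra",  # illustrative mapping
}

def hreflang_links(base_url: str, en_path: str) -> str:
    es_path = SLUG_TRANSLATIONS.get(en_path, en_path)
    return "\n".join([
        f'<link rel="alternate" hreflang="en" href="{base_url}/{en_path}">',
        f'<link rel="alternate" hreflang="es" href="{base_url}/es/{es_path}">',
        f'<link rel="alternate" hreflang="x-default" href="{base_url}/{en_path}">',
    ])

print(hreflang_links("https://example.com", "guides/snake-plant"))
```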
The production infrastructure runs on Render with GitHub Actions CI/CD, Cloudflare for DNS and analytics, Supabase for the database, and Resend for transactional email. The test suite has over 500 pytest tests. None of that is unusual for a professional web application. What I found meaningful about it is that I designed all of it, made every infrastructure and product decision, and understood every moving part, which is harder to say about projects where you only own one layer.
The daily weather job
Every morning at 6 AM UTC, an APScheduler job runs inside the Flask app. It pulls all users who have active plant care reminders, gets their city, fetches the day's forecast from OpenWeather, and applies conditional logic to each reminder.
Is the plant outdoors? Is there precipitation expected? Is there a freeze or heatwave warning? Is the sunlight level significantly lower than usual? Based on the answers, the job adjusts the reminder due date and records the highest-priority reason for the adjustment in the database. A user who has a watering reminder on a rainy day will see it pushed automatically, with an explanation they can override.
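Here is a simplified sketch of how that kind of job can be wired up and how the conditional logic might look. The thresholds, field names, and priority order are illustrative, and as described above, the real job also persists the chosen reason to the database.

```python
# Sketch of a daily weather-aware reminder job. Forecast fields, thresholds,
# and the priority order are illustrative assumptions, not the app's values.
from dataclasses import dataclass
from datetime import date, timedelta

from apscheduler.schedulers.background import BackgroundScheduler

@dataclass
class Forecast:
    precipitation_mm: float
    low_c: float
    high_c: float

def adjust_watering_reminder(due: date, outdoors: bool, fc: Forecast) -> tuple[date, str | None]:
    """Return the (possibly shifted) due date and the highest-priority reason."""
    if fc.low_c <= 0:
        return due, "freeze_warning"  # highest priority: flag it, let the user decide
    if fc.high_c >= 38:
        return due - timedelta(days=1), "heatwave"  # water sooner in extreme heat
    if outdoors and fc.precipitation_mm >= 5:
        return due + timedelta(days=1), "rain_expected"  # rain covers outdoor watering
    return due, None  # no weather reason to adjust

def run_daily_weather_job():
    ...  # fetch users with active reminders, get forecasts, apply the logic above

scheduler = BackgroundScheduler(timezone="UTC")
scheduler.add_job(run_daily_weather_job, "cron", hour=6)  # every day at 6 AM UTC
scheduler.start()
```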
I designed the logic for this feature. Claude Code implemented it. The distinction matters, and I have come to think it is the most important one in AI-assisted development: who is deciding what the software should do?
What directing AI development actually requires
AI-assisted development does not reduce the need for product and engineering judgment. It relocates it.
Claude Code will build whatever you describe. It will not tell you that you are designing yourself into a corner three months from now. The decisions about data modeling, feature scope, user experience, and system boundaries were mine to make. Getting them right required the same depth of thinking they would have required without AI in the loop, and sometimes more, because moving fast means mistakes compound faster too.
I also had to develop taste. Not every technically correct implementation is a good one to maintain. When reviewing Claude Code's output, I made a lot of small editorial decisions about naming, structure, and clarity. That part cannot be delegated, and I found I had stronger opinions about it than I expected.
What I would tell someone starting this way
- Be honest with yourself about your role. If Claude Code is writing the code and you are directing it, own that clearly. "I directed the build and reviewed all code before shipping" is accurate and respectable. Pretending you wrote every line helps no one, including you.
- Invest in your ability to review. The quality of AI-assisted development depends heavily on the quality of the review. Reading code you did not write, understanding what it does, and catching what it gets wrong are skills worth developing deliberately.
- Your product instincts matter more than you might expect. The part of this project I feel best about is not any individual technical decision. It is the feature set, the user experience, and the product logic. Those came from me. Claude Code made them real.
- Keep a context document. I maintain a living CONTEXT.md file in the repo that orients any new Claude Code session to the project: architecture decisions, naming conventions, known constraints. It is the kind of documentation that keeps a project coherent across many sessions and many features. A trimmed skeleton is shown below.
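The skeleton below is illustrative, not my actual file; the entries are condensed from facts mentioned in this post, and the naming-convention line is a made-up example of the level of specificity that helps.

```markdown
# CONTEXT.md

## Architecture decisions
- Flask + PostgreSQL (Supabase). All LLM calls go through LiteLLM, never a provider SDK directly.
- AI responses must survive provider outages: Claude Haiku, then Gemini, then hardcoded advice.

## Naming conventions
- Route handlers are verb_noun (e.g. create_reminder); templates mirror route names.

## Known constraints
- Every user-facing page ships in English and Spanish with hreflang alternates.
- WCAG 2.1 AA: no color-only state, all interactive elements keyboard-reachable.
```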
PlantCareAI is an early-stage app. But it is real: live Stripe billing, real infrastructure, real users, real production operations. I learned more building it than I expected, and the best part is that I genuinely enjoyed it.
What I enjoyed most, it turns out, was not writing code. It was owning a product, making decisions that mattered, and watching something I designed actually work for people. That clarity about what I find meaningful has been as valuable as anything else that came out of this project.