<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kelson Qu</title>
    <description>The latest articles on DEV Community by Kelson Qu (@kelson_qu).</description>
    <link>https://dev.to/kelson_qu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819137%2Fca3d036c-117d-4a2d-b63e-71311fd50068.jpg</url>
      <title>DEV Community: Kelson Qu</title>
      <link>https://dev.to/kelson_qu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kelson_qu"/>
    <language>en</language>
    <item>
      <title>Building an AI Travel Planner with Claude, and Using Claude Code to Build It</title>
      <dc:creator>Kelson Qu</dc:creator>
      <pubDate>Sat, 18 Apr 2026 22:18:37 +0000</pubDate>
      <link>https://dev.to/kelson_qu/building-an-ai-travel-planner-with-claude-and-using-claude-code-to-build-it-1nfh</link>
      <guid>https://dev.to/kelson_qu/building-an-ai-travel-planner-with-claude-and-using-claude-code-to-build-it-1nfh</guid>
      <description>&lt;p&gt;We built TripMind, an AI travel planner that generates a full day-by-day itinerary from a destination, budget, and travel style, then immediately scores it with a second AI pass. The whole thing is built with Next.js 15, Supabase, and the Anthropic Claude API.&lt;/p&gt;

&lt;p&gt;But the more interesting story is how we built it. We used Claude Code as our primary development tool throughout the project, and we want to share what actually worked, what didn't, and what we'd do differently.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Architecture: One Claude Call, Not Many
&lt;/h2&gt;

&lt;p&gt;The first design decision was how to structure the AI calls. A naive approach would be to make separate calls for the itinerary, the budget breakdown, the food recommendations, and the attractions list. We went a different direction: one Claude call returns everything at once using &lt;code&gt;tool_use&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;claude-sonnet-4-5&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;submit_itinerary&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;input_schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;object&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;itinerary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;array&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;budgetItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;array&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;attractionItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;array&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;foodItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;array&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;tool_choice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tool&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;submit_itinerary&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;tool_choice: { type: "tool" }&lt;/code&gt; forces Claude to respond in structured JSON via tool use rather than prose. We then parse the result with Zod to validate the shape before touching any of it.&lt;/p&gt;

&lt;p&gt;This pattern (&lt;code&gt;tool_use&lt;/code&gt; + &lt;code&gt;tool_choice&lt;/code&gt; + Zod) gave us reliable structured output without any prompt hacking.&lt;/p&gt;
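&lt;p&gt;As a minimal sketch of the parsing step (the types and helper names below are illustrative, not our exact code): pull the forced &lt;code&gt;tool_use&lt;/code&gt; block out of the response, then shape-check its input before the app touches it. We used Zod for the check; a hand-rolled guard stands in for it here.&lt;/p&gt;

```typescript
// Sketch: extract the forced tool call's input from a Claude response.
// ContentBlock is a simplified stand-in for the SDK's response types.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; name: string; input: unknown };

function extractToolInput(content: ContentBlock[], toolName: string): unknown {
  for (const block of content) {
    if (block.type === "tool_use") {
      if (block.name === toolName) return block.input;
    }
  }
  throw new Error("no tool_use block named " + toolName);
}

// Stand-in for the Zod validation: confirm the payload carries the
// arrays the schema promised before anything downstream uses it.
function isItineraryPayload(
  input: unknown,
): input is { itinerary: unknown[]; budgetItems: unknown[] } {
  if (typeof input !== "object") return false;
  if (input === null) return false;
  const p = input as { itinerary?: unknown; budgetItems?: unknown };
  if (!Array.isArray(p.itinerary)) return false;
  return Array.isArray(p.budgetItems);
}
```

&lt;p&gt;Because &lt;code&gt;tool_choice&lt;/code&gt; forces the call, a &lt;code&gt;tool_use&lt;/code&gt; block is always present, so a malformed payload becomes a hard error instead of a silent fallback.&lt;/p&gt;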

&lt;h2&gt;
  
  
  LLM-as-Judge: Evaluating Your Own Output
&lt;/h2&gt;

&lt;p&gt;After generation, a second Claude call scores the itinerary across three dimensions: cost accuracy, diversity, and feasibility. Each dimension gets a score (0–100), a reasoning paragraph, and an overall verdict. We display this as a score card with expandable reasoning.&lt;/p&gt;

&lt;p&gt;Why evaluate at all? Because raw generation is easy to make look impressive but hard to trust. By forcing Claude to critique its own output, we surface weaknesses: an itinerary that's beautiful but unrealistic on a $500 budget, or one that crams 12 activities into a single day. The judge scores don't change the output, but they help the user calibrate trust.&lt;/p&gt;
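&lt;p&gt;Rolling the judge's dimensions up into a verdict is plain arithmetic. The shape and cutoffs below are assumptions made for the sketch, not the app's exact values.&lt;/p&gt;

```typescript
// Hypothetical judge output: one entry per dimension, scored 0-100,
// each with the reasoning paragraph shown in the expandable card.
interface DimensionScore {
  dimension: "cost accuracy" | "diversity" | "feasibility";
  score: number;
  reasoning: string;
}

// Illustrative roll-up: average the dimensions into an overall verdict.
// The 75/50 cutoffs are invented for this example.
function overallVerdict(scores: DimensionScore[]): string {
  const avg = scores.reduce((sum, s) => sum + s.score, 0) / scores.length;
  if (avg >= 75) return "trustworthy";
  if (avg >= 50) return "review recommended";
  return "needs revision";
}
```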

&lt;h2&gt;
  
  
  What We Actually Learned About Claude Code
&lt;/h2&gt;

&lt;p&gt;We used Claude Code for almost every feature. Here's the honest breakdown.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What worked well:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scaffolding new features from a skill template was fast. We wrote a &lt;code&gt;/add-feature&lt;/code&gt; skill that describes the service + API route + Zod pattern for our project. Claude followed it reliably: every new route came out with proper input validation and try/catch error handling.&lt;/p&gt;

&lt;p&gt;Hooks changed how we worked. A PostToolUse hook runs &lt;code&gt;tsc --noEmit&lt;/code&gt; after every file edit. A Stop hook runs the full test suite at end of session. We caught type errors and test failures that would have silently slid into CI. The PreToolUse hook blocking &lt;code&gt;.env&lt;/code&gt; edits prevented at least one accidental secret exposure.&lt;/p&gt;
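&lt;p&gt;Roughly, that wiring lives in &lt;code&gt;.claude/settings.json&lt;/code&gt;. This is a sketch of the shape, with the matcher and commands simplified from what we actually ran:&lt;/p&gt;

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npx tsc --noEmit" }]
      }
    ],
    "Stop": [
      {
        "hooks": [{ "type": "command", "command": "npm test" }]
      }
    ]
  }
}
```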

&lt;p&gt;The GitHub MCP integration made PR creation seamless. We created both PRs with C.L.E.A.R. checklists and AI disclosure metadata without leaving the terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What didn't work:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude Code is confidently wrong sometimes. Early on it suggested calling &lt;code&gt;generateObject&lt;/code&gt;, a Vercel AI SDK function that doesn't exist in &lt;code&gt;@anthropic-ai/sdk&lt;/code&gt;, the SDK we actually use. The code compiled, the types passed, and it failed at runtime. We now review every service-layer suggestion against the actual SDK docs before accepting it.&lt;/p&gt;

&lt;p&gt;Infrastructure decisions need human ownership. The ESLint flat config error took two hours to debug. Claude proposed three solutions; the first two made it worse. The fix (&lt;code&gt;FlatCompat&lt;/code&gt; from &lt;code&gt;@eslint/eslintrc&lt;/code&gt;) is the standard Next.js 15 pattern, but Claude didn't know it. For anything involving build tooling or CI config, we treat Claude's suggestions as starting points, not answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  TDD With AI Assistance
&lt;/h2&gt;

&lt;p&gt;The most disciplined part of our process was TDD on the judge components. We committed failing tests before writing a single line of implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git log &lt;span class="nt"&gt;--oneline&lt;/span&gt;
9a7c949 &lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;judge&lt;span class="o"&gt;)&lt;/span&gt;: add failing tests &lt;span class="k"&gt;for &lt;/span&gt;JudgeScoreCard   &lt;span class="c"&gt;# RED&lt;/span&gt;
2ea5197 feat&lt;span class="o"&gt;(&lt;/span&gt;judge&lt;span class="o"&gt;)&lt;/span&gt;: implement JudgeScoreCard               &lt;span class="c"&gt;# GREEN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We ran &lt;code&gt;npm test&lt;/code&gt; on the failing-test commit to confirm it actually failed before moving to implementation. This sounds obvious but it's easy to skip when you're moving fast. Having the Stop hook force a test run at session end made the red-green cycle feel natural rather than like extra work.&lt;/p&gt;

&lt;p&gt;Claude Code was useful here in a specific way: it wrote the test cases faster than we would have by hand, but we reviewed every assertion before committing. The combination (AI speed on the boilerplate, human review on the semantics) was genuinely better than either alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parallel Development With Worktrees
&lt;/h2&gt;

&lt;p&gt;Late in Sprint 2 we needed to work on coverage config and sprint documentation simultaneously. Git worktrees let us check out two branches into two different folders and commit independently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git worktree add &lt;span class="nt"&gt;-b&lt;/span&gt; feat/coverage-config ../tripmind-coverage
git worktree add &lt;span class="nt"&gt;-b&lt;/span&gt; feat/sprint-docs ../tripmind-sprints
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two terminals, two branches, no stashing, no context switching. The interleaved commits in the git log are real evidence of parallel work. It's a small thing but it made the last week of the project noticeably less chaotic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Without Slowing Down
&lt;/h2&gt;

&lt;p&gt;Five of the eight security pipeline gates are active:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PreToolUse hook blocks &lt;code&gt;.env&lt;/code&gt; file edits in Claude Code sessions&lt;/li&gt;
&lt;li&gt;Gitleaks scans the full git history for leaked secrets on every CI run&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm audit --audit-level=high&lt;/code&gt; catches vulnerable dependencies&lt;/li&gt;
&lt;li&gt;ESLint runs as SAST on every PR&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;claude-code-action&lt;/code&gt; posts an AI PR review with security acceptance criteria on every PR&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The security-reviewer sub-agent in &lt;code&gt;.claude/agents/&lt;/code&gt; checks every API route for Zod validation, RLS bypass, and secret exposure before we open a PR. It caught a missing 401 check on the trips endpoint before it hit code review.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We'd Do Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Write acceptance criteria before starting features.&lt;/strong&gt; We opened GitHub Issues retroactively. Writing testable specs first would have made the TDD workflow feel more natural and caught scope creep earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up CI on day one.&lt;/strong&gt; We added the CI pipeline in Sprint 2. Discovering that &lt;code&gt;eslint-config-next&lt;/code&gt; doesn't support flat config natively would have been less painful two weeks earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust Claude Code for structure, verify Claude Code for semantics.&lt;/strong&gt; It's very good at producing code that looks right. It's less reliable at producing code that is right for your specific SDK, toolchain, or architecture. The skill logs we kept (&lt;code&gt;docs/skills/task1-log.md&lt;/code&gt;) were worth every minute: they made v2 of the skill genuinely better than v1.&lt;/p&gt;




&lt;p&gt;TripMind is live at &lt;a href="https://ai-travel-planner-wkm7.vercel.app/" rel="noopener noreferrer"&gt;https://ai-travel-planner-wkm7.vercel.app/&lt;/a&gt;. The full source is at &lt;a href="https://github.com/arinaa77/ai-travel-planner" rel="noopener noreferrer"&gt;github.com/arinaa77/ai-travel-planner&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
    </item>
    <item>
      <title>Building DailyMood: How We Used AI-Assisted Development to Ship a Full-Stack Mood Tracker in Two Sprints</title>
      <dc:creator>Kelson Qu</dc:creator>
      <pubDate>Wed, 11 Mar 2026 23:06:46 +0000</pubDate>
      <link>https://dev.to/kelson_qu/building-dailymood-how-we-used-ai-assisted-development-to-ship-a-full-stack-mood-tracker-in-two-411a</link>
      <guid>https://dev.to/kelson_qu/building-dailymood-how-we-used-ai-assisted-development-to-ship-a-full-stack-mood-tracker-in-two-411a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;What does it look like to build a production-ready full-stack web application in two sprints, with AI as a core part of the development process? That was the challenge we took on with DailyMood, a mood tracking app that lets users log how they feel each day, browse their history on a calendar, and understand their emotional patterns through a trends dashboard.&lt;/p&gt;

&lt;p&gt;In this post we walk through the technical decisions we made, the architecture we landed on, the challenges we hit along the way, and most importantly, how we used two distinct AI modalities to accelerate every phase of the project without sacrificing code quality.&lt;/p&gt;

&lt;p&gt;The code is available on GitHub:&lt;br&gt;
&lt;a href="https://github.com/arinaa77/DailyMood-An-AI-Powered-Mood-Tracker" rel="noopener noreferrer"&gt;https://github.com/arinaa77/DailyMood-An-AI-Powered-Mood-Tracker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The live application is available at:&lt;br&gt;
&lt;a href="https://daily-mood-an-ai-powered-mood-track.vercel.app" rel="noopener noreferrer"&gt;https://daily-mood-an-ai-powered-mood-track.vercel.app&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  The Problem We Were Solving
&lt;/h2&gt;

&lt;p&gt;Most journaling and mood tracking apps ask for too much. They want paragraphs, categories, tags, and ratings before you have even finished your morning coffee. Our core UX constraint was simple: a full entry in under 30 seconds. Pick a mood. Add an optional note. Done.&lt;/p&gt;

&lt;p&gt;The challenge was building something that felt lightweight on the surface while being technically solid underneath: real authentication, a relational database with security policies, real-time sync across browser tabs, a responsive UI that works on both desktop and mobile, and a CI/CD pipeline that enforces quality on every push.&lt;/p&gt;


&lt;h2&gt;
  
  
  Tech Stack Decisions
&lt;/h2&gt;

&lt;p&gt;Before writing a single line of code, we spent time in Claude Web defining the tech stack.&lt;/p&gt;
&lt;h3&gt;
  
  
  Next.js 15 with App Router
&lt;/h3&gt;

&lt;p&gt;We chose Next.js because it handles both frontend and backend in a single project. API route handlers replace the need for a separate Express server, server components give us SSR without configuration, and the App Router's route groups let us cleanly separate authenticated and unauthenticated screens.&lt;/p&gt;
&lt;h3&gt;
  
  
  Supabase
&lt;/h3&gt;

&lt;p&gt;Supabase provides PostgreSQL with Row Level Security, JWT-based authentication, and realtime subscriptions in one managed platform. Instead of wiring together a separate database, auth service, and WebSocket server, Supabase collapsed everything into a single SDK and dashboard.&lt;/p&gt;
&lt;h3&gt;
  
  
  Tailwind CSS 4
&lt;/h3&gt;

&lt;p&gt;Tailwind's utility-first approach allowed us to build consistent styling without maintaining custom CSS classes. We used Plus Jakarta Sans via next/font/google for a modern typographic feel.&lt;/p&gt;
&lt;h3&gt;
  
  
  Recharts
&lt;/h3&gt;

&lt;p&gt;Recharts is a charting library built natively on React components, so it integrates smoothly with the rest of a React codebase. It allowed us to build charts without dealing with the complexity of D3 or managing canvas rendering manually.&lt;/p&gt;
&lt;h3&gt;
  
  
  Vitest + Playwright
&lt;/h3&gt;

&lt;p&gt;Vitest with React Testing Library was used for unit and integration tests, while Playwright handled end-to-end tests. Playwright tests used HTTP-level mocking of Supabase calls. This approach allowed us to achieve 97% coverage without requiring a real database during testing.&lt;/p&gt;


&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The application includes three authenticated screens that share a common layout.&lt;/p&gt;
&lt;h3&gt;
  
  
  Log Page
&lt;/h3&gt;

&lt;p&gt;The MoodPicker component handles mood selection and note input. A sidebar shows today's status, the current week's mood strip, the user's streak, and the total number of entries. All data flows through a custom hook called useMoodEntries, which manages CRUD operations and maintains a Supabase realtime subscription.&lt;/p&gt;
&lt;h3&gt;
  
  
  Calendar Page
&lt;/h3&gt;

&lt;p&gt;The calendar page displays a monthly grid built with date-fns. Clicking a date expands the full entry. Editing and deleting entries both open confirmation modals to avoid accidental destructive actions.&lt;/p&gt;
&lt;h3&gt;
  
  
  Insights Page
&lt;/h3&gt;

&lt;p&gt;The insights page includes three stat cards (average mood, top mood, and entry count), a Recharts bar chart with selectable time ranges (7 days, 30 days, 90 days), and a Personal Records card showing the longest streak, best month, favorite mood, and total entries. All records are calculated client-side from existing data.&lt;/p&gt;
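&lt;p&gt;As an illustration of the kind of client-side calculation involved (our real helpers differ in detail), a longest-streak pass over ISO date strings looks like this:&lt;/p&gt;

```typescript
// Sketch: longest run of consecutive days among entry dates.
// Dates are "YYYY-MM-DD" strings; duplicates collapse to one per day.
const DAY_MS = 86_400_000;

function longestStreak(dates: string[]): number {
  const days = Array.from(new Set(dates)).sort();
  let best = 0;
  let run = 0;
  let prev = Number.NaN; // NaN comparison fails, so the first day starts a run of 1
  for (const d of days) {
    const t = Date.parse(d); // date-only strings parse as midnight UTC
    run = t - prev === DAY_MS ? run + 1 : 1;
    if (run > best) best = run;
    prev = t;
  }
  return best;
}
```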

&lt;p&gt;One architectural rule we enforced strictly was that data fetching lives in hooks, never directly in components. Components receive data through props or hooks. This keeps components pure and easily testable while isolating the data layer for mocking during tests.&lt;/p&gt;
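&lt;p&gt;In miniature, the rule looks like this (names invented for illustration): the hook is the only layer that knows where data comes from, and everything downstream is a pure function of what the hook hands it, testable with plain arrays.&lt;/p&gt;

```typescript
// Illustrative split: presentation logic stays pure and takes data as
// input; only the hook layer would ever talk to Supabase.
interface MoodEntry {
  date: string;
  score: number; // 1-5 scale, matching the schema constraint
}

// Pure "component logic": no data fetching, trivially unit-testable.
function averageMoodLabel(entries: MoodEntry[]): string {
  if (entries.length === 0) return "No entries yet";
  const avg = entries.reduce((sum, e) => sum + e.score, 0) / entries.length;
  return "Average mood: " + avg.toFixed(1);
}
```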


&lt;h2&gt;
  
  
  Database Design
&lt;/h2&gt;

&lt;p&gt;The mood_entries table was intentionally designed to remain simple.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;mood_entries&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="n"&gt;gen_random_uuid&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt; &lt;span class="k"&gt;references&lt;/span&gt; &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="k"&gt;cascade&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;mood_score&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="k"&gt;check&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mood_score&lt;/span&gt; &lt;span class="k"&gt;between&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="n"&gt;note&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="n"&gt;timestamptz&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The constraint check (mood_score between 1 and 5) enforces the mood scale directly at the database level. The foreign key with on delete cascade ensures that deleting a user account automatically removes all associated entries.&lt;/p&gt;

&lt;p&gt;Row Level Security (RLS) policies ensure users can only read or write their own rows, even if a bug in the application layer omits the user filter. As an additional safety layer, every query explicitly includes .eq('user_id', user.id).&lt;/p&gt;
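&lt;p&gt;A sketch of what such policies look like in Supabase (the policy names here are illustrative):&lt;/p&gt;

```sql
alter table mood_entries enable row level security;

create policy "read own entries" on mood_entries
  for select using (auth.uid() = user_id);

create policy "insert own entries" on mood_entries
  for insert with check (auth.uid() = user_id);
```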




&lt;h2&gt;
  
  
  Real-Time Updates
&lt;/h2&gt;

&lt;p&gt;Real-time synchronization significantly improves the user experience. When an entry is saved on one device, the calendar updates on other open sessions almost immediately.&lt;/p&gt;

&lt;p&gt;This is implemented using a Supabase realtime subscription inside the useMoodEntries hook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mood_entries_changes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres_changes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mood_entries&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;fetchEntries&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The channel listens for INSERT, UPDATE, and DELETE events on the mood_entries table and triggers a refetch whenever data changes. The subscription is cleaned up when the component unmounts to prevent memory leaks.&lt;/p&gt;
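&lt;p&gt;The subscribe/remove pairing can be sketched with a stand-in client; the real calls are &lt;code&gt;supabase.channel(...).subscribe()&lt;/code&gt; and &lt;code&gt;supabase.removeChannel(channel)&lt;/code&gt;, and nothing below touches the network.&lt;/p&gt;

```javascript
// Mock client illustrating the mount/unmount pairing the hook relies on.
function makeMockClient() {
  const active = new Set();
  return {
    channel(name) {
      const ch = {
        name,
        subscribe() {
          active.add(ch);
          return ch;
        },
      };
      return ch;
    },
    removeChannel(ch) {
      active.delete(ch);
    },
    activeCount() {
      return active.size;
    },
  };
}

// Effect-style usage: subscribe when the hook mounts...
const client = makeMockClient();
const channel = client.channel("mood_entries_changes").subscribe();
// ...and remove the channel in the effect's cleanup on unmount,
// which is what prevents the memory leak described above.
client.removeChannel(channel);
```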




&lt;h2&gt;
  
  
  Testing Strategy
&lt;/h2&gt;

&lt;p&gt;We set 80% coverage as the minimum requirement enforced by CI and ultimately reached 99%.&lt;/p&gt;

&lt;p&gt;The testing strategy followed a three-layer test pyramid.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit Tests
&lt;/h3&gt;

&lt;p&gt;We wrote 119 unit tests using Vitest and React Testing Library. Tests simulate user actions such as clicking, typing, and submitting forms. The userEvent API was used instead of fireEvent to better simulate real user behavior. Recharts was mocked entirely because jsdom does not support SVG rendering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration Tests
&lt;/h3&gt;

&lt;p&gt;Integration tests focus on page-level behavior. The useMoodEntries hook was mocked to return controlled datasets. These tests verified logic such as whether the sidebar correctly displays "Entry logged today" or "Not logged yet," and whether streak calculations in MilestoneCards behave correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  End-to-End Tests
&lt;/h3&gt;

&lt;p&gt;Playwright runs full browser tests against the actual Next.js server. Supabase HTTP calls are intercepted with page.route() and replaced with mock responses. These tests verify navigation between pages, the full entry logging flow, and mobile responsiveness using a Pixel 5 viewport.&lt;/p&gt;

&lt;p&gt;One debugging challenge occurred when Vitest accidentally detected Playwright .spec.ts files and attempted to run them, causing a crash. The solution was adding an exclude rule in vite.config.js.&lt;/p&gt;
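&lt;p&gt;The fix amounts to something like this in &lt;code&gt;vite.config.js&lt;/code&gt; (the e2e glob is an assumption about where the Playwright specs live; note that overriding &lt;code&gt;exclude&lt;/code&gt; replaces Vitest's defaults, so &lt;code&gt;node_modules&lt;/code&gt; must be re-listed):&lt;/p&gt;

```javascript
// vite.config.js (sketch): keep Playwright specs out of Vitest's run.
export default {
  test: {
    exclude: ["node_modules/**", "e2e/**/*.spec.ts", "e2e/**/*.spec.js"],
  },
};
```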




&lt;h2&gt;
  
  
  CI/CD Pipeline
&lt;/h2&gt;

&lt;p&gt;Every push to the main branch triggers a four-stage GitHub Actions pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lint
&lt;/h3&gt;

&lt;p&gt;ESLint with Next.js rules runs first and fails immediately on errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test and Coverage
&lt;/h3&gt;

&lt;p&gt;Vitest runs all tests and enforces coverage thresholds. Reports are uploaded to Codecov.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Scan
&lt;/h3&gt;

&lt;p&gt;npm audit with the high severity threshold detects dependency vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build
&lt;/h3&gt;

&lt;p&gt;A production Next.js build verifies TypeScript compilation and page generation.&lt;/p&gt;

&lt;p&gt;Stages three and four only run if linting and tests pass.&lt;/p&gt;
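&lt;p&gt;In GitHub Actions terms, that gating is expressed with &lt;code&gt;needs:&lt;/code&gt;. The job names and steps below are a simplified sketch, not our exact workflow:&lt;/p&gt;

```yaml
# Sketch of the four-stage gating; steps are trimmed for illustration.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
  security:
    needs: [lint, test]   # runs only if both gates pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm audit --audit-level=high
  build:
    needs: [lint, test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
```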

&lt;p&gt;Vercel is connected to the GitHub repository. Every push to main triggers a production deployment, while each pull request generates a preview deployment URL for UI review.&lt;/p&gt;




&lt;h2&gt;
  
  
  How We Used AI: Two Modalities
&lt;/h2&gt;

&lt;p&gt;The most interesting part of this project was intentionally using two different AI tools for different phases of development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Claude Web for Planning
&lt;/h3&gt;

&lt;p&gt;Claude Web was used before any code was written. It helped draft the product requirements document (PRD), generate wireframes, design the database schema, and define the technology stack. The conversational workflow allowed iterative discussion of tradeoffs and requirements.&lt;/p&gt;

&lt;p&gt;We also created a project memory folder containing the PRD and wireframes so the implementation phase had a clear specification to follow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Antigravity for Implementation
&lt;/h3&gt;

&lt;p&gt;Antigravity acts as an IDE-integrated AI assistant with access to the file system, terminal, and codebase context. It uses cursor position, open tabs, the import graph, and code embeddings to understand the project structure automatically.&lt;/p&gt;

&lt;p&gt;We used Antigravity's planning mode to break features into parallel tasks, its skills system to enforce Scrum practices and test coverage requirements, and its self-review capability to run a pre-commit audit that caught lint errors before CI.&lt;/p&gt;

&lt;p&gt;A key technique was maintaining an agent.md file as a lock file containing a chronological log of architectural decisions, fixes, and changes. When the AI session reached its context window limit, agent.md allowed a new session to reconstruct the project state instantly.&lt;/p&gt;
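&lt;p&gt;Entries in that log are short and append-only. The format below is our own invention, shown with placeholder content drawn from decisions described in this post:&lt;/p&gt;

```markdown
## Decision log

- [Sprint 1] Route groups separate authenticated and unauthenticated screens
- [Sprint 1] Fix: excluded Playwright specs from Vitest via vite.config.js
- [Sprint 2] Coverage scoped to src/components, src/lib, src/app/(auth)
```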




&lt;h2&gt;
  
  
  Challenges and Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Next.js Breaking Change
&lt;/h3&gt;

&lt;p&gt;A framework update renamed middleware.ts to proxy.ts with a different export signature. Since this change happened after the AI's training data cutoff, the solution was providing the real error message so the AI could reason using the latest information.&lt;/p&gt;

&lt;h3&gt;
  
  
  SSR Hydration Mismatch
&lt;/h3&gt;

&lt;p&gt;A useMemo function containing Math.random() produced different values between server and client rendering. React detected the mismatch during hydration. The fix was replacing the random value with a constant placeholder string.&lt;/p&gt;
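&lt;p&gt;The failure mode is easy to reproduce outside React: any render function that consults &lt;code&gt;Math.random()&lt;/code&gt; produces different markup on the server pass and the client pass, while a constant renders identically on both.&lt;/p&gt;

```javascript
// Minimal reproduction of the mismatch, outside React: the "server"
// and "client" each call the render function once and compare output.
function renderPlaceholder(compute) {
  return "placeholder-" + String(compute());
}

// Buggy version: two passes, two different values, hydration mismatch.
const serverHtml = renderPlaceholder(Math.random);
const clientHtml = renderPlaceholder(Math.random);

// Fixed version: a constant renders the same markup on both passes.
const constant = () => "static";
const serverFixed = renderPlaceholder(constant);
const clientFixed = renderPlaceholder(constant);
```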

&lt;h3&gt;
  
  
  Coverage Scope
&lt;/h3&gt;

&lt;p&gt;Including Next.js page files in unit test coverage lowered the metrics because page components rely on server-only APIs incompatible with jsdom. The solution was scoping coverage to src/components, src/lib, and src/app/(auth). Page-level behavior was covered through Playwright tests instead.&lt;/p&gt;
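&lt;p&gt;In Vitest configuration terms, that scoping looks roughly like this (key names follow Vitest's coverage options; the globs are taken from the paths above):&lt;/p&gt;

```javascript
// vite.config.js (sketch): count coverage only where unit tests can
// meaningfully reach; page-level behavior is covered by Playwright.
export default {
  test: {
    coverage: {
      include: ["src/components/**", "src/lib/**", "src/app/(auth)/**"],
      thresholds: { lines: 80, branches: 80, functions: 80 },
    },
  },
};
```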




&lt;h2&gt;
  
  
  What We Would Do Differently
&lt;/h2&gt;

&lt;p&gt;If we restarted the project, we would establish the agent.md lock file from the beginning and require updates after each major change. In practice, we introduced it midway through development and had to reconstruct some early decisions.&lt;/p&gt;

&lt;p&gt;We would also configure the CI pipeline earlier. A pipeline that enforces tests and security on every push provides a safety net that allows faster iteration on features.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DailyMood became more than just a mood tracker: it became an experiment in deliberate AI-assisted software development.&lt;/p&gt;

&lt;p&gt;Using Claude Web for planning and Antigravity for implementation allowed each stage of development to use the most appropriate AI tool. The final result was a production-deployed full-stack application with 119 tests, 99% coverage, a four-stage CI pipeline, realtime synchronization, and a polished UI across desktop and mobile.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
