If you use tools like Claude Code, Cursor, or other AI coding agents, you've seen this message:
Usage limit reached
Try again in 4h 59m
Your flow stops. The AI is locked. And your first instinct is to scroll X or grab another coffee.
I used to do the same. Then I started tracking how I spent those cooldown windows — and realized something:
The quality of my next AI session was almost entirely decided by what I did during the wait.
AI tools are fast at generating code. But they still depend on humans for architecture, debugging, product thinking, and context. If you feed them better input, you get dramatically better output.
Here are 11 things worth doing while the limit resets — ranked roughly by impact.
1. Write Better Input for the Next AI Run
When AI output is bad, most developers blame the model. But the problem is almost always the input — either the prompt was vague, or the AI lacked context about what already exists in the project.
Example of a prompt that backfires:
I had a prompt that said "add email notifications for team invites". Claude Code generated a complete email setup — installed Nodemailer, created SMTP transport config, built HTML templates from scratch with inline CSS. The problem: my project already had Resend integrated with a shared sendEmail() helper and a template system in src/lib/email/. The AI just didn't pick up on it.
What better input looks like:
For a small task, a precise prompt with constraints is enough:
Add email notification when a user is invited to a workspace.
Constraints:
- use existing sendEmail() helper from src/lib/email/client.ts
- use the existing template system in src/lib/email/templates/
- follow the same pattern as the welcome email in welcome.ts
Do NOT:
- install new email libraries
- create new transport/provider config
For a larger feature, write a short spec before the session starts:
## Feature: Team Invitations
**Goal:** Allow workspace owners to invite new members via email.
**Flow:**
1. Owner clicks "Invite" → enters email
2. System generates a signed invite link (expires in 48h)
3. Recipient clicks link → creates account or joins if existing
**Data model:**
- invitations table: id, workspace_id, email, token, status, expires_at
**Edge cases:**
- Invite clicked after expiry → show "expired" page with re-request option
- User invited to workspace they're already in → friendly message, no duplicate
- Owner revokes invite before acceptance → soft delete, invalidate token
**Out of scope:** role-based permissions (separate feature)
The common thread: negative constraints (what not to do), pointers to existing code (what to reuse), and explicit edge cases (what not to forget). These three things eliminate most "redo" sessions. When I started writing input like this, the number of failed AI runs dropped by roughly half.
2. Break the Next Feature Into Atomic Tasks
AI performs best with small, deterministic tasks — not vague feature requests.
You might think: "Claude Code has plan mode — doesn't it break down tasks automatically?"
It does, and it's good. But there's a key difference. Plan mode breaks down a task within a single AI session. The AI decides the steps, the order, and the scope — and if it makes a wrong architectural decision in step 2, steps 3–6 inherit that mistake. By the time you notice, you've burned most of your session on code you'll throw away.
The alternative: you define the steps during the cooldown, each with a clear verification checkpoint.
Bad:
"Implement billing"
Even with plan mode, this gives the AI too many architectural decisions at once.
Better — a human-defined execution plan with checkpoints:
1. Create Stripe customer on signup (use existing User model)
2. Add subscription table: user_id, stripe_subscription_id, status, plan, current_period_end
3. POST /api/billing/checkout — create Stripe Checkout session
4. POST /api/webhooks/stripe — handle checkout.session.completed, invoice.paid, customer.subscription.deleted
5. GET /api/billing/status — return current plan and expiry
6. Add billing UI component reading from /api/billing/status
This doesn't mean every step must be a brand new session. For smaller features, you can run through several steps in one conversation — just verify the output at each checkpoint before moving on. For larger features, separate sessions (or even parallel runs via git worktree) give you cleaner isolation. Claude Code supports resuming conversations and rewinding to earlier checkpoints, so you have flexibility here.
The key principle: you control the granularity and the verification points, not the AI. Plan mode is great within each step — but the high-level breakdown should be yours.
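To make a checkpoint concrete: step 5 of the plan above ("return current plan and expiry") can be reduced to a small pure function you verify before moving on. This is a sketch only — the `Subscription` shape is an assumption based on the table in step 2, not real project code:

```typescript
// Hypothetical shape mirroring the subscription table from step 2.
interface Subscription {
  status: "active" | "past_due" | "canceled";
  plan: string;
  current_period_end: Date;
}

// What GET /api/billing/status would return for a user.
function billingStatus(sub: Subscription | null, now: Date) {
  // No subscription row means the user is on the free plan.
  if (!sub) return { plan: "free", active: false, expiresAt: null };
  // Canceled or expired subscriptions fall back to free.
  const expired = sub.current_period_end.getTime() <= now.getTime();
  if (sub.status === "canceled" || expired) {
    return { plan: "free", active: false, expiresAt: null };
  }
  return { plan: sub.plan, active: true, expiresAt: sub.current_period_end };
}
```

Because the function is pure, the checkpoint is a one-minute verification: feed it a null row, an active row, and a canceled row, and confirm the output before the AI touches the route handler.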
3. Make Your Repo AI-Readable
Claude Code scans your repository to build context. But most repos are optimized for humans (at best) — not for LLMs.
The single highest-impact thing you can do: maintain a CLAUDE.md file in your project root. This is Claude Code's native mechanism for persistent project instructions. When you run /init, it generates a starter CLAUDE.md automatically — but the real value comes from curating it over time.
A good CLAUDE.md includes:
# CLAUDE.md
## Build & test commands
- `npm run dev` — start dev server
- `npm run test` — run Vitest
- `npm run test:e2e` — run Playwright
- `npm run lint` — ESLint + Prettier check
## Architecture
Frontend (Next.js App Router) → API Routes → Background Workers (BullMQ) → PostgreSQL
## Key patterns
- All DB access through Drizzle ORM (src/db/)
- Auth via Supabase, middleware in src/middleware.ts
- Stripe webhooks handled in src/app/api/webhooks/stripe/
- Shared types in src/types/ — always import from here
- Email sending via shared helper in src/lib/email/client.ts
## Naming conventions
- API routes: kebab-case (e.g., /api/billing-status)
- Components: PascalCase, one component per file
- Database columns: snake_case
## What to avoid
- Do not install new dependencies without checking if an existing helper covers the use case
- Do not create new API clients — reuse the ones in src/lib/
For more granular rules, you can also use .claude/rules/ — a directory of markdown files scoped to specific parts of the codebase. For example, .claude/rules/api.md could contain conventions that apply only when Claude works on API routes.
Claude Code also has auto memory — it can persist learnings from corrections you make during a session. If you tell it "always use pino, not console.log" during a run, it can remember that for future sessions. Between the CLAUDE.md, scoped rules, and auto memory, you're building a persistent knowledge base that compounds over time.
Beyond these Claude Code-specific mechanisms, general repo hygiene still matters:
Explicit type definitions. AI hallucinates types less when real ones exist. A single types/billing.ts file prevents dozens of wrong assumptions.
Short comments on non-obvious logic. Not // increment counter — but // Stripe sends checkout.session.completed before invoice.paid, so we must handle idempotency here.
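The idempotency concern in that comment can be sketched in a few lines. This is a minimal in-memory illustration; a production version would use a unique-constrained database column so the guard survives restarts:

```typescript
// Tracks processed webhook event IDs. In production this would be a
// DB table with a unique constraint, not an in-process Set.
const processed = new Set<string>();

// Runs the handler only the first time a given event ID is seen.
function handleEventOnce(eventId: string, handler: () => void): boolean {
  if (processed.has(eventId)) return false; // duplicate delivery, skip
  processed.add(eventId);
  handler();
  return true;
}
```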
I spent one cooldown window writing my first CLAUDE.md and restructuring the src/ folder. The next Claude Code session required zero corrections.
4. Review and Refactor AI-Generated Code
AI-generated code has a specific failure pattern: it works, but it's not good.
Common issues I keep finding:
- Duplicate logic. The AI builds a helper similar to one that already exists three files away. You end up with `formatDate()` in four different files. Claude Code's auto memory and `CLAUDE.md` reduce this problem — if you've documented shared utilities, the AI is more likely to reuse them. But it still happens, especially across longer projects with many sessions. Code review catches what memory doesn't.
- Over-abstraction. A simple API call wrapped in a factory pattern with a strategy interface — for a function called once.
- Naming that technically describes the function but doesn't communicate intent. `handleData()` vs. `syncSubscriptionStatusFromStripe()`.
- Missing cleanup. Dangling event listeners, unclosed connections, unused imports.
This kind of review is hard to automate and hard to delegate to AI. It requires understanding the whole codebase — which is exactly what humans do better than LLMs right now.
Practical approach: pick one file per cooldown and read it slowly, line by line. You'll find things.
5. Write the Tests AI Missed
AI can generate tests, but it consistently misses the cases that matter.
What AI writes: happy path tests confirming the function returns the expected output.
What AI skips:
- What happens when the Stripe webhook fires twice for the same event? (idempotency)
- What if the database write succeeds but the API response times out? (partial failure)
- What if a user submits a form with valid-looking but malicious input? (validation edge cases)
- What if two users accept the same invite simultaneously? (concurrency)
These are the tests that catch production bugs. Spend the cooldown writing 3–5 of them for whatever feature you just built.
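As one example, the "two users accept the same invite" case can be pinned down with a self-contained test. The `Invite` shape is hypothetical, and the assertions use plain throws so the snippet runs anywhere (in a real suite you'd use Vitest's `expect`):

```typescript
interface Invite {
  token: string;
  status: "pending" | "accepted";
}

// Check-then-set acceptance logic. At the DB level this needs an atomic
// conditional update; this test documents the invariant we want:
// exactly one acceptance wins.
function acceptInvite(invite: Invite): boolean {
  if (invite.status !== "pending") return false;
  invite.status = "accepted";
  return true;
}

const invite: Invite = { token: "abc", status: "pending" };
const results = [acceptInvite(invite), acceptInvite(invite)];
if (results.filter(Boolean).length !== 1) throw new Error("double accept!");
```

The in-memory version passes trivially because the calls are sequential; the value of writing it down is that the same assertion, pointed at the real endpoint with two concurrent requests, catches the race the AI never considered.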
Framework note: if you're still on Jest for a Vite/Next.js project, consider switching to Vitest — it's faster, natively supports ESM, and the migration is usually trivial. For E2E, Playwright is the default choice I'd reach for today.
6. Actually Use Your Product
This is the most underrated item on the list.
AI writes code. It does not experience the product. It doesn't feel the friction of a confusing onboarding flow, the frustration of a slow page load, or the confusion of an unclear error message.
Things I've found by actually clicking through my own app:
- A signup form that accepted empty strings because validation only ran on blur
- A dashboard that took 4 seconds to load because it fetched all data on mount with no pagination
- A mobile layout where the primary CTA was hidden below the fold
- An error page that said "Something went wrong" with no way to recover
None of these would show up in unit tests. None of them would be caught by AI code review. They required a human sitting in front of the screen and thinking "would I use this?"
Spend 15 minutes clicking through the app the way your users would.
7. Debug What AI Built — Properly
AI generates code confidently. That confidence masks bugs that only appear at runtime.
Most common patterns I've seen:
- Wrong API assumptions. AI may generate code against examples or docs that don't match your exact API version or current SDK behavior. It builds something that looks right, passes a quick glance, and breaks at runtime.
- Async race conditions. Two state updates that work individually but produce inconsistent UI when they resolve in unexpected order.
- Silent failures. A `catch` block that logs the error but doesn't propagate it — so the function appears to succeed while the data is never saved.
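The silent-failure pattern and its fix, side by side (sync for brevity; the async version has the identical flaw):

```typescript
// Anti-pattern: the catch block swallows the failure, so the caller
// believes the write succeeded.
function saveQuietly(write: () => void): boolean {
  try {
    write();
    return true;
  } catch (err) {
    console.error("save failed", err);
    return true; // BUG: reports success even though nothing was saved
  }
}

// Fix: propagate the failure so callers can react to it.
function saveLoudly(write: () => void): boolean {
  try {
    write();
    return true;
  } catch (err) {
    console.error("save failed", err);
    throw err; // the caller now knows the data was never saved
  }
}
```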
Debugging workflow that actually works:
- Reproduce the bug with a specific, repeatable input
- Add structured logging around the suspected area (not `console.log("here")`)
- Isolate: does the bug come from the AI-written code or from how it interacts with existing code?
- Fix manually if small, or write a precise bug-fixing prompt for the next AI session
The key insight: AI is great at writing code but bad at understanding why something doesn't work. Debugging is still a human skill.
8. Improve System Design and Architecture
Cooldown windows are perfect for zooming out.
When you're deep in implementation, it's easy to lose sight of the bigger picture. Use the break to ask:
- Is the current data model going to scale to the next 10x of users?
- Are there synchronous operations that should be async (webhooks, email sending, PDF generation)?
- Is there a single point of failure that would take down the whole system?
Even a quick diagram clarifies thinking:
Client → Next.js API → [sync: DB write] → Response
→ [async: BullMQ] → Send email
→ Sync to Stripe
→ Generate invoice PDF
Tools: Excalidraw for quick sketches, Mermaid for version-controlled diagrams, Eraser.io for collaborative architecture docs. Pick whatever you'll actually use — the diagram itself matters less than the thinking you do while drawing it.
9. Audit Security and Error Handling
AI-generated code is notoriously weak on security and error handling. It optimizes for "does it work?" not "does it fail safely?"
This is partly a general software engineering problem — edge cases and failure modes have always been underrepresented in generated code. But working with an AI agent adds a specific layer: the agent can execute commands, modify files, and make network calls on your behalf. That means security review has two dimensions.
Application security checklist:
- Are all API routes behind auth middleware? (AI often forgets to protect new endpoints)
- Is user input validated on the server, not just the client? (AI loves client-only validation)
- Are database queries parameterized? (Usually yes with ORMs, but check raw queries)
- Are rate limits in place for auth endpoints and public APIs?
- Are permissions checked — not just "is the user logged in" but "does this user have access to this resource"?
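The last item on that list is the one AI gets wrong most often. A minimal sketch of the distinction, with hypothetical shapes:

```typescript
// Hypothetical shapes; the point is resource-level authorization,
// not just authentication.
interface User { id: string; }
interface Workspace { id: string; ownerId: string; memberIds: string[]; }

// "Is logged in" is necessary but not sufficient: also verify the
// user actually belongs to the resource they're requesting.
function canAccessWorkspace(user: User | null, ws: Workspace): boolean {
  if (!user) return false; // not authenticated
  return ws.ownerId === user.id || ws.memberIds.includes(user.id);
}
```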
Agent workflow security checklist:
- Review commands the agent proposes before approving them, especially anything that modifies infrastructure, env files, or deployment configs
- Be cautious when the agent works with untrusted data — user-submitted content, webhook payloads, or third-party API responses should not be passed raw into prompts
- Keep critical files (production configs, secrets, CI/CD pipelines) outside the agent's default write scope
- For riskier operations, consider running Claude Code in an isolated environment or VM
Error handling checklist:
- Do API routes return appropriate HTTP status codes, or is everything 200/500?
- Are external service calls (Stripe, email, S3) wrapped in retries with exponential backoff?
- Do errors include enough context to debug, without leaking internals to the client?
- Is there a global error boundary in the frontend?
I run through these checklists after every major feature. It takes 15 minutes and has prevented multiple production incidents.
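For the retries-with-backoff item, a sketch of what that wrapper can look like. The delay math is kept in its own pure function so it's trivial to verify; `sleep` is injectable so tests don't actually wait:

```typescript
// Delay before retry attempt n (0-based): base * 2^n, capped.
function backoffDelayMs(attempt: number, baseMs = 200, capMs = 10_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Generic retry wrapper for external calls (Stripe, email, S3).
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < maxAttempts - 1) await sleep(backoffDelayMs(attempt));
    }
  }
  throw lastErr;
}
```

Production versions often add jitter to the delay so a fleet of clients doesn't retry in lockstep; the structure stays the same.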
10. Set Up Observability (Before You Need It)
This is the thing you'll wish you'd done before the first production incident.
AI rarely sets up observability well unless you ask for it explicitly. It writes application code, not operational infrastructure. But without logs, metrics, and traces, debugging production becomes guesswork.
Minimum viable observability stack:
- Structured logging — JSON logs with consistent fields (event, user_id, duration_ms, error). Use pino for Node.js — it's fast and structured by default.
- Error tracking — Sentry or equivalent. Captures unhandled exceptions with full stack traces and context.
- Basic metrics — response times, error rates, queue depth. Prometheus + Grafana if self-hosted, or your cloud provider's built-in dashboards.
- Tracing (when you need it) — OpenTelemetry for distributed tracing across services.
You don't need all of this on day one. Start with structured logging and error tracking — those two alone cover 80% of debugging scenarios.
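What "structured logging with consistent fields" means in practice, as a minimal sketch. In the real stack you'd reach for pino; the field shape is the point:

```typescript
// Minimal structured logger: one JSON object per line, consistent fields.
function logEvent(event: string, fields: Record<string, unknown> = {}): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    event,
    ...fields,
  });
  console.log(line);
  return line;
}

// Usage: every log line is machine-parseable and filterable by field.
logEvent("invite.accepted", { user_id: "u1", duration_ms: 42 });
```

Because every line is valid JSON with the same top-level keys, log aggregators can filter by `event` or `user_id` instead of grepping free text.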
11. Prepare the Next AI Session
Instead of improvising prompts when the limit resets, prepare a short session brief. The difference between a productive AI session and a wasted one is usually decided before the first prompt.
What I write during every cooldown:
## Session prep — March 8
### Current state
Invitation feature is half-done. The data model and API endpoint (`POST /api/invitations`) are merged and working.
Still missing:
- acceptance flow
- email sending
- edge case handling
### Known issues from the last session
- invite token generation needs a cryptographically secure approach
- no expiry check on the acceptance endpoint yet
- email template exists but is not wired into the invite flow
### Tasks (in order)
1. **Fix token generation**
Replace the current token generation in `src/api/invitations/create.ts`.
Do this first so all new test data uses the correct format.
2. **Acceptance endpoint**
Implement `GET /api/invitations/accept?token=xxx`
- validate token exists and status = pending
- check `expires_at > now` — return 410 if expired
- if user exists → add to workspace
- if not → redirect to `/signup?invite=xxx`
- set invitation status to `accepted`
Reference: existing membership creation logic in `src/api/workspaces/members.ts`
3. **Wire email sending**
On successful invite creation, send email via the existing `sendEmail()` helper.
Template: `src/lib/email/templates/invite.tsx`
4. **Integration tests**
Cover:
- happy path
- expired token (410)
- already accepted token (409)
- existing user
- new user
- duplicate invite to the same email
Use the test factory in `src/test/factories/`.
### Done when
- all tests pass
- manual test succeeds end-to-end
- no TypeScript errors
- no lint warnings
What makes this useful is simple:
- current state tells the AI what already works
- known issues prevent rediscovering the same problems
- ordered tasks stop the AI from jumping ahead
- done when gives both you and the model a stopping condition
When the limit resets, I start with the current task, the relevant context, and the success criteria. No warmup, no context switching, no vague "continue this feature" prompt.
This workflow — plan during cooldown, execute during active session — is one of the biggest productivity multipliers I've found in AI-assisted development.
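As one concrete payoff from the brief above: the "fix token generation" task has a short, verifiable answer in Node. This is a sketch of the approach, not the project's actual code:

```typescript
import { randomBytes } from "node:crypto";

// Cryptographically secure, URL-safe invite token.
// Math.random() is predictable and must never be used for tokens.
function generateInviteToken(bytes = 32): string {
  return randomBytes(bytes).toString("base64url");
}
```

Knowing exactly what "done" looks like for step 1 means the session spends its budget on the acceptance flow, not on relitigating the token format.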
Final Thought
AI coding tools change the rhythm of development.
It's no longer write code → test → write more code. It's:
AI executes → Human reviews → Human plans → AI executes again.
The developers getting the most out of these tools aren't the ones typing the fastest or writing the cleverest prompts. They're the ones who use every minute between sessions to give the AI better context, clearer specs, and fewer decisions to make.
The limit gives you a built-in planning phase. Use it.
How do you spend the time between AI sessions? I'm curious what other developers have figured out — drop a comment below.