A follow-up to The AI Development Workflow I Actually Use
I wrote about my AI development workflow a couple of weeks ago. Task Master for structured tasks, Context7 for current docs, handover documents between fresh chats, multiple AI perspectives before coding. That workflow shipped working software.
Today, a significant part of that workflow became optional.
Claude Opus 4.6 launched in Cursor on February 5th, 2026, with a 1 million token context window. I've been using Opus 4.5 with its 200K limit for months. The jump to 1M isn't an incremental improvement. It changes what's possible in a single conversation.
This is what happened when I tested it on a real project.
The Project: ironPad
ironPad is a local-first, file-based project management system I've been building with AI. Rust backend (Axum), Vue 3 frontend, markdown files as the database, Git integration for versioning. It's a real application, not a demo.
Tech stack:
- Backend: Rust, Axum 0.8, Tokio, git2, notify (file watching)
- Frontend: Vue 3, Vite, TypeScript, Pinia, Milkdown (WYSIWYG editor)
- Data: Plain Markdown files with YAML frontmatter
- Real-time: WebSocket sync between UI and filesystem
The codebase has around 80 files across backend and frontend. Not massive, but too large for 200K context.
The Old Way: 200K Context
With Opus 4.5 and 200K tokens, my workflow for this project looked like this:
- Break big features into 3-5 tasks — because the AI can only hold a few files at once
- Write handover documents between each chat — so the next session knows what happened
- Carefully select which files to show — can't load everything, so I'd pick the 3-5 most relevant files
- Repeat context-setting every session — paste the handover, re-explain the architecture, point to the right files
It worked. I shipped features. But there was some friction in the handover system.
The handover system was my solution to a constraint. A good solution, but still a workaround.
The New Way: 1M Context
Today I opened a fresh chat with Opus 4.6 and said: "Load the entire codebase into your context and analyze it."
That's it. No carefully selected files. No handover document. No context-setting preamble.
The AI proceeded to:
- List the entire project structure — every directory, every file
- Read every single source file — all Rust backend code, all Vue components, all stores, all configs, all documentation
- Hold all of it simultaneously — ~80 files, thousands of lines of code, across two languages and multiple frameworks
Then I asked: "Are there any bugs or improvements we should make?"
What It Found
The AI identified 16 issues across the entire codebase. Not surface-level stuff. Deep, cross-file bugs that required understanding how multiple components interact:
Real bugs:
- Auto-commit was silently broken — The background task checked a `pending_changes` flag, but nothing in the entire codebase ever set it to `true`. Auto-commits never fired. This is the kind of bug that requires reading `main.rs`, `git.rs`, and every route handler to spot. No single file reveals the problem.
- JavaScript operator precedence bug — `remote.value?.ahead ?? 0 > 0` evaluates `0 > 0` first due to precedence, making push/pull buttons always show the wrong state.
- Port binding race condition — The server checked if a port was available, dropped the connection, then tried to bind again. Another process could grab the port in between.
- Own saves triggering "external edit" dialogs — Only one of eight write paths called `mark_file_saved()`. The file watcher would detect the app's own saves and pop up "File changed externally. Reload?" for task and project saves.
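The precedence bug is easy to reproduce in isolation. A minimal sketch, where the `ahead` parameter stands in for `remote.value?.ahead` and the function names are mine:

```typescript
// In JS/TS, `>` binds tighter than `??`, so `ahead ?? 0 > 0`
// parses as `ahead ?? (0 > 0)`, i.e. `ahead ?? false`.
function isAheadBuggy(ahead: number | undefined): number | boolean {
  return ahead ?? 0 > 0; // yields the raw count, or `false`, never a real comparison
}

// Fixed: apply the `?? 0` fallback before comparing.
function isAheadFixed(ahead: number | undefined): boolean {
  return (ahead ?? 0) > 0;
}
```

The buggy form returns `3` where the fixed form returns `true`, and collapses to `false` when the remote state is missing, so any code that inspects the result strictly misreads the sync state.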
Architectural improvements:
- Non-atomic writes risking data corruption in 3 route files
- `confirm()` blocking the UI thread
- WebSocket reconnect using fixed delay instead of exponential backoff
- 120 lines of duplicated task parsing logic
- Missing CORS middleware
- No path traversal validation on asset endpoints
- Debug `console.log` left in production code
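Of these, the fixed-delay reconnect is the simplest to sketch. A minimal exponential backoff schedule in TypeScript (the constants and names are illustrative, not ironPad's actual values):

```typescript
const BASE_DELAY_MS = 500;   // first retry after 0.5s (illustrative)
const MAX_DELAY_MS = 30_000; // never wait longer than 30s

// The delay doubles with each failed attempt, capped at MAX_DELAY_MS,
// so a briefly unavailable server is retried quickly while a dead one
// isn't hammered at a fixed interval.
function reconnectDelay(attempt: number): number {
  return Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}
```

A reconnect handler would schedule `setTimeout(connect, reconnectDelay(attempt++))` on close and reset `attempt` back to zero on a successful open.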
What It Fixed
I said: "Can you fix all of these please?"
In one session, the AI:
- Rewrote the auto-commit system to simply try committing every 60 seconds (the existing `commit_all()` already handled "no changes" gracefully)
- Fixed the port binding by returning the `TcpListener` directly instead of dropping and rebinding
- Made `atomic_write()` public and switched all write paths to use it (which also solved the `mark_file_saved()` problem automatically)
- Added frontmatter helper functions and deduplicated the task parsing code
- Replaced the blocking `confirm()` with a non-blocking notification banner
- Added CORS, path validation, and exponential backoff
- Fixed the operator precedence bug
Result: `cargo check` passes. Zero lint errors on the frontend. 14 issues fixed, 2 intentionally deferred (one was a large library migration, the other a minor constant duplication across files).
This was done in a single conversation. No handovers. No task splitting. No lost context.
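The atomic-write fix deserves a note, since it quietly solved two issues at once. ironPad's `atomic_write()` is Rust, but the pattern is language-agnostic: write to a temp file in the same directory, then rename over the target. A sketch of the same pattern in Node-flavored TypeScript (the helper name and demo path are mine):

```typescript
import { writeFileSync, renameSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, dirname, basename } from "node:path";

// Write-then-rename: rename() atomically replaces the destination on
// POSIX filesystems, so a crash mid-write leaves the old file intact
// and readers (like a file watcher) never see a half-written file.
function atomicWrite(path: string, contents: string): void {
  const tmp = join(dirname(path), `.${basename(path)}.tmp`);
  writeFileSync(tmp, contents);
  renameSync(tmp, path);
}

// Usage: the update is all-or-nothing.
const target = join(tmpdir(), "ironpad-demo.md");
atomicWrite(target, "# demo");
const roundTrip = readFileSync(target, "utf8"); // "# demo"
```

Because the rename is the only moment the destination changes, routing every write path through one such helper is also the natural place to hook `mark_file_saved()`, which is why the one fix resolved the "external edit" dialogs for free.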
What Actually Changed
Before: The Handover Tax
With 200K context, every larger task or change carried overhead: it had to be split into smaller tasks, each session limited to what fit in context.
That overhead was the cost of the constraint. A good handover system made it manageable, but it was never free.
After: Direct Work
With 1M context, the full codebase audit looked like this:
Time for entire audit + fixes:

```
Loading codebase:  ~2 min   (AI reads all files)
Analysis:          ~3 min   (AI identifies 16 issues)
Fixing all issues: ~15 min  (AI applies all fixes)
Verification:      ~1 min   (cargo check + lint)
Total:             ~20 min
Overhead:          ~0 min
```
The same work with 200K context would have been 5+ separate sessions, each needing its own handover, each limited to the files it could see at once. Some of the cross-file bugs (like the auto-commit issue) might never have been found because no single session would have had both main.rs and git.rs and all the route handlers in context simultaneously.
Does This Kill the Handover Workflow?
No. But it changes when you need it.
Still valuable:
- Collaborating with someone who needs to understand what you've done
- Documenting decisions for your future self
- Projects larger than 1M tokens
No longer necessary:
- Splitting a feature into artificial micro-tasks just to fit context
- Writing handovers between closely related tasks
- Carefully curating which files the AI can see
- Re-explaining architecture every session
The handover system goes from "required for every task" to "useful for session boundaries." That's a big shift.
The Broader Pattern
What I've noticed building ironPad is that each AI capability jump doesn't just make existing tasks faster, it enables tasks that weren't practical before.
A full codebase audit wasn't practical at 200K. You could audit individual files, but finding bugs that span the entire system required a human to manually trace connections across files and then describe them to the AI. Now the AI just sees everything.
Cross-cutting refactors weren't practical at 200K. Changing how atomic writes work across 6 files, while also updating the file watcher integration, while also ensuring frontmatter helpers are available everywhere, that's a single coherent change when you can see all the files. At 200K, it's 3-4 sessions with risk of inconsistency between them.
Architecture-level reasoning wasn't practical at 200K. The auto-commit bug is a perfect example. The AutoCommitState was created in main.rs, the mark_changed() method existed in git.rs, but no route handler had access to it. Finding that requires understanding the full request flow from HTTP handler through service layer. That's trivial with the whole codebase loaded.
What's Next for ironPad
The project is open source; I released it on GitHub 30 minutes ago.
We're also going open method. Not just the code, but the process. How every feature was built with AI, what prompts worked, what didn't, how the workflow evolved from 200K to 1M context.
Because the tools keep getting better, but the process of using them well still matters. A 1M context window doesn't help if you don't know what to ask for.
Try It Yourself
The core of what worked today:
- Load everything. Don't curate files. Let the AI see the whole picture.
- Ask open questions first. "What's wrong?" before "Fix this specific thing." The AI found bugs I didn't know existed.
- Let it work in batches. The AI fixed 14 issues in one session because it could see all the dependencies between them.
- Verify mechanically. `cargo check` and lint tools confirm correctness faster than reading every line.
- Keep your structured workflow for session boundaries. Handovers and PRDs still matter for smaller tasks and bigger projects. They just aren't needed between every micro-task anymore.
The context window went from a limitation you worked around to a space you fill with your entire project. That changes the game.
*ironPad is being built in the open. Follow the project on GitHub.*

