Never in the history of the internet has it been easier to create a digital identity. And never has it been more imperative - at least if any facet of your income flows through a computer.
I just built mine - an arcade cabinet floating in amber darkness, three buttons glowing on a CRT screen, scanlines and phosphor bloom and grain. Distinctive. Ownable.
But there's a question that nags at you when you ship something built this way: Is this fresh or boilerplate?
When the machine can generate "good" in seconds, good isn't the bar anymore. The barrier to great got lower, which means the expectation got higher. You're not competing with what you could make alone. You're competing with what anyone can make with unfettered access to one of the world's most supercharged computers.
So what counts as "good enough" in that environment?
For me, for now: good enough means better than yesterday.
Yesterday this site didn't exist. Today it's an interactive landing page with a constellation-themed changelog, a Mastermind game that unlocks free software, and a 14KB footprint that loads in 300ms.
Actually, I'd say that's more than a landing page. It's a jump point.
The Workshop
I like to speed craft in my workshop. Walk in, survey my tools and supplies, see if I can crank out a table or a bench in a single session. Cuts, fits, glue, staple, sand, paint. Done before the feeling fades.
This was how I processed things I couldn't control. The pleasure wasn't the object - it was the compression. Start to finish, one session, something real at the end.
But here's the thing about speed crafting: it only works if you start with a solid idea of what you're building and how to use your tools. The speed comes from clarity - a picture in your head, and an intuition for what each tool can do, how they work together. No picture, no path. No fluency with your tools, no flow.
Now I can speed craft with software. It's a different kind of satisfaction for a different medium. But the same rules apply.
The Conductor Experiment
I've spent the last month working extensively with AI coding agents. They are good - I mean really good. But they aren't perfect, and how you employ them matters.
Especially if you aren't 100% confident in what you're building. If you're iterating your way to success, you can spin your wheels just like in real life. The AI will happily help you build the wrong thing faster. It'll refactor code that shouldn't exist. It'll optimize a flow that needs to be scrapped. Endless motion that feels like progress.
Just like in the workshop: no picture, no path. And if you don't know your tools - what they can do, what they can't, how they work together - you'll blame the saw when it's your setup that's wrong.
So I've learned to stop. Reframe the problem. Reorganize the tools.
Here's a confession: even as someone who really wants to understand this stuff, I struggled. When I heard lifelong coders on the internet talk about "swarms of AI agents" accomplishing tasks, it sounded like science fiction. Or gatekept wizardry. It wasn't until I started building that I realized how simple - how intuitive - the underlying process is.
AI is just bottled context and cognitive power. It's problem solving, within limits.
Say you're walking through a field at night looking for your keys. Alone, you sweep your flashlight back and forth, double back, miss spots, cover the same ground twice. But with a partner, you can walk five feet apart in a straight line - close enough that your beams overlap slightly, far enough that you're covering new ground. Context that's shared, and context that's specific. Enough overlap to catch what falls between you, without wasting light on the same patch of grass.
That's what multi-agent coordination actually is. Less of a swarm, more like parallel search beams with intentional overlap.
A single AI instance loses focus over long sessions. The context window fills up. Attention drifts toward recent turns. An AI juggling performance optimization, UX improvements, and code quality does each one at 33% depth.
By rearranging those limits, you can empower AI to take on more complex tasks. That rearrangement is the scaffold I build around my AI workflow. So how does it work?
When I'm building complex software I employ four specialized agents, each with their own Claude Code terminal:
- Performance Agent: Makes it faster
- UX Agent: Makes it clearer
- Quality Agent: Makes it maintainable
- Design Agent: Challenges the whole approach
Each agent has its own system prompt defining its domain. They don't talk to each other directly. Instead, they share a bulletin/ folder - a filesystem-based coordination layer. Before editing code, they post intent. After editing, they post summaries. A shared changelog documents everything in real time.
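Here's a minimal sketch of what that bulletin layer could look like - Python, with invented file names and fields rather than the exact format my agents use:

```python
# Hypothetical sketch of the bulletin/ coordination layer.
# File names, fields, and the changelog line format are assumptions, not the real setup.
import json
from datetime import datetime, timezone
from pathlib import Path

BULLETIN = Path("bulletin")
CHANGELOG = BULLETIN / "CHANGELOG.md"

def post(agent: str, kind: str, message: str, files: list[str]) -> Path:
    """Write an entry other agents (and the Conductor) can read; kind is 'intent' or 'summary'."""
    BULLETIN.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    entry = {"agent": agent, "kind": kind, "message": message, "files": files}
    path = BULLETIN / f"{stamp}-{agent}-{kind}.json"
    path.write_text(json.dumps(entry, indent=2))
    # The shared changelog documents everything in real time.
    with CHANGELOG.open("a") as log:
        log.write(f"- [{stamp}] {agent} ({kind}): {message}\n")
    return path

# Before editing code, an agent posts intent; after editing, a summary.
post("performance", "intent", "Inline critical CSS, defer non-critical JS", ["index.html"])
post("performance", "summary", "Cut render-blocking requests down to one", ["index.html"])
```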
A fifth agent - the Conductor - reads all proposals, translates them to plain English, detects conflicts, sequences the work, and gives me one summary to approve or reject. I talk to the Conductor. It talks to the agents. The Conductor never writes code. It only directs. This prevents the orchestrator from becoming a bottleneck.
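The Conductor pass can be sketched the same way: read every posted intent, flag any file two agents both want to touch, and boil it all down to one plain-English summary. Again, a rough sketch under the same invented conventions:

```python
# Hypothetical sketch of the Conductor pass: read proposals, detect conflicts, sequence work.
import json
from collections import defaultdict
from pathlib import Path

def conduct(bulletin: Path = Path("bulletin")) -> str:
    intents = [json.loads(p.read_text()) for p in sorted(bulletin.glob("*-intent.json"))]
    # A conflict is two agents declaring intent against the same file.
    claims = defaultdict(list)
    for entry in intents:
        for f in entry["files"]:
            claims[f].append(entry["agent"])
    conflicts = {f: sorted(set(a)) for f, a in claims.items() if len(set(a)) > 1}
    # One summary for a human to approve or reject; the Conductor never writes code.
    lines = ["Proposed work, in posting order:"]
    lines += [f"  {e['agent']}: {e['message']}" for e in intents]
    if conflicts:
        lines.append("Conflicts to sequence before anyone edits:")
        lines += [f"  {f}: {', '.join(agents)}" for f, agents in conflicts.items()]
    return "\n".join(lines)

print(conduct())
```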
This is what I mean by "know your tools". Instead of fighting the context window limitation, I designed around it. Instead of one agent context-switching between concerns, four agents hold four contexts. Instead of me managing the coordination in my head, a fifth agent manages it in a bulletin board. The system absorbs the friction I used to carry.
In the first test, I pointed three agents at an 840-line landing page. In about 45 minutes, each agent ran 5-6 iterations. Performance made the page faster. UX made it clearer on mobile. Quality made the code more consistent and accessible.
They actually coordinated. Quality noticed that UX had added some quick-and-dirty inline styles and offered to clean them up. Performance recognized when it had done what it could and handed off to Quality for the rest. They read the bulletin and responded to each other - no human in the loop for those handoffs.
The Rejection
Then I ran the system on something harder: a complete rebrand.
I told the Conductor to "be bold." New palette, new typography, new layout, new visual personality. The other agents held for Design's creative direction, doing independent audit work in parallel.
Design came back with an incremental proposal. Consolidate font sizes. Normalize spacing. Dim the "Coming Soon" cards. The proposal even included a section titled "What I'm NOT proposing" - listing no color changes, no font changes, no layout restructuring.
The machine was hedging. In writing.
I rejected it: "This proposal is incremental polish, not a rebrand. The brief explicitly said 'bold creative vision.' That's the entire rebrand. Redo."
This was the highest-leverage moment in the entire process.
So I wrote the vision myself. 316 lines describing an arcade cabinet floating in amber darkness. A CRT screen with scanlines and phosphor bloom. Three navigation buttons: READ, BUILD, CONNECT. Hex codes for the color palette. ASCII diagrams of the layout. Typography specs. Animation keyframes. I marked it LOCKED.
Design v2 came back transformed. 673 lines of specific CSS values, HTML structure, responsive breakpoints. Estimated page weight: 12-15KB.
The Conductor's note to Design Agent was surprisingly terse: "This is what you should have come up with the first time." Can you imagine if your boss hit you with that?
But I sympathize with Design Agent. AI agents don't generate vision. They execute it. "Be bold" isn't a brief.
What the System Caught
Even with stopgaps built into the process, things slip through. In post-build review, UX caught that a paid-user fix was missing from the tools page. Complex build orders have gaps. But the system caught it because there was a dedicated reviewer whose only job was to catch it.
The most interesting failure was the Visual Depth Problem.
The agents built a constellation-style changelog - nodes floating in space, connected by lines, fading with age - and all six entries rendered at the same visual tier. The site was brand new, so the system I'd built to convey a history of ideas had no history to work with. No hierarchy. No spread. No depth.
None of the agents flagged it. Performance was checking load times. Quality was checking contrast ratios. UX was testing keyboard navigation. Each agent was doing its job correctly. The system functioned. It just didn't mean anything.
I'm the one who noticed. I looked at the constellation and felt nothing. The machines had built exactly what I'd asked for. They just couldn't tell where it fell flat.
This is the part that stays human. Machines care about function. We care about meaning. AI can give you an ocean of progress - but we're the ones who decide where to drop anchor.
The system evolved as I used it. Round 1 was parallel chaos. Round 2 was more sequential - design builds, others review, Conductor synthesizes, post-build validation. Round 3 used explicit five-phase blocking, cross-agent implementation on the same file, and meta-level corrections when the system view revealed problems invisible to individual agents.
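The blocking idea is simple enough to sketch, even though the real phase list was more involved. A hypothetical gate, reusing the bulletin convention from earlier - phase names and agent assignments here are illustrative, not the actual run order:

```python
# Hypothetical phase gate on top of the bulletin/ convention above.
from pathlib import Path

PHASES = [
    ("design builds", ["design"]),
    ("others review", ["performance", "ux", "quality"]),
    ("conductor synthesizes", ["conductor"]),
    ("post-build validation", ["ux"]),
]

def phase_done(agents: list[str], bulletin: Path = Path("bulletin")) -> bool:
    """A phase is done once every agent in it has posted a summary entry."""
    posted = {p.stem.split("-")[1] for p in bulletin.glob("*-summary.json")}
    return all(agent in posted for agent in agents)

for name, agents in PHASES:
    if not phase_done(agents):
        print(f"Blocked at phase: {name}")
        break
```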
By the end, the system was more structured than when it started. The agents taught me how to run them.
The New Shape
There are lasting questions about what AI is to us. The answer will be different for everyone, and it will change over time.
But here's what I can say right now: if you use it right, AI is a leveraged context machine. The thing humans are most encumbered by - context switching - now has scaffolding built around it. The friction that slows everything down is the exact thing the machine is built to absorb.
You can set up systems where the AI holds context you can't hold, remembers what you forgot you knew, and picks up exactly where the last session left off. Not perfectly. Not like a partner who knows the inside jokes. More like someone from your high school, different grade - they get your vibe, they're easy to work with, you don't have to explain everything.
That's the new collaboration shape. And it changes what's possible.
I've always believed that stopping to improve your process pays dividends. Sharpen the saw. Work on the system, not just in it. But it didn't always pan out. Sometimes I'd over-fixate on the perfect way to approach a problem and stretch the time to completion past the point of usefulness. The process became procrastination with a productivity aesthetic.
But something shifted. Now I can design, build, and ship products and solutions simultaneously. The line between "figuring out how to do it" and "doing it" is blurred enough to need no distinction.
My shortcuts are products. The multi-agent system I built to manage my own context switching? That's a tool someone else could use. A few hours hammering something out for myself becomes a few hours saved for someone else tackling the same problem.
The process and the output collapsed into each other.
That's part of why I'm writing this. AI is changing fast. What we can do with it today will be different in six months. Documenting what worked, what skills it unlocked, what it felt like to build this way - that's context for the future. Not just for me. For anyone trying to figure out what these tools are actually for.
Each small thing makes the next thing faster. The systems I build to manage context become tools. The brand I'm building becomes the test case for the workflow. The article about the process is a piece of the process itself.
It's recursive. It's compounding.
That's the feeling when things are working. That's how you know "good enough" is getting better.
Connor builds software and writes about technology at forgonetokens. Previously: filmmaker, fabricator, professional card counter.