I've been writing code professionally for over a decade and I run a small agency out of Bali — UI/UX, development, digital marketing. Most posts I read about "AI and agency work" are either doom ("we're all cooked") or vendor pitches ("buy our platform"). Neither matches what I actually see day-to-day.
This is the version I wish someone had written for me two years ago.
## What actually changed (and what didn't)
The honest summary: AI automation didn't replace agency work. It changed which parts of the work are valuable.
Here's the breakdown from inside our studio:
Compressed dramatically:
- First-draft copy and content
- Component scaffolding and boilerplate
- Research synthesis (interview transcripts, competitor scans)
- Performance reporting and dashboard prep
- Initial UI variations and design exploration

Barely moved:
- Information architecture decisions
- Edge-case handling in production code
- Stakeholder alignment and discovery
- Brand voice and creative direction
- The judgment calls about what to build next

If your role was 80% in the top bucket, you're feeling this hard. If your role was 80% in the bottom bucket, you're probably busier than ever — because the top bucket got cheap, so clients want more strategic work in the same engagement.
## A concrete example: a marketing site rebuild
Two years ago, a typical marketing site rebuild for us:
- Discovery: 1 week
- Wireframes: 1 week
- Visual design: 2 weeks
- Frontend build: 2 weeks
- QA + launch: 1 week
Total: ~7 weeks
Same project today:
- Discovery: 1 week (unchanged — humans still talk to humans)
- Wireframes: 2 days (AI-assisted exploration, faster iteration)
- Visual design: 1 week (AI for variations, senior designer for direction)
- Frontend build: 1 week (Cursor + scaffolding, devs on edge cases)
- QA + launch: 3 days (automated visual regression, AI-assisted a11y review)
Total: ~3 weeks
That looks like a 50%+ reduction in calendar time. It is — but the framing is misleading. We didn't fire half the team and pocket the difference. Two things happened:
- We added scope inside the same engagement. More iterations, more A/B tests, more polish on the parts users actually touch.
- We rebalanced senior vs. junior time. The "junior dev hammers out a navbar" hours mostly disappeared. The "senior dev figures out why this auth flow drops 12% of users" hours expanded.

## The stack that actually runs our work now
Skipping the buzzword versions. Here's what's in daily use:
Engineering side:
- Cursor + Claude for component scaffolding and refactors
- Custom internal AGENTS.md files per project so the AI has context on conventions
- LibreChat as our internal AI gateway (auditable, multi-model, no data leak to consumer accounts)
- Standard stack stays Node.js, Python, React, Go — none of that changed

Design and content side:
- AI for first-pass copy variations against tight briefs (briefs are the bottleneck, not the model)
- Figma + AI plugins for repetitive layout work
- Human-in-the-loop edit pass before anything reaches the client

Ops side:
- Automated weekly client reports pulled from analytics → drafted summary → strategist edits before send
- AI-assisted intake forms that pre-qualify before discovery calls

The pattern is consistent: AI handles volume, humans handle direction. Reverse that order and you ship garbage faster.
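To make the AGENTS.md idea concrete, here's a trimmed-down sketch of what one of those per-project files can look like. The project name, paths, and conventions below are invented for illustration — the point is the shape, not the specifics:

```markdown
# AGENTS.md — acme-marketing-site (illustrative example)

## Conventions
- React function components with hooks; no class components
- Tailwind for styling; do not introduce CSS modules
- All API calls go through src/lib/api.ts; never fetch directly in components

## Context
- This is a marketing site: prioritize Lighthouse scores and accessibility
- Brand voice guidelines live in docs/voice.md; match them in any generated copy
```

A file like this rides along in the repo, so every AI-assisted session starts with the project's rules instead of rediscovering them.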
## The four traps I see other agencies fall into
If you're evaluating an agency (or running one), these are the failure modes worth naming:
1. Treating AI as a margin grab
The agency cuts internal production time by 60% and keeps charging the same rates without expanding scope or improving outcomes. The client captures none of the benefit. This works for about one renewal cycle, then the client figures it out.
2. Automating broken processes
If your discovery process is broken, automating it produces broken discovery faster. AI is a multiplier on whatever's underneath it. Diagnose the process before you automate it.
3. Tool stacking without integration
I've seen agency pitches with 15+ AI tools listed. In practice, 13 of them aren't connected to each other and the team uses 3. What matters is reliable end-to-end workflows, not the size of the logo grid.
4. Removing the human-in-the-loop entirely
AI-generated content with no editorial pass is the agency equivalent of shipping `console.log` to production. It mostly works. Until it doesn't, very publicly.
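The fix for this trap can be as simple as a hard gate in whatever pipeline publishes content: AI-sourced drafts refuse to ship without a named human sign-off. A minimal sketch in Python — the class and function names here are illustrative, not our actual tooling:

```python
# Human-in-the-loop gate: AI drafts cannot be published without a reviewer.
# All names (Draft, approve, publish) are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    body: str
    source: str = "ai"                 # "ai" or "human"
    approved_by: Optional[str] = None  # reviewer identity, set on sign-off


def approve(draft: Draft, reviewer: str) -> Draft:
    """Record a human editorial sign-off on the draft."""
    draft.approved_by = reviewer
    return draft


def publish(draft: Draft) -> str:
    """The gate: AI-sourced drafts must carry a human sign-off."""
    if draft.source == "ai" and draft.approved_by is None:
        raise ValueError("AI draft has no editorial sign-off; refusing to publish")
    return draft.body


draft = Draft(body="Weekly performance summary ...")
# publish(draft) would raise here; after a strategist reviews it, it goes through:
published = publish(approve(draft, reviewer="strategist@example.com"))
```

Ten lines of enforcement beats a policy doc nobody reads: the failure mode becomes a loud exception instead of a quiet embarrassment in a client's inbox.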
## The business model problem nobody's solved
This is the part I'm still actively figuring out, and I think most agency owners are too.
Traditional agency retainers are priced on time — N hours per month, defined scope. That model assumes time is the constraint. AI broke that assumption.
If we deliver in 10 hours what used to take 40, do we:
Option A: Charge for 10 hours at the old rate. (Client wins, we lose 75% of revenue on that engagement.)
Option B: Charge for 10 hours at 4× the rate. (Client revolts unless we can prove the outcome justifies it.)
Option C: Charge for the outcome. (Pipeline generated, conversion improved, ship date hit.)
We've moved selectively toward C on engagements where the outcome is measurable and we have enough signal to predict our ability to deliver it. It's better for clients. It's also significantly riskier for us — vague scopes don't protect anyone in outcome-based pricing.
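The arithmetic behind those options is worth seeing plainly. Assuming an illustrative $100/hour legacy rate (the rate is invented; the ratios are what matter):

```python
# Illustrative numbers only: a $100/hr legacy rate on a 40-hour engagement
# that AI tooling compresses to 10 hours of delivery time.
legacy_hours, new_hours, rate = 40, 10, 100

legacy_revenue = legacy_hours * rate   # what the engagement used to bill
option_a = new_hours * rate            # bill actual hours at the old rate
option_b = new_hours * rate * 4        # raise the rate to hold revenue flat

revenue_lost = (legacy_revenue - option_a) / legacy_revenue
print(legacy_revenue, option_a, option_b, revenue_lost)  # 4000 1000 4000 0.75
```

Option A forfeits 75% of the revenue; Option B holds revenue only if the client accepts a 4x rate, which is exactly why the conversation has to shift from hours to outcomes.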
For engineers thinking about freelancing or starting an agency in this environment: pick your pricing model early, and price the value, not the hours. The hours metric is becoming structurally misleading.
## What this means for engineers inside agencies
If you're a developer working at an agency right now, the parts of your job that are most exposed are the most templated ones — landing pages with no novel state, CRUD admin panels, glue-code integrations. Those have been collapsing.
The parts that are less exposed are the ones that require holding the whole system in your head: API design that anticipates the next three features, refactors that don't break six other things, performance work, security review, debugging in production. None of that has gotten meaningfully cheaper.
Practical advice from inside the change:
- Get fluent with AI tooling, but don't outsource your judgment to it. The senior engineers I see thriving treat AI like a fast junior dev — useful, needs supervision.
- Push yourself toward the parts of the work that require system-level reasoning. That's where compensation is going.
- Learn enough product and business to participate in scoping conversations. "I just build what's specced" is a shrinking job description.

## The takeaway
AI didn't kill agency work. It killed the cheap version of agency work.
What's left — and what's growing — is the part of the work that requires judgment, context, and ownership of outcomes. That's harder. It's also more interesting, and from where I'm sitting, more sustainable.
If you're an engineer wondering whether to stay at an agency, start one, or go in-house: the answer depends a lot on which side of the judgment / execution line you want to live on. AI shifted the line. It didn't erase it.
I run Lenka Studio, a small digital agency in Bali working with SMBs across AU/SG/CA/US. This post is adapted from the original essay on our blog.