"Most apps are slower than they need to be." That was the top dev.to post this week. They're right — and AI pair programming is changing how fast you can find and fix the bottlenecks.
Here are real before/after results from performance work I've done with Claude over the past month.
The Workflow That Works
Before diving into numbers: the workflow matters as much as the tool.
1. Profile first (never guess)
2. Show Claude the profile output + relevant code
3. Ask for the 3 highest-ROI changes only
4. Implement, measure, verify
5. Repeat
The key: show Claude real data. Flamegraphs, query explain plans, benchmark results. Claude's performance suggestions without data are generic. With data, they're surgical.
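Capturing that data takes only a few lines. For a Python service, for instance, this is roughly what I paste in (the profiled function here is a stand-in for your own hot path):

```python
import cProfile
import io
import pstats

def slow_endpoint():
    # Stand-in for the real function you want to profile
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
slow_endpoint()
profiler.disable()

# Render the 10 most expensive calls as text you can paste into the chat
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
print(buf.getvalue())
```

Node has `node --cpu-prof`, and most databases have an `EXPLAIN ANALYZE`; the point is the same: paste the raw output, not your summary of it.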
Case 1: API Response Time — 847ms → 142ms
The Problem: A Node.js endpoint was slow. I profiled it and pasted the output to Claude.
What Claude found: Three N+1 queries buried in a loop that had been there for two years. I'd skimmed past that loop dozens of times; Claude spotted the problem immediately from the query counts in the profile.
```typescript
// Before: N+1 (1 query + N queries for N users)
const users = await db.select().from(usersTable);
const results = await Promise.all(
  users.map((u) =>
    db.select().from(postsTable).where(eq(postsTable.userId, u.id))
  )
);
```

```typescript
// After: 1 query (Claude suggested the join pattern immediately)
const results = await db
  .select({ user: usersTable, posts: postsTable })
  .from(usersTable)
  .leftJoin(postsTable, eq(postsTable.userId, usersTable.id));
```
Result: 847ms → 142ms. 83% improvement. Took 12 minutes including the profile time.
Case 2: React Re-renders — 340 renders → 12 renders
The Problem: A dashboard component re-rendered on every keystroke in an unrelated search box.
I pasted the React DevTools profiler output to Claude and asked "what's causing unnecessary renders in this tree?"
```tsx
// Before: every parent state change re-renders everything
function Dashboard({ userId }: { userId: string }) {
  const [search, setSearch] = useState("");
  const data = useExpensiveQuery(userId);
  return (
    <>
      <SearchBox value={search} onChange={setSearch} />
      <ExpensiveChart data={data} /> {/* Re-renders on every keystroke */}
    </>
  );
}
```

```tsx
// After: Claude suggested a memo boundary around the chart
import { memo, useState } from "react";

const MemoChart = memo(({ data }: { data: ChartData }) => (
  <ExpensiveChart data={data} />
));

function Dashboard({ userId }: { userId: string }) {
  const [search, setSearch] = useState("");
  const data = useExpensiveQuery(userId); // Still runs, but returns a stable reference when data is unchanged
  return (
    <>
      <SearchBox value={search} onChange={setSearch} />
      <MemoChart data={data} /> {/* Only re-renders when data actually changes */}
    </>
  );
}
```
Result: 340 renders/minute → 12 renders/minute during normal use. 96% reduction.
Case 3: Python Data Processing — 4.2 minutes → 18 seconds
The Problem: A nightly data processing script was taking 4+ minutes.
I showed Claude the cProfile output. It immediately flagged apply() in pandas inside a loop.
```python
# Before: 4.2 minutes
results = []
for date in date_range:
    df_filtered = df[df['date'] == date]
    df_filtered['processed'] = df_filtered['value'].apply(complex_transform)
    results.append(df_filtered)
```

```python
# After: Claude suggested vectorized operations + groupby (18 seconds)
df['processed'] = df.groupby('date')['value'].transform(vectorized_complex_transform)
```
Result: 14x faster.
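The transform names above come from my codebase. Here is a self-contained sketch of the same pattern with a made-up, group-wise transform (per-date normalization), showing that the grouped version produces the same values as the filter-and-loop version:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the real transform: normalize within each date group
def group_transform(values: pd.Series) -> pd.Series:
    return (values - values.mean()) / (values.std() or 1.0)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "date": np.repeat(pd.date_range("2024-01-01", periods=100), 50),
    "value": rng.normal(size=5000),
})

# Loop version: filter per date, process, reassemble
pieces = []
for date in df["date"].unique():
    piece = df[df["date"] == date].copy()
    piece["processed"] = group_transform(piece["value"])
    pieces.append(piece)
loop_result = pd.concat(pieces)

# Vectorized version: one grouped transform, no Python-level loop over dates
df["processed"] = df.groupby("date")["value"].transform(group_transform)

assert np.allclose(loop_result["processed"].to_numpy(), df["processed"].to_numpy())
```

The speedup comes from replacing N filter-the-whole-frame passes with a single grouped pass; the exact factor depends on the number of groups and the cost of the transform.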
What Claude Is Actually Good At (Performance Edition)
Claude excels at:
- Recognizing N+1 query patterns instantly
- Suggesting memoization boundaries in React
- Converting loops to vectorized operations
- Identifying missing database indexes from query patterns
- Rewriting O(n²) algorithms to O(n log n)
Claude struggles with:
- Performance issues requiring deep system knowledge (Linux scheduler, etc.)
- Caching strategy without knowing your read/write ratio
- Distributed systems bottlenecks without trace data
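The O(n²) → O(n log n) rewrites from the first list are usually the textbook kind. A minimal, hypothetical example of the shape Claude produces — pair-sum via sort plus two pointers instead of a nested loop:

```python
from typing import List

def has_pair_with_sum_quadratic(nums: List[int], target: int) -> bool:
    # O(n^2): check every pair
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_with_sum_sorted(nums: List[int], target: int) -> bool:
    # O(n log n): sort once, then walk two pointers inward
    s = sorted(nums)
    lo, hi = 0, len(s) - 1
    while lo < hi:
        total = s[lo] + s[hi]
        if total == target:
            return True
        if total < target:
            lo += 1
        else:
            hi -= 1
    return False
```

Both return the same answers; only the work done per input size changes.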
The meta-lesson: Claude is a pattern matcher operating at high speed. Performance optimization is fundamentally pattern matching. The intersection is where you get 10x leverage.
The 15-Minute Performance Audit Prompt
```
Here is the profiler output for [endpoint/function/component]:

[PASTE PROFILE DATA]

Here is the relevant code:

[PASTE CODE]

Identify the top 3 highest-ROI performance improvements, ordered by:
1. Impact (improvement magnitude)
2. Implementation risk (prefer low-risk changes)
3. Implementation time (prefer quick wins)

For each: show the before/after code and estimate the expected improvement.
```
Run this before any performance sprint. It takes 15 minutes and usually surfaces 80% of the available gains.
Running AI-assisted engineering workflows and documenting results publicly. Follow for weekly benchmarks.