I'm not a data scientist. I've trained a few models before — simple classification problems, with AI writing the Python and me running the iterations. It worked. I got confident.
Then a friend asked for help with something harder.
Three Weeks at 0.58
The problem involved predicting an outcome from a mix of CRM data and call recordings. Not trivial, but not exotic either.
Quick primer on AUC — the metric I'll use throughout. Imagine your model looks at two random people: one where the answer is yes, one where it's no. AUC measures how often the model correctly ranks the yes above the no. Score of 0.5 means random guessing. Score of 1.0 means always right.
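If the pairwise description sounds abstract, here is a tiny sketch of it in Python, with toy numbers rather than anything from the real dataset. The hand-rolled pairwise count matches sklearn's `roc_auc_score`:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy labels and model scores, purely for illustration.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

# Pairwise-ranking view: over every (yes, no) pair, how often does the
# "yes" example get the higher score? Ties count as half.
pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
pairs = [(p, n) for p in pos for n in neg]
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
auc_pairwise = wins / len(pairs)

print(auc_pairwise)                      # pairwise estimate
print(roc_auc_score(y_true, y_score))    # sklearn computes the same value
```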
I tried everything I knew: XGBoost, feature engineering, extracting features from transcripts using AI models, trying different combinations. I assumed more data meant better results — that's how it's supposed to work. Instead, every time I added more features, the AUC dropped. Below 0.5 sometimes. Meaning the model was now actively misleading — it would've been better to ignore it entirely and flip a coin.
My ceiling was 0.581. I couldn't break it no matter what I did.
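For context, the baseline loop I was stuck in looked roughly like this. The file name, the `converted` target column, and the model settings are invented for illustration; the point is just "tabular features in, cross-validated AUC out":

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Hypothetical feature table: CRM fields plus features extracted from
# call transcripts (column names are made up for this sketch).
df = pd.read_csv("training_data.csv")
X = df.drop(columns=["converted"])
y = df["converted"]

model = XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.05,
    eval_metric="auc",
)

# Cross-validated AUC on the training data.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} ± {scores.std():.3f}")
```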
I stepped away from the problem for a week. Talked it through with a friend who actually knows this domain. Nothing clicked. I was out of moves.
Then Karpathy posted about autoresearch.
The Gold Wasn't the Model
The post generated a ton of hype. For two days I kept asking myself: how could I apply what he built to my problem? His project was about training a small language model — not a classification problem like mine. On the surface, nothing transferred.
But that was the wrong question. The interesting part wasn't what Karpathy was training. It was how. A fixed, uncheatable validation metric. An agent that modifies only the training script. A loop that runs while you sleep. The scientific method, automated.
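The shape of that loop, as a minimal Python sketch (the paths, the `train.py` script, and the commented-out agent step are placeholders, not his code or mine):

```python
import subprocess

import numpy as np
from sklearn.metrics import roc_auc_score

# Frozen holdout the agent never touches: labels live only in the harness.
# (Paths and file names here are placeholders, not anyone's real layout.)
val_labels = np.load("holdout/labels.npy")

def run_experiment(run_id: int) -> float:
    """Run the training script, then score its holdout predictions.

    The agent may rewrite train.py between runs, but the holdout labels and
    this scoring step stay fixed -- that's what keeps the metric uncheatable.
    """
    subprocess.run(["python", "train.py", "--run-id", str(run_id)], check=True)
    preds = np.load(f"runs/{run_id}/holdout_preds.npy")
    return roc_auc_score(val_labels, preds)

best_auc = 0.0
for run_id in range(1, 166):              # the loop that runs while you sleep
    auc = run_experiment(run_id)
    best_auc = max(best_auc, auc)
    # The agent step would go here: read the result, form a hypothesis,
    # rewrite train.py, and queue the next run.
    print(f"run {run_id}: AUC={auc:.4f} (best {best_auc:.4f})")
```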
I had a problem sitting unfinished. I had the method. I started a tmux session from my iPhone on Friday night and let it run.
What Happened Next
By experiment 30, the agent declared it had exhausted all options. I didn't accept that — I just asked it questions. That conversation unlocked something unexpected: a technique I'd never heard of that jumped AUC from 0.581 to 0.628 in one step.
From there, across 165 experiments, the agent built a system that pushed AUC to 0.6747 — a roughly 16% relative gain on a dataset I was ready to abandon. And the dynamic that made it work — agent explores, hits a ceiling, human nudges, agent continues — repeated itself throughout the entire run.
Read the full story with the complete results table and methodology →
The full post covers:
- The exact stacking architecture that broke through the 0.58 ceiling (a generic sketch of stacking follows this list)
- How "rubber duck debugging" with an AI agent led to techniques I didn't know existed
- The complete experiment-by-experiment results table (0.581 → 0.6747)
- What happens when the agent spawns its own research sub-agent mid-run
- Why staying close to what's emerging isn't optional anymore
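I won't spoil the exact architecture here, but stacking in the generic scikit-learn sense looks roughly like this. The base models and settings below are illustrative placeholders, not the configuration from the run:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Base models each see the same features; a simple meta-model learns how to
# combine their out-of-fold predicted probabilities.
stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier(eval_metric="auc")),
        ("rf", RandomForestClassifier(n_estimators=300)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,                            # out-of-fold predictions keep the meta-model honest
    stack_method="predict_proba",
)
stack.fit(X, y)                      # X, y as in the earlier baseline sketch
```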