How I’m Applying Andrej Karpathy’s AI Methods in Practice
Over the last few months, I’ve been less interested in AI as a novelty and more interested in it as operating infrastructure.
A lot of that thinking has been sharpened by Andrej Karpathy’s recent work and essays. Not because I’m trying to copy his exact stack, but because his framing is unusually practical. He keeps coming back to a few ideas that matter in the real world: agency beats raw intelligence, small systems are easier to trust, good notes compound, and AI works best when it is embedded into repeatable workflows instead of treated like magic.
That maps closely to how I’m building.
1. I’m using AI as leverage, not theater
One of Karpathy’s points that stuck with me is that agency matters more than intelligence. I think that is even more true when you’re a solo founder.
The bottleneck is usually not whether a model can generate text or code. The bottleneck is whether you can turn that capability into shipped systems, useful decisions, and consistent output.
In practice, that means I use AI to increase execution speed across a portfolio of projects:
- content and SEO systems
- internal knowledge capture
- product iteration
- research synthesis
- technical debugging
The goal is not to say “look, AI did this.” The goal is to reduce the drag between idea and implementation.
2. I’m building a compile-first knowledge system
Karpathy’s “LLM wiki” idea lines up with something I’ve been converging on: don’t just retrieve information on demand, compile it into a maintained knowledge base.
That distinction matters.
Retrieval-augmented generation (RAG) is useful, but it recomputes context from scratch every time you ask a question. A maintained wiki compounds. It becomes a durable operating layer.
So instead of letting insights disappear inside chat history, I’m organizing work into a structure with:
- raw source material
- compiled markdown knowledge
- rules and hypotheses
- changelogs and decision logs
That makes it easier to reuse insights across sessions, spot contradictions, and keep momentum when switching between projects.
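As a concrete sketch, the structure above can be as simple as a folder and a compile pass. This is a minimal, illustrative version, not the actual system: the layout (`raw/`, `knowledge.md`, `CHANGELOG.md`) and the `compile_raw` helper are assumptions for the example.

```python
from datetime import date
from pathlib import Path

# Hypothetical layout, mirroring the list above:
#   raw/         - raw source material (*.txt)
#   knowledge.md - compiled markdown knowledge
#   rules.md     - rules and hypotheses
#   CHANGELOG.md - changelog / decision log

def compile_raw(base: Path) -> int:
    """Fold raw notes into the compiled knowledge file, then log the pass.

    Returns the number of raw files compiled.
    """
    raw_dir = base / "raw"
    wiki = base / "knowledge.md"
    changelog = base / "CHANGELOG.md"
    compiled = 0
    for src in sorted(raw_dir.glob("*.txt")):
        entry = f"\n## {src.stem}\n\n{src.read_text().strip()}\n"
        with wiki.open("a") as f:
            f.write(entry)
        src.rename(src.with_suffix(".done"))  # mark the raw note as processed
        compiled += 1
    if compiled:
        with changelog.open("a") as f:
            f.write(f"- {date.today()}: compiled {compiled} raw note(s)\n")
    return compiled
```

The point of the sketch is the flow, not the code: raw material only counts once it has been compiled into the maintained layer, and every pass leaves a trace in the changelog.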
3. I’m treating notes as a working system, not an archive
Another Karpathy idea I like is the append-and-review note.
The principle is simple: don’t over-design your note system. Keep capturing. Periodically review. Rescue the high-signal items. Let the rest sink.
That is a much better fit for speed than building elaborate folder structures you never maintain.
I’m applying that idea operationally:
- append new findings quickly
- review and promote only what keeps proving useful
- turn recurring insights into rules
- keep the system lightweight enough that it actually gets used
In other words, notes are not just memory. They are a filter for what deserves to become process.
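The append-and-review loop can be sketched in a few lines. Everything here is illustrative (the `Note` class, the `hits` counter, the promotion threshold); the real system is just text files, but the shape of the loop is the same: one flat list, fast appends, periodic review that promotes what keeps proving useful.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Note:
    text: str
    hits: int = 0  # times this note proved useful during review
    created: str = field(default_factory=lambda: datetime.now().isoformat())

class AppendAndReview:
    def __init__(self, promote_threshold: int = 2):
        self.notes: list[Note] = []
        self.rules: list[str] = []  # recurring insights promoted to rules
        self.promote_threshold = promote_threshold

    def append(self, text: str) -> None:
        """Capture fast: no folders, no tags, no over-design."""
        self.notes.append(Note(text))

    def mark_useful(self, text: str) -> None:
        """Record that a note proved useful again."""
        for n in self.notes:
            if n.text == text:
                n.hits += 1

    def review(self) -> None:
        """Promote high-signal notes into rules; let the rest sink."""
        for n in self.notes:
            if n.hits >= self.promote_threshold and n.text not in self.rules:
                self.rules.append(n.text)
```

Note that nothing is ever deleted: low-signal notes simply stay in the list and sink, which keeps the capture step frictionless.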
4. I’m using agents for bounded work, with humans on judgment
Karpathy has also been clear that agents are getting much better, but they are still jagged. I think that’s exactly right.
An agent can feel brilliant one minute and strangely careless the next. That means the right pattern is not blind autonomy. It’s constrained autonomy.
So I’ve been using agents for:
- background research
- drafting and rewriting content
- code edits and implementation passes
- monitoring and checks
- structured summarization
But I still keep human judgment on:
- strategic direction
- final publishing decisions
- sensitive external actions
- quality control when stakes are high
That has been a much better model than pretending full autonomy is already solved.
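The constrained-autonomy split above amounts to a dispatch rule: bounded actions run straight through the agent, judgment actions get queued for a human. A minimal sketch, where the action names, the `run_agent` callable, and the review queue are all illustrative placeholders:

```python
from typing import Callable

# Work the agent may execute directly (bounded, reversible).
BOUNDED = {"research", "draft", "edit_code", "monitor", "summarize"}

# Work that always waits for human judgment.
NEEDS_HUMAN = {"publish", "send_email", "set_strategy"}

def dispatch(action: str, run_agent: Callable[[str], str],
             review_queue: list[str]) -> str:
    """Route an action: agent executes bounded work, humans keep judgment."""
    if action in NEEDS_HUMAN:
        review_queue.append(action)  # hold for human sign-off
        return "queued_for_review"
    if action in BOUNDED:
        return run_agent(action)     # constrained autonomy: safe to run
    raise ValueError(f"unknown action: {action}")  # fail closed, not open
```

Failing closed on unknown actions is the important design choice: when an agent is jagged, anything not explicitly bounded should default to human review or an error, never to autonomous execution.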
5. I’m favoring small, legible systems
One thing I appreciate about Karpathy’s work is that even when he talks about powerful systems, he still values inspectability.
That resonates with how I build.
I’d rather have a set of small scripts, explicit files, and clear handoffs than an opaque automation tower that nobody can debug. This matters even more when AI is involved, because the system has to be understandable when something goes wrong.
Legibility is not just an engineering preference. It’s a speed advantage.
6. I’m using AI across multiple operating contexts
A lot of the public discussion about AI still focuses on chat as a destination. For me, chat is just one interface.
The more useful pattern is embedding AI into a broader operating environment:
- research workflows
- knowledge pipelines
- publishing systems
- product support tasks
- technical execution loops
That’s where it starts to feel less like a demo and more like infrastructure.
7. The biggest shift is compounding
The most important practical lesson I’ve taken from Karpathy is not a specific prompt or tool.
It’s that AI becomes much more valuable when you design for compounding.
A prompt is disposable.
A workflow is reusable.
A maintained knowledge base compounds.
A network of owned content compounds.
A system that gets slightly better every day compounds.
That is the lens I’m using now.
I’m not trying to use AI for isolated wins. I’m trying to build operating systems around it, where research, execution, memory, and publishing reinforce each other over time.
That shift has made AI much more useful in practice.
If you’re building with these tools seriously, that’s the question I’d ask: are you just generating outputs, or are you building compounding systems?