Tinker, Tailor, Soldier, Spy
I've been a lifelong technologist and tinkerer. Linux CLI user for 25 years. Built a web design business at 13 (back before JavaScript existed). Even assembled a basic Linux distro from scratch once.
Despite decades of solving computer problems and writing the occasional bash script, I never quite broke through to serious programming. I'd tried to learn a few times and failed - with the tools available then, I couldn't stay interested long enough to get over the learning curve.
And I didn't work in tech. I spent 18 years as an intelligence officer, ultimately a senior leader of a global workforce. My work was spying (well, leading it).
The Philosophy That Shaped Everything
My entire leadership ethos came down to one principle: remove barriers so people can do their best work.
Let the people in the field--the doers--do what they do best. Give them the safety and decision space to succeed. Make innovation feel safe, not risky.
I was obsessive about plain language because anything else created uncertainty, and uncertainty breeds caution. When people aren't sure if they're allowed to try something, they don't try it.
Intelligence is a heavily regulated 'business': law and policy are deeply linked to its outcomes. I dug deep into every law and policy that could impact our work and systematically questioned everything that might be a myth. "We can't do that" barriers often turned out to be misunderstandings (or complete fabrications), not rules or laws. My job was finding those barriers and destroying them, along with any others that kept good people from doing good things.
This philosophy worked. We turned around failing teams and made a (relatively) small $50M program into a blockbuster that garnered White House attention -- more attention than programs 100x our size received -- and got me a couple of surprise phone calls that skipped about eight layers of management.
AI Solved a Problem and Changed Everything
Two things happened in close succession that changed my path.
First, I found ChatGPT right after its first public release, when it was already powerful enough to fuel my insatiable appetite for going deep. As someone with ADHD who learns by diving deep into problems, this was transformative. ChatGPT could work with my hands-on, tinkering style and accelerate my learning process dramatically.
Second, I hit a recurring data problem at work that I couldn't shake. I was managing operations for a 300+ person global organization, constantly frustrated that I couldn't get solid analysis of workforce activity and performance when all the data I needed was there. It just wasn't in a usable form. I tried to find expertise to help, but it didn't exist or wasn't available.
I did what came naturally: I learned to do it myself in Python, using AI-accelerated learning to iterate and solve real problems.
The limitations of AI were actually a hidden benefit. AI could help me dig deep into complex problems - like building and automating feature extraction and inference pipelines - but couldn't quite dig me back out or debug what it had helped me build. It forced me to understand the fundamentals, to learn to shape my questions and context for better outcomes, and to fill in the 300-ish things I'd skipped along the way.
I was hooked. The AI-driven programming loop was incredibly powerful. I was already a fast learner, but AI let me accelerate that by 3x (maybe 10x...).
I spent every spare moment coding, learning, iterating. Literally every week I'd look back at my code and think "what was I thinking? This is amateur hour stuff." Three years later, I still have that experience, though now it's more like a month or two.
By February 2025, I was professionally proficient in Python and TypeScript, with some Rust under my belt.
When You Can't Lead Authentically Anymore
Then came the crisis. The new administration issued executive orders that conflicted with my core leadership principles - the ones that had made me effective for 18 years.
I believe deeply in what Amazon calls "have backbone; disagree and commit" - if I don't like a decision, I'll tell you why and try to convince you to change it. If I fail to change your mind, I'll turn around and champion it completely.
That's how effective organizations work, with leaders who put outcomes before themselves.
But this was different. I couldn't do that anymore without sacrificing who I was.
I couldn't lead genuinely while being asked to implement policies that contradicted the values I'd built my career on. Transparency. Psychological safety. Empathy. Speaking truth to power. The same values that made me effective at removing barriers and enabling people to do their best work.
In my mind, I had to go. There was no choice. I could stay and compromise my principles - either by championing values I deeply disagreed with or by breaking my commitment to effective followership. Or I could leave, keep my principles, and use them to build something.
When the administration offered its infamous "fork in the road" deferred resignation initiative - continue receiving salary and benefits until September 30th in exchange for resigning - I jumped.
Same Values, Different Problem
At the same time, I'd been watching something troubling: AI was creating a new divide. The technical few who understood prompt engineering, context windows, and model selection were getting incredible value. Everyone else was struggling.
All the most powerful AI tools required high technical expertise to use effectively. This bothered me. It was the same barrier problem I'd spent 18 years solving - just in a different domain. So, I started building Knitli with a clear mission: democratize AI for normal people. Remove the barriers between technical and non-technical users.
Then I hit the wall myself.
As I was building, I kept getting frustrated by how AI coding tools were simultaneously powerful and really, really dumb. They'd generate code that ignored my architecture, my patterns, my existing conventions, my dependencies. A quick conversation about the problem, plus a lot of reflection, ballooned over a couple of days into a realization: the fundamental issue was context.
AI agents needed carefully tailored context that shifts and adapts throughout interactions. The "dump all the context" approach everyone was using made agents both ineffective and unnecessarily expensive. What was missing was a context layer - systematic, intelligent, adaptive.
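To make that concrete, here's a minimal sketch of the difference - hypothetical names throughout, not Thread's actual implementation. Instead of handing the agent everything, a context layer ranks candidate snippets by relevance to the current task and packs only what fits a budget:

```python
from dataclasses import dataclass

# Hypothetical types for illustration - not Thread's actual API.
@dataclass
class Snippet:
    path: str
    text: str
    relevance: float  # assume this came from an upstream retrieval/scoring step


def dump_all_context(snippets: list[Snippet]) -> str:
    """The naive approach: hand the agent everything.

    Token cost scales with repo size, and the agent has to find
    the signal in the noise itself.
    """
    return "\n\n".join(f"# {s.path}\n{s.text}" for s in snippets)


def adaptive_context(snippets: list[Snippet], budget_chars: int) -> str:
    """A minimal context layer: rank by relevance to the task at hand
    and pack only what fits the budget, best first.
    """
    packed: list[str] = []
    used = 0
    for s in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        chunk = f"# {s.path}\n{s.text}"
        if used + len(chunk) > budget_chars:
            continue  # doesn't fit; a smaller snippet later still might
        packed.append(chunk)
        used += len(chunk)
    return "\n\n".join(packed)


if __name__ == "__main__":
    repo = [
        Snippet("auth/session.py", "def refresh_token(): ...", 0.91),
        Snippet("docs/CHANGELOG.md", "v2.3: misc fixes", 0.12),
        Snippet("auth/models.py", "class Session: ...", 0.78),
    ]
    # Only the two auth files make the cut; the changelog is noise.
    print(adaptive_context(repo, budget_chars=100))
```

The real version is much harder - relevance scoring, adapting as the task shifts - but the shape of the problem is the same.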
The insight clicked: this was the same problem I'd been trying to solve for non-technical users, just one level deeper. And solving it for developers first would actually enable the broader vision later.
Building With the Same Philosophy
Now I'm building Knitli to solve the fundamental context problem in AI-driven software development.
We're building tools that help AI agents understand codebases the way I helped my teams understand their mission space: deeply, systematically, and with enough context to be confident.
Thread provides intelligent, adaptive context to AI agents so they understand your architecture, patterns, and existing code. CodeWeaver makes code navigation and comprehension actually pleasant for humans.
My philosophy hasn't changed: remove barriers. Make complex things accessible. Use plain language. Question assumptions about what's possible. Enable people--and AI--to do their best work without friction.
The vision evolved, but the values stayed constant.
The journey from intelligence executive to AI entrepreneur wasn't planned. But it makes perfect sense in hindsight. I'm solving the same problems I've always solved - just in a different domain, with the freedom to practice my values without compromise.
Sometimes the best solutions come from understanding problems from multiple angles. And sometimes you need uncomfortable realities to force the leap.
(I'm not immune to the problems I'm trying to solve, either. I had to completely scrap the first versions of Thread and CodeWeaver: the speed of AI-assisted coding produced code that would have been impossible to maintain. I now assign AI agents much smaller tasks while I deliberately shape their context to get what I need from them -- a process Thread and CodeWeaver hope to eventually automate.)
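For the curious, that manual process looks roughly like this today - a hypothetical sketch, not an actual Thread or CodeWeaver interface: one narrow task, paired with only the conventions and files it touches.

```python
# Hypothetical sketch of "small task + shaped context".
def build_task_prompt(task: str, conventions: list[str], files: dict[str, str]) -> str:
    """Assemble one tightly scoped prompt: a single small task plus
    only the conventions and files that task actually touches."""
    parts = ["## Task", task, "", "## Conventions", *conventions]
    for path, text in files.items():
        parts += ["", f"## File: {path}", text]
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = build_task_prompt(
        task="Add a `retries: int = 3` parameter to fetch_user(); change nothing else.",
        conventions=["Type hints everywhere.", "Raise on failure; never return None."],
        files={"api/client.py": "def fetch_user(user_id: str) -> User: ..."},
    )
    print(prompt)  # hand this to whatever agent or model client you use
```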
I'd like to hear about similar decisions other people made - when did you realize the only way forward was to do something completely different?