I don't know where any of this is going. That isn't a confession. It's the method.
Last week I had five separate planning sessions in one day for a curriculum project I'm building. Two weeks before that, I shipped a feature for a personal operating system I'm developing for myself. Last month, I started a free tech-help service for my neighborhood. Each of those things has its own thesis. None of them have collapsed into "the one big idea," and I have stopped trying to make them.
The branching is the point. ADHD doesn't fail to generate ideas. It fails to retain them. Branches show up, get half-explored, and dissolve before the next branch arrives. The problem was never the branching. The problem was the losing.
So I built a system that doesn't lose them. The fact that I'm sitting here now, pulling six separate research threads out of weeks of journal entries without dropping any of them, is the proof of concept.
The reason I built it, though, is something I've only recently learned to say plainly.
The Real Thesis
Flux Meridian, at heart, is my response to the imposter-syndrome plague in the modern workforce.
That's the whole frame underneath the projects. BRAIN, the curriculum work, the tech-help service, the methodology I'm writing in public: all of them are experiments testing different angles of the same response. The proliferation of theses inside the program isn't me being unfocused. It's me responding to a problem that shows up in a lot of places, in a lot of forms, all rooted in the same conditioning.
I should explain what I mean by the conditioning, because if I don't, the rest of this post sounds like a complaint.
I have nine years at my current company. Two bachelor's degrees. A master's in IT. I still feel, most days, like I know nothing.
I don't think that's wrong. I think it's the right posture for what I'm doing. Curiosity and certainty tend to be inversely correlated, and I'd rather have the first. But I notice the feeling is constant, not occasional. And I notice that almost everyone I work with has the same feeling, including the people who appear most confident. We are all, mostly, fighting a quiet sense that we're about to be exposed.
Here's what conditioning does with a feeling like that. It teaches you to perform certainty you don't have. It teaches you that admitting "I don't know" is the same as being wrong. It teaches you to treat questions as threats to your standing rather than as the actual mechanism by which you would learn the thing.
I'm not exempt from any of this. I've been conditioned by it long enough that I sometimes respond on reflex, and to my own discomfort, I've realized I will occasionally lie by accident. Not a deliberate lie. A reflexive one. Someone asks me something, and before I can think, my mouth has already produced a confident-sounding answer that I'm not actually sure of. That's conditioning. That's what years of being punished for visible gaps does to a brain.
If you've ever caught yourself doing the same, you know the feeling. It isn't a character flaw. It's training. And it's widespread enough that I'd call it a plague.
What the Conditioning Costs
The cost is that we never actually grow.
A gap you cover is a gap you keep. The reflexive "I know that already," said to a coworker, said to a manager, said to yourself, is the thing that prevents the actual learning from starting. The question you don't ask is the gap you don't fix. Multiply that across a career, across a department, across an industry, and what you get is a workforce that produces a great deal of confident output and astonishingly little real growth.
This is the part that, day in and day out, I find most frustrating about working life right now. Not that people don't know things. Nobody can know everything; the volume of information in 2026 is absurd, bordering on insulting. The frustration is that people will answer my question wrong rather than admit they don't know. They'll operate on assumption rather than ask the question that would expose their gap. They'll pick the default and act sure of it, because the default feels safer than the admission.
The cost of that, for everyone around them, is that the work gets done badly while everyone pretends it isn't.
I've been on the receiving end of this often enough to feel it as a daily weight. I've also done it to other people often enough to know I can't pretend I'm only ever the victim of it. We're all in this together. The conditioning doesn't care which side of the conversation you're on.
It also makes the smallest gaps feel the most exposing. Imagine someone mentions a popular show, or an artist, or an app, and you say you haven't seen it, haven't heard them, don't use it. Watch the reaction. People will sometimes treat you like you've failed a basic test of being human. Over a show. The conditioning runs deep enough that even our trivial gaps feel like indictments. So we cover them. So we lie a little. So we never go look the thing up, because looking it up would mean admitting we didn't already know.
Multiply that pattern a thousand times a week and you get the texture of modern professional life.
The Recovery Practice
The move that breaks the pattern is genuinely simple to name and genuinely hard to do:
"I don't know — let's find out."
That's the whole recovery practice. Not "I should have known." Not "give me a minute to bluff something plausible." Not the silent panic followed by the confident-sounding guess. Just the admission, followed by the search.
School isn't really about knowing everything. It's about learning how to learn. Career growth isn't about already-knowing. It's about continuing to learn. We are always growing. There are always gaps. The question isn't whether you have gaps; you do, and you always will. The question is whether you plug them the right way. Through testing and learning. Or through duct tape and "I know that already."
Acknowledged gaps can be filled. Hidden gaps cannot. That's most of it, right there.
This is where BRAIN comes in, and where the broader Flux Meridian work comes in. BRAIN is partly a tool for managing ADHD. More deeply, it's an external scaffold for the practice of acknowledging gaps. It works best when you tell it what you don't know. It's structurally hostile to bluster; there's nothing it can do for you if you pretend, to it or to yourself, that you've already mastered something. It rewards the admission and partners with you to fill the gap.
I'm building a tool that, by design, only helps you when you're willing to be honest about what you haven't figured out yet. That's the practice. The tool is just the scaffolding.
The Theses Currently Running
With that frame in place, here are the specific experiments inside the program. Each is a different angle on the same underlying response. None of them are complete. All of them are open.
1. Can a tool partner with ADHD instead of correcting it? Most productivity tools assume the goal is to make an ADHD brain function more like a neurotypical brain. BRAIN doesn't. The ADHD brain is generating useful signal (random recall, pattern jumps, hyperfocus surges), and the system's job is to catch the signal and route it forward. Not pointing the brain at neurotypical defaults. Pointing the ADHD brain in its own direction, faster.
2. Can AI simulate on-the-job training at depth, and what byproducts emerge? I'm building a game in Unreal Engine that I do not know how to build, while documenting the experience as it happens. The curriculum that falls out of that work is one byproduct. The methodology I use to do it is another. The dev log itself is a third. Each byproduct has a different audience — students, working developers, people designing onboarding programs — and each one is real because the underlying experiment is real.
3. Can a person with deep systems knowledge teach an AI to teach them back? I'm not a domain expert in game development. I am a domain expert in how systems work; that's what twenty years of breaking things apart for a living gives you. The hypothesis is that systems-knowledge is enough scaffolding to use AI as a domain tutor, if you bring rigor to the questions you ask and the notes you keep. The expert that emerges at the end isn't me, and isn't the AI. It's the artifact of the partnership.
4. Can recursive AI use build expertise at a scale no individual could reach alone, and can that expertise be made transferable? The recursive loop: AI helps me learn, I document what's learned, the documentation grows the AI's context, the AI helps me learn faster and deeper, repeat across domains. If it works, the question is whether the methodology itself transfers. Can someone else, with their own AI partner and their own discipline, run the same loop?
5. Does judgment-free tech help reach people the formal sector misses? I run free tech assistance for underserved neighbors in Springfield, Missouri. The hypothesis isn't "free help is good"; that's obvious. The hypothesis is that the judgment attached to most tech support, not the cost, is the actual barrier for many people. Removing it changes who walks in the door.
6. Is citizen research a real category in 2026? I'm not in academia. I have rigorous documentation. I have a repeatable method. I have a published corpus that grows every week. The institution is the only thing missing. The open question is whether the work I'm producing counts as research without that institutional layer, and if it does, how many other people are sitting on findings they don't think they're allowed to publish.
Six experiments. Six beneficiaries. One underlying response.
The Partnership, Not the Tool
A quick word about what BRAIN actually does, because most posts about AI tools make the AI the protagonist, and that framing is wrong, or at least wrong for what I'm doing.
There are two ways to use AI right now.
Passive use. You hand the AI a problem. It hands back an answer. You evaluate the answer at the surface — does this look right? — and either accept it or move on. This is most current AI usage. It works fine for simple things and produces shallow results for complex ones, and you usually don't notice, because you don't have the context to notice.
Partnership use. You and the AI both have jobs. The AI does what it's good at: relentless documentation, infinite patience, never forgetting a detail it was once told. You do what you're good at: pattern recognition, course correction, noticing when something doesn't match the shape it should have. The AI's well-known weakness is the context window; it can lose the plot over long conversations. So you build a system that holds the context for it. And you stay involved, because the AI cannot tell when a pattern is breaking unless you're watching with it.
What I bring is a thing I underestimated for most of my life: I notice when patterns break. ADHD recall is sometimes scattered, but it's also weirdly comprehensive; random details I shouldn't remember turn out to be the exact thing that flags a drift. I'm not, strictly speaking, the smartest person in the room. I'm the person who notices when the room has subtly rearranged itself.
What BRAIN brings is the memory layer. It writes down the good and the bad. It writes down the dead ends and the breakthroughs equally, with no judgment about which will turn out to matter. After a few months of this, BRAIN has accidentally become the domain expert on the work itself. Not because anyone designed it to. Because nothing got thrown away.
The expert is the artifact of the partnership. Neither one of us alone could have built it.
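BRAIN's internals aren't the subject of this post, but the shape of that memory layer is worth making concrete. Here's a minimal sketch of the pattern I mean, an append-only journal plus a retrieval step that rebuilds context for each session. Every name in it is hypothetical; this illustrates the pattern, not BRAIN's actual code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Entry:
    text: str
    tags: set[str]
    written_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Journal:
    """Append-only memory layer: dead ends and breakthroughs recorded alike."""

    def __init__(self) -> None:
        self._entries: list[Entry] = []

    def append(self, text: str, *tags: str) -> None:
        # Nothing gets edited or deleted. The unjudged record is the point.
        self._entries.append(Entry(text, set(tags)))

    def build_context(self, *tags: str, limit: int = 20) -> str:
        # The workaround for the context window: the journal holds the plot
        # so the conversation doesn't have to.
        wanted = set(tags)
        matches = [e for e in self._entries if e.tags & wanted]
        return "\n".join(f"[{e.written_at:%Y-%m-%d}] {e.text}" for e in matches[-limit:])

journal = Journal()
journal.append("Tried runtime navmesh baking; editor crashed. Dead end.", "unreal", "nav")
journal.append("Agents need a NavMeshBoundsVolume scaled to the level.", "unreal", "nav")
prompt = journal.build_context("nav") + "\n\nQuestion: why do my agents still idle?"
```

The design choice that matters is the append-only discipline: the journal never decides in advance which entries will turn out to matter.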
Failure Isn't Failure
Here's the part of the practice that's hardest, at least for me: actually being okay with finding out the thesis was wrong.
Real research works by being wrong in a structured way. You frame a hypothesis, you test it, the test fails, you adjust, you go again. The failure of the test isn't a failure of the researcher. It's the entire mechanism. Trying to prove a thesis because you don't want to be wrong is the whole problem the first half of this post was diagnosing. It's the conditioning leaking into the lab.
I think most people already know this in their working life and just don't apply it to their ideas. Take troubleshooting. When I'm digging into a real problem, every check I run is technically an incorrect hypothesis until I hit the root cause. "Is it the network?" No. "Is it the config?" No. "Is it that one weirdly-named environment variable nobody documented?" Yes, finally. I am, depending on the problem, wrong fifty or a hundred times before I'm right. Nobody calls that failure. Everybody calls that the job.
The next time I see a similar issue, I'm wrong fifty times. The time after that, ten. Working knowledge accretes through being wrong in a structured way and writing down what didn't work. That's it. That's the whole thing. We just don't apply the same generosity to ideas that we apply to debugging.
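That loop is concrete enough to sketch. This isn't any particular tool, just the structure of the job, with every name hypothetical:

```python
def diagnose(checks, log):
    # `checks` is an ordered list of (hypothesis, test) pairs; each test
    # returns True only if that hypothesis is the root cause.
    for hypothesis, test in checks:
        if test():
            log.append((hypothesis, "confirmed"))
            return hypothesis
        log.append((hypothesis, "ruled out"))  # being wrong, in writing
    return None  # every known hypothesis missed: genuinely new territory

log: list[tuple[str, str]] = []
checks = [
    ("it's the network", lambda: False),
    ("it's the config", lambda: False),
    ("it's that one undocumented environment variable", lambda: True),
]
root_cause = diagnose(checks, log)  # wrong twice, right once, all three recorded
```

The log of misses is the part most of us skip, and it's the part that turns fifty wrong guesses this incident into ten wrong guesses the next.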
So if any of these six theses turns out to be wrong, that's a result. The hypothesis was incorrect; the work wasn't. The takeaway is what makes it research and not just enthusiasm. I had this idea, it didn't work out, here's what I learned, on to the next one. If the public sees the whole thing fall apart, the public also sees how I responded to it. Which is, frankly, probably more useful than seeing me succeed.
This is also why everything is open. If I were trying to protect a thesis I needed to be right, I'd publish later, or never. Publishing as I go is a forcing function for honesty. The corpus has to be inspectable, including when it makes me look uncertain, because uncertainty is the actual condition of research and pretending otherwise is what got us into the imposter-syndrome plague in the first place.
What Makes This Research
A research program needs three things, and I want to be specific about what I'm claiming.
1. Rigorous documentation. I have a Postgres database with hundreds of tagged entries spanning multiple projects.
2. A repeatable method. The protocol for how BRAIN and I work together is itself documented and is being iterated on as I go.
3. A published corpus. The dev log is open, the methodology is open, and the findings will be open as they emerge.
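"Tagged entries" is doing real work in that first item, so here's a toy version of the data model. Hedge accordingly: the real store is Postgres, this schema is a guess at the shape rather than the actual one, and sqlite3 stands in so the sketch runs anywhere.

```python
import sqlite3

# In-memory stand-in; the real store is Postgres, and this schema is
# illustrative, not BRAIN's actual one.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entries (
        id         INTEGER PRIMARY KEY,
        project    TEXT NOT NULL,     -- e.g. 'brain', 'curriculum', 'fluxhelp'
        body       TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE entry_tags (
        entry_id INTEGER REFERENCES entries(id),
        tag      TEXT NOT NULL
    );
""")

cur = db.execute("INSERT INTO entries (project, body) VALUES (?, ?)",
                 ("brain", "Context rebuild held up across a two-week gap."))
db.execute("INSERT INTO entry_tags (entry_id, tag) VALUES (?, ?)",
           (cur.lastrowid, "memory-layer"))

# Pulling one research thread out of weeks of entries is a single tag query.
thread = db.execute("""
    SELECT e.created_at, e.body
    FROM entries e JOIN entry_tags t ON t.entry_id = e.id
    WHERE t.tag = ? ORDER BY e.created_at
""", ("memory-layer",)).fetchall()
```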
The traditional fourth ingredient, the one I don't have, is the institution. I'm a person with a day job, three degrees, and a curiosity problem.
I don't think the institution is required. I think it's the easiest way to confer legitimacy, and I think a lot of people doing serious work outside academia have internalized that they aren't allowed to publish unless someone signs off on them first. Waiting for permission that's never going to come is just another way to cover the gap. I'm running the experiment of not asking for the signature. If the work is real, the corpus will speak for itself. If it isn't, I'll have learned a great deal anyway.
This is also part of the response to the conditioning. Waiting for permission is another version of bluster, dressed in better clothes.
Why It's Free
BRAIN is open-source. FluxHelp is free. The methodology is free.
This isn't generosity. It's the research mission. The data only matters if real people interact with it. A research program with no participants is a thought experiment, and thought experiments don't generate findings; they generate position papers. I'm not interested in writing position papers.
Free for individuals is structural. Organizations deploying this stuff at scale will pay, because organizational use is a different research question with different stakes. Individual use is the dataset. The dataset has to be free, or there's no research.
The Open Question
I don't know which of these theses lands first. Some may not land at all. The recursive expertise question is genuinely open. The citizen-research question is genuinely open. The methodology might turn out to be unteachable, or it might turn out to be the most transferable thing in the whole project. I have hypotheses. I do not have conclusions.
That's the part most posts about projects like this skip. They tell you what they figured out. They don't tell you what they're still working on, because uncertainty is harder to package.
I'm telling you what I'm still working on, because the uncertainty is the work.
The recovery practice — "I don't know, let's find out" — is what I'm trying to embody, in the open, with a tool that supports the practice rather than rewarding the bluster. It's also what I'm trying to recover at work, in tech culture more broadly, in any domain where pretending to certainty has started to feel like a job requirement.
Real knowledge comes in different shapes. You'll know things I don't. I'll know things you don't. The system fails when it pretends we should all have the same map. The system works when it treats the differences as the actual resource. BRAIN, at its core, is a scaffold for that practice. A way to acknowledge a gap, partner with a tool that can help fill it, and grow in a direction that's specifically yours.
If any of this resonates, if you have your own theses that haven't collapsed into one big idea, if you've been quietly running experiments without calling them that, if you've felt qualified-but-unqualified more often than not, consider this a nudge. The corpus is open. The methodology is in progress. Come watch. Push back. Bring your own gaps; we'll find out together.
I genuinely don't know where this goes. That's why it's worth doing.
Flux Meridian is a mission-driven organization based in Springfield, Missouri. BRAIN is on GitHub. FluxHelp serves the local community. The methodology, when it stabilizes enough to be teachable, will be open too. This is the introduction. Future posts will go deeper into each thesis individually.