I. The Ambient Revolution
I keep seeing the same moment at work.
You’re on a call, someone’s sharing their screen, you toss out a weird angle on the problem. Not wrong, just slightly sideways. They pause, open a new tab, and quietly type your thought—loosely translated—into an AI chat window.
They’re not announcing it.
They’re not making a point.
They’re just… checking.
They scan the answer, nod almost imperceptibly, and fold the result back into the conversation. No one reacts. No one debates whether this is allowed. The tool is background, like a calculator or a search bar.
The revolution came quietly and nobody objected because it was useful.
The silence here isn’t fear. It isn’t uncertainty. It’s acceptance—the kind of acceptance that happens when something crosses the line from “controversial” to “infrastructure.”
We aren’t waiting to see whether humans will collaborate with AI. We already are. At scale. The question now isn’t if this relationship exists.
The question is whether we’re willing to see it clearly and do it on purpose.
II. The Accessibility Precedent
In spirit, this isn’t new. It just feels bigger.
We’ve always built tools that bridge cognitive and sensory gaps:
Glasses let us read what used to be a blur.
Spell check quietly cleans up what our fingers mangle.
T9 predictive text helped whole generations type with their thumbs.
Screen readers turned visual interfaces into sound.
IDE autocomplete sees the pattern in your code and finishes the line.
None of these are treated as cheating.
No one prefaces a document review with, “Full disclosure, I’m wearing bifocals.”
No one confesses, “I used spell check on this email.”
Because those tools don’t replace the thinking. They reduce the friction between thought and expression.
Now take someone with dyslexia using an AI system to rephrase their own words. They’re not asking it to think for them. They’re using it as a cognitive prosthetic—something that lets their existing thoughts reach the page with less pain.
There’s a straight line from glasses to screen readers to predictive text to this.
We’ve always made technology that lets people be more fully themselves. The only difference now is scale: this is the first tool that can help across almost every domain at once, and that breadth makes it feel uncanny.
III. The Collaboration We Don’t Quite Name
We’re already in a very specific kind of relationship with these systems, even if we don’t have shared language for it yet.
You know the feeling: you have a gut-level, protein-folding kind of thought—messy, half-verbal, more shape than sentence. You could spend an hour turning it into something clean. Or you could spend ten minutes iterating with a model and arrive at a version that’s crisp, shareable, and faithful to what you meant.
In that moment:
The meaning originates with you.
The form—the phrasing, structure, scaffolding—is co-produced.
You’re not abdicating thought. You’re changing how your thought becomes public.
That’s the heart of this relationship: you do the semantic heavy lifting, the system helps serialize it into language, code, plans, diagrams. The internal stays human. The externalization gets help.
The relationship hasn’t changed as radically as the marketing implies. You’re still the source of the idea. What’s changed is the available bandwidth: more of your interior world can make it out where other people can interact with it.
Not replacement. Reach.
IV. The Pattern We’ve Always Known
We’ve been collaborating with predictive systems for a long time. They used to be narrower and therefore less interesting.
Autocomplete looks at the first half of your word and guesses the rest.
IDEs infer the variable you probably meant.
Search engines finish your question because a million people before you asked something similar.
We accepted those systems because the boundaries were obvious. They predicted small things, small enough to sketch in a few lines of code (see the snippet after this list):
The next letter.
The likely function name.
The rest of a familiar query.
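For concreteness, here’s the entire trick in miniature. This is a toy of my own, not any real autocomplete engine; the vocabulary and the `complete` function are invented for illustration:

```python
# A toy version of the "small prediction space" those older tools lived in:
# given the first half of a word, suggest the rest from a fixed list.

VOCABULARY = ["function", "functional", "fundamental", "query", "queue"]

def complete(prefix: str) -> list[str]:
    """Return every known word that starts with what was typed so far."""
    return [word for word in VOCABULARY if word.startswith(prefix)]

print(complete("fun"))  # ['function', 'functional', 'fundamental']
print(complete("que"))  # ['query', 'queue']
```

The boundary is legible at a glance: the tool can only hand back something you already half-typed, drawn from a list someone already wrote down.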
Now the prediction space is bigger.
You can ask a model:
“Explain this architecture idea I have like I’m a junior dev.”
“Turn this rant into a professional email.”
“Help me structure this course out of my half-baked notes.”
The dyslexic user isn’t outsourcing their point. They’re outsourcing the wrestling match with syntax and spelling. The senior engineer isn’t outsourcing the design. They’re using the tool to turn the architecture in their head into diagrams, ADRs, or code.
Same pattern, bigger surface area.
We haven’t suddenly started collaborating with machines. We’ve expanded what they’re allowed to help us express.
V. The Intentionality Line
This is where the silence can be either healthy or hazardous.
When you know you’re in this kind of collaboration—when you are consciously using the system as a thinking companion rather than a thinking replacement—you keep three things active:
Agency – You remember the idea is yours. You’re not asking, “What should I think?” You’re asking, “Help me say what I already think.”
Evaluation – You judge the output. You push back, correct, discard, refine.
Boundaries – You decide where the tool is welcome and where it is not.
That’s intentional use. Quiet, normal, not inherently dramatic.
The danger is unexamined use: when the collaboration is happening, but no one thinks of it as collaboration at all.
Then it’s easy to:
Cargo-cult the output (“It sounds confident, so it must be right.”)
Lose track of where ideas came from.
Accept patterns you don’t understand, just because they arrived pre-structured.
The relevant question isn’t “AI or no AI.”
It’s, “Are you awake to the relationship you’re already in?”
VI. When the Silence Hurts
This is where an asymmetry of expertise shows up—not as a side note, but as a practical risk.
For people with deep expertise, the silence around AI at work is mostly harmless. A seasoned engineer, writer, or analyst has a strong internal model of what “good” looks like. When they use these tools, they’re constantly running a comparison in the background: Does this match what I already know?
The tool accelerates articulation. It doesn’t define correctness.
For people without that scaffolding, the experience is different.
They see their peers quietly using AI with confidence. The norm is established: open the tab, paste the prompt, accept the answer. But if you don’t yet have the mental models to evaluate the output, the silence around how others are using the tool can feel like pressure to trust it blindly.
Suddenly:
The same quiet normalization that feels safe for one group can be dangerous for another.
The same invisible collaboration that helps an expert serialize can cause a novice to override their own judgment.
The asymmetry isn’t about who’s smart and who isn’t. It’s about who has enough context to stay intentional when the tool is invisible.
Which means the silence needs a counterpart: explicit norms, shared language, and a way to say, “Yes, we all use this—but here’s how we stay in charge of it.”
VII. We’re Past the Threshold
Regardless, the adoption curve has already bent.
The numbers are enormous: hundreds of millions of users, across consumer apps, workplace integrations, and dedicated tools. This tech isn’t sitting off to the side as a special experiment. It’s in:
note-taking apps,
email clients,
office suites,
coding environments,
customer support platforms,
creative tools.
We’re not at the “should we use this?” stage anymore.
We’re at the “this is already plumbing” stage.
The revealed preference is unambiguous:
Individuals keep coming back.
Companies keep shipping features with it.
Infrastructure keeps being built around it.
So the relevant question now is not, “Should humans collaborate with AI?”
The relevant question is, “Given that we already are, how do we do it with integrity and awareness?”
VIII. The Printing-Press Problem, Compressed
We’ve seen something like this before.
The printing press did not make humans wiser. It made text cheaper.
What followed was a long, messy period where:
Access to information exploded.
The ability to produce text spread more widely than the ability to evaluate it.
Societies had to invent new literacies: how to read critically, how to cite, how to distinguish pamphlet from scripture from scholarship.
AI is doing the same thing to expression and analysis that printing did to the written word.
It doesn’t automatically make anyone more insightful. It just lets more thoughts—good, bad, confused, brilliant—reach the page faster and at scale.
And just like with print, we’re being forced to build new literacies on the fly:
How to interrogate generated text.
How to track provenance.
How to integrate these tools without dissolving our own judgment.
We don’t get to pause adoption while we figure this out. The presses are already running.
We’re learning to read as the pages come off the machine.
IX. A Responsibility Framework for This Kind of Collaboration
If this tool-mediated way of thinking is now part of everyday life, what does responsible practice look like?
Three anchors are a decent start:
Transparency

Don’t mystify the process. You don’t have to list every prompt you used, but you also don’t need to pretend your work emerged from a vacuum. Normalize sentences like, “I drafted this with help from a model and then edited heavily.”

Evaluation

Treat every AI output as proposed text, not revealed truth.
- Check facts.
- Inspect the logic.
- Ask, “Does this actually match what I think or know?”

Intention

Be specific about why you’re using it:
- To explore alternatives?
- To reduce the friction of writing?
- To scaffold a structure you’ll then fill in?
Viewed through this lens, the accessibility angle gets clearer, not murkier. Helping someone express themselves more clearly—because their brain and the page don’t naturally align—is not replacing them. It’s amplifying them.
Used this way, AI isn’t a shortcut around humanity. It’s a ramp into being understood.
X. The Story We’re Already Writing
So here’s the honest state of things:
You’re probably already doing this.
Your coworkers are already doing this.
Your tools are already doing this on your behalf.
The silence you’re noticing is not a taboo. It’s the sound of something becoming normal.
The work now is not to decide whether this collaboration is acceptable. It’s to become conscious of it.
To be able to say, without defensiveness:
“Yes, I think with this system. I am still the one thinking.
I use it to serialize, to explore, to refine. I stay in charge.”
The next time you’re on a call and that extra tab opens, or you catch yourself pasting a half-formed idea into a prompt, pause for half a second.
Name what you’re doing in your own head.
You’re not cheating. You’re not outsourcing your brain. You’re extending it into the world with help from a tool that translates thought into shareable form.
That intentional, tool-mediated extension of your mind into the world—that’s what I’m calling exocogence.
We’re already there. The relationship exists.
The only thing left on the table is how deliberate you’re willing to be about it.