Cognition, for me, did not start in a lab or a code editor. It started in a field, in the mud, watching an animal pretend not to care that I was watching. That is the core of how I see reasoning.
Not as an abstract algorithm, but as a living system trying to keep itself alive, with limited energy, incomplete information, and way too many constraints.
Everything else is decoration.
The ethologist bias: cognition is what survives
When you study ethology you learn very fast that behavior is not random.
It is expensive.
Every movement, every hesitation, every weird detour is paid for in calories, risk, and opportunity. Animals cannot afford "pure curiosity" in the way philosophers imagine. Exploration is always coupled with survival.
So when I say "cognition", my brain does not picture a brain. It pictures:
- A dog walking a strange zigzag before choosing where to pee
- A bird repeating the same approach to a branch until the wind is "acceptable"
- An ant line that suddenly reroutes because two individuals collided at the wrong place
From the outside it looks chaotic. From the inside, if you zoom out in time, you see something else: constraints getting negotiated. Safety vs reward. Curiosity vs energy. Short path vs safe path.
In other words: reasoning is not a sequence of thoughts.
It is a pattern of behaviors that stabilizes enough to keep the organism going.
That is my bias. A cognitive theory that does not respect the constraints of a body and an environment feels... fake to me. Decorative. Like a UI mockup of intelligence.
Trails, not trees: how I actually picture reasoning
The best metaphor I have found for cognition is not a decision tree. It is an ant trail.
You have an environment full of possible paths. You have many agents with very limited local intelligence. You have noisy signals, collisions, and small accidents that either reinforce or weaken existing paths.
Step by step, the colony as a whole "discovers" a route that is:
- Not optimal in theory
- Good enough in practice
- Robust to small disturbances
No single ant "knows" the plan. The logic is emergent.
My brain works in a similar way.
I do not sit down and compute an optimal reasoning chain. I bump into ideas. I replay conversations. I test micro-behaviors: write a snippet, sketch a diagram, explain something to a friend. Some paths get reinforced, others evaporate.
The interesting part is this:
The path is derived from the environment and the history of collisions, not from a pre-written script.
Put me in a different context, with different people and different constraints, and you do not get "the same me" with a slightly different output. You get a different network of trails.
Most "clean" reasoning models forget this. They imagine a static pipeline that you feed with inputs and you get outputs. My experience, both as an ethologist and as an engineer, is that cognition behaves more like a living traffic system that keeps adapting its own roads.
Multipotentialite as a cognitive architecture, not a personality trait
The internet loves labels, so we call it "multipotentialite".
Translated into my internal experience, it means:
I have too many specialized sub-selves that want to talk at the same time.
- The ethologist part of me cares about behavior, stress, welfare, and survival strategies
- The neuroscientist-adjacent part cares about circuits, patterns, feedback loops
- The software engineer part sees systems, interfaces, failure modes
- The system part thinks in terms of reliability, observability, and time to recover
- The artist part tries to keep everything meaningful, not just efficient
When I face a problem, these are not "skills" in a CV. They are agents that try to hijack the decision:
- "Is this humane?"
- "Will it scale?"
- "Will it break in prod?"
- "Is it aesthetically rotten?"
- "Can a tired human actually use this at 23:00 with a crying kid next to them?"
If you ask me what reasoning is, I would answer:
It is a negotiation between these internal agents, constrained by time, energy, and reality.
- On good days, they collaborate.
- On bad days, they paralyze me.
But this internal pluralism has a side effect: I cannot respect overly narrow definitions of intelligence. Any model of cognition that assumes a single "rational mind" running in isolation looks wrong to me.
I know how it feels from the inside: it is messy, multi-voiced, and full of conflict. The thing that emerges as "my decision" is often a compromise between competing utilities:
- Try not to harm people
- Try not to burn out myself
- Try not to lose technical integrity
- Try not to destroy my family's stability
That blend is my actual "architecture".
Reasoning feels like loops, not straight lines
The way reasoning is usually described:
- You receive information
- You evaluate options
- You pick the best option
- You execute
The way it actually feels in my head:
- Trigger
- Old memory wakes up and screams
- Current context pushes back
- I simulate a future conversation where this choice went wrong
- I simulate another where it went right
- I feel a physical discomfort if one path clashes with my values
- I look for evidence to calm down the discomfort
- I pick an action that satisfies enough constraints
- I keep monitoring the consequences and adjust
That is not a clean pipeline. It is recurrent.
There is a lot of backtracking and emotional veto.
From ethology, this is perfectly normal. You see it all the time:
- An animal approaches food, then retreats, then approaches again
- A dog "almost" attacks, then diverts into a displacement behavior (scratching, sniffing, etc)
- A bird tests a branch three times before committing
These are not "bugs". They are safety loops.
Evolution does not trust a single-shot decision in high-risk contexts.
My own cognition behaves similarly. The more the decision matters for my kids, my partner, or my long term future, the more loops I run. I re-check. I simulate the failure modes. I look for ways to "buy information" cheaply before I go all in.
So when someone tells me "reasoning is just chain-of-thought", I flinch.
It might be a convenient representation. But it erases the part that costs the most: the loops, the doubts, the rewrites.
The body in the loop
There is another thing ethology never lets you forget:
Behavior is not only controlled from the brain. It is constrained by the body.
If you put me on a bike after two years of not riding, something interesting happens. My "reasoning" about balance, effort, speed, risk is different when my body is tired, when my lungs are burning, when my legs remember the rhythm.
I am the same person, with the same values and knowledge, but my actual judgment changes. Not just my mood. My risk assessment, my sense of possibility, my creativity.
Cognition without a body is like a simulation without a physics engine.
You can move pieces around, but you do not feel what they cost.
In my daily life as an engineer and a father, this is brutal and simple:
- Sleep debt makes me more conservative. I choose safer paths and avoid big moves.
- Mild euphoria or professional validation makes me more explorative, sometimes too much.
- Chronic stress pushes me into "shortest path to relief", even if it is long term stupid.
So when I think about reasoning, I cannot separate it from:
- Heart rate
- Gut tension
- Micro posture changes
All of that is part of the cognitive system.
It is not "noise around the rational core". It is part of the computation.
From ethology to code: why I hate opaque outputs
Ten-ish years ago, when I started working with AI systems, I understood that:
The important part of a prediction is not the number itself. It is the story that explains how the data combined to create that number.
Not "explainability" as an afterthought, but as the main product. That shifted something in me.
Once you see that the real value is the synergy between variables, you cannot go back to "here is a number, trust me".
Then LLMs exploded.
They are insanely useful, but also extremely lazy in how they expose their own cognition:
- Long monologues
- Confident hallucinations
- No trace of which internal "trail" got reinforced and which one died
For someone with my background, this feels like watching an animal behave in a cage with the glass painted black. You see the output. You do not see the micro-choices, the loops, the discarded options.
I do not like that. It is not about distrust. It is about an itch for structure: I want to see how an answer happened, not just what the answer is. I want to see:
- Which internal "agents" contributed
- Which memories were consulted and which ignored
- Where the doubts were
- Where the environment forced a shortcut
That is one of the reasons my brain drifted toward orchestration, graphs, and trace logs. If you are going to build artificial cognition, at least make it observable.
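To make "observable" concrete, here is a rough sketch of the kind of trace record I have in mind. It is illustrative only, not OrKa's actual schema; every field name here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical trace record: one entry per agent activation, so a whole
# reasoning run can be replayed step by step instead of trusted blindly.
@dataclass
class TraceEvent:
    agent: str                     # which internal "agent" fired
    saw: str                       # what slice of context it was given
    chose: str                     # what it produced or decided
    confidence: float              # how sure it claimed to be
    memories_used: list[str] = field(default_factory=list)  # which memories were consulted
    doubts: list[str] = field(default_factory=list)         # explicit hesitations, if any
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

trace: list[TraceEvent] = []
trace.append(TraceEvent(
    agent="realist",
    saw="user question + latency budget",
    chose="answer draft v1",
    confidence=0.62,
    memories_used=["episode_2024_03_outage"],
    doubts=["latency budget may be stale"],
))
```

The point is not the exact fields. The point is that the "how" of an answer becomes a first-class artifact you can inspect, instead of a monologue you have to take on faith.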
Cognition as orchestration of many flawed agents
If I strip away the branding and the code, the core idea I keep coming back to is simple:
Intelligence is an orchestration problem.
Not "one huge model" thinking about everything, but many specialized evaluators, critics, generators, and memory processes passing partial work between each other.
This maps almost embarrassingly well to my own subjective experience:
- A "progressive" internal voice pushing for change, fairness, and long term meaning
- A "conservative" voice that worries about stability and risk
- A "realist" that only cares if something will concretely work
- A "purist" that checks against my internal ethics, even if it hurts
They are not just political metaphors. They are real clusters of constraints that show up when I think about:
- Leaving a stable job
- Publishing something controversial
- Starting or killing a project
- Saying yes or no to an opportunity
When I built multi-agent structures in OrKa with explicit roles like progressive, conservative, realist, purist debating a topic, it was partly technical, partly autobiographical. I was encoding the internal parliament that I already live with. Reasoning, for me, is not "the voice that wins".
It is the equilibrium point where these agents stop resisting enough to let action happen.
That might sound abstract, but in practice it looks like:
- I want to stop a project
- "Progressive" me says: free yourself, redirect energy to something more aligned
- "Conservative" me says: you invested years, do not destroy that capital
- "Realist" me asks: do you actually have the energy to turn this into what it deserves
- "Purist" me checks: are you being honest with yourself, or just escaping discomfort
When three out of four converge, I act. If they are split, I stay stuck.
No amount of "rational" argument fixes that instantly. Time, new data, and emotional updates are needed.
Memory as a living, decaying ecosystem
Ethology taught me that not all memories are equal.
An animal does not keep a perfect log of its life. It keeps what helps it not die.
There is short term working memory, like where food was a minute ago.
There is longer term memory, like which areas are safe over seasons.
There are emotional imprints that shape behavior for years after a single trauma.
In my own head, memory behaves like this:
- Some ideas decay naturally. If I do not act on them, they lose energy.
- Some experiences stay "hot" for months, constantly biasing my decisions.
- Some patterns become procedural: how I open a code editor, how I scan a log, how I read a room.
Reasoning is not just generating new thoughts. It is deciding which memories to re-activate, which to suppress, and which to retire.
When I design cognitive systems, I try to mirror this:
- Short term memory with decay
- Long term stored traces that can be retrieved via similarity
- Episodic logs of what actually happened during a reasoning trajectory
- Procedural memories as configs, benchmarks, and best practices
The goal is not to make it "biologically accurate". I am not building a brain.
The goal is to respect the basic asymmetry: memory is expensive, and forgetting is part of cognition, not a bug.
My own life forces that. I am a father of three, working full time, building things on the side. I simply do not have the cognitive budget to keep everything loaded in RAM. So my inner system constantly decides:
- What can I safely drop
- What must be pinned
- What can be offloaded to external tools, notes, repositories
If a model of reasoning ignores forgetting, it is not modeling me.
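Here is a minimal sketch of what "short term memory with decay" can mean in practice. It is not OrKa's memory layer, just the shape of the idea: every item loses strength over time, gets a small boost when re-activated, and is dropped once it falls below a floor.

```python
import math
import time

# Hypothetical decaying memory store: recency and re-use keep an item alive,
# neglect retires it. Forgetting is a feature, not a failure mode.
class DecayingMemory:
    def __init__(self, half_life_s: float = 3600.0, floor: float = 0.05):
        self.half_life_s = half_life_s
        self.floor = floor
        self._items: dict[str, tuple[float, float]] = {}  # key -> (strength, last_touch)

    def remember(self, key: str, strength: float = 1.0) -> None:
        self._items[key] = (strength, time.time())

    def recall(self, key: str) -> float:
        """Return current strength (0 if forgotten) and reinforce on access."""
        if key not in self._items:
            return 0.0
        strength, last = self._items[key]
        decayed = strength * math.exp(-math.log(2) * (time.time() - last) / self.half_life_s)
        if decayed < self.floor:
            del self._items[key]          # retired: not worth keeping loaded
            return 0.0
        self._items[key] = (decayed + 0.2, time.time())  # re-activation gives a small boost
        return decayed

mem = DecayingMemory(half_life_s=60.0)
mem.remember("hot_idea")
print(mem.recall("hot_idea"))  # close to 1.0 right away, fades if ignored
```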
From all this to how I design artificial reasoning
So how do these biases actually shape the systems I build?
Roughly like this:
- I do not believe in a single big-brain architecture. I prefer graphs of agents, each with narrow responsibilities and partially conflicting criteria.
- I treat the environment as first class. The same agent network should behave differently if the "world" it observes changes. Inputs are not just text, but metrics, traces, constraints.
- I want explicit traces. I log everything: which agent fired, what it saw, what it chose, how confident it was. Not as analytics, but as part of cognition. A system that cannot replay its own decision makes me nervous.
- I accept messy loops. I do not force a single forward pass. I allow rethinking, retries, backtracking, debate between agents. It costs time, but it is closer to how I feel real reasoning works.
- I encode plural values. Not everything is "accuracy" or "latency". Sometimes the right thing is slower and less profitable. So I like to have explicit "ethical" or "safety" agents whose job is to veto otherwise clever ideas.
- I design for forgetting. Memories decay. Not everything is kept. The system must tolerate partial amnesia and still function.
- I keep humans in the loop. Not in the buzzword sense, but as actual co-agents. The system should surface its doubts, tradeoffs, and internal conflicts so that a human can say: "I see why you chose that. I still disagree. Here is why."
This is my way of bringing ethology, systems thinking, and too-many-skills into a single line: treat cognition as a living, traceable, multi-agent negotiation under constraints.
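To make two of those principles tangible, messy loops and explicit veto agents, here is a deliberately small sketch. None of this is OrKa's API; the agent names, the random veto, and the retry budget are invented for illustration.

```python
import random

# Sketch of a "propose, critique, maybe veto, retry" loop: the generator is
# allowed to be wrong, the safety agent is allowed to block, and the loop
# accepts backtracking instead of forcing a single forward pass.

def generator(task: str, attempt: int) -> str:
    return f"draft {attempt} for {task}"

def safety_veto(draft: str) -> bool:
    # stand-in for an explicit "ethical"/"safety" check; here it vetoes at random
    return random.random() < 0.4

def reason(task: str, max_attempts: int = 5) -> str | None:
    for attempt in range(1, max_attempts + 1):
        draft = generator(task, attempt)
        if safety_veto(draft):
            continue          # backtrack: this trail dies, try another
        return draft          # good enough and not vetoed
    return None               # surface the failure instead of faking success

print(reason("answer the user"))
```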
Why I still care about all this
On a bad day, all of this can feel self-indulgent.
I am one person, with my personal obsessions, drawing metaphors from ants and dogs and log pipelines. But there is a simple reason I do not let it go:
We are putting more and more decisions in the hands of systems that are opaque, centralized, and optimized for metrics that have very little to do with human flourishing.
If you grew up watching animals adapt, cooperate, fail, and survive, you develop a certain respect for decentralization. There is more wisdom in many small, stupid agents interacting than in one giant oracle.
As a multipotentialite you also feel, in your own head, how dangerous it is when one single narrative dominates the others. The "career voice" that tries to silence the "parent voice". The "ambition voice" that tries to silence the "health voice".
So my personal view of cognition and reasoning is not neutral. It has a moral edge:
- Plural over monolithic
- Transparent over opaque
- Negotiated over imposed
- Embodied over purely symbolic
In my life, this shows up as how I choose projects, how I design systems, how I talk about AI. I do not want perfect answers. I want understandable processes, even if they are slower and less shiny.
If you ask me "what is cognition", I will not give you a single definition... I will show you a dog hesitating before crossing a road, a kid deciding whether to tell the truth, a developer choosing to write one more test even if nobody sees it.
Reasoning is that thin line where survival, values, and possibilities meet. All the rest is implementation details.
If you want to see how all of this turns into real code, that is exactly what I am building in OrKa-reasoning.
It is my attempt to encode trails, loops, memory decay and multi-agent negotiation into an observable reasoning engine, instead of another black box.
You can explore it on GitHub:
OrKa-reasoning repo at https://github.com/marcosomma/orka-reasoning.