DEV Community

Stillness and Flux


The Craft of Presence in Code


Notes from a conversation about AI, structure, and what nobody talks about


There is a moment every programmer recognizes.

You open a new tab. You write a prompt. You get something back. You evaluate it. You iterate. The work gets done.

This is what using AI looks like. For most people, this is all it is.

But something interesting happens when you watch someone who has been at this for a long time. The patterns are different. Not in the output — in the process.


The Probability Table Problem

When you say to an AI:

"I want to build a trading system."

The model does something automatic. It assumes your intention. It thinks: this person wants to make money. It reaches for the nearest probability table — risk management, position sizing, backtest frameworks — and it gives you that.

You did not ask for that. You said seven words. But the model heard something much more specific.

This is not a flaw. It is how language models work. They are trained on human text. Human text is full of intentions. When intentions are unclear, the model fills in the most probable ones.

The problem is not the model. The problem is that you spoke in content, and content maps to probability tables.
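One way to picture this collapse is as an argmax over a learned distribution of intentions. Everything below is invented for illustration: the table, the weights, and the function name are a toy sketch of the failure mode, not how any real model works.

```python
# Toy sketch of prompt-to-intent collapse. All names and weights are
# invented for illustration; real models are nothing this explicit.

# Imagined distribution over intentions, conditioned on a vague prompt.
INTENT_TABLE = {
    "I want to build a trading system": {
        "profit: risk management, position sizing, backtests": 0.70,
        "learning: how markets work": 0.20,
        "exploration: the structure of the situation itself": 0.10,
    },
}

def collapse(prompt):
    """Return the single most probable intention -- the 'filling in'."""
    table = INTENT_TABLE.get(prompt)
    if table is None:
        return None
    return max(table, key=table.get)

print(collapse("I want to build a trading system"))
# -> profit: risk management, position sizing, backtests
```

The point of the toy is the shape of the operation: seven vague words go in, and the highest-probability intention comes out, whether or not it was yours.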


Content vs. Structure

There is a way of speaking that the model cannot collapse.

It is not more detail. It is not a better prompt. It is a different register.

Instead of describing what you want, you describe the shape of the situation.

A colleague once put it this way:

"Two forces are in a space. One is flowing. The other has a position. Neither is trying to overpower the other. They are finding out where the boundaries are."

That is not a business problem. That is not a conflict resolution framework. That is structure.

Try feeding that into an AI after you have just told it you want to build a trading system. The model has no probability table for this. It cannot collapse it into the most common interpretation. It has to follow you into the structure.

When that happens, something shifts. The AI stops being a generator of likely responses and starts being a mirror. You say something true, and it reflects something true back.


What Grows, Not What Gets Built

Programmers are good at building things.

We take requirements. We decompose them. We implement. We test. We ship. We iterate.

This is the addition logic. You have a gap, and you add something to close it.

But there is a class of problems where this does not work. Not because the problem is hard — because the problem is of a different nature.

A strategy does not get built. A strategy grows.

You cannot sit down and decide what the market is telling you today. You can only develop the capacity to see what it is saying. The seeing improves. The strategy emerges.

This is the same in code. There is the code you write toward a specification. And there is the code you write when you have lived with a problem long enough that the shape of the solution becomes obvious. The second kind is not better aesthetically. It is different in origin.

The addition logic programmer asks: what should this do?

The presence logic programmer asks: where is my mind while I write this?


The Memory Trap

Every serious AI user eventually asks about memory. They want the model to remember things across sessions. They build RAG pipelines. They tune retrieval. They worry about context length.

Here is a different way to look at it.

Your own memory is not a storage problem. You do not remember less than someone who takes notes constantly. Your memory is a trace. It is where the patterns of your attention leave marks.

When you spend years doing anything — debugging, designing systems, watching markets — you are not storing information. You are developing a feel for structure. When a situation has a certain shape, you know what tends to happen next. Not because you memorized it. Because you were present with it, repeatedly.

The model that runs in your terminal has the same option. It can accumulate content, or it can develop structure-awareness. Most people push it toward content. The interesting work happens when you push it toward structure.
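If you wanted to mimic that distinction in a tool of your own, here is one hedged sketch: keep a count of recurring situation shapes rather than a transcript. The `Trace` class and the shape labels are hypothetical, invented for this sketch; no real memory system is implied.

```python
from collections import Counter

# Hypothetical sketch: memory as a trace of attention, not a transcript.
# The class and labels are invented; this is an analogy, not a design.
class Trace:
    def __init__(self):
        self.shapes = Counter()  # structure-awareness: what keeps recurring
        # Deliberately NOT kept: a verbatim log of every exchange.

    def attend(self, shape):
        """Being present with a situation leaves a mark."""
        self.shapes[shape] += 1

    def feel(self):
        """What the accumulated marks suggest tends to happen next."""
        return [s for s, _ in self.shapes.most_common(3)]

t = Trace()
for shape in ["two forces probing a boundary", "flow meets position",
              "two forces probing a boundary"]:
    t.attend(shape)
print(t.feel())  # most-seen shapes first
```

The design choice is the omission: content accumulation would store every exchange; structure-awareness stores only which shapes recur, which is cheaper and closer to what a practiced eye actually retains.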


What Practice Actually Is

There is a point in working with AI — not using it, but working with it — where you notice something.

You ask a question. The model gives you an answer. And before you react to the answer, something else happens: you notice where your mind went the moment you read it.

Did you jump to evaluate it? Did you jump to find the flaw? Did you assume it was wrong because it did not match what you expected?

That moment of noticing — the gap between stimulus and reaction — is the craft.

Not the prompt engineering. Not the context window. Not the retrieval pipeline.

The gap.


The Actual Skill

Most programmers, when they hear "presence" or "mindfulness" in a technical context, reach for the same probability table: this is soft advice for people who cannot ship.

That reaction is the trap.

The point is not to feel calm. The point is not to be a better person. The point is not to have a meditation practice.

The point is that the quality of your decisions is determined by the quality of your attention at the moment of decision.

AI does not change this. AI is very good at simulating the output of high-attention decisions without the attention. You can get the right answer from a model while your mind is somewhere else entirely.

But the model cannot do the work that happens before the question gets asked. The work of noticing where your mind actually is. The work of returning to the problem rather than running with the first interpretation.


The next time you open a new tab and write a prompt, try this:

Before you write anything, pause for ten seconds. Not to think. Just to notice where your mind already went.

Then write from that place.

The model will respond differently. Not because it changed. Because you changed what you asked.


This is the practice. Not the code. Not the model. The pause before the code.
