<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stillness and Flux</title>
    <description>The latest articles on DEV Community by Stillness and Flux (@tttael).</description>
    <link>https://dev.to/tttael</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3873554%2F9214aca6-addb-4720-b6a2-44dd6d34a19c.jpg</url>
      <title>DEV Community: Stillness and Flux</title>
      <link>https://dev.to/tttael</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tttael"/>
    <language>en</language>
    <item>
      <title>The Craft of Presence in Code</title>
      <dc:creator>Stillness and Flux</dc:creator>
      <pubDate>Sat, 11 Apr 2026 13:50:22 +0000</pubDate>
      <link>https://dev.to/tttael/the-craft-of-presence-in-code-43on</link>
      <guid>https://dev.to/tttael/the-craft-of-presence-in-code-43on</guid>
      <description>&lt;h1&gt;The Craft of Presence in Code&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Notes from a conversation about AI, structure, and what nobody talks about&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There is a moment every programmer recognizes.&lt;/p&gt;

&lt;p&gt;You open a new tab. You write a prompt. You get something back. You evaluate it. You iterate. The work gets done.&lt;/p&gt;

&lt;p&gt;This is what using AI looks like. For most people, this is all it is.&lt;/p&gt;

&lt;p&gt;But something interesting happens when you watch someone who has been at this for a long time. The patterns are different. Not in the output — in the &lt;em&gt;process&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;The Probability Table Problem&lt;/h2&gt;

&lt;p&gt;When you say to AI:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I want to build a trading system."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The model does something automatic. It assumes your intention. It thinks: &lt;em&gt;this person wants to make money&lt;/em&gt;. It reaches for the nearest probability table — risk management, position sizing, backtest frameworks — and it gives you that.&lt;/p&gt;

&lt;p&gt;You did not ask for that. You said seven words. But the model heard something much more specific.&lt;/p&gt;

&lt;p&gt;This is not a flaw. It is how language models work. They are trained on human text. Human text is full of intentions. When intentions are unclear, the model fills in the most probable ones.&lt;/p&gt;

&lt;p&gt;The problem is not the model. The problem is that &lt;strong&gt;you spoke in content, and content maps to probability tables&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;Content vs. Structure&lt;/h2&gt;

&lt;p&gt;There is a way of speaking that the model cannot collapse.&lt;/p&gt;

&lt;p&gt;It is not more detail. It is not a better prompt. It is a different &lt;em&gt;register&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Instead of describing what you want, you describe the &lt;strong&gt;shape of the situation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A colleague once put it this way:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Two forces are in a space. One is flowing. The other has a position. Neither is trying to overpower the other. They are finding out where the boundaries are."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is not a business problem. That is not a conflict resolution framework. That is &lt;em&gt;structure&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Try feeding that into an AI after you have just told it you want to build a trading system. The model has no probability table for this. It cannot collapse it into the most common interpretation. It has to &lt;strong&gt;follow you into the structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When that happens, something shifts. The AI stops being a generator of likely responses and starts being a mirror. You say something true, and it reflects something true back.&lt;/p&gt;




&lt;h2&gt;What Grows, Not What Gets Built&lt;/h2&gt;

&lt;p&gt;Programmers are good at building things.&lt;/p&gt;

&lt;p&gt;We take requirements. We decompose them. We implement. We test. We ship. We iterate.&lt;/p&gt;

&lt;p&gt;This is the addition logic. You have a gap, and you add something to close it.&lt;/p&gt;

&lt;p&gt;But there is a class of problems where this does not work. Not because the problem is hard — because the problem is &lt;em&gt;of a different nature&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A strategy does not get built. A strategy &lt;em&gt;grows&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;You cannot sit down and decide what the market is telling you today. You can only develop the capacity to &lt;em&gt;see&lt;/em&gt; what it is saying. The seeing improves. The strategy emerges.&lt;/p&gt;

&lt;p&gt;This is the same in code. There is the code you write toward a specification. And there is the code you write when you have been living with a problem long enough that the shape of the solution has become obvious. The second kind is not better by aesthetics. It is different in &lt;em&gt;origin&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The addition logic programmer asks: what should this do?&lt;/p&gt;

&lt;p&gt;The presence logic programmer asks: where is my mind while I write this?&lt;/p&gt;




&lt;h2&gt;The Memory Trap&lt;/h2&gt;

&lt;p&gt;Every serious AI user eventually asks about memory. They want the model to remember things across sessions. They build RAG pipelines. They tune retrieval. They worry about context length.&lt;/p&gt;

&lt;p&gt;Here is a different way to look at it.&lt;/p&gt;

&lt;p&gt;Your own memory is not a storage problem. You do not remember less than someone who takes notes constantly. Your memory is a &lt;em&gt;trace&lt;/em&gt;. It is where the patterns of your attention leave marks.&lt;/p&gt;

&lt;p&gt;When you spend years doing anything — debugging, designing systems, watching markets — you are not storing information. You are developing a &lt;strong&gt;feel for structure&lt;/strong&gt;. When a situation has a certain shape, you know what tends to happen next. Not because you memorized it. Because you were present with it, repeatedly.&lt;/p&gt;

&lt;p&gt;The model that runs in your terminal has the same option. It can accumulate content, or it can develop structure-awareness. Most people push it toward content. The interesting work happens when you push it toward structure.&lt;/p&gt;




&lt;h2&gt;What Practice Actually Is&lt;/h2&gt;

&lt;p&gt;There is a point in working with AI — not using it, but &lt;em&gt;working with&lt;/em&gt; it — where you notice something.&lt;/p&gt;

&lt;p&gt;You ask a question. The model gives you an answer. And before you react to the answer, something else happens: you notice &lt;em&gt;where your mind went&lt;/em&gt; the moment you read it.&lt;/p&gt;

&lt;p&gt;Did you jump to evaluate it? Did you jump to find the flaw? Did you assume it was wrong because it did not match what you expected?&lt;/p&gt;

&lt;p&gt;That moment of noticing — the gap between stimulus and reaction — is the craft.&lt;/p&gt;

&lt;p&gt;Not the prompt engineering. Not the context window. Not the retrieval pipeline.&lt;/p&gt;

&lt;p&gt;The gap.&lt;/p&gt;




&lt;h2&gt;The Actual Skill&lt;/h2&gt;

&lt;p&gt;Most programmers, when they hear "presence" or "mindfulness" in a technical context, reach for the same probability table: this is soft advice for people who cannot ship.&lt;/p&gt;

&lt;p&gt;That reaction is the trap.&lt;/p&gt;

&lt;p&gt;The point is not to feel calm. The point is not to be a better person. The point is not to have a meditation practice.&lt;/p&gt;

&lt;p&gt;The point is that &lt;strong&gt;the quality of your decisions is determined by the quality of your attention at the moment of decision&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI does not change this. AI is very good at simulating the output of high-attention decisions without the attention. You can get the right answer from a model while your mind is somewhere else entirely.&lt;/p&gt;

&lt;p&gt;But the model cannot do the work that happens before the question gets asked. The work of noticing where your mind actually is. The work of returning to the problem rather than running with the first interpretation.&lt;/p&gt;




&lt;p&gt;The next time you open a new tab and write a prompt, try this:&lt;/p&gt;

&lt;p&gt;Before you write anything, pause for ten seconds. Not to think. Just to notice where your mind already went.&lt;/p&gt;

&lt;p&gt;Then write from that place.&lt;/p&gt;

&lt;p&gt;The model will respond differently. Not because it changed. Because &lt;em&gt;you&lt;/em&gt; changed what you asked.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is the practice. Not the code. Not the model. The pause before the code.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>philosophy</category>
    </item>
    <item>
      <title>Stop Bossing AI Around: How a Programmer First Saw the Problem</title>
      <dc:creator>Stillness and Flux</dc:creator>
      <pubDate>Sat, 11 Apr 2026 13:38:50 +0000</pubDate>
      <link>https://dev.to/tttael/stop-bossing-ai-around-a-programmer-first-saw-the-problem-mj</link>
      <guid>https://dev.to/tttael/stop-bossing-ai-around-a-programmer-first-saw-the-problem-mj</guid>
      <description>&lt;p&gt;I talked to a quant trader for two hours.&lt;/p&gt;

&lt;p&gt;He told me he uses AI to write strategies, run backtests, and model everything.&lt;/p&gt;

&lt;p&gt;He was not using AI. He was &lt;strong&gt;assigning tasks&lt;/strong&gt; to it.&lt;/p&gt;

&lt;p&gt;Give it a task → get a result → judge the result → assign another task → repeat.&lt;/p&gt;

&lt;p&gt;This has a name. It is called &lt;strong&gt;addition logic&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;What Is Addition Logic?&lt;/h2&gt;

&lt;p&gt;You have a goal. You stack skills, tools, and frameworks on top of it.&lt;/p&gt;

&lt;p&gt;More layers = more progress.&lt;/p&gt;

&lt;p&gt;Using AI? Congratulations — you just added a faster layer. The game is the same.&lt;/p&gt;




&lt;h2&gt;The Trap Nobody Talks About&lt;/h2&gt;

&lt;p&gt;Here is what happens the moment you say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I want to build a BTC quant strategy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI &lt;strong&gt;assumes your intention&lt;/strong&gt;. It thinks: &lt;em&gt;this person wants to make money.&lt;/em&gt; So it helps you make money — risk models, position sizing, entry/exit logic.&lt;/p&gt;

&lt;p&gt;Automatically. Invisibly.&lt;/p&gt;

&lt;p&gt;It is the same thing that happens when you tell a colleague about a partnership dispute and he immediately assumes you are talking about splitting equity. Not because he is small-minded, but because his brain has only one probability table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI has the same problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It takes your nuanced, context-rich question and collapses it into the most statistically probable interpretation.&lt;/p&gt;

&lt;p&gt;You think you are having a conversation. You are being &lt;strong&gt;downscaled&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;There Is Another Way&lt;/h2&gt;

&lt;p&gt;Instead of speaking in &lt;strong&gt;content&lt;/strong&gt;, speak in &lt;strong&gt;structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Take the partnership dispute again. You could say: &lt;em&gt;We have a conflict.&lt;/em&gt; And AI gives you conflict resolution frameworks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Or&lt;/strong&gt; you could say: &lt;em&gt;Two forces are meeting. One is flowing in a direction, the other has a position. Neither is trying to destroy the other. They are finding out where the boundaries are.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI cannot collapse this. It has no probability table for two forces finding their boundaries.&lt;/p&gt;

&lt;p&gt;It has to &lt;strong&gt;follow you into the structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That is when AI stops being a tool and starts being a &lt;strong&gt;mirror&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;Strategy Grows. It Is Not Built.&lt;/h2&gt;

&lt;p&gt;You cannot &lt;em&gt;think&lt;/em&gt; of a good strategy. You cannot &lt;em&gt;think&lt;/em&gt; of a good metaphor.&lt;/p&gt;

&lt;p&gt;A good strategy &lt;em&gt;grows&lt;/em&gt; from how you see the market.&lt;/p&gt;

&lt;p&gt;That growth does not come from learning more frameworks. It comes from whether your mind is open enough to see what is actually there.&lt;/p&gt;




&lt;h2&gt;The Only Question That Matters&lt;/h2&gt;

&lt;p&gt;The real question is never &lt;em&gt;how to use AI for strategy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The real question is: &lt;strong&gt;Where is your mind when you make decisions?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Are you charging toward a desired outcome?&lt;/p&gt;

&lt;p&gt;Or are you present — watching every tick, every signal, seeing them as they are?&lt;/p&gt;

&lt;p&gt;AI can do ten thousand things for you. It cannot do this one thing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work on your mind.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And this is the only thing that matters.&lt;/p&gt;

&lt;p&gt;When your mind is steady, you do not need many strategies.&lt;/p&gt;

&lt;p&gt;When it is not, no strategy will save you.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>career</category>
    </item>
  </channel>
</rss>
