<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roger Wang</title>
    <description>The latest articles on DEV Community by Roger Wang (@pigslybear).</description>
    <link>https://dev.to/pigslybear</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864727%2F756a48b0-5acf-4104-8067-0533c32b1289.png</url>
      <title>DEV Community: Roger Wang</title>
      <link>https://dev.to/pigslybear</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pigslybear"/>
    <language>en</language>
    <item>
      <title>Stop "Jokering" My Project: Why AI is Too Good at Ruining Your mission.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Tue, 21 Apr 2026 05:06:59 +0000</pubDate>
      <link>https://dev.to/pigslybear/stop-jokering-my-project-why-ai-is-too-good-at-ruining-your-mission-2c5n</link>
      <guid>https://dev.to/pigslybear/stop-jokering-my-project-why-ai-is-too-good-at-ruining-your-mission-2c5n</guid>
      <description>&lt;h2&gt;
  
  
  Stop "Jokering" My Project: Why AI is Too Good at Ruining Your mission.
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why is a deck of cards more honest than a project?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you were at the card table last night, you might have witnessed a very familiar scene.&lt;br&gt;
Someone says: "Let’s build a simple single-player poker game. It’s easy."&lt;br&gt;
Another person immediately chimes in: "Shuffle, deal, compare ranks—AI can write that in a second."&lt;br&gt;
A third is even more optimistic: "Let’s throw in a Joker for variety; it’ll look cooler."&lt;/p&gt;

&lt;p&gt;It sounds perfectly reasonable.&lt;br&gt;
In this day and age, almost every project starts this way—&lt;br&gt;
The inspiration is fast, the AI is fast, and the output is fast.&lt;/p&gt;

&lt;p&gt;But the real problem usually doesn't start with "Can we build it?"&lt;br&gt;
It starts from a much smaller place:&lt;br&gt;
&lt;strong&gt;Is this deck of cards still the same deck you started with?&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  In the beginning, everyone thought the problem was small.
&lt;/h3&gt;

&lt;p&gt;The initial requirements were quite simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a poker dealing process.&lt;/li&gt;
&lt;li&gt;Use a standard 52-card deck.&lt;/li&gt;
&lt;li&gt;Deal to 4 players.&lt;/li&gt;
&lt;li&gt;5 cards per person.&lt;/li&gt;
&lt;li&gt;Just demonstrate the flow; don't worry about full rules yet.&lt;/li&gt;
&lt;/ul&gt;
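&lt;p&gt;That spec is small enough to sketch end to end. The snippet below is a minimal illustration of the flow only; the names (&lt;code&gt;build_deck&lt;/code&gt;, &lt;code&gt;deal&lt;/code&gt;) are invented for this example and are not AxiomFlow output:&lt;/p&gt;

```python
import random

RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
SUITS = ["S", "H", "D", "C"]

def build_deck():
    # Standard 52-card deck: this is the CONTRACT boundary. No Jokers.
    return [(rank, suit) for suit in SUITS for rank in RANKS]

def deal(deck, players=4, cards_each=5, rng=random):
    # Shuffle and distribute: the entire SPEC, nothing more.
    shuffled = list(deck)
    rng.shuffle(shuffled)
    return [shuffled[i * cards_each:(i + 1) * cards_each] for i in range(players)]

hands = deal(build_deck())
```

&lt;p&gt;Twenty cards go out, thirty-two stay in the deck, and the whole demo fits on one screen. That is the point: the boundary is trivially checkable—until someone adds a 53rd card.&lt;/p&gt;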

&lt;p&gt;At this point, everything is clear.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;REQ&lt;/strong&gt; is clear: Validate the dealing process, not build a full casino system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SPEC&lt;/strong&gt; is clear: Shuffle, distribute, display results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ADR&lt;/strong&gt; is clear: Build the Minimum Provable Case (MPC) first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CONTRACT&lt;/strong&gt; is even clearer: Standard deck, fixed boundaries, no extra extensions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this scenario, having AI write the code for you feels great. It acts like a high-speed engineering assistant, rather than an accomplice who secretly changes the rules without anyone noticing.&lt;/p&gt;




&lt;h3&gt;
  
  
  The problem isn't that AI writes poorly; it's that humans find it too easy to say "one little change won't hurt."
&lt;/h3&gt;

&lt;p&gt;The real turning point usually comes from a single sentence:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Actually, let’s add a Joker. It’ll make the gameplay more interesting."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The danger of this sentence isn't the Joker itself. It’s that &lt;strong&gt;it sounds too reasonable.&lt;/strong&gt;&lt;br&gt;
Because once an AI is powerful enough, it won't just add a Joker for you; it will "thoughtfully":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update your data structures.&lt;/li&gt;
&lt;li&gt;Modify the win-condition logic.&lt;/li&gt;
&lt;li&gt;Fix the display logic.&lt;/li&gt;
&lt;li&gt;Adjust your test cases along the way.&lt;/li&gt;
&lt;li&gt;Rewrite the entire spec to make it look like it was always meant to be this way.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, what you see isn't an error. You see something that looks incredibly complete—perhaps even more polished than the original.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That is the true danger.&lt;/strong&gt;&lt;br&gt;
Projects rarely die because "things couldn't be built." More often, they die because &lt;strong&gt;every step was reasonable, but the sum total is no longer the original thing.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  AxiomFlow: The Sober Observer at the Table
&lt;/h3&gt;

&lt;p&gt;The interesting part of &lt;strong&gt;AxiomFlow&lt;/strong&gt; isn't helping you deal the cards. It’s asking first:&lt;br&gt;
&lt;strong&gt;"Can this hand even be dealt?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the essence of &lt;strong&gt;PDR (Project Design Review)&lt;/strong&gt;.&lt;br&gt;
In the current version of AxiomFlow, PDR is no longer just a standard checklist. It acts like the one person at the card table who isn't gambling and stays sober: they don't play the cards; they just ask, right before you start—&lt;strong&gt;"Is the game you’re playing now the same one that was approved?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When &lt;strong&gt;SPEC-001&lt;/strong&gt; faithfully implements the original flow, PDR sees &lt;strong&gt;Alignment&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqqselul39b9a8kqyy53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqqselul39b9a8kqyy53.png" alt=" " width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When &lt;strong&gt;SPEC-002&lt;/strong&gt; secretly changes the scoring rules but the &lt;strong&gt;REQ&lt;/strong&gt; hasn't been updated, PDR sees a &lt;strong&gt;Conflict&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5slm6in2gj1veoorijhs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5slm6in2gj1veoorijhs.png" alt=" " width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When &lt;strong&gt;SPEC-003&lt;/strong&gt; adds a Joker and breaks the original deck boundaries, PDR doesn't see a "cool new idea." It sees a definitive: &lt;strong&gt;Conflict Detected.&lt;/strong&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvil6uvvtunffmgl7u6sp.png" alt=" " width="800" height="519"&gt;
&lt;/li&gt;
&lt;/ul&gt;
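&lt;p&gt;The post doesn't show AxiomFlow's internals, but the PDR idea can be sketched as a plain consistency check between a SPEC and the approved REQ. Everything here (the field names, the &lt;code&gt;pdr_review&lt;/code&gt; function) is hypothetical, invented to illustrate the concept:&lt;/p&gt;

```python
# Hypothetical sketch of a PDR-style check. Invented structure, not AxiomFlow's API.
REQ = {"id": "REQ-001", "deck_size": 52, "jokers_allowed": False}

SPECS = [
    {"id": "SPEC-001", "deck_size": 52, "jokers": 0},  # faithful to the REQ
    {"id": "SPEC-003", "deck_size": 54, "jokers": 2},  # the "one little change"
]

def pdr_review(req, spec):
    """Compare a SPEC against the approved REQ boundaries before any code is written."""
    conflicts = []
    if spec["deck_size"] != req["deck_size"]:
        conflicts.append("deck_size changed from %d to %d"
                         % (req["deck_size"], spec["deck_size"]))
    if spec["jokers"] > 0 and not req["jokers_allowed"]:
        conflicts.append("Jokers added but never approved in the REQ")
    return "Alignment" if not conflicts else "Conflict Detected: " + "; ".join(conflicts)

for spec in SPECS:
    print(spec["id"], "->", pdr_review(REQ, spec))
```

&lt;p&gt;The check is deliberately dumb. It doesn't judge whether the Joker is a good idea; it only notices that the game on the table no longer matches the game that was approved.&lt;/p&gt;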

&lt;p&gt;In that moment, the entire story changes. While most tools try to help you move faster, AxiomFlow aims to help you &lt;strong&gt;stop earlier.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  It’s not about stifling creativity; it’s about protecting intent.
&lt;/h3&gt;

&lt;p&gt;When people hear "governance," their instinct is to think of something that slows them down. But if you've actually run a project, you know that the biggest waste of time isn't "spending 10 minutes confirming boundaries." It’s:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Realizing after three days of work that the direction has drifted.&lt;/li&gt;
&lt;li&gt;Having a beautiful document that solves the wrong problem.&lt;/li&gt;
&lt;li&gt;New members taking over who can see the &lt;em&gt;what&lt;/em&gt; (result) but not the &lt;em&gt;why&lt;/em&gt; (reason).&lt;/li&gt;
&lt;li&gt;AI producing a new "reasonable" answer every time, with no one able to prove which version is correct.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what AxiomFlow repeatedly emphasizes in its README: &lt;strong&gt;The problem isn't that AI produces too little; it's that when it produces too fast, the team loses the ability to decouple "Problem, Method, Reason, and Boundary."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s why it separates these layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;REQ:&lt;/strong&gt; What are we actually trying to solve?&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;SPEC:&lt;/strong&gt; How do we plan to do it?&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;ADR:&lt;/strong&gt; Why did we choose this path?&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;CONTRACT:&lt;/strong&gt; Which boundaries must not be crossed?&lt;/li&gt;
&lt;/ol&gt;
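&lt;p&gt;One way to picture the four layers is as linked records, where a SPEC must name the REQ it implements and the CONTRACTs it honors. The field names below are illustrative only, not AxiomFlow's actual schema:&lt;/p&gt;

```python
# Illustrative only: the four layers as linked records, not AxiomFlow's real schema.
from dataclasses import dataclass, field

@dataclass
class Req:            # What are we actually trying to solve?
    id: str
    problem: str

@dataclass
class Contract:       # Which boundaries must not be crossed?
    id: str
    boundaries: dict

@dataclass
class Adr:            # Why did we choose this path?
    id: str
    decision: str

@dataclass
class Spec:           # How do we plan to do it?
    id: str
    implements: str                             # link back to a REQ id
    honors: list = field(default_factory=list)  # CONTRACT ids it must respect

req = Req("REQ-001", "Validate the dealing flow, not a full casino system")
contract = Contract("CONTRACT-001", {"deck_size": 52, "jokers_allowed": False})
adr = Adr("ADR-001", "Build the Minimum Provable Case first")
spec = Spec("SPEC-001", implements=req.id, honors=[contract.id])
```

&lt;p&gt;The explicit links are what make drift detectable: a SPEC that adds a Joker can no longer honor CONTRACT-001 without someone noticing.&lt;/p&gt;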

&lt;p&gt;Once these four layers get blurred, the project becomes like a deck of cards that someone has secretly swapped. You think you’re still playing the same game, but the rules changed long ago.&lt;/p&gt;




&lt;h3&gt;
  
  
  The poker story isn't really about cards.
&lt;/h3&gt;

&lt;p&gt;It’s about those common, yet rarely addressed moments in a project:&lt;br&gt;
&lt;strong&gt;"We didn't mean to drift. We just thought 'this little change is fine' every single time."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI amplifies this "little bit" of drift with terrifying efficiency.&lt;br&gt;
If you are just looking for a tool that writes code better, AxiomFlow might not be your favorite. But if you’ve begun to realize that &lt;strong&gt;AI doesn't need more freedom—it needs clearer governance coordinates&lt;/strong&gt;—then this poker story is more than just a toy example.&lt;/p&gt;

&lt;p&gt;It asks every project lead:&lt;br&gt;
As a system grows more powerful, do you have the ability to know—is it doing what you &lt;em&gt;asked&lt;/em&gt; it to do, or is it just making the deviation look more beautiful?&lt;/p&gt;

&lt;p&gt;Because a truly mature team isn't the one bold enough to deal every hand.&lt;br&gt;
&lt;strong&gt;They are the ones who know which hand requires a pause.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;If the boundaries of a simple 52-card deck are worth defining clearly, then a software project involving AI certainly shouldn't run on "tacit understanding."&lt;/p&gt;

&lt;p&gt;AxiomFlow doesn't aim to replace human judgment. It aims to take the judgments that only exist in your head and turn them into a system that can be &lt;strong&gt;seen, inspected, and, when necessary, intercepted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s not asking: "Can AI help you build this?"&lt;br&gt;
It’s asking a much more important question: &lt;strong&gt;"Before it’s built, can you be sure this is actually what you wanted?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pigsly.github.io/AxiomFlow/" rel="noopener noreferrer"&gt;https://pigsly.github.io/AxiomFlow/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mvt8l5o88h14latdsg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mvt8l5o88h14latdsg4.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>githubcopilot</category>
      <category>openai</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The more AI does, the clearer the system becomes—instead of more chaotic.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Thu, 16 Apr 2026 14:24:34 +0000</pubDate>
      <link>https://dev.to/pigslybear/the-more-ai-does-the-clearer-the-system-becomes-instead-of-more-chaotic-5bfg</link>
      <guid>https://dev.to/pigslybear/the-more-ai-does-the-clearer-the-system-becomes-instead-of-more-chaotic-5bfg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F793hmimj1ds1df0zaqlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F793hmimj1ds1df0zaqlv.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;br&gt;
Have you noticed that many AI tools look incredibly powerful, yet once you try to use them in a real project, they still feel hard to trust?&lt;/p&gt;

&lt;p&gt;Today, they can help you generate a solution draft. Tomorrow, they can write a document for you.&lt;br&gt;
But the real problem is not whether they &lt;em&gt;can&lt;/em&gt; do it. It is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No one can explain why the output turned out this way.&lt;/li&gt;
&lt;li&gt;No one can point to what it was actually based on.&lt;/li&gt;
&lt;li&gt;Once requirements change or team members rotate, the whole process starts to drift.&lt;/li&gt;
&lt;li&gt;The project may move fast, but not necessarily in the right direction, while risk keeps piling up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly the problem many teams are starting to face:&lt;br&gt;
AI has accelerated output, but governance has not kept up.&lt;/p&gt;

&lt;p&gt;That gap is what &lt;strong&gt;AxiomFlow&lt;/strong&gt; is designed to solve.&lt;/p&gt;

&lt;p&gt;It is not another attempt to repackage AI as a “better chatbot.”&lt;br&gt;
Instead, it puts AI back into the place it actually needs to occupy inside teams, workflows, and governance systems.&lt;/p&gt;

&lt;p&gt;According to the project description, AxiomFlow is positioned as a governance model for AI-assisted software delivery. Its core goal is to ensure that even under AI-accelerated execution, delivery remains &lt;strong&gt;aligned, bounded, and traceable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That may sound abstract, but it is actually extremely practical.&lt;/p&gt;

&lt;p&gt;Because in real projects, what people fear most is never that AI is not smart enough.&lt;br&gt;
What they fear is realizing too late that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It solved the wrong problem&lt;/li&gt;
&lt;li&gt;It crossed boundaries it should never have crossed&lt;/li&gt;
&lt;li&gt;It quietly turned a one-time judgment into a long-term structure&lt;/li&gt;
&lt;li&gt;It left behind outputs, but not the reasoning behind the decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes AxiomFlow powerful is that it does not only talk about &lt;strong&gt;how to do things&lt;/strong&gt;.&lt;br&gt;
It starts by separating different layers of project thinking.&lt;/p&gt;

&lt;p&gt;It uses document roles to structure reasoning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;REQ&lt;/strong&gt; defines what problem should be solved&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SPEC&lt;/strong&gt; explains how it will be done&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ADR&lt;/strong&gt; explains why this architectural direction was chosen&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CONTRACT&lt;/strong&gt; clearly defines which boundaries must not be crossed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The value of this design is significant.&lt;/p&gt;

&lt;p&gt;Because once a team mixes together &lt;strong&gt;the problem, the implementation, the rationale, and the boundaries&lt;/strong&gt;, AI will only amplify the confusion.&lt;br&gt;
But if these four layers are clearly separated, AI has a chance to become a stable execution amplifier rather than a high-speed source of chaos.&lt;/p&gt;

&lt;p&gt;From a product perspective, I would say the real value of AxiomFlow is not just the method itself.&lt;br&gt;
It is that it addresses a market gap that very few people are tackling directly:&lt;/p&gt;

&lt;p&gt;Most AI products are focused on generating faster.&lt;br&gt;
AxiomFlow is focused on what happens &lt;strong&gt;after generation&lt;/strong&gt;—how a team can still stay in control of the whole system.&lt;/p&gt;

&lt;p&gt;That is also what makes it fundamentally different from ordinary AI tools.&lt;/p&gt;

&lt;p&gt;Most tools are strong at &lt;strong&gt;giving immediate answers&lt;/strong&gt;.&lt;br&gt;
AxiomFlow is strong at &lt;strong&gt;making those answers usable inside a long-term system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most tools give you outputs.&lt;br&gt;
AxiomFlow cares more about whether those outputs can be explained, verified, and carried forward.&lt;/p&gt;

&lt;p&gt;There is a line in the README that captures this very well:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Turn AI agents into governable builders.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is almost the entire product thesis in one sentence.&lt;/p&gt;

&lt;p&gt;It does not treat AI as a black-box agent that operates freely.&lt;br&gt;
It turns AI into something that can actually be governed.&lt;/p&gt;

&lt;p&gt;What does that mean?&lt;/p&gt;

&lt;p&gt;It means the AI systems that truly succeed in the future may not be the ones that feel the most human or speak the most fluently.&lt;br&gt;
They may be the ones that can be trusted, handed over, reviewed, and evolved inside real organizations.&lt;/p&gt;

&lt;p&gt;If you are only looking for a tool to help you write a few more paragraphs, AxiomFlow may not be for you.&lt;/p&gt;

&lt;p&gt;But if you are already thinking about questions like these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How does AI enter real software delivery workflows?&lt;/li&gt;
&lt;li&gt;How can teams avoid losing control as AI accelerates execution?&lt;/li&gt;
&lt;li&gt;How can documents, decisions, architecture, and boundaries form a positive feedback loop?&lt;/li&gt;
&lt;li&gt;How does a project move from simply “working” to being governable and evolvable?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then AxiomFlow is a direction worth paying attention to.&lt;/p&gt;

&lt;p&gt;In one sentence:&lt;/p&gt;

&lt;p&gt;It is not about helping AI do more.&lt;br&gt;
It is about helping teams build a new capability:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The more AI does, the clearer the system becomes—instead of more chaotic.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Project link:&lt;br&gt;
&lt;a href="https://github.com/pigsly/AxiomFlow" rel="noopener noreferrer"&gt;https://github.com/pigsly/AxiomFlow&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>softwareengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Two powerful tools, but missing a layer in between.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Sat, 11 Apr 2026 04:44:34 +0000</pubDate>
      <link>https://dev.to/pigslybear/two-powerful-tools-but-missing-a-layer-in-between-ap7</link>
      <guid>https://dev.to/pigslybear/two-powerful-tools-but-missing-a-layer-in-between-ap7</guid>
      <description>&lt;p&gt;We already have great tools.&lt;/p&gt;

&lt;p&gt;OpenAI → helps you think&lt;br&gt;
Logseq → helps you store&lt;/p&gt;

&lt;p&gt;Both are powerful.&lt;/p&gt;

&lt;p&gt;But together, they still leave a gap.&lt;/p&gt;

&lt;p&gt;AI generates reasoning.&lt;br&gt;
Notes capture results.&lt;/p&gt;

&lt;p&gt;👉 But the reasoning itself disappears.&lt;/p&gt;

&lt;p&gt;You get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;outputs from AI&lt;/li&gt;
&lt;li&gt;knowledge in your notes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But not the process that created them.&lt;/p&gt;

&lt;p&gt;And that process is the most valuable part.&lt;/p&gt;

&lt;p&gt;Because thinking is not the answer.&lt;/p&gt;

&lt;p&gt;Thinking is the path.&lt;/p&gt;

&lt;p&gt;So I started asking:&lt;/p&gt;

&lt;p&gt;What if reasoning itself could be stored?&lt;/p&gt;

&lt;p&gt;Not just the result —&lt;br&gt;
but the full thinking flow.&lt;/p&gt;

&lt;p&gt;That question led to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pigsly/ClawMind" rel="noopener noreferrer"&gt;https://github.com/pigsly/ClawMind&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openai</category>
      <category>logseq</category>
      <category>writing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>OpenAI gives you answers. Logseq stores your knowledge. ClawMind builds how you think.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Fri, 10 Apr 2026 23:47:28 +0000</pubDate>
      <link>https://dev.to/pigslybear/openai-gives-you-answers-logseq-stores-your-knowledge-clawmind-builds-how-you-think-47hh</link>
      <guid>https://dev.to/pigslybear/openai-gives-you-answers-logseq-stores-your-knowledge-clawmind-builds-how-you-think-47hh</guid>
      <description>&lt;p&gt;Most of us use AI like this:&lt;/p&gt;

&lt;p&gt;Open OpenAI&lt;br&gt;
Ask a question&lt;br&gt;
Get an answer&lt;/p&gt;

&lt;p&gt;And that’s it.&lt;/p&gt;

&lt;p&gt;It feels powerful.&lt;/p&gt;

&lt;p&gt;But there’s a hidden problem:&lt;/p&gt;

&lt;p&gt;👉 Your thinking doesn’t stay.&lt;/p&gt;

&lt;p&gt;A few days later,&lt;br&gt;
you face a similar problem —&lt;/p&gt;

&lt;p&gt;and start from zero again.&lt;/p&gt;

&lt;p&gt;Like you’ve never thought about it before.&lt;/p&gt;

&lt;p&gt;So we try to fix it with tools like Logseq.&lt;/p&gt;

&lt;p&gt;We write things down.&lt;br&gt;
We connect ideas.&lt;/p&gt;

&lt;p&gt;But something is still missing.&lt;/p&gt;

&lt;p&gt;AI helps you think.&lt;br&gt;
Notes help you remember.&lt;/p&gt;

&lt;p&gt;👉 But they don’t connect.&lt;/p&gt;

&lt;p&gt;You end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;answers&lt;/li&gt;
&lt;li&gt;notes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…but no thinking system.&lt;/p&gt;

&lt;p&gt;That’s the gap I wanted to solve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pigsly/ClawMind" rel="noopener noreferrer"&gt;https://github.com/pigsly/ClawMind&lt;/a&gt;&lt;/p&gt;

</description>
      <category>logseq</category>
      <category>codexcli</category>
      <category>openai</category>
    </item>
  </channel>
</rss>
