<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: James Sargent</title>
    <description>The latest articles on DEV Community by James Sargent (@sargentjamesa).</description>
    <link>https://dev.to/sargentjamesa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839137%2F5f52b2b3-0759-47a7-8a5a-3c4b4fffab81.png</url>
      <title>DEV Community: James Sargent</title>
      <link>https://dev.to/sargentjamesa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sargentjamesa"/>
    <language>en</language>
    <item>
      <title>Code Is a Commodity. Judgment Is Not.</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Sun, 03 May 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/code-is-a-commodity-judgment-is-not-214a</link>
      <guid>https://dev.to/sargentjamesa/code-is-a-commodity-judgment-is-not-214a</guid>
      <description>&lt;p&gt;AI can write code.&lt;/p&gt;

&lt;p&gt;Good code. Clean code. Fast code.&lt;/p&gt;

&lt;p&gt;That doesn’t make development trivial. It shifts where the true value lives.&lt;/p&gt;

&lt;p&gt;When code was slow and expensive, writing it was the work. Decisions developed gradually. Architecture changed over time. Judgment was distributed throughout implementation.&lt;/p&gt;

&lt;p&gt;When the cost of code drops, that balance flips.&lt;/p&gt;

&lt;p&gt;The difficult part is no longer creating software. It’s deciding what should exist, how components should fit together, and which tradeoffs are acceptable. Those are judgment calls, not coding tasks.&lt;/p&gt;

&lt;p&gt;Calling code a commodity doesn’t mean it’s unimportant. Commodities still matter. They just aren’t where differentiation comes from. Two teams can produce similar code and end up with very different results based on the decisions that shaped it.&lt;/p&gt;

&lt;p&gt;This is why framing AI as a replacement for developers is flawed.&lt;/p&gt;

&lt;p&gt;Developers aren’t valuable just because they write code. Their value comes from understanding systems, constraints, and consequences. They regularly make judgment calls, often quietly, about how components should work together.&lt;/p&gt;

&lt;p&gt;AI is excellent at execution.&lt;/p&gt;

&lt;p&gt;Judgment is still human work.&lt;/p&gt;

&lt;p&gt;And as execution becomes cheaper, judgment grows more visible, more valuable, and harder to avoid.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leadership takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When coding is inexpensive, judgment stands out as the key difference. Systems mirror the choices that formed them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action cues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notice where decisions matter more than implementation&lt;/li&gt;
&lt;li&gt;Pay attention to “good enough” moments that lock paths&lt;/li&gt;
&lt;li&gt;Watch judgment quietly replace coding as the bottleneck&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>accountability</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The 40-Minute Chrome Extension</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Thu, 30 Apr 2026 18:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/the-40-minute-chrome-extension-1kab</link>
      <guid>https://dev.to/sargentjamesa/the-40-minute-chrome-extension-1kab</guid>
      <description>&lt;p&gt;I built a Chrome extension in 40 minutes, from idea to working extension. No modifications were necessary after testing.&lt;/p&gt;

&lt;p&gt;The extension exports a complete ChatGPT conversation to a clean markdown file with numbered turns and speaker labels. You open a chat, click the extension, and it captures everything. The scope was simple, the constraints were clear, and there was a single deliverable.&lt;/p&gt;
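
&lt;p&gt;To make the output concrete, here is a minimal sketch of the formatting step only, assuming the turns have already been scraped from the page; the function name, the turn shape, and the heading style are illustrative, not the extension's actual code.&lt;/p&gt;

```javascript
// Hypothetical sketch: turn a captured conversation into markdown with
// numbered turns and speaker labels. The DOM-scraping step is omitted;
// `turns` is assumed to be an array of { speaker, text } objects.
function toMarkdown(title, turns) {
  const lines = ['# ' + title, ''];
  turns.forEach((turn, i) => {
    lines.push('## Turn ' + (i + 1) + ': ' + turn.speaker); // e.g. "## Turn 1: User"
    lines.push('');
    lines.push(turn.text);
    lines.push('');
  });
  return lines.join('\n');
}

// Example: two turns from a short chat
const md = toMarkdown('My Chat', [
  { speaker: 'User', text: 'Hello' },
  { speaker: 'Assistant', text: 'Hi there' },
]);
```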

&lt;p&gt;Three distinct AI systems from two companies handled three different roles. I collaborated with ChatGPT to develop the intent. I provided the concept, constraints, and scope boundaries. ChatGPT asked clarifying questions and wrote the intent document. After nine exchanges over about ten minutes, the spec was finalized. I was the Architect, and ChatGPT was the pen.&lt;/p&gt;

&lt;p&gt;Codex acted as the Manager. It read the intent and generated the run bundle: a task list, a developer prompt, and operating instructions. Seven tasks, clearly scoped, with acceptance criteria directly based on the intent. About ten minutes.&lt;/p&gt;

&lt;p&gt;Claude Code acted as the Developer. It read the run bundle and executed it. Seven tasks completed with zero failures. In about ten minutes, it produced a working Chrome extension. The last ten minutes were spent testing. The extension functioned as specified, with no changes needed.&lt;/p&gt;

&lt;p&gt;That breakdown is intentional, not accidental. Spend ten minutes defining the intent, ten minutes planning the execution, ten minutes building, and ten minutes verifying. The work is done upfront. Once the decisions are made and documented, execution becomes simple.&lt;/p&gt;

&lt;p&gt;The framework didn't depend on the tools: Trail remained consistent regardless of which AI filled which role. Three systems, two companies, different models. The framework succeeded because roles and handoffs were clear and the intent was unambiguous going in, so there was no need for guesses later. That said, the assignments weren't random. Each model was chosen for its strengths: Codex generated clearer planning artifacts and more organized developer instructions, while Claude Code excelled at interpreting intent and producing a usable, well-constructed result. The pattern is simple: use the model suited to the role, not your personal preference.&lt;/p&gt;

&lt;p&gt;There is one constraint that outweighs tool choice. The Developer must never run in the same conversation where the intent was created. Doing so allows the model to inherit context that isn't present in the artifacts, violating Trail’s most important rule: the Developer’s context must match the file context. If the Developer knows something not documented, the system can no longer be trusted. Use separate windows, separate contexts—no exceptions.&lt;/p&gt;

&lt;p&gt;Web: &lt;a href="http://trail.venturanomadica.com/" rel="noopener noreferrer"&gt;trail.venturanomadica.com&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="http://github.com/Ventura-Nomadica/trail-framework" rel="noopener noreferrer"&gt;github.com/Ventura-Nomadica/trail-framework&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>webdev</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Information Wants to be Free</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Tue, 28 Apr 2026 18:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/information-wants-to-be-free-12fl</link>
      <guid>https://dev.to/sargentjamesa/information-wants-to-be-free-12fl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"On the one hand, information wants to be expensive, because it's so valuable. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time."&lt;br&gt;
— Stewart Brand&lt;/p&gt;

&lt;p&gt;"All generally useful information should be free."&lt;br&gt;
— Richard Stallman&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Trail is open-source because we believe useful information should be free. That is the reason. Not strategy, not distribution, not growth.&lt;/p&gt;

&lt;p&gt;The problem Trail addresses is real. More teams are building with AI every month. More decisions are getting lost in chat. More scope is drifting without anyone noticing. More systems are accumulating undocumented assumptions that will eventually cost someone a painful week to untangle. I have experienced that cost. Trail exists so I do not have to go through it again.&lt;/p&gt;

&lt;p&gt;But the decision to make it free isn't driven by the problem. It's based on the belief that if a method makes work more reliable and helps people avoid avoidable failure, there's no reason to restrict access. Putting that behind a paywall doesn't improve the outcome; it simply limits who can benefit from it.&lt;/p&gt;

&lt;p&gt;Trail isn't software you buy. It is a framework you implement. The documentation and methodology are licensed under Creative Commons Attribution 4.0, and the scaffold, including folder structure and template files, is MIT. This means you can use it, adapt it, share it, and build on it, personally or commercially, with proper attribution.&lt;/p&gt;

&lt;p&gt;The split is intentional. The approach remains flexible and applicable across various industries, while the scaffold can be dropped directly into real projects without legal issues. Both are as permissive as possible while still ensuring credit for the work.&lt;/p&gt;

&lt;p&gt;Trail requires no special tools or specific platform. It uses markdown files stored in folders. You can run it with Git, a shared drive, or any system you already use. The framework fits seamlessly into your environment; it does not replace it.&lt;/p&gt;

&lt;p&gt;If Trail works for you, use it. If it breaks, you'll see exactly where and why—that's the point.&lt;/p&gt;

&lt;p&gt;Web: &lt;a href="https://trail.venturanomadica.com" rel="noopener noreferrer"&gt;trail.venturanomadica.com &lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/Ventura-Nomadica/trail-framework" rel="noopener noreferrer"&gt;github.com/Ventura-Nomadica/trail-framework&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@willvanw?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Will van Wingerden&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/photo-of-library-hall-dsvJgiBJTOs?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>community</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Architecture Is a Leadership Decision, not a Technical One</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Sun, 26 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/architecture-is-a-leadership-decision-not-a-technical-one-2mk9</link>
      <guid>https://dev.to/sargentjamesa/architecture-is-a-leadership-decision-not-a-technical-one-2mk9</guid>
      <description>&lt;p&gt;Architecture isn’t about tools, frameworks, or patterns.&lt;/p&gt;

&lt;p&gt;It’s about decisions.&lt;/p&gt;

&lt;p&gt;What stays flexible.&lt;/p&gt;

&lt;p&gt;What gets locked in.&lt;/p&gt;

&lt;p&gt;What can change later, and what can’t.&lt;/p&gt;

&lt;p&gt;Those choices shape how a system behaves long after the code is written. Once they’re set, they’re hard to undo.&lt;/p&gt;

&lt;p&gt;That’s why architecture isn’t just a technical concern. It’s how leadership decisions become permanent structures.&lt;/p&gt;

&lt;p&gt;I learned this clearly while working on TrekCrumbs. The hardest part of fixing the system wasn’t writing code. It was stopping to evaluate the design. We had to determine what had to stay consistent, where the boundaries were, and which assumptions we were willing to lock in. Once those decisions were clear, the implementation was straightforward.&lt;/p&gt;

&lt;p&gt;The order mattered.&lt;/p&gt;

&lt;p&gt;When AI accelerates execution, architectural decisions harden faster. Tradeoffs get embedded earlier. Defaults turn into constraints before anyone realizes a choice was made. The cost doesn’t show up immediately. It shows up later, when change becomes expensive.&lt;/p&gt;

&lt;p&gt;This is where leadership quietly shows up.&lt;/p&gt;

&lt;p&gt;Not in meetings.&lt;/p&gt;

&lt;p&gt;Not in roadmaps.&lt;/p&gt;

&lt;p&gt;But in the decisions that shape the system before anyone hits “build.”&lt;/p&gt;

&lt;p&gt;AI didn’t change what architecture is.&lt;/p&gt;

&lt;p&gt;It changed how quickly you live with the consequences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leadership takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Architecture isn’t about how systems are built; it’s about which decisions get locked in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action cues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notice constraints no one remembers choosing&lt;/li&gt;
&lt;li&gt;Pay attention to decisions that remove flexibility by default&lt;/li&gt;
&lt;li&gt;Watch for tradeoffs becoming permanent without discussion&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>accountability</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Running Trail End-to-End Outside Software: A Fictional Business Case Study</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Sat, 25 Apr 2026 21:19:07 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/running-trail-end-to-end-outside-software-a-fictional-business-case-study-o5l</link>
      <guid>https://dev.to/sargentjamesa/running-trail-end-to-end-outside-software-a-fictional-business-case-study-o5l</guid>
      <description>&lt;p&gt;Running the 7 intents for Otaku Haven this weekend. Work in progress. It's a fictional anime store I'm using to test Trail end-to-end outside software. Business plan, pitch deck, supporting artifacts, 45 documents in total. No code anywhere.&lt;/p&gt;

&lt;p&gt;I worked with Claude and ChatGPT to determine everything a business needs. I validated all the information, then used AI to build all 7 intent documents. Everything was human-reviewed. I'm using Claude as Manager, Codex as Developer, and I'm the reviewer.&lt;/p&gt;

&lt;p&gt;The point isn't whether AI can write a business plan. It's whether bounded intent can drive the work without execution drift or letting chat become the source of truth.&lt;/p&gt;

&lt;p&gt;When it's done, everything goes public: intents, run bundles, outputs, the lot.&lt;/p&gt;

&lt;p&gt;One honest note: the documents are bland and not well-formatted aesthetically, but the content looks solid.&lt;/p&gt;

&lt;p&gt;So far, it's holding.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>trailframework</category>
      <category>buildinpublic</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Front-loaded Friction</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Thu, 23 Apr 2026 18:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/front-loaded-friction-585i</link>
      <guid>https://dev.to/sargentjamesa/front-loaded-friction-585i</guid>
      <description>&lt;p&gt;Special Operations have a saying: “Slow is smooth, smooth is fast.” Trail work follows the same principle. The friction occurs at the front, slowing you down so you can execute quickly and accurately.&lt;/p&gt;

&lt;p&gt;Most AI-assisted work does the opposite. It starts quickly. You open a chat, describe what you want, and begin building. It feels like progress, and for simple work it often is. But for anything complex, that speed is a loan. You're borrowing clarity from your future self and trusting you'll be able to reconstruct the decisions later. When you can't, the bill comes due as rework, misalignment, and decisions no one can trace back to a source.&lt;/p&gt;

&lt;p&gt;Trail does not remove that cost. It charges it upfront.&lt;/p&gt;

&lt;p&gt;Before anything gets built, you specify the intent. You define the problem, the constraints, clarify what is explicitly out of scope, and define what “done” looks like. You also determine how the Manager functions and establish the operating rules. This all happens before writing a single line of code or producing any documents. There is no early execution phase where things remain flexible. The structure is set first.&lt;/p&gt;

&lt;p&gt;That is the part people resist. It feels like overhead, especially to anyone used to jumping straight into execution. But what it actually does is bring ambiguity to the surface early, when it is still inexpensive to resolve. Constraints are written down instead of assumed. Scope is limited instead of discovered halfway through the work. Decisions that would normally appear later as surprises show up immediately, allowing them to be handled deliberately.&lt;/p&gt;

&lt;p&gt;Once that work is done, execution shifts. The Developer no longer guesses what was intended or reconstructs prior conversations. They operate from explicit inputs, clear tasks, and defined outputs. The path is already set. What initially seemed like friction becomes the reason the rest of the work proceeds smoothly.&lt;/p&gt;

&lt;p&gt;The tradeoff is simple. Rework costs more than planning, every time. Most systems make it easy to skip planning and face the consequences later. Trail makes that harder. It shifts the cost forward, where it belongs, so it never has to be paid again.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="http://github.com/Ventura-Nomadica/trail-framework" rel="noopener noreferrer"&gt;github.com/Ventura-Nomadica/trail-framework&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Not Just for Software</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Tue, 21 Apr 2026 18:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/not-just-for-software-5hch</link>
      <guid>https://dev.to/sargentjamesa/not-just-for-software-5hch</guid>
      <description>&lt;p&gt;&lt;strong&gt;intent&lt;/strong&gt; /ĭn-tĕnt′/&lt;br&gt;
&lt;em&gt;noun&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Something that is intended; an aim or purpose.&lt;/li&gt;
&lt;li&gt;The state of mind necessary for an act to constitute a crime.&lt;/li&gt;
&lt;li&gt;The act of turning the mind toward an object; hence, a design; a purpose.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Trail is not a software development framework. It works for software, but that's just where I ran into the problem.&lt;/p&gt;

&lt;p&gt;Trail doesn't care if you're writing code, writing a book, or building a complete set of business documents. It cares about one thing: intent. Not the vague, conceptual version of the word, but the operational one.&lt;/p&gt;

&lt;p&gt;In Trail, an intent is a formal artifact. It defines the problem, the constraints, what is explicitly out of scope, and what "done" means. It exists before anything gets built, and once execution starts, it does not change. Every decision traces back to it. None of that requires code to exist anywhere in the process.&lt;/p&gt;
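
&lt;p&gt;As a rough illustration only, an intent file might look something like this; the headings and wording are hypothetical, not Trail's canonical template.&lt;/p&gt;

```markdown
# Intent: Export chat to markdown

## Problem
Conversations cannot be saved in a portable, readable form.

## Constraints
- Plain markdown output, no external services
- Single click from the toolbar

## Out of scope
- Editing or summarizing the conversation

## Done means
A complete conversation exports to one markdown file with numbered turns.
```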

&lt;p&gt;I'm validating that directly right now with a non-software proof of concept, producing the complete document set for a fictional retail business called Otaku Haven. That includes a business plan, operating agreements, employee handbook, marketing strategy, and vendor contracts. Dozens of documents, all produced through the same intent-and-run structure.&lt;/p&gt;

&lt;p&gt;The mechanics do not change. The Architect defines what needs to exist and under what constraints. The Manager plans the execution. The Developer, in this case AI, produces the document. The Reviewer verifies it meets the intent. The output is a Word document instead of a Flutter widget. Trail does not notice the difference.&lt;/p&gt;

&lt;p&gt;That's because Trail is not solving for software. It is solving for the handoff problem.&lt;/p&gt;

&lt;p&gt;Any time you delegate complex work, whether to a contractor, a team member, or an AI, you run into the same failure modes. Decisions live in conversations instead of files. Scope shifts without anyone noticing. Assumptions go undocumented. Changes cannot be traced back to a decision. Those risks do not care what the deliverable is.&lt;/p&gt;

&lt;p&gt;This is not a coding problem. It is a coordination problem.&lt;/p&gt;

&lt;p&gt;And coordination problems show up everywhere. HR policy rollout. Marketing campaigns. Legal document production. Financial close processes. Vendor onboarding. If someone defines the work, someone else executes it, and someone eventually has to verify the result, you are operating inside the same system whether you recognize it or not.&lt;/p&gt;

&lt;p&gt;Trail applies there.&lt;/p&gt;

&lt;p&gt;The tools do not matter. The output does not matter. The domain does not matter. Trail only enforces three things: artifacts are the source of truth, roles are separated, and every handoff is explicit. Everything else is your system.&lt;/p&gt;

&lt;p&gt;Web: &lt;a href="https://trail.venturanomadica.com" rel="noopener noreferrer"&gt;trail.venturanomadica.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>productivity</category>
      <category>leadership</category>
    </item>
    <item>
      <title>AI Makes Bad Decisions Look Reasonable</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Sun, 19 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/ai-makes-bad-decisions-look-reasonable-cbl</link>
      <guid>https://dev.to/sargentjamesa/ai-makes-bad-decisions-look-reasonable-cbl</guid>
      <description>&lt;p&gt;Most bad decisions don’t look bad at first.&lt;/p&gt;

&lt;p&gt;They look complete. Confident. Well-structured.&lt;/p&gt;

&lt;p&gt;That’s part of the problem.&lt;/p&gt;

&lt;p&gt;AI is very good at producing outputs that feel finished. It fills in gaps, smooths edges, and presents answers that sound plausible. When you look at the result, there’s nothing obviously wrong with it. No red flags. No clear failure.&lt;/p&gt;

&lt;p&gt;I ran into this while building TrekCrumbs. AI kept offering solutions that were technically sound. Add a layer here. Map fields there. Patch the edge cases. Each change made sense on its own. Nothing broke. Progress continued.&lt;/p&gt;

&lt;p&gt;But those “reasonable” decisions quietly locked in assumptions.&lt;/p&gt;

&lt;p&gt;Tradeoffs were made quietly. Defaults became structure. Nothing broke; the system just became harder to understand.&lt;/p&gt;

&lt;p&gt;That’s what makes this dangerous.&lt;/p&gt;

&lt;p&gt;When decisions aren’t explicit, AI doesn’t expose the problem; it hides it. The system works long enough for ambiguity to become expensive. By the time the cost shows up, it doesn’t look like a bad decision. It looks like friction, rework, and discomfort.&lt;/p&gt;

&lt;p&gt;AI didn’t create bad decisions.&lt;/p&gt;

&lt;p&gt;It made them easier to overlook.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leadership takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When decisions aren’t made explicit, AI outputs can feel correct while quietly locking in unexamined assumptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action cues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notice decisions that feel “reasonable” but are hard to explain&lt;/li&gt;
&lt;li&gt;Pay attention to tradeoffs no one can clearly name&lt;/li&gt;
&lt;li&gt;Watch confidence replace clarity in reviews&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>accountability</category>
      <category>management</category>
    </item>
    <item>
      <title>Project Planning for Side Projects?</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Fri, 17 Apr 2026 15:14:26 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/project-planning-for-side-projects-1lc4</link>
      <guid>https://dev.to/sargentjamesa/project-planning-for-side-projects-1lc4</guid>
      <description>&lt;p&gt;I use Jira to track the work for my side projects. I've used Notion previously, but with a background in enterprise IT, it was cumbersome. I’m loose on dates, but long on detail, it makes sure everything gets done, and if there is a long break, I know whats next.&lt;/p&gt;

&lt;p&gt;Do you build project plans for your side projects? Why or why not, and what do you use?&lt;/p&gt;

</description>
      <category>buildinpublic</category>
      <category>sideprojects</category>
      <category>development</category>
      <category>discuss</category>
    </item>
    <item>
      <title>SMS to Notion</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Thu, 16 Apr 2026 20:46:10 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/sms-to-notion-33gf</link>
      <guid>https://dev.to/sargentjamesa/sms-to-notion-33gf</guid>
      <description>&lt;p&gt;Wired up an SMS to Notion pipeline today. Twilio ingests the message, Make orchestrates, a small PHP endpoint handles file uploads, and structured records land in a Notion database with attachments and extracted hashtags.&lt;/p&gt;

&lt;p&gt;Ran into a couple of unexpected gaps. Twilio doesn't surface file extensions on media downloads, and Notion's API won't accept anything other than a valid public URL for attachments. Fixed both: parsing the Content-Type header to derive the extension, and adding a small PHP script to handle uploads to a web server.&lt;/p&gt;
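
&lt;p&gt;The endpoint here is PHP, but the extension-mapping idea is language-agnostic; this is a hedged sketch of the same logic in JavaScript, with an illustrative (not exhaustive) lookup table.&lt;/p&gt;

```javascript
// Sketch: derive a file extension from the Content-Type header, since
// Twilio media downloads don't expose one. The table is illustrative only.
const EXTENSIONS = {
  'image/jpeg': '.jpg',
  'image/png': '.png',
  'image/gif': '.gif',
  'application/pdf': '.pdf',
};

function extFromContentType(header) {
  // Headers may carry parameters, e.g. "image/jpeg; charset=binary"
  const mime = header.split(';')[0].trim().toLowerCase();
  return EXTENSIONS[mime] || '.bin'; // safe fallback for unknown types
}
```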

&lt;p&gt;It runs on a single Twilio number. If the sender is my number, the record lands in my database; if it’s my wife’s number, it goes to hers. The one rough edge is that multiple pictures create multiple records for now, which is fine for the POC. I’m building it out to decide whether it’s worth turning into a real product.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>buildinpublic</category>
      <category>notion</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Intents, Runs, and Roles</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Thu, 16 Apr 2026 18:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/intents-runs-and-roles-1g</link>
      <guid>https://dev.to/sargentjamesa/intents-runs-and-roles-1g</guid>
      <description>&lt;p&gt;People keep asking me what Trail actually looks like in action. The philosophy makes sense, but eventually you want to see the machine.&lt;/p&gt;

&lt;p&gt;Trail consists of three structural concepts and four roles. That’s the entire system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Meta is your project’s reality anchor. It stores global operating instructions, shared inputs, and a baseline that describes what exists today. It changes slowly and intentionally. If a rule applies everywhere, it’s kept here. If it doesn’t, it doesn’t belong here.&lt;/p&gt;

&lt;p&gt;Intents are bounded units of work. Each intent defines the problem, constraints, what is explicitly out of scope, and what “done” means. You write the intent before anything gets built. Once execution starts, it is immutable. You don’t go back and edit it. If the work changes, you create a new intent. If the Reviewer identifies something that requires a different scope, that’s a new intent. Old intents are always kept.&lt;/p&gt;

&lt;p&gt;That’s how Trail keeps history honest. You can see what was actually decided at the time, not what someone later claims was decided.&lt;/p&gt;

&lt;p&gt;An intent isn’t a ticket or a user story. It’s more like a project brief, but one that is enforceable. It includes the intent itself, manager instructions that guide how planning is carried out, and operating instruction overrides for any specific details related to that work. Collectively, these files make up the intent package. Nothing gets executed without it.&lt;/p&gt;

&lt;p&gt;Runs are execution containers. A single intent might require one run or several. Each run answers “how,” never “what.” The Manager decides how many runs are needed and what each one covers.&lt;/p&gt;

&lt;p&gt;Runs are disposable. They can fail, pause, or be abandoned—that’s normal. What matters is the artifacts they generate, not the run itself.&lt;/p&gt;

&lt;p&gt;Within each run, the Manager creates an execution bundle: operational instructions (fully written), a task list, a developer prompt, and a results file. That bundle constitutes the entire execution context. The Developer does not depend on anything outside of it.&lt;/p&gt;
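
&lt;p&gt;Laid out on disk, the structure described above might look like this; the folder and file names are illustrative, not the scaffold's exact layout.&lt;/p&gt;

```
project/
  meta/                   global operating instructions, shared inputs, baseline
  intents/
    0001-export-chat/     intent package: intent, manager instructions,
                          operating instruction overrides
  runs/
    0001-run-01/          run bundle: operating instructions, task list,
                          developer prompt, results file
```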

&lt;p&gt;&lt;strong&gt;The roles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Architect defines the “what.” Produces the intent package. Owns scope. This role is always performed by a human. Trail’s highest-leverage constraint is that the entity defining the problem is never the same as the one executing it.&lt;/p&gt;

&lt;p&gt;Manager translates intent into execution. It reads the intent and creates the run bundle. It does not add scope. It does not fill gaps. If the intent is incomplete or contradictory, the Manager stops. That’s not a failure; it’s a control point. It reports what is missing and returns the issue to the Architect.&lt;/p&gt;

&lt;p&gt;The Developer, human or AI, executes the Run artifacts and produces the output. The constraint is strict: the Developer’s context must match the file context. No chat memory allowed. No references to previous conversations. If it’s not listed as an input, it does not exist.&lt;/p&gt;

&lt;p&gt;That constraint cuts both ways. The Developer is not allowed to guess. If something is missing, such as a file, a dependency, or a clear instruction, the Developer stops and reports it. Trail prefers to halt rather than produce output based on assumptions.&lt;/p&gt;

&lt;p&gt;Reviewer verifies the output against the intent and Run artifacts, then accepts or rejects it. If the output is incorrect, it is rejected. If the scope is wrong, a new intent is created. Always performed by a human. In smaller projects, the Architect and Reviewer are often the same person.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why should I care?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most AI-assisted work combines all these roles into one person in a conversation. You define the problem, plan the solution, carry out the work, and judge the results, all in the same place. That works until it doesn’t. And when it breaks, there’s no way to separate what went wrong from who decided it.&lt;/p&gt;

&lt;p&gt;Trail forces those boundaries to exist whether you like it or not.&lt;/p&gt;

&lt;p&gt;When AI fills the roles of Manager or Developer, the Architect and Reviewer stay human. That’s not about preference; it’s about governance. The human defines what should be built and confirms whether it was built correctly. AI works within boundaries it did not set and cannot change.&lt;/p&gt;

&lt;p&gt;Trail doesn’t slow down work; it makes the system visible. When something goes wrong, you don’t have to debug the whole project. Instead, you check the boundary: was the intent unclear, did the Manager misplan, did the Developer deviate, or did the Reviewer miss something?&lt;/p&gt;

&lt;p&gt;Every failure has a location.&lt;/p&gt;

&lt;p&gt;That’s the point. Not to prevent mistakes, but to make them traceable and recoverable.&lt;/p&gt;

&lt;p&gt;Web: &lt;a href="http://trail.venturanomadica.com/" rel="noopener noreferrer"&gt;trail.venturanomadica.com&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="http://github.com/Ventura-Nomadica/trail-framework" rel="noopener noreferrer"&gt;github.com/Ventura-Nomadica/trail-framework&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>devops</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Artifacts &gt; Conversations</title>
      <dc:creator>James Sargent</dc:creator>
      <pubDate>Tue, 14 Apr 2026 18:00:00 +0000</pubDate>
      <link>https://dev.to/sargentjamesa/artifacts-conversations-4f93</link>
      <guid>https://dev.to/sargentjamesa/artifacts-conversations-4f93</guid>
      <description>&lt;p&gt;Every tool in the AI ecosystem is optimizing for better conversation. Better prompts. Longer context windows. Memory across sessions. Smarter retrieval.&lt;/p&gt;

&lt;p&gt;All of it assumes the conversation is where the work lives.&lt;/p&gt;

&lt;p&gt;Trail assumes the opposite.&lt;/p&gt;

&lt;p&gt;Conversations are scratch paper. Useful in the moment, like a whiteboard in a meeting. But nobody ships a whiteboard. Nobody audits one. Nobody hands it to someone new and says, "this is how the system works and why."&lt;/p&gt;

&lt;p&gt;If a decision only exists in a conversation, it doesn't exist.&lt;/p&gt;

&lt;p&gt;Conversations don't transfer. They depend on who was there, what was remembered, and what gets re-explained. Files don't have that problem. A file can be read by someone who wasn't in the room, reviewed months later, or handed to a different executor and produce the same result. It doesn't degrade. It doesn't drift.&lt;/p&gt;

&lt;p&gt;This is why everything in Trail is written in plain English. Not code. Not configuration. Markdown files a human can open and understand without context or special tools. An intent reads like a brief. Tasks read like tasks. Operating instructions read like policy.&lt;/p&gt;

&lt;p&gt;The person reviewing the work six months from now shouldn't need to reconstruct a conversation. They should be able to open the files and see what was decided, what was built, and why.&lt;/p&gt;

&lt;p&gt;The conversation that produced those files? Disposable. Close the tab.&lt;/p&gt;

&lt;p&gt;Web: &lt;a href="https://trail.venturanomadica.com" rel="noopener noreferrer"&gt;trail.venturanomadica.com&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/Ventura-Nomadica/trail-framework" rel="noopener noreferrer"&gt;github.com/Ventura-Nomadica/trail-framework&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>softwareengineering</category>
      <category>devjournal</category>
    </item>
  </channel>
</rss>
