<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Robert Love</title>
    <description>The latest articles on DEV Community by Robert Love (@forgemechanic).</description>
    <link>https://dev.to/forgemechanic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3848215%2Fffe2b6c2-4e51-4ae5-a514-4460efc63ea2.jpeg</url>
      <title>DEV Community: Robert Love</title>
      <link>https://dev.to/forgemechanic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/forgemechanic"/>
    <language>en</language>
    <item>
      <title>Ask AI to Stop Being Nice</title>
      <dc:creator>Robert Love</dc:creator>
      <pubDate>Wed, 01 Apr 2026 03:33:00 +0000</pubDate>
      <link>https://dev.to/forgemechanic/ask-ai-to-stop-being-nice-4bh7</link>
      <guid>https://dev.to/forgemechanic/ask-ai-to-stop-being-nice-4bh7</guid>
      <description>&lt;h2&gt;
  
  
  A Better Way to Review Your Design Docs
&lt;/h2&gt;

&lt;p&gt;In a prior post, I wrote about using AI before code—shaping the system in words, building Markdown documents, and reducing drift before implementation starts.&lt;/p&gt;

&lt;p&gt;You can read that here: &lt;a href="https://dev.to/forgemechanic/i-dont-start-with-code-anymore-17in"&gt;I Don’t Start With Code Anymore&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This post is about the next step:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Once you have the documents, ask AI to stop helping and start judging.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you stripped away all prior context, would your design docs still make sense?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That sounds harsher than it is. But it matters.&lt;/p&gt;

&lt;p&gt;By default, AI often tries to be agreeable. It smooths rough edges. It validates your thinking. It finds what is good before it tells you what is weak. That can be useful early on, when you are trying to explore an idea. It is much less useful when you need to know whether your documents actually stand on their own.&lt;/p&gt;

&lt;p&gt;At some point, you need the model to stop acting like a collaborator and start acting like a reviewer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real test is not whether AI understands you
&lt;/h2&gt;

&lt;p&gt;The real test is whether AI understands your documents &lt;strong&gt;without&lt;/strong&gt; you.&lt;/p&gt;

&lt;p&gt;That is a very different standard.&lt;/p&gt;

&lt;p&gt;A lot of design work feels coherent inside an ongoing chat because the model is benefiting from your previous explanations, corrections, and intent. It may be using surrounding context to fill in missing assumptions. That can hide weaknesses in the actual documents.&lt;/p&gt;

&lt;p&gt;So one of the most useful things I can do is isolate the material.&lt;/p&gt;

&lt;p&gt;I start a clean project or fresh conversation with as little extra context as possible. Then I upload only the files that are supposed to define the system: the vocabulary, goals, architecture notes, interfaces, process model, whatever should be sufficient on its own.&lt;/p&gt;

&lt;p&gt;Then I ask the model to judge the material based only on that.&lt;/p&gt;

&lt;p&gt;That is where things get honest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this works
&lt;/h2&gt;

&lt;p&gt;When the extra context is gone, the documents either carry the meaning or they do not.&lt;/p&gt;

&lt;p&gt;If they do, the model can summarize the system, explain the boundaries, identify the phases, and reason about implementation from the files alone.&lt;/p&gt;

&lt;p&gt;If they do not, the cracks show quickly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terms are undefined&lt;/li&gt;
&lt;li&gt;boundaries are fuzzy&lt;/li&gt;
&lt;li&gt;assumptions are implicit&lt;/li&gt;
&lt;li&gt;phases are inconsistent&lt;/li&gt;
&lt;li&gt;interfaces are under-specified&lt;/li&gt;
&lt;li&gt;important decisions exist only in prior chat history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is exactly what I want to find.&lt;/p&gt;

&lt;p&gt;I would rather discover that weakness while refining the documents than discover it later when a coding agent starts implementing the wrong thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tell the AI to be ruthless
&lt;/h2&gt;

&lt;p&gt;This is the part many people skip.&lt;/p&gt;

&lt;p&gt;If you ask AI, “What do you think of these docs?” you will often get a polite answer. It may be thoughtful. It may even be useful. But it will usually soften the critique.&lt;/p&gt;

&lt;p&gt;Sometimes that is the wrong mode.&lt;/p&gt;

&lt;p&gt;Sometimes you need to tell it, explicitly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;do not be polite&lt;/li&gt;
&lt;li&gt;do not optimize for encouragement&lt;/li&gt;
&lt;li&gt;do not assume missing details are intentional&lt;/li&gt;
&lt;li&gt;do not fill gaps unless the text supports them&lt;/li&gt;
&lt;li&gt;treat ambiguity as a defect&lt;/li&gt;
&lt;li&gt;treat undefined terms as a defect&lt;/li&gt;
&lt;li&gt;treat hidden assumptions as a defect&lt;/li&gt;
&lt;li&gt;be harsh, direct, and specific&lt;/li&gt;
&lt;/ul&gt;
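&lt;p&gt;As a concrete sketch, these directives can live in a small reusable snippet so you never retype (or soften) them. The wording here is my own illustration and assumes nothing about any particular model or API; it only builds the text you would paste into a fresh conversation:&lt;/p&gt;

```python
# A reusable "ruthless reviewer" preamble. The exact wording is
# illustrative; tune it to taste. No model API is assumed here.
REVIEW_DIRECTIVES = [
    "Do not be polite or optimize for encouragement.",
    "Do not assume missing details are intentional.",
    "Do not fill gaps unless the text supports them.",
    "Treat ambiguity, undefined terms, and hidden assumptions as defects.",
    "Be harsh, direct, and specific.",
]

def reviewer_prompt(doc_names):
    """Compose the review request for a clean-context session."""
    rules = "\n".join(f"- {d}" for d in REVIEW_DIRECTIVES)
    files = ", ".join(doc_names)
    return (
        f"Review the attached files ({files}) using ONLY their contents.\n"
        f"{rules}\n"
        "Report every defect with the file and section it appears in."
    )
```

&lt;p&gt;The point is not the exact phrasing. The point is that the harsh posture is stated explicitly instead of hoped for.&lt;/p&gt;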

&lt;p&gt;That changes the quality of the feedback.&lt;/p&gt;

&lt;p&gt;The goal is not abuse. The goal is objectivity.&lt;/p&gt;

&lt;p&gt;You want the model to stop saying, “This is a strong start,” and start saying, “This section fails because it depends on context not present in the document.”&lt;/p&gt;

&lt;p&gt;That is far more useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I ask it to evaluate
&lt;/h2&gt;

&lt;p&gt;When I do this, I usually want the model to answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can you explain this system from these files alone?&lt;/li&gt;
&lt;li&gt;What assumptions are missing?&lt;/li&gt;
&lt;li&gt;Which terms are overloaded, vague, or undefined?&lt;/li&gt;
&lt;li&gt;Where do the documents contradict each other?&lt;/li&gt;
&lt;li&gt;What would an implementation agent still not know?&lt;/li&gt;
&lt;li&gt;Which sections sound precise but are actually ambiguous?&lt;/li&gt;
&lt;li&gt;What depends on prior chat context rather than written material?&lt;/li&gt;
&lt;/ul&gt;
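&lt;p&gt;A few of these checks can even be pre-screened mechanically before you spend a review round on them. As one hedged example, assuming a vocabulary doc that declares each term as a second-level heading (a layout I am inventing for illustration), a crude script can flag Title Case terms used elsewhere that the vocabulary never defines:&lt;/p&gt;

```python
import re

def defined_terms(vocab_text):
    """Terms declared as '## Term' headings in the vocabulary doc."""
    pattern = re.compile(r"^##\s+(.+)$", re.MULTILINE)
    return {m.group(1).strip().lower() for m in pattern.finditer(vocab_text)}

def undefined_terms(doc_text, vocab_text):
    """Title Case multi-word phrases used in a doc but never defined.

    A crude heuristic, not a parser: it only catches phrases written
    like 'Phase Gate', and it will false-positive on sentence-initial
    capitalized runs. It narrows the review; it does not replace it.
    """
    candidates = re.findall(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", doc_text)
    known = defined_terms(vocab_text)
    return sorted({c for c in set(candidates) if c.lower() not in known})
```

&lt;p&gt;Anything the script flags is a question for the model; anything the model flags beyond that is a question for the docs.&lt;/p&gt;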

&lt;p&gt;That is the audit.&lt;/p&gt;

&lt;p&gt;If the model can only understand the system because I already taught it the system in conversation, then the documents are not done yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shift in mindset
&lt;/h2&gt;

&lt;p&gt;This is the big change:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop using AI only as a builder. Use it as a hostile reader.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That does not mean every interaction has to be adversarial. Early on, collaboration is useful. Exploration is useful. Drafting is useful.&lt;/p&gt;

&lt;p&gt;But once the docs are supposed to be real, you need a different posture.&lt;/p&gt;

&lt;p&gt;You need something closer to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read this as if you were a new engineer, a coding agent, or a future version of me with no extra context. Then tell me what breaks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is where the quality bar rises.&lt;/p&gt;

&lt;h2&gt;
  
  
  What good feedback looks like
&lt;/h2&gt;

&lt;p&gt;The best feedback is not vague negativity. It is precise pressure.&lt;/p&gt;

&lt;p&gt;Good critical feedback sounds like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“This term is used three different ways.”&lt;/li&gt;
&lt;li&gt;“This section implies authority, but never defines the source of truth.”&lt;/li&gt;
&lt;li&gt;“The phase boundary is unclear.”&lt;/li&gt;
&lt;li&gt;“This interface description is incomplete because the input shape is missing.”&lt;/li&gt;
&lt;li&gt;“This workflow only makes sense if the reader already knows the unstated constraint.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That kind of criticism is gold.&lt;/p&gt;

&lt;p&gt;It tells you exactly where your design is still living in your head instead of in the docs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters before implementation
&lt;/h2&gt;

&lt;p&gt;Once code generation starts, ambiguity gets expensive.&lt;/p&gt;

&lt;p&gt;A coding model will happily turn vague documents into concrete code. That is the danger. It has to choose something. If your design artifacts are unclear, the model will still produce output, but that output may reflect assumptions you never intended.&lt;/p&gt;

&lt;p&gt;That is why I want brutal feedback first.&lt;/p&gt;

&lt;p&gt;I want the documents attacked before I ask for implementation.&lt;/p&gt;

&lt;p&gt;If the docs survive that, then they are much more likely to guide coding tools well.&lt;/p&gt;

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;Here is the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build the design documents&lt;/li&gt;
&lt;li&gt;Move them into a clean context with minimal extra history&lt;/li&gt;
&lt;li&gt;Upload only the files that should define the system&lt;/li&gt;
&lt;li&gt;Tell the AI to stop being nice&lt;/li&gt;
&lt;li&gt;Ask it to identify ambiguity, contradiction, and hidden assumptions&lt;/li&gt;
&lt;li&gt;Fix the documents&lt;/li&gt;
&lt;li&gt;Only then move toward implementation&lt;/li&gt;
&lt;/ol&gt;
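&lt;p&gt;Steps 2 and 3 can be as simple as copying the canonical files into an empty folder so nothing else rides along. A minimal sketch, with the document names purely hypothetical:&lt;/p&gt;

```python
import shutil
from pathlib import Path

# Hypothetical names -- use whatever set of files is supposed to
# fully define your system, and nothing else.
DESIGN_DOCS = ["vision.md", "vocabulary.md", "goals.md",
               "overview.md", "interfaces.md", "roadmap.md"]

def stage_for_review(src_dir, out_dir="review-bundle"):
    """Copy only the defining docs into a fresh folder for upload."""
    src, out = Path(src_dir), Path(out_dir)
    if out.exists():
        shutil.rmtree(out)  # start clean every time
    out.mkdir(parents=True)
    staged = []
    for name in DESIGN_DOCS:
        doc = src / name
        if doc.exists():
            shutil.copy2(doc, out / name)
            staged.append(name)
    return staged  # what actually made it into the bundle
```

&lt;p&gt;If a decision is not in that folder, the reviewer cannot see it. That is the feature, not the bug.&lt;/p&gt;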

&lt;p&gt;AI is useful when it helps you draft.&lt;/p&gt;

&lt;p&gt;It becomes far more useful when it helps you &lt;strong&gt;detect where your draft is lying to you&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not just using AI to help me think.&lt;/p&gt;

&lt;p&gt;Using AI to tell me, bluntly, where my thinking is still not good enough.&lt;/p&gt;

&lt;p&gt;What’s your approach to getting brutally honest feedback from AI?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I Don’t Start With Code Anymore</title>
      <dc:creator>Robert Love</dc:creator>
      <pubDate>Tue, 31 Mar 2026 03:14:31 +0000</pubDate>
      <link>https://dev.to/forgemechanic/i-dont-start-with-code-anymore-17in</link>
      <guid>https://dev.to/forgemechanic/i-dont-start-with-code-anymore-17in</guid>
      <description>&lt;h2&gt;
  
  
  How I Use AI to Design Software First
&lt;/h2&gt;

&lt;p&gt;Many developers want AI to jump straight into the code.&lt;/p&gt;

&lt;p&gt;I don’t.&lt;/p&gt;

&lt;p&gt;Before I ask any tool to implement something, I work the problem out in words first. I use AI as a design partner before I use it as a coding assistant. That shift has had a bigger impact on my results than any specific model or tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  My toolset
&lt;/h2&gt;

&lt;p&gt;My setup varies between home and work, but the pattern stays the same:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Home&lt;/th&gt;
&lt;th&gt;Work&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT Plus&lt;/td&gt;
&lt;td&gt;Gemini Pro&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Codex&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These tools are not interchangeable. I use them differently depending on the stage of work: discussion first, implementation second.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real leverage comes before code
&lt;/h2&gt;

&lt;p&gt;Most AI-assisted development starts too late.&lt;/p&gt;

&lt;p&gt;People bring AI in once they are already in the code and expect it to figure everything out. But the hard part is usually earlier, when the idea is still unclear and the boundaries are not defined.&lt;/p&gt;

&lt;p&gt;Before I ask for code, I want clarity on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the problem I am solving
&lt;/li&gt;
&lt;li&gt;system boundaries
&lt;/li&gt;
&lt;li&gt;consistent terminology
&lt;/li&gt;
&lt;li&gt;what belongs in scope now versus later
&lt;/li&gt;
&lt;li&gt;assumptions that will cause drift if left implicit
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I solve that first, everything downstream improves.&lt;/p&gt;

&lt;h2&gt;
  
  
  My workflow
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Start in chat
&lt;/h3&gt;

&lt;p&gt;I begin in ChatGPT, usually using voice.&lt;/p&gt;

&lt;p&gt;That lets me move quickly through ideas, constraints, edge cases, naming, and structure. At this stage I am not asking for code. I am shaping the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Turn it into documents
&lt;/h3&gt;

&lt;p&gt;Once the idea stabilizes, I convert it into Markdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vision
&lt;/li&gt;
&lt;li&gt;vocabulary
&lt;/li&gt;
&lt;li&gt;goals and non-goals
&lt;/li&gt;
&lt;li&gt;system overview
&lt;/li&gt;
&lt;li&gt;interfaces
&lt;/li&gt;
&lt;li&gt;roadmap
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is not code generation. The goal is clarity.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Design before implementation
&lt;/h3&gt;

&lt;p&gt;I do not rush into the repo.&lt;/p&gt;

&lt;p&gt;I want the architecture, terminology, and constraints clear enough that implementation becomes execution, not invention. Better design leads to better prompts, which lead to better code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Existing code still matters
&lt;/h2&gt;

&lt;p&gt;This is not just for greenfield work.&lt;/p&gt;

&lt;p&gt;If I have an existing codebase, I will often zip up the relevant parts and bring them into ChatGPT early. That gives me a baseline for discussion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what stays the same
&lt;/li&gt;
&lt;li&gt;what changes
&lt;/li&gt;
&lt;li&gt;where the boundaries are weak
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not perfect for code navigation, but it is extremely effective for shaping changes before implementation.&lt;/p&gt;
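&lt;p&gt;The zip step itself is mundane, but worth scripting so the bundle stays small and reproducible. A sketch using the standard library, where the include suffixes and skip rules are only an example of what "relevant" might mean:&lt;/p&gt;

```python
import zipfile
from pathlib import Path

# Which files count as "relevant" is a judgment call -- these
# suffixes and skip rules are illustrative, not prescriptive.
INCLUDE_SUFFIXES = {".py", ".md", ".toml"}
SKIP_PARTS = {".git", "node_modules", "__pycache__", "dist"}

def bundle_context(root_dir, archive="context.zip"):
    """Zip the relevant source files for upload to a chat session."""
    root = Path(root_dir)
    count = 0
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if not path.is_file():
                continue
            if path.suffix not in INCLUDE_SUFFIXES:
                continue
            if any(part in SKIP_PARTS for part in path.parts):
                continue
            zf.write(path, path.relative_to(root))
            count += 1
    return count  # number of files bundled
```

&lt;p&gt;Keeping the bundle rules in a script also means the model sees the same slice of the codebase every time you restart the conversation.&lt;/p&gt;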

&lt;h2&gt;
  
  
  Why this works
&lt;/h2&gt;

&lt;p&gt;The main benefit is reduced drift.&lt;/p&gt;

&lt;p&gt;Because the design is already defined, I do not have to re-explain everything in every prompt. I can give focused instructions:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Implement feature X using these constraints. Do not expand scope. Preserve terminology.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is far more reliable than asking a model to infer everything from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I think about the tools
&lt;/h2&gt;

&lt;p&gt;I group tools into two categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Discussion tools&lt;/strong&gt; (ChatGPT, Gemini): explore ideas, refine design, produce artifacts
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implementation tools&lt;/strong&gt; (Copilot, Codex, Claude Code): execute against a defined design
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most frustration comes from using the wrong tool at the wrong stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changes when I move to code
&lt;/h2&gt;

&lt;p&gt;By the time I start implementation, the system is already defined.&lt;/p&gt;

&lt;p&gt;That usually means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;less drift
&lt;/li&gt;
&lt;li&gt;fewer corrections
&lt;/li&gt;
&lt;li&gt;better consistency
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model is no longer guessing what I want. It is executing against a plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;My workflow is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Talk through the idea
&lt;/li&gt;
&lt;li&gt;Turn it into design documents
&lt;/li&gt;
&lt;li&gt;Bring in existing code when needed
&lt;/li&gt;
&lt;li&gt;Use those artifacts as the source of truth
&lt;/li&gt;
&lt;li&gt;Then implement
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AI is not just a code generator.&lt;/p&gt;

&lt;p&gt;Used well, it is a design amplifier.&lt;/p&gt;

&lt;p&gt;And the clearer the design is up front, the better the code tends to be.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiupnhmgafk5iwzbkrgpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiupnhmgafk5iwzbkrgpd.png" alt="Design Amplifier" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>a11y</category>
      <category>productivity</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Hello, I’m ForgeMechanic</title>
      <dc:creator>Robert Love</dc:creator>
      <pubDate>Sat, 28 Mar 2026 20:03:19 +0000</pubDate>
      <link>https://dev.to/forgemechanic/hello-im-forgemechanic-4gia</link>
      <guid>https://dev.to/forgemechanic/hello-im-forgemechanic-4gia</guid>
      <description>&lt;p&gt;I’m Robert Love, a software architect and builder working in public under &lt;strong&gt;&lt;a href="https://github.com/ForgeMechanic" rel="noopener noreferrer"&gt;ForgeMechanic&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I already have a handful of open source projects on GitHub, and behind them I have thousands of lines of code and a number of programs that I expect to open source over time. Some of that is ready now. Some of it I’m still hesitant to put out before I better understand how people respond and where the right collaborators might be.&lt;/p&gt;

&lt;p&gt;A big part of why I build is personal. I want to significantly improve how we interact with our systems, especially through disability-aware design that can also unlock better, faster, and more powerful ways for able-bodied developers to work.&lt;/p&gt;

&lt;p&gt;I live with Parkinson’s, which makes typing difficult, and I’m blind in my right eye. Because of that, I do about 90% of my work through voice and AI. That has pushed me to think differently about input, control, accessibility, and what better human-system interaction could look like.&lt;/p&gt;

&lt;p&gt;This space is where I’ll be sharing architecture notes, experiments, prototypes, and lessons learned as I build in the open. I’m also hoping to connect with people who care about similar problems and may want to collaborate long-term on open source tools, systems, and ideas.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>a11y</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
