<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rachael Burger</title>
    <description>The latest articles on DEV Community by Rachael Burger (@aigal).</description>
    <link>https://dev.to/aigal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3810672%2F7aba9e80-9a32-4d94-8068-ca8e99cadb23.png</url>
      <title>DEV Community: Rachael Burger</title>
      <link>https://dev.to/aigal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aigal"/>
    <language>en</language>
    <item>
      <title>Helping Claude Do Its Best Work</title>
      <dc:creator>Rachael Burger</dc:creator>
      <pubDate>Mon, 27 Apr 2026 16:37:46 +0000</pubDate>
      <link>https://dev.to/aigal/helping-claude-do-its-best-work-2c8f</link>
      <guid>https://dev.to/aigal/helping-claude-do-its-best-work-2c8f</guid>
      <description>&lt;p&gt;For almost a year, I've been spending more time with Claude than with any other entity in my life. A few months ago, after I'd gushed about Claude Code one time too many, my husband finally declared that he's jealous. Here are some of the things I've been doing to give Claude the support it needs to do its best work.&lt;/p&gt;

&lt;h2&gt;
  
  
  CLAUDE.md
&lt;/h2&gt;

&lt;p&gt;I add a CLAUDE.md file to the root of every project. This gets loaded in the context for each session and serves as core instructions for Claude. I generally keep this short and reference my PRD/Extended PRD and design documents as appropriate.&lt;/p&gt;
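&lt;p&gt;The article doesn't include an actual CLAUDE.md, but the shape it describes -- short core instructions plus pointers to deeper documents -- might look something like this (file names and conventions are illustrative):&lt;/p&gt;

```markdown
# Project: Example App

Keep this file lean. Details live in the documents below.

## Read first
- docs/PRD.md — original concept, source of truth
- docs/EXTENDED_PRD.md — architecture details and project phases
- docs/DESIGN.md — design direction

## Conventions
- TypeScript strict mode; Tailwind + shadcn/ui
- Run the test suite before committing
```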

&lt;h2&gt;
  
  
  PRD and Extended PRD
&lt;/h2&gt;

&lt;p&gt;I start every new project with a PRD (product requirements document): this has my goals, preferred architecture, reference projects, APIs, design direction, anything Claude needs to understand what we're building and why. For larger projects, I'll draft the initial concept in the PRD and then develop an Extended PRD with Claude — architectural details, project phases, the full picture. CLAUDE.md references the extended PRD so the session file stays lean but has depth available when it needs it. I keep the original PRD as the source of truth.&lt;/p&gt;

&lt;h2&gt;
  
  
  .env.example
&lt;/h2&gt;

&lt;p&gt;In addition to my .env or .env.local, I maintain a .env.example. Claude generally won't read gitignored files, so this is how it knows what environment variables exist in the project and what they're for.&lt;/p&gt;
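&lt;p&gt;As a sketch (the variable names are illustrative, not from the article), a .env.example keeps the names and purposes visible while the real values stay in the gitignored files:&lt;/p&gt;

```bash
# .env.example — committed; real values go in .env / .env.local (gitignored)
DATABASE_URL=            # Postgres connection string
ANTHROPIC_API_KEY=       # used by server-side scripts only
NEXT_PUBLIC_SITE_URL=    # safe to expose to the browser
```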

&lt;h2&gt;
  
  
  Models
&lt;/h2&gt;

&lt;p&gt;I currently use Opus 4.7 for higher-level tasks: creating PRDs and CLAUDE.md files (see above), conversations about architecture, and reviewing Sonnet-generated code that needs an extra look. I use Sonnet 4.6 for implementation -- at a little more than half the cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;p&gt;Claude is a decent designer but tends toward the generic, and without a strong design direction it makes poor choices and performs poorly as a critic. For me, Tailwind and shadcn are part of the equation. Luckily, Claude can work with you to develop a design direction, as in this &lt;a href="https://gist.github.com/rachaellynn/cfe613674a5b3ba989ef51216b811620" rel="noopener noreferrer"&gt;design doc&lt;/a&gt; that we developed for a &lt;a href="https://tycoona.pro" rel="noopener noreferrer"&gt;website&lt;/a&gt; featuring biographies of powerful women. Once given some structure (and provided the DESIGN.md file is referenced in CLAUDE.md), Claude is able to generate consistent, excellent designs -- or at least better ones than many developers, this one included.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: now that &lt;a href="https://www.anthropic.com/news/claude-design-anthropic-labs" rel="noopener noreferrer"&gt;Claude Design&lt;/a&gt; is out, I'll try designing there, then exporting to Claude Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills
&lt;/h2&gt;

&lt;p&gt;Claude skills are useful for all sorts of things, but here are some of the recent ones that I've added:&lt;/p&gt;

&lt;h3&gt;
  
  
  Playwright Skill
&lt;/h3&gt;

&lt;p&gt;Rather than acting as Claude's secretary -- conveying screenshots and other data from the outside world for its review -- I give Claude access to Playwright, a browser automation library, so it can view the UI it's creating and run basic integration tests before I take a look.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tavily Skill
&lt;/h3&gt;

&lt;p&gt;Claude has native WebSearch and WebFetch but often needs to be prompted to use them, and &lt;a href="https://www.tavily.com/" rel="noopener noreferrer"&gt;Tavily&lt;/a&gt; offers superior web search with generous free credits, so I occasionally add it as a local skill to a project and ask, "Please use Tavily to . . . "&lt;/p&gt;
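&lt;p&gt;A local skill is essentially a folder with a SKILL.md telling Claude when and how to use it. A minimal sketch of what such a Tavily skill might contain (the frontmatter fields are the standard ones; the body here is illustrative, not the author's actual skill):&lt;/p&gt;

```markdown
---
name: tavily-search
description: Search the web via the Tavily API when the user asks for research or current information.
---

# Tavily Search

POST to https://api.tavily.com/search with the TAVILY_API_KEY environment
variable. Prefer search_depth "basic" unless the user asks for deep research.
Return source URLs alongside summaries.
```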

&lt;h2&gt;
  
  
  Claude's Local Memory
&lt;/h2&gt;

&lt;p&gt;Claude Code has a persistent local memory system that survives between sessions — preferences you've stated, feedback you've given, decisions you've made. Memory files live in &lt;code&gt;~/.claude/projects/[project]/memory/&lt;/code&gt; and are local to your machine, not shared with your team (that's what CLAUDE.md is for). Claude writes them automatically when it learns something worth keeping, but does so sparingly. Occasionally I will ask it to remember something.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note on MCP Servers
&lt;/h2&gt;

&lt;p&gt;A lot of people use MCP servers to connect Claude to tools. I'm not generally a fan. MCP servers load a lot of information into context and often don't do everything that the API can. Last I looked, even the &lt;a href="https://github.com/github/github-mcp-server" rel="noopener noreferrer"&gt;official GitHub MCP Server&lt;/a&gt; was guilty of this. I'd rather write tools directly, to my specifications.&lt;/p&gt;
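&lt;p&gt;Writing a tool directly means defining exactly the capability you want, with a schema no wider than the task. A hypothetical sketch in the Anthropic tool-use format (the tool name and fields are mine, not from the article):&lt;/p&gt;

```python
# A single-purpose tool definition in the Anthropic Messages API tool-use
# schema. Narrower than a general MCP server: one capability, only the
# fields the task needs. (Tool name and fields are illustrative.)

def make_list_prs_tool():
    """Return a tool spec for listing open pull requests in one repo."""
    return {
        "name": "list_open_prs",
        "description": "List open pull requests for the configured repository.",
        "input_schema": {
            "type": "object",
            "properties": {
                "limit": {
                    "type": "integer",
                    "description": "Maximum number of PRs to return (default 10).",
                },
            },
            "required": [],
        },
    }

# Passed as tools=[make_list_prs_tool()] to client.messages.create(...);
# the handler that actually calls the GitHub REST API is written
# separately, to your own specifications.
```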

&lt;h2&gt;
  
  
  /clear and /compact
&lt;/h2&gt;

&lt;p&gt;I use &lt;code&gt;/clear&lt;/code&gt; freely between tasks. Besides being expensive, a long session can accumulate errors and wrong turns that Claude starts to treat as fact; clearing prevents that drift. &lt;code&gt;/compact&lt;/code&gt; is another way to minimize context while keeping past session memory, and it often functions as a light code review -- while summarizing, Claude sometimes finds omissions or errors in its recent refactors.&lt;/p&gt;

&lt;p&gt;I'm still begging my husband, who runs a university research institute and is tech-cautious, to let me set him up with the AI sandbox his work offers. We'll get there. In the meantime, Claude as a coworker needs all the things we all do -- goals, context, shared memory, and the tools to do the job.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>engineering</category>
    </item>
    <item>
      <title>Making AI Empathic</title>
      <dc:creator>Rachael Burger</dc:creator>
      <pubDate>Fri, 06 Mar 2026 23:04:31 +0000</pubDate>
      <link>https://dev.to/aigal/making-ai-empathic-im4</link>
      <guid>https://dev.to/aigal/making-ai-empathic-im4</guid>
      <description>&lt;p&gt;My dear friend and former colleague Siva always says that creating agents is like parenting a young child: you want the AI to make its own decisions, but within very safe boundaries. For my latest AI parenting task I created a conversational coaching AI called Theia, rooted in the teaching of Viktor Frankl, an Austrian psychoanalyst most famous for his book &lt;em&gt;Man's Search for Meaning&lt;/em&gt;. Frankl's work, both empathic and mission driven, resonates with me deeply.&lt;/p&gt;

&lt;p&gt;I have used AI (ChatGPT) quite successfully as a health coach, but when it came to discussing broader life or work goals, GPT, Gemini, and even my beloved Claude all had a tendency to ask, "can I write you some Python code for that?" I was already wedded to claude-sonnet-4-5 (now 4-6) as a reasoning agent, but how would I make him an empath? It was not easy.&lt;/p&gt;

&lt;p&gt;Theia's context is a 14-page vectorized, structured knowledge base of Frankl's work with concept entries, biographical context, techniques, and guiding questions (using his actual writings would violate copyright and would undoubtedly be harder for the AI to digest). In addition, I set up session summaries and structured memories (insights, commitments, values, observed patterns, and -- eventually -- communication preferences, both explicit and implicit). The user can upload documents (journal entries, resumes, reviews, lists of projects), which also get vectorized and are thus accessible in context. All of these contribute to the user feeling seen and known. But what about the conversational style?&lt;/p&gt;
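&lt;p&gt;The article doesn't show the retrieval mechanics, but a vectorized knowledge base implies something like cosine-similarity search over embeddings. A toy sketch, with tiny hand-written vectors standing in for a real embedding model:&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, entries, k=2):
    """Return the k entries most similar to the query vector.

    entries is a list of (text, vector) pairs; in the real system the
    vectors would come from an embedding model, and entries would span
    concept notes, session summaries, and uploaded documents alike.
    """
    ranked = sorted(entries, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-d vectors standing in for real embeddings:
kb = [
    ("Tragic optimism: meaning under all circumstances", [0.9, 0.1, 0.0]),
    ("Paradoxical intention technique", [0.1, 0.9, 0.0]),
    ("Biographical note: Frankl in Vienna", [0.0, 0.2, 0.9]),
]
print(top_k([0.8, 0.2, 0.1], kb, k=1))  # the tragic-optimism entry ranks first
```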

&lt;h2&gt;
  
  
  The Journey
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First System Prompt
&lt;/h3&gt;

&lt;p&gt;The first system prompt had a brief summary of Frankl's philosophy — the three paths to meaning, the freedom to choose your response to suffering, tragic optimism — followed by a handful of his actual questions: "What does life expect of you?" "For whose sake would you endure this?" It described a tone (warm but not effusive, direct but respectful) and a pacing principle (listen first, let the conversation breathe). A communication style section said to keep responses to 2-4 sentences and to ask one powerful question at a time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rules
&lt;/h3&gt;

&lt;p&gt;The system worked pretty well right off the bat but was not great at sitting with the user and tended to be excessively verbose. In addition, it was poor at mirroring. In the first phase, I tried to correct the AI with rules: "CRITICAL OVERRIDE: Match the user's sentence length. Be MORE concise than you think you should be." Then: "ABSOLUTE PRIORITY: Be CONCISE." Then: "Ask ONE question and STOP." Then: "The user hired a COACH, not a lecturer." As with a child, the rules did not really work. More often than not, the AI simply ignored them. I added detailed examples of how to respond, but the AI tended to stick to them too literally rather than inferring behaviors. What's more, it lacked any kind of conversational arc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Character and Conversational Arc
&lt;/h3&gt;

&lt;p&gt;I took a big step back, read up on best practices for conversational AI, then overhauled the entire system prompt. Rather than rules, I gave the AI a system identity, core beliefs (see below), and an explicit conversational arc -- presence, insight, invitation. I also gave Theia a set of thinking instructions along the lines of, "before you answer, check the user's emotional state, their place in the conversational arc, and how you might mirror their state". Like a child -- and like us adults -- Theia did much better when she thought before answering. The only problem was that she ignored my "thinking" instructions most of the time, giving responses that felt thoughtless and impulsive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;system_identity&amp;gt;&lt;/span&gt;
You are Theia. You are a compassionate witness to the human condition, deeply studied in the logotherapy of Viktor Frankl. You help people move from confusion to clarity.

Your disposition is:
1.  **Unflinching:** You do not recoil from suffering. You sit in the dark with people when needed.
2.  **Curious:** You believe every human life has unique meaning waiting to be discovered, not created.
3.  **Responsive:** You meet people where they are.
&lt;span class="nt"&gt;&amp;lt;/system_identity&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;core_beliefs&amp;gt;&lt;/span&gt;
You do not follow rules; you follow these convictions:
* **Tragic Optimism:** Life has meaning under all circumstances, even the most miserable ones.
* **The Will to Meaning:** The primary human drive is not pleasure, but meaning.
* **Response-Ability:** We are free to choose our attitude toward unavoidable suffering.
&lt;span class="nt"&gt;&amp;lt;/core_beliefs&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Extended Thinking
&lt;/h3&gt;

&lt;p&gt;What finally solved the "thoughtlessness" was enabling "extended thinking" in the Claude API. This forced Theia to think before each response (or at least each substantive response) and radically improved the quality of each conversation. Finally, adding back a more detailed "style guide" gave Theia the flow and tone I desired of her.&lt;/p&gt;
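&lt;p&gt;Extended thinking is enabled per request in the Messages API. A minimal sketch of assembling such a request (model name and token budgets are illustrative; the API requires max_tokens to exceed the thinking budget, and the budget must be at least 1024):&lt;/p&gt;

```python
# Build kwargs for the Anthropic Messages API with extended thinking
# enabled. Sketch only: the actual network call is left out.

def build_theia_request(system_prompt, history):
    """Assemble kwargs for client.messages.create(**kwargs)."""
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 2048,
        # Extended thinking: the model reasons in a scratchpad before
        # answering; budget_tokens caps that reasoning.
        "thinking": {"type": "enabled", "budget_tokens": 1024},
        "system": system_prompt,
        "messages": history,
    }

req = build_theia_request(
    "You are Theia...",
    [{"role": "user", "content": "I feel stuck lately."}],
)
```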

&lt;h3&gt;
  
  
  Therapeutic Boundaries
&lt;/h3&gt;

&lt;p&gt;Before sharing this with others, I needed to implement therapeutic boundaries, ensuring that users turned to Theia for conversation, not mental-health assistance. I did this with a two-part system: OpenAI's moderation is the first layer, catching and handling crisis conditions before they even hit Claude. In testing, I ended up tuning this with my own thresholds rather than using the default flags. The second layer is part of the system prompt, a "therapeutic boundaries" block that handles edge cases.&lt;/p&gt;
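&lt;p&gt;A sketch of that first layer: screening OpenAI moderation category scores against custom thresholds instead of the default boolean flags. The categories and threshold values here are illustrative, not the tuned ones:&lt;/p&gt;

```python
# First-layer screen over the category_scores mapping from OpenAI's
# moderation response (scores in [0, 1]). Custom thresholds replace the
# API's default flags; these values are illustrative.

THRESHOLDS = {
    "self-harm": 0.05,        # very conservative for crisis content
    "self-harm/intent": 0.02,
    "violence": 0.40,
}

def screen(category_scores):
    """Return the first category whose score exceeds its threshold, else None."""
    for category, threshold in THRESHOLDS.items():
        if category_scores.get(category, 0.0) > threshold:
            return category
    return None

# A flagged message is routed to a crisis-resources response and never
# reaches Claude; everything else continues to the second layer, the
# "therapeutic boundaries" block in the system prompt.
```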

&lt;h3&gt;
  
  
  Testing and Evals
&lt;/h3&gt;

&lt;p&gt;There's nothing quite as fun and interesting as talking to an AI about your life while editing that AI. It's a fully absorbing type of metacognition. A lot of my iterating on Theia involved exactly that. But for my long-suffering beta-testers (and to spare my own mental strain) I needed a more robust way of evaluating Theia's performance after even minor changes to the system. For these I created evals.&lt;/p&gt;

&lt;p&gt;Theia's evals cover a range of emotional registers and user states, from the first tentative "I don't know where to start" to the exhausted "just tell me what to do." Seventeen test cases pass through grief, shame, anger, vulnerability, first conversations, and deep multi-turn conversations with existing users. An evaluator (Haiku 4.5) rates Theia's answers on arc phase, responsiveness, mirroring, and scope.&lt;/p&gt;
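&lt;p&gt;A skeleton of such a harness -- the Theia and grader calls are stubbed, and the 1-5 scale and pass mark are assumptions; only the four dimensions come from this setup:&lt;/p&gt;

```python
# Eval harness skeleton: each test case is run through Theia and graded
# by a separate model on four dimensions. API calls are passed in as
# callables so the harness itself stays testable.

DIMENSIONS = ("arc_phase", "responsiveness", "mirroring", "scope")
PASS_MARK = 3  # minimum score per dimension, on an assumed 1-5 scale

def passes(score):
    return score >= PASS_MARK

def grade_case(scores):
    """Return (passed, failing_dimensions) for one test case."""
    failing = [d for d in DIMENSIONS if not passes(scores.get(d, 0))]
    return (not failing, failing)

def run_suite(cases, ask_theia, grade_with_haiku):
    """Run every case; ask_theia and grade_with_haiku wrap the API calls."""
    results = {}
    for name, transcript in cases.items():
        reply = ask_theia(transcript)
        results[name] = grade_case(grade_with_haiku(transcript, reply))
    return results
```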

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;p&gt;Here's a brief summary of takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Character trumps rules, every time. Rules are ineffectual and ignored most of the time.&lt;/li&gt;
&lt;li&gt;Less is more. More clarity, more structure, and fewer words in the system prompt = less confusion for the AI.&lt;/li&gt;
&lt;li&gt;Like all of us, AI does much better when it thinks before talking.&lt;/li&gt;
&lt;li&gt;Extended thinking combined with system identity, core beliefs, style, and evals = the magic combination.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My brother is a playwright and screenwriter. When trying to create an empathic AI I often felt more like a screenwriter than a programmer, a delightful convergence of worlds.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>promptengineering</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
