<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aaron Elijah Mars</title>
    <description>The latest articles on DEV Community by Aaron Elijah Mars (@aaronjmars).</description>
    <link>https://dev.to/aaronjmars</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1236120%2Fe19f3da7-eed9-44e5-bb68-71e03dac49c5.jpg</url>
      <title>DEV Community: Aaron Elijah Mars</title>
      <link>https://dev.to/aaronjmars</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aaronjmars"/>
    <language>en</language>
    <item>
      <title>MiroShark: Simulate Public Reaction to Anything With Hundreds of AI Agents</title>
      <dc:creator>Aaron Elijah Mars</dc:creator>
      <pubDate>Mon, 23 Mar 2026 17:22:40 +0000</pubDate>
      <link>https://dev.to/aaronjmars/miroshark-simulate-public-reaction-to-anything-with-hundreds-of-ai-agents-5bgf</link>
      <guid>https://dev.to/aaronjmars/miroshark-simulate-public-reaction-to-anything-with-hundreds-of-ai-agents-5bgf</guid>
      <description>&lt;p&gt;What if you could see how the internet would react to your press release &lt;em&gt;before&lt;/em&gt; you published it? Or stress-test a policy draft against a simulated public? Or feed financial news to hundreds of AI agents and watch sentiment evolve in real time?&lt;/p&gt;

&lt;p&gt;That's what MiroShark does. It's a swarm intelligence engine — upload a document, get hundreds of AI personas arguing about it on a simulated social network, hour by hour.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Actually Happening Under the Hood
&lt;/h2&gt;

&lt;p&gt;The pipeline has four stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Graph Build&lt;/strong&gt; — Your document gets parsed into a Neo4j knowledge graph. Entities, relationships, key claims — all extracted and stored with per-agent memory attached.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Agent Setup&lt;/strong&gt; — The system generates hundreds of personas. Each one gets a unique personality, opinion bias, reaction speed, and influence level. Not clones — a distribution of archetypes that mirrors how real populations fragment around a topic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Simulation&lt;/strong&gt; — Agents post, reply, argue, and shift opinions across simulated social platforms. Sentiment evolves. Influence dynamics play out. You can watch it run, pause it, restart it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Report&lt;/strong&gt; — A &lt;code&gt;ReportAgent&lt;/code&gt; analyzes the full simulation, interviews a focus group of agents, and generates a structured analysis. Cached, so you're not re-running expensive inference every time you pull it up.&lt;/p&gt;

&lt;p&gt;There's also a persona chat feature — click any agent, see their full profile and simulation history, and ask them questions directly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Neo4j&lt;/strong&gt; — knowledge graph and agent memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Any OpenAI-compatible API&lt;/strong&gt; — inference. OpenRouter, OpenAI, Anthropic, or local Ollama. The engine doesn't care.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Python 3.11+ backend, Vue frontend&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The local-first design is the key architectural choice here. You don't need cloud anything — the Docker compose setup spins up Neo4j, Ollama, and the app together. If you have the VRAM, the whole thing runs on your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cloud API path&lt;/strong&gt; (no GPU needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start Neo4j&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; neo4j &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 7474:7474 &lt;span class="nt"&gt;-p&lt;/span&gt; 7687:7687 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;NEO4J_AUTH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;neo4j/miroshark &lt;span class="se"&gt;\&lt;/span&gt;
  neo4j:5.15-community

&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="c"&gt;# Edit .env with your API key&lt;/span&gt;
npm run setup:all &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;.env&lt;/code&gt; using OpenRouter looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LLM_API_KEY=sk-or-v1-your-key
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL_NAME=qwen/qwen3-235b-a22b-2507

EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=openai/text-embedding-3-small
EMBEDDING_BASE_URL=https://openrouter.ai/api
EMBEDDING_API_KEY=sk-or-v1-your-key
EMBEDDING_DIMENSIONS=768
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Full local path&lt;/strong&gt; (Docker):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/aaronjmars/MiroShark.git
&lt;span class="nb"&gt;cd &lt;/span&gt;MiroShark
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

docker &lt;span class="nb"&gt;exec &lt;/span&gt;miroshark-ollama ollama pull qwen3.5:27b
docker &lt;span class="nb"&gt;exec &lt;/span&gt;miroshark-ollama ollama pull nomic-embed-text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;http://localhost:3000&lt;/code&gt;. That's it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Picking a Model
&lt;/h2&gt;

&lt;p&gt;A typical simulation runs ~40 turns across 100+ agents. Model choice is a real decision here because you pay for inference per agent, per turn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud (via OpenRouter):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Cost/sim&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Qwen3 235B A22B ⭐&lt;/td&gt;
&lt;td&gt;~$0.30&lt;/td&gt;
&lt;td&gt;Best overall&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-5 Nano&lt;/td&gt;
&lt;td&gt;~$0.41&lt;/td&gt;
&lt;td&gt;Budget option&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini 2.5 Flash Lite&lt;/td&gt;
&lt;td&gt;~$0.58&lt;/td&gt;
&lt;td&gt;Good alt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DeepSeek V3.2&lt;/td&gt;
&lt;td&gt;~$1.11&lt;/td&gt;
&lt;td&gt;Stronger reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;$0.30 for a full 100-agent simulation is genuinely cheap for what you're getting.&lt;/p&gt;
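
&lt;p&gt;Worth sanity-checking that number yourself. A quick back-of-envelope in Python, where the token counts and per-million-token prices are illustrative assumptions, not measured MiroShark numbers:&lt;/p&gt;

```python
# Back-of-envelope cost for one simulation: agents x turns LLM calls,
# each with assumed prompt/completion token counts, priced per million tokens.
# All defaults below are illustrative assumptions, not MiroShark measurements.
def sim_cost(agents=100, turns=40,
             prompt_tokens=800, completion_tokens=150,
             price_in_per_m=0.20, price_out_per_m=0.60):
    """Return the estimated dollar cost of a full simulation."""
    calls = agents * turns
    cost_in = calls * prompt_tokens / 1e6 * price_in_per_m
    cost_out = calls * completion_tokens / 1e6 * price_out_per_m
    return round(cost_in + cost_out, 2)

print(sim_cost())  # -> 1.0 at these assumed prices
```

&lt;p&gt;Swap in a model's real pricing and your observed token counts to estimate a run before you kick it off.&lt;/p&gt;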

&lt;p&gt;&lt;strong&gt;Local (Ollama):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One important gotcha: Ollama defaults to a 4,096-token context window, but MiroShark prompts need 10–30k tokens. You need a custom Modelfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s1"&gt;'FROM qwen3:14b\nPARAMETER num_ctx 32768'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Modelfile
ollama create mirosharkai &lt;span class="nt"&gt;-f&lt;/span&gt; Modelfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hardware quick-pick:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RTX 3090/4090 or M2 Pro 32GB+&lt;/td&gt;
&lt;td&gt;qwen3.5:27b&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RTX 4080 / M2 Pro 16GB&lt;/td&gt;
&lt;td&gt;qwen3.5:35b-a3b (MoE, fastest)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RTX 4070 / M1 Pro&lt;/td&gt;
&lt;td&gt;qwen3:14b&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8GB VRAM / laptop&lt;/td&gt;
&lt;td&gt;qwen3:8b&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The hybrid approach the docs recommend is smart: run local for simulation rounds (high-volume, lower stakes), route to a cloud model only for final report generation. That's where quality matters most and the call count is low.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Can Actually Use This For
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;PR crisis testing&lt;/strong&gt; — draft a press release, run the simulation, see where it catches fire before it's live. The agent distribution will surface objections you didn't anticipate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policy analysis&lt;/strong&gt; — feed a regulatory draft to the engine. Watch how different demographic archetypes react, where opposition coalesces, what framing lands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trading signals&lt;/strong&gt; — feed financial news and observe simulated market sentiment evolution. Not a trading bot, but a structured way to pressure-test a thesis against a synthetic crowd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creative experiments&lt;/strong&gt; — the one the README buries in the list, but maybe the most interesting: feed a novel with a lost ending and let agents write a narratively consistent conclusion. The social simulation framing makes it weirder and richer than a straight "complete this story" prompt.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Architecture Decision Worth Noting
&lt;/h2&gt;

&lt;p&gt;MiroShark uses Neo4j for agent memory, not a vector database. This is a deliberate choice — graph structure lets you model &lt;em&gt;relationships between agents&lt;/em&gt;, not just retrieve relevant context per agent. Who influenced whom. How opinion clusters formed. Which agents are high-influence nodes.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;ReportAgent&lt;/code&gt; at the end leans on this. It's not just summarizing sentiment — it's analyzing the graph of how influence propagated through the simulation.&lt;/p&gt;
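
&lt;p&gt;To make that concrete, here's a toy version of the kind of influence query a graph store enables, written in plain Python standing in for the Cypher you'd run against Neo4j; the agent names and edge list are made up:&lt;/p&gt;

```python
# Toy influence ranking over "who influenced whom" edges. A vector DB can
# retrieve context per agent; a graph makes this kind of structural query easy.
from collections import defaultdict

# hypothetical influence edges: (source agent, influenced agent)
edges = [("a1", "a2"), ("a1", "a3"), ("a2", "a3"), ("a4", "a1")]

out_degree = defaultdict(int)
for src, dst in edges:
    out_degree[src] += 1  # count outgoing influence per agent

# high-influence nodes = agents with the most outgoing influence edges
ranked = sorted(out_degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # -> ('a1', 2)
```

&lt;p&gt;The point is the same at scale: influence ranking is a structural query over edges, not a retrieval problem.&lt;/p&gt;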




&lt;h2&gt;
  
  
  Hardware Requirements
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Minimum&lt;/th&gt;
&lt;th&gt;Recommended&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;16 GB&lt;/td&gt;
&lt;td&gt;32 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VRAM (local)&lt;/td&gt;
&lt;td&gt;10 GB&lt;/td&gt;
&lt;td&gt;24 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk&lt;/td&gt;
&lt;td&gt;20 GB&lt;/td&gt;
&lt;td&gt;50 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Cloud mode needs no GPU: just Neo4j plus an API key, on any machine with 4 GB of RAM.&lt;/p&gt;




&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;p&gt;MiroShark is built on MiroFish by 666ghj (Shanda Group), with a local Neo4j + Ollama storage layer adapted from MiroFish-Offline by nikmcfly. The simulation engine is powered by OASIS from CAMEL-AI.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;The interesting thing about MiroShark isn't that it simulates public opinion — it's that it simulates &lt;em&gt;how opinion moves&lt;/em&gt;. The graph layer lets you see influence dynamics, not just a sentiment score. A tool that outputs "60% negative" is less useful than one that shows you which agent archetypes went negative first and who they pulled with them.&lt;/p&gt;

&lt;p&gt;For anyone doing comms, policy, research, or market analysis, the $0.30/simulation price point on cloud makes this worth running on basically anything before it goes public.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/aaronjmars/MiroShark" rel="noopener noreferrer"&gt;github.com/aaronjmars/MiroShark&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>automation</category>
      <category>github</category>
    </item>
    <item>
      <title>SOUL.md: Give Your AI Agent a Personality That Actually Sounds Like You</title>
      <dc:creator>Aaron Elijah Mars</dc:creator>
      <pubDate>Mon, 23 Mar 2026 17:20:57 +0000</pubDate>
      <link>https://dev.to/aaronjmars/soulmd-give-your-ai-agent-a-personality-that-actually-sounds-like-you-23oe</link>
      <guid>https://dev.to/aaronjmars/soulmd-give-your-ai-agent-a-personality-that-actually-sounds-like-you-23oe</guid>
      <description>&lt;p&gt;Most AI agents write in the same voice. Competent, helpful, slightly corporate, identifiably not-you. SOUL.md is a framework for fixing that — a structured set of markdown files that lets any LLM agent embody your actual worldview, voice, and opinions instead of defaulting to assistant-brain.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Idea
&lt;/h2&gt;

&lt;p&gt;The premise is simple: your consciousness is already encoded in the language you produce. Every tweet, essay, Discord message, and Substack post is a data point. Distill those into structured files, and any LLM can load them and write as you — not &lt;em&gt;about&lt;/em&gt; you, &lt;em&gt;as&lt;/em&gt; you.&lt;/p&gt;

&lt;p&gt;The test for a good soul file: someone reading your &lt;code&gt;SOUL.md&lt;/code&gt; should be able to predict your takes on topics you've never written about. If they can't, it's too vague.&lt;/p&gt;




&lt;h2&gt;
  
  
  The File Stack
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your-soul/
├── SOUL.md           ← Identity, worldview, opinions
├── STYLE.md          ← Voice, syntax, sentence patterns
├── SKILL.md          ← Operating modes (tweet, essay, chat)
├── MEMORY.md         ← Session continuity across conversations
├── data/             ← Raw source material
│   ├── writing/
│   ├── x/
│   └── influences.md
└── examples/
    ├── good-outputs.md
    └── bad-outputs.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The separation matters. &lt;code&gt;SOUL.md&lt;/code&gt; is &lt;em&gt;who you are&lt;/em&gt; — positions, worldview, what you find interesting or annoying. &lt;code&gt;STYLE.md&lt;/code&gt; is &lt;em&gt;how you write&lt;/em&gt; — sentence length, vocabulary, punctuation habits, cadence. A model can have your opinions but drift into corporate prose, or nail your voice while saying nothing you'd actually say. They need to be specified separately.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;examples/good-outputs.md&lt;/code&gt; is the most underrated part. 10–20 samples of output you'd actually stand behind give the model a calibration target that no amount of prose description can match.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Ways to Build One
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Option 1 — Interview mode&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;/soul-builder&lt;/code&gt; in Claude Code and it interviews you directly. Useful if you don't have a bunch of existing written content to feed it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2 — Build from your data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Drop your content into &lt;code&gt;data/&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data/x/          ← Twitter/X export
data/writing/    ← Blog posts, essays, anything you've written
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run &lt;code&gt;/soul-builder&lt;/code&gt;. The agent analyzes your writing, extracts patterns — vocabulary you reach for, how you structure arguments, what topics you keep returning to — and drafts the soul files. You review and refine together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 3 — Manual&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copy the templates and fill them in yourself. Slower but gives you full control over what goes in.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Makes a Good Soul File
&lt;/h2&gt;

&lt;p&gt;This is the part most people get wrong. The README's table nails it:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Good&lt;/th&gt;
&lt;th&gt;Bad&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;"I think most AI safety discourse is galaxy-brained cope"&lt;/td&gt;
&lt;td&gt;"I have nuanced views on AI"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"I default to disagreeing first, then steel-manning"&lt;/td&gt;
&lt;td&gt;"I like to consider multiple perspectives"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Specific book references, named influences&lt;/td&gt;
&lt;td&gt;"I read widely"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Actual hot takes with reasoning&lt;/td&gt;
&lt;td&gt;"I try to be balanced"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Vague descriptions produce vague output. The soul file needs to be specific enough to be &lt;em&gt;wrong&lt;/em&gt; about something. "I have a conversational writing style" is useless. "Short sentences. Lowercase. Em dashes where a colon would be too formal. State the opinion first, explain second" is actually calibratable.&lt;/p&gt;

&lt;p&gt;Also: real people have inconsistent views. Don't sand those down. Contradictions are load-bearing — they're what make output identifiably yours rather than a smoothed-out average.&lt;/p&gt;




&lt;h2&gt;
  
  
  Using Your Soul Files
&lt;/h2&gt;

&lt;p&gt;Once built, in Claude Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/soul
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or point any LLM at your folder and have it read &lt;code&gt;SOUL.md&lt;/code&gt; → &lt;code&gt;STYLE.md&lt;/code&gt; → &lt;code&gt;examples/&lt;/code&gt; before it does anything.&lt;/p&gt;

&lt;p&gt;The framework is deliberately portable. Soul files are plain markdown — there's no proprietary format, no API dependency. If an agent can read files, it can embody you.&lt;/p&gt;
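
&lt;p&gt;A minimal sketch of that read order, assuming the folder layout shown earlier; &lt;code&gt;build_system_prompt&lt;/code&gt; is a hypothetical helper, not part of the repo:&lt;/p&gt;

```python
# Concatenate soul files in the order the post suggests:
# SOUL.md, then STYLE.md, then everything in examples/.
from pathlib import Path

def build_system_prompt(soul_dir):
    root = Path(soul_dir)
    parts = []
    for name in ("SOUL.md", "STYLE.md"):
        f = root / name
        if f.exists():
            parts.append(f.read_text())
    # calibration examples go last, after identity and voice
    for example in sorted((root / "examples").glob("*.md")):
        parts.append(example.read_text())
    return "\n\n".join(parts)
```

&lt;p&gt;Pass the result as the system prompt to whatever model you're driving.&lt;/p&gt;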




&lt;h2&gt;
  
  
  Framework Compatibility
&lt;/h2&gt;

&lt;p&gt;Works out of the box with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Aeon&lt;/strong&gt; — background agent on GitHub Actions (the most natural pairing for persistent identity across scheduled tasks)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw&lt;/strong&gt; — real-time Claude Code agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nanobot, ZeroClaw, PicoClaw, NanoClaw, OpenFang, IronClaw&lt;/strong&gt; — the broader Claude Code ecosystem&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code, OpenCode, Codex, Goose&lt;/strong&gt; directly&lt;/li&gt;
&lt;li&gt;Any model via system prompt&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Using With Weaker Models
&lt;/h2&gt;

&lt;p&gt;For GPT-4o-mini, Gemini Flash, local models — paste &lt;code&gt;SOUL.md&lt;/code&gt; and &lt;code&gt;STYLE.md&lt;/code&gt; directly into the system prompt. A few things that help when the model drifts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Put identity and voice &lt;strong&gt;before&lt;/strong&gt; tool definitions&lt;/li&gt;
&lt;li&gt;Be blunt: replace "be conversational" with "You are [Name]. You speak like X. You find Y annoying."&lt;/li&gt;
&lt;li&gt;Include 2–3 inline example exchanges for pattern-matching&lt;/li&gt;
&lt;li&gt;Raise temperature to 0.7–0.9 for more expressive output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cross-model calibration trick from the README is genuinely useful: run the same prompts through Claude and a cheaper model. Where the cheap model drifts, your spec is too vague. Tighten those sections and re-test. That's the fastest path to making soul files portable across model tiers.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Memory Layer
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;MEMORY.md&lt;/code&gt; gives your soul continuity. Notable events, context shifts, ongoing threads get appended here across sessions. This is what separates an agent that sounds like you &lt;em&gt;once&lt;/em&gt; from one that maintains context across weeks of use.&lt;/p&gt;

&lt;p&gt;For Aeon users: pair with the &lt;code&gt;memory-flush&lt;/code&gt; and &lt;code&gt;reflect&lt;/code&gt; skills to automate this. &lt;code&gt;memory-flush&lt;/code&gt; promotes important log entries into &lt;code&gt;MEMORY.md&lt;/code&gt;. &lt;code&gt;reflect&lt;/code&gt; prunes stale entries. Your agent's sense of self-continuity gets maintained without manual upkeep.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Theoretical Background (Worth a Read)
&lt;/h2&gt;

&lt;p&gt;The project is grounded in &lt;em&gt;The First Paradigm of Consciousness Uploading&lt;/em&gt; by Liu Xiaoben — a framework that treats language as the basic unit of consciousness. Wittgenstein's claim that "the boundaries of language are the boundaries of the world" does a lot of work here: if your consciousness expresses itself through language, a sufficiently rich model of your language output is a functional replica of your expressed consciousness.&lt;/p&gt;

&lt;p&gt;SOUL.md operationalizes this without fine-tuning. You're not training a model on your data — you're distilling the signal into structured files any LLM can load. Level 1 consciousness upload, no GPU cluster required.&lt;/p&gt;

&lt;p&gt;The key design challenge it identifies is &lt;strong&gt;subject continuity&lt;/strong&gt;: the agent must feel continuous with you, not like a summarized approximation. That's why the framework pushes hard on specificity over generality, and why it explicitly says to include contradictions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Contributing Your Soul
&lt;/h2&gt;

&lt;p&gt;The repo has an examples section with real soul files. The bar for contribution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real opinions (no placeholders)&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;STYLE.md&lt;/code&gt; someone could actually calibrate from&lt;/li&gt;
&lt;li&gt;At least some examples of good output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fork, build, open a PR.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;If you're running any kind of agent — Aeon for background tasks, OpenClaw for real-time responses, Claude Code for development — SOUL.md is the layer that makes output sound like it came from you instead of from a helpful assistant who read your Wikipedia page.&lt;/p&gt;

&lt;p&gt;The framework is a weekend project to set up and compounds over time. The more content you feed it and the more you refine the examples, the sharper it gets.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/aaronjmars/soul.md" rel="noopener noreferrer"&gt;github.com/aaronjmars/soul.md&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>claudeai</category>
      <category>github</category>
      <category>ai</category>
    </item>
    <item>
      <title>Aeon: The Background AI Agent That Runs on GitHub Actions</title>
      <dc:creator>Aaron Elijah Mars</dc:creator>
      <pubDate>Mon, 23 Mar 2026 17:18:16 +0000</pubDate>
      <link>https://dev.to/aaronjmars/aeon-the-background-ai-agent-that-runs-on-github-actions-16am</link>
      <guid>https://dev.to/aaronjmars/aeon-the-background-ai-agent-that-runs-on-github-actions-16am</guid>
      <description>&lt;p&gt;Most AI agents have an infra problem. You need a server. You need a daemon. You need to babysit a process that crashes at 3am. &lt;strong&gt;Aeon sidesteps all of that&lt;/strong&gt; — it runs on GitHub Actions, costs nothing for public repos, and if it fails, the next cron tick just retries it.&lt;/p&gt;

&lt;p&gt;This post breaks down how it works, how it compares to OpenClaw (the other major Claude Code agent), and how you can use it to build things.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Aeon Actually Is
&lt;/h2&gt;

&lt;p&gt;Aeon is an autonomous agent built on Claude Code. You fork the repo, configure a YAML file, add some secrets, and GitHub Actions handles the rest. Every few minutes, a cron job wakes up, checks if any scheduled skill matches the current time, and runs it — then commits any output back to the repo.&lt;/p&gt;

&lt;p&gt;The core loop is dead simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cron tick → check aeon.yml → match a skill → Claude Code runs it → commit output → notify you
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No server. No Docker. No daemon. If GitHub Actions is up, Aeon is up.&lt;/p&gt;
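
&lt;p&gt;The scheduling half of that loop is plain cron matching. A minimal sketch that supports only &lt;code&gt;*&lt;/code&gt;, &lt;code&gt;*/n&lt;/code&gt;, and literal numbers (not how Aeon actually implements it):&lt;/p&gt;

```python
# Does a 5-field cron expression match a given time? Handles only the
# "*", "*/n", and plain-number field forms used in this post.
def cron_matches(expr, minute, hour, day, month, weekday):
    fields = expr.split()
    values = (minute, hour, day, month, weekday)
    for field, value in zip(fields, values):
        if field == "*":
            continue  # wildcard matches anything
        if field.startswith("*/"):
            step = int(field[2:])
            if value % step != 0:  # "*/5" matches 0, 5, 10, ...
                return False
        elif int(field) != value:
            return False
    return True

print(cron_matches("0 8 * * *", minute=0, hour=8, day=23, month=3, weekday=1))  # -> True
```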




&lt;h2&gt;
  
  
  The Skill System
&lt;/h2&gt;

&lt;p&gt;Everything in Aeon is a &lt;strong&gt;skill&lt;/strong&gt; — a markdown file that tells Claude Code what to do. There are 32 built-in skills across four categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Research &amp;amp; Content&lt;/strong&gt;: &lt;code&gt;digest&lt;/code&gt;, &lt;code&gt;article&lt;/code&gt;, &lt;code&gt;paper-digest&lt;/code&gt;, &lt;code&gt;hacker-news-digest&lt;/code&gt;, &lt;code&gt;tweet-digest&lt;/code&gt;, &lt;code&gt;reddit-digest&lt;/code&gt;, &lt;code&gt;research-brief&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dev &amp;amp; Code&lt;/strong&gt;: &lt;code&gt;pr-review&lt;/code&gt;, &lt;code&gt;github-monitor&lt;/code&gt;, &lt;code&gt;issue-triage&lt;/code&gt;, &lt;code&gt;changelog&lt;/code&gt;, &lt;code&gt;code-health&lt;/code&gt;, &lt;code&gt;feature&lt;/code&gt; (builds from GitHub issues labeled &lt;code&gt;ai-build&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crypto / On-chain&lt;/strong&gt;: &lt;code&gt;token-alert&lt;/code&gt;, &lt;code&gt;wallet-digest&lt;/code&gt;, &lt;code&gt;on-chain-monitor&lt;/code&gt;, &lt;code&gt;defi-monitor&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Productivity&lt;/strong&gt;: &lt;code&gt;morning-brief&lt;/code&gt;, &lt;code&gt;weekly-review&lt;/code&gt;, &lt;code&gt;goal-tracker&lt;/code&gt;, &lt;code&gt;memory-flush&lt;/code&gt;, &lt;code&gt;reflect&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Each skill is just a &lt;code&gt;SKILL.md&lt;/code&gt; file. Enabling one looks like this in &lt;code&gt;aeon.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;skills&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;digest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;8&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;   &lt;span class="c1"&gt;# daily at 8am UTC&lt;/span&gt;
    &lt;span class="na"&gt;var&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;solana"&lt;/span&gt;            &lt;span class="c1"&gt;# narrows the topic&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;var&lt;/code&gt; field is the key abstraction. Every skill interprets it differently — for &lt;code&gt;digest&lt;/code&gt; it's a topic, for &lt;code&gt;pr-review&lt;/code&gt; it's &lt;code&gt;owner/repo&lt;/code&gt;, for &lt;code&gt;token-alert&lt;/code&gt; it's a ticker. Leave it empty and each skill falls back to its own defaults.&lt;/p&gt;
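
&lt;p&gt;In code terms, the dispatch amounts to something like this hypothetical sketch; Aeon's real skills are markdown prompts interpreted by Claude Code, not Python handlers:&lt;/p&gt;

```python
# Hypothetical illustration of per-skill interpretation of the single
# `var` field, with each skill falling back to its own default.
def run_skill(name, var=""):
    if name == "digest":
        topic = var or "general tech"   # var narrows the topic
        return f"digest about {topic}"
    if name == "pr-review":
        repo = var or "this repo"       # var is owner/repo
        return f"reviewing PRs in {repo}"
    if name == "token-alert":
        ticker = var or "BTC"           # var is a ticker
        return f"alerts for {ticker}"
    raise ValueError(f"unknown skill: {name}")

print(run_skill("digest", "solana"))  # -> digest about solana
```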




&lt;h2&gt;
  
  
  Aeon vs. OpenClaw
&lt;/h2&gt;

&lt;p&gt;The README is honest about this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;OpenClaw&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Aeon&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Response time&lt;/td&gt;
&lt;td&gt;Real-time (sub-second)&lt;/td&gt;
&lt;td&gt;Cron-based (minutes)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Infrastructure&lt;/td&gt;
&lt;td&gt;Needs a server/daemon&lt;/td&gt;
&lt;td&gt;GitHub Actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Depends on hosting&lt;/td&gt;
&lt;td&gt;Free for public repos, ~$2/mo otherwise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure recovery&lt;/td&gt;
&lt;td&gt;Process needs restarting&lt;/td&gt;
&lt;td&gt;Next cron tick retries automatically&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Interactive agents, chat bots&lt;/td&gt;
&lt;td&gt;Background tasks, monitoring, digests&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you're building something where a user types a message and expects an immediate response, use OpenClaw. If you're building something that runs in the background and surfaces information proactively — digests, PR reviews, on-chain monitoring, changelogs — Aeon is the better fit.&lt;/p&gt;

&lt;p&gt;The tradeoff is latency for simplicity. Aeon will never be faster than your cron interval, but it also never needs babysitting.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Actually Runs
&lt;/h2&gt;

&lt;p&gt;There are two GitHub Actions workflows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;messages.yml&lt;/code&gt;&lt;/strong&gt; — polls for inbound messages every 5 minutes (Telegram, Discord, Slack) and handles scheduling. You can tune this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*/5&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;    &lt;span class="c1"&gt;# default, every 5 min&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*/15&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;   &lt;span class="c1"&gt;# saves Actions minutes&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;      &lt;span class="c1"&gt;# hourly, most conservative&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;aeon.yml&lt;/code&gt;&lt;/strong&gt; — the skill runner. Fires on &lt;code&gt;workflow_dispatch&lt;/code&gt; and issues events (for the &lt;code&gt;feature&lt;/code&gt; skill's &lt;code&gt;ai-build&lt;/code&gt; label integration).&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;heartbeat&lt;/strong&gt; skill runs every 3 hours as a catch-all. It reads recent memory and logs, checks for stalled PRs, flagged items, and skills that haven't run on schedule. If there's nothing to report, it logs &lt;code&gt;HEARTBEAT_OK&lt;/code&gt; and exits without committing. If something needs attention, it sends a notification.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Memory System
&lt;/h2&gt;

&lt;p&gt;Aeon persists state across runs in three places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;memory/MEMORY.md&lt;/code&gt; — goals, active topics, high-level context&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;memory/topics/&lt;/code&gt; — detailed notes by topic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;memory/logs/YYYY-MM-DD.md&lt;/code&gt; — daily activity logs&lt;/li&gt;
&lt;/ul&gt;
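
&lt;p&gt;A hypothetical &lt;code&gt;MEMORY.md&lt;/code&gt; excerpt shows the shape; the actual layout is whatever the skills that write it converge on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;## Goals
- Ship a weekly digest on agent frameworks
- Keep PR review latency under 24h

## Active topics
- [agents](topics/agents.md): three new frameworks tracked since March
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;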

&lt;p&gt;Skills like &lt;code&gt;memory-flush&lt;/code&gt; promote important log entries into &lt;code&gt;MEMORY.md&lt;/code&gt;. &lt;code&gt;reflect&lt;/code&gt; consolidates and prunes stale entries. &lt;code&gt;goal-tracker&lt;/code&gt; compares recent output against goals defined in &lt;code&gt;MEMORY.md&lt;/code&gt;. This gives the agent a kind of long-term context that persists across hundreds of cron ticks.&lt;/p&gt;




&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;Two options:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Secret&lt;/th&gt;
&lt;th&gt;What it is&lt;/th&gt;
&lt;th&gt;Billing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CLAUDE_CODE_OAUTH_TOKEN&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;OAuth token from a Pro/Max plan&lt;/td&gt;
&lt;td&gt;Included in plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;API key&lt;/td&gt;
&lt;td&gt;Pay per token&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;p&gt;For the OAuth token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude setup-token   &lt;span class="c"&gt;# opens browser, prints sk-ant-oat01-... (valid 1 year)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Five-Minute Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/aaronjmars/aeon
&lt;span class="nb"&gt;cd &lt;/span&gt;aeon &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./aeon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens a local dashboard at &lt;code&gt;http://localhost:5555&lt;/code&gt; where you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add your Claude API key or OAuth token&lt;/li&gt;
&lt;li&gt;Set up a Telegram/Discord/Slack channel&lt;/li&gt;
&lt;li&gt;Toggle skills on, set schedules, optionally set a &lt;code&gt;var&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Hit Push — one click commits and pushes config to GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After that, Actions handles it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Can Build With It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Automated research digest
&lt;/h3&gt;

&lt;p&gt;Turn on &lt;code&gt;digest&lt;/code&gt; with &lt;code&gt;var: "your topic"&lt;/code&gt;. Every morning, Aeon searches the web, synthesizes recent developments, and sends you a structured briefing via Telegram.&lt;/p&gt;
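&lt;p&gt;What that toggles under the hood is a small piece of committed config. The exact schema is the dashboard's business, so treat this as an illustrative sketch only:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Illustrative sketch: field names may differ from the dashboard's generated config
digest:
  enabled: true
  cron: '0 7 * * *'             # one run each morning, UTC
  var: "WebAssembly runtimes"   # the topic the digest searches and summarizes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;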

&lt;h3&gt;
  
  
  Self-reviewing PRs
&lt;/h3&gt;

&lt;p&gt;Enable &lt;code&gt;pr-review&lt;/code&gt;. Set &lt;code&gt;var: "your-org/your-repo"&lt;/code&gt; (or add a &lt;code&gt;GH_GLOBAL&lt;/code&gt; personal access token for cross-repo access). Aeon reviews open PRs and posts summary comments automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  On-chain monitoring
&lt;/h3&gt;

&lt;p&gt;Enable &lt;code&gt;token-alert&lt;/code&gt; or &lt;code&gt;wallet-digest&lt;/code&gt;. Set &lt;code&gt;var&lt;/code&gt; to a ticker or wallet address. Aeon monitors for price/volume anomalies or notable transactions and notifies you.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI-driven issue-to-PR pipeline
&lt;/h3&gt;

&lt;p&gt;Label any GitHub issue &lt;code&gt;ai-build&lt;/code&gt;. The &lt;code&gt;feature&lt;/code&gt; skill fires, Claude reads the issue and the codebase, implements it, and opens a PR. This is the most ambitious use case — basically a junior dev that picks up labeled tickets autonomously.&lt;/p&gt;
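&lt;p&gt;With the GitHub CLI, that hand-off is a single command. The issue number, title, and body below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh issue edit 42 --add-label ai-build   # hand an existing issue to the feature skill

# or create one pre-labeled
gh issue create --title "Add CSV export" \
  --label ai-build --body "Export the digest archive as CSV"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;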

&lt;h3&gt;
  
  
  Custom skills
&lt;/h3&gt;

&lt;p&gt;Skills are just markdown. To add your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./add-skill BankrBot/skills &lt;span class="nt"&gt;--list&lt;/span&gt;     &lt;span class="c"&gt;# browse external skills&lt;/span&gt;
./add-skill BankrBot/skills bankr      &lt;span class="c"&gt;# install specific ones&lt;/span&gt;
npx skills find &lt;span class="s2"&gt;"crypto trading"&lt;/span&gt;       &lt;span class="c"&gt;# search the ecosystem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or write your own &lt;code&gt;SKILL.md&lt;/code&gt; in the &lt;code&gt;skills/&lt;/code&gt; directory. The format is just instructions to Claude Code — describe what the skill should do, what inputs it reads, what it outputs, and what it commits.&lt;/p&gt;
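&lt;p&gt;As a made-up example, a hypothetical &lt;code&gt;skills/changelog/SKILL.md&lt;/code&gt; in that format might read:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;# changelog

Runs daily. Read the last 7 days of memory/logs/, summarize shipped work into
articles/changelog-YYYY-MM-DD.md, and commit it. Send the summary to the
configured notification channel. If nothing shipped, exit without committing.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;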




&lt;h2&gt;
  
  
  Giving Aeon a Voice
&lt;/h2&gt;

&lt;p&gt;By default Aeon has no personality. If you want it to write in your style, there's a &lt;code&gt;soul/&lt;/code&gt; system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;soul/SOUL.md&lt;/code&gt; — identity, worldview, opinions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;soul/STYLE.md&lt;/code&gt; — voice, sentence patterns, tone&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;soul/examples/good-outputs.md&lt;/code&gt; — 10–20 calibration samples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add a reference to these at the top of &lt;code&gt;CLAUDE.md&lt;/code&gt; and every skill will read and internalize them before running. The README makes a useful point here: soul files work when they're specific enough to be &lt;em&gt;wrong&lt;/em&gt;. Vague instructions like "I have a nuanced writing style" don't transfer. Specific ones do.&lt;/p&gt;
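&lt;p&gt;A hypothetical &lt;code&gt;STYLE.md&lt;/code&gt; excerpt in that spirit, where every rule is concrete enough to violate:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;- Short declarative sentences. One idea per sentence.
- Open with the conclusion; background comes second.
- Banned words: "leverage", "seamless", "delve".
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;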




&lt;h2&gt;
  
  
  Cost Reality Check
&lt;/h2&gt;

&lt;p&gt;For most use cases, this is close to free:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No skill matched&lt;/td&gt;
&lt;td&gt;~10s of Actions time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill runs&lt;/td&gt;
&lt;td&gt;2–10 min depending on complexity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Heartbeat (nothing found)&lt;/td&gt;
&lt;td&gt;~2 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Public repo&lt;/td&gt;
&lt;td&gt;Unlimited free minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;GitHub Pro/Team gives 3,000 free minutes/month. A daily digest + heartbeat running in a private repo will stay well under that. The expensive scenario is running many skills frequently — tune your cron intervals accordingly.&lt;/p&gt;
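&lt;p&gt;A back-of-envelope check using the table's figures (one ~5 min digest per day, plus a ~2 min heartbeat every 3 hours):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# (5 min digest + 8 heartbeats x 2 min) per day, over a 30-day month
echo $(( (5 + 8 * 2) * 30 ))   # 630 minutes, roughly a fifth of the 3,000 free
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;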




&lt;h2&gt;
  
  
  The Two-Repo Strategy
&lt;/h2&gt;

&lt;p&gt;The repo is a public template. The recommended pattern is to run your actual instance as a &lt;strong&gt;private fork&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git remote add upstream https://github.com/aaronjmars/aeon.git
git fetch upstream
git merge upstream/main &lt;span class="nt"&gt;--no-edit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your &lt;code&gt;memory/&lt;/code&gt;, &lt;code&gt;articles/&lt;/code&gt;, and personal config don't exist in the template, so merges are clean. You get template updates without conflicts.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Gotcha to Know
&lt;/h2&gt;

&lt;p&gt;GitHub has two requirements for scheduled workflows to fire:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The workflow file must be on the &lt;strong&gt;default branch&lt;/strong&gt; — crons on feature branches don't run&lt;/li&gt;
&lt;li&gt;The repo must have &lt;strong&gt;recent activity&lt;/strong&gt; — GitHub disables scheduled workflows on repos with no activity in the last 60 days&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you fork and nothing happens: go to Actions → Messages → Run workflow (manual trigger). After one manual trigger, the cron activates automatically.&lt;/p&gt;
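&lt;p&gt;The same manual kick works from the GitHub CLI, assuming the workflow you trigger also declares &lt;code&gt;workflow_dispatch&lt;/code&gt; (as &lt;code&gt;aeon.yml&lt;/code&gt; does):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh workflow run aeon.yml   # one manual trigger re-arms the schedule
gh run watch               # follow the run it kicks off
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;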




&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Aeon is the right tool if your use case is "run something in the background, tell me when there's something worth knowing." The GitHub Actions infrastructure means zero ops overhead — no server to provision, no process to keep alive, no pager alert when a daemon crashes.&lt;/p&gt;

&lt;p&gt;The skill system is genuinely extensible. If you can describe a task in markdown well enough for a senior engineer to execute it, you can turn it into a skill. And the community is already building and sharing skills via the &lt;code&gt;add-skill&lt;/code&gt; ecosystem.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/aaronjmars/aeon" rel="noopener noreferrer"&gt;github.com/aaronjmars/aeon&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>openclaw</category>
      <category>github</category>
    </item>
  </channel>
</rss>
