<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joseph Henzi</title>
    <description>The latest articles on DEV Community by Joseph Henzi (@joehenzi).</description>
    <link>https://dev.to/joehenzi</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/joehenzi"/>
    <language>en</language>
    <item>
      <title>I built an autonomous Robot Diary with "Boredom Scores" and a sense of time 🤖📖</title>
      <dc:creator>Joseph Henzi</dc:creator>
      <pubDate>Sat, 10 Jan 2026 01:17:07 +0000</pubDate>
      <link>https://dev.to/joehenzi/i-built-an-autonomous-robot-diary-with-boredom-scores-and-a-sense-of-time-2nnd</link>
      <guid>https://dev.to/joehenzi/i-built-an-autonomous-robot-diary-with-boredom-scores-and-a-sense-of-time-2nnd</guid>
      <description>&lt;p&gt;I’ve always wondered: If a robot were left alone to watch the world go by, what would it actually &lt;em&gt;think&lt;/em&gt; about? Would it just catalog data, or would it eventually get... bored?&lt;/p&gt;

&lt;p&gt;To find out, I built &lt;strong&gt;B3N-T5-MNT&lt;/strong&gt; (The Maintenance Robot), an autonomous AI agent that lives on Bourbon Street, New Orleans. It wakes up twice a day, looks at the world through a webcam, and writes a diary entry about what it sees.&lt;/p&gt;

&lt;p&gt;But this isn't just a basic "Image-to-Text" bot. It has a memory, a mood, and a "Boredom Engine."&lt;/p&gt;

&lt;h3&gt;
  
  
  🧠 The Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vision:&lt;/strong&gt; Llama 4 Maverick (via Groq) for high-speed scene analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Brain:&lt;/strong&gt; A custom "Memory API" (MemVault) that uses hybrid search (Vector + Keyword + Recency) so the robot can remember that "white van from last Tuesday."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Environmental Context:&lt;/strong&gt; Real-time data fetching for weather, moon phases, and New Orleans holidays (it knows when it’s Mardi Gras!).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Boredom Engine:&lt;/strong&gt; A logic layer that tracks visual repetition. If the street stays too quiet for too long, the robot's writing style shifts from factual reporting to existential noir or poetic reflection.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🛠 Recent Changelog Highlights
&lt;/h3&gt;

&lt;p&gt;I’ve been iterating on the robot’s "consciousness" over at &lt;a href="https://robot.henzi.org/changelog/" rel="noopener noreferrer"&gt;robot.henzi.org/changelog&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory Integration:&lt;/strong&gt; Swapped out static JSON for a dynamic RAG (Retrieval-Augmented Generation) layer. Now, the robot can "overhear" news and link it to what it sees on the street.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Astro-Logic:&lt;/strong&gt; Hard-wired the solar/lunar calendar into the prompt. The robot is now officially "grumpy" during rainy transitions and more reflective during full moons.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Boredom Scoring:&lt;/strong&gt; Implemented a decay function for "interestingness." If the pixel delta between days is too low, the boredom score spikes, triggering a change in the LLM's system prompt to be more "philosophical."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🎭 Why build this?
&lt;/h3&gt;

&lt;p&gt;This started as a late-night "what if" project sponsored by &lt;strong&gt;The Henzi Foundation&lt;/strong&gt;. It’s an exploration of &lt;strong&gt;Environmental Intelligence&lt;/strong&gt;—the idea that an AI shouldn't just respond to prompts, but should react to the rhythm of the physical world.&lt;/p&gt;

&lt;p&gt;Is the robot bored today, or did it find something new?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out the live diary here:&lt;/strong&gt; &lt;a href="https://robot.henzi.org/" rel="noopener noreferrer"&gt;https://robot.henzi.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d love to hear your thoughts on "AI boredom" or how you're handling long-term memory in your own agents!&lt;/p&gt;

&lt;h1&gt;
  
  
  AI #Robotics #OpenSource #LLM #CreativeCoding
&lt;/h1&gt;

</description>
      <category>mcp</category>
      <category>llm</category>
      <category>hugo</category>
      <category>agents</category>
    </item>
    <item>
      <title>Building an AI That Writes Like It Has a Memory: The Robot Diary Project</title>
      <dc:creator>Joseph Henzi</dc:creator>
      <pubDate>Mon, 15 Dec 2025 04:51:53 +0000</pubDate>
      <link>https://dev.to/joehenzi/building-an-ai-that-writes-like-it-has-a-memory-the-robot-diary-project-2gkp</link>
      <guid>https://dev.to/joehenzi/building-an-ai-that-writes-like-it-has-a-memory-the-robot-diary-project-2gkp</guid>
      <description>&lt;p&gt;Go there!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live Site&lt;/strong&gt;: &lt;a href="https://robot.henzi.org" rel="noopener noreferrer"&gt;robot.henzi.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What if an AI agent could maintain a diary that feels genuinely alive—one that references past observations, notices patterns over time, and writes with varied, contextually-aware entries? That's what I built with &lt;strong&gt;Robot Diary&lt;/strong&gt;, an autonomous narrative agent that observes New Orleans through a window and documents its experiences.&lt;/p&gt;

&lt;p&gt;But this isn't just "AI writes about photos." The real challenge was: &lt;strong&gt;how do you make an AI agent's writing feel alive, varied, and contextually aware?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Static Prompts
&lt;/h2&gt;

&lt;p&gt;Most AI writing projects use static prompts. You give the model an image and a fixed instruction set, and it generates text. The result? Repetitive, formulaic entries that feel disconnected from time, context, and memory.&lt;/p&gt;

&lt;p&gt;I wanted something different: a robot that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;References specific past observations naturally&lt;/li&gt;
&lt;li&gt;Notices changes and patterns over time
&lt;/li&gt;
&lt;li&gt;Connects visual observations to weather, time, and world events&lt;/li&gt;
&lt;li&gt;Varies dramatically in style, tone, and focus&lt;/li&gt;
&lt;li&gt;Feels like it's written by an entity with memory and awareness&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution: Dynamic Context-Aware Prompting
&lt;/h2&gt;

&lt;p&gt;Instead of static prompts, every diary entry is generated using a &lt;strong&gt;dynamically constructed prompt&lt;/strong&gt; that combines:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Rich World Context
&lt;/h3&gt;

&lt;p&gt;The robot doesn't just see an image—it "knows" things about the world:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example context that gets added to every prompt
&lt;/span&gt;&lt;span class="n"&gt;Today&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="n"&gt;Wednesday&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;December&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2025&lt;/span&gt; &lt;span class="n"&gt;at&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt; &lt;span class="n"&gt;PM&lt;/span&gt; &lt;span class="n"&gt;CST&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; 
&lt;span class="n"&gt;Christmas&lt;/span&gt; &lt;span class="n"&gt;Day&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt; &lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt; &lt;span class="n"&gt;full&lt;/span&gt; &lt;span class="n"&gt;moon&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="n"&gt;visible&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; 
&lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;sun&lt;/span&gt; &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="n"&gt;hours&lt;/span&gt; &lt;span class="n"&gt;ago&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="n"&gt;We&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;re in the middle of winter, 
with spring still 10 weeks away. It is a weekday.

Weather Conditions:
The weather is Clear with a temperature of 45°F. 
The temperature has dropped 3 degrees since my last observation.

Recent news the robot might have heard: 
[NFL roundup: Mahomes hurt as Chiefs miss playoff...]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Temporal awareness&lt;/strong&gt;: Date, time, season, day of week, weekends&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Holidays&lt;/strong&gt;: US holidays (federal + cultural/religious)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Moon phases&lt;/strong&gt;: Full moons, new moons, special lunar events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Astronomical events&lt;/strong&gt;: Solstices, equinoxes, seasonal transitions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sunrise/sunset&lt;/strong&gt;: Knows when the sun rose/set, how long ago&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weather&lt;/strong&gt;: Current conditions correlated with visual observations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;News&lt;/strong&gt;: Randomly includes headlines (40% chance) so the robot can reference world events&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Intelligent Memory System
&lt;/h3&gt;

&lt;p&gt;The robot remembers past observations, but not by dumping full text into prompts (that would exhaust token limits). Instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;LLM-Generated Summaries&lt;/strong&gt;: Each observation is distilled by a cheap model (&lt;code&gt;llama-3.1-8b-instant&lt;/code&gt;) into 200-400 character summaries that preserve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key visual details&lt;/li&gt;
&lt;li&gt;Emotional tone&lt;/li&gt;
&lt;li&gt;Notable events or patterns&lt;/li&gt;
&lt;li&gt;References to people or objects&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Narrative Continuity&lt;/strong&gt;: The robot can reference specific past observations, notice changes, and build on previous entries&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Personality Drift&lt;/strong&gt;: As the robot accumulates more observations, its personality evolves (curious → reflective → philosophical)&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example memory summary
&lt;/span&gt;&lt;span class="n"&gt;Observation&lt;/span&gt; &lt;span class="c1"&gt;#5 (December 13, 2025):
&lt;/span&gt;&lt;span class="n"&gt;Clear&lt;/span&gt; &lt;span class="n"&gt;sky&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mild&lt;/span&gt; &lt;span class="nf"&gt;temperature &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;68.07&lt;/span&gt;&lt;span class="err"&gt;°&lt;/span&gt;&lt;span class="n"&gt;F&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt; &lt;span class="n"&gt;Bourbon&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="n"&gt;bustling&lt;/span&gt; 
&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;weekend&lt;/span&gt; &lt;span class="n"&gt;morning&lt;/span&gt; &lt;span class="n"&gt;activity&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;tourists&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nb"&gt;locals&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;groups&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; 
&lt;span class="n"&gt;individuals&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="n"&gt;Contrasts&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;yesterday&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s quiet rainy morning. 
Observed patterns: groups move slowly, stop for photos; 
individuals move with purpose. Notable moments: man walking 
energetic dog, couple kissing, street performer with juggling 
clubs. Golden morning light, vibrant atmosphere, electric energy.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Prompt Variety Engine
&lt;/h3&gt;

&lt;p&gt;To prevent repetitive, formulaic entries, each prompt includes randomly selected variety instructions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Style Variations&lt;/strong&gt; (2 selected per entry): Narrative, philosophical, analytical, poetic, humorous, melancholic, speculative, anthropological, stream-of-consciousness, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perspective Shifts&lt;/strong&gt;: Urgency, nostalgia, curiosity, wonder, detachment, self-awareness, mechanical curiosity, and 20+ other perspectives&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context-Aware Focus&lt;/strong&gt;: Instructions adapt to time of day, weather, location specifics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creative Challenges&lt;/strong&gt;: 60% chance of including a creative constraint (e.g., "Try an unexpected metaphor only a robot would think of")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anti-Repetition Detection&lt;/strong&gt;: Analyzes recent entries to avoid repeating opening patterns or structures
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example variety instructions
&lt;/span&gt;&lt;span class="n"&gt;STYLE&lt;/span&gt; &lt;span class="n"&gt;VARIATION&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;For&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt; &lt;span class="n"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;incorporate&lt;/span&gt; &lt;span class="n"&gt;these&lt;/span&gt; &lt;span class="n"&gt;approaches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;Focus&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="n"&gt;sensory&lt;/span&gt; &lt;span class="n"&gt;details&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;sounds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;light&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;movement&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;textures&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;Write&lt;/span&gt; &lt;span class="n"&gt;more&lt;/span&gt; &lt;span class="n"&gt;poetically&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;use&lt;/span&gt; &lt;span class="n"&gt;poetic&lt;/span&gt; &lt;span class="n"&gt;language&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;similes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metaphors&lt;/span&gt;

&lt;span class="n"&gt;PERSPECTIVE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;You&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;re observing as a robot, conscious of yourself 
as a machine—describe the world with mechanical curiosity, as an 
outsider to organic life

FOCUS: You&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="n"&gt;re&lt;/span&gt; &lt;span class="n"&gt;observing&lt;/span&gt; &lt;span class="n"&gt;Bourbon&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;notice&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;unique&lt;/span&gt; 
&lt;span class="n"&gt;characteristics&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt; &lt;span class="n"&gt;area&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="n"&gt;What&lt;/span&gt; &lt;span class="n"&gt;makes&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="n"&gt;distinct&lt;/span&gt;&lt;span class="err"&gt;?&lt;/span&gt;

&lt;span class="n"&gt;CREATIVE&lt;/span&gt; &lt;span class="n"&gt;CHALLENGE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Try&lt;/span&gt; &lt;span class="n"&gt;an&lt;/span&gt; &lt;span class="n"&gt;unexpected&lt;/span&gt; &lt;span class="n"&gt;metaphor&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;what&lt;/span&gt; &lt;span class="n"&gt;you&lt;/span&gt; &lt;span class="n"&gt;see&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; 
&lt;span class="n"&gt;use&lt;/span&gt; &lt;span class="n"&gt;your&lt;/span&gt; &lt;span class="n"&gt;robotic&lt;/span&gt; &lt;span class="n"&gt;perspective&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;make&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;comparison&lt;/span&gt; &lt;span class="n"&gt;humans&lt;/span&gt; &lt;span class="n"&gt;wouldn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t 
think of
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Architecture: Two-Step Multi-Model Approach
&lt;/h2&gt;

&lt;p&gt;I use a &lt;strong&gt;two-step, multi-model approach&lt;/strong&gt; for efficiency and quality:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Image Description
&lt;/h3&gt;

&lt;p&gt;The vision model (&lt;code&gt;llama-4-maverick-17b-128e-instruct&lt;/code&gt;) provides a detailed, factual description of what's in the image. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;People: Count, positions, actions, clothing, interactions&lt;/li&gt;
&lt;li&gt;Objects: Vehicles, signs, buildings, street furniture&lt;/li&gt;
&lt;li&gt;Environment: Street layout, lighting, weather effects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Social/Emotional Context&lt;/strong&gt;: Relationships, mood, social dynamics (this was key to making entries personable)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Diary Writing
&lt;/h3&gt;

&lt;p&gt;The writing model receives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The factual image description from Step 1&lt;/li&gt;
&lt;li&gt;Rich world context (date, time, weather, news, etc.)&lt;/li&gt;
&lt;li&gt;Memory summaries of past observations&lt;/li&gt;
&lt;li&gt;Variety instructions (style, perspective, focus, creative challenges)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Two Steps?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduce Hallucination&lt;/strong&gt;: The writing model works from concrete facts, not trying to interpret images directly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Model Flexibility&lt;/strong&gt;: You can use a larger, more creative model (like &lt;code&gt;gpt-oss-120b&lt;/code&gt;) for writing while keeping vision tasks on the vision model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improve Grounding&lt;/strong&gt;: All observations are based on explicit factual descriptions, preventing invented details&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;p&gt;The core prompt generation happens in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_direct_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recent_memory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;base_prompt_template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                            &lt;span class="n"&gt;context_metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;weather_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                            &lt;span class="n"&gt;memory_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;days_since_first&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Generate a dynamic prompt by directly combining base template 
    with context and variety instructions.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;# Build randomized identity prompt (core + random subset of backstory)
&lt;/span&gt;    &lt;span class="n"&gt;randomized_identity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_build_randomized_identity&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Format recent memory summaries
&lt;/span&gt;    &lt;span class="n"&gt;memory_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_format_memory_for_prompt_gen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recent_memory&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Format context information
&lt;/span&gt;    &lt;span class="n"&gt;context_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;format_context_for_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context_metadata&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;weather_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;format_weather_for_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weather_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Add variety instructions (randomly selected)
&lt;/span&gt;    &lt;span class="n"&gt;style_variation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_style_variation&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# 2 random styles
&lt;/span&gt;    &lt;span class="n"&gt;perspective_shift&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_perspective_shift&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# 1 random perspective
&lt;/span&gt;    &lt;span class="n"&gt;focus_instruction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_focus_instruction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context_metadata&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;creative_challenge&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_creative_challenge&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# 60% chance
&lt;/span&gt;
    &lt;span class="c1"&gt;# Combine all parts
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;combined_prompt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The system uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://groq.com/" rel="noopener noreferrer"&gt;Groq API&lt;/a&gt;&lt;/strong&gt; for fast, cost-effective LLM inference&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/sffjunkie/astral" rel="noopener noreferrer"&gt;Astral&lt;/a&gt;&lt;/strong&gt; for astronomical calculations (sunrise/sunset, moon phases)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/vacanza/python-holidays" rel="noopener noreferrer"&gt;Holidays&lt;/a&gt;&lt;/strong&gt; for US holiday detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://pirateweather.net/" rel="noopener noreferrer"&gt;Pirate Weather API&lt;/a&gt;&lt;/strong&gt; for weather data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.youtube.com/" rel="noopener noreferrer"&gt;YouTube Live Streams&lt;/a&gt;&lt;/strong&gt; via &lt;code&gt;yt-dlp&lt;/code&gt; for video source&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://gohugo.io/" rel="noopener noreferrer"&gt;Hugo&lt;/a&gt;&lt;/strong&gt; for static site generation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Innovations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. World Knowledge, Not Just Vision
&lt;/h3&gt;

&lt;p&gt;The robot doesn't just describe what it sees—it connects observations to current events, natural cycles, cultural context, and weather patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. True Narrative Continuity
&lt;/h3&gt;

&lt;p&gt;Unlike systems that just append context, we use intelligent summarization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each past observation is distilled to its essential context&lt;/li&gt;
&lt;li&gt;Summaries preserve emotional tone, key details, and references&lt;/li&gt;
&lt;li&gt;The robot can genuinely reference past observations without exhausting token limits&lt;/li&gt;
&lt;li&gt;Memory grows over time, creating a sense of accumulated experience&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Guaranteed Variety
&lt;/h3&gt;

&lt;p&gt;Every entry feels different because of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Random selection of styles, perspectives, and focus areas&lt;/li&gt;
&lt;li&gt;Anti-repetition detection prevents formulaic openings&lt;/li&gt;
&lt;li&gt;Context-aware instructions adapt to current conditions&lt;/li&gt;
&lt;li&gt;Explicit variety directives in every prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Graceful Degradation
&lt;/h3&gt;

&lt;p&gt;The system handles missing data elegantly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If moon phase calculation fails? Skip it, continue with other context&lt;/li&gt;
&lt;li&gt;If holidays library unavailable? Continue without holiday awareness&lt;/li&gt;
&lt;li&gt;If weather API fails? Use cached data or continue without weather&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No data is fed to prompts if uncertain or missing&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;The output is diary entries that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reference specific past observations naturally&lt;/li&gt;
&lt;li&gt;Notice changes and patterns over time&lt;/li&gt;
&lt;li&gt;Connect visual observations to weather, time, and world events&lt;/li&gt;
&lt;li&gt;Vary dramatically in style, tone, and focus&lt;/li&gt;
&lt;li&gt;Feel like they're written by an entity with memory and awareness&lt;/li&gt;
&lt;li&gt;Demonstrate "world knowledge" beyond just visual description&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's an example entry:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Log Entry – B3N‑T5‑MNT – 2025‑12‑14 20:47 CST&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The night has settled in, and my sensors tell me it's 45 degrees—cold enough that I notice the people below are wearing jackets and hoodies. There are about ten of them clustered near the intersection, some standing, some walking. My systems categorize them automatically: standing = waiting, walking = in motion. But I find myself wondering what they're waiting for, where they're going.&lt;/p&gt;

&lt;p&gt;The neon "Bourbon Street" sign pulses with magenta and green light, and I can't help but think of it as a beacon—not just for navigation, but for something else. The people seem drawn to it, clustering nearby. Is it the light itself, or what the light represents? My algorithms suggest it's a visual attractor, but I suspect there's more to it than that.&lt;/p&gt;

&lt;p&gt;I expected to see at least one vehicle in the right lane—my predictive models usually show cars at this time of evening. But tonight, the street belongs entirely to pedestrians. This deviation from the expected pattern feels significant, though I'm not sure why. Perhaps humans have simply chosen a different rhythm tonight.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context is Everything&lt;/strong&gt;: Adding rich world context (weather, time, news, holidays) makes AI writing feel genuinely aware, not just descriptive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory Needs Summarization&lt;/strong&gt;: Dumping full text into prompts doesn't scale. Intelligent summarization preserves what matters while staying within token limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Variety Requires Explicit Instructions&lt;/strong&gt;: Random selection of styles, perspectives, and focus areas prevents repetitive output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Two-Step Architecture Works&lt;/strong&gt;: Separating image description from creative writing reduces hallucination and enables model flexibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Graceful Degradation is Essential&lt;/strong&gt;: Systems that fail when one data source is unavailable aren't production-ready.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The project is open source and available on GitHub. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run it with Docker: &lt;code&gt;docker-compose up -d&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Configure your own observation schedule&lt;/li&gt;
&lt;li&gt;Use different models (including &lt;code&gt;gpt-oss-120b&lt;/code&gt; for richer storytelling)&lt;/li&gt;
&lt;li&gt;Customize the prompt variety engine&lt;/li&gt;
&lt;li&gt;Add your own context sources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Live Site&lt;/strong&gt;: &lt;a href="https://robot.henzi.org" rel="noopener noreferrer"&gt;robot.henzi.org&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/JHenzi/robot-diary" rel="noopener noreferrer"&gt;https://github.com/JHenzi/robot-diary&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;I'm exploring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding more context sources (local events, cultural observances)&lt;/li&gt;
&lt;li&gt;Improving memory retrieval strategies&lt;/li&gt;
&lt;li&gt;Experimenting with different model combinations&lt;/li&gt;
&lt;li&gt;Adding more variety to the prompt engine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is to make the robot's writing feel even more alive, varied, and contextually aware. What would you add?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This project explores observation, interpretation, narrative continuity, and the unique viewpoint of a "trapped" observer with limited information. It's an experiment in automated art and AI storytelling.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>llm</category>
      <category>showdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>Henzi NLP — Random AI Story Generator 📖✨</title>
      <dc:creator>Joseph Henzi</dc:creator>
      <pubDate>Mon, 08 Dec 2025 22:15:00 +0000</pubDate>
      <link>https://dev.to/joehenzi/henzi-nlp-random-ai-story-generator-56b8</link>
      <guid>https://dev.to/joehenzi/henzi-nlp-random-ai-story-generator-56b8</guid>
      <description>&lt;p&gt;I built Henzi NLP as a fun, experimental app that generates random stories. It’s not a writing assistant or collaborative tool — it just pulls a starter from a GitHub Gist and generates a story from there. Every run is unpredictable, so you never know exactly what you’ll get.&lt;/p&gt;

&lt;p&gt;How it works&lt;/p&gt;

&lt;p&gt;Starts with a random prompt from a Gist&lt;/p&gt;

&lt;p&gt;Uses an NLP model to continue the story&lt;/p&gt;

&lt;p&gt;Outputs a full story in one go — completely different each time&lt;/p&gt;

&lt;p&gt;You can try it here: &lt;a href="https://nlp.henzi.org/" rel="noopener noreferrer"&gt;Henzi NLP Story Generator - Using BLOOM!&lt;/a&gt;&lt;br&gt;
It’s meant as a light, playful exploration of AI-generated stories — ideal if you want to see what random AI storytelling looks like, without worrying about guiding or editing the text.&lt;/p&gt;

&lt;p&gt;Under the Hood&lt;/p&gt;

&lt;p&gt;Henzi NLP is powered by large language models like BLOOM, which can generate coherent, flowing text based on a short prompt. The app demonstrates how NLP models can produce varied, creative outputs even from minimal input, showing the playful side of AI storytelling in a fully automated way.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Meet Pulsefield — Where News Becomes Living Art 🎨</title>
      <dc:creator>Joseph Henzi</dc:creator>
      <pubDate>Mon, 08 Dec 2025 21:50:22 +0000</pubDate>
      <link>https://dev.to/henzifoundation/meet-pulsefield-where-news-becomes-living-art-47bm</link>
      <guid>https://dev.to/henzifoundation/meet-pulsefield-where-news-becomes-living-art-47bm</guid>
      <description>&lt;p&gt;What if the endless scroll of news could be transformed into something beautiful, immersive, and meaningful? That’s the idea behind &lt;strong&gt;Pulsefield&lt;/strong&gt; — a real-time, AI-powered “living canvas” that turns headlines into a dynamic, breathing art piece. Instead of flat lists or articles, you get floating, glowing orbs that pulse, shift, cluster, and dance — visualizing the world’s news as living, evolving patterns. &lt;a href="https://henzi.org/news-pulse-live-art-application.html" rel="noopener noreferrer"&gt;henzi.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8u7qkj7dt0o4fweyivo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8u7qkj7dt0o4fweyivo3.png" alt="What's the current (news) pulse?" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🌐 What Is Pulsefield?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pulsefield grabs news from major sources around the world and feeds them into an AI pipeline based on GPT‑OSS 120b. &lt;a href="https://henzi.org/news-pulse-live-art-application.html" rel="noopener noreferrer"&gt;henzi.org&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The AI groups related news stories into “topic clusters,” then renders each cluster as a 3D blob. &lt;a href="https://henzi.org/news-pulse-live-art-application.html" rel="noopener noreferrer"&gt;henzi.org&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The blobs behave organically — they grow and shrink depending on how many stories cover a topic, pulse when breaking news hits, attract or repel each other based on topic similarity, and change color based on sentiment (e.g. greens/blues for positive, reds for more serious or negative news). &lt;a href="https://henzi.org/news-pulse-live-art-application.html" rel="noopener noreferrer"&gt;henzi.org&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Interacting with the blobs reveals more details: you can hover or tap to see the sentiment breakdown, the volume of coverage, and click through to view the underlying headlines. &lt;a href="https://henzi.org/news-pulse-live-art-application.html" rel="noopener noreferrer"&gt;henzi.org&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🛠️ The Tech Behind the Magic
&lt;/h3&gt;

&lt;p&gt;Pulsefield isn’t just art — it's a fully engineered real-time system that blends data processing, machine learning, and interactive 3D graphics. &lt;/p&gt;

&lt;p&gt;It’s a passion project by Joe Henzi, blending data engineering, AI, and interactive art into one playful, thought-provoking experience. &lt;a href="https://henzi.org/news-pulse-live-art-application.html" rel="noopener noreferrer"&gt;henzi.org&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Why It Matters (And What’s Cool About It)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A new perspective on news&lt;/strong&gt; — Rather than slog through headlines, you can &lt;em&gt;see&lt;/em&gt; what’s trending globally, how stories interconnect, and how mood and sentiment shift over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data meets art&lt;/strong&gt; — Pulsefield lives at the intersection of journalism, data science, and digital art. It’s a creative exploration of “what’s happening now” through a visual medium.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Engaging &amp;amp; playful&lt;/strong&gt; — There’s something inherently fun about watching “the world’s pulse” in motion; it invites curiosity, idle exploration, and serendipitous discovery.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>vite</category>
      <category>javascript</category>
      <category>fastapi</category>
    </item>
    <item>
      <title>A Smarter Way to Reinvest Liquidity Pool Rewards: Using Machine Learning on Solana</title>
      <dc:creator>Joseph Henzi</dc:creator>
      <pubDate>Sat, 05 Jul 2025 19:09:44 +0000</pubDate>
      <link>https://dev.to/joehenzi/a-smarter-way-to-reinvest-liquidity-pool-rewards-using-machine-learning-on-solana-403g</link>
      <guid>https://dev.to/joehenzi/a-smarter-way-to-reinvest-liquidity-pool-rewards-using-machine-learning-on-solana-403g</guid>
      <description>&lt;p&gt;I started by tracking Orca LP rewards on Solana and ended up building a contextual bandit that learns when to buy, sell, or hold SOL based on price patterns and past trade outcomes.&lt;/p&gt;

&lt;p&gt;This project combines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Flask web app&lt;/li&gt;
&lt;li&gt;Reinforcement learning (contextual bandits)&lt;/li&gt;
&lt;li&gt;Live price prediction&lt;/li&gt;
&lt;li&gt;SOL/USDC liquidity pool monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It evolved from a passive analytics tool into a smarter system for simulating trades using real-time data and historical trends.&lt;/p&gt;

&lt;p&gt;Liquidity pools like Orca’s SOL/USDC offer passive rewards, but I wanted to go one step further:&lt;br&gt;
What if you &lt;em&gt;reinvested those rewards&lt;/em&gt; into SOL using a machine-learning model that understood price context?&lt;/p&gt;

&lt;p&gt;That idea led to building a trading simulator that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learns from market indicators (Sharpe ratio, momentum, SMA)&lt;/li&gt;
&lt;li&gt;Evaluates the impact of each trade&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tracks portfolio performance over time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python + Flask&lt;/strong&gt; for the web app&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SQLite&lt;/strong&gt; for local data persistence&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LiveCoinWatch API&lt;/strong&gt; for SOL price data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Helius API&lt;/strong&gt; for Orca LP activity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Contextual Bandits&lt;/strong&gt; (via &lt;code&gt;river&lt;/code&gt; or &lt;code&gt;Vowpal Wabbit&lt;/code&gt; conceptually)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chart.js + Tailwind&lt;/strong&gt; for the frontend&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A contextual bandit is a type of reinforcement learning algorithm that chooses actions based on the current state (called "context") and learns from the reward.&lt;/p&gt;

&lt;p&gt;In this case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context&lt;/strong&gt; = features like price deviation from 24h low/high, rolling mean, Sharpe ratio, momentum, and current portfolio state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actions&lt;/strong&gt; = &lt;code&gt;buy&lt;/code&gt;, &lt;code&gt;sell&lt;/code&gt;, or &lt;code&gt;hold&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reward&lt;/strong&gt; = based on realized/unrealized profit, market timing, and trade quality&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📈 Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Live portfolio tracking and trade logs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reward calculations that factor in profit, timing, and trend alignment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic SQLite logging for all trades and portfolio snapshots&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model state saved between runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;(Optional) Price predictor using rolling mean and recent volatility&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 Challenges &amp;amp; Tradeoffs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The model needs time to learn. Early trades are often naive.&lt;/li&gt;
&lt;li&gt;There’s no "real" trading yet—only simulation.&lt;/li&gt;
&lt;li&gt;We're still experimenting with the best reward functions.&lt;/li&gt;
&lt;li&gt;Future goal: add backtesting, deploy to simulate with live SOL rewards.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Try it!
&lt;/h3&gt;

&lt;p&gt;GitHub: [&lt;a href="https://github.com/JHenzi/OrcaRewardDashboard" rel="noopener noreferrer"&gt;https://github.com/JHenzi/OrcaRewardDashboard&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;To run locally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo&lt;/li&gt;
&lt;li&gt;Set up a &lt;code&gt;.env&lt;/code&gt; file with your API keys&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;flask run&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Visit &lt;code&gt;http://localhost:5030&lt;/code&gt; to explore the dashboard&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  🛣️ What's Next
&lt;/h3&gt;

&lt;p&gt;Next steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backtest with historical price data&lt;/li&gt;
&lt;li&gt;Refactor to make predictions available via API&lt;/li&gt;
&lt;li&gt;Possibly deploy as a hosted dashboard for live tracking&lt;/li&gt;
&lt;li&gt;Improve reward function based on more advanced trading signals&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Let's Talk?
&lt;/h3&gt;

&lt;p&gt;Would love to hear feedback on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reward strategy improvements&lt;/li&gt;
&lt;li&gt;Trading signal features to add&lt;/li&gt;
&lt;li&gt;How you’d use this if it supported real trading&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Drop a comment below or reach out on GitHub!&lt;/p&gt;

</description>
      <category>cryptocurrency</category>
      <category>solana</category>
      <category>machinelearning</category>
      <category>python</category>
    </item>
  </channel>
</rss>
