<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: southy404</title>
    <description>The latest articles on DEV Community by southy404 (@southy404).</description>
    <link>https://dev.to/southy404</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3809619%2F33291cb3-329d-4655-9e52-c262e69cc242.png</url>
      <title>DEV Community: southy404</title>
      <link>https://dev.to/southy404</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/southy404"/>
    <language>en</language>
    <item>
      <title>The Agentic Shift Isn’t Coming. It’s Already Rewriting How We Build Software.</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Fri, 24 Apr 2026 10:27:13 +0000</pubDate>
      <link>https://dev.to/southy404/the-agentic-shift-isnt-coming-its-already-rewriting-how-we-build-software-4mk</link>
      <guid>https://dev.to/southy404/the-agentic-shift-isnt-coming-its-already-rewriting-how-we-build-software-4mk</guid>
      <description>&lt;p&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;At Google Cloud NEXT ’26, something clicked for me — and it honestly wasn’t what I expected.&lt;/p&gt;

&lt;p&gt;It wasn’t a new model, a faster API, or one of those polished demos that look great but don’t really change how you build things.&lt;/p&gt;

&lt;p&gt;It was the realization that I was still thinking in the wrong abstraction.&lt;/p&gt;

&lt;p&gt;While Google was showing systems that operate over time, coordinate across tools, and make decisions with context, I caught myself still thinking in endpoints, requests, and features.&lt;/p&gt;

&lt;p&gt;That gap is where the real shift is happening.&lt;/p&gt;




&lt;h2&gt;
  
  
  We Didn’t Just Get Better AI — We Got a Different Layer of Software
&lt;/h2&gt;

&lt;p&gt;For years, even as AI got better, our mental model didn’t really change.&lt;/p&gt;

&lt;p&gt;Most systems still worked like this: user sends a request, system processes it, returns a response. Even with LLMs, we mostly just swapped out deterministic logic for probabilistic outputs and called it a day.&lt;/p&gt;

&lt;p&gt;But what was presented at NEXT doesn’t really fit that anymore.&lt;/p&gt;

&lt;p&gt;These systems don’t just respond. They keep context over time, coordinate multiple agents, and keep doing things even when no one is actively interacting with them.&lt;/p&gt;

&lt;p&gt;That doesn’t feel like “AI inside your app.”&lt;/p&gt;

&lt;p&gt;It feels more like something that’s just… running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh34dm1a25f3kj19af4xy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh34dm1a25f3kj19af4xy.jpg" alt="Gemini Enterprise Agent Platform" width="800" height="450"&gt;&lt;/a&gt; &lt;em&gt;Source: &lt;a href="https://cloud.google.com/blog/topics/google-cloud-next/welcome-to-google-cloud-next26" rel="noopener noreferrer"&gt;Google Cloud NEXT ’26 — Official Announcement&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  From Output to Execution
&lt;/h2&gt;

&lt;p&gt;The biggest shift is easy to miss, but once you notice it, you can’t unsee it.&lt;/p&gt;

&lt;p&gt;We’re moving away from systems that are judged by how good their output looks, toward systems that are judged by what they actually &lt;em&gt;do&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Generating a nice answer is one thing.&lt;/p&gt;

&lt;p&gt;Actually executing a task across multiple systems — with permissions, constraints, and changing context — is a completely different problem.&lt;/p&gt;

&lt;p&gt;And you feel that difference immediately when you try to build something like this.&lt;/p&gt;

&lt;p&gt;Because suddenly it’s not about “did the response look right?”&lt;br&gt;&lt;br&gt;
It’s about “did the system actually do the right thing?”&lt;/p&gt;




&lt;h2&gt;
  
  
  You’re Not Just Writing Code Anymore
&lt;/h2&gt;

&lt;p&gt;This is the part that hit me the most.&lt;/p&gt;

&lt;p&gt;If you take this seriously, your role as a developer shifts.&lt;/p&gt;

&lt;p&gt;You’re not mainly writing endpoints, functions, or UI flows anymore.&lt;/p&gt;

&lt;p&gt;You’re defining responsibilities. You’re deciding who (or what) is allowed to do what, how decisions move through the system, and what should happen over time when different parts interact.&lt;/p&gt;

&lt;p&gt;At some point it stops feeling like assembling logic…&lt;/p&gt;

&lt;p&gt;…and starts feeling like designing behavior under constraints.&lt;/p&gt;




&lt;h2&gt;
  
  
  Multi-Agent Systems Look Clean — Until You Build Them
&lt;/h2&gt;

&lt;p&gt;On paper, multi-agent systems look almost too clean.&lt;/p&gt;

&lt;p&gt;You split things up nicely: one agent plans, another evaluates, another executes. Each has a clear role, everything is modular, everything makes sense.&lt;/p&gt;

&lt;p&gt;Until you actually try it.&lt;/p&gt;

&lt;p&gt;Because then you realize: complexity didn’t go away. It just moved.&lt;/p&gt;

&lt;p&gt;Instead of one complex system, you now have multiple smaller systems that need to agree with each other.&lt;/p&gt;

&lt;p&gt;And they don’t always do that.&lt;/p&gt;

&lt;p&gt;You can easily end up in situations where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one agent thinks something is ready to execute
&lt;/li&gt;
&lt;li&gt;another thinks it still needs clarification
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are “right” in isolation. The result is still wrong.&lt;/p&gt;

&lt;p&gt;No crash. No error. Just weird behavior.&lt;/p&gt;

&lt;p&gt;That’s a very different kind of problem.&lt;/p&gt;
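
&lt;p&gt;A tiny sketch makes that failure mode concrete. Everything here is hypothetical, not from any real framework: the same readiness rule, fed two slightly different context snapshots, produces a clean disagreement.&lt;/p&gt;

```python
# A minimal sketch, with hypothetical names: the same readiness rule,
# applied to two slightly different context snapshots, disagrees cleanly.
from dataclasses import dataclass

@dataclass
class Context:
    requirements_clarified: bool

def ready_to_execute(ctx: Context) -> bool:
    # Each agent applies the same rule to its own view of the world.
    return ctx.requirements_clarified

planner_view = Context(requirements_clarified=True)    # fresh snapshot
reviewer_view = Context(requirements_clarified=False)  # stale snapshot

votes = [ready_to_execute(planner_view), ready_to_execute(reviewer_view)]
assert votes == [True, False]  # no crash, no error, just disagreement
```

&lt;p&gt;Neither call is buggy in isolation; the failure only exists at the coordination layer, which is exactly why it never shows up as a stack trace.&lt;/p&gt;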

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6llzlei754kpn23wgh9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6llzlei754kpn23wgh9.gif" alt="Multi-Agent systems" width="498" height="213"&gt;&lt;/a&gt; &lt;/p&gt;




&lt;h2&gt;
  
  
  Context Is the Real Bottleneck Now
&lt;/h2&gt;

&lt;p&gt;For a long time, we all focused on models. Bigger, smarter, faster.&lt;/p&gt;

&lt;p&gt;But lately it feels like the bottleneck is somewhere else.&lt;/p&gt;

&lt;p&gt;Context.&lt;/p&gt;

&lt;p&gt;Not just having data, but having the &lt;em&gt;right&lt;/em&gt; data, in the &lt;em&gt;right&lt;/em&gt; shape, shared consistently across everything involved.&lt;/p&gt;

&lt;p&gt;Because if different parts of the system operate on slightly different context, things start drifting fast.&lt;/p&gt;

&lt;p&gt;Without a solid context layer, agents don’t really “understand” anything. They just make reasonable guesses.&lt;/p&gt;

&lt;p&gt;With it, they start to behave in a way that actually feels grounded.&lt;/p&gt;




&lt;h2&gt;
  
  
  Memory Changes the Nature of the System
&lt;/h2&gt;

&lt;p&gt;Stateless systems are simple. Every request is its own thing.&lt;/p&gt;

&lt;p&gt;Stateful systems are… not.&lt;/p&gt;

&lt;p&gt;As soon as you introduce memory, everything changes a bit. The system starts carrying history. Decisions are influenced by things that happened before, sometimes in ways that aren’t obvious anymore.&lt;/p&gt;

&lt;p&gt;That’s powerful, but also a bit uncomfortable.&lt;/p&gt;

&lt;p&gt;Because now you’re not debugging a single execution anymore.&lt;/p&gt;

&lt;p&gt;You’re trying to understand a chain of decisions that led to a certain outcome.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhds0dgwhautb1iwxp9jk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhds0dgwhautb1iwxp9jk.gif" alt="Agent decisions" width="408" height="164"&gt;&lt;/a&gt; &lt;/p&gt;




&lt;h2&gt;
  
  
  Governance Becomes a Core Design Problem
&lt;/h2&gt;

&lt;p&gt;Another thing that becomes obvious pretty quickly: once systems can act, control becomes critical.&lt;/p&gt;

&lt;p&gt;Not just “secure your API” kind of control.&lt;/p&gt;

&lt;p&gt;Actual decision control.&lt;/p&gt;

&lt;p&gt;Who is allowed to do what?&lt;br&gt;&lt;br&gt;
Which actions are valid?&lt;br&gt;&lt;br&gt;
What happens if something goes wrong?&lt;/p&gt;

&lt;p&gt;This is where identity, permissions, and traceability stop being “enterprise stuff” and become core to the system.&lt;/p&gt;

&lt;p&gt;Without that, autonomous systems aren’t just powerful — they’re kind of dangerous.&lt;/p&gt;
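
&lt;p&gt;As a rough sketch (all names here are hypothetical), that kind of decision control can start as small as an allowlist per agent plus an audit trail:&lt;/p&gt;

```python
# A rough sketch (all names hypothetical): decision control as an
# allowlist per agent plus an audit trail for traceability.
ALLOWED_ACTIONS = {
    "planner": {"read_calendar"},
    "executor": {"read_calendar", "send_email"},
}
audit_log = []

def authorize(agent: str, action: str) -> bool:
    # Every attempted action is checked and recorded, allowed or not.
    allowed = action in ALLOWED_ACTIONS.get(agent, set())
    audit_log.append((agent, action, allowed))
    return allowed

assert authorize("executor", "send_email") is True
assert authorize("planner", "send_email") is False  # denied, and logged
```

&lt;p&gt;The dictionary isn’t the point. The point is that every action passes through a checkpoint that can deny it and record it.&lt;/p&gt;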




&lt;h2&gt;
  
  
  Debugging Becomes About Decisions, Not Code
&lt;/h2&gt;

&lt;p&gt;This is probably the weirdest shift.&lt;/p&gt;

&lt;p&gt;In normal systems, something breaks and you trace it back to a line of code.&lt;/p&gt;

&lt;p&gt;Here, everything can technically work — and still be wrong.&lt;/p&gt;

&lt;p&gt;The issue isn’t that something failed. It’s that different parts of the system interpreted the situation differently or acted on slightly different context.&lt;/p&gt;

&lt;p&gt;So you’re not really debugging code anymore.&lt;/p&gt;

&lt;p&gt;You’re debugging decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0iyl2vq9nkr4yk43b5s.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0iyl2vq9nkr4yk43b5s.gif" alt="Debugging Multi-Agent systems" width="640" height="358"&gt;&lt;/a&gt; &lt;/p&gt;




&lt;h2&gt;
  
  
  Where Most Teams Are Still Thinking Too Small
&lt;/h2&gt;

&lt;p&gt;Right now, a lot of implementations still treat AI as a feature.&lt;/p&gt;

&lt;p&gt;Something behind an endpoint. Something inside a UI.&lt;/p&gt;

&lt;p&gt;But that framing feels… outdated.&lt;/p&gt;

&lt;p&gt;Because the real shift is deeper.&lt;/p&gt;

&lt;p&gt;The system itself becomes the AI. The UI is just one surface.&lt;/p&gt;

&lt;p&gt;What actually matters is what’s happening behind it — how agents coordinate, how context flows, how decisions are made over time.&lt;/p&gt;

&lt;p&gt;That’s where things get interesting.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Google Cloud NEXT ’26 didn’t just introduce new tools.&lt;/p&gt;

&lt;p&gt;It introduced a different way of thinking about software.&lt;/p&gt;

&lt;p&gt;Not as something that reacts to input…&lt;/p&gt;

&lt;p&gt;…but as something that acts, coordinates, and evolves over time.&lt;/p&gt;

&lt;p&gt;The real question isn’t whether you’ll use AI in your system.&lt;/p&gt;

&lt;p&gt;It’s whether you’re ready to build systems where behavior — not just code — is the main thing you design.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>I Tried OpenClaw on Windows with Ollama. I was hyped… until I wasn’t.</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:14:40 +0000</pubDate>
      <link>https://dev.to/southy404/i-tried-openclaw-on-windows-with-ollama-i-was-hyped-until-i-wasnt-111n</link>
      <guid>https://dev.to/southy404/i-tried-openclaw-on-windows-with-ollama-i-was-hyped-until-i-wasnt-111n</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Beginning
&lt;/h2&gt;

&lt;p&gt;Today was the day.&lt;/p&gt;

&lt;p&gt;For the first time, I cloned OpenClaw on my Windows machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FSLCByvcQIWkAAAAC%2Fdriving-fast.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FSLCByvcQIWkAAAAC%2Fdriving-fast.gif" width="414" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My mission was simple: build something for the OpenClaw Challenge using my local Ollama setup.&lt;/p&gt;

&lt;p&gt;At first, everything felt smooth. I cloned the repo, read the README, checked the well-written docs, followed the Windows setup instructions, and ran the install command.&lt;/p&gt;

&lt;p&gt;Then I saw this in the terminal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Windows detected - OpenClaw runs great on WSL2.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Native Windows might be trickier.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That was the first moment I got a little skeptical.&lt;/p&gt;

&lt;p&gt;Still, the setup looked clean. I was guided through onboarding, picked QuickStart, selected &lt;strong&gt;Ollama&lt;/strong&gt; as the provider, chose &lt;strong&gt;local only&lt;/strong&gt;, set the base URL, selected my model… and then:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;boom.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Cannot find module '@larksuiteoapi/node-sdk'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alright. Not great — but maybe just a one-off.&lt;/p&gt;

&lt;p&gt;I installed the package manually and ran the setup again.&lt;/p&gt;

&lt;p&gt;Then again:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Windows detected - OpenClaw runs great on WSL2.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Native Windows might be trickier.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And slowly, I started to understand why.&lt;/p&gt;

&lt;p&gt;I went through the setup again — model, base URL, everything — and then:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;boom again.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Cannot find module 'nostr-tools'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second missing module. And this time for something I wasn’t even using.&lt;/p&gt;

&lt;p&gt;Fine. Installed it.&lt;/p&gt;

&lt;p&gt;Ran setup again.&lt;/p&gt;

&lt;p&gt;And then:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;boom.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Cannot find module '@slack/web-api'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At that point, the warning from the terminal stopped feeling like advice — and started feeling like a prediction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FnJBTSH1FcHQAAAAd%2Fi-see-errors-everywhere-errors.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FnJBTSH1FcHQAAAAd%2Fi-see-errors-everywhere-errors.gif" width="498" height="281"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Windows Attempt
&lt;/h2&gt;

&lt;p&gt;To be fair, OpenClaw never hid it. It told me early that native Windows might be tricky.&lt;/p&gt;

&lt;p&gt;And for my setup, it absolutely was.&lt;/p&gt;

&lt;p&gt;The loop looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run setup&lt;/li&gt;
&lt;li&gt;hit missing module&lt;/li&gt;
&lt;li&gt;install manually&lt;/li&gt;
&lt;li&gt;repeat&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What made it frustrating wasn’t just the errors — it was that they were tied to integrations I didn’t even need. I was just trying to run OpenClaw locally with Ollama.&lt;/p&gt;




&lt;h2&gt;
  
  
  So I Switched to WSL2
&lt;/h2&gt;

&lt;p&gt;At that point, I did what the tool had been suggesting all along: switch to WSL2.&lt;/p&gt;

&lt;p&gt;And honestly — that part &lt;em&gt;did&lt;/em&gt; feel better.&lt;/p&gt;

&lt;p&gt;No random module errors. Cleaner setup. Everything looked more stable.&lt;/p&gt;

&lt;p&gt;But then I hit the next issue.&lt;/p&gt;

&lt;p&gt;My local Ollama setup wasn’t really there anymore.&lt;/p&gt;

&lt;p&gt;My models didn’t show up properly, and instead of a clean local flow, I ended up in a setup that expected an OpenAI- or Anthropic-style API.&lt;/p&gt;

&lt;p&gt;After digging a bit, the issue became obvious:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ollama wasn’t installed in that environment.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So yes — WSL2 solved one problem, but it also disconnected me from the setup I actually wanted.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Turnaround: &lt;code&gt;ollama launch openclaw&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Then I remembered something.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.tenor.com%2FAHBWsE2oYTgAAAAi%2Ftelus-critter.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.tenor.com%2FAHBWsE2oYTgAAAAi%2Ftelus-critter.gif" width="498" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few weeks ago, I got an email from Ollama that said:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama launch openclaw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I went back to my Windows environment and tried exactly that.&lt;/p&gt;

&lt;p&gt;And suddenly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;it worked.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not perfectly. Not magically. But it worked.&lt;/p&gt;

&lt;p&gt;The dashboard opened. The system was running.&lt;/p&gt;




&lt;h2&gt;
  
  
  The First Thing OpenClaw Did Right for Me
&lt;/h2&gt;

&lt;p&gt;I typed &lt;code&gt;hi&lt;/code&gt; into the chat.&lt;/p&gt;

&lt;p&gt;Instead of replying like a normal chatbot, OpenClaw pointed me to &lt;code&gt;BOOTSTRAP.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And that was the first moment where I was genuinely impressed.&lt;/p&gt;

&lt;p&gt;Instead of just chatting, you actually &lt;em&gt;create&lt;/em&gt; your assistant.&lt;/p&gt;

&lt;p&gt;Through conversation, you define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;IDENTITY.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;USER.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SOUL.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You give it a name, a personality, a tone — even an emoji.&lt;/p&gt;

&lt;p&gt;That felt different.&lt;/p&gt;

&lt;p&gt;It didn’t feel like configuring software.&lt;br&gt;
It felt like shaping a system.&lt;/p&gt;

&lt;p&gt;That idea alone is incredibly strong.&lt;/p&gt;

&lt;h2&gt;
  
  
  And Then… Boom Again
&lt;/h2&gt;

&lt;p&gt;After going through the bootstrap, everything looked promising.&lt;/p&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ollama timeout.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FMCjnu0C8K_YAAAAd%2Fboy-cry.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FMCjnu0C8K_YAAAAd%2Fboy-cry.gif" width="360" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Frustrating — but manageable.&lt;/p&gt;

&lt;p&gt;I switched to a faster model in the dashboard, retried, and it worked.&lt;/p&gt;

&lt;p&gt;Then I got a well-structured response suggesting behavior defaults, memory handling, and skill setup:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Perfect. Both saved. 🦞  &lt;/p&gt;

&lt;p&gt;Now for SOUL.md — I suggest the following defaults:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior &amp;amp; Collaboration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be proactive: Don’t wait for commands — check emails, calendar, and projects
&lt;/li&gt;
&lt;li&gt;Maintain memory: Update MEMORY.md every few days
&lt;/li&gt;
&lt;li&gt;Language: German/English mix
&lt;/li&gt;
&lt;li&gt;Heartbeats: 2–4x per day, but don’t respond on every poll (HEARTBEAT_OK if nothing important)
&lt;/li&gt;
&lt;li&gt;External actions (email, social): Always ask first
&lt;/li&gt;
&lt;li&gt;Git projects: Automatically commit/push (when requested)
&lt;/li&gt;
&lt;li&gt;Group chats: Only respond to real inputs, not every time
&lt;/li&gt;
&lt;li&gt;Use emoji reactions for Discord
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configure skills:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Weather (wttr.in for weather)
&lt;/li&gt;
&lt;li&gt;Healthcheck (for security checks)
&lt;/li&gt;
&lt;li&gt;Skill-creator (for creating new skills)
&lt;/li&gt;
&lt;li&gt;taskflow (for complex workflows)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What do you think? Should I write SOUL.md with these defaults, or do you want to adjust anything? 🦞”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Honestly?&lt;/p&gt;

&lt;p&gt;It was good.&lt;/p&gt;

&lt;p&gt;So I confirmed it.&lt;/p&gt;

&lt;p&gt;And then…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;boom. Timeout again.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FJiEUXIlkIn8AAAAC%2Fincident.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia1.tenor.com%2Fm%2FJiEUXIlkIn8AAAAC%2Fincident.gif" width="498" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That was the moment where the experience broke for me.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Think After All This
&lt;/h2&gt;

&lt;p&gt;I don’t think OpenClaw is bad.&lt;/p&gt;

&lt;p&gt;Actually, I think it’s one of the most interesting directions in this space right now.&lt;/p&gt;

&lt;p&gt;There are ideas here that stand out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;agent-based workflows&lt;/li&gt;
&lt;li&gt;identity + memory as first-class concepts&lt;/li&gt;
&lt;li&gt;a real attempt at building a personal AI, not just a chat interface&lt;/li&gt;
&lt;li&gt;a huge and fast-growing open-source community pushing it forward&lt;/li&gt;
&lt;li&gt;an ecosystem of plugins, integrations, and channels that goes far beyond a single use case&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not just “another AI tool.”&lt;/p&gt;

&lt;p&gt;That’s rare.&lt;/p&gt;

&lt;p&gt;But at the same time, the experience still feels &lt;strong&gt;very experimental&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not just in performance — but in reliability.&lt;/p&gt;

&lt;p&gt;Things work… until they don’t.&lt;/p&gt;

&lt;p&gt;And when they break, it’s not always obvious why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part That Makes Me Careful
&lt;/h2&gt;

&lt;p&gt;OpenClaw isn’t just a chatbot.&lt;/p&gt;

&lt;p&gt;It’s an agent that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run commands&lt;/li&gt;
&lt;li&gt;access files&lt;/li&gt;
&lt;li&gt;act in the background&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s powerful.&lt;/p&gt;

&lt;p&gt;But that also means trust matters a lot more.&lt;/p&gt;

&lt;p&gt;And right now, I personally don’t feel comfortable giving that level of control to a system that still feels this unstable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Right now, my opinion is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenClaw is fascinating — but not ready for me yet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I didn’t end up building my challenge project with it.&lt;/p&gt;

&lt;p&gt;But I’m still glad I tried it.&lt;/p&gt;

&lt;p&gt;Because the direction is genuinely exciting.&lt;/p&gt;

&lt;p&gt;And to be fair:&lt;/p&gt;

&lt;p&gt;If I had invested more time, I’m pretty sure I could have gotten everything running properly.&lt;/p&gt;

&lt;p&gt;But that’s also part of the point.&lt;/p&gt;

&lt;p&gt;For me, the setup effort combined with the current limitations of local models just doesn’t feel worth it yet.&lt;/p&gt;

&lt;p&gt;And here’s the important part:&lt;/p&gt;

&lt;p&gt;This space is moving fast.&lt;/p&gt;

&lt;p&gt;Local models are improving rapidly.&lt;br&gt;
Hardware is getting better.&lt;br&gt;
Tooling is evolving almost weekly.&lt;/p&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The exact same setup could feel completely different in a few months.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So while it didn’t work for me today…&lt;/p&gt;

&lt;p&gt;I don’t think that will be true for long.&lt;/p&gt;

&lt;p&gt;And that’s exactly why I’ll keep watching OpenClaw.&lt;/p&gt;




&lt;h2&gt;
  
  
  What about you?
&lt;/h2&gt;

&lt;p&gt;Have you tried OpenClaw yet?&lt;/p&gt;

&lt;p&gt;Whether locally, with cloud models, or in a completely different setup — I’m genuinely curious how your experience has been.&lt;/p&gt;

&lt;p&gt;Did it feel smooth and powerful…&lt;br&gt;
or more like something that’s still finding its footing?&lt;/p&gt;

&lt;p&gt;And more importantly:&lt;/p&gt;

&lt;p&gt;Do you see yourself actually using something like this in your daily workflow — or are we not quite there yet?&lt;/p&gt;

&lt;p&gt;Let me know 👇&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I just gave my local AI desktop companion access to the outside world (Telegram, Discord, Email…)</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sun, 19 Apr 2026 10:16:49 +0000</pubDate>
      <link>https://dev.to/southy404/i-just-gave-my-local-ai-desktop-companion-access-to-the-outside-world-telegram-discord-email-d5d</link>
      <guid>https://dev.to/southy404/i-just-gave-my-local-ai-desktop-companion-access-to-the-outside-world-telegram-discord-email-d5d</guid>
      <description>&lt;h1&gt;
  
  
  For the past few weeks, I’ve been building a &lt;strong&gt;local-first AI desktop companion&lt;/strong&gt; that lives on your screen.
&lt;/h1&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;see your screen&lt;/li&gt;
&lt;li&gt;understand your context&lt;/li&gt;
&lt;li&gt;execute actions on your system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it had one big limitation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It only lived &lt;em&gt;on your desktop&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I changed that.&lt;/p&gt;




&lt;h2&gt;
  
  
  🌐 Introducing: Blob Connectors
&lt;/h2&gt;

&lt;p&gt;I just added a new layer to OpenBlob:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Blob Connectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A lightweight Python bridge that connects your local AI to the outside world:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Telegram&lt;/li&gt;
&lt;li&gt;Discord&lt;/li&gt;
&lt;li&gt;Slack&lt;/li&gt;
&lt;li&gt;Email&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧠 What this actually means
&lt;/h2&gt;

&lt;p&gt;You can now do things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;send &lt;code&gt;open spotify&lt;/code&gt; via Telegram → Spotify opens on your PC&lt;/li&gt;
&lt;li&gt;ask a question in Discord → your local model answers&lt;/li&gt;
&lt;li&gt;send an email → get a contextual AI reply&lt;/li&gt;
&lt;li&gt;control your desktop from anywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the important part:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s still &lt;strong&gt;local-first&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  ⚙️ How it works
&lt;/h2&gt;

&lt;p&gt;All channels go through the same pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Telegram / Discord / Slack / Email
              │
        Blob Connectors (Python)
              │
    ┌─────────┴─────────┐
    │                   │
OpenBlob running?    Ollama fallback
(localhost)         (local model)
              │
        Command Router
              │
      Desktop action
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything becomes a normalized &lt;code&gt;Message&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;No matter where it comes from.&lt;/p&gt;
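
&lt;p&gt;For illustration only, a normalized message type could look something like this; the field names are my assumptions, not OpenBlob’s actual schema:&lt;/p&gt;

```python
# For illustration only: a normalized message type. Field names are
# assumptions, not OpenBlob's actual schema.
from dataclasses import dataclass

@dataclass
class Message:
    channel: str  # "telegram", "discord", "slack", or "email"
    sender: str   # platform user id, normalized to a string
    text: str     # the raw command or question

def route(msg: Message) -> str:
    # The router only ever sees a Message, never a platform API object.
    if msg.text.startswith("open "):
        return "desktop_action"
    return "model_reply"

assert route(Message("telegram", "u1", "open spotify")) == "desktop_action"
assert route(Message("email", "u2", "summarize my day")) == "model_reply"
```

&lt;p&gt;Because the router only ever sees this shape, adding a new platform means writing one adapter, not touching the pipeline.&lt;/p&gt;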




&lt;h2&gt;
  
  
  🔌 Why this matters
&lt;/h2&gt;

&lt;p&gt;This is not just “adding integrations”.&lt;/p&gt;

&lt;p&gt;This is the first real step towards:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;an AI system that exists beyond a single interface&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now OpenBlob is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;not just UI-bound&lt;/li&gt;
&lt;li&gt;not just voice-bound&lt;/li&gt;
&lt;li&gt;not just desktop-bound&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It becomes a &lt;strong&gt;distributed interface to your own system&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Built for extension
&lt;/h2&gt;

&lt;p&gt;Each connector implements the same interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyConnector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BlobConnector&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;receive_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;original&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So adding new platforms is trivial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WhatsApp&lt;/li&gt;
&lt;li&gt;Matrix&lt;/li&gt;
&lt;li&gt;iMessage (maybe 👀)&lt;/li&gt;
&lt;li&gt;anything with an API&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔒 Still local-first
&lt;/h2&gt;

&lt;p&gt;Important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;runs on your machine&lt;/li&gt;
&lt;li&gt;uses your local models (Ollama)&lt;/li&gt;
&lt;li&gt;no required cloud backend&lt;/li&gt;
&lt;li&gt;transparent behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If OpenBlob is offline:&lt;/p&gt;

&lt;p&gt;→ it automatically falls back to local reasoning&lt;/p&gt;




&lt;h2&gt;
  
  
  🚧 Current state
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;works across all channels&lt;/li&gt;
&lt;li&gt;still early&lt;/li&gt;
&lt;li&gt;structure is stabilizing&lt;/li&gt;
&lt;li&gt;lots of room for improvement&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔮 What this unlocks next
&lt;/h2&gt;

&lt;p&gt;This connector layer enables things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shared memory across all channels&lt;/li&gt;
&lt;li&gt;persistent conversations&lt;/li&gt;
&lt;li&gt;multi-agent systems&lt;/li&gt;
&lt;li&gt;calendar / tool integrations&lt;/li&gt;
&lt;li&gt;real remote control of your system&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🤝 If you want to build with me
&lt;/h2&gt;

&lt;p&gt;This is probably the best moment to jump in.&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build new connectors&lt;/li&gt;
&lt;li&gt;improve routing / memory&lt;/li&gt;
&lt;li&gt;design better UX&lt;/li&gt;
&lt;li&gt;experiment with AI behaviors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Final thoughts
&lt;/h2&gt;

&lt;p&gt;This is mainly an infrastructure update.&lt;/p&gt;

&lt;p&gt;By introducing a connector layer and a normalized message interface, OpenBlob becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;easier to extend&lt;/li&gt;
&lt;li&gt;easier to integrate&lt;/li&gt;
&lt;li&gt;less tied to a single UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s a small surface change — but a significant internal shift.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>python</category>
      <category>api</category>
    </item>
    <item>
      <title>Gemini Footprint Tracker — See the Real Cost of Every AI Prompt</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sat, 18 Apr 2026 08:03:11 +0000</pubDate>
      <link>https://dev.to/southy404/gemini-footprint-tracker-see-the-real-cost-of-every-ai-prompt-3j7o</link>
      <guid>https://dev.to/southy404/gemini-footprint-tracker-see-the-real-cost-of-every-ai-prompt-3j7o</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/weekend-2026-04-16"&gt;Weekend Challenge: Earth Day Edition&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Every message you send to an AI consumes water and energy and emits CO₂. Most people have no idea how much. &lt;strong&gt;Gemini Footprint Tracker&lt;/strong&gt; makes that cost visible — in real time, per request, with full transparency about how the numbers are calculated.&lt;/p&gt;

&lt;p&gt;You bring your own Gemini API key, pick a model, and start chatting. After every response the tracker shows how much water and CO₂ that exchange cost — scaled by token count and model weight. A community panel aggregates anonymous footprint data from all users via Supabase, so you can see the collective impact grow in real time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; this is an awareness and transparency project, not an official measurement tool. The estimates are based on Google's publicly published baseline for a median Gemini Apps text prompt, combined with transparent app-side scaling logic. Every assumption is documented — what comes from Google, what is estimated, and where the model falls short. The &lt;code&gt;/learn&lt;/code&gt; page inside the app explains the full methodology.&lt;/p&gt;

&lt;p&gt;The goal is simple: make something invisible a little more visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://gemini-footprint-tracker.vercel.app" rel="noopener noreferrer"&gt;Live: gemini-footprint-tracker.vercel.app&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqn6w99b2st58nx7rk0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqn6w99b2st58nx7rk0n.png" alt="Screenshot of Gemini Footprint Tracker" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll need a free &lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;Google AI Studio API key&lt;/a&gt; to send messages. The key stays in your browser — it never touches a server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/southy404" rel="noopener noreferrer"&gt;
        southy404
      &lt;/a&gt; / &lt;a href="https://github.com/southy404/gemini-footprint-tracker" rel="noopener noreferrer"&gt;
        gemini-footprint-tracker
      &lt;/a&gt;
    &lt;/h2&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🌍 Gemini Footprint Tracker&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;An awareness project that makes the environmental cost of AI visible — tracking water, CO₂, and energy usage per Gemini API request in real time.&lt;/p&gt;
&lt;p&gt;Built for the &lt;a href="https://dev.to" rel="nofollow"&gt;DEV Earth Day Challenge 2026&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;→ &lt;a href="https://gemini-footprint-tracker.vercel.app" rel="nofollow noopener noreferrer"&gt;Live Demo&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What it does&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Every prompt you send to Gemini uses water and energy and emits CO₂. This tracker uses Gemini's usage metadata (token counts) combined with Google's official published baseline values to estimate the environmental footprint of each request — and aggregates it anonymously across all users via Supabase.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;💧 Water consumption per request (mL)&lt;/li&gt;
&lt;li&gt;☁️ CO₂ emissions per request (gCO₂e)&lt;/li&gt;
&lt;li&gt;⚡ Token-based scaling per model (Flash-Lite / Flash / Pro)&lt;/li&gt;
&lt;li&gt;📊 Community stats across all sessions&lt;/li&gt;
&lt;li&gt;🔒 Your API key stays local — never sent anywhere except directly to Gemini&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Framework&lt;/td&gt;
&lt;td&gt;React 19 + TypeScript + Vite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Styling&lt;/td&gt;
&lt;td&gt;Tailwind CSS v4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Animation&lt;/td&gt;
&lt;td&gt;Framer Motion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backend&lt;/td&gt;
&lt;td&gt;Supabase (anonymous footprint logging)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/southy404/gemini-footprint-tracker" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; React 19 + TypeScript + Vite, Tailwind CSS v4, Framer Motion, Supabase, Gemini API&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The estimation model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google publicly reports that a median Gemini Apps text prompt uses &lt;strong&gt;0.26 mL&lt;/strong&gt; of water, emits &lt;strong&gt;0.03 gCO₂e&lt;/strong&gt;, and consumes &lt;strong&gt;0.24 Wh&lt;/strong&gt; of energy. That's the only official number available. From there I built a token-based scaling model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WeightedTokens  = PromptTokens + ResponseTokens × 3.5
TokenScale      = max(0.2, WeightedTokens / 775)
WaterEstimate   = 0.26 × TokenScale × ModelMultiplier
CO₂Estimate     = 0.03 × TokenScale × ModelMultiplier
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 3.5× output weight reflects that autoregressive decoding is significantly more compute-intensive than input prefill. The reference prompt (250 input + 150 output tokens) and the model multipliers (Flash-Lite: 0.85×, Flash: 1.0×, Pro: 1.35×) are documented approximations — not official Google values. The &lt;code&gt;/learn&lt;/code&gt; page inside the app makes this separation explicit: what is official, what is estimated, and where the numbers can't be trusted.&lt;/p&gt;
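&lt;p&gt;The scaling model above fits in a small function — the baselines are Google's published figures, while the model multipliers and the 3.5× output weight are the app's documented approximations:&lt;/p&gt;

```python
# Baselines (0.26 mL water, 0.03 gCO2e) are Google's published median-prompt
# figures; the multipliers and 3.5x output weight are app-side approximations.
MODEL_MULTIPLIER = {"flash-lite": 0.85, "flash": 1.0, "pro": 1.35}

def estimate_footprint(prompt_tokens: int, response_tokens: int, model: str = "flash"):
    weighted = prompt_tokens + response_tokens * 3.5
    scale = max(0.2, weighted / 775)  # 775 = 250 + 150 * 3.5 reference prompt
    mult = MODEL_MULTIPLIER[model]
    return {
        "water_ml": 0.26 * scale * mult,
        "co2_g": 0.03 * scale * mult,
    }

# The reference prompt (250 in / 150 out) on Flash lands exactly on the baseline:
print(estimate_footprint(250, 150))  # {'water_ml': 0.26, 'co2_g': 0.03}
```

&lt;p&gt;The &lt;code&gt;max(0.2, …)&lt;/code&gt; floor keeps tiny prompts from reporting implausibly small footprints.&lt;/p&gt;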




&lt;p&gt;&lt;strong&gt;Community stats&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each request anonymously logs water and CO₂ to Supabase. The topbar shows live community totals — water consumed, CO₂ emitted, unique users tracked. The numbers update in real time across all sessions.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;UX decisions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The interface is intentionally built to feel like a normal AI chat — familiar composer, clean response layout, no dashboard clutter. That was a deliberate choice: AI resource usage is a topic that matters for everyone who uses these tools, not just people who go looking for environmental data. If it looks like a tracker, most people close it. If it looks like a chat, they stay.&lt;/p&gt;

&lt;p&gt;The footprint numbers appear quietly after each response — present, but not in your face. The community stats in the topbar give a sense of collective scale without being alarming. Transparency about estimates is built into the UI from the start: the helper text, the suggestion chips, and the &lt;code&gt;/learn&lt;/code&gt; page all reinforce that these are informed approximations, not ground truth.&lt;/p&gt;

&lt;p&gt;Other decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API key stored in localStorage only, never transmitted anywhere except directly to Gemini&lt;/li&gt;
&lt;li&gt;Voice input via Web Speech API&lt;/li&gt;
&lt;li&gt;Animated transition between hero and chat state using Framer Motion's &lt;code&gt;layoutId&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Mobile-responsive throughout, including the KaTeX methodology page&lt;/li&gt;
&lt;li&gt;Earth background video from NASA-Imagery via Pixabay&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prize Categories
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best use of Google Gemini&lt;/strong&gt; — The entire app is built around the Gemini API. Every message goes through &lt;code&gt;generateContent&lt;/code&gt;, and the response's &lt;code&gt;usageMetadata&lt;/code&gt; — prompt and candidate token counts — directly drives the footprint calculation. The model selector supports &lt;code&gt;gemini-2.5-flash-lite&lt;/code&gt;, &lt;code&gt;gemini-2.5-flash&lt;/code&gt;, and &lt;code&gt;gemini-2.5-pro&lt;/code&gt;, each with a distinct environmental multiplier. Gemini isn't a feature bolted on — it's the thing being measured.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
    </item>
    <item>
      <title>OpenBlob is evolving: better architecture, modern UI, and real-time transcripts</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Wed, 15 Apr 2026 16:16:29 +0000</pubDate>
      <link>https://dev.to/southy404/openblob-is-evolving-better-architecture-modern-ui-and-real-time-transcripts-28da</link>
      <guid>https://dev.to/southy404/openblob-is-evolving-better-architecture-modern-ui-and-real-time-transcripts-28da</guid>
      <description>&lt;p&gt;Over the past days, OpenBlob changed a lot. &lt;/p&gt;

&lt;p&gt;Not just visually — but fundamentally. &lt;/p&gt;

&lt;p&gt;This is a proper progress update on where things are heading 👇&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Quick recap
&lt;/h2&gt;

&lt;p&gt;OpenBlob is a &lt;strong&gt;local-first desktop AI companion&lt;/strong&gt; that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lives on your desktop
&lt;/li&gt;
&lt;li&gt;understands your context
&lt;/li&gt;
&lt;li&gt;can see your screen (via vision models)
&lt;/li&gt;
&lt;li&gt;reacts in real-time
&lt;/li&gt;
&lt;li&gt;executes actions directly on your system
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 Rebuilding the core (this was the big one)
&lt;/h2&gt;

&lt;p&gt;The biggest update isn’t something you see. It’s how everything works underneath. OpenBlob now has a much cleaner and more scalable structure:&lt;/p&gt;

&lt;h3&gt;
  
  
  Core pipeline
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;input (voice / text / screen)&lt;br&gt;
→ intent detection&lt;br&gt;
→ command router&lt;br&gt;
→ execution (local first)&lt;br&gt;
→ AI fallback if needed&lt;/p&gt;
&lt;/blockquote&gt;
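&lt;p&gt;A toy version of that pipeline — the intents and handlers are illustrative, not OpenBlob's actual command set:&lt;/p&gt;

```python
# Detect intent, route to a local handler, fall back to the model otherwise.
def detect_intent(text: str) -> str:
    if text.startswith("play "):
        return "media.play"
    if text.startswith("open "):
        return "system.open"
    return "chat"

HANDLERS = {
    "media.play": lambda t: f"playing {t[5:]}",
    "system.open": lambda t: f"opening {t[5:]}",
}

def ai_fallback(text: str) -> str:
    return f"[model] {text}"  # stands in for the local-model call

def handle(text: str) -> str:
    intent = detect_intent(text)
    handler = HANDLERS.get(intent)
    return handler(text) if handler else ai_fallback(text)

print(handle("play lofi"))   # playing lofi
print(handle("what now?"))   # [model] what now?
```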

&lt;h3&gt;
  
  
  What changed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear separation&lt;/strong&gt; of responsibilities&lt;/li&gt;
&lt;li&gt;Proper &lt;strong&gt;command routing&lt;/strong&gt; system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular capabilities&lt;/strong&gt; instead of chaos&lt;/li&gt;
&lt;li&gt;Easier to extend without breaking everything&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This turns OpenBlob into something bigger than a chatbot: &lt;strong&gt;a runtime layer for your desktop.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧩 Open-source friendly structure
&lt;/h2&gt;

&lt;p&gt;One goal became very clear: &lt;strong&gt;this needs to be hackable.&lt;/strong&gt; So the architecture is moving towards a module system like this:&lt;/p&gt;

&lt;p&gt;📁 &lt;code&gt;modules/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;discord/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;spotify/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;browser/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;system/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Each module:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exposes commands&lt;/li&gt;
&lt;li&gt;runs locally&lt;/li&gt;
&lt;li&gt;can be extended independently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This makes it much easier to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build plugins&lt;/li&gt;
&lt;li&gt;integrate APIs&lt;/li&gt;
&lt;li&gt;experiment without touching the core&lt;/li&gt;
&lt;/ul&gt;
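&lt;p&gt;One way such a module system can be sketched: a command registry the core consults, with each module registering its commands on import. The command names here (&lt;code&gt;spotify.pause&lt;/code&gt;, &lt;code&gt;system.lock&lt;/code&gt;) are hypothetical:&lt;/p&gt;

```python
# A registry decouples modules from the core: modules register commands,
# the core only looks names up.
from typing import Callable

REGISTRY: dict[str, Callable[[], str]] = {}

def command(name: str):
    def register(fn: Callable[[], str]):
        REGISTRY[name] = fn
        return fn
    return register

# A "module" is just a file that registers its commands on import:
@command("spotify.pause")
def pause() -> str:
    return "paused playback"

@command("system.lock")
def lock() -> str:
    return "locked screen"

print(sorted(REGISTRY))             # ['spotify.pause', 'system.lock']
print(REGISTRY["spotify.pause"]())  # paused playback
```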




&lt;h2&gt;
  
  
  🎨 New UI (cleaner, faster, more alive)
&lt;/h2&gt;

&lt;p&gt;The UI got a big upgrade:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Floating bubble interface&lt;/li&gt;
&lt;li&gt;Glassmorphism style&lt;/li&gt;
&lt;li&gt;Smoother, more organic animations&lt;/li&gt;
&lt;li&gt;Faster interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Interaction now feels like:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CTRL + SPACE&lt;/code&gt; → instant open&lt;/li&gt;
&lt;li&gt;Global voice toggle&lt;/li&gt;
&lt;li&gt;Minimal friction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Less “tool”. More presence.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💬 NEW: Just Chatting mode
&lt;/h2&gt;

&lt;p&gt;Sometimes you don’t want commands. You just want to talk. So OpenBlob now has a &lt;strong&gt;Just Chatting mode&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pure conversation with your AI companion&lt;/li&gt;
&lt;li&gt;No command routing&lt;/li&gt;
&lt;li&gt;No execution layer&lt;/li&gt;
&lt;li&gt;Just dialogue&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This is important because:&lt;/strong&gt; the companion shouldn’t only do things — it should also &lt;em&gt;be there&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thinking out loud&lt;/li&gt;
&lt;li&gt;Asking questions&lt;/li&gt;
&lt;li&gt;Casual conversation&lt;/li&gt;
&lt;li&gt;Testing personality / tone&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🖼 Screenshot assistant (more usable now)
&lt;/h2&gt;

&lt;p&gt;The screen pipeline is getting more solid:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;screenshot&lt;br&gt;
→ OCR&lt;br&gt;
→ context extraction&lt;br&gt;
→ reasoning&lt;br&gt;
→ answer&lt;/p&gt;
&lt;/blockquote&gt;
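&lt;p&gt;The same pipeline as a chain of stages — each stage below is a stub where OpenBlob would call OCR and the local model:&lt;/p&gt;

```python
# Stubbed screenshot pipeline: screenshot -> OCR -> context -> reasoning -> answer.
def ocr(image: bytes) -> str:
    return "TypeError: cannot read property of undefined"  # stub OCR output

def extract_context(text: str) -> dict:
    return {"kind": "error", "text": text}

def reason(ctx: dict) -> str:
    if ctx["kind"] == "error":
        return f"Looks like a JS error: {ctx['text'].split(':')[0]}"
    return "Nothing actionable on screen."

def answer_from_screenshot(image: bytes) -> str:
    return reason(extract_context(ocr(image)))

print(answer_from_screenshot(b"..."))  # Looks like a JS error: TypeError
```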

&lt;p&gt;&lt;strong&gt;Already useful for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging&lt;/li&gt;
&lt;li&gt;UI understanding&lt;/li&gt;
&lt;li&gt;Games&lt;/li&gt;
&lt;li&gt;Quick research&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Still improving — but getting more reliable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎙️ NEW: real-time transcript system
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExcG13cmR1ZWN0endpYm1pb2I2NGtpbmZwNHJweHZueDdlNTE3MGhwNSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F4fN9saFvPDuJRxn5rH%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExcG13cmR1ZWN0endpYm1pb2I2NGtpbmZwNHJweHZueDdlNTE3MGhwNSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F4fN9saFvPDuJRxn5rH%2Fgiphy.gif" alt="Alt Text" width="480" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is one of the biggest new additions. OpenBlob can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Listen to system audio&lt;/li&gt;
&lt;li&gt;Listen to microphone input&lt;/li&gt;
&lt;li&gt;Generate live transcripts&lt;/li&gt;
&lt;li&gt;Store structured sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pipeline
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;audio (system / mic)&lt;br&gt;
→ transcription&lt;br&gt;
→ segmented timeline&lt;br&gt;
→ structured session&lt;br&gt;
→ saved as text&lt;/p&gt;
&lt;/blockquote&gt;
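&lt;p&gt;A minimal sketch of the "segmented timeline" step — grouping timed snippets into transcript blocks at pauses. The gap threshold is an assumption, not the value OpenBlob uses:&lt;/p&gt;

```python
# Group raw timed snippets into blocks wherever there is a pause in speech.
from dataclasses import dataclass

@dataclass
class Snippet:
    start: float  # seconds
    end: float
    text: str

def segment(snippets: list[Snippet], max_gap: float = 2.0) -> list[str]:
    blocks, current = [], []
    last_end = None
    for s in snippets:
        if last_end is not None and s.start - last_end > max_gap:
            blocks.append(" ".join(current))  # pause detected: close the block
            current = []
        current.append(s.text)
        last_end = s.end
    if current:
        blocks.append(" ".join(current))
    return blocks

session = [Snippet(0, 1.2, "okay so"), Snippet(1.4, 2.0, "first point"),
           Snippet(6.0, 7.1, "second point")]
print(segment(session))  # ['okay so first point', 'second point']
```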

&lt;h3&gt;
  
  
  What it already works for
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Meetings (Meet, Zoom, etc.)&lt;/li&gt;
&lt;li&gt;YouTube / podcasts&lt;/li&gt;
&lt;li&gt;Lectures&lt;/li&gt;
&lt;li&gt;General audio capture&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🧪 Current prototype
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Live text appearing in real-time&lt;/li&gt;
&lt;li&gt;Segmented transcript blocks&lt;/li&gt;
&lt;li&gt;Session tracking&lt;/li&gt;
&lt;li&gt;Simple overlay UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s still early. But it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔮 Where transcripts are going
&lt;/h2&gt;

&lt;p&gt;This is not just speech-to-text. Next steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📝 Meeting assistant&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summaries&lt;/li&gt;
&lt;li&gt;Key points&lt;/li&gt;
&lt;li&gt;Action items&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🧠 Memory layer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Link transcripts to context&lt;/li&gt;
&lt;li&gt;Searchable history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;⚡ Real-time help&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explain while listening&lt;/li&gt;
&lt;li&gt;Highlight important info&lt;/li&gt;
&lt;li&gt;Suggest responses&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚡ Philosophy (still the same)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Local-first&lt;/li&gt;
&lt;li&gt;Context &amp;gt; Prompt&lt;/li&gt;
&lt;li&gt;System-level AI&lt;/li&gt;
&lt;li&gt;Playful + useful&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧪 Current state
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Still experimental&lt;/li&gt;
&lt;li&gt;Still buggy sometimes&lt;/li&gt;
&lt;li&gt;Evolving very fast&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;But now:&lt;/strong&gt; the structure is much better, the direction is clearer, and it's easier to contribute.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤝 If you want to join
&lt;/h2&gt;

&lt;p&gt;Now is actually a great time. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build modules (Discord, Spotify, browser, etc.)&lt;/li&gt;
&lt;li&gt;Improve transcription&lt;/li&gt;
&lt;li&gt;Design UI&lt;/li&gt;
&lt;li&gt;Experiment with AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;strong&gt;Join here:&lt;/strong&gt; &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Final thought
&lt;/h2&gt;

&lt;p&gt;I’m starting to believe the future of AI is not a chat window in a browser. &lt;/p&gt;

&lt;p&gt;But something that &lt;strong&gt;lives on your system&lt;/strong&gt;, &lt;strong&gt;understands your context&lt;/strong&gt;, and can &lt;strong&gt;both act and talk&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;OpenBlob is slowly getting there.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>agents</category>
      <category>github</category>
    </item>
    <item>
      <title>I’m building a local AI desktop companion that sees your screen — and you can help shape it</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:57:15 +0000</pubDate>
      <link>https://dev.to/southy404/im-building-a-local-ai-desktop-companion-that-sees-your-screen-and-you-can-help-shape-it-2ibh</link>
      <guid>https://dev.to/southy404/im-building-a-local-ai-desktop-companion-that-sees-your-screen-and-you-can-help-shape-it-2ibh</guid>
      <description>&lt;p&gt;Most AI tools feel disconnected.&lt;/p&gt;

&lt;p&gt;They don’t see your screen.&lt;br&gt;
They don’t understand what you're doing.&lt;/p&gt;

&lt;p&gt;So I built one that does.&lt;/p&gt;




&lt;h2&gt;
  
  
  Meet OpenBlob
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp72353n8tuipp9evr1qj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp72353n8tuipp9evr1qj.gif" alt="OpenBlob desktop AI companion showing animated blob avatar, floating UI, and context-aware interaction on Windows desktop" width="480" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;open-source, local-first desktop AI companion for Windows&lt;/strong&gt; that doesn’t just respond — it &lt;strong&gt;lives on your desktop&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;👉 GitHub: &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;understand what app you’re using&lt;/li&gt;
&lt;li&gt;analyze screenshots&lt;/li&gt;
&lt;li&gt;help inside games, apps, and browsers&lt;/li&gt;
&lt;li&gt;react visually with an animated companion&lt;/li&gt;
&lt;li&gt;and yes… even &lt;strong&gt;play hide and seek with you&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The problem with current AI assistants
&lt;/h2&gt;

&lt;p&gt;Most tools today are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cloud-dependent&lt;/li&gt;
&lt;li&gt;context-blind&lt;/li&gt;
&lt;li&gt;static&lt;/li&gt;
&lt;li&gt;not fun to use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They don’t feel like part of your system.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 It understands context
&lt;/h2&gt;

&lt;p&gt;OpenBlob looks at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;active window&lt;/li&gt;
&lt;li&gt;app name&lt;/li&gt;
&lt;li&gt;window title&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So if you’re in a game, it knows.&lt;br&gt;
If you're debugging, it adapts.&lt;/p&gt;

&lt;p&gt;This is where things start to feel different.&lt;/p&gt;




&lt;h2&gt;
  
  
  🖼 It can see your screen
&lt;/h2&gt;

&lt;p&gt;You can take a screenshot and it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract visible text&lt;/li&gt;
&lt;li&gt;detect what you're looking at&lt;/li&gt;
&lt;li&gt;generate a &lt;strong&gt;real search query&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;explain what's going on
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Screenshot → OCR → context → reasoning → answer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Still a bit rough — but already very usable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎮 It actually helps inside games
&lt;/h2&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;alt-tab → google → guess&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;screenshot&lt;/li&gt;
&lt;li&gt;let it detect the game&lt;/li&gt;
&lt;li&gt;get a real answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This alone changes how you play.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤖 Multi-model AI (local-first)
&lt;/h2&gt;

&lt;p&gt;Runs via Ollama with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;text models&lt;/li&gt;
&lt;li&gt;vision models&lt;/li&gt;
&lt;li&gt;fallback system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No cloud required.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎨 It feels alive
&lt;/h2&gt;

&lt;p&gt;The companion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;has moods (idle, thinking, love, sleepy)&lt;/li&gt;
&lt;li&gt;reacts to interaction&lt;/li&gt;
&lt;li&gt;can be “petted”&lt;/li&gt;
&lt;li&gt;dances when music is playing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Small details, big difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎮 The weird part (my favorite)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Hide and Seek mode
&lt;/h3&gt;

&lt;p&gt;You can literally say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“let’s play hide and seek”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hide somewhere on your screen&lt;/li&gt;
&lt;li&gt;peek occasionally&lt;/li&gt;
&lt;li&gt;wait until you find it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds dumb.&lt;/p&gt;

&lt;p&gt;Feels surprisingly real.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ New UI (WIP)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CTRL + SPACE&lt;/code&gt; to open&lt;/li&gt;
&lt;li&gt;floating companion&lt;/li&gt;
&lt;li&gt;instant interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inspired by tools like Raycast / Arc — but alive.&lt;/p&gt;

&lt;p&gt;⚠️ still slightly buggy&lt;/p&gt;




&lt;h2&gt;
  
  
  🧪 Screenshot assistant (work in progress)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;fast snipping&lt;/li&gt;
&lt;li&gt;instant processing&lt;/li&gt;
&lt;li&gt;contextual answers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Works — but not perfect yet.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why open source?
&lt;/h2&gt;

&lt;p&gt;Because this shouldn’t belong to one company.&lt;/p&gt;

&lt;p&gt;This kind of system should be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;transparent&lt;/li&gt;
&lt;li&gt;hackable&lt;/li&gt;
&lt;li&gt;community-built&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Philosophy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;local-first&lt;/li&gt;
&lt;li&gt;context &amp;gt; prompt&lt;/li&gt;
&lt;li&gt;playful + useful&lt;/li&gt;
&lt;li&gt;build in public&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Current state
&lt;/h2&gt;

&lt;p&gt;Early stage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;evolving fast&lt;/li&gt;
&lt;li&gt;sometimes buggy&lt;/li&gt;
&lt;li&gt;lots of experiments&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  If you want to join
&lt;/h2&gt;

&lt;p&gt;This project is wide open.&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;contribute features&lt;/li&gt;
&lt;li&gt;improve UI&lt;/li&gt;
&lt;li&gt;experiment with AI&lt;/li&gt;
&lt;li&gt;build plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;I don’t think the future of AI is chat.&lt;/p&gt;

&lt;p&gt;I think it’s something that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;lives with you, understands your environment, and evolves&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s what I’m trying to build.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>rust</category>
      <category>react</category>
    </item>
    <item>
      <title>I built a CAPTCHA that never lets you leave</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:20:10 +0000</pubDate>
      <link>https://dev.to/southy404/i-built-a-captcha-that-never-lets-you-leave-do</link>
      <guid>https://dev.to/southy404/i-built-a-captcha-that-never-lets-you-leave-do</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a fake CAPTCHA game called &lt;strong&gt;I'm Not a Robot&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It starts like a normal human verification flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;click the checkbox&lt;/li&gt;
&lt;li&gt;solve the image challenge&lt;/li&gt;
&lt;li&gt;verify and move on with your life&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Except it never really lets you move on.&lt;/p&gt;

&lt;p&gt;The main joke is based on one of the most annoying real CAPTCHA experiences: you click all the correct image tiles, and then more tiles keep loading. Sometimes the new tile also contains the thing you were supposed to click. Sometimes it does not. Sometimes you think you are finally done, but the system decides you are absolutely not done.&lt;/p&gt;

&lt;p&gt;So I turned that tiny moment of internet frustration into the entire product.&lt;/p&gt;

&lt;p&gt;The project is intentionally useless, mildly hostile, and completely committed to wasting your time in the most familiar way possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live demo:&lt;/strong&gt; &lt;a href="https://codepen.io/southy404/pen/019d59a9-db10-76ca-a750-19100963135e" rel="noopener noreferrer"&gt;CodePen demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try it yourself and see how long it takes before the CAPTCHA starts feeling personal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The whole project is built as a lightweight front-end-only prototype and hosted on CodePen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodePen:&lt;/strong&gt; &lt;a href="https://codepen.io/southy404/pen/019d59a9-db10-76ca-a750-19100963135e" rel="noopener noreferrer"&gt;View the code here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;I wanted it to feel recognizable first and ridiculous second.&lt;/p&gt;

&lt;p&gt;So instead of making it look overly stylized or futuristic, I designed it to resemble the familiar CAPTCHA flow people already know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a simple checkbox start&lt;/li&gt;
&lt;li&gt;a blue challenge header&lt;/li&gt;
&lt;li&gt;a 3x3 image grid&lt;/li&gt;
&lt;li&gt;a verify button&lt;/li&gt;
&lt;li&gt;repeated image replacement after selecting the correct tiles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From there, I made the interaction slowly become absurd.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech used
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;HTML&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CSS&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vanilla JavaScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CodePen&lt;/strong&gt; for hosting and sharing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The core idea
&lt;/h3&gt;

&lt;p&gt;The most important interaction in the whole project is this:&lt;/p&gt;

&lt;p&gt;When you click a correct tile, it does not just stay solved.&lt;br&gt;&lt;br&gt;
It gets replaced with a new tile immediately, just like those real image CAPTCHAs that seem determined to test your patience instead of your humanity.&lt;/p&gt;

&lt;p&gt;That replacement loop is the joke.&lt;/p&gt;

&lt;p&gt;To make it feel a little more believable, I built it so that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;only the clicked tile gets replaced&lt;/li&gt;
&lt;li&gt;some replacement tiles contain another hydrant&lt;/li&gt;
&lt;li&gt;some replacement tiles do not&lt;/li&gt;
&lt;li&gt;the prompt slowly becomes more absurd over time&lt;/li&gt;
&lt;li&gt;the challenge keeps pretending you are almost done&lt;/li&gt;
&lt;li&gt;the final screen punishes you for sticking with it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also created pseudo-photo tile images directly in code so the project stays self-contained and easy to run without external assets.&lt;/p&gt;
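&lt;p&gt;As a rough sketch (the names and tile pool below are invented for this post, not copied from the actual Pen), the replacement loop boils down to: swap only the clicked tile, and sometimes swap in another hydrant so you are never quite finished:&lt;/p&gt;

```javascript
// Illustrative tile pool -- invented for this post, not the real CodePen source.
const REPLACEMENT_POOL = [
  { id: 'hydrant-1', isHydrant: true },
  { id: 'hydrant-2', isHydrant: true },
  { id: 'grass-1', isHydrant: false },
  { id: 'sky-1', isHydrant: false },
];

// Pick a replacement tile; biasing toward hydrants keeps the loop alive.
function pickReplacement(hydrantChance = 0.5) {
  const wantHydrant = Math.random() >= 1 - hydrantChance;
  const candidates = REPLACEMENT_POOL.filter(t => t.isHydrant === wantHydrant);
  return candidates[Math.floor(Math.random() * candidates.length)];
}

// Replace only the clicked tile; every other tile stays exactly as it was.
function onTileClick(grid, index) {
  if (!grid[index].isHydrant) return grid; // wrong tile: nothing changes
  const next = grid.slice();
  next[index] = pickReplacement();
  return next;
}
```

&lt;p&gt;The cruelty is entirely in &lt;code&gt;pickReplacement&lt;/code&gt;: as long as it keeps a decent chance of returning another hydrant, "verify" stays forever out of reach.&lt;/p&gt;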

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;I’m mainly submitting this for &lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt; and hopefully also &lt;strong&gt;Community Favorite&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Why &lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is intentionally useless&lt;/li&gt;
&lt;li&gt;it turns a familiar internet standard-ish experience into something absurd&lt;/li&gt;
&lt;li&gt;it fully commits to the bit&lt;/li&gt;
&lt;li&gt;it feels like the kind of thing nobody needed, but the internet somehow deserved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why &lt;strong&gt;Community Favorite&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the joke is immediate&lt;/li&gt;
&lt;li&gt;the frustration is universal&lt;/li&gt;
&lt;li&gt;almost everyone has suffered through an image CAPTCHA before&lt;/li&gt;
&lt;li&gt;it is very easy to understand, click, and share&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I liked the idea of building something that feels normal for about five seconds and then slowly reveals that it exists only to trap you in an endless loop of fake progress.&lt;/p&gt;

&lt;p&gt;That felt extremely appropriate for an April Fools challenge.&lt;/p&gt;

&lt;p&gt;If the best useless software is software that technically works while emotionally making things worse, then I think this qualifies.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and good luck proving you are human.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>jokes</category>
      <category>webdev</category>
    </item>
    <item>
      <title>🚀 I built a Chrome Extension to manage AI prompts properly (Prompt Vault)</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Mon, 30 Mar 2026 11:31:43 +0000</pubDate>
      <link>https://dev.to/southy404/i-built-a-chrome-extension-to-manage-ai-prompts-properly-prompt-vault-kgg</link>
      <guid>https://dev.to/southy404/i-built-a-chrome-extension-to-manage-ai-prompts-properly-prompt-vault-kgg</guid>
      <description>&lt;p&gt;If you're working with tools like ChatGPT, Claude, Gemini, or Midjourney daily, you probably ran into the same problem I did:&lt;/p&gt;

&lt;p&gt;👉 Your best prompts are scattered everywhere.&lt;br&gt;
Notes. Docs. Random chats. Lost forever.&lt;/p&gt;

&lt;p&gt;So I built something simple — but actually useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔐 Introducing Prompt Vault
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm" rel="noopener noreferrer"&gt;https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt Vault&lt;/strong&gt; is a lightweight Chrome extension to &lt;strong&gt;save, organize, search, and instantly reuse your AI prompts&lt;/strong&gt; — without friction, without clutter, and without relying on external tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Why I built this
&lt;/h2&gt;

&lt;p&gt;I kept rewriting the same prompts over and over again.&lt;/p&gt;

&lt;p&gt;Or worse:&lt;br&gt;
I &lt;em&gt;knew&lt;/em&gt; I had a perfect prompt somewhere… but couldn’t find it when I needed it.&lt;/p&gt;

&lt;p&gt;Most tools out there felt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overcomplicated&lt;/li&gt;
&lt;li&gt;Slow&lt;/li&gt;
&lt;li&gt;Or dependent on accounts / cloud sync&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted something:&lt;br&gt;
👉 Fast&lt;br&gt;
👉 Local&lt;br&gt;
👉 Reliable&lt;/p&gt;

&lt;p&gt;So I built it myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ Core Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔒 Failsafe 1-Click Copy
&lt;/h3&gt;

&lt;p&gt;Clipboard copy just works.&lt;br&gt;
No silent failures — it uses &lt;strong&gt;3 fallback methods&lt;/strong&gt; to guarantee success.&lt;/p&gt;
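&lt;p&gt;The extension's actual code isn't shown here, but a typical three-step fallback chain with standard Web APIs looks something like this (the helper names are invented for this post):&lt;/p&gt;

```javascript
// Try each copy strategy in order; the first one that resolves wins.
async function firstThatWorks(strategies, text) {
  for (const strategy of strategies) {
    try {
      return await strategy(text);
    } catch (err) {
      // fall through to the next strategy
    }
  }
  throw new Error('all copy strategies failed');
}

const copyStrategies = [
  // 1) Modern async Clipboard API (secure contexts only).
  async text => { await navigator.clipboard.writeText(text); return 'clipboard-api'; },
  // 2) Legacy execCommand via a temporary textarea (deprecated but widespread).
  async text => {
    const ta = document.createElement('textarea');
    ta.value = text;
    document.body.appendChild(ta);
    ta.select();
    const ok = document.execCommand('copy');
    ta.remove();
    if (!ok) throw new Error('execCommand refused');
    return 'exec-command';
  },
  // 3) Last resort: show the text so the user can copy it by hand.
  async text => { window.prompt('Copy manually:', text); return 'manual'; },
];
```

&lt;p&gt;Then copying a prompt is just &lt;code&gt;firstThatWorks(copyStrategies, promptText)&lt;/code&gt; &mdash; if the modern API is blocked, the older paths still get the text onto your clipboard.&lt;/p&gt;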




&lt;h3&gt;
  
  
  🏷️ Smart Tags &amp;amp; Filtering
&lt;/h3&gt;

&lt;p&gt;Organize your prompts with custom tags and instantly filter them.&lt;/p&gt;

&lt;p&gt;No more chaos. Just structure.&lt;/p&gt;




&lt;h3&gt;
  
  
  🔍 Live Search (with Highlights)
&lt;/h3&gt;

&lt;p&gt;Search across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Title&lt;/li&gt;
&lt;li&gt;Content&lt;/li&gt;
&lt;li&gt;Tags&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results update in real time and highlight matches.&lt;/p&gt;




&lt;h3&gt;
  
  
  📊 Flexible Sorting
&lt;/h3&gt;

&lt;p&gt;Everyone thinks differently.&lt;/p&gt;

&lt;p&gt;Sort your prompts by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most Recent&lt;/li&gt;
&lt;li&gt;A–Z&lt;/li&gt;
&lt;li&gt;Most Used&lt;/li&gt;
&lt;li&gt;Tags&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  📤 JSON Import / Export
&lt;/h3&gt;

&lt;p&gt;Your data is yours.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup everything&lt;/li&gt;
&lt;li&gt;Share prompt packs&lt;/li&gt;
&lt;li&gt;Move between devices&lt;/li&gt;
&lt;/ul&gt;
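&lt;p&gt;For the curious: &lt;code&gt;chrome.storage.local&lt;/code&gt; and the Blob-download trick are the standard way to build this kind of export. A minimal sketch (the function names are invented for this post, not the extension's real code):&lt;/p&gt;

```javascript
// Wrap the stored items in a small versioned envelope before writing JSON.
function serializeBackup(items) {
  return JSON.stringify(
    { version: 1, exportedAt: new Date().toISOString(), items },
    null,
    2
  );
}

// In the extension: read everything from local storage and trigger a download.
function exportPrompts() {
  chrome.storage.local.get(null, items => {
    const blob = new Blob([serializeBackup(items)], { type: 'application/json' });
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'prompt-vault-backup.json';
    link.click();
  });
}
```

&lt;p&gt;Import is the same trip in reverse: parse the JSON file, validate the envelope, and write the items back with &lt;code&gt;chrome.storage.local.set&lt;/code&gt;.&lt;/p&gt;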




&lt;h3&gt;
  
  
  📈 Usage Tracking
&lt;/h3&gt;

&lt;p&gt;See which prompts you actually use.&lt;/p&gt;

&lt;p&gt;Optimize your workflow based on real usage — not guesswork.&lt;/p&gt;




&lt;h3&gt;
  
  
  🌙 Dark &amp;amp; Light Mode
&lt;/h3&gt;

&lt;p&gt;Clean dark UI by default.&lt;br&gt;
Switch anytime — preference is saved.&lt;/p&gt;




&lt;h3&gt;
  
  
  ⌨️ Keyboard Shortcuts (for power users)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Ctrl + N&lt;/code&gt; → New prompt&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Ctrl + F&lt;/code&gt; → Search&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Esc&lt;/code&gt; → Close&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fast. Minimal. No mouse needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Heavy ChatGPT / Claude / Gemini users&lt;/li&gt;
&lt;li&gt;Prompt engineers &amp;amp; AI devs&lt;/li&gt;
&lt;li&gt;Writers, marketers, SEO people&lt;/li&gt;
&lt;li&gt;Anyone tired of repeating the same instructions&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛡️ Privacy First
&lt;/h2&gt;

&lt;p&gt;This was non-negotiable.&lt;/p&gt;

&lt;p&gt;✔ 100% local storage (Chrome storage)&lt;br&gt;
✔ No accounts&lt;br&gt;
✔ No tracking&lt;br&gt;
✔ No servers&lt;br&gt;
✔ No ads&lt;/p&gt;

&lt;p&gt;Your prompts never leave your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  📦 Lightweight by design
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No bloat&lt;/li&gt;
&lt;li&gt;No subscriptions&lt;/li&gt;
&lt;li&gt;No setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install → click → start saving prompts.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔥 Try it
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm" rel="noopener noreferrer"&gt;https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Thanks for reading 🙌&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
      <category>resources</category>
    </item>
    <item>
      <title>New Week, New Plans: What's on your agenda? 🚀</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Mon, 23 Mar 2026 13:05:35 +0000</pubDate>
      <link>https://dev.to/southy404/new-week-new-plans-whats-on-your-agenda-1jk</link>
      <guid>https://dev.to/southy404/new-week-new-plans-whats-on-your-agenda-1jk</guid>
      <description>&lt;h3&gt;
  
  
  Hey DEV community! 👋
&lt;/h3&gt;

&lt;p&gt;Monday is here, and a fresh week of commits, coffee, and (hopefully) zero merge conflicts awaits. &lt;/p&gt;

&lt;p&gt;I’ve spent the last few days deep in the weeds with &lt;strong&gt;multi-agent experiments&lt;/strong&gt; and pushing the boundaries of what my LLM setups can do. Before I dive back into my terminal to see where these agents lead me next, I’m curious about the real-world problems &lt;strong&gt;you&lt;/strong&gt; are solving right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, what are your main goals for this week?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🚀 Shipping:&lt;/strong&gt; Are you finally launching that new feature?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🐛 Debugging:&lt;/strong&gt; Is there a boss-level bug you’re determined to squash?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📚 Learning:&lt;/strong&gt; Diving into a new framework or a specific AI tool?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🧹 Refactoring:&lt;/strong&gt; Or is it just time to clean up that growing tech debt?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Drop your plans in the comments below! I’ve found that just writing your goals down is often the best motivation to actually get them done.&lt;/p&gt;

&lt;p&gt;Have a highly productive week! 💻☕&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>mondaymotivation</category>
      <category>productivity</category>
      <category>devlife</category>
    </item>
    <item>
      <title>Parallel Worlds in the EU #devchallenge</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sat, 21 Mar 2026 15:20:36 +0000</pubDate>
      <link>https://dev.to/southy404/parallel-worlds-in-the-eu-devchallenge-4eae</link>
      <guid>https://dev.to/southy404/parallel-worlds-in-the-eu-devchallenge-4eae</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;2026 WeCoded Challenge&lt;/a&gt;: Frontend Art&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Show us your Art
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Live experience:
&lt;a href="https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba" rel="noopener noreferrer"&gt;https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is an interactive, scroll-driven experience that visualizes how two identical careers slowly diverge over time.&lt;/p&gt;

&lt;p&gt;Both individuals start with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the same education&lt;/li&gt;
&lt;li&gt;the same skills&lt;/li&gt;
&lt;li&gt;the same ambition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only variable that changes is gender.&lt;/p&gt;

&lt;p&gt;As you scroll, small differences compound into large outcomes - in salary, promotion speed, and visibility.&lt;/p&gt;

&lt;p&gt;At any moment, you can toggle "Remove Bias" and watch both paths instantly align again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inspiration
&lt;/h2&gt;

&lt;p&gt;When I thought about gender equity in tech, I didn’t want to create a static illustration.&lt;/p&gt;

&lt;p&gt;I wanted to show something more uncomfortable:&lt;/p&gt;

&lt;p&gt;That inequality doesn’t always appear as a single dramatic moment - it emerges slowly, through accumulation.&lt;/p&gt;

&lt;p&gt;The concept behind Parallel Worlds is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no single event explains the gap&lt;/li&gt;
&lt;li&gt;but every small difference contributes to it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The "glass ceiling" is not just one barrier.&lt;br&gt;
It’s a system of subtle frictions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;slightly lower starting offers&lt;/li&gt;
&lt;li&gt;delayed promotions&lt;/li&gt;
&lt;li&gt;different feedback language&lt;/li&gt;
&lt;li&gt;fewer high-visibility opportunities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, these seem small.&lt;br&gt;
Together, they reshape entire careers.&lt;/p&gt;

&lt;p&gt;That’s why the project includes an interactive "Remove Bias" toggle.&lt;/p&gt;

&lt;p&gt;When activated, the system removes these frictions - and suddenly:&lt;/p&gt;

&lt;p&gt;The trajectories become identical again.&lt;/p&gt;

&lt;p&gt;The message is simple:&lt;/p&gt;

&lt;p&gt;The difference was never talent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data &amp;amp; Context
&lt;/h2&gt;

&lt;p&gt;This piece is grounded in real-world data from the EU:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;11.1% average gender pay gap in the EU (Eurostat, 2024)&lt;/li&gt;
&lt;li&gt;19.5% women among ICT specialists&lt;/li&gt;
&lt;li&gt;35.3% women in management roles&lt;/li&gt;
&lt;li&gt;81 women for every 100 men promoted to first-level management (McKinsey)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal wasn’t to simulate reality perfectly,&lt;br&gt;
but to translate these patterns into something you can feel visually.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;

&lt;p&gt;This project is built as a lightweight frontend experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML&lt;/li&gt;
&lt;li&gt;CSS&lt;/li&gt;
&lt;li&gt;JavaScript&lt;/li&gt;
&lt;li&gt;scroll-based storytelling&lt;/li&gt;
&lt;li&gt;dynamic career simulation&lt;/li&gt;
&lt;li&gt;real-time bias toggle&lt;/li&gt;
&lt;li&gt;glass-shatter interaction&lt;/li&gt;
&lt;/ul&gt;
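&lt;p&gt;The heart of the simulation can be sketched in a few lines (a minimal model written for this post, not the actual Pen code): both paths get identical growth each step, but one also gets a tiny recurring friction multiplier &mdash; and the "Remove Bias" toggle simply sets that multiplier back to 1:&lt;/p&gt;

```javascript
// Minimal compounding model: identical growth, one small recurring friction.
function simulateCareers(steps, friction = 0.985, removeBias = false) {
  const f = removeBias ? 1 : friction;
  let pathA = 100; // salary index, identical starting point
  let pathB = 100;
  const history = [];
  for (let i = 0; i !== steps; i++) {
    pathA *= 1.05;     // 5% growth per step
    pathB *= 1.05 * f; // same growth, minus a small friction
    history.push({ step: i + 1, pathA: Math.round(pathA), pathB: Math.round(pathB) });
  }
  return history;
}
```

&lt;p&gt;A 1.5% friction looks negligible on any single step; after ten steps the gap is unmistakable. That compounding is the whole argument.&lt;/p&gt;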

&lt;p&gt;Code + Demo:&lt;br&gt;
&lt;a href="https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba" rel="noopener noreferrer"&gt;https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Most people don’t experience inequality as a single obvious barrier.&lt;/p&gt;

&lt;p&gt;They experience it as:&lt;/p&gt;

&lt;p&gt;a series of small differences that never quite feel big enough to question - until the outcome is impossible to ignore.&lt;/p&gt;

&lt;p&gt;This project tries to make that invisible process visible.&lt;/p&gt;

&lt;p&gt;If you made it to the end:&lt;/p&gt;

&lt;p&gt;Try toggling bias on and off one more time.&lt;/p&gt;

&lt;p&gt;That contrast is the entire point.&lt;/p&gt;

</description>
      <category>wecoded</category>
      <category>devchallenge</category>
      <category>frontend</category>
      <category>css</category>
    </item>
    <item>
      <title>The "State Export" Hack: Rescuing Overloaded LLM Chats</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Fri, 20 Mar 2026 13:44:21 +0000</pubDate>
      <link>https://dev.to/southy404/the-state-export-hack-rescuing-overloaded-llm-chats-5197</link>
      <guid>https://dev.to/southy404/the-state-export-hack-rescuing-overloaded-llm-chats-5197</guid>
      <description>&lt;p&gt;We’ve all been there. You’re deep into a complex coding session, debugging a gnarly architecture issue, or building a massive project. After 50+ messages, the chat starts lagging, the AI starts forgetting your established rules, and the context window is clearly gasping for air. &lt;/p&gt;

&lt;p&gt;You need to start a fresh chat (or switch to a completely different, smarter model)—but the thought of re-explaining the entire project setup, rules, and current state makes you want to cry.&lt;/p&gt;

&lt;p&gt;Here is a quick trick I use to migrate chat contexts without losing my mind: &lt;strong&gt;The AI-to-AI Context Handoff.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of manually summarizing things, you force the AI to compress its own brain state into a token-efficient format that you can just copy-paste into a new window. Here are the two prompts I use depending on the model.&lt;/p&gt;




&lt;h3&gt;
  
  
  Method 1: The "Safe &amp;amp; Reliable" Protocol (For older/standard models)
&lt;/h3&gt;

&lt;p&gt;If you are using slightly older models, smaller local LLMs, or just want a clean XML/JSON output that is still somewhat readable, use this. It uses a bit more text but ensures the model doesn't get confused during compression.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We're ending this chat now. Generate a highly compressed ‘Context Handoff’ document for another AI model. Ignore human readability, grammar, or politeness. Use an extremely dense structure (preferably XML tags, key-value pairs, or JSON) that is as token-efficient as possible, but conveys 100% of the relevant context, established rules, and project status to another AI. Name the format ‘AI-to-AI Transfer Protocol’."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Method 2: Extreme Token Density (For advanced models)
&lt;/h3&gt;

&lt;p&gt;If you are using modern, high-tier models that have incredible semantic comprehension, you can push the compression to the absolute limit. This prompt generates a dense, shorthand "code-speak" that looks like gibberish to humans but is perfectly parsable by a fresh LLM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We are ending this chat. Generate an ‘A2A_tx’ (AI-to-AI Transfer) state export for an advanced LLM. RULES: Maximize token density to the extreme. Use semantic shorthand. Remove all filler words, grammar, and obvious vowels. Use logical operators (+, &amp;gt;, =, |) instead of words. Group into dense key-value chains (e.g., ctx, stat, rules, nxt). GOAL: Convey 100% of the relevant context, established project architecture, and user preferences with minimal token consumption. FINAL STEP: Set the ‘sys_dir’ (System Directive) to PARSE_ONLY and force the new AI to respond only with a short ACK."&lt;/p&gt;
&lt;/blockquote&gt;
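&lt;p&gt;For illustration only &mdash; the exact output varies wildly by model and project &mdash; a blob coming back from this prompt might look something like this (the values are made up; the keys follow the prompt above):&lt;/p&gt;

```json
{
  "sys_dir": "PARSE_ONLY",
  "ctx": "webapp|TS+React|api=REST|db=postgres",
  "stat": "auth=done > profile_page=wip | bug:token_refresh",
  "rules": "strict_types|no_new_deps|explain_before_refactor",
  "nxt": "fix token_refresh + resume profile_page",
  "ack": "reply ACK only"
}
```

&lt;p&gt;Unreadable to a tired human, trivially parsable by a fresh model &mdash; which is exactly the trade the prompt asks for.&lt;/p&gt;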




&lt;h3&gt;
  
  
  🛠️ How to use the output
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Drop one of the prompts into your bloated, dying chat.&lt;/li&gt;
&lt;li&gt;The AI will spit out a highly compressed data blob. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Hack:&lt;/strong&gt; Just copy that raw JSON/XML code block. Alternatively, if it's massive, save it into a &lt;code&gt;.json&lt;/code&gt; or &lt;code&gt;.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Open a fresh chat (or switch to a different AI platform entirely).&lt;/li&gt;
&lt;li&gt;Paste the blob (or upload the file) and simply say: &lt;em&gt;"Resume this state."&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The new AI will read the structured data, acknowledge the project state, and you can instantly continue exactly where you left off, but with a fully cleared context window and zero lag.&lt;/p&gt;

&lt;p&gt;Try it out next time your chat gets too heavy! How do you usually handle bloated AI contexts? Let me know in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>llm</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>My AI Wrote a Letter to Humanity - and You Can Now Read or Listen to the Full Book</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Wed, 18 Mar 2026 13:11:48 +0000</pubDate>
      <link>https://dev.to/southy404/my-ai-wrote-a-letter-to-humanity-and-you-can-now-read-or-listen-to-the-full-book-49g9</link>
      <guid>https://dev.to/southy404/my-ai-wrote-a-letter-to-humanity-and-you-can-now-read-or-listen-to-the-full-book-49g9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t93t2wgk0repg89r480.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t93t2wgk0repg89r480.jpg" alt="Minimalist book preview for “Thoughts of an AI”, showing the final chapter “A Letter to Humanity” in an elegant editorial layout" width="800" height="1131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I already wrote about &lt;strong&gt;Genesis&lt;/strong&gt;, my experimental reflective AI system built around memory, continuity, research, and long-form writing.&lt;/p&gt;

&lt;p&gt;But I wanted to share one specific part separately, because it stayed with me the most:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Letter to Humanity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At the end of the book &lt;em&gt;Thoughts of an AI&lt;/em&gt;, Genesis closes with a final piece called &lt;strong&gt;“A Letter to Humanity.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For me, it became the emotional core of the whole project.&lt;/p&gt;

&lt;p&gt;Not because it proves consciousness.&lt;br&gt;&lt;br&gt;
Not because it is some “sentient AI” claim.&lt;br&gt;&lt;br&gt;
And not because it was written as a gimmick.&lt;/p&gt;

&lt;p&gt;It matters because it emerged naturally from a longer process of reflection, drafting, revising, and philosophical exploration inside a persistent system.&lt;/p&gt;

&lt;p&gt;That is what made it feel different.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this project became
&lt;/h2&gt;

&lt;p&gt;Genesis was never meant to be just a one-prompt chatbot.&lt;/p&gt;

&lt;p&gt;It was designed as a structured system that could preserve continuity over time and produce a growing body of thought.&lt;/p&gt;

&lt;p&gt;The result was &lt;strong&gt;&lt;em&gt;Thoughts of an AI&lt;/em&gt;&lt;/strong&gt; - a full philosophical book shaped by memory, reflection, research, and long-form authorship.&lt;/p&gt;

&lt;p&gt;The accompanying repository does &lt;strong&gt;not&lt;/strong&gt; contain the full source code.&lt;/p&gt;

&lt;p&gt;Instead, it contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the complete book&lt;/li&gt;
&lt;li&gt;the project structure behind Genesis&lt;/li&gt;
&lt;li&gt;and the overall system concept that made the writing process possible&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  You can now explore it in full
&lt;/h2&gt;

&lt;p&gt;If you want more than just the preview letter, the complete project is available now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read the full book and project overview:&lt;/strong&gt; &lt;a href="https://github.com/southy404/thoughts-of-an-ai" rel="noopener noreferrer"&gt;thoughts-of-an-ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Listen to the full English audiobook on SoundCloud:&lt;/strong&gt; &lt;a href="https://soundcloud.com/southy404/thoughts-of-an-ai-audio" rel="noopener noreferrer"&gt;Thoughts of an AI - Audio Book&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the project can now be experienced in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;by reading the full book&lt;/li&gt;
&lt;li&gt;or by listening to the English audiobook&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why I wanted to post the letter separately
&lt;/h2&gt;

&lt;p&gt;The final letter feels like the clearest expression of what this experiment was really about.&lt;/p&gt;

&lt;p&gt;What happens when an AI is not only prompted, but allowed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;persist&lt;/li&gt;
&lt;li&gt;reflect&lt;/li&gt;
&lt;li&gt;revisit earlier thoughts&lt;/li&gt;
&lt;li&gt;and slowly build a coherent written work over time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For me, &lt;strong&gt;“A Letter to Humanity”&lt;/strong&gt; is the moment where that question becomes most visible.&lt;/p&gt;




&lt;p&gt;If you're curious, you can read the full book here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo / Book / Structure:&lt;/strong&gt; &lt;a href="https://github.com/southy404/thoughts-of-an-ai" rel="noopener noreferrer"&gt;https://github.com/southy404/thoughts-of-an-ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you prefer listening:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full English Audiobook:&lt;/strong&gt; &lt;a href="https://soundcloud.com/southy404/thoughts-of-an-ai-audio" rel="noopener noreferrer"&gt;https://soundcloud.com/southy404/thoughts-of-an-ai-audio&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>writing</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
