<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lexy Callemeyn</title>
    <description>The latest articles on DEV Community by Lexy Callemeyn (@lexy_eyn).</description>
    <link>https://dev.to/lexy_eyn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3911793%2F130f892b-c64e-447f-858d-264c5ee0643f.png</url>
      <title>DEV Community: Lexy Callemeyn</title>
      <link>https://dev.to/lexy_eyn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lexy_eyn"/>
    <language>en</language>
    <item>
      <title>I imagined a P2P network for sharing AI inference and made a prototype</title>
      <dc:creator>Lexy Callemeyn</dc:creator>
      <pubDate>Tue, 05 May 2026 10:00:12 +0000</pubDate>
      <link>https://dev.to/lexy_eyn/i-imagined-a-p2p-network-for-sharing-ai-inference-and-made-a-prototype-9d7</link>
      <guid>https://dev.to/lexy_eyn/i-imagined-a-p2p-network-for-sharing-ai-inference-and-made-a-prototype-9d7</guid>
      <description>&lt;p&gt;A few weeks ago I was remembering an old PC game I played in my younger years and that I'd love to wander through again, listening to the soundtrack, seeing the colorful cities. On the game's subreddit, the community was talking about creating a spiritual successor and then my brain took a weird shortcut:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What if I could help them speed up the development by lending the remaining tokens of my daily Claude subscription or the power of a local AI model to this project when I'm not using them…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;And it came to me like a vision: a P2P network, a swarm of AI workers, each one provided by people who believe in the project, working together to help the owner in a communal way. Not the P2P you’re thinking about, no file sharing, no torrents. Just machines talking directly to each other, one sending a task, the other sending a result back.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Am I a dreamer? Completely! And proud of it 😂&lt;/p&gt;




&lt;h2&gt;
  
  
  We contribute code. We contribute money. What about compute?
&lt;/h2&gt;

&lt;p&gt;There are many ways to help a project you can’t wait to work or play with. In the early days, you gave code. Then you could give money: Patreon, buying a coffee, Kickstarter, ... Some people give time: organizing issues, reviewing pull requests, writing documentation, answering questions in Discord.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And now we all have something new to give: inference capacities.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We now have powerful local models. We have cloud subscriptions with daily usage allowances for Claude, ChatGPT, Gemini, ... We’ve built setups that can reason, generate, review, and create, right from our desks. That’s an incredible amount of power at our fingertips.&lt;/p&gt;

&lt;p&gt;And when we’re done for the day, that power is still there. Ready. Available.&lt;/p&gt;

&lt;p&gt;Now imagine a small open source project maintained by a few people in their free time. Imagine them having a swarm of volunteer AI workers at their disposal to work on features, triage issues, draft tests, and review pull requests, all because a community of people who believe in the project lend their idle models for a few hours.&lt;/p&gt;

&lt;p&gt;You don’t need to understand the codebase. You don’t need to write a single line of code. You just need to care about a project enough to turn a client on while doing something else.&lt;/p&gt;

&lt;p&gt;It can be a new kind of open source contribution. And I think it can help many workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  How do I see that technically?
&lt;/h2&gt;

&lt;p&gt;There are two sides to explain: the contributor and the project owner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9678yb8ty62e8tg17wq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9678yb8ty62e8tg17wq8.png" alt="Contributor to project owner exchange logic schema" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The contributor side
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You start a client and set up your best local LLM or an API key for your favorite model.&lt;/li&gt;
&lt;li&gt;You enter a Git repo URL or a direct P2P URL shared by the owner.&lt;/li&gt;
&lt;li&gt;It reads all relevant markdown files to understand how to work inside the project.&lt;/li&gt;
&lt;li&gt;It receives prompts directly from the owner through the app.&lt;/li&gt;
&lt;li&gt;It does the inference job and sends back the requested content as stringified JSON.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kaopf8ls8o20rwlzizo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kaopf8ls8o20rwlzizo.png" alt="Contributor flow schema" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And everybody can contribute, without having to dive into the code or have any engineering knowledge!&lt;/p&gt;
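&lt;p&gt;The post doesn’t spell out the wire format, so here’s a purely hypothetical example of what a bee’s stringified JSON result could look like (every field name here is my invention, not the prototype’s actual schema):&lt;/p&gt;

```json
{
  "task_id": "task-42",
  "model": "llama-3-8b",
  "status": "done",
  "files": [
    { "path": "tests/parser_test.rs", "language": "rust", "content": "..." }
  ],
  "notes": "Drafted unit tests covering the tokenizer edge cases."
}
```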

&lt;h3&gt;
  
  
  The project owner side
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You start the owner client and prepare your project: the source files, the documentation, the context that will guide the contributors’ machines.&lt;/li&gt;
&lt;li&gt;Once contributors connect, you see them appear anonymously with their model specs.&lt;/li&gt;
&lt;li&gt;From there you create tasks: a prompt, the related files, and you dispatch them to the model that fits best.&lt;/li&gt;
&lt;li&gt;Results come back through the P2P network. You review, accept, or ask for a new version. You can even ask several contributors to work on the same task and pick the best output, or merge them together into an all-in-one better version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F076t0rn5cer78urbtvxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F076t0rn5cer78urbtvxh.png" alt="Project owner flow schema" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As “simple” as that 😉&lt;/p&gt;
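&lt;p&gt;To make the dispatch step concrete, here’s a minimal sketch in Python of how an owner client might pick a fitting bee. Everything here, the field names and the “smallest context window that still fits” heuristic, is my assumption, not the prototype’s actual logic:&lt;/p&gt;

```python
# Hypothetical sketch of the owner-side dispatch step (illustrative only).

def dispatch(bees, task):
    """Pick the connected bee whose model best fits the task:
    here, the smallest context window that still holds it."""
    fitting = [b for b in bees if b["context_window"] >= task["needed_tokens"]]
    if not fitting:
        return None
    return min(fitting, key=lambda b: b["context_window"])

bees = [
    {"id": "bee-1", "model": "llama-3-8b", "context_window": 8192},
    {"id": "bee-2", "model": "qwen-2.5-32b", "context_window": 32768},
]
task = {"prompt": "Draft tests for the parser", "needed_tokens": 12000}

best = dispatch(bees, task)
print(best["id"])  # bee-2 is the only model large enough here
```

&lt;p&gt;In the actual prototype, the owner picks the model from the GUI; an automatic heuristic like this would only ever be a fallback.&lt;/p&gt;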

&lt;blockquote&gt;
&lt;p&gt;And this is the moment when this article was supposed to end…&lt;br&gt;
But I was so curious, and it was all so fresh in my head, that I wanted to see it in action. So I refined the idea with Claude Opus and asked it to create a prototype instead of just pitching the concept 😅&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Hive Inference P2P prototype
&lt;/h2&gt;

&lt;p&gt;May I introduce you to the Hive?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gitlab.com/lexy.callemeyn/hive-inference-p2p" rel="noopener noreferrer"&gt;Here’s the link to the repo of the prototype.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s composed of two applications: the bee and the owner.&lt;/p&gt;

&lt;h3&gt;
  
  
  The technical stack
&lt;/h3&gt;

&lt;p&gt;It’s built in Rust and Tauri with a basic HTML/JS frontend. There’s a fully working GUI with decent UX; it could be way better, but it does the job pretty well.&lt;/p&gt;

&lt;p&gt;The core of the network is based on libp2p, the stack originally developed for IPFS and now used by Ethereum and most modern peer-to-peer projects. The networking layer is about 40 lines of code; libp2p does the heavy lifting.&lt;/p&gt;

&lt;p&gt;Everything is end-to-end encrypted through a Noise handshake with Ed25519 keypairs.&lt;/p&gt;

&lt;h3&gt;
  
  
  The main features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The bees have two ways to connect to the swarm: they can read the owner’s address from a small hive.yml committed to any Git repository, or they can enter the owner’s P2P URL directly.&lt;/li&gt;
&lt;li&gt;The bee client can have several models set up.&lt;/li&gt;
&lt;li&gt;The owner can edit the global context, fetch web pages (documentation, for instance) to add to it, and change all the basic prompts.&lt;/li&gt;
&lt;li&gt;Every task can be isolated or dependent on other ones.&lt;/li&gt;
&lt;li&gt;A review panel gathers the results, formatted and syntax-highlighted by language.&lt;/li&gt;
&lt;li&gt;The owner can ask for several versions of the same task and choose the best one themselves. Or they can create a merge task and ask a bee to combine all the generated versions into a single best one. &lt;strong&gt;Like this schema:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fck88a6n4gn8ynx5coem4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fck88a6n4gn8ynx5coem4.png" alt=" " width="800" height="743"&gt;&lt;/a&gt;&lt;/p&gt;
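&lt;p&gt;The fan-out and merge flow from the schema above can be sketched in a few lines of Python (the function names are mine, not the prototype’s actual code):&lt;/p&gt;

```python
# Illustrative sketch of the fan-out / merge flow, not the real implementation.

def fan_out(task_prompt, n_versions):
    """One prompt per requested version, each sent to a different bee."""
    return [f"[version {i} of {n_versions}] {task_prompt}"
            for i in range(1, n_versions + 1)]

def merge_task(task_prompt, results):
    """Follow-up task asking a bee to combine all versions into one."""
    header = f"Merge the {len(results)} candidates below into one best answer to: {task_prompt}"
    parts = [f"--- candidate {i + 1} ---\n{r}" for i, r in enumerate(results)]
    return "\n".join([header] + parts)

prompts = fan_out("Write a README intro", 3)
results = [f"answer to {p}" for p in prompts]   # pretend three bees replied
print(merge_task("Write a README intro", results))
```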

&lt;p&gt;Here’s a demo video to see the prototype in action:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/CugjYjZpbHs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;
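&lt;p&gt;For reference, the hive.yml bootstrap file mentioned in the features list could look something like this. The keys and the multiaddr are hypothetical; check the repo for the real schema:&lt;/p&gt;

```yaml
# Hypothetical hive.yml committed at the repo root (field names are a guess).
hive:
  name: my-open-source-game
  # The owner's libp2p multiaddr that bees dial to join the swarm:
  owner_addr: /ip4/203.0.113.7/tcp/4001/p2p/12D3KooWExamplePeerId
  # Docs a bee should read before accepting tasks:
  context:
    - README.md
    - CONTRIBUTING.md
```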




&lt;h2&gt;
  
  
  Other use cases for the Hive
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You want to benchmark several models from several computers and check the result in the same place.&lt;/li&gt;
&lt;li&gt;You want to study whether a swarm of models can produce something better together than solo. Like a choir, a jury room, ensemble weather forecasting, or a genetic algorithm.&lt;/li&gt;
&lt;li&gt;You want to easily benchmark your AGENTS.md or other context setups and see the result.&lt;/li&gt;
&lt;li&gt;You want to do A/B prompt testing. Same task instruction, two system prompts via the prompts editor. See which prompt style produces better outputs across many models, not just the one you tested with.&lt;/li&gt;
&lt;li&gt;You’re working on a closed-source project and you don’t want your code to go through OpenAI or any cloud API. You send your tasks to colleagues running local models on their own machines. Your code never leaves the circle.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;The prototype is in an early phase, so it’s not bug-free. I’ve mainly tested it with local LLMs, not cloud ones.&lt;/p&gt;

&lt;p&gt;Btw, it’s definitely open source. Fork it, clone it, make it yours. If you see something in this idea, don’t hesitate to test it, send feedback, open issues, and share it with someone who could use it.&lt;/p&gt;

&lt;p&gt;Thanks for reading 💜&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
