<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: conradbzura</title>
    <description>The latest articles on DEV Community by conradbzura (@conradbzura).</description>
    <link>https://dev.to/conradbzura</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3824279%2F2192610b-9d78-455e-bfc7-12bd1794aee4.jpeg</url>
      <title>DEV Community: conradbzura</title>
      <link>https://dev.to/conradbzura</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/conradbzura"/>
    <language>en</language>
    <item>
      <title>What if Python was natively distributable?</title>
      <dc:creator>conradbzura</dc:creator>
      <pubDate>Mon, 16 Mar 2026 17:53:01 +0000</pubDate>
      <link>https://dev.to/conradbzura/what-if-python-was-natively-distributable-2ip3</link>
      <guid>https://dev.to/conradbzura/what-if-python-was-natively-distributable-2ip3</guid>
      <description>&lt;h2&gt;
  
  
  The case for distributed execution without orchestration.
&lt;/h2&gt;

&lt;p&gt;You have an async function. You want to run it on another machine. How hard could that be?&lt;/p&gt;

&lt;p&gt;If you’ve gone looking for the answer in the Python ecosystem, you already know: unreasonably so. Not because the problem is complex, but because every framework that offers to solve it insists on solving a dozen other problems you didn’t ask about.&lt;/p&gt;

&lt;p&gt;Want distributed execution? Great — but first, define your workflow as a DAG. Configure a state backend. Pick a serialization format from a list of four. Set up a message broker. Write a retry policy. Decide on a dead letter queue strategy. Oh, and here’s a decorator, but it only works on top-level functions with JSON-serializable arguments that are registered in a task registry that the worker must import at startup.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You wanted to run a function somewhere else. You got an operating system.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The quiet assumption
&lt;/h2&gt;

&lt;p&gt;There’s a pattern here worth naming. Somewhere along the way, the Python ecosystem decided that distributed execution is inseparable from orchestration. That you can’t have one without the other. That handing a function to a remote process is meaningless unless the framework also knows what to do when that process catches fire.&lt;/p&gt;

&lt;p&gt;It’s a reasonable instinct. Distributed systems &lt;em&gt;are&lt;/em&gt; unreliable. Failures &lt;em&gt;do&lt;/em&gt; happen. But there’s a difference between acknowledging that reality and baking a specific response to it into the execution layer.&lt;/p&gt;

&lt;p&gt;When you write a regular async function in Python, the language doesn’t force you to register an error handler before you’re allowed to &lt;code&gt;await&lt;/code&gt; it. It doesn’t demand that you declare the function in a central registry. It doesn’t serialize your arguments through a format that strips away half their type information. It gives you a simple, transparent mechanism — &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; — and trusts you to build whatever policies you need on top.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why should distributed execution change any of this?&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The cost of opinions
&lt;/h2&gt;

&lt;p&gt;The frameworks that dominate this space — and I don’t need to name them — are genuinely impressive pieces of engineering. They solve real problems for real teams running real workloads. This isn’t about them being wrong.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It’s about them being opinionated at the wrong layer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When orchestration, checkpointing, and retry logic are fused into the execution primitive, you don’t just get those features — you get their constraints. Your functions must conform to a specific shape. Your arguments must survive a round trip through JSON or a proprietary serializer. Your control flow must be expressible as a graph. Your error handling must fit the framework’s model, not yours.&lt;/p&gt;

&lt;p&gt;These constraints compound. They push you toward writing “framework code” — code that exists to satisfy the tool rather than express your intent. The function you wanted to distribute starts to look nothing like the function you would have written if distribution weren’t a concern.&lt;/p&gt;

&lt;p&gt;And if you &lt;em&gt;don’t&lt;/em&gt; need orchestration? If you just want to fan out some computation, or stream results from workers, or run the same async pipeline across multiple machines? You’re still paying the full tax.&lt;/p&gt;

&lt;h2&gt;
  
  
  A different starting point
&lt;/h2&gt;

&lt;p&gt;What if we started from the other end? Instead of building a distributed application framework and embedding an execution engine inside it, what if we just… made execution distributable?&lt;/p&gt;

&lt;p&gt;The idea is simple: Python already has the right primitives. Async functions give you concurrency. Async generators give you streaming. &lt;code&gt;await&lt;/code&gt; gives you composition. Exceptions give you error propagation. Context variables give you scoping. What’s missing is the ability to say “run this over there” without giving any of that up.&lt;/p&gt;
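&lt;p&gt;To make that concrete, here is a plain-asyncio sketch (no Wool involved) of the primitives listed above: an async function, an async generator, &lt;code&gt;await&lt;/code&gt; composition, exception propagation, and a context variable:&lt;/p&gt;

```python
import asyncio
import contextvars

# Scoping: context variables carry per-task state.
request_id = contextvars.ContextVar("request_id", default="none")

async def incr(x):
    # Concurrency: an ordinary coroutine.
    await asyncio.sleep(0)
    return x + 1

async def countdown(n):
    # Streaming: an ordinary async generator.
    for i in range(n, 0, -1):
        yield i

async def main():
    request_id.set("abc123")
    value = await incr(41)                    # composition via await
    stream = [i async for i in countdown(3)]  # iteration
    try:
        await incr("oops")                    # errors propagate normally
    except TypeError:
        caught = True
    return [request_id.get(), value, stream, caught]

print(asyncio.run(main()))
```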

&lt;p&gt;Not “run this over there according to my DAG definition.” Not “run this over there and checkpoint the result.” Just: this function, that machine, same semantics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/wool-labs/wool" rel="noopener noreferrer"&gt;That’s what I set out to build with Wool →&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkmowspbxcnih603csqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkmowspbxcnih603csqj.png" alt="Wool - A lightweight distributed Python runtime" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wool in thirty seconds
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Distributed coroutines
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;wool&lt;/span&gt;

&lt;span class="nd"&gt;@wool.routine&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;wool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;WorkerPool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One decorator and a context manager. The function is still an async function. You still &lt;code&gt;await&lt;/code&gt; it. You still catch its exceptions. If it’s an async generator, you still iterate it. The only difference is that it executes on a remote worker process instead of the local event loop.&lt;/p&gt;

&lt;p&gt;Workers discover each other through pluggable backends — shared memory for a single machine, Zeroconf for a local network, or roll your own to suit your existing stack. There’s no broker, scheduler, or central coordinator. Workers cooperate as part of a peer-to-peer network. The topology is flat.&lt;/p&gt;

&lt;p&gt;What Wool doesn’t do is just as important. &lt;em&gt;No built-in retry logic&lt;/em&gt; — that’s your call. &lt;em&gt;No persistent state&lt;/em&gt; — if you need it, you know your storage better than I do. &lt;em&gt;No dead letter queues, no workflow engine, no DAGs.&lt;/em&gt; Wool provides best-effort, at-most-once execution, and gets out of the way.&lt;/p&gt;

&lt;p&gt;The bet is that the language already gives you enough to build whatever policies you need. Wool just makes the execution itself transparent.&lt;/p&gt;
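&lt;p&gt;As one illustration of building policy on top rather than inside: the retry wrapper below is plain Python layered over any async callable. Nothing here is Wool API; &lt;code&gt;with_retries&lt;/code&gt; is a hypothetical helper you would own yourself, and it wraps a remote routine exactly as it wraps a local one:&lt;/p&gt;

```python
import asyncio

def with_retries(fn, attempts=3, base_delay=0.01):
    # Hypothetical application-level policy: exponential backoff
    # around any async callable, local or remote.
    async def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return await fn(*args, **kwargs)
            except Exception:
                if attempt + 1 == attempts:
                    raise
                await asyncio.sleep(base_delay * 2 ** attempt)
    return wrapper

calls = {"n": 0}

async def flaky():
    # Fails on the first two calls, then succeeds.
    calls["n"] += 1
    if calls["n"] in (1, 2):
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(with_retries(flaky)()))
```

The same wrapper composes with timeouts, circuit breakers, or whatever else your application needs, because the unit of composition is just an awaitable.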

&lt;h3&gt;
  
  
  Distributed async generators
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@wool.routine&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;chars&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sorted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;charset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chars&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;hint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;chars&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;mid&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;hint&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;higher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;hint&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lower&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;hi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;wool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;WorkerPool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;anext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;aclose&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;
            &lt;span class="n"&gt;hint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;higher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lower&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;asend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Async generators get the same treatment. The caller drives the generator with &lt;code&gt;asend()&lt;/code&gt;, &lt;code&gt;athrow()&lt;/code&gt;, and &lt;code&gt;aclose()&lt;/code&gt;, exactly as if it were local. Under the hood, each yield crosses a network boundary — the client sends a command, the worker advances the generator one step and streams the value back. But from the caller’s perspective, it’s just an async generator.&lt;/p&gt;

&lt;p&gt;Streaming is how you build pipelines, progressive results, long-running conversational protocols — the fact that you can distribute them without changing how they work is the whole point.&lt;/p&gt;
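&lt;p&gt;A minimal local sketch of the pipeline idea, using nothing but async generators (the point being that with a distributed runtime, any stage could run elsewhere unchanged):&lt;/p&gt;

```python
import asyncio

async def produce(n):
    # Source stage: emit raw values.
    for i in range(n):
        yield i

async def square(stream):
    # Transform stage: consumes one async generator, yields another.
    async for x in stream:
        yield x * x

async def main():
    # Stages compose by passing generators along.
    return [y async for y in square(produce(4))]

print(asyncio.run(main()))
```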

&lt;h2&gt;
  
  
  What’s next
&lt;/h2&gt;

&lt;p&gt;Wool is still (very) early, but it works. Future posts will share progress and lessons learned as I continue to build this thing.&lt;/p&gt;

&lt;p&gt;If you’ve ever felt like distributed Python code shouldn’t require adopting a new programming model, Wool might be for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/wool-labs/wool" rel="noopener noreferrer"&gt;Try it out — I welcome your feedback, use cases, and honest skepticism →&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>distributedsystems</category>
      <category>opensource</category>
      <category>software</category>
    </item>
    <item>
      <title>I built Wool, a lightweight distributed Python runtime</title>
      <dc:creator>conradbzura</dc:creator>
      <pubDate>Sat, 14 Mar 2026 17:09:13 +0000</pubDate>
      <link>https://dev.to/conradbzura/i-built-wool-a-lightweight-distributed-python-runtime-3958</link>
      <guid>https://dev.to/conradbzura/i-built-wool-a-lightweight-distributed-python-runtime-3958</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/wool-labs/wool" rel="noopener noreferrer"&gt;Wool - A distributed Python runtime&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I spent a long time working in the payments industry, specifically on a rather niche reporting/aggregation platform with spiky workloads that were not easily parallelized. To pump as much data through our pipeline as possible, we had to rely on complex locking schemes across half a dozen or so not-so-micro services - keeping a clear mental picture of how the services interacted for a given data source was a major headache. This problem kept intriguing me long after I left the company, and it led to the development of Wool.&lt;/p&gt;

&lt;p&gt;If you've worked with frameworks like Ray or Prefect, you're probably familiar with the promise of going from script to scale in two lines of code (or something along those lines). This is essentially the solution I was looking for: a framework with limited boilerplate that facilitated arbitrary distribution schemes within a single, coherent codebase. What I was hoping for, though, was something a little more focused - I wasn't working on ML pipelines and didn't need much beyond the distribution layer.&lt;/p&gt;

&lt;p&gt;This is where Wool comes in. While its API is very similar to those of Ray and Prefect, where it differentiates itself is in its scope and architecture.&lt;/p&gt;

&lt;p&gt;First, Wool is not a task orchestrator. It provides push-based, best-effort, at-most-once execution. There is no built-in coordination state, retry logic, or durable task tracking. Those concerns remain application-defined. The beauty of Wool is that it looks and feels like native async Python, allowing you to use purpose-built libraries for your needs as you would for any other Python app (with some caveats).&lt;/p&gt;

&lt;p&gt;Second, Wool was designed with speed in mind. Because it's not bloated with features, it's actually pretty fast, even in its current nascent state. Wool routines are dispatched directly to a decentralized peer-to-peer network of gRPC workers, which can distribute nested routines amongst themselves in turn. This results in low dispatch latencies and high throughput. I won't make any performance claims until I can assemble some more robust benchmarks, but running local workers on my M4 MacBook Pro (a trivial example, I know), I can easily achieve sub-millisecond dispatch latencies.&lt;/p&gt;

&lt;p&gt;Anyway, check it out; any and all feedback is welcome. Regarding docs: the code is the documentation for now, but I promise I'll sort that out soon. I've got plenty of ideas for next steps, but it's always more fun when people actually use what you've built, so I'm open to suggestions for impactful features.&lt;/p&gt;

&lt;p&gt;-Conrad&lt;/p&gt;

</description>
      <category>python</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>software</category>
    </item>
  </channel>
</rss>
