<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ItsEvilDuck</title>
    <description>The latest articles on DEV Community by ItsEvilDuck (@itsevilduck).</description>
    <link>https://dev.to/itsevilduck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895137%2F2431f698-e24c-43d0-b0f7-99116d2c3fc9.jpeg</url>
      <title>DEV Community: ItsEvilDuck</title>
      <link>https://dev.to/itsevilduck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/itsevilduck"/>
    <language>en</language>
    <item>
      <title>One ledger, two chains — what I learned about multi-chain payment architecture from a reader's correction</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Sat, 09 May 2026 16:45:03 +0000</pubDate>
      <link>https://dev.to/itsevilduck/one-ledger-two-chains-what-i-learned-about-multi-chain-payment-architecture-from-a-readers-2jpl</link>
      <guid>https://dev.to/itsevilduck/one-ledger-two-chains-what-i-learned-about-multi-chain-payment-architecture-from-a-readers-2jpl</guid>
      <description>&lt;p&gt;In the previous post in this series I wrote about Base as the payment rail for QuackBuilds, and made an argument about why card-network economics break for certain payment shapes that onchain rails handle natively. The post landed cleanly enough on its own, but a reader’s correction in the comments — and a follow-up post on his own blog — opened up a deeper architectural question that I wasn’t ready to answer at the time and am still not ready to answer now. This post is what I learned from the exchange. It is not a roadmap for what I’m building next, because I’m honestly not sure yet. The architectural reasoning is good enough to be worth writing down regardless, and good enough to apply to whatever direction I eventually move in.&lt;br&gt;
I should be upfront about where I am, since it shapes how to read what follows. I’m relatively new to this field, and the architecture in this post is more carefully considered than it is committed to. The Base side of QuackBuilds is in production. Beyond that, I’m holding the next direction loosely — not because I lack ideas, but because I’d rather wait until I’ve learned more before I publicly commit to a path. The reason I’m writing this post anyway, rather than waiting until I’ve built the next thing, is that the architectural reasoning is the part that benefits from public scrutiny. If the reasoning is wrong, I’d rather hear that now from someone who’s run multi-chain infrastructure in production than discover it whenever I do decide to act on it. Treat this post as me thinking out loud about a class of decisions I’ll eventually face, not as a plan I’ve signed.&lt;br&gt;
Most of what’s useful in this post comes from a commenter named Paul (QBitFlow), who’s running non-custodial payment infrastructure on Ethereum, Solana, and Base in production and corrected me three times across two threads in the most substantive comment exchange I’ve had on this platform. He’s since published his own architectural writeup of the same territory, and I’m going to credit him throughout because the architectural pattern this post describes is his first, not mine.&lt;br&gt;
The mistake I was making, before the exchange that prompted this post, was thinking about multi-chain architecture as a reconciliation problem. The mental model I had was something like: there are two chains, each with its own state, and the work is to keep them synchronized — or at least to keep their views of the world consistent enough that the accounting doesn’t drift. That framing is wrong, and it’s wrong in a way that produces wrong code. If you start from “two chains that need to be reconciled,” you build a system where each chain has its own ledger and the ledgers have to be merged at some boundary. The merge is where the bugs live. The merge is where the audit trail breaks. The merge is where the operational complexity compounds.&lt;br&gt;
The correct framing, which Paul articulated cleanly enough that I’m going to quote it directly, is one ledger, chain-specific handling underneath. The ledger is the source of truth. The chains are implementation details below it. Every transaction the system cares about — regardless of which chain it cleared on, regardless of who initiated it, regardless of whether it was settlement or a refund — writes to the same ledger, in the same accounting format, with the same audit-trail semantics. The chain-specific code lives below the ledger and is responsible for the actual on-chain operations: signing, broadcasting, watching for confirmations, handling reorgs, managing fee dynamics. The ledger doesn’t know any of that. The ledger knows only that a transaction happened, what it was, who it was between, and what amount in what asset cleared. The mental shift this asks for is from horizontal to vertical: chains aren’t side-by-side with glue between them, they’re below an accounting layer that sits above all of them. Above, not between.&lt;br&gt;
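To make that shape concrete, here is a minimal TypeScript sketch of how I currently picture the pattern. It is my reading of Paul’s description, not his code, and the names (LedgerEntry, ChainHandler) are mine. The point is that the ledger type carries no chain-specific fields beyond an identifier and an opaque reference, while everything chain-specific hides behind one interface below it.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// One ledger entry format, no matter which chain cleared the transaction.
// The chain appears only as an identifier plus an opaque reference.
type LedgerEntry = {
  id: string;
  kind: "settlement" | "refund" | "payout";
  payer: string;       // ledger-level account id, not an on-chain address
  payee: string;
  asset: string;       // e.g. "USDC"
  amount: bigint;      // smallest unit of the asset
  chainId: string;     // first-class runtime value
  chainRef: string;    // tx hash or signature, opaque to the ledger; empty until cleared
  occurredAt: string;  // ISO timestamp
};

// Everything chain-specific lives behind this interface, below the ledger:
// signing, broadcasting, confirmations, reorg handling, fee dynamics.
interface ChainHandler {
  chainId: string;
  submitTransfer(toAddress: string, asset: string, amount: bigint): Promise&amp;lt;string&amp;gt;; // returns chainRef
  waitForFinality(chainRef: string): Promise&amp;lt;void&amp;gt;;
}

// The accounting layer talks only to the interface and writes only the ledger format.
async function settle(handler: ChainHandler, draft: LedgerEntry, toAddress: string) {
  const chainRef = await handler.submitTransfer(toAddress, draft.asset, draft.amount);
  await handler.waitForFinality(chainRef);
  return { ...draft, chainRef }; // one ledger, one format, any chain underneath
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Finance, refunds, and audit queries read LedgerEntry rows and never need to know what a reorg is; that separation is the whole pattern.&lt;br&gt;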
That inversion sounds small but it changes almost every downstream design decision. The accounting layer becomes uniform and human-readable, regardless of how many chains are underneath it. The post-hoc work — finance reconciliation, tax reporting, customer support, refund flows — operates on the ledger, not on the chains, which means most of that work is chain-agnostic. New chains can be added without rewriting the accounting layer; you just write a new chain-specific handler that knows how to talk to the new chain and how to report back to the ledger in the existing format. The architectural complexity, which would be quadratic in the number of chains under the reconciliation model, becomes linear under the ledger-first model. That’s the whole point.&lt;br&gt;
Paul’s published writeup of his own multi-chain journey makes the same architectural pattern visible from a different angle, and the principle he distills is one I’m going to be carrying around for years: chain identifiers as first-class values, no hardcoded execution-model assumptions. The distinction he draws is between a codebase that says “we support Ethereum and Solana” — two specific chains baked into the data model, the webhooks, the SDKs, the dashboard — and a codebase that says “we support multiple chains, currently Ethereum and Solana” — chain identifiers as runtime values, chain-specific behaviors isolated behind interfaces, no &lt;code&gt;if EVM else Solana&lt;/code&gt; branches scattered through the business logic. The two codebases look superficially similar in their first version. Six months later, when a third chain shows up, they look completely different. The first one needs a refactor; the second one just needs an implementation behind an interface that already exists. Worth noting that the principle applies whether or not a system ever ends up multi-chain: the discipline pays off the moment any chain other than the original one enters the system, whichever chain that turns out to be.&lt;br&gt;
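In code, the difference between the two codebases is roughly the following. This is my own sketch of the two styles, not QBitFlow’s code; the hardcoded version is shown only to make the contrast visible, and the second version reuses the ChainHandler interface from the sketch above.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Style 1: two specific chains baked into the business logic.
// A third chain means editing every function that looks like this.
async function payoutHardcoded(chain: "evm" | "solana", to: string, amount: bigint) {
  if (chain === "evm") {
    // viem-specific path
  } else {
    // @solana/web3.js-specific path
  }
}

// Style 2: chain identifiers as runtime values, behaviors behind an interface.
const handlers = new Map(); // chainId -> ChainHandler

function registerHandler(handler: ChainHandler) {
  handlers.set(handler.chainId, handler);
}

async function payout(chainId: string, to: string, asset: string, amount: bigint) {
  const handler = handlers.get(chainId);
  if (!handler) throw new Error("no handler registered for chain " + chainId);
  return handler.submitTransfer(to, asset, amount);
}

// Adding a third chain later is registerHandler(newChainHandler), not a refactor.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;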
The deeper question, which Paul surfaced in a follow-up exchange, is why you’d architect for multiple chains in the first place. The merchant-driven case is the obvious one — your users will eventually want chains you don’t yet support, so you build for capacity to add them. There’s a second flavor I’d been thinking about as substrate-driven — chains added because the internal requirements of the system will eventually need them, even if no user is asking. Paul, correctly, pointed out that both flavors collapse into a deeper category: capability-driven multi-chain. The chain set falls out of the operation set, not out of who’s asking. For a payments rail, that means: accept any stablecoin, settle non-custodially, in seconds, at sub-cent cost when needed. No single chain delivers all four. Different operations would surface different chain capabilities; the design heuristic is to enumerate the operations the system actually needs to perform, identify which capabilities each requires, then ask which chains deliver which capabilities. Who’s asking is then a forcing function on timing, not a flavor. That reframe is the cleanest way to think about chain selection I’ve encountered, and it’s the lens I’d use to evaluate any chain decision I eventually make — whether that’s adding a second chain, sticking with one, or something else entirely.&lt;br&gt;
The integration cost lives in four places, and the ledger isn’t one of them. The second thing Paul corrected me on, which was the more important correction because it’s the one that affects how I’d price this kind of work, was the location of the actual integration cost. I’d been mentally budgeting multi-chain work as learning two SDKs and writing two integration paths, and that framing is wrong in a way builders consistently get wrong. The SDKs themselves are good now — that’s the part most people overestimate. The expensive part lives in the translation layer plus the operational surface around it: reorg handling when validators flap, fee anomalies on L2s during congestion, refund flows for overpayments and edge cases, the audit trail that finance and tax both need, the monitoring that catches a chain anomaly before it shows up as a customer complaint. None of that is multi-chain-specific in theory — single-chain systems need most of it too — but in practice the operational cost of a two-rail architecture is dominated by the work of squaring two chains’ worth of finality models, fee dynamics, address formats, event semantics, and reorg behaviors into a single coherent financial view that downstream systems can actually rely on.&lt;br&gt;
The honest implication, which I want to flag because most architecture posts conveniently skip it, is that the cost of going from one chain to two is either weekend-shaped or quarter-shaped, depending on whether the codebase was structured for new chains or for two specific chains. That bimodal framing comes directly from Paul’s published experience — the Base addition to QBitFlow’s existing Ethereum-and-Solana stack was a weekend, and the reason it was a weekend rather than a quarter was almost entirely the day-one discipline of treating chains as runtime values rather than hardcoded assumptions. If you did the upfront work, additional chains are an interface implementation. If you didn’t, additional chains are a refactor. In practice, codebases tend to cluster at the two ends because the discipline either gets applied as a doctrine or doesn’t get applied at all — partial discipline tends to evaporate under the first deadline pressure. The architecture commits you to one of those two futures from very early on, often before you realize you’ve made the choice. Anyone telling you the cost is 2x is averaging across the two paths and producing a number that doesn’t describe either of them.&lt;br&gt;
The third thing from the exchange that shifted my thinking is the architectural principle that I’m going to be quoting back to myself for years. Paul’s framing was: “maximum security and transparency where it counts, minimum on-chain footprint where it doesn’t.” The instinct in crypto-native systems, and one I’d been quietly drifting toward, is to put as much of the application state on-chain as possible — for verifiability, for transparency, for the cypherpunk virtue of cryptographic guarantees. That instinct breaks down fast in production. Fees pile up. Latency suffers. Most of the state doesn’t actually need cryptographic guarantees to be correct; it just needs to be correct, which is a much cheaper property to deliver. The discipline that holds up is putting the security-critical path on-chain — settlement, custody, authorization, anything where trust assumptions matter — and building around it with off-chain accounting and orchestration to keep the cost reasonable. Maximum transparency where it counts. Minimum on-chain footprint where it doesn’t. That principle holds regardless of how many chains are eventually involved, and it’s one of the cleanest ways I’ve encountered to think about what should actually be on-chain — a question I’d been answering by default rather than by design.&lt;br&gt;
There’s a related lesson in Paul’s published reflection on QBitFlow’s first six months in production that I want to surface explicitly, even though it’s the lesson I’m still wrestling with the hardest. He wrote that if they were starting today, they would have shipped their second EVM chain in v1 alongside the original pair, even before merchant demand fully justified it — because the chain-specific code paths were going to need to exist anyway, the upfront cost of adding them while the codebase was small was lower than the cost six months later, and the conversion benefit was genuinely expensive to delay. That’s a strong argument for early commitment. The counter-argument, and the reason I’m not just acting on his lesson immediately, is that his case had merchant demand visible enough to retrospectively validate the discipline. I don’t yet have that kind of visible demand for any specific direction, and I’m new enough to this field that I’d rather learn more before committing to a path the architecture should be optimized around. Holding decisions open isn’t the same as having no plan; it’s a deliberate choice to keep the option space wide while I’m still calibrating my own judgment.&lt;br&gt;
There’s a section of this post I almost didn’t write, because the architectural decisions in it are clean enough that the political dimension feels like a tangent. But Paul named something in our exchange that I think most engineering writing on this topic skips, and that I want to address directly. The framing was: merchant-driven discipline gets validated continuously by user requests; substrate-driven discipline has to survive a year of “why are we paying this complexity tax for a thing nobody asked for.” That’s exactly right, and it’s the version of the politics problem most architecture posts under-describe. The merchant-driven case carries its own justification — every new merchant request is a fresh data point that proves the discipline was correct. The substrate-driven case, and by extension the capability-driven case where the capability isn’t yet being exercised, has no continuous validation. The discipline costs real time and real complexity for a payoff that’s invisible by definition until the moment it pays off, and the larger the eventual payoff, the more invisible it is in the meantime. That’s an unfunded discipline tax. It’s the interval where engineering leaders quietly start dismantling the abstraction layers they don’t see being used. Same architectural conclusion, very different organizational politics. Anyone planning a substrate-driven multi-chain architecture should know this is the political shape of the work — not because the architecture is wrong, but because the discipline has to survive an interval of looking like overhead before it starts looking like foresight.&lt;br&gt;
The catalog-vs-rail asymmetry — a comparison Paul surfaced and which I’ve been thinking of in those terms since — is worth naming here because it sharpens the politics question further. The avoided cost of substrate-driven discipline is structurally larger on the catalog side than on the payment-rail side, because the refactor cost compounds across every app in a catalog that bakes in state-shape assumptions. By the time the substrate-driven case is forcing the issue, you’d be retrofitting many apps instead of one payment rail. The fact that the discipline pays off more in the catalog case is, paradoxically, exactly why it’s harder to justify upfront — the avoided cost is invisible by definition, and the larger the avoided cost, the more invisible it gets. If you’re an engineering leader being asked to grant a discipline budget for a chain abstraction whose payoff isn’t yet visible, this paragraph is permission to grant it anyway. Paul flagged in a follow-up exchange that this dynamic generalizes well past chain architecture — it applies anywhere the discipline budget is paid in a currency the organization doesn’t track until the bill arrives, including observability investment, schema discipline, and type-safety adoption curves. The class of problem is broader than the instance, which is part of why naming it explicitly matters.&lt;br&gt;
What’s left, given all of that, is the synthesis I’m walking away with: a set of architectural principles I’m reasonably confident about and a set of build decisions I’m not yet ready to make. The principles came from Paul, in scattered comments and a published post; the synthesis — the connecting tissue between them and the application to a catalog rather than a payment rail — is mine to commit to or not. Ledger-first architecture, chain identifiers as first-class values, capability-driven chain selection rather than identity-driven, maximum security on-chain where it matters and minimum on-chain footprint where it doesn’t, and an honest accounting of the political cost of substrate-driven discipline. Those are durable. They’ll apply to whichever direction I eventually take QuackBuilds. The build decisions — whether to add a second chain, when, which one, on what timeline — are downstream of decisions about product direction that I haven’t fully made yet, and I’d rather acknowledge that openly here than perform a roadmap I haven’t committed to. The architecture is ready for the decisions when I’m ready to make them.&lt;br&gt;
What I’m taking away from this entire exchange — both the comment threads and Paul’s published writeup — is something larger than the specific architectural pattern, and I want to name it because it’s the kind of meta-lesson that’s easy to miss while you’re inside the work. Public technical writing, when done with even modest seriousness, is a way to access the expertise of people you have no other way to reach. The framings that are now load-bearing in my architectural thinking didn’t come from a paid consultant or a senior engineer at my company or a paper I read; they came from comments and a published post by someone running production infrastructure I didn’t know existed, who took the time to write thoughtful paragraphs into comment boxes because the posts raised questions worth his time. That doesn’t happen if the post is hedged, defensive, or selling something. It happens when the post is honest about what the writer doesn’t know and explicit about where the questions are. The architectural reasoning in this post is meaningfully better than what I would have arrived at without the exchange, and the cost of accessing that improvement was approximately the time I spent writing the original post. This kind of exchange isn’t typical — most public technical writing doesn’t surface contributors of this caliber, and I don’t want to overclaim that the platform reliably produces it — but when it does happen, it is one of the most asymmetric returns on time available to an independent builder.&lt;br&gt;
If you’re a builder reading this and wondering whether writing publicly about your half-formed technical decisions is worth the time, this exchange is one data point in favor. The bar is honesty, not polish. The reward, when it lands, is people you didn’t know you needed, finding the work and helping you make it better. Being upfront about where you’re new and where you’re still deciding is part of what makes the exchange productive. Performing more certainty than you have closes off the kind of correction that this post benefited from most.&lt;br&gt;
The next post in this series will probably be whatever direction I do eventually pick, written up after I’ve made enough of the decision to have something concrete to report. Until then, the architecture is sitting here, available for application, and the comments are open if any of the reasoning is wrong.&lt;/p&gt;

&lt;p&gt;— &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt; 🦆 / quackbuilds.com&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>web3</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The photo graveyard — why your camera roll is full of things you'll never look at again</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Wed, 06 May 2026 17:39:07 +0000</pubDate>
      <link>https://dev.to/itsevilduck/the-photo-graveyard-why-your-camera-roll-is-full-of-things-youll-never-look-at-again-5e7b</link>
      <guid>https://dev.to/itsevilduck/the-photo-graveyard-why-your-camera-roll-is-full-of-things-youll-never-look-at-again-5e7b</guid>
      <description>&lt;p&gt;
You take a photo of a parking spot so you can find your car later. You take a photo of a wine label you liked. You take a photo of a whiteboard at a conference, a receipt for an expense you’ll submit, a book recommendation a friend wrote on a napkin, the tag inside a sweater because you might want to buy another one in a different color, the back of a Wi-Fi router because the password is on it and you’re going to need it again in this house in a month. You take these photos at a rate of maybe ten or twelve a week. You almost never look at any of them again. They are now, statistically, the majority of your camera roll, and they will remain there until you die or change phones.&lt;br&gt;
This is the friction this post is about. I’m going to argue it’s worth taking seriously, that the existing solutions are all wrong in instructive ways, and that the reason nobody has solved it is interesting enough on its own to be worth a post even if the friction itself turns out not to be a product.&lt;br&gt;
The thing that’s distinctive about photo-graveyard photos is that they’re not memories. The camera, as a device, was originally designed for memory — birthdays, weddings, vacations, your kid’s face at three. The shape of every photo product, from the album to iCloud Memories to the share-with-grandma flow, is built around that assumption. Photos are precious. Photos accrue emotional value. Photos are the artifact you want to preserve and return to. Software that touches photos has been built, almost without exception, on top of this premise.&lt;br&gt;
But the photos I described in the opening paragraph aren’t precious. They have no emotional value. They were captured for retrieval, not for preservation, and the moment of retrieval almost always fails to come — either because you forgot the photo existed, or because by the time you remembered it you couldn’t find it among the thousands of others, or because the act of searching for it took longer than just solving the problem a different way (re-typing the Wi-Fi password by squinting at the router, asking the friend for the recommendation again, walking around the parking garage hoping). The photo-as-retrieval-object is a fundamentally different artifact than the photo-as-memory, and almost no software treats it differently.&lt;br&gt;
The existing solutions, when you look closely, are all variations on better search. Apple Photos lets you search by content — “wine,” “receipt,” “whiteboard” — and it works astonishingly well as raw text recognition and image classification. Google Photos does the same, sometimes better. The problem is that better search assumes the user remembers they took the photo and is actively trying to find it. The actual failure mode is that the user has forgotten the photo exists, and search doesn’t help with forgetting. You can’t search for something you don’t remember having.&lt;br&gt;
A few apps have tried to attack this from a different angle. Apple’s Live Text lets you tap into a photo and pull text out of it as if it were a document, which is genuinely useful for receipts and Wi-Fi passwords if you remember to look in the photo. Notion-style scrap apps like Captur, Mem, or various others let you take a photo and immediately tag it with intent (“car parked,” “wine to remember”) which is the right idea but requires the user to do the tagging work in the moment, which is exactly the moment the user is least willing to do work — they’re already in a hurry, that’s why they’re taking a photo instead of writing a note. Reminders-with-photo is a category Apple has been quietly building toward, where you take a photo and the system offers to set a reminder around it, which is closer but still requires deliberate intent at capture time.&lt;br&gt;
The deeper problem, and the part that makes this friction philosophically interesting rather than just annoying, is that the retrieval prompt is the missing piece. You don’t need better photo search; you need the photo to find you at the moment you need it. The parking-spot photo should resurface when you walk back into the parking garage, not when you remember to search for “parking.” The wine label should resurface when you’re standing in a wine shop, not when you’re trying to remember which wine you liked three months ago. The Wi-Fi password should resurface when your phone connects to a new network in that house. None of these surfacings require fancy AI — they require spatial and temporal context awareness that the device already has and chooses not to use for this purpose.&lt;br&gt;
Which raises the question of why nobody has shipped this. The technical pieces are all available. iOS exposes geofencing, location triggers, time-of-day awareness, beacon detection, and on-device image classification. Android exposes more. The hardware is willing. The problem, I think, is that the category is awkwardly positioned: it’s too small for Apple or Google to prioritize as a major feature, but too platform-dependent for an indie developer to build well, because the most important capabilities (deep integration with the camera roll, system-level location triggers, lock-screen surfacing) are restricted to the platform vendors. An indie developer can build most of this, but the version they can build is meaningfully worse than the version Apple could build in a weekend, which Apple shows no sign of doing.&lt;br&gt;
So is it a product? My honest answer, after thinking about it more carefully than I expected to when I started writing this post, is probably not in the form most builders would attempt it. A standalone “photo memory” app would face the cold-start problem that ruins all secondary photo apps — users won’t switch their primary camera roll, which means your app only sees photos they deliberately route to it, which means the retrieval problem you’re solving is a fraction of the real problem. The version that would work is a feature inside a primary system the user already lives in — a camera roll, a notes app, a maps app, a wallet — where the retrieval prompt becomes a natural extension of an existing surface. That’s a feature, not a product, and it’s a feature most likely to be built by Apple or Google when they decide to.&lt;br&gt;
The Tiny Frictions point of view, which I’m working out as I write this series, is that not every friction is worth being a product. Some frictions are worth naming clearly so that the platforms eventually solve them, and some frictions are worth solving in a deeply embedded way inside an adjacent product, and some are worth living with because the cost of solving them exceeds the cost of tolerating them. The photo-graveyard friction is, I think, in the second category — it’s a feature inside a product I haven’t built, and the product I haven’t built is something like a contextual scratchpad that lives in the small gap between the camera, the notes app, and the reminders app. Whether that’s a thing I or anyone else should build is a question for a different post.&lt;br&gt;
What I want from you in the comments is the friction in your own life that has this same shape — the thing you do constantly, the thing you’ve worked around with a half-solution, the thing nobody seems to have solved properly. The next post in this series will probably be one of yours.&lt;/p&gt;

&lt;p&gt;quackbuilds.com — &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt; 🦆&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>productivity</category>
      <category>design</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Two months into Sats Channel, and the hardest part of building hasn't been the build</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Wed, 06 May 2026 01:34:10 +0000</pubDate>
      <link>https://dev.to/itsevilduck/two-months-into-sats-channel-and-the-hardest-part-of-building-hasnt-been-the-build-3e05</link>
      <guid>https://dev.to/itsevilduck/two-months-into-sats-channel-and-the-hardest-part-of-building-hasnt-been-the-build-3e05</guid>
      <description>&lt;p&gt;This is the post I was hoping I wouldn’t have to write yet, because the version of it I imagined writing was supposed to come after I’d figured something out. I have not figured something out. I’m writing it anyway, because the previous post in this series ended with a promise to write the failure-mode counterpart — what doesn’t work when you build around micropayments, what assumptions about user behavior turn out to be wrong — and the most honest answer to that question, two months into shipping Sats Channel, is that the assumptions about users aren’t the ones that are breaking. The assumption about getting users in the first place is.&lt;br&gt;
The thesis, stated upfront so you can decide whether to keep reading: the belief that quietly shapes most independent product work — if it’s good enough, distribution will follow — is probably the single most expensive false belief in indie software. It’s expensive because it’s almost-true. There are real cases where it works, mostly cases where the product is solving an acute pain in a community that already has a place to talk to itself, and those cases get cited often enough that they form the lore of independent building. The lore is wrong on the median case, and watch-to-earn is one of the categories where the wrongness becomes mathematically inescapable. A product that pays its users from ad revenue does not function without users. The architecture I wrote up in the last post — the Lightning rails, the scraper, the active-attention checks, the 100-sat threshold — all of it works. None of it matters in the absence of traffic. I built the engine and I’m now realizing how much less I built of the on-ramp.&lt;br&gt;
Here’s what I’ve actually tried so far, in roughly the order I tried them, with as much honesty as I can fit in a paragraph. Twitter/X was the obvious first move — short, frequent posts, threads when something interesting shipped, replies into adjacent conversations. The thing nobody tells you about Twitter as a distribution channel for indie work is that the algorithm is calibrated for engagement, not interest, and a thoughtful post about your product gets fewer impressions than a snarky reply to a viral account. I’m not great at being snarky on demand. Farcaster has been more receptive in tone — the community is smaller and skewed toward exactly the audience I want, technical people who are crypto-literate and interested in build-in-public — but smaller-and-skewed cuts both ways, and posts that should land well sometimes get fifteen impressions and a single thoughtful reply, which is great for conversation and bad for traffic. Friends and family I include because it’s honest, not because it’s a strategy; you can lean on this exactly once before it stops being available, and it generates the wrong kind of traffic anyway — people who want to support you, not people who want the product. Dev.to is what you’re reading right now, and I’ll come back to whether it’s working at the end of the post, because the answer is interesting.&lt;br&gt;
The naive belief I walked in with, the one I’m now starting to identify as the actual root cause, was a stack of three smaller wrong assumptions that I held simultaneously without noticing how much weight they were each carrying. The first was that building in public would generate its own audience — that the act of being visibly thoughtful about the work would draw people to the work itself. The second was that the Bitcoin angle would carry the product to the Bitcoin community automatically — that the population of people who care about Lightning and watch-to-earn would, by virtue of caring, find me. The third was that the novelty of the inversion would do the marketing for me — that a product that pays users instead of charging them was strange enough to be self-propagating. All three of these have a kernel of truth, which is what makes them so dangerous. Building in public does generate audience, slowly, after you have an audience to build in front of. The Bitcoin community does find new products, eventually, after they’ve been validated by other people first. Novelty does compound, in environments where novelty is already being amplified. The kernel-of-truth is the trap, because these beliefs aren’t wrong, they’re insufficient, and insufficient beliefs are harder to disprove than wrong ones — every time the strategy underperforms, you can convince yourself you just need more time, more posts, more patience. Two months in, I can already feel the gravitational pull of that reasoning, and I’m writing this post partly to break it before another two months pass with the same shape.&lt;br&gt;
What this is teaching me, and the part of the post I’m hoping might be useful to someone else at the same stage, is that distribution is not a separate phase of product work that follows building. It’s a parallel track that has to start at the same time and that needs the same architectural seriousness as the codebase. My mistake — the mistake I’m watching myself make in real time — was treating the build as the hard part and assuming distribution was a smaller problem I’d solve later with effort. Distribution is, for most independent builders, the bigger problem, and the right time to start solving it was approximately a year before shipping the first app. Audience is a multi-month-to-multi-year lagging asset. By the time you need it, it’s already too late to start building it. The builders I see succeeding in this category are almost always builders who spent two or three years writing publicly, posting consistently, building relationships in specific communities — not because they were strategizing about future products, but because they happened to have an audience already when they shipped. The audience-first builders make it look like the product was the lever. The product-first builders, like me, are discovering that the lever was the audience the whole time.&lt;br&gt;
The honest answer to “what’s working” is that the early signal is mixed. Dev.to has been the most receptive of the channels I’ve tried, which I think is because the posts in this series are doing real teaching rather than pitching, and dev.to’s culture rewards that specifically. That’s part of why this post exists — the platform that’s been most generous with attention deserves the most honest post I can write, not the most optimistic one. But “most receptive of the channels I’ve tried” is a low bar when none of the channels are converting at scale, and I’d be lying if I said the dev.to posts have moved Sats Channel’s metrics meaningfully. They’ve moved my thinking meaningfully, which is the thing I came here for and the thing I’d recommend other indie builders use writing for, but the conversion from reader to user is brutally low and I don’t have a clever solution to share. If you’re reading this and you’ve solved this problem, the comments are open and I will read every reply.&lt;br&gt;
What I’m trying next, which I’m flagging because the next post in this series will probably be a follow-up to this one with whatever I learn: I’m starting to think the answer isn’t more marketing channels but fewer, deeper ones. I’ve been spreading thin because spreading thin felt safe — every channel was a hedge against the others — but the result is that I’m a marginal voice on five platforms instead of a meaningful voice on one. The next experiment is to pick a single community and become a real participant in it, contributing without an agenda for long enough that when I do mention what I’m building, the people hearing it know me already. Whether that works, I don’t yet know. I’ll write that post when I have an answer.&lt;br&gt;
Closing thought, because every post in this series has had one. The architecture posts I wrote earlier — the Oracle ARM tier, the Base payments rail, the Lightning revenue loop — were easy to write because the technical decisions had clear right answers and I could explain why I made them. The post you just read is harder because there isn’t a right answer yet and I’m in the middle of looking for it. If you’re a builder who’s further along this curve than I am, I’d genuinely value your perspective in the comments. If you’re a builder who’s at the same stage, this is the post I wish someone had written for me eight weeks ago, and I hope it saves you a few of the weeks I’ve already spent. Either way, the catalog is at quackbuilds.com and Sats Channel specifically is at quackbuilds.com/thesatschannel — and yes, I’m aware that the post about distribution failure is itself an attempt at distribution. The recursion is the joke and also the entire problem.&lt;br&gt;
— &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt; / quackbuilds.com&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>discuss</category>
      <category>career</category>
    </item>
    <item>
      <title>The payments inversion — building a Lightning revenue-share loop where the app pays the user</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Tue, 28 Apr 2026 13:08:44 +0000</pubDate>
      <link>https://dev.to/itsevilduck/the-payments-inversion-building-a-lightning-revenue-share-loop-where-the-app-pays-the-user-50ln</link>
      <guid>https://dev.to/itsevilduck/the-payments-inversion-building-a-lightning-revenue-share-loop-where-the-app-pays-the-user-50ln</guid>
      <description>&lt;p&gt;In the last post I argued that Stripe — and the card network underneath it — can’t serve certain payment shapes, no matter how good the developer experience around it gets. Sub-dollar payments don’t work. Agent-to-agent payments don’t work. Today’s post is about a third shape that breaks on the same rails, and the most interesting one to me, because it’s an inversion of how almost every app on the web handles money. Most apps take money from users. The one I want to walk through pays users instead. The architecture that makes that possible is small enough to fit in one post, but the economic implications are bigger than I expected when I started building it.&lt;br&gt;
The app is Sats Channel, currently shipping on QuackBuilds. The premise is straightforward: you watch classic movies and television, and you earn Bitcoin (denominated in sats, the smallest unit of BTC) for the time you spend watching. There are no accounts in the traditional sense — you connect a Lightning wallet and a session starts accruing sats against your address as the video plays. The ad network running against the video pays in Bitcoin, the platform takes a cut to stay alive, and the rest is split back to viewers in proportion to attention. The whole loop is closed in BTC, which turns out to be the part that makes the math work.&lt;br&gt;
Walk through why traditional rails can’t host this. A user watching a forty-minute episode might generate, very roughly, somewhere between a few cents and a few dimes of ad revenue depending on fill rate and CPM. Their share of that, after the platform’s cut, is fractions of a cent per minute. To pay that out via Stripe Connect or PayPal, you’d hit minimum withdrawal thresholds (usually a few dollars), per-transaction fees (often a fixed component plus a percentage), and KYC overhead per recipient that vaporizes the unit economics before the first payout clears. The model only works if the cost of paying a user is meaningfully smaller than the amount you’re paying them. On card rails, for sub-dollar payouts, it isn’t. The bundling tax I mentioned in the last post shows up here in its purest form — every “watch and earn” product on the legacy web ends up being a points system that converts to gift cards at $5 increments, because that’s the smallest payout the rails will tolerate.&lt;br&gt;
Lightning collapses that constraint. A Lightning payment — assuming a healthy channel — costs effectively nothing to send and settles in milliseconds. Paying a user one sat is economically rational. Paying them ten thousand sats is the same cost. The accrual model can therefore mirror reality: as a viewer watches, a counter ticks upward in their balance, and they can claim that balance to their wallet by submitting a Lightning address at any time, as long as the 100-sat minimum is met. The counter only ticks while the viewer is actively watching — the player checks for engagement signals and pauses accrual if it looks like the tab is idle or the user has walked away. That’s not friction for legitimate viewers, it’s the part of the design that keeps ad revenue flowing to the people actually delivering attention rather than to a script in a hidden tab. Sats Channel sets the minimum withdrawal at 100 sats — reachable in a few days of regular viewing at current fill rates, and lower as the ad layer matures. The threshold lives in my dashboard rather than in a smart contract, so it can move down as the per-minute earn rate moves up.&lt;br&gt;
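For readers who want the accrual mechanics in code form, here is a minimal sketch of the shape described above. The names and the per-minute rate are made up for illustration; the real formula, the engagement checks, and the threshold all live server-side and move with the ad layer.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sats accrue only while the viewer is actively watching, and the balance
// becomes claimable once it clears the minimum withdrawal threshold.
const MIN_WITHDRAWAL_SATS = 100n;   // lives in a dashboard, not a smart contract
const EARN_RATE_SATS_PER_MIN = 3n;  // hypothetical rate, purely illustrative

type Session = { viewerId: string; balanceSats: bigint; lastActiveAt: number };

// Called for each viewing event the player emits (for example, once per minute of playback).
function recordAttention(session: Session, activeSignal: boolean, now: number) {
  if (!activeSignal) return session;          // idle tab or walked away: no accrual
  session.balanceSats += EARN_RATE_SATS_PER_MIN;
  session.lastActiveAt = now;
  return session;
}

// Called when the viewer submits a Lightning address to claim.
function canClaim(session: Session) {
  return session.balanceSats >= MIN_WITHDRAWAL_SATS;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;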
The technical stack is leaner than people expect. The frontend is a video player wrapped around an attention-tracking module that emits viewing events to the backend. The backend maintains a multi-session sat balance, applies the revenue-share formula, and just requires a Lightning address to claim. The ad layer uses A-ADS, which is convenient for two reasons — it pays out directly in BTC, so there’s no fiat-to-crypto conversion to manage, and it accepts inventory from independent sites without the gatekeeping of larger ad networks. The Lightning side is straightforward to integrate via any of the standard service layers (LNbits, Alby’s API, BTCPay Server depending on how custodial you want to be). The whole thing is small enough that the interesting parts are economic and design choices, not infrastructure.&lt;br&gt;
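To show what the claim path looks like in practice, here is a hedged sketch of the payout leg: a Lightning address resolves to an invoice via the LNURL-pay flow, and LNbits, one of the service layers mentioned above, pays it. The endpoint paths follow the published LNURL-pay and LNbits conventions as I understand them, and the environment variable names are illustrative; treat this as a starting point rather than the exact Sats Channel code.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Resolve a Lightning address (name@domain) to a BOLT11 invoice via LNURL-pay,
// then pay it from the platform's LNbits wallet. LNURL amounts are in millisats.
async function payToLightningAddress(address: string, sats: bigint) {
  const [name, domain] = address.split("@");
  const msats = sats * 1000n;

  // Step 1: fetch the LNURL-pay metadata published by the receiver's wallet provider.
  const metaRes = await fetch("https://" + domain + "/.well-known/lnurlp/" + name);
  const meta = await metaRes.json(); // { callback, minSendable, maxSendable, ... }

  // Step 2: ask the callback for an invoice of the exact amount.
  const callbackUrl = new URL(meta.callback);
  callbackUrl.searchParams.set("amount", msats.toString());
  const { pr } = await (await fetch(callbackUrl)).json(); // pr is the BOLT11 invoice

  // Step 3: pay the invoice from the platform's LNbits wallet.
  // LNBITS_URL and LNBITS_ADMIN_KEY are illustrative env var names for your own instance.
  const payRes = await fetch(process.env.LNBITS_URL + "/api/v1/payments", {
    method: "POST",
    headers: { "X-Api-Key": process.env.LNBITS_ADMIN_KEY!, "Content-Type": "application/json" },
    body: JSON.stringify({ out: true, bolt11: pr }),
  });
  return payRes.json(); // contains the payment hash on success
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;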
The other piece I want to walk through, because it’s a class of problem most “watch-to-earn” experiments quietly punt on, is content supply. A platform that pays for attention is only as good as its catalog, and a static catalog gets stale fast — the same hundred public-domain titles in week one are the same hundred in week ten, and viewer retention drops accordingly. The version of this product that just seemed to work in early testing was the one with a fresh playlist every time a user came back. The version that actually works at scale needs that freshness to be automated, because hand-curating a daily catalog is exactly the kind of work that becomes the single point of failure for the entire app.&lt;br&gt;
Sats Channel solves this with a scraper that runs once every twenty-four hours, pulls newly available public-domain titles from the sources I’m aggregating, and populates the daily playlists automatically. The catalog grows every day without me touching it. Architecturally, the scraper itself lives on the Oracle ARM tier I covered in post two of this series — it’s exactly the kind of long-running, scheduled job that doesn’t fit serverless and that the free-tier compute layer was designed to absorb. The scraper writes new titles into a Supabase table, the frontend reads from that table to construct the day’s playlist, and a small validation pass checks each new title for playable streams before it goes live. The end-user experience is that the catalog is never empty, never identical to yesterday’s, and never bottlenecked on me being awake to curate it. The architectural experience is that one cron job replaces what would otherwise be a daily content operations problem, and the cost of running that cron is zero.&lt;br&gt;
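The write path is small enough to show. Here is a hedged sketch with supabase-js; the table and column names are hypothetical stand-ins, since I haven’t published the real schema, and the validation pass is reduced to a simple reachability check.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { createClient } from "@supabase/supabase-js";

// Runs once a day on the Oracle ARM box, as a cron job or a Coolify scheduled service.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

type ScrapedTitle = { slug: string; name: string; year: number; streamUrl: string };

// `titles` is whatever the source-specific scrapers returned for today.
async function publishDailyTitles(titles: ScrapedTitle[]) {
  // Validation pass: only titles with a reachable stream go live.
  const playable: ScrapedTitle[] = [];
  for (const t of titles) {
    const head = await fetch(t.streamUrl, { method: "HEAD" });
    if (head.ok) playable.push(t);
  }

  // Upsert keeps the job idempotent; the frontend reads this table to build
  // the day's playlist, so the catalog grows without manual curation.
  const { error } = await supabase.from("titles").upsert(
    playable.map((t) => ({
      slug: t.slug,
      name: t.name,
      year: t.year,
      source_url: t.streamUrl,
      added_on: new Date().toISOString().slice(0, 10),
    })),
    { onConflict: "slug" },
  );
  if (error) throw error;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;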
A few decisions worth surfacing because they shape the product more than they look like they should. Choosing classic content — public-domain or cheaply licensed older film and television — does three things at once. It eliminates licensing costs, which would otherwise eat the entire revenue share before any user got paid. It selects for longer-form viewing sessions, which generate more ad impressions per session than short-form content. And it attracts an audience demographic that overlaps unusually well with the early Bitcoin-curious crowd, which matters for the chicken-and-egg problem of getting the first thousand users to a platform that requires a Lightning wallet. Pricing the platform’s cut is a more delicate decision than I initially treated it as — too generous to users and the platform can’t sustain its own infrastructure, too greedy and the value proposition collapses. Making withdrawals user-initiated rather than scheduled batches keeps the trust model simple: at any moment, the user can prove the platform pays by clicking withdraw, and that proof itself becomes the strongest piece of marketing the product has.&lt;br&gt;
There’s a small mechanic I built in for the early phase, which I’m flagging here because it’s the kind of thing dev.to readers tend to ask about and because it’s open to the first ten people who actually want it. Refer five friends who connect a Lightning wallet and you receive a Founder’s Badge — an original NFT showing on-chain proof you were here first. There are exactly ten of them, all ten slots are still open as of writing, and I’m reserving the option to attach perks to them later if the platform reaches the kind of scale that makes perks meaningful. I’m deliberately not over-promising what those perks will be, because I’d rather under-promise and earn the credibility back later than dangle features I haven’t built. The badge itself is the artifact; the optionality is the second-order value. The referral link surfaces in your dashboard once you connect a wallet, and progress is tracked there. If you’re the kind of person who likes being first onto something while the product is still being shaped, that’s the door.&lt;br&gt;
Honest tradeoffs, because every post in this series has had a section like this and I’d be embarrassed to skip it now. Ad fill rates on small, early-stage sites are genuinely brutal, and the gap between the sats a user theoretically could earn and what they actually earn in early traffic is real. Ad-blocker penetration in the demographic this product attracts is also higher than baseline, which directly suppresses revenue. I’m currently iterating on placement configuration for a few of the secondary ad units, while the primary inventory around the player runs cleanly. And the product is in the part of its lifecycle where the economic model is sound on paper but needs sustained traffic to actually flex — the kind of bootstrapping problem that every revenue-share marketplace faces in its first months and that no architecture choice can fully shortcut.&lt;br&gt;
What I keep coming back to, and the reason this build is the one I wanted to write up next, is that the inversion is real. An app that pays its users isn’t a marketing gimmick or a loss-leader funded by VC subsidy — it’s an architectural possibility that opens up the moment the payment layer can settle in fractions of a cent. There are probably dozens of products in this shape waiting to be built, and watch-to-earn is just the most obvious one. Read-to-earn, listen-to-earn, contribute-to-earn, train-a-model-to-earn — every one of these is a candidate the moment the payout side becomes economically viable, and the payout side became economically viable approximately the moment Lightning matured. Sats Channel is my attempt to build the most-legible version of the pattern. If it works, the more interesting versions come next.&lt;br&gt;
You can try it at quackbuilds.com/thesatschannel. Bring a Lightning wallet (Alby works well in the browser, Phoenix or Wallet of Satoshi work fine on mobile) and pick something to watch. If you find a bug, the comments here are the fastest way to get my attention — and if you’ve thought about the same payments inversion in a different context, I’d be especially interested to hear what shape your version took.&lt;br&gt;
Next post in the series is probably the failure mode counterpart to this one — what doesn’t work when you try to design products around micropayments, and the assumptions about user behavior that turn out to be wrong. Suggestions welcome on which assumptions to attack.&lt;br&gt;
— &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt; / quackbuilds.com&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>bitcoin</category>
      <category>lightning</category>
    </item>
    <item>
      <title>The payments layer Stripe can't build — using Base for sub-dollar and agent-to-agent flows</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Sat, 25 Apr 2026 13:48:30 +0000</pubDate>
      <link>https://dev.to/itsevilduck/the-payments-layer-stripe-cant-build-using-base-for-sub-dollar-and-agent-to-agent-flows-3ogc</link>
      <guid>https://dev.to/itsevilduck/the-payments-layer-stripe-cant-build-using-base-for-sub-dollar-and-agent-to-agent-flows-3ogc</guid>
      <description>&lt;p&gt;This is the third post in the QuackBuilds architecture series. The first covered the catalog and the autonomic generation layer at a high level. The second walked through the long-running compute tier — Oracle ARM and Coolify — that handles what serverless can’t. This one is about the payments layer, which is the piece that makes the whole economic model work, and the piece I expect the most pushback on. So let me start by trying to defuse the pushback, because I think the technical case is stronger than the cultural one.&lt;br&gt;
Most developers reading this have integrated Stripe at some point, and Stripe is genuinely excellent at what it does. If you’re charging twenty dollars a month for a SaaS subscription, or seventy dollars for a one-time purchase, or any of the standard shapes that dominate online commerce, Stripe is the right answer and you should not overthink it. The reason QuackBuilds doesn’t use Stripe is not that I dislike Stripe. It’s that the products I’m building live in two payment shapes that Stripe was never designed for, and that no traditional payment processor can serve well, because the limitation isn’t software — it’s the underlying card network economics.&lt;br&gt;
The first shape is sub-dollar payments. If you’ve ever tried to charge five cents for something on the open web, you already know this story. Stripe’s fee structure starts at thirty cents plus 2.9 percent per transaction, which means a ten-cent payment costs you just over thirty cents to process, three times the payment itself, and you lose money on every transaction. This is not a Stripe problem. It’s a Visa and Mastercard problem, baked into the card network’s settlement model decades before microtransactions were a use case anyone was thinking about. The workaround the SaaS world settled on is to bundle: charge ten dollars a month instead of charging per use, accept that some users will pay for capacity they don’t need, and live with the fact that genuinely small payments are infeasible. That bundling tax is invisible most of the time, but it determines the entire shape of what gets built. Tools that would naturally cost a few cents per use either don’t exist or get crammed into a subscription wrapper that distorts their economics.&lt;br&gt;
The second shape is agent-to-agent payments. This one is more speculative, but it’s where the architecture is heading and it’s the reason I made this choice rather than waiting. If part of QuackBuilds’ eventual design is autonomous agents that scaffold and ship apps, and those apps in turn might consume APIs, call models, hire other agents to do subtasks, then the payment layer needs to support a software process initiating a payment without a human in the loop. Stripe is built around the assumption that there is always a human approving a transaction — a card on file, a customer who agreed to a subscription, a user clicking a button. Agents don’t fit that mental model. They have budgets, not credit cards. They make decisions in milliseconds, not minutes. They need to pay other agents that may not have a Stripe account at all. The card network can’t serve this use case, and even if it could, the per-transaction fee structure would make most agent-to-agent calls economically irrational.&lt;br&gt;
Onchain payment rails solve both problems by accident, because they were designed under different assumptions in the first place. A USDC transfer on Base — Coinbase’s Ethereum L2 — costs a fraction of a cent in gas, settles in two seconds, and doesn’t care whether the sender is a human, a bot, an agent, or another contract. The address format is uniform across all of them. There is no signup flow, no merchant account, no underwriting, no chargeback risk, no minimum transaction size. A five-cent payment costs functionally the same as a five-thousand-dollar payment, which is the property that makes microtransactions economically possible in the first place. The fee shape is essentially flat where the card network’s fee shape is regressive against small payments.&lt;br&gt;
The integration on the developer side is less exotic than people expect. I’m using a combination of OnchainKit (Coinbase’s React component library for Base) and the underlying viem library for direct contract interaction. Adding “pay with USDC” to a frontend is genuinely a few components and a wallet connect — comparable in complexity to adding a Stripe checkout button, and arguably simpler because there’s no backend webhook handling or session management to worry about. The user signs a transaction in their wallet, the funds arrive at my treasury address two seconds later, and the frontend can confirm the payment by watching the chain. No backend strictly required, though I do run one for accounting and to trigger any post-payment workflows. For users who don’t already have a wallet, MoonPay and Coinbase’s onramp let them buy USDC with a card directly inside the flow, which softens the onboarding problem considerably from where it was even two years ago.&lt;br&gt;
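Here is a hedged sketch of the “watch the chain” half with viem. The USDC contract address below is the one I believe is native USDC on Base, and the treasury address is a placeholder; verify both independently before trusting a payment flow to them.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { createPublicClient, http, parseAbiItem, formatUnits } from "viem";
import { base } from "viem/chains";

// Assumed addresses -- verify both before relying on this in production.
const USDC_ON_BASE = "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"; // believed to be native USDC on Base
const TREASURY = "0x0000000000000000000000000000000000000000";     // placeholder: your treasury address

const client = createPublicClient({ chain: base, transport: http() });

// Watch for USDC Transfer events whose recipient is the treasury address.
// USDC has 6 decimals, so formatUnits(value, 6) gives a dollar-denominated string.
const unwatch = client.watchEvent({
  address: USDC_ON_BASE,
  event: parseAbiItem("event Transfer(address indexed from, address indexed to, uint256 value)"),
  args: { to: TREASURY },
  onLogs: (logs) => {
    for (const log of logs) {
      console.log("payment received:", formatUnits(log.args.value!, 6), "USDC from", log.args.from);
      // mark the order paid / trigger any post-payment workflow here
    }
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Calling unwatch() once the payment is confirmed, or when the checkout component unmounts, stops the filter from polling indefinitely.&lt;br&gt;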
The honest tradeoffs, because I keep promising not to write puff pieces. Crypto onboarding is still a real friction tax, and pretending otherwise is dishonest. A user who already has a wallet pays in two clicks. A user who doesn’t has to either install one or use a card-onramp, both of which are slower than a Stripe checkout for the user’s first payment. I think this friction is decreasing fast — Coinbase’s smart wallet flow, in particular, has gotten genuinely close to “sign up with email” — but it’s not zero today and it would be silly to claim otherwise. The other honest tradeoff is regulatory ambiguity, which varies by jurisdiction and changes constantly, and which I navigate by keeping the payment surface small and well-understood (USDC, a stablecoin, on a Coinbase-operated L2) rather than by getting clever. I’m not running a token sale. I’m accepting digital dollars on a fast network. That framing matters both legally and culturally.&lt;br&gt;
The architectural payoff is worth restating, because it’s the reason all of this is worth the integration cost. With onchain rails in place, every app in the QuackBuilds catalog has access to a payments layer that supports any transaction size from a fraction of a cent upward, supports payment from humans and agents alike, settles in seconds, and costs essentially nothing to operate. Combined with the zero-fixed-cost compute tier from the previous post, what emerges is a stack where shipping a new app, hosting it, and monetizing it can all happen at near-zero marginal cost. That cost shape is what makes the catalog model — many small tools, each potentially cheap or free — economically viable in a way it simply isn’t on a traditional SaaS stack. The technology is, in a real sense, downstream of the economics. I picked Base because the cost curve made the product I wanted to build possible. If Stripe had built a microtransaction product five years ago, I’d probably be using that instead.&lt;br&gt;
That closes out the architecture series for now. Three posts, three layers — the catalog and generation vision, the long-running compute tier, and the payments rail. The next thing I write will probably be from the other direction: not how it’s built, but how it gets used. What a single app in the catalog looks like end-to-end, why I chose to build it, what I learned shipping it. If there’s a specific app from the catalog you’d want that writeup to be about, drop it in the comments and I’ll pick the most-asked.&lt;br&gt;
— &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt; / quackbuilds.com&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>web3</category>
      <category>payments</category>
    </item>
    <item>
      <title>What serverless can't do — running long-lived services for $0 with Oracle Cloud and Coolify</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Fri, 24 Apr 2026 23:52:48 +0000</pubDate>
      <link>https://dev.to/itsevilduck/what-serverless-cant-do-running-long-lived-services-for-0-with-oracle-cloud-and-coolify-165f</link>
      <guid>https://dev.to/itsevilduck/what-serverless-cant-do-running-long-lived-services-for-0-with-oracle-cloud-and-coolify-165f</guid>
      <description>&lt;p&gt;In my last post about QuackBuilds, I mentioned almost in passing that the long-running half of my stack — schedulers, trading bots, AI pipelines that take longer than serverless function timeouts allow — runs on an Oracle Cloud ARM instance with Coolify on top. A few people asked about that part specifically, so this post is the deeper dive. It’s the corner of my stack that costs me literally nothing per month and quietly does the work that serverless can’t.&lt;br&gt;
The setup makes sense once you’ve hit the wall familiar to anyone who’s tried to build something that isn’t a CRUD app. Vercel functions time out. Cloudflare Workers have a CPU budget. Supabase Edge Functions are great until you need to keep a websocket open, run a polling loop, or hold model state in memory across requests. Once you cross into anything that resembles a daemon — anything that wakes up on a schedule, holds a connection, runs for minutes at a time, or maintains warm state — the serverless layer of your stack stops being able to help you, and you suddenly need a server. That gap is what Oracle Cloud’s Always Free tier fills, and what Coolify makes pleasant.&lt;br&gt;
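If the distinction feels abstract, the sketch below is the shape of process I mean. It is not one of my actual services, just an illustration: it wakes on an interval, keeps state in memory between runs, and never exits, none of which fits inside a function-timeout platform.&lt;br&gt;
&lt;pre&gt;&lt;code&gt;// The shape of a daemon-style workload: an interval loop with warm in-memory
// state, running until the process is stopped. The URL and payload shape are
// placeholders for whatever the service would actually watch.
const POLL_URL = "https://api.example.com/items";
const seen = new Set(); // state that survives between cycles

function sleep(ms: number) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

async function cycle() {
  const res = await fetch(POLL_URL);
  const items: { id: string }[] = await res.json();
  for (const item of items) {
    if (seen.has(item.id)) continue;
    seen.add(item.id);
    console.log(`new item ${item.id}, kicking off downstream work`);
  }
}

async function main() {
  // Run forever; if the process crashes, the platform restarts the container
  // (Coolify handles this) rather than re-invoking a function.
  for (;;) {
    try {
      await cycle();
    } catch (err) {
      console.error("cycle failed, retrying next tick", err);
    }
    await sleep(5 * 60 * 1000); // wake every five minutes
  }
}

main();
&lt;/code&gt;&lt;/pre&gt;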
The Always Free tier itself is, on paper, almost too generous to be real. Oracle gives you up to four ARM Ampere A1 cores and 24GB of RAM, split however you want across instances, with 200GB of block storage, all permanently free. Not free for a year. Not free until they email you. Free indefinitely as long as the account stays active. The hardware is genuinely capable — the A1 cores are modern Ampere ARM chips, not the wheezing shared CPUs you sometimes get from “free tier” providers — and 24GB of RAM is enough to comfortably run several services simultaneously, including ones that hold model weights or large in-memory caches. The catch, and there is a real catch, is that Oracle has a reputation for reclaiming idle Always Free instances if they need the capacity for paying customers. In practice I have not had this happen, and the workaround if it ever did is straightforward, but it’s worth knowing going in.&lt;br&gt;
On top of that bare instance, I run Coolify. If you haven’t come across it, the easiest description is “a self-hosted Heroku” — an open-source platform layer that gives you git-push-to-deploy, automatic Let’s Encrypt SSL, environment variable management, application logs, database provisioning, and a clean web UI that turns the Linux box into something you can actually manage at 11pm without remembering Docker incantations. Installing it is a one-line curl command. Once it’s up, deploying a new service is connecting a GitHub repo, picking a build pack, and clicking deploy. The DX is genuinely close to Heroku circa 2014, which is to say, the era when deploying a web app was actually fun.&lt;br&gt;
The combination — Oracle Always Free ARM plus Coolify — is what lets me run the stuff serverless refuses to host. My Polymarket trading bot, which runs on a four-hour cycle and needs to maintain state between cycles, lives there. A scheduler that polls APIs and triggers workflows lives there. An AI pipeline that processes video in the background lives there. A handful of small services that exist purely because I wanted them to exist live there. None of these would work as serverless functions, all of them work fine on a single ARM instance, and the combined cost of running all of them is exactly zero dollars a month. The variable cost only kicks in for the apps that actually scale on Vercel and the database rows that actually get written to Supabase, both of which are usage-priced. The fixed-cost half of the stack — the part that would normally be a $7-or-more droplet on every other infrastructure provider — is free.&lt;br&gt;
A few honest tradeoffs are worth naming, because I don’t want this to read like an ad. ARM compatibility is real but occasionally annoying. Most modern Docker images have ARM variants now, but you’ll occasionally hit a dependency that only ships x86 and have to find an alternative or build from source. Oracle’s web console is genuinely one of the worst enterprise UIs I’ve ever used, and the initial setup — getting the instance provisioned, networking configured, ports opened — is meaningfully harder than the equivalent flow on DigitalOcean or Hetzner. Once the instance exists and Coolify is running on it, you barely touch the Oracle console again, but the first hour is rough. Backups are your problem; Coolify can help, but you have to set it up. And the Always Free reclaim risk, while I haven’t personally experienced it, is real enough that you should treat the instance as cattle rather than pets — keep your configurations in git, your data in Supabase or somewhere you don’t host yourself, and make rebuilding the instance a thirty-minute exercise rather than a weekend project.&lt;br&gt;
The architectural pattern this enables, and the reason I set it up this way for QuackBuilds, is what I think of as a three-tier stack with the cost curve flipped. The frontend tier is Vercel, where I pay for what I use and the free tier covers small projects entirely. The data tier is Supabase, same model. The compute tier — the part that would traditionally be the most expensive — is Oracle ARM with Coolify, where the fixed cost is zero and only my time is on the meter. For a catalog of small apps, where any individual app might never make money but the catalog as a whole eventually will, this cost shape is the only one that actually works. I can ship a new tool, let it run for a year while it finds its audience, and pay nothing to keep it alive. That’s not possible on a stack where the compute layer charges you whether anyone is using your app or not.&lt;br&gt;
If you’ve been hitting the serverless wall and wondering where to put the things that don’t fit, this is what I’d suggest trying. The setup costs you an evening. The running cost is nothing. The ceiling is high enough that I haven’t come close to hitting it with several real services running concurrently, and if you do hit it, you’ve probably built something successful enough that paying for real infrastructure makes sense.&lt;br&gt;
Next post in the series will be the Base payments integration — how I’m using onchain rails to do what Stripe does, but for sub-dollar payments and agent-to-agent flows. If there’s a different layer of the stack you’d want me to dig into instead, drop it in the comments and I’ll write that one first.&lt;br&gt;
— &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt; / quackbuilds.com&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>I'm building a post-SaaS app catalog on Base, and here's what that actually means</title>
      <dc:creator>ItsEvilDuck</dc:creator>
      <pubDate>Fri, 24 Apr 2026 02:47:25 +0000</pubDate>
      <link>https://dev.to/itsevilduck/im-building-a-post-saas-app-catalog-on-base-and-heres-what-that-actually-means-5521</link>
      <guid>https://dev.to/itsevilduck/im-building-a-post-saas-app-catalog-on-base-and-heres-what-that-actually-means-5521</guid>
      <description>&lt;p&gt;For the last several months I’ve been quietly shipping web apps to a site called QuackBuilds. On the surface it looks like a simple catalog — ten or so small tools spanning health, finance, productivity, and AI, all running in the browser with nothing to install. Under the surface, it’s an experiment in what software distribution might look like when the marginal cost of producing an app approaches zero. I want to walk through both layers, because the surface is what you can use today and the underneath is what I think is actually interesting.&lt;br&gt;
The visible part is straightforward. Every app is a Next.js frontend deployed on Vercel, designed mobile-first because most of my own traffic (and probably yours) comes from a phone. Where an app needs persistence, it talks to Supabase. Where an app needs something long-running that can’t live on serverless — a scheduler, a trading bot, an AI pipeline that takes ninety seconds to complete — I offload to an Oracle Cloud ARM instance running Coolify, which gives me a Heroku-like deploy experience on free-tier hardware. That infrastructure choice matters to the economic argument later, so it’s worth flagging now: my hosting cost for the long-running half of the system is essentially zero, and the serverless half scales linearly with actual usage rather than with provisioned capacity.&lt;br&gt;
The less visible part is the payments and identity layer. QuackBuilds is wired into Base, Coinbase’s Ethereum L2, which means any app that eventually needs monetization can settle in USDC without me building a Stripe integration, managing chargebacks, or gating features behind a subscription. If you’ve ever tried to monetize a small tool, you know the depressing shape of that problem: the payment infrastructure is often more complex than the app itself, and the per-transaction economics make anything under a few dollars infeasible. Onchain rails collapse that problem. A five-cent micropayment is not only possible, it’s routine. That matters for a catalog of small, disposable apps more than it matters for a traditional SaaS product, which is part of why the architecture leans this way.&lt;br&gt;
Here’s where it gets more ambitious, and I’ll flag upfront that parts of this are still being built rather than shipped. The goal I’m working toward is what I’ve been calling an autonomic generation layer. The idea is to treat app creation itself as a pipeline rather than as a human activity. A component I’ve named QuackRouter sits in front of multiple language models — Claude, GPT, Gemini, and others — and routes requests based on task type, cost, and availability, with automatic failover when one provider degrades. Feeding into that router is a build loop I’ve been calling the Hatchery Engine, which takes a specification, scaffolds an app, runs the output through a multi-agent review swarm for correctness and security, and ultimately deploys a candidate app into the catalog. An Agent-to-Device bridge extends the same logic to IoT and local hardware, so agents can act in the physical world and not just the browser.&lt;br&gt;
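To make the routing idea concrete, the sketch below shows the rough shape of such a layer. It is emphatically not QuackRouter's actual code: the fields, the cost-first ordering, and the health flag are invented purely for illustration.&lt;br&gt;
&lt;pre&gt;&lt;code&gt;// Illustrative model-routing sketch: pick a provider by task support and cost,
// fail over down the list when a call errors. Every name and heuristic here is
// invented for illustration; it is not the actual QuackRouter implementation.
type TaskType = "codegen" | "review" | "chat";

interface ModelProvider {
  name: string;
  supports: TaskType[];
  relativeCost: number; // rough cost ranking, not real pricing
  healthy: boolean;
  complete(prompt: string): Promise&lt;string&gt;;
}

async function route(providers: ModelProvider[], task: TaskType, prompt: string): Promise&lt;string&gt; {
  // Cheapest healthy provider that can handle the task goes first; the rest
  // form the failover chain.
  const candidates = providers
    .filter(function (p) { return p.healthy; })
    .filter(function (p) { return p.supports.includes(task); })
    .sort(function (a, b) { return a.relativeCost - b.relativeCost; });

  for (const provider of candidates) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      provider.healthy = false; // mark degraded and move to the next one
      console.warn(`${provider.name} failed, failing over`, err);
    }
  }
  throw new Error(`no healthy provider available for "${task}" tasks`);
}
&lt;/code&gt;&lt;/pre&gt;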
If that sounds like a lot, it is, and I want to be honest about where the line between built and aspirational currently sits. The routing layer exists and works. The build loop is partially wired up — it can scaffold, but the review and deploy steps are still manual in most cases. The swarm is in design. I’m hand-building the apps that currently populate the catalog, and I’m using that process to instrument what the automated version will eventually need to handle. The reason I’m comfortable talking about the ambitious piece publicly, despite it being incomplete, is that the architecture decisions being made right now — the choice of Base for agent-to-agent payments, the choice of Coolify for long-running agent processes, the choice to make every app stateless and browser-native — only make sense in the context of that larger design. If I were only shipping a catalog of small tools, I would have chosen a simpler stack.&lt;br&gt;
The design question I keep returning to, and the one I’d most like to discuss with other developers, is what a software catalog should actually look like when the cost of producing a new app drops low enough that apps become disposable. The SaaS model assumes each product is expensive to build and therefore needs to be monetized aggressively to recoup that cost. If that assumption no longer holds, a lot of downstream decisions — pricing, onboarding friction, account creation, feature gating, lock-in — start to look like artifacts of the old economics rather than genuine user needs. QuackBuilds is my attempt to design for the new economics from scratch rather than retrofit them onto the old model, and the catalog you can browse today is the early, hand-built seed of that system.&lt;br&gt;
If any of this resonates — browser-first app delivery, onchain payment rails for small tools, multi-agent code generation, or the design question about disposable software — I’d genuinely value your thoughts in the comments. The catalog is at quackbuilds.com/apps and you can find me on most platforms as &lt;a class="mentioned-user" href="https://dev.to/itsevilduck"&gt;@itsevilduck&lt;/a&gt;. Architecture writeups for each layer will land on the site as I stabilize them, and I’ll probably post follow-ups here on dev.to as each piece becomes real enough to demo. Tear it apart, poke at the assumptions, ask the uncomfortable questions. That’s the half of this I’m actually here for.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>ai</category>
      <category>web3</category>
    </item>
  </channel>
</rss>
