<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: RAXXO Studios</title>
    <description>The latest articles on DEV Community by RAXXO Studios (@raxxostudios).</description>
    <link>https://dev.to/raxxostudios</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3848289%2Ffd2912c9-5820-4993-8fdc-62ec1e778980.png</url>
      <title>DEV Community: RAXXO Studios</title>
      <link>https://dev.to/raxxostudios</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/raxxostudios"/>
    <language>en</language>
    <item>
      <title>PostHog Cloud EU to Self-Hosted: Cost Math and 4 Gotchas I Hit</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 08:44:49 +0000</pubDate>
      <link>https://dev.to/raxxostudios/posthog-cloud-eu-to-self-hosted-cost-math-and-4-gotchas-i-hit-3dii</link>
      <guid>https://dev.to/raxxostudios/posthog-cloud-eu-to-self-hosted-cost-math-and-4-gotchas-i-hit-3dii</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;PostHog Cloud EU bill crossed 180 EUR/month at ~3M events, so I migrated to a self-hosted Hetzner box for 42 EUR/month.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost crossover sits near 2M events/month for solo studios. Below that, Cloud EU is cheaper once you count your time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Four things bit me on first try: the Hobby tier 10k events/day cap, ClickHouse memory limits, the reverse proxy that needed to actually be persistent, and one outdated plugin.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After 30 days of self-hosted: 42 EUR infra, 0 EUR PostHog, ~3 hours/month maintenance. Worth it past 2M events.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PostHog Cloud EU sent me a 180 EUR invoice last month. I was already paying 42 EUR/month for a Hetzner box that sat 80% idle. The math wrote itself, so I migrated. Then four small mistakes turned a "weekend project" into a 6-day stretch where my analytics were partially broken. Here is what actually happened, the cost crossover I found, and the gotchas nobody warns you about.&lt;/p&gt;

&lt;h2&gt;When self-hosting PostHog actually saves money&lt;/h2&gt;

&lt;p&gt;PostHog Cloud EU pricing is generous up to 1M events/month, then it ramps. My event volume sat around 2.4M/month across raxxo.shop and three side projects. Cloud EU billed me 180 EUR. Self-hosted on a Hetzner CCX23 (4 vCPU, 16 GB RAM, 80 GB NVMe) costs me 42 EUR/month including a 5 EUR backup volume.&lt;/p&gt;

&lt;p&gt;The crossover point is not what PostHog's docs imply. Their calculator suggests self-hosting wins past 1M events. In practice, you need to count three things they leave out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Your time. The first migration weekend cost me 14 hours. After that, maintenance has been 3 hours/month.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The persistent reverse proxy you need (more on this below). That is another small box or a Caddy/Cloudflare Tunnel setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backups. The default ClickHouse backup story is "you figure it out." I run a nightly snapshot to Hetzner Storage Box at 4 EUR/month.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Add those up. For me at ~2.4M events: Cloud EU 180 EUR vs self-hosted 51 EUR all-in. Past 2M events/month, self-hosting wins clearly. Below 1M events, Cloud EU is cheaper once you value your weekends honestly. Between 1M and 2M is a wash for a solo studio.&lt;/p&gt;

&lt;p&gt;The crossover also depends on session replay. If you record sessions, your storage and compute jump fast. I kept session replay on Cloud EU for the first month after migration to test self-hosted performance under load before flipping that switch.&lt;/p&gt;

&lt;h2&gt;Gotcha 1: the Hobby tier silently caps at 10k events/day&lt;/h2&gt;

&lt;p&gt;PostHog ships two self-host install paths: the Hobby image (single Docker Compose file) and the production Helm chart. The Hobby docs say "great for personal projects." What they bury is the 10k events/day soft cap. Past that, ingestion still works, but background jobs start lagging and the dashboards go stale by hours.&lt;/p&gt;

&lt;p&gt;I hit this on day 3. The dashboards looked fine for the first two days because traffic was low while DNS propagated. Then a Hashnode syndication picked up a post and pushed 18k events through in an afternoon. Replay clips stopped appearing. Insights froze. PostHog logged nothing useful because the cap is enforced at the worker queue level, not at ingest.&lt;/p&gt;

&lt;p&gt;The fix is to skip Hobby entirely for anything past hobby traffic. Move to the Helm chart on a single-node K3s cluster, or pull the Hobby docker-compose.yml and bump the worker replicas + ClickHouse memory yourself. I did the second option because I did not want a Kubernetes layer for one app. Two extra workers and a memory bump fixed the lag inside an hour.&lt;/p&gt;
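&lt;p&gt;The hand-tuned route is a small compose override. A sketch, assuming the service names from the Hobby &lt;code&gt;docker-compose.yml&lt;/code&gt; (check yours; the replica count and memory limit are the values that worked on my box, not universal defaults):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  worker:
    # More ingestion workers so the queue keeps up past hobby traffic.
    deploy:
      replicas: 3
  clickhouse:
    # Hard ceiling so ClickHouse cannot starve Kafka and Redis.
    deploy:
      resources:
        limits:
          memory: 8G
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;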

&lt;p&gt;If you are reading this before migrating: pretend the Hobby tier does not exist. Start on Helm or hand-tuned compose from day one.&lt;/p&gt;

&lt;h2&gt;Gotcha 2: ClickHouse will eat all your RAM if you let it&lt;/h2&gt;

&lt;p&gt;ClickHouse is the engine PostHog uses for events. By default, the Hobby image gives ClickHouse no memory limit. On a 16 GB box running ClickHouse plus Postgres plus Kafka plus Redis plus PostHog itself, ClickHouse will happily consume 12 GB during a query and OOM-kill Kafka. When Kafka dies, ingestion silently drops events for the 4-7 minutes it takes to restart and replay.&lt;/p&gt;

&lt;p&gt;I lost a partial day of analytics figuring this out. The symptom is "some events show up, some do not, and there is no error anywhere." The cause is OOM-killed Kafka silently restarting and not catching up.&lt;/p&gt;

&lt;p&gt;The fix is two lines in the ClickHouse config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
0.5
4000000000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cap ClickHouse at 50% of host RAM and limit per-query memory to 4 GB. On a 16 GB box this leaves headroom for everything else. Query performance dropped maybe 10% on heavy funnels. I have not noticed it once in normal use.&lt;/p&gt;

&lt;p&gt;If you run a 32 GB box you can be more generous. On 16 GB, do not skip this step.&lt;/p&gt;

&lt;p&gt;One more detail: ClickHouse keeps merge operations running in the background. Those are memory-hungry too. Set &lt;code&gt;background_pool_size&lt;/code&gt; to 8 (down from the default 16) on small boxes. My CPU graph flattened out the day I changed that.&lt;/p&gt;
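&lt;p&gt;That merge-pool change is one more line in the same server config (the element name is the standard ClickHouse setting; verify the default against the version you run):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;background_pool_size&amp;gt;8&amp;lt;/background_pool_size&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;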

&lt;h2&gt;Gotcha 3: the reverse proxy actually has to be persistent&lt;/h2&gt;

&lt;p&gt;PostHog's docs talk about a reverse proxy "to avoid ad blockers." What they do not emphasize is that this proxy needs to be on a domain you own, on infrastructure separate from your app, and persistent across deploys. If you stuff the proxy into your Vercel-deployed Next.js app the way the quickstart shows, two things break:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Vercel cold starts add 200-800 ms to ingestion calls. PostHog's SDK retries on timeout, so you get duplicate events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your app deploys cycle the proxy URL. Cached SDK configs in user browsers point to the previous build's edge for a few minutes after each deploy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I fixed this by running Caddy on the same Hetzner box that hosts PostHog. The PostHog SDK in my apps points to events.raxxo.shop, which Caddy reverse-proxies to the local PostHog container. Zero cold starts, zero deploy ripples, one TLS cert renewed automatically.&lt;/p&gt;
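&lt;p&gt;The Caddy side is a few lines. A sketch of the relevant Caddyfile block (the upstream port is an assumption; point it at whatever port your PostHog web container exposes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;events.raxxo.shop {
    # Caddy obtains and renews the TLS cert automatically.
    reverse_proxy localhost:8000
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;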

&lt;p&gt;If you do not want a second domain, Cloudflare Workers also works as the proxy layer for free. The point is: do not put the proxy on Vercel or Netlify if your app is also there. Same-origin proxies on serverless platforms create more problems than they solve.&lt;/p&gt;

&lt;p&gt;A second wrinkle: PostHog's SDK config bakes the proxy URL into your build. If you change the proxy host later, you need a coordinated redeploy of every app pointing at it. Pick the domain once, write it down, and treat it like a permanent record. I keep mine in a single env var (&lt;code&gt;NEXT_PUBLIC_POSTHOG_HOST&lt;/code&gt;) shared across all four apps.&lt;/p&gt;
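&lt;p&gt;A minimal sketch of how that env var reaches the SDK (the fallback host mirrors my domain; &lt;code&gt;posthog.init&lt;/code&gt; with &lt;code&gt;api_host&lt;/code&gt; is the standard posthog-js call, but the helper name is my own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One resolver shared by every app, so the proxy host lives in exactly one place.
function posthogHost(env) {
  return env.NEXT_PUBLIC_POSTHOG_HOST || 'https://events.raxxo.shop';
}

// In each app's client entry:
// posthog.init(publicKey, { api_host: posthogHost(process.env) });
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;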

&lt;p&gt;For background on the cost-cutting infra logic, see &lt;a href="https://dev.to/blogs/lab/neon-database-branching-saved-me-200-eur-every-month"&gt;Neon Database Branching Saved Me 200 EUR Every Month&lt;/a&gt;, which covers a similar move from managed to right-sized.&lt;/p&gt;

&lt;h2&gt;Gotcha 4: one PostHog plugin had not been updated in 14 months&lt;/h2&gt;

&lt;p&gt;PostHog's plugin system is solid for the official plugins. The community plugins are a mixed bag. I was using a GeoIP enrichment plugin from the marketplace that worked fine on Cloud EU. On self-hosted PostHog 1.43, that plugin threw a silent error on every event, dropped the geoip field, and logged nothing visible to the dashboard.&lt;/p&gt;

&lt;p&gt;I only caught it because a funnel I built specifically segments German vs rest-of-EU traffic and the German segment went to zero overnight. The plugin's GitHub repo had not seen a commit in 14 months. Replacing it with the official MaxMind GeoIP plugin took 10 minutes once I knew where to look.&lt;/p&gt;

&lt;p&gt;Audit your plugins before migrating. Open every community plugin's repo, check the last commit date, and check the issue tracker for "1.40+" or "self-hosted" complaints. If a plugin has not been touched in 12 months, plan to replace it. Cloud EU silently swaps in working versions for you. Self-hosted does not.&lt;/p&gt;

&lt;p&gt;I covered the broader observability stack switch in &lt;a href="https://dev.to/blogs/lab/posthog-error-tracking-killed-my-sentry-bill"&gt;PostHog Error Tracking Killed My Sentry Bill&lt;/a&gt;, which lays out why I am consolidating on PostHog instead of running 3 separate tools.&lt;/p&gt;

&lt;h2&gt;Bottom line&lt;/h2&gt;

&lt;p&gt;Past 2M events/month, self-hosted PostHog on a Hetzner box saves real money. Below 1M events, Cloud EU is cheaper once you honestly count your time. The crossover sits around 1.5M-2M events for a solo studio.&lt;/p&gt;

&lt;p&gt;The four gotchas (Hobby tier cap, ClickHouse memory, persistent reverse proxy, plugin compatibility) cost me 6 days the first time. They are all 10-30 minute fixes if you know to look for them. After 30 days running self-hosted, my analytics setup is faster, costs 51 EUR/month all-in, and I have not lost a single event since the ClickHouse memory fix.&lt;/p&gt;

&lt;p&gt;If you want the broader picture of how I run a one-person studio's developer infrastructure, the &lt;a href="https://dev.to/pages/lab-overview"&gt;Lab Overview&lt;/a&gt; collects every infra and tooling article I have written, organized by topic. Start there if you are figuring out where to cut your own bill.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Anthropic's 2026-2027 Compute Map: 5 Deals, 10+ Gigawatts, One Solo-Dev Take</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 08:43:38 +0000</pubDate>
      <link>https://dev.to/raxxostudios/anthropics-2026-2027-compute-map-5-deals-10-gigawatts-one-solo-dev-take-1aae</link>
      <guid>https://dev.to/raxxostudios/anthropics-2026-2027-compute-map-5-deals-10-gigawatts-one-solo-dev-take-1aae</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Anthropic stacked five compute deals through May 2026: SpaceX Colossus 1, Amazon, Google plus Broadcom, Microsoft Azure, and Fluidstack&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Combined headline: 10+ gigawatts on paper, 300 MW actually online inside one month&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Week 1 changes nothing for a solo studio, but 12-month rate-limit economics get rebuilt&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Amazon and Google deals do not land until late 2026 and 2027, so the near-term headroom is mostly Colossus 1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pro, Max, Team, and Enterprise five-hour limits already doubled and peak-hour throttle is gone&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I keep a small text file with every Anthropic capacity deal I see. As of May 6, 2026 it has five entries. The numbers are a lot bigger than they were last quarter, the timelines vary wildly, and the press headlines are doing that thing where they pile up gigawatts and dollar signs in a way that is impressive and also useless if you are the one trying to plan the next twelve months of a one-person studio.&lt;/p&gt;

&lt;p&gt;So here is the map I drew for myself. Five deals, the actual capacity, when it lands, and what each one does (and does not) change for a solo dev paying for Pro or Max.&lt;/p&gt;

&lt;h2&gt;The five deals on the table&lt;/h2&gt;

&lt;p&gt;The full list of public Anthropic compute commitments through May 2026:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SpaceX Colossus 1: 300+ MW of new capacity, 220,000+ NVIDIA GPUs, online within a month of May 6, 2026&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon: up to 5 GW agreement, with nearly 1 GW expected by end of 2026&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google plus Broadcom: 5 GW agreement launching 2027&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microsoft plus NVIDIA: 30 billion dollar Azure capacity partnership&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fluidstack: 50 billion dollar American AI infrastructure investment&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Read the list once and the obvious thing is that Anthropic stopped picking sides. AWS, Azure, Google, SpaceX, and an independent (Fluidstack) are all in the stack. That is unusual. A year ago everyone was placing bets on which hyperscaler would lock up which lab. Anthropic went the other way and just signed with everyone.&lt;/p&gt;

&lt;p&gt;The other thing the list tells me, once I sit with it for a minute, is that "10+ gigawatts" is a 2027 number, not a 2026 number. Most of the headline capacity on this list has not been built yet. The only piece that is actually online inside the next four weeks is Colossus 1.&lt;/p&gt;

&lt;h2&gt;What is online vs what is on paper&lt;/h2&gt;

&lt;p&gt;I find it helpful to split the deals into two buckets: "lights are on" and "lights will be on eventually."&lt;/p&gt;

&lt;p&gt;Lights are on (or about to be):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Colossus 1: 300 MW, 220,000+ GPUs, live inside one month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lights will be on eventually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon: nearly 1 GW by end of 2026, the other 4 GW after that&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google plus Broadcom: 5 GW launching 2027&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microsoft plus NVIDIA: 30 billion dollar partnership, capacity rolling in over time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fluidstack: 50 billion dollar investment, multi-year buildout&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That second bucket is what gives you the dramatic gigawatt totals. It is also the bucket that will not help you next Tuesday at 3 PM Berlin when a refactor runs out of budget. The "lights are on" capacity is doing all the work for the next two quarters, and that capacity is just one entry on the list. I wrote up the practical day-to-day side of that in &lt;a href="https://dev.to/blogs/lab/anthropic-spacex-colossus-1-300mw-220k-gpus-and-doubled-claude-limits"&gt;Anthropic plus SpaceX Colossus 1: 300 MW, 220K GPUs, and Doubled Claude Limits&lt;/a&gt; the day it shipped.&lt;/p&gt;

&lt;p&gt;The Amazon nearly-1-GW-by-end-of-2026 line is the next milestone worth tracking. About eight months out. After that, the 2027 Google plus Broadcom slug is the one that genuinely changes the curve, because 5 GW arriving in a single year is a different shape than incremental hyperscaler growth.&lt;/p&gt;

&lt;h2&gt;Why a solo studio sees nothing in week 1&lt;/h2&gt;

&lt;p&gt;I want to be honest about the near-term effect on my actual workflow, because the gap between the press release and the desk experience is real.&lt;/p&gt;

&lt;p&gt;In week 1 (starting May 6, 2026) the only things I have actually noticed are the things Anthropic explicitly shipped to my plan: doubled five-hour rate limits on Pro, Max, Team, and Enterprise, the removed peak-hours throttle on Pro and Max, and a substantial Opus API limit increase. Everything else on the compute map is invisible from a desk in Berlin.&lt;/p&gt;

&lt;p&gt;The 220,000 GPUs at Colossus 1 do not give me a better model. They do not make Sonnet smarter. They do not make my prompts better. What they do is back the doubled limits with real silicon so the doubled limits do not silently degrade two weeks later when usage climbs to match them. That matters, but it is invisible matter. The press release tells you the deal exists. The desk tells you afternoon prompts now run as cleanly as morning prompts. Those two things connect, but only because the silicon shows up on time.&lt;/p&gt;

&lt;p&gt;The Amazon and Google deals do nothing for me in week 1 because their capacity does not exist yet. The Microsoft and Fluidstack money commitments do nothing for me in week 1 because money is not GPUs. The thing that changes my Tuesday is the 300 MW that is being switched on this month, full stop.&lt;/p&gt;

&lt;p&gt;If you do not pay Anthropic, the week 1 effect is even smaller. The free tier is not part of the doubled-limit announcement. None of the five deals targets free users. So if you are running on the free Claude tier, this map is something to bookmark for next year, not something that changes today.&lt;/p&gt;

&lt;h2&gt;What the 12-month picture actually looks like&lt;/h2&gt;

&lt;p&gt;This is where the map gets interesting. Stacking the deals on a calendar:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;May to June 2026: Colossus 1 comes online, doubled limits land, Opus API capacity expands. This is the only delta you feel right now.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mid to late 2026: Amazon's first chunk (about 1 GW) arrives. That is roughly three to four times the size of Colossus 1. If usage growth has eaten the Colossus headroom by then, this is what keeps the doubled limits from quietly tightening again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;2027: Google plus Broadcom 5 GW launches. Microsoft Azure capacity continues rolling in. Fluidstack capacity continues rolling in. The total fleet grows by a different order of magnitude than 2026.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The thing this implies, that I think gets underrated in the analyst takes, is that 2026 is a holding pattern and 2027 is the inflection. The doubled limits announced May 6 are realistic for the rest of 2026 because Colossus 1 plus the Amazon first slug can carry them. They might get more generous in 2027 once the 5 GW Google capacity lands, or they might stay flat while Anthropic pours that capacity into agent fleets, long-context workloads, and the multi-agent products they keep telegraphing.&lt;/p&gt;

&lt;p&gt;The other detail that quietly matters: Anthropic committed to covering consumer electricity price increases tied to US data centers. That is not a charity move. That is what you say when you know the buildout is going to push residential power costs up near campuses, and you want to keep the political cost of the buildout low so the buildout actually finishes. It is also a tell that they expect this stack to be politically scrutinized, which means schedule slippage on any single deal is a real risk for the 2027 numbers.&lt;/p&gt;

&lt;p&gt;For pricing context across the Claude tiers (which are the surface where you actually feel any of this), &lt;a href="https://dev.to/blogs/lab/claude-max-vs-pro-which-plan-is-actually-worth-it"&gt;Claude Max vs Pro: Which Plan is Actually Worth It?&lt;/a&gt; still holds up. The plan math has not changed, just the headroom inside each plan.&lt;/p&gt;

&lt;h2&gt;What I am actually doing about it&lt;/h2&gt;

&lt;p&gt;The honest answer for a solo studio is "nothing dramatic, but plan for 2027 to be different."&lt;/p&gt;

&lt;p&gt;Concrete moves on my side:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I am leaving my Max plan alone. The doubled limits are already paid for. No upgrade triggered by this map.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I am pushing more workloads to Opus that I had been gating to Sonnet only. The Opus API limit boost is the part of this announcement that has the most direct effect on agent workflows. If you build automations with Anthropic SDKs, &lt;a href="https://dev.to/blogs/lab/claude-api-pricing-explained-what-it-actually-costs-in-2026"&gt;Claude API Pricing Explained&lt;/a&gt; covers how to keep those calls cheap.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I am scheduling my heavy /audit and /poorreview runs throughout the day instead of saving them for early morning. The peak-hours throttle is gone and the doubled budget makes that practical now.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I keep &lt;a href="https://join.buffer.com/raxxo-studios" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt; on for social scheduling because that workload sits outside the Claude API, and the published-at scheduling fits how I batch content anyway.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am not changing my model picks. I am not changing my plan. I am not running speculative agent fleets on the assumption that 2027 limits will be more generous. The map is useful, but the only thing it actually unlocks today is "stop avoiding 3 PM Berlin."&lt;/p&gt;

&lt;p&gt;The thing I will be watching, more than the gigawatt totals, is the Amazon late-2026 milestone. That is the next public capacity drop. If Anthropic ships a second wave of rate-limit upgrades around the time that 1 GW lights up, the pattern is set: each new batch of capacity gets passed back to paying users as soon as it stabilizes. If they ship the 1 GW and limits stay flat, that tells you a different story (capacity is going to internal agent products, not customer headroom). Either way, the November-to-December 2026 window is when the next real signal arrives.&lt;/p&gt;

&lt;p&gt;If you want the running playbook for how I run a solo studio on Claude Code without burning the budget, the &lt;a href="https://dev.to/pages/claude-blueprint"&gt;Claude Blueprint&lt;/a&gt; covers the whole setup. And the day-it-happened post on the SpaceX deal lives at &lt;a href="https://dev.to/blogs/lab/anthropic-spacex-colossus-1-300mw-220k-gpus-and-doubled-claude-limits"&gt;Anthropic plus SpaceX Colossus 1&lt;/a&gt; if you want the practical Pro/Max angle without the full-map view.&lt;/p&gt;

&lt;h2&gt;Bottom line&lt;/h2&gt;

&lt;p&gt;Five deals. 10+ gigawatts on paper. 300 MW actually live inside one month. The compute map Anthropic announced through May 2026 reshapes the next twelve months of rate-limit economics, but only one entry on it changes your Tuesday. Treat the rest as scheduled rather than current.&lt;/p&gt;

&lt;p&gt;For solo dev tooling, the practical move is small: keep your existing plan, push more work to Opus where it matters, stop avoiding peak hours, and set a calendar reminder for the Amazon late-2026 milestone. That is the next moment the map actually moves.&lt;/p&gt;

&lt;p&gt;The cleaner mental model is "2026 is breathing room, 2027 is inflection." Plan accordingly. Do not pre-commit to spend or architecture choices that only pay off if the 5 GW slugs land on schedule. Do use the doubled limits today, because those are real and they are paid for.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Solo Studio Invoicing in 2026: Stripe Tax + DATEV Export + USt-VA via API</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 08:42:26 +0000</pubDate>
      <link>https://dev.to/raxxostudios/solo-studio-invoicing-in-2026-stripe-tax-datev-export-ust-va-via-api-3g3c</link>
      <guid>https://dev.to/raxxostudios/solo-studio-invoicing-in-2026-stripe-tax-datev-export-ust-va-via-api-3g3c</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Stripe Tax handles VAT calculation per customer location automatically across 40+ countries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 50-line Node script exports the monthly DATEV CSV my tax advisor actually wants&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;USt-VA submission via the ELSTER ERIC API works, but sevDesk is the lazy fallback&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Real numbers from a small Berlin studio: 18 to 30 invoices a month, 90 minutes total&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The invoicing stack I had in 2023 was a Google Sheet, a PDF generator I forgot the password to, and a folder called "VAT-stuff-final-FINAL". By 2026 I run a small Berlin one-person studio and most of that is automated. Not all of it. Just enough that I do not dread the 10th of every month, when the VAT pre-declaration is due.&lt;/p&gt;

&lt;p&gt;This is what the stack actually looks like, with the parts that work and the parts that still need a human.&lt;/p&gt;

&lt;h2&gt;Stripe Tax does the VAT math I used to do wrong&lt;/h2&gt;

&lt;p&gt;For years I calculated VAT manually based on where the customer was. EU B2B with valid VAT ID, reverse charge, no VAT on the invoice. EU B2C, charge their country's VAT rate. UK, charge UK VAT if registered there. Customer in Texas, no VAT but maybe sales tax. I got it wrong more than once.&lt;/p&gt;

&lt;p&gt;Stripe Tax fixed this for me in about an hour of setup. I enabled it in the dashboard, registered the German tax IDs and OSS (One Stop Shop) for EU sales, and added the country list where I have nexus. Stripe checks the customer's address and tax ID against VIES at checkout, applies the right rate, and stores the calculation. The invoice PDF Stripe generates includes the correct line item, the correct VAT note ("Reverse charge: VAT to be paid by the recipient"), and a unique invoice number that meets the German continuous numbering requirement.&lt;/p&gt;
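&lt;p&gt;On the API side the switch is one parameter. A hedged sketch of the Checkout Session params (&lt;code&gt;automatic_tax&lt;/code&gt; is the real Stripe parameter that turns the calculation on; the price ID and URL are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Params for stripe.checkout.sessions.create with Stripe Tax turned on.
function checkoutParams(priceId) {
  return {
    mode: 'payment',
    line_items: [{ price: priceId, quantity: 1 }],
    automatic_tax: { enabled: true },
    success_url: 'https://example.com/thanks',
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;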

&lt;p&gt;The cost is 0.5% on top of the transaction fee, which sounds small until you do the math on a year. For a studio doing roughly 18 to 30 invoices a month with mixed B2B and B2C, the extra Stripe Tax fee runs around 25 to 60 EUR a month for me. That is less than my old habit of paying my tax advisor to fix my VAT mistakes after the fact. I tested removing it once and immediately had to issue a credit note to a French B2B client because I had charged them German VAT instead of applying reverse charge. Back on it stayed.&lt;/p&gt;

&lt;p&gt;What Stripe Tax does not do: it does not file your USt-VA. It does not push to DATEV. It does not know about your other income (cash invoices, marketplace payouts, a one-off Shopify sale that bypassed Stripe). For that, you need glue.&lt;/p&gt;

&lt;h2&gt;The 50-line Node script that makes my tax advisor stop emailing me&lt;/h2&gt;

&lt;p&gt;My Steuerberater (tax advisor) wants one thing from me each month: a DATEV-formatted CSV with all my Stripe transactions. Stripe's own dashboard export is not in DATEV format. The closest you can get is a custom report, which is fine for a single month but breaks every time DATEV updates the field spec.&lt;/p&gt;

&lt;p&gt;So I wrote a script. It pulls the previous month's transactions via &lt;code&gt;stripe.charges.list&lt;/code&gt;, maps them to DATEV's "Buchungsstapel" CSV columns (Umsatz, Soll/Haben-Kennzeichen, Konto, Gegenkonto, BU-Schluessel, Belegfeld 1, Datum, Buchungstext), and writes the file. The whole thing fits in 50 lines including imports and is the most boring code I own.&lt;/p&gt;

&lt;p&gt;The skeleton, with details trimmed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Stripe&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stripe&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;stringify&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;csv-stringify/sync&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;writeFileSync&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stripe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Stripe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRIPE_SECRET&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2026-04-01&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getTime&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2026-05-01&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getTime&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;charges&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;stripe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;charges&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;created&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;gte&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;end&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}))&lt;/span&gt; &lt;span class="nx"&gt;charges&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;charges&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;Umsatz&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;Soll_Haben&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;H&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;Konto&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1200&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;Gegenkonto&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;8400&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;BU_Schluessel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bu&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;9&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;Belegfeld_1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;Datum&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;created&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;Buchungstext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;

&lt;span class="nf"&gt;writeFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;export-04-2026.csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;header&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;delimiter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I run it on the 5th of each month. The output goes into a Dropbox folder my Steuerberater has read access to. He used to send me 10 emails a month asking for clarifications. Now he sends one, in May, asking how my Spanish tax certificate is going.&lt;/p&gt;

&lt;p&gt;The thing this script does not do: credit notes, partial reversals, disputes, marketplace transfers. For those I have a longer cousin script. If you want to see how the monthly close fits together, I covered that in &lt;a href="https://dev.to/blogs/lab/solo-studio-bookkeeping-in-90-minutes-a-month-my-stack-and-routine"&gt;Solo Studio Bookkeeping in 90 Minutes a Month&lt;/a&gt;.&lt;/p&gt;
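&lt;p&gt;For the simplest of those cases, a refund maps to the same column layout with the Soll/Haben flag flipped. This is a hedged sketch, not the cousin script: &lt;code&gt;refundToRow&lt;/code&gt; and the sample refund object are made up for illustration, and real credit notes need more care than this.&lt;/p&gt;

```javascript
// Hedged sketch: map a Stripe refund to a row in the same column layout
// as the charge export above. Soll_Haben flips to 'S' (Soll/debit)
// because a refund reverses the revenue posting. refundToRow and the
// sample object are illustrative only.
function refundToRow(r) {
  return {
    Umsatz: (r.amount / 100).toFixed(2).replace('.', ','),
    Soll_Haben: 'S',
    Konto: '1200',
    Gegenkonto: r.metadata?.account ?? '8400',
    BU_Schluessel: r.metadata?.bu ?? '9',
    Belegfeld_1: r.id.slice(-12),
    Datum: new Date(r.created * 1000).toISOString().slice(0, 10),
    Buchungstext: `Erstattung ${r.charge ?? ''}`.slice(0, 60),
  };
}

// Fabricated refund object, shaped like Stripe's Refund resource:
const row = refundToRow({
  id: 're_3AbCdEf123456789',
  amount: 4200,
  created: 1743600000,
  charge: 'ch_3AbCdEf123456789',
  metadata: {},
});
console.log(row.Umsatz, row.Soll_Haben); // 42,00 S
```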

&lt;h2&gt;
  
  
  USt-VA via ELSTER ERiC: the honest version
&lt;/h2&gt;

&lt;p&gt;The USt-VA (Umsatzsteuer-Voranmeldung) is the German monthly VAT pre-declaration. Every month, by the 10th, you tell the Finanzamt how much VAT you collected, how much input VAT you can deduct, and you pay the difference. It is the filing most often submitted late in German freelance life, because the official ELSTER portal is, generously, an experience.&lt;/p&gt;

&lt;p&gt;There are three ways to file it.&lt;/p&gt;

&lt;p&gt;The official path is ERiC, the ELSTER Rich Client, a binary library you can call from C, Java, or via a Python wrapper. It validates your XML against the Finanzamt schema and submits over a TLS-encrypted channel. I built a wrapper around it once, in 2024, and got it to submit a real USt-VA. It worked. It also took me a weekend, the docs are German tax German (different from regular German), and the schema changes every year. If you are the kind of person who enjoys decoding XSD files for fun, this path is free and reliable.&lt;/p&gt;

&lt;p&gt;The pragmatic path is sevDesk or &lt;a href="https://www.lexoffice.de" rel="noopener noreferrer"&gt;lexoffice&lt;/a&gt;. Both have proper APIs that wrap ELSTER. You push your invoices and expenses into their data model, click submit, and they handle the XML and the ERiC handshake. sevDesk's API costs me about 18 EUR a month on the smallest plan that includes ELSTER submission. It is the lazy version and I switched to it after a quarter of fighting ERiC directly. Worth it.&lt;/p&gt;

&lt;p&gt;The wrong path is "I will file it through the web portal manually each month." You will forget. Three months from now you will get a friendly letter from your Finanzamt with a 50 EUR late fee, and the fee compounds. Pick one of the first two.&lt;/p&gt;
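&lt;p&gt;Whichever path you pick, the payload it submits boils down to a handful of Kennzahlen. A simplified sketch of the arithmetic, assuming the common case of only domestic 19% sales; &lt;code&gt;ustva&lt;/code&gt; is a hypothetical helper, and real forms have many more fields:&lt;/p&gt;

```javascript
// Simplified sketch of what every filing path ends up submitting.
// KZ81: net sales taxed at 19% (reported in full EUR, the Finanzamt
// derives the VAT), KZ66: deductible input VAT, KZ83: the payment due.
// Rounding is simplified; a real USt-VA has many more Kennzahlen.
function ustva({ netSales19, inputVat }) {
  const kz81 = Math.floor(netSales19);       // net amount, full EUR
  const vatOwed = kz81 * 0.19;               // 19% on the reported net
  const kz66 = inputVat;                     // input VAT keeps its cents
  const kz83 = +(vatOwed - kz66).toFixed(2); // what you transfer by the 10th
  return { kz81, kz66, kz83 };
}

// Example month: 4200 EUR net domestic sales, 260.50 EUR input VAT.
console.log(ustva({ netSales19: 4200, inputVat: 260.5 }));
```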

&lt;h2&gt;
  
  
  What I send my customers vs what goes to the Finanzamt
&lt;/h2&gt;

&lt;p&gt;The invoice the customer gets is generated by Stripe and looks clean: my studio name, customer details, line items, VAT, total, IBAN, the German legal footer (Steuernummer, USt-IdNr, Kleinunternehmer status if applicable). Stripe stores it as a hosted PDF and sends a download link in the receipt email.&lt;/p&gt;

&lt;p&gt;The data the Finanzamt eventually sees is different. They see aggregates, not individual invoices, and they only care about the per-month sum of taxable sales by VAT category (19%, 7%, reverse charge, OSS), the input VAT I deducted, and the resulting payment. They do not look at customer names. They will, however, audit me one day and at that point they want every individual invoice, in DATEV-readable form, with continuous numbering and matching bank movements.&lt;/p&gt;

&lt;p&gt;This is why the boring script matters more than the pretty PDF. The PDF is for trust. The CSV is for survival. If you are running a studio at this scale, you will want both, and you want them to come out of one source of truth, not two systems you reconcile by hand on a Sunday night.&lt;/p&gt;

&lt;p&gt;The other reason this stack works for me: I keep all expenses on one card. One business &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; Balance card, one Stripe account, one DATEV-readable export. No personal-card-for-business-stuff. That alone removed about 40% of my month-end pain when I switched in 2024. For more on the wider picture of what running a small studio with this kind of plumbing looks like, &lt;a href="https://dev.to/blogs/lab/stripe-projects-changes-solo-maker-economics"&gt;Stripe Projects Changes Solo Maker Economics&lt;/a&gt; covers the operational side I do not get into here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;The 2026 invoicing stack for a solo studio has gotten good enough that the math is not the hard part anymore. Stripe Tax handles the per-customer VAT logic. A 50-line Node script bridges Stripe to DATEV. Either ERiC directly or a service like sevDesk submits the USt-VA so I do not log into ELSTER manually.&lt;/p&gt;

&lt;p&gt;What is left for me to do is the human part: categorising weird transactions, deciding if a marketplace payout is a service or a goods sale, remembering that the Spanish OSS threshold changed in 2024. That part still takes 90 minutes a month. The rest takes about as long as making coffee.&lt;/p&gt;

&lt;p&gt;If you are looking for the complete picture of how the monthly routine fits together, the &lt;a href="https://dev.to/pages/studio"&gt;Studio page&lt;/a&gt; has the full RAXXO operating system, and I keep updating both as I find better tools. The boring infrastructure is what gives me time to make things people actually want to buy.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Build a Telegram Alert Bot for Shopify Orders in 30 Lines of TypeScript</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 08:41:15 +0000</pubDate>
      <link>https://dev.to/raxxostudios/build-a-telegram-alert-bot-for-shopify-orders-in-30-lines-of-typescript-akc</link>
      <guid>https://dev.to/raxxostudios/build-a-telegram-alert-bot-for-shopify-orders-in-30-lines-of-typescript-akc</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Bun serverless function on Vercel turns Shopify orders/create webhooks into Telegram pings in under 5s&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HMAC verification with the webhook secret stops random POSTs from spoofing your phone&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BotFather setup plus a one-time chat_id lookup is the only manual step before deploy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The full alert handler is 28 lines of TypeScript, retries on Telegram 429s, and costs zero EUR on Vercel Hobby&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I missed two orders during a flash sale because the Shopify mobile app push lag was somewhere between 90 seconds and never. So I wrote my own. The whole thing fits on one screen, runs on Vercel, and pings my phone before the customer's confirmation email even renders.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the alert actually looks like
&lt;/h2&gt;

&lt;p&gt;When a new order lands, my phone buzzes with this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
NEW ORDER #1042
Total: 47 EUR
City: Berlin
Items:
- Statusline Builder License x1
- Claude Blueprint Bundle x1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Five seconds, end to end. No app, no polling, no Zapier middleman charging 20 EUR a month for what is essentially one HTTPS request forwarding another HTTPS request. Telegram is the messenger because it has a real bot API, supports markdown, and works on every device I own. Slack would also work. Discord too. The pattern is identical, only the final POST URL changes.&lt;/p&gt;
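&lt;p&gt;To make "only the final POST URL changes" concrete, here is a sketch that builds the same alert for three targets. The webhook URLs are placeholders and &lt;code&gt;alertRequest&lt;/code&gt; is a hypothetical helper; the body keys (&lt;code&gt;text&lt;/code&gt; for Telegram and Slack, &lt;code&gt;content&lt;/code&gt; for Discord) follow each platform's webhook format.&lt;/p&gt;

```javascript
// Same alert, three transports. Only the URL and the body key change.
// All URLs below are placeholders ($TOKEN, $CHAT_ID, etc. are not real).
function alertRequest(target, text) {
  const targets = {
    telegram: {
      url: 'https://api.telegram.org/bot$TOKEN/sendMessage',
      body: { chat_id: '$CHAT_ID', text },
    },
    slack: {
      url: 'https://hooks.slack.com/services/$WEBHOOK_PATH',
      body: { text },
    },
    discord: {
      url: 'https://discord.com/api/webhooks/$ID/$TOKEN',
      body: { content: text },
    },
  };
  const t = targets[target];
  return {
    url: t.url,
    options: {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(t.body),
    },
  };
}

console.log(alertRequest('slack', 'NEW ORDER #1042').options.body); // {"text":"NEW ORDER #1042"}
```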

&lt;p&gt;The point of this build is that it is small enough to actually understand. You can read the whole function, hold it in your head, and modify it without spelunking through a vendor dashboard. That matters when something breaks at 2am during a launch.&lt;/p&gt;

&lt;p&gt;If you want the broader pattern of using webhooks to drive automation across your store, &lt;a href="https://dev.to/blogs/lab/building-a-webhook-system-for-shopify-order-automation"&gt;Building a Webhook System for Shopify Order Automation&lt;/a&gt; covers the full architecture. This article is the minimum viable version of that, scoped to one job.&lt;/p&gt;

&lt;h2&gt;
  
  
  BotFather and chat_id, the only manual part
&lt;/h2&gt;

&lt;p&gt;Open Telegram. Search for &lt;code&gt;@BotFather&lt;/code&gt;. Send &lt;code&gt;/newbot&lt;/code&gt;. Pick a name. Pick a username ending in &lt;code&gt;bot&lt;/code&gt;. BotFather hands you a token that looks like &lt;code&gt;7234567890:AAH...&lt;/code&gt;. Save it. This is the credential your serverless function will use to send messages. Treat it like a password.&lt;/p&gt;

&lt;p&gt;The bot can now send messages, but it does not know where to send them yet. Telegram routes messages by &lt;code&gt;chat_id&lt;/code&gt;, a number that identifies a user, group, or channel. To find yours, send any message to your new bot inside Telegram, then visit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
https://api.telegram.org/bot&amp;lt;YOUR_TOKEN&amp;gt;/getUpdates

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see a JSON blob. Inside &lt;code&gt;result[0].message.chat.id&lt;/code&gt; is your number. Mine is a 10-digit positive integer. Save that too.&lt;/p&gt;

&lt;p&gt;If you want alerts in a group instead of a private chat, add the bot to the group, send a message, and the same endpoint will return a negative integer chat_id. Channels work the same way but require posting permission for the bot.&lt;/p&gt;

&lt;p&gt;That is the entire Telegram side. Two values, both stored as Vercel environment variables, never committed to git.&lt;/p&gt;
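&lt;p&gt;If you would rather script the chat_id lookup than eyeball the JSON, the extraction is one line. The response shape below follows the Bot API's &lt;code&gt;getUpdates&lt;/code&gt; format; the payload is canned so this runs without a token or network access.&lt;/p&gt;

```javascript
// Pull the chat_id out of a getUpdates response. The field names are
// the Telegram Bot API's; the sample payload is a canned example.
function extractChatId(updates) {
  const first = updates.result.find(u => u.message);
  return first ? first.message.chat.id : null;
}

const sample = {
  ok: true,
  result: [
    {
      update_id: 1,
      message: {
        message_id: 7,
        chat: { id: 5551234567, type: 'private' },
        text: 'hi',
      },
    },
  ],
};

console.log(extractChatId(sample)); // 5551234567
```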

&lt;h2&gt;
  
  
  The 28-line handler
&lt;/h2&gt;

&lt;p&gt;This is the whole thing. Bun is not strictly required (plain Node works), but Bun starts faster on Vercel and the types are cleaner. Save as &lt;code&gt;api/shopify-order.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;node:crypto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TELEGRAM_TOKEN&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CHAT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TELEGRAM_CHAT_ID&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SECRET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SHOPIFY_WH_SIGNATURE&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;edge&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x-shopify-hmac-sha256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;expected&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createHmac&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sha256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;SECRET&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;digest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;base64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timingSafeEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;expected&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bad sig&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;o&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;line_items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`- &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; x&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;city&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;shipping_address&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;city&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;unknown&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`NEW ORDER #&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;order_number&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\nTotal: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;total_price&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;currency&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\nCity: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;city&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\nItems:\n&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://api.telegram.org/bot&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;TG&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/sendMessage`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;content-type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;chat_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CHAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ok&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;telegram failed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;502&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things worth pointing out before you ship it.&lt;/p&gt;

&lt;p&gt;The HMAC check uses &lt;code&gt;timingSafeEqual&lt;/code&gt; instead of &lt;code&gt;===&lt;/code&gt; because plain string comparison returns early at the first mismatched byte and so leaks timing information. In principle, anyone who can hit your endpoint could recover a valid signature byte by byte from those timing differences. It is paranoid for a small store, but it is one extra line and removes the question entirely.&lt;/p&gt;

&lt;p&gt;The body is read once with &lt;code&gt;req.text()&lt;/code&gt; because the HMAC is computed over the raw payload. If you parse JSON first and re-stringify, the signature will not match. Order matters here: verify, then parse.&lt;/p&gt;

&lt;p&gt;The retry loop only catches Telegram 429s. Shopify retries on its side if your function returns a 5xx, so I do not need anything special there. Telegram's rate limit is about 30 messages per second, far above what a small store will ever hit, but a flash sale can spike, and a couple of retries beat losing the alert entirely.&lt;/p&gt;
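&lt;p&gt;Telegram's 429 responses also carry a &lt;code&gt;parameters.retry_after&lt;/code&gt; hint in seconds. If you want to honor it instead of guessing, the wait calculation looks like this; &lt;code&gt;waitMs&lt;/code&gt; is a sketch, with the same linear fallback as the loop above.&lt;/p&gt;

```javascript
// Telegram's 429 body looks like
// { ok: false, error_code: 429, parameters: { retry_after: 5 } }.
// waitMs is a sketch: prefer the server's hint, otherwise fall back to
// linear backoff (attempt is 0-based, so waits are 1s, 2s, 3s).
function waitMs(body, attempt) {
  const hinted = body?.parameters?.retry_after;
  if (typeof hinted === 'number') return hinted * 1000;
  return (attempt + 1) * 1000;
}

console.log(waitMs({ ok: false, error_code: 429, parameters: { retry_after: 5 } }, 0)); // 5000
console.log(waitMs({ ok: false, error_code: 429 }, 1)); // 2000
```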

&lt;p&gt;For Vercel Edge Config patterns that pair well with this kind of webhook handler, &lt;a href="https://dev.to/blogs/lab/5-vercel-edge-config-patterns-i-use-for-shopify-a-b-tests"&gt;5 Vercel Edge Config Patterns I Use For Shopify A/B Tests&lt;/a&gt; walks through Edge Config setup. Same deployment target, same instant cold starts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wiring the Shopify webhook
&lt;/h2&gt;

&lt;p&gt;Two ways to register the webhook. The dashboard works fine for one alert: Settings, Notifications, Webhooks, scroll to the bottom, Create webhook. Pick &lt;code&gt;Order creation&lt;/code&gt;, format JSON, paste your Vercel URL, save. Shopify shows the shared signing secret at the bottom of that same page; every dashboard webhook on the shop is signed with that one value, so copy it into your environment once.&lt;/p&gt;

&lt;p&gt;The other way is the Admin API, which I prefer because I can put it in version control:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"https://your-shop.myshopify.com/admin/api/2026-01/webhooks.json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-Shopify-Access-Token: &lt;/span&gt;&lt;span class="nv"&gt;$ADMIN_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"webhook":{"topic":"orders/create","address":"https://your-app.vercel.app/api/shopify-order","format":"json"}}'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Either way, the signing key lives at Settings, Notifications, Webhooks. Drop it into Vercel as &lt;code&gt;SHOPIFY_WH_SIGNATURE&lt;/code&gt;. Add &lt;code&gt;TELEGRAM_TOKEN&lt;/code&gt; and &lt;code&gt;TELEGRAM_CHAT_ID&lt;/code&gt; while you are there. Three environment variables, all sensitive, all in Vercel, never in code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
vercel &lt;span class="nb"&gt;env &lt;/span&gt;add SHOPIFY_WH_SIGNATURE production
vercel &lt;span class="nb"&gt;env &lt;/span&gt;add TELEGRAM_TOKEN production
vercel &lt;span class="nb"&gt;env &lt;/span&gt;add TELEGRAM_CHAT_ID production
vercel deploy &lt;span class="nt"&gt;--prod&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First deploy takes about 20 seconds. Subsequent deploys are faster because the build cache is warm. Vercel Hobby is fine for this. I have run alert bots like this on the free tier for over a year without ever hitting the function invocation limit, because order webhooks are by definition rare. If you do 1000 orders a month, that is 1000 invocations. Hobby allows 100k.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing without placing real orders
&lt;/h2&gt;

&lt;p&gt;Shopify has a nice quality-of-life feature in the webhook list: a Send test notification button. It fires the webhook with a sample payload, which is enough to verify your function works end to end. You should see your phone buzz within a couple of seconds.&lt;/p&gt;

&lt;p&gt;If nothing happens, check Vercel Logs first. The function either failed signature verification (wrong secret in env), threw a JSON parse error (malformed payload), or got a 4xx back from Telegram (wrong token or chat_id). The error message will tell you which one.&lt;/p&gt;

&lt;p&gt;I also keep a tiny curl one-liner to test the Telegram side in isolation, decoupled from Shopify entirely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
curl &lt;span class="s2"&gt;"https://api.telegram.org/bot&lt;/span&gt;&lt;span class="nv"&gt;$TELEGRAM_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;/sendMessage"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"chat_id=&lt;/span&gt;&lt;span class="nv"&gt;$CHAT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"text=ping"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If that pings your phone, Telegram is fine and the issue is in the webhook chain. If it does not ping, the token or chat_id is wrong. Two layers to check, easy to isolate.&lt;/p&gt;

&lt;p&gt;One thing that bit me the first time: Telegram silently drops messages over 4096 characters. Not an issue for order alerts, but if you start adding line item descriptions or customer notes, watch the length. The fetch will return 200, the message will not arrive, you will assume the bot is broken.&lt;/p&gt;
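&lt;p&gt;A guard for that, sketched. The 4096-character limit is Telegram's; the clamp helper and its truncation marker are my own additions, not part of the original 28-line handler.&lt;br&gt;
&lt;/p&gt;

```typescript
// Sketch: clamp outgoing text below Telegram's 4096-character message
// limit so long line items or customer notes cannot eat an alert.
const TELEGRAM_MAX = 4096;

function clampMessage(text: string) {
  if (TELEGRAM_MAX >= text.length) return text;
  const marker = "\n[truncated]";
  return text.slice(0, TELEGRAM_MAX - marker.length) + marker;
}
```

&lt;p&gt;Run every outgoing message through the clamp and the worst case becomes a shortened alert instead of a missing one.&lt;/p&gt;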

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;This is the cheapest, most boring useful thing I built last quarter. Five seconds from order placed to phone buzzing, three environment variables, twenty-eight lines of TypeScript. It costs zero EUR to run and has not gone down once in three months.&lt;/p&gt;

&lt;p&gt;The trick is keeping it small. Every time I have been tempted to add features, like an end of day order summary, a low-stock ping, or a Slack mirror, I have built a separate function instead. One job per handler. When something breaks, I know exactly where to look.&lt;/p&gt;

&lt;p&gt;If you want more patterns like this for running a small Shopify store with serverless plumbing instead of subscription apps, the Lab has a growing collection of tutorials. Start at &lt;a href="https://dev.to/pages/lab-overview"&gt;Lab Overview&lt;/a&gt; for the index. The whole point of the studio is showing how a one-person team can replace what used to need a team, with code small enough to fit in your head.&lt;/p&gt;

&lt;p&gt;Try it on a staging store first. The Send test notification button is your friend. And keep the BotFather token out of git.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Claude Managed Agents Just Got Dreams, 20-Way Parallelism, and Self-Checking Loops</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 07:19:34 +0000</pubDate>
      <link>https://dev.to/raxxostudios/claude-managed-agents-just-got-dreams-20-way-parallelism-and-self-checking-loops-oj5</link>
      <guid>https://dev.to/raxxostudios/claude-managed-agents-just-got-dreams-20-way-parallelism-and-self-checking-loops-oj5</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Claude Managed Agents now ship Dreaming, a memory consolidator that learns from session logs without overwriting your data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-agent orchestration runs up to 20 specialized agents in parallel, useful for blog cluster ships and inventory sweeps&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Result loops let agents self-check outputs against a rubric before returning, saving you a manual QA pass&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Webhooks plug agent runs into Slack, Shopify, or any external system without polling cron jobs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Anthropic dev conference in San Francisco dropped a quiet bomb on Claude Managed Agents this week. Four features at once. Dreaming in testing, plus three public betas (multi-agent orchestration, result loops, webhooks). I spent yesterday rerouting parts of my one-person studio around them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dreaming, or how Claude Managed Agents learn while you sleep
&lt;/h2&gt;

&lt;p&gt;Dreaming is the headline. It is in testing, not public beta yet, so treat this section as a heads-up and not a how-to. The mechanic is simple to describe and hard to build. The agent reads its own session logs, finds repeating patterns, merges duplicate memories, and writes a tighter, optimized memory store. The original logs stay untouched. Your agent learns over time without you cleaning up after it.&lt;/p&gt;

&lt;p&gt;If you read &lt;a href="https://dev.to/blogs/lab/claude-managed-agents-now-have-filesystem-memory"&gt;Claude Managed Agents Now Have Filesystem Memory&lt;/a&gt; when it landed in late April, you already have the mental model. Filesystem memory was step one (give the agent a place to write notes). Dreaming is step two (let it tidy those notes on its own).&lt;/p&gt;

&lt;p&gt;In practice, here is what changes for a daily-use agent. Right now my blog-publish agent has a memory file that, after 200 articles, looks like a hoarder's garage. Same affiliate rule, written 14 different ways. Same TLDR shape, restated every other week. Dreaming is supposed to fold those into one canonical entry per concept. The original sessions stay readable. The active memory gets shorter and sharper.&lt;/p&gt;

&lt;p&gt;The thing I am watching for is whether Dreaming respects the boundary between rules and observations. A rule like "currency is 5€ never €5" should never get merged with a one-time observation. If the testing build keeps that line clean, this becomes the most useful agent feature shipped all year. If not, expect a wave of "my agent forgot the most important rule" tweets.&lt;/p&gt;

&lt;p&gt;I will write up the real test once I get into the testing cohort.&lt;/p&gt;

&lt;h2&gt;
  
  
  20-agent parallelism, the actual unlock for solo studios
&lt;/h2&gt;

&lt;p&gt;Multi-agent orchestration is now public beta. Up to 20 specialized agents collaborating in parallel, with shared context and a coordinating runner. This is the feature that changes which workflows are realistic for a one-person operation.&lt;/p&gt;

&lt;p&gt;Three concrete RAXXO-style use cases I am moving over this week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blog cluster ship.&lt;/strong&gt; I publish multiple long-form articles a day on the lab. Sequential publishing took 40 to 60 minutes per batch (research, draft, humanize, publish, syndicate, OG image). With orchestration, three writer agents work in parallel on three distinct topics, each backed by its own research sub-agent. A coordinator checks for cluster overlap before they all hit publish. End-to-end shipping for three articles drops to roughly the time of one. (Yes, the article you are reading was written this way.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shopify inventory watch.&lt;/strong&gt; I run a small product catalog on &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt;. Last Christmas I lost two days to a stockout I should have caught. The new pattern: a runner spawns one agent per product line. Each watches its own SKUs, checks demand spikes against the last 30 days, and flags drift. The runner aggregates, writes a single morning summary. 12 agents, one report, zero polling cron jobs. This used to need a queue worker and a job scheduler. Now it is a single config file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lipsync render queue.&lt;/strong&gt; My Lexxa video pipeline cuts long-form YouTube from blog articles. The bottleneck has always been the lipsync render step (each Magnific Speak clip takes 4 to 8 minutes). With orchestration, eight render agents process eight clips in parallel, a stitcher agent assembles the final cut, an audio agent normalizes levels. What used to be a 90-minute serial render is now closer to 15 minutes wall-clock.&lt;/p&gt;

&lt;p&gt;The honest caveat. 20 agents in parallel sounds heroic until you realize the coordination problem grows fast. If two agents touch the same memory file you get write conflicts. The orchestration layer handles a lot of this for you, but you still have to design for parallelism. Most workflows do not actually need 20 agents. Most need 3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Result loops, the feature that saves a manual QA pass
&lt;/h2&gt;

&lt;p&gt;Result loops are the public-beta feature I expected least and now use most. The idea is dead simple. The agent ships an output, then reads it back against a rubric you provide, then either accepts or revises. No human in the loop until the final answer.&lt;/p&gt;

&lt;p&gt;Before result loops, every blog draft I generated needed a manual pass for the obvious tells. Em dashes that snuck back in. Words like "moreover" and "furthermore". The currency format slipping into "€5" instead of "5€". I caught these by hand or with a brand-check hook. Both have a cost.&lt;/p&gt;

&lt;p&gt;Now the publish agent ships, then runs a self-check against a 12-point rubric (TLDR shape, word count, voice rules, affiliate placement, internal links, slug format, you get the idea). If the rubric flags anything, the agent revises and re-checks. Up to 3 loops, then it asks for help.&lt;/p&gt;

&lt;p&gt;What makes this work is the rubric. A vague rubric ("is this on-brand?") gets a vague answer ("yes"). A specific rubric ("count em dashes, count instances of $, count words below 1400") gets a specific revise. The rule of thumb I am settling on: every rubric line should be checkable by a regex or a count. If a human has to interpret it, the agent will too.&lt;/p&gt;
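&lt;p&gt;The "checkable by a regex or a count" rule translates to code almost directly. This is an illustration, not the platform's rubric format: the check names mirror examples from this post, and the thresholds are mine.&lt;br&gt;
&lt;/p&gt;

```typescript
// Illustrative rubric: every line is a regex or a count, so the
// pass/fail answer is mechanical rather than interpretive.
const rubric = [
  { name: "no em dashes", pass: (t: string) => !t.includes("\u2014") },
  { name: "no moreover/furthermore", pass: (t: string) => !/\b(moreover|furthermore)\b/i.test(t) },
  { name: "currency written 5€, not €5", pass: (t: string) => !/€\s?\d/.test(t) },
  { name: "at least 1400 words", pass: (t: string) => t.split(/\s+/).filter(Boolean).length >= 1400 },
];

function checkDraft(text: string) {
  // Return the names of every failed rubric line; empty means ship.
  return rubric.filter((c) => !c.pass(text)).map((c) => c.name);
}
```

&lt;p&gt;An empty result means ship; a non-empty result is the revise list the loop feeds back to the agent.&lt;/p&gt;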

&lt;p&gt;The catch. Result loops cost extra tokens. Each revise is another full output. For high-stakes work (a launch announcement, a paid product page) the savings on the manual pass are worth it. For throwaway internal tasks, leave the loop off and ship.&lt;/p&gt;

&lt;p&gt;If you want the deeper context on the agent platform itself, &lt;a href="https://dev.to/blogs/lab/claude-managed-agents-build-and-deploy-ai-agents-at-scale"&gt;Claude Managed Agents: Build and Deploy AI Agents at Scale&lt;/a&gt; covers the deployment story.&lt;/p&gt;

&lt;h2&gt;
  
  
  Webhooks, plus the thing nobody is mentioning
&lt;/h2&gt;

&lt;p&gt;Webhooks are the fourth public-beta feature, and they are exactly what they sound like. An agent finishes a run, posts a JSON payload to a URL of your choice. Slack channel, Shopify webhook, your own endpoint, whatever you wired up.&lt;/p&gt;

&lt;p&gt;This feels obvious until you realize what it replaces. Until last week, knowing when an agent finished meant either polling the API on a cron, or building a status dashboard with a refresh loop. Both work. Both are annoying. Webhooks delete the problem.&lt;/p&gt;

&lt;p&gt;The patterns I am wiring up first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publish to Slack.&lt;/strong&gt; My blog-publish agent now posts to a private channel the moment a draft validates. Title, slug, word count, link. If something fails the rubric, I get the error, not just silence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schedule to &lt;a href="https://join.buffer.com/raxxo-studios" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt;.&lt;/strong&gt; A repurposing agent fans every new long-form article into a LinkedIn post and three tweets. The webhook posts the formatted content to my Buffer queue. I review on the phone, hit approve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Render queue done.&lt;/strong&gt; The Lexxa pipeline pings a webhook the second a final cut is ready, with the &lt;a href="https://vercel.com" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt; blob URL of the MP4. I get a phone notification, watch the cut, decide whether to ship.&lt;/p&gt;

&lt;p&gt;The thing nobody is mentioning. Webhooks plus result loops plus orchestration is the actual story. Each feature on its own is fine. Stacked, they remove the last reasons to babysit a long-running agent. You queue the work, you go for a walk, you come back to a finished result with a Slack ping for anything that needed your judgment. The dev community on X is calling this the autonomous turn. It is overstated, but only barely.&lt;/p&gt;

&lt;p&gt;A note on rate limits. None of this is free. Multi-agent orchestration burns tokens roughly linearly with agent count. Result loops add 1 to 3x on every output. Webhooks are cheap but the systems they trigger are not. Budget accordingly. I am running closer to my plan ceiling than I was a week ago. Worth it for me, plan ahead for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Claude Managed Agents went from "nice infrastructure" to "actually changes how I plan a Tuesday" in a single dev conference. Dreaming will, in time, fix the memory-bloat problem that every long-running agent eventually hits. Multi-agent orchestration is the feature that makes solo-studio workflows competitive with small-team output. Result loops save the manual QA pass on anything boring enough to rubric-check. Webhooks close the loop on babysitting.&lt;/p&gt;

&lt;p&gt;If you have not touched the platform yet, start with one orchestrated workflow. Pick the most boring repeatable thing you do (mine was the daily blog ship). Wire 3 agents and a webhook. You will know within a week whether the rest of your stack should follow.&lt;/p&gt;

&lt;p&gt;If you want to see the rest of how I am running RAXXO Studios as a one-person AI shop, the toolkit lives at &lt;a href="https://dev.to/pages/studio"&gt;/pages/studio&lt;/a&gt;. The full archive of agent experiments lives at &lt;a href="https://dev.to/blogs/lab"&gt;/blogs/lab&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Claude Just Shipped Finance Agent Templates: Pitches, Valuations, and Month-End Close</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 07:18:57 +0000</pubDate>
      <link>https://dev.to/raxxostudios/claude-just-shipped-finance-agent-templates-pitches-valuations-and-month-end-close-3j0c</link>
      <guid>https://dev.to/raxxostudios/claude-just-shipped-finance-agent-templates-pitches-valuations-and-month-end-close-3j0c</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Anthropic shipped ready-to-run Claude agent templates for finance teams covering pitch decks, valuation reviews, and month-end close&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Templates install as plugins in Cowork or Claude Code, or run as Managed Agents via the public cookbooks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Three named workflows: pitch building, valuation review, and closing the books at month-end&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I tested the same template pattern on solo-studio finance jobs (quarterly invoicing, prepaid retainers, German USt-VA), and most need real adaptation before you trust them&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anthropic just dropped a set of Claude agent templates aimed at finance teams. Pitches, valuation reviews, month-end close. Three workflows that normally eat a junior banker's weekend, packaged as installable plugins. I read the announcement twice because the wording is sneaky: these are templates, not products, and where you run them changes the story completely.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "Claude agent templates" actually means
&lt;/h2&gt;

&lt;p&gt;Claude agent templates are pre-built workflows that show Claude how to handle a specific job, plus the tools, prompts, and structure it needs to finish it. You do not get a finished app. You get a starting point that already knows the steps.&lt;/p&gt;

&lt;p&gt;Anthropic shipped the financial-services pack with three options for where to run them. You can install them as plugins inside Cowork (Anthropic's collaborative workspace product) or inside Claude Code (the terminal coding agent most readers here already use). Or you can grab the cookbook code and run them as Managed Agents in production, which means Anthropic hosts the runtime and you just call an endpoint.&lt;/p&gt;

&lt;p&gt;That last path matters more than it sounds. Managed Agents handle the orchestration layer most teams rebuild from scratch: queueing, retries, memory, tool routing. If you already build with the Claude Agent SDK, the cookbook is a head start. For the deeper context on the SDK side, see &lt;a href="https://dev.to/blogs/lab/i-built-3-production-agents-with-the-claude-agent-sdk-in-one-weekend"&gt;I Built 3 Production Agents With the Claude Agent SDK in One Weekend&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The financial-services templates are the first vertical pack. The pattern is obvious: Anthropic is publishing reference workflows for industries where the agent-shape is well understood and the data is repetitive. Finance, then probably legal, then probably sales ops.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three named workflows, unpacked
&lt;/h2&gt;

&lt;p&gt;Pitches. Valuations. Month-end close. Every analyst has lived all three.&lt;/p&gt;

&lt;p&gt;The pitch-building template. A pitch deck draft is mostly assembly: company background, market size, comparable transactions, recommended structure, key risks. Claude is good at structured assembly when it has the right inputs. The template likely wires Claude to a CRM or research source, drops the data into a deck skeleton, and outputs slides ready for a human to refine. The painful 60% of the deck gets done while the analyst sleeps. The 40% that needs judgment still needs judgment.&lt;/p&gt;

&lt;p&gt;The valuation review template. Valuation work is forensic: pull comps, sanity-check the DCF model, flag where assumptions look optimistic, summarize for the deal team. Claude reads spreadsheets, follows formulas, and writes plain-English memos. The template probably hands Claude a model file plus market data, asks it to recompute key cells and flag anomalies, then writes the review memo. This is the workflow I would actually trust earliest, because the output is a memo a human reviews line by line. Claude is auditable here. If the memo says top-line grows 40% a year and the underlying assumption is a single bullish analyst note, you see that in plain text and you push back. Compare that to a black-box "AI valuation tool" that spits out a number. Memos win.&lt;/p&gt;

&lt;p&gt;Month-end close. This one I find the most interesting because it is not glamorous. Reconciling accounts, matching invoices to payments, flagging variances, building the close package. Repetitive, rule-driven, deadline-sensitive. The exact shape Claude is good at when you give it the right tools. A template that closes the books in 4 hours instead of 4 days is not a moonshot. Mid-market accounting teams are already doing it manually with junior staff. Replacing that with a reviewed agent run is a real workflow change, not a demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cowork plugin vs Claude Code plugin vs Managed Agent
&lt;/h2&gt;

&lt;p&gt;Three install paths, three different audiences.&lt;/p&gt;

&lt;p&gt;Cowork is the workspace product. Plugins there sit next to a finance team's actual collaboration surface. The CFO can run a valuation review from the same place they run everything else. No new tool to adopt, no terminal, no cookbook. This is the path enterprise will take.&lt;/p&gt;

&lt;p&gt;Claude Code plugins are for developers who want to trigger the same workflow from the terminal. If you live in Claude Code (I do), an installed financial-services plugin gives you new slash commands. That is the right shape for solo operators and small teams who treat the terminal as the office.&lt;/p&gt;

&lt;p&gt;Managed Agents via the cookbooks is the production path. You take the template, adapt the prompts, wire it into your stack, and run it through Anthropic's hosted runtime. This is for product teams building finance features into their own apps, not for finance teams using Claude internally. Background on the runtime: &lt;a href="https://dev.to/blogs/lab/claude-managed-agents-now-have-filesystem-memory"&gt;Claude Managed Agents Now Have Filesystem Memory&lt;/a&gt; shows what you actually get for free at that layer.&lt;/p&gt;

&lt;p&gt;The reason Anthropic shipped all three install paths together is honest: the same workflow has a different shape depending on whether you are a CFO, a developer, or a vendor. Forcing one shape on all three would have killed adoption.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a solo studio actually gets from this pattern
&lt;/h2&gt;

&lt;p&gt;I run the books for a one-person studio. So my reaction to "ready-to-run finance agent templates" was direct: which of my actual finance jobs does this pattern fit, and which ones need adaptation?&lt;/p&gt;

&lt;p&gt;Quarterly invoicing. This is a fit. I have recurring clients, predictable line items, EUR amounts, VAT rules I never want to think about. A template that pulls the project log, drafts invoices in the right format, and queues them for review would save a clean afternoon every quarter. The pitch-building template is the closest sibling because invoicing is structured assembly with fixed inputs.&lt;/p&gt;

&lt;p&gt;Prepaid retainer tracking. Mostly a fit. When a client pays upfront for a 3-month engagement, that 3,000 EUR sits on the balance sheet as a liability until the work is delivered. Tracking it across multiple projects gets ugly fast. The valuation review template is the right shape: read the contract, read the time log, compute what is recognized, flag anything weird. The catch is the German chart of accounts. The cookbook will be US-GAAP shaped. Adaptation is real work, not a 5-minute prompt edit.&lt;/p&gt;

&lt;p&gt;German USt-VA prep. This is where the honesty kicks in. Quarterly VAT filing in Germany has a specific form, specific deadlines, specific reverse-charge rules for EU clients. No US cookbook handles this out of the box. You can absolutely build a Claude agent for it, but you are starting from the SDK, not from the financial-services template. Treat the templates as inspiration for the agent shape, not as a turnkey solution for German tax.&lt;/p&gt;

&lt;p&gt;Receipt processing. A fit, but the tooling matters more than the template. Claude reads PDFs and screenshots fine. The template pattern (read, classify, file, flag) maps directly. The real work is wiring it to wherever your receipts live. Mine sit in three folders, two inboxes, and a stubborn pile of paper. Any agent has to deal with that mess before the prompt even matters. For my routine, see &lt;a href="https://dev.to/blogs/lab/solo-studio-bookkeeping-in-90-minutes-a-month-my-stack-and-routine"&gt;Solo Studio Bookkeeping in 90 Minutes a Month: My Stack and Routine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One more thing worth saying out loud. Templates are not models. The same Claude template runs differently depending on which model you point it at, how much context you give it, and which tools it can call. Anthropic shipping a template does not skip the part where you test it on your own data, your own edge cases, your own rounding rules. It just skips the cold start.&lt;/p&gt;

&lt;p&gt;The honest summary: the template pattern is right. The specific cookbooks are US-shaped. Solo operators outside the US should read the cookbooks for the agent design, then build their own version. That is still a 10x speedup over starting from a blank prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;Anthropic shipped Claude agent templates for finance because finance is one of the few jobs where the shape of the work is predictable enough that a generic template actually saves time. Pitches, valuations, month-end close. Three workflows that map onto a real Friday afternoon. The three install paths (Cowork plugin, Claude Code plugin, Managed Agent via cookbook) each fit a different team. Pick the one closest to where you already work.&lt;/p&gt;

&lt;p&gt;For solo studios, do not wait for a German-tax cookbook that may never come. Read the financial-services templates, copy the agent shape, swap the data sources for your own. The pattern transfers cleanly. The data layer never does.&lt;/p&gt;

&lt;p&gt;If you want a hub for the rest of the Claude agent coverage on this blog, the &lt;a href="https://dev.to/pages/topic-claude-code"&gt;Claude Code MOC&lt;/a&gt; collects the SDK, Managed Agents, and tooling pieces in one place. Start there, then come back here when Anthropic ships the next vertical pack.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Anthropic + SpaceX Colossus 1: 300MW, 220K GPUs, and Doubled Claude Limits</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Thu, 07 May 2026 07:18:22 +0000</pubDate>
      <link>https://dev.to/raxxostudios/anthropic-spacex-colossus-1-300mw-220k-gpus-and-doubled-claude-limits-200c</link>
      <guid>https://dev.to/raxxostudios/anthropic-spacex-colossus-1-300mw-220k-gpus-and-doubled-claude-limits-200c</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Anthropic doubled Claude Code five-hour rate limits for Pro, Max, Team, and Enterprise on May 6, 2026&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Peak-hours throttling is gone for Pro and Max, so afternoons feel like mornings again&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The 300 MW Colossus 1 deal adds 220,000+ NVIDIA GPUs and lands within one month&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For solo studios, this means longer build sessions, fewer "limit reached" pauses, and steadier API throughput&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hit the Claude Code limit twice last Tuesday before lunch. Once at 11:14, again at 12:42 after the cooldown. Yesterday Anthropic doubled the five-hour windows for Pro, Max, Team, and Enterprise, killed peak-hours throttling on Pro and Max, and announced they signed for all of SpaceX's Colossus 1 data center. 300 megawatts. 220,000+ NVIDIA GPUs. Online inside a month.&lt;/p&gt;

&lt;p&gt;This is the practical post about what doubling those limits actually changes for someone running a one-person studio off a Pro or Max seat. The compute side, not the agent-features side.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Anthropic SpaceX deal actually adds
&lt;/h2&gt;

&lt;p&gt;Colossus 1 is a SpaceX data center campus. Anthropic signed an agreement to use all of its compute capacity. The number that matters is 300 megawatts of new capacity coming online within one month, which translates to over 220,000 NVIDIA GPUs dedicated to Anthropic workloads.&lt;/p&gt;
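&lt;p&gt;A quick sanity check on those two numbers together, assuming the 300 MW figure is total campus draw rather than GPU silicon alone:&lt;br&gt;
&lt;/p&gt;

```typescript
// Back-of-envelope: 300 MW spread across 220,000 GPUs, counting
// everything on the meter (cooling, networking, conversion losses).
const campusWatts = 300_000_000;
const gpuCount = 220_000;
const wattsPerGpu = campusWatts / gpuCount;
console.log(Math.round(wattsPerGpu)); // about 1364 W per GPU, all-in
```

&lt;p&gt;Roughly 1.4 kW per card, all-in, is a plausible figure for a current-generation accelerator plus its share of campus overhead, so the two headline numbers describe the same buildout.&lt;/p&gt;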

&lt;p&gt;For context on the wider race, here are the recent commitments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon: up to 5 GW total, with about 1 GW expected by end of 2026&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google + Broadcom: 5 GW launching 2027&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microsoft + NVIDIA: 30 billion dollar Azure partnership&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fluidstack: 50 billion dollar American AI infrastructure investment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;300 MW sounds small next to a 5 GW Amazon number until you remember timelines. 5 GW from Amazon means a partial slice in eight months and the rest later. Colossus 1 is online in weeks, not quarters. That is the gap that matters for me right now, because "next year" does not help me on the Tuesday morning I get throttled mid-deploy.&lt;/p&gt;

&lt;p&gt;The deal also lines up with Anthropic's international expansion for regulated enterprises (financial services, healthcare, government). They committed to covering consumer electricity price increases tied to US data centers. That detail is unusual and worth noting because it means the cost of all that compute is not silently pushed onto residential bills near the campuses.&lt;/p&gt;

&lt;p&gt;For the rate-limit story, the only number that counts is "online within one month." That is the capacity that backs the doubled limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Doubled Claude Code rate limits, in plain numbers
&lt;/h2&gt;

&lt;p&gt;The headline change for paying users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pro: five-hour rate limit doubled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Max: five-hour rate limit doubled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Team: five-hour rate limit doubled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enterprise: five-hour rate limit doubled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pro and Max: peak-hours limit reduction removed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last bullet is the one I care about most. Until now, hitting Claude Code between roughly 9 AM and 6 PM US Pacific meant a quietly tightened budget. You could feel it. The same prompt that ran clean at 7 AM Berlin would slam into a wall at 4 PM. I learned to push heavy refactors to early morning and save the afternoon for browser work. That coping strategy is now obsolete.&lt;/p&gt;

&lt;p&gt;Doubling the five-hour window changes the rhythm of a working day. A Pro seat used to give me roughly two solid coding sprints before a forced cooldown. With the doubled budget, three sprints fit comfortably. For my Tuesday workflow (one big refactor, one feature, one bug bash) that is the difference between finishing in one day or pushing the bug bash to Wednesday.&lt;/p&gt;

&lt;p&gt;If you mostly run one or two prompts an hour, you may not feel the doubled limit at all. If you run agent loops, multi-file refactors, or long /audit-style passes that fan out across many tools, this is a real change in your day.&lt;/p&gt;

&lt;p&gt;I covered the older limit-checking workflow in &lt;a href="https://dev.to/blogs/lab/how-to-check-your-claude-usage-limit-3-ways"&gt;How to Check Your Claude Usage Limit (3 Ways)&lt;/a&gt;. The mechanic is the same, the budget is just bigger now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What 300 MW means for Claude Opus API users
&lt;/h2&gt;

&lt;p&gt;The press release also mentions that Claude Opus API rate limits were "substantially increased." No specific multiplier was given, so I am not going to invent one. What I can say is that Opus has been the bottleneck for me in a few places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Long /poorreview runs that hit Opus for the synthesis step&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Big Cline or agent SDK jobs that batch Opus calls&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Document-style tasks where Sonnet is fine for drafts but Opus is better for the final pass&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you sit on Sonnet for most work and only escalate to Opus, you will probably notice the change as fewer 429s on the escalation step. If you run Opus as a default, the API budget bump is the real win, not the Claude Code change.&lt;/p&gt;

&lt;p&gt;The Opus boost also has a quieter implication. Anthropic does not push out API rate-limit increases on Opus unless they can serve them. That is what 220,000 GPUs in a month buys you. So the API change is the most direct evidence that the Colossus 1 capacity is actually landing on the inference fleet, not just sitting in a press release.&lt;/p&gt;

&lt;p&gt;For pricing context on which Claude tier is right for which workload, see &lt;a href="https://dev.to/blogs/lab/claude-max-vs-pro-which-plan-is-actually-worth-it"&gt;Claude Max vs Pro: Which Plan is Actually Worth It?&lt;/a&gt;. The plan math has not changed, just the headroom inside each plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical impact for a solo studio
&lt;/h2&gt;

&lt;p&gt;Here is the honest version. I run RAXXO Studios on a Max seat plus Vercel Pro, with the API used mostly for syndication agents, blog generation, and some custom evals. Yesterday's announcement maps to four small but real workflow shifts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I can stop scheduling refactors for early morning. The peak-hours penalty is gone, so 3 PM Berlin is no longer a trap.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The /audit command (12-category FULLMOON pass) used to eat half my five-hour window. Now it eats a quarter, which means I can run /audit alongside a feature build instead of choosing between them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Long agent loops (the kind that chain blog-publish then blog-syndication) are less likely to halt mid-pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I will keep using &lt;a href="https://join.buffer.com/raxxo-studios" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt; for social scheduling because that workload sits outside Claude entirely, but the upside is I no longer juggle "save Claude budget for the afternoon" mental tax.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are a few things this announcement does not change. It does not change the model itself. It does not give you Opus for free if you only have Pro. It does not fix bad prompts. The 220,000 GPUs are about throughput and reliability, not intelligence per token. If your Claude Code workflow is slow because you keep loading 800 files into context for no reason, doubling the rate limit just means you can do that twice as often.&lt;/p&gt;

&lt;p&gt;One more thing I want to flag, because it took me a minute to internalize. The doubled five-hour window does not mean you suddenly have 10 hours of headroom. The window is still five hours. The budget inside that window is bigger. So if you used to plan around "morning sprint, lunch, afternoon sprint," that pattern still works, just with more breathing room inside each sprint. If you used to plan around two separate days because the budget felt cramped, that compresses into one day now.&lt;/p&gt;

&lt;p&gt;The other thing worth saying: if Anthropic burns 300 MW of fresh capacity in one month, you will see ripple effects in pricing and product velocity over the rest of 2026. More capacity often means more aggressive feature shipping, because the team can ship things that need real GPU room (long context, parallel tool calls, agent fleets) without strangling the existing customer base. Watching that part play out is half the reason I am writing this on a Thursday morning instead of doing the laundry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;If you pay for Pro, Max, Team, or Enterprise, the May 6, 2026 update is a free upgrade. Doubled five-hour windows, no more peak-hours throttling on Pro and Max, and a substantial Opus API boost. The Colossus 1 deal (300 MW, 220,000+ NVIDIA GPUs, live within a month) is what backs all of it. Other deals are bigger on paper, but Colossus 1 lands first, and "first" is the part that changes your Tuesday.&lt;/p&gt;

&lt;p&gt;For a one-person studio, the practical move is simple. Stop avoiding peak hours. Run the long /audit while a feature build is in flight. Push more work to Opus on jobs where the synthesis step actually matters. If you want the deeper pattern playbook on running Claude Code as your operating system, the &lt;a href="https://dev.to/pages/claude-blueprint"&gt;Claude Blueprint&lt;/a&gt; covers the whole setup, and &lt;a href="https://dev.to/blogs/lab/anthropic-hit-30-billion-what-it-means-for-developers"&gt;Anthropic Hit 30 Billion: What It Means for Developers&lt;/a&gt; explains why this kind of capacity announcement keeps happening.&lt;/p&gt;

&lt;p&gt;The headline I would have written is "Claude Code stopped pausing me." That is what 220,000 GPUs feels like at a desk in Berlin.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Solo Studio Bookkeeping in 90 Minutes a Month: My Stack and Routine</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Wed, 06 May 2026 05:18:06 +0000</pubDate>
      <link>https://dev.to/raxxostudios/solo-studio-bookkeeping-in-90-minutes-a-month-my-stack-and-routine-5a8i</link>
      <guid>https://dev.to/raxxostudios/solo-studio-bookkeeping-in-90-minutes-a-month-my-stack-and-routine-5a8i</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;My bookkeeping stack costs 30 EUR a month and runs on four tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I spend 15 minutes a week on receipts and 60 minutes monthly closing books.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The whole routine takes 90 minutes a month and replaces a 200 EUR bookkeeper.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Yearly handoff to my tax advisor takes one afternoon, fully sorted by category.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I run a one-person studio. For the first year, my bookkeeping was a folder called "receipts maybe" and a spreadsheet I opened twice. That stopped working when monthly income crossed four figures and my tax advisor asked, politely, why nothing was categorized. Here is the stack and the routine that replaced the chaos.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack: 4 Tools, ~30 EUR a Month
&lt;/h2&gt;

&lt;p&gt;I use four tools, total monthly cost about 30 EUR. That is cheaper than a single hour with a bookkeeper, and the stack does 90 percent of what a bookkeeper would charge me 200 EUR a month for.&lt;/p&gt;

&lt;p&gt;Tool one is my business banking account. Separate from personal, no exceptions. I picked one with a clean export (CSV and PDF), a virtual card for online subscriptions, and an API. Cost is 0 to 10 EUR depending on the plan. The single most important habit I built was never paying business expenses from my personal account, even when traveling. One mixed transaction creates an hour of cleanup later.&lt;/p&gt;

&lt;p&gt;Tool two is a receipts app, Dext. I snap a photo, it auto-extracts vendor, date, amount, and VAT, and it matches the receipt to the bank line. Cost is around 20 EUR per month for solo plans. I tried free alternatives. They saved me 20 EUR and cost me 4 hours of typing every month. Bad trade.&lt;/p&gt;

&lt;p&gt;Tool three is a Notion ledger. One database. Columns: date, vendor, category, amount, VAT amount, project, paid status. I built it once, took 30 minutes, and it has not changed in 18 months. Cost is 0 EUR if you already use Notion, otherwise free tier works. It is not the books, it is the index. The books live in Dext and the bank.&lt;/p&gt;

&lt;p&gt;Tool four is an invoicing tool. I use a lightweight one with recurring invoices, automatic VAT logic, and a customer-facing payment link. Cost is around 8 EUR per month. I do not use a free invoice generator anymore. I lost two payments to broken PDF links before I bit the bullet.&lt;/p&gt;

&lt;p&gt;Total: roughly 30 EUR per month. Replaces a part-time bookkeeper. Pays itself back the first time I avoid a late-payment fee.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weekly: 15 Minutes of Receipts and Tagging
&lt;/h2&gt;

&lt;p&gt;Every Friday, end of day, I spend 15 minutes on the same routine. I set a timer. If I am not done in 15 minutes, I stop and finish next week. Friction kills routines, so I keep this short.&lt;/p&gt;

&lt;p&gt;Step one: open my email and forward every receipt that came in that week to the Dext inbox address. Stripe, hosting, software subscriptions, the lot. Dext processes them within minutes. This is 3 minutes of work.&lt;/p&gt;

&lt;p&gt;Step two: open my phone, scroll through the camera roll, snap any paper receipts I forgot, and upload them to Dext. Coffee meetings, hardware, books. If I am not sure whether something is a business expense, I upload it anyway and decide during the monthly review. 4 minutes.&lt;/p&gt;

&lt;p&gt;Step three: open the bank app, scan the week's transactions, and tag anything that needs context. I do not categorize yet. I just leave a one-line note ("client X retainer", "annual domain renewal", "client lunch"). Future me will thank present me. 5 minutes.&lt;/p&gt;

&lt;p&gt;Step four: open the Notion ledger, log any invoice I sent that week, and mark any incoming payments as received. I check overdue invoices. If anything is more than 7 days late, I send a friendly nudge from the invoicing tool. 3 minutes.&lt;/p&gt;

&lt;p&gt;That is it. 15 minutes, weekly, non-negotiable. The whole thing works because I never let receipts pile up. A week of receipts is 10 to 20 items. A month of receipts is 60 to 80 items, and that is where solo founders give up and pay someone 200 EUR to dig them out.&lt;/p&gt;

&lt;p&gt;I do not skip this even on weeks with low activity. The habit matters more than the volume. A 5-minute Friday is fine. A skipped Friday breaks the chain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monthly: 60 Minutes of Reconciliation and a P&amp;amp;L
&lt;/h2&gt;

&lt;p&gt;First Sunday of the month, 60 minutes, coffee and a clean desk. This is the only "real" bookkeeping work I do, and it is the part most solo founders dread. With the weekly prep done, it is calm.&lt;/p&gt;

&lt;p&gt;Minutes 1 to 20: reconciliation. I open Dext and the bank export side by side. Every bank line should have a matched receipt. Anything unmatched gets fixed: either I find the missing receipt, or I categorize it as "no receipt, business expense, [reason]" with a screenshot of the email confirmation. Roughly 5 percent of lines end up here, and that is fine for a tax advisor as long as it is documented.&lt;/p&gt;

&lt;p&gt;Minutes 20 to 40: categorization. Every transaction gets a category. I use about 12 categories: income, software, hardware, hosting, contractors, marketing, education, travel, meals, office, fees, taxes. Twelve is the sweet spot. Fewer and the P&amp;amp;L is useless. More and I spend 20 minutes deciding whether a Notion subscription is "software" or "office".&lt;/p&gt;

&lt;p&gt;Minutes 40 to 55: VAT handling. I run a small filter in the ledger that sums VAT collected (from invoices) and VAT paid (from expenses). The difference is what I owe or get back. I do not file the VAT myself, my tax advisor does, but I send the number monthly so there are no surprises at quarter end. This took me a year to set up and now takes 5 minutes a month.&lt;/p&gt;

&lt;p&gt;Minutes 55 to 60: the P&amp;amp;L snapshot. One page. Money in this month, expenses by category, net profit, runway in months. Saved as a PDF, dropped in a shared Notion page my tax advisor can read. Monthly income swings between 200 and 1200 EUR depending on the month, and seeing that pattern in a graph is what changed my pricing decisions in 2026.&lt;/p&gt;

&lt;p&gt;After 60 minutes, I close the laptop. The books are clean for another month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Yearly: How I Hand Off Cleanly to My Tax Advisor
&lt;/h2&gt;

&lt;p&gt;Once a year, in January, I do a 3-hour close. That is the only time the bookkeeping costs me a real chunk of a day, and it is because the rest of the year was tidy.&lt;/p&gt;

&lt;p&gt;Step one: export everything. Bank statements (12 months, PDF and CSV). Dext export of all receipts (one ZIP). Invoicing tool export of all invoices sent and paid. Notion ledger as CSV. Total time: 20 minutes.&lt;/p&gt;

&lt;p&gt;Step two: a 30-minute self-audit. I open the P&amp;amp;L for each month and check that the totals add up. I look for outliers ("why was March software spending double?") and add a one-line note. The notes save my tax advisor from emailing me 14 questions in February. Each note costs me 30 seconds and saves 15 minutes of back-and-forth.&lt;/p&gt;

&lt;p&gt;Step three: I write a one-page summary. Total income, expense categories with totals, big purchases (anything over 500 EUR with a depreciation note), VAT collected, VAT paid, anything weird. This is the document my tax advisor actually reads. The 200 pages of receipts are backup.&lt;/p&gt;

&lt;p&gt;Step four: I drop the whole package in a shared folder and send my tax advisor one email with three bullet points. They reply within a week with their fee for the year (usually around 600 EUR for my volume) and a list of any clarifications. I have never had more than 3 questions in 2 years of doing it this way.&lt;/p&gt;

&lt;p&gt;Step five: I file the previous year's package as read-only and do not touch it again. The tax advisor handles the filing. I get a copy of the final return in March or April. Done.&lt;/p&gt;

&lt;p&gt;Total yearly time: about 4 hours, including emails. Total yearly cost (advisor plus tools): around 960 EUR. For a solo studio that touches 5 figures of income, this is the cheapest insurance I buy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Solo bookkeeping is not hard. It is just unforgiving when you skip weeks. Four tools, 30 EUR a month, 90 minutes of work, and a tax advisor who only sees the clean version. That is the whole system. The math works because the weekly 15 minutes prevents the monthly 4-hour panic, and the monthly close prevents the yearly meltdown.&lt;/p&gt;

&lt;p&gt;If you are a solo founder still running on a folder called "receipts maybe", pick one tool this week. The receipts app first, because that is where the most time leaks out. Set the Friday timer. Do it for 4 weeks. The habit will hold long before the spreadsheet does.&lt;/p&gt;

&lt;p&gt;I write more about how I run a one-person AI studio at raxxo.shop, and what I am working on this month lives on the /now page. Books closed, back to building.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Why I Ship Blog Articles in Clusters of 3 (Not Daily)</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Wed, 06 May 2026 05:17:52 +0000</pubDate>
      <link>https://dev.to/raxxostudios/why-i-ship-blog-articles-in-clusters-of-3-not-daily-1geg</link>
      <guid>https://dev.to/raxxostudios/why-i-ship-blog-articles-in-clusters-of-3-not-daily-1geg</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Daily blogging burns you out and scatters topical authority across unrelated posts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A cluster of 3 means one theme, three angles, published in one or two days.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clusters give Google and LLM crawlers a denser internal graph to map and trust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My workflow: pick a theme Monday, draft overview plus deep-dive plus counter-take, ship together.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used to publish one blog post a day. The numbers looked great in a spreadsheet and terrible in Search Console. I switched to clusters of 3 and traffic, dwell time, and LLM citations all moved in the same direction at once.&lt;/p&gt;

&lt;p&gt;This is the playbook I now run for every topic on raxxo.shop/blogs/lab.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Daily Post Trap
&lt;/h2&gt;

&lt;p&gt;Daily publishing sounds like discipline. In practice it is a slow tax on your brain and your archive.&lt;/p&gt;

&lt;p&gt;The first problem is theme fragmentation. Day 1 I write about prompt engineering. Day 2 about Shopify themes. Day 3 about a Figma plugin. None of these posts link to each other. None of them reinforce a topic. Google sees a feed, not a library. LLM crawlers see noise, not authority.&lt;/p&gt;

&lt;p&gt;The second problem is depth. To hit a daily slot I had 90 minutes per post on average. Ninety minutes is enough for a hot take. It is not enough for a screenshot, a code block, a real example, and a counter-argument. So my posts trended toward 600 word opinion pieces with no internal links and no compounding value.&lt;/p&gt;

&lt;p&gt;The third problem is burnout. After 30 days of solo daily publishing I had 30 orphan posts, no pillar pages, and the energy of a wet napkin. I could not write a 2000 word deep-dive because I had spent the budget on filler.&lt;/p&gt;

&lt;p&gt;The fourth and worst problem is that daily cadence trains the wrong muscle. You optimize for "what can I publish today" instead of "what does my topic cluster need next". Those are different questions and they produce different archives.&lt;/p&gt;

&lt;p&gt;I am not anti-frequency. I am anti-frequency-without-structure. If you publish daily and every post lives inside a planned cluster with shared tags and tight internal links, fine. But that is not what most solo creators do. Most of us publish whatever ChatGPT spat out at breakfast and wonder why the archive feels like a junk drawer.&lt;/p&gt;

&lt;p&gt;The honest test: open your last 30 posts. How many link to each other? If the answer is under 5, you are running the daily post trap, not a content strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Cluster of 3 Looks Like
&lt;/h2&gt;

&lt;p&gt;A cluster is not three random posts in the same week. It is three angles on one tightly defined theme, published close together, cross-linked from day one.&lt;/p&gt;

&lt;p&gt;I use a fixed shape: overview, deep-dive, counter-take.&lt;/p&gt;

&lt;p&gt;The overview is the on-ramp. 1200 to 1500 words. It defines the theme, lists the components, and links forward to the other two posts in the cluster. This is the post that ranks for the broad keyword. It is also the post LLMs love to cite because it is structured and self-contained.&lt;/p&gt;

&lt;p&gt;The deep-dive is the expert proof. 1800 to 2500 words. One sub-topic, examined with code, screenshots, numbers, or workflow steps. This is the post that ranks for the long tail. It is what backlink hunters share because it has the receipts.&lt;/p&gt;

&lt;p&gt;The counter-take is the personality. 900 to 1300 words. An opinion that argues against the consensus inside the same theme. This is the post that gets shared on LinkedIn and X because it has a spine. It also signals to LLMs that you are not just paraphrasing the top 10 results.&lt;/p&gt;

&lt;p&gt;All three share a tag set. All three share a hero topic in the title. All three link to each other in the first 200 words and again in a closing "more in this cluster" block.&lt;/p&gt;

&lt;p&gt;A real example from my own archive: I shipped a cluster on AI video tools last month. Overview was a tool comparison. Deep-dive was a 2200 word workflow on one specific pipeline. Counter-take argued that most AI video output is overproduced and you should ship rougher. Three posts, one theme, cross-linked. Together they pulled more traffic in week one than my previous 12 daily posts combined.&lt;/p&gt;

&lt;p&gt;The constraint is the magic. Three angles forces you to pick a theme worth three angles. Daily cadence does not. That alone changes what you write about.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Clusters Compound for SEO and LLM Discovery
&lt;/h2&gt;

&lt;p&gt;This is where clusters stop being a workflow choice and start being a ranking strategy.&lt;/p&gt;

&lt;p&gt;Google has been running on pillar-and-cluster logic for years. The algorithm rewards sites that demonstrate topical authority, which is a polite way of saying "you have to publish more than one thing on a topic for us to trust you on it." A single 1500 word post on, say, Shopify theme optimization signals nothing. Three posts on the same theme, cross-linked, with overlapping keyword surfaces, signal a small expert site on that exact slice.&lt;/p&gt;

&lt;p&gt;Internal link density is the lever. When my overview post links to the deep-dive and counter-take, and both of those link back, I have created a triangle. Google's crawler walks that triangle and assigns higher relevance to all three. Schema breadcrumbs amplify it: each post sits under the same blog category, with the same parent path, and the structured data tells search engines they belong together.&lt;/p&gt;

&lt;p&gt;LLM crawlers behave similarly but with sharper preferences. Perplexity and ChatGPT search surface clusters more often than orphan posts because clusters give them a confident answer plus a "for more depth" link. When I check my Perplexity citations, the posts that get pulled are almost always part of a cluster. The orphans never get cited even when they are better written.&lt;/p&gt;

&lt;p&gt;There is a second LLM effect that gets ignored. Models build internal representations of topic graphs. A site with 5 clusters of 3 reads as "expert across 5 sub-topics." A site with 60 daily orphans reads as "scattered blog." Same word count, completely different signal.&lt;/p&gt;

&lt;p&gt;Internal linking also fixes the orphan problem retroactively. When I plan a new cluster, I scan my archive for older relevant posts and weave them in. A 2 year old post that was getting 4 visits a month suddenly sits inside a fresh cluster and starts pulling traffic again. Daily publishing does not do this because there is no time to plan backlinks. Cluster publishing forces it.&lt;/p&gt;

&lt;p&gt;The compounding part: each cluster makes the next one easier. By the time I have 6 clusters in one category, the seventh almost writes itself because I can link it into a pre-existing graph. Daily orphan posts never compound. Each one starts from zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Workflow: One Theme, Three Angles, One Day
&lt;/h2&gt;

&lt;p&gt;Here is the actual rhythm I run on raxxo.shop/blogs/lab.&lt;/p&gt;

&lt;p&gt;Monday morning I pick the theme. One sentence. "AI video pipelines for solo creators." I write the three angle headlines on the napkin before I open a doc. Overview, deep-dive, counter-take. If I cannot generate three honest angles in 10 minutes, the theme is too narrow or too vague, and I pick a new one.&lt;/p&gt;

&lt;p&gt;Monday afternoon I draft all three opening hooks and TLDRs in parallel. This is the cheapest moment to catch a weak angle. If the counter-take hook reads like a recycled overview, I rework it before writing 1500 words I will throw away.&lt;/p&gt;

&lt;p&gt;Tuesday I write the overview. Full draft, internal links to the other two posts left as placeholders. Same day I draft the deep-dive outline so the overview can promise specifics it will deliver.&lt;/p&gt;

&lt;p&gt;Wednesday I write the deep-dive and the counter-take back to back. The counter-take is fast because the research is already loaded in my head. By Wednesday evening I have three drafts.&lt;/p&gt;

&lt;p&gt;Thursday I edit, add the cross-links, run the brand check hook, and publish all three in one window. I schedule the syndication with &lt;a href="https://join.buffer.com/raxxo-studios" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt; so the social posts go out staggered across two days, not in a burst.&lt;/p&gt;

&lt;p&gt;Friday I do nothing on this cluster. I research the next theme.&lt;/p&gt;

&lt;p&gt;The rule I will not break: all three posts in a cluster ship within 48 hours of each other. Any longer and the cross-links feel stale, the SEO signal weakens, and the syndication loses its compounding effect.&lt;/p&gt;

&lt;p&gt;One theme per week, three posts per cluster, one cluster per category every 2 to 3 weeks. That is the cadence. It is slower than daily and it produces more visible results across every metric I care about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Daily blogging is volume theatre. Clusters of 3 are an actual strategy.&lt;/p&gt;

&lt;p&gt;The math is simple. Three posts on one theme, cross-linked, published in 1 to 2 days, beat 30 daily orphans on traffic, dwell time, backlinks, and LLM citations every single time I have measured it. The compounding is not linear. It is exponential because each cluster slots into a graph that the next cluster will plug into.&lt;/p&gt;

&lt;p&gt;Pick a theme. Write three angles. Ship them together. Move on.&lt;/p&gt;

&lt;p&gt;If you want to see what this looks like in practice, browse the most recent cluster on raxxo.shop/blogs/lab. The three latest posts under any category will share a theme, share tags, and link to each other in the first 200 words. That is the pattern. Copy it for your own blog and you will feel the difference within two clusters.&lt;/p&gt;

&lt;p&gt;Stop measuring posts per week. Start measuring clusters per quarter. Your archive will thank you.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Shopify Section Rendering API: 6 Patterns That Cut Storefront TTFB by 60%</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Wed, 06 May 2026 05:17:30 +0000</pubDate>
      <link>https://dev.to/raxxostudios/shopify-section-rendering-api-6-patterns-that-cut-storefront-ttfb-by-60-3l5b</link>
      <guid>https://dev.to/raxxostudios/shopify-section-rendering-api-6-patterns-that-cut-storefront-ttfb-by-60-3l5b</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cart drawer refresh swaps one section in 180ms instead of reloading the full PDP at 1.2s.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Filtered collection grids return only the product list, cutting payload from 340KB to 42KB on scroll.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Variant-aware recommendations re-render with section_id when a swatch changes, no full PDP reload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Geo-aware shipping banners hydrate per visitor without busting the page cache or shipping a heavy script.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inline quick-add modals fetch the variant picker section on demand, saving 280KB on collection pages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A/B test bucket assignment renders the right section variant server-side, no flicker, no tag manager.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I rebuilt six &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; storefronts in the past year, and the same trick keeps showing up in my notes: the Section Rendering API. It is the single fastest way to make a Liquid theme feel like a SPA without shipping a SPA. Here are the six patterns I keep coming back to, with real TTFB numbers from production stores I work on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: Cart Drawer Refresh Without a Page Reload
&lt;/h2&gt;

&lt;p&gt;Default Dawn cart drawers refetch &lt;code&gt;/cart.js&lt;/code&gt; and then re-render with client-side templates. That is fine until your cart line item template grows: discount badges, subscription frequency, gift wrap toggles. I had one PDP shipping 47KB of cart-line JS just to keep the drawer in sync.&lt;/p&gt;

&lt;p&gt;Section Rendering API kills that. After &lt;code&gt;POST /cart/add.js&lt;/code&gt;, I fire one extra request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/?section_id=cart-drawer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#cart-drawer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Shopify renders the &lt;code&gt;cart-drawer.liquid&lt;/code&gt; section server-side and returns plain HTML. No JSON parsing, no template engine, no drift between Liquid and JS. The section sees the freshly updated cart object on the same request because Shopify renders sections after cart mutations are persisted.&lt;/p&gt;

&lt;p&gt;Numbers from a Dawn fork I instrumented in March: cart-add round-trip dropped from 1.21s to 380ms. TTFB on the section call sat at 142ms p50, 220ms p95. The HTML payload was 8.4KB gzipped vs the 47KB of JS the drawer used to pull. I also stopped seeing the half-second flash where the price would update before the discount line did, because both render in one Liquid pass.&lt;/p&gt;

&lt;p&gt;One gotcha: the section must be added under &lt;code&gt;{% sections %}&lt;/code&gt; in the layout, or wrapped in a section group, otherwise &lt;code&gt;/?section_id=&lt;/code&gt; returns 404.&lt;/p&gt;
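&lt;p&gt;There is also a way to skip the second request entirely. Shopify's cart AJAX endpoints support bundled section rendering: pass a &lt;code&gt;sections&lt;/code&gt; parameter to &lt;code&gt;POST /cart/add.js&lt;/code&gt; and the JSON response includes the rendered HTML for those sections. A minimal sketch; the &lt;code&gt;variantId&lt;/code&gt; argument and the &lt;code&gt;#cart-drawer&lt;/code&gt; selector are placeholders for your own theme:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// Bundled section rendering: Shopify returns the rendered HTML for
// these sections inside the /cart/add.js JSON response.
function buildAddPayload(variantId, sectionIds) {
  return JSON.stringify({
    items: [{ id: variantId, quantity: 1 }],
    sections: sectionIds.join(','),
  });
}

async function addToCartAndRefreshDrawer(variantId) {
  const res = await fetch('/cart/add.js', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildAddPayload(variantId, ['cart-drawer']),
  });
  const data = await res.json();
  // data.sections maps each requested section ID to its HTML.
  document.querySelector('#cart-drawer').innerHTML = data.sections['cart-drawer'];
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;One request instead of two, and the drawer HTML comes from the same Liquid pass that persisted the cart mutation.&lt;/p&gt;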

&lt;h2&gt;
  
  
  Pattern 2: Filtered Collection Grids on Scroll
&lt;/h2&gt;

&lt;p&gt;Collection page filters are the worst offender for full-page reloads. A shopper picks "size: M" and the entire 1.4MB page reloads, including the navigation, footer, hero, and tracking pixels. On a Berlin coffee shop client, average filter-change-to-paint was 2.8 seconds.&lt;/p&gt;

&lt;p&gt;I swapped that for one Section Rendering API call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`/collections/all?filter.v.option.size=M&amp;amp;section_id=collection-grid`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#grid&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pushState&lt;/span&gt;&lt;span class="p"&gt;({},&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;amp;section_id=collection-grid&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;section_id&lt;/code&gt; param works on any URL that would normally render that section. Shopify applies the filter, sorts, and pagination logic on the server, then returns just the grid markup.&lt;/p&gt;

&lt;p&gt;Bytes on the wire dropped from 340KB gzipped to 42KB. TTFB went from 1.1s to 280ms p50. I kept the URL in sync with &lt;code&gt;history.pushState&lt;/code&gt; so back-button and share links still work. Pagination uses the same trick: &lt;code&gt;?page=2&amp;amp;section_id=collection-grid&lt;/code&gt;. The section receives the paginated &lt;code&gt;collection.products&lt;/code&gt; automatically because section rendering respects all the same query params the full page would.&lt;/p&gt;

&lt;p&gt;For infinite scroll, I attach an IntersectionObserver to the last product card and prefetch page N+1 when it enters viewport. The observer-triggered fetch averages 220ms, which means by the time the shopper scrolls to the bottom, the next 24 products are already in the DOM.&lt;/p&gt;
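
&lt;p&gt;The pagination and pushState flow above can be sketched as two small URL helpers. This is a minimal sketch, not the exact code from my themes; the function names are hypothetical, the paths and section IDs match the ones used in this article.&lt;/p&gt;

```javascript
// Hypothetical helpers for the flow above: build the next-page section URL,
// then derive the clean URL to push into the address bar.
function nextPageUrl(current, sectionId) {
  const u = new URL(current, "https://example.com"); // base only used for parsing
  const page = Number(u.searchParams.get("page") || "1");
  u.searchParams.set("page", String(page + 1));
  u.searchParams.set("section_id", sectionId);
  return u.pathname + u.search;
}

function addressBarUrl(sectionUrl) {
  const u = new URL(sectionUrl, "https://example.com");
  u.searchParams.delete("section_id"); // keep filters and page, drop the API param
  return u.pathname + u.search;
}
```

&lt;p&gt;The fetch uses the first URL, &lt;code&gt;history.pushState&lt;/code&gt; gets the second, so back-button and share links never carry the &lt;code&gt;section_id&lt;/code&gt; param.&lt;/p&gt;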

&lt;h2&gt;
  
  
  Pattern 3: Variant-Aware Product Recommendations
&lt;/h2&gt;

&lt;p&gt;Most "you might also like" blocks are static per product. Mine are not. When a shopper picks the black variant of a hoodie, I want the recommendations to show black-leaning items. Doing this client-side means shipping the entire product catalog metadata.&lt;/p&gt;

&lt;p&gt;Server-side via Section Rendering API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="nx"&gt;picker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;variantId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/products/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;?variant=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;variantId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;section_id=product-recommendations`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#recs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The recommendations section uses Shopify's &lt;code&gt;recommendations.products&lt;/code&gt; object, which is variant-aware when you pass &lt;code&gt;?variant=&lt;/code&gt;. The Liquid section can also read &lt;code&gt;product.selected_variant&lt;/code&gt; and adjust the algorithm prompt or fallback list.&lt;/p&gt;

&lt;p&gt;On a fashion store with 2,400 SKUs, this took the variant-change-to-recs-update path from 940ms (full PDP reload via form submit) to 310ms. Payload was 14KB. I also moved the "complete the look" block to the same call, so two recommendation rails update in a single round trip.&lt;/p&gt;

&lt;p&gt;A subtle win: because Shopify caches section rendering responses with the same edge logic as the full page, the second variant pick on the same product is often served from cache at 60ms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 4: Geo-Aware Shipping Banner
&lt;/h2&gt;

&lt;p&gt;"Free shipping over 50 EUR to Germany" is a great conversion line, but it is wrong for half my visitors. Personalizing it client-side means a flash of the wrong banner, then a swap. Shoppers notice.&lt;/p&gt;

&lt;p&gt;I use Shopify's &lt;code&gt;localization&lt;/code&gt; object plus a section-rendered banner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight liquid"&gt;&lt;code&gt;
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;localization&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;country&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;iso_code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'DE'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
  Free shipping over 50 EUR
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;elsif&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;localization&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;country&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;iso_code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'AT'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
  Free shipping over 60 EUR
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;else&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
  Calculated shipping at checkout
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;endif&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The page itself ships with an empty placeholder element. On &lt;code&gt;DOMContentLoaded&lt;/code&gt;, I fire &lt;code&gt;/?section_id=ship-banner&lt;/code&gt;. Shopify's edge resolves geo from the visitor IP, picks the right Liquid branch, and returns 280 bytes of HTML.&lt;/p&gt;

&lt;p&gt;TTFB on this call is 90ms p50 because the section is tiny and edge-cached per country. The full page stays in the shared CDN cache, which is the part that matters: I am not busting page cache for personalization. That alone moved my collection page cache hit rate from 41% to 88%.&lt;/p&gt;

&lt;p&gt;I tried doing this with a third-party geo-IP script first. It was 38KB and added 240ms to LCP. The Section Rendering approach is 0KB of script and 90ms of network, parallel to other resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 5: Inline Quick-Add Modal
&lt;/h2&gt;

&lt;p&gt;Quick-add buttons on collection pages either ship every variant picker for every product (bloat) or pop a modal that fetches &lt;code&gt;/products/handle.js&lt;/code&gt; and rebuilds the picker in JS (drift). Both are bad.&lt;/p&gt;

&lt;p&gt;Section Rendering wins again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="nx"&gt;button&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/products/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;?section_id=quick-add-modal`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;modal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;modal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;showModal&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;quick-add-modal.liquid&lt;/code&gt; section renders the variant picker, price, stock status, and add-to-cart form using the exact same Liquid as the PDP. No template duplication, no drift when I add a new metafield to the picker.&lt;/p&gt;

&lt;p&gt;On a 48-product collection page, this saved 280KB of upfront HTML by removing per-card variant pickers. First quick-add click costs 240ms (network plus render), every subsequent click on the same product is 50ms from edge cache. I preload the section on &lt;code&gt;mouseenter&lt;/code&gt; of the button, which makes the click feel instant for desktop users.&lt;/p&gt;
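
&lt;p&gt;The mouseenter preload only works if hover and click share one request. A minimal sketch of that cache, with the fetcher injected so the logic stays testable; in the browser it would be &lt;code&gt;url =&gt; fetch(url).then(r =&gt; r.text())&lt;/code&gt;. Names here are hypothetical.&lt;/p&gt;

```javascript
// Hypothetical sketch: cache the section promise per URL so the mouseenter
// preload and the click handler share one network request.
function createSectionCache(fetchText) {
  const cache = new Map();
  return function get(url) {
    // Store the promise itself: a click during an in-flight preload dedupes too.
    if (!cache.has(url)) cache.set(url, fetchText(url));
    return cache.get(url);
  };
}
```

&lt;p&gt;On &lt;code&gt;mouseenter&lt;/code&gt;, call &lt;code&gt;get(url)&lt;/code&gt; and discard the result to warm the cache; on click, &lt;code&gt;await&lt;/code&gt; the same call before opening the modal.&lt;/p&gt;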

&lt;p&gt;The modal also gets fresh inventory on every open, which matters during flash sales. Static pickers go stale within minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 6: A/B Test Without a Tag Manager
&lt;/h2&gt;

&lt;p&gt;Client-side A/B tests cause flicker, hurt LCP, and usually mean loading a tag manager. I run mine server-side using a section per variant and a single cookie.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight liquid"&gt;&lt;code&gt;
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;assign&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;bucket&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;cookies&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ab_hero'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'a'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;bucket&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'b'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
  &lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;section&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'hero-variant-b'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;else&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
  &lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;section&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'hero-variant-a'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;endif&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bucket assignment happens in a tiny edge function that sets the cookie on first visit. The page itself is cached per bucket, so both variants stay fast. When I want to update only the hero (e.g., swap headline copy without invalidating the whole page), I use the Section Rendering API to refetch just the hero section after a cookie change.&lt;/p&gt;
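
&lt;p&gt;A minimal sketch of the cookie logic such an edge function could run. The &lt;code&gt;ab_hero&lt;/code&gt; name comes from the Liquid above; the function name, split, and Max-Age are assumptions, not my production code.&lt;/p&gt;

```javascript
// Hypothetical sketch of the edge function's bucket logic: reuse an existing
// ab_hero cookie, otherwise pick a 50/50 bucket and return the Set-Cookie value.
function ensureBucket(cookieHeader) {
  const m = /(?:^|;\s*)ab_hero=([ab])/.exec(cookieHeader || "");
  if (m) return { bucket: m[1], setCookie: null };
  const bucket = Math.round(Math.random()) === 0 ? "a" : "b"; // 50/50 split
  return { bucket, setCookie: "ab_hero=" + bucket + "; Path=/; Max-Age=2592000" };
}
```

&lt;p&gt;The edge function sets the cookie only on first visit, so every later request hits the per-bucket page cache with no extra work.&lt;/p&gt;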

&lt;p&gt;For analytics, the hero section emits a data attribute (&lt;code&gt;data-ab-bucket="b"&lt;/code&gt;) that my one-line tracking script reads on load and sends to my analytics endpoint. No GTM, no Optimizely, no flicker. LCP on the hero stayed at 1.1s in both buckets vs the 1.9s I was seeing with client-side Optimize back in 2023.&lt;/p&gt;

&lt;p&gt;Across 14 tests in the past six months, this setup gave me cleaner data than my old GTM stack because there is no exposure-event race condition. The bucket is the rendered HTML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;The Section Rendering API is the most underused tool in the Shopify ecosystem. It turns Liquid into a hot-swappable component system without forcing you into Hydrogen, React, or a headless rebuild. Six patterns, six measurable wins, all shipping today on stores I maintain.&lt;/p&gt;

&lt;p&gt;If you are a &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; developer still doing full-page reloads for filters, variant changes, or cart updates, you are leaving 60-80% of your TTFB on the table. The pattern is always the same: keep the page cacheable, render the dynamic piece on demand, swap the HTML in.&lt;/p&gt;

&lt;p&gt;I document every storefront experiment that beats my baseline in the &lt;a href="https://raxxo.shop/blogs/lab" rel="noopener noreferrer"&gt;RAXXO Lab&lt;/a&gt;. If you want the section files I use as starting points for the cart drawer and quick-add modal, they live in the lab archive. Pull them, fork them, ship faster.&lt;/p&gt;

&lt;p&gt;Next storefront you build, try one pattern. Measure TTFB before and after. The numbers will sell the rest.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Replacing Webpack DevServer With Bun Hot Reload: 14 Lines, 3.2x Faster</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Wed, 06 May 2026 05:16:54 +0000</pubDate>
      <link>https://dev.to/raxxostudios/replacing-webpack-devserver-with-bun-hot-reload-14-lines-32x-faster-327f</link>
      <guid>https://dev.to/raxxostudios/replacing-webpack-devserver-with-bun-hot-reload-14-lines-32x-faster-327f</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Webpack DevServer ate 250MB and took 4.1s to start&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;14 lines of Bun replaced it with WebSocket HMR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cold start dropped from 4.1s to 1.3s, memory from 250MB to 40MB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It still loses on React Fast Refresh, CSS HMR, and proxy config&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had a small frontend project. Plain JS, a few HTML files, one CSS file, no React. Webpack DevServer was running it, eating 250MB of memory and taking 4.1 seconds to start. I replaced the whole thing with 14 lines of Bun. Here is what worked, what broke, and the numbers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Webpack DevServer I Was Actually Running
&lt;/h2&gt;

&lt;p&gt;The project was a static site with maybe 30 KB of source: an index.html, a single bundle.js, one stylesheet, and three image assets. The Webpack config was 47 lines. The dev dependency tree was 312 packages. node_modules weighed in at 184 MB.&lt;/p&gt;

&lt;p&gt;Why was Webpack there at all? Because three years ago I started the project from a template, and the template had Webpack, and I never questioned it. The template gave me hot module replacement, a dev server, source maps, and a build step. I used the dev server every day. I used the build step once a week. The other 90% of the config was unused.&lt;/p&gt;

&lt;p&gt;When I ran &lt;code&gt;npm run dev&lt;/code&gt;, this is what happened. webpack-cli loaded. webpack-dev-server loaded. Plugins loaded. The compiler ran a full pass over my 30 KB of source, which Webpack treated as a graph problem. Memory climbed to 250 MB. After 4.1 seconds, the server was listening on port 8080.&lt;/p&gt;

&lt;p&gt;The HMR worked, but it was loud. Every save triggered a recompile, a hash check, a payload over WebSocket, and a &lt;code&gt;webpackHotUpdate&lt;/code&gt; call in the browser. Latency from save to paint was around 320 ms on my machine. Not slow enough to be painful. Slow enough to notice.&lt;/p&gt;

&lt;p&gt;The real problem was not speed. It was that I was loading a tool built for a 50,000-line codebase to serve a 30 KB static site. The mismatch bothered me every time I saw the memory chart.&lt;/p&gt;

&lt;p&gt;I had Bun installed already. Bun ships with a file watcher, an HTTP server, native WebSocket support, and a static file handler. Everything I needed for a dev server was sitting in the runtime. I just had to wire it up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 14-Line Bun Replacement
&lt;/h2&gt;

&lt;p&gt;Here is the file. I named it &lt;code&gt;dev.js&lt;/code&gt; and dropped it next to &lt;code&gt;index.html&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;watch&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clients&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;Bun&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upgrade&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Bun&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/index.html&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;websocket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;clients&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;clients&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nf"&gt;watch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;recursive&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;clients&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;reload&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dev server on http://localhost:3000&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fourteen lines. Zero dependencies beyond Bun itself. It serves static files from &lt;code&gt;./public&lt;/code&gt;, upgrades WebSocket handshake requests to live connections, and pushes a &lt;code&gt;reload&lt;/code&gt; message to every connected client whenever a file changes.&lt;/p&gt;

&lt;p&gt;The client side needs three lines in &lt;code&gt;index.html&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ws://localhost:3000&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;onmessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reload&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire HMR layer. It is not module replacement; it is a full reload. For a static site with three JS files and one stylesheet, full reload is fast enough that the difference is invisible. The browser caches assets, the WebSocket round trip is sub-millisecond on localhost, and the page is back on screen before my eyes leave the editor.&lt;/p&gt;

&lt;p&gt;The script tag only runs in dev because the production build strips it. In my case the production build is &lt;code&gt;cp -r public dist&lt;/code&gt;, so I skip the strip and just serve a different &lt;code&gt;index.html&lt;/code&gt; from the static host. Two &lt;code&gt;index.html&lt;/code&gt; files, one with the script, one without. Done.&lt;/p&gt;
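
&lt;p&gt;If maintaining two &lt;code&gt;index.html&lt;/code&gt; files grates, a hedged alternative (not what I ship, just a sketch) is gating the reload socket on the hostname so one file works everywhere:&lt;/p&gt;

```javascript
// Hypothetical alternative to the two-file setup: only open the reload socket
// when the page is served from a local dev host.
function isDevHost(hostname) {
  return hostname === "localhost" || hostname === "127.0.0.1";
}

// In index.html:
//   if (isDevHost(location.hostname)) {
//     new WebSocket("ws://localhost:3000").onmessage = () => location.reload();
//   }
```

&lt;p&gt;The trade-off is shipping a few dead bytes of script to production, which is why I prefer the two-file approach.&lt;/p&gt;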

&lt;p&gt;I deleted Webpack, webpack-cli, webpack-dev-server, html-webpack-plugin, css-loader, style-loader, mini-css-extract-plugin, and seven Babel packages. node_modules dropped from 184 MB to 0 MB, because I had no remaining dev dependencies. The package.json &lt;code&gt;devDependencies&lt;/code&gt; block went from 23 entries to zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  Numbers: Cold Start, HMR Latency, Memory
&lt;/h2&gt;

&lt;p&gt;I ran both setups ten times each on the same machine, M1 MacBook, nothing else open. Median values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold start.&lt;/strong&gt; Time from &lt;code&gt;npm run dev&lt;/code&gt; (or &lt;code&gt;bun dev.js&lt;/code&gt;) to the server accepting its first request.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Webpack DevServer: 4.1 seconds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bun: 1.3 seconds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Speedup: 3.2x&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;HMR latency.&lt;/strong&gt; Time from save in the editor to the page being repainted in the browser. Measured with a console.timeEnd on the client.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Webpack DevServer (HMR): 320 ms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bun (full reload): 80 ms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Speedup: 4x&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Bun number is faster even though it does a full reload, because the full reload over a hot WebSocket beats Webpack's incremental update path for small bundles. Webpack pays the cost of recompiling, hashing, and sending a JSON payload. Bun pays the cost of one WebSocket message and a browser refresh. For 30 KB of assets, the refresh wins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory.&lt;/strong&gt; Resident memory of the dev server process after it has been running for two minutes with one save event.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Webpack DevServer: 251 MB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bun: 39 MB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduction: 6.4x&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disk.&lt;/strong&gt; node_modules size for dev dependencies only.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Webpack stack: 184 MB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bun: 0 MB&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The disk number is the one I felt most. Cloning the repo on a fresh machine used to take 14 seconds of &lt;code&gt;npm install&lt;/code&gt;. Now it takes zero. Bun is the only dependency, and it is installed globally.&lt;/p&gt;

&lt;p&gt;These numbers are for a small project. They do not extrapolate to a 50,000-line app with code splitting and CSS modules. They do extrapolate to every other small static site I run, of which I have eleven.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Setup Still Loses to Webpack/Vite
&lt;/h2&gt;

&lt;p&gt;I am not telling anyone to delete their Webpack config. Here is what I gave up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No React Fast Refresh.&lt;/strong&gt; Webpack and Vite both preserve component state across saves. Edit a component, the form values stay filled in, the open modal stays open. Bun's full reload throws all that away every time. For a JSX-heavy project this is a real loss. I do not write JSX in this project, so I do not feel it. If you do, stay on Vite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No CSS HMR.&lt;/strong&gt; Vite injects new CSS without reloading the page. My setup does a full reload on every CSS change, which means scroll position is lost and any client-side state evaporates. For a one-page static site this is fine. For a multi-step form, it is annoying. PostCSS chains, Tailwind, CSS modules, all of these need real HMR to feel right, and a 14-line script will not give you that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No module graph.&lt;/strong&gt; Webpack and Vite know which files import which. When you edit a leaf module, only the modules that depend on it get re-evaluated. My setup reloads the world on any file change. For a 30 KB project this is invisible. For a 30 MB project it would feel slow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No proxy config.&lt;/strong&gt; webpack-dev-server has a &lt;code&gt;proxy&lt;/code&gt; field that forwards &lt;code&gt;/api/*&lt;/code&gt; to a backend during dev, dodging CORS. My 14 lines do not. If your dev workflow needs proxying, you have to write that yourself, and the budget grows past 14 lines fast.&lt;/p&gt;
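
&lt;p&gt;For a sense of scale, here is the smallest proxy branch the fetch handler could grow. The &lt;code&gt;/api/&lt;/code&gt; prefix and port 4000 backend are made-up assumptions; only the pure routing helper is shown as real code.&lt;/p&gt;

```javascript
// Hypothetical sketch: the pure routing half of a proxy branch.
const PROXY_PREFIX = "/api/";
const BACKEND = "http://localhost:4000";

function proxyTarget(pathname, search) {
  if (!pathname.startsWith(PROXY_PREFIX)) return null; // not an API call, serve static
  return BACKEND + pathname + search;
}

// Inside Bun.serve's fetch(req), before the static branch:
//   const u = new URL(req.url);
//   const target = proxyTarget(u.pathname, u.search);
//   if (target) {
//     return fetch(target, { method: req.method, headers: req.headers, body: req.body });
//   }
```

&lt;p&gt;Even this minimal version adds error handling, header rewriting, and WebSocket passthrough questions the moment you lean on it, which is the point: the 14-line budget does not survive contact with a real backend.&lt;/p&gt;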

&lt;p&gt;&lt;strong&gt;No source maps for bundled code.&lt;/strong&gt; Because there is no bundler, there is nothing to source-map. If you ship one bundled JS file in production, you lose the dev map. I ship raw ES modules, so this does not apply, but if you bundle, this matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No TypeScript transpile.&lt;/strong&gt; Bun runs TS natively on the server, but the browser does not. If you need TS in browser code, you need a build step or a runtime transpiler, and that pulls more lines into the picture.&lt;/p&gt;

&lt;p&gt;The honest summary: this setup is great for static sites, ESM-only projects, and small experiments. It is wrong for any app that needs React Fast Refresh, CSS HMR, a proxy, or a real module graph.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Fourteen lines of Bun replaced 23 dev dependencies, 184 MB of node_modules, and a Webpack config I had not touched in three years. Cold start went from 4.1 seconds to 1.3 seconds, a 3.2x improvement. HMR latency dropped from 320 ms to 80 ms, even with a full reload, because the full reload over a hot WebSocket beats incremental updates for tiny bundles. Memory dropped from 251 MB to 39 MB.&lt;/p&gt;

&lt;p&gt;This is not a Vite replacement. It is not a Webpack replacement. It is the right tool for static sites and ESM-only projects where the build step is &lt;code&gt;cp -r public dist&lt;/code&gt; and the module graph fits in your head. For everything else, keep Vite.&lt;/p&gt;

&lt;p&gt;I rolled this out across eleven small projects in a weekend. Cumulative dev memory savings, around 2.3 GB. Cumulative cold start savings per day, around 30 seconds.&lt;/p&gt;

&lt;p&gt;If you liked this teardown, the &lt;a href="https://raxxo.shop/blogs/lab" rel="noopener noreferrer"&gt;Bun Shell article&lt;/a&gt; in the same cluster covers replacing bash scripts with &lt;code&gt;Bun.$&lt;/code&gt;. Same energy, smaller surface, bigger payoff.&lt;/p&gt;

&lt;p&gt;Built and tested by RAXXO Studios. More tools and write-ups at &lt;a href="https://raxxo.shop" rel="noopener noreferrer"&gt;raxxo.shop&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Build a Magnific to Shopify Image Pipeline in 5 Minutes</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Wed, 06 May 2026 05:16:41 +0000</pubDate>
      <link>https://dev.to/raxxostudios/build-a-magnific-to-shopify-image-pipeline-in-5-minutes-1e1c</link>
      <guid>https://dev.to/raxxostudios/build-a-magnific-to-shopify-image-pipeline-in-5-minutes-1e1c</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Generate a 4096px hero in Magnific in under a minute&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resize to four responsive variants with a six-line bash script&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push every variant to Shopify Files via Admin API in one command&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reference the assets from Liquid with image_url width filters&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used to spend twelve minutes per product hero. Open Magnific, download, drag into Photoshop, export four sizes, log into Shopify, upload, copy URLs, paste into Liquid. After three products that loop felt like sand in my shoes. So I rebuilt it as a five-minute pipeline with one shell command and one Liquid filter, and now my machine does the boring 30 seconds while I do something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Generate Assets in Magnific (CLI or Web)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific&lt;/a&gt; (formerly Freepik) is where I start every product hero. The Magnific upscaler handles 4096 pixel output at a quality that beats anything I can render locally, and the Mystic generator gives me the source image when I do not have a photograph yet.&lt;/p&gt;

&lt;p&gt;For a product hero I run two passes. First, generate at 1024 pixels with Mystic using a tight prompt that locks color, lens, and framing. Second, push the winning frame through the upscaler at 4x. Total Magnific time: about 60 seconds of clicking and 90 seconds of generation. The upscale output is one file: &lt;code&gt;hero-4096.jpg&lt;/code&gt;, sitting in &lt;code&gt;~/Downloads&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you prefer the API, Magnific exposes the same upscaler over REST. I keep a tiny wrapper that posts the source URL and polls for the result, then writes the binary to disk. The web UI is faster for one-off heroes and the API wins when I am batching ten product photos for a drop.&lt;/p&gt;

&lt;p&gt;Two rules I never break. Always upscale at 4096 pixels, never 2048. Storage is free, but a second round trip later because mobile retina needs more pixels is not. And always export as JPEG quality 90, not PNG. Shopify will recompress anyway, and a 4 MB JPEG beats a 14 MB PNG through any CDN.&lt;/p&gt;

&lt;p&gt;Naming matters too. I use &lt;code&gt;product-slug-hero-4096.jpg&lt;/code&gt; from minute one. The slug becomes the Shopify Files filename, the alt text seed, and the Liquid reference key. Three jobs, one string. If you skip naming discipline here, every later step in the pipeline grows a small workaround until the script is uglier than the manual flow you replaced.&lt;/p&gt;

&lt;p&gt;By the end of step one I have one big square file on disk and a name I will not have to rename. That is the only artifact step two needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Pipe Output Through a Local Resize Script
&lt;/h2&gt;

&lt;p&gt;Now the boring 30 seconds. I have one bash script that takes a 4096 source and spits out four variants: 480 mobile, 800 tablet, 1600 desktop, 3200 retina. On macOS I use &lt;code&gt;sips&lt;/code&gt; because it ships with the OS and never breaks. On Linux or in CI I swap in ImageMagick.&lt;/p&gt;

&lt;p&gt;The whole script is six lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;SRC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;BASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SRC&lt;/span&gt;&lt;span class="p"&gt;%.*&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;W &lt;span class="k"&gt;in &lt;/span&gt;480 800 1600 3200&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;sips &lt;span class="nt"&gt;-Z&lt;/span&gt; &lt;span class="nv"&gt;$W&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SRC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--out&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BASE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;W&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.jpg"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
&lt;span class="k"&gt;done&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save it as &lt;code&gt;~/bin/rx-resize&lt;/code&gt;, make it executable, and run &lt;code&gt;rx-resize hero-4096.jpg&lt;/code&gt;. Five seconds later the directory has four files. I keep the original as the 4096 variant so I have five total: 480, 800, 1600, 3200, and the 4096 source for hero contexts.&lt;/p&gt;

&lt;p&gt;A few things I learned the painful way. Do not use ImageMagick &lt;code&gt;-resize&lt;/code&gt; with default filtering on JPEGs from Magnific, the output goes soft. Use &lt;code&gt;-filter Lanczos -resize&lt;/code&gt; if you go that route, or stay with &lt;code&gt;sips -Z&lt;/code&gt; which gives a cleaner downscale on Apple silicon. Strip EXIF too. Magnific files often carry a 200 KB color profile that adds nothing on the web. &lt;code&gt;sips&lt;/code&gt; strips it automatically, ImageMagick needs &lt;code&gt;-strip&lt;/code&gt;.&lt;/p&gt;
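
&lt;p&gt;For the Linux/CI path, the &lt;code&gt;sips&lt;/code&gt; loop translates almost one-to-one. A sketch assuming ImageMagick 7's &lt;code&gt;magick&lt;/code&gt; binary (swap in &lt;code&gt;convert&lt;/code&gt; on IM6), with the Lanczos and &lt;code&gt;-strip&lt;/code&gt; flags baked in:&lt;/p&gt;

```shell
#!/bin/bash
# ImageMagick variant of rx-resize for Linux/CI.
# Assumes ImageMagick 7 ("magick"); on IM6 use "convert" instead.
SRC="$1"
BASE="${SRC%.*}"
for W in 480 800 1600 3200; do
  # "WxW>" only shrinks, never enlarges; Lanczos keeps edges crisp on downscale;
  # -strip drops the EXIF/ICC baggage that sips removes automatically
  magick "$SRC" -filter Lanczos -resize "${W}x${W}>" -strip -quality 82 "${BASE}-${W}.jpg"
done
```

&lt;p&gt;Same input, same output names, so the upload step in part three does not care which machine ran the resize.&lt;/p&gt;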

&lt;p&gt;Compression is the other money line. JPEG quality 82 is the sweet spot for product photography on dark backgrounds like the RAXXO Studios palette. Quality 90 looks identical and weighs 40 percent more. Quality 75 starts to band on smooth gradients. Test once on a hero with strong gradients, lock the number, never touch it again.&lt;/p&gt;

&lt;p&gt;The resize step is where the time savings actually live. A manual export of four sizes takes me 4 minutes in Photoshop. The script takes 30 seconds, runs while I drink coffee, and never makes a typo on the dimensions. That is 3.5 minutes saved per product, every product, forever.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Push to Shopify Files via the Admin API
&lt;/h2&gt;

&lt;p&gt;This is the step everyone skips and everyone regrets. The Shopify admin uploader is fine for one file. For four files across twenty products it is 80 manual uploads and at least one wrong file in the wrong slot. The Admin API solves it in one shell command.&lt;/p&gt;

&lt;p&gt;The dance is three calls. First, &lt;code&gt;stagedUploadsCreate&lt;/code&gt; returns a presigned S3 URL per file. Second, you PUT the binary to that URL. Third, &lt;code&gt;fileCreate&lt;/code&gt; registers the upload as a permanent &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; Files asset and returns a CDN URL. The whole sequence runs in under 8 seconds for four variants on a decent connection.&lt;/p&gt;

&lt;p&gt;I keep my upload script in &lt;code&gt;~/CLAUDE/RAXXOSTUDIOS/raxxo-shop/scripts/shopify-upload&lt;/code&gt;. It reads credentials from &lt;code&gt;.env&lt;/code&gt; (never settings.json, never hardcoded), accepts a glob pattern, and prints the final CDN URLs as a JSON map. Run it like &lt;code&gt;rx-shopify-upload "hero-*.jpg"&lt;/code&gt; and it handles the four uploads in parallel.&lt;/p&gt;

&lt;p&gt;Three production details that took me a week to figure out. One, the &lt;code&gt;stagedUploadsCreate&lt;/code&gt; resource must be &lt;code&gt;IMAGE&lt;/code&gt;, not &lt;code&gt;FILE&lt;/code&gt;, or Shopify will not recognize the asset in the theme image picker. Two, the S3 PUT needs the exact &lt;code&gt;Content-Type&lt;/code&gt; Shopify returned in the staged URL parameters, not the one your file actually has. Three, &lt;code&gt;fileCreate&lt;/code&gt; is async on Shopify's side, so poll the file ID until status is &lt;code&gt;READY&lt;/code&gt; before treating the URL as live. About 2 to 4 seconds typical.&lt;/p&gt;

&lt;p&gt;The CDN URL format is &lt;code&gt;https://cdn.shopify.com/s/files/.../hero-1600.jpg&lt;/code&gt;. Save those URLs somewhere your Liquid templates can read them, ideally as theme settings or metafields. I write them straight to a metafield on the product so the theme can resolve them with one tag.&lt;/p&gt;

&lt;p&gt;Five minutes in, all four variants live on Shopify's CDN, named consistently, accessible from any template.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Reference From Liquid With image_url Filters
&lt;/h2&gt;

&lt;p&gt;The last step is the cleanest. Shopify's &lt;code&gt;image_url&lt;/code&gt; filter takes a width parameter and returns the right CDN variant on the fly. Combined with &lt;code&gt;srcset&lt;/code&gt;, the browser picks the right size for the device pixel ratio.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight liquid"&gt;&lt;code&gt;
&lt;span class="cp"&gt;{%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;assign&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;hero&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;metafields&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;custom&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;hero_image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
![&lt;span class="cp"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;title&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;}}&lt;/span&gt; hero](&lt;span class="cp"&gt;{{&lt;/span&gt;%20hero%20|%20image_url:%20width:%20800%20}})&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire frontend. Shopify's CDN serves the matching variant, falls back gracefully, and applies WebP transcoding for browsers that support it. I do not maintain four separate image references, the filter handles selection.&lt;/p&gt;

&lt;p&gt;Two Liquid rules I enforce on every theme. Always set explicit &lt;code&gt;width&lt;/code&gt; and &lt;code&gt;height&lt;/code&gt; so the browser reserves space and Cumulative Layout Shift stays at zero. Always set &lt;code&gt;loading="lazy"&lt;/code&gt; except for the first hero above the fold, which gets &lt;code&gt;loading="eager"&lt;/code&gt; and &lt;code&gt;fetchpriority="high"&lt;/code&gt;. Lighthouse loves both, and so does Google's ranking model.&lt;/p&gt;

&lt;p&gt;For OG and Twitter card images I reference the 1600 variant directly. Social crawlers do not respect srcset, so give them one fixed URL at the right resolution. I add &lt;code&gt;?v={{ 'now' | date: '%s' }}&lt;/code&gt; during dev to bust cache, then strip it before deploy.&lt;/p&gt;

&lt;p&gt;End to end the pipeline now takes 5 minutes of human time and 30 seconds of machine time per product. Compared to my old 12-minute manual loop that is a 60 percent reduction, and the variants are pixel-identical across every product. Consistency turns out to be the real win, not just speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;I shipped 17 product pages on raxxo.shop with this pipeline in one weekend. Old workflow would have been 3.4 hours of mouse clicks, new workflow was 85 minutes of waiting on generation while I wrote copy. Same output quality, four times the throughput, zero typos in image dimensions because the script does not have fingers.&lt;/p&gt;

&lt;p&gt;If you run a Shopify store and you generate or upscale heroes in &lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific&lt;/a&gt;, the four-step pipeline pays for itself on the third product. Local resize, API upload, Liquid filter. No drag and drop, no Photoshop export sheet, no admin uploader.&lt;/p&gt;

&lt;p&gt;Want the actual scripts I use on raxxo.shop, including the &lt;code&gt;.env&lt;/code&gt; setup and the parallel upload variant? Head to raxxo.shop/blogs/lab and check the tutorials cluster for the companion piece on automating Shopify product launches end to end. Same philosophy, ten more steps automated.&lt;/p&gt;

&lt;p&gt;Build it once, run it forever.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
