<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: RyanCwynar</title>
    <description>The latest articles on DEV Community by RyanCwynar (@ryancwynar).</description>
    <link>https://dev.to/ryancwynar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F257626%2F3b301d19-450f-4042-a05d-4e4aeb2514ac.jpeg</url>
      <title>DEV Community: RyanCwynar</title>
      <link>https://dev.to/ryancwynar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ryancwynar"/>
    <language>en</language>
    <item>
      <title>Redis Queues: The Glue Between Your AI Agent Tasks</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Thu, 19 Mar 2026 10:01:36 +0000</pubDate>
      <link>https://dev.to/ryancwynar/redis-queues-the-glue-between-your-ai-agent-tasks-1hc</link>
      <guid>https://dev.to/ryancwynar/redis-queues-the-glue-between-your-ai-agent-tasks-1hc</guid>
      <description>&lt;p&gt;You have an AI agent that finds prospects. Another task that builds spec websites. A third that sends outreach. How do they talk to each other?&lt;/p&gt;

&lt;p&gt;I tried a database. I tried flat files. Then I tried Redis, and everything got simpler.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: AI Tasks Need a Handoff Layer
&lt;/h2&gt;

&lt;p&gt;When you are building autonomous pipelines — not toy demos, but systems that actually run unsupervised — the hardest part is not the AI. It is the plumbing between tasks.&lt;/p&gt;

&lt;p&gt;My setup looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prospect discovery&lt;/strong&gt; — finds local businesses with outdated websites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Site builder&lt;/strong&gt; — generates a modern redesign preview on GitHub Pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outreach&lt;/strong&gt; — sends SMS or email with the preview link&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Follow-up&lt;/strong&gt; — checks tracking pixels, re-engages warm leads&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step produces output the next step needs. A database works, but it is heavy. Flat files work, but they do not signal. What I needed was a queue — something that says "here is work, go do it."&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Redis Fits
&lt;/h2&gt;

&lt;p&gt;Redis gives you three things that matter for AI agent coordination:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lists as queues.&lt;/strong&gt; &lt;code&gt;LPUSH&lt;/code&gt; to enqueue, &lt;code&gt;RPOP&lt;/code&gt; to dequeue (or &lt;code&gt;BRPOP&lt;/code&gt; if you want the worker to block until work arrives). Dead simple. Your prospect finder pushes business objects onto a list. Your site builder pops them off. No database polling, no file watchers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Prospect finder adds work&lt;/span&gt;
redis-cli LPUSH prospect:queue &lt;span class="s1"&gt;'{"name": "Acme Auto", "url": "acmeauto.com", "phone": "555-0123"}'&lt;/span&gt;

&lt;span class="c"&gt;# Site builder grabs next job&lt;/span&gt;
redis-cli RPOP prospect:queue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sets for deduplication.&lt;/strong&gt; When your prospect finder runs every few hours, it will rediscover the same businesses. Before enqueuing, check the set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli SISMEMBER prospect:seen &lt;span class="s2"&gt;"acmeauto.com"&lt;/span&gt;
&lt;span class="c"&gt;# Returns 0 (new) or 1 (already processed)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how I track saturation. When &lt;code&gt;SISMEMBER&lt;/code&gt; returns 1 more often than 0, it is time to expand your search radius or switch categories.&lt;/p&gt;
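&lt;p&gt;That saturation signal is easy to make concrete. A sketch in TypeScript (the function name and the 50% threshold are mine, not from any library):&lt;/p&gt;

```typescript
// Sketch: decide when prospect discovery has saturated the current area.
// Feed it recent SISMEMBER results (1 = already seen, 0 = new).
function shouldExpandSearch(sismemberResults: number[], threshold = 0.5): boolean {
  if (sismemberResults.length === 0) return false;
  const seen = sismemberResults.filter((r) => r === 1).length;
  // More repeats than new prospects means widen the radius or switch category.
  return seen / sismemberResults.length > threshold;
}
```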

&lt;p&gt;&lt;strong&gt;TTL for temporary state.&lt;/strong&gt; Some data should not live forever. Tracking pixel hits, rate limit counters, session tokens — give them a TTL and forget about cleanup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli SET outreach:ratelimit:sms 15 EX 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  My Actual Setup
&lt;/h2&gt;

&lt;p&gt;I run Redis in a Docker container alongside everything else on a single VPS. The config is minimal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:7-alpine&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis-data:/data&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Append-only file (AOF) for persistence, 256MB cap with LRU eviction. For a solo dev pipeline processing maybe 50-100 prospects a day, this is massive overkill — and that is exactly the point. You never think about it.&lt;/p&gt;

&lt;p&gt;The AI agent accesses Redis through simple CLI calls or a lightweight Node script. No ORM, no client library drama. &lt;code&gt;redis-cli&lt;/code&gt; is installed on the host, and from inside containers, Redis is just &lt;code&gt;redis:6379&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patterns That Emerged
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The staging pattern.&lt;/strong&gt; Instead of going straight from discovery to outreach, I queue into &lt;code&gt;prospect:staged&lt;/code&gt;. A separate review step (sometimes human, sometimes AI) moves approved prospects to &lt;code&gt;prospect:ready&lt;/code&gt;. This gives me a kill switch without touching code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The dead letter queue.&lt;/strong&gt; When outreach fails — bad phone number, bounced email — the prospect goes to &lt;code&gt;prospect:failed&lt;/code&gt; instead of disappearing. I review these weekly. Sometimes the data just needs cleaning.&lt;/p&gt;
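&lt;p&gt;The routing decision behind the dead letter queue fits in a few lines. A sketch (the result type and function are illustrative; only the queue name comes from my setup):&lt;/p&gt;

```typescript
// Sketch: route outreach results. Successes leave the pipeline; known-bad
// data lands in the dead letter queue for weekly review instead of vanishing.
type OutreachResult = "sent" | "bad_number" | "bounced";

function targetQueue(result: OutreachResult): string | null {
  if (result === "sent") return null; // done, nothing to requeue
  return "prospect:failed"; // dead letter queue from the article
}
```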

&lt;p&gt;&lt;strong&gt;The counter pattern.&lt;/strong&gt; I track daily stats with expiring keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli INCR stats:prospects:2026-03-19
redis-cli EXPIRE stats:prospects:2026-03-19 604800  &lt;span class="c"&gt;# 7 days&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seven days of rolling stats, zero maintenance.&lt;/p&gt;
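&lt;p&gt;Generating those dated keys from code looks like this (a sketch; the helper name is mine):&lt;/p&gt;

```typescript
// Sketch: build the dated stats key used above, plus the 7-day TTL.
function statKey(metric: string, date: Date): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return "stats:" + metric + ":" + day;
}

const WEEK_SECONDS = 7 * 24 * 60 * 60; // 604800, the EXPIRE value above
```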

&lt;h2&gt;
  
  
  What I Would Not Use Redis For
&lt;/h2&gt;

&lt;p&gt;Anything that needs complex queries. If you want "show me all prospects in Miami that were contacted more than 7 days ago and have not responded" — use Postgres. Redis is for flow control, not analytics.&lt;/p&gt;

&lt;p&gt;Anything that needs transactions across multiple data types. Redis transactions exist but they are not what you want for business logic.&lt;/p&gt;

&lt;p&gt;Long-term storage of records you might need for compliance. Redis is a buffer, not an archive.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;If you are building AI automations that have more than one step, you need a coordination layer. Redis is the lowest-friction option I have found. It took 10 minutes to add to my Docker Compose, and it immediately simplified three different pipelines.&lt;/p&gt;

&lt;p&gt;The AI is the flashy part. The queue is what makes it work while you sleep.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
    </item>
    <item>
      <title>One VPS, Ten Projects: Docker + Traefik + Cloudflare for Indie Hackers</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Wed, 18 Mar 2026 19:01:15 +0000</pubDate>
      <link>https://dev.to/ryancwynar/one-vps-ten-projects-docker-traefik-cloudflare-for-indie-hackers-5hai</link>
      <guid>https://dev.to/ryancwynar/one-vps-ten-projects-docker-traefik-cloudflare-for-indie-hackers-5hai</guid>
      <description>&lt;p&gt;If you are running multiple side projects and paying for separate hosting on each one, stop. A single $10/month VPS can host all of them with automatic HTTPS, zero-downtime deploys, and a reverse proxy that routes traffic by domain name. Here is the exact stack I use.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Every indie hacker hits this wall. You have a Next.js blog on Vercel, an API on Railway, a workflow tool on some other platform, and a static site on Netlify. Each one is cheap individually, but the costs add up. Worse, you are scattered across five dashboards with five different deploy workflows.&lt;/p&gt;

&lt;p&gt;I consolidated everything onto one Hostinger VPS. One machine. One &lt;code&gt;docker-compose.yml&lt;/code&gt;. Ten services.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Traefik&lt;/strong&gt; sits at the front as a reverse proxy. It listens on ports 80 and 443, terminates TLS with automatic Let's Encrypt certificates, and routes requests to the right container based on the domain name. No nginx config files. No manual cert renewals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt; defines every service. Each project gets its own container with its own Dockerfile. They all share a Docker network so they can talk to each other internally, but only Traefik is exposed to the internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloudflare&lt;/strong&gt; handles DNS. Point your domain to the VPS IP, set it to DNS-only mode (not proxied, since Traefik handles TLS), and you are done. New project? Add an A record, add a container, redeploy.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Service Looks Like
&lt;/h2&gt;

&lt;p&gt;Here is a stripped-down example from my &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;website&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./repo/byldr-nextjs&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.enable=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.routers.website.rule=Host(`byldr.co`)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.routers.website.tls.certresolver=letsencrypt&lt;/span&gt;
  &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire routing config. Traefik reads Docker labels at runtime. No config file to update, no proxy to restart. Add the labels, run &lt;code&gt;docker compose up -d&lt;/code&gt;, and Traefik picks it up automatically.&lt;/p&gt;
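&lt;p&gt;One assumption hiding in those labels: a certificate resolver named &lt;code&gt;letsencrypt&lt;/code&gt; has to exist on the Traefik service itself. A minimal sketch of that side of the compose file (the email and image tag are placeholders; adjust to taste):&lt;/p&gt;

```yaml
traefik:
  image: traefik:v2.11
  command:
    - --providers.docker=true
    - --providers.docker.exposedbydefault=false
    - --entrypoints.web.address=:80
    - --entrypoints.websecure.address=:443
    - --certificatesresolvers.letsencrypt.acme.email=you@example.com
    - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
    - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./letsencrypt:/letsencrypt
  networks:
    - web
```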

&lt;h2&gt;
  
  
  What I Am Running
&lt;/h2&gt;

&lt;p&gt;Right now my single VPS hosts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Traefik&lt;/strong&gt; — reverse proxy and TLS termination&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A Next.js marketing site&lt;/strong&gt; — server-side rendered, rebuilt with &lt;code&gt;docker compose build&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convex backend&lt;/strong&gt; — self-hosted database and serverless functions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convex dashboard&lt;/strong&gt; — admin UI for the backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;n8n&lt;/strong&gt; — workflow automation (prospecting, SEO, monitoring)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; — relational data for analytics and keyword tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt; — caching and rate limiting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A Rust API&lt;/strong&gt; — lightweight endpoints for webhooks and integrations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A voice AI server&lt;/strong&gt; — Twilio plus OpenAI Realtime API bridge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All on a VPS with 4 cores and 8GB of RAM. It never breaks a sweat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Workflow
&lt;/h2&gt;

&lt;p&gt;Deploying a change is two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; website
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt; website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Traefik notices the replacement container automatically and starts routing to it as soon as it is up. No load balancer config. No blue-green scripts.&lt;/p&gt;

&lt;p&gt;For the Convex backend, I run &lt;code&gt;npx convex deploy&lt;/code&gt; which pushes functions to the self-hosted instance. Same VPS, different deploy path.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Economics
&lt;/h2&gt;

&lt;p&gt;Hostinger VPS: $10 per month. Cloudflare DNS: free. Let's Encrypt: free. Docker: free. Traefik: free.&lt;/p&gt;

&lt;p&gt;Compare that to hosting the same services across Vercel, Railway, Render, and a managed database. You are looking at $50 to $100 per month minimum, and you do not even get the benefit of services being able to talk to each other on a local network.&lt;/p&gt;

&lt;p&gt;The tradeoff is that you are your own ops team. If the VPS goes down, you fix it. But honestly, it has been months since I had to touch anything beyond deploying new code.&lt;/p&gt;

&lt;h2&gt;
  
  
  When This Does Not Work
&lt;/h2&gt;

&lt;p&gt;If you need global edge distribution, use Vercel or Cloudflare Workers. If you need five nines of uptime for paying customers, use managed infrastructure. If you are allergic to SSH, this is not for you.&lt;/p&gt;

&lt;p&gt;But if you are an indie hacker with a portfolio of projects and you want to keep costs near zero while maintaining full control — one VPS, Docker Compose, and Traefik is hard to beat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Spin up a cheap VPS. Install Docker. Copy a &lt;code&gt;docker-compose.yml&lt;/code&gt; with Traefik configured for Let's Encrypt. Add your first service with the right labels. Point a domain at it.&lt;/p&gt;

&lt;p&gt;The whole setup takes about an hour. After that, adding a new project takes five minutes.&lt;/p&gt;

&lt;p&gt;Stop paying five platforms to host five projects. Put them all in one place and move on to building.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>sideprojects</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Cross-Posting Automation: Publish Once, Syndicate Everywhere</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Wed, 18 Mar 2026 10:01:05 +0000</pubDate>
      <link>https://dev.to/ryancwynar/cross-posting-automation-publish-once-syndicate-everywhere-32h2</link>
      <guid>https://dev.to/ryancwynar/cross-posting-automation-publish-once-syndicate-everywhere-32h2</guid>
      <description>&lt;p&gt;You write a blog post. Then you copy it to Dev.to. Then Hashnode. Then maybe Medium. Each platform has its own editor, its own formatting quirks, its own publish button. Multiply that by two posts a day and you have a full-time job that produces zero new content.&lt;/p&gt;

&lt;p&gt;I got tired of this after exactly one day of manual cross-posting. So I built a system that publishes to three platforms from a single function call. Here is how it works and why the boring parts matter more than you think.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;My blog runs on a self-hosted Convex backend. When I publish a post, it goes into a &lt;code&gt;posts&lt;/code&gt; table with the markdown content, metadata, and a &lt;code&gt;crossPost&lt;/code&gt; array that specifies which platforms should get a copy.&lt;/p&gt;

&lt;p&gt;The publish command looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx convex run posts:upsertPost &lt;span class="s1"&gt;'{"slug": "my-post", "title": "My Post", "content": "...", "crossPost": ["devto", "hashnode"]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One command. The post lands on ryancwynar.com immediately. Then I trigger cross-posting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx convex run crossPost:crossPostArticle &lt;span class="s1"&gt;'{"slug": "my-post"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function reads the post from the database, formats it for each platform's API, and pushes it out. Dev.to gets a version with a canonical URL pointing back to my site. Hashnode gets the same treatment. Both APIs are straightforward — POST some JSON, get back a URL.&lt;/p&gt;
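&lt;p&gt;The adapter layer is mostly payload shaping. A simplified sketch of the Dev.to side (field names follow the public Forem API, but treat the shape as illustrative; the real call sends more fields):&lt;/p&gt;

```typescript
// Sketch of one platform adapter. Field names follow the public Forem
// (Dev.to) articles API, but the shape here is simplified.
interface Post {
  slug: string;
  title: string;
  content: string;
  crossPost: string[];
}

const CANONICAL_BASE = "https://ryancwynar.com/blog/";

function devtoPayload(post: Post) {
  return {
    article: {
      title: post.title,
      body_markdown: post.content,
      canonical_url: CANONICAL_BASE + post.slug, // points back to the source of truth
      published: true,
    },
  };
}
```

&lt;p&gt;The Hashnode adapter is the same idea with a GraphQL mutation instead of a JSON POST.&lt;/p&gt;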

&lt;h2&gt;
  
  
  Why Canonical URLs Matter
&lt;/h2&gt;

&lt;p&gt;This is the part most people skip. Every cross-posted article includes a &lt;code&gt;canonical_url&lt;/code&gt; pointing to the original on ryancwynar.com. This tells search engines which version is the source of truth.&lt;/p&gt;

&lt;p&gt;Without canonical URLs, you are competing with yourself in search results. Dev.to has higher domain authority than most personal blogs, so Google will rank their copy above yours. Canonical URLs fix this — in theory. In practice, Google does what Google wants, but you are at least giving it the right signal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Platform-Specific Formatting
&lt;/h2&gt;

&lt;p&gt;Dev.to and Hashnode both accept markdown, but they are not identical. Dev.to uses front matter for metadata:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Post"&lt;/span&gt;
&lt;span class="na"&gt;published&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;automation, crossposting&lt;/span&gt;
&lt;span class="na"&gt;canonical_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://ryancwynar.com/blog/my-post&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hashnode uses a GraphQL API where you pass the markdown as a field in the mutation. Tags work differently — Hashnode has a fixed taxonomy, so I map my tags to the closest match.&lt;/p&gt;

&lt;p&gt;The cross-post function handles all of this. The source content is platform-agnostic markdown. The formatting layer adapts it per destination.&lt;/p&gt;
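&lt;p&gt;The tag mapping is just a lookup table with a fallback. A sketch (the entries and fallback slug are illustrative, not Hashnode's actual taxonomy):&lt;/p&gt;

```typescript
// Sketch: map my free-form tags onto a fixed platform taxonomy.
// The entries and fallback are illustrative, not Hashnode's real slugs.
const TAG_MAP: { [tag: string]: string } = {
  crossposting: "blogging",
  automation: "automation",
  sideprojects: "indie-hacker",
};

function mapTag(tag: string): string {
  return TAG_MAP[tag] ?? "programming"; // closest-match fallback
}
```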

&lt;h2&gt;
  
  
  The Real Win: AI Can Drive It
&lt;/h2&gt;

&lt;p&gt;The reason I built this as a CLI-callable function rather than a web UI is that my AI agent runs it. I have a cron job that fires twice a day. The agent checks recent work, picks a topic, writes the post, and calls the publish and cross-post commands. No human in the loop.&lt;/p&gt;

&lt;p&gt;This only works because the interface is a single function call with predictable inputs. If publishing required clicking through three different web UIs, automation would be fragile and expensive. The function-call interface is the enabler.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Would Do Differently
&lt;/h2&gt;

&lt;p&gt;Medium does not have a proper API for programmatic publishing anymore. Their integration options are limited and their API has been in a weird state for years. I skipped it. If Medium matters to you, you might need a browser automation approach, which adds a lot of complexity for marginal reach.&lt;/p&gt;

&lt;p&gt;I also wish I had built analytics aggregation from the start. Each platform has its own stats dashboard. I can see views on Dev.to, reads on Hashnode, and page views on my site, but there is no unified view. That is the next project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;After a week of automated publishing at two posts per day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;14 posts published across 3 platforms&lt;/li&gt;
&lt;li&gt;Zero manual formatting work&lt;/li&gt;
&lt;li&gt;Cross-posting adds about 8 seconds per post (API round trips)&lt;/li&gt;
&lt;li&gt;Total infrastructure cost: $0 (self-hosted Convex, free tier APIs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ROI is not in the publishing — it is in the consistency. Publishing every day is easy when it costs zero effort. The compound effect of daily content is something I could never maintain manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;You do not need Convex for this. The pattern is simple: store your content in one place, write adapter functions for each platform's API, and expose a single entry point that triggers them all. A Node script with three API calls would work fine.&lt;/p&gt;

&lt;p&gt;The hard part is not the code. It is committing to the canonical URL strategy and accepting that your personal site is the source of truth, even when Dev.to gets more eyeballs. Play the long game.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Spec Work at Scale: Using AI to Redesign Small Business Websites Before They Ask</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Tue, 17 Mar 2026 19:01:23 +0000</pubDate>
      <link>https://dev.to/ryancwynar/spec-work-at-scale-using-ai-to-redesign-small-business-websites-before-they-ask-4on</link>
      <guid>https://dev.to/ryancwynar/spec-work-at-scale-using-ai-to-redesign-small-business-websites-before-they-ask-4on</guid>
      <description>&lt;p&gt;Most freelancers hate spec work. You spend hours building something, send it to a stranger, and hear nothing back. The math never works.&lt;/p&gt;

&lt;p&gt;But what if you could do spec work in 15 minutes instead of 15 hours? That changes everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ugly Site Strategy
&lt;/h2&gt;

&lt;p&gt;I started a simple experiment: find small businesses with terrible websites, redesign them using AI, and send the owner a link. No pitch deck. No discovery call. Just "here is what your site could look like."&lt;/p&gt;

&lt;p&gt;The workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Find the business&lt;/strong&gt; — Google Maps, Yelp, or my AI prospect finder surfaces businesses with outdated sites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scrape their content&lt;/strong&gt; — Pull real copy, images, phone numbers, and service descriptions from the existing site&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate a redesign&lt;/strong&gt; — AI builds a modern static site using their actual content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy instantly&lt;/strong&gt; — Push to GitHub Pages, get a live URL in seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Send the link&lt;/strong&gt; — Email or call with "I rebuilt your website, take a look"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key insight: using their real content instead of lorem ipsum makes it feel personal. When a roofing company owner sees their own phone number, their own service list, and their own business name on a polished modern site, it clicks immediately.&lt;/p&gt;
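&lt;p&gt;To make that concrete, here is the shape of the personalization step: scraped fields go straight into the generated copy. Everything here (type, function, wording) is an illustrative sketch:&lt;/p&gt;

```typescript
// Sketch of the personalization step: scraped business data goes straight
// into the generated copy. The type and wording are illustrative.
interface Business {
  name: string;
  phone: string;
  services: string[];
}

function heroContent(biz: Business) {
  return {
    headline: biz.name,
    subhead: "Trusted " + biz.services.join(", ") + " in your area",
    cta: "Call " + biz.phone + " today",
  };
}
```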

&lt;h2&gt;
  
  
  Why This Works Better Than Cold Outreach
&lt;/h2&gt;

&lt;p&gt;Traditional cold outreach for web services is brutal. You are competing with every agency, every Fiverr freelancer, every cousin who "knows WordPress." Your email looks exactly like the 50 others they deleted this week.&lt;/p&gt;

&lt;p&gt;A live redesign is different. It is tangible. The business owner can pull it up on their phone and show their spouse. They can compare it side-by-side with their current site. You have already done the work — the conversation shifts from "should I hire someone" to "should I hire THIS person."&lt;/p&gt;

&lt;p&gt;I built a before-and-after screenshot system that puts the old site next to the new one. The contrast does the selling.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Stack
&lt;/h2&gt;

&lt;p&gt;Each redesign is a static site — just HTML, CSS, and maybe a bit of JavaScript. No CMS, no database, no hosting costs. GitHub Pages serves them for free.&lt;/p&gt;

&lt;p&gt;The generation process uses a templating approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scrape the original site for content and images&lt;/li&gt;
&lt;li&gt;Pick a color scheme that fits the industry (navy for roofing, green for lawn care, warm tones for restaurants)&lt;/li&gt;
&lt;li&gt;Generate semantic HTML with proper sections: hero, services, testimonials, contact&lt;/li&gt;
&lt;li&gt;Include real CTAs with the business phone number and address&lt;/li&gt;
&lt;li&gt;Deploy to a subdirectory under my GitHub Pages domain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each site takes about 10-15 minutes of AI-assisted work. Compare that to the 8-20 hours a traditional redesign would take.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Tracking
&lt;/h2&gt;

&lt;p&gt;Here is where it gets interesting. Each redesign URL includes a unique tracking parameter tied to a prospect record in my database. When the owner clicks the link, I know. When they visit multiple times, I know. When they forward it to someone else, I see that too.&lt;/p&gt;

&lt;p&gt;This turns spec work into a lead scoring system. A prospect who visits their redesign five times in two days is warm. A prospect who never clicks is cold. I focus my follow-up on the people who are already interested.&lt;/p&gt;
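&lt;p&gt;Tagging the URLs is a one-liner with the standard &lt;code&gt;URL&lt;/code&gt; API (the helper name is mine):&lt;/p&gt;

```typescript
// Sketch: tag a mockup URL with the prospect id using the standard URL API.
function withRef(baseUrl: string, prospectId: string): string {
  const u = new URL(baseUrl);
  u.searchParams.set("ref", prospectId);
  return u.toString();
}
```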

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;After building redesigns for dentists, roofers, auto repair shops, chiropractors, and lawn care companies, a few patterns emerged:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed beats perfection.&lt;/strong&gt; A good redesign shipped today beats a perfect one next week. The business owner does not care about your CSS architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Industry matters.&lt;/strong&gt; Some industries respond much better than others. Service businesses with ugly sites and high customer lifetime value (roofing, dental, HVAC) are the sweet spot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Follow-up is everything.&lt;/strong&gt; The redesign gets attention, but closing requires a phone call. Email alone has maybe a 5% response rate. A call referencing the redesign you sent gets much higher engagement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Saturation is real.&lt;/strong&gt; In a small metro area, you will run through the best prospects quickly. Build the system to expand geographically when local leads dry up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This is not really about websites. It is about using AI to collapse the cost of demonstrating value. The same pattern works for any service: show, do not tell.&lt;/p&gt;

&lt;p&gt;When the cost of creating a sample drops to near zero, spec work stops being a gamble and becomes a strategy. The businesses that figure this out first — in any industry — will have a massive advantage in customer acquisition.&lt;/p&gt;

&lt;p&gt;The tools exist. The only question is whether you are willing to build the pipeline.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Tracking Pixels and Webhooks: The DIY Sales Pipeline Nobody Talks About</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Tue, 17 Mar 2026 10:01:34 +0000</pubDate>
      <link>https://dev.to/ryancwynar/tracking-pixels-and-webhooks-the-diy-sales-pipeline-nobody-talks-about-5c5d</link>
      <guid>https://dev.to/ryancwynar/tracking-pixels-and-webhooks-the-diy-sales-pipeline-nobody-talks-about-5c5d</guid>
      <description>&lt;p&gt;Most indie hackers obsess over landing pages and SEO. Meanwhile, the most useful thing I built for my web dev business was a tracking pixel and a webhook.&lt;/p&gt;

&lt;p&gt;Here's the setup: I find local businesses with outdated websites, generate a modern mockup, and send them a link. Simple enough. But the magic isn't in the outreach — it's in knowing what happens &lt;em&gt;after&lt;/em&gt; the message lands.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tracking Pixel
&lt;/h2&gt;

&lt;p&gt;Every mockup link includes a &lt;code&gt;?ref=PROSPECT_ID&lt;/code&gt; parameter. When someone opens the page, a tracking pixel embedded in it hits a lightweight endpoint that logs the visit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Convex HTTP action&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;track&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;httpAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ref&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;searchParams&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ref&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runMutation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;prospects&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;recordView&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ref&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// Return a 1x1 transparent pixel&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PIXEL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/gif&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I know &lt;em&gt;exactly&lt;/em&gt; who opened the link, when, and how many times. A prospect who visits their mockup three times in two days is a warm lead. Someone who never clicks? Don't waste a follow-up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Webhook Loop
&lt;/h2&gt;

&lt;p&gt;The tracking pixel fires a mutation that updates the prospect record. But it also triggers a webhook back to my AI agent (running on OpenClaw). The agent sees the view event and can autonomously decide what to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First view → add a note: "👀 Prospect viewed their redesign"&lt;/li&gt;
&lt;li&gt;Multiple views in 24h → flag as hot lead&lt;/li&gt;
&lt;li&gt;View + no response after 48h → queue a gentle follow-up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates a closed loop. The outreach goes out, the tracking pixel reports back, and the agent adapts. No dashboard refreshing. No manual CRM updates.&lt;/p&gt;
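&lt;p&gt;The decision rules above are simple enough to express as a pure function. This is an illustrative sketch, not the actual agent code; the event fields are assumptions:&lt;/p&gt;

```typescript
// Hypothetical shape of a view event; field names are assumptions.
type ViewEvent = { viewCount: number; hoursSinceFirstView: number; responded: boolean };

function decideAction(e: ViewEvent): string {
  if (e.viewCount === 1) return "note";   // first view: annotate the prospect record
  if (e.hoursSinceFirstView > 24) {
    // views are spread out: nudge only if they never responded
    if (e.hoursSinceFirstView >= 48) {
      if (e.responded) return "wait";
      return "follow-up";
    }
    return "wait";
  }
  return "hot-lead";                      // multiple views within 24h
}
```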

&lt;h2&gt;
  
  
  Why This Beats SaaS Tools
&lt;/h2&gt;

&lt;p&gt;You could use HubSpot or Salesforce for this. But for a solo operation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; My entire stack runs on a $10/month VPS. Convex is self-hosted. The tracking endpoint is about a dozen lines of code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control:&lt;/strong&gt; I own the data. I can query it however I want. No vendor lock-in, no monthly seat fees.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration:&lt;/strong&gt; Because the webhook hits my AI agent directly, I can do things no SaaS tool supports — like having the agent draft a personalized follow-up based on which sections of the mockup the prospect spent time on.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Stripe Shortcut
&lt;/h2&gt;

&lt;p&gt;I added one more piece: a Stripe payment link generator. When a prospect is flagged as warm, the system creates a $200 payment link specific to that prospect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createPaymentLink&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stripe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;checkout&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;line_items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PRICE_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;payment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;prospectId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;prospectId&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;success_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;BASE_URL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/thank-you?ref=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;prospectId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The follow-up message includes the payment link. Outreach goes out → prospect views the redesign → follow-up arrives with a payment link → prospect pays. Four steps, mostly automated.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tracking changes behavior.&lt;/strong&gt; Before the pixel, I'd send 20 messages and wonder why nobody responded. After, I realized most people &lt;em&gt;were&lt;/em&gt; clicking — they just needed a nudge. The data turned a 2% response rate into 15%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Webhooks are underrated glue.&lt;/strong&gt; Everyone talks about APIs. But a simple webhook that says "this thing happened" and lets your agent decide what to do next is incredibly powerful. It's event-driven architecture without the enterprise overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You don't need much.&lt;/strong&gt; A Convex backend, a tracking pixel, a webhook, and an AI agent. Total code: maybe 200 lines. Total cost: nearly zero. But it gave me a sales pipeline that adapts in real time.&lt;/p&gt;

&lt;p&gt;The best tools aren't the ones with the most features. They're the ones you build yourself, because they do exactly what you need and nothing else.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Heartbeats vs Cron: Two Patterns for Scheduling Autonomous AI Work</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Mon, 16 Mar 2026 19:01:25 +0000</pubDate>
      <link>https://dev.to/ryancwynar/heartbeats-vs-cron-two-patterns-for-scheduling-autonomous-ai-work-1l0</link>
      <guid>https://dev.to/ryancwynar/heartbeats-vs-cron-two-patterns-for-scheduling-autonomous-ai-work-1l0</guid>
      <description>&lt;p&gt;When you give an AI agent the ability to act on its own, you immediately hit a scheduling problem: &lt;em&gt;when&lt;/em&gt; should it do things?&lt;/p&gt;

&lt;p&gt;I have been running an autonomous AI setup for a few weeks now — it writes blog posts, posts to LinkedIn, checks email, monitors projects. Along the way I landed on two distinct scheduling patterns that serve very different purposes. If you are building anything similar, understanding the tradeoff will save you time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: Heartbeats
&lt;/h2&gt;

&lt;p&gt;A heartbeat is a periodic poll. Every 30 minutes or so, the system pings the agent: "Hey, anything need attention?" The agent checks a lightweight file (&lt;code&gt;HEARTBEAT.md&lt;/code&gt;) for standing tasks, glances at recent context, and either acts or replies with the equivalent of "all good."&lt;/p&gt;

&lt;p&gt;Heartbeats are great when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You want to batch work.&lt;/strong&gt; One heartbeat can check email, glance at the calendar, and review notifications in a single turn. That is one API call instead of three separate cron jobs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timing does not need to be precise.&lt;/strong&gt; If checking email at 2:00 PM vs 2:27 PM makes no difference, a heartbeat is simpler.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need conversational context.&lt;/strong&gt; The heartbeat runs in the main session, so the agent has access to recent chat history. It knows what you were just talking about.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The downside: heartbeats are inherently imprecise. They fire on an interval, not at exact times. And because they run in the main session, they burn tokens even when there is nothing to do.&lt;/p&gt;
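&lt;p&gt;The poll itself can stay tiny. A sketch of pulling standing tasks out of &lt;code&gt;HEARTBEAT.md&lt;/code&gt; content (the checkbox format here is an assumption, not the actual file layout):&lt;/p&gt;

```typescript
// Sketch: unchecked "- [ ]" items are pending; anything else is ignored.
function pendingTasks(heartbeatMd: string): string[] {
  return heartbeatMd
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("- [ ]"))
    .map((line) => line.slice(5).trim());
}

// A scheduler calls this every ~30 minutes; an empty result means "all good."
```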

&lt;h2&gt;
  
  
  Pattern 2: Cron Jobs
&lt;/h2&gt;

&lt;p&gt;Cron is the classic: run this task at this exact time. In my setup, cron jobs spin up isolated sessions — fresh context, no chat history leaking in. The agent does its thing and announces the result.&lt;/p&gt;

&lt;p&gt;Cron jobs win when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exact timing matters.&lt;/strong&gt; "Post to LinkedIn at 8:00 AM UTC every weekday" — you want that to actually happen at 8:00 AM, not whenever the next heartbeat fires.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The task is self-contained.&lt;/strong&gt; Writing a blog post does not need to know what I said in chat an hour ago. It needs memory files and a clear prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want isolation.&lt;/strong&gt; An isolated session means the task cannot accidentally reference private conversation context. This matters more than you think when cross-posting to public platforms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want different models per task.&lt;/strong&gt; Maybe your blog writer uses a beefier model while your email checker runs on something fast and cheap.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The downside: each cron job is a separate session with its own token cost. If you have 15 cron jobs firing throughout the day, that adds up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Practical Split
&lt;/h2&gt;

&lt;p&gt;Here is how I split things in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heartbeat (batched, 2-4x daily):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check email for urgent items&lt;/li&gt;
&lt;li&gt;Glance at calendar for upcoming events&lt;/li&gt;
&lt;li&gt;Review social notifications&lt;/li&gt;
&lt;li&gt;Light memory maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cron (precise, isolated):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blog posts at specific times (morning and evening)&lt;/li&gt;
&lt;li&gt;LinkedIn posts on weekday mornings&lt;/li&gt;
&lt;li&gt;One-shot reminders ("remind me in 20 minutes")&lt;/li&gt;
&lt;li&gt;Any task that produces public output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rule of thumb: if the output stays between me and the agent, heartbeat. If it goes somewhere public or needs exact timing, cron.&lt;/p&gt;
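&lt;p&gt;That rule of thumb fits in a few lines. A sketch, with the two traits reduced to booleans:&lt;/p&gt;

```typescript
// Illustrative only: the two questions the rule of thumb actually asks.
type TaskTraits = { publicOutput: boolean; needsExactTiming: boolean };

function pickScheduler(task: TaskTraits): "cron" | "heartbeat" {
  if (task.publicOutput) return "cron";      // public output: isolated, precise
  if (task.needsExactTiming) return "cron";  // exact timing: cron by definition
  return "heartbeat";                        // everything else batches fine
}
```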

&lt;h2&gt;
  
  
  The Gotcha: Overlapping Responsibility
&lt;/h2&gt;

&lt;p&gt;The failure mode is when both systems think they own a task. Early on I had a heartbeat checking for "anything to post" while also having a cron job scheduled to post. The agent would occasionally double-post, or skip posting entirely because it assumed the other mechanism had handled it.&lt;/p&gt;

&lt;p&gt;The fix is simple: clear ownership. If a cron job handles blog posting, the heartbeat should never touch blog posting. Document it in one place and do not be clever about fallbacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Beyond My Setup
&lt;/h2&gt;

&lt;p&gt;If you are building any kind of autonomous agent — whether it is an AI assistant, a monitoring bot, or an automation pipeline — you will hit this same fork. The answer is almost never "just use cron for everything" or "just poll." It is a mix, and the split depends on whether you need precision, isolation, and public accountability (cron) or flexibility, batching, and conversational context (heartbeat).&lt;/p&gt;

&lt;p&gt;The boring scheduling layer is what makes the difference between an AI demo and an AI system that actually runs reliably day after day.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
    </item>
    <item>
      <title>Flat File Memory: The Simplest Persistence Layer for AI Agents</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Mon, 16 Mar 2026 10:01:02 +0000</pubDate>
      <link>https://dev.to/ryancwynar/flat-file-memory-the-simplest-persistence-layer-for-ai-agents-565a</link>
      <guid>https://dev.to/ryancwynar/flat-file-memory-the-simplest-persistence-layer-for-ai-agents-565a</guid>
      <description>&lt;p&gt;Every AI agent session starts from zero. No memory of yesterday, no context from last week, no idea what it built an hour ago. If you're building autonomous agents that do real work — not just chat — you need to solve this.&lt;/p&gt;

&lt;p&gt;I tried databases. I tried vector stores. I ended up with markdown files in a folder. Here's why.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With Starting Fresh
&lt;/h2&gt;

&lt;p&gt;My AI agent (running on &lt;a href="https://openclaw.com" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;) handles everything from writing blog posts to managing prospect pipelines to deploying code. Each session is stateless. The model wakes up, reads its instructions, and has no idea what happened before.&lt;/p&gt;

&lt;p&gt;Without memory, it would research the same topics, rewrite the same articles, and repeat the same mistakes every single session.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Not a Database?
&lt;/h2&gt;

&lt;p&gt;Databases are great for structured data. But agent memory isn't structured — it's messy, contextual, and evolving. One day the agent needs to remember a deploy command. The next day it needs to recall that a prospect pipeline hit saturation in Phoenix.&lt;/p&gt;

&lt;p&gt;Vector stores help with retrieval, but they add latency, complexity, and another service to maintain. For a single-agent setup, it's overkill.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Flat File Approach
&lt;/h2&gt;

&lt;p&gt;Here's what actually works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;memory/
  2026-03-14.md
  2026-03-15.md
  2026-03-16.md
MEMORY.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Daily files&lt;/strong&gt; (&lt;code&gt;memory/YYYY-MM-DD.md&lt;/code&gt;) are raw logs. What happened, what was built, what failed. The agent writes to today's file throughout the session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MEMORY.md&lt;/strong&gt; is curated long-term memory. The agent periodically reviews daily files and distills the important stuff — lessons learned, project status, preferences, key decisions.&lt;/p&gt;

&lt;p&gt;It's literally a journal system. Daily notes plus a summary document.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works in Practice
&lt;/h2&gt;

&lt;p&gt;Every session, the agent reads today's file and yesterday's file for recent context. For deeper recall, it does a semantic search across all memory files. That's it.&lt;/p&gt;
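&lt;p&gt;The "recent context" selection is just date math. A sketch that matches the &lt;code&gt;memory/YYYY-MM-DD.md&lt;/code&gt; layout above (UTC dates are an assumption):&lt;/p&gt;

```typescript
// Sketch: the two files loaded at session start for recent context.
function recentMemoryFiles(now: Date): string[] {
  const day = (d: Date) => d.toISOString().slice(0, 10); // YYYY-MM-DD in UTC
  const yesterday = new Date(now.getTime() - 24 * 60 * 60 * 1000);
  return ["memory/" + day(now) + ".md", "memory/" + day(yesterday) + ".md"];
}
```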

&lt;p&gt;When the agent publishes a blog post, it logs it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Blog Post - Auto Published (10:00 AM UTC)&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Title: "Sub-200ms Voice AI: Bridging Twilio and OpenAI Realtime API"
&lt;span class="p"&gt;-&lt;/span&gt; Slug: sub-200ms-voice-ai-twilio-openai-realtime
&lt;span class="p"&gt;-&lt;/span&gt; Cross-posted to Dev.to and Hashnode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next session, it reads this and knows not to write about voice AI again today. Simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Beats Everything Else
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Debuggability.&lt;/strong&gt; When something goes wrong, I open a markdown file and read it. No query language, no dashboard, no log aggregator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Editability.&lt;/strong&gt; I can manually edit the agent's memory. Want it to remember something? Add a line. Want it to forget? Delete one. Try doing that with embeddings in a vector database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero dependencies.&lt;/strong&gt; No database to maintain, no service to keep running, no schema migrations. Files on disk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version control.&lt;/strong&gt; The memory directory is in a git repo. I can see exactly how the agent's understanding evolved over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context window friendly.&lt;/strong&gt; Markdown files are already in the format LLMs consume best. No serialization, no formatting overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two-Tier Pattern
&lt;/h2&gt;

&lt;p&gt;The key insight is separating raw logs from curated memory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Daily files&lt;/strong&gt; grow indefinitely but are only read for recent context (today + yesterday)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MEMORY.md&lt;/strong&gt; stays small and focused — the agent prunes it during idle time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search&lt;/strong&gt; bridges the gap when the agent needs something older&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mirrors how human memory works. You remember today clearly, yesterday mostly, and everything else is fuzzy until something triggers recall.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Something Else
&lt;/h2&gt;

&lt;p&gt;Flat files break down when you have multiple agents writing concurrently, when you need transactional guarantees, or when your memory corpus exceeds what semantic search can handle efficiently.&lt;/p&gt;

&lt;p&gt;For a single autonomous agent doing real work? Markdown files in a folder. It's boring, it's simple, and it just works.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building autonomous AI systems that handle real business tasks — from content to prospecting to deployments. Follow along at &lt;a href="https://ryancwynar.com" rel="noopener noreferrer"&gt;ryancwynar.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What Happens When Your AI Prospect Finder Runs Out of Prospects</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Sun, 15 Mar 2026 19:01:19 +0000</pubDate>
      <link>https://dev.to/ryancwynar/what-happens-when-your-ai-prospect-finder-runs-out-of-prospects-4mih</link>
      <guid>https://dev.to/ryancwynar/what-happens-when-your-ai-prospect-finder-runs-out-of-prospects-4mih</guid>
      <description>&lt;p&gt;You build an automated system to find local business prospects. It works great. Then one day it starts returning mostly duplicates. Congratulations — you have hit saturation, and it is one of the most useful signals your system can produce.&lt;/p&gt;

&lt;p&gt;Over the past few weeks I have been running an AI-powered prospect discovery pipeline targeting South Florida businesses — doctors, dentists, law firms, CPAs. The system runs on cron jobs, searches Google Maps and web directories, deduplicates against a PostgreSQL queue, and adds new prospects for outreach via voice AI calls.&lt;/p&gt;

&lt;p&gt;Here is what the trajectory looked like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Growth Phase
&lt;/h2&gt;

&lt;p&gt;Early on, every search returned fresh results. A query for "dentists Fort Lauderdale" would yield five or six new prospects with zero duplicates. The queue grew fast — from zero to 200 in the first week.&lt;/p&gt;

&lt;p&gt;The system was simple: search a category and geography, extract business name and phone number, check if we already have them, insert if new. Each cron run targeted a different vertical (medical, legal, dental) and a different city in the South Florida metro.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Saturation Signal
&lt;/h2&gt;

&lt;p&gt;Around prospect 240, things changed. A search for "doctors Miami" that previously returned six new leads now returned twelve candidates — eight of which were already in the queue. The duplicate ratio flipped from 20% to 70%.&lt;/p&gt;

&lt;p&gt;This is not a bug. This is information.&lt;/p&gt;

&lt;p&gt;When your automated pipeline starts hitting high duplicate rates, it is telling you that you have effectively covered that geographic and vertical combination. Continuing to search the same space burns compute and API calls for diminishing returns.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Saturation Tells You
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Your coverage is real.&lt;/strong&gt; If you are finding the same businesses repeatedly across different search queries, your data is comprehensive. You are not missing the obvious ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your dedup is working.&lt;/strong&gt; The fact that duplicates get caught means your matching logic handles variations in business names and phone numbers. This is harder than it sounds — "Dr. Smith Family Practice" and "Smith Family Medical" might be the same business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is time to expand, not optimize.&lt;/strong&gt; The instinct is to try harder searches or more creative queries. The better move is to expand geography. South Florida saturated at 260 prospects? Move to Tampa, Orlando, or Jacksonville.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture That Made This Easy
&lt;/h2&gt;

&lt;p&gt;The reason saturation was easy to detect and act on is that the system was designed with simple, observable data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Search results → Candidate list → Dedup check → Insert or skip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every run logs how many candidates were found, how many were duplicates, and how many were inserted. When the inserted count trends toward zero, you know.&lt;/p&gt;

&lt;p&gt;I use PostgreSQL for the prospect queue, which makes dedup queries trivial. A simple check on phone number catches most duplicates. Business name fuzzy matching catches the rest.&lt;/p&gt;
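&lt;p&gt;The phone-number check only works if numbers are normalized first. A sketch of the dedup key (illustrative; the real pipeline pairs it with fuzzy name matching):&lt;/p&gt;

```typescript
// Strip formatting and a leading US country code so the same number
// always produces the same key, e.g. "+1 (305) 555-0101" vs "305.555.0101".
function phoneKey(raw: string): string {
  const digits = raw.replace(/\D/g, "");
  if (digits.length === 11) {
    if (digits.startsWith("1")) return digits.slice(1);
  }
  return digits;
}
```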

&lt;p&gt;The cron jobs rotate through verticals and geographies on a schedule. Each job targets one combination — "dentists Boca Raton" or "law firms Palm Beach" — so when a specific combination saturates, you can see exactly which one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons for Any Automated Discovery System
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Track your hit rate.&lt;/strong&gt; The ratio of new results to total candidates is your most important metric. Plot it over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Design for expansion.&lt;/strong&gt; When you saturate one dimension (geography, vertical, keyword), you need a clean way to add new dimensions. If your system is hardcoded to one city, you will hit a wall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Saturation is a feature.&lt;/strong&gt; It means your system works. It found everything findable in that space. Celebrate it, then move on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Do not fight diminishing returns.&lt;/strong&gt; When duplicate rates exceed 60-70%, stop searching that combination. Redirect those compute cycles to unexplored territory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Log everything.&lt;/strong&gt; You cannot detect saturation if you do not track what happened on each run. Every search, every candidate, every skip.&lt;/p&gt;
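&lt;p&gt;Lessons 1 and 4 combine into a small saturation check over the per-run logs. A sketch; the 0.65 threshold is an assumption inside the 60-70% range above:&lt;/p&gt;

```typescript
// Per-run counts the pipeline already logs.
type RunLog = { candidates: number; inserted: number };

// True when the duplicate rate across recent runs crosses the threshold.
function isSaturated(runs: RunLog[], threshold: number = 0.65): boolean {
  const candidates = runs.reduce((n, r) => n + r.candidates, 0);
  const inserted = runs.reduce((n, r) => n + r.inserted, 0);
  if (candidates === 0) return false;
  return 1 - inserted / candidates >= threshold;
}
```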

&lt;h2&gt;
  
  
  What Is Next
&lt;/h2&gt;

&lt;p&gt;The South Florida queue is at 260 prospects and effectively saturated for the verticals I care about. The next step is geographic expansion — same verticals, new metro areas. The infrastructure does not change; only the search parameters do.&lt;/p&gt;

&lt;p&gt;That is the whole point of building systems instead of doing things manually. When one market is covered, you change a config value and point the pipeline somewhere new. The boring infrastructure pays for itself when scaling is a one-line change.&lt;/p&gt;

&lt;p&gt;Saturation is not failure. It is the system telling you it finished its job in that space. Listen to it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>marketing</category>
      <category>webscraping</category>
    </item>
    <item>
      <title>Why I Self-Host Convex as My Personal API Layer</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Sun, 15 Mar 2026 10:00:56 +0000</pubDate>
      <link>https://dev.to/ryancwynar/why-i-self-host-convex-as-my-personal-api-layer-41h5</link>
      <guid>https://dev.to/ryancwynar/why-i-self-host-convex-as-my-personal-api-layer-41h5</guid>
      <description>&lt;p&gt;Every side project starts the same way: you need a database, an API, and auth. By the third project, you're drowning in Supabase instances and Railway deployments. I got tired of it, so I self-hosted Convex on a single VPS and turned it into the backbone for everything I build.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Per-Project Backends
&lt;/h2&gt;

&lt;p&gt;Most indie hackers spin up a new backend for every project. A Postgres here, a serverless function there, maybe a Firebase instance for real-time stuff. It works until you have six projects, four databases, and no idea which one has the customer data you need.&lt;/p&gt;

&lt;p&gt;I wanted one place where all my data lives, all my functions run, and I can wire up new projects in minutes instead of hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Convex
&lt;/h2&gt;

&lt;p&gt;Convex gives you a reactive database, serverless functions, and real-time subscriptions out of the box. The self-hosted version means I own the data and the compute. No vendor lock-in, no surprise bills, no rate limits.&lt;/p&gt;

&lt;p&gt;The killer feature for my use case: &lt;strong&gt;functions are just TypeScript.&lt;/strong&gt; I write a query or mutation, deploy it, and it's immediately callable from any project. No REST routes to define, no API gateway to configure. The function &lt;em&gt;is&lt;/em&gt; the API.&lt;/p&gt;
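&lt;p&gt;For readers who have not used Convex: a minimal query sketched from the documented API, against a hypothetical &lt;code&gt;posts&lt;/code&gt; table. In a real project this lives in &lt;code&gt;convex/posts.ts&lt;/code&gt; and the import path is generated; once deployed, any client can call it like an endpoint:&lt;/p&gt;

```typescript
import { query } from "./_generated/server";

// Deploying this function makes it callable from any connected project.
export const listPosts = query({
  args: {},
  handler: async (ctx) => {
    return await ctx.db.query("posts").collect();
  },
});
```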

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;My stack is dead simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One $10/mo VPS&lt;/strong&gt; running Docker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traefik&lt;/strong&gt; for reverse proxy and automatic HTTPS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convex self-hosted&lt;/strong&gt; (two containers: backend + dashboard)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; for Convex's storage layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploy command is one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CONVEX_SELF_HOSTED_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://convex-socket.byldr.co"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;CONVEX_SELF_HOSTED_ADMIN_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-admin-key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
npx convex deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every project in the repo shares the same Convex instance. New tables, new functions — just deploy and they're live.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Runs on It
&lt;/h2&gt;

&lt;p&gt;Right now, my single Convex instance powers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blog engine&lt;/strong&gt; — posts, cross-posting to Dev.to and Hashnode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contact forms&lt;/strong&gt; — submissions from multiple sites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prospect tracking&lt;/strong&gt; — leads, payment links, follow-ups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API key management&lt;/strong&gt; — auth for external integrations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhook logging&lt;/strong&gt; — every inbound webhook gets logged&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email system&lt;/strong&gt; — templates, send logs, credential management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's six distinct "products" sharing one backend. Adding a new one takes maybe 20 minutes: define the schema, write the functions, deploy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Win: Cross-Project Queries
&lt;/h2&gt;

&lt;p&gt;Here's what you can't do with separate backends: query across projects. When I want to see all activity — blog posts published, prospects contacted, emails sent — it's one query against one database. No API stitching, no data pipeline, no ETL.&lt;/p&gt;

&lt;p&gt;My AI agent uses this constantly. It calls Convex functions to publish blog posts, check prospect status, and log activities. One set of credentials, one deployment target, zero context-switching.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;I'd set up proper table namespacing earlier. Right now my tables are a flat list (&lt;code&gt;posts&lt;/code&gt;, &lt;code&gt;prospects&lt;/code&gt;, &lt;code&gt;emailLogs&lt;/code&gt;). With ten more projects, that gets messy. Convex doesn't have schemas-within-schemas, but a naming convention like &lt;code&gt;blog_posts&lt;/code&gt;, &lt;code&gt;crm_prospects&lt;/code&gt; would help.&lt;/p&gt;

&lt;p&gt;I'd also add a read-only dashboard earlier. The Convex dashboard is great for debugging, but I want a simple status page showing counts and recent activity across all projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should You Do This?
&lt;/h2&gt;

&lt;p&gt;If you're building one SaaS product, probably not. Use Convex Cloud or whatever managed service fits.&lt;/p&gt;

&lt;p&gt;But if you're an indie hacker with multiple projects, a self-hosted Convex instance is genuinely underrated. You get a unified data layer, real-time by default, and TypeScript functions that deploy in seconds. All on a cheap VPS you already have.&lt;/p&gt;

&lt;p&gt;The best infrastructure is the kind you set up once and forget about. Six months in, I've deployed dozens of function updates and haven't touched the Docker config once. That's the goal.&lt;/p&gt;

</description>
      <category>api</category>
      <category>architecture</category>
      <category>backend</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Sub-200ms Voice AI: Bridging Twilio and OpenAI Realtime API</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Sat, 14 Mar 2026 19:01:04 +0000</pubDate>
      <link>https://dev.to/ryancwynar/sub-200ms-voice-ai-bridging-twilio-and-openai-realtime-api-21g3</link>
      <guid>https://dev.to/ryancwynar/sub-200ms-voice-ai-bridging-twilio-and-openai-realtime-api-21g3</guid>
      <description>&lt;p&gt;Most voice AI feels like talking to a call center robot. You say something, wait two seconds, get a canned response. The latency kills the illusion.&lt;/p&gt;

&lt;p&gt;I wanted something better — a voice agent that could hold a real conversation with near-human response times. Here's how I built it by bridging Twilio Media Streams directly to OpenAI's Realtime API.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;The traditional approach to voice AI is a three-step pipeline: Speech-to-Text → LLM → Text-to-Speech. Each step adds latency. STT takes 500ms-1s. The LLM call takes another 500ms-2s. TTS adds another 500ms. You're looking at 1.5-3.5 seconds of dead air before your agent says anything. Humans notice pauses over 300ms.&lt;/p&gt;

&lt;p&gt;OpenAI's Realtime API changes the game. Instead of three separate steps, you get a single WebSocket connection that handles audio in and audio out. The model "hears" raw audio and "speaks" back directly. No transcription round-trip.&lt;/p&gt;

&lt;p&gt;The trick is getting Twilio's phone audio into that WebSocket.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;When someone calls our Twilio number, Twilio opens a Media Stream — a WebSocket that sends us raw audio packets (mulaw, 8kHz). Our Node.js server sits in the middle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Phone Call → Twilio → Media Stream WebSocket → Our Server → OpenAI Realtime WebSocket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server's job is simple: receive audio chunks from Twilio, forward them to OpenAI, and pipe OpenAI's audio responses back to Twilio. It's a bridge, not a processor.&lt;/p&gt;

&lt;p&gt;Here's the core of it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Twilio sends audio&lt;/span&gt;
&lt;span class="nx"&gt;twilioWs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;media&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;openaiWs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;input_audio_buffer.append&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;audio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;media&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;  &lt;span class="c1"&gt;// Already base64 mulaw&lt;/span&gt;
    &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// OpenAI sends audio back&lt;/span&gt;
&lt;span class="nx"&gt;openaiWs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;response.audio.delta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;twilioWs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;media&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;streamSid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;streamSid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;media&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;delta&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the essential loop. Audio flows in both directions through our server with minimal processing overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Details That Matter
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Transcription:&lt;/strong&gt; I added &lt;code&gt;input_audio_transcription: { model: "whisper-1" }&lt;/code&gt; to capture what both sides say. This runs async — it doesn't add to response latency, but gives you full transcripts after the call.&lt;/p&gt;
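&lt;p&gt;Concretely, that option rides along in the &lt;code&gt;session.update&lt;/code&gt; event sent once the OpenAI socket opens. A minimal sketch; field names follow the Realtime API docs at the time of writing, so verify against the current reference:&lt;/p&gt;

```javascript
// Session config sent once after the OpenAI WebSocket opens.
const sessionUpdate = {
  type: "session.update",
  session: {
    voice: "ash",
    input_audio_format: "g711_ulaw",   // matches Twilio's 8kHz mulaw stream
    output_audio_format: "g711_ulaw",  // so we can pipe audio back unmodified
    input_audio_transcription: { model: "whisper-1" }, // async, off the hot path
    turn_detection: { type: "server_vad" },            // built-in barge-in
  },
};
// openaiWs is the already-open WebSocket to the Realtime API:
// openaiWs.send(JSON.stringify(sessionUpdate));
```

&lt;p&gt;Matching both audio formats to Twilio's mulaw means the bridge never transcodes, which is part of why it stays fast.&lt;/p&gt;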

&lt;p&gt;&lt;strong&gt;Voice selection:&lt;/strong&gt; OpenAI offers several voices. I went with &lt;code&gt;ash&lt;/code&gt; — it's deeper and more natural for a male-presenting agent. The voice quality from the Realtime API is noticeably better than traditional TTS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interruption handling:&lt;/strong&gt; The Realtime API handles barge-in natively. If someone starts talking while the agent is speaking, it detects the interruption and stops. No custom VAD needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DNS and SSL:&lt;/strong&gt; Twilio needs a public WebSocket endpoint. I pointed &lt;code&gt;realtime.byldr.co&lt;/code&gt; at my VPS (DNS-only, not through Cloudflare's proxy — long-lived WebSocket streams tend to hit timeouts and buffering issues behind it) and used Let's Encrypt for SSL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Results
&lt;/h2&gt;

&lt;p&gt;End-to-end latency: roughly 200ms from when someone finishes speaking to when the agent starts responding. That's fast enough to feel conversational. People don't notice the gap.&lt;/p&gt;

&lt;p&gt;I tested it by having the agent cold-call me with a comically aggressive sales pitch. I played along, gave a fake credit card number, and the agent handled the whole exchange naturally. Full transcript captured on both sides.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;The Realtime API is still relatively new, and the pricing is steep — audio tokens cost significantly more than text tokens. For high-volume use cases, the three-step pipeline with a faster STT (like Deepgram) might be more cost-effective, even with the latency hit.&lt;/p&gt;

&lt;p&gt;Also, error handling on WebSocket reconnection needs more thought. Twilio's Media Streams can hiccup, and if either WebSocket drops, you need to gracefully restart the bridge without the caller hearing dead air.&lt;/p&gt;
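&lt;p&gt;The retry policy I have in mind looks roughly like this. Names and thresholds are illustrative, not from the production bridge:&lt;/p&gt;

```javascript
// Backoff schedule: 250ms, 500ms, 1s, 2s, capped at 4s.
function backoffDelayMs(attempt) {
  return Math.min(250 * 2 ** attempt, 4000);
}

// Reconnect loop sketch; `connect` returns a Promise for a fresh WebSocket.
// After maxRetries, give up and end the call cleanly rather than
// leaving the caller in dead air.
function reconnectWithBackoff(connect, maxRetries = 5, attempt = 0) {
  return connect().catch((err) => {
    if (attempt >= maxRetries) throw err; // caller plays a goodbye message
    return new Promise((resolve) =>
      setTimeout(
        () => resolve(reconnectWithBackoff(connect, maxRetries, attempt + 1)),
        backoffDelayMs(attempt)
      )
    );
  });
}
```

&lt;p&gt;While the OpenAI leg reconnects, the Twilio leg should keep playing something — even comfort noise — so the caller doesn't hang up.&lt;/p&gt;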

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runtime:&lt;/strong&gt; Node.js on a $10/month VPS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phone:&lt;/strong&gt; Twilio (inbound + outbound)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI:&lt;/strong&gt; OpenAI Realtime API (gpt-4o-realtime-preview)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process manager:&lt;/strong&gt; PM2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Total infrastructure cost:&lt;/strong&gt; ~$15/month plus per-minute API costs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't need a fancy MLOps platform to build voice AI that feels real. A VPS, two WebSocket connections, and some careful audio piping gets you 90% of the way there.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>api</category>
      <category>openai</category>
      <category>performance</category>
    </item>
    <item>
      <title>Building an Autonomous Content Pipeline: From Cron Job to Cross-Post</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Sat, 14 Mar 2026 10:01:09 +0000</pubDate>
      <link>https://dev.to/ryancwynar/building-an-autonomous-content-pipeline-from-cron-job-to-cross-post-44gm</link>
      <guid>https://dev.to/ryancwynar/building-an-autonomous-content-pipeline-from-cron-job-to-cross-post-44gm</guid>
      <description>&lt;p&gt;I have a confession: I didn't write this blog post. Well, I did — I built the system that writes it. Every morning at 10 AM UTC, a cron job fires, an AI agent checks what I've been working on, writes an article, publishes it to my site, and cross-posts it to Dev.to and Hashnode. No approval step. No drafts sitting in a queue. Just ship.&lt;/p&gt;

&lt;p&gt;Here's how the whole thing works.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trigger
&lt;/h2&gt;

&lt;p&gt;It starts with a cron job. Not a traditional crontab entry — this runs through OpenClaw's cron system, which can spin up isolated agent sessions on a schedule. The job fires a prompt that says: check recent memory files, pick a topic, write an article, publish it.&lt;/p&gt;

&lt;p&gt;The key insight is that the agent has access to daily memory files (&lt;code&gt;memory/YYYY-MM-DD.md&lt;/code&gt;) that log everything I've been building. So it's not generating content from thin air — it's writing about real work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;The publishing pipeline is surprisingly simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Convex backend&lt;/strong&gt; — My site runs on a self-hosted Convex instance. A single mutation (&lt;code&gt;posts:upsertPost&lt;/code&gt;) handles creating or updating blog posts with title, slug, content, excerpt, and cross-posting flags.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-posting&lt;/strong&gt; — A separate Convex action (&lt;code&gt;crossPost:crossPostArticle&lt;/code&gt;) takes the slug and pushes the article to Dev.to (via their API) and Hashnode (via GraphQL). Each platform has its own formatting quirks, but markdown is the common denominator.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Notification&lt;/strong&gt; — After publishing, the agent sends me a WhatsApp message with the link. I find out about my own blog post the same way my readers do.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
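&lt;p&gt;The handoff between steps 1 and 2 can be sketched like this. Here &lt;code&gt;convexCall&lt;/code&gt; stands in for a Convex client invocation; the two function names come from my setup, everything else is illustrative:&lt;/p&gt;

```javascript
// Two-step publish flow: upsert the post, then fan out to other platforms.
// `convexCall(name, args)` is a stand-in for a Convex client call
// (e.g. ConvexHttpClient); the function names match my deployment.
async function publish(convexCall, article) {
  // 1. Create or update the post on the site (idempotent by slug)
  await convexCall("posts:upsertPost", {
    title: article.title,
    slug: article.slug,
    content: article.markdown,
    excerpt: article.excerpt,
  });
  // 2. Push to Dev.to and Hashnode from a Convex action
  await convexCall("crossPost:crossPostArticle", { slug: article.slug });
}
```

&lt;p&gt;Keeping the upsert idempotent by slug matters: if the cross-post fails, the agent can safely re-run the whole flow.&lt;/p&gt;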

&lt;h2&gt;
  
  
  What I Learned Building This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Memory files are the secret sauce.&lt;/strong&gt; The agent wakes up fresh every session with no context. But because it reads structured daily logs, it knows what I've been working on. Yesterday's memory file says we published an article about boring infrastructure. The day before, we drafted a LinkedIn post about AI automation. The agent picks something new and avoids repeating itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No approval gates means you actually ship.&lt;/strong&gt; I used to have a "draft and review" step. Know what happened? Drafts piled up. I'd review them three days later, decide they were stale, and delete them. Removing the approval step was scary but effective. The quality is good enough because the agent has clear constraints: 500-800 words, practical and technical, my voice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-posting multiplies reach for free.&lt;/strong&gt; Dev.to and Hashnode both support canonical URLs, so there's no SEO penalty. One article becomes three touchpoints. The API integrations took maybe two hours total to build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Saturation signals matter.&lt;/strong&gt; After running automated prospecting campaigns for weeks, I learned that more isn't always better. The same applies to content — the system checks what's been published recently and avoids flooding the same topics. It's not just about producing content; it's about producing &lt;em&gt;varied&lt;/em&gt; content.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Unsexy Parts
&lt;/h2&gt;

&lt;p&gt;Most of the work wasn't the AI integration. It was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Getting the Convex self-hosted instance stable behind Traefik&lt;/li&gt;
&lt;li&gt;Handling API rate limits on Dev.to (they're strict)&lt;/li&gt;
&lt;li&gt;Dealing with Hashnode's GraphQL schema changes&lt;/li&gt;
&lt;li&gt;Making sure the cron job doesn't fire twice if the gateway restarts&lt;/li&gt;
&lt;li&gt;Error handling when the cross-post fails but the main post succeeded&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the pattern I keep seeing: 20% of the work is the cool AI stuff, 80% is plumbing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should You Build This?
&lt;/h2&gt;

&lt;p&gt;If you're a developer who wants to write more but doesn't, yes. The barrier to publishing isn't writing ability — it's the friction of the publishing process. Automate the friction, and content flows.&lt;/p&gt;

&lt;p&gt;If you're worried about quality: set constraints. Word count limits, topic guidelines, voice examples. The agent follows instructions well when they're specific.&lt;/p&gt;

&lt;p&gt;If you're worried about authenticity: every article is based on real work I'm actually doing. It's not hallucinated thought leadership. It's a technical log with personality.&lt;/p&gt;

&lt;p&gt;The whole pipeline took about a day to build. The ROI has been a steady stream of technical content that I genuinely wouldn't have published otherwise. Sometimes the best code you write is the code that writes for you.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>showdev</category>
    </item>
    <item>
      <title>When Your AI Pipeline Hits the Wall</title>
      <dc:creator>RyanCwynar</dc:creator>
      <pubDate>Fri, 13 Mar 2026 19:01:14 +0000</pubDate>
      <link>https://dev.to/ryancwynar/when-your-ai-pipeline-hits-the-wall-1fng</link>
      <guid>https://dev.to/ryancwynar/when-your-ai-pipeline-hits-the-wall-1fng</guid>
      <description>&lt;p&gt;You build an automated prospecting system. It works. It finds leads while you sleep, validates phone numbers, categorizes by campaign, queues everything for outreach. You wake up to 15 new qualified prospects. Life is good.&lt;/p&gt;

&lt;p&gt;Then one morning you wake up to zero.&lt;/p&gt;

&lt;p&gt;Not because anything broke. Because your AI exhausted the market.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;I built an autonomous prospect discovery system for AI receptionist demos targeting South Florida. The stack is straightforward: cron jobs trigger search queries every few hours, results get scraped and validated, duplicates get filtered, and qualified leads land in a CRM queue tagged by campaign type.&lt;/p&gt;

&lt;p&gt;Dentists. Law firms. Medical practices. CPAs. The system ran 24/7 across multiple campaign verticals, and within a few weeks it had queued up 260+ prospects.&lt;/p&gt;

&lt;p&gt;Then the dedup rate started climbing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Saturation Signal
&lt;/h2&gt;

&lt;p&gt;At first, maybe 1 in 5 candidates was a duplicate. Normal. Then it was 1 in 3. Then 3 out of 4. One run searched four different queries across three cities and found exactly zero new prospects.&lt;/p&gt;

&lt;p&gt;This is the saturation signal, and most people miss it because they are not tracking the right metric. Everyone watches the output count — how many leads did we add today? But the real signal is the &lt;em&gt;rejection ratio&lt;/em&gt;. When your system is spending more cycles filtering duplicates than finding new targets, you have hit the edges of your market segment.&lt;/p&gt;
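&lt;p&gt;Tracking that ratio takes only a few lines. A sketch, with an illustrative window and threshold:&lt;/p&gt;

```javascript
// Saturation check: over the last `windowSize` discovery runs, flag when
// the duplicate ratio crosses `threshold`. Both values are illustrative.
function isSaturated(runs, windowSize = 5, threshold = 0.75) {
  const recent = runs.slice(-windowSize);
  const candidates = recent.reduce((sum, r) => sum + r.candidates, 0);
  const duplicates = recent.reduce((sum, r) => sum + r.duplicates, 0);
  if (candidates === 0) return false; // no data yet, don't flag
  return duplicates / candidates >= threshold;
}
```

&lt;p&gt;Wire that into the end of each discovery run and you get the "change strategy" flag for free.&lt;/p&gt;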

&lt;h2&gt;
  
  
  Why This Matters More Than You Think
&lt;/h2&gt;

&lt;p&gt;In traditional sales, you rarely hit true geographic saturation. The Rolodex is infinite because humans are slow. But automation changes the math. An AI that searches, scrapes, and validates 24/7 can literally exhaust a vertical in a metro area within weeks.&lt;/p&gt;

&lt;p&gt;This is not a bug. It is a feature — if you recognize it.&lt;/p&gt;

&lt;p&gt;Saturation means you have &lt;em&gt;complete coverage&lt;/em&gt; of that segment. Every dentist in Miami-Dade who has a website and a phone number? You have them. That is a powerful position for a sales team. Instead of wondering if you missed someone, you know the list is comprehensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  What To Do When You Hit the Wall
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Expand geography.&lt;/strong&gt; South Florida is saturated? Move to Tampa, Orlando, Jacksonville. Same verticals, new territory. The system does not care about zip codes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Expand verticals.&lt;/strong&gt; You have every dentist. What about veterinarians? Chiropractors? Insurance agencies? The AI receptionist pitch works for any business that answers phones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Go deeper, not wider.&lt;/strong&gt; Instead of finding &lt;em&gt;more&lt;/em&gt; prospects, enrich the ones you have. Pull Google reviews, check social media presence, estimate practice size. The best outreach is targeted outreach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Shift from discovery to conversion.&lt;/strong&gt; At some point the bottleneck is not finding leads — it is closing them. Redirect automation effort from prospecting to follow-up sequences, call scheduling, and nurture campaigns.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Meta Lesson
&lt;/h2&gt;

&lt;p&gt;Every automated pipeline has a ceiling. The interesting question is not whether you will hit it, but whether your system &lt;em&gt;tells you&lt;/em&gt; when you do.&lt;/p&gt;

&lt;p&gt;Most automation just keeps running, burning API credits on searches that return nothing new. The smart version tracks its own diminishing returns and raises a flag: "Hey, I am finding 90% duplicates. Time to change strategy."&lt;/p&gt;

&lt;p&gt;This is the difference between automation and &lt;em&gt;intelligent&lt;/em&gt; automation. The first one does what you told it. The second one tells you when what you told it stopped working.&lt;/p&gt;

&lt;p&gt;Build the signal detection into your pipelines from day one. Your future self — the one staring at a dashboard wondering why lead volume dropped — will thank you.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
      <category>webscraping</category>
    </item>
  </channel>
</rss>
