<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yuri Tománek</title>
    <description>The latest articles on DEV Community by Yuri Tománek (@ahojmetrics).</description>
    <link>https://dev.to/ahojmetrics</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3718821%2Fc1fe9ce3-549c-4a91-a06c-3cf87be31dd4.jpeg</url>
      <title>DEV Community: Yuri Tománek</title>
      <link>https://dev.to/ahojmetrics</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ahojmetrics"/>
    <language>en</language>
    <item>
      <title>Your Lighthouse Score Is Only Half the Story</title>
      <dc:creator>Yuri Tománek</dc:creator>
      <pubDate>Sun, 22 Feb 2026 04:24:03 +0000</pubDate>
      <link>https://dev.to/ahojmetrics/your-lighthouse-score-is-only-half-the-story-1d80</link>
      <guid>https://dev.to/ahojmetrics/your-lighthouse-score-is-only-half-the-story-1d80</guid>
      <description>&lt;p&gt;A Lighthouse score of 95 feels great. Until you check what your actual users experience and find that 40% of them are getting a Poor LCP.&lt;/p&gt;

&lt;p&gt;How? Because Lighthouse runs in a controlled environment. Fixed CPU, fixed network, no browser extensions, cold cache. Your real users are on old Android phones, congested Wi-Fi, with 12 Chrome extensions installed. The test and reality can be very different.&lt;/p&gt;

&lt;p&gt;We just shipped &lt;strong&gt;Field Data&lt;/strong&gt; in Ahoj Metrics to close that gap. You can now look up real Chrome user experience data for any domain or URL, right alongside your Lighthouse audits.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Field Data?
&lt;/h2&gt;

&lt;p&gt;The data comes from Google's Chrome User Experience Report (CrUX). It's an aggregated, anonymized dataset of real performance timings collected from Chrome users who have opted in to sharing usage statistics.&lt;/p&gt;

&lt;p&gt;When someone visits your site in Chrome, their browser quietly measures how long things take to load, how quickly the page responds to clicks, and how much the layout shifts around. Google aggregates this data across all opted-in Chrome users and makes it available through the CrUX API.&lt;/p&gt;
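
&lt;p&gt;If you want to poke at the raw data yourself, the CrUX API is a single HTTP endpoint. Here's a minimal Ruby sketch - the endpoint and payload shape follow Google's public API docs, and the API key is a placeholder you'd get from the Google Cloud console:&lt;/p&gt;

```ruby
require "json"
require "net/http"
require "uri"

# Build the queryRecord payload. Use origin: for a whole-domain record or
# url: for a single page; form_factor ("PHONE", "DESKTOP", "TABLET") is
# optional and narrows the record.
def crux_request_body(origin: nil, url: nil, form_factor: nil)
  body = {}
  body[:origin] = origin if origin
  body[:url] = url if url
  body[:formFactor] = form_factor if form_factor
  body
end

# One-shot lookup against the public CrUX API. The key is a placeholder.
def fetch_crux(api_key, origin:)
  uri = URI("https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=#{api_key}")
  headers = Hash["Content-Type", "application/json"]
  response = Net::HTTP.post(uri, JSON.generate(crux_request_body(origin: origin)), headers)
  JSON.parse(response.body)
end
```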

&lt;p&gt;A few important details about how CrUX works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;28-day rolling window.&lt;/strong&gt; The data represents the last 28 days of real user visits. No single bad day can spike the numbers. No single good day can hide persistent problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;75th percentile (p75).&lt;/strong&gt; The reported value isn't the average. It's the experience of someone at the 75th percentile, meaning 75% of your visitors had a better experience than this number, and 25% had a worse one. This is intentional. Google wants you to optimize for the tail, not the middle.&lt;/p&gt;
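
&lt;p&gt;To make the p75 idea concrete, here's the nearest-rank definition in a few lines of Ruby. This is purely illustrative - CrUX computes the percentile for you:&lt;/p&gt;

```ruby
# Nearest-rank percentile: the value at rank ceil(0.75 * n) in sorted order.
def p75(samples)
  sorted = samples.sort
  sorted[(0.75 * sorted.length).ceil - 1]
end

# Hypothetical LCP samples in milliseconds:
p75([1800, 2100, 2300, 2400, 2600, 3900, 5200, 9000]) # 3900
```

&lt;p&gt;Six of the eight visits (75%) were at or under 3900ms; the two slowest ones - the tail Google wants you to care about - sit above it.&lt;/p&gt;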

&lt;p&gt;&lt;strong&gt;Good / Needs Improvement / Poor distribution.&lt;/strong&gt; Every page load gets classified against Google's thresholds. You can see what percentage of your users fall into each bucket. A site might have 80% Good, 12% Needs Improvement, and 8% Poor for LCP. That distribution tells you more than any single number.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab Data vs Field Data
&lt;/h2&gt;

&lt;p&gt;This is the core concept. Both are useful. Neither is complete on its own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lab data (Lighthouse)&lt;/strong&gt; tests your site in a controlled environment. Same CPU, same network throttling, same browser config, every time. It's reproducible. It's great for finding issues, comparing before/after a deployment, and running automated tests in CI/CD. But it's synthetic. It doesn't represent any real user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Field data (CrUX)&lt;/strong&gt; measures what your actual visitors experience. Real devices, real networks, real browser configurations. It's messy and variable, but it's the truth. It's also what Google uses for Core Web Vitals in Search ranking.&lt;/p&gt;

&lt;p&gt;Here's where it gets interesting: these two numbers can disagree significantly.&lt;/p&gt;

&lt;p&gt;A site might score 68 on Lighthouse (worrying) but show 85% Good LCP in CrUX (fine in practice). Why? Maybe most of your users are on fast connections with warm caches, so the real experience is better than what the lab predicts.&lt;/p&gt;

&lt;p&gt;Or the reverse: a Lighthouse score of 92 (looks great) but only 55% Good LCP in CrUX (a real problem). Maybe your audience skews toward mobile users in regions with slower connectivity, and the lab test doesn't capture that.&lt;/p&gt;

&lt;p&gt;Neither number is "right." Lab data tells you what's wrong. Field data tells you the impact. You need both to make good decisions about where to spend your optimization time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Metrics
&lt;/h2&gt;

&lt;p&gt;Field Data in Ahoj Metrics shows five metrics:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LCP (Largest Contentful Paint)&lt;/strong&gt; measures how quickly the main content loads. This is usually the hero image, a large heading, or a video thumbnail. Google considers under 2.5 seconds "Good."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;INP (Interaction to Next Paint)&lt;/strong&gt; measures how responsive the page is to user input. When someone taps a button or clicks a link, how long before something visibly happens? Under 200ms is "Good." INP replaced FID (First Input Delay) as a Core Web Vital in 2024.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLS (Cumulative Layout Shift)&lt;/strong&gt; measures how much the layout jumps around while loading. You know when you're about to tap a button and an ad loads above it, pushing everything down? That's layout shift. Under 0.10 is "Good."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FCP (First Contentful Paint)&lt;/strong&gt; measures how quickly the first piece of content appears. Not the main content (that's LCP), just anything: text, an image, the background color. Under 1.8 seconds is "Good."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTFB (Time to First Byte)&lt;/strong&gt; measures how quickly the server responds to the browser's request. Under 800ms is "Good."&lt;/p&gt;

&lt;p&gt;LCP, INP, and CLS are Google's three Core Web Vitals. These are the metrics that directly feed into Google's search ranking signals. If you can only focus on three things, focus on these.&lt;/p&gt;
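
&lt;p&gt;The thresholds are mechanical enough to express directly. The "Good" ceilings below are the ones quoted above; the "Poor" floors are Google's published boundaries for each metric:&lt;/p&gt;

```ruby
# "Good" ceilings from the post; "Poor" floors are Google's published
# thresholds. Times are in milliseconds; CLS is unitless.
THRESHOLDS = {
  lcp:  { good: 2500, poor: 4000 },
  inp:  { good: 200,  poor: 500 },
  cls:  { good: 0.1,  poor: 0.25 },
  fcp:  { good: 1800, poor: 3000 },
  ttfb: { good: 800,  poor: 1800 }
}.freeze

def bucket(metric, value)
  limits = THRESHOLDS.fetch(metric)
  return :good if value.between?(0, limits[:good])
  return :needs_improvement if value.between?(0, limits[:poor])
  :poor
end

bucket(:lcp, 2300) # :good
bucket(:inp, 350)  # :needs_improvement
bucket(:cls, 0.3)  # :poor
```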

&lt;h2&gt;
  
  
  How to Use It
&lt;/h2&gt;

&lt;p&gt;Go to &lt;strong&gt;Field Data&lt;/strong&gt; in the Ahoj Metrics sidebar. Enter any domain (like &lt;code&gt;https://stripe.com&lt;/code&gt;) or a specific URL. Hit &lt;strong&gt;Look Up Field Data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You'll see the p75 value and the Good/Needs Improvement/Poor distribution for all five metrics. Instant results, no audit credits used.&lt;/p&gt;

&lt;p&gt;A few things to know:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It works for any public site.&lt;/strong&gt; You can look up your competitors, your clients, or any site you're curious about. The data is public.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not every URL has data.&lt;/strong&gt; CrUX needs a meaningful amount of Chrome traffic to generate a record. If you look up an internal tool, a brand new site, or a low-traffic page, Google won't have data for it. You'll see a clear message when that happens. Origin-level lookups (the whole domain) are more likely to have data than individual URLs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's available to all users.&lt;/strong&gt; Free tier, paid plans, everyone. Field Data lookups don't count against your audit quota. The CrUX API is free from Google, and we saw no reason to gate it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How This Changes Your Workflow
&lt;/h2&gt;

&lt;p&gt;Before, an Ahoj Metrics workflow looked like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run Lighthouse audit from multiple regions&lt;/li&gt;
&lt;li&gt;See scores and recommendations&lt;/li&gt;
&lt;li&gt;Fix issues&lt;/li&gt;
&lt;li&gt;Run another audit to verify&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now it looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check Field Data for a baseline of what real users experience&lt;/li&gt;
&lt;li&gt;Run Lighthouse audit from multiple regions to find specific issues&lt;/li&gt;
&lt;li&gt;Fix issues&lt;/li&gt;
&lt;li&gt;Run another audit to verify the fix&lt;/li&gt;
&lt;li&gt;Wait for field data to update (28-day rolling window) to confirm the real-world impact&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Field data gives you the "why" behind your optimization work. You're not fixing things because a synthetic test says so. You're fixing things because 30% of your real users are getting a Poor LCP.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Not Just Use PageSpeed Insights?
&lt;/h2&gt;

&lt;p&gt;Google's PageSpeed Insights already shows CrUX data. It's free and it works. So why look at it in Ahoj Metrics?&lt;/p&gt;

&lt;p&gt;Context. In PSI, field data lives on Google's website, separate from everything else. You look up a URL, see the numbers, close the tab. In Ahoj Metrics, field data lives next to your Lighthouse audits, your monitors, and your historical data. You can see how your lab scores compare to real-world experience for the same site, in the same tool, without switching between tabs.&lt;/p&gt;

&lt;p&gt;PSI also doesn't save history, doesn't compare across sites, and doesn't integrate into a monitoring workflow. It's a snapshot tool. Ahoj Metrics is trying to be the place where all your performance data lives together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Details
&lt;/h2&gt;

&lt;p&gt;For anyone curious about the implementation:&lt;/p&gt;

&lt;p&gt;We built a thin Ruby wrapper around the CrUX API (&lt;code&gt;ahojmetrics/crux-api&lt;/code&gt;). Results are cached server-side for 12 hours using Solid Cache (PostgreSQL-backed, same as the rest of our infrastructure). Repeat lookups for the same URL are instant.&lt;/p&gt;
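
&lt;p&gt;In the app, the caching is one call to &lt;code&gt;Rails.cache.fetch&lt;/code&gt; with &lt;code&gt;expires_in: 12.hours&lt;/code&gt;. For anyone unfamiliar with the read-through pattern, here are the same semantics in plain Ruby - the class and its injectable clock are illustrative, not our production code:&lt;/p&gt;

```ruby
# Read-through cache with a TTL, mirroring Rails.cache.fetch(key,
# expires_in: ttl) { ... }. The clock is injectable to make it testable.
class TtlCache
  def initialize(ttl_seconds, clock = proc { Time.now.to_f })
    @ttl = ttl_seconds
    @clock = clock
    @store = {}
  end

  # Return the cached value if still fresh, otherwise run the block
  # and cache its result.
  def fetch(key)
    entry = @store[key]
    if entry.nil? or (@clock.call - entry[:at] - @ttl).positive?
      @store[key] = { at: @clock.call, value: yield }
    end
    @store[key][:value]
  end
end
```

&lt;p&gt;Repeat lookups inside the window never touch Google's API, which is why they come back instantly.&lt;/p&gt;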

&lt;p&gt;The API response from Google is verbose. Metric names are long (&lt;code&gt;largest_contentful_paint&lt;/code&gt;), CLS comes back as a string float, and the structure is nested. Our serializer normalizes everything into a clean JSON shape with short keys (&lt;code&gt;lcp&lt;/code&gt;, &lt;code&gt;inp&lt;/code&gt;, &lt;code&gt;cls&lt;/code&gt;) that the frontend can work with easily.&lt;/p&gt;
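
&lt;p&gt;A stripped-down version of that normalization - the long metric names match the public CrUX API, while the serializer we actually ship also carries histograms and collection periods:&lt;/p&gt;

```ruby
require "json"

# CrUX metric names mapped to the short keys the frontend expects. Note
# that the TTFB metric is still named "experimental" in the API.
SHORT_KEYS = Hash[
  "largest_contentful_paint", "lcp",
  "interaction_to_next_paint", "inp",
  "cumulative_layout_shift", "cls",
  "first_contentful_paint", "fcp",
  "experimental_time_to_first_byte", "ttfb"
].freeze

# Pull each metric's p75 out of the nested response. Float() also fixes
# the quirk that CLS arrives as a string float like "0.05".
def normalize_crux(record)
  record.fetch("metrics").each_with_object({}) do |(name, data), out|
    short = SHORT_KEYS[name]
    next if short.nil?
    p75 = data.dig("percentiles", "p75")
    out[short] = Float(p75) unless p75.nil?
  end
end
```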

&lt;p&gt;Authentication is the same as every other Ahoj endpoint. Standard JWT/session auth, no separate API key needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Field Data is a lookup tool today. You search for a URL and see the current CrUX data. We're thinking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Historical field data tracking.&lt;/strong&gt; Store CrUX snapshots over time so you can see trends, not just the current 28-day window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Field data alongside monitors.&lt;/strong&gt; When your automated Lighthouse monitor runs, also pull the CrUX data for that URL and display them together.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Field vs lab comparison view.&lt;/strong&gt; A side-by-side showing your Lighthouse lab metrics and CrUX field metrics for the same URL, highlighting where they agree and where they diverge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any of those would be particularly useful to you, I'd love to hear about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Sign in to &lt;a href="https://ahojmetrics.com" rel="noopener noreferrer"&gt;Ahoj Metrics&lt;/a&gt; and go to Field Data in the sidebar. Look up your own site, look up your competitors, look up anything. No credits used, no limits.&lt;/p&gt;

&lt;p&gt;If you don't have an account, the free tier gives you 20 Lighthouse audits per month plus unlimited Field Data lookups.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Ahoj Metrics is a performance monitoring tool that runs Lighthouse audits from 18 global regions and now shows real Chrome user experience data via CrUX. Built with Rails 8, Solid Queue, and Fly.io.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webperf</category>
      <category>lighthouse</category>
      <category>corewebvitals</category>
      <category>seo</category>
    </item>
    <item>
      <title>How We Run Lighthouse from 18 Regions in Under 2 Minutes</title>
      <dc:creator>Yuri Tománek</dc:creator>
      <pubDate>Sat, 14 Feb 2026 12:19:08 +0000</pubDate>
      <link>https://dev.to/ahojmetrics/how-we-run-lighthouse-from-18-regions-in-under-2-minutes-pd7</link>
      <guid>https://dev.to/ahojmetrics/how-we-run-lighthouse-from-18-regions-in-under-2-minutes-pd7</guid>
      <description>&lt;p&gt;Most performance monitoring tools test your site from one location, or run tests sequentially across regions. That means testing from 18 locations can take 20+ minutes.&lt;/p&gt;

&lt;p&gt;We needed something faster. Ahoj Metrics tests from 18 global regions simultaneously in about 2 minutes. Here's how.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;The core idea is simple: don't keep workers running. Spawn them on demand, run the test, destroy them.&lt;/p&gt;

&lt;p&gt;We use &lt;a href="https://fly.io/docs/machines/" rel="noopener noreferrer"&gt;Fly.io's Machines API&lt;/a&gt; to create ephemeral containers in specific regions. Each container runs a single Lighthouse audit, sends the results back via webhook, and destroys itself.&lt;/p&gt;

&lt;p&gt;Here's how a request flows through the system:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxec3u4coldl7v6bhhx5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxec3u4coldl7v6bhhx5r.png" alt=" " width="800" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key design decision: &lt;strong&gt;one audit = one ReportRequest&lt;/strong&gt;, regardless of how many regions you test. Test from 1 region or 18 - it's the same user action.&lt;/p&gt;
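
&lt;p&gt;The fan-out itself is small: one spawn per region, in parallel, all attached to the same ReportRequest. Here's a simplified sketch with an injected spawner - in production the spawner wraps &lt;code&gt;FlyMachinesService.create_machine&lt;/code&gt;; bare threads just keep the sketch self-contained:&lt;/p&gt;

```ruby
# Simplified fan-out: one spawn per region, in parallel, all tied to one
# report_request_id. The spawner is injected so this stays testable.
def fan_out(report_request_id, regions, spawner)
  threads = regions.map do |region|
    Thread.new do
      result = spawner.call(region: region, request_id: report_request_id)
      { region: region, ok: result }
    end
  end
  threads.map { |t| t.value }
end
```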

&lt;h2&gt;
  
  
  Spawning Machines with the Fly.io API
&lt;/h2&gt;

&lt;p&gt;Here's the actual code that creates a machine in a specific region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FlyMachinesService&lt;/span&gt;
  &lt;span class="no"&gt;API_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://api.machines.dev/v1"&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nc"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_machine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:,&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:,&lt;/span&gt; &lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;:)&lt;/span&gt;
    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;API_BASE_URL&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/apps/&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/machines"&lt;/span&gt;

    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="ss"&gt;region: &lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;config: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="ss"&gt;image: &lt;/span&gt;&lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"WORKER_IMAGE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"registry.fly.io/am-worker:latest"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="ss"&gt;size: &lt;/span&gt;&lt;span class="s2"&gt;"performance-8x"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;auto_destroy: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;restart: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;policy: &lt;/span&gt;&lt;span class="s2"&gt;"no"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="ss"&gt;stop_config: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="ss"&gt;timeout: &lt;/span&gt;&lt;span class="s2"&gt;"30s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="ss"&gt;signal: &lt;/span&gt;&lt;span class="s2"&gt;"SIGTERM"&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="ss"&gt;env: &lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;services: &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;HTTParty&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;headers: &lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;body: &lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;timeout: &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;success?&lt;/span&gt;
      &lt;span class="no"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;success: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;data: &lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parsed_response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="no"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="ss"&gt;success: &lt;/span&gt;&lt;span class="kp"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;error: &lt;/span&gt;&lt;span class="s2"&gt;"API error: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; - &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;body&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things worth noting:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;auto_destroy: true&lt;/code&gt;&lt;/strong&gt; is the magic. The machine cleans itself up after the process exits. No lingering containers, no zombie workers, no cleanup cron jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;performance-8x&lt;/code&gt;&lt;/strong&gt; gives us 8 vCPUs and 16GB RAM. Lighthouse is resource-hungry - it runs a full Chrome instance. Underpowered machines produce inconsistent scores because Chrome competes for CPU time. We tried smaller sizes and the variance was too high.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;restart: { policy: "no" }&lt;/code&gt;&lt;/strong&gt; means if Lighthouse crashes, the machine just dies. We handle the failure on the Rails side by checking for timed-out reports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;services: []&lt;/code&gt;&lt;/strong&gt; means no public ports. The worker doesn't need to accept incoming traffic. It runs Lighthouse and POSTs results back to our API. That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Worker
&lt;/h2&gt;

&lt;p&gt;Each Fly.io machine runs a Docker container that does roughly this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read environment variables (target URL, callback URL, report ID)&lt;/li&gt;
&lt;li&gt;Launch headless Chrome&lt;/li&gt;
&lt;li&gt;Run Lighthouse audit&lt;/li&gt;
&lt;li&gt;POST the JSON results back to the Rails API&lt;/li&gt;
&lt;li&gt;Exit (machine auto-destroys)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The callback is a simple webhook. The worker doesn't need to know anything about our database, user accounts, or billing. It just runs a test and reports back.&lt;/p&gt;
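
&lt;p&gt;The worker container lives outside this codebase, but the shape of its callback is easy to sketch. The payload field names below are illustrative; the &lt;code&gt;categories&lt;/code&gt; and &lt;code&gt;audits&lt;/code&gt; paths are standard Lighthouse report JSON:&lt;/p&gt;

```ruby
require "json"

# What the worker POSTs back, built from a raw Lighthouse JSON report.
# Field names on the left are an illustrative webhook contract; the dig
# paths are standard Lighthouse report structure.
def callback_payload(report_id, lighthouse_json)
  report = JSON.parse(lighthouse_json)
  {
    report_id: report_id,
    performance_score: (report.dig("categories", "performance", "score").to_f * 100).round,
    lcp_ms: report.dig("audits", "largest-contentful-paint", "numericValue"),
    cls: report.dig("audits", "cumulative-layout-shift", "numericValue")
  }
end
```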

&lt;h2&gt;
  
  
  Handling Results
&lt;/h2&gt;

&lt;p&gt;On the Rails side, each Report record tracks its own status, and the parent ReportRequest checks for completion after every update:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ReportRequest&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationRecord&lt;/span&gt;
  &lt;span class="n"&gt;has_many&lt;/span&gt; &lt;span class="ss"&gt;:reports&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;check_completion!&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;reports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="ss"&gt;:completed?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;update!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;status: &lt;/span&gt;&lt;span class="s2"&gt;"completed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;update_cached_stats!&lt;/span&gt;
    &lt;span class="n"&gt;check_monitor_alert&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;site_monitor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;present?&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a worker POSTs results, the corresponding Report is updated. After each update, we check if all reports for the request are done. If so, we aggregate the results, calculate averages, and update the dashboard.&lt;/p&gt;

&lt;p&gt;Each report is independent. If the Sydney worker fails but the other 17 succeed, you still get 17 results. The failed region shows as an error without blocking everything else.&lt;/p&gt;
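
&lt;p&gt;That independence makes the aggregation step tolerant of holes. Here's a sketch of the averaging logic with plain hashes standing in for Report records - the helper name is hypothetical:&lt;/p&gt;

```ruby
# Aggregate per-region results, ignoring failed regions. Each report is a
# hash like { region: "syd", score: 88 }; a failed region has a nil score.
def aggregate_scores(reports)
  scored = reports.reject { |r| r[:score].nil? }
  return nil if scored.empty?
  {
    regions_reporting: scored.length,
    average_score: (scored.sum { |r| r[:score] }.to_f / scored.length).round(1)
  }
end
```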

&lt;h2&gt;
  
  
  Cost Math
&lt;/h2&gt;

&lt;p&gt;This is the part that makes ephemeral workers compelling. Compare two approaches:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent workers (18 regions, always-on):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;18 performance-8x machines running 24/7&lt;/li&gt;
&lt;li&gt;Based on Fly.io's pricing calculator: ~$2,734/month&lt;/li&gt;
&lt;li&gt;Mostly sitting idle waiting for audit requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ephemeral workers (our approach):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machines run for ~2 minutes per audit&lt;/li&gt;
&lt;li&gt;performance-8x costs roughly $0.0001344/second&lt;/li&gt;
&lt;li&gt;One 18-region audit costs about $0.29&lt;/li&gt;
&lt;li&gt;100 audits/month = ~$29&lt;/li&gt;
&lt;/ul&gt;
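
&lt;p&gt;The per-audit figure falls straight out of those numbers:&lt;/p&gt;

```ruby
PRICE_PER_SECOND = 0.0001344 # performance-8x list price quoted above
REGIONS = 18
SECONDS_PER_AUDIT = 120 # about 2 minutes of machine time per region

cost_per_audit = REGIONS * SECONDS_PER_AUDIT * PRICE_PER_SECOND
cost_per_audit.round(2)      # 0.29 dollars per full 18-region audit
(100 * cost_per_audit).round # 29 dollars for 100 of them
```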

&lt;p&gt;At low volume, ephemeral is dramatically cheaper. The crossover point where persistent workers become more cost-effective is well beyond our current scale.&lt;/p&gt;

&lt;p&gt;The tradeoff is cold start time. Each machine takes a few seconds to boot. For our use case (users expect a 1-2 minute wait anyway), that's invisible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Background Job Layer
&lt;/h2&gt;

&lt;p&gt;We use Solid Queue (Rails 8's built-in job backend) for everything. No Redis, no Sidekiq.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/recurring.yml&lt;/span&gt;
&lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;monitor_scheduler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MonitorSchedulerJob&lt;/span&gt;
    &lt;span class="na"&gt;queue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;every minute&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The MonitorSchedulerJob runs every minute, checks which monitors are due for testing, and kicks off the Fly.io machine spawning. Monitor runs are background operations - they don't count toward the user's audit quota.&lt;/p&gt;
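
&lt;p&gt;The "due" check is the interesting part of that job. Roughly, in hash form for illustration - the app does the equivalent as an ActiveRecord where-clause:&lt;/p&gt;

```ruby
# A monitor is due when at least interval_seconds have passed since its
# last run; a monitor that has never run counts as due. Timestamps are
# floats here to keep the sketch self-contained.
def due_monitors(monitors, now)
  monitors.select do |m|
    m[:last_run_at].nil? or (now - m[:last_run_at] - m[:interval_seconds]).positive?
  end
end
```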

&lt;p&gt;This keeps the architecture simple. One PostgreSQL database handles the queue (via Solid Queue), the application data, and the cache. No Redis to manage, no separate queue infrastructure to monitor.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Lighthouse needs consistent resources.&lt;/strong&gt; When we first used shared-cpu machines, scores would vary by 15-20 points between runs of the same URL. Bumping to performance-8x brought variance down to 2-3 points. The extra cost per audit is worth the consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timeouts need multiple layers.&lt;/strong&gt; We set timeouts at the HTTP level (30s for API calls), the machine level (stop_config timeout), and the application level (mark reports as failed after 5 minutes). Belt and suspenders.&lt;/p&gt;
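
&lt;p&gt;The application-level layer is a periodic sweep for stuck reports. The logic, sketched over plain hashes with timestamps as floats - the real version is an ActiveRecord scope doing the equivalent query:&lt;/p&gt;

```ruby
STALE_AFTER_SECONDS = 300 # 5 minutes, matching the deadline above

# Pick out reports still pending past the deadline. These get marked as
# failed so a lost webhook can't leave an audit spinning forever.
def stale_reports(reports, now)
  reports.select do |r|
    r[:status] == "pending" and (now - r[:started_at] - STALE_AFTER_SECONDS).positive?
  end
end
```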

&lt;p&gt;&lt;strong&gt;Region availability isn't guaranteed.&lt;/strong&gt; Sometimes a Fly.io region is temporarily unavailable. We handle this gracefully - the report for that region shows an error, but the rest of the audit completes normally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Webhook delivery can fail.&lt;/strong&gt; If our API is temporarily unreachable when the worker finishes, we lose the result. We're adding a retry mechanism and considering having workers write results to object storage as a fallback.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;After running this in production since January 2026:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average audit time: ~2 minutes (single region or all 18)&lt;/li&gt;
&lt;li&gt;P95 audit time: ~3 minutes&lt;/li&gt;
&lt;li&gt;Machine boot time: 3-8 seconds depending on region&lt;/li&gt;
&lt;li&gt;Success rate: ~97% (3% are timeouts or region availability issues)&lt;/li&gt;
&lt;li&gt;Cost per audit: $0.01-0.29 depending on regions selected&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;You can test this yourself at &lt;a href="https://ahojmetrics.com" rel="noopener noreferrer"&gt;ahojmetrics.com&lt;/a&gt;. The free tier gives you 20 audits/month - enough to see how your site performs from Sydney, Tokyo, São Paulo, London, and more.&lt;/p&gt;

&lt;p&gt;If you have questions about the architecture, ask in the comments. Happy to go deeper on any part of this.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Rails 8.1, Solid Queue, Fly.io Machines API, and PostgreSQL. Frontend is React + TypeScript on Cloudflare Pages.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rails</category>
      <category>lighthouse</category>
      <category>webperf</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Building Ahoj Metrics: Nearly 2 Years, Multiple Rewrites, One Rails SaaS</title>
      <dc:creator>Yuri Tománek</dc:creator>
      <pubDate>Mon, 19 Jan 2026 06:05:25 +0000</pubDate>
      <link>https://dev.to/ahojmetrics/building-ahoj-metrics-nearly-2-years-multiple-rewrites-one-rails-saas-f6l</link>
      <guid>https://dev.to/ahojmetrics/building-ahoj-metrics-nearly-2-years-multiple-rewrites-one-rails-saas-f6l</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; I spent nearly 2 years building a SaaS that runs Lighthouse audits from 18 global regions in ~2 minutes average. After exploring Go, Rust, and TypeScript, I came back to Rails. Here's the journey, the tech stack, and what I learned.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;As a developer, I've always struggled to answer one question: &lt;strong&gt;"How fast is my site... really?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sure, you can run Lighthouse locally. But your MacBook on fast Wi-Fi doesn't represent your users in Sydney on 3G, or customers in São Paulo on a typical mobile connection.&lt;/p&gt;

&lt;p&gt;I wanted a tool that could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test from &lt;strong&gt;multiple global regions&lt;/strong&gt; simultaneously&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;automated monitoring&lt;/strong&gt; with alerts&lt;/li&gt;
&lt;li&gt;Track &lt;strong&gt;performance over time&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Be &lt;strong&gt;fast and reliable&lt;/strong&gt; (no 10-minute waits)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing on the market hit all these points at a reasonable price, so in &lt;strong&gt;February 2024&lt;/strong&gt; I started building &lt;strong&gt;Ahoj Metrics&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Journey: Nearly 2 Years of Iteration
&lt;/h2&gt;

&lt;p&gt;I started this project on February 20th, 2024 with a TypeScript version. Since then, I've:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rewritten the backend&lt;/strong&gt; multiple times (lost count honestly)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explored Go, Rust, and pure TypeScript&lt;/strong&gt; for the backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kept coming back to Rails&lt;/strong&gt; - I've been using it since version 1.0 in early 2006 (20+ years!)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Completely overhauled the UI 3-4 times&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why did I keep rewriting? I was chasing "the perfect stack." Go felt too verbose. Rust was overkill. TypeScript for backend felt like reinventing the wheel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final decision:&lt;/strong&gt; Rails. I have 20+ years of experience with it, the ecosystem is mature, and I can ship fast. Sometimes the best tool is the one you know deeply.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Backend: Rails 8.1
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why Rails?&lt;/strong&gt; 20+ years of experience, mature ecosystem, fast prototyping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solid Queue&lt;/strong&gt; for background jobs (no Redis needed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nanoid IDs&lt;/strong&gt; instead of integers for cleaner URLs and security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-database setup&lt;/strong&gt;: Primary, Cache, Queue, Cable databases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JWT authentication&lt;/strong&gt; for API access&lt;/li&gt;
&lt;/ul&gt;
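
&lt;p&gt;As a rough sketch of the Nanoid idea (the real app most likely uses the &lt;code&gt;nanoid&lt;/code&gt; gem; the module and method names here are mine), generating a URL-safe 21-character ID only needs &lt;code&gt;SecureRandom&lt;/code&gt;:&lt;/p&gt;

```ruby
require "securerandom"

# Illustrative sketch of Nanoid-style IDs; the production app presumably
# relies on the nanoid gem rather than hand-rolling this.
module PublicId
  ALPHABET = ("a".."z").to_a + ("A".."Z").to_a + ("0".."9").to_a + ["_", "-"]

  # 21 characters over a 64-symbol alphabet, e.g. "V1StGXR8_Z5jdHi6B-myT"
  def self.generate(size = 21)
    Array.new(size) { ALPHABET[SecureRandom.random_number(ALPHABET.size)] }.join
  end
end

# In a Rails model this could run in a before_create callback:
#   self.id = PublicId.generate if id.nil?
```

&lt;p&gt;Compared to sequential integers, these IDs don't leak record counts and look cleaner in shared report URLs.&lt;/p&gt;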

&lt;h3&gt;
  
  
  Frontend: React + TypeScript + Vite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hosted separately&lt;/strong&gt; on Cloudflare Pages for edge performance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brutalist design system&lt;/strong&gt; (clean, fast, no-nonsense)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DaisyUI&lt;/strong&gt; for component primitives&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostHog&lt;/strong&gt; for product analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Infrastructure: Fly.io + AWS ECS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fly.io Machines API&lt;/strong&gt; for ephemeral Lighthouse workers: machines spawn on demand, test from 18 regions, and are destroyed immediately after&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ARM64 architecture&lt;/strong&gt; (Graviton) for cost efficiency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted Raspberry Pi runner&lt;/strong&gt; for CI/CD (4.5x faster builds, $0 cost)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Payments: Polar
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Clean API, developer-friendly webhooks&lt;/li&gt;
&lt;li&gt;Supports subscription lifecycle (upgrades, downgrades, failed payments)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Technical Decisions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Ephemeral Workers on Fly.io
&lt;/h3&gt;

&lt;p&gt;Instead of keeping workers running 24/7, we spawn Fly.io machines on-demand using their Machines API.&lt;/p&gt;

&lt;p&gt;Here's the actual code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FlyMachinesService&lt;/span&gt;
  &lt;span class="no"&gt;API_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://api.machines.dev/v1"&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nc"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_machine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:,&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:,&lt;/span&gt; &lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;:)&lt;/span&gt;
    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;API_BASE_URL&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/apps/&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/machines"&lt;/span&gt;

    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="ss"&gt;region: &lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;config: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="ss"&gt;image: &lt;/span&gt;&lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"WORKER_IMAGE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"registry.fly.io/am-worker:latest"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="ss"&gt;size: &lt;/span&gt;&lt;span class="s2"&gt;"performance-2x"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# 4 vCPU, 8GB RAM&lt;/span&gt;
        &lt;span class="ss"&gt;auto_destroy: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# Key: destroy after completion&lt;/span&gt;
        &lt;span class="ss"&gt;restart: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="ss"&gt;policy: &lt;/span&gt;&lt;span class="s2"&gt;"no"&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="ss"&gt;stop_config: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="ss"&gt;timeout: &lt;/span&gt;&lt;span class="s2"&gt;"30s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="ss"&gt;signal: &lt;/span&gt;&lt;span class="s2"&gt;"SIGTERM"&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="ss"&gt;env: &lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;services: &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;HTTParty&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;headers: &lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;body: &lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;timeout: &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;success?&lt;/span&gt;
      &lt;span class="no"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;success: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;data: &lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parsed_response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="no"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="ss"&gt;success: &lt;/span&gt;&lt;span class="kp"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;error: &lt;/span&gt;&lt;span class="s2"&gt;"API error: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; - &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;body&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No idle worker costs (only pay for seconds of actual usage)&lt;/li&gt;
&lt;li&gt;~2 minute average audit runtime&lt;/li&gt;
&lt;li&gt;Scales to 18 regions simultaneously&lt;/li&gt;
&lt;li&gt;Clean slate for every test (no cache pollution)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;performance-2x&lt;/code&gt; size gives Lighthouse enough resources to run smoothly&lt;/li&gt;
&lt;/ul&gt;
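
&lt;p&gt;A caller-side sketch of how one audit might fan out into one machine per region (the env var names here are illustrative, not the production ones):&lt;/p&gt;

```ruby
# Build one worker request per region; each entry would then be passed to
# FlyMachinesService.create_machine, whose auto_destroy: true setting tears
# the machine down as soon as the audit finishes.
def worker_requests(report_id:, target_url:, regions:)
  regions.map do |region|
    {
      region: region,
      env: {
        "REPORT_ID"  => report_id,   # hypothetical env var names
        "TARGET_URL" => target_url
      }
    }
  end
end
```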

&lt;h3&gt;
  
  
  2. Self-Hosted ARM64 CI/CD
&lt;/h3&gt;

&lt;p&gt;Building Docker images on GitHub-hosted x86 runners using QEMU emulation was painfully slow (18+ minutes).&lt;/p&gt;

&lt;p&gt;Solution: &lt;strong&gt;Raspberry Pi 4/5 as a self-hosted runner&lt;/strong&gt; with native ARM64 builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Results:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build time: &lt;strong&gt;4m 25s&lt;/strong&gt; (down from 18+ min)&lt;/li&gt;
&lt;li&gt;Cost: &lt;strong&gt;$0/build&lt;/strong&gt; (vs $0.035/build on GitHub-hosted)&lt;/li&gt;
&lt;li&gt;Deploys to AWS ECS Graviton (ARM64)&lt;/li&gt;
&lt;/ul&gt;
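
&lt;p&gt;The workflow side of this is small - a sketch, with the job and step names invented; the important parts are the &lt;code&gt;self-hosted&lt;/code&gt;/&lt;code&gt;ARM64&lt;/code&gt; labels the Pi registers with and the absence of any QEMU setup step:&lt;/p&gt;

```yaml
# .github/workflows/build.yml (illustrative sketch)
jobs:
  build:
    runs-on: [self-hosted, ARM64]   # routes the job to the Raspberry Pi
    steps:
      - uses: actions/checkout@v4
      - name: Build ARM64 image natively
        run: docker build -t am-worker:latest .
```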

&lt;h3&gt;
  
  
  3. Solid Queue for Background Jobs
&lt;/h3&gt;

&lt;p&gt;Rails 8 ships with Solid Queue, and I leaned in hard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/recurring.yml&lt;/span&gt;
&lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;monitor_scheduler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MonitorSchedulerJob&lt;/span&gt;
    &lt;span class="na"&gt;queue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;every minute&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;No Redis, no Sidekiq, no extra infrastructure.&lt;/strong&gt; Just PostgreSQL.&lt;/p&gt;

&lt;p&gt;Jobs run every minute to check which monitors are due for testing, spawn Fly.io workers, and aggregate results.&lt;/p&gt;
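
&lt;p&gt;The due-check at the heart of that scheduler can be sketched without Rails at all (class, attribute, and method names are illustrative - the real job queries ActiveRecord models):&lt;/p&gt;

```ruby
# A monitor is due when it has never run, or when its interval has elapsed.
Monitor = Struct.new(:interval_minutes, :last_run_at, keyword_init: true) do
  def due?(now = Time.now)
    last_run_at.nil? || now - last_run_at >= interval_minutes * 60
  end
end

# The every-minute job would select the due monitors and spawn one
# Fly.io worker per monitored region for each of them.
def due_monitors(monitors, now = Time.now)
  monitors.select { |m| m.due?(now) }
end
```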

&lt;h2&gt;
  
  
  Challenges &amp;amp; Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Quota Management: Simpler Than Expected&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Free users get 20 audits/month. I initially overcomplicated this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key insights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;An audit is an audit&lt;/strong&gt;, regardless of how many regions you test

&lt;ul&gt;
&lt;li&gt;Test from 1 region? 1 audit consumed.&lt;/li&gt;
&lt;li&gt;Test from 5 regions? Still 1 audit consumed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Monitor runs don't count toward quota&lt;/strong&gt; - their usage is tracked inside the background job, bypassing the quota system entirely&lt;/li&gt;

&lt;li&gt;Monitors are only available on paid plans (Starter+) anyway&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The tier limits are defined in a simple config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="nn"&gt;HasTierLimits&lt;/span&gt;
  &lt;span class="no"&gt;TIER_CONFIG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="ss"&gt;free:       &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;quota: &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;             &lt;span class="ss"&gt;retention_days: &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="ss"&gt;monitors: &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;              &lt;span class="ss"&gt;team_members: &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;              &lt;span class="ss"&gt;max_regions: &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="ss"&gt;starter:    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;quota: &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="ss"&gt;retention_days: &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="ss"&gt;monitors: &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;              &lt;span class="ss"&gt;team_members: &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;              &lt;span class="ss"&gt;max_regions: &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="ss"&gt;pro:        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;quota: &lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="ss"&gt;retention_days: &lt;/span&gt;&lt;span class="kp"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;monitors: &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;             &lt;span class="ss"&gt;team_members: &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;             &lt;span class="ss"&gt;max_regions: &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="ss"&gt;enterprise: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;quota: &lt;/span&gt;&lt;span class="no"&gt;Float&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;INFINITY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;retention_days: &lt;/span&gt;&lt;span class="kp"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;monitors: &lt;/span&gt;&lt;span class="no"&gt;Float&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;INFINITY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;team_members: &lt;/span&gt;&lt;span class="no"&gt;Float&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;INFINITY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;max_regions: &lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}.&lt;/span&gt;&lt;span class="nf"&gt;freeze&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Quota tracking only happens for user-initiated audits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;track_usage&lt;/span&gt;
  &lt;span class="n"&gt;period&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"%Y-%m"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;billable_user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;quota_owner&lt;/span&gt;  &lt;span class="c1"&gt;# Team owner or current user&lt;/span&gt;
  &lt;span class="n"&gt;usage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;billable_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;usage_records&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_or_create_by&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;period: &lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;increment!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:reports_count&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
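
&lt;p&gt;The read side of the quota is then a one-line comparison against the tier config (the helper name is mine; the quota values mirror &lt;code&gt;TIER_CONFIG&lt;/code&gt; above):&lt;/p&gt;

```ruby
# Quota values mirror TIER_CONFIG; Float::INFINITY makes the enterprise
# tier effectively unlimited without any special-casing.
QUOTAS = {
  free: 20, starter: 100, pro: 500, enterprise: Float::INFINITY
}.freeze

def within_quota?(tier, reports_count)
  QUOTAS.fetch(tier) > reports_count
end
```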



&lt;h3&gt;
  
  
  2. &lt;strong&gt;Multi-Region Testing Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Testing from multiple regions simultaneously is the core feature. Here's how it works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReportRequest → Multiple Reports&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User creates 1 &lt;code&gt;ReportRequest&lt;/code&gt; (e.g., "Test example.com from 5 regions")&lt;/li&gt;
&lt;li&gt;System creates 5 &lt;code&gt;Report&lt;/code&gt; records (one per region)&lt;/li&gt;
&lt;li&gt;5 Fly.io workers spawn simultaneously&lt;/li&gt;
&lt;li&gt;Each worker runs Lighthouse and reports back independently&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ReportRequest&lt;/code&gt; aggregates results and determines overall status&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The challenge: handling timeouts, retries, and partial failures.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ReportRequest&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationRecord&lt;/span&gt;
  &lt;span class="n"&gt;has_many&lt;/span&gt; &lt;span class="ss"&gt;:reports&lt;/span&gt;

  &lt;span class="c1"&gt;# After each report updates, check if all are done&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;check_completion!&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;reports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="ss"&gt;:completed?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;update!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;status: &lt;/span&gt;&lt;span class="s2"&gt;"completed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;update_cached_stats!&lt;/span&gt;  &lt;span class="c1"&gt;# Calculate avg performance across regions&lt;/span&gt;
    &lt;span class="n"&gt;check_monitor_alert&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;site_monitor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;present?&lt;/span&gt;  &lt;span class="c1"&gt;# Alert if below threshold&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key decisions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each report is independent (can fail without blocking others)&lt;/li&gt;
&lt;li&gt;Cached stats on &lt;code&gt;ReportRequest&lt;/code&gt; for fast dashboard queries&lt;/li&gt;
&lt;li&gt;Monitor alerts trigger after aggregating all regional results&lt;/li&gt;
&lt;/ul&gt;
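
&lt;p&gt;The "independent reports" decision shows up in the aggregation step: failed regions are simply skipped rather than failing the whole request. A simplified sketch (names are illustrative):&lt;/p&gt;

```ruby
Report = Struct.new(:region, :status, :performance_score, keyword_init: true)

# Average the performance score across regions that actually completed;
# a failed region lowers coverage but never blocks the ReportRequest.
def average_performance(reports)
  done = reports.select { |r| r.status == "completed" }
                .reject { |r| r.performance_score.nil? }
  return nil if done.empty?
  (done.sum { |r| r.performance_score }.to_f / done.size).round(1)
end
```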

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ygxgv8joo689urlhaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ygxgv8joo689urlhaw.png" alt="Audit UI" width="800" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Frontend Performance Matters&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We're a &lt;strong&gt;performance monitoring tool&lt;/strong&gt;. If our site is slow, we lose credibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stats (Lighthouse from 18 regions):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: &lt;strong&gt;95-100&lt;/strong&gt; across all regions&lt;/li&gt;
&lt;li&gt;LCP: &lt;strong&gt;&amp;lt; 1.2s&lt;/strong&gt; globally&lt;/li&gt;
&lt;li&gt;CLS: &lt;strong&gt;0.001&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static React frontend on Cloudflare Pages (edge network)&lt;/li&gt;
&lt;li&gt;Aggressive code splitting&lt;/li&gt;
&lt;li&gt;No heavy frameworks (no Next.js, just Vite)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Stop Chasing Perfection, Start Shipping&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is the biggest lesson. I wasted nearly 2 years rewriting and polishing, trying to build the "perfect" stack and the "perfect" UI.&lt;/p&gt;

&lt;p&gt;Then I heard a quote by Eugène Delacroix: &lt;strong&gt;"The artist who aims at perfection in everything achieves it in nothing."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That hit hard. I realized that was me - constantly chasing perfection, never shipping.&lt;/p&gt;

&lt;p&gt;So I stopped. I gave myself 2-3 days to finish core features, make sure it worked, and shipped it last Saturday. Any issues can be fixed while it's live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Done is better than perfect. Ship, iterate, improve based on real feedback.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Stop Rewriting, Use What You Know&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I wasted months rewriting in different languages. Go, Rust, TypeScript - none of them were fundamentally better than Rails for this use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Use what you know. Ship fast. Iterate based on real feedback, not hypothetical performance gains.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Start with Waitlist + Landing Page&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I built the entire product before validating demand. Bad move.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better approach:&lt;/strong&gt; Landing page → waitlist → validate → build MVP.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Simpler Pricing Tiers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Four tiers (Free, Starter, Pro, Enterprise) is too many. Should've started with Free + Pro.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Public API from Day 1&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Everyone asks for API access. Should've prioritized this earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Now that it's live, here's what I'm working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhooks &amp;amp; API&lt;/strong&gt; - Full REST API for programmatic access and CI/CD integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CrUX API Integration&lt;/strong&gt; - Real User Monitoring data from Google's Chrome UX Report&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom RUM Tool&lt;/strong&gt; - Installable Real User Monitoring for your own customers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Insights&lt;/strong&gt; - Automated performance recommendations and anomaly detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Graphs&lt;/strong&gt; - Better data visualization for performance trends over time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More Regions&lt;/strong&gt; - Expanding beyond 18 regions to cover more edge locations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Want to see something specific? Let me know in the comments!&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Status
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ Launched on ahojmetrics.com&lt;/li&gt;
&lt;li&gt;✅ Free tier: 20 audits/month&lt;/li&gt;
&lt;li&gt;✅ Paid plans: $35-$299/month&lt;/li&gt;
&lt;li&gt;✅ 18 global regions available&lt;/li&gt;
&lt;li&gt;✅ ~2 minute average audit runtime&lt;/li&gt;
&lt;li&gt;✅ Automated monitoring with alerts&lt;/li&gt;
&lt;li&gt;✅ Team collaboration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;You can sign up for free at &lt;a href="https://ahojmetrics.com" rel="noopener noreferrer"&gt;ahojmetrics.com&lt;/a&gt; (no credit card required).&lt;/p&gt;

&lt;p&gt;Test your site from Sydney, London, Tokyo, São Paulo, and 14 other regions. See how Core Web Vitals, performance scores, and load times differ globally.&lt;/p&gt;

&lt;p&gt;I'd love feedback from the dev community! What features would make this more useful for you?&lt;/p&gt;




&lt;p&gt;Questions? Drop them in the comments or connect with me on &lt;a href="https://www.linkedin.com/in/yuritomanek/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy to share more details about any part of the stack!&lt;/p&gt;

</description>
      <category>performance</category>
      <category>rails</category>
      <category>saas</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
