<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: NydarTrading</title>
    <description>The latest articles on DEV Community by NydarTrading (@nydartrading).</description>
    <link>https://dev.to/nydartrading</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3720042%2F7fd5734c-786c-4910-9cb4-ec1673eba043.png</url>
      <title>DEV Community: NydarTrading</title>
      <link>https://dev.to/nydartrading</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nydartrading"/>
    <language>en</language>
    <item>
      <title>I Built a $1 Alternative to DeleteMe and Incogni — Here's How It Works</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Sat, 21 Mar 2026 11:58:19 +0000</pubDate>
      <link>https://dev.to/nydartrading/i-built-a-1-alternative-to-deleteme-and-incogni-heres-how-it-works-1jn2</link>
      <guid>https://dev.to/nydartrading/i-built-a-1-alternative-to-deleteme-and-incogni-heres-how-it-works-1jn2</guid>
      <description>&lt;p&gt;Data brokers are selling your personal information right now. Your name, address, phone number, email, relatives, employer — it's all on sites like Spokeo, WhitePages, Radaris, and TruePeopleSearch. Anyone with $1 and five minutes can find it.&lt;/p&gt;

&lt;p&gt;Services like &lt;strong&gt;Incogni&lt;/strong&gt; ($7.49/month) and &lt;strong&gt;DeleteMe&lt;/strong&gt; ($10.75/month) will find and remove this data for you. They're good products. But they cost $90-130/year, and most of what they do is search public websites and submit opt-out forms — something you can do yourself if you know where to look.&lt;/p&gt;

&lt;p&gt;So I built a tool that does the detection half for $1.&lt;/p&gt;

&lt;h2&gt;What the Tool Does&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://apify.com/ryanclinton/personal-data-exposure-report" rel="noopener noreferrer"&gt;Personal Data Exposure Report&lt;/a&gt; scans 20+ data broker and people-search sites for your name, email, and phone number. For each site where your data is found, it gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;direct link&lt;/strong&gt; to your profile on the broker site&lt;/li&gt;
&lt;li&gt;What &lt;strong&gt;types of data&lt;/strong&gt; are exposed (address, phone, email, relatives, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step-by-step removal instructions&lt;/strong&gt; specific to that site&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;direct link&lt;/strong&gt; to the site's opt-out page&lt;/li&gt;
&lt;li&gt;What you'll need to complete the removal (email, phone, profile URL)&lt;/li&gt;
&lt;li&gt;Estimated &lt;strong&gt;time&lt;/strong&gt; to complete the removal&lt;/li&gt;
&lt;li&gt;Whether the site is part of a &lt;strong&gt;group&lt;/strong&gt; (one removal covers multiple sites)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plus a summary with your exposure score, a breakdown of easy, medium, and hard removals, and all data types exposed across all sites.&lt;/p&gt;

&lt;h2&gt;How Detection Works&lt;/h2&gt;

&lt;p&gt;The actor uses multiple detection methods to maximize coverage:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Brave Search&lt;/strong&gt; — searches for your name across 29 known data broker domains, finding indexed listings with direct profile URLs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct HTTP scanning&lt;/strong&gt; — checks 6 broker sites that respond to direct requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proxy scanning&lt;/strong&gt; — checks 13 additional protected sites using residential proxies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DataBreach.com&lt;/strong&gt; — checks if your email appears in known data breaches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Have I Been Pwned&lt;/strong&gt; — checks email breach exposure (optional, requires API key)&lt;/li&gt;
&lt;/ol&gt;
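
&lt;p&gt;To make the multi-pass flow concrete, here's a rough sketch of how results from the different detection methods could be merged (illustrative only: the record shapes and names are my assumptions, not the actor's actual internals):&lt;/p&gt;

```python
# Sketch: merging findings from several detection passes.
# Record shapes and names are assumptions for illustration.

def merge_findings(*passes):
    """Deduplicate per-site findings across passes, keeping the first
    record seen for each site (the search pass carries profile URLs,
    so it is passed in first)."""
    merged = {}
    for findings in passes:
        for record in findings:
            merged.setdefault(record["site"], record)
    return list(merged.values())

search_pass = [{"site": "Spokeo", "method": "search",
                "profileUrl": "https://www.spokeo.com/John-Smith/California"}]
direct_pass = [{"site": "TruePeopleSearch", "method": "direct", "profileUrl": None}]
proxy_pass = [{"site": "Spokeo", "method": "proxy", "profileUrl": None}]

results = merge_findings(search_pass, direct_pass, proxy_pass)
print(len(results))  # 2 (the duplicate Spokeo hit from the proxy pass is dropped)
```

&lt;p&gt;The ordering matters: a search-indexed hit with a direct profile URL is more useful than a bare "found" flag from a proxy scan, so it wins the dedupe.&lt;/p&gt;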

&lt;h2&gt;Sites Covered&lt;/h2&gt;

&lt;h3&gt;Tier 1 — Free people-search sites (richest data)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Site&lt;/th&gt;
&lt;th&gt;Data Exposed&lt;/th&gt;
&lt;th&gt;Removal&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TruePeopleSearch&lt;/td&gt;
&lt;td&gt;Name, address, phone, age, relatives&lt;/td&gt;
&lt;td&gt;Online form&lt;/td&gt;
&lt;td&gt;2 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FastPeopleSearch&lt;/td&gt;
&lt;td&gt;Name, address, phone, email, relatives&lt;/td&gt;
&lt;td&gt;Online form&lt;/td&gt;
&lt;td&gt;2 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ThatsThem&lt;/td&gt;
&lt;td&gt;Name, address, phone, email, IP&lt;/td&gt;
&lt;td&gt;Online form&lt;/td&gt;
&lt;td&gt;2 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nuwber&lt;/td&gt;
&lt;td&gt;Name, address, phone, email, age&lt;/td&gt;
&lt;td&gt;Email link form&lt;/td&gt;
&lt;td&gt;3 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;USPhoneBook&lt;/td&gt;
&lt;td&gt;Name, address, phone&lt;/td&gt;
&lt;td&gt;Online form&lt;/td&gt;
&lt;td&gt;2 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SearchPeopleFree&lt;/td&gt;
&lt;td&gt;Name, address, phone, age&lt;/td&gt;
&lt;td&gt;Online form&lt;/td&gt;
&lt;td&gt;2 min&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;Tier 2 — Paywalled sites&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Site&lt;/th&gt;
&lt;th&gt;Data Exposed&lt;/th&gt;
&lt;th&gt;Removal&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Spokeo&lt;/td&gt;
&lt;td&gt;Name, address, phone, email, social, court records&lt;/td&gt;
&lt;td&gt;Profile URL + email confirm&lt;/td&gt;
&lt;td&gt;3-5 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WhitePages&lt;/td&gt;
&lt;td&gt;Name, address, phone, relatives, age&lt;/td&gt;
&lt;td&gt;Phone call verification&lt;/td&gt;
&lt;td&gt;5-10 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Radaris&lt;/td&gt;
&lt;td&gt;Name, address, phone, email, court, property&lt;/td&gt;
&lt;td&gt;Account + phone verify&lt;/td&gt;
&lt;td&gt;10-30 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PeopleFinders&lt;/td&gt;
&lt;td&gt;Name, address, phone, age, relatives&lt;/td&gt;
&lt;td&gt;Online form&lt;/td&gt;
&lt;td&gt;3 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intelius&lt;/td&gt;
&lt;td&gt;Name, address, phone, email, relatives&lt;/td&gt;
&lt;td&gt;PeopleConnect suppression&lt;/td&gt;
&lt;td&gt;5-10 min&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;Breach databases&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Site&lt;/th&gt;
&lt;th&gt;What It Checks&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DataBreach.com&lt;/td&gt;
&lt;td&gt;Whether your email appears in known data breaches&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Have I Been Pwned&lt;/td&gt;
&lt;td&gt;Email breach exposure (needs API key — $3.50/month)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;The PeopleConnect Trick&lt;/h2&gt;

&lt;p&gt;Here's something most people don't know: &lt;strong&gt;Intelius, ZabaSearch, Instant Checkmate, AnyWho, and Addresses.com are all owned by PeopleConnect.&lt;/strong&gt; One removal at &lt;a href="https://suppression.peopleconnect.us/login" rel="noopener noreferrer"&gt;suppression.peopleconnect.us&lt;/a&gt; covers all five sites.&lt;/p&gt;

&lt;p&gt;The report flags these automatically so you don't waste time submitting five separate removals when one will do.&lt;/p&gt;
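
&lt;p&gt;The grouping logic is simple to sketch. A minimal version (only the PeopleConnect group is listed, with the site list from above; everything else here is illustrative):&lt;/p&gt;

```python
# Sketch: collapsing found sites into the minimum set of removal actions.
# The PeopleConnect site list comes from the article; the structure is
# illustrative, not the actor's real internals.

REMOVAL_GROUPS = {
    "PeopleConnect": {
        "sites": {"Intelius", "ZabaSearch", "Instant Checkmate",
                  "AnyWho", "Addresses.com"},
        "optOutUrl": "https://suppression.peopleconnect.us/login",
    },
}

def plan_removals(found_sites):
    actions, covered = [], set()
    # One action per group that had any hits...
    for group, info in REMOVAL_GROUPS.items():
        hits = [s for s in found_sites if s in info["sites"]]
        if hits:
            actions.append((group + " suppression", hits, info["optOutUrl"]))
            covered.update(hits)
    # ...then one action per remaining standalone site.
    for site in found_sites:
        if site not in covered:
            actions.append((site + " opt-out", [site], None))
    return actions

for action, covers, url in plan_removals(["Intelius", "ZabaSearch", "Spokeo"]):
    print(action, covers)
# PeopleConnect suppression ['Intelius', 'ZabaSearch']
# Spokeo opt-out ['Spokeo']
```

&lt;p&gt;Three hits collapse to two actions: one suppression request at the parent network, one standalone opt-out.&lt;/p&gt;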

&lt;h2&gt;What the Output Looks Like&lt;/h2&gt;

&lt;p&gt;For each site where your data is found:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"site"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Spokeo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"found"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"profileUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.spokeo.com/John-Smith/California"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dataTypes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"address"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"phone"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"social media"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"removalUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.spokeo.com/optout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"difficulty"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"easy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"removalSteps"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Find your profile on Spokeo and copy the profile URL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Go to https://www.spokeo.com/optout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Paste your profile URL into the form"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Enter your email address"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Complete the CAPTCHA and click Submit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Check your email and click the confirmation link"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Removal takes 24-48 hours"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"removalTime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3-5 minutes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"removalRequires"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"your Spokeo profile URL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"email address"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a summary at the end:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"summary"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sitesScanned"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sitesWithData"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"exposureScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"easyRemovals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mediumRemovals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"hardRemovals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
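
&lt;p&gt;Those summary fields fall straight out of the per-site records. A sketch of the aggregation (the scoring weights are my guess for illustration; the real formula isn't documented):&lt;/p&gt;

```python
# Sketch: building the summary from per-site results. The difficulty
# weights in exposureScore are illustrative guesses, not the published formula.

def summarize(results, sites_scanned):
    found = [r for r in results if r["found"]]
    counts = {"easy": 0, "medium": 0, "hard": 0}
    for r in found:
        counts[r["difficulty"]] += 1
    return {
        "type": "summary",
        "sitesScanned": sites_scanned,
        "sitesWithData": len(found),
        # Guessed weighting: harder removals contribute more to the score.
        "exposureScore": 3 * counts["easy"] + 5 * counts["medium"] + 6 * counts["hard"],
        "easyRemovals": counts["easy"],
        "mediumRemovals": counts["medium"],
        "hardRemovals": counts["hard"],
    }

hits = ([{"found": True, "difficulty": "easy"}] * 4
        + [{"found": True, "difficulty": "medium"}] * 3
        + [{"found": True, "difficulty": "hard"}] * 2)
misses = [{"found": False, "difficulty": "easy"}] * 14
print(summarize(hits + misses, 23)["sitesWithData"])  # 9
```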



&lt;h2&gt;The Removal Workflow&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Run the report ($1). Review your exposure score.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Handle PeopleConnect sites first. If you're found on any PeopleConnect site, one removal at &lt;a href="https://suppression.peopleconnect.us/login" rel="noopener noreferrer"&gt;suppression.peopleconnect.us&lt;/a&gt; covers all of them. 5-10 minutes removes you from 5+ sites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Do the easy removals. Sites with &lt;code&gt;difficulty: easy&lt;/code&gt; have simple online forms. Most take 2-3 minutes each.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Tackle medium removals. These need email or phone verification. Budget 5-10 minutes each.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Handle hard removals last. Radaris and VoterRecords have deliberately difficult processes. The report includes fallback options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Re-scan in 30 days. Data brokers re-add data regularly. At $1/scan, monthly monitoring costs $12/year.&lt;/p&gt;

&lt;h2&gt;Cost Comparison&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;This Tool&lt;/th&gt;
&lt;th&gt;Incogni&lt;/th&gt;
&lt;th&gt;DeleteMe&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Price&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1.00 one-time&lt;/td&gt;
&lt;td&gt;$7.49/month&lt;/td&gt;
&lt;td&gt;$10.75/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sites covered&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;20+ direct + 29 via search&lt;/td&gt;
&lt;td&gt;180+&lt;/td&gt;
&lt;td&gt;750+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;What you get&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Exposure report + removal guide&lt;/td&gt;
&lt;td&gt;Automated removal&lt;/td&gt;
&lt;td&gt;Automated removal + monitoring&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Removal included&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No (step-by-step guide)&lt;/td&gt;
&lt;td&gt;Yes (automated)&lt;/td&gt;
&lt;td&gt;Yes (automated)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Annual cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$12 (monthly scans)&lt;/td&gt;
&lt;td&gt;$89.88&lt;/td&gt;
&lt;td&gt;$129&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Profile URLs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;When to Use This vs. Incogni/DeleteMe&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use this tool when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to know where your data is exposed before paying for removal&lt;/li&gt;
&lt;li&gt;You prefer to handle removals yourself&lt;/li&gt;
&lt;li&gt;You want a one-time check, not a subscription&lt;/li&gt;
&lt;li&gt;You're evaluating whether paid services are worth it for your situation&lt;/li&gt;
&lt;li&gt;You're a business running privacy reports for clients&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Incogni/DeleteMe when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want fully automated removal with no manual work&lt;/li&gt;
&lt;li&gt;You need coverage of 100+ sites&lt;/li&gt;
&lt;li&gt;You need ongoing monitoring and automatic re-removal&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Limitations&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detection + guide, not automated removal.&lt;/strong&gt; You follow the steps yourself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;US-focused.&lt;/strong&gt; The sites scanned are primarily US-based people-search engines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Point-in-time snapshot.&lt;/strong&gt; Data brokers continuously update. A clean result today doesn't mean clean forever.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Common names.&lt;/strong&gt; Very common names may show false positives.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;The tool is live on the &lt;a href="https://apify.com/ryanclinton/personal-data-exposure-report" rel="noopener noreferrer"&gt;Apify Store&lt;/a&gt;. Enter your name, optionally add email/phone/state for better coverage, and get your exposure report in under a minute.&lt;/p&gt;

&lt;p&gt;Full documentation: &lt;a href="https://apifyforge.com/actors/mcp-servers/personal-data-exposure-report" rel="noopener noreferrer"&gt;ApifyForge&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://apifyforge.com" rel="noopener noreferrer"&gt;ApifyForge&lt;/a&gt; — we build data intelligence tools on the Apify platform. More tools at &lt;a href="https://apify.com/ryanclinton" rel="noopener noreferrer"&gt;apify.com/ryanclinton&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>security</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What 9,000 Search Impressions Taught Us About What Traders Actually Want</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Mon, 16 Mar 2026 19:38:16 +0000</pubDate>
      <link>https://dev.to/nydartrading/what-9000-search-impressions-taught-us-about-what-traders-actually-want-on</link>
      <guid>https://dev.to/nydartrading/what-9000-search-impressions-taught-us-about-what-traders-actually-want-on</guid>
      <description>&lt;p&gt;I've been checking the Google Search Console numbers obsessively for the past two months. Not because I love staring at dashboards (well, maybe a bit — I did build a &lt;a href="https://dev.to/"&gt;dashboard platform&lt;/a&gt; after all), but because the data is telling me something I didn't expect.&lt;/p&gt;

&lt;p&gt;This month, Nydar hit 9,638 search impressions. In February, we were barely scraping 300 a day. By mid-March, we're consistently hitting 600+. That's not a hockey stick, but for a solo-built trading platform that's been live for a couple of months, I'll take it.&lt;/p&gt;

&lt;p&gt;What's more interesting than the number itself is &lt;em&gt;what people are searching for&lt;/em&gt; when they find us.&lt;/p&gt;

&lt;h2&gt;The queries tell a story&lt;/h2&gt;

&lt;p&gt;The biggest surprise has been &lt;a href="https://dev.to/learn/order-flow-trading"&gt;order flow trading&lt;/a&gt;. Our learn article on it is sitting at position 7.3 in Google with 136 impressions this month. That's page one for a competitive term. People searching that aren't beginners asking "what is a stock" — they're traders who already know what a candlestick chart is and want to go deeper. They want to understand tape reading, delta divergence, and how institutional orders move through the book before they show up on your chart.&lt;/p&gt;

&lt;p&gt;We built the &lt;a href="https://dev.to/blog/building-institutional-order-flow-suite"&gt;order flow suite&lt;/a&gt; because I used these tools at Cowen and couldn't find anything decent outside of a Bloomberg terminal. The institutional platforms charge five figures a year for order flow visualisation. The retail ones give you a basic &lt;a href="https://dev.to/help/order-book"&gt;order book&lt;/a&gt; and call it a day. There's a massive gap between "here's the top 5 bids and asks" and "here's how liquidity is actually being consumed across price levels in real time." That gap is where &lt;a href="https://dev.to/help/footprint"&gt;footprint charts&lt;/a&gt;, &lt;a href="https://dev.to/help/volume-delta"&gt;volume delta&lt;/a&gt;, and &lt;a href="https://dev.to/help/aggregated-book"&gt;aggregated book depth&lt;/a&gt; live.&lt;/p&gt;

&lt;p&gt;Turns out a lot of other people were looking for the same thing. The search data confirms it — traders want institutional-grade tools without the institutional price tag. Not a simplified version. The actual tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/learn/market-microstructure"&gt;Market microstructure&lt;/a&gt; is another one sitting at position 11.5, just off page one, with 54 impressions. That's a niche topic — bid-ask dynamics, latency, maker-taker models, how your order actually gets filled and why you sometimes get worse execution than you expected. The fact that people are finding our content on it tells me there's a real gap in plain-English explanations of how markets actually work under the hood. Most educational content stops at "supply and demand" and never gets into the mechanics. We went deeper because the mechanics are where the edge is.&lt;/p&gt;

&lt;h2&gt;What I didn't expect&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/help/custom-indicators"&gt;Custom indicators and Pine Script&lt;/a&gt; — 40 impressions at position 11.6. I built the &lt;a href="https://dev.to/blog/why-we-built-a-pine-script-indicator-engine"&gt;Pine Script engine&lt;/a&gt; because I personally wanted to run custom scripts against live data without paying for a premium charting subscription. Didn't think many people would search for it specifically, but apparently there's a crowd of traders who've outgrown the built-in indicators on their current platform and want to write their own.&lt;/p&gt;

&lt;p&gt;That makes sense when you think about it. Anyone serious about &lt;a href="https://dev.to/learn/technical-analysis-basics"&gt;technical analysis&lt;/a&gt; eventually hits the ceiling of what RSI and MACD can tell you. They start combining indicators, building composite signals, testing ideas. And then they realise they need somewhere to actually run those ideas against live data without writing a full trading bot from scratch. That's what the Pine Script engine does — it bridges the gap between "I have an idea" and "I can see if this idea works on today's market."&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/features/ai-signals"&gt;AI signals&lt;/a&gt; page is pulling 100 impressions at position 10.0. That's interesting because AI trading signals is a space absolutely drowning in scams and snake oil. Every other ad on social media is some bloke promising his AI bot will turn £500 into £50,000 by next Tuesday. The fact that our &lt;a href="https://dev.to/how-our-ai-works"&gt;honest breakdown of how the models work&lt;/a&gt; is ranking suggests Google is rewarding transparency over hype. We're not promising "guaranteed returns" — we're showing you the &lt;a href="https://dev.to/blog/xgboost-vs-lstm-crypto-prediction"&gt;XGBoost confidence scores&lt;/a&gt; and letting you decide. You can see the model's reasoning, its historical accuracy, and where it gets things wrong.&lt;/p&gt;

&lt;p&gt;We published a piece on &lt;a href="https://dev.to/blog/meta-labeling-filtering-bad-trades"&gt;meta-labelling&lt;/a&gt; a while back that explains how we filter out low-confidence signals before they ever reach you. That post didn't get much organic traction, but the people who found it stayed on the page. Quality over quantity — which is the whole philosophy, really.&lt;/p&gt;

&lt;h2&gt;The CTR problem (and what we're doing about it)&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable number: 21 clicks from 9,638 impressions. That's a 0.2% click-through rate. Brutal.&lt;/p&gt;

&lt;p&gt;But context matters. Most of those impressions are from pages sitting at positions 15-60 in search results. Nobody clicks on page 6 of Google. The pages that &lt;em&gt;are&lt;/em&gt; on page one — the homepage, the order flow article, the AI signals page — have much better CTR. The homepage alone has a 29% click-through rate at position 4.4.&lt;/p&gt;

&lt;p&gt;The fix isn't to write more content. It's to make the existing content rank higher. We've been rewriting title tags and meta descriptions across the site — the kind of thankless work that doesn't make for exciting screenshots but moves the needle over weeks. When someone searches "order flow trading" and sees our result on page one, the title and description need to make them click instead of scrolling past.&lt;/p&gt;

&lt;p&gt;We've also been dealing with some cannibalisation issues — multiple pages competing for the same search term. The &lt;a href="https://dev.to/glossary"&gt;glossary&lt;/a&gt; is comprehensive (200 terms), and sometimes a glossary entry like &lt;a href="https://dev.to/glossary/fear-and-greed-index"&gt;fear and greed index&lt;/a&gt; competes with the &lt;a href="https://dev.to/blog/fear-and-greed-index-explained"&gt;full blog post&lt;/a&gt; explaining the same concept. We're sorting that with canonical tags and internal linking rather than deleting content.&lt;/p&gt;
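
&lt;p&gt;For anyone untangling the same problem: a canonical tag is one line in the head of the page you want to demote. For the fear-and-greed pair, the glossary entry would point at the fuller blog post (URLs below are illustrative):&lt;/p&gt;

```html
&lt;!-- On the glossary entry page: declare the blog post as the primary
     version of this content (illustrative URL) --&gt;
&lt;link rel="canonical" href="https://example.com/blog/fear-and-greed-index-explained"&gt;
```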

&lt;h2&gt;What the data is shaping&lt;/h2&gt;

&lt;p&gt;Nothing dramatic. We're not pivoting based on search queries. But it is shaping priorities.&lt;/p&gt;

&lt;p&gt;The pages sitting just outside page one — &lt;a href="https://dev.to/learn/market-microstructure"&gt;market microstructure&lt;/a&gt;, &lt;a href="https://dev.to/help/footprint"&gt;footprint charts&lt;/a&gt;, &lt;a href="https://dev.to/help/correlation-matrix"&gt;correlation matrices&lt;/a&gt;, &lt;a href="https://dev.to/learn/funding-rate-open-interest"&gt;funding rate and open interest&lt;/a&gt; — those are the ones getting attention right now. More internal links, deeper content, better explanations. The organic traffic will follow if the content deserves it. If it doesn't, at least users who do land on those pages are getting something genuinely useful rather than thin SEO bait.&lt;/p&gt;

&lt;p&gt;I've also been spending time on things you can't see in search data. The automated trading bot running internally has been consistently profitable over the past few weeks. It's not public yet, and I'm not going to rush it out until the &lt;a href="https://dev.to/blog/backtesting-mistakes-traders-make"&gt;backtesting&lt;/a&gt; is thorough enough that I'd trust it with real size. Which I already do, quietly, in small position sizes. The current strategy uses a trailing stop system that we've been grid-searching to find the optimal parameters — the kind of work that takes weeks of data to validate properly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/blog/why-paper-trading-should-be-the-default"&gt;Paper trading&lt;/a&gt; remains the core of what Nydar does. Everything we build gets tested there first, and it's what most users interact with. The bot, the signals, the order flow tools — they all feed into the paper trading engine before anything touches real money.&lt;/p&gt;

&lt;h2&gt;A side project is paying the bills&lt;/h2&gt;

&lt;p&gt;I'm also building something on the side — a separate product that generates revenue from day one. It made $30 before 7am this morning, and it's still early. That matters for Nydar because it removes the pressure to monetise prematurely.&lt;/p&gt;

&lt;p&gt;Most indie trading platforms die the same way. They launch free, get some traction, panic about server costs, then slap a paywall on everything and watch their users disappear. Or worse, they start selling their users' order flow data to market makers. That's not happening here.&lt;/p&gt;

&lt;p&gt;Having a separate revenue stream means Nydar can stay generous with what's free. It means I can take three months to build the premium features properly instead of shipping a half-baked subscription tier because the AWS bill is due. No "free trial ends in 3 days" pop-ups. No artificial limits on features that cost me nothing to serve.&lt;/p&gt;

&lt;p&gt;That's a luxury most indie projects don't have, and I'm not going to waste it by shipping half-baked features just to hit a revenue target.&lt;/p&gt;

&lt;h2&gt;The content machine&lt;/h2&gt;

&lt;p&gt;One thing I'm genuinely proud of is the depth of educational content we've built. 42 learn articles covering everything from &lt;a href="https://dev.to/learn/scalping"&gt;scalping strategies&lt;/a&gt; to &lt;a href="https://dev.to/learn/liquidation-cascades"&gt;liquidation cascades&lt;/a&gt; to &lt;a href="https://dev.to/learn/iceberg-orders-spoofing"&gt;iceberg orders and spoofing detection&lt;/a&gt;. 200 glossary terms, most of them linking through to deeper learn articles and help pages. 33 blog posts in two months.&lt;/p&gt;

&lt;p&gt;None of it is generated filler. Every learn article is written by someone who's actually used these concepts on a trading desk. When I write about &lt;a href="https://dev.to/learn/order-flow-trading"&gt;order flow&lt;/a&gt;, it's because I've sat in front of a Bloomberg terminal watching institutional orders sweep through the book. When I write about &lt;a href="https://dev.to/blog/position-sizing-rule-that-saves-accounts"&gt;position sizing&lt;/a&gt;, it's because I've seen what happens when people don't do it.&lt;/p&gt;

&lt;p&gt;The content strategy isn't "rank for keywords." It's "explain things properly, and the rankings will follow." So far, that's working. The pages that rank highest are the ones with the most substance, not the ones with the most keyword density.&lt;/p&gt;

&lt;h2&gt;Taking a breather&lt;/h2&gt;

&lt;p&gt;I'm off tomorrow for St Patrick's Day. The irony of an Irishman building a trading platform and taking the one day a year the entire world pretends to be Irish as a day off isn't lost on me. It's a bank holiday here anyway, so the timing works out — no FOMO about missing a trade while I'm three pints deep in a pub somewhere.&lt;/p&gt;

&lt;p&gt;The platform will keep running — &lt;a href="https://dev.to/blog/what-happens-when-a-solo-dev-goes-down"&gt;it managed a full week without me in hospital&lt;/a&gt;, so it can handle a bank holiday. The bot will keep trading crypto (those markets never sleep, even when the Irish do). The data feeds will keep flowing. The &lt;a href="https://dev.to/blog/how-we-cut-api-calls-by-85-percent"&gt;API quota management&lt;/a&gt; will keep doing its thing, backing off outside market hours and spinning back up when the bell rings.&lt;/p&gt;

&lt;p&gt;When I'm back on Wednesday, the focus is on pushing those near-page-one pages over the line. Market microstructure, footprint charts, custom indicators — all sitting between positions 11 and 13. A few well-placed internal links and some content improvements should be enough to nudge them onto page one. There's also work to do on &lt;a href="https://dev.to/features/options-flow"&gt;options flow&lt;/a&gt; — it's at position 19 with 58 impressions, which means people are searching for it and Google thinks we're relevant. We just need to prove we deserve a higher spot.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;p&gt;For anyone tracking along:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;9,638&lt;/strong&gt; search impressions this month (up from ~5,700 last month, ~2,800 the month before)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;21 clicks&lt;/strong&gt; (low, but trending up — was 3 a month ago)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Average position improving&lt;/strong&gt; — from 71.4 to 25.5 over the past 28 days&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;33 blog posts&lt;/strong&gt; published since January&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;200 glossary terms&lt;/strong&gt; — all interlinked with learn articles and help pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;42 learn articles&lt;/strong&gt; covering every major trading concept&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;63 help pages&lt;/strong&gt; — one for every widget and feature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's still early. We're nowhere near profitable from the platform itself, the click numbers are still small, and there's a lot of work to do on CTR. But the trajectory is in the right direction. The content is ranking, the platform works, and people are finding us through genuine searches for topics we actually know something about. That's the foundation.&lt;/p&gt;

&lt;p&gt;Everything else gets built on top of it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Curious what all the fuss is about? &lt;a href="https://dev.to/"&gt;Try the dashboard&lt;/a&gt; — no sign-up required, start paper trading in seconds.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/what-9000-search-impressions-taught-us" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>We Built an AML Screening Tool That Replaces $100K Enterprise Contracts</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Thu, 12 Mar 2026 15:51:17 +0000</pubDate>
      <link>https://dev.to/nydartrading/we-built-an-aml-screening-tool-that-replaces-100k-enterprise-contracts-2f7c</link>
      <guid>https://dev.to/nydartrading/we-built-an-aml-screening-tool-that-replaces-100k-enterprise-contracts-2f7c</guid>
      <description>&lt;p&gt;If you work in fintech, you know the drill. Before you can open an account, process a payment, or onboard a business customer, you need to run AML (Anti-Money Laundering) checks. It's not optional — regulators will shut you down if you don't.&lt;/p&gt;

&lt;p&gt;The problem is the tooling. Enterprise AML platforms like Refinitiv World-Check, ComplyAdvantage, and LexisNexis cost $15,000 to $100,000 per year. For a startup, a small bank, or a crypto exchange in its early stages, that's a brutal line item for what amounts to searching a few databases.&lt;/p&gt;

&lt;p&gt;So we built our own.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Enterprise AML Tools Actually Do
&lt;/h2&gt;

&lt;p&gt;Strip away the sales decks and the enterprise pricing, and here's what these platforms actually do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Search sanctions lists&lt;/strong&gt; — OFAC SDN, EU consolidated list, UN sanctions, OpenSanctions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search criminal watchlists&lt;/strong&gt; — Interpol Red Notices, FBI Most Wanted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check for Politically Exposed Persons (PEPs)&lt;/strong&gt; — FARA foreign agent registrations, campaign finance records&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify corporate entities&lt;/strong&gt; — OpenCorporates, LEI databases, FDIC insurance status&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detect shell companies&lt;/strong&gt; — Nominee directors, bearer shares, shell haven jurisdictions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Score the risk&lt;/strong&gt; — Combine signals into a risk tier: Low, Medium, High, or Prohibited&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every single one of these data sources is publicly available. OFAC publishes its SDN list. Interpol has a public API. OpenCorporates is open data. The SEC, CFPB, and FDIC all publish their records.&lt;/p&gt;

&lt;p&gt;The enterprise tools charge $100K/year to query public data and put a dashboard on top.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Built
&lt;/h2&gt;

&lt;p&gt;Our &lt;a href="https://apify.com/ryanclinton/financial-crime-screening-mcp" rel="noopener noreferrer"&gt;Financial Crime Screening MCP&lt;/a&gt; is a single API endpoint that runs 13 data sources in parallel and returns a structured AML risk classification. It's available as an MCP server, which means any AI agent or LLM client can call it directly.&lt;/p&gt;

&lt;p&gt;It exposes 8 tools:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;comprehensive_entity_screen&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full screening across sanctions, criminal watchlists, and corporate registries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sanctions_deep_check&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Focused OFAC + OpenSanctions check with fuzzy matching&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;criminal_watchlist_scan&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Interpol Red Notices + FBI Most Wanted search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pep_influence_analysis&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;FARA foreign agent + FEC campaign finance check&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;corporate_shell_detection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Shell company indicators, haven jurisdictions, nominee directors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;financial_institution_verify&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;FDIC insurance status + consumer complaint analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proximity_to_crime_score&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Multi-signal convergence scoring across all categories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;aml_risk_classification&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full 13-source AML risk tier classification&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  How It Scores Risk
&lt;/h3&gt;

&lt;p&gt;The full classification runs every data source and scores across five dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sanctions exposure&lt;/strong&gt; — Any direct sanctions match automatically triggers PROHIBITED tier&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Corporate transparency&lt;/strong&gt; — Missing LEI, shell haven registration, nominee directors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Political exposure&lt;/strong&gt; — FARA registrations, significant campaign finance activity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial regulatory standing&lt;/strong&gt; — FDIC status, consumer complaint volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proximity to crime&lt;/strong&gt; — Signal convergence across multiple adverse categories&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight is convergence. A single CFPB complaint doesn't mean much. But if someone has consumer complaints AND shows up in FARA records AND is registered in a shell haven AND has no LEI — that convergence of signals is far more telling than any individual hit.&lt;/p&gt;

&lt;p&gt;The output gives you a clear risk tier (LOW / MEDIUM / HIGH / PROHIBITED) with a recommendation for each level — from "standard processing" to "file SAR and escalate to compliance."&lt;/p&gt;
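&lt;p&gt;As a rough sketch of the convergence idea (the signal names, weights, and thresholds below are illustrative, not the production scoring logic), the tiering works something like this:&lt;/p&gt;

```python
# Illustrative sketch only: hypothetical weights and thresholds,
# not the tool's actual scoring logic.
def classify_aml_risk(signals: dict) -> str:
    """Combine boolean screening signals into a risk tier."""
    # Any direct sanctions match short-circuits to PROHIBITED.
    if signals.get("sanctions_hit"):
        return "PROHIBITED"

    # Convergence: each adverse signal adds weight, so several weak
    # signals together outrank any single one.
    weights = {
        "shell_haven": 2,       # registered in a shell haven jurisdiction
        "fara_registered": 2,   # appears in FARA foreign agent records
        "missing_lei": 1,       # no LEI on file
        "cfpb_complaints": 1,   # consumer complaint history
    }
    score = sum(w for key, w in weights.items() if signals.get(key))

    if score >= 4:
        return "HIGH"
    if score >= 2:
        return "MEDIUM"
    return "LOW"
```

&lt;p&gt;In this sketch, a lone CFPB complaint scores 1 (LOW), while complaints plus a FARA record plus a shell-haven registration pushes the entity into HIGH, which matches the convergence argument above.&lt;/p&gt;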

&lt;h3&gt;
  
  
  Example Output
&lt;/h3&gt;

&lt;p&gt;Here's what a comprehensive entity screen returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"entity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Acme Holdings Ltd"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"entityType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sanctions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ofac"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"hits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"openSanctions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"hits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"criminalWatchlists"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"interpol"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"hits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"fbi"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"hits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"corporate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"registrations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"lei"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"totalHits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two OpenSanctions hits and no LEI. That's a flag — needs further investigation. The &lt;code&gt;aml_risk_classification&lt;/code&gt; tool would score this across all dimensions and return a risk tier with a specific recommendation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who This Is For
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Fintech startups&lt;/strong&gt; — You need AML screening to get licensed, but you can't afford $50K/year before you have revenue. This covers the screening piece for a few dollars per check.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crypto exchanges&lt;/strong&gt; — High volume, low margin. Running sanctions checks on every user at enterprise pricing doesn't work. At $1.50-3 per query, the math works even at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance teams at small banks&lt;/strong&gt; — You have the regulatory requirement but not the Refinitiv budget. This covers the same data sources at a fraction of the cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legal and advisory firms&lt;/strong&gt; — KYC/KYB checks for client onboarding. Run a quick screen before engagement, without a six-figure annual contract.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI agent developers&lt;/strong&gt; — Because this is an MCP server, any AI agent framework can call it. Build compliance workflows that screen entities automatically as part of a larger pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Solution&lt;/th&gt;
&lt;th&gt;Annual Cost&lt;/th&gt;
&lt;th&gt;Per-Check Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Refinitiv World-Check&lt;/td&gt;
&lt;td&gt;$30,000 - $100,000&lt;/td&gt;
&lt;td&gt;Included (but you pay regardless)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ComplyAdvantage&lt;/td&gt;
&lt;td&gt;$15,000 - $50,000&lt;/td&gt;
&lt;td&gt;Included&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LexisNexis&lt;/td&gt;
&lt;td&gt;$20,000 - $80,000&lt;/td&gt;
&lt;td&gt;Included&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Our tool&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0 annual fee&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$1.50 - $3 per check&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No annual contract. No minimum spend. Pay only when you run a check.&lt;/p&gt;

&lt;p&gt;For a startup running 500 checks per month, that's $750-$1,500/month vs. $30,000+ upfront for enterprise alternatives. For low-volume use cases (law firms running 50 checks/month), it's $75-$150/month vs. tens of thousands annually.&lt;/p&gt;
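&lt;p&gt;The break-even arithmetic is easy to sanity-check yourself (the $15K figure is just the low end of the enterprise ranges in the table above):&lt;/p&gt;

```python
# Back-of-envelope cost comparison using the per-check prices above.
PER_CHECK_LOW, PER_CHECK_HIGH = 1.50, 3.00
ENTERPRISE_ANNUAL_LOW = 15_000  # cheapest quoted enterprise contract

def annual_cost(checks_per_month: int, per_check: float) -> float:
    """Yearly pay-per-check spend at a given monthly volume."""
    return checks_per_month * 12 * per_check

# A startup running 500 checks/month: $9,000-$18,000/year.
startup_low = annual_cost(500, PER_CHECK_LOW)    # 9000.0
startup_high = annual_cost(500, PER_CHECK_HIGH)  # 18000.0

# Monthly volume where pay-per-check matches the cheapest contract,
# assuming the high per-check rate: roughly 417 checks/month.
breakeven = ENTERPRISE_ANNUAL_LOW / (PER_CHECK_HIGH * 12)
```

&lt;p&gt;Below a few hundred checks a month, pay-per-check wins comfortably; only at sustained high volume does an annual contract start to pencil out.&lt;/p&gt;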

&lt;h2&gt;
  
  
  What This Doesn't Replace
&lt;/h2&gt;

&lt;p&gt;We want to be honest about the limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;This is not a compliance program.&lt;/strong&gt; It's a screening tool. You still need policies, procedures, a compliance officer, and a framework for how you handle hits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No ongoing monitoring.&lt;/strong&gt; This runs point-in-time checks. Enterprise tools offer continuous monitoring with alerts. You'd need to schedule periodic re-screening yourself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No case management.&lt;/strong&gt; When you get a hit, enterprise tools have workflows for investigation, escalation, and documentation. Here, you get the data — what you do with it is up to you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fuzzy matching has limits.&lt;/strong&gt; Name matching across languages, transliterations, and aliases is hard. Enterprise tools have decades of tuning behind their matching algorithms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For many use cases — especially early-stage companies, low-volume screening, and developer-built compliance workflows — the tradeoff is worth it. You get 80% of the capability at 2% of the cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;The tool is live on the &lt;a href="https://apify.com/ryanclinton/financial-crime-screening-mcp" rel="noopener noreferrer"&gt;Apify Store&lt;/a&gt;. You can connect it as an MCP server from any compatible client (Claude Desktop, Cursor, or your own agent framework).&lt;/p&gt;
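&lt;p&gt;For reference, a hosted MCP server is typically wired into Claude Desktop with a stdio-to-HTTP bridge such as &lt;code&gt;mcp-remote&lt;/code&gt;. The server URL and token below are placeholders, so check the Apify listing for the exact endpoint and auth details:&lt;/p&gt;

```json
{
  "mcpServers": {
    "financial-crime-screening": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "<MCP_SERVER_URL>",
        "--header",
        "Authorization: Bearer <YOUR_APIFY_TOKEN>"
      ]
    }
  }
}
```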

&lt;p&gt;We also have a &lt;a href="https://apify.com/ryanclinton/counterparty-due-diligence-mcp" rel="noopener noreferrer"&gt;Counterparty Due Diligence MCP&lt;/a&gt; for corporate KYB screening (18 data sources, beneficial ownership analysis, jurisdiction risk scoring) and an &lt;a href="https://apify.com/ryanclinton/esg-supply-chain-risk-mcp" rel="noopener noreferrer"&gt;ESG Supply Chain Risk MCP&lt;/a&gt; for environmental and labor compliance checks.&lt;/p&gt;

&lt;p&gt;If you're building in fintech and spending more on compliance tooling than on your actual product, there's a better way.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We build trading analytics and data infrastructure at &lt;a href="https://nydar.co.uk" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. More on what we're building at &lt;a href="https://dev.to/nydartrading"&gt;dev.to/nydartrading&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fintech</category>
      <category>security</category>
      <category>saas</category>
      <category>startup</category>
    </item>
    <item>
      <title>How to Scrape Contact Data from 10,000 Websites (for Under $50)</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Thu, 12 Mar 2026 15:48:05 +0000</pubDate>
      <link>https://dev.to/nydartrading/how-to-scrape-contact-data-from-10000-websites-for-under-50-4mpn</link>
      <guid>https://dev.to/nydartrading/how-to-scrape-contact-data-from-10000-websites-for-under-50-4mpn</guid>
      <description>&lt;p&gt;I got tired of manually researching company contact info.&lt;/p&gt;

&lt;p&gt;Not because I'm lazy — because it doesn't scale. If you need emails and phone numbers for 20 companies, sure, spend an afternoon clicking through websites. But when you need contact data for 1,000 companies? 10,000? That's weeks of mind-numbing work that a script can do in minutes.&lt;/p&gt;

&lt;p&gt;So I built a tool that does exactly that. You give it a list of URLs, it visits each website, finds the contact and team pages automatically, and gives you back structured data: emails, phone numbers, names, job titles, and social links. One clean record per company.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With Manual Contact Research
&lt;/h2&gt;

&lt;p&gt;A human researcher follows a predictable pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the company website&lt;/li&gt;
&lt;li&gt;Look for a "Contact" or "About" page&lt;/li&gt;
&lt;li&gt;Write down any emails, phone numbers, and names&lt;/li&gt;
&lt;li&gt;Move to the next company&lt;/li&gt;
&lt;li&gt;Repeat 999 more times&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At maybe 10 companies per hour, that's 100 hours of work for 1,000 companies. At a conservative $20/hour, that's $2,000 in labor — for data that goes stale within months.&lt;/p&gt;

&lt;p&gt;The scraper does the same thing, but processes 1,000 websites in about 15 minutes for roughly $5 in compute.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Get Back
&lt;/h2&gt;

&lt;p&gt;For each website, you get a single structured record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://buffer.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"domain"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"buffer.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"emails"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hello@buffer.com"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"phones"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"+1-555-0123"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"contacts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Joel Gascoigne"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Founder CEO"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Caro Kopprasch"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Chief of Staff"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Jenny Terry"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VP of Finance &amp;amp; Operations"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"socialLinks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"linkedin"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.linkedin.com/company/bufferapp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"twitter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://x.com/buffer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"facebook"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.facebook.com/bufferapp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"instagram"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.instagram.com/buffer"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pagesScraped"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scrapedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-06T23:48:25.255Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That Buffer example pulled 48 team members with names and titles from their about page. Every email is deduplicated, every phone number is validated, and social links are extracted from across the site. One row per company, ready for your spreadsheet or CRM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start — Python
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;apify_client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ApifyClient&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ApifyClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_API_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;actor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ryanclinton/website-contact-scraper&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;run_input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;urls&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://stripe.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://basecamp.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://buffer.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;maxPagesPerDomain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;defaultDatasetId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;iterate_items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;domain&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;emails&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Quick Start — JavaScript
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ApifyClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;apify-client&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ApifyClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;YOUR_API_TOKEN&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;actor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ryanclinton/website-contact-scraper&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://stripe.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://basecamp.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://buffer.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;maxPagesPerDomain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;defaultDatasetId&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;listItems&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;emails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;, &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Real Power: Google Maps → Contact Data Pipeline
&lt;/h2&gt;

&lt;p&gt;Scraping three websites is a demo. The real value is chaining this with other data sources to build a full lead pipeline.&lt;/p&gt;

&lt;p&gt;Say you need leads for every dentist in Austin, TX:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;apify_client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ApifyClient&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ApifyClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_API_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 1: Get businesses from Google Maps
&lt;/span&gt;&lt;span class="n"&gt;maps_run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;actor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;compass/crawler-google-places&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;run_input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;searchStringsArray&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dentists in Austin, TX&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 2: Extract website URLs
&lt;/span&gt;&lt;span class="n"&gt;websites&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;biz&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;maps_run&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;defaultDatasetId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;iterate_items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;biz&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;website&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;websites&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;biz&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;website&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Found &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;websites&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; business websites&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 3: Scrape contact info from all of them
&lt;/span&gt;&lt;span class="n"&gt;contacts_run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;actor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ryanclinton/website-contact-scraper&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;run_input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;urls&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;websites&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;maxPagesPerDomain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 4: Full lead database
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;contacts_run&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;defaultDatasetId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;iterate_items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;emails&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;emails&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;emails&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;no email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;names&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;contacts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;domain&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;emails&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; | &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; contacts found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You go from "dentists in Austin" to a spreadsheet with names, emails, phone numbers, and social profiles in under 20 minutes. The total cost for 200 businesses is under $2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Pipelines That Work Well
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Found names but no emails?&lt;/strong&gt; Feed the results into &lt;a href="https://apify.com/ryanclinton/email-pattern-finder" rel="noopener noreferrer"&gt;Email Pattern Finder&lt;/a&gt;. It detects the company's email format (first.last@, f.last@, etc.) and generates email addresses for every team member.&lt;/p&gt;
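&lt;p&gt;To make the pattern idea concrete, here's a rough sketch of that logic in Python. This is not the actor's actual implementation; the function names, the candidate list, and the fallback choice are all illustrative:&lt;/p&gt;

```python
# Rough sketch of email-pattern detection (illustrative, not the actor's
# actual code): infer a format string from one known email address, then
# apply it to other names at the same domain.
def detect_pattern(known_email, first, last):
    local = known_email.split("@")[0].lower()
    candidates = {
        "{first}.{last}": f"{first}.{last}",
        "{f}{last}": f"{first[0]}{last}",
        "{f}.{last}": f"{first[0]}.{last}",
        "{first}": first,
    }
    for pattern, rendered in candidates.items():
        if local == rendered.lower():
            return pattern
    return "{first}.{last}"  # fall back to the most common corporate format

def apply_pattern(pattern, first, last, domain):
    local = pattern.format(first=first, last=last, f=first[0])
    return f"{local}@{domain}".lower()

pattern = detect_pattern("jane.doe@example.com", "Jane", "Doe")
print(apply_pattern(pattern, "John", "Smith", "example.com"))
```

&lt;p&gt;A real pattern finder handles many more formats and validates its guesses; this only shows the shape of the technique.&lt;/p&gt;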

&lt;p&gt;&lt;strong&gt;Want to prioritize your leads?&lt;/strong&gt; Run the results through &lt;a href="https://apify.com/ryanclinton/b2b-lead-qualifier" rel="noopener noreferrer"&gt;B2B Lead Qualifier&lt;/a&gt;. It scores each company on 30+ business quality signals so you contact the best prospects first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want the whole thing automated?&lt;/strong&gt; &lt;a href="https://apify.com/ryanclinton/b2b-lead-gen-suite" rel="noopener noreferrer"&gt;B2B Lead Generation Suite&lt;/a&gt; chains all three steps — contact scraping, email generation, and lead scoring — into a single run. One input, one output.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance and Cost
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Websites&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;~10 seconds&lt;/td&gt;
&lt;td&gt;&amp;lt; $0.01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;~2 minutes&lt;/td&gt;
&lt;td&gt;~$0.05&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;~15 minutes&lt;/td&gt;
&lt;td&gt;~$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;~2.5 hours&lt;/td&gt;
&lt;td&gt;~$5.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The first 100 websites are free. No credit card required to test.&lt;/p&gt;

&lt;p&gt;It's fast because it uses HTTP requests instead of spinning up a browser for every page. Most business websites serve their contact info as plain HTML, so you don't need a browser to read it.&lt;/p&gt;
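&lt;p&gt;To illustrate why skipping the browser is so much cheaper: once you have the page body, extraction is just text matching. This is a minimal sketch of that step, not the actor's actual code, applied to an already-fetched page body:&lt;/p&gt;

```python
import re

# Simplified sketch of the plain-HTTP approach (not the actor's actual
# code): treat the page body as text and regex out email addresses.
# No browser, no JavaScript execution, just the server-rendered HTML.
EMAIL_RE = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

def extract_emails(html):
    seen = []
    for match in EMAIL_RE.findall(html):
        if match.lower() not in seen:  # dedupe, keep first-seen order
            seen.append(match.lower())
    return seen

page = "Contact us at hello@example.com or sales@example.com today."
print(extract_emails(page))
```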

&lt;h2&gt;
  
  
  What It Won't Do
&lt;/h2&gt;

&lt;p&gt;I want to be upfront about limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript-heavy sites&lt;/strong&gt; — If a site renders its contact page entirely with client-side JavaScript (React SPAs with no server rendering), you won't get results. This works for the vast majority of business sites, but not all.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contact forms only&lt;/strong&gt; — Some sites have no visible email, just a form. You'll get names and social links but no emails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Login-protected pages&lt;/strong&gt; — Public pages only. No authentication or cookie handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For bulk lead generation, hitting 85% of sites at a fraction of the cost of browser automation is the right tradeoff. For the remaining 15%, you'd need a browser-based solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who's Using This
&lt;/h2&gt;

&lt;p&gt;The tool has been used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sales teams&lt;/strong&gt; building prospect lists from industry directories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recruiters&lt;/strong&gt; finding hiring managers at target companies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market researchers&lt;/strong&gt; building competitive intelligence databases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agencies&lt;/strong&gt; enriching client CRM data in bulk&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Freelancers&lt;/strong&gt; finding decision-makers for cold outreach&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're manually copying emails from websites into spreadsheets, you're doing it wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;The tool is live on the &lt;a href="https://apify.com/ryanclinton/website-contact-scraper" rel="noopener noreferrer"&gt;Apify Store&lt;/a&gt;. First 100 websites are free — just paste your URLs and hit Start.&lt;/p&gt;

&lt;p&gt;For anything more than a one-off run, the API examples above let you integrate it directly into your workflow. Schedule it, chain it with other tools, or pipe the output straight into your CRM.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We build data infrastructure and trading analytics at &lt;a href="https://nydar.co.uk" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Follow along on &lt;a href="https://dev.to/nydartrading"&gt;dev.to&lt;/a&gt; for more on what we're building.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>What Is WSB Buying Right Now? How to Track It Without the Noise</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Tue, 10 Mar 2026 15:21:00 +0000</pubDate>
      <link>https://dev.to/nydartrading/what-is-wsb-buying-right-now-how-to-track-it-without-the-noise-j0p</link>
      <guid>https://dev.to/nydartrading/what-is-wsb-buying-right-now-how-to-track-it-without-the-noise-j0p</guid>
      <description>&lt;p&gt;I spent way too long refreshing r/wallstreetbets every morning.&lt;/p&gt;

&lt;p&gt;Not because I was trading off the DD posts — most of them are barely literate. But because there's real signal buried in there if you know how to find it. When a ticker goes from 40 mentions a day to 800 in 24 hours, that's not random. Something is happening. The problem is that extracting that signal manually means scrolling through hundreds of posts, memes, loss porn, and "is this a dip or am I just poor" comments.&lt;/p&gt;

&lt;p&gt;So we built a tool that does it automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with reading WSB manually
&lt;/h2&gt;

&lt;p&gt;WallStreetBets has 15+ million members. On any given day there are thousands of posts and tens of thousands of comments. The sub moves fast. A ticker can go from zero mentions to the most talked-about stock on the internet in a few hours — GameStop in January 2021 being the obvious example, but it happens on a smaller scale constantly.&lt;/p&gt;

&lt;p&gt;If you're checking WSB manually, you're doing it wrong for a few reasons:&lt;/p&gt;

&lt;p&gt;You're seeing what's &lt;em&gt;already&lt;/em&gt; popular, not what's &lt;em&gt;becoming&lt;/em&gt; popular. By the time a ticker is plastered across the front page with rocket emojis and diamond hands, the move has probably started. The value isn't in knowing that everyone's talking about GME — it's in catching the acceleration &lt;em&gt;before&lt;/em&gt; it hits the front page.&lt;/p&gt;

&lt;p&gt;You're also reading sentiment wrong. A post titled "GME to the moon 🚀🚀🚀" and a post titled "I just lost $47,000 on GME" both count as mentions of GME. Manually, your brain lumps them together. But the sentiment is completely different, and the ratio between bullish and bearish mentions tells you something the raw count doesn't.&lt;/p&gt;

&lt;p&gt;And you're wasting time. The median WSB post is not useful. It's a meme, a screenshot of a $200 loss, or someone asking what options are. The actual signal-to-noise ratio is terrible. You could spend 30 minutes scrolling and come away with less information than a single widget gives you in 3 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  What mention velocity actually tells you
&lt;/h2&gt;

&lt;p&gt;Most people think about Reddit sentiment as a simple question: is WSB bullish or bearish on a stock? That matters, but it's not the most useful signal. The more interesting metric is &lt;em&gt;velocity&lt;/em&gt; — how fast is the conversation growing?&lt;/p&gt;

&lt;p&gt;A ticker with 500 mentions per day that's been stable for a week is popular, but it's not news. A ticker that went from 50 mentions yesterday to 400 today is accelerating. That acceleration is the signal. It means something changed — a catalyst dropped, someone posted a DD that went viral, or shorts started covering and people noticed.&lt;/p&gt;

&lt;p&gt;We track this by comparing 24-hour mention counts against the previous 24-hour period. The percentage change gets bucketed into velocity badges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rising&lt;/strong&gt; (50%+ increase) — gaining traction, worth watching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Surging&lt;/strong&gt; (100%+ increase) — real momentum building&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exploding&lt;/strong&gt; (200%+ increase) — something significant happened&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An "Exploding" badge on a ticker you haven't heard of is much more interesting than a "Rising" badge on TSLA. TSLA is always getting mentioned. The velocity badge helps you filter out the permanently-popular tickers and focus on what's actually changing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sentiment is harder than it looks
&lt;/h2&gt;

&lt;p&gt;Scoring whether a Reddit post is bullish or bearish sounds simple but it's genuinely tricky. "GME is going to the moon" is obviously bullish. "Just YOLOed my rent money into GME puts" is bearish — they're betting it goes down. But "I swear if GME drops below $20 again I'm loading the truck" is... bullish? It's talking about a drop but expressing buying intent.&lt;/p&gt;

&lt;p&gt;Our widget scores each mention and gives you an aggregate bullish/bearish percentage. It's not perfect — nothing is when you're doing NLP on posts that use "retarded" as a term of endearment — but it's a lot better than trying to gauge the vibe by scrolling.&lt;/p&gt;

&lt;p&gt;The sentiment score is most useful when it &lt;em&gt;diverges&lt;/em&gt; from velocity. If mentions are exploding but sentiment is turning bearish, that's a different situation than mentions exploding with 80% bullish sentiment. The first might be a sell-the-news event. The second might be early-stage momentum.&lt;/p&gt;
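&lt;p&gt;That divergence read can be sketched as a tiny classifier. The thresholds here are illustrative, not a tested strategy:&lt;/p&gt;

```python
# Combine a velocity badge with the aggregate bullish percentage into a
# rough interpretation of the situation (thresholds are illustrative).
def read_signal(badge, bullish_pct):
    if badge == "Exploding" and bullish_pct >= 70:
        return "early-stage momentum"
    if badge == "Exploding" and bullish_pct <= 40:
        return "possible sell-the-news"
    return "no strong read"

print(read_signal("Exploding", 80))
```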

&lt;h2&gt;
  
  
  What WSB gets right (and wrong)
&lt;/h2&gt;

&lt;p&gt;Here's the thing about WSB that the financial media consistently misunderstands: it's not a monolith. It's 15 million people with different strategies, different risk tolerances, and wildly different levels of experience. Some of the DD posts on there are legitimately excellent — detailed analysis with sources, well-reasoned short squeeze theses, sector research that rivals what junior analysts at banks produce.&lt;/p&gt;

&lt;p&gt;And then someone replies "sir this is a Wendy's."&lt;/p&gt;

&lt;p&gt;The crowd dynamic is what makes it powerful &lt;em&gt;and&lt;/em&gt; dangerous. When WSB collectively identifies a short squeeze setup — high short interest, low float, rising FTDs, cost to borrow spiking — they've occasionally been right in spectacular fashion. GameStop and AMC proved that retail buying pressure, when concentrated enough, can move stocks that institutional investors thought were dead money.&lt;/p&gt;

&lt;p&gt;But the crowd is also subject to every behavioral bias in the textbook. Confirmation bias runs rampant — once a ticker becomes a "meme stock," any good news is rocket fuel and any bad news is "hedgie manipulation." The sunk cost fallacy keeps people holding through 80% drawdowns. And social proof means people buy stocks because other people are buying them, not because of any underlying thesis.&lt;/p&gt;

&lt;p&gt;This is exactly why tracking the data — velocity, sentiment, mentions — is more useful than reading the posts themselves. The data doesn't have confirmation bias. It just tells you what's happening.&lt;/p&gt;

&lt;h2&gt;
  
  
  The GME playbook: what velocity looked like in real time
&lt;/h2&gt;

&lt;p&gt;Everyone knows the GameStop story in hindsight. But if you were watching the data in real time, the velocity signal was screaming days before the mainstream media caught on.&lt;/p&gt;

&lt;p&gt;In early January 2021, GME mentions on WSB were steady at maybe 200-300 per day. It was a niche play — DFV had been posting his YOLO updates for months, a handful of people were in, but it wasn't front page material. Then around January 11th, when the stock started moving from $19 to $31, mentions jumped to 1,500 in a single day. That's a velocity spike you can't miss. By January 22nd, when the stock hit $65, mentions were over 10,000 per day. The acceleration happened &lt;em&gt;before&lt;/em&gt; the $300+ parabolic move, not during it.&lt;/p&gt;

&lt;p&gt;AMC told a similar story six months later. The stock sat at $9 in late May 2021 with moderate WSB attention — maybe 400 mentions a day, mostly people comparing it unfavorably to GME. Then on May 24th, mentions tripled overnight. By May 26th, the stock was at $19 and climbing. A week later it hit $62. The people who were watching velocity caught the move at $9-12. The people who read about it on CNBC bought at $50.&lt;/p&gt;

&lt;p&gt;The pattern repeats on smaller scales constantly. BBBY in August 2022 — velocity spiked three days before the stock went from $5 to $30. SPRT in August 2021 — mentions went from near-zero to "Exploding" over a weekend, stock ran 700% that month. These weren't all good trades (BBBY holders got destroyed when it collapsed), but the velocity data told you &lt;em&gt;something was happening&lt;/em&gt; before price confirmed it.&lt;/p&gt;

&lt;p&gt;That's the whole value proposition. Not "this will go up." Just "something changed, and it changed fast."&lt;/p&gt;

&lt;h2&gt;
  
  
  WSB isn't one strategy
&lt;/h2&gt;

&lt;p&gt;People outside the sub think WSB is just "buy calls on meme stocks and pray." That's like saying a city is one restaurant. There are distinct trading styles on WSB, and the data we track is useful to each of them in different ways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The YOLO options crowd.&lt;/strong&gt; These are the people buying weekly calls with their entire portfolio. They live and die by momentum, and for them, velocity data is basically a targeting system. An "Exploding" ticker with 80%+ bullish sentiment and high call volume is exactly what they're looking for. Whether that's smart is a different question, but the data helps them find the momentum plays faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The short squeeze hunters.&lt;/strong&gt; This is probably WSB's most sophisticated subgroup. They're looking at short interest percentages, cost to borrow rates, days to cover, FTD spikes — the full picture. For them, Reddit velocity is the catalyst trigger. They'll have a watchlist of heavily-shorted stocks and wait for WSB to discover one of them. When velocity spikes on a high-SI name, that's the convergence they're waiting for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theta gang.&lt;/strong&gt; The options sellers. They actually love it when WSB piles into a stock, because implied volatility spikes and they can sell overpriced premiums. For them, an "Exploding" velocity badge means "time to sell puts or covered calls on this thing while IV is through the roof." They're fading the crowd's enthusiasm by collecting premium.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The buy-and-hold value crowd.&lt;/strong&gt; Yeah, they exist on WSB. They're in PLTR, SOFI, AMD — stocks with actual businesses that also happen to be WSB favorites. For them, sentiment data is a sanity check. If their long-term hold suddenly shows up with crashing sentiment and exploding mentions, they want to know why. Did something fundamental change, or is the sub just having a tantrum because earnings missed by a penny?&lt;/p&gt;

&lt;p&gt;Each of these strategies uses the same data differently. That's why we built the widget to show raw metrics — velocity, sentiment, mention count — rather than trying to interpret them for you. Your strategy determines what the numbers mean.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-referencing: where the real edge is
&lt;/h2&gt;

&lt;p&gt;Reddit sentiment on its own is interesting. Reddit sentiment cross-referenced against three other data points is actually useful.&lt;/p&gt;

&lt;p&gt;Here's the stack we use on the &lt;a href="https://dev.to/features/wsb"&gt;WSB Trading Floor&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit velocity + short interest.&lt;/strong&gt; If a ticker is exploding on WSB &lt;em&gt;and&lt;/em&gt; has 25%+ short interest, you've got the setup for a squeeze. The Reddit crowd is the demand side. The short sellers who need to cover are the forced buying. Together, that's how you get moves like GME. Without the high short interest, a Reddit velocity spike is just hype — it might push the stock up 10% before fading. With it, you've got the potential for something much bigger.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit sentiment + dark pool volume.&lt;/strong&gt; Dark pools are where institutions trade when they don't want to move the market. If Reddit sentiment is overwhelmingly bullish on a ticker and dark pool volume suddenly spikes, someone big is positioning. Could be buying alongside the crowd. Could be selling into the strength. Either way, the combination tells you that institutional money is paying attention to the same stock the Reddit crowd is, and that changes the dynamics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit velocity + options flow.&lt;/strong&gt; When a ticker's mentions are exploding and you simultaneously see large block trades on call options — especially ones bought at the ask — that's confirmation from a completely different data source. Some random person posting DD on Reddit is one thing. Someone dropping $2 million on weekly calls at the same time is another. The options flow widget shows you whether smart money is aligned with or fading the Reddit narrative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit velocity + FTDs + cost to borrow.&lt;/strong&gt; This is the full squeeze checklist. Rising Failures to Deliver mean shares aren't being located for delivery. Spiking cost to borrow means short sellers are paying through the nose to maintain positions. And Reddit velocity means the crowd has found it. When all three are moving together, that's the convergence that preceded every major squeeze in the last five years.&lt;/p&gt;

&lt;p&gt;None of these signals work perfectly alone. But when three of them line up on the same ticker at the same time, you're looking at something worth investigating.&lt;/p&gt;
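&lt;p&gt;In practice, a convergence check like that reduces to a filter. The field names, thresholds, and tickers below are illustrative assumptions, not the Trading Floor's actual schema:&lt;/p&gt;

```python
# Flag tickers where at least three of the squeeze signals line up.
# Field names and thresholds are illustrative, not a real schema.
def squeeze_candidates(tickers):
    hits = []
    for t in tickers:
        signals = [
            t["short_interest_pct"] >= 25,   # heavily shorted
            t["mention_change_pct"] >= 200,  # Reddit velocity "Exploding"
            t["cost_to_borrow_pct"] >= 50,   # shorts paying up to stay in
            t["ftds_rising"],                # failures to deliver climbing
        ]
        if sum(signals) >= 3:
            hits.append(t["symbol"])
    return hits

# Hypothetical watchlist rows for illustration only.
watch = [
    {"symbol": "AAAA", "short_interest_pct": 30, "mention_change_pct": 400,
     "cost_to_borrow_pct": 80, "ftds_rising": True},
    {"symbol": "BBBB", "short_interest_pct": 5, "mention_change_pct": 250,
     "cost_to_borrow_pct": 2, "ftds_rising": False},
]
print(squeeze_candidates(watch))
```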

&lt;h2&gt;
  
  
  How to actually use this data
&lt;/h2&gt;

&lt;p&gt;I want to be clear about something: Reddit sentiment data is not a buy signal. It's context. It's one input alongside price action, options flow, short interest, and whatever other analysis you're doing.&lt;/p&gt;

&lt;p&gt;Here's how we use it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As an early warning system.&lt;/strong&gt; If a ticker you're holding suddenly shows up with an "Exploding" velocity badge and bearish sentiment, that's worth investigating even if you haven't seen the catalyst yet. The crowd found something before you did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a squeeze checklist item.&lt;/strong&gt; A potential short squeeze needs multiple ingredients — high short interest, high cost to borrow, rising FTDs, and a catalyst. Reddit momentum is often that catalyst. If you've identified a ticker with the right setup and it starts trending on WSB, the missing piece may have just clicked into place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a fade signal.&lt;/strong&gt; This is counterintuitive, but sometimes the most useful thing WSB tells you is when to &lt;em&gt;not&lt;/em&gt; follow the crowd. If sentiment is at 95% bullish and the ticker has been mentioned 5,000 times today, you're late. The institutions that were going to cover have covered. The move may already be priced in. Extreme one-sided sentiment has historically been a contrarian indicator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a timing tool for exits.&lt;/strong&gt; This one gets overlooked. If you're already in a position and the ticker suddenly goes from 200 mentions to 5,000, that's not just a signal for new entrants — it's a signal for you to think about your exit. Parabolic mention growth often coincides with the final leg of a move. The last buyers are piling in based on FOMO, and that's usually when the smart money starts selling. Watching velocity on your own holdings tells you when your quiet little trade has become a crowded one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As entertainment.&lt;/strong&gt; Honestly, some of us just like knowing what's going on over there. It's the most interesting trading community on the internet and the culture is genuinely funny. Nothing wrong with checking in just to see what's happening, as long as you're not making trading decisions based on the strength of someone's rocket emoji conviction.&lt;/p&gt;

&lt;h2&gt;
  
  
  The evolution of WSB as a market force
&lt;/h2&gt;

&lt;p&gt;It's worth understanding how much WSB has changed since 2020. Pre-pandemic, the sub had about 1 million members and was mostly a place to post loss porn and joke about going bankrupt on FDs (extremely short-dated options, for the uninitiated). It was niche. Nobody on Wall Street was monitoring it.&lt;/p&gt;

&lt;p&gt;That changed permanently in January 2021. The GameStop squeeze proved that a decentralized group of retail traders, coordinated loosely through Reddit, could move a stock hard enough to blow up a hedge fund. Melvin Capital lost 53% in a single month. The financial establishment was genuinely shaken — not because of the dollar amount, but because it challenged the assumption that retail flow is noise.&lt;/p&gt;

&lt;p&gt;Post-GME, WSB swelled to 15+ million members. The culture shifted. More newcomers, more people who had no idea what a put option was, more mainstream media attention. The quality of DD posts arguably declined. But the &lt;em&gt;data signal&lt;/em&gt; got stronger, not weaker. More members means more mentions, which means velocity spikes are more statistically significant. A ticker going viral on a 1-million-member sub could be a coincidence. A ticker going viral on a 15-million-member sub is an event.&lt;/p&gt;

&lt;p&gt;The financial industry adapted too. Hedge funds now monitor Reddit sentiment. There are entire companies built around scraping WSB for alpha. Quant funds include social media velocity in their models. The information asymmetry that existed in January 2021 — where WSB could see something Wall Street couldn't — is smaller now, but it hasn't disappeared. The crowd still finds things first sometimes, especially small-cap names that institutional analysts don't cover.&lt;/p&gt;

&lt;h2&gt;
  
  
  The widget stack
&lt;/h2&gt;

&lt;p&gt;Our &lt;a href="https://dev.to/help/reddit-sentiment"&gt;Reddit Sentiment widget&lt;/a&gt; is the starting point, but it's designed to work alongside everything else on the &lt;a href="https://dev.to/features/wsb"&gt;WSB Trading Floor&lt;/a&gt;. Sixteen widgets total — short interest scanner, dark pool volume, cost to borrow tracker, FTD spikes, congressional trades (yes, Pelosi's portfolio), gamma exposure, max pain, insider transaction clusters, and more.&lt;/p&gt;

&lt;p&gt;The idea is that you don't have to alt-tab between six different websites to get the full picture on a ticker. Reddit says it's hot. Short interest says it's squeezable. Options flow says someone big just bought calls. Dark pool says institutions are moving. FTDs are spiking. Cost to borrow is at 80%. That's six data points from six different sources, all on one screen.&lt;/p&gt;

&lt;p&gt;You can browse all of it without an account. If you want to add widgets to your dashboard and customise the layout, &lt;a href="https://nydar.co.uk/register" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; — takes about 30 seconds, no credit card.&lt;/p&gt;

&lt;h2&gt;
  
  
  What WSB is talking about right now
&lt;/h2&gt;

&lt;p&gt;I'm not going to list the "top WSB stocks for 2026" because that list goes stale in about 12 hours. That's the whole point of building a real-time tool instead of writing a blog post with tickers in it. Every article you see titled "Top 10 WallStreetBets Stocks to Buy in 2026" was outdated before it was published. The tickers they list were trending &lt;em&gt;when the author checked&lt;/em&gt;, which was probably three days before the article went live.&lt;/p&gt;

&lt;p&gt;If you want to know what WSB is buying right now — like, actually right now — &lt;a href="https://nydar.co.uk/register" rel="noopener noreferrer"&gt;check the widget&lt;/a&gt;. It updates constantly. Whatever I write here will be wrong by tomorrow, and we'd both rather you see the actual live data anyway.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/what-is-wsb-buying-right-now" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>28 Gaming Stocks We Track and Why Pure-Play Publishers Are the Ones to Watch</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Mon, 09 Mar 2026 14:51:08 +0000</pubDate>
      <link>https://dev.to/nydartrading/28-gaming-stocks-we-track-and-why-pure-play-publishers-are-the-ones-to-watch-5c76</link>
      <guid>https://dev.to/nydartrading/28-gaming-stocks-we-track-and-why-pure-play-publishers-are-the-ones-to-watch-5c76</guid>
      <description>&lt;p&gt;When most people think "gaming stocks," they think EA, Nintendo, maybe Activision before the Microsoft acquisition. The big names. The obvious ones.&lt;/p&gt;

&lt;p&gt;But those aren't the stocks that move 10-25% on a single game review. Those are the ones that shrug it off because gaming is one division among many.&lt;/p&gt;

&lt;p&gt;We built a &lt;a href="https://dev.to/features/game-stocks"&gt;tracking system&lt;/a&gt; that covers 28 game publishers across 8 global exchanges. After spending months mapping review events to stock reactions, one pattern became very clear: the less diversified the company, the more the stock moves on game reviews. And the companies that move the most are the ones most people haven't heard of.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tier system
&lt;/h2&gt;

&lt;p&gt;We split the 28 publishers into two tiers based on how much their stock price depends on game review scores.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1 — pure-play publishers.&lt;/strong&gt; These are companies where gaming is 80%+ of revenue and one AAA title can represent 30-60% of expected annual earnings. When that title reviews 15 points below expectations, analysts revise sales forecasts down 20-40%, and on a $2B market cap company that's $80-160M wiped off expected revenue before the market even opens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 2 — diversified publishers.&lt;/strong&gt; Gaming is significant but not the whole business. EA has FIFA/FC, Madden, and a large live-service portfolio. One bad review for a new IP isn't sinking the stock. The moves are smaller (3-8%), more noise-prone, and harder to trade.&lt;/p&gt;

&lt;p&gt;If you're looking at gaming stocks through the lens of review events, Tier 1 is where the signal-to-noise ratio is highest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tier 1: The stocks that actually move
&lt;/h2&gt;

&lt;h3&gt;
  
  
  CD Projekt (WSE: CDR)
&lt;/h3&gt;

&lt;p&gt;The canonical example. The Witcher and Cyberpunk are essentially the entire company. When Cyberpunk 2077 reviewed at 53 on consoles against an expected 92, the stock fell 9.4% on review day and 75% over the following months. When The Witcher 3 crushed expectations, the stock went on a multi-year run.&lt;/p&gt;

&lt;p&gt;CD Projekt is the poster child for review-driven stock moves because there's nowhere to hide. One franchise, one set of expectations, and the entire company rides on it. The Warsaw Stock Exchange listing adds another wrinkle — CDR trades during European hours, and most major review embargoes lift on US time. That delay between information and market reaction is exactly the kind of gap that creates opportunity.&lt;/p&gt;

&lt;p&gt;What makes CD Projekt especially interesting is the redemption arc. After the Cyberpunk launch disaster, they went heads-down on patches and the Phantom Liberty expansion. When Phantom Liberty reviewed well, the stock recovered significantly. The lesson: for pure-play publishers, the review-to-stock relationship works in both directions. A bad launch tanks the stock, but a strong follow-up can recover it — and the recovery move can be just as tradeable as the initial drop.&lt;/p&gt;

&lt;h3&gt;
  
  
  Capcom (TSE: 9697)
&lt;/h3&gt;

&lt;p&gt;Monster Hunter and Street Fighter drive the business. When Monster Hunter: World scored 90 and broke out in Western markets in 2018, Capcom stock more than doubled over the year as sales kept beating forecasts. Capcom is interesting because they've got several strong franchises — Resident Evil, Devil May Cry, Monster Hunter, Street Fighter — but they're all gaming. No semiconductor division, no music label, no financial services absorbing the blow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Krafton (KRX: 259960)
&lt;/h3&gt;

&lt;p&gt;PUBG. That's the story. Krafton is essentially a single-franchise company trading on the Korean exchange. Their market cap lives and dies with PUBG's global engagement metrics and whatever their next major title looks like. Any new launch is a massive event for this stock.&lt;/p&gt;

&lt;p&gt;Krafton is also a good example of why you need to watch the Korean exchanges specifically. KRX has different trading mechanics, different retail investor behaviour (Korean retail is famously active in gaming stocks), and different hours than Western markets. When a PUBG update launches badly and Korean gaming forums light up, the stock can move before any Western analyst has written a note about it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Embracer Group (OMX: EMBRAC B)
&lt;/h3&gt;

&lt;p&gt;This one's complicated. Embracer went on an acquisition spree — THQ Nordic, Gearbox, Crystal Dynamics, Eidos — and then the Saudi Aramco deal fell through in 2023. The stock got hammered. They're now restructuring and spinning off divisions. But the core thesis holds: when you own that many studios and gaming is the only business, major launch outcomes move the stock.&lt;/p&gt;

&lt;p&gt;Embracer is a cautionary tale about what happens when a pure-play publisher's strategy falls apart independent of review scores. The stock collapsed not because of bad reviews, but because the financing disappeared. That's an important reminder: review events are one catalyst among several. &lt;a href="https://dev.to/help/earnings"&gt;Earnings surprises&lt;/a&gt;, failed deals, and management changes can all overwhelm the review signal. The review thesis works best when there's no competing narrative.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team17 (LSE: TM17)
&lt;/h3&gt;

&lt;p&gt;The Overcooked and Worms publisher, plus a growing portfolio of indie publishing deals. Small-cap, UK-listed, and entirely dependent on gaming revenue. The stock is reactive to both their own launches and the broader indie publishing market sentiment.&lt;/p&gt;

&lt;p&gt;Small-cap gaming stocks like Team17 are where the review thesis gets most volatile. Lower liquidity means bigger percentage moves on less volume. A surprise hit from one of their indie partners can spike the stock more dramatically than a comparable event at EA, simply because fewer shares are trading. The flip side is that small-cap means wider spreads and harder exits — which matters if you're trying to trade around a review event in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Paradox Interactive (OMX: PDX)
&lt;/h3&gt;

&lt;p&gt;Crusader Kings, Stellaris, Cities: Skylines. Paradox has a loyal niche audience and a DLC-heavy business model. Their stock moves on launch reviews, but also on the post-launch DLC reception — a bad expansion for a live game can drag the stock down weeks after the initial launch.&lt;/p&gt;

&lt;p&gt;Cities: Skylines II is the case study here. The first game was a massive success with an 85 Metacritic. The sequel launched to mixed reviews and brutal Steam user scores — "Mostly Negative" at launch due to performance issues. The stock took a hit not just on the review day, but on a slow bleed as negative user reviews kept piling up over weeks. Paradox is unique because their DLC-driven model means the review window never really closes. Every major expansion is another potential catalyst, positive or negative.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kadokawa (TSE: 9468)
&lt;/h3&gt;

&lt;p&gt;The parent company of FromSoftware. When Elden Ring scored 96 against an expected 90 in 2022, Kadokawa stock rose 15%. They also own media and publishing businesses, but the FromSoftware connection makes them Tier 1 for review events involving Souls-like titles. Armored Core VI was another catalyst.&lt;/p&gt;

&lt;p&gt;Kadokawa is an interesting structural case because the company you're actually betting on (FromSoftware) is a subsidiary, not the listed entity itself. This means the stock reaction is somewhat diluted compared to what it would be if FromSoftware traded independently. But it also means the market sometimes underreacts to FromSoftware review events because analysts covering Kadokawa are thinking about the whole conglomerate. That underreaction is where the edge lives.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tier 2: The big names with built-in shock absorbers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Electronic Arts (NASDAQ: EA)
&lt;/h3&gt;

&lt;p&gt;EA is the classic Tier 2 example. FIFA/FC alone generates billions annually regardless of review scores — it's an annuity business driven by Ultimate Team spending, not critic opinions. When Anthem scored 55 against an expected 80+, the stock dropped 8% that week. Painful, but not catastrophic. EA's diversified enough to absorb one bad launch without an existential crisis. The review thesis works here, but the moves are muted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ubisoft (Euronext: UBI)
&lt;/h3&gt;

&lt;p&gt;Assassin's Creed, Far Cry, Rainbow Six — Ubisoft has enough franchises that one bad review doesn't sink the ship. But cumulative bad reviews do. The stock's long-term decline coincided with a string of middling review scores across multiple franchises. Individual launch events create 3-5% swings, not the 10-25% of Tier 1 publishers.&lt;/p&gt;

&lt;p&gt;Ubisoft is actually the best argument for why you should track review drift even on Tier 2 stocks. No single Ubisoft game tanked the stock — it was death by a thousand cuts. A 72 here, a 68 there, a cancelled project, another mediocre open-world game. The &lt;a href="https://dev.to/help/game-stocks"&gt;Review Impact History widget&lt;/a&gt; shows this pattern clearly: a string of below-average scores compounding over quarters until the market finally reprices the whole company downward.&lt;/p&gt;

&lt;h3&gt;
  
  
  Take-Two Interactive (NASDAQ: TTWO)
&lt;/h3&gt;

&lt;p&gt;GTA and NBA 2K dominate the revenue. Take-Two is interesting because GTA launches are so rare and so massive that they're essentially Tier 1 events — a new GTA scoring badly would be catastrophic. But between GTA launches, the stock trades more on 2K Sports revenue and Red Dead engagement, making it a Tier 2 for most review events.&lt;/p&gt;

&lt;p&gt;The GTA situation is worth its own paragraph. GTA VI is the single most-anticipated game in the industry. When it finally gets a review embargo date, every gaming stock tracker in the world should be watching — because a GTA launch doesn't just move Take-Two, it moves the entire sector. A 97 lifts all boats. An 80 would send shockwaves. The sympathy correlation during a GTA launch event will be unlike anything else in the dataset.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nintendo (TSE: 7974)
&lt;/h3&gt;

&lt;p&gt;Hardware-software combo with massive cash reserves. A bad Zelda review would move the stock, but Nintendo's balance sheet and hardware revenue provide cushioning that pure-play publishers don't have. They're also culturally resistant to the kind of quality crashes that hit Western publishers — their internal quality bar prevents most disasters before they happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bandai Namco (TSE: 7832)
&lt;/h3&gt;

&lt;p&gt;Gaming plus toys plus anime. The diversification dilutes review-score sensitivity. Elden Ring was a Bandai Namco-published title, but the stock reaction was smaller than Kadokawa's (the FromSoftware parent) because gaming is one piece of a larger pie.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sony (TSE: 6758)
&lt;/h3&gt;

&lt;p&gt;PlayStation is a division, not the company. Semiconductors, music, pictures, financial services — a bad PS5 exclusive review doesn't register on Sony's balance sheet the way it would for a pure-play publisher. Sony is relevant to the gaming sector thesis only through cross-correlation: when sector sentiment turns negative, Sony can get dragged along.&lt;/p&gt;

&lt;h2&gt;
  
  
  The rest of the 28
&lt;/h2&gt;

&lt;p&gt;We haven't covered every company individually, but they all matter to the sector picture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asia:&lt;/strong&gt; Square Enix (TSE: 9684) is a fascinating in-between case — Final Fantasy is massive, but they also have manga, merch, and amusement businesses. A bad Final Fantasy review still moves the stock, but less than it would if Square Enix were pure gaming. Konami (TSE: 9766) has pivoted so far toward fitness clubs and pachislot machines that game reviews barely register anymore, but a new Metal Gear or Silent Hill could change that overnight. Sega (TSE: 6460) has Atlus and Persona/Shin Megami Tensei alongside Sonic, giving them a surprisingly strong franchise roster. Nexon (TSE: 3659) and NCSoft (KRX: 036570) are the Korean online gaming giants — their stocks react more to monthly active user numbers than to critic reviews, but a major new title launch is still a review event.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Europe:&lt;/strong&gt; Frontier Developments (LSE: FDEV) makes Planet Coaster and Elite Dangerous — small-cap UK, entirely gaming-dependent, and the stock moves violently on launch reviews. Stillfront (OMX: SF) focuses on mobile and free-to-play, which is less review-driven but still trades with the sector.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Americas:&lt;/strong&gt; Roblox (NYSE: RBLX) is platform-first rather than game-first, so traditional review events don't apply the same way — but it trades with gaming sector sentiment. Zynga got acquired by Take-Two, thinning the US pure-play publisher list further.&lt;/p&gt;

&lt;p&gt;The point isn't that all 28 are equally tradeable on review events. It's that tracking all of them gives you the sector-level picture: correlation data, sympathy moves, and momentum indicators that you miss if you're only watching one or two stocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The exchanges matter
&lt;/h2&gt;

&lt;p&gt;The 28 companies trade across 8 exchanges in 3 time zones:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asia (14 companies):&lt;/strong&gt; Tokyo Stock Exchange, Korea Exchange, KOSDAQ. Japanese and Korean publishers dominate the list — Capcom, Konami, Square Enix, Sega, Nintendo, Nexon, Kadokawa, NCSoft, Krafton, and more. These exchanges open when most Western review embargoes have already lifted, creating a gap between information and price.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Europe (9 companies):&lt;/strong&gt; Warsaw Stock Exchange, Euronext Paris, OMX Stockholm, Helsinki, London Stock Exchange. CD Projekt, Ubisoft, Embracer, Paradox, Team17, and others. European trading hours overlap with US morning, so review events during this window get priced in across both regions simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Americas (5 companies):&lt;/strong&gt; NASDAQ and NYSE. EA, Take-Two, Activision Blizzard (Microsoft), Roblox, and others. These get the most analyst coverage and institutional attention, which means review events are priced in fastest here.&lt;/p&gt;

&lt;p&gt;The time zone arbitrage is real. If a major review drops at 10am Eastern, Tokyo doesn't open for another 9-10 hours. You know the signal before the Asian market has a chance to react. Our &lt;a href="https://dev.to/features/game-stocks"&gt;Gaming Sector widget&lt;/a&gt; shows cross-publisher correlation by region so you can estimate the likely sympathy move.&lt;/p&gt;

&lt;h2&gt;
  
  
  What moves the stock isn't the score — it's the deviation
&lt;/h2&gt;

&lt;p&gt;This is the thing most people get wrong about gaming stocks. A Metacritic 75 isn't automatically bearish. A 92 isn't automatically bullish. &lt;strong&gt;What matters is how far the actual score deviates from the franchise's historical average.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If a franchise averages 82 across five entries and the new one scores 92, that's a +10 deviation — bullish. The publisher beat expectations. If the same franchise scores 70, that's -12 — the stock is getting hit.&lt;/p&gt;

&lt;p&gt;The same absolute score of 75 can be bullish for a franchise that averages 68 and catastrophic for one that averages 90. The deviation model is why we track &lt;a href="https://dev.to/help/game-stocks"&gt;franchise expected scores&lt;/a&gt; and show the gap in real time as reviews come in.&lt;/p&gt;

&lt;p&gt;Pre-release hype adds another layer. If IGDB follower counts and trailer views are through the roof, the effective market expectation is higher than the franchise average. An "average" score for a hyped game disappoints because average wasn't what anyone was pricing in.&lt;/p&gt;
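&lt;p&gt;The deviation model is simple enough to sketch. The hype premium below is an illustrative stand-in for the hype adjustment, not a fitted parameter:&lt;/p&gt;

```python
def review_deviation(score, franchise_avg, hype_premium=0.0):
    """Signed gap between the actual score and the effective expectation.

    hype_premium models pre-release hype raising the bar above the
    plain franchise average.
    """
    return score - (franchise_avg + hype_premium)

# The worked numbers from the text:
print(review_deviation(92, 82))   # 10: beat expectations, bullish
print(review_deviation(70, 82))   # -12: missed badly, the stock gets hit
print(review_deviation(75, 68))   # 7: the same 75 flatters a weak franchise
print(review_deviation(75, 90))   # -15: and punishes a strong one
print(review_deviation(82, 80, hype_premium=10))  # -8: "average" disappoints a hyped game
```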

&lt;h2&gt;
  
  
  The social amplifier
&lt;/h2&gt;

&lt;p&gt;A bad review score with 500 Reddit comments in the first hour moves the stock 2-3x more than a bad score with 50 comments. Social buzz is the multiplier that determines whether a review event stays in gaming news or crosses over into financial news.&lt;/p&gt;

&lt;p&gt;We track this across &lt;a href="https://dev.to/features/game-stocks"&gt;Reddit, YouTube, Twitch, and Discord&lt;/a&gt;. When a negative review hits the front page of r/games with thousands of upvotes and three major YouTube creators release negative videos within 24 hours, it goes from "niche gaming news" to the kind of story that triggers analyst notes and institutional selling.&lt;/p&gt;

&lt;p&gt;The YouTube creator signal is particularly telling. When SkillUp, ACG, and AngryJoe all release negative reviews within the same afternoon, the combined audience reach is in the tens of millions. That's not a niche gaming discussion anymore — that's a consumer sentiment wave. And when those videos start getting picked up by mainstream finance coverage ("Gaming giant's stock tumbles as critics pan new release"), the stock move enters its second leg.&lt;/p&gt;

&lt;p&gt;Twitch viewership tells a different story. High day-one viewer counts confirm mainstream interest in the title. But what matters more is the drop-off rate. A game that launches with 300k concurrent Twitch viewers but drops to 20k within 48 hours is signalling that the hype was front-loaded — and the stock often follows the viewer count down over the following week.&lt;/p&gt;
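&lt;p&gt;The drop-off rate is just one subtraction, shown here with the figures from the example above:&lt;/p&gt;

```python
def twitch_dropoff(launch_viewers, viewers_48h):
    """Fraction of launch-day concurrent viewership lost within 48 hours."""
    return 1 - viewers_48h / launch_viewers

# 300k launch concurrents fading to 20k within two days
rate = twitch_dropoff(300_000, 20_000)
print(f"{rate:.0%} of launch viewership gone in 48h")  # prints "93% ..."
```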

&lt;h2&gt;
  
  
  Cross-publisher sympathy
&lt;/h2&gt;

&lt;p&gt;One pattern that surprised me: game publishers don't trade in isolation. When one publisher drops on a bad review, peers in the same region often follow within the same week. We call this sympathy trading, and when correlation is running above 60%, a single review event becomes a sector event.&lt;/p&gt;

&lt;p&gt;This creates opportunities beyond the direct play. If Publisher A drops 8% on a bad review and Publisher B drops 3% in sympathy despite having no news, Publisher B might be oversold. Or if you have a strong thesis on an upcoming review event and sector correlation is high, you can capture the move across multiple tickers.&lt;/p&gt;

&lt;p&gt;Regional clustering makes this more predictable than you'd expect. Japanese publishers correlate more tightly with each other than with European or US publishers — partly overlapping trading hours, partly shared analyst coverage, partly retail investor overlap. When one TSE-listed publisher drops on review news, other TSE gaming stocks often dip the same session. The European cluster behaves similarly — CD Projekt, Ubisoft, Embracer, and Paradox tend to move together during sector events even though they're on different exchanges within Europe.&lt;/p&gt;
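&lt;p&gt;For the curious, a same-direction measure of this kind (distinct from Pearson correlation, since it only counts sign agreement) can be sketched as follows, with invented daily returns:&lt;/p&gt;

```python
def same_direction_pct(returns_a, returns_b):
    """Share of sessions where two tickers closed in the same direction.

    returns_a / returns_b are aligned lists of daily returns; only the
    sign of each pair matters, not the magnitude.
    """
    same = sum(1 for a, b in zip(returns_a, returns_b) if a * b > 0)
    return same / len(returns_a)

# Illustrative returns for two European publishers during a sector event
pub_a = [-0.08, 0.01, -0.02, 0.03, -0.01]
pub_b = [-0.03, 0.02, -0.01, -0.01, -0.02]
pct = same_direction_pct(pub_a, pub_b)
print(f"same-direction sessions: {pct:.0%}")  # prints "... 80%"
if pct > 0.60:
    print("high sympathy, treat review events as sector events")
```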

&lt;p&gt;The &lt;a href="https://dev.to/features/game-stocks"&gt;Gaming Sector widget&lt;/a&gt; quantifies this with same-direction move percentages between every publisher pair, filterable by region. When that number is above 60%, you're not just trading one stock — you're trading a sector.&lt;/p&gt;

&lt;h2&gt;
  
  
  When none of this matters
&lt;/h2&gt;

&lt;p&gt;There are times when the review thesis goes out the window, and it's worth knowing when to sit on your hands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Earnings week.&lt;/strong&gt; If a publisher is reporting quarterly earnings within a few days of a review embargo, the earnings data dominates everything. A great review score won't save a stock that just missed revenue estimates by 15%. Always cross-reference the &lt;a href="https://dev.to/help/earnings"&gt;earnings calendar&lt;/a&gt; before committing to a review-event trade.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Macro panic.&lt;/strong&gt; During broad market selloffs — a Fed rate shock, a banking crisis, a geopolitical escalation — sector correlation goes to 1. Everything drops together. Gaming stocks included. The review signal gets drowned out by macro noise, and even a 95-scoring blockbuster won't help a publisher's stock when the whole market is down 4%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live-service transitions.&lt;/strong&gt; Some publishers are shifting from launch-driven revenue to recurring subscription and microtransaction models. The more a company depends on live-service revenue from existing games rather than launch sales from new titles, the less review scores matter to the stock. EA's shift toward FC Ultimate Team and Apex Legends is a good example — the company's quarterly revenue depends more on engagement metrics than new game scores.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-announced disasters.&lt;/strong&gt; Sometimes the market prices in a bad review before it happens. If the publisher set a day-one embargo (bad sign), gameplay leaks look rough, and early access coverage is negative — the stock may have already dropped before the embargo lifts. In that case, the actual review might be a "sell the rumour, buy the news" situation where the stock bounces on a score that's bad but not as bad as feared.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which gaming stocks are worth watching right now?
&lt;/h2&gt;

&lt;p&gt;I'm not going to tell you what to buy. That's not what Nydar does — we're a paper trading and analysis platform, not a financial advisor.&lt;/p&gt;

&lt;p&gt;But I will say this: the gaming stocks worth watching are the ones with upcoming AAA launches, short review embargo timelines, and Tier 1 revenue concentration. The &lt;a href="https://dev.to/help/game-stocks"&gt;Game Releases widget&lt;/a&gt; shows upcoming launches with embargo countdowns, franchise expected scores, and IGDB hype data. That's where you find the next review event.&lt;/p&gt;

&lt;p&gt;The pattern repeats every time. Embargo lifts, reviews flood in, the score forms, social buzz amplifies it, and the stock moves. Whether it moves up or down depends on the deviation from expected. The &lt;a href="https://dev.to/learn/game-stock-trading"&gt;learn guide&lt;/a&gt; walks through the full playbook.&lt;/p&gt;

&lt;p&gt;Twenty-eight stocks. Eight exchanges. Three time zones. One thesis: for pure-play game publishers, review scores are the most predictable catalyst in equity markets, and almost nobody is systematically tracking them.&lt;/p&gt;

&lt;p&gt;We are.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/gaming-stocks-to-watch" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>We Built a Tool That Maps Game Review Scores to Stock Moves</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Mon, 09 Mar 2026 13:47:43 +0000</pubDate>
      <link>https://dev.to/nydartrading/we-built-a-tool-that-maps-game-review-scores-to-stock-moves-3k3k</link>
      <guid>https://dev.to/nydartrading/we-built-a-tool-that-maps-game-review-scores-to-stock-moves-3k3k</guid>
      <description>&lt;p&gt;I've been tracking game publisher stocks for years. Not because I'm a huge gamer — I am, but that's beside the point — but because the relationship between review scores and stock prices is one of the most predictable patterns in equity markets, and almost nobody is systematically trading it.&lt;/p&gt;

&lt;p&gt;So we built a tool to do exactly that. Seven dedicated widgets, 28 companies tracked across 8 global exchanges, and a data pipeline that pulls from six different sources. As far as I can tell, nothing else like it exists.&lt;/p&gt;

&lt;h2&gt;
  
  
  The thesis in 30 seconds
&lt;/h2&gt;

&lt;p&gt;For diversified tech companies, a single game title barely registers on the balance sheet. Microsoft isn't going to tank because one Game Pass title scored a 65. But for a &lt;strong&gt;pure-play publisher&lt;/strong&gt; — a company where gaming is 80%+ of revenue — one AAA title can represent 30-60% of expected annual revenue.&lt;/p&gt;

&lt;p&gt;When that title reviews 15 points below expectations, analysts revise sales projections down 20-40%. On a $2B market cap company, that's $80-160M wiped off expected revenue before the stock even opens. Fear and momentum amplify the move from there.&lt;/p&gt;
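&lt;p&gt;To make that arithmetic explicit: the market cap and revision range come from above, while the roughly 5x price-to-sales multiple is an assumption for illustration:&lt;/p&gt;

```python
def revenue_hit(market_cap, ps_multiple, revision):
    """Dollars wiped off expected revenue by an analyst forecast cut."""
    return market_cap / ps_multiple * revision

# $2B market cap at an assumed 5x sales implies ~$400M expected revenue
for revision in (0.20, 0.40):
    hit = revenue_hit(2_000_000_000, 5, revision)
    print(f"{revision:.0%} forecast cut: ${hit / 1e6:.0f}M off expected revenue")
# prints $80M and $160M, the range quoted above
```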

&lt;p&gt;The numbers back this up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Homefront → THQ (2011):&lt;/strong&gt; Metacritic 70 vs expected 85. Stock fell 26% in one day. THQ filed for bankruptcy the following year.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cyberpunk 2077 → CD Projekt (2020):&lt;/strong&gt; Console Metacritic 53 vs expected 92. Stock fell 9.4% on review day, 75% over months.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elden Ring → Kadokawa (2022):&lt;/strong&gt; Metacritic 96 vs expected 90. Publisher stock rose 15%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apex Legends → EA (2019):&lt;/strong&gt; Surprise launch, Metacritic 89. Stock rose 16% in one day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anthem → EA (2019):&lt;/strong&gt; Metacritic 55 vs expected 80+. Stock dropped 8% in the week after reviews. The live-service roadmap was eventually abandoned entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monster Hunter: World → Capcom (2018):&lt;/strong&gt; Metacritic 90 with massive Western crossover success. Capcom stock more than doubled over the year as sales kept beating forecasts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern is clear. The question is whether you can spot the signal fast enough to act on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why speed matters
&lt;/h2&gt;

&lt;p&gt;Game reviews don't drop all at once. There's a predictable sequence, and each stage gives you different information. Understanding the timeline is the difference between catching the move and chasing it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Embargo timing — the first signal
&lt;/h3&gt;

&lt;p&gt;Before a single review publishes, the embargo date itself tells you something. Publishers set review embargoes — the date and time when critics are allowed to publish their scores. &lt;strong&gt;When they set it is a signal in itself.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Early embargoes (days before launch) indicate confidence. The publisher wants reviews out there driving preorders. Late embargoes (launch day or later) are a red flag — the publisher is trying to sell copies before the scores hit. We've seen this pattern enough times that embargo timing alone can adjust your thesis before a single review drops.&lt;/p&gt;

&lt;p&gt;Both the Launch Health and Game Releases widgets show real-time embargo countdowns in hours and minutes. When the countdown hits zero, reviews start flooding in.&lt;/p&gt;

&lt;h3&gt;
  
  
  First reviews — where the edge is sharpest
&lt;/h3&gt;

&lt;p&gt;The moment the embargo lifts, major outlets publish simultaneously. IGN, GameSpot, Eurogamer — they've had the game for weeks and their reviews are queued up. But here's the thing: RSS feeds pick up these reviews minutes before they appear on aggregate sites like OpenCritic.&lt;/p&gt;

&lt;p&gt;We pull from five outlet RSS feeds. You can literally see individual critic scores appearing in the Review Feed widget before the aggregate Metacritic/OpenCritic number forms. The first 5-10 reviews set the tone, but they're not the final score — and that gap between early signal and final number is where the opportunity lives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Review drift — the second leg
&lt;/h3&gt;

&lt;p&gt;As more reviews come in over hours and days, the aggregate score moves. We call this &lt;strong&gt;review drift&lt;/strong&gt;, and we track it with a sparkline that shows the running average over time.&lt;/p&gt;

&lt;p&gt;A game that starts at 88 and drifts down to 82 as smaller outlets and more critical reviewers weigh in tells a very different story than one that holds steady at 88. The drift direction is often a &lt;a href="https://dev.to/learn/game-stock-trading"&gt;leading indicator for the stock move's second leg&lt;/a&gt;. The initial score sets the direction; the drift determines whether the move extends or reverses.&lt;/p&gt;

&lt;p&gt;I've seen cases where the first batch of reviews looked solid — scores in the low 80s, market barely reacted — but the drift sparkline showed a clear downward trend. By the time the aggregate settled at 74, the stock had already started falling. Watching the drift, you could see it coming.&lt;/p&gt;
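&lt;p&gt;As a rough sketch of the drift idea (illustrative Python, not Nydar's actual code): the sparkline is just a running average of scores as they arrive, and the drift is how far that average has moved since the first batch of reviews.&lt;/p&gt;

```python
# Illustrative sketch of the review-drift calculation; not Nydar's code.

def running_average(scores):
    """Running mean after each new review: the data behind a drift sparkline."""
    avgs, total = [], 0.0
    for i, s in enumerate(scores, start=1):
        total += s
        avgs.append(total / i)
    return avgs

def drift(scores, first_batch=5):
    """How far the running average has moved since the first batch.
    Negative means the aggregate score is settling down."""
    avgs = running_average(scores)
    if len(avgs) <= first_batch:
        return 0.0
    return avgs[-1] - avgs[first_batch - 1]

# A run like the one described: starts in the low 80s, settles toward the low 70s.
scores = [84, 82, 83, 80, 81, 76, 74, 72, 70, 68]
print(drift(scores))  # -5.0: a clear downward drift
```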

&lt;h3&gt;
  
  
  Social amplification — the multiplier
&lt;/h3&gt;

&lt;p&gt;This is the part most people miss. A bad review score with 500 Reddit comments within an hour moves the stock 2-3x more than a bad score with 50 comments. Social buzz is the multiplier.&lt;/p&gt;

&lt;p&gt;We track this across four platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reddit&lt;/strong&gt; — mention volume across r/games, r/gaming, and r/pcgaming, the three largest gaming subreddits. When a bad review hits the front page with thousands of upvotes, it goes from "niche gaming news" to "mainstream financial news" fast.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YouTube&lt;/strong&gt; — creator coverage from six major channels (IGN, gameranx, Skill Up, ACG, AngryJoe, Worth A Buy). Multiple negative videos from trusted creators within 24 hours of embargo lift are a strong signal that retail sentiment will turn negative.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitch&lt;/strong&gt; — live viewership and active stream counts. High viewer counts on launch day confirm mainstream interest. Declining viewership within the first week suggests the game isn't retaining attention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discord&lt;/strong&gt; — server member counts, channel counts, and direct invite links. A game with a large, active Discord community has a higher floor for post-launch engagement.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Steam launch data — the reality check
&lt;/h3&gt;

&lt;p&gt;Player counts and user review sentiment provide a second opinion on the critic score. Sometimes critics and users disagree — we call this a &lt;strong&gt;contrarian signal&lt;/strong&gt;. If critics say 85 but Steam reviews are "Mixed," that's a red flag the stock could give back gains. If critics say 72 but Steam users love it, the stock drop might be overdone.&lt;/p&gt;

&lt;p&gt;We also scan Steam reviews for disaster keywords — "refund," "broken," "unplayable," "crash" — and track the player count trajectory over the first few days. A game that launches with 200k concurrent players but drops to 40k within 48 hours is a very different picture than one that holds steady.&lt;/p&gt;
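&lt;p&gt;Both checks are simple to sketch. Here is a hypothetical Python version: the keyword list comes from the description above, but the function names, review text, and output scaling are all assumptions for illustration.&lt;/p&gt;

```python
# Hypothetical sketch of the two Steam checks: a disaster-keyword scan
# over review text, and a simple player-retention ratio.

DISASTER_KEYWORDS = ("refund", "broken", "unplayable", "crash")

def disaster_rate(reviews):
    """Fraction of reviews containing at least one disaster keyword."""
    if not reviews:
        return 0.0
    hits = sum(
        any(kw in review.lower() for kw in DISASTER_KEYWORDS)
        for review in reviews
    )
    return hits / len(reviews)

def retention(launch_peak, current_players):
    """Share of launch-peak concurrent players still playing."""
    return current_players / launch_peak

reviews = [
    "Great game, runs well",
    "Constant crashes, asked for a refund",
    "Unplayable on my rig",
]
print(round(disaster_rate(reviews), 2))  # 0.67: two of three reviews flag problems
print(retention(200_000, 40_000))        # 0.2: the 200k-to-40k example above
```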

&lt;h2&gt;
  
  
  What we actually built
&lt;/h2&gt;

&lt;p&gt;We didn't build one widget and call it done. The signal requires context, and context requires multiple data sources viewed together. That's why we built seven.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/help/game-stocks"&gt;Launch Health&lt;/a&gt;&lt;/strong&gt; — The flagship composite score. Six weighted factors: critic score vs expected (40%), Steam user sentiment (20%), player count trajectory (15%), review velocity (10%), disaster keywords (10%), and Twitch viewership (5%). Produces a single 0-100 score with buy/sell classification. Also shows embargo countdown timers and sentiment trajectory sparklines.&lt;/p&gt;
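&lt;p&gt;The weighting above reduces to a plain weighted average. This is an illustrative Python sketch, not Nydar's implementation: the weights come from the list above, but the factor names, the 0-100 scaling, and the buy/sell thresholds are assumptions.&lt;/p&gt;

```python
# Sketch of a Launch-Health-style composite: a weighted average of six
# factors, each pre-scaled to 0-100. Weights are from the post; all other
# names and thresholds are illustrative assumptions.

WEIGHTS = {
    "critic_vs_expected": 0.40,
    "steam_sentiment":    0.20,
    "player_trajectory":  0.15,
    "review_velocity":    0.10,
    "disaster_keywords":  0.10,
    "twitch_viewership":  0.05,
}

def launch_health(factors):
    """Weighted average of factor scores, each already scaled 0-100."""
    score = sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)
    return round(score, 1)

def classify(score, buy_above=70, sell_below=40):
    # Thresholds here are invented for illustration.
    if score >= buy_above:
        return "buy"
    if score <= sell_below:
        return "sell"
    return "hold"

factors = {
    "critic_vs_expected": 85,
    "steam_sentiment":    70,
    "player_trajectory":  60,
    "review_velocity":    80,
    "disaster_keywords":  90,
    "twitch_viewership":  50,
}
score = launch_health(factors)
print(score, classify(score))  # 76.5 buy
```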

&lt;p&gt;&lt;strong&gt;Review Feed&lt;/strong&gt; — Individual critic reviews as they drop, with a running average, review drift sparkline, velocity badges, and RSS-matched article counts. This is where you watch the score form in real time. The drift sparkline is the key innovation — it shows you whether the score is settling up or down before the final number is public.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Game Releases&lt;/strong&gt; — Upcoming launches with embargo dates, franchise expected scores based on historical data, IGDB hype counts and follows, community ratings, platform chips, and hype-vs-player conversion signals. The tier filter lets you focus on Tier 1 publishers where the thesis is strongest. DLC counts show post-launch content commitment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stock Heatmap&lt;/strong&gt; — All 28 companies grouped by region (Asia, Europe, Americas). Click any company for a franchise history detail panel: per-franchise average scores, launch counts, stock impact percentages, best and worst launches, and overall company stats. This is the research layer — you use it before an event to understand a publisher's track record.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review Impact History&lt;/strong&gt; — Historical score-to-stock events with a scatter chart showing the correlation visually. Full backtest results with win rate, total PnL, return percentage, individual trade breakdowns (wins/losses, average return per trade), and strategy description. Sector momentum banner and cross-publisher sympathy percentage give you the macro context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Social Buzz&lt;/strong&gt; — All four social platforms in one view, correlated with the publisher's stock ticker. Reddit mention volume, YouTube creator coverage, Discord member and channel counts with invite links, and Twitch live viewer counts with active streams. Search for any game and see the full social picture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gaming Sector&lt;/strong&gt; — Three views in one widget. The overview shows all 28 stocks by region with catalyst flags and historical event counts. The momentum view shows hit rate, average impact, positive/negative event split, and Tier 1 stats. The correlation view measures same-direction moves between publishers, filterable by region.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deviation model
&lt;/h2&gt;

&lt;p&gt;This is the core idea and it's worth spelling out clearly, because it's counterintuitive if you've never thought about it this way.&lt;/p&gt;

&lt;p&gt;The absolute review score doesn't predict the stock move. &lt;strong&gt;The deviation from expected&lt;/strong&gt; does.&lt;/p&gt;

&lt;p&gt;We calculate expected scores from franchise history. If a series has averaged 82 across five entries, the market broadly expects the next one to land near 82. Score 92? That's a +10 deviation — strong buy signal. Score 70? That's -12 — the stock is going to get hit.&lt;/p&gt;

&lt;p&gt;Here's why this matters: a Metacritic 75 for a franchise that historically averages 70 is &lt;em&gt;bullish&lt;/em&gt;. The publisher beat expectations. The same 75 for a franchise that averages 90 is &lt;em&gt;catastrophic&lt;/em&gt;. That's a -15 deviation. Same score, completely opposite stock reaction.&lt;/p&gt;
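&lt;p&gt;In code, the model is almost trivially small, which is part of its appeal. A hedged sketch (the franchise histories are made up, and the function names are mine, not Nydar's):&lt;/p&gt;

```python
# Sketch of the deviation model: expected score is the franchise's
# historical average; the signal is the new score's deviation from it.

def expected_score(franchise_history):
    """Market expectation proxy: the franchise's historical average score."""
    return sum(franchise_history) / len(franchise_history)

def deviation_signal(new_score, franchise_history):
    """Deviation from expectation and its direction."""
    dev = new_score - expected_score(franchise_history)
    direction = "bullish" if dev > 0 else "bearish" if dev < 0 else "neutral"
    return round(dev, 1), direction

# The same Metacritic 75, opposite signals: the example from the post.
print(deviation_signal(75, [68, 70, 72]))  # franchise averages 70
print(deviation_signal(75, [88, 90, 92]))  # franchise averages 90
```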

&lt;p&gt;IGDB hype data adds another adjustment layer. A franchise might average 82 historically, but if pre-release hype is through the roof — massive trailer views, IGDB follows spiking, preorder figures leaking high — the effective market expectation is higher than 82. When a hyped game delivers an "average" 82 score, the stock still drops because "average" wasn't what anyone was pricing in. The hype-vs-players conversion signal in the Releases widget catches exactly this: overhyped titles that technically met franchise averages but missed inflated market expectations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-publisher correlation
&lt;/h2&gt;

&lt;p&gt;One thing that surprised me when I dug into the data: game publishers don't trade in isolation. When one publisher drops on a bad review, peers often follow within the same week. We call this sympathy trading, and we measure it.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/features/game-stocks"&gt;Gaming Sector widget&lt;/a&gt; shows same-direction move percentages between publishers during review events, filterable by region (Asia, Europe, Americas). When correlation is high — 60%+ — a single review event becomes a sector event. A bad review for one publisher drags peers down. A hit game lifts the sector.&lt;/p&gt;
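&lt;p&gt;A minimal sketch of the same-direction measure, assuming daily percentage returns as input (the tickers and numbers here are invented for illustration):&lt;/p&gt;

```python
# Sketch of the "same-direction move" percentage between two publishers:
# the share of sessions where both daily returns had the same sign.

def same_direction_pct(returns_a, returns_b):
    """Percentage of sessions where both tickers moved the same way."""
    pairs = list(zip(returns_a, returns_b))
    same = sum(1 for a, b in pairs if a * b > 0)
    return 100 * same / len(pairs)

# Daily % returns around a hypothetical review event.
pub_a = [-8.0, -2.1, 1.5, 0.8, -1.2]
pub_b = [-3.0, -1.4, 0.9, -0.3, -0.7]
print(same_direction_pct(pub_a, pub_b))  # 80.0: four of five sessions agree
```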

&lt;p&gt;This creates opportunities beyond the direct play. If Publisher A drops 8% on a bad review and Publisher B drops 3% in sympathy despite having no news of its own, Publisher B might be mispriced. Or if sector correlation is running hot and you have a strong thesis on an upcoming review event, you can size across multiple tickers to capture the sector move rather than concentrating on a single stock.&lt;/p&gt;

&lt;p&gt;The region filter matters here. Asian publishers (TSE, KRX) tend to correlate more tightly with each other than with European or US publishers, partly because of overlapping trading hours and shared analyst coverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tier 1 vs Tier 2
&lt;/h2&gt;

&lt;p&gt;Not all publishers are equally sensitive to review scores. We classify them into tiers based on revenue concentration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1 — high sensitivity.&lt;/strong&gt; Pure-play publishers where gaming is the core business and one title can make or break the year. Review events can move these stocks 10-25%. CD Projekt is the canonical example — The Witcher and Cyberpunk are essentially the entire company. Capcom, Krafton, Embracer, Team17, Paradox — these are companies where a single launch outcome materially changes the financial outlook.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 2 — moderate sensitivity.&lt;/strong&gt; Diversified publishers where gaming is significant but not the only revenue stream. Review scores matter but are diluted by other business lines. EA has FIFA/FC, Madden, and a large live-service portfolio — one bad review for a new IP won't sink the stock. Ubisoft, Take-Two, and Bandai Namco fall here. The moves are smaller (3-8%) and more noise-prone.&lt;/p&gt;

&lt;p&gt;The tier filter in the Releases widget lets you focus on Tier 1 publishers where the thesis is cleanest. If you're trading review events, Tier 1 is where the signal-to-noise ratio is highest.&lt;/p&gt;

&lt;h2&gt;
  
  
  The exchange complication
&lt;/h2&gt;

&lt;p&gt;We track companies on 8 exchanges across 3 time zones: Tokyo, Seoul, Warsaw, Paris, Stockholm, Helsinki, London, and US (NASDAQ/NYSE). This means a review embargo that lifts at 10am Eastern might not impact a Tokyo-listed stock until the next Asian trading session — a 14+ hour delay.&lt;/p&gt;

&lt;p&gt;This is actually an advantage if you know how to use it. If a major review drops during US hours and you've already read the signal, you know how Asian-listed peers are likely to open before the market gets there. The Gaming Sector widget's region-based correlation view helps you estimate the likely sympathy move.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I think this matters
&lt;/h2&gt;

&lt;p&gt;There's a whole industry built around tracking earnings announcements and analyst upgrades. Entire platforms exist to monitor insider trades and 13F filings. Bloomberg terminals have earnings surprise trackers. Every brokerage app has an earnings calendar.&lt;/p&gt;

&lt;p&gt;But review scores? The data is public, the pattern is well-documented in academic literature, and the moves are significant — yet I couldn't find a single tool that systematically maps review events to stock tickers across global exchanges. The information is scattered across OpenCritic, IGDB, Reddit, YouTube, Steam, and Discord. Nobody was pulling it together and connecting it to stock tickers.&lt;/p&gt;

&lt;p&gt;So we built one. Twenty-eight companies. Eight exchanges. Three time zones. Six data sources. Seven widgets.&lt;/p&gt;

&lt;p&gt;Is it niche? Absolutely. There are maybe 4-5 major review events per year that generate tradeable stock moves. But when they happen, the moves are large, predictable in direction (if not magnitude), and the signal comes before the market has fully priced it in.&lt;/p&gt;

&lt;h2&gt;
  
  
  When this doesn't work
&lt;/h2&gt;

&lt;p&gt;I'd be lying if I said this was a guaranteed money printer. There are real limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Earnings trump reviews.&lt;/strong&gt; If the publisher is reporting quarterly earnings within days of a review embargo, the earnings data dominates. A great review score won't save a stock that just missed revenue estimates by 15%. Always check the &lt;a href="https://dev.to/help/earnings"&gt;earnings calendar&lt;/a&gt; before trading a review event.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diversified publishers absorb the hit.&lt;/strong&gt; Sony isn't dropping 10% because one PS5 exclusive reviewed badly. Their gaming division is one piece of a larger business that includes semiconductors, music, pictures, and financial services. The thesis only works cleanly on Tier 1 pure-play publishers where one title genuinely matters to the bottom line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review scores don't always predict sales.&lt;/strong&gt; Some games sell brilliantly despite mediocre reviews — brand power, franchise loyalty, and limited competition can carry a title commercially. Some games review well but sell poorly because of niche genre appeal, bad marketing timing, or a crowded release window. The review-to-stock connection works because scores &lt;em&gt;correlate&lt;/em&gt; with sales, not because they &lt;em&gt;guarantee&lt;/em&gt; them.&lt;/p&gt;

&lt;p&gt;If you're interested in the full thesis, including the review timeline playbook, risk management around earnings overlap, and how to handle exchange hour mismatches, we wrote a &lt;a href="https://dev.to/learn/game-stock-trading"&gt;comprehensive guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Seven widgets cover the full lifecycle of a review event: embargo countdown, score formation, social amplification, the Steam reality check, sector correlation, and historical backtesting. Add them to your dashboard, pick a Tier 1 publisher with an upcoming embargo, and watch how it plays out.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/features/game-stocks"&gt;feature page&lt;/a&gt; has the full breakdown of what's included, and the &lt;a href="https://dev.to/help/game-stocks"&gt;help docs&lt;/a&gt; cover every widget in detail.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/tracking-game-review-scores-to-stock-moves" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>What Happens When a Solo Dev Goes Down</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Fri, 06 Mar 2026 09:37:44 +0000</pubDate>
      <link>https://dev.to/nydartrading/what-happens-when-a-solo-dev-goes-down-43pp</link>
      <guid>https://dev.to/nydartrading/what-happens-when-a-solo-dev-goes-down-43pp</guid>
      <description>&lt;p&gt;I built Nydar by myself. No co-founder, no engineering team, no DevOps person on call. Just me, a VPS, and a lot of late nights.&lt;/p&gt;

&lt;p&gt;Last week I found out what happens when the "just me" part breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The short version
&lt;/h2&gt;

&lt;p&gt;I got hit with a serious infection. Thought it was COVID at first — it wasn't. Tested negative, got admitted, spent a week on IV antibiotics around the clock. I was too ill to look at my phone, let alone SSH into a server and check logs.&lt;/p&gt;

&lt;p&gt;For a solo operation, that's the nightmare scenario. There's no colleague to ping on Slack. No on-call rotation. No "hey can you keep an eye on things while I'm out." It's just... whatever you built, running on its own, for as long as it takes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually happened
&lt;/h2&gt;

&lt;p&gt;Nothing. In the boring sense.&lt;/p&gt;

&lt;p&gt;The API server stayed up the entire time. The ML models kept generating signals. The data feeds kept pulling from exchanges. The automated trading bot (not public yet, but it's running internally) executed 25 trades across the week, holding 5 positions when I finally checked in. Market hours caching kicked in as normal — the system backed off overnight and on weekends to conserve API quota, then spun back up at market open.&lt;/p&gt;

&lt;p&gt;Nobody emailed me to say the site was down. Because it wasn't.&lt;/p&gt;

&lt;p&gt;I'm not saying this to brag. I'm saying it because it surprised me a bit. When you're deep in the weeds every day — fixing bugs, tweaking indicators, optimising API calls — you don't always step back and notice that the thing you built actually works without you touching it constantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters if you're a user
&lt;/h2&gt;

&lt;p&gt;If you use Nydar for paper trading, your positions, your watchlists, your dashboard layouts, your alerts — none of that depends on me being awake. The backend is a single Python process running behind Apache on a Linux box. It's not clever infrastructure. It's just built properly.&lt;/p&gt;

&lt;p&gt;That's a deliberate choice. I spent time on the boring stuff that doesn't make good screenshots: &lt;a href="https://dev.to/blog/how-we-cut-api-calls-by-85-percent"&gt;API quota management&lt;/a&gt;, &lt;a href="https://dev.to/help/chart"&gt;market hours awareness&lt;/a&gt; so the system doesn't burn through rate limits at 2am, proper error handling that degrades gracefully instead of crashing. The kind of work that only pays off when you're not around to manually intervene.&lt;/p&gt;

&lt;p&gt;Last week was the first real test of all that, and it passed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solo dev tradeoff
&lt;/h2&gt;

&lt;p&gt;There's an honest conversation to have about the risks of a one-person operation. I've worked on institutional systems — I spent years at Cowen on Wall Street, worked on ENNI (a £500M infrastructure programme), and helped deliver IT systems for Ireland's National Children's Hospital (a €2.24B project). I know what "proper" engineering looks like with teams of dozens.&lt;/p&gt;

&lt;p&gt;Nydar isn't that. It's one person making every decision, writing every line of code, handling every deployment. The upside is speed and coherence — I can ship a feature in a day that would take a committee three sprints to scope. The downside is that when I'm on a hospital bed with a drip in my arm, nobody's shipping anything.&lt;/p&gt;

&lt;p&gt;But here's the thing I keep coming back to: most of the trading platforms people use daily started as solo projects or tiny teams. The early versions of most software you rely on were held together by one or two people who cared enough to get the details right. Scale comes later. Resilience comes from caring about the boring stuff.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm not going to do
&lt;/h2&gt;

&lt;p&gt;I'm not going to pretend this was some profound life lesson. I got sick, I got treated, I'm recovering. The platform kept running because I spent time making it keep running. That's engineering, not philosophy.&lt;/p&gt;

&lt;p&gt;I'm also not going to make promises about what's coming next. I'm still recovering and my head's not fully in it yet. When it is, you'll see it in the changelog, not in a blog post full of vague roadmap promises.&lt;/p&gt;

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;If you're evaluating trading tools, ask yourself: what happens when things go wrong? Not "what features does it have" — that's the easy question. What happens when the developer is unavailable, when an API goes down, when it's 3am and nobody's watching?&lt;/p&gt;

&lt;p&gt;For Nydar, the answer last week was: everything kept working. I'll take that over a feature list any day.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you want to see what a resilient trading platform looks like, &lt;a href="https://dev.to/"&gt;start paper trading on Nydar&lt;/a&gt; — no sign-up required.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/what-happens-when-a-solo-dev-goes-down" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Why Paper Trading Should Be the Default, Not an Afterthought</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Mon, 02 Mar 2026 17:47:41 +0000</pubDate>
      <link>https://dev.to/nydartrading/why-paper-trading-should-be-the-default-not-an-afterthought-53ao</link>
      <guid>https://dev.to/nydartrading/why-paper-trading-should-be-the-default-not-an-afterthought-53ao</guid>
      <description>&lt;p&gt;There's something broken about how most trading platforms work.&lt;/p&gt;

&lt;p&gt;You sign up. You land on a dashboard full of charts, numbers, and buttons. You're immediately prompted to connect a brokerage account or deposit funds. Maybe there's a "demo mode" buried somewhere in the settings, but the entire onboarding flow pushes you toward real money as fast as possible.&lt;/p&gt;

&lt;p&gt;This is backwards. And it's not just bad UX — it's genuinely irresponsible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The industry's incentive problem
&lt;/h2&gt;

&lt;p&gt;Let's be honest about why platforms do this. Most trading platforms make money from commissions, payment for order flow, or spread markups. A user paper trading generates zero revenue. A user depositing $5,000 and placing real trades generates revenue from day one. The financial incentive is to get users into real money as quickly as possible, and every design decision flows from that.&lt;/p&gt;

&lt;p&gt;This creates a perverse dynamic where the platform's interests are directly opposed to the user's interests. A new trader depositing money on day one is almost certainly going to lose it. The statistics on this are grim: studies consistently show that &lt;a href="https://www.esma.europa.eu/sites/default/files/library/esma50-164-4537_cfd_final_report.pdf" rel="noopener noreferrer"&gt;70–80% of retail traders lose money&lt;/a&gt;. That number gets worse for traders who skip the learning phase entirely.&lt;/p&gt;

&lt;p&gt;The FCA in the UK now requires CFD platforms to display their client loss percentages. Next time you see "76% of retail investor accounts lose money when trading CFDs with this provider" in tiny print at the bottom of a landing page, ask yourself: does this platform's onboarding do anything to reduce that number, or does it just disclose it and move on?&lt;/p&gt;

&lt;p&gt;Paper trading doesn't fix the statistics. But it gives people a fighting chance to discover whether they have any aptitude for this before they hand over their rent money.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "paper trading" actually means
&lt;/h2&gt;

&lt;p&gt;The term comes from the pre-internet era when aspiring traders would literally write hypothetical trades on paper. "Buy 100 shares of IBM at $42." Then you'd check the newspaper the next day and calculate your P&amp;amp;L by hand. No execution, no risk, no real money. Just pencil, paper, and the stock pages.&lt;/p&gt;

&lt;p&gt;Modern paper trading is the same concept with real infrastructure behind it. When you paper trade on Nydar, you get $100,000 in virtual capital and access to the same &lt;a href="https://dev.to/features"&gt;real-time market data&lt;/a&gt;, the same &lt;a href="https://dev.to/help/chart"&gt;charting tools&lt;/a&gt;, the same &lt;a href="https://dev.to/help/customindicators"&gt;technical indicators&lt;/a&gt;, and the same signal analysis that a live trader sees. The only difference is that your orders execute against a simulation engine rather than a real exchange.&lt;/p&gt;

&lt;p&gt;This distinction matters more than it sounds. A lot of platforms offer "demo mode" that runs on delayed data, has limited features, or expires after 14 days. That's not paper trading — it's a product trial dressed up as education. Real paper trading means you can stay in simulation mode indefinitely, with the full feature set, using live data. There's no countdown timer pushing you toward a credit card form. There's no feature gating that makes the demo feel like a lesser experience. If you can do it with real money, you can do it with paper money.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we built Nydar around paper trading
&lt;/h2&gt;

&lt;p&gt;When we started building Nydar, the question wasn't "should we add paper trading?" It was "why would we let anyone trade real money before they've proven they can trade fake money?"&lt;/p&gt;

&lt;p&gt;I spent years in institutional finance — at Cowen Inc on Wall Street and managing large capital projects in London. One thing that's true across every professional trading desk, every asset manager, and every fund I've seen: nobody puts real money on a new strategy without testing it first. Nobody. Not the quant funds, not the discretionary traders, not the prop desks. Everyone simulates before they commit capital.&lt;/p&gt;

&lt;p&gt;Prop firms have formalised this into evaluation periods. Before you trade the firm's capital, you trade a simulated account for weeks or months. They measure your drawdown, your consistency, your risk management. If you can't manage fake money, you don't get real money. This isn't controversial in professional trading — it's standard practice.&lt;/p&gt;

&lt;p&gt;But retail platforms — the ones serving people with the least experience and the most to lose — push users straight to real money. It's the exact inverse of how the professionals do it. The people who need the most practice get the least.&lt;/p&gt;

&lt;p&gt;So we made paper trading the default. When you sign up for Nydar, you start in paper mode with $100,000 virtual capital. Not because we don't want your money (we'll have premium tiers eventually), but because starting with simulation is the right thing to do. You should know how to read a chart, place an order, set a &lt;a href="https://dev.to/glossary/stop-loss"&gt;stop loss&lt;/a&gt;, and manage a &lt;a href="https://dev.to/help/positions"&gt;position&lt;/a&gt; before a single real dollar is at stake.&lt;/p&gt;

&lt;h2&gt;
  
  
  What our paper trading engine actually does
&lt;/h2&gt;

&lt;p&gt;Since paper trading is core to the product rather than a bolted-on feature, we invested in making it behave like the real thing. A simulation is only as useful as it is realistic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Realistic execution.&lt;/strong&gt; Orders fill at market prices using real-time data from live exchanges. Limit orders sit until the price hits your level — they don't magically fill at whatever price you typed. This matters because understanding &lt;a href="https://dev.to/learn/order-types"&gt;order types&lt;/a&gt; and execution is half the battle for new traders. If you place a limit buy at $42.00 and the stock bounces off $42.05, your order doesn't fill — just like it wouldn't on a real exchange. You learn that limit orders require patience and that getting filled isn't guaranteed.&lt;/p&gt;
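&lt;p&gt;The fill rule for that example reduces to a few lines. This sketch is illustrative only and far simpler than a real matching engine, but it shows why a limit buy at $42.00 never fills if the price bounces off $42.05:&lt;/p&gt;

```python
# Illustrative sketch of limit-buy fill logic against a stream of prices.
# A real execution engine is far more involved; this shows the core rule.

def try_fill_limit_buy(limit_price, ticks):
    """Return the fill price, or None if price never trades at or below the limit."""
    for price in ticks:
        if price <= limit_price:
            return price  # filled at the first price touching the limit
    return None

ticks = [42.30, 42.18, 42.05, 42.07, 42.05]   # bounces off 42.05
print(try_fill_limit_buy(42.00, ticks))       # None: the order stays unfilled
print(try_fill_limit_buy(42.10, ticks))       # 42.05: fills on the third tick
```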

&lt;p&gt;&lt;strong&gt;Portfolio tracking.&lt;/strong&gt; Every paper trade flows into a full portfolio view with &lt;a href="https://dev.to/help/positions"&gt;unrealised P&amp;amp;L&lt;/a&gt;, average entry prices, margin usage, and position-level analytics. This is the same data you'd see with a real brokerage connection. The muscle memory of checking your positions, calculating your risk, and deciding whether to add or cut — that all transfers directly to live trading. We track your realised P&amp;amp;L separately so you can see not just where you stand now, but how all your closed trades performed in aggregate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent state.&lt;/strong&gt; Your paper trades aren't wiped when you close the browser. They persist across sessions, accumulate a track record, and build a history you can review. This is essential for the learning process. After a month of paper trading, you should be able to look back at your trades and identify patterns: "I keep losing on counter-trend entries" or "my winners run longer than my losers." Without persistent history, every session starts from zero and the learning compounds much more slowly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full toolchain access.&lt;/strong&gt; This is where most demo modes fall short. Paper trading on Nydar isn't a stripped-down experience. You have access to the &lt;a href="https://dev.to/help/screener"&gt;stock screener&lt;/a&gt; to find setups, &lt;a href="https://dev.to/help/signals"&gt;technical signals&lt;/a&gt; to confirm them, the &lt;a href="https://dev.to/help/journal"&gt;trading journal&lt;/a&gt; to record your reasoning, &lt;a href="https://dev.to/help/alerts"&gt;alerts&lt;/a&gt; to notify you when prices hit your levels, and the full suite of &lt;a href="https://dev.to/help/customindicators"&gt;105 technical indicators&lt;/a&gt; to build your analysis. The paper account is the live account with a different ledger. Nothing is locked behind a paywall or held back for "real" users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three markets, three different lessons
&lt;/h2&gt;

&lt;p&gt;You can paper trade &lt;a href="https://dev.to/crypto"&gt;crypto&lt;/a&gt;, stocks, and forex from the same Nydar account. Most people start with one asset class, but each market teaches you different things — and paper trading is the safest way to discover those differences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crypto runs 24/7.&lt;/strong&gt; There's no closing bell. No market hours. Your positions are live on Saturday morning, on Christmas Day, at 3 AM. This teaches you something stocks can't: overnight risk is real, and it happens while you're sleeping. New crypto paper traders often wake up to 8% drawdowns they never saw happen in real time. That's a valuable lesson to learn on virtual money. You quickly develop opinions about whether to use stop losses overnight (which can get triggered by low-liquidity wicks) or to size positions small enough that overnight moves don't matter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forex moves on macro events.&lt;/strong&gt; Currency pairs react to interest rate decisions, employment data, GDP releases, and central bank speeches — events that stock traders might not pay attention to. Paper trading forex teaches you to check the &lt;a href="https://dev.to/glossary/economic-calendar"&gt;economic calendar&lt;/a&gt; before you open a position. There's nothing quite like getting stopped out of a EUR/USD trade because you didn't know the ECB was announcing rates at 1:45 PM. Better to learn that at zero cost.&lt;/p&gt;

&lt;p&gt;Forex also introduces you to session overlaps. The London–New York overlap (1 PM–5 PM GMT) is when EUR/USD and GBP/USD have the tightest spreads and highest volume. The Asian session trades differently. Paper trading across sessions teaches you when your strategy works and when it doesn't — without paying for the education.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stocks have gaps.&lt;/strong&gt; The US stock market opens at 9:30 AM and closes at 4 PM Eastern. Between sessions, news happens. Earnings get released. Analyst upgrades come out. When the market reopens, prices can &lt;a href="https://dev.to/glossary/gap"&gt;gap&lt;/a&gt; up or down significantly from the previous close. If you held a position overnight and the stock gaps down 5% on earnings, your stop loss at 2% below your entry was useless — the price blew right past it.&lt;/p&gt;

&lt;p&gt;Paper trading stocks teaches you about gap risk, about the difference between &lt;a href="https://dev.to/learn/pre-after-hours-trading"&gt;pre-market and after-hours trading&lt;/a&gt;, and about why position sizing matters more than stop placement for overnight holds. These are lessons that cost real money if you learn them the hard way.&lt;/p&gt;
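&lt;p&gt;To make the gap arithmetic concrete, here is a minimal sketch in plain Python — all numbers are hypothetical and this is our illustration, not anything from a real platform — of why sizing for a worst-case gap protects you where a stop cannot:&lt;/p&gt;

```python
# Hypothetical numbers throughout: a $100K account risking 1% per trade,
# a $50 stock with a stop at $49 (2% below entry).
account = 100_000
risk_budget = account * 0.01     # $1,000 planned risk per trade
entry = 50.0
stop = 49.0                      # 2% below entry

# Sizing off the stop distance assumes the stop actually fills at $49.
shares = round(risk_budget / (entry - stop))          # 1000 shares

# Overnight, earnings gap the stock to $47.50 (-5%). The stop can't fill
# at $49 -- the exit happens at the open, well past it.
gap_open = 47.50
actual_loss = shares * (entry - gap_open)             # $2,500, not $1,000

# Sizing off a worst-case gap (say 8%) caps the damage by construction.
worst_gap = 0.08
gap_safe_shares = round(risk_budget / (entry * worst_gap))   # 250 shares
worst_case_loss = gap_safe_shares * entry * worst_gap        # ~$1,000

print(shares, actual_loss)                      # 1000 2500.0
print(gap_safe_shares, round(worst_case_loss))  # 250 1000
```

&lt;p&gt;The stop-sized position loses two and a half times its planned risk on the gap; the gap-sized position can't exceed its budget unless the gap exceeds the assumption.&lt;/p&gt;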

&lt;h2&gt;
  
  
  Paper trading vs backtesting
&lt;/h2&gt;

&lt;p&gt;We get asked this a lot: "If I can &lt;a href="https://dev.to/blog/backtesting-mistakes-traders-make"&gt;backtest&lt;/a&gt; a strategy against historical data, why do I need to paper trade it too?"&lt;/p&gt;

&lt;p&gt;They're complementary, not interchangeable.&lt;/p&gt;

&lt;p&gt;Backtesting tells you whether a strategy &lt;em&gt;would have&lt;/em&gt; worked over a historical period. It's powerful for validating an edge — testing whether your RSI oversold bounce strategy actually produces positive expectancy over 500 historical trades. But backtesting has a fundamental limitation: you already know what happened. Even if you're not consciously peeking at future bars, the emotional experience of backtesting is nothing like real-time trading.&lt;/p&gt;

&lt;p&gt;Paper trading is forward-looking. You don't know what the next candle will be. You have to make decisions with incomplete information, manage the discomfort of a position moving against you, and resist the urge to override your plan when things get choppy. The data comes at you in real time, just like it will with real money.&lt;/p&gt;

&lt;p&gt;The ideal workflow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Backtest&lt;/strong&gt; to validate the strategy's historical edge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Paper trade&lt;/strong&gt; to validate that you can execute it in real time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Micro-size live trade&lt;/strong&gt; to validate that you can execute it with real money at stake&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step tests a different thing. Backtesting tests the strategy. Paper trading tests your execution. Live trading tests your psychology. Skipping any of the three leaves a gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  The psychology gap (and why we're honest about it)
&lt;/h2&gt;

&lt;p&gt;Here's where we have to be straight with you: paper trading has a real limitation, and anyone who tells you otherwise is selling something.&lt;/p&gt;

&lt;p&gt;The limitation is psychology. When you paper trade, there's no fear. A 5% drawdown on virtual money doesn't make your stomach drop. A stop loss getting hit doesn't trigger the impulse to move it further away "just this once." A winning streak doesn't make you overconfident and double your position size.&lt;/p&gt;

&lt;p&gt;Real money introduces real emotions, and those emotions are responsible for a huge percentage of trading losses. The strategy that worked beautifully in paper mode falls apart when real money is on the line — not because the strategy changed, but because you changed. You started hesitating on entries, cutting winners early, holding losers too long, and revenge trading after a loss. These are human responses that no amount of simulation can fully prepare you for.&lt;/p&gt;

&lt;p&gt;We acknowledge this directly in our &lt;a href="https://dev.to/learn/paper-trading-guide"&gt;paper trading guide&lt;/a&gt;. It's not something we hide or gloss over. Paper trading teaches you the mechanics and the strategy. It doesn't teach you the emotional discipline. That comes from small real-money exposure, which is a separate stage.&lt;/p&gt;

&lt;p&gt;The right sequence is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Paper trading&lt;/strong&gt; — learn the mechanics, test your strategy, build a track record&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Micro-size real trading&lt;/strong&gt; — 10–25% of your intended position size, feel the emotions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale gradually&lt;/strong&gt; — increase size as you prove you can stay disciplined under pressure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Platforms that skip step 1 are setting users up for an expensive step 2.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "good" paper trading behaviour looks like
&lt;/h2&gt;

&lt;p&gt;After watching how people use Nydar's paper trading over months of development and user feedback, we've noticed patterns that separate the people who eventually trade well from the people who don't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They treat it like real money.&lt;/strong&gt; This sounds obvious, but it's the single biggest differentiator. The users who paper trade with realistic position sizes — risking 1–2% of a $100K account per trade, so $1,000–$2,000 at risk on any single position — learn far more than the users who YOLO their entire balance into a single crypto trade "because it doesn't matter."&lt;/p&gt;

&lt;p&gt;It does matter. Not because of the money, but because of the habits. If you paper trade recklessly, you build reckless habits. Those habits don't magically disappear when real money enters the picture. They get worse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They keep a journal.&lt;/strong&gt; Our &lt;a href="https://dev.to/help/journal"&gt;trading journal&lt;/a&gt; exists specifically for this. The users who write down why they entered a trade, what they expected to happen, and what actually happened — they improve measurably over weeks. The users who just click buy and sell and check their P&amp;amp;L at the end of the day don't. Journaling forces you to articulate your reasoning, and articulating your reasoning is how you find the holes in it. We wrote about this in more detail in our post on &lt;a href="https://dev.to/blog/why-trading-journal-matters"&gt;why trading journals matter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They focus on process, not outcome.&lt;/strong&gt; A paper trade that followed your plan perfectly but lost money is a good trade. A paper trade that broke every rule but happened to profit is a bad trade. This is counter-intuitive and hard to internalise, but it's the foundation of sustainable trading. Paper trading is the low-stakes environment where you can learn to evaluate trades by process rather than outcome — before the emotional weight of real money makes that evaluation almost impossible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They give it time.&lt;/strong&gt; The users who paper trade for a week and then go live are barely better off than the users who never paper traded at all. You need months, not days. You need to experience a market correction, a choppy range, a news-driven spike, and a boring sideways grind. If you've only paper traded during a bull market, you have no idea how your strategy performs when conditions change — and they always change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They don't reset after a blowup.&lt;/strong&gt; This one is controversial, but we feel strongly about it. When users blow up their paper account — and many do — the instinct is to hit reset and start fresh with a clean $100K. We think that's a mistake. The blown-up account is the most valuable data you have. Look at it. Study the trades that got you there. Was it one catastrophic position or a slow bleed from a hundred bad decisions? Did you size too aggressively? Ignore your stops? Revenge trade after a losing day?&lt;/p&gt;

&lt;p&gt;Resetting erases the evidence. Keep the wreckage. Learn from it. Then adapt your strategy and trade out of the hole — or at least try to. The experience of managing a damaged account is something you'll need if you ever trade real money, because drawdowns happen to everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "but I learn faster with real money" argument
&lt;/h2&gt;

&lt;p&gt;We hear this one a lot. "I don't take paper trading seriously because there's no real consequence. I need skin in the game to focus."&lt;/p&gt;

&lt;p&gt;There's a kernel of truth here. Real consequences do sharpen attention. But this argument is usually used to justify skipping the learning phase entirely, and that's where it falls apart.&lt;/p&gt;

&lt;p&gt;You wouldn't learn to fly a plane by skipping the simulator and going straight to a real cockpit. You wouldn't learn surgery by skipping the cadaver lab. The "I learn better with real stakes" crowd isn't wrong about the psychology — they're wrong about the sequencing.&lt;/p&gt;

&lt;p&gt;Learn the mechanics first. Learn which buttons do what, how orders work, what &lt;a href="https://dev.to/learn/support-resistance"&gt;support and resistance&lt;/a&gt; look like on a live chart, how to read &lt;a href="https://dev.to/learn/volume-analysis"&gt;volume&lt;/a&gt;, how &lt;a href="https://dev.to/learn/moving-averages"&gt;moving averages&lt;/a&gt; behave in different market conditions. Do all of that with fake money. Then, when you switch to real money, the only new variable is the emotional component — and that's hard enough to manage on its own without also trying to figure out how limit orders work at the same time.&lt;/p&gt;

&lt;p&gt;There's also a practical argument: mechanical mistakes with real money are expensive and completely avoidable. Buying when you meant to sell. Entering a market order when you meant limit. Setting a stop loss on the wrong side of your entry. Typing the wrong quantity and buying 1,000 shares instead of 100. These are button-pressing errors, not strategy errors, and they happen to everyone at the start. Better to make them on paper.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we got wrong (and fixed)
&lt;/h2&gt;

&lt;p&gt;Building paper trading as a core feature rather than an afterthought meant we made mistakes that pure-demo platforms never encounter.&lt;/p&gt;

&lt;p&gt;Early on, our paper trading engine used slightly delayed prices for order fills — about a 500ms lag between the quoted price and the execution price. We thought this was being realistic about slippage. In practice, it confused users who couldn't understand why their limit order at $42.00 filled at $42.03. We switched to immediate fills at the quoted price for market orders and exact-price fills for limit orders when the price crosses the level. Paper trading should teach you about &lt;a href="https://dev.to/glossary/slippage"&gt;slippage&lt;/a&gt; conceptually, but simulating it with artificial delays just creates frustration without genuine learning.&lt;/p&gt;
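&lt;p&gt;The fill rules we settled on can be sketched in a few lines. This is an illustrative simplification — a single price series and hypothetical function names, not the actual engine:&lt;/p&gt;

```python
# Illustrative sketch of the fill rules described above: market orders fill
# immediately at the quoted price; a buy limit fills at exactly its limit
# once the market trades at or below it. (Names and the single-price model
# are ours, not the real engine's.)
def fill_market(quote):
    return quote  # immediate fill, no artificial 500ms slippage delay

def fill_buy_limit(limit_price, ticks):
    for price in ticks:
        if price <= limit_price:
            return limit_price  # exact-price fill, never "a few cents worse"
    return None  # price never crossed the level: order stays open

print(fill_market(42.03))                            # 42.03
print(fill_buy_limit(42.00, [42.10, 42.05, 41.98]))  # 42.0
print(fill_buy_limit(42.00, [42.10, 42.05]))         # None
```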

&lt;p&gt;We also initially capped the paper trading period at 30 days. The thinking was that this would encourage people to move to live trading. What actually happened was that engaged users who were genuinely learning hit the 30-day wall and left the platform entirely. The users who blew up their paper account in 3 days and switched to real money were the ones who should have spent longer practising. We removed the cap entirely. Paper trade for a year if you want. The learning compounds over time, and artificially cutting it short helps nobody.&lt;/p&gt;

&lt;p&gt;Another mistake was not building the journal integration from day one. For the first few months, paper trades and journal entries were separate — you could record trades and you could write journal notes, but they weren't linked. Users had to manually cross-reference timestamps to match a journal entry to a specific trade. This friction meant almost nobody journaled their paper trades, which meant they weren't getting the feedback loop that makes paper trading genuinely educational. When we linked them, journal usage for paper trades doubled almost immediately. Reducing friction matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;Paper trading isn't just a feature. It's a philosophy about what a trading platform owes its users.&lt;/p&gt;

&lt;p&gt;If you're going to give someone access to markets — with all the financial risk that entails — you have an obligation to also give them a safe place to learn. Not a 14-day trial. Not a crippled demo with delayed data. A real, full-featured simulation environment that they can use for as long as they need, with the same tools and the same data that live traders get.&lt;/p&gt;

&lt;p&gt;The trading industry has a retention problem, and it's self-inflicted. Platforms acquire users, those users lose money quickly because they weren't prepared, and they leave — often with a bitter taste and a smaller bank account. The lifetime value of that user is negative when you account for support costs and the reputational damage of another person telling their friends that "trading is a scam."&lt;/p&gt;

&lt;p&gt;The alternative is to invest in that user's education upfront. Let them paper trade. Let them fail safely. Let them discover whether this is something they want to pursue seriously — and give them the tools to get good at it before they put real money on the line. The users who survive that process and go on to trade profitably are the ones who stick around for years. They're the ones who tell their friends about the platform. They're the ones who upgrade to premium when it exists.&lt;/p&gt;

&lt;p&gt;This is why paper trading is the default on Nydar. It's not a stepping stone to the "real" product. It is the product — at least until you've proven, to yourself, that you're ready for what comes next.&lt;/p&gt;

&lt;p&gt;Most platforms are designed to extract money from traders as quickly as possible. We'd rather build something that makes traders better. If that takes longer to monetise, so be it. The traders who survive the learning curve become long-term users; the ones who blow up on day one are gone forever.&lt;/p&gt;

&lt;p&gt;We'd rather have the first kind.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Ready to start?&lt;/em&gt; Paper trading on Nydar is free with $100,000 virtual capital across crypto, stocks, and forex. No credit card, no time limit. &lt;a href="https://dev.to/"&gt;Get started here&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/why-paper-trading-should-be-the-default" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Why We Built a Pine Script Indicator Engine (And What Traders Are Doing With It)</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Fri, 27 Feb 2026 17:49:39 +0000</pubDate>
      <link>https://dev.to/nydartrading/why-we-built-a-pine-script-indicator-engine-and-what-traders-are-doing-with-it-1nb2</link>
      <guid>https://dev.to/nydartrading/why-we-built-a-pine-script-indicator-engine-and-what-traders-are-doing-with-it-1nb2</guid>
      <description>&lt;p&gt;We ship 105 technical indicators out of the box. &lt;a href="https://dev.to/glossary/rsi"&gt;RSI&lt;/a&gt;, &lt;a href="https://dev.to/glossary/macd"&gt;MACD&lt;/a&gt;, &lt;a href="https://dev.to/glossary/bollinger-bands"&gt;Bollinger Bands&lt;/a&gt;, &lt;a href="https://dev.to/glossary/adx"&gt;ADX&lt;/a&gt;, ATR, Stochastic, Ichimoku — the full catalogue. For most traders, that's more than enough. You could build a solid strategy using three of them and never touch the rest.&lt;/p&gt;

&lt;p&gt;But the traders who stick around — the ones who come back every day, who build real edge, who actually make money — they always end up wanting something we haven't built yet.&lt;/p&gt;

&lt;p&gt;One wanted a triple-confirmation oversold indicator that only fires when RSI, Bollinger Band touch, and volume spike happen simultaneously. Another wanted a custom momentum oscillator that weights recent price action differently during high-volatility regimes. A third wanted &lt;code&gt;di_plus&lt;/code&gt; and &lt;code&gt;di_minus&lt;/code&gt; with custom smoothing periods that don't match any standard ADX implementation.&lt;/p&gt;

&lt;p&gt;A forex trader wanted a session-aware VWAP that resets at the London open rather than midnight UTC. A crypto trader wanted an indicator that tracks the spread between spot price and perpetual futures funding rate — something that doesn't exist in any standard library because it requires combining two different data feeds.&lt;/p&gt;

&lt;p&gt;We kept getting the same request in different forms: "Can I build my own indicator?"&lt;/p&gt;

&lt;p&gt;So we built an engine that lets you do exactly that.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with "just add more indicators"
&lt;/h2&gt;

&lt;p&gt;The obvious solution to "traders want more indicators" is to keep adding them to the platform. And for a while, that's what we did. Every time someone asked for something we didn't have, we'd evaluate whether it was worth adding to the core library.&lt;/p&gt;

&lt;p&gt;This doesn't scale. Here's why.&lt;/p&gt;

&lt;p&gt;Every indicator we add becomes something we have to maintain. It needs documentation. It needs to handle edge cases across crypto, stocks, and &lt;a href="https://dev.to/learn/forex-trading-basics"&gt;forex&lt;/a&gt; — and those asset classes behave differently in ways that affect calculations. A volume-based indicator that works perfectly on stock data breaks on forex pairs where "volume" means tick count rather than actual shares traded. Our 105 built-in indicators represent a significant maintenance surface, and each one had to be tested against all three asset classes before we shipped it.&lt;/p&gt;

&lt;p&gt;More importantly, the requests are infinitely varied. Traders don't just want standard indicators — they want &lt;em&gt;combinations&lt;/em&gt; of indicators with custom logic layered on top. They want &lt;a href="https://dev.to/learn/moving-averages"&gt;moving averages&lt;/a&gt; that switch between SMA and EMA based on volatility. They want &lt;a href="https://dev.to/glossary/rsi"&gt;RSI&lt;/a&gt; that uses a different lookback period during trending versus ranging markets. They want things that don't have names yet because they invented them.&lt;/p&gt;

&lt;p&gt;That last point is worth emphasising. Some of the most effective indicators we've seen aren't published in any textbook. They're the product of a trader spending months watching a specific market, noticing a pattern, and codifying it. The trader who watches the BTC/USDT order book all day and realises that a specific ratio of bid-to-ask volume at certain price levels precedes a move — that's not something we can anticipate. That insight belongs to the trader. They just need a way to express it.&lt;/p&gt;

&lt;p&gt;You can't pre-build infinity. You have to give people a language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Pine Script syntax
&lt;/h2&gt;

&lt;p&gt;When we decided to build a custom indicator engine, the first decision was the scripting language. We had three realistic options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JavaScript.&lt;/strong&gt; We're a web platform, so this was the obvious choice from a technical standpoint. The browser already has a JS runtime. But giving users access to a full JavaScript runtime inside a trading chart is a security nightmare. Even sandboxed, the attack surface is enormous — you'd need to block network access, filesystem access, infinite loops, memory bombs, and a hundred other vectors. And JS is verbose for numerical computation. Nobody wants to write &lt;code&gt;Array.from({length: period}, (_, i) =&amp;gt; close[i]).reduce((a, b) =&amp;gt; a + b, 0) / period&lt;/code&gt; when they mean "average of last N closes."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A custom DSL.&lt;/strong&gt; We could design our own language from scratch, tailored exactly to our needs. Clean, safe, optimised for time-series data. But nobody would know how to write it. The learning curve would kill adoption before it started. Every user would need to learn a new language just to do something they might already know how to express. We'd also need to write all the documentation, tutorials, and examples from scratch — a massive investment with no guarantee anyone would bother reading it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pine Script syntax.&lt;/strong&gt; This is what we chose. Not because we're trying to clone any particular platform, but because Pine Script has become the de facto standard for expressing trading logic. Thousands of tutorials exist online. Every trading forum has Pine Script examples. When a trader googles "RSI divergence indicator code" or "di_plus pine script," the results are overwhelmingly Pine Script.&lt;/p&gt;

&lt;p&gt;By adopting Pine Script syntax, we inherited an entire ecosystem of knowledge. Traders can take code they've written elsewhere, paste it into our &lt;a href="https://dev.to/help/customindicators"&gt;custom indicator editor&lt;/a&gt;, make minor adjustments, and have it running on live data in minutes. They don't need to learn anything new. The syntax they already know just works.&lt;/p&gt;

&lt;p&gt;There's a pragmatic angle too. Pine Script is purpose-built for time-series computation on OHLCV data. The language assumes you're working with price bars. Variables implicitly reference the current bar. Built-in functions like &lt;code&gt;ta.sma()&lt;/code&gt; and &lt;code&gt;ta.rsi()&lt;/code&gt; handle the lookback windowing automatically. This makes trading logic dramatically more concise than equivalent code in a general-purpose language, which means fewer bugs and faster iteration for the trader.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the engine actually does
&lt;/h2&gt;

&lt;p&gt;Let me walk through what happens when you write a custom indicator on Nydar. This isn't a tutorial — we have a comprehensive &lt;a href="https://dev.to/learn/pine-script-guide"&gt;Pine Script guide&lt;/a&gt; and detailed &lt;a href="https://dev.to/help/customindicators"&gt;custom indicators documentation&lt;/a&gt; for that — but understanding the architecture explains why it works the way it does.&lt;/p&gt;

&lt;p&gt;When you write indicator code, it goes through three stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Parsing and validation.&lt;/strong&gt; Your code is parsed into an abstract syntax tree. We check for syntax errors, undefined variables, type mismatches, and potentially dangerous operations. This is where we catch mistakes before they hit your chart. Unlike a general-purpose language, we can validate that your code will produce a plottable result — it must output at least one series of values. We also enforce resource limits at this stage: no unbounded loops, no recursive function calls deeper than a set threshold, no allocations above a memory ceiling. This is how we keep the engine safe without needing a full sandbox.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Compilation to an execution plan.&lt;/strong&gt; The parsed code is compiled into a series of operations that our engine can execute efficiently. This is where Pine Script's design really helps us. Because the language is inherently bar-by-bar — every statement implicitly iterates over the historical price data — we can optimise the execution path in ways that wouldn't be possible with arbitrary code. Common subexpressions get deduplicated. If your indicator computes &lt;code&gt;ta.sma(close, 20)&lt;/code&gt; in three different places, we calculate it once. Built-in function calls get routed to our optimised implementations rather than being interpreted line by line.&lt;/p&gt;
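&lt;p&gt;The deduplication idea looks roughly like this — a cache keyed on the operation and its inputs, so repeated calls are computed once. An illustrative sketch in Python, not the engine's real internals:&lt;/p&gt;

```python
# Illustrative sketch of common-subexpression deduplication: identical
# indicator calls are computed once and served from a cache keyed on the
# operation and its inputs. (Not the engine's real internals.)
class IndicatorCache:
    def __init__(self):
        self.cache = {}
        self.computed = 0  # counts how many real computations happened

    def sma(self, series, period):
        key = ("sma", id(series), period)  # in a real engine: the AST node
        if key not in self.cache:
            self.computed += 1
            self.cache[key] = [
                sum(series[max(0, i - period + 1):i + 1]) / min(i + 1, period)
                for i in range(len(series))
            ]
        return self.cache[key]

closes = [10, 11, 12, 13, 14, 15]
eng = IndicatorCache()
a = eng.sma(closes, 3)   # computed
b = eng.sma(closes, 3)   # same call elsewhere in the script: cache hit
c = eng.sma(closes, 5)   # different period: computed separately
print(eng.computed)      # 2
print(a is b)            # True
```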

&lt;p&gt;&lt;strong&gt;3. Execution against price data.&lt;/strong&gt; The execution plan runs against the current chart's OHLCV data. Results are cached and only recomputed when new bars arrive or parameters change. This is critical for real-time performance — when a new tick comes in, we don't recalculate the entire history. We incrementally update only the current bar and any values that depend on it. On a chart showing a year of 5-minute data, that's the difference between recalculating 100,000+ bars per tick versus updating one.&lt;/p&gt;
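&lt;p&gt;The incremental update in stage 3 amounts to keeping a running window sum and adjusting only the live bar — sketched here for a simple moving average, in illustrative Python rather than the production code:&lt;/p&gt;

```python
# Sketch of incremental recalculation: instead of re-summing the whole
# window on every tick, keep a running sum and adjust only the live bar.
# (Illustrative, not the production engine.)
class IncrementalSMA:
    def __init__(self, period):
        self.period = period
        self.closes = []       # confirmed bars plus the live bar
        self.window_sum = 0.0

    def new_bar(self, close):
        self.closes.append(close)
        self.window_sum += close
        if len(self.closes) > self.period:
            self.window_sum -= self.closes[-self.period - 1]

    def tick(self, close):
        # Only the live bar changes: O(1) work instead of O(period).
        self.window_sum += close - self.closes[-1]
        self.closes[-1] = close

    def value(self):
        return self.window_sum / min(len(self.closes), self.period)

sma = IncrementalSMA(3)
for c in [10, 11, 12]:
    sma.new_bar(c)
print(sma.value())   # 11.0
sma.tick(15)         # a new tick updates the live bar from 12 to 15
print(sma.value())   # 12.0
```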

&lt;p&gt;The result is an indicator that feels native. It updates in real time, it works across all timeframes, and it scales to whatever history depth the chart is showing. A user shouldn't be able to tell the difference between a built-in indicator and a custom one — that was our design goal, and we hit it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real examples from our users
&lt;/h2&gt;

&lt;p&gt;The best part of building a custom indicator engine is seeing what people do with it that you never anticipated. Here are some of the more interesting indicators traders have built:&lt;/p&gt;

&lt;h3&gt;
  
  
  Triple-confirmation oversold signal
&lt;/h3&gt;

&lt;p&gt;This was the original request that pushed us to build the engine. The logic is straightforward — only signal when three independent conditions agree:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//@version=5
indicator("Triple Oversold", overlay=true)
rsiOversold = ta.rsi(close, 14) &amp;lt; 30
bbTouch = close &amp;lt;= ta.sma(close, 20) - 2 * ta.stdev(close, 20)
volSpike = volume &amp;gt; 2 * ta.sma(volume, 20)
signal = rsiOversold and bbTouch and volSpike
plotshape(signal, style=shape.triangleup, location=location.belowbar, color=color.green, size=size.normal)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It fires rarely — maybe once or twice a month on any given stock — but when it does, the setup has a high probability of at least a short-term bounce. The trader who built this told us it was more valuable than any single built-in indicator because it eliminated the manual work of checking three separate panels and trying to eyeball whether all three conditions were true at the same bar. That's the kind of thing humans are bad at and computers are good at.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volatility-adaptive moving average
&lt;/h3&gt;

&lt;p&gt;A forex trader built a &lt;a href="https://dev.to/glossary/moving-average"&gt;moving average&lt;/a&gt; that automatically adjusts its period based on &lt;a href="https://dev.to/glossary/atr"&gt;ATR&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//@version=5
indicator("Adaptive MA", overlay=true)
atrVal = ta.atr(14)
atrMean = ta.sma(atrVal, 100)
ratio = atrVal / atrMean
adaptPeriod = math.round(20 * ratio)
adaptPeriod := math.max(5, math.min(adaptPeriod, 50))
adaptMA = ta.sma(close, adaptPeriod)
plot(adaptMA, color=color.blue, linewidth=2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During low-volatility consolidation, the ratio drops below 1 and the period shortens — making the MA more responsive to breakouts. During high-volatility trends, the ratio rises and the period extends — filtering out noise and keeping you in the trade. The result is a single line that adapts to &lt;a href="https://dev.to/learn/market-sentiment"&gt;market conditions&lt;/a&gt; instead of requiring the trader to manually switch between fast and slow MAs depending on what the market is doing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom ADX with &lt;code&gt;di_plus&lt;/code&gt; / &lt;code&gt;di_minus&lt;/code&gt; smoothing
&lt;/h3&gt;

&lt;p&gt;This is the one that shows up in our search console data. A lot of traders want the directional movement components (&lt;code&gt;di_plus&lt;/code&gt;, &lt;code&gt;di_minus&lt;/code&gt;) from the &lt;a href="https://dev.to/glossary/adx"&gt;ADX&lt;/a&gt; calculation, but with non-standard smoothing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//@version=5
indicator("Custom DI", overlay=false)
period = input.int(7, "DI Period")
smooth = input.int(5, "Smoothing")
upMove = high - high[1]
downMove = low[1] - low
plusDM = upMove &amp;gt; downMove and upMove &amp;gt; 0 ? upMove : 0
minusDM = downMove &amp;gt; upMove and downMove &amp;gt; 0 ? downMove : 0
atrVal = ta.ema(ta.tr, period)
diPlus = 100 * ta.ema(plusDM, period) / atrVal
diMinus = 100 * ta.ema(minusDM, period) / atrVal
plot(ta.ema(diPlus, smooth), "DI+", color=color.green)
plot(ta.ema(diMinus, smooth), "DI-", color=color.red)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default 14-period Wilder smoothing in the standard ADX is tuned for daily charts. If you're trading 5-minute bars, that 14-period lookback covers just over an hour — not enough context for some strategies, too much for others. This version lets you set both the DI period and an additional smoothing layer independently. Several of our users run this on &lt;a href="https://dev.to/learn/scalping"&gt;scalping&lt;/a&gt; setups where the standard ADX is too laggy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-timeframe divergence scanner
&lt;/h3&gt;

&lt;p&gt;One of the more ambitious builds — a user created an indicator that checks for &lt;a href="https://dev.to/glossary/rsi"&gt;RSI&lt;/a&gt; divergence across three timeframes simultaneously. Price making higher highs while RSI makes lower highs on the 15-minute, 1-hour, and 4-hour charts at the same time. When all three align, it flags a high-probability &lt;a href="https://dev.to/learn/reversal-trading"&gt;reversal&lt;/a&gt; setup.&lt;/p&gt;

&lt;p&gt;This required accessing multiple timeframe data within a single indicator, which was one of the features we had to add to the engine after launch. The &lt;code&gt;request.security()&lt;/code&gt; function lets you pull data from a different timeframe than the one the chart is displaying. It's one of those features where the first user who asked for it made us realise we'd shipped the engine incomplete — of course you need multi-timeframe access. Trading decisions are almost never made on a single timeframe.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pitfalls we help you avoid
&lt;/h2&gt;

&lt;p&gt;Building an indicator engine also means watching people make mistakes, and then building guardrails. The three most common issues in custom indicator development are well-documented in &lt;a href="https://dev.to/learn/technical-analysis-basics"&gt;technical analysis&lt;/a&gt; literature, but they're easy to fall into when you're writing code:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repainting.&lt;/strong&gt; An indicator "repaints" when it changes its historical values as new data arrives. This is the most dangerous pitfall because it makes &lt;a href="https://dev.to/learn/backtesting-strategies"&gt;backtesting&lt;/a&gt; results look better than they are. If your indicator uses &lt;code&gt;close&lt;/code&gt; on the current bar and the bar hasn't finished yet, every tick changes that value — and when you look at the historical chart later, you only see the final values. It looks like the indicator was right all along, but in real time it was flickering between states. Our engine flags common repainting patterns during the validation stage and warns you before the indicator runs.&lt;/p&gt;
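&lt;p&gt;You can see the problem in a few lines: a condition evaluated on the live bar's close flickers tick by tick, while the same condition on the confirmed previous bar is stable. Hypothetical numbers, for illustration only:&lt;/p&gt;

```python
# Tiny demonstration of repainting: a "close < 100" signal evaluated on the
# live, unconfirmed bar flips from tick to tick, while the same condition
# on the previous confirmed bar never changes. (Made-up numbers.)
confirmed_close = 101.0                   # last completed bar
live_ticks = [99.5, 100.4, 99.8, 100.2]   # the current bar, still forming

live_signals = [tick < 100 for tick in live_ticks]
stable_signal = confirmed_close < 100

print(live_signals)   # [True, False, True, False] -- flickers in real time
print(stable_signal)  # False -- fixed once the bar is confirmed
# On a historical chart you only ever see the bar's final value (100.2),
# so the flickering is invisible -- which is what makes it dangerous.
```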

&lt;p&gt;&lt;strong&gt;Overfitting.&lt;/strong&gt; This is the custom indicator equivalent of curve-fitting a backtest. You tune 6 parameters until your indicator perfectly identifies every reversal in the last year of data, and then it fails completely on new data because you've modelled the noise rather than the signal. We don't have a magic solution for this — it's fundamentally a statistical problem — but we do show parameter sensitivity warnings. If changing a parameter by 1 dramatically changes the number of signals, that's a red flag.&lt;/p&gt;
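&lt;p&gt;The sensitivity check amounts to sweeping a parameter one step at a time and counting how many signals each setting produces. A toy version with made-up RSI values:&lt;/p&gt;

```python
# Toy parameter-sensitivity check: sweep an RSI threshold one unit at a
# time and count the signals each setting produces. A large jump between
# adjacent settings suggests the parameter is fit to noise.
# (Threshold range and data are made up for illustration.)
def count_signals(values, threshold):
    return sum(1 for v in values if v < threshold)

rsi_values = [28, 29, 29, 31, 35, 29, 28, 42, 29, 30]

for threshold in (29, 30, 31):
    print(threshold, count_signals(rsi_values, threshold))
# 29 -> 2 signals, 30 -> 6, 31 -> 7: the step from 29 to 30 triples the
# signal count, so a strategy tuned to exactly 30 deserves suspicion.
```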

&lt;p&gt;&lt;strong&gt;Lookahead bias.&lt;/strong&gt; Using future data in calculations, usually accidentally. The classic example is referencing &lt;code&gt;close&lt;/code&gt; in a condition that's supposed to fire at the open — the close price isn't known at the open, so the indicator "works" on historical data but can't work in real time. Pine Script's bar-by-bar execution model helps prevent this by default, since each statement only has access to the current and previous bars. But it's still possible to introduce lookahead through creative use of &lt;code&gt;request.security()&lt;/code&gt; with a lower timeframe, and we check for those patterns.&lt;/p&gt;

&lt;p&gt;We built detection for these three issues because they represent the gap between "my indicator looks brilliant on a historical chart" and "my indicator actually works for live trading." Bridging that gap is the difference between a toy and a tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we learned building it
&lt;/h2&gt;

&lt;p&gt;Building the engine taught us some things that surprised us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Most users start by copying, not creating.&lt;/strong&gt; We expected people to write indicators from scratch. In reality, about 80% of custom indicators on the platform started as code copied from a forum, blog, or tutorial. Users paste it in, see if it works, then start tweaking parameters. This is why Pine Script compatibility was so important — it dramatically lowered the friction from "I found some code online" to "it's running on my chart." The learning journey isn't blank-page-to-finished-indicator. It's copy, tweak, understand, eventually create.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error messages matter more than features.&lt;/strong&gt; The single biggest factor in user retention with custom indicators wasn't how powerful the engine was. It was how helpful the error messages were. When we launched, errors were technical — "unexpected token at line 14." Users bounced. When we rewrote them to be contextual — "Line 14: &lt;code&gt;ta.sma&lt;/code&gt; expects a number for the period parameter, but got a string. Did you mean to use &lt;code&gt;input.int(14)&lt;/code&gt; instead?" — completion rates doubled. We now treat error message quality as a first-class feature, not an afterthought.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance is a feature.&lt;/strong&gt; Early versions of the engine recalculated everything on every tick. On a 5-minute chart with a year of data, that's over 100,000 bars being processed multiple times per second. The UI would freeze, the chart would stutter, and users would assume the indicator was broken rather than slow. We had to build the incremental update system described above, and it made the difference between "cool tech demo" and "tool I actually use for trading." Nobody cares how clever your architecture is if the chart lags.&lt;/p&gt;
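&lt;p&gt;The incremental-update idea can be sketched for a simple moving average: keep a running sum so each new tick is O(1) work instead of a rescan of the full history. Class and method names here are illustrative:&lt;/p&gt;

```python
from collections import deque

class RollingSMA:
    """Incrementally updated SMA: each tick adjusts a running sum."""

    def __init__(self, period):
        self.period = period
        self.window = deque()
        self.total = 0.0

    def update(self, value):
        self.window.append(value)
        self.total += value
        if len(self.window) > self.period:
            self.total -= self.window.popleft()  # evict the oldest bar
        if len(self.window) == self.period:
            return self.total / self.period
        return None  # not enough bars yet
```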

&lt;p&gt;&lt;strong&gt;Saving and sharing drives adoption.&lt;/strong&gt; When we added the ability to save indicators and load them across different charts, usage tripled. People build an indicator once and then use it everywhere — on every asset, every timeframe, every layout. The indicator becomes part of their workflow rather than a one-off experiment. We're considering adding community sharing next, though we want to get the security review right before we let users run each other's code. The risk of a cleverly crafted indicator that looks helpful but actually misleads is real, and we'd rather launch it right than launch it fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 105 built-in indicators got better too.&lt;/strong&gt; An unexpected benefit: building the custom engine forced us to re-examine our built-in indicator implementations. When users can write their own RSI and compare it to ours, any discrepancy gets noticed immediately. We found and fixed three edge-case bugs in our built-in indicators that had been there since launch — all discovered because a user's custom version produced slightly different values and the user asked us why. Competition from your own users turns out to be excellent quality assurance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this is going
&lt;/h2&gt;

&lt;p&gt;Custom indicators are one piece of a larger vision. If you can express your trading logic as code, you should be able to do more than just visualise it on a chart.&lt;/p&gt;

&lt;p&gt;We're working on connecting custom indicators to our &lt;a href="https://dev.to/learn/backtesting-strategies"&gt;backtesting engine&lt;/a&gt;. Write your indicator, define entry and exit rules, and test it against historical data — all without leaving the platform. The indicator engine already computes values across full history, so the data pipeline is there. We just need to build the strategy wrapper and the results dashboard. The goal is to go from "I have an idea" to "here's how it would have performed over the last 2 years" in under a minute.&lt;/p&gt;

&lt;p&gt;Beyond backtesting, we want custom indicators to feed into our &lt;a href="https://dev.to/help/customindicators"&gt;alert system&lt;/a&gt;. "Notify me when my triple-confirmation oversold signal fires on any stock in my &lt;a href="https://dev.to/learn/building-watchlist"&gt;watchlist&lt;/a&gt;." That's the point where custom indicators become truly operational — not just a visual tool on a chart you're staring at, but an automated scanner that watches the market for you while you do other things. Combined with our existing Telegram notification pipeline, your custom indicator could wake you up at 3am when that crypto setup you've been waiting for finally appears.&lt;/p&gt;

&lt;p&gt;The endgame is a platform where your entire trading workflow — from idea to indicator to backtest to live alerts to execution — lives in one place. We're not there yet. But the custom indicator engine is the foundation that everything else builds on, because it's where your trading logic gets expressed in a form that a computer can work with.&lt;/p&gt;

&lt;p&gt;We built Nydar because we believe traders deserve professional-grade tools without the professional-grade price tag. The custom indicator engine is the purest expression of that philosophy. Instead of deciding what indicators you need, we gave you the building blocks to create exactly what you need.&lt;/p&gt;

&lt;p&gt;If you've been sitting on an indicator idea, &lt;a href="https://dev.to/help/customindicators"&gt;give it a try&lt;/a&gt;. And if you get stuck, the &lt;a href="https://dev.to/learn/pine-script-guide"&gt;Pine Script guide&lt;/a&gt; and &lt;a href="https://dev.to/glossary/pine-script"&gt;glossary&lt;/a&gt; are there to help you get unstuck.&lt;/p&gt;

&lt;p&gt;Your edge is your own. We just built the engine to run it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/why-we-built-a-pine-script-indicator-engine" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How We Cut Our API Calls by 85% Without Losing a Single Data Point</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Fri, 27 Feb 2026 00:54:44 +0000</pubDate>
      <link>https://dev.to/nydartrading/how-we-cut-our-api-calls-by-85-without-losing-a-single-data-point-4e4j</link>
      <guid>https://dev.to/nydartrading/how-we-cut-our-api-calls-by-85-without-losing-a-single-data-point-4e4j</guid>
      <description>&lt;p&gt;Last week we noticed something alarming in our API usage logs. Nydar was making nearly 15,000 API calls per day to our primary market data provider — against a daily budget of 2,000. We were overshooting by 7.5x. Not by a little. Not by double. By seven and a half times our allocation.&lt;/p&gt;

&lt;p&gt;The data was still flowing because the provider throttles gradually rather than hard-blocking, but we were living on borrowed time. One policy change on their end and our entire stock data pipeline goes dark. No quotes, no heatmaps, no analyst ratings, no institutional holdings. Everything that makes Nydar useful for stock traders — gone.&lt;/p&gt;

&lt;p&gt;We had to fix it. And we had to fix it without users noticing anything had changed.&lt;/p&gt;

&lt;p&gt;This is the story of how we diagnosed the problem, fixed it in four phases over 48 hours, and ended up with a genuinely better product than we started with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not just buy more quota?
&lt;/h2&gt;

&lt;p&gt;Before we get into the technical work, let's address the obvious question. Our data provider offers paid tiers with higher limits. Why not just upgrade?&lt;/p&gt;

&lt;p&gt;Two reasons. First, the economics don't add up at our stage. We're a growing platform, not a hedge fund. Moving from the free tier to a plan that would cover 15,000 calls/day would cost more per month than our entire server infrastructure. That's not a good trade when most of those calls are wasted.&lt;/p&gt;

&lt;p&gt;Second — and more importantly — buying more quota doesn't fix the underlying problem. If our architecture is making 7.5x more calls than necessary, throwing money at the rate limit just means we're paying 7.5x more than we should. We'd rather fix the root cause and keep that budget for when we actually need it — when the user base grows and the real demand exceeds the free tier.&lt;/p&gt;

&lt;p&gt;Optimisation first. Scaling second.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Nydar's data pipeline works
&lt;/h2&gt;

&lt;p&gt;To understand where the waste was happening, you need to understand how data flows through the system.&lt;/p&gt;

&lt;p&gt;Nydar supports three asset classes: stocks, crypto, and forex. Each has different data providers, different update frequencies, and different trading hours. The architecture looks roughly like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;External APIs&lt;/strong&gt; — providers like Finnhub (stocks, forex), Binance (crypto), TwelveData (supplementary quotes), and others&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend data sources&lt;/strong&gt; — Python classes that wrap each provider, handle authentication, and implement caching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In-memory cache layer&lt;/strong&gt; — a simple dictionary with TTL-based expiration. No Redis, no external cache. Just a dict with timestamps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REST endpoints&lt;/strong&gt; — FastAPI routes that widgets call to get data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket layer&lt;/strong&gt; — a persistent connection that pushes real-time updates to the frontend at configurable intervals&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend widgets&lt;/strong&gt; — 40+ React components that render the data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every API call happens at layer 2. The cache at layer 3 is supposed to prevent redundant calls. The WebSocket at layer 5 determines polling frequency. The waste was happening because layers 2, 3, and 5 weren't coordinating properly.&lt;/p&gt;
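&lt;p&gt;The layer-3 cache described above can be sketched in a few lines: a plain dict with per-key expiry timestamps, no Redis. Names are illustrative:&lt;/p&gt;

```python
import time

class TTLCache:
    """Minimal in-memory cache: dict entries paired with expiry timestamps."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict on read
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)
```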

&lt;h2&gt;
  
  
  Discovering the problem
&lt;/h2&gt;

&lt;p&gt;It started at 11 PM on a Tuesday. A routine check of our data provider's dashboard showed we'd used 14,898 calls against our 2,000-call daily allocation. That's not a typo. Fourteen thousand eight hundred and ninety-eight.&lt;/p&gt;

&lt;p&gt;The natural reaction is to assume something is broken — a runaway loop, a misconfigured retry, maybe a bot hammering the API. But our logs didn't show anything obviously wrong. The system was behaving exactly as designed. The problem wasn't a bug. It was architecture.&lt;/p&gt;

&lt;p&gt;Nydar aggregates real-time data from multiple providers — live quotes, order books, analyst ratings, institutional filings, options chains, earnings calendars, and more. Each of these features makes API calls. Individually, each one is reasonable. A quote here, a filing there. Collectively, they were drowning us.&lt;/p&gt;

&lt;h2&gt;
  
  
  The audit that changed everything
&lt;/h2&gt;

&lt;p&gt;The first step was figuring out &lt;em&gt;where&lt;/em&gt; all these calls were actually going. We had basic daily counters, but that's like knowing your electricity bill is high without knowing which appliance is the problem. We needed per-endpoint, per-symbol granularity.&lt;/p&gt;

&lt;p&gt;We rebuilt our API usage tracker with a detailed breakdown structure: which API, which date, which endpoint, which symbol, how many calls. We let it run for a full 24-hour cycle and looked at the results the next morning.&lt;/p&gt;

&lt;p&gt;They were eye-opening:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Source&lt;/th&gt;
&lt;th&gt;Calls/hour&lt;/th&gt;
&lt;th&gt;The problem&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Forex rate lookups&lt;/td&gt;
&lt;td&gt;~991&lt;/td&gt;
&lt;td&gt;Each of 15 currency pairs hit the bulk rates endpoint individually — but that endpoint returns &lt;em&gt;every&lt;/em&gt; rate in one response&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Heatmap quotes&lt;/td&gt;
&lt;td&gt;~3,843&lt;/td&gt;
&lt;td&gt;25 individual stock quotes per refresh, fired sequentially, with only a 60-second cache&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WebSocket polling&lt;/td&gt;
&lt;td&gt;~480&lt;/td&gt;
&lt;td&gt;Polling every 30 seconds for stock data even when the US market was closed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Public snapshot&lt;/td&gt;
&lt;td&gt;~240&lt;/td&gt;
&lt;td&gt;Fetching 4 stock quotes every 120 seconds with no market-hours check&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TwelveData duplication&lt;/td&gt;
&lt;td&gt;uncounted&lt;/td&gt;
&lt;td&gt;A &lt;code&gt;get_quote()&lt;/code&gt; method that bypassed the cache entirely and duplicated what &lt;code&gt;get_ticker()&lt;/code&gt; already does&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Four categories of waste: redundant calls, unnecessary calls, excessive calls, and duplicate calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The forex revelation
&lt;/h2&gt;

&lt;p&gt;The forex issue deserves its own section because it was the single most absurd finding.&lt;/p&gt;

&lt;p&gt;Our data provider has a &lt;code&gt;/forex/rates&lt;/code&gt; endpoint. You pass it a base currency — say EUR — and it returns exchange rates for &lt;em&gt;every&lt;/em&gt; quote currency in a single response. EUR/USD, EUR/GBP, EUR/JPY, EUR/AUD, all of them. One call, all the data.&lt;/p&gt;

&lt;p&gt;We were calling it once per pair.&lt;/p&gt;

&lt;p&gt;So fetching EUR/USD, EUR/GBP, and EUR/JPY meant three separate API calls to the same endpoint, each returning the exact same blob of data, and we'd extract one rate from each response and throw away the rest. Multiply that by 15 active currency pairs refreshing every 60 seconds, and you get nearly a thousand wasted calls per hour.&lt;/p&gt;

&lt;p&gt;The fix was embarrassingly simple: call &lt;code&gt;/forex/rates?base=EUR&lt;/code&gt; once, cache the full response for 60 seconds, and have all EUR-based pair lookups read from that cache. One call instead of fifteen. A 93% reduction in forex API usage from a single architectural insight.&lt;/p&gt;
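&lt;p&gt;As a sketch (with &lt;code&gt;fetch_bulk_rates&lt;/code&gt; standing in for the real provider call to the bulk rates endpoint), the pattern looks like this:&lt;/p&gt;

```python
import time

# Bulk-rates fix sketch: fetch once per base currency, serve every pair
# from the cached response. fetch_bulk_rates(base) is assumed to return
# every quote currency in one blob, e.g. {"USD": 1.08, "GBP": 0.85, ...}.
_rates_cache = {}  # base currency -> (rates_dict, fetched_at)

def get_rate(base, quote, fetch_bulk_rates, ttl=60.0):
    cached = _rates_cache.get(base)
    now = time.monotonic()
    if cached is None or now - cached[1] > ttl:
        _rates_cache[base] = (fetch_bulk_rates(base), now)  # the ONE call
    return _rates_cache[base][0][quote]  # all pairs read the same blob
```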

&lt;p&gt;This is why auditing matters. We would never have guessed this was the biggest offender. The forex widget looked simple and lightweight from the outside. But underneath, it was our most expensive feature per data point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Stop the bleeding
&lt;/h2&gt;

&lt;p&gt;With the audit data in hand, we tackled the four biggest offenders in order of impact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forex bulk caching&lt;/strong&gt; was the first and easiest win. We already described it above — one call per base currency instead of one per pair. Forex API usage dropped from ~990 calls/hour to ~60 calls/hour. We could have stopped here and still cut our total daily calls by nearly a third.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Market hours gating&lt;/strong&gt; was the second-biggest win and required more thought. We built an &lt;code&gt;is_stock_market_open()&lt;/code&gt; utility that determines whether the US stock market is in regular trading hours — 9:30 AM to 4:00 PM Eastern Time, Monday through Friday, accounting for DST. When the market is closed, stock-related endpoints return stale cached data with a &lt;code&gt;"market_closed": true&lt;/code&gt; flag. No API call at all.&lt;/p&gt;

&lt;p&gt;This eliminated roughly 4,000 calls per day that were happening between market close and market open. Think about that: for 17.5 hours out of every 24, we were making API calls for data that literally cannot change (regular-hours stock prices don't move when the exchange is closed). Pure waste.&lt;/p&gt;
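&lt;p&gt;A minimal sketch of the regular-hours gate, assuming &lt;code&gt;zoneinfo&lt;/code&gt; for DST handling and omitting the exchange holiday calendar a production version would also need:&lt;/p&gt;

```python
from datetime import datetime, time as dtime
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")

def is_stock_market_open(now=None):
    """True during regular US equity hours: 9:30-16:00 ET, Mon-Fri.

    zoneinfo handles DST; holidays are deliberately omitted here.
    """
    et = (now or datetime.now(ET)).astimezone(ET)
    if et.weekday() >= 5:  # Saturday or Sunday
        return False
    t = et.time()
    # open iff 9:30 or later, and strictly before 16:00
    return t >= dtime(9, 30) and not t >= dtime(16, 0)
```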

&lt;p&gt;But this one almost killed us.&lt;/p&gt;

&lt;h3&gt;
  
  
  The midnight UTC war story
&lt;/h3&gt;

&lt;p&gt;Our first implementation used &lt;code&gt;is_stock_market_active()&lt;/code&gt; instead of &lt;code&gt;is_stock_market_open()&lt;/code&gt;. The "active" version includes pre-market (4 AM–9:30 AM ET) and after-hours (4 PM–8 PM ET) sessions. It seemed like the responsible choice — broader coverage means fresher data, right?&lt;/p&gt;

&lt;p&gt;The problem is timezone arithmetic, and it's the kind of bug that only manifests at specific times of day.&lt;/p&gt;

&lt;p&gt;Our daily API counter resets at midnight UTC. Midnight UTC is 7 PM Eastern Time — right in the middle of the after-hours session. So here's what happened every single night:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;11:59 PM UTC: Counter at 1,987 of 2,000. System is throttled. Everything is fine.&lt;/li&gt;
&lt;li&gt;12:00 AM UTC (7 PM ET): Counter resets to zero. &lt;code&gt;is_stock_market_active()&lt;/code&gt; returns true because after-hours runs until 8 PM ET.&lt;/li&gt;
&lt;li&gt;12:00:01 AM UTC: The system thinks the market is active and it has a fresh budget. Full-speed polling begins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We had a widget called AssetBands that shows support and resistance levels for a watchlist of stocks. It polls 50 symbols every 30 seconds via the bulk tickers endpoint. At full speed, with a fresh daily counter, it burned through 2,000 API calls in under 30 minutes. By 12:30 AM UTC — 7:30 PM Eastern, still in after-hours — our entire daily quota was gone.&lt;/p&gt;

&lt;p&gt;The first morning this happened, we thought our Phase 1 fixes hadn't worked. The usage dashboard showed 2,000+ calls, same as before. It took an hour of staring at the per-hour breakdown to spot it: a massive spike at exactly midnight UTC, then nothing. The observability tooling we'd built in Phase 3 literally paid for itself on day one.&lt;/p&gt;

&lt;p&gt;The fix was a one-word change in one file: &lt;code&gt;is_stock_market_active()&lt;/code&gt; to &lt;code&gt;is_stock_market_open()&lt;/code&gt;. Regular hours only. Pre-market and after-hours data is genuinely useful, but not at the cost of your entire daily budget. If we ever need extended-hours data, we'll budget for it as a separate, explicit cost — not as a side effect of a boolean function that happens to return true at the wrong time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heatmap optimisation&lt;/strong&gt; addressed both speed and volume. The heatmap widget fetches quotes for 25 stocks to build a sector performance grid. Two changes: we switched from sequential fetches to &lt;code&gt;asyncio.gather()&lt;/code&gt; for parallel execution, and bumped the cache TTL from 60 seconds to 180 seconds.&lt;/p&gt;

&lt;p&gt;The parallel fetch doesn't reduce API call count — it's still 25 calls — but it cuts the response time from around 5 seconds to under 1 second. Users were seeing a blank heatmap for five seconds on every refresh. Now it snaps in. The cache bump from 60s to 180s reduces call volume by two-thirds. For a heatmap showing sector performance, three-minute-old data is perfectly fine.&lt;/p&gt;
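&lt;p&gt;The parallel-fetch change is essentially a one-liner with &lt;code&gt;asyncio.gather&lt;/code&gt;. In this sketch, &lt;code&gt;get_quote&lt;/code&gt; stands in for the real per-symbol data-source method:&lt;/p&gt;

```python
import asyncio

async def fetch_heatmap(symbols, get_quote):
    # Still one API call per symbol, but all requests are in flight at
    # once, so wall-clock time collapses to roughly the slowest request.
    quotes = await asyncio.gather(*(get_quote(s) for s in symbols))
    return dict(zip(symbols, quotes))
```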

&lt;p&gt;&lt;strong&gt;TwelveData deduplication&lt;/strong&gt; was the last Phase 1 fix. Our TwelveData integration had two methods that hit the same underlying API: &lt;code&gt;get_ticker()&lt;/code&gt; (cached) and &lt;code&gt;get_quote()&lt;/code&gt; (not cached). The quote method existed for historical reasons and was bypassing the cache on every call. We routed it through the cached &lt;code&gt;get_ticker()&lt;/code&gt; path. Usage dropped by roughly 50%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebSocket back-off.&lt;/strong&gt; This one isn't in the original audit table because it overlaps with market-hours gating, but it's worth calling out. Our WebSocket layer pushes real-time updates to connected clients. During market hours, it polls every 30 seconds. But we had no concept of "the market is closed, slow down." After adding market-hours awareness, the WebSocket backs off to a 5-minute interval outside regular hours. Still polling — in case of corporate events or overnight moves — but at 1/10th the frequency.&lt;/p&gt;

&lt;p&gt;We also added an &lt;strong&gt;extended OHLCV cache&lt;/strong&gt; for candlestick data. During market hours, OHLCV data has a standard cache TTL. When the market is closed, we bump it to 4 hours. Candlestick data from 3 PM isn't going to change at 11 PM. A 4-hour cache during off-hours means the data is there when someone opens the app late at night, but we're not refreshing it every few minutes for no reason.&lt;/p&gt;

&lt;p&gt;Combined, these changes reduced our total API calls by an estimated 60–70%. But we weren't done. Reducing calls is only half the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 2: What happens when you hit the wall anyway
&lt;/h2&gt;

&lt;p&gt;Even with a 70% reduction, there will be days when traffic spikes, when a user has 40 widgets open, when something unexpected happens. You will hit your quota ceiling eventually. The question is: what does the user see when you do?&lt;/p&gt;

&lt;p&gt;Before our work, the answer was "broken widgets." API calls would fail with HTTP errors, the frontend would show generic error states or infinite spinners, and the user would assume the platform was down. Terrible.&lt;/p&gt;

&lt;p&gt;We built a three-layer quota exhaustion system that transforms this failure mode into something that actually makes sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1: The backend quota guard.&lt;/strong&gt; Before every single API call to a rate-limited provider, we check a daily counter. If the quota is exhausted, we raise a &lt;code&gt;QuotaExhaustedError&lt;/code&gt; immediately — no network call, no wasted time, no ambiguity. This check happens in a single method called &lt;code&gt;_get_params()&lt;/code&gt; that all 17+ endpoints flow through. One chokepoint, complete coverage.&lt;/p&gt;

&lt;p&gt;The key insight here is that quota exhaustion isn't an error. It's a state. The system should handle it as gracefully as it handles "market closed" or "no data available." Raising a typed exception (rather than returning &lt;code&gt;None&lt;/code&gt; or an empty response) means every layer of the stack can handle it explicitly.&lt;/p&gt;
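&lt;p&gt;A minimal sketch of the chokepoint pattern (class and attribute names are illustrative):&lt;/p&gt;

```python
class QuotaExhaustedError(Exception):
    """Quota spent: a state, not a failure. Raised before any network I/O."""

class StockDataSource:
    def __init__(self, daily_limit=2000):
        self.daily_limit = daily_limit
        self.calls_today = 0

    def _get_params(self, **params):
        # Every endpoint method builds its request through this one method,
        # so the check cannot be forgotten when a new endpoint is added.
        if self.calls_today >= self.daily_limit:
            raise QuotaExhaustedError("daily API quota exhausted")
        self.calls_today += 1
        return params
```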

&lt;p&gt;&lt;strong&gt;Layer 2: The global exception handler.&lt;/strong&gt; A FastAPI exception handler catches &lt;code&gt;QuotaExhaustedError&lt;/code&gt; from any route — not just the ones we've thought of — and returns a structured 429 response: &lt;code&gt;{"code": "QUOTA_EXHAUSTED", "message": "..."}&lt;/code&gt;. This means we never need to remember to wrap a new endpoint in quota-handling logic. The safety net is global.&lt;/p&gt;

&lt;p&gt;We also had to deal with a provider-specific quirk: our data provider returns HTTP 403 (Forbidden) when you're rate-limited, not the standard 429 (Too Many Requests). httpx's &lt;code&gt;response.raise_for_status()&lt;/code&gt; turns a 403 into a generic &lt;code&gt;HTTPStatusError&lt;/code&gt;, which our existing error handling was logging as a server error. We built a &lt;code&gt;_check_response()&lt;/code&gt; static method that intercepts 403 and 429 responses before anything else touches them and converts them to &lt;code&gt;QuotaExhaustedError&lt;/code&gt;. Without this, quota events would masquerade as server errors throughout the codebase.&lt;/p&gt;
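&lt;p&gt;The normalisation itself is tiny. A sketch, simplified to take just a status code:&lt;/p&gt;

```python
class QuotaExhaustedError(Exception):
    pass

def check_response(status_code):
    # Runs before raise_for_status(), so a rate-limit response never
    # masquerades as a generic HTTP error further up the stack. This
    # provider signals rate limiting with 403 as well as the usual 429.
    if status_code in (403, 429):
        raise QuotaExhaustedError(f"provider signalled rate limit ({status_code})")
```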

&lt;p&gt;&lt;strong&gt;Layer 3: Frontend intelligence.&lt;/strong&gt; An Axios interceptor catches 429 responses and rewrites the error message to "Daily data limit reached — resets at midnight UTC." Clean, human-readable, no jargon.&lt;/p&gt;

&lt;p&gt;But we went further. Each widget now has a smart error state component called &lt;code&gt;WidgetErrorState&lt;/code&gt; that checks context before deciding what to show. If the stock market is closed and there's a quota error, the user sees "Market Closed" with the last known values — not "quota exhausted." Because from the user's perspective, there's nothing wrong. The market is closed. That's why the numbers aren't moving. Showing them a quota error would be technically accurate but experientially misleading.&lt;/p&gt;

&lt;p&gt;This distinction — between what's technically true and what's useful for the user — turned out to be one of the most important decisions in the entire project. Ten widgets now use this smart error dispatcher: Chart, VolumeProfile, MarketBreadth, IPO, ShortSqueeze, OrderFlow, OrderBook, AggregatedBook, LiquidationMap, and VolumeDelta.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The WebSocket channel.&lt;/strong&gt; Most of Nydar's real-time data flows over a WebSocket connection, not REST. So the quota signal needs to travel that path too. When the backend detects quota exhaustion, it pushes a &lt;code&gt;quota_exhausted&lt;/code&gt; message type over the WebSocket. The frontend listens for this event and fires an internal &lt;code&gt;ws:quota-exhausted&lt;/code&gt; event that any component can subscribe to.&lt;/p&gt;

&lt;p&gt;This is important because WebSocket-driven widgets wouldn't see the REST 429 responses. Without the WebSocket channel, a widget that gets its data purely from the push connection would keep showing stale data with no indication of why it stopped updating. The user would see prices frozen at their last value with no error, no market-closed badge, nothing. Just silence. That's worse than an error message — it's a lie.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the user actually sees before and after
&lt;/h3&gt;

&lt;p&gt;Let's make this concrete. Before the optimisation work, here's what happened when a user opened Nydar at 10 PM on a Tuesday:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chart widget:&lt;/strong&gt; Showed a loading spinner for 5 seconds, then "Error loading data"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Order book:&lt;/strong&gt; Infinite spinner&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Heatmap:&lt;/strong&gt; Loaded after 5 seconds but with random gaps where individual stock quotes failed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market breadth:&lt;/strong&gt; "Something went wrong"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume profile:&lt;/strong&gt; Empty canvas&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After the work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chart widget:&lt;/strong&gt; Shows the day's chart with a subtle "Market Closed" badge and the closing price&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Order book:&lt;/strong&gt; Shows the last known state with a "Market Closed" indicator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Heatmap:&lt;/strong&gt; Loads in under a second showing end-of-day sector performance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market breadth:&lt;/strong&gt; Shows the day's final breadth reading with a closing timestamp&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume profile:&lt;/strong&gt; Shows the day's volume profile, which is actually &lt;em&gt;more&lt;/em&gt; useful at end of day than during the session (the full profile is complete)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every widget shows data. Every widget explains its state. No spinners, no errors, no confusion. The platform feels alive and informed even when the market isn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: You can't optimise what you can't see
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;is_stock_market_active()&lt;/code&gt; bug taught us that we needed much better visibility into what the system was doing in real time. Phase 3 was about building that observability layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-endpoint, per-symbol tracking.&lt;/strong&gt; Not just "you made 2,000 calls today" but "the heatmap endpoint made 847 calls, 340 of which were for AAPL, across the quote and ticker sub-endpoints." This level of detail is what lets you spot anomalies instantly. If AAPL suddenly accounts for 80% of your API calls, you know something is wrong with the AAPL-specific code path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rotating log files.&lt;/strong&gt; We set up a dedicated &lt;code&gt;logs/api_usage.log&lt;/code&gt; with a rotating handler: 10MB per file, 5 files max. That gives us roughly a week of history without unbounded disk growth. The logs capture every API call with timestamp, endpoint, symbol, and cache status (hit or miss). When something goes wrong at 3 AM, we have the data to reconstruct what happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Admin dashboard endpoint.&lt;/strong&gt; We enriched our existing &lt;code&gt;/api/admin/api-usage&lt;/code&gt; endpoint to return top consumers, per-endpoint breakdowns, cache hit rates, and exhaustion status. This isn't a fancy dashboard — it's a JSON endpoint that we curl when we need to check things. Simple, but it's saved us multiple times already.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Throttled persistence.&lt;/strong&gt; The usage tracker writes its stats to a JSON file, but doing that on every API call would create its own performance problem. We added a 30-second throttle: stats accumulate in memory and flush to disk periodically, with an explicit flush on application shutdown. This is one of those boring-but-essential details that separates production systems from prototypes.&lt;/p&gt;
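&lt;p&gt;A sketch of the throttled tracker (the structure is illustrative; the 30-second interval follows the description above):&lt;/p&gt;

```python
import json
import time

class UsageTracker:
    """Accumulates call counts in memory, flushing to disk at most every
    `interval` seconds, plus an explicit flush() for application shutdown."""

    def __init__(self, path, interval=30.0):
        self.path = path
        self.interval = interval
        self.stats = {}  # "endpoint:symbol" -> call count
        self._last_flush = time.monotonic()

    def record(self, endpoint, symbol):
        key = f"{endpoint}:{symbol}"
        self.stats[key] = self.stats.get(key, 0) + 1
        if time.monotonic() - self._last_flush >= self.interval:
            self.flush()

    def flush(self):
        with open(self.path, "w") as f:
            json.dump(self.stats, f)
        self._last_flush = time.monotonic()
```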

&lt;h2&gt;
  
  
  Phase 4: The paranoia phase
&lt;/h2&gt;

&lt;p&gt;The final phase was about edge cases and defensive coding. The kind of work that doesn't show up in metrics but prevents 3 AM pages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Silent quota propagation.&lt;/strong&gt; Here's a problem we didn't anticipate: when a &lt;code&gt;QuotaExhaustedError&lt;/code&gt; gets raised inside a method, it can get caught by a generic &lt;code&gt;except Exception&lt;/code&gt; block higher up the call stack. Generic exception handlers typically log the error at ERROR level. Quota exhaustion isn't an error — it's expected behaviour. But without explicit handling, every quota event generates a log line that looks like something is broken.&lt;/p&gt;

&lt;p&gt;With 15 forex pairs refreshing every 30 seconds, that's potentially 1,800 false-alarm ERROR log entries per hour. Our log aggregator would light up like a Christmas tree for something that's completely normal.&lt;/p&gt;

&lt;p&gt;The fix is tedious but essential: add &lt;code&gt;except QuotaExhaustedError: raise&lt;/code&gt; before every &lt;code&gt;except Exception&lt;/code&gt; block in every method that might transitively call an API. The quota error punches through all the generic handlers and gets caught only by the global exception handler where it belongs. We did this across all 17+ endpoint methods in both our stock and forex data sources.&lt;/p&gt;
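&lt;p&gt;The pattern itself is short. In this sketch, &lt;code&gt;fetch&lt;/code&gt; stands in for any call that might transitively hit the provider API:&lt;/p&gt;

```python
class QuotaExhaustedError(Exception):
    pass

def get_forex_pair(fetch):
    try:
        return fetch()
    except QuotaExhaustedError:
        raise  # expected state: punch through to the global 429 handler
    except Exception as exc:
        # genuinely unexpected failures are still handled (and logged) here
        return {"error": str(exc)}
```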

&lt;p&gt;&lt;strong&gt;Premium interest capture.&lt;/strong&gt; This is the business side of quota management, and it's an example of turning a constraint into a feature.&lt;/p&gt;

&lt;p&gt;When users encounter the quota exhaustion state during market hours (which is now rare, but happens on high-traffic days), they don't just see an error. They see a subtle call-to-action: "Want uninterrupted data? Register your interest for premium access."&lt;/p&gt;

&lt;p&gt;Clicking it opens a modal that captures their email and what tier of data access they'd value. We deliberately didn't build a payment flow. We're not selling something that doesn't exist yet. We're gathering signal. The registrations go to a simple JSON store on the backend — nothing fancy, no email marketing platform, just a file that grows by one line when someone expresses interest. Aggregate stats are available at an admin endpoint so we can track conversion rates from quota events to signups.&lt;/p&gt;

&lt;p&gt;The insight here is that the users who encounter quota limitations during market hours are, by definition, our most active users. They have multiple widgets open, they're checking data frequently, they care enough to be trading during peak hours. These are exactly the people we want to talk to about a premium tier. The quota limit acts as a natural filter for our highest-value users.&lt;/p&gt;

&lt;p&gt;We didn't plan this. It emerged from asking "what should the user see when we hit the wall?" and realising that the answer isn't just "an error message" — it's an opportunity to understand what our most engaged users would pay for.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;p&gt;After all four phases rolled out over 48 hours, here's where we landed:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Daily API calls&lt;/td&gt;
&lt;td&gt;~14,898&lt;/td&gt;
&lt;td&gt;&amp;lt;2,000&lt;/td&gt;
&lt;td&gt;-85%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Forex calls/hour&lt;/td&gt;
&lt;td&gt;~991&lt;/td&gt;
&lt;td&gt;~60&lt;/td&gt;
&lt;td&gt;-94%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Off-hours waste&lt;/td&gt;
&lt;td&gt;~4,000/day&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;-100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Heatmap response time&lt;/td&gt;
&lt;td&gt;~5 seconds&lt;/td&gt;
&lt;td&gt;&amp;lt;1 second&lt;/td&gt;
&lt;td&gt;-80%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error logs from quota events&lt;/td&gt;
&lt;td&gt;~1,800/hour&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;-100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User-facing errors on quota&lt;/td&gt;
&lt;td&gt;Broken widgets&lt;/td&gt;
&lt;td&gt;"Market Closed" or graceful CTA&lt;/td&gt;
&lt;td&gt;qualitative&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We're now comfortably within our API budget with headroom for growth. During market hours, data freshness is identical to before — we didn't increase any cache TTLs that affect real-time trading data. The 180-second heatmap cache is the only user-visible change, and for a sector heatmap, that granularity is more than sufficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  The frontend side: market-aware error states
&lt;/h2&gt;

&lt;p&gt;One piece of this work deserves a deeper look because it applies to any product that depends on external data: the frontend error state hierarchy.&lt;/p&gt;

&lt;p&gt;Before this project, our widgets had a binary state: either data loaded successfully, or the widget showed a generic error. After the work, every widget can be in one of four states:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Loading&lt;/strong&gt; — data is being fetched&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success&lt;/strong&gt; — data rendered normally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Closed&lt;/strong&gt; — it's outside trading hours, showing last known values with a clear indicator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quota Exhausted&lt;/strong&gt; — daily limit reached, with a premium interest CTA&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The market-closed state is checked first. This is critical. If the market is closed &lt;em&gt;and&lt;/em&gt; the quota is exhausted, the user sees "Market Closed" — because that's the real reason the data isn't updating. The quota state only shows during market hours when the user would actually expect fresh data.&lt;/p&gt;
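&lt;p&gt;The precedence is simple enough to sketch — the names here are illustrative, not our exact component code:&lt;/p&gt;

```typescript
// Precedence sketch for the non-success states; names are illustrative.
type ErrorState = "market-closed" | "quota-exhausted" | null;

function resolveErrorState(marketOpen: boolean, quotaExhausted: boolean): ErrorState {
  if (!marketOpen) return "market-closed";      // checked first, always
  if (quotaExhausted) return "quota-exhausted"; // only shown during market hours
  return null;                                  // fall through to loading/success
}
```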

&lt;p&gt;We also built the market hours check into the frontend itself, and this was trickier than it sounds.&lt;/p&gt;

&lt;h3&gt;
  
  
  The DST problem
&lt;/h3&gt;

&lt;p&gt;Daylight Saving Time is the bane of any system that cares about US market hours. The NYSE opens at 9:30 AM Eastern, but "Eastern" means different UTC offsets depending on the time of year: UTC-5 during winter (EST), UTC-4 during summer (EDT). If you hardcode the offset, your market-hours check breaks twice a year — in March and November — and it breaks silently. Prices keep updating normally; it's just your cache gating that's wrong, burning extra API calls for two weeks until someone notices.&lt;/p&gt;

&lt;p&gt;We solved this using the browser's built-in &lt;code&gt;Intl.DateTimeFormat&lt;/code&gt; API with the &lt;code&gt;America/New_York&lt;/code&gt; timezone. The browser handles DST transitions correctly because it uses the IANA timezone database, which is updated regularly. No external API calls, no timezone libraries, no hardcoded offsets.&lt;/p&gt;

&lt;p&gt;The implementation is lightweight: format the current UTC time into &lt;code&gt;America/New_York&lt;/code&gt; components, extract the hour and minute, and check if it falls between 9:30 and 16:00 on a weekday. This runs entirely on the client. Zero network overhead. The frontend knows what time it is in New York without asking anyone.&lt;/p&gt;
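&lt;p&gt;A minimal sketch of that check — the function name and structure are ours for illustration, but the &lt;code&gt;Intl&lt;/code&gt; calls are standard:&lt;/p&gt;

```typescript
// Client-side market-hours check backed by the IANA timezone database.
// Function name and exact structure are illustrative.
function isUSMarketOpen(now: Date = new Date()): boolean {
  const parts = new Intl.DateTimeFormat("en-US", {
    timeZone: "America/New_York",
    weekday: "short",
    hour: "numeric",
    minute: "numeric",
    hourCycle: "h23", // avoids the "24:00" midnight quirk of hour12: false
  }).formatToParts(now);

  const get = (type: string): string =>
    parts.find((p) => p.type === type)?.value ?? "0";

  const weekday = get("weekday"); // "Mon" … "Sun", already in New York time
  if (weekday === "Sat" || weekday === "Sun") return false;

  const minutes = parseInt(get("hour"), 10) * 60 + parseInt(get("minute"), 10);
  // NYSE regular session: 9:30 (minute 570) to 16:00 (minute 960) Eastern.
  return minutes >= 570 && minutes < 960;
}
```

&lt;p&gt;Note that the weekday comes back already converted to New York time, so a timestamp near midnight UTC classifies against the correct Eastern-time day — and the DST offset is the browser's problem, not ours.&lt;/p&gt;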

&lt;p&gt;We use this in multiple places: the &lt;code&gt;WidgetErrorState&lt;/code&gt; component checks it before deciding whether to show "Market Closed" vs "Quota Exhausted", and the AssetBands widget uses it to switch between 30-second active polling and 5-minute idle polling.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Audit before you optimise.&lt;/strong&gt; We would have guessed wrong about the biggest offenders. The forex bulk-cache fix was the single highest-impact change, but it wasn't on anyone's radar until we saw the per-endpoint numbers. Intuition is good for generating hypotheses. Data is what tells you where to actually spend your time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Active" and "open" aren't the same thing.&lt;/strong&gt; Pre-market and after-hours sessions exist, and traders do care about them. But they're not worth burning your entire daily API budget on. For cache gating and polling decisions, regular trading hours is the right boundary. If you need extended-hours data, budget for it explicitly rather than treating it as a side effect of your regular polling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error states are features, not afterthoughts.&lt;/strong&gt; The quota exhaustion UX is arguably better than what we had before the optimisation work. Users now see contextual, market-aware status messages instead of generic loading spinners when data isn't updating. The system is more honest about what's happening and why. A well-designed error state builds trust. A broken widget destroys it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Typed exceptions beat boolean flags.&lt;/strong&gt; Using &lt;code&gt;QuotaExhaustedError&lt;/code&gt; as a proper exception type — rather than returning &lt;code&gt;None&lt;/code&gt; or setting &lt;code&gt;is_exhausted = True&lt;/code&gt; on some state object — meant that every layer of the stack could handle quota events explicitly. The global exception handler catches everything we forgot. The silent propagation pattern keeps logs clean. Types are documentation that the compiler enforces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability pays for itself on day one.&lt;/strong&gt; The per-endpoint usage dashboard caught the &lt;code&gt;is_stock_market_active()&lt;/code&gt; regression within hours of deployment. Without it, we'd have woken up to another 15,000-call day with no idea why the fix we'd just shipped didn't work. Every minute spent building observability tools saves hours of debugging later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache TTLs are product decisions, not just engineering ones.&lt;/strong&gt; How stale can heatmap data be before it misleads a trader? Is 60-second-old data meaningfully different from 180-second-old data for a sector overview? These aren't questions engineers should answer alone. They require understanding what the user is actually doing with the data and what decisions they're making based on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;

&lt;p&gt;If you're building a trading platform — or any application that aggregates data from rate-limited APIs — the pattern is the same:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Measure first.&lt;/strong&gt; Per-endpoint, per-symbol, per-hour. You need the resolution to find the real problems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eliminate redundancy.&lt;/strong&gt; Bulk endpoints exist for a reason. If you're calling an API that returns 100 results to extract 1, cache the other 99.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Respect time boundaries.&lt;/strong&gt; If data can't change, don't ask for it. Market hours, business hours, weekends — build these boundaries into your polling logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design your error states.&lt;/strong&gt; When you hit the limit, what does the user see? If the answer is "a broken widget," you have work to do.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build the dashboard before you need it.&lt;/strong&gt; You'll need it sooner than you think.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;We're now running comfortably within our API budget with room to grow. But this work opened up questions we hadn't considered before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-provider failover.&lt;/strong&gt; If one data provider hits its limit, could we transparently fall back to another? We already have TwelveData and AlphaVantage as secondary sources for some data types. The quota guard infrastructure makes this possible — instead of raising &lt;code&gt;QuotaExhaustedError&lt;/code&gt;, we could try the next provider in a priority chain. We haven't built this yet, but the architecture is ready for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-user quota awareness.&lt;/strong&gt; Right now, all users share the same API budget. As we grow, we'll need to think about fair scheduling — making sure one power user with 40 widgets doesn't crowd out everyone else. The per-symbol tracking from Phase 3 gives us the data to understand usage patterns. The question is whether to implement server-side rate limiting per user, or to solve it at the data layer with smarter caching and sharing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Holiday calendars.&lt;/strong&gt; Our market-hours check handles weekdays and DST, but not market holidays. On Christmas Day, Martin Luther King Jr. Day, and a dozen other holidays, the US stock market is closed but our code thinks it's a normal Monday. We need to add a holiday calendar — either hardcoded for the year or fetched from an API (which, ironically, would be another API call to manage).&lt;/p&gt;

&lt;p&gt;These are good problems to have. They're the problems of a platform that's working, growing, and thinking about the next level of sophistication. The 48 hours we spent on API optimisation didn't just fix a quota crisis — they gave us the observability, the error-state architecture, and the caching patterns to build on for everything that comes next.&lt;/p&gt;

&lt;p&gt;Not bad for a week's work.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/how-we-cut-api-calls-by-85-percent" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Intelligent Asset Bands: How We Rebuilt Symbol Navigation From Scratch</title>
      <dc:creator>NydarTrading</dc:creator>
      <pubDate>Wed, 25 Feb 2026 18:15:00 +0000</pubDate>
      <link>https://dev.to/nydartrading/intelligent-asset-bands-how-we-rebuilt-symbol-navigation-from-scratch-3id4</link>
      <guid>https://dev.to/nydartrading/intelligent-asset-bands-how-we-rebuilt-symbol-navigation-from-scratch-3id4</guid>
      <description>&lt;h2&gt;
  
  
  The Problem With Symbol Selectors
&lt;/h2&gt;

&lt;p&gt;Every trading platform has a symbol selector. And almost every one of them is terrible.&lt;/p&gt;

&lt;p&gt;You get a search box. Maybe a dropdown with some favourites. If you're lucky, there's a watchlist you can manually curate. But none of them actually &lt;em&gt;help&lt;/em&gt; you find what's moving in the market right now. They're passive — they wait for you to already know what you want to look at.&lt;/p&gt;

&lt;p&gt;That's backwards. A good trading platform shouldn't just display data for the symbol you picked. It should surface the symbols that matter &lt;em&gt;right now&lt;/em&gt; and make switching between them instant.&lt;/p&gt;

&lt;p&gt;Think about how a professional trader actually works. They're not staring at one chart all day. They're scanning. Flicking between symbols. Looking for momentum, volume spikes, sector rotations. The symbol selector isn't a minor UI element — it's the primary interface for discovery. And yet on most platforms it's an afterthought. A text input with autocomplete, maybe a star icon to save favourites. The same UX pattern we've had since the early 2000s.&lt;/p&gt;

&lt;p&gt;Nydar used to have a search bar in the header. Type a symbol, hit enter, your dashboard updates. Functional. Boring. And fundamentally the same approach every other platform takes. You had to already know what you wanted to look at before you could look at it. If you wanted to find what was moving in crypto right now, you'd have to check an external screener, note the symbols, then come back to Nydar and type them in one by one.&lt;/p&gt;

&lt;p&gt;We wanted something better. Something that turns symbol navigation from a passive lookup into an active discovery tool. Something that replaces the question "what do I want to look at?" with the answer "here's what's worth looking at right now."&lt;/p&gt;

&lt;p&gt;That's what Asset Bands are.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Built
&lt;/h2&gt;

&lt;p&gt;Asset Bands are horizontal strips that sit between the header and your dashboard. Each enabled asset class — crypto, stocks, forex — gets its own colour-coded band with a row of clickable symbol pills. Click any pill and your entire dashboard instantly reconfigures to show that symbol's data. Every chart, every indicator, every widget updates in real-time.&lt;/p&gt;

&lt;p&gt;The visual design is intentional. Each asset class has its own accent colour: warm orange for crypto, electric blue for stocks, emerald green for forex. The left edge of each band has a solid accent border with a gradient wash that fades across the first 240 pixels. It's subtle enough that it doesn't compete with your charts, but distinctive enough that you can instantly tell which band you're looking at.&lt;/p&gt;

&lt;p&gt;Each pill shows the symbol name and a small percentage badge showing the 24-hour change. Green for up, red for down. The active symbol is highlighted in the accent colour with a subtle glow. Inactive pills have a muted, dark appearance with soft borders — they look clickable without being visually noisy.&lt;/p&gt;

&lt;p&gt;But the real power isn't in the pills themselves. It's in the filter system behind them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Per-Asset Intelligent Filters
&lt;/h3&gt;

&lt;p&gt;Each asset class gets its own set of filters, because what matters in crypto is completely different from what matters in equities or forex. We didn't want a one-size-fits-all dropdown where half the options are irrelevant. A crypto trader scanning for volume doesn't care about sector classifications. An equity trader looking at financials doesn't need a "Top 50 by market cap" filter — they know the names already.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crypto filters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top 25 / Top 50&lt;/strong&gt; — ranked by actual market capitalisation from aggregated cross-exchange data, not an arbitrary hardcoded list&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;By Volume&lt;/strong&gt; — the 25 highest-volume coins sorted by real 24-hour trading volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gainers&lt;/strong&gt; — the top 25 biggest movers to the upside right now, sorted by percentage change&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Losers&lt;/strong&gt; — the top 25 biggest drops, for contrarian plays or risk monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All&lt;/strong&gt; — every tradeable pair on the exchange&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stock filters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top 25&lt;/strong&gt; — the blue-chip core&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gainers / Losers&lt;/strong&gt; — sorted by real-time daily price change percentage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tech Sector&lt;/strong&gt; — AAPL, NVDA, MSFT, META, GOOGL, AMD, ADBE, CRM, and the rest of the tech heavyweights&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Finance Sector&lt;/strong&gt; — JPM, GS, BAC, V, MA, MS, WFC, C — Wall Street's core&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All&lt;/strong&gt; — the full universe of tracked equities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Forex filters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Major Pairs&lt;/strong&gt; — EUR/USD, GBP/USD, USD/JPY, USD/CHF, AUD/USD, NZD/USD, USD/CAD — the seven pairs that account for the vast majority of global forex volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross Pairs&lt;/strong&gt; — everything else, from EUR/GBP to AUD/JPY to CAD/JPY&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All&lt;/strong&gt; — the complete list of tracked pairs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key word there is &lt;em&gt;real&lt;/em&gt;. Every filter except the sector classifications is driven by live market data, not a static list someone typed into a config file six months ago. When you click "Gainers," you're seeing the actual top performers sorted by real price change data from the last 30 seconds. When you click "By Volume," you're seeing actual trading volume aggregated across exchanges. This isn't a screener that runs once a day and caches the results. It's live.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why These Specific Filters?
&lt;/h2&gt;

&lt;p&gt;We spent time thinking about what a trader actually needs when scanning markets. Not what looks impressive in a feature list — what actually changes behaviour.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volume: The Most Underrated Signal
&lt;/h3&gt;

&lt;p&gt;Volume is arguably the single most important piece of information a trader can have beyond price. High volume on an up-move suggests conviction — real money is flowing in, not just a few retail traders chasing. High volume on a down-move suggests capitulation or forced liquidation. Low volume on any move suggests it probably won't last.&lt;/p&gt;

&lt;p&gt;The "By Volume" filter surfaces this immediately. During a major market event — an unexpected Fed announcement, a crypto exchange collapse, a surprise earnings beat — the volume filter instantly shows you where the action is. You don't need to check 50 charts. You don't need to scan a screener on another site. The symbols with the highest conviction are right there in front of you, ranked.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gainers and Losers: Following Momentum (or Fading It)
&lt;/h3&gt;

&lt;p&gt;Momentum traders want to see what's running. Contrarian traders want to see what's been beaten down. Both need the same data, sorted differently. The Gainers filter shows the biggest percentage moves to the upside over the last 24 hours. The Losers filter shows the biggest drops.&lt;/p&gt;

&lt;p&gt;Crucially, these filters update with every price refresh cycle (every 30 seconds). A coin that was in the middle of the pack at 9am could be the top gainer by 10am. You'll see that shift in real-time because the filter re-sorts on every data update.&lt;/p&gt;
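&lt;p&gt;The filter itself is a small sort over the symbol cache. A sketch, with the entry shape trimmed to what the sort needs:&lt;/p&gt;

```typescript
// Sketch of the Gainers filter; the entry shape is illustrative.
interface PricedSymbol { symbol: string; change24h?: number; }

function topGainers(entries: PricedSymbol[], n = 25): PricedSymbol[] {
  return entries
    .filter((e) => typeof e.change24h === "number") // long-tail entries have no change data
    .sort((a, b) => b.change24h! - a.change24h!)    // biggest 24h gain first
    .slice(0, n);
}
```

&lt;p&gt;Losers is the same sort with the comparison flipped. Because this runs against the live cache, re-sorting on every 30-second refresh is all it takes to keep the ranking current.&lt;/p&gt;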

&lt;h3&gt;
  
  
  Sectors: Scanning an Industry in One Click
&lt;/h3&gt;

&lt;p&gt;When a major economic event hits — an interest rate decision, a regulatory announcement, a trade policy change — it rarely affects just one stock. It moves entire sectors. The tech sector filter lets you see all tech stocks at once: are they all red, or is NVDA green while the rest are down? That's a signal. The finance sector filter shows you whether banks are moving together or diverging.&lt;/p&gt;

&lt;p&gt;This is how institutional traders think. They don't look at individual stocks in isolation. They look at sectors, at correlations, at relative strength. The sector filters give retail traders the same lens.&lt;/p&gt;

&lt;h3&gt;
  
  
  Majors vs Crosses: Forex-Specific Logic
&lt;/h3&gt;

&lt;p&gt;Forex is a different beast entirely. The seven major pairs all involve the US dollar and account for the overwhelming majority of global forex volume. Cross pairs don't involve the dollar and tend to have wider spreads and lower liquidity. Traders often specialise in one or the other, and the filter reflects that natural division.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Data Pipeline That Makes It Work
&lt;/h2&gt;

&lt;p&gt;Here's where the engineering gets interesting. Making filters that &lt;em&gt;feel&lt;/em&gt; instant but are backed by &lt;em&gt;real&lt;/em&gt; market data requires careful orchestration of multiple data sources, each with different strengths, different rate limits, and different refresh characteristics.&lt;/p&gt;

&lt;h3&gt;
  
  
  CoinGecko for Market Rankings
&lt;/h3&gt;

&lt;p&gt;For crypto, we needed market capitalisation rankings, 24-hour volume figures, and price change percentages — and we needed them for the top 100 coins in a single API call. CoinGecko's &lt;code&gt;/coins/markets&lt;/code&gt; endpoint is purpose-built for exactly this.&lt;/p&gt;

&lt;p&gt;One request returns everything: current price, market cap, market cap rank, total volume, and 24-hour price change percentage. For 100 coins. In a single response. That's remarkably efficient — the kind of endpoint that was clearly designed by people who understood how trading platforms actually consume market data.&lt;/p&gt;

&lt;p&gt;We cache this response for 10 minutes on the backend behind a dedicated cache key (&lt;code&gt;ranked_crypto&lt;/code&gt;) in our API cache layer. Market cap rankings don't shuffle every second. If Bitcoin and Ethereum swap places in market cap, you'll see it within 10 minutes. That's more than fast enough for a ranking filter, and it means we're making roughly 144 CoinGecko API calls per day instead of thousands. Given CoinGecko's rate limits on their free tier, this matters.&lt;/p&gt;

&lt;p&gt;The 10-minute TTL was a deliberate choice. Too short and we'd burn through rate limits. Too long and the "Gainers" filter would feel stale. Ten minutes means that during a volatile market day, you're never more than 10 minutes behind on rankings, but during a quiet day you're not wasting API calls fetching identical data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Binance for the Long Tail
&lt;/h3&gt;

&lt;p&gt;CoinGecko gives us the top 100 with rich metadata. But Nydar tracks hundreds of crypto pairs via Binance. The top 100 covers the majors, but there are traders who want to look at smaller-cap tokens that only show up on exchange-specific listings.&lt;/p&gt;

&lt;p&gt;So after fetching ranked data from CoinGecko, we merge in the full Binance symbol list. The top 100 coins have market cap rankings, volume data, and change percentages baked in from the CoinGecko response. Everything beyond that gets basic symbol information from Binance — enough to show the pill and load the chart, but without the ranking metadata.&lt;/p&gt;

&lt;p&gt;This two-tier approach means the "By Volume" and "Top 25" filters are driven by CoinGecko's aggregated cross-exchange data (which is more accurate than any single exchange's figures, since it represents global liquidity), while the "All" filter still shows every Binance pair for traders who want to go deep into the long tail.&lt;/p&gt;

&lt;p&gt;The merge itself is careful not to create duplicates. We build a Set of symbols from the CoinGecko response, then filter the Binance list to exclude anything already covered. The result is a single symbol cache where the first ~100 entries are rich (with market cap, volume, rank) and the rest are basic (symbol and source only).&lt;/p&gt;
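&lt;p&gt;A sketch of that merge — the &lt;code&gt;BandSymbol&lt;/code&gt; shape is illustrative:&lt;/p&gt;

```typescript
// Sketch of the two-tier merge; the BandSymbol shape is illustrative.
interface BandSymbol {
  symbol: string;
  source: "coingecko" | "binance";
  marketCapRank?: number; // present only on the rich CoinGecko entries
}

function mergeSymbolLists(ranked: BandSymbol[], binance: BandSymbol[]): BandSymbol[] {
  // Symbols already covered by the rich CoinGecko response.
  const covered = new Set(ranked.map((s) => s.symbol));
  // Rich entries first, then only the long-tail pairs CoinGecko didn't rank.
  return [...ranked, ...binance.filter((s) => !covered.has(s.symbol))];
}
```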

&lt;h3&gt;
  
  
  Live Price Updates Every 30 Seconds
&lt;/h3&gt;

&lt;p&gt;Regardless of which data source populated the symbol list, every visible pill gets live price updates every 30 seconds via our bulk ticker endpoint. This serves two critical purposes.&lt;/p&gt;

&lt;p&gt;First, it keeps the change percentage badges on each pill current. The tiny "+3.2%" next to a symbol isn't decorative — it's actionable information. If that number is stale, the entire filter system is stale. A 30-second refresh means the visual indicators are never more than half a minute behind reality.&lt;/p&gt;

&lt;p&gt;Second, it means the Gainers and Losers filters are working with fresh data every time you switch filters. The filter function runs against the current symbol cache, and the symbol cache is updated every 30 seconds. So if you switch to "Gainers" at 10:00:15, you're seeing data from the 10:00:00 refresh at worst.&lt;/p&gt;

&lt;p&gt;The update path is important here. When price data arrives from the ticker endpoint, we merge it into the existing symbol cache &lt;em&gt;without&lt;/em&gt; replacing the CoinGecko metadata. So a coin keeps its market cap ranking (from CoinGecko, refreshed every 10 minutes) while its price and change percentage update every 30 seconds (from the live ticker feed). Two data sources, different refresh rates, seamlessly merged into a single reactive data structure.&lt;/p&gt;

&lt;p&gt;This is handled at the Zustand store level. The &lt;code&gt;updateSymbolPrices&lt;/code&gt; action takes a map of &lt;code&gt;{symbol → {price, change24h, volume24h}}&lt;/code&gt; and patches each matching entry in the cache without touching fields like &lt;code&gt;marketCapRank&lt;/code&gt; or &lt;code&gt;marketCap&lt;/code&gt;. Zustand's immutable update pattern means React only re-renders the pills whose data actually changed — not the entire band.&lt;/p&gt;
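&lt;p&gt;As a pure function, the patch step looks roughly like this — the real version lives inside the Zustand action, and the field names here are trimmed:&lt;/p&gt;

```typescript
// The merge step behind updateSymbolPrices, as a pure function.
// Shapes are illustrative; the real version sits inside the Zustand action.
interface SymbolEntry {
  symbol: string;
  price?: number;
  change24h?: number;
  volume24h?: number;
  marketCapRank?: number; // CoinGecko metadata — never touched here
}
type PriceUpdate = Pick<SymbolEntry, "price" | "change24h" | "volume24h">;

function patchPrices(
  cache: SymbolEntry[],
  updates: Record<string, PriceUpdate>
): SymbolEntry[] {
  // New objects only for entries that changed, so React re-renders just
  // the affected pills; everything else keeps referential identity.
  return cache.map((entry) =>
    updates[entry.symbol] ? { ...entry, ...updates[entry.symbol] } : entry
  );
}
```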

&lt;h3&gt;
  
  
  Stock Data: Working Within Constraints
&lt;/h3&gt;

&lt;p&gt;For stocks, the data situation is different. Finnhub's free tier gives us real-time quotes with price and daily change percentage, but no volume data and no sector classifications. That constrains what we can offer.&lt;/p&gt;

&lt;p&gt;The Gainers and Losers filters work the same way as crypto — sort by &lt;code&gt;change24h&lt;/code&gt;, take the top or bottom 25. The data comes from the same 30-second ticker refresh cycle.&lt;/p&gt;

&lt;p&gt;For sector classification, we don't have a free API that returns sector data with price quotes. So we maintain a hardcoded sector map — tech and finance classifications for the ~36 stocks in our universe that fit those categories.&lt;/p&gt;
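&lt;p&gt;An illustrative excerpt of that map — not the full ~36-stock list:&lt;/p&gt;

```typescript
// Illustrative excerpt of the hardcoded sector map (not the full list).
const SECTOR_MAP: Record<string, "tech" | "finance"> = {
  AAPL: "tech", NVDA: "tech", MSFT: "tech", META: "tech",
  JPM: "finance", GS: "finance", BAC: "finance", V: "finance",
};

const inSector = (symbol: string, sector: "tech" | "finance"): boolean =>
  SECTOR_MAP[symbol] === sector;
```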

&lt;p&gt;Is this ideal? No. We'd rather have a dynamic sector API. But the pragmatic reality is that Apple isn't going to stop being a tech company between API calls. The sector map changes perhaps once a year when a company restructures or when we add new stocks to our universe. Hardcoding it is the honest engineering choice rather than over-engineering a dynamic solution for data that's effectively static.&lt;/p&gt;

&lt;p&gt;We deliberately chose tech and finance as the two sector filters because they represent the two largest and most traded sectors in our equity universe. Healthcare, energy, and consumer discretionary are candidates for future expansion as we grow the stock universe.&lt;/p&gt;

&lt;h3&gt;
  
  
  Forex: Convention Over Configuration
&lt;/h3&gt;

&lt;p&gt;Forex pair classification is the simplest of the three. The seven major pairs (EUR/USD, GBP/USD, USD/JPY, USD/CHF, AUD/USD, NZD/USD, USD/CAD) are defined by global convention, not by any metric that changes. They're the pairs where both currencies are from major developed economies and one side is always USD. Everything else is a cross. We store the majors as a &lt;code&gt;Set&lt;/code&gt; and classify by membership — if it's in the set, it's a major. If not, it's a cross.&lt;/p&gt;
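&lt;p&gt;The whole classifier, more or less (the helper name is ours):&lt;/p&gt;

```typescript
// The seven majors are fixed by convention, so a Set lookup is all we need.
const MAJOR_PAIRS = new Set([
  "EUR/USD", "GBP/USD", "USD/JPY", "USD/CHF", "AUD/USD", "NZD/USD", "USD/CAD",
]);

const classifyPair = (pair: string): "major" | "cross" =>
  MAJOR_PAIRS.has(pair) ? "major" : "cross";
```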

&lt;p&gt;This is the kind of thing that doesn't need to be clever. The seven major forex pairs haven't changed in decades. They won't change tomorrow. A Set lookup is the right answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The State Architecture
&lt;/h2&gt;

&lt;p&gt;The entire filter system runs through a single Zustand store (&lt;code&gt;assetBandStore&lt;/code&gt;) that manages state for all three asset classes simultaneously. The architecture decisions here were important for performance and persistence.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Gets Persisted (and What Doesn't)
&lt;/h3&gt;

&lt;p&gt;The store uses Zustand's &lt;code&gt;persist&lt;/code&gt; middleware with a &lt;code&gt;partialize&lt;/code&gt; function that carefully controls what gets saved to localStorage. User preferences — which bands are enabled, which filter is active, custom lists — those persist across sessions. If you set stocks to "Tech Sector" and close your browser, it's still on "Tech Sector" when you come back.&lt;/p&gt;

&lt;p&gt;But the symbol cache and price data do &lt;em&gt;not&lt;/em&gt; persist. They refresh on every page load. This is deliberate. Persisting a 100-item symbol cache with prices and volumes would mean showing stale data for the first few seconds of every session until the fresh data arrives. That's worse than showing nothing for a moment while data loads, because stale data &lt;em&gt;looks&lt;/em&gt; real. A user might make a decision based on last Tuesday's volume data because it loaded instantly from localStorage. Instead, we show the known symbols immediately (from a hardcoded fallback list) and replace them with ranked data within a second or two.&lt;/p&gt;
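&lt;p&gt;A sketch of the &lt;code&gt;partialize&lt;/code&gt; function we hand to the middleware — the store shape here is simplified, not our exact &lt;code&gt;assetBandStore&lt;/code&gt;:&lt;/p&gt;

```typescript
// Sketch of the partialize handed to Zustand's persist middleware.
// The store shape is simplified for illustration.
interface BandState {
  bands: Record<string, { enabled: boolean; filter: string }>;
  customLists: Record<string, string[]>;
  symbolCache: Record<string, unknown[]>; // live data — never persisted
}

const partialize = (state: BandState) => ({
  // Preferences survive the session…
  bands: state.bands,
  customLists: state.customLists,
  // …but symbolCache (and its prices) is deliberately omitted, so
  // localStorage can never serve last week's volume data on load.
});
```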

&lt;h3&gt;
  
  
  Selective Re-renders
&lt;/h3&gt;

&lt;p&gt;Each &lt;code&gt;BandRow&lt;/code&gt; component subscribes to only its own slice of the store. The crypto band reads &lt;code&gt;symbolCache['crypto']&lt;/code&gt; and &lt;code&gt;bands['crypto']&lt;/code&gt;. It doesn't subscribe to the stock or forex data. This means a price update for AAPL doesn't cause the crypto band to re-render. In a system where three asset classes are each updating every 30 seconds, this matters for keeping the UI responsive.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;useMemo&lt;/code&gt; on the filter application means the sorted/filtered list only recomputes when the underlying data actually changes — not on every render cycle. And the &lt;code&gt;useCallback&lt;/code&gt; on the selection handler prevents unnecessary re-renders of the pill components.&lt;/p&gt;




&lt;h2&gt;
  
  
  The UX Details That Took Longer Than the Backend
&lt;/h2&gt;

&lt;p&gt;The data pipeline was the intellectually satisfying part. But the UX polish is what actually makes Asset Bands &lt;em&gt;usable&lt;/em&gt;, and it took significantly more iteration than the backend. We went through at least four major revisions of the interaction model before landing on something that felt right.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrong-Type Widget Handling: Three Attempts
&lt;/h3&gt;

&lt;p&gt;Here's a problem we didn't anticipate until we started using Asset Bands ourselves: what happens when you have crypto-specific widgets (Order Flow, Volume Delta) on your dashboard and then click a stock symbol?&lt;/p&gt;

&lt;p&gt;These widgets can only display crypto data — they depend on Binance WebSocket feeds that don't exist for equities. So when you switch to AAPL, they have nothing to show.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attempt 1: Overlay messages.&lt;/strong&gt; Our first solution was straightforward — show a message overlay: "This widget requires a crypto symbol. Select a crypto symbol above to view data." Technically correct. Visually horrifying. The crypto widgets became massive empty rectangles with a small text label floating in the middle. Your carefully arranged 6-widget dashboard suddenly had three giant voids in it. It looked broken, even though it was working exactly as designed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attempt 2: Auto-minimise in place.&lt;/strong&gt; We collapsed mismatched widgets to a single-row height strip in their current grid position. Better — no more voids. But now you had thin collapsed bars scattered randomly across your dashboard, breaking the visual flow. A collapsed bar at row 2, column 3 created an awkward gap between the widgets above and below it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attempt 3: Auto-minimise to bottom.&lt;/strong&gt; The final solution: when a widget's asset class doesn't match the selected symbol, it collapses to a single row &lt;em&gt;and&lt;/em&gt; gets pushed to the bottom of the grid as a full-width bar. When you switch back to a matching symbol, it expands back to its original size and position.&lt;/p&gt;

&lt;p&gt;This required solving a non-obvious problem with &lt;code&gt;react-grid-layout&lt;/code&gt;'s vertical compaction. Setting &lt;code&gt;y: 999&lt;/code&gt; on a widget pushes it down, but only within the columns it occupies. A 4-column-wide widget at &lt;code&gt;x: 8&lt;/code&gt; would sink to the bottom of columns 8–11, potentially overlapping with full-height widgets in other columns.&lt;/p&gt;

&lt;p&gt;The fix was making minimised widgets full-width (&lt;code&gt;w: 12, x: 0&lt;/code&gt;). A full-width widget at &lt;code&gt;y: 999&lt;/code&gt; &lt;em&gt;must&lt;/em&gt; go below everything else because it spans all columns. The grid's compaction algorithm has no choice but to place it at the very bottom.&lt;/p&gt;
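&lt;p&gt;The trick reduces to a tiny layout transform. The item shape below mirrors a subset of &lt;code&gt;react-grid-layout&lt;/code&gt;'s &lt;code&gt;Layout&lt;/code&gt; type; the function name is ours:&lt;/p&gt;

```typescript
// Subset of react-grid-layout's Layout item shape.
interface LayoutItem { i: string; x: number; y: number; w: number; h: number; }

// Collapse a mismatched widget to a full-width, single-row bar pinned
// below everything else. Full width (w: 12, x: 0) is what forces the
// compaction algorithm to place it at the very bottom — a y of 999 alone
// only sinks it within its own columns.
function minimiseToBottom(item: LayoutItem): LayoutItem {
  return { ...item, x: 0, w: 12, y: 999, h: 1 };
}
```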

&lt;p&gt;We also needed to preserve the original dimensions for restoration. When a widget auto-minimises, we store its original &lt;code&gt;height&lt;/code&gt;, &lt;code&gt;width&lt;/code&gt;, &lt;code&gt;x&lt;/code&gt;, &lt;code&gt;y&lt;/code&gt;, and &lt;code&gt;minWidth&lt;/code&gt; in the widget's persistent config object. When the mismatch resolves (user switches back to crypto), we read those values back and restore the widget to its exact previous position and size. This survives page reloads because the widget config persists through Zustand's localStorage layer.&lt;/p&gt;

&lt;p&gt;There's also a flag (&lt;code&gt;_autoMin&lt;/code&gt;) that distinguishes auto-minimised widgets from manually minimised ones. If you manually minimise a widget and then switch asset classes, we don't want the auto-expand to fight with your manual choice. The flag ensures that only widgets that were &lt;em&gt;automatically&lt;/em&gt; collapsed get automatically restored.&lt;/p&gt;
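&lt;p&gt;The save-and-restore cycle might look like this sketch — only the &lt;code&gt;_autoMin&lt;/code&gt; flag is named in our code; the &lt;code&gt;_saved&lt;/code&gt; field and the function names here are illustrative:&lt;/p&gt;

```typescript
// Geometry we preserve across an auto-minimise.
interface SavedDims { x: number; y: number; width: number; height: number; minWidth: number; }

interface WidgetConfig extends SavedDims {
  _autoMin?: boolean; // true only when the system (not the user) collapsed it
  _saved?: SavedDims; // original geometry, kept for restoration
}

// Collapse on asset-class mismatch: remember geometry, go full-width at the bottom.
function autoMinimise(cfg: WidgetConfig): WidgetConfig {
  if (cfg._autoMin) return cfg; // already auto-collapsed, nothing to do
  const { x, y, width, height, minWidth } = cfg;
  return { ...cfg, _saved: { x, y, width, height, minWidth }, _autoMin: true, x: 0, y: 999, width: 12, height: 1 };
}

// Expand when the mismatch resolves; manual minimises (_autoMin unset) are left alone.
function autoRestore(cfg: WidgetConfig): WidgetConfig {
  if (!cfg._autoMin || !cfg._saved) return cfg;
  return { ...cfg, ...cfg._saved, _autoMin: false, _saved: undefined };
}
```

&lt;p&gt;Because this all lives in the widget's plain config object, it serialises through Zustand's localStorage persistence for free.&lt;/p&gt;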

&lt;h3&gt;
  
  
  Scroll Behaviour
&lt;/h3&gt;

&lt;p&gt;Symbol bands can have 50+ pills. They need to scroll horizontally. This sounds simple but has several edge cases that affect usability.&lt;/p&gt;

&lt;p&gt;We implemented three complementary scroll mechanisms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mouse wheel&lt;/strong&gt; — the scroll wheel maps to horizontal scroll. This is the expected behaviour for a horizontal list, but browsers don't do it automatically for horizontal overflow. We capture the &lt;code&gt;onWheel&lt;/code&gt; event and translate &lt;code&gt;deltaY&lt;/code&gt; into &lt;code&gt;scrollLeft&lt;/code&gt; changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Click and drag&lt;/strong&gt; — you can grab the band and slide it laterally. This required a careful movement threshold to distinguish a &lt;em&gt;click&lt;/em&gt; (which should select a symbol) from a &lt;em&gt;drag&lt;/em&gt; (which should scroll the list). We track mouse movement from &lt;code&gt;mousedown&lt;/code&gt; and only enter "drag mode" after 3 pixels of horizontal movement. If the user drags, we suppress the subsequent &lt;code&gt;click&lt;/code&gt; event so it doesn't accidentally select a symbol.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Edge fade&lt;/strong&gt; — a CSS gradient overlay on the right edge signals that there are more pills to discover. It's a pointer-events-transparent div absolutely positioned over the scroll container, fading from the surface colour to transparent. Simple visual affordance, but without it users don't realise the band scrolls.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
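&lt;p&gt;The click-versus-drag logic boils down to a small state machine, sketched here without the DOM wiring (the class and method names are illustrative):&lt;/p&gt;

```typescript
// Tracks pointer movement between mousedown and click to decide whether
// the gesture is a drag (scroll the band) or a click (select a symbol).
class DragTracker {
  private startX = 0;
  private dragging = false;

  down(x: number) { this.startX = x; this.dragging = false; }

  // Returns the scrollLeft delta to apply; enters drag mode after 3px.
  move(x: number): number {
    const dx = x - this.startX;
    if (!this.dragging && Math.abs(dx) > 3) this.dragging = true;
    return this.dragging ? -dx : 0; // content follows the pointer
  }

  // On click: true means the click was really a drag and must be suppressed.
  suppressClick(): boolean { return this.dragging; }
}

// Wheel handler sketch: translate vertical wheel delta into horizontal scroll.
// function onWheel(e: WheelEvent, el: HTMLElement) { el.scrollLeft += e.deltaY; }
```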

&lt;h3&gt;
  
  
  The Pill Design: Buttons, Not Labels
&lt;/h3&gt;

&lt;p&gt;The symbol pills went through their own iteration. The first version looked like text labels — thin, no borders, no background. They didn't look clickable. Users hesitated before clicking them because the visual affordance was wrong.&lt;/p&gt;

&lt;p&gt;We switched to button-like pills: visible borders (&lt;code&gt;border-dark-700/30&lt;/code&gt;), a subtle background (&lt;code&gt;bg-dark-800/25&lt;/code&gt;), rounded corners (&lt;code&gt;rounded-md&lt;/code&gt;), and clear hover states that brighten the border and background. The active pill gets the full accent colour treatment — filled background, dark text, a soft glow shadow. The contrast between active (bright accent) and inactive (muted dark) is immediate and unambiguous.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;active:scale-95&lt;/code&gt; on click gives tactile feedback — the pill shrinks slightly when pressed, like a physical button. The active pill gets &lt;code&gt;scale-[1.04]&lt;/code&gt; so it sits slightly proud of its neighbours. These are tiny details, but they add up to something that feels intentional and polished rather than thrown together.&lt;/p&gt;
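&lt;p&gt;The active/inactive switch can be sketched as a small class-name helper. The utility names are the ones quoted above; the accent classes are assumptions:&lt;/p&gt;

```typescript
// Build the Tailwind class string for a pill based on selection state.
function pillClasses(active: boolean): string {
  const base = "rounded-md border px-2 py-0.5 transition active:scale-95";
  return active
    ? `${base} bg-accent text-dark-900 scale-[1.04] shadow-accent/40` // filled, glowing, slightly proud
    : `${base} border-dark-700/30 bg-dark-800/25 hover:border-dark-500`; // muted, brightens on hover
}
```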

&lt;h3&gt;
  
  
  Filter Dropdown
&lt;/h3&gt;

&lt;p&gt;The filter dropdown is deliberately minimal. A small button next to the asset label shows the current filter's short code. Click it and a dropdown appears with the available filters for that asset class.&lt;/p&gt;

&lt;p&gt;Each filter option shows a human-readable label ("By Volume") and a short code ("Vol"). The active filter is highlighted with the asset's accent colour background. Click outside to dismiss. There are no animations, no transitions, no multi-level menus, no search, no icons.&lt;/p&gt;

&lt;p&gt;We considered adding more visual flair to the dropdown — icons for each filter, descriptions, maybe a preview count. We stripped it all away. The dropdown appears, you make a choice, it disappears. The faster you can get through the filter selection, the faster you're back to looking at actual market data. Every millisecond spent in a dropdown menu is a millisecond not spent trading.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Trading Workflow, Before and After
&lt;/h2&gt;

&lt;p&gt;The real value of Asset Bands isn't any individual feature. It's the shift in mental model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before Asset Bands:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Nydar&lt;/li&gt;
&lt;li&gt;Dashboard shows whatever symbol you looked at last&lt;/li&gt;
&lt;li&gt;If you want to check what's moving, open CoinGecko / Finviz / another screener in a new tab&lt;/li&gt;
&lt;li&gt;Find an interesting symbol on the external screener&lt;/li&gt;
&lt;li&gt;Switch back to Nydar&lt;/li&gt;
&lt;li&gt;Type the symbol in the search bar&lt;/li&gt;
&lt;li&gt;Wait for the dashboard to update&lt;/li&gt;
&lt;li&gt;Repeat for the next symbol&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;With Asset Bands:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Nydar&lt;/li&gt;
&lt;li&gt;Glance at the bands — see what's green, what's red, what's moving&lt;/li&gt;
&lt;li&gt;Click a symbol — dashboard updates instantly&lt;/li&gt;
&lt;li&gt;Click "Gainers" — see the top performers right now&lt;/li&gt;
&lt;li&gt;Notice SOL is up 8% — click it — full analysis in front of you&lt;/li&gt;
&lt;li&gt;Switch to stocks — click "Tech" — scan the sector&lt;/li&gt;
&lt;li&gt;Notice NVDA is diverging from AMD — click between them to compare&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's a fundamentally different relationship between trader and platform. Instead of a tool that responds to your questions, it's a tool that prompts better questions. The information is &lt;em&gt;pushed&lt;/em&gt; to you rather than &lt;em&gt;pulled&lt;/em&gt; by you.&lt;/p&gt;

&lt;p&gt;Professional traders at institutional desks have Bloomberg terminals that show exactly this kind of information — market movers, sector performance, volume leaders. It's constantly visible, constantly updating, and it shapes how they think about the market moment to moment. Asset Bands bring that same awareness to retail traders.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;Some metrics on what went into this feature:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;6 files modified&lt;/strong&gt; across frontend and backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 data sources&lt;/strong&gt; orchestrated (CoinGecko, Binance, Finnhub) with different refresh rates and cache strategies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;15 distinct filters&lt;/strong&gt; across three asset classes, each backed by real market data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;30-second&lt;/strong&gt; price refresh cycle on every visible pill&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10-minute&lt;/strong&gt; ranking cache from CoinGecko, reducing API calls from thousands to ~144/day&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5 widget dimensions&lt;/strong&gt; stored and restored during auto-minimise (height, width, x, y, minWidth)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 scroll mechanisms&lt;/strong&gt; (wheel, drag, edge fade)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4 major UX revisions&lt;/strong&gt; before landing on the final interaction model&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Asset Bands are live now. If you're a Nydar user, you'll see them between the header and your dashboard. The crypto and stock bands are enabled by default; forex is available in the asset class toggles in the header.&lt;/p&gt;

&lt;p&gt;We're planning to add &lt;strong&gt;custom watchlists&lt;/strong&gt; — the ability to create your own symbol groups and save them as a filter. The architecture already supports it. There's a &lt;code&gt;custom&lt;/code&gt; filter type in the store, a full custom list management system with create/delete/add/remove operations, and the filter function already handles the custom case. We just need to build the UI for creating and managing lists — a modal to name the list, a way to add symbols to it, and a way to select it from the filter dropdown.&lt;/p&gt;
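&lt;p&gt;Under those assumptions, the custom branch of the filter function could look roughly like this — the types and names here are illustrative, not the actual store API:&lt;/p&gt;

```typescript
// Illustrative filter types: ranked filters plus a custom watchlist reference.
type RankedFilter = "gainers" | "volume";
type FilterId = RankedFilter | { custom: string };

interface Store { customLists: { [name: string]: string[] } }

// The custom case: intersect the visible universe with the saved list,
// preserving the universe's existing order. Ranked filters pass through
// untouched — their ordering comes from the data pipeline.
function resolveSymbols(store: Store, filter: FilterId, universe: string[]): string[] {
  if (typeof filter === "object") {
    const list = store.customLists[filter.custom] ?? [];
    return universe.filter((s) => list.includes(s));
  }
  return universe;
}
```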

&lt;p&gt;We're also looking at adding &lt;strong&gt;more sector classifications&lt;/strong&gt; for stocks as we expand the equity universe. Healthcare and energy are the obvious next additions. And potentially a &lt;strong&gt;"trending" filter&lt;/strong&gt; for crypto that surfaces coins seeing unusual search or social activity — CoinGecko has a trending endpoint that could power this.&lt;/p&gt;

&lt;p&gt;The longer-term vision is for Asset Bands to become a genuine market scanning tool. Not just a way to switch symbols, but a way to &lt;em&gt;discover&lt;/em&gt; trading opportunities. Real-time anomaly detection — flagging when a coin's volume spikes 5x its average, or when an entire sector moves in the opposite direction of the market. Correlation alerts — when two usually-correlated symbols diverge. Regime change detection — surfacing when the market transitions from trending to ranging.&lt;/p&gt;

&lt;p&gt;For now, try switching between the filters. Click "Gainers" on a red day. Look at "By Volume" during a major market event. Switch between Tech and Finance when rate decisions drop. You'll start noticing things you would have missed with a search box.&lt;/p&gt;

&lt;p&gt;That's the whole point.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://nydar.co.uk/blog/intelligent-asset-bands-real-market-filters" rel="noopener noreferrer"&gt;Nydar&lt;/a&gt;. Nydar is a free trading platform with AI-powered signals and analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>trading</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
