<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: George Kioko</title>
    <description>The latest articles on DEV Community by George Kioko (@the_aientrepreneur_7ae85).</description>
    <link>https://dev.to/the_aientrepreneur_7ae85</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819055%2Fd9abfd38-f5cf-4c9c-bb04-30b1ea57dd40.jpg</url>
      <title>DEV Community: George Kioko</title>
      <link>https://dev.to/the_aientrepreneur_7ae85</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/the_aientrepreneur_7ae85"/>
    <language>en</language>
    <item>
      <title>Seven new Apify actors in two days: healthcare, AI infra, GovCon, real estate</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 14 May 2026 01:01:06 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/seven-new-apify-actors-in-two-days-healthcare-ai-infra-govcon-real-estate-2io4</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/seven-new-apify-actors-in-two-days-healthcare-ai-infra-govcon-real-estate-2io4</guid>
      <description>&lt;h1&gt;
  
  
  Seven new Apify actors in two days
&lt;/h1&gt;

&lt;p&gt;Two-day sprint. Codex wrote the per-actor source code; I orchestrated and verified each one against real data. All seven are live on Apify Store. Pay-per-event pricing activates 2026-05-26.&lt;/p&gt;

&lt;p&gt;Each one targets a specific buyer category with a real budget: not generic scrapers, not toy demos. Below are the numbers on what each one does and which buyers should care.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Hospital Price Transparency MRF Normalizer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/hospital-mrf-normalizer" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CMS enforcement of the hospital price transparency rule tightened in April 2026. Every US hospital publishes a machine-readable file of negotiated rates by payer and procedure, but the files are gigantic and inconsistent: CMS v2 JSON, CMS v3 JSON, CSV with column drift between hospitals.&lt;/p&gt;

&lt;p&gt;This actor wraps the parsing into one Standby API. It detects the format automatically (gzip, JSON, CSV), streams the file, and normalizes each row to one schema with payer, plan, billing code, code type, negotiated rate, methodology, and expiration date.&lt;/p&gt;

&lt;p&gt;Pricing: $0.50 per start, $0.002 per rate row, $0.02 per provider procedure payer bundle.&lt;/p&gt;
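&lt;p&gt;To make the per-event pricing concrete, here is a sketch of how a run's cost composes. The three prices are from the line above; the function name and the example volumes are mine:&lt;/p&gt;

```python
# Hypothetical cost estimate for one normalization run:
# actor start + per rate row + per provider/procedure/payer bundle.
def run_cost(rate_rows, bundles):
    return round(0.50 + rate_rows * 0.002 + bundles * 0.02, 2)

# e.g. a 10,000-row MRF rolled into 500 bundles:
print(run_cost(10_000, 500))  # 30.5
```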

&lt;p&gt;Verified live test: Cooper University Hospital CSV returned 13 normalized rows including Aetna Better Health at code 0042T (HCPCS) for $272.44 negotiated.&lt;/p&gt;

&lt;p&gt;Buyers: health cost transparency platforms (Amino, Cedar, Healthcare Bluebook), employer benefits tools (Castlight, Garner, Transcarent), TPA and PPO operators (Zelis, MultiPlan).&lt;/p&gt;

&lt;h2&gt;
  
  
  2. MCP Server Registry and Security Scorer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/mcp-server-registry-scorer" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anthropic's Model Context Protocol has exploded past 22,000 servers indexed on Glama. Agent platforms admitting MCP servers do it by README, stars, and hope.&lt;/p&gt;

&lt;p&gt;This actor joins the official Anthropic registry with npm package metadata (downloads, first published date, maintainer count) and GitHub repo data (stars, archived, last commit, open issues), and produces a deterministic risk score per server. Signals: missing repo, archived, stale (no commit in 90 days), unknown publisher, no license, anomalous tool count, registered but never built, single-maintainer npm package, advisory match.&lt;/p&gt;

&lt;p&gt;Score lands 0 to 100 in bands low, medium, high, critical. Same input always returns same score, no LLM in the loop.&lt;/p&gt;
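&lt;p&gt;A deterministic scorer of this shape is just a weighted sum of signals plus fixed band cutoffs. A minimal sketch; the weights and cutoffs here are illustrative placeholders, not the actor's actual values:&lt;/p&gt;

```python
import bisect

# Illustrative per-signal weights (NOT the actor's real ones).
WEIGHTS = {
    "missing_repo": 25,
    "no_license": 20,
    "unknown_publisher": 15,
    "remote_only_transport": 10,
    "single_maintainer": 10,
}

def score(signals):
    # Plain sum, capped at 100. Same input, same score, no LLM.
    return min(100, sum(WEIGHTS.get(s, 0) for s in signals))

def band(value):
    # Illustrative cutoffs: 0-24 low, 25-49 medium, 50-74 high, 75-100 critical.
    return ["low", "medium", "high", "critical"][bisect.bisect_right([24, 49, 74], value)]

flags = ["missing_repo", "no_license", "unknown_publisher", "remote_only_transport"]
print(score(flags), band(score(flags)))  # 70 high
```

With these placeholder numbers that particular set of flags happens to land at 70 in band high; the real scorer's arithmetic is its own.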

&lt;p&gt;Pricing: $0.50 per start, $0.025 per server profile, $0.15 per full security scan.&lt;/p&gt;

&lt;p&gt;Verified live test: &lt;code&gt;ac.inference.sh/mcp&lt;/code&gt; server flagged at risk 70 (high): no repo link, no license, unknown publisher, remote-only transport.&lt;/p&gt;

&lt;p&gt;Buyers: agent platform admission control (Docker MCP Toolkit, Cursor, Continue, Windsurf), enterprise AI governance teams, MCP marketplace ranking signals, dev tool risk dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. FDA Warning Letter and Enforcement Monitor
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/fda-warning-letter-monitor" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FDA publishes warning letters on a static index page that everyone in regulated industries needs to read and nobody wants to scrape. The April 2026 GLP-1 and telehealth enforcement push made this even more urgent.&lt;/p&gt;

&lt;p&gt;This actor pulls the full letter feed, classifies each letter by topic (GLP-1, telehealth, compounding, manufacturing, biologics, food, advertising, dietary supplement), extracts the recipient company and product line, and rolls everything up into a company level risk brief with letter count, topic mix, time since last letter, and severity signals.&lt;/p&gt;

&lt;p&gt;Pricing: $1 per start, $0.30 per letter, $1.50 per company risk brief.&lt;/p&gt;

&lt;p&gt;Verified live test: CareFusion 213 LLC returned risk band "high" with 4 open enforcement actions tied to manufacturing.&lt;/p&gt;

&lt;p&gt;Buyers: FDA consultants (Redica, Greenleaf Health, ProPharma, RQM Plus, The FDA Group), pharma QA SaaS (AssurX, Sparta, Kneat), med-device legal, telehealth ops.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Clinical Trial Investigator and Site Intelligence
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/clinical-trial-investigator-intel" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CROs and sponsors at any scale rebuild the same data join for every new study: ClinicalTrials.gov plus NPI Registry plus OpenPayments plus PubMed publication counts plus a geo cluster of sites. The data is public and the join is mechanical, but everyone rebuilds it in a half-maintained Python script.&lt;/p&gt;

&lt;p&gt;This actor wraps the join into one Standby API. Query by condition or NCT id, get investigator profiles with NPI, OpenPayments dollar totals by company, publication counts, trial history broken down by phase, and a deterministic site fit score for each location.&lt;/p&gt;

&lt;p&gt;Pricing: $1 per start, $0.10 per investigator profile, $0.50 per site fit row.&lt;/p&gt;

&lt;p&gt;Verified live test: glioblastoma phase 2 query returned Thomas J Kaley MD at Memorial Sloan Kettering with NPI 1578721858, full therapeutic area list, and complete trial history.&lt;/p&gt;

&lt;p&gt;Buyers: top 5 CROs (IQVIA, ICON, Parexel, Medidata, Veeva), clinical site networks (Advarra, Clinitiative), patient recruitment platforms (LINEA), sponsor BD teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Federal Contract Opportunity Monitor
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/federal-contract-opportunity-monitor" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SAM.gov publishes federal contract opportunities hourly. USAspending publishes awarded contracts with recipient data. Joining them and tagging by topic is the daily grind for federal sales teams and GovCon advisors. Bloomberg Government and Govini do this at enterprise pricing.&lt;/p&gt;

&lt;p&gt;This actor pulls SAM.gov internal search (no API key required) plus USAspending POST endpoint, normalizes both into one schema, tags opportunities by topic keyword (configurable per user), and produces partnership leads: which prime contractors won similar work recently and at what value.&lt;/p&gt;
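&lt;p&gt;The per-user topic tagging reduces to keyword matching over the normalized records. A minimal sketch; the topic lists and the field it matches on are illustrative, not the actor's schema:&lt;/p&gt;

```python
# Hypothetical user-configured topic keywords.
TOPICS = {
    "cyber": ["zero trust", "cybersecurity", "SOC"],
    "cloud": ["FedRAMP", "cloud migration", "IaaS"],
}

def tag(opportunity_title):
    # Case-insensitive substring match; returns sorted topic names.
    title = opportunity_title.lower()
    return sorted(t for t, kws in TOPICS.items()
                  if any(kw.lower() in title for kw in kws))

print(tag("FedRAMP cloud migration support services"))  # ['cloud']
```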

&lt;p&gt;Pricing: $1 per start, $0.10 per opportunity, $0.50 per partnership lead.&lt;/p&gt;

&lt;p&gt;Verified live test: &lt;code&gt;/leads?keyword=consulting&amp;amp;awarded_amount_min=100000&lt;/code&gt; returned General Dynamics IT with 2 recent awards totaling $1.7 billion, lead band "priority", top buying agency Department of State.&lt;/p&gt;

&lt;p&gt;Buyers: GovCon advisors (Govini, Deltek, EZGovOpps), federal sales teams at MSPs and consulting firms, subcontract introduction services, state and local procurement intel.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. LLM Provider Price and Latency Monitor
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/llm-provider-price-latency-monitor" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LLM gateways and agent platforms route across 5 to 10 providers, and pricing changes weekly. OpenRouter aggregates pricing in a UI but does not publish it as a clean JSON ingestion feed, so most teams maintain their own scraper.&lt;/p&gt;

&lt;p&gt;This actor wraps OpenRouter as canonical (200+ models, no auth), falls back to scraping OpenAI, Anthropic, Together, and Groq pricing pages when a model is missing, and returns a normalized snapshot per model with prompt and completion price per 1M tokens, context length, capability flags (vision, tools, JSON mode), and provider routing list.&lt;/p&gt;

&lt;p&gt;A second endpoint produces cross provider benchmark rows: same model family, multiple providers, cost per 1k chat pair, multiplier from cheapest to most expensive.&lt;/p&gt;
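&lt;p&gt;A benchmark row of that shape can be derived from per-1M-token prices alone. A sketch under an assumed chat pair of 300 prompt and 500 completion tokens; the token counts and prices are my placeholders:&lt;/p&gt;

```python
# Cost of 1,000 chat pairs from per-1M-token prompt/completion prices.
def cost_per_1k_pairs(prompt_per_m, completion_per_m,
                      prompt_tokens=300, completion_tokens=500):
    per_pair = (prompt_tokens * prompt_per_m + completion_tokens * completion_per_m) / 1_000_000
    return round(per_pair * 1000, 4)

# Two hypothetical providers serving the same model family.
costs = {"provider_a": cost_per_1k_pairs(0.50, 1.50),
         "provider_b": cost_per_1k_pairs(2.00, 6.00)}
multiplier = max(costs.values()) / min(costs.values())
print(costs, round(multiplier, 1))  # 4x from cheapest to most expensive
```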

&lt;p&gt;Pricing: $0.50 per start, $0.025 per model snapshot, $0.15 per benchmark row.&lt;/p&gt;

&lt;p&gt;Verified live test: &lt;code&gt;/models?provider=anthropic&amp;amp;limit=3&lt;/code&gt; returned 3 Anthropic models with full pricing populated. Benchmark endpoint returned 2 cross provider rows with cheapest_provider field.&lt;/p&gt;

&lt;p&gt;Buyers: LLM gateway operators (Portkey, Helicone, LiteLLM), agent platform model admission, FinOps cost per task budgeting, AI engineering team weekly digests.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Multi City Building Permit Aggregator
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://apify.com/george-the-developer/multi-city-building-permit-aggregator" rel="noopener noreferrer"&gt;Store link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each US city publishes building permit data in its own JSON shape. NYC moved its authoritative feed from the old DOB Issuance dataset to DOB NOW in 2024 and broke every scraper that hardcoded the old dataset IDs.&lt;/p&gt;

&lt;p&gt;This actor wraps NYC and Chicago open data portals into one schema. Per permit you get permit type, status, work type, estimated cost, address with GPS, ZIP, block lot, builder business name, builder license, property business name. A second endpoint produces a builder activity roundup for any business name.&lt;/p&gt;
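&lt;p&gt;The builder activity roundup is an aggregation over the normalized permit rows. A minimal sketch with illustrative field names; the actor's actual schema is in the Store listing:&lt;/p&gt;

```python
from collections import Counter

# Roll the normalized permit rows for one builder into a summary.
def roundup(permits):
    return {
        "permit_count": len(permits),
        "total_value": sum(p["estimated_cost"] for p in permits),
        "top_zips": [z for z, _ in Counter(p["zip"] for p in permits).most_common(3)],
    }
```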

&lt;p&gt;Pricing: $1 per start, $0.05 per permit row, $0.30 per builder activity roundup.&lt;/p&gt;

&lt;p&gt;Verified live test: COSTELLO CONSTRUCTION roundup returned 88 permits over 3 years for $3 million total value, top property businesses served (Flushing Hospital Medical Center, Jerome Avenue SM Realty LLC), top permit types, top ZIP codes, activity tier "high volume".&lt;/p&gt;

&lt;p&gt;Buyers: construction supply distributors planning territories, contractor SaaS lead enrichment (ServiceTitan, Jobber, Buildertrend), real estate investor research, market research firms.&lt;/p&gt;

&lt;h2&gt;
  
  
  What activation looks like
&lt;/h2&gt;

&lt;p&gt;Pricing activates 2026-05-26 for the first 5 actors and 2026-05-27 for the last 2. Until then, runs are free for testers who want to validate the schema.&lt;/p&gt;

&lt;p&gt;If any of the buyer categories above describes your work, request a free trial via the contact info in each Store listing. Sample data for one company or condition of your choice comes back within the day.&lt;/p&gt;

&lt;p&gt;If you want to follow what each actor earns at first activation, the monthly digest goes out via &lt;a href="https://x.com/ai_in_it" rel="noopener noreferrer"&gt;@ai_in_it on X&lt;/a&gt; and the &lt;a href="https://github.com/the-ai-entrepreneur-ai-hub/apify-actor-portfolio" rel="noopener noreferrer"&gt;apify-actor-portfolio repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;George&lt;br&gt;
&lt;a href="https://apify.com/george-the-developer" rel="noopener noreferrer"&gt;george.the.developer on Apify&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Source and verification reports: &lt;a href="https://github.com/the-ai-entrepreneur-ai-hub/apify-actor-portfolio/blob/main/articles/2026-05-13-seven-actors-launch.md" rel="noopener noreferrer"&gt;github.com/the-ai-entrepreneur-ai-hub/apify-actor-portfolio&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I built a $0.002 email validator because ZeroBounce was killing my margins on a freelance gig</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Tue, 05 May 2026 20:47:08 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/i-built-a-0002-email-validator-because-zerobounce-was-killing-my-margins-on-a-freelance-gig-3dag</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/i-built-a-0002-email-validator-because-zerobounce-was-killing-my-margins-on-a-freelance-gig-3dag</guid>
      <description>&lt;p&gt;A client paid me a fixed fee to clean a 47k email list. Sounded fine on paper. Then I priced the verification step.&lt;/p&gt;

&lt;p&gt;ZeroBounce's minimum is 2,000 credits for $39, so $0.0195 per email at the cheapest pay-as-you-go tier. Their subscription gets you to about $0.0099 per email at 10k volume. For my 47k list that meant $462 PAYG or $235 if I committed to a monthly plan I would not need next month. The freelance fee was a flat number. The math was getting ugly.&lt;/p&gt;

&lt;p&gt;I have an Apify account and a small list of standby actors that bill per call. So I went to check what an SMTP MX validator actually costs to run.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is in a real validation
&lt;/h2&gt;

&lt;p&gt;If you scope it tight there are five checks that catch most of the garbage:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;RFC 5322 syntax (regex on the local part and domain).&lt;/li&gt;
&lt;li&gt;Disposable domain list (Mailinator, 10MinuteMail; about 1.4k entries that change slowly).&lt;/li&gt;
&lt;li&gt;Free provider tag (Gmail, Yahoo, etc.; useful for B2B scoring, not deliverability).&lt;/li&gt;
&lt;li&gt;MX record lookup via DNS.&lt;/li&gt;
&lt;li&gt;Optional SMTP handshake that opens a TCP connection to the MX host on port 25 and reads the banner. No data sent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first four are local computation. Microseconds. The SMTP step is the only thing that costs a network round trip and most ISPs will give you a clean answer in under 2 seconds.&lt;/p&gt;
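&lt;p&gt;The three purely local checks look roughly like this. A pragmatic sketch; the actor's internals may differ, and the regex is a practical subset of what RFC 5322 actually allows:&lt;/p&gt;

```python
import re

# Local checks only; the MX lookup and SMTP handshake need the network.
SYNTAX = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
DISPOSABLE = {"mailinator.com", "10minutemail.com"}  # tiny stand-in for the 1.4k list
FREE = {"gmail.com", "yahoo.com", "outlook.com"}

def local_checks(email):
    domain = email.rsplit("@", 1)[-1].lower()
    return {
        "valid_syntax": bool(SYNTAX.fullmatch(email)),
        "is_disposable": domain in DISPOSABLE,
        "is_free": domain in FREE,
    }

print(local_checks("someone@mailinator.com"))
```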

&lt;h2&gt;
  
  
  The actor
&lt;/h2&gt;

&lt;p&gt;I wrote it as a standby HTTP server on Apify. One endpoint, JSON in, JSON out. Billed per email via &lt;code&gt;Actor.charge&lt;/code&gt;. Pay only on a successful return.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s1"&gt;'https://george-the-developer--email-validator-api.apify.actor/?token=YOUR_TOKEN'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"email":"someone@somewhere.com"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"someone@somewhere.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"valid_syntax"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"is_disposable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"is_free"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mx_records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mail.somewhere.com"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"smtp_check"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"deliverable"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.95&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Per call cost is $0.002. For my 47k list that came out to roughly $94 of compute, not $462. The shape of the gig changed.&lt;/p&gt;
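&lt;p&gt;The arithmetic behind that number:&lt;/p&gt;

```python
# 47k-list cost at the per-call rate quoted above.
emails = 47_219
print(round(emails * 0.002, 2))  # 94.44, the "roughly $94 of compute"
```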

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart LR
  Client[Client API call] --&amp;gt; Standby[Apify Standby Actor]
  Standby --&amp;gt; Syntax[RFC 5322 check]
  Syntax --&amp;gt; Disposable[Disposable domain list]
  Disposable --&amp;gt; MX[DNS MX lookup]
  MX --&amp;gt; SMTP[SMTP handshake on :25]
  SMTP --&amp;gt; Charge[Actor.charge per email]
  Charge --&amp;gt; Resp[(JSON response)]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What it does not do
&lt;/h2&gt;

&lt;p&gt;I want to be honest about scope. It does not do role-based detection beyond a simple list of &lt;code&gt;info@&lt;/code&gt;, &lt;code&gt;support@&lt;/code&gt;, &lt;code&gt;sales@&lt;/code&gt;, etc. It does not do toxicity scoring or abuse history; that needs a paid feed. It does not catch every catch-all domain, which is a known industry problem and not solvable with public DNS alone. ZeroBounce charges more partly because they aggregate proprietary signals from their customer base. Worth it if you are validating cold lists for cold outreach. Less worth it if you are cleaning a list of people who already opted in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Per call vs flat subscription
&lt;/h2&gt;

&lt;p&gt;The shape that bugged me about SaaS validators is the cap. You pay $79 a month and you get a fixed bucket of credits. If your usage is bursty (one client every few weeks) most of that bucket evaporates. If your usage is steady the math works out. But solo and freelance work is bursty by definition. Per call billing matches the work. No cap, no commitment, just a number per email.&lt;/p&gt;

&lt;h2&gt;
  
  
  Numbers from the gig
&lt;/h2&gt;

&lt;p&gt;47,219 emails through the actor. Run completed in 38 minutes on the lowest memory tier. About 3.1k emails came back as undeliverable, 1.2k as disposable, 41.5k as deliverable. The client took the cleaned list and ran their own send. I did not have to think about a credit pool.&lt;/p&gt;

&lt;h2&gt;
  
  
  When this is the wrong call
&lt;/h2&gt;

&lt;p&gt;If you are running enterprise email marketing at 10M sends a month, ZeroBounce or NeverBounce or Bouncer probably fit you. They have account managers, SLA contracts, GDPR paperwork. I am one developer with an Apify account. If you need that other thing, go pay for it.&lt;/p&gt;

&lt;p&gt;If you are a freelancer cleaning a list, or a small agency that runs verification once a month, per call billing solves your specific problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Actor is at apify.com/george.the.developer/email-validator-api. Free to try if you have an Apify account, billed per call. Source for the docs and curl examples is at github.com/the-ai-entrepreneur-ai-hub/email-validator-api-docs (I keep the actor itself private but the call surface is documented).&lt;/p&gt;

&lt;p&gt;If anyone has run into the same SaaS-cap problem with their own freelance gigs, would love to hear what you ended up using.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://theaientrepreneur.hashnode.dev/i-built-a-0002-email-validator-because-zerobounce-was-killing-my-margins-on-a-freelance-gig" rel="noopener noreferrer"&gt;Hashnode&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>4 days to lock in current LinkedIn scraper pricing</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Fri, 01 May 2026 08:24:24 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/4-days-to-lock-in-current-linkedin-scraper-pricing-5elg</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/4-days-to-lock-in-current-linkedin-scraper-pricing-5elg</guid>
      <description>&lt;p&gt;Short post. The LinkedIn Company Employees Scraper on my Apify portfolio is raising PPE pricing on May 5. If you have a planned batch coming up, run it before then.&lt;/p&gt;

&lt;p&gt;This is not a marketing nudge. This is an "I auto-disabled half the actor's value because I was losing money on residential proxies, and the May 5 fix re-enables it" situation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changes May 5
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Old&lt;/th&gt;
&lt;th&gt;New&lt;/th&gt;
&lt;th&gt;Multiplier&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;short-profile&lt;/td&gt;
&lt;td&gt;$0.003&lt;/td&gt;
&lt;td&gt;$0.009&lt;/td&gt;
&lt;td&gt;3x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;full-profile&lt;/td&gt;
&lt;td&gt;$0.006&lt;/td&gt;
&lt;td&gt;$0.015&lt;/td&gt;
&lt;td&gt;2.5x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;full-profile-with-email&lt;/td&gt;
&lt;td&gt;$0.01&lt;/td&gt;
&lt;td&gt;$0.025&lt;/td&gt;
&lt;td&gt;2.5x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;actor-start&lt;/td&gt;
&lt;td&gt;$0.005&lt;/td&gt;
&lt;td&gt;$0.005&lt;/td&gt;
&lt;td&gt;unchanged&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A run that costs $0.10 today would cost about $0.30 after May 5. Same data, same accuracy, same actor.&lt;/p&gt;
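&lt;p&gt;For a short-profile-heavy run, the table composes like this. The event count is mine, picked to land near the $0.10 example:&lt;/p&gt;

```python
# Actor start plus N short-profile events, before and after May 5.
def run_cost(short_profiles, per_profile):
    return round(0.005 + short_profiles * per_profile, 3)

profiles = 33
print(run_cost(profiles, 0.003))  # 0.104 today
print(run_cost(profiles, 0.009))  # 0.302 after May 5
```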

&lt;h2&gt;
  
  
  Why now
&lt;/h2&gt;

&lt;p&gt;Verification was auto-disabled on April 24 because residential proxy cost was running above the short-profile price. The actor falls back to &lt;code&gt;confidence=low&lt;/code&gt; SERP-only output until the May 5 increase reactivates verified mode.&lt;/p&gt;

&lt;p&gt;If you absolutely cannot wait until May 5 for verified data, pass &lt;code&gt;acceptDiscoveryFallback: true&lt;/code&gt; on input. Build 2.2.21+ runs SERP-only at the current $0.003 rate.&lt;/p&gt;

&lt;p&gt;Full breakdown of how the pricing got into this state in the first place:&lt;/p&gt;

&lt;p&gt;theaientrepreneur.hashnode.dev/two-agency-users-were-83-of-my-revenue-they-left-and-i-noticed-29-days-later&lt;/p&gt;

&lt;h2&gt;
  
  
  Three actions if you use this actor
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Run any planned bulk batches before May 5 to lock in current pricing on the events you need.&lt;/li&gt;
&lt;li&gt;Update your &lt;code&gt;maxTotalChargeUsd&lt;/code&gt; cap if you have one. The same volume after May 5 needs roughly 3x the cap on short-profile-heavy runs.&lt;/li&gt;
&lt;li&gt;If your pipeline expects verified output and you cannot wait, switch to &lt;code&gt;acceptDiscoveryFallback&lt;/code&gt; now to keep getting results at current pricing, with the confidence=low caveat.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What does not change
&lt;/h2&gt;

&lt;p&gt;The actor itself. Same code, same maintenance, same response time. Same author who reads every Discussion thread.&lt;/p&gt;

&lt;p&gt;The May 5 change brings pricing in line with what residential verification costs. After it lands, verification turns back on automatically and your runs return verified worksFor data again at the new rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  When this is the wrong actor for you
&lt;/h2&gt;

&lt;p&gt;If you only need the candidate set and not LinkedIn-verified worksFor data, the SERP-only path at $0.003 stays cheaper than competitors that bundle verification. If you need fully-verified output and your spreadsheet math says $0.009 per profile is too much, my actor is not the right tool. Phantombuster bundles verification at $69/mo for low volume. Bright Data is more expensive but enterprise-grade. Apify itself has 50+ LinkedIn actors, some priced lower for lower-quality verification.&lt;/p&gt;

&lt;p&gt;I run mine at $0.009 because the residential proxy cost behind verification is real. Lower-priced competitors either skip verification, scrape with logged-in cookies (ToS risk), or run at a loss until they shut down (Proxycurl pattern).&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;Run your batches before May 5 if you can. Drop questions on the Apify Discussion thread for the actor if anything is unclear about the change.&lt;/p&gt;

&lt;p&gt;apify.com/george.the.developer/linkedin-company-employees-scraper&lt;/p&gt;

</description>
      <category>apify</category>
      <category>pricing</category>
      <category>scraping</category>
      <category>saas</category>
    </item>
    <item>
      <title>6 Apify actors I actually use myself</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Tue, 28 Apr 2026 19:51:26 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/6-apify-actors-i-actually-use-myself-1n06</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/6-apify-actors-i-actually-use-myself-1n06</guid>
      <description>&lt;p&gt;I have 27 public Apify actors. Most are good. Six are genuinely useful and I run them through n8n and curl on a weekly basis. This is a tour of those six, with the actual prices, sample inputs, and the use cases I built them for.&lt;/p&gt;

&lt;p&gt;If you found me through the silent-churn postmortem from earlier this week, this is the followup people kept asking for: "what else do you have?"&lt;/p&gt;

&lt;h2&gt;
  
  
  1. LinkedIn Company Employees Scraper
&lt;/h2&gt;

&lt;p&gt;apify.com/george.the.developer/linkedin-company-employees-scraper&lt;/p&gt;

&lt;p&gt;The one most people find me through. Takes a LinkedIn company URL, returns the top N employees who match your title filter. It verifies via a JA4-accurate TLS fetch on a self-hosted Go service, so it does not need login cookies and does not get flagged as a bot.&lt;/p&gt;

&lt;p&gt;Sample input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"companies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"https://www.linkedin.com/company/stripe"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"maxEmployees"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"targetTitles"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CEO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CTO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Head of Engineering"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pay per event. $0.005 actor-start, $0.003 per short profile. Hike scheduled for May 5 to $0.009 per short profile to cover residential proxy cost properly.&lt;/p&gt;

&lt;p&gt;Use it for: lead lists, sales prospecting, recruiter sourcing.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Email Validator API
&lt;/h2&gt;

&lt;p&gt;apify.com/george.the.developer/email-validator-api&lt;/p&gt;

&lt;p&gt;Standby HTTP API with sub-second response. Runs syntax, MX, disposable, role-based, and SMTP handshake checks. Pay per event at $0.002 per email: run a list of 50,000 emails and pay $100, not the $375 NeverBounce charges.&lt;/p&gt;

&lt;p&gt;Sample call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s1"&gt;'https://george-the-developer--email-validator-api.apify.actor/validate?email=test@stripe.com'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use it for: pre-flight on cold email lists, signup form fraud filtering, list cleanup before importing to a CRM.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Domain WHOIS Lookup
&lt;/h2&gt;

&lt;p&gt;apify.com/george.the.developer/domain-whois-lookup&lt;/p&gt;

&lt;p&gt;Standby API. Returns registrar, age in days, expiry, DNS records. Falls back to RDAP since 374 gTLDs sunsetted port 43 WHOIS in early 2025. $0.005 per lookup.&lt;/p&gt;

&lt;p&gt;Sample call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s1"&gt;'https://george-the-developer--domain-whois-lookup.apify.actor/lookup?domain=stripe.com'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use it for: lead-scoring (domain age is a real signal), security tooling, brand monitoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Company Enrichment API
&lt;/h2&gt;

&lt;p&gt;apify.com/george.the.developer/company-enrichment-api&lt;/p&gt;

&lt;p&gt;Domain in, company name + industry + tech stack signals out. Sub-second standby response. $0.01 per call.&lt;/p&gt;

&lt;p&gt;Use it for: enrichment step in a lead-gen pipeline, ICP scoring, account research.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. URL Metadata Extractor
&lt;/h2&gt;

&lt;p&gt;apify.com/george.the.developer/url-metadata-extractor&lt;/p&gt;

&lt;p&gt;OG tags, Twitter cards, favicon, canonical URL, structured data. Anything an AI agent needs to actually understand a page without parsing the full DOM. $0.003 per URL.&lt;/p&gt;

&lt;p&gt;Use it for: content tools that show link previews, AI agents that need to summarize before they read, dashboards that aggregate links.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. AI Content Detector
&lt;/h2&gt;

&lt;p&gt;apify.com/george.the.developer/ai-content-detector&lt;/p&gt;

&lt;p&gt;Text in, AI-probability score out. Uses an LLM-based classifier behind the scenes, not a regex. $0.003 per text.&lt;/p&gt;

&lt;p&gt;Use it for: content moderation pipelines, marketplace listings filtering, dataset cleanup before training.&lt;/p&gt;

&lt;h2&gt;
  
  
  How they fit together
&lt;/h2&gt;

&lt;p&gt;A typical pipeline I see in customer logs looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;LinkedIn Company Employees Scraper finds candidates at a target company&lt;/li&gt;
&lt;li&gt;Each candidate's email gets cleaned through Email Validator&lt;/li&gt;
&lt;li&gt;Their company domain runs through WHOIS Lookup for age signal&lt;/li&gt;
&lt;li&gt;Domain runs through Company Enrichment for industry + tech stack&lt;/li&gt;
&lt;li&gt;Output goes into HubSpot or Pipedrive&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step is pay per event. If you only need 3 of the 5 steps, you only pay for those events. There is no per-seat or per-month surcharge.&lt;/p&gt;
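&lt;p&gt;Steps 2 through 4 can be sketched as one enrichment function. This is illustrative glue, not shipped code: call(actorName, input) stands in for one standby request per actor, and the response fields shown are assumptions, not the actors' documented output.&lt;/p&gt;

```javascript
// Hedged sketch of pipeline steps 2-4. Each `call` is one billed event.
async function enrichLead(lead, call) {
    const email = await call('email-validator', { email: lead.email });
    if (!email.valid) return null; // drop bounced addresses before spending more events
    const whois = await call('domain-whois-lookup', { domain: lead.domain });
    const firmo = await call('company-enrichment-api', { domain: lead.domain });
    return { ...lead, domainAgeYears: whois.ageYears, industry: firmo.industry };
}
```

&lt;p&gt;The early return is the pay per event point in miniature: a lead that fails validation costs exactly one event, not three.&lt;/p&gt;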

&lt;h2&gt;
  
  
  Honest pricing context
&lt;/h2&gt;

&lt;p&gt;I just shipped a billing-guard fix for the LinkedIn actor that prevents it from emitting profiles when the charge would exceed your maxTotalChargeUsd cap. Postmortem on that lives here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://theaientrepreneur.hashnode.dev/why-my-linkedin-scraper-now-refuses-jobs" rel="noopener noreferrer"&gt;theaientrepreneur.hashnode.dev/why-my-linkedin-scraper-now-refuses-jobs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Same gate is rolling out across the standby APIs over the next two weeks. If you used any of these before the fix and your output count did not match your billed events, ping me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to start
&lt;/h2&gt;

&lt;p&gt;If you do lead generation, start with LinkedIn + Email Validator. That is the chain that produces revenue for users I see in the logs.&lt;/p&gt;

&lt;p&gt;If you do content moderation or AI agent work, start with URL Metadata Extractor + AI Content Detector. They both return clean JSON in well under a second.&lt;/p&gt;

&lt;p&gt;If you do security or domain research, start with WHOIS + Company Enrichment.&lt;/p&gt;

&lt;p&gt;All six work standalone. All six bill per event, not per seat. None require API key contortions, just an Apify token and a curl.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apify.com/george.the.developer" rel="noopener noreferrer"&gt;apify.com/george.the.developer&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apify</category>
      <category>scraping</category>
      <category>indiehackers</category>
      <category>saas</category>
    </item>
    <item>
      <title>What I shipped after the $540 silent churn postmortem</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Sat, 25 Apr 2026 08:24:18 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/what-i-shipped-after-the-540-silent-churn-postmortem-hmk</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/what-i-shipped-after-the-540-silent-churn-postmortem-hmk</guid>
      <description>&lt;p&gt;Yesterday I posted a postmortem on losing $540 a month to silent user churn. Some folks asked what the actual fix was. This is that post. Less drama, more code, three concrete patches that went live today.&lt;/p&gt;

&lt;p&gt;If you missed yesterday: &lt;a href="https://theaientrepreneur.hashnode.dev/two-agency-users-were-83-of-my-revenue-they-left-and-i-noticed-29-days-later" rel="noopener noreferrer"&gt;https://theaientrepreneur.hashnode.dev/two-agency-users-were-83-of-my-revenue-they-left-and-i-noticed-29-days-later&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I started digging into why my LinkedIn employee scraper was bleeding compute on real user runs, I found it was not one bug. It was three, layered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug one: push first, charge second
&lt;/h2&gt;

&lt;p&gt;Every Apify pay per event tutorial shows you Actor.charge({ eventName: 'event-name', count: 1 }). Easy. What none of them stress is what happens when the charge call fails.&lt;/p&gt;

&lt;p&gt;There are at least three live failure modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user set maxTotalChargeUsd on the run. They hit it. Charge returns chargedCount: 0.&lt;/li&gt;
&lt;li&gt;Apify itself returns eventChargeLimitReached: true mid run.&lt;/li&gt;
&lt;li&gt;The platform throws a transient error your try/catch swallows.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My actor's loop was structured like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pushData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`charge failed: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// the loop keeps going regardless&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push first, charge second, swallow errors, keep looping. So if charge stopped working halfway through a 100 profile run, the actor cheerfully output the remaining 50 for free while still spending real proxy and SERP money. The user got 100 profiles. I got billed for 50.&lt;/p&gt;

&lt;p&gt;That is exactly the kind of leak you only notice when you stare at a per run cost graph and wonder why your revenue line is growing slower than your cost line.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fix: a charge gate that fails closed
&lt;/h2&gt;

&lt;p&gt;The new code calls a small helper before every emit. If charge fails for any reason, the gate refuses every subsequent call without even trying.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createProfileChargeGate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;isPPE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;actorCharge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;chargeLimitReached&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;hasChargeLimitReached&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;chargeLimitReached&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;chargeForNextProfile&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;isPPE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;not-ppe&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chargeLimitReached&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;charge-limit-reached&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

            &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;actorCharge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
                &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;eventChargeLimitReached&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nx"&gt;chargeLimitReached&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;charge-limit-reached&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;charged&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;chargedCount&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;charged&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nx"&gt;chargeLimitReached&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;not-charged&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;totalCharges&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;totalCharges&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;charged&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;chargeLimitReached&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;charged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;charge-error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main loop now does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createProfileChargeGate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;isPPE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;actorCharge&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;profile&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;profiles&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;verdict&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;gate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chargeForNextProfile&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;verdict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;canEmit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Stopping at &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;totalCharges&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; profiles, gate refused: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;verdict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pushData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the gate has refused, every call short circuits without trying to charge again. The run wraps up gracefully instead of bleeding compute on uncharged output.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug two: jobs that should never start
&lt;/h2&gt;

&lt;p&gt;The second class of bug is the job that should not have run at all. A user sets companyCount 200, targetTitles 30, maxEmployees 1, hits Run, and watches my actor burn proxy and verification cost while emitting almost nothing.&lt;/p&gt;

&lt;p&gt;The math is approachable. Per company you do roughly basePages + targetTitleCount SERP requests at about $0.0025 each, plus a verification attempt budget at about $0.0004 each. Per profile emitted you collect actorStartPriceUsd + shortProfilePriceUsd, then Apify takes 20% platform share off the top.&lt;/p&gt;
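&lt;p&gt;Plugging the pathological input above into that math (with an illustrative three base pages per company, not the shipped constant):&lt;/p&gt;

```javascript
// Worked example: 200 companies, 30 target titles, maxEmployees 1.
// basePages = 3 is an illustrative assumption.
const serpRequests = 200 * (3 + 30);           // 6,600 SERP requests
const platformCostUsd = serpRequests * 0.0025; // $16.50 before verification cost
const profilesEmitted = 200 * 1;               // at most 200 profiles to bill for
```

&lt;p&gt;Over $16 of platform spend chasing at most 200 of the cheapest events is exactly the kind of run the preflight estimator exists to refuse.&lt;/p&gt;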

&lt;p&gt;So a preflight estimator can compute estimatedPlatformCostUsd and estimatedCreatorRevenueUsd before any compute happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;buildMarginPreflightEstimate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;companyCount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetTitleCount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;maxEmployees&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verifyEnabled&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;actorStartPriceUsd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;shortProfilePriceUsd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;creatorRevenueShare&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;serpCostUsd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verificationAttemptCostUsd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;maxCostToCreatorRevenueRatio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;basePages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getBaseSerpPagesPerCompany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;maxEmployees&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;verifyEnabled&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;serpRequests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;companyCount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;basePages&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;targetTitleCount&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;verificationAttempts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;verifyEnabled&lt;/span&gt;
        &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;companyCount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;getInitialVerificationCandidateLimit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;companyCount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;maxEmployees&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;estimatedProfiles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;companyCount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;maxEmployees&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;estimatedPlatformCostUsd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;serpRequests&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;serpCostUsd&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;verificationAttempts&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;verificationAttemptCostUsd&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;estimatedCreatorRevenueUsd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;actorStartPriceUsd&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;estimatedProfiles&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;shortProfilePriceUsd&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;creatorRevenueShare&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ratio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;estimatedCreatorRevenueUsd&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;estimatedPlatformCostUsd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;estimatedCreatorRevenueUsd&lt;/span&gt;
        &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;Infinity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;exceedsMarginBudget&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ratio&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;maxCostToCreatorRevenueRatio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;estimatedPlatformCostUsd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;estimatedCreatorRevenueUsd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ratio&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;exceedsMarginBudget&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;exceedsMarginBudget&lt;/span&gt;
            &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="s2"&gt;`Not profitable enough for verified mode. Reduce companies, reduce targetTitles, or increase maxEmployees per company.`&lt;/span&gt;
            &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In main, before any real work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;estimate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exceedsMarginBudget&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Input rejected before run start: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;estimate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;. No PPE events will be charged.`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The throw happens before Actor.charge has been called once. The user gets a clear refusal at submit time and pays nothing. They can resubmit with parameters that actually make sense.&lt;/p&gt;

&lt;p&gt;The estimator tests confirm it accepts normal small runs (2 companies, 25 employees each, verified) and rejects the title-heavy 1-employee runs that were the worst offenders.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug three: a default that was too generous
&lt;/h2&gt;

&lt;p&gt;The third change is product taste. Default maxEmployees was 100. That is too many for verified scraping with current LinkedIn block rates. Most users wanted 10 to 20 anyway and just left the default. The new default is 25.&lt;/p&gt;

&lt;p&gt;If you really want 100 verified profiles per company, type 100 explicitly. Opting into the big run now costs a few keystrokes instead of happening by default.&lt;/p&gt;

&lt;p&gt;Small change, real impact. Yesterday the new default kept at least one user from submitting a job the margin preflight would otherwise have refused.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means if you use the actor
&lt;/h2&gt;

&lt;p&gt;Three things you will notice in the build that went live today:&lt;/p&gt;

&lt;p&gt;You will not get billed for partial runs where charge stopped working. Either the run completes and you pay for everything you got, or it stops the moment charging stops and you pay nothing past that point. Billed events and delivered output now always match.&lt;/p&gt;

&lt;p&gt;You will get rejected at submit time if your input is structurally unprofitable. The error message tells you exactly which knob to turn.&lt;/p&gt;

&lt;p&gt;You will be defaulted into a smaller, faster run. Big jobs are still possible, you just have to opt in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I am doing next
&lt;/h2&gt;

&lt;p&gt;The same three patterns apply to most of my PPE actors. The charge gate is already a module. I will be rolling it across the rest of the portfolio over the next week:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI Content Detector&lt;/li&gt;
&lt;li&gt;Email Validator API&lt;/li&gt;
&lt;li&gt;URL Metadata Extractor&lt;/li&gt;
&lt;li&gt;Domain WHOIS Lookup&lt;/li&gt;
&lt;li&gt;Company Enrichment API&lt;/li&gt;
&lt;li&gt;Website Intelligence API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These already shipped a fix yesterday for a different leak (GPT Store action pings hitting the standby actor with test payloads). The billing gate is the next layer.&lt;/p&gt;

&lt;p&gt;If you build pay per event actors on Apify, take an hour and add a similar gate. The savings show up immediately. Your users start trusting your billing numbers because the numbers actually match the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try the actor
&lt;/h2&gt;

&lt;p&gt;The fixes are live as of today's build:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apify.com/george.the.developer/linkedin-company-employees-scraper" rel="noopener noreferrer"&gt;https://apify.com/george.the.developer/linkedin-company-employees-scraper&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pass companies as a list of LinkedIn URLs, set maxEmployees explicitly if you want more than 25, and watch the run console. The new guards should make the cost line predictable for the first time since I shipped this thing.&lt;/p&gt;
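&lt;p&gt;A minimal input looks something like this. The field names are the ones mentioned in this post; the store page documents the full schema, and the company URL here is just a placeholder:&lt;/p&gt;

```json
{
  "companies": ["https://www.linkedin.com/company/example"],
  "maxEmployees": 50,
  "maxTotalChargeUsd": 5
}
```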

&lt;p&gt;Yesterday was the diagnosis. Today is the fix. Tomorrow, I find out if anyone other than me actually cares.&lt;/p&gt;

</description>
      <category>apify</category>
      <category>postmortem</category>
      <category>billing</category>
      <category>saas</category>
    </item>
    <item>
      <title>How I lost $540/month in 30 days to silent user churn (and didn't notice)</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Fri, 24 Apr 2026 04:34:05 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/how-i-lost-540month-in-30-days-to-silent-user-churn-and-didnt-notice-4m5c</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/how-i-lost-540month-in-30-days-to-silent-user-churn-and-didnt-notice-4m5c</guid>
      <description>&lt;p&gt;Last week my 30 day profit dropped from $268 to $50 and I assumed it was a bug in the dashboard.&lt;/p&gt;

&lt;p&gt;It wasn't a bug. It was me not paying attention for 29 days straight.&lt;/p&gt;

&lt;p&gt;This is a postmortem of how I lost roughly $540/month from two agency buyers who just quietly stopped running my actor on March 25. I found out on April 23. That's 29 days of ambient denial while every other part of my portfolio was also quietly rotting.&lt;/p&gt;

&lt;p&gt;Writing this partly so I remember the lesson, partly because I know at least three other Apify devs are about to make the same mistake.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;p&gt;I run a bunch of scrapers and APIs on Apify. Here's what the revenue split actually looked like in early April before things went sideways.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Actor&lt;/th&gt;
&lt;th&gt;Users&lt;/th&gt;
&lt;th&gt;Monthly revenue&lt;/th&gt;
&lt;th&gt;% of total&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Google Maps Lead Intel&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;$540&lt;/td&gt;
&lt;td&gt;83%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LinkedIn Employee Scraper&lt;/td&gt;
&lt;td&gt;37&lt;/td&gt;
&lt;td&gt;$42&lt;/td&gt;
&lt;td&gt;6.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;YouTube Transcript&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;td&gt;$28&lt;/td&gt;
&lt;td&gt;4.3%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Scholar&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;$14&lt;/td&gt;
&lt;td&gt;2.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Email Validator API&lt;/td&gt;
&lt;td&gt;46&lt;/td&gt;
&lt;td&gt;$11&lt;/td&gt;
&lt;td&gt;1.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Website Intelligence API&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;$8&lt;/td&gt;
&lt;td&gt;1.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Everything else (5 actors)&lt;/td&gt;
&lt;td&gt;188&lt;/td&gt;
&lt;td&gt;$7&lt;/td&gt;
&lt;td&gt;1.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;353&lt;/td&gt;
&lt;td&gt;$650&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Read that first row again. Two users. $540. Roughly five times what the other 351 users paid combined.&lt;/p&gt;

&lt;p&gt;I told myself this was fine because the product was working and the buyers were happy. Both things were true in early March. Neither was true by late March. I just didn't know.&lt;/p&gt;

&lt;h2&gt;
  
  
  The silence
&lt;/h2&gt;

&lt;p&gt;March 25 was the last run either agency executed. I didn't flag it because nothing explicit broke. No error email, no angry message, no refund request. My Apify dashboard just showed fewer runs but the rolling 30 day number still looked okay because it was still averaging in the fat weeks from before.&lt;/p&gt;

&lt;p&gt;Here's roughly how the number moved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Early April: $268 rolling 30d profit. I'm feeling smug.&lt;/li&gt;
&lt;li&gt;Mid April: $92. I figure maybe it's a slow week.&lt;/li&gt;
&lt;li&gt;April 22: $50. I finally open the run logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the time I looked, the last run from either agency was 29 days ago. Whatever issue they had (I still don't fully know), they decided it wasn't worth telling me about. They just left.&lt;/p&gt;

&lt;p&gt;Agency users don't complain. They just stop paying. If you're building for them, burn that into your forehead.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I found when I actually looked
&lt;/h2&gt;

&lt;p&gt;This is the part that made me feel physically ill. Once I started doing a proper audit of my portfolio, I found that 6 of my other 10 monetized actors were broken in some way. Not all catastrophic. Some were returning partial data. One had a silently failing selector from a site redesign in February. One was charging $0 per run because of a broken &lt;code&gt;Actor.charge()&lt;/code&gt; signature I'd introduced in a refactor.&lt;/p&gt;

&lt;p&gt;Let me repeat that: I had actors that were executing successfully, returning data to users, and billing them exactly nothing. For weeks.&lt;/p&gt;

&lt;p&gt;If one of those 2 agencies had tried a second actor of mine during that period, they'd have gotten rot. That's probably why they didn't come back.&lt;/p&gt;

&lt;p&gt;The root cause wasn't the bugs though. Bugs happen. The root cause was that my attention was entirely on the $25/run cash cow because it was paying the bills. The cash cow was hiding the state of the herd.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why 2 agency users paid more than 353 devs
&lt;/h2&gt;

&lt;p&gt;I want to sit with this one because I think most solo devs misunderstand it.&lt;/p&gt;

&lt;p&gt;My Google Maps Lead Intel actor charges $25 per successful run. It scrapes a geography, enriches each business with a website audit, scores them, and hands the agency a ranked list of cold outreach targets. One agency was running it on a schedule against 40 US cities per week. At $25 a pop, that's serious money.&lt;/p&gt;

&lt;p&gt;The 353 devs on my other actors were paying $0.003 to $0.01 per row. They're hobbyists, students, one guy building a thesis scraper. They're lovely. They're also economically irrelevant to whether I can pay rent.&lt;/p&gt;

&lt;p&gt;Two lessons fell out of this.&lt;/p&gt;

&lt;p&gt;First, your paying users and your popular users are almost never the same people. Popularity on Apify Store is a vanity metric. Agency retention is the only metric that buys groceries.&lt;/p&gt;

&lt;p&gt;Second, concentrated revenue is fragile in ways that only hurt you once. When I had 2 whales, my revenue was 83% dependent on their mood. The moment either whale left, my month was destroyed. Worse, because they paid so much, I built no alerting around them. I assumed I'd notice. I did not notice.&lt;/p&gt;

&lt;p&gt;If you're reading this as an agency owner looking for scraping tools, what you actually want is a vendor who is not dependent on you. Someone with 50 paying clients will answer your ticket faster than someone with 2, because the guy with 2 is terrified of you and therefore weirdly slow to respond to bad news.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm changing
&lt;/h2&gt;

&lt;p&gt;Not writing a manifesto. Just the four things I'm doing this week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Push alerting on every run.&lt;/strong&gt; Apify has webhooks. I never wired them up because my dashboard was enough. It wasn't. Every successful run, every failed run, every billing event now pings a private Telegram channel I actually read. Here's the whole snippet; it's embarrassingly short:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// webhooks config in actor.json points at this endpoint&lt;/span&gt;
&lt;span class="c1"&gt;// payload is whatever Apify sends plus the run metadata&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;notify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;eventType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;eventData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`[&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;eventType&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;] &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;actId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; run &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\n`&lt;/span&gt;
               &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;`status: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\n`&lt;/span&gt;
               &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;`charged: $&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;usageTotalUsd&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://api.telegram.org/bot&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;TG_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/sendMessage`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;chat_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TG_CHAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. If I'd had this on March 25, I'd have noticed the absence of runs within 48 hours, not 29 days. If you run anything that bills users, stop reading and go wire this up. Seriously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weekly self test of every monetized actor.&lt;/strong&gt; Every Sunday I run each of my paid actors against a known input and diff the output against last week's. If the schema changes or the row count collapses, I know before the user does. This is stupid simple and I should have been doing it from day one.&lt;/p&gt;
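
&lt;p&gt;The diff itself is a few lines. Here's a sketch of the check I mean, assuming each week's snapshot is stored as a row count plus the set of schema keys; the 50% collapse threshold is an arbitrary choice, pick your own:&lt;/p&gt;

```javascript
// Compare this week's actor output against last week's snapshot.
// A snapshot is { rowCount, schemaKeys }. Returns a list of alerts;
// an empty list means the actor still looks healthy.
function compareSnapshots(prev, curr) {
  const alerts = [];
  // Row count collapsed to under half of last week's run.
  if (prev.rowCount > curr.rowCount * 2) {
    alerts.push(`row count collapsed: ${prev.rowCount} to ${curr.rowCount}`);
  }
  // Any schema key that disappeared means a selector probably broke.
  const missing = prev.schemaKeys.filter((k) => !curr.schemaKeys.includes(k));
  if (missing.length > 0) {
    alerts.push(`missing fields: ${missing.join(', ')}`);
  }
  return alerts;
}
```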

&lt;p&gt;&lt;strong&gt;Diversifying the buyer pool.&lt;/strong&gt; $25/run is staying. But I'm actively building out the $5 to $10 tier with two new actors targeted at small agencies, because I want the bottom of the revenue chart to be less wobbly. Ten $50/month buyers survive any one of them leaving. Two $270/month buyers don't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public status page.&lt;/strong&gt; Still building this but the idea is: if an actor is degraded, the user knows before they hit it. Trust compounds and I just burned a chunk of it, so I'm paying interest now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Close
&lt;/h2&gt;

&lt;p&gt;If you run any product that bills users on autopilot, go wire a Telegram or Slack webhook today. Not tomorrow. The 30 day rolling dashboard lies to you when things trend down because it's still averaging in good weeks. Push alerts don't lie. Runs either happen or they don't.&lt;/p&gt;

&lt;p&gt;I'm writing this mostly for me. But if you want to see what the portfolio looks like now, or hire the actor that caused all this drama in the first place, it's here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apify profile: &lt;a href="https://apify.com/george.the.developer" rel="noopener noreferrer"&gt;https://apify.com/george.the.developer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Google Maps Lead Intel (the $25/run one): on the same profile&lt;/li&gt;
&lt;li&gt;Everything else I've shipped (27 public actors, some broken last week, all fixed now): same profile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ask me anything in the comments. Especially if you're an agency buyer thinking about pulling the trigger on a vendor. I have thoughts about what you should actually be looking for.&lt;/p&gt;

</description>
      <category>apify</category>
      <category>postmortem</category>
      <category>webscraping</category>
      <category>saas</category>
    </item>
    <item>
      <title>Two APIs I Built This Week That Cost Nothing to Run</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:50:41 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/two-apis-i-built-this-week-that-cost-nothing-to-run-g3e</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/two-apis-i-built-this-week-that-cost-nothing-to-run-g3e</guid>
      <description>&lt;p&gt;Most APIs have a dirty secret in their pricing: the upstream service they call costs money, and that cost gets passed to you plus margin. LLM based APIs charge you for tokens. Geocoding APIs charge you for lookups. Data enrichment APIs charge you for the enrichment source.&lt;/p&gt;

&lt;p&gt;I wanted to build APIs where the underlying operation costs literally zero. Here are two I shipped this week.&lt;/p&gt;

&lt;h2&gt;
  
  
  API 1: DNS Record Checker
&lt;/h2&gt;

&lt;p&gt;Node.js ships with a built in &lt;code&gt;dns&lt;/code&gt; module. It can resolve A records, MX records, CNAME, TXT, NS, and more. No external API call needed. No third party service. The DNS resolution happens through the operating system's resolver, which is free.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;dns&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dns/promises&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;dns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolveAny&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Returns A, AAAA, MX, TXT, NS, SOA records&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Zero dependency, zero API cost, zero rate limits from upstream providers.&lt;/p&gt;

&lt;p&gt;The actor wraps this into a clean JSON API. Pass it a domain, get back every DNS record type with TTLs, priorities for MX records, and SPF/DKIM/DMARC validation. The whole thing runs on Apify's Standby infrastructure so it responds in under a second.&lt;/p&gt;
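
&lt;p&gt;The SPF half of that validation is just a pattern check over TXT records. A minimal sketch (the helper name is mine; Node's &lt;code&gt;dns.resolveTxt&lt;/code&gt; returns each record split into an array of string chunks, which is why the join is needed):&lt;/p&gt;

```javascript
// Pull the SPF policy out of a domain's TXT records.
// txtRecords has the shape dns.resolveTxt returns: string[][],
// each record split into chunks that must be joined before matching.
function findSpf(txtRecords) {
  const flat = txtRecords.map((chunks) => chunks.join(''));
  // SPF lives in a TXT record starting with "v=spf1".
  return flat.find((record) => record.startsWith('v=spf1')) || null;
}
```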

&lt;p&gt;Use cases that keep coming up: automated domain verification for SaaS onboarding, email deliverability checks (MX + SPF + DKIM in one call), security audits scanning for misconfigured DNS, and monitoring tools that alert when records change unexpectedly.&lt;/p&gt;

&lt;h2&gt;
  
  
  API 2: Sentiment Analysis
&lt;/h2&gt;

&lt;p&gt;The common approach to sentiment analysis is sending text to an LLM and paying per token. That works but it's expensive at scale and adds latency.&lt;/p&gt;

&lt;p&gt;Instead I used a word level lexicon approach. The API scores text using a pre built dictionary of ~7,000 words with known sentiment values. No LLM call. No external API. The scoring runs entirely in memory on the Node.js process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Simplified version of the scoring logic&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;words&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;word&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lexicon&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;word&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;words&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result includes an overall sentiment score, confidence level, and breakdown of positive vs negative word matches. It handles negation ("not good" scores negative) and intensifiers ("very good" scores higher than "good").&lt;/p&gt;
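
&lt;p&gt;Roughly how the negation and intensifier handling works; the lexicon entries and weights below are illustrative stand-ins, and the real actor also normalizes the final score:&lt;/p&gt;

```javascript
// Toy lexicon scorer with negation and intensifiers. A modifier of -1
// flips the sign of the following word; a positive modifier scales it.
const lexicon = { good: 3, great: 4, bad: -3, terrible: -4 };
const modifiers = { not: -1, very: 1.5, really: 1.5 };

function scoreText(text) {
  const words = text.toLowerCase().split(/\s+/);
  let sum = 0;
  words.forEach((word, i) => {
    const base = lexicon[word] || 0;
    if (base === 0) return; // word carries no sentiment
    const mod = modifiers[words[i - 1]];
    if (mod === -1) {
      sum -= base;         // "not good" flips to negative
    } else if (mod) {
      sum += base * mod;   // "very good" scores higher than "good"
    } else {
      sum += base;
    }
  });
  return sum;
}
```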

&lt;p&gt;Is it as nuanced as GPT? No. But for brand monitoring, review analysis, social media tracking, and content moderation at scale, a deterministic lexicon approach that returns in 50ms beats a 2 second LLM call that costs 10x more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern worth noticing
&lt;/h2&gt;

&lt;p&gt;Both of these APIs follow the same principle: use what's already built into the runtime or ship a static dataset with the code. No external dependencies that cost money per call.&lt;/p&gt;

&lt;p&gt;This matters because of what I've seen with my existing domain tools. The WHOIS Lookup actor averages 262 lookups per user. Domain and DNS tools get embedded in automated workflows and run at high volume. When your per call cost is zero, your margin stays healthy no matter how hard a single user hammers the API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;DNS Record Checker: $0.003 per lookup. Sentiment Analysis: $0.003 per text analysis. Both running on Apify Standby mode for instant responses.&lt;/p&gt;

&lt;p&gt;The infrastructure cost is just Apify compute time. No upstream API bills eating into revenue.&lt;/p&gt;

&lt;p&gt;Try them on Apify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://apify.com/george.the.developer/dns-record-checker" rel="noopener noreferrer"&gt;DNS Record Checker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apify.com/george.the.developer/sentiment-analysis-api" rel="noopener noreferrer"&gt;Sentiment Analysis API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Built in Nairobi. 52 actors, zero external API costs on these two. Comments and questions welcome.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
    </item>
    <item>
      <title>2 Users Pay Me More Than 353 Users: The Pricing Lesson That Changed Everything</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:49:30 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/2-users-pay-me-more-than-353-users-the-pricing-lesson-that-changed-everything-4pf8</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/2-users-pay-me-more-than-353-users-the-pricing-lesson-that-changed-everything-4pf8</guid>
      <description>&lt;p&gt;I have 48 actors running on Apify. Same platform, same developer, same tech stack. Two of those actors tell completely different stories about how software makes money.&lt;/p&gt;

&lt;p&gt;My LinkedIn Employee Scraper has 353 users. It runs thousands of times per month. It charges $0.005 per profile scraped. Total monthly revenue from all those users and all those runs? About $9.&lt;/p&gt;

&lt;p&gt;My Google Maps Lead Intel actor has 2 users. Two. They run it about 22 times per month between them, paying roughly $25 per run. Monthly revenue? Around $540.&lt;/p&gt;

&lt;p&gt;That is a 60x difference in revenue from a user base less than 1% the size. Same platform. Same developer. Same billing system.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes Google Maps Worth $25 a Run
&lt;/h2&gt;

&lt;p&gt;The LinkedIn scraper returns raw data. Names, titles, company info. It does one thing and does it well, but developers treat it like a commodity. They plug it into their own pipelines and expect it to cost almost nothing. At $0.005 per profile, it basically does.&lt;/p&gt;

&lt;p&gt;Google Maps Lead Intel returns something different. For every business it finds, you get validated email addresses, a lead score based on 12 online presence signals, Google Ads detection, website tech stack analysis, social media profiles, and review sentiment. It is not scraping. It is intelligence.&lt;/p&gt;

&lt;p&gt;The two users paying $25 per run are lead generation agencies. One services appointment setting clients across 15 metro areas. The other runs local SEO audits. For both of them, a single $25 run replaces 3 to 4 hours of manual research that would cost $200+ if done by a VA.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Buyer Problem
&lt;/h2&gt;

&lt;p&gt;Here is what I missed for months: the LinkedIn scraper attracts developers. Developers are price sensitive. They can build their own scraper given enough time, so they benchmark your tool against their hourly rate. If your scraper costs more than 20 minutes of their time to build, they will build it themselves.&lt;/p&gt;

&lt;p&gt;The Google Maps actor attracts agencies. Agency buyers think in terms of client value, not engineering time. If their client pays $1,500/month for lead gen services and your tool costs $25 per market, that is a rounding error in their margin. They do not negotiate. They do not churn. They run it more as they sign more clients.&lt;/p&gt;

&lt;p&gt;Same platform. Totally different buyer psychology.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Changed
&lt;/h2&gt;

&lt;p&gt;The technical shift was not dramatic. I stopped returning raw JSON blobs and started returning enriched, scored, validated output. Specifically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Raw Google Maps results became leads with quality scores&lt;/li&gt;
&lt;li&gt;Guessed emails became validated emails with deliverability checks&lt;/li&gt;
&lt;li&gt;Basic business info became competitive intelligence with ad spend signals&lt;/li&gt;
&lt;li&gt;Flat data became actionable reports that agencies could forward to clients&lt;/li&gt;
&lt;/ol&gt;
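
&lt;p&gt;Point 1 in code terms is just a weighted rollup. A toy version, with made-up signal names and weights standing in for the actual 12 signals:&lt;/p&gt;

```javascript
// Toy lead scorer: weighted online presence signals roll up to 0-100.
// Signal names and weights are illustrative, not the real scoring model.
const SIGNALS = {
  hasWebsite: 20,
  hasSsl: 10,
  runsGoogleAds: 25,
  hasSocialProfiles: 15,
  recentReviews: 30,
};

function leadScore(business) {
  // Sum the weight of every signal the business exhibits.
  return Object.keys(SIGNALS).reduce(
    (sum, key) => (business[key] ? sum + SIGNALS[key] : sum),
    0
  );
}
```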

&lt;p&gt;The pricing shift followed naturally. When your output saves someone 4 hours of work and costs them $25, you are not competing on data volume. You are competing on time saved and decision quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers I Wish I Knew Earlier
&lt;/h2&gt;

&lt;p&gt;353 users at $0.005 per profile = roughly $9/month. Those users submit support tickets, request features, and compare you to 6 other LinkedIn scrapers in the Apify Store.&lt;/p&gt;

&lt;p&gt;2 users at $25/run = roughly $540/month. Those users send you "thank you" messages and ask if you can build them something custom.&lt;/p&gt;

&lt;p&gt;If I could go back and rebuild my portfolio from scratch, I would build fewer tools and make each one solve a complete problem for a specific buyer. Not "scrape this website" but "find me qualified leads in this market with contact info I can trust."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;Stop counting users. Start counting revenue per user. Build for the buyer who measures your tool against the cost of the alternative, not against the cost of building it themselves. Package intelligence, not data.&lt;/p&gt;

&lt;p&gt;The developer who needs 10,000 LinkedIn profiles will always shop on price. The agency owner who needs 200 qualified leads by Friday will pay whatever gets it done.&lt;/p&gt;

&lt;p&gt;I know which buyer I am building for now.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in Nairobi. 48 actors in production. Questions? Drop them below.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>saas</category>
      <category>api</category>
      <category>startup</category>
    </item>
    <item>
      <title>The 5 APIs That Run 200+ Times Per User (And Why That Matters)</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:47:43 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/the-5-apis-that-run-200-times-per-user-and-why-that-matters-48fp</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/the-5-apis-that-run-200-times-per-user-and-why-that-matters-48fp</guid>
      <description>&lt;p&gt;Most developer tools get used a handful of times. Someone finds your API, tries it on a test case, maybe runs it a dozen more times, then moves on. That is the normal pattern. Out of 38 actors I have running on Apify, most average 5 to 20 runs per user. Respectable numbers.&lt;/p&gt;

&lt;p&gt;But five of them break the pattern completely. These five average 100 to 260 runs per user. Not because of better marketing or a viral tweet. Because they solve problems that require bulk processing by design.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;Here is the actual usage data from my Apify dashboard:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;API&lt;/th&gt;
&lt;th&gt;Runs Per User&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Domain WHOIS Lookup&lt;/td&gt;
&lt;td&gt;262&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Scholar Scraper&lt;/td&gt;
&lt;td&gt;230&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Content Detector&lt;/td&gt;
&lt;td&gt;132&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Website Tech Detector&lt;/td&gt;
&lt;td&gt;126&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Email Validator&lt;/td&gt;
&lt;td&gt;105&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Compare that to something like the LinkedIn Employee Scraper, which has 37 users but averages about 17 runs each. LinkedIn users grab the data they need and stop. WHOIS users feed in hundreds of domains every single session.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why These Five?
&lt;/h2&gt;

&lt;p&gt;The common thread is not the subject matter. It is the workflow. Every one of these tools plugs into a process where the user already has a list and needs to process all of it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain WHOIS Lookup (262 runs/user):&lt;/strong&gt; Security researchers and domain investors run this on batches of suspicious domains. When a phishing campaign registers 10,000 domains with similar naming patterns, someone needs registrar data, creation dates, and nameservers for every single one. That is not a one time task. New domains appear daily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Scholar Scraper (230 runs/user):&lt;/strong&gt; Academic researchers doing systematic literature reviews or bibliometric analysis. They need every paper matching a query, with citations, h index scores, and author profiles exported as structured JSON. One research project can require pulling data on thousands of papers across multiple search terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Content Detector (132 runs/user):&lt;/strong&gt; Content moderation teams, academic integrity offices, and publishers who need to scan entire content catalogs. Checking one essay at a time is pointless when you have 500 submissions or 2,000 product descriptions to verify. The bulk API call is the only thing that makes this practical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Website Tech Detector (126 runs/user):&lt;/strong&gt; Sales development teams that need technology intelligence on their entire prospect list. If you are selling a React migration service, you need to know which of your 3,000 target companies still run Angular or jQuery. Feed in the list, get back frameworks, CDNs, analytics tools, CMS platforms in clean JSON.&lt;/p&gt;
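
&lt;p&gt;Under the hood this kind of detection is mostly fingerprint matching against page HTML. A naive sketch; the marker strings are illustrative, not the actor's real ruleset:&lt;/p&gt;

```javascript
// Naive tech fingerprinting: scan page HTML for known marker strings.
// Real detectors also check script URLs, headers, and cookies.
const MARKERS = {
  React: 'data-reactroot',
  Angular: 'ng-version',
  jQuery: 'jquery',
  WordPress: 'wp-content',
};

function detectTech(html) {
  const lower = html.toLowerCase();
  return Object.keys(MARKERS).filter((name) => lower.includes(MARKERS[name]));
}
```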

&lt;p&gt;&lt;strong&gt;Email Validator (105 runs/user):&lt;/strong&gt; Cold outreach operators who clean their lists before every campaign. A 5% bounce rate destroys your sender reputation, so smart operators validate 500 to 5,000 emails before hitting send. They do this before every single campaign, not once.&lt;/p&gt;
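
&lt;p&gt;The cheap part of that cleaning happens before any network check: normalize, dedupe, and drop syntactically broken addresses. A sketch (the regex is a deliberate simplification of real address grammar):&lt;/p&gt;

```javascript
// Local pre-pass before any SMTP or MX checks: normalize case and
// whitespace, drop invalid syntax, and dedupe the list.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function cleanList(emails) {
  const seen = new Set();
  const valid = [];
  for (const raw of emails) {
    const email = raw.trim().toLowerCase();
    if (!EMAIL_RE.test(email)) continue; // syntactically broken
    if (seen.has(email)) continue;       // duplicate
    seen.add(email);
    valid.push(email);
  }
  return valid;
}
```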

&lt;h2&gt;
  
  
  What This Means for Builders
&lt;/h2&gt;

&lt;p&gt;The lesson is simple: if your API solves a problem that people encounter once, you need constant marketing to keep new users flowing in. If your API solves a problem that people encounter in batches, repeatedly, you get sticky users who come back on their own.&lt;/p&gt;

&lt;p&gt;None of these five APIs went viral. None of them got featured in a newsletter. The WHOIS lookup has 7 total users. But those 7 users have collectively run it 1,837 times. That is revenue without marketing spend.&lt;/p&gt;

&lt;p&gt;The best APIs are not the ones with the most users. They are the ones where each user cannot stop running them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try Them
&lt;/h2&gt;

&lt;p&gt;All five are live on the Apify Store under my profile (george.the.developer), priced per call with no monthly subscription. Domain WHOIS at $0.005/lookup, Scholar at $0.004/paper, AI Detector at $0.003/text, Tech Detector at $0.005/site, Email Validator at $0.002/email.&lt;/p&gt;

&lt;p&gt;Built in Nairobi. 38 actors, 700+ users, 14,000+ total runs.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>api</category>
      <category>saas</category>
    </item>
    <item>
      <title>Google Scholar Has No API Either. Here's What 5,000 Runs Taught Me</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:46:16 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/google-scholar-has-no-api-either-heres-what-5000-runs-taught-me-3i44</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/google-scholar-has-no-api-either-heres-what-5000-runs-taught-me-3i44</guid>
      <description>&lt;p&gt;Google Scholar is the single most important search engine for academic research. Billions of papers indexed, citation counts, author profiles, related work links. And Google has never released an official API for it.&lt;/p&gt;

&lt;p&gt;Not deprecated. Not restricted. Just... never built one.&lt;/p&gt;

&lt;p&gt;If you want to programmatically search Google Scholar, grab paper titles, authors, citation counts, and PDF links, you are on your own. So I built an actor that does exactly that.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Pulls
&lt;/h2&gt;

&lt;p&gt;You give it a search query (like "transformer architecture attention mechanism") and it returns structured data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Attention Is All You Need"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"authors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A Vaswani, N Shazeer, N Parmar..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"citationCount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;112847&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"year"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2017"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://arxiv.org/abs/1706.03762"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"pdfUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://arxiv.org/pdf/1706.03762"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"snippet"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The dominant sequence transduction models are based on complex recurrent..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paper titles, author lists, citation counts, publication year, direct links, and PDF URLs when available. Everything a researcher needs to build a literature review or track citations over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers Tell a Story
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. The actor has &lt;strong&gt;22 users&lt;/strong&gt; and &lt;strong&gt;5,065 total runs&lt;/strong&gt;. Do the math on that ratio: 230 runs per user on average.&lt;/p&gt;

&lt;p&gt;These are not casual users clicking "Run" once to test it. These are power users running it at scale. Academics building citation databases. Research firms tracking publication trends across thousands of queries. AI companies monitoring new papers in their domain.&lt;/p&gt;

&lt;p&gt;That run-to-user ratio is the strongest signal I have that this tool solves a real problem. When someone runs your tool 200+ times, they have built it into a workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Scholar Is Hard to Scrape
&lt;/h2&gt;

&lt;p&gt;Google Scholar is notoriously aggressive about blocking automated access. It will throw CAPTCHAs after just a handful of requests from the same IP. Most simple scraping scripts break within minutes.&lt;/p&gt;

&lt;p&gt;The actor handles this with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proxy rotation across residential IPs&lt;/li&gt;
&lt;li&gt;Session management to maintain cookies between requests&lt;/li&gt;
&lt;li&gt;Randomized delays that mimic human browsing patterns&lt;/li&gt;
&lt;li&gt;Automatic retry logic when a request gets blocked&lt;/li&gt;
&lt;/ul&gt;
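
&lt;p&gt;The delay-plus-retry part of that list can be sketched in a few lines. This is an illustrative sketch, not the actor's real code: &lt;code&gt;backoffMs&lt;/code&gt; and &lt;code&gt;fetchWithRetry&lt;/code&gt; are made-up names, and 429/403 are just the statuses a block typically produces.&lt;/p&gt;

```javascript
// Illustrative sketch: randomized delays plus exponential backoff on blocks.
// backoffMs and fetchWithRetry are made-up names, not the actor's real code.

// Base delay doubles per attempt; the random jitter keeps the timing
// from looking machine-regular.
function backoffMs(attempt, baseMs = 1000, jitterMs = 500) {
  return baseMs * 2 ** attempt + Math.random() * jitterMs;
}

async function fetchWithRetry(url, maxRetries = 3) {
  for (let attempt = 0; attempt !== maxRetries + 1; attempt++) {
    const res = await fetch(url);
    // 429/403 is what a block typically looks like; anything else passes through
    if (![429, 403].includes(res.status)) return res;
    await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
  }
  throw new Error(`Still blocked after ${maxRetries + 1} attempts: ${url}`);
}
```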

&lt;p&gt;I also had to deal with Google's inconsistent HTML. Scholar's markup changes subtly over time. Element class names shift, layout structures get tweaked. The parser needs regular maintenance to keep working.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Uses This
&lt;/h2&gt;

&lt;p&gt;Three main groups keep showing up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Academics and PhD students&lt;/strong&gt; building systematic literature reviews. Instead of manually searching and copying results, they run batch queries and get structured data they can feed into reference managers or spreadsheets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Research firms and think tanks&lt;/strong&gt; tracking publication trends. They want to know how many papers mention "large language models" per quarter, or which authors are publishing most frequently in a specific subfield.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI and ML teams&lt;/strong&gt; monitoring state of the art. When a new paper drops with high early citation velocity, they want to know about it fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;The actor is on the Apify Store with pay-per-result pricing ($0.004 per paper): &lt;a href="https://apify.com/george.the.developer/google-scholar-scraper" rel="noopener noreferrer"&gt;Google Scholar Scraper&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have ever copy-pasted results from Google Scholar into a spreadsheet, this will save you hours. And if you are doing it at scale, it will save you from getting IP banned.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in Nairobi by George. 40+ actors, 5,000+ runs on Scholar alone.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>api</category>
      <category>research</category>
    </item>
    <item>
      <title>YouTube Has No Transcript API So I Built One (150 Users Later)</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:44:51 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/youtube-has-no-transcript-api-so-i-built-one-150-users-later-4p56</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/youtube-has-no-transcript-api-so-i-built-one-150-users-later-4p56</guid>
      <description>&lt;p&gt;You know what's wild? YouTube, a Google product, has no official API for pulling video transcripts. You can upload, search, and manage playlists through their API. But if you want the actual words spoken in a video? Good luck.&lt;/p&gt;

&lt;p&gt;I ran into this wall in late 2025 while building a content repurposing tool. I needed transcripts from YouTube videos to feed into an LLM for summarization. The YouTube Data API v3 gives you metadata, thumbnails, view counts. But transcripts? Nope.&lt;/p&gt;

&lt;p&gt;So I built my own.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Actually Does
&lt;/h2&gt;

&lt;p&gt;The actor loads a YouTube video page, grabs the auto-generated captions YouTube creates for most videos, and returns clean text with timestamps. It also supports multiple languages, since YouTube auto-generates captions in several of them.&lt;/p&gt;

&lt;p&gt;Here's what the output looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"videoUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.youtube.com/watch?v=dQw4w9WgXcQ"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Example Video Title"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"transcript"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Welcome to this tutorial"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Today we are going to cover"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;3.1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No API key needed. No OAuth flows. Just pass in a video URL and get the transcript back.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers After 8 Months
&lt;/h2&gt;

&lt;p&gt;I published this on the Apify Store and kind of forgot about it for a while. Then I checked the dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;154 users&lt;/strong&gt; have tried it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,737 total runs&lt;/strong&gt; across all users&lt;/li&gt;
&lt;li&gt;It's one of my most popular actors out of 40+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The thing that surprised me was who's using it. I expected developers. And yes, developers building AI pipelines are a big chunk. But I also see researchers pulling transcripts from lecture series, content creators repurposing their own videos into blog posts, and marketing teams analyzing competitor video content.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hard Parts
&lt;/h2&gt;

&lt;p&gt;YouTube does not make this easy. Captions are loaded dynamically through a separate request after the page renders. The URL for the caption track is embedded inside a massive JSON blob in the page source. Finding and parsing that reliably took more debugging than the actual extraction logic.&lt;/p&gt;
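
&lt;p&gt;The extraction step looks roughly like this. Treat it as a sketch: &lt;code&gt;ytInitialPlayerResponse&lt;/code&gt; and the path down to &lt;code&gt;captionTracks&lt;/code&gt; reflect what the page shipped when I built this, and YouTube moves this markup around.&lt;/p&gt;

```javascript
// Sketch of pulling the caption track list out of the page source.
// YouTube embeds a large JSON blob (ytInitialPlayerResponse) in the HTML;
// the caption track URLs live a few levels deep inside it.
function extractCaptionTracks(html) {
  const match = html.match(/ytInitialPlayerResponse\s*=\s*(\{.+?\});/s);
  if (!match) return [];
  const player = JSON.parse(match[1]);
  return player?.captions?.playerCaptionsTracklistRenderer?.captionTracks ?? [];
}
```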

&lt;p&gt;The other challenge: some videos have manually uploaded captions, some have auto-generated ones, and some have both. The actor handles all three cases and lets you pick which language you want.&lt;/p&gt;

&lt;p&gt;Rate limiting is real too. YouTube will throttle you if you hammer it. The actor spaces out requests and uses session management to stay under the radar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Not Just Use a Python Library?
&lt;/h2&gt;

&lt;p&gt;There are Python packages like &lt;code&gt;youtube_transcript_api&lt;/code&gt; that do something similar. They work fine for one-off scripts. But when you need to run this at scale, on a schedule, with proxy rotation and automatic retries, you want infrastructure around it.&lt;/p&gt;

&lt;p&gt;That's what Apify gives you. The actor runs in the cloud, handles failures gracefully, and stores results in a dataset you can export to JSON, CSV, or push to a webhook.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;The actor itself is free on Apify; you only pay for compute, which comes to pennies per video: &lt;a href="https://apify.com/george.the.developer/youtube-transcript-scraper" rel="noopener noreferrer"&gt;YouTube Transcript Extractor&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are building anything that needs video content as text, save yourself the headache of reverse engineering YouTube's caption system. Someone already did that part for you.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in Nairobi by George. 40+ actors on the Apify Store, 154 users on this one alone.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>api</category>
      <category>youtube</category>
    </item>
    <item>
      <title>My LinkedIn Scraper Just Hit Top 20 on Apify — Here's How I Built It</title>
      <dc:creator>George Kioko</dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:41:25 +0000</pubDate>
      <link>https://dev.to/the_aientrepreneur_7ae85/my-linkedin-scraper-just-hit-top-20-on-apify-heres-how-i-built-it-3j5p</link>
      <guid>https://dev.to/the_aientrepreneur_7ae85/my-linkedin-scraper-just-hit-top-20-on-apify-heres-how-i-built-it-3j5p</guid>
      <description>&lt;p&gt;I woke up last week to an email from Apify saying my LinkedIn Employee Scraper had earned the Rising Star badge — meaning it cracked the top 20 actors on the entire platform. 176 users, 2,430 runs, and counting.&lt;/p&gt;

&lt;p&gt;This is the story of how a side project built in Nairobi turned into one of the most-used LinkedIn scrapers on Apify.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: LinkedIn Has No Real API for Employee Data
&lt;/h2&gt;

&lt;p&gt;If you've ever tried to pull employee data from LinkedIn programmatically, you already know the pain. LinkedIn's official API is locked down tight — you need partner status or a Sales Navigator license ($800–$1,200/month) just to get basic company employee info.&lt;/p&gt;

&lt;p&gt;For indie developers, recruiters building internal tools, or startups doing competitive intel, that price tag kills the project before it starts. I needed a different approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works: Playwright + Crawlee + Anti-Detection
&lt;/h2&gt;

&lt;p&gt;The scraper runs as an Apify Actor using Crawlee (Apify's open-source crawling framework) with Playwright driving a real Chromium browser. Here's the core pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;apify&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PlaywrightCrawler&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crawlee&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getInput&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;linkedinUrls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="nx"&gt;maxProfiles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;crawler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PlaywrightCrawler&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;proxyConfiguration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createProxyConfiguration&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;RESIDENTIAL&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;launchContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;launchOptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;headless&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;--no-sandbox&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;--disable-blink-features=AutomationControlled&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;minConcurrency&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;maxConcurrency&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;requestHandler&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Human-like delay between actions&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitForTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2000&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;random&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;employees&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="nf"&gt;$eval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.org-people-profile-card&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cards&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="nx"&gt;cards&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;card&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;card&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.artdeco-entity-lockup__title&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;card&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.artdeco-entity-lockup__subtitle&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;profileUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;card&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;a&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;href&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;profile-scraped&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pushData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;crawler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addRequests&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;linkedinUrls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;})));&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;crawler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The architecture isn't complicated, but the details are what make it survive in production. LinkedIn is one of the most aggressive anti-bot platforms out there, so every layer matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Keeps It Running
&lt;/h2&gt;

&lt;p&gt;Three things separate a LinkedIn scraper that works once from one that runs 2,430 times without breaking:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session management.&lt;/strong&gt; Instead of logging in fresh every run, the scraper persists cookies and reuses sessions. This mimics real user behavior and avoids triggering LinkedIn's "new device" verification flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Residential proxies.&lt;/strong&gt; Datacenter IPs get flagged within minutes on LinkedIn. The actor routes through Apify's residential proxy pool, rotating IPs per request. Each request looks like it comes from a different home internet connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Randomized timing.&lt;/strong&gt; No fixed delays. Every pause between actions uses &lt;code&gt;Math.random()&lt;/code&gt; to vary between 2–5 seconds. Linear timing patterns are the easiest signal for bot detection systems to catch.&lt;/p&gt;

&lt;p&gt;I also limit concurrency to 1–2 parallel requests max. It's slower, but LinkedIn's rate limiting is harsh enough that going faster just burns through proxy credits with nothing to show for it.&lt;/p&gt;
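
&lt;p&gt;The session-persistence piece is the part people ask about most. The actor keeps cookies in Apify's key-value store so they survive between cloud runs; this sketch shows the same save/restore pattern against the local filesystem (the file and function names are mine, for illustration):&lt;/p&gt;

```javascript
// Sketch of the cookie save/restore pattern. The real actor uses Apify's
// key-value store; a local JSON file shows the same idea.
import { readFileSync, writeFileSync, existsSync } from 'node:fs';

const COOKIE_FILE = 'linkedin-session.json';

// Persist the browser context's cookies at the end of a run
function saveCookies(cookies) {
  writeFileSync(COOKIE_FILE, JSON.stringify(cookies));
}

// Restore them at the start of the next run; null means "log in fresh"
function loadCookies() {
  if (!existsSync(COOKIE_FILE)) return null;
  return JSON.parse(readFileSync(COOKIE_FILE, 'utf8'));
}
```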

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;Here's where the scraper stands today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;176 users&lt;/strong&gt; on Apify Store&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2,430 total runs&lt;/strong&gt; in production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rising Star badge&lt;/strong&gt; — top 20 actor on the platform&lt;/li&gt;
&lt;li&gt;Pay-per-event pricing at $0.004 per profile scraped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For context, I launched this about a year ago as one of my first Apify actors. It started getting steady traction around the 500-run mark, and growth has been compounding since. The Rising Star badge was a genuine surprise — I didn't realize it had climbed that high until the notification hit my inbox.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned Building Scrapers at Scale
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LinkedIn changes its DOM constantly.&lt;/strong&gt; I've had to update selectors at least four times. If you build a LinkedIn scraper, abstract your selectors into a config object so you can patch them without rewriting handler logic.&lt;/p&gt;
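
&lt;p&gt;Concretely, the config-object idea looks like this. The selector strings are the ones from the snippet above; treat them as a snapshot of LinkedIn's current markup, not a stable contract:&lt;/p&gt;

```javascript
// All DOM selectors in one config object. When LinkedIn's markup shifts,
// the fix is a one-line patch here instead of a hunt through handler logic.
const SELECTORS = {
  employeeCard: '.org-people-profile-card',
  name: '.artdeco-entity-lockup__title',
  title: '.artdeco-entity-lockup__subtitle',
  profileLink: 'a',
};

// Handlers only reference keys, never raw selector strings
function extractEmployee(card, sel = SELECTORS) {
  return {
    name: card.querySelector(sel.name)?.innerText?.trim(),
    title: card.querySelector(sel.title)?.innerText?.trim(),
    profileUrl: card.querySelector(sel.profileLink)?.href,
  };
}
```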

&lt;p&gt;&lt;strong&gt;Users will throw anything at your actor.&lt;/strong&gt; Company pages with 50,000 employees, URLs with typos, private profiles, pages behind auth walls. Defensive coding isn't optional — it's the entire job. Every edge case that crashes your actor is a 1-star review waiting to happen.&lt;/p&gt;
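
&lt;p&gt;A cheap first layer of that defense is validating every URL before the crawler sees it. This helper is a sketch; the name and return shape are mine, not the actor's:&lt;/p&gt;

```javascript
// Reject malformed or non-LinkedIn input up front, so one bad entry
// produces a skipped row instead of a crashed run (and a 1-star review).
function validateLinkedInUrl(raw) {
  let url;
  try {
    url = new URL(String(raw).trim());
  } catch {
    return { ok: false, reason: 'not a valid URL' };
  }
  if (!/(^|\.)linkedin\.com$/.test(url.hostname)) {
    return { ok: false, reason: 'not a linkedin.com URL' };
  }
  return { ok: true, url: url.href };
}
```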

&lt;p&gt;&lt;strong&gt;Pay-per-event pricing works.&lt;/strong&gt; Charging per profile scraped instead of per run aligns cost with value. Users scraping 10 profiles pay less than users scraping 10,000. This keeps casual users happy while still generating real revenue from power users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good README = more users.&lt;/strong&gt; My most-used actors all have detailed READMEs with input/output examples, Mermaid architecture diagrams, and clear pricing breakdowns. Developers don't install tools they can't understand in 30 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm currently running 38+ actors on Apify covering everything from Google Scholar to Telegram channels to OFAC sanctions data. The LinkedIn scraper remains my top performer, and I'm working on v2 with better pagination handling and support for scraping by department filters.&lt;/p&gt;

&lt;p&gt;If you're building scrapers and want to see the code, everything is on GitHub. If you just need LinkedIn employee data without building anything, the actor is ready to run on Apify Store.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apify Store&lt;/strong&gt;: &lt;a href="https://apify.com/george.the.developer" rel="noopener noreferrer"&gt;https://apify.com/george.the.developer&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/the-ai-entrepreneur-ai-hub" rel="noopener noreferrer"&gt;https://github.com/the-ai-entrepreneur-ai-hub&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in Nairobi. Questions about the scraper or Apify actors in general — drop them in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>scraping</category>
      <category>node</category>
    </item>
  </channel>
</rss>
