<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohit Prateek</title>
    <description>The latest articles on DEV Community by Mohit Prateek (@mohitprateek).</description>
    <link>https://dev.to/mohitprateek</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3778716%2Ff3818604-dd92-4816-be1d-6455913f6b66.png</url>
      <title>DEV Community: Mohit Prateek</title>
      <link>https://dev.to/mohitprateek</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mohitprateek"/>
    <language>en</language>
    <item>
      <title>Web Scraping API for structured data from any website - incl. authenticated and JS-heavy pages.</title>
      <dc:creator>Mohit Prateek</dc:creator>
      <pubDate>Wed, 18 Feb 2026 08:23:09 +0000</pubDate>
      <link>https://dev.to/mohitprateek/web-scraping-api-for-structured-data-from-any-website-incl-authenticated-and-js-heavy-pages-5g1l</link>
      <guid>https://dev.to/mohitprateek/web-scraping-api-for-structured-data-from-any-website-incl-authenticated-and-js-heavy-pages-5g1l</guid>
      <description>&lt;p&gt;We’ve built a developer-facing scraping &lt;a href="https://tinyurl.com/bdehkj2z" rel="noopener noreferrer"&gt;API&lt;/a&gt; at Anakin.&lt;/p&gt;

&lt;p&gt;The workflow is: you send a URL (or a batch of URLs) plus a few parameters, and you get back normalized output - raw HTML, clean Markdown, or structured JSON. The platform handles JavaScript rendering (headless browser execution), proxy routing, retries, anti-bot handling, and authenticated sessions when needed.&lt;/p&gt;

&lt;p&gt;The execution model is &lt;strong&gt;request → execution → extraction → response&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In practice, most scraping systems end up becoming a pile of components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP fetch + headless browser execution&lt;/li&gt;
&lt;li&gt;Proxy pools with geo routing logic&lt;/li&gt;
&lt;li&gt;Retries, backoff and fallbacks&lt;/li&gt;
&lt;li&gt;Extraction and normalization pipelines&lt;/li&gt;
&lt;li&gt;Session and authentication handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We packaged these into a single API surface: one request in, one normalized response out.&lt;/p&gt;

&lt;p&gt;At a high level:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Router decides the execution path (simple fetch vs browser render, proxy pool selection, wait conditions).&lt;/li&gt;
&lt;li&gt;Execution layer performs the request (HTTP client or isolated Chromium instance).&lt;/li&gt;
&lt;li&gt;Stability layer applies retry and fallback logic (proxy rotation, browser reconfiguration, timing adjustments).&lt;/li&gt;
&lt;li&gt;Extraction layer returns normalized output (HTML or Markdown) and optionally applies schema-driven JSON extraction.&lt;/li&gt;
&lt;/ol&gt;
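
&lt;p&gt;From the caller’s side, that routing is steered by request parameters: &lt;code&gt;useBrowser&lt;/code&gt; switches between the simple-fetch and browser-render paths, and &lt;code&gt;country&lt;/code&gt; selects the proxy geo. A minimal sketch of forcing the browser path (the country code value is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Force a browser render routed through a German proxy pool
curl -X POST https://api.anakin.io/v1/url-scraper \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "country": "de",
    "useBrowser": true,
    "generateJson": false
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;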

&lt;h2&gt;What the Tool Includes (Try It as You Read)&lt;/h2&gt;

&lt;p&gt;1) &lt;strong&gt;URL Scraper (single + batch)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Fetch and normalize a single page (or many).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Run a simple fetch (no browser). [Get an API key at &lt;a href="https://tinyurl.com/bdehkj2z" rel="noopener noreferrer"&gt;https://tinyurl.com/bdehkj2z&lt;/a&gt;.]&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TODO: Replace with your exact API KEY + URL
curl -X POST https://api.anakin.io/v1/url-scraper \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "country": "us",
    "useBrowser": false,
    "generateJson": false
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output: a job ID. Once the job completes, it contains the normalized output - raw HTML, Markdown, or JSON.&lt;/p&gt;
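
&lt;p&gt;The exact response schema may differ; as an illustrative sketch (field names are assumptions, not confirmed), the submission response might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative submission response - field names are assumptions
{
  "jobId": "job_abc123xyz",
  "status": "pending"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;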

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Get the results using the job ID from Step 1.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TODO: Replace with your exact API KEY + Job ID
curl -X GET https://api.anakin.io/v1/url-scraper/job_abc123xyz \
  -H "X-API-Key: your_api_key"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
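
&lt;p&gt;Again as a sketch (not the confirmed schema), a finished job might return something like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative completed-job response - shape is an assumption
{
  "jobId": "job_abc123xyz",
  "status": "completed",
  "result": {
    "markdown": "# Example Domain\n..."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;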



&lt;p&gt;If you want to run it on a list of URLs in one request, see the batch documentation &lt;a href="https://anakin.io/docs/url-scraper/batch-url-scraping" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
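
&lt;p&gt;A hedged sketch of what a batch call could look like (the endpoint path and the &lt;code&gt;urls&lt;/code&gt; field are assumptions - confirm against the linked docs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical batch request - endpoint path and "urls" field are assumptions
curl -X POST https://api.anakin.io/v1/url-scraper/batch \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://example.com", "https://example.org"],
    "country": "us",
    "useBrowser": false
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;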

&lt;p&gt;2) &lt;strong&gt;Search API (query -&amp;gt; links + extracted content)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This takes a query and returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a ranked list of results&lt;/li&gt;
&lt;li&gt;extracted content for those pages (not just URLs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The point is to avoid building a second pipeline:&lt;br&gt;
“search → fetch each URL → render/extract → normalize”.&lt;/p&gt;
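
&lt;p&gt;A minimal sketch of a search call (the &lt;code&gt;/v1/search&lt;/code&gt; path and the &lt;code&gt;query&lt;/code&gt; field are assumptions - the docs linked below have the exact contract):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical search request - path and fields are assumptions
curl -X POST https://api.anakin.io/v1/search \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "headless browser detection techniques"
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;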

&lt;p&gt;Search API - Documentation &lt;a href="https://anakin.io/docs/search/search" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) &lt;strong&gt;Agentic Search (multi-step research pipeline)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is a multi-stage workflow that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;searches&lt;/li&gt;
&lt;li&gt;selects relevant pages&lt;/li&gt;
&lt;li&gt;extracts content&lt;/li&gt;
&lt;li&gt;synthesizes&lt;/li&gt;
&lt;li&gt;returns structured output&lt;/li&gt;
&lt;/ul&gt;
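
&lt;p&gt;A hedged sketch of submitting such a job (the path and fields are assumptions; the docs linked below have the real contract):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical agentic search submission - path and fields are assumptions
curl -X POST https://api.anakin.io/v1/agentic-search \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "pricing pages of leading CRM vendors",
    "maxSources": 5
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;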

&lt;p&gt;Agentic Search API - Documentation &lt;a href="https://anakin.io/docs/agentic-search/submit-search" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) &lt;strong&gt;Browser Sessions (authenticated scraping via session IDs)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users authenticate once inside an isolated, secure browser environment. Encrypted session cookies and storage are persisted server-side, enabling scraping of dashboards, gated portals, and login-protected views without re-running the login flow on every request. Credentials are never stored.&lt;br&gt;
Subsequent requests reference a session_id:&lt;/p&gt;
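
&lt;p&gt;For example (a sketch - the exact field placement and the session ID value are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical authenticated request - session_id value is an assumption
curl -X POST https://api.anakin.io/v1/url-scraper \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/dashboard",
    "session_id": "sess_abc123",
    "useBrowser": true
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;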

&lt;p&gt;Check it out at: &lt;a href="https://tinyurl.com/bdehkj2z" rel="noopener noreferrer"&gt;https://tinyurl.com/bdehkj2z&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4zhctxsdykm77hb20z0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4zhctxsdykm77hb20z0.png" alt="Browser Session" width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical use cases we’ve seen&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Powering AI assistants with real-time web content (clean Markdown for retrieval/grounding)&lt;/li&gt;
&lt;li&gt;Enhancing sales and GTM enrichment with public website signals&lt;/li&gt;
&lt;li&gt;Extracting structured product/pricing data for monitoring workflows&lt;/li&gt;
&lt;li&gt;Scraping authenticated dashboards and member portals&lt;/li&gt;
&lt;li&gt;Automating multi-source research pipelines (search → extract → synthesize)&lt;/li&gt;
&lt;li&gt;Embedding web extraction into internal tools and developer workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’ve worked on scraping systems in production, we’d value your feedback. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it on the kinds of pages that are usually annoying - dynamic rendering, geo-specific behavior, authenticated flows, or unstable markup - and let us know where it falls short.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webscraping</category>
      <category>api</category>
      <category>rag</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
