<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jason Wolf</title>
    <description>The latest articles on DEV Community by Jason Wolf (@jdwolf96).</description>
    <link>https://dev.to/jdwolf96</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3868286%2F57755aa1-8779-44e1-9422-0433178b5ce8.png</url>
      <title>DEV Community: Jason Wolf</title>
      <link>https://dev.to/jdwolf96</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jdwolf96"/>
    <language>en</language>
    <item>
      <title>How to give your AI agent web browsing in 5 minutes (with code)</title>
      <dc:creator>Jason Wolf</dc:creator>
      <pubDate>Fri, 10 Apr 2026 10:52:58 +0000</pubDate>
      <link>https://dev.to/jdwolf96/how-to-give-your-ai-agent-web-browsing-in-5-minutes-with-code-1mgl</link>
      <guid>https://dev.to/jdwolf96/how-to-give-your-ai-agent-web-browsing-in-5-minutes-with-code-1mgl</guid>
      <description>&lt;p&gt;Every AI agent eventually needs to browse the web. Whether it's research, competitor monitoring, or data extraction — at some point your agent needs to read a URL.&lt;/p&gt;

&lt;p&gt;The problem? Setting up Puppeteer yourself is a rabbit hole. And the popular scraping APIs charge $19-49/month.&lt;/p&gt;

&lt;p&gt;Here's how to wire up web browsing for your agent in under 5 minutes using CrawlAPI (free tier available).&lt;/p&gt;

&lt;h2&gt;LangChain (Python)&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Tool&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;scrape_url&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crawlapi.net/v1/scrape&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-RapidAPI-Key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;formats&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;web_tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WebScraper&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;scrape_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Scrape any URL and return clean markdown content. Input should be a URL.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Add to your agent
&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;web_tool&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;CrewAI&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;crewai_tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Web Scraper&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;scrape_website&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Scrape a URL and return its content as clean markdown.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crawlapi.net/v1/scrape&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-RapidAPI-Key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;formats&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;OpenAI Function Calling&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"scrape_url"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Scrape a webpage and return its content as clean markdown"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"object"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The URL to scrape"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"required"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the handler that fulfils the call:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handle_tool_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crawlapi.net/v1/scrape&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-RapidAPI-Key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;formats&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
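&lt;p&gt;Wiring the two together: when the model responds with a tool call, parse the arguments and route them to the handler above. A minimal sketch — the dict shape mirrors a serialised OpenAI tool call, and &lt;code&gt;fetch&lt;/code&gt; is injectable here purely so the routing logic can be exercised without a network call:&lt;/p&gt;

```python
import json

def dispatch_tool_call(tool_call: dict, fetch) -> str:
    """Route a model-issued tool call to the matching Python handler.

    `tool_call` mirrors the serialised shape of an OpenAI tool call:
    {"function": {"name": ..., "arguments": "a JSON string"}}.
    Pass `fetch=handle_tool_call` in real use.
    """
    name = tool_call["function"]["name"]
    if name == "scrape_url":
        # The model returns arguments as a JSON-encoded string.
        args = json.loads(tool_call["function"]["arguments"])
        return fetch(args["url"])
    raise ValueError(f"Unknown tool: {name}")
```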



&lt;h2&gt;Bonus: Search + scrape in one call&lt;/h2&gt;

&lt;p&gt;Need your agent to research a topic? One endpoint, no Googling required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crawlapi.net/v1/search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-RapidAPI-Key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest news on AI agents&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;num&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Returns scraped content from top 5 results
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
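&lt;p&gt;Your agent can then stitch those results into a single context block before handing them to the model. A sketch, assuming the search response carries a &lt;code&gt;data&lt;/code&gt; list of &lt;code&gt;url&lt;/code&gt;/&lt;code&gt;markdown&lt;/code&gt; entries — verify the actual shape against the docs:&lt;/p&gt;

```python
def build_context(results: list[dict], max_chars: int = 8000) -> str:
    """Join scraped search results into one prompt-ready block.

    `results` is assumed to look like response.json()["data"]: a list of
    dicts with "url" and "markdown" keys (hypothetical shape; check it
    against the CrawlAPI docs).
    """
    chunks = [f"Source: {item['url']}\n\n{item['markdown']}" for item in results]
    # Truncate so the block fits comfortably in the model's context window.
    return "\n\n---\n\n".join(chunks)[:max_chars]
```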



&lt;h2&gt;Get started&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Free tier: 50 calls/day, no credit card&lt;/li&gt;
&lt;li&gt;Available on RapidAPI: search "CrawlAPI"&lt;/li&gt;
&lt;li&gt;Docs + landing page: crawlapi.net&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What are you building? Drop it in the comments — always curious what people are using agents for.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>langchain</category>
      <category>python</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Add web scraping to Claude Desktop in 60 seconds (MCP + CrawlAPI)</title>
      <dc:creator>Jason Wolf</dc:creator>
      <pubDate>Fri, 10 Apr 2026 10:52:37 +0000</pubDate>
      <link>https://dev.to/jdwolf96/add-web-scraping-to-claude-desktop-in-60-seconds-mcp-crawlapi-167c</link>
      <guid>https://dev.to/jdwolf96/add-web-scraping-to-claude-desktop-in-60-seconds-mcp-crawlapi-167c</guid>
      <description>&lt;p&gt;Claude Desktop is powerful. But by default it can't browse the web.&lt;/p&gt;

&lt;p&gt;Here's how to fix that in 60 seconds using the CrawlAPI MCP server — no coding required.&lt;/p&gt;

&lt;h2&gt;What you get&lt;/h2&gt;

&lt;p&gt;Three tools available directly in Claude (and any MCP-compatible editor like Cursor or Windsurf):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;scrape_url&lt;/strong&gt; — fetch any page as clean markdown&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;batch_scrape&lt;/strong&gt; — scrape up to 10 URLs in parallel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;search_and_scrape&lt;/strong&gt; — search the web + scrape top results in one shot&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Setup&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Get a free API key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Head to &lt;a href="https://rapidapi.com/crawlapi/api/crawlapi" rel="noopener noreferrer"&gt;RapidAPI&lt;/a&gt; and subscribe (free tier: 50 calls/day, no credit card).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Download the MCP server&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/JDWolf96/crawlapi-js
&lt;span class="nb"&gt;cd &lt;/span&gt;crawlapi-js
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Add to Claude Desktop config&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;~/Library/Application Support/Claude/claude_desktop_config.json&lt;/code&gt; (Mac) or &lt;code&gt;%APPDATA%\Claude\claude_desktop_config.json&lt;/code&gt; (Windows):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"crawlapi"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/absolute/path/to/crawlapi-js/mcp-server.js"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CRAWLAPI_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your_rapidapi_key_here"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Restart Claude Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's it. You'll see the CrawlAPI tools appear in the tools panel.&lt;/p&gt;

&lt;h2&gt;What it looks like in practice&lt;/h2&gt;

&lt;p&gt;Ask Claude:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Scrape &lt;a href="https://news.ycombinator.com" rel="noopener noreferrer"&gt;https://news.ycombinator.com&lt;/a&gt; and summarise the top 5 stories"&lt;/p&gt;

&lt;p&gt;"Search for 'LangChain vs LlamaIndex 2026' and give me a comparison"&lt;/p&gt;

&lt;p&gt;"Batch scrape these 5 competitor pages and tell me how they position themselves"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude uses the tools automatically — you just ask in plain English.&lt;/p&gt;

&lt;h2&gt;Cursor / Windsurf / Continue&lt;/h2&gt;

&lt;p&gt;Same config, different location. Check your editor's MCP settings panel. The server works with any stdio-transport MCP client.&lt;/p&gt;
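&lt;p&gt;For example, Cursor reads the same &lt;code&gt;mcpServers&lt;/code&gt; block from &lt;code&gt;~/.cursor/mcp.json&lt;/code&gt; — the exact path may vary by editor version, so treat this as a sketch and confirm against your editor's MCP docs:&lt;/p&gt;

```json
{
  "mcpServers": {
    "crawlapi": {
      "command": "node",
      "args": ["/absolute/path/to/crawlapi-js/mcp-server.js"],
      "env": { "CRAWLAPI_KEY": "your_rapidapi_key_here" }
    }
  }
}
```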

&lt;h2&gt;Why markdown matters&lt;/h2&gt;

&lt;p&gt;Most scraping APIs return raw HTML. CrawlAPI returns clean markdown — no nav bars, no cookie banners, no scripts. Just the content, ready to drop into an LLM context window.&lt;/p&gt;

&lt;p&gt;It also handles JavaScript rendering, so pages that load content dynamically (SPAs, React apps, etc.) work out of the box.&lt;/p&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🌐 &lt;a href="https://crawlapi.net" rel="noopener noreferrer"&gt;crawlapi.net&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📦 &lt;a href="https://rapidapi.com/crawlapi/api/crawlapi" rel="noopener noreferrer"&gt;RapidAPI listing&lt;/a&gt; — free tier available&lt;/li&gt;
&lt;li&gt;💻 &lt;a href="https://github.com/JDWolf96/crawlapi-js" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — MCP server + JS SDK&lt;/li&gt;
&lt;li&gt;📄 &lt;a href="https://crawlapi.net/openapi.json" rel="noopener noreferrer"&gt;OpenAPI spec&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Built anything cool with it? Let me know in the comments.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>claude</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Best Web Scraping APIs for AI Agents in 2026 (Compared)</title>
      <dc:creator>Jason Wolf</dc:creator>
      <pubDate>Wed, 08 Apr 2026 17:21:09 +0000</pubDate>
      <link>https://dev.to/jdwolf96/best-web-scraping-apis-for-ai-agents-in-2026-compared-2kal</link>
      <guid>https://dev.to/jdwolf96/best-web-scraping-apis-for-ai-agents-in-2026-compared-2kal</guid>
      <description>&lt;p&gt;If you're building AI agents in 2026, you'll hit this problem within the first week: your agent needs to read a webpage, and suddenly you're deep in a Puppeteer rabbit hole, debugging headless Chrome, and wondering why a simple task takes three days.&lt;/p&gt;

&lt;p&gt;Web scraping APIs exist to solve this. But which one is actually worth using for an AI agent use case?&lt;/p&gt;

&lt;p&gt;I compared the four main options: Firecrawl, ScrapingBee, Apify, and CrawlAPI. Here's the honest breakdown.&lt;/p&gt;

&lt;h2&gt;The Contenders&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Firecrawl&lt;/strong&gt; — The current darling of the AI agent world. Clean markdown output, good docs. Starting price: $19/month for 500 credits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ScrapingBee&lt;/strong&gt; — The enterprise option. Handles proxies, CAPTCHAs, JS rendering. Starting price: $49/month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apify&lt;/strong&gt; — A full scraping platform. Starting price: $49/month. Powerful but complex.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CrawlAPI&lt;/strong&gt; — Newer, built specifically for AI agents. Clean markdown, Puppeteer fallback, Redis caching. Starting price: $9.99/month for 5,000 calls. Free tier: 50 calls/day, no credit card.&lt;/p&gt;

&lt;h2&gt;Which one should you use?&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Prototyping / side project:&lt;/strong&gt; CrawlAPI. Free tier is genuinely useful, $9.99/mo when you upgrade, copy-paste manifests for LangChain + CrewAI + OpenAI Assistants.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Production, high volume:&lt;/strong&gt; Firecrawl if you want established, CrawlAPI if cost matters.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enterprise (proxy rotation, CAPTCHA):&lt;/strong&gt; ScrapingBee or Apify.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Using CrewAI:&lt;/strong&gt; CrawlAPI — the only one with a native tool manifest.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;LangChain integration (CrawlAPI)&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from langchain.tools import Tool
import requests

def scrape_url(url: str) -&amp;gt; str:
    r = requests.post(
        "https://crawlapi.net/v1/scrape",
        headers={"X-RapidAPI-Key": "YOUR_KEY"},
        json={"url": url, "formats": ["markdown"]}
    )
    return r.json()["data"]["markdown"]

web_tool = Tool(
    name="WebScraper",
    func=scrape_url,
    description="Scrape any URL and return clean markdown. Input should be a URL."
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
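&lt;p&gt;One note on hardening: the snippets here assume the request always succeeds. A slightly more defensive variant — a sketch, where the timeout value and error messages are my own choices and the &lt;code&gt;data.markdown&lt;/code&gt; response shape is taken from the examples above — might look like:&lt;/p&gt;

```python
import requests

def scrape_url_safe(url: str, timeout: float = 30.0) -> str:
    """Scrape a URL via CrawlAPI, surfacing clear errors on failure.

    The timeout and error handling are illustrative additions; the
    endpoint, header, and data.markdown response shape come from the
    examples above.
    """
    response = requests.post(
        "https://crawlapi.net/v1/scrape",
        headers={"X-RapidAPI-Key": "YOUR_KEY"},
        json={"url": url, "formats": ["markdown"]},
        timeout=timeout,
    )
    response.raise_for_status()  # surface 4xx/5xx instead of a KeyError later
    payload = response.json()
    try:
        return payload["data"]["markdown"]
    except (KeyError, TypeError):
        raise RuntimeError(f"Unexpected CrawlAPI response for {url}: {payload!r}")
```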

&lt;h2&gt;CrewAI integration (CrawlAPI)&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from crewai_tools import tool
import requests

@tool("Web Scraper")
def scrape_website(url: str) -&amp;gt; str:
    """Scrape a URL and return its content as clean markdown."""
    r = requests.post(
        "https://crawlapi.net/v1/scrape",
        headers={"X-RapidAPI-Key": "YOUR_KEY"},
        json={"url": url, "formats": ["markdown"]}
    )
    return r.json()["data"]["markdown"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;CrawlAPI is the cheapest option that still checks all the boxes for agent use cases. Try it free: crawlapi.net&lt;/p&gt;

&lt;p&gt;Have a different tool you'd recommend? Drop it in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>langchain</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
