<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Scraper0024</title>
    <description>The latest articles on DEV Community by Scraper0024 (@scraper0024).</description>
    <link>https://dev.to/scraper0024</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2014381%2F2de0d67f-a27e-4153-943a-c6eb9d0a19a7.png</url>
      <title>DEV Community: Scraper0024</title>
      <link>https://dev.to/scraper0024</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/scraper0024"/>
    <language>en</language>
    <item>
      <title>Scrapeless and Nstbrowser Jointly Establish “Browser Labs”</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Wed, 15 Oct 2025 12:39:25 +0000</pubDate>
      <link>https://dev.to/scraper0024/scrapeless-and-nstbrowser-jointly-establish-browser-labs-309c</link>
      <guid>https://dev.to/scraper0024/scrapeless-and-nstbrowser-jointly-establish-browser-labs-309c</guid>
      <description>&lt;p&gt;&lt;strong&gt;Today, we’re excited to share some remarkable progress from our team:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Scrapeless has entered into a strategic partnership with Nstbrowser. Together, we will integrate our product lines, upgrade our cloud browser services, and establish a new joint R&amp;amp;D center — “Browser Labs.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Over the past few years, Nstbrowser has built strong expertise in fingerprint browser and anti-detection technologies, while Scrapeless has been advancing in web automation and AI agent infrastructure.&lt;/p&gt;

&lt;p&gt;As AI and autonomous agent technologies evolve rapidly, agents are demanding higher standards of browser authenticity, isolation, and large-scale concurrency.&lt;/p&gt;

&lt;p&gt;Both parties agreed that only by integrating the &lt;strong&gt;“authentic interaction and isolation capabilities of physical browsers”&lt;/strong&gt; with the &lt;strong&gt;“high-concurrency adaptation capabilities of cloud browsers”&lt;/strong&gt; can long-term competitiveness be established for enterprises and automated scenarios.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=r_gh8aQQQDM" rel="noopener noreferrer"&gt;Watch the video&lt;/a&gt; to learn more!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Future Dual-Brand Strategy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scrapeless&lt;/strong&gt; will focus on cloud browser technology, providing enterprise clients with scalable, high-performance infrastructure for data extraction, automation, and AI Agents. Leveraging its robust cloud capabilities, Scrapeless will deliver &lt;strong&gt;customized, scenario-driven solutions&lt;/strong&gt; across industries such as &lt;strong&gt;&lt;u&gt;finance, retail, e-commerce, SEO, and marketing&lt;/u&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nstbrowser&lt;/strong&gt; will concentrate on fingerprint browser clients, browser kernel customization, and anti-detection technologies, serving users in &lt;strong&gt;&lt;u&gt;affiliate marketing, cross-border e-commerce, social media management, and ad verification&lt;/u&gt;&lt;/strong&gt;. It will offer authentic browser fingerprints and stable isolated environments to support multi-account operations and environment segregation.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Cloud Browser Service Upgrade
&lt;/h2&gt;

&lt;p&gt;To provide you with a higher-quality cloud browser service, we plan to migrate the original Nstbrowser Browserless service to the Scrapeless platform. After the migration, you will enjoy the following upgraded core capabilities on Scrapeless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Out-of-the-Box Ready&lt;/strong&gt;: Natively compatible with &lt;a href="https://docs.scrapeless.com/en/scraping-browser/libraries/puppeteer/" rel="noopener noreferrer"&gt;Puppeteer&lt;/a&gt; and &lt;a href="https://docs.scrapeless.com/en/scraping-browser/libraries/playwright/" rel="noopener noreferrer"&gt;Playwright&lt;/a&gt;, supporting CDP connections. Migrate your projects with just one line of code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global IP Resources&lt;/strong&gt;: Covers residential IPs, static ISP IPs, and unlimited IPs across 195 countries. Transparent costs ($0.6–$1.8/GB, far lower than Browserbase) with support for custom browser proxies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bulk Isolated Environment Creation&lt;/strong&gt;: Each profile corresponds to an exclusive browser environment, enabling persistent login and identity isolation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unlimited Concurrent Scaling&lt;/strong&gt;: A single task can launch 50 to 1,000+ browser instances within seconds. Auto-scaling is available with no server resource limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge Node Service (ENS)&lt;/strong&gt;: Multiple nodes worldwide, offering 2–3× faster launch speed and higher stability than other cloud browsers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent Anti-Detection&lt;/strong&gt;: Built-in real-time solutions for major protections like reCAPTCHA, Cloudflare Turnstile/Challenge, and AWS WAF.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Fingerprint Customization&lt;/strong&gt;: Generate random fingerprints or customize fingerprint parameters as needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Debugging&lt;/strong&gt;: Debug interactively and monitor proxy traffic in real time via Live View. Replay sessions page by page through Session Recordings to quickly identify issues and optimize operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Customization&lt;/strong&gt;: Custom development for enterprise-level automation projects and AI Agents.&lt;/li&gt;
&lt;/ul&gt;
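&lt;p&gt;As a rough sketch of what the one-line migration looks like with Puppeteer over CDP (the endpoint host and token parameter below are assumptions for illustration — check the Scrapeless docs linked above for the exact connection URL format):&lt;/p&gt;

```javascript
// Hypothetical sketch of connecting Puppeteer to a cloud browser over CDP.
// The endpoint host and token query parameter are assumptions, not the
// documented Scrapeless API — see the official docs for the real format.
const buildConnectUrl = (token) =>
  `wss://browser.scrapeless.com?token=${encodeURIComponent(token)}`;

// With puppeteer-core installed, migrating is a single connect call:
//   const puppeteer = require('puppeteer-core');
//   const browser = await puppeteer.connect({
//     browserWSEndpoint: buildConnectUrl('YOUR_API_KEY'),
//   });

console.log(buildConnectUrl('YOUR_API_KEY'));
```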

&lt;h2&gt;
  
  
  Migration Schedule
&lt;/h2&gt;

&lt;p&gt;As part of this collaboration, the original Nstbrowser Browserless service will be fully migrated to the Scrapeless platform. The migration timeline is as follows.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;October 15, 2025&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Official shutdown of the Nstbrowser Browserless Service panel and documentation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;October 15 – November 15, 2025&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Transition period: Existing users may continue accessing the service via the original API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;November 15, 2025 00:00 (UTC+2)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Nstbrowser Browserless Service will be officially discontinued&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  User Support
&lt;/h3&gt;

&lt;p&gt;Existing users of the Nstbrowser Browserless Service can request assistance via our Ticket System, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exclusive migration package&lt;/li&gt;
&lt;li&gt;One-on-one technical support&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Contact Us
&lt;/h3&gt;

&lt;p&gt;For migration support or further information, please reach out via:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://discord.gg/Np4CAHxB9a" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://t.me/scrapeless" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="//mailto:market@scrapeless.com"&gt;Email&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Browser Labs will continue to invest in R&amp;amp;D, delivering more professional, efficient, and reliable cloud browser services through the Scrapeless platform.&lt;/p&gt;

&lt;p&gt;We sincerely appreciate your continued trust and support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browser Labs&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>scraping</category>
      <category>scrapeless</category>
      <category>nstbrowser</category>
    </item>
    <item>
      <title>How to Scrape Data on Make Automatically?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Tue, 01 Jul 2025 10:42:54 +0000</pubDate>
      <link>https://dev.to/scraper0024/how-to-scrape-data-on-make-automatically-2c45</link>
      <guid>https://dev.to/scraper0024/how-to-scrape-data-on-make-automatically-2c45</guid>
      <description>&lt;p&gt;We've recently launched an official &lt;a href="https://www.make.com/en/integrations/scrapeless" rel="noopener noreferrer"&gt;integration on Make&lt;/a&gt;, now available as a public app. This tutorial will show you how to create a powerful automated workflow that combines our Google Search API with Web Unlocker to extract data from search results, process it with Claude AI, and send it to a webhook.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We'll Build
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we'll create a workflow that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Triggers automatically every day using integrated scheduling&lt;/li&gt;
&lt;li&gt;Searches Google for specific queries using Scrapeless Google Search API&lt;/li&gt;
&lt;li&gt;Processes each URL individually with Iterator&lt;/li&gt;
&lt;li&gt;Scrapes each URL with Scrapeless WebUnlocker to extract content&lt;/li&gt;
&lt;li&gt;Analyzes content with Anthropic Claude AI&lt;/li&gt;
&lt;li&gt;Sends processed data to a webhook (Discord, Slack, database, etc.)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Make.com account&lt;/li&gt;
&lt;li&gt;A Scrapeless API key (get one at &lt;a href="https://scrapeless.com/?utm_source=official&amp;amp;utm_medium=blog&amp;amp;utm_campaign=make-web-scraping" rel="noopener noreferrer"&gt;&lt;strong&gt;scrapeless.com&lt;/strong&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaiezqgrowrcww7dftm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaiezqgrowrcww7dftm1.png" alt="Scrapeless API key" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Anthropic Claude API key&lt;/li&gt;
&lt;li&gt;A webhook endpoint (Discord webhook, Zapier, database endpoint, etc.)&lt;/li&gt;
&lt;li&gt;Basic understanding of Make.com workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Complete Workflow Overview
&lt;/h2&gt;

&lt;p&gt;Your final workflow will look like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scrapeless Google Search&lt;/strong&gt; (with integrated scheduling) → &lt;strong&gt;Iterator&lt;/strong&gt; → &lt;strong&gt;Scrapeless WebUnlocker&lt;/strong&gt; → &lt;strong&gt;Anthropic Claude&lt;/strong&gt; → &lt;strong&gt;HTTP Webhook&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhigv51ffyc4ysbhlyc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhigv51ffyc4ysbhlyc3.png" alt="Complete Workflow Overview" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Adding Scrapeless Google Search with Integrated Scheduling
&lt;/h2&gt;

&lt;p&gt;We'll start by adding the Scrapeless Google Search module with built-in scheduling.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new scenario in Make.com&lt;/li&gt;
&lt;li&gt;Click the "&lt;strong&gt;+&lt;/strong&gt;" button to add the first module&lt;/li&gt;
&lt;li&gt;Search for "&lt;strong&gt;Scrapeless&lt;/strong&gt;" in the module library&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Scrapeless&lt;/strong&gt; and choose &lt;strong&gt;Search Google&lt;/strong&gt; action&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe45e7cpin98ml6bde1ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe45e7cpin98ml6bde1ad.png" alt="Google Search module configuration" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Google Search with Scheduling
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ipkkir7wbfafw3eso91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ipkkir7wbfafw3eso91.png" alt="Google Search module configuration" width="800" height="1037"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connection Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create a connection&lt;/strong&gt; by entering your Scrapeless API key&lt;/li&gt;
&lt;li&gt;Click "&lt;strong&gt;Add&lt;/strong&gt;" and follow the connection setup&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Search Parameters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Search Query&lt;/strong&gt;: Enter your target query (e.g., "artificial intelligence news")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language&lt;/strong&gt;: &lt;code&gt;en&lt;/code&gt; (English)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Country&lt;/strong&gt;: &lt;code&gt;US&lt;/code&gt; (United States)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw94l2ucono6ridqst40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw94l2ucono6ridqst40.png" alt="Search Google configuration" width="800" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduling Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp90ce3zkswht13jq8osd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp90ce3zkswht13jq8osd.png" alt="Scheduling Setup" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;clock icon&lt;/strong&gt; on the module to open scheduling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run scenario&lt;/strong&gt;: Select "At regular intervals"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minutes&lt;/strong&gt;: Set to &lt;code&gt;1440&lt;/code&gt; (for daily execution) or your preferred interval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced scheduling&lt;/strong&gt;: Use "Add item" to set specific times/days if needed&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 2: Processing Results with Iterator
&lt;/h2&gt;

&lt;p&gt;The Google Search returns multiple URLs in an array. We'll use Iterator to process each result individually.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add an &lt;strong&gt;Iterator&lt;/strong&gt; module after Google Search&lt;/li&gt;
&lt;li&gt;Configure the Array field to process search results&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3lvdbepk9q2xxwxk6u9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3lvdbepk9q2xxwxk6u9.png" alt="Iterator configuration" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iterator Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Array: &lt;code&gt;{{1.result.organic_results}}&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will create a loop that processes each search result separately, allowing better error handling and individual processing.&lt;/p&gt;
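&lt;p&gt;Outside of Make, the Iterator's behavior is just a fan-out over the organic results array. A minimal sketch (the field names mirror the mappings used in this tutorial; the sample data is made up):&lt;/p&gt;

```javascript
// Sketch of what the Iterator does: fan a results array out into
// individual items that downstream modules process one at a time.
const searchOutput = {
  result: {
    organic_results: [
      { title: 'Example A', link: 'https://example.com/a', snippet: '...' },
      { title: 'Example B', link: 'https://example.com/b', snippet: '...' },
    ],
  },
};

// Equivalent of mapping {{1.result.organic_results}} into the Iterator:
const items = searchOutput.result.organic_results.map((r) => ({
  title: r.title,
  link: r.link, // later mapped as {{14.link}} into WebUnlocker
}));

console.log(items.length);
```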

&lt;h2&gt;
  
  
  Step 3: Adding Scrapeless WebUnlocker
&lt;/h2&gt;

&lt;p&gt;Now we'll add the WebUnlocker module to scrape content from each URL.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add another &lt;strong&gt;Scrapeless&lt;/strong&gt; module&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Scrape URL&lt;/strong&gt; (WebUnlocker) action&lt;/li&gt;
&lt;li&gt;Use the same Scrapeless connection&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wigzby0txxc0f5qexis.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wigzby0txxc0f5qexis.png" alt="WebUnlocker configuration" width="800" height="1060"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebUnlocker Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection&lt;/strong&gt;: Use your existing Scrapeless connection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target URL&lt;/strong&gt;: &lt;code&gt;{{2.link}}&lt;/code&gt; (mapped from Iterator output)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Js Render&lt;/strong&gt;: Yes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headless&lt;/strong&gt;: Yes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Country&lt;/strong&gt;: World Wide&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Js Instructions&lt;/strong&gt;: &lt;code&gt;[{"wait":1000}]&lt;/code&gt; (wait for page load)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Block&lt;/strong&gt;: Configure to block unnecessary resources for faster scraping&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw0s46e02esd8mwpng8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw0s46e02esd8mwpng8c.png" alt="WebUnlocker configuration" width="800" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: AI Processing with Anthropic Claude
&lt;/h2&gt;

&lt;p&gt;Add Claude AI to analyze and summarize the scraped content.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add an &lt;strong&gt;Anthropic Claude&lt;/strong&gt; module&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Make an API Call&lt;/strong&gt; action&lt;/li&gt;
&lt;li&gt;Create a new connection with your Claude API key&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5fo9wm9a0xt8zifqq1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5fo9wm9a0xt8zifqq1y.png" alt="Claude AI configuration" width="800" height="899"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Claude Configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection&lt;/strong&gt;: Create connection with your Anthropic API key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt&lt;/strong&gt;: Configure to analyze the scraped content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model&lt;/strong&gt;: &lt;code&gt;claude-3-sonnet-20240229&lt;/code&gt;, &lt;code&gt;claude-3-opus-20240229&lt;/code&gt;, or your preferred model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Max Tokens&lt;/strong&gt;: 1000-4000 depending on your needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;URL&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/v1/messages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Header 1&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Key : Content-Type&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Value : application/json&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Header 2&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Key : anthropic-version&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Value : 2023-06-01&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example prompt (copy and paste into the request body):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "model": "claude-3-sonnet-20240229",
  "max_tokens": 1000,
  "messages": [
    {
      "role": "user",
      "content": "Analyze this web content and provide a summary in English with key points:\n\nTitle: {{14.title}}\nURL: {{14.link}}\nDescription: {{14.snippet}}\nContent: {{13.content}}\n\nSearch Query: {{1.result.search_information.query_displayed}}"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Don't forget to replace module number &lt;code&gt;14&lt;/code&gt; with your own module numbers.&lt;/li&gt;
&lt;/ul&gt;
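&lt;p&gt;To sanity-check the same call outside Make, you can reproduce it as a plain HTTP request. The sketch below builds the request from the headers and body shown above (the placeholder key and prompt are yours to supply; &lt;code&gt;x-api-key&lt;/code&gt; is how the Anthropic API authenticates direct calls):&lt;/p&gt;

```javascript
// Builds the same Anthropic Messages API request configured in Make.
// Replace ANTHROPIC_API_KEY and the prompt content before running for real.
const buildClaudeRequest = (apiKey, content) => ({
  url: 'https://api.anthropic.com/v1/messages',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'anthropic-version': '2023-06-01',
    'x-api-key': apiKey,
  },
  body: JSON.stringify({
    model: 'claude-3-sonnet-20240229',
    max_tokens: 1000,
    messages: [{ role: 'user', content }],
  }),
});

// To send it, e.g.:
//   const res = await fetch(req.url, {
//     method: req.method, headers: req.headers, body: req.body,
//   });
const req = buildClaudeRequest('ANTHROPIC_API_KEY', 'Summarize this page: ...');
console.log(req.headers['anthropic-version']);
```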

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwadzowilgmfs2iruhn1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwadzowilgmfs2iruhn1z.png" alt="HTTP webhook configuration" width="712" height="854"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Webhook Integration
&lt;/h2&gt;

&lt;p&gt;Finally, send the processed data to your webhook endpoint.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add an &lt;strong&gt;HTTP&lt;/strong&gt; module&lt;/li&gt;
&lt;li&gt;Configure it to send a POST request to your webhook&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszbn0cglkmrv1xfnzkpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszbn0cglkmrv1xfnzkpv.png" alt="HTTP webhook configuration" width="800" height="944"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;URL&lt;/strong&gt;: Your webhook endpoint (Discord, Slack, database, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method&lt;/strong&gt;: POST&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headers&lt;/strong&gt;: &lt;code&gt;Content-Type: application/json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Body Type&lt;/strong&gt;: Raw (JSON)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example Webhook Payload:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "embeds": [
    {
      "title": "{{14.title}}",
      "description": "*{{15.body.content[0].text}}*",
      "url": "{{14.link}}",
      "color": 3447003,
      "footer": {
        "text": "Analysis complete"
      }
    }
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
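&lt;p&gt;Before wiring this into Make, it can help to verify the payload shape with a short script. This sketch builds the same Discord-style embed shown above (the webhook URL is a placeholder, and the sample title/summary values are made up):&lt;/p&gt;

```javascript
// Builds the same Discord-style embed payload as the example above.
// WEBHOOK_URL is a placeholder — substitute your own endpoint.
const buildEmbedPayload = (title, summary, url) => ({
  embeds: [
    {
      title,
      description: `*${summary}*`, // italicized summary, as in the example
      url,
      color: 3447003,
      footer: { text: 'Analysis complete' },
    },
  ],
});

const payload = buildEmbedPayload(
  'Example title',
  'Example AI summary',
  'https://example.com'
);

// To send it:
//   fetch('WEBHOOK_URL', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(payload),
//   });
console.log(payload.embeds[0].color);
```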



&lt;h2&gt;
  
  
  Running Results
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbi4h6688xnhw6c92fyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbi4h6688xnhw6c92fyq.png" alt="Running results" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Module Reference and Data Flow
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Data Flow Through Modules:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Module 1 (Scrapeless Google Search)&lt;/strong&gt;: Returns &lt;code&gt;result.organic_results[]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module 14 (Iterator)&lt;/strong&gt;: Processes each result, outputs individual items&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module 13 (WebUnlocker)&lt;/strong&gt;: Scrapes &lt;code&gt;{{14.link}}&lt;/code&gt;, returns content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module 15 (Claude AI)&lt;/strong&gt;: Analyzes &lt;code&gt;{{13.content}}&lt;/code&gt;, returns summary&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module 16 (HTTP Webhook)&lt;/strong&gt;: Sends final structured data&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Key Mappings:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Iterator Array&lt;/strong&gt;: &lt;code&gt;{{1.result.organic_results}}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebUnlocker URL&lt;/strong&gt;: &lt;code&gt;{{14.link}}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Content&lt;/strong&gt;: &lt;code&gt;{{13.content}}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhook Data&lt;/strong&gt;: Combination of all previous modules&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Testing Your Workflow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Run once&lt;/strong&gt; to test the complete scenario&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check each module&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Google Search returns organic results&lt;/li&gt;
&lt;li&gt;Iterator processes each result individually&lt;/li&gt;
&lt;li&gt;WebUnlocker successfully scrapes content&lt;/li&gt;
&lt;li&gt;Claude provides meaningful analysis&lt;/li&gt;
&lt;li&gt;Webhook receives structured data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify data quality&lt;/strong&gt; in your webhook destination&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check scheduling&lt;/strong&gt;: ensure it runs at your preferred intervals&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Advanced Configuration Tips
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Error Handling
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add &lt;strong&gt;Error Handler&lt;/strong&gt; routes after each module&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Filters&lt;/strong&gt; to skip invalid URLs or empty content&lt;/li&gt;
&lt;li&gt;Set &lt;strong&gt;Retry&lt;/strong&gt; logic for temporary failures&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits of This Workflow
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fully Automated&lt;/strong&gt;: Runs daily without manual intervention&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Enhanced&lt;/strong&gt;: Content is analyzed and summarized automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Output&lt;/strong&gt;: Webhook can integrate with any system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable&lt;/strong&gt;: Processes multiple URLs efficiently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality Controlled&lt;/strong&gt;: Multiple filtering and validation steps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Notifications&lt;/strong&gt;: Immediate delivery to your preferred platform&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;Perfect for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Content Monitoring&lt;/strong&gt;: Track mentions of your brand or competitors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;News Aggregation&lt;/strong&gt;: Automated news summaries on specific topics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Research&lt;/strong&gt;: Monitor industry trends and developments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lead Generation&lt;/strong&gt;: Find and analyze potential business opportunities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SEO Monitoring&lt;/strong&gt;: Track search result changes for target keywords&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research Automation&lt;/strong&gt;: Gather and summarize academic or industry content&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This automated workflow combines the power of &lt;a href="https://www.scrapeless.com/en/product/deep-serp-api?utm_source=official&amp;amp;utm_medium=blog&amp;amp;utm_campaign=make-web-scraping" rel="noopener noreferrer"&gt;Scrapeless's Google Search&lt;/a&gt; and &lt;a href="https://www.scrapeless.com/en/product/deep-serp-api?utm_source=official&amp;amp;utm_medium=blog&amp;amp;utm_campaign=make-web-scraping" rel="noopener noreferrer"&gt;WebUnlocker&lt;/a&gt; with Claude AI's analysis capabilities, all orchestrated through Make's visual interface. The result is an intelligent content discovery system that runs automatically and delivers enriched, analyzed data directly to your preferred platform via webhook.&lt;/p&gt;

&lt;p&gt;The workflow will run on your schedule, automatically discovering, scraping, analyzing, and delivering relevant content insights without any manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time to build your first AI Agent on &lt;a href="https://www.make.com/en/integrations/scrapeless" rel="noopener noreferrer"&gt;Make using Scrapeless&lt;/a&gt;&lt;/strong&gt;!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Session URL: How to Ensure User Privacy During Human-Computer Interaction?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Tue, 27 May 2025 06:31:44 +0000</pubDate>
      <link>https://dev.to/scraper0024/session-url-how-to-ensure-user-privacy-during-human-computer-interaction-10n4</link>
      <guid>https://dev.to/scraper0024/session-url-how-to-ensure-user-privacy-during-human-computer-interaction-10n4</guid>
      <description>&lt;p&gt;Scrapeless Scraping Browser now fully supports automation tasks through Session-based workflows. Whether initiated via the Playground or API, all program executions can be synchronously tracked in the Dashboard.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Live View to monitor runtime status in real time.&lt;/li&gt;
&lt;li&gt;Share the Live URL for remote user interaction—such as login pages, form filling, or payment completion.&lt;/li&gt;
&lt;li&gt;Review the entire execution process with Session Replay.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But you might wonder:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;What exactly are these Session features? How do they benefit me? And how do I use them?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore Scrapeless Scraping Browser’s Session in depth, covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The concept and purpose of &lt;strong&gt;Live View&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;What the &lt;strong&gt;Live URL&lt;/strong&gt; is&lt;/li&gt;
&lt;li&gt;How to use Live URL for direct user interaction&lt;/li&gt;
&lt;li&gt;Why &lt;strong&gt;Session Replay&lt;/strong&gt; is essential&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Live View: Real-Time Program Monitoring
&lt;/h2&gt;

&lt;p&gt;The Live View feature in Scrapeless Scraping Browser allows you to track and control browser sessions in real time. Specifically, it enables you to observe clicks, inputs, and all browser actions, monitor automation workflows, debug scripts manually, and take direct control of the session if needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Browser Session
&lt;/h3&gt;

&lt;p&gt;First, you need to create a session. There are two ways to do this:&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 1: Create a session via Playground
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0ydx45gerccmua49sr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0ydx45gerccmua49sr1.png" alt="Create a session via Playground" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 2: Create a session via API
&lt;/h4&gt;

&lt;p&gt;You can also create a session using our API; refer to the &lt;a href="https://docs.scrapeless.com/en/scraping-browser/quickstart/getting-started/" rel="noopener noreferrer"&gt;Scraping Browser API Documentation&lt;/a&gt; for details. The Session feature helps you manage each session, including viewing it in real time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;puppeteer-core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;API Key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="c1"&gt;// custom fingerprint&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fingerprint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Windows&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URLSearchParams&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;session_ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;session_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test_scraping&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// session name&lt;/span&gt;
    &lt;span class="na"&gt;proxy_country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ANY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;fingerprint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;encodeURIComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fingerprint&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connectionURL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`wss://browser.scrapeless.com/browser?&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;browserWSEndpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;connectionURL&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://www.scrapeless.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://www.google.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://www.youtube.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
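If you prefer Python, the same connection URL can be assembled with the standard library alone. This is a minimal sketch mirroring the JavaScript example above; build_connection_url is a hypothetical helper name, not part of any Scrapeless SDK:

```python
import json
from urllib.parse import urlencode, quote

def build_connection_url(token, session_name, fingerprint,
                         session_ttl=180, proxy_country="ANY"):
    """Assemble the Scrapeless browser WebSocket URL.

    The fingerprint is JSON-serialized and percent-encoded, mirroring
    encodeURIComponent(JSON.stringify(fingerprint)) in the JavaScript example.
    """
    params = {
        "session_ttl": session_ttl,
        "session_name": session_name,
        "proxy_country": proxy_country,
        "token": token,
        "fingerprint": quote(json.dumps(fingerprint), safe=""),
    }
    return "wss://browser.scrapeless.com/browser?" + urlencode(params)
```

The resulting URL can then be passed as the browserWSEndpoint when connecting with pyppeteer or puppeteer-core.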



&lt;h3&gt;
  
  
  View Live Sessions
&lt;/h3&gt;

&lt;p&gt;In the Scrapeless session management interface, you can easily view live sessions:&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 1: View Live Sessions Directly in the Dashboard
&lt;/h4&gt;

&lt;p&gt;After creating a session in the Playground, you’ll see the live running session on the right side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwvq0642wgojk2aqrdwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwvq0642wgojk2aqrdwv.png" alt="live running session" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or, you can check session status on the &lt;a href="https://docs.scrapeless.com/en/scraping-browser/features/live-session/#view-live-session" rel="noopener noreferrer"&gt;&lt;strong&gt;Live Sessions&lt;/strong&gt;&lt;/a&gt; page.&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 2: View Session via Live URL
&lt;/h4&gt;

&lt;p&gt;A &lt;strong&gt;Live URL&lt;/strong&gt; is generated for a running session, allowing you to watch the process live in a browser.&lt;/p&gt;

&lt;p&gt;Live URLs are useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Debugging &amp;amp; Monitoring&lt;/strong&gt;: Watch everything in real time or share it with teammates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human Interaction&lt;/strong&gt;: Hand over control or input so the user can enter sensitive information, such as a password, securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can copy the Live URL by clicking the "🔗" icon on the Live Sessions page. Both Playground and API-created sessions support Live URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Live URL from the Dashboard&lt;/strong&gt;. See the tutorial in our &lt;a href="https://docs.scrapeless.com/en/scraping-browser/features/live-session/#on-site-display" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Live URL via API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can also retrieve the Live URL through API calls. The sample code below fetches all &lt;a href="https://apidocs.scrapeless.com/api-16890953" rel="noopener noreferrer"&gt;running sessions via the session API&lt;/a&gt;, then uses the &lt;a href="https://apidocs.scrapeless.com/api-16891208" rel="noopener noreferrer"&gt;Live URL API&lt;/a&gt; to retrieve the live view for a specific session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;API_CONFIG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.scrapeless.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-api-token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;API Key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_live_url&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;live_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;API_CONFIG&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/browser/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/live&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;API_CONFIG&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;live_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;failed to fetch live url: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;live_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;live_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;live_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;live_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;live_result&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;live_result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;taskId: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;liveUrl: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;live_result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;no live url data available for this task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error fetching live url for task &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_browser_sessions&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;session_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;API_CONFIG&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/browser/running&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;API_CONFIG&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;session_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;failed to fetch sessions: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;session_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;session_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;session_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session_response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="n"&gt;sessions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session_result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;no active browser sessions found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

        &lt;span class="n"&gt;task_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;taskId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;task id not found in the session data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch_live_url&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error fetching browser sessions: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;fetch_browser_sessions&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
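The session-selection step above (take the first running session and read its taskId) can be factored into a small, easily testable helper. A minimal sketch, with first_task_id as a hypothetical name and the same response shape as the session API returns:

```python
def first_task_id(session_result):
    """Return the taskId of the first running session, or None if unavailable."""
    sessions = session_result.get("data")
    if not isinstance(sessions, list) or not sessions:
        return None
    return sessions[0].get("taskId")
```

Keeping this logic separate from the HTTP calls makes it straightforward to unit-test against sample payloads before running it against live sessions.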



&lt;p&gt;&lt;strong&gt;Get Live URL via CDP command&lt;/strong&gt;. To obtain the Live URL while the code is running, use the CDP command &lt;a href="https://apidocs.scrapeless.com/doc-801748#6" rel="noopener noreferrer"&gt;&lt;code&gt;Agent.liveURL&lt;/code&gt;&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pyppeteer&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;launcher&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;launcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;browserWSEndpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser?token=APIKey&amp;amp;session_ttl=180&amp;amp;proxy_country=ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://www.scrapeless.com&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCDPSession&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Agent.liveURL&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Highlight Worth Mentioning:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Live URL enables not only real-time monitoring but also human-machine interaction.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example: You need the user to enter their login password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Oh no! Are you trying to steal my private info? No way!”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In fact, the user enters the data themselves on the shared screen, and everything remains &lt;strong&gt;100% private&lt;/strong&gt;. This &lt;strong&gt;direct yet secure&lt;/strong&gt; remote interaction is exactly what the Live URL enables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Live URL: How It Enables Collaboration and User Interaction
&lt;/h2&gt;

&lt;p&gt;Let’s take registering and logging into Scrapeless as an example and walk through how to interact directly with users.&lt;/p&gt;

&lt;p&gt;Here’s the code you'll need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;puppeteer-core&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fingerprint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// custom screen fingerprint&lt;/span&gt;
        &lt;span class="na"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1920&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// set windows size with same value to screen fingerprint&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;--window-size&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1920,1080&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URLSearchParams&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;APIKey&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;session_ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;proxy_country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ANY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;fingerprint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;encodeURIComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fingerprint&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browserWsEndpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`wss://browser.scrapeless.com/browser?&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;browserWSEndpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;browserWsEndpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setViewport&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://app.scrapeless.com/passport/register`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;120000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;domcontentloaded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCDPSession&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Agent.liveURL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// you can share the live url to any user&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;liveURL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// wait for 5 minutes for user registration&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitForSelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#none-existing-selector&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the above and share the Live URL with the user, such as: &lt;a href="https://devtools.scrapeless.com/inspector?wss=browser.scrapeless.com/live/MTlkNjNiNWUtNjI4MS00NWYwLWE2NDAtMGUwMzhmN2IzMWYx/page/QkMzMTY5RDQyMTkzOUQ1N0Y3NjMxMjJGRUEwRTU3MTk=" rel="noopener noreferrer"&gt;&lt;strong&gt;Scrapeless Registration URL&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Every step leading up to this point, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigating to the website&lt;/li&gt;
&lt;li&gt;Visiting Scrapeless homepage&lt;/li&gt;
&lt;li&gt;Clicking login and entering the registration page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these can be done directly by creating a session using the above code. The most critical step is that &lt;strong&gt;the user needs to enter their email and password to complete the registration&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After you share the Live URL with the user, you can remotely track the program's execution. The program runs automatically and navigates until it reaches the page that requires user interaction. Any password the user enters is completely hidden, so there is no risk of password leakage.&lt;/p&gt;

&lt;p&gt;To illustrate the user's side of the process more intuitively, refer to the following interaction steps:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The following interaction takes place entirely within the Live URL&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.scrapeless.com%2Fprod%2Fposts%2Fsession-url%2F3c157ed8e05744d702dc60c50cd1717d.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.scrapeless.com%2Fprod%2Fposts%2Fsession-url%2F3c157ed8e05744d702dc60c50cd1717d.gif" alt="Live-URL interaction steps" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Session Replay: Replay Program Execution to Debug Everything
&lt;/h2&gt;

&lt;p&gt;Session Replay is a video-like recreation of a user session built using the Recording Library. Replays are created based on snapshots of the web application DOM state (the HTML representation in the browser's memory). When you replay each snapshot, you'll see a record of the actions taken during the entire session: including all page loads, refreshes, and navigations that occurred during your visit to the website.&lt;/p&gt;
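&lt;p&gt;Conceptually, a replay is just an ordered sequence of timestamped DOM snapshots played back on a timeline. The toy sketch below illustrates that idea only; it is not the actual Scrapeless Recording Library format.&lt;/p&gt;

```python
import time

class SessionRecorder:
    """Toy recorder: stores timestamped DOM snapshots for later replay.

    Illustrative only -- the real Recording Library captures far richer
    data (DOM mutations, page loads, refreshes, navigations).
    """

    def __init__(self):
        self.snapshots = []  # list of (timestamp, dom_state) tuples

    def capture(self, dom_state):
        # Record the current DOM state together with when it was seen.
        self.snapshots.append((time.time(), dom_state))

    def replay(self):
        # Yield snapshots in the order they were captured.
        for timestamp, dom_state in self.snapshots:
            yield timestamp, dom_state

recorder = SessionRecorder()
recorder.capture({"url": "https://example.com", "title": "Home"})
recorder.capture({"url": "https://example.com/login", "title": "Login"})

# Replaying walks through every recorded page state in order.
titles = [dom["title"] for _, dom in recorder.replay()]
print(titles)  # prints ['Home', 'Login']
```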

&lt;p&gt;Session Replay can help you troubleshoot all aspects of your program's operation. All page operations will be recorded and saved as a video. If you find any problems during the session, you can troubleshoot and adjust them through replay.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to Sessions&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Session History&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Locate the session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvuffc4ktezbdjcucm4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvuffc4ktezbdjcucm4u.png" alt="Locate the target session" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the session details, click the Play button to watch and review execution:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vi0m47t0lv8e5um2vci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vi0m47t0lv8e5um2vci.png" alt="session replay" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Scrapeless Scraping Browser lets you &lt;strong&gt;monitor in real time, interact remotely, and replay every step.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.scrapeless.com/en/scraping-browser/features/live-session/" rel="noopener noreferrer"&gt;&lt;strong&gt;Live View&lt;/strong&gt;&lt;/a&gt;: Watch browser activity like a live stream. See every click and input!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live URL&lt;/strong&gt;: Generate a shareable link where users can input their data directly. Fully private, completely secure.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.scrapeless.com/en/scraping-browser/features/session-replay/" rel="noopener noreferrer"&gt;&lt;strong&gt;Session Replay&lt;/strong&gt;&lt;/a&gt;: Debug like a pro by replaying exactly what happened — no need to rerun the program.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you’re a developer debugging, a PM demoing, or customer support guiding a user — Scrapeless Sessions have your back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s time to make automation smart and human-friendly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=session-url" rel="noopener noreferrer"&gt;&lt;strong&gt;Start your free trial now!&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AI Agent vs. Agentic AI: Data-driven Intelligent Evolution and Scrapeless Solutions</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Tue, 20 May 2025 07:36:38 +0000</pubDate>
      <link>https://dev.to/scraper0024/ai-agent-vs-agentic-ai-data-driven-intelligent-evolution-and-scrapeless-solutions-47ml</link>
      <guid>https://dev.to/scraper0024/ai-agent-vs-agentic-ai-data-driven-intelligent-evolution-and-scrapeless-solutions-47ml</guid>
      <description>&lt;p&gt;In the rapidly developing field of artificial intelligence, two concepts are gaining increasing attention: AI agents and Agentic AI. While both are dedicated to automating tasks and enhancing decision-making capabilities, they differ in architecture, functionality, and application scenarios.&lt;/p&gt;

&lt;p&gt;In this blog post, we will take a deep dive into the meaning of AI agents and Agentic AI, their differences, applicable scenarios, and advantages. In addition, we will explore how Scraping Browser can be an effective solution for both.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the AI Agent?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AI Agents are single-entity systems&lt;/strong&gt;. They are augmented with Large Language Models (LLMs) and external tools. These agents are designed for task-specific autonomy, meaning they can perform a particular set of tasks with a high degree of independence. They are modular and reactive, and typically operate within narrow domains such as email triage, scheduling, and basic customer support.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Agentic AI?
&lt;/h2&gt;

&lt;p&gt;Agentic AI represents a paradigm shift. It involves multiple collaborating agents working together. These agents have features like dynamic task decomposition, persistent memory, and orchestration layers. Agentic AI is built to handle complex, multi-step workflows, such as coordinated research assistance, ICU decision-making support, and robotic orchard harvesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Agent vs. Agentic AI: Key Differences
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Comparison Dimension&lt;/th&gt;
&lt;th&gt;AI Agents&lt;/th&gt;
&lt;th&gt;Agentic AI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Definition &amp;amp; Core Concept&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Single-entity systems based on LLMs and external tools, focused on task-specific autonomy (e.g., email classification, scheduling).&lt;/td&gt;
&lt;td&gt;Multi-agent collaborative systems with dynamic task decomposition, persistent memory, and orchestration layers, designed for complex multi-step workflows.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Autonomy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High autonomy within specific tasks.&lt;/td&gt;
&lt;td&gt;Greater autonomy across multi-step, cross-domain tasks.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Task Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Single, narrow tasks (e.g., report summarization).&lt;/td&gt;
&lt;td&gt;Complex, interdependent tasks (e.g., ICU decision support, robotic orchard harvesting).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Collaboration Capability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Operate independently without collaboration.&lt;/td&gt;
&lt;td&gt;Multi-agent collaboration and information sharing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Adaptability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Learn within a fixed domain.&lt;/td&gt;
&lt;td&gt;Learn and adapt across environments.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory Mechanism&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Session-level or limited memory.&lt;/td&gt;
&lt;td&gt;Persistent and shared memory system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Typical Use Cases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Email filtering, customer support chatbots, content recommendation.&lt;/td&gt;
&lt;td&gt;Collaborative research assistant, adaptive game AI, coordinated robot control.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Core Challenges&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fragile long-term planning, hallucinations, limited causal reasoning.&lt;/td&gt;
&lt;td&gt;Error cascades between agents, emerging instabilities, low communication transparency, ethical governance issues.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Future Directions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Proactive intelligence, continual learning, enhanced trust and security.&lt;/td&gt;
&lt;td&gt;Multi-agent expansion, simulation-based planning, ethical frameworks, domain-specific optimization.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Scraping Browser: Effective Solution for AI Agent and Agentic AI
&lt;/h2&gt;

&lt;p&gt;The AI landscape is shifting—Agentic AI pushes autonomy and adaptability, while AI Agents stick to structured execution. As mentioned above, both AI Agents and Agentic AI face their own inherent challenges. How can you ensure high accuracy and success rates during execution? Scrapeless Scraping Browser can be your best choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the Scraping Browser?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.scrapeless.com/en/product/scraping-browser?utm_souce=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=ai-agent-vs-agentic-ai" rel="noopener noreferrer"&gt;Scrapeless Scraping Browser&lt;/a&gt; is a high-concurrency automation solution for AI. A high-concurrency, low-cost, and anti-ban browser platform designed for large-scale data crawling, highly anthropomorphic.&lt;br&gt;
Scraping Browser is a browser automation tool based on a cloud-based serverless architecture, designed to solve the three core problems of &lt;strong&gt;high-concurrency&lt;/strong&gt; bottlenecks, &lt;strong&gt;anti-bot&lt;/strong&gt; avoidance, and &lt;strong&gt;cost control in dynamic web crawling&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core functions:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Stealth Mode support&lt;/strong&gt;: customize fingerprint parameters such as User-Agent, device information, locale, operating system, screen size, and language to simulate real user devices; an integrated CAPTCHA solver; Node.js and Python SDKs; and an advanced incognito mode implemented through the Scrapeless Chromium kernel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global proxy and IP management&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;70 million+ residential IPs covering 195 countries, with automatic IP rotation, routing by the target website's geographic location, and manual selection of the target country/region.&lt;/li&gt;
&lt;li&gt;Transparent proxy cost: $1.26–$1.80/GB (compared to competitors' $9.5+/GB), with support for self-provided proxies.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic CAPTCHA solving&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Built-in solutions: real-time handling of reCAPTCHA, Cloudflare Turnstile/Challenge, AWS WAF, DataDome, and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.scrapeless.com/en/blog/session-replay?utm_souce=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=ai-agent-vs-agentic-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Session replay&lt;/strong&gt;&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Interactive human-in-the-loop debugging: intuitively test failures and errors, analyze user behavior and interaction through the Live URL, and monitor and optimize proxy traffic in real time.&lt;/li&gt;
&lt;li&gt;Replay the session through Session Replay to fully check the executed operations and network requests.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
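&lt;p&gt;To see how these options come together, here is a minimal Python sketch that assembles a connection URL for the cloud browser. The endpoint and parameter names (&lt;code&gt;token&lt;/code&gt;, &lt;code&gt;session_ttl&lt;/code&gt;, &lt;code&gt;proxy_country&lt;/code&gt;, &lt;code&gt;fingerprint&lt;/code&gt;) follow Scrapeless's public Puppeteer examples; treat this as an illustration, not the full API.&lt;/p&gt;

```python
import json
from urllib.parse import urlencode, quote

def build_connection_url(api_key, proxy_country="ANY", session_ttl=600):
    # Custom fingerprint: keep window size in sync with the screen fingerprint.
    fingerprint = {
        "screen": {"width": 1920, "height": 1080},
        "args": {"--window-size": "1920,1080"},
    }
    query = urlencode({
        "token": api_key,
        "session_ttl": session_ttl,
        "proxy_country": proxy_country,
        # The fingerprint object is JSON-serialized, then URL-encoded,
        # mirroring encodeURIComponent(JSON.stringify(...)) in the JS examples.
        "fingerprint": quote(json.dumps(fingerprint)),
    })
    return "wss://browser.scrapeless.com/browser?" + query

url = build_connection_url("APIKey")
print(url.split("?")[0])  # prints wss://browser.scrapeless.com/browser
```

&lt;p&gt;The resulting URL is what you would hand to a CDP-capable client, such as Puppeteer's &lt;code&gt;connect&lt;/code&gt; or Playwright's &lt;code&gt;connect_over_cdp&lt;/code&gt;.&lt;/p&gt;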

&lt;h2&gt;
  
  
  Why Scrapeless Scraping Browser works well for AI Agent and Agentic AI?
&lt;/h2&gt;

&lt;p&gt;Through a deeply customized Chromium kernel and a global proxy network, Scraping Browser can simulate highly human-like user behavior in headless mode and run hundreds or even thousands of browser instances in parallel. Whether it is an AI Agent performing a single task or an Agentic AI system coordinating multi-task workflows, Scrapeless enables more stable, efficient, and scalable data interaction.&lt;/p&gt;

&lt;p&gt;The tool supports one-click access to mainstream automation frameworks such as Puppeteer and Playwright, and works out of the box. Through built-in CAPTCHA solving, dynamic fingerprint control, and session replay mechanisms, it not only helps AI systems improve task success rates, but also provides key support for debugging and behavior analysis. For building intelligent systems with real web perception capabilities, Scraping Browser is a key bridge connecting AI and the real network world.&lt;/p&gt;

&lt;p&gt;Scrapeless Scraping Browser provides powerful data crawling and anti-blocking functions to help AI Agents complete complex browser automation tasks. It supports multi-task parallel processing and is an ideal tool for building intelligent agent systems and AI-driven applications. Users do not need to build automation infrastructure from scratch, just focus on AI applications, and Scrapeless can easily handle all complex problems.&lt;/p&gt;

&lt;p&gt;In addition, to make Scraping Browser better fit AI tools and agent services, Scrapeless integrates cloud-hosted AI Agent solutions such as Browser Use and Computer Use, and integrates AI frameworks such as LangChain to enable highly autonomous operation workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;AI agents and Agentic AI are two completely different but powerful concepts in the field of artificial intelligence. They each have unique characteristics, applicable scenarios and advantages, but also face challenges in data access and browser automation.&lt;/p&gt;

&lt;p&gt;Scrapeless Scraping Browser has become an ideal solution for AI Agents and Agentic AI thanks to its high concurrency, anti-blocking, and cost-effective features, as well as its seamless integration with AI frameworks. As the field of AI continues to develop, scraping browsers will play a key role in helping these AI systems get the data they need to achieve optimal performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.scrapeless.com/en/blog/session-replay?utm_souce=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=ai-agent-vs-agentic-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Don't miss our free trial!&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>learning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Browser Use &amp; Scraping Browser: Achieving Maximum Effectiveness of AI Agent</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Mon, 12 May 2025 09:58:28 +0000</pubDate>
      <link>https://dev.to/scraper0024/browser-use-scraping-browser-achieving-maximum-effectiveness-of-ai-agent-32fo</link>
      <guid>https://dev.to/scraper0024/browser-use-scraping-browser-achieving-maximum-effectiveness-of-ai-agent-32fo</guid>
      <description>&lt;p&gt;&lt;a href="https://www.scrapeless.com/en/product/scraping-browser?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scraping Browser&lt;/a&gt; has become the go-to tool for daily data extraction and automation tasks. By integrating Browser-Use with &lt;a href="https://www.scrapeless.com/en?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scrapeless&lt;/a&gt; Scraping Browser, you can overcome browser automation limitations and avoid blocks.&lt;/p&gt;

&lt;p&gt;In this article, we’ll build an automated AI Agent tool using Browser-Use and Scrapeless Scraping Browser to perform automated data scraping. You’ll see how it saves you time and effort, making automation tasks a breeze!&lt;/p&gt;

&lt;p&gt;You will learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is Browser-Use, and how does it help build AI agents?&lt;/li&gt;
&lt;li&gt;Why can Scraping Browser effectively overcome the limitations of Browser-Use?&lt;/li&gt;
&lt;li&gt;How to build a block-free AI agent using Browser-Use and Scraping Browser?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Is Browser-Use?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Browser-Use&lt;/strong&gt; is a Python-based AI browser automation library designed to empower AI agents with advanced browser automation capabilities. It can recognize all interactive elements on a webpage and allows agents to interact with the page programmatically—performing common tasks like search, clicking, form-filling, and data scraping. At its core, Browser-Use converts websites into structured text and supports browser frameworks like Playwright, greatly simplifying web interactions.&lt;/p&gt;

&lt;p&gt;Unlike traditional automation tools, Browser-Use combines visual understanding with HTML structure parsing, allowing AI agents to control the browser using natural language instructions. This makes the AI more intelligent in perceiving page content and efficiently executing tasks. Additionally, it supports multi-tab management, element interaction tracking, custom action handling, and built-in error recovery mechanisms to ensure the stability and consistency of automation workflows.&lt;/p&gt;

&lt;p&gt;More importantly, Browser-Use is compatible with all major large language models (such as GPT-4, Claude 3, Llama 2). With LangChain integration, users can simply describe tasks in natural language, and the AI agent will complete complex web operations. For users seeking AI-driven web interaction automation, this is a powerful and promising tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of Browser-Use in AI Agent Development
&lt;/h2&gt;

&lt;p&gt;As mentioned above, Browser-Use doesn’t work like a magic wand from Harry Potter. Instead, it combines visual input with AI control to automate browsers using Playwright.&lt;/p&gt;

&lt;p&gt;Browser-Use inevitably comes with some drawbacks, but these limitations do not stem from the automation framework itself. Rather, &lt;strong&gt;they arise from the browsers it controls&lt;/strong&gt;. Tools like Playwright launch browsers with specific configurations and tools for automation, which can also be exposed to anti-bot detection systems.&lt;/p&gt;

&lt;p&gt;As a result, your AI agent may frequently encounter CAPTCHA challenges or blocked pages such as “Sorry, something went wrong on our end.” To unlock the full potential of Browser-Use, thoughtful adjustments are required. The ultimate goal is to avoid triggering anti-bot systems to ensure your AI automation runs smoothly.&lt;/p&gt;

&lt;p&gt;After extensive testing, we can confidently say: Scraping Browser is the most effective solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Scrapeless Scraping Browser?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.scrapeless.com/en/product/scraping-browser?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scraping Browser&lt;/a&gt; is a cloud-based, serverless browser automation tool designed to solve three core problems in dynamic web scraping: &lt;strong&gt;high concurrency bottlenecks, anti-bot evasion, and cost control&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It consistently provides a high-concurrency, anti-blocking headless browser environment to help developers easily scrape dynamic content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It comes with a global proxy IP pool and fingerprinting technology, capable of automatically solving CAPTCHA and bypassing blocking mechanisms.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Built specifically for AI developers, Scrapeless Scraping Browser features a deeply customized Chromium core and a globally distributed proxy network. Users can seamlessly run and manage multiple headless browser instances to build AI applications and agents that interact with the web. It eliminates the constraints of local infrastructure and performance bottlenecks, allowing you to fully focus on building your solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do Browser-Use and Scraping Browser Work Together?
&lt;/h2&gt;

&lt;p&gt;When combined, developers can use Browser-Use to orchestrate browser operations while relying on Scrapeless’s stable cloud service and powerful anti-blocking capabilities to reliably acquire web data.&lt;/p&gt;

&lt;p&gt;Browser-Use offers simple APIs that allow AI agents to “understand” and interact with web content. For example, it can use LLMs like OpenAI or Anthropic to interpret task instructions and complete actions such as searches or link clicks in the browser via Playwright.&lt;/p&gt;

&lt;p&gt;Scrapeless’s Scraping Browser complements this setup by addressing its weaknesses. When dealing with large websites with strict anti-bot measures, its high-concurrency proxy support, CAPTCHA solving, and browser emulation mechanisms ensure stable scraping.&lt;/p&gt;

&lt;p&gt;In summary, Browser-Use handles intelligence and task orchestration, while Scrapeless provides a robust scraping foundation, making automated browser tasks more efficient and reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Integrate a Scraping Browser with Browser-Use?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1. Get Scrapeless API Key
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Register and log in to the &lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;&lt;strong&gt;Scrapeless Dashboard&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to "&lt;strong&gt;Settings&lt;/strong&gt;".&lt;/li&gt;
&lt;li&gt;Click "&lt;strong&gt;API Key Management&lt;/strong&gt;".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz830po6t88fdu4k7es04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz830po6t88fdu4k7es04.png" alt="Scrapeless API Key" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then copy the key and set the &lt;code&gt;SCRAPELESS_API_KEY&lt;/code&gt; environment variable in your .env file.&lt;/p&gt;

&lt;p&gt;To enable AI features in Browser-Use, you need a valid API key from an external AI provider. In this example, we will use OpenAI. If you haven't generated an API key yet, follow OpenAI's official guide to create one.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; environment variable is required in your .env file too.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Disclaimer: the following steps focus on integrating OpenAI, but you can adapt them to your needs; just make sure to use an AI provider supported by Browser-Use.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=your-openai-api-key
SCRAPELESS_API_KEY=your-scrapeless-api-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡Remember to replace the sample API key with your actual API key&lt;/p&gt;

&lt;p&gt;Next, import &lt;code&gt;ChatOpenAI&lt;/code&gt; from &lt;code&gt;langchain_openai&lt;/code&gt; in your agent program:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_openai import ChatOpenAI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that Browser-Use relies on LangChain to handle AI integration. Therefore, even if you haven't explicitly installed &lt;code&gt;langchain_openai&lt;/code&gt; in your project, it is already available for use.&lt;/p&gt;
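&lt;p&gt;If you want to verify this without running the agent, you can check for the package with the standard library. This is a small illustrative sketch, not part of the original tutorial:&lt;/p&gt;

```python
import importlib.util

# True when langchain_openai is importable, whether you installed it directly
# or it was pulled in as a Browser-Use dependency.
available = importlib.util.find_spec("langchain_openai") is not None
print("langchain_openai available:", available)
```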

&lt;p&gt;The following sets up the OpenAI integration with the &lt;code&gt;gpt-4o&lt;/code&gt; model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;llm = ChatOpenAI(model="gpt-4o")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No additional configuration is required. This is because &lt;code&gt;langchain_openai&lt;/code&gt; automatically reads the API key from the environment variable &lt;code&gt;OPENAI_API_KEY&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For integration with other AI models or providers, see the official Browser-Use documentation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 2. Install Browser Use
&lt;/h3&gt;

&lt;p&gt;Install with pip (requires Python 3.11 or later):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;browser-use
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For memory functionality (requires Python&amp;lt;3.13 due to PyTorch compatibility):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;"browser-use[memory]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3. Set up Browser and Agent Configuration
&lt;/h3&gt;

&lt;p&gt;Here’s how to configure the browser and create an automation agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urlencode&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BrowserConfig&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStr&lt;/span&gt;

&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Go to Google, search for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Scrapeless&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, click on the first post and return to the title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;SCRAPELESS_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;query_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proxy_country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;urlencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BrowserConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cdp_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Or choose the model you want to use
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SecretStr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4. Create the Main Function
&lt;/h3&gt;

&lt;p&gt;Here’s the main function that puts everything together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5. Run your script
&lt;/h3&gt;

&lt;p&gt;Execute the script from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python run main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your Scrapeless session start in the &lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scrapeless Dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition, Scrapeless supports &lt;a href="https://www.scrapeless.com/en/blog/session-replay?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;session replay&lt;/a&gt;, which enables program visualization. Before running the program, make sure you have enabled the Web Recording function. When the session is completed, you can see the record directly on the Dashboard to help you quickly troubleshoot problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2t2pmxqwv0w6qlfxbi4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2t2pmxqwv0w6qlfxbi4s.png" alt="session replay" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full Code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urlencode&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BrowserConfig&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStr&lt;/span&gt;

&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Go to Google, search for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Scrapeless&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, click on the first post and return to the title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;SCRAPELESS_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;query_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proxy_country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;urlencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BrowserConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cdp_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Or choose the model you want to use
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SecretStr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡Browser Use currently only supports Python.&lt;/p&gt;

&lt;p&gt;💡You can copy the URL in &lt;strong&gt;live session&lt;/strong&gt; to watch the session's progress in real-time, and you can also watch a replay of the session in &lt;strong&gt;session history&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6. Running Results
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;done&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;The title of the first search result clicked is: 'Effortless Web Scraping Toolkit - Scrapeless'.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;success&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68cos9bl6oppt8o69qgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68cos9bl6oppt8o69qgn.png" alt="Running results" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, the Browser Use Agent will automatically open the URL and print the page title: “&lt;em&gt;Scrapeless: Effortless Web Scraping Toolkit&lt;/em&gt;” (this is an example of the title on Scrapeless’s official homepage).&lt;/p&gt;

&lt;p&gt;The entire execution process can be viewed in the Scrapeless console under the "Dashboard" → "&lt;strong&gt;Session&lt;/strong&gt;" → "&lt;strong&gt;Session History&lt;/strong&gt;" page, where you’ll see the details of the recently executed session.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7. Exporting the Results
&lt;/h3&gt;

&lt;p&gt;For team sharing and archiving purposes, we can save the scraped information into a JSON or CSV file. For example, the following code snippet shows how to write the title results into a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pathlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;save_to_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mkdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ensure_ascii&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;save_to_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;model_dump&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scrapeless_update_report.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above demonstrates how to open a file and write content in JSON format, including the search keywords, links, and page titles. The generated &lt;code&gt;scrapeless_update_report.json&lt;/code&gt; file can be shared internally through a company knowledge base or collaboration platform, making it easy for team members to view the scraping results. For plain text format, you can simply change the extension to .txt and use basic text output methods instead.&lt;/p&gt;
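&lt;p&gt;If your team prefers spreadsheets, the same result can be written as CSV using only the standard library. This is a minimal sketch; the column names and example row below are illustrative assumptions, not output produced by the agent:&lt;/p&gt;

```python
import csv
from pathlib import Path

def save_to_csv(rows, filename):
    """Write a list of dicts to a CSV file, creating parent folders as needed."""
    path = Path(filename)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical row shape for the search task in this tutorial.
save_to_csv(
    [{"query": "Scrapeless", "title": "Effortless Web Scraping Toolkit - Scrapeless"}],
    "scrapeless_update_report.csv",
)
```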

&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;By using Scrapeless’s &lt;a href="https://www.scrapeless.com/en/product/scraping-browser?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scraping Browser&lt;/a&gt; service in combination with the Browser Use AI agent, we can easily build an automated system for information retrieval and reporting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scrapeless provides a stable and efficient cloud-based scraping solution that can handle complex anti-scraping mechanisms.&lt;/li&gt;
&lt;li&gt;Browser Use allows the AI agent to intelligently control the browser to perform tasks such as search, click, and extract.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This integration enables developers to offload tedious web data collection tasks to automated agents, significantly improving research efficiency while ensuring accuracy and real-time results.&lt;/p&gt;

&lt;p&gt;Scrapeless’s Scraping Browser helps AI avoid network blocks while retrieving real-time search data and ensures operational stability. Combined with Browser Use’s flexible strategy engine, we’re able to build a more powerful AI automation research tool that offers strong support for smart business decision-making. This toolset enables AI agents to "query" web content as if they were interacting with a database, greatly reducing the cost of manual competitor monitoring and improving the efficiency of R&amp;amp;D and marketing teams.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>Scrapeless Session Replay - Visualize Your Programs!</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Mon, 12 May 2025 03:31:45 +0000</pubDate>
      <link>https://dev.to/scraper0024/scrapeless-session-replay-visualize-your-programs-26p9</link>
      <guid>https://dev.to/scraper0024/scrapeless-session-replay-visualize-your-programs-26p9</guid>
      <description>&lt;p&gt;Session Replay is now live on &lt;a href="https://www.scrapeless.com/en?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utmcampaign=session-replay" rel="noopener noreferrer"&gt;Scrapeless&lt;/a&gt;! This groundbreaking feature unifies Live View, and Session Recording into one seamless experience—offering a cutting-edge visual analysis capability for your browser automation workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Session Replay?
&lt;/h2&gt;

&lt;p&gt;Session Replay is a video-like playback of user sessions, constructed using smart DOM snapshot technology rather than traditional screen recording. It captures your session at the DOM level—based on the browser's internal HTML structure—so you can accurately revisit every interaction, including page loads, refreshes, and navigations.&lt;/p&gt;

&lt;p&gt;Unlike conventional video recordings, Scrapeless leverages intelligent DOM snapshots to capture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every page load, refresh, and navigation&lt;/li&gt;
&lt;li&gt;User interactions such as clicks, scrolls, and inputs&lt;/li&gt;
&lt;li&gt;Dynamic DOM changes in real-time&lt;/li&gt;
&lt;li&gt;Full request/response network traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This innovative recording method allows you to travel back in time and see the exact context of what happened before, during, and after an issue. It eliminates guesswork and helps you reproduce, debug, and solve problems with complete clarity—just like having DevTools running during every session.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Choose Scrapeless Session Replay?
&lt;/h2&gt;

&lt;p&gt;During data extraction and automation testing, developers often face the "black box dilemma"—not knowing what truly happened inside the browser. Scrapeless Session Replay changes that by providing a visual session management system that requires no additional code and works directly from a user-friendly dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ Real-Time Recording – Automatically logs all network requests during script execution&lt;/p&gt;

&lt;p&gt;✅ Frame-by-Frame Playback – Rewind browser actions with precision&lt;/p&gt;

&lt;p&gt;✅ Team Collaboration – Share session recordings easily for group debugging&lt;/p&gt;

&lt;p&gt;✅ Millisecond Accuracy – Inspect event-level timestamps for enhanced script tuning&lt;/p&gt;

&lt;p&gt;✅ Secure Isolation – Session data is encrypted and protected with fine-grained access control&lt;/p&gt;

&lt;p&gt;✅ Lightweight Format – Thanks to rrweb-powered DOM diffing, recording files are about 90% smaller than equivalent video files&lt;/p&gt;
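&lt;p&gt;To illustrate why DOM diffing keeps recordings so small, here is a minimal Python sketch (purely illustrative, not rrweb's actual implementation) comparing the cost of storing full snapshots against storing only the changes between frames:&lt;/p&gt;

```python
# Illustrative sketch only: why storing DOM diffs is far smaller than
# storing a full snapshot per frame. The flat dicts below stand in for
# real DOM snapshots.
import json

def diff(prev, curr):
    """Keep only the keys whose values changed between two flat snapshots."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

# Three hypothetical "DOM snapshots" of a page over time.
snapshots = [
    {"title": "Home", "button": "Buy", "price": "10"},
    {"title": "Home", "button": "Buy", "price": "12"},  # only price changed
    {"title": "Cart", "button": "Pay", "price": "12"},
]

# Full recording stores every snapshot; diff recording stores the first
# snapshot plus only the changes for each subsequent frame.
full_bytes = sum(len(json.dumps(s)) for s in snapshots)
delta_frames = [snapshots[0]] + [diff(a, b) for a, b in zip(snapshots, snapshots[1:])]
delta_bytes = sum(len(json.dumps(d)) for d in delta_frames)
print(delta_bytes, "bytes as diffs vs", full_bytes, "bytes as full snapshots")
```

&lt;p&gt;The more static the page is between frames, the bigger the savings, which is why DOM-level recording scales so much better than pixel video.&lt;/p&gt;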




&lt;h2&gt;
  
  
  Features Breakdown
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔍 Live Session Management
&lt;/h3&gt;

&lt;p&gt;👍 &lt;strong&gt;Interactive monitoring dashboard&lt;/strong&gt;. Instantly view and manage any running session:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Locate the target session in the session list&lt;/li&gt;
&lt;li&gt;Session information – ID, uptime, and resource usage&lt;/li&gt;
&lt;li&gt;Live preview – a real-time view of the browser&lt;/li&gt;
&lt;li&gt;Stop the session instantly with the stop button&lt;/li&gt;
&lt;li&gt;Adjust the session's content in the playground&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  📼 Historical Session Replay
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Precision Search&lt;/strong&gt;. Search sessions by Session ID or Session Name to group and locate them easily.
&amp;gt; Session ID is unique, while Session Name can be customized as needed, effectively serving as session grouping.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Recording&lt;/strong&gt;. Automatically logs every browser action during a session and stores it as a playable recording.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Video Playback&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Enable Web Record to activate replay&lt;/li&gt;
&lt;li&gt;Click any session log to enter the detail view&lt;/li&gt;
&lt;li&gt;Watch auto-generated recordings&lt;/li&gt;
&lt;li&gt;Support full screen playback&lt;/li&gt;
&lt;li&gt;Use the playback controller to:

&lt;ul&gt;
&lt;li&gt;Play / Pause&lt;/li&gt;
&lt;li&gt;Scrub the timeline&lt;/li&gt;
&lt;li&gt;Adjust speed (1x, 2x, 4x, 8x)&lt;/li&gt;
&lt;/ul&gt;
&amp;gt; ⚠️ Session history is stored for 15 days. Older records will be auto-deleted.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart Status Indicators&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;✅ Successful sessions: Green label + “Completed”. You can play the recording in session history.&lt;/li&gt;
&lt;li&gt;❌ Failed sessions: Red label + “Error” status&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Common Session Failure Reasons&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Browser launch failure (e.g., invalid parameters or environment issues)&lt;/li&gt;
&lt;li&gt;Scheduling timeout&lt;/li&gt;
&lt;li&gt;Session exceeded maximum allowed time and didn’t complete&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Experience the Power of Session Replay Today
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scrapeless Session Replay&lt;/strong&gt; turns abstract script execution into a visual record, helping developers achieve the following in complex scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rapidly reproduce problems, such as dynamic content failing to load or anti-scraping mechanisms being triggered&lt;/li&gt;
&lt;li&gt;Optimize strategies precisely: adjust request frequency, interaction logic, or fingerprint parameters based on the playback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From eCommerce scraping to social media monitoring and search engine data extraction, this tool is indispensable for debugging and analysis in high-stakes environments.&lt;/p&gt;

&lt;p&gt;👉 Visit our &lt;a href="https://docs.scrapeless.com/en/scraping-browser/quickstart/introduction/" rel="noopener noreferrer"&gt;&lt;strong&gt;Documentation Center&lt;/strong&gt;&lt;/a&gt; to explore advanced usage, or contact our technical consultants for more usage details.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>ai</category>
      <category>devops</category>
      <category>news</category>
    </item>
    <item>
      <title>How to Integrate Browser Use into Scrapless Scraping Browser?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Thu, 08 May 2025 15:29:56 +0000</pubDate>
      <link>https://dev.to/scraper0024/how-to-integrate-browser-use-into-scrapless-scraping-browser-4d28</link>
      <guid>https://dev.to/scraper0024/how-to-integrate-browser-use-into-scrapless-scraping-browser-4d28</guid>
      <description>&lt;p&gt;Browser Use is a browser automation SDK that uses screenshots to capture the state of the browser and actions to simulate user interactions. This chapter will introduce how you can easily use browser-use to execute agent tasks on the Web with simple calls&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Scrapeless API Key
&lt;/h2&gt;

&lt;p&gt;Head to the &lt;a href="https://app.scrapeless.com/?utm_source=dev-to&amp;amp;utm_medium=integration&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;&lt;strong&gt;Dashboard’s Settings tab&lt;/strong&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv39zvf6bhkuhuovj0grc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv39zvf6bhkuhuovj0grc.png" alt="Get Scrapeless API Key" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then copy it and set the &lt;code&gt;SCRAPELESS_API_KEY&lt;/code&gt; environment variable in your .env file.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; environment variable is required in your .env file as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=your-openai-api-key
SCRAPELESS_API_KEY=your-scrapeless-api-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Remember to replace the sample API key with your actual API key&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
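&lt;p&gt;If you are curious what loading a .env file actually does, here is a minimal stdlib-only sketch of the behavior (use the real python-dotenv package in practice; the &lt;code&gt;load_env&lt;/code&gt; helper and its parsing rules are simplified assumptions, not python-dotenv's API):&lt;/p&gt;

```python
# Minimal, stdlib-only sketch of what loading a .env file does for simple
# KEY=value lines. Use the real python-dotenv package in practice; this
# simplified load_env helper is an illustrative assumption, not its API.
import os
from pathlib import Path

def load_env(path=".env"):
    """Parse KEY=value lines, skipping blanks and comments."""
    p = Path(path)
    if not p.exists():
        return
    for line in p.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # Do not clobber variables already set in the environment.
            os.environ.setdefault(key.strip(), value.strip())

load_env()  # afterwards os.environ.get("SCRAPELESS_API_KEY") is available
```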

&lt;h2&gt;
  
  
  Install Browser Use
&lt;/h2&gt;

&lt;p&gt;With pip (Python&amp;gt;=3.11):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;browser-use
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For memory functionality (requires Python&amp;lt;3.13 due to PyTorch compatibility):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;"browser-use[memory]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set up Browser and Agent Configuration
&lt;/h2&gt;

&lt;p&gt;Here’s how to configure the browser and create an automation agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urlencode&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BrowserConfig&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStr&lt;/span&gt;

&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Go to Google, search for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Scrapeless&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, click on the first post and return to the title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;query_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proxy_country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;urlencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BrowserConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cdp_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Or choose the model you want to use
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SecretStr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
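&lt;p&gt;For reference, the CDP connection string that &lt;code&gt;setup_browser&lt;/code&gt; assembles is simply the base WSS URL with the query parameters appended. This standalone sketch shows the same construction (the token value is a placeholder, not a real key):&lt;/p&gt;

```python
# Standalone sketch of the CDP URL that setup_browser() builds.
# "YOUR_API_KEY" is a placeholder, not a real token.
from urllib.parse import urlencode

base = "wss://browser.scrapeless.com/browser"
params = {
    "token": "YOUR_API_KEY",
    "session_ttl": 180,       # session lifetime in seconds
    "proxy_country": "ANY",   # pick proxies from any country
}
cdp_url = f"{base}?{urlencode(params)}"
print(cdp_url)
```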



&lt;h2&gt;
  
  
  Create the Main Function
&lt;/h2&gt;

&lt;p&gt;Here’s the main function that puts everything together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Run your script
&lt;/h2&gt;

&lt;p&gt;Run your script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python run main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your Scrapeless session start in the &lt;a href="https://app.scrapeless.com/?utm_source=dev-to&amp;amp;utm_medium=integration&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scrapeless Dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urlencode&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BrowserConfig&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStr&lt;/span&gt;

&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Go to Google, search for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Scrapeless&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, click on the first post and return to the title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;query_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proxy_country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;urlencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BrowserConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cdp_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Or choose the model you want to use
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SecretStr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 Browser Use currently only supports Python.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;💡 You can copy the URL in &lt;strong&gt;live session&lt;/strong&gt; to watch the session's progress in real-time, and you can also watch a replay of the session in &lt;strong&gt;session history&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Integrate Browser Use into Scrapless Scraping Browser?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Thu, 08 May 2025 15:24:30 +0000</pubDate>
      <link>https://dev.to/scraper0024/how-to-integrate-browser-use-into-scrapless-scraping-browser-3p08</link>
      <guid>https://dev.to/scraper0024/how-to-integrate-browser-use-into-scrapless-scraping-browser-3p08</guid>
      <description>&lt;p&gt;Browser Use is a browser automation SDK that uses screenshots to capture the state of the browser and actions to simulate user interactions. This chapter will introduce how you can easily use browser-use to execute agent tasks on the Web with simple calls&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Scrapeless API Key
&lt;/h2&gt;

&lt;p&gt;Head to the &lt;a href="https://app.scrapeless.com/?utm_source=dev-to&amp;amp;utm_medium=integration&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;&lt;strong&gt;Dashboard’s Settings tab&lt;/strong&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv39zvf6bhkuhuovj0grc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv39zvf6bhkuhuovj0grc.png" alt="Get Scrapeless API Key" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then copy it and set the &lt;code&gt;SCRAPELESS_API_KEY&lt;/code&gt; environment variable in your .env file.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; environment variable is required in your .env file as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=your-openai-api-key
SCRAPELESS_API_KEY=your-scrapeless-api-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Remember to replace the sample API key with your actual API key&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Install Browser Use
&lt;/h2&gt;

&lt;p&gt;With pip (Python&amp;gt;=3.11):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;browser-use
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For memory functionality (requires Python&amp;lt;3.13 due to PyTorch compatibility):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;"browser-use[memory]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set up Browser and Agent Configuration
&lt;/h2&gt;

&lt;p&gt;Here’s how to configure the browser and create an automation agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urlencode&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BrowserConfig&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStr&lt;/span&gt;

&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Go to Google, search for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Scrapeless&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, click on the first post and return to the title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;query_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proxy_country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;urlencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BrowserConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cdp_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Or choose the model you want to use
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SecretStr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create the Main Function
&lt;/h2&gt;

&lt;p&gt;Here’s the main function that puts everything together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Run Your Script
&lt;/h2&gt;

&lt;p&gt;Execute the script from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python run main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your Scrapeless session start in the &lt;a href="https://app.scrapeless.com/?utm_source=dev-to&amp;amp;utm_medium=integration&amp;amp;utm_campaign=browser-use" rel="noopener noreferrer"&gt;Scrapeless Dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urlencode&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BrowserConfig&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStr&lt;/span&gt;

&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Go to Google, search for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Scrapeless&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, click on the first post and return to the title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://browser.scrapeless.com/browser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;query_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SCRAPELESS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proxy_country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ANY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scrapeless_base_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;urlencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BrowserConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cdp_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser_ws_endpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Or choose the model you want to use
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SecretStr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setup_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 Browser Use currently only supports Python.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;💡 You can copy the URL in &lt;strong&gt;live session&lt;/strong&gt; to watch the session's progress in real time, and you can also watch a replay of the session in &lt;strong&gt;session history&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>👨‍💻 How to Integrate Naver Smart Store API in 5 Minutes to Automatically Fetch Product Data?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Tue, 22 Apr 2025 07:17:12 +0000</pubDate>
      <link>https://dev.to/scraper0024/how-to-integrate-naver-smart-store-api-in-5-minutes-to-automatically-fetch-product-data-1omd</link>
      <guid>https://dev.to/scraper0024/how-to-integrate-naver-smart-store-api-in-5-minutes-to-automatically-fetch-product-data-1omd</guid>
      <description>&lt;p&gt;With the rapid growth of online shopping, e-commerce now accounts for 24% of all retail sales globally. By 2025, worldwide e-commerce retail sales are expected to hit $7.4 trillion, reflecting its expanding influence on consumer behavior.&lt;/p&gt;

&lt;p&gt;At the center of South Korea's digital ecosystem lies Naver, the country's leading search engine and tech powerhouse. As a cornerstone of daily digital life, Naver integrates e-commerce, digital payment solutions, webtoons, blogging platforms, and mobile messaging services. This diverse range of offerings allows Naver to gather user data across more sectors than any other platform in the region, solidifying its role as a key player in shaping the nation's online landscape.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can you scrape product data from Naver Shop quickly, at scale, and at minimal cost?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's figure out the details now!&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 What Product Data Can We Extract from Naver?
&lt;/h2&gt;

&lt;p&gt;A robust Naver scraping tool can extract a wide range of data fields, ensuring comprehensive and up-to-date insights. These include:&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Product Information:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Product Name&lt;/li&gt;
&lt;li&gt;Descriptions&lt;/li&gt;
&lt;li&gt;Images&lt;/li&gt;
&lt;li&gt;Categories &amp;amp; Subcategories&lt;/li&gt;
&lt;li&gt;Brand&lt;/li&gt;
&lt;li&gt;Product ID&lt;/li&gt;
&lt;li&gt;SKU (Stock Keeping Unit)&lt;/li&gt;
&lt;li&gt;Weight/Volume&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pricing and Promotions:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Original Price&lt;/li&gt;
&lt;li&gt;Discounted Price&lt;/li&gt;
&lt;li&gt;Discount Percentage&lt;/li&gt;
&lt;li&gt;Unit Price&lt;/li&gt;
&lt;li&gt;Promotions&lt;/li&gt;
&lt;li&gt;Bundle Offers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Availability and Logistics:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Stock Status&lt;/li&gt;
&lt;li&gt;Delivery Options&lt;/li&gt;
&lt;li&gt;Delivery Time&lt;/li&gt;
&lt;li&gt;Return Policy&lt;/li&gt;
&lt;li&gt;Store Location&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Consumer Insights:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Customer Ratings&lt;/li&gt;
&lt;li&gt;Reviews&lt;/li&gt;
&lt;li&gt;Seller Information&lt;/li&gt;
&lt;li&gt;Expiration Date&lt;/li&gt;
&lt;li&gt;Ingredients&lt;/li&gt;
&lt;li&gt;Nutritional Information&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Metadata:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Last Updated&lt;/li&gt;
&lt;li&gt;Categories/Subcategories&lt;/li&gt;
&lt;/ul&gt;
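&lt;p&gt;As a rough sketch, the field groups above can be collected into a single record type. The class and field names below are illustrative only and do not reflect the actual schema returned by any scraping tool:&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class NaverProduct:
    """Illustrative product record grouping the fields listed above."""
    # Core product information
    name: str
    product_id: str
    brand: str = ""
    categories: list = field(default_factory=list)
    # Pricing and promotions
    original_price: float = 0.0
    discounted_price: float = 0.0
    # Availability and logistics
    in_stock: bool = True
    # Consumer insights
    rating: float = 0.0
    review_count: int = 0
    # Metadata
    last_updated: str = ""

    def discount_percentage(self):
        """Derive the discount percentage from the two price fields."""
        if self.original_price:
            return round(100 * (1 - self.discounted_price / self.original_price), 1)
        return 0.0

item = NaverProduct(name="Sample product", product_id="4469033180",
                    original_price=30000, discounted_price=24000)
print(item.discount_percentage())  # 20.0
```

&lt;p&gt;A typed record like this makes downstream analysis (deduplication, price tracking, review aggregation) easier than working with raw JSON dictionaries.&lt;/p&gt;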

&lt;h2&gt;
  
  
  ⚠️ Challenges in Scraping Naver Data
&lt;/h2&gt;

&lt;p&gt;While the benefits are clear, scraping Naver data is not without its hurdles. Here are the six major challenges businesses must navigate:&lt;/p&gt;

&lt;h3&gt;
  
  
  Lack of Stable Entry Points or Session Control
&lt;/h3&gt;

&lt;p&gt;Naver requires consistent user behavior for session validation. Anonymous scraping often triggers suspicion, leading to blocked access.&lt;/p&gt;

&lt;h3&gt;
  
  
  JavaScript Rendering Challenges
&lt;/h3&gt;

&lt;p&gt;Critical content is often loaded dynamically via JavaScript. Tools that fail to render JS accurately will miss vital data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session Validation, Geo-Locking, and CAPTCHA
&lt;/h3&gt;

&lt;p&gt;Naver employs multiple layers of protection, including CAPTCHA and geo-restrictions. Without robust session simulation and proxy rotation, scraping efforts can quickly fail.&lt;/p&gt;
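&lt;p&gt;Proxy rotation in its simplest form just cycles each request through a pool of endpoints. The sketch below uses placeholder proxy addresses, not real servers; a production setup would draw from a managed residential pool:&lt;/p&gt;

```python
import itertools

# Placeholder proxy endpoints -- not real servers.
PROXIES = ["http://proxy1:8080", "http://proxy2:8080", "http://proxy3:8080"]
_pool = itertools.cycle(PROXIES)

def next_proxy_config():
    """Return a requests-style proxies mapping using the next pool entry."""
    proxy = next(_pool)
    return {"http": proxy, "https": proxy}

# Each call hands back the next proxy in round-robin order.
print(next_proxy_config()["http"])
```

&lt;p&gt;The returned mapping can be passed to a request via its &lt;code&gt;proxies&lt;/code&gt; argument; rotation alone will not defeat CAPTCHA or session checks, which also require realistic browser behavior.&lt;/p&gt;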

&lt;h3&gt;
  
  
  Frequent Layout Changes
&lt;/h3&gt;

&lt;p&gt;Naver frequently updates its interface, altering pagination logic, tag structures, and load sequences. This requires constant adjustments to scraping tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rate Limiting and Blockades
&lt;/h3&gt;

&lt;p&gt;High request volumes can trigger rate limits. Effective scraping requires behavior simulation, diversified access protocols, and careful pacing.&lt;/p&gt;
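&lt;p&gt;"Careful pacing" usually means adding randomized delays between requests and backing off when the server signals a rate limit. A minimal sketch follows; the &lt;code&gt;fetch&lt;/code&gt; callable and the retry numbers are illustrative, not prescriptive:&lt;/p&gt;

```python
import random
import time

def paced_get(fetch, url, max_retries=3, base_delay=2.0):
    """Call fetch(url) with randomized pacing and exponential backoff on 429s.

    fetch is any callable returning an object with a status_code attribute
    (for example requests.Session().get); the names here are illustrative.
    """
    delay = base_delay
    for _ in range(max_retries):
        # Randomized jitter avoids a fixed, easily fingerprinted cadence.
        time.sleep(delay + random.uniform(0, delay))
        response = fetch(url)
        if response.status_code == 429:
            delay *= 2  # rate limited: wait longer before the next attempt
            continue
        return response
    return None  # retries exhausted
```

&lt;p&gt;Doubling the delay after each 429 keeps the request rate just under the server's tolerance without hard-coding a threshold.&lt;/p&gt;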

&lt;h3&gt;
  
  
  Legal and Regulatory Compliance
&lt;/h3&gt;

&lt;p&gt;South Korea has stringent data privacy laws. Non-compliance can result in legal risks and reputational damage, especially for overseas businesses.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤔 Why Use Scrapeless for Naver Data Extraction?
&lt;/h2&gt;

&lt;p&gt;Scrapeless offers a cutting-edge solution to overcome these challenges, providing seamless and reliable data extraction tailored to your business needs. Here’s why Scrapeless stands out:&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features:
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Ultra-Fast and Reliable&lt;/strong&gt;: Acquire data quickly without compromising stability, even at scale.&lt;br&gt;
2️⃣ &lt;strong&gt;Rich Data Fields&lt;/strong&gt;: Extract detailed information, including product details, seller info, pricing, reviews, and more.&lt;br&gt;
3️⃣ &lt;strong&gt;Intelligent Proxy Rotation System&lt;/strong&gt;: Automatically switch proxy IPs to bypass IP-based restrictions and ensure uninterrupted access.&lt;br&gt;
4️⃣ &lt;strong&gt;Advanced Fingerprint Technology&lt;/strong&gt;: Dynamically simulate browser characteristics and user interactions to bypass anti-scraping mechanisms.&lt;br&gt;
5️⃣ &lt;strong&gt;Integrated CAPTCHA Solving&lt;/strong&gt;: Handle reCAPTCHA and Cloudflare challenges seamlessly, ensuring smooth data collection.&lt;br&gt;
6️⃣ &lt;strong&gt;Automation&lt;/strong&gt;: Fully automated scraping processes adapt to updates in real time, minimizing manual intervention.&lt;/p&gt;
&lt;h3&gt;
  
  
  Business Benefits:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Market Analysis&lt;/strong&gt;: Gain deep insights into consumer behavior, emerging trends, and competitor strategies.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pricing Optimization&lt;/strong&gt;: Stay competitive by tracking price changes and promotional activities.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inventory Management&lt;/strong&gt;: Ensure optimal stock levels and reduce operational inefficiencies.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer-Centric Decisions&lt;/strong&gt;: Use reviews and ratings to refine products and enhance satisfaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Scrapeless’ Naver Scraping API, businesses can effortlessly track market trends, optimize strategies, and maintain a competitive edge in the fast-evolving e-commerce industry.&lt;/p&gt;

&lt;p&gt;By addressing the complexities of Naver data scraping, Scrapeless empowers businesses to unlock valuable insights and drive growth. Whether you’re a retailer, e-commerce platform, or market analyst, leveraging Naver’s data can transform your decision-making and operational efficiency.&lt;/p&gt;
&lt;h2&gt;
  
  
  Naver Scraping API: Extract Naver Product Details Easily
&lt;/h2&gt;
&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Simply configure the Store ID and Product ID.&lt;/li&gt;
&lt;li&gt;The Scrapeless Naver API will extract detailed product data from Naver Shop, including pricing, seller information, reviews, and more.&lt;/li&gt;
&lt;li&gt;You can download and analyze the data.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Step 1: Create your API Token
&lt;/h3&gt;

&lt;p&gt;To get started, you’ll need to obtain your API Key:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to the &lt;a href="https://app.scrapeless.com/passport/login?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=naver-products"&gt;&lt;strong&gt;Scrapeless Dashboard&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;API Key Management&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt; to generate your unique API Key.&lt;/li&gt;
&lt;li&gt;Once created, you can simply click on the API Key to copy it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6efbj4hqrqsszto811j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6efbj4hqrqsszto811j.png" alt="Scrapeless API token" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2: Launch the Naver Shop API
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Find the Scraping API under the &lt;strong&gt;For Data&lt;/strong&gt; collection.&lt;/li&gt;
&lt;li&gt;Simply click on the &lt;strong&gt;Naver Shop&lt;/strong&gt; actor to get ready for scraping product data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqagu0p83reur9xgwce72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqagu0p83reur9xgwce72.png" alt="Naver Shop API" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 3: Define Your Target
&lt;/h3&gt;

&lt;p&gt;To scrape product data using the Naver Scraping API, you must provide two mandatory parameters: &lt;code&gt;storeId&lt;/code&gt; and &lt;code&gt;productId&lt;/code&gt;. The &lt;code&gt;channelUid&lt;/code&gt; parameter is optional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yp7rfjxht2fr5w0yy8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yp7rfjxht2fr5w0yy8h.png" alt="necessary paras" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the Product ID and Store ID directly in the product URL. Let's take &lt;a href="https://brand.naver.com/barudak/products/4469033180?NaPm=ct%3Dm9mo5x4g%7Cci%3D800b828f830f1d3d81df0575f6009efc9235fd9a%7Ctr%3Dnshsnx%7Csn%3D727239%7Cic%3D%7Chk%3De39ed35e26996b18c35ced568d18f83bc39fdf94" rel="noopener noreferrer"&gt;[바르닭] 닭가슴살 143종 크런치 소품닭 닭스테이크 소스큐브 골라담기 [원산지:국산(경기도 포천시) 등]&lt;/a&gt; as an example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store ID: barudak&lt;/li&gt;
&lt;li&gt;Product ID: 4469033180&lt;/li&gt;
&lt;/ul&gt;
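&lt;p&gt;Pulling both IDs out of a product URL can be automated with a few lines of standard-library Python. This assumes the store/products/id path shape seen in the example above:&lt;/p&gt;

```python
from urllib.parse import urlparse

def parse_naver_product_url(url):
    """Split a brand.naver.com product URL into (store_id, product_id).

    Assumes the path shape /store/products/id, as in the example above.
    """
    parts = [p for p in urlparse(url).path.split("/") if p]
    try:
        if parts[1] == "products":
            return parts[0], parts[2]
    except IndexError:
        pass
    raise ValueError("Unrecognized Naver product URL: " + url)

store_id, product_id = parse_naver_product_url(
    "https://brand.naver.com/barudak/products/4469033180"
)
print(store_id, product_id)  # barudak 4469033180
```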

&lt;blockquote&gt;
&lt;p&gt;We fully respect website privacy. All data shown in this blog is publicly available and is used only to demonstrate the scraping process. We do not store any of this information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29u5zubdx9fogq2y7bgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29u5zubdx9fogq2y7bgo.png" alt="Find the target paras" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 4: Start Scraping Naver Product Data
&lt;/h3&gt;

&lt;p&gt;Once you’ve filled in the required parameters, simply click Start Scraping to obtain comprehensive product data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxor9bnclm7b4pdfpn1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxor9bnclm7b4pdfpn1m.png" alt="Get the scraping results" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s an example code snippet for extracting Naver product data. Just replace &lt;code&gt;YOUR_SCRAPELESS_API_TOKEN&lt;/code&gt; with your actual API key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

import requests

def send_request():
    host = "api.scrapeless.com"
    url = f"https://{host}/api/v1/scraper/request"
    token = "YOUR_SCRAPELESS_API_TOKEN"

    headers = {
        "x-api-token": token
    }

    json_payload = json.dumps({
        "actor": "scraper.naver.product",
        "input": {
            "storeId": "barudak",
            "productId": "4469033180",
            "channelUid": " "  # optional
        }
    })

    response = requests.post(url, headers=headers, data=json_payload)

    if response.status_code != 200:
        print("Error:", response.status_code, response.text)
        return

    print("body", response.text)


if __name__ == "__main__":
    send_request()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
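&lt;p&gt;The response body is JSON, so a small helper can parse it and persist it for later analysis. This extends the snippet above without assuming any particular field names in the payload:&lt;/p&gt;

```python
import json

def save_result(response_text, path="naver_product.json"):
    """Parse an API response body and write it to disk as formatted JSON."""
    data = json.loads(response_text)
    with open(path, "w", encoding="utf-8") as f:
        # ensure_ascii=False keeps Korean product names readable in the file.
        json.dump(data, f, ensure_ascii=False, indent=2)
    return data

# Stand-in payload; in practice, pass response.text from the request above.
data = save_result('{"storeId": "barudak", "productId": "4469033180"}')
print(data["productId"])  # 4469033180
```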



&lt;p&gt;Ready to take your business to the next level? Trust Scrapeless to deliver actionable insights and streamline your data extraction process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Extracting data from Naver is a strategic move that can yield significant value. However, when relying on programming for scraping, teams must build adaptive systems, manage session behaviors effectively, and comply with platform rules and South Korean data regulations. Navigating Naver’s dynamic infrastructure often involves setting up proxies, solving CAPTCHAs, and mimicking real user interactions—all of which can be complex and time-consuming.&lt;/p&gt;

&lt;p&gt;The good news? Maintenance doesn't have to be a burden. By utilizing a reliable tech stack, including browser automation tools and APIs, you can ensure efficient, compliant, and scalable extraction of Naver product data without the fear of being blocked.&lt;/p&gt;

&lt;p&gt;Ready to get started? &lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=naver-products" rel="noopener noreferrer"&gt;&lt;strong&gt;Sign up for a free trial today&lt;/strong&gt;&lt;/a&gt;! With pricing as low as &lt;strong&gt;$3 for 1,000 requests&lt;/strong&gt;, it’s the most affordable solution available online!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>Only $3 — Scrape Naver Shop Product Details within 5 Seconds!</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Mon, 21 Apr 2025 13:02:29 +0000</pubDate>
      <link>https://dev.to/scraper0024/only-3-scrape-naver-shop-product-details-within-5-seconds-3cf0</link>
      <guid>https://dev.to/scraper0024/only-3-scrape-naver-shop-product-details-within-5-seconds-3cf0</guid>
      <description>&lt;p&gt;With the rise of online shopping, 24% of all retail sales now come from e-commerce markets. By 2025, global e-commerce retail sales are projected to reach $7.4 trillion.&lt;/p&gt;

&lt;p&gt;Naver, South Korea's largest search engine and tech giant, is the heart of the country's digital life. From e-commerce and digital payments to webtoons, blogs, and mobile messaging, it captures user data across more verticals than any other platform.&lt;/p&gt;

&lt;p&gt;Naver’s architecture is designed to break predictable patterns, detect inconsistencies, and adapt faster than most systems. If your scraping strategy relies on static scripts or brute-force proxies, it’s already outdated. Successful &lt;strong&gt;Naver Shop data scraping&lt;/strong&gt; isn’t just about bypassing defenses—it requires coordinating session behavior, timing logic, and aligning with platform expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can you scrape product data from Naver Shop quickly, at scale, and at minimal cost?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This guide is for business teams, data owners, and leaders facing modern Naver scraping challenges!&lt;/p&gt;

&lt;h2&gt;
  
  
  💼 Why Scrape Naver Data?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mou4loshbt3iu5fub7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mou4loshbt3iu5fub7d.png" alt="Naver shop" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Competitive Pricing Strategies&lt;/strong&gt;: Use Naver Shopping data scraping to collect competitor pricing, enabling you to stay ahead in the market.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inventory Optimization&lt;/strong&gt;: Monitor stock levels in real time to reduce shortages and improve efficiency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Trend Analysis&lt;/strong&gt;: Identify emerging trends and consumer preferences to tailor your offerings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Product Listings&lt;/strong&gt;: Extract detailed descriptions, images, and specs to create compelling listings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Price Monitoring &amp;amp; Adjustments&lt;/strong&gt;: Track price changes and discounts to optimize promotions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Competitor Analysis&lt;/strong&gt;: Analyze rivals’ product offerings, pricing, and promotions to outperform them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data-Driven Marketing&lt;/strong&gt;: Gather consumer behavior insights for targeted campaigns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Customer Satisfaction&lt;/strong&gt;: Monitor reviews and ratings to refine products and boost satisfaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💡 What Product Data Can We Extract from Naver?
&lt;/h2&gt;

&lt;p&gt;Scraping prices, stock status, descriptions, reviews, and discounts ensures comprehensive, up-to-date data. A robust Naver scraping tool can extract:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;✅ Product Name&lt;/td&gt;
&lt;td&gt;✅ Customer Ratings&lt;/td&gt;
&lt;td&gt;✅ Promotions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Product Features&lt;/td&gt;
&lt;td&gt;✅ Descriptions&lt;/td&gt;
&lt;td&gt;✅ Images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Reviews&lt;/td&gt;
&lt;td&gt;✅ Delivery Options&lt;/td&gt;
&lt;td&gt;✅ Categories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Subcategories&lt;/td&gt;
&lt;td&gt;✅ Product ID&lt;/td&gt;
&lt;td&gt;✅ Brand&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Delivery Time&lt;/td&gt;
&lt;td&gt;✅ Return Policy&lt;/td&gt;
&lt;td&gt;✅ Availability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Price&lt;/td&gt;
&lt;td&gt;✅ Seller Information&lt;/td&gt;
&lt;td&gt;✅ Expiration Date&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Store Location&lt;/td&gt;
&lt;td&gt;✅ Ingredients&lt;/td&gt;
&lt;td&gt;✅ Discounted Price&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Original Price&lt;/td&gt;
&lt;td&gt;✅ Bundle Offers&lt;/td&gt;
&lt;td&gt;✅ Last Updated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Stock Keeping Unit (SKU)&lt;/td&gt;
&lt;td&gt;✅ Weight/Volume&lt;/td&gt;
&lt;td&gt;✅ Discount Percentage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Unit Price&lt;/td&gt;
&lt;td&gt;✅ Nutritional Information&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
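&lt;p&gt;Once extracted, fields like these are easiest to work with after being flattened into a single record. The sketch below shows one way to normalize a scraped payload in Python; the input key names are illustrative assumptions, so map them onto whatever field names your scraper or API response actually uses:&lt;/p&gt;

```python
def normalize_product(raw):
    """Flatten a scraped product payload into an analysis-ready record.

    The input keys used here are illustrative placeholders, not the
    actual Naver/Scrapeless response schema.
    """
    price = raw.get("price") or {}
    original = price.get("original")
    discounted = price.get("discounted")
    discount_pct = (
        round(100 * (1 - discounted / original), 1)
        if original and discounted else None
    )
    return {
        "product_id": raw.get("id"),
        "name": raw.get("name"),
        "brand": raw.get("brand"),
        "original_price": original,
        "discounted_price": discounted,
        "discount_pct": discount_pct,
        "rating": raw.get("rating"),
        "in_stock": raw.get("availability") == "IN_STOCK",
    }
```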

&lt;h2&gt;
  
  
  🤔 Why Use Scrapeless to Extract Naver Product Data?
&lt;/h2&gt;

&lt;p&gt;Scrapeless employs advanced web data scraping technology to ensure high-quality, precise data extraction to meet various business needs—from market analysis and competitive pricing strategies to inventory management and consumer behavior analysis. Our service provides seamless solutions for retailers, e-commerce platforms, and market analysts, helping them gain deep insights into the fast-moving consumer goods (FMCG) market.&lt;/p&gt;

&lt;p&gt;With our Naver Scraping API, you can easily track market trends, optimize pricing strategies, and maintain a competitive edge in the rapidly evolving grocery industry. Trust us to provide actionable insights to drive your business growth and innovation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Ultra-Fast and Reliable&lt;/strong&gt;: Quickly acquire data without compromising stability.&lt;br&gt;
2️⃣ &lt;strong&gt;Rich Data Fields&lt;/strong&gt;: Includes product details, seller information, pricing, ratings, and more.&lt;br&gt;
3️⃣ &lt;strong&gt;Intelligent Proxy Rotation System&lt;/strong&gt;: Automatically switches proxy IPs to effectively bypass IP-based access restrictions.&lt;br&gt;
4️⃣ &lt;strong&gt;Advanced Fingerprint Technology&lt;/strong&gt;: Dynamically simulates browser characteristics and user interaction patterns to bypass sophisticated anti-scraping mechanisms.&lt;br&gt;
5️⃣ &lt;strong&gt;Integrated CAPTCHA Solving&lt;/strong&gt;: Automatically handles reCAPTCHA and Cloudflare challenges, ensuring smooth data collection.&lt;br&gt;
6️⃣ &lt;strong&gt;Automation&lt;/strong&gt;: Fully automated scraping process with rapid response to updates.&lt;/p&gt;
&lt;h2&gt;
  
  
  ⏯️ PLAN-A. Extract Naver product data with API
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Simply configure the Store ID and Product ID.&lt;/li&gt;
&lt;li&gt;The Scrapeless Naver API will extract detailed product data from Naver Shop, including pricing, seller information, reviews, and more.&lt;/li&gt;
&lt;li&gt;You can download and analyze the data.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Step 1: Create your API Token
&lt;/h3&gt;

&lt;p&gt;To get started, you’ll need to obtain your API Key from the Scrapeless Dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to the &lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=naver-products" rel="noopener noreferrer"&gt;&lt;strong&gt;Scrapeless Dashboard&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;API Key Management&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt; to generate your unique API Key.&lt;/li&gt;
&lt;li&gt;Once created, you can simply click on the API Key to copy it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwlfts0bj593nrd8rdsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwlfts0bj593nrd8rdsy.png" alt="Create API Key" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2. Launch the Naver Shop API
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Find the Scraping API under the Data Collection section.&lt;/li&gt;
&lt;li&gt;Click the Naver Shop actor to get ready for scraping product data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22saz5rgz8wr5eps6svb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22saz5rgz8wr5eps6svb.png" alt="Launch the Naver Shop API" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 3: Define Your Target
&lt;/h3&gt;

&lt;p&gt;To scrape product data using the Naver Scraping API, you must provide two mandatory parameters: &lt;code&gt;storeId&lt;/code&gt; and &lt;code&gt;productId&lt;/code&gt;. The &lt;code&gt;channelUid&lt;/code&gt; parameter is optional.&lt;/p&gt;

&lt;p&gt;You can find the Product ID and Store ID directly in the product URL. For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq04w0nhifype9kv3ctrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq04w0nhifype9kv3ctrl.png" alt="Paras" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's take &lt;a href="https://brand.naver.com/barudak/products/4469033180?NaPm=ct%3Dm9mo5x4g%7Cci%3D800b828f830f1d3d81df0575f6009efc9235fd9a%7Ctr%3Dnshsnx%7Csn%3D727239%7Cic%3D%7Chk%3De39ed35e26996b18c35ced568d18f83bc39fdf94" rel="noopener noreferrer"&gt;[바르닭] 닭가슴살 143종 크런치 소품닭 닭스테이크 소스큐브 골라담기 [원산지:국산(경기도 포천시) 등]&lt;/a&gt; as an example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store ID: barudak&lt;/li&gt;
&lt;li&gt;Product ID: 4469033180&lt;/li&gt;
&lt;/ul&gt;
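&lt;p&gt;If you are gathering many product URLs, the two IDs can also be pulled out programmatically. Here is a small sketch, assuming the path shape seen above (store ID, then a &lt;code&gt;products&lt;/code&gt; segment, then a numeric product ID) holds:&lt;/p&gt;

```python
import re
from urllib.parse import urlparse

def parse_naver_product_url(url):
    """Extract (storeId, productId) from a Naver store product URL.

    Assumes the '/{storeId}/products/{productId}' path shape used by
    brand.naver.com and smartstore.naver.com product pages.
    """
    match = re.match(r"^/([^/]+)/products/(\d+)", urlparse(url).path)
    if not match:
        raise ValueError(f"Unrecognized Naver product URL: {url}")
    return match.groups()
```

&lt;p&gt;Query-string tracking parameters (like &lt;code&gt;NaPm&lt;/code&gt;) are ignored, since only the path carries the IDs.&lt;/p&gt;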

&lt;blockquote&gt;
&lt;p&gt;We fully respect website privacy. All data in this blog is publicly available and is used only to demonstrate the scraping process. We do not store any of the information or data collected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ap2mkd6nrb19oo221t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ap2mkd6nrb19oo221t.png" alt="Naver product info" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 4: Start Scraping Naver Product Data
&lt;/h3&gt;

&lt;p&gt;Once you’ve filled in the required parameters, simply click Start Scraping to obtain comprehensive product data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln4lba9jxrvz62awfwr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln4lba9jxrvz62awfwr2.png" alt="Scraping Naver Product Data" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s an example code snippet for extracting Naver product data. Just replace YOUR_SCRAPELESS_API_TOKEN with your actual API key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_request&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;host&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api.scrapeless.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/api/v1/scraper/request&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_SCRAPELESS_API_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-api-token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;json_payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;actor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scraper.naver.product&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;storeId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;barudak&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;productId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4469033180&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;channelUid&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="c1"&gt;## Optional
&lt;/span&gt;        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json_payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;send_request&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ⏯️ PLAN-B. Extract Naver product data with Scraping Browser
&lt;/h2&gt;

&lt;p&gt;If your team prefers programming, Scrapeless’s &lt;a href="https://www.scrapeless.com/en/product/scraping-browser?utm_source=official&amp;amp;utm_medium=blog&amp;amp;utm_campaign=naver-products" rel="noopener noreferrer"&gt;Scraping Browser&lt;/a&gt; is an excellent choice. It encapsulates all complex operations, simplifying efficient, large-scale data extraction from dynamic websites. It integrates seamlessly with popular tools like Puppeteer and Playwright.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Integrate with Scrapeless Scraping Browser
&lt;/h3&gt;

&lt;p&gt;After entering the Scraping Browser, simply fill in the configuration parameters on the left to automatically generate a scraping script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffx2ingu9mejaubbttpch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffx2ingu9mejaubbttpch.png" alt="Integrate with Scrapeless Scraping Browser" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s an example of the integration code (JavaScript recommended):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;puppeteer-core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connectionURL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;wss://browser.scrapeless.com/browser?token=" YourAPIKey"&amp;amp;session_ttl=180&amp;amp;proxy_country=ANY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;browserWSEndpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;connectionURL&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://www.scrapeless.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Scrapeless automatically matches proxies for you, so no additional configuration or CAPTCHA handling is required. Combined with proxy rotation, browser fingerprint management, and robust concurrent scraping capabilities, Scrapeless ensures large-scale scraping of Naver product data without detection, efficiently bypassing IP blocks and CAPTCHA challenges.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Set Export Format
&lt;/h3&gt;

&lt;p&gt;Now, you need to filter and clean the scraped data. Consider exporting the results in CSV format for easier analysis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;csv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;productData&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;naver_product_data.csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;utf-8&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;CSV file saved: naver_product_data.csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Further reading: &lt;a href="https://www.scrapeless.com/en/blog/scrapeless-scraping-browser-for-ai" rel="noopener noreferrer"&gt;Detailed Guide of Scrapeless Scraping Browser&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here is our complete scraping script for reference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;puppeteer-core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;parse&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;json2csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connectionURL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;wss://browser.scrapeless.com/browser?token=YourAPIKey&amp;amp;session_ttl=180&amp;amp;proxy_country=KR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;browserWSEndpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;connectionURL&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Replace with the URL of the Naver product page you actually want to crawl&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://smartstore.naver.com/barudak/products/4469033180&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;networkidle2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Simple example: crawl product title, price, description, etc. (adapt according to the actual page structure)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;productData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;h3._2Be85h&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;price&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;span._1LY7DqCnwR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;div._2w4TxKo3Dx&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Product data:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;productData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Export to CSV&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;csv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;productData&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;naver_product_data.csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;utf-8&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;CSV file saved: naver_product_data.csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
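
&lt;p&gt;Once the script above finishes, a quick shell check (assuming you ran it from the current directory, so &lt;code&gt;naver_product_data.csv&lt;/code&gt; is here) confirms the export landed on disk:&lt;/p&gt;

```shell
# Show the CSV header row, then the line count (1 header + 1 data row).
head -n 1 naver_product_data.csv
wc -l naver_product_data.csv
```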



&lt;p&gt;Congratulations, you have successfully completed the entire process of crawling Naver product data!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Scraping Naver data is a strategic investment. But teams that build their own scrapers must maintain adaptive systems, coordinate session behavior, and strictly comply with platform rules and South Korean data laws. Keeping pace with Naver’s dynamic architecture means configuring proxies, solving CAPTCHAs, and simulating real user behavior, all of it labor-intensive work.&lt;/p&gt;

&lt;p&gt;Most of that maintenance burden is avoidable. A robust tech stack of browser automation tools and APIs lets you extract Naver product data compliantly, at any scale, without worrying about blocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=naver-products" rel="noopener noreferrer"&gt;Start your free trial now!&lt;/a&gt; At just $3 for 1,000 requests, it’s the lowest price on the web!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>python</category>
      <category>learning</category>
    </item>
    <item>
      <title>How to Set Up Scrapeless MCP Server on Cline?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Thu, 03 Apr 2025 12:52:23 +0000</pubDate>
      <link>https://dev.to/scraper0024/how-to-set-up-scrapeless-mcp-server-on-cline-22ip</link>
      <guid>https://dev.to/scraper0024/how-to-set-up-scrapeless-mcp-server-on-cline-22ip</guid>
      <description>&lt;h2&gt;
  
  
  What is Cline?
&lt;/h2&gt;

&lt;p&gt;Cline is an open-source AI coding assistant that runs as a Visual Studio Code extension. It is not a model itself: it drives models such as &lt;a href="https://www.scrapeless.com/en/blog/scrapeless-mcp-server-on-claude" rel="noopener noreferrer"&gt;&lt;strong&gt;Claude&lt;/strong&gt;&lt;/a&gt; to plan and execute development tasks directly in your editor, with a human-in-the-loop design that asks for your approval before it acts.&lt;/p&gt;

&lt;p&gt;Cline can create and edit files, run terminal commands, and inspect the results step by step, which makes it well suited to multi-step engineering work such as refactoring, debugging, and project scaffolding in professional and enterprise environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does Cline Support MCP (Model Context Protocol)?
&lt;/h3&gt;

&lt;p&gt;Cline supports MCP (the Model Context Protocol), an open standard that lets AI assistants connect to external tools and data sources. By loading MCP servers, Cline can reach beyond the model's built-in knowledge: it can query live data, call third-party APIs, and chain those tools together in multi-step operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Cline Supports MCP:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Step Task Execution&lt;/strong&gt; – Cline breaks a complex request into steps and can call MCP tools at each one, which suits tasks like gathering live data before editing code or running a command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Across a Task&lt;/strong&gt; – results from earlier MCP tool calls stay in the task context, so later steps can build on what has already been fetched or computed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt; – new MCP servers can be added through Cline's settings, so you can tailor its capabilities to your stack without modifying Cline itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human-in-the-Loop Control&lt;/strong&gt; – Cline asks for approval before running commands or invoking tools, keeping potentially risky actions under your review.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Scrapeless MCP Server?
&lt;/h2&gt;

&lt;p&gt;Scrapeless MCP Server is a server built on the Model Context Protocol (MCP) by Scrapeless. It enables AI models (such as Claude and GPT) to access external information sources during conversations. With advanced search capabilities, Scrapeless MCP Server retrieves real-time data from sources like Google Search, including Google Maps, Google Jobs, Google Hotels, and Google Flights, ensuring accurate and relevant responses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mcp.so/server/scrapelessMcpServer/scrapeless-ai?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpcline" rel="noopener noreferrer"&gt;&lt;strong&gt;MCP.SO&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/scrapeless-ai/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpcline" rel="noopener noreferrer"&gt;&lt;strong&gt;Github&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpcline" rel="noopener noreferrer"&gt;&lt;strong&gt;NPM&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://glama.ai/mcp/servers/@scrapeless-ai/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpcline" rel="noopener noreferrer"&gt;&lt;strong&gt;Glama.ai&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://smithery.ai/server/@scrapeless-ai/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpcline" rel="noopener noreferrer"&gt;&lt;strong&gt;Smithery.ai&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Set Up Scrapeless MCP Server on Cline?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1. Install Node.js and npm
&lt;/h3&gt;

&lt;p&gt;To run Scrapeless MCP Server, you must first install Node.js and npm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the latest stable version of &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; from the official website.&lt;/li&gt;
&lt;li&gt;Install it on your system.&lt;/li&gt;
&lt;li&gt;Verify the installation by running the following commands in your terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;-v&lt;/span&gt;
npm &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If installed correctly, you should see output like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;v22.x.x
10.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2. Obtain a Scrapeless API Key
&lt;/h3&gt;

&lt;p&gt;To use Scrapeless MCP Server, you need an API key:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Register and log in to the &lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto" rel="noopener noreferrer"&gt;&lt;strong&gt;Scrapeless Dashboard&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to API Key Management and generate your Scrapeless API Key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fede95smaha0qn886hu5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fede95smaha0qn886hu5l.png" alt="Scrapeless API Key" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3. Download the Cline
&lt;/h3&gt;

&lt;p&gt;Click &lt;a href="https://marketplace.visualstudio.com/items?itemName=saoudrizwan.claude-dev" rel="noopener noreferrer"&gt;Cline&lt;/a&gt; to go to the download page. However, ensure your device has VS Code installed. If not, follow the prompts to complete the installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fgqlj0s23hka0i848xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fgqlj0s23hka0i848xp.png" alt="download Cline" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking &lt;strong&gt;Install&lt;/strong&gt; , you’ll be redirected to VS Code. Click the &lt;strong&gt;Install&lt;/strong&gt; button again. Once you see the Cline logo on the left, the installation is complete!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3s61zt1j4xu4c5dqhw6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3s61zt1j4xu4c5dqhw6a.png" alt="install cline" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4. Configure Scrapeless MCP Server
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Click the search bar and type: &lt;code&gt;&amp;gt;MCP Servers&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open the &lt;strong&gt;Installed&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Configure MCP Servers&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n8z1e1rb61z41hoowfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n8z1e1rb61z41hoowfs.png" alt="Configure MCP Servers on Cline" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Paste the configuration below into the file, replacing &lt;code&gt;YOUR_SCRAPELESS_KEY&lt;/code&gt; with your own Scrapeless API key.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scrapelessMcpServer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"scrapeless-mcp-server"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SCRAPELESS_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"YOUR_SCRAPELESS_KEY"&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;replace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;API&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;key&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Save after completing the configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05t6epd10hx9ivwe3d85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05t6epd10hx9ivwe3d85.png" alt="Configure Scrapeless MCP server" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5. Run Scrapeless MCP Server
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Return to the chat page (click Done on the previous page) and select Use MCP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7c4dhc1daeieq361x6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7c4dhc1daeieq361x6r.png" alt="Run Scrapeless MCP Server" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Send Cline a query such as "Please check for me the Gold price today", then click Approve to let Cline fetch information via MCP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivcur3qvr98nvc2b5sl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivcur3qvr98nvc2b5sl3.png" alt="Run Scrapeless MCP Server" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait for Cline to process and retrieve the result.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13ymxsbvrfcaj2bq7a9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13ymxsbvrfcaj2bq7a9w.png" alt="Run Scrapeless MCP Server" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Using Scrapeless MCP Server on Cline
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time search&lt;/strong&gt;: Access the latest data from external sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless integration&lt;/strong&gt;: Works directly within Cline’s AI-driven environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced AI context&lt;/strong&gt;: Enables AI models to provide more accurate and up-to-date responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Ending
&lt;/h2&gt;

&lt;p&gt;By integrating Scrapeless MCP Server with Cline, you can significantly enhance AI-assisted coding with real-time information retrieval. Follow this guide to set up your environment and unlock the full potential of AI-powered development.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>mcp</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to Set Up Scrapeless MCP Server on Claude?</title>
      <dc:creator>Scraper0024</dc:creator>
      <pubDate>Thu, 03 Apr 2025 10:14:20 +0000</pubDate>
      <link>https://dev.to/scraper0024/how-to-set-up-scrapeless-mcp-server-on-claude-1o2g</link>
      <guid>https://dev.to/scraper0024/how-to-set-up-scrapeless-mcp-server-on-claude-1o2g</guid>
      <description>&lt;h2&gt;
  
  
  What is Claude?
&lt;/h2&gt;

&lt;p&gt;Claude is a family of AI chatbots developed by Anthropic, designed to provide safe, efficient, and intelligent conversational AI services. Named after Claude Shannon, the father of information theory, Claude focuses on ethical AI, advanced reasoning, and maintaining a coherent dialogue experience. It competes with models like OpenAI’s ChatGPT and Google’s Gemini.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Scrapeless MCP Server?
&lt;/h2&gt;

&lt;p&gt;Scrapeless MCP Server is a server built on the Model Context Protocol (MCP) by Scrapeless. It enables AI models (such as Claude and GPT) to access external information sources during conversations. With advanced search capabilities, Scrapeless MCP Server retrieves real-time data from sources like Google Search, including Google Maps, Google Jobs, Google Hotels, and Google Flights, ensuring accurate and relevant responses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mcp.so/server/scrapelessMcpServer/scrapeless-ai?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;&lt;strong&gt;MCP.SO&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/scrapeless-ai/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;&lt;strong&gt;Github&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;&lt;strong&gt;NPM&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://glama.ai/mcp/servers/@scrapeless-ai/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;&lt;strong&gt;Glama.ai&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://smithery.ai/server/@scrapeless-ai/scrapeless-mcp-server?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;&lt;strong&gt;Smithery.ai&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Does Claude Support MCP (Model Context Protocol)?
&lt;/h2&gt;

&lt;p&gt;MCP (the Model Context Protocol) is an open standard introduced by Anthropic that lets Claude connect to external tools and data sources during a conversation. Instead of relying solely on its training data, Claude can call MCP servers to fetch live information, then reason over the results with its usual contextual understanding and decision-making.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Ways Claude Supports MCP:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Native Tool Use&lt;/strong&gt; – Claude Desktop loads MCP servers from its configuration file and can invoke the tools they expose in the middle of a chat.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Retention in Multi-Turn Conversations&lt;/strong&gt; – results returned by MCP tools become part of the conversation, so Claude keeps reasoning over fetched data across extended discussions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Multi-Step Reasoning&lt;/strong&gt; – Claude can chain tool calls, for example searching for data and then analyzing the result, before composing a final answer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Approval of Tool Calls&lt;/strong&gt; – Claude asks for permission before invoking an MCP tool, keeping external actions visible and under your control.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to Set Up Scrapeless MCP Server on Claude?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1. Install Node.js and npm
&lt;/h3&gt;

&lt;p&gt;To run Scrapeless MCP Server, you must first install Node.js and npm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the latest stable version of Node.js from the official website.&lt;/li&gt;
&lt;li&gt;Install it on your system.&lt;/li&gt;
&lt;li&gt;Verify the installation by running the following commands in your terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;-v&lt;/span&gt;
npm &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If installed correctly, you should see output like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;v22.x.x
10.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2. Obtain a Scrapeless API Key
&lt;/h3&gt;

&lt;p&gt;To use Scrapeless MCP Server, you need an API key:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Register and log in to the &lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;&lt;strong&gt;Scrapeless Dashboard&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to API Key Management and generate your Scrapeless API Key.&lt;/li&gt;
&lt;li&gt;Copy the key for later use.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlckq5z7vg85zpz35n1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlckq5z7vg85zpz35n1g.png" alt="obtain scrapeless api key" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3. Open your terminal and enter the following command:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;vim&lt;/span&gt; &lt;span class="o"&gt;~/&lt;/span&gt;&lt;span class="n"&gt;Library&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Application&lt;/span&gt;\ &lt;span class="n"&gt;Support&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Claude&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;claude_desktop_config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffu9d6jjjj3z1tt3n39si.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffu9d6jjjj3z1tt3n39si.png" alt="configure Scrapeless MCP server" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After pressing Enter, you should see the following result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v7oye8dmng923gdfrc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v7oye8dmng923gdfrc5.png" alt="install Scrapeless MCP server" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4. Use the following code to connect to Scrapeless MCP:
&lt;/h3&gt;

&lt;p&gt;You can also visit our Scrapeless MCP Server Tutorial Documentation for more details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scrapelessMcpServer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"scrapeless-mcp-server"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SCRAPELESS_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"YOUR_SCRAPELESS_KEY"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;replace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;API&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;key&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the code, then save and exit vim by typing &lt;code&gt;:x&lt;/code&gt; and pressing &lt;code&gt;Enter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfsha2qyxh1pubd5qzi7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfsha2qyxh1pubd5qzi7.png" alt="configure Scrapeless MCP server" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5. Using Claude with Scrapeless MCP server
&lt;/h3&gt;

&lt;p&gt;Now open Claude (restart it if it was already running). When you see a hammer icon, the MCP server is connected and Claude can invoke its tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6wdeiq2z1owv7smx63o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6wdeiq2z1owv7smx63o.png" alt="Check the integration" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input your query, e.g., "&lt;strong&gt;&lt;em&gt;Please check for me the Gold price today.&lt;/em&gt;&lt;/strong&gt;"&lt;/li&gt;
&lt;li&gt;Allow Claude to invoke the Scrapeless MCP server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cwyktmd03wkjrjv1nyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cwyktmd03wkjrjv1nyi.png" alt="Allow to use Scrapeless MCP server" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get the response.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsdsrtq6jr0czs0q6ir7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsdsrtq6jr0czs0q6ir7.png" alt="response via Scrapeless MCP server" width="800" height="839"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Using Scrapeless MCP Server on Claude
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time search&lt;/strong&gt;: Access the latest data from external sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless integration&lt;/strong&gt;: Works directly within Claude’s AI-driven environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced AI context&lt;/strong&gt;: Enables AI models to provide more accurate and up-to-date responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Ending
&lt;/h2&gt;

&lt;p&gt;By integrating Scrapeless MCP Server with Claude, you can significantly enhance its responses with real-time information retrieval. Follow this guide to set up your environment and unlock the full potential of AI-powered workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://app.scrapeless.com/passport/login?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=mcpclaude" rel="noopener noreferrer"&gt;Get the free trial&lt;/a&gt; now and figure out a new possibility!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>mcp</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
