Peter Hansen

How to scrape HTML from a website built with JavaScript?

Hello World ✌️,

In this article, I would like to show how you can scrape HTML content from a website built with a JavaScript framework.


But why is it even a problem to scrape a JS-based website? 🤔

Problem Definition:

You need a browser environment in order to execute the JavaScript code that renders the HTML.



If you open this website (https://web-scraping-playground-site.firebaseapp.com) in your browser, you will see a simple page with some content.



Website built with a JavaScript framework


However, if you send an HTTP GET request to the same URL in Postman, you will see a different response.


No HTML response from a website built with JavaScript

A response to a GET request to ‘https://web-scraping-playground-site.firebaseapp.com’ made in Postman.




What? Why does the response contain no HTML? This happens because there is no browser environment when we send requests from a server or the Postman app.


🎓 We need a browser environment to execute the JavaScript code and render the content as HTML.
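To see this for yourself from code rather than Postman, here is a minimal sketch of a plain, browser-less request (assuming Node 18+ for the built-in `fetch`; `fetchRawHtml` is just an illustrative helper):

```javascript
// Sketch: fetching the page without a browser (assumes Node 18+ for built-in fetch).
// The body that comes back is only the JS app shell, not the rendered content.
async function fetchRawHtml(url) {
  const res = await fetch(url);
  return res.text(); // e.g. a tiny document with <script> tags and an empty root element
}

// Usage (makes a network call):
// fetchRawHtml('https://web-scraping-playground-site.firebaseapp.com').then(console.log);
```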


It sounds like an easy and fun problem to solve! In the section below 👇 I will show 2 ways to solve the above-mentioned problem using:

  1. Puppeteer, a Node library developed by Google.
  2. Proxybot, an API service for web scraping.

Let's get started 👨‍💻



For people who prefer watching videos, there is a quick video 🎥 demonstrating how to get the HTML content of a JS-based website.


Solution using Puppeteer

The idea is simple: use Puppeteer on our server to simulate the browser environment, render the HTML of a page, and use it for scraping or something else 😉.

See the below code snippet.

This code simply:

  • Accepts a GET request
  • Reads the ‘url’ param
  • Returns the response of the ‘getPageHTML’ function

The ‘getPageHTML’ function is the most interesting for us because that’s where the magic happens.

The ‘magic’ is, however, pretty simple. The function simply does the following:

  • Launches Puppeteer
  • Opens the desired URL
  • Internally executes the JS
  • Extracts the HTML of the page
  • Returns the HTML
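The steps above can be sketched roughly like this (assuming `npm install express puppeteer`; the route and the ‘url’ param mirror the description, while the exact names and options are illustrative):

```javascript
// Sketch of the server described above; assumes express and puppeteer are installed.
async function getPageHTML(pageUrl) {
  const puppeteer = require('puppeteer');                  // required lazily
  const browser = await puppeteer.launch();                // launch Puppeteer
  const page = await browser.newPage();
  await page.goto(pageUrl, { waitUntil: 'networkidle0' }); // open the URL, let JS execute
  const html = await page.content();                       // extract the rendered HTML
  await browser.close();
  return html;                                             // return the HTML
}

function startServer(port = 3000) {
  const express = require('express');                      // required lazily
  const app = express();
  app.get('/', async (req, res) => {
    // accepts a GET request, reads the 'url' param, returns getPageHTML's result
    res.send(await getPageHTML(req.query.url));
  });
  return app.listen(port);
}

// startServer(); // then GET http://localhost:3000?url=https://...
```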

Easy-peasy 👏

Let’s run the script and send a request to http://localhost:3000?url=https://web-scraping-playground-site.firebaseapp.com in the Postman app.

The below screenshot shows the response from our local server.




Using Puppeteer to scrape a website's HTML


Yaaaaay 🎉🎉🎉 We did it! Great job, guys! We got HTML back!

It was easy, but it can be even easier. Let's have a look at the second approach.


Solution using Proxybot

With this approach, we actually only need to send an HTTP GET request. The API service runs a virtual browser internally and sends the HTML back to you.

https://proxybot.io/api/v1/API_KEY?render_js=true&url=your-url-here

Let’s try to call the API in the Postman app.



Scrape HTML from a Javascript website with Proxybot API service

Yaaay 🎊🎊🎊 More HTML!

There is not much to say about the request, because it is pretty straightforward. However, I want to emphasize one small detail: when calling the API, remember to include the render_js=true URL param.

Otherwise, the service will not execute Javascript 🤓
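As a rough sketch, the same call can be made from Node (assuming Node 18+ for the built-in `fetch`; `buildProxybotUrl` is a hypothetical helper, not part of any SDK, and API_KEY is a placeholder for your own key):

```javascript
// Sketch: building and calling the Proxybot URL from Node.
// buildProxybotUrl is an illustrative helper; API_KEY is a placeholder.
function buildProxybotUrl(apiKey, targetUrl) {
  // render_js=true tells the service to execute JavaScript before responding
  return `https://proxybot.io/api/v1/${apiKey}` +
    `?render_js=true&url=${encodeURIComponent(targetUrl)}`;
}

// Usage (makes a network call):
// fetch(buildProxybotUrl('API_KEY', 'https://web-scraping-playground-site.firebaseapp.com'))
//   .then((res) => res.text())
//   .then(console.log);
```

Note the `encodeURIComponent` call: the target URL goes into a query parameter, so it should be percent-encoded.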


Congratulations 🥳 Now you can scrape websites built with JavaScript frameworks like Angular, React, Ember, etc.

I hope this article was interesting and useful.

Proxybot is just one of the services that allow you to proxy your requests. If you are looking for proxy providers, here you can find a list of the best ones.
