kulashekar reddy

Scraping weather data from Google for live climate updates

Web scraping is the process of using bots to extract content and data from a website. A scraper downloads the underlying HTML code of a page and, with it, the data it contains; that content can then be replicated elsewhere. Web scrapers can extract all the data on a site or only the specific data a user wants. When a scraper needs to scrape a site, it is first given the URLs. It then loads all the HTML for those pages, and a more advanced scraper might fetch the CSS and JavaScript as well. Finally, the scraper pulls the required data out of the HTML and outputs it in the format the user specifies. Most often this is an Excel spreadsheet or a CSV file, but the data can also be saved in other formats, such as JSON.
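The fetch-parse-save flow described above can be sketched as follows. To keep the sketch self-contained it parses a hardcoded HTML snippet instead of a live page; the tag structure and class names (`city`, `temp`, `desc`) are invented for illustration:

```python
import csv
from bs4 import BeautifulSoup

# A stand-in for HTML fetched with requests.get(url).content;
# the structure and class names here are made up for the example.
html = """
<html><body>
  <div class="city">Chennai</div>
  <div class="temp">31</div>
  <div class="desc">Partly cloudy</div>
</body></html>
"""

# Build the parse tree and pull out the fields we want.
soup = BeautifulSoup(html, "html.parser")
row = {
    "city": soup.find("div", class_="city").text,
    "temperature": soup.find("div", class_="temp").text,
    "description": soup.find("div", class_="desc").text,
}

# Save the extracted data as CSV, one of the output formats mentioned above.
with open("weather.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=row.keys())
    writer.writeheader()
    writer.writerow(row)

print(row)
```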

Why Python is mostly used for scraping:
Python is one of the most popular languages these days and is smooth to work with because of its easily understandable syntax. It is also predominantly used for scraping because of its wide variety of libraries, which let us do less work.

Beautiful soup

is a Python library that is highly suitable for web scraping. It creates a parse tree that can be used to extract data from the HTML of a website.
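As a quick illustration of the parse tree, here is a minimal sketch; the HTML snippet is made up for the example:

```python
from bs4 import BeautifulSoup

# A tiny HTML document, invented for the example.
html = "<ul><li>Sunny</li><li>Rainy</li><li>Cloudy</li></ul>"

soup = BeautifulSoup(html, "html.parser")

# find() returns the first matching tag; find_all() returns every match.
first = soup.find("li").text
all_items = [li.text for li in soup.find_all("li")]

print(first)      # text of the first <li> in the tree
print(all_items)  # text of every <li> in the tree
```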

Requests allows you to send HTTP requests extremely easily. This module does not come built-in with Python, so we can install it using pip install requests.
Note: the same goes for every third-party library in Python.
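Both libraries used below can be installed with pip in one go (beautifulsoup4 is the package name that provides bs4):

```shell
pip install requests beautifulsoup4
```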


import requests
from bs4 import BeautifulSoup

# Have to specify the city name before requesting the info
city = "chennai"
url = "https://www.google.com/search?q=" + "weather" + city
info = requests.get(url).content
soup = BeautifulSoup(info, 'html.parser')

# After that, using the BeautifulSoup object we can search for the information and retrieve it
temperature = soup.find('div', attrs={'class': 'BNeawe iBp4i AP7Wnd'}).text
print(temperature)
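Two caveats with this approach: Google's class names are auto-generated and change over time, and find() returns None when nothing matches, which makes the .text access crash. A slightly more defensive sketch separates fetching from parsing so the parsing can be tested on its own; the helper names and the User-Agent string are my own choices, not part of any official API:

```python
import requests
from bs4 import BeautifulSoup

def extract_temperature(html):
    """Parse the temperature out of a results page, or return None if absent."""
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("div", attrs={"class": "BNeawe iBp4i AP7Wnd"})
    return tag.text if tag is not None else None

def fetch_weather(city):
    # A browser-like User-Agent makes it less likely the request is served a
    # stripped-down page; the exact string here is an arbitrary example.
    headers = {"User-Agent": "Mozilla/5.0"}
    url = "https://www.google.com/search?q=weather" + city
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # fail loudly on HTTP errors
    return extract_temperature(response.content)
```

Because the class name can change at any time, checking for None and logging the failure is more useful in practice than letting an AttributeError propagate.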

