<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SaaS.Group</title>
    <description>The latest articles on DEV Community by SaaS.Group (@zoltan).</description>
    <link>https://dev.to/zoltan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F658094%2Fa246dcc4-2af8-492d-87e0-b422abb9c5df.jpg</url>
      <title>DEV Community: SaaS.Group</title>
      <link>https://dev.to/zoltan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zoltan"/>
    <language>en</language>
    <item>
      <title>Robots.Txt Files &amp;amp; SEO – Best Practices, and Fixes for Common Issues</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Fri, 19 Aug 2022 20:46:34 +0000</pubDate>
      <link>https://dev.to/zoltan/robotstxt-files-amp-seo-best-practices-and-fixes-for-common-issues-50kd</link>
      <guid>https://dev.to/zoltan/robotstxt-files-amp-seo-best-practices-and-fixes-for-common-issues-50kd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KQGu4IxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/08/robots-txt-files-seo-bestpractices-1200x300-c-default.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KQGu4IxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/08/robots-txt-files-seo-bestpractices-1200x300-c-default.jpg" alt="" width="880" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Originally published on Prerender.io: &lt;a href="https://prerender.io/robots-txt-and-seo/"&gt;Robots.Txt Files &amp;amp; SEO – Best Practices, and Fixes for Common Issues&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Technical SEO is the practice of optimizing the various on-page and off-page ranking signals that help your website rank higher in SERPs. Each tactic plays into the grand scheme of boosting your page rank by ensuring web crawlers can easily crawl, index, and rank your website.&lt;/p&gt;

&lt;p&gt;From  &lt;a href="https://prerender.io/google-pagespeed-insights/"&gt;page speed&lt;/a&gt;  to proper title tags, there are many ranking signals that technical SEO can help with. But did you know that one of the most important files for your website’s SEO is also found on your server?&lt;/p&gt;

&lt;p&gt;The robots.txt file is a plain text file that tells web crawlers which pages on your website they can and cannot crawl. This might not seem like a big deal, but if your robots.txt file is not configured correctly, it can have a serious negative effect on your website’s SEO.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll cover everything you need to know about robots.txt: what the file is, why it matters for SEO, best practices, and how to fix common issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Is a robots.txt File &amp;amp; Why Is It Important in SEO?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The robots.txt file lives on your server and tells web crawlers which pages they can and cannot access. If a crawler requests a page that is blocked in the robots.txt file, the page may be reported as a  &lt;a href="https://prerender.io/soft-404/"&gt;soft 404 error&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Although a soft 404 error will not hurt your website’s ranking directly, it is still considered an error, and too many errors on your website can lead to a slower crawl rate, which can eventually hurt your ranking due to decreased crawling.&lt;/p&gt;

&lt;p&gt;If your website has a lot of pages that are blocked by the robots.txt file, it can also lead to a wasted crawl budget.  &lt;a href="https://prerender.io/crawl-budget-seo/"&gt;The crawl budget&lt;/a&gt; is the number of pages Google will crawl on your website during each visit.&lt;/p&gt;

&lt;p&gt;Another reason why robots.txt files are important in SEO is that they give you more control over the way Googlebot crawls and indexes your website. If you have a website with a lot of pages, you might want to block certain pages from being indexed so they don’t overwhelm search engine web crawlers and hurt your rankings.&lt;/p&gt;

&lt;p&gt;If you have a blog with hundreds of posts, you might want to only allow Google to index your most recent articles. If you have an eCommerce website with a lot of product pages, you might want to only allow Google to index your main category pages.&lt;/p&gt;

&lt;p&gt;Configuring your robots.txt file correctly can help you control the way Googlebot crawls and indexes your website, which can eventually help improve your ranking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rqoPn1Vz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/08/google-best-robots-practices-1024x691.jpg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rqoPn1Vz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/08/google-best-robots-practices-1024x691.jpg.webp" alt="google recommendations for robots.txt files" width="880" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Google Says About robots.txt File Best Practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we’ve gone over why robots.txt files are important in SEO, let’s discuss some best practices recommended by Google.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Create a File Named robots.txt&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The first step is to create a file named  &lt;strong&gt;&lt;em&gt;robots.txt&lt;/em&gt;&lt;/strong&gt;. This file needs to be placed in the root directory of your website – the highest-level directory that contains all other files and directories on your website.&lt;/p&gt;

&lt;p&gt;Here’s an example of proper placement: on the apple.com site, the root directory is apple.com/, so the file would live at apple.com/robots.txt.&lt;/p&gt;

&lt;p&gt;You can create a robots.txt file with any text editor, and many CMSs, like WordPress, will generate one for you automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Add Rules to the robots.txt File&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once you’ve created the robots.txt file, the next step is to add rules. These rules will tell web crawlers which pages they can and cannot access.&lt;/p&gt;

&lt;p&gt;There are two core directives you can add to a robots.txt file: Allow and Disallow.&lt;/p&gt;

&lt;p&gt;Allow rules will tell web crawlers that they are allowed to crawl a certain page.&lt;/p&gt;

&lt;p&gt;Disallow rules will tell web crawlers that they are not allowed to crawl a certain page.&lt;/p&gt;

&lt;p&gt;For example, if you want to allow web crawlers to crawl your homepage, you would add the following rule:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Allow: /&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to disallow web crawlers from crawling a certain subfolder, such as your blog, you would use: &lt;strong&gt;Disallow: /blog/&lt;/strong&gt;. Note that &lt;strong&gt;Disallow: /&lt;/strong&gt; on its own blocks your entire site.&lt;/p&gt;
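
&lt;p&gt;Putting these directives together, a minimal robots.txt might look like the following sketch (the paths are placeholders):&lt;/p&gt;

```text
# Rules for all crawlers
User-agent: *
# Allow everything...
Allow: /
# ...except the admin area
Disallow: /admin/
```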

&lt;h3&gt;
  
  
  &lt;strong&gt;Upload the robots.txt File to Your Site&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After you have added the rules to your robots.txt file, the next step is to upload it to your website. You can do this using an FTP client or your hosting control panel.&lt;/p&gt;

&lt;p&gt;If you’re not sure how to upload the file, contact your web host and they should be able to help you.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Test Your robots.txt File&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After you have uploaded the robots.txt file to your website, the next step is to test it to make sure it’s working correctly. Google provides a free tool called the robots.txt Tester in  &lt;a href="https://search.google.com/search-console/about"&gt;Google Search Console&lt;/a&gt; that you can use to test your file. It can only be used for robots.txt files that are located in the root directory of your website.&lt;/p&gt;

&lt;p&gt;To use the robots.txt tester, enter the URL of your website into the robots.txt Tester tool and then test it. Google will then show you the contents of your robots.txt file as well as any errors it found.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use Google’s Open-Source Robots Library&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you are a more experienced developer, Google also has an open-source robots library that you can use to manage your robots.txt file locally on your computer.&lt;/p&gt;
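
&lt;p&gt;Google’s library (google/robotstxt on GitHub) is a C++ parser. For quick local sanity checks, Python’s standard-library urllib.robotparser offers a similar test, though its matching behavior is not guaranteed to be identical to Google’s. A minimal sketch:&lt;/p&gt;

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt, held in memory for a local check
rules = """
User-agent: *
Disallow: /admin/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Ask whether a given user agent may fetch a given URL
print(parser.can_fetch("*", "https://example.com/blog/post"))    # True
print(parser.can_fetch("*", "https://example.com/admin/login"))  # False
```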

&lt;h2&gt;
  
  
  &lt;strong&gt;What Can Happen to Your Website’s SEO if a robots.txt File Is Broken or Missing?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If your robots.txt file is broken or missing, it can cause search engine crawlers to index pages that you don’t want them to. This can eventually lead to those pages being ranked in Google, which is not ideal. It may also result in site overload as crawlers try to index everything on your website.&lt;/p&gt;

&lt;p&gt;A broken robots.txt file can also cause search engine crawlers to miss important pages on your website. If a page you want indexed is accidentally blocked by a faulty rule, it may never get crawled or indexed.&lt;/p&gt;

&lt;p&gt;In short, it’s important to make sure your robots.txt file is working correctly and that it’s located in the root directory of your website. Rectify this problem by creating new rules or uploading the file to your root directory if it’s missing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zqL2L-_q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/08/best-practices-robots-txt-1024x564.jpg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zqL2L-_q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/08/best-practices-robots-txt-1024x564.jpg.webp" alt="best practices for robots.txt files &amp;amp; SEO" width="880" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practices for Robots.txt Files&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that you know the basics of robots.txt files, let’s go over some best practices. These are things you should do to make sure your file is effective and working properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use a New Line for Each Directive&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When you’re adding rules to your robots.txt file, it’s important to use a new line for each directive to avoid confusing search engine crawlers. This includes both Allow and Disallow rules.&lt;/p&gt;

&lt;p&gt;For example, if you want to disallow web crawlers from crawling your blog and your contact page, you would add the following rules:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disallow: /blog/&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disallow: /contact/&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use Wildcards To Simplify Instructions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you have a lot of pages that you want to block, it can be time-consuming to add a rule for each one. Fortunately, you can use wildcards to simplify your instructions.&lt;/p&gt;

&lt;p&gt;A wildcard is a character that can represent one or more characters. The most common wildcard is the asterisk (*).&lt;/p&gt;

&lt;p&gt;For example, if you want to block all files that end in .jpg, you would add the following rule:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disallow: /*.jpg&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use “$” To Specify the End of a URL&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The dollar sign ($) is another wildcard that you can use to specify the end of a URL. This is helpful if you want to block a certain page but not the pages that come after it.&lt;/p&gt;

&lt;p&gt;For example, if you want to block the contact page but not the contact-success page, you would add the following rule:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disallow: /contact$&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use Each User Agent Only Once&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Google doesn’t mind if you list the same user agent multiple times in your robots.txt file; its crawler merges the rules. However, it’s considered best practice to group all rules for a given user agent under a single entry: the file stays readable, and you’re less likely to create conflicting rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use Specificity To Avoid Unintentional Errors&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When it comes to robots.txt files, specificity is key. The more specific you are with your rules, the less likely you are to make an error that could hurt your website’s SEO.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use Comments To Explain Your robots.txt File to Humans&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Although robots.txt files are read by bots, humans still need to be able to understand, maintain, and manage them. This is especially true if you have multiple people working on your website.&lt;/p&gt;

&lt;p&gt;You can add comments to your robots.txt file to explain what certain rules do. A comment starts with a # and runs to the end of the line, so it can sit on its own line or after a directive; crawlers ignore it either way.&lt;/p&gt;

&lt;p&gt;For example, if you want to block all files that end in .jpg, you could add the following comment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disallow: /*.jpg # Block all files that end in .jpg&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This would help anyone who needs to manage your robots.txt file understand what the rule is for and why it’s there.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use a Separate robots.txt File for Each Subdomain&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you have a website with multiple subdomains, it’s best to create a separate robots.txt file for each one. This helps to keep things organized and makes it easier for search engine crawlers to understand your rules.&lt;/p&gt;
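
&lt;p&gt;Because crawlers request the file per host, each subdomain serves its own copy. For example (the domains and paths below are placeholders):&lt;/p&gt;

```text
# https://example.com/robots.txt
User-agent: *
Disallow: /admin/

# https://blog.example.com/robots.txt (served separately by the subdomain)
User-agent: *
Disallow: /drafts/
```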

&lt;h2&gt;
  
  
  &lt;strong&gt;Common Robots.txt File Mistakes and How To Fix Them&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Understanding the most common mistakes people make with their robots.txt files can help you avoid making them yourself. Here are some of the most common mistakes and how to fix these  &lt;a href="https://prerender.io/technical-seo-issues/"&gt;technical SEO issues&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Missing robots.txt File&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The most common robots.txt file mistake is not having one at all. If you don’t have a robots.txt file, search engine crawlers will assume that they are allowed to crawl your entire website.&lt;/p&gt;

&lt;p&gt;To fix this, you’ll need to create a robots.txt file and add it to your website’s root directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Robots.txt File Not in the Directory&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you don’t have a robots.txt file in your website’s root directory, search engine crawlers won’t be able to find it. As a result, they will assume that they are allowed to crawl your entire website.&lt;/p&gt;

&lt;p&gt;The file must be a single text file named robots.txt, placed in the root directory rather than in a subfolder.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;No Sitemap URL&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Your robots.txt file should always include a link to your website’s sitemap. This helps search engine crawlers find and index your pages.&lt;/p&gt;

&lt;p&gt;Omitting the sitemap URL from your robots.txt file is a common mistake. It may not actively hurt your website’s SEO, but adding the URL helps crawlers discover and index your pages faster.&lt;/p&gt;
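
&lt;p&gt;The sitemap is referenced with a Sitemap directive containing the full, absolute URL (example.com is a placeholder):&lt;/p&gt;

```text
User-agent: *
Disallow: /admin/

# Full, absolute URL to the sitemap
Sitemap: https://example.com/sitemap.xml
```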

&lt;h3&gt;
  
  
  &lt;strong&gt;Blocking CSS and JS&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;According to John Mueller, you must avoid blocking CSS and JS files as Google search crawlers require them to render the page correctly.&lt;/p&gt;

&lt;p&gt;Naturally, if the bots can’t render your pages, they won’t be indexed.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Using NoIndex in robots.txt&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In 2019, Google stopped supporting the noindex directive in robots.txt files. As a result, you should not use noindex rules in your robots.txt file.&lt;/p&gt;

&lt;p&gt;If your robots.txt file still contains noindex rules, remove them as soon as possible and use the robots meta tag or the X-Robots-Tag HTTP header instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Improper Use Of Wildcards&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Using wildcards incorrectly can end up restricting access to files and directories you didn’t intend to block.&lt;/p&gt;

&lt;p&gt;When using wildcards, be as specific as possible to avoid mistakes that could hurt your website’s SEO, and stick to the two supported wildcards: the asterisk (*) and the dollar sign ($).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Wrong File Type Extension&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As the name implies, a robots.txt file must be a plain text file that ends in .txt. It cannot be an HTML file, an image, or any other type of file, and it must be encoded in UTF-8. Useful introductory resources are  &lt;a href="https://developers.google.com/search/docs/advanced/robots/intro"&gt;Google’s robots.txt guide&lt;/a&gt; and the  &lt;a href="https://developers.google.com/search/docs/advanced/robots/robots-faq"&gt;Google robots.txt FAQ&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Use Robots.txt Files Like a Pro&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A robots.txt file is a powerful tool that can be used to improve your website’s SEO. However, it’s important to use it correctly.&lt;/p&gt;

&lt;p&gt;When used properly, a robots.txt file can help you control which pages are indexed by search engines and improve your website’s crawlability. It can also help you avoid  &lt;a href="https://prerender.io/how-to-fix-duplicate-content-issues/"&gt;duplicate content issues&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, if used incorrectly, a robots.txt file can do more harm than good. It’s important to avoid common mistakes and follow the best practices that will help you use your robots.txt file to its full potential and improve your website’s SEO. In addition to expertly navigating robots.txt files,  &lt;a href="https://dashboard.prerender.io/signup?_gl=1*2vn920*_ga*MTQ5MDkyOTA4Ny4xNjQzMDYxMDI3*_ga_5C99FX76HR*MTY1ODM0NTIxMy4xMy4wLjE2NTgzNDUyMTMuMA.."&gt;dynamic rendering with Prerender&lt;/a&gt;  also offers the opportunity to produce static HTML for complex JavaScript websites. Now you can allow  &lt;a href="https://prerender.io/faster-indexation/"&gt;faster indexation&lt;/a&gt;,  &lt;a href="https://prerender.io/better-response-times/"&gt;faster response times&lt;/a&gt;, and an overall  &lt;a href="https://prerender.io/nicer-user-experience/"&gt;better user experience&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>10 Tips and Tricks on How to Make a Website Mobile Friendly</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Fri, 19 Aug 2022 19:42:00 +0000</pubDate>
      <link>https://dev.to/zoltan/10-tips-and-tricks-on-how-to-make-a-website-mobile-friendly-3hj7</link>
      <guid>https://dev.to/zoltan/10-tips-and-tricks-on-how-to-make-a-website-mobile-friendly-3hj7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZnagwW3w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/mobile-friendly-feat-1200x300-c-default.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZnagwW3w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/mobile-friendly-feat-1200x300-c-default.jpg" alt="" width="880" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Originally published on Prerender: &lt;a href="https://prerender.io/how-to-make-a-website-mobile-friendly/"&gt;10 Tips and Tricks on How to Make a Website Mobile Friendly&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The future of the internet is mobile.&lt;/p&gt;

&lt;p&gt;Websites that are not mobile-friendly will soon be a thing of the past. With &lt;a href="https://www.statista.com/statistics/277125/share-of-website-traffic-coming-from-mobile-devices/#:~:text=Mobile%20accounts%20for%20approximately%20half,consistently%20surpassing%20it%20in%202020."&gt;over half (54.4%) of all internet traffic&lt;/a&gt;  coming from mobile devices, it is more important than ever to make sure your website looks great and functions properly on smartphones and tablets.&lt;/p&gt;

&lt;p&gt;If your website doesn’t look good on a smartphone, your users will leave and go to a competitor’s website that does.&lt;/p&gt;

&lt;p&gt;So what does it mean to be mobile-friendly? How can you check if your website is optimized for mobile traffic? Most importantly, why is it important that your website is mobile-friendly?&lt;/p&gt;

&lt;p&gt;In this post, we will answer all those questions and give you some tips on how to make a website mobile-friendly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does It Actually Mean To Be Mobile-Friendly?
&lt;/h2&gt;

&lt;p&gt;A mobile-friendly website is easy to use on a smartphone or tablet. This means that the layout is easy to navigate, and buttons and links are appropriately sized to be used with a finger.&lt;/p&gt;

&lt;p&gt;Mobile-friendly sites are well-optimized and designed specifically around the needs of mobile users. For instance, they tend to load quickly, since mobile users are often on the go and won’t wait for a slow page.&lt;/p&gt;

&lt;p&gt;Overall, the goal of a mobile-friendly website is to provide users with a positive experience that is optimized for the device they are using.&lt;/p&gt;

&lt;h2&gt;
  
  
  How To Check If Your Website Is Mobile-Friendly
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZbSxnIeA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/is-site-mobile-friendly-1024x354.jpg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZbSxnIeA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/is-site-mobile-friendly-1024x354.jpg.webp" alt="how to check if a website is mobile friendly" width="880" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Checking if your website is mobile-friendly is pretty easy due to the many approaches available. Here are some ways you can test your website’s mobile-friendliness:&lt;/p&gt;

&lt;h3&gt;
  
  
  Manually Change The Size Of Your Website’s Browser Window
&lt;/h3&gt;

&lt;p&gt;The quickest and easiest way to test if your website is mobile-friendly is right in your desktop browser: open your website and resize the window down to roughly the width of a smartphone screen. If the site is still easy to use and navigate, it is most likely mobile-friendly. If not, you will need to make some changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Load Your Website From Different Devices
&lt;/h3&gt;

&lt;p&gt;Nothing beats good ole’ fashioned testing. If you have access to different types of mobile devices, load your website on each one and see how it looks and feels. This will give you a good idea of how your website appears on different screen sizes and can help you identify any areas that need improvement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Google’s Mobile-Friendly Test Tool
&lt;/h3&gt;

&lt;p&gt;Google has a great free tool that will analyze your website and tell you if it is mobile-friendly. All you need to do is enter your website’s URL and the tool will do the rest.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://search.google.com/test/mobile-friendly"&gt;Mobile-Friendly Test Tool&lt;/a&gt;  will give your website a score that categorizes how mobile-friendly your website is. It will also provide a list of current issues and actionable items so you can fix and improve your website’s mobile-friendliness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Your Website Must Be Mobile-Friendly
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0PRqIFgm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/why-mobile-friendly-1024x535.jpg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0PRqIFgm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/why-mobile-friendly-1024x535.jpg.webp" alt="benefits of a mobile friendly website" width="880" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s always a wise idea to understand the many benefits that you can reap by making your website mobile-friendly. After all, why put in the extra effort if there’s no real payoff?&lt;/p&gt;

&lt;p&gt;Here are some key reasons why it’s important to have a mobile-friendly website:&lt;/p&gt;

&lt;h3&gt;
  
  
  Boost ROI and Conversion Rate
&lt;/h3&gt;

&lt;p&gt;If your website is not mobile-friendly, you will lose potential customers and revenue. Research has shown that &lt;a href="https://digital.com/1-in-2-visitors-abandon-a-website-that-takes-more-than-6-seconds-to-load/#:~:text=1%20in%202%20online%20shoppers,and%20any%20items%20in%20them."&gt;1 in 2 users&lt;/a&gt;  will leave a website that is not mobile-friendly and go to a competitor’s site.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mobile-Friendly is a Major Ranking Factor
&lt;/h3&gt;

&lt;p&gt;Google has stated that they are now using &lt;a href="https://developers.google.com/search/mobile-sites/mobile-first-indexing"&gt;mobile-friendliness as a ranking factor&lt;/a&gt;  in its search algorithm. This means that if your website is not mobile-friendly, you’ll appear lower in the SERPs than competitors who are, and it will be harder for people to find you online.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mobile Users Are Increasing Every Year
&lt;/h3&gt;

&lt;p&gt;As more and more people adopt smartphones and tablets, the number of mobile users is increasing every year. Over half (54.4%) of all web traffic now comes from mobile devices.&lt;/p&gt;

&lt;p&gt;This trend is only going to continue, so it’s important to make sure your website is prepared.&lt;/p&gt;

&lt;p&gt;Besides the many obvious benefits for you and your business, it also works in your users’ interest, which still comes back to benefit you in the long run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Positive User Experience
&lt;/h3&gt;

&lt;p&gt;A mobile-friendly website provides users with a positive experience that is optimized for their devices. This results in happy users, which leads to more recommendations and positive testimonials, which leads to more traffic and conversions for your website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Seamless Experience Across All Devices
&lt;/h3&gt;

&lt;p&gt;Having a seamless user experience from device to device is also becoming more important as people use multiple devices throughout their day.&lt;/p&gt;

&lt;p&gt;According to Google, &lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/search/shift-to-constant-connectivity/"&gt;90% of people&lt;/a&gt;  move between screens across devices to accomplish a task. This means that people often start their journey on one device (usually a mobile phone) and then move to another device (usually a desktop computer) to complete it. For example, many people will casually research their choices on a mobile device before completing their purchase and checkout on their laptop.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trust and Credibility Builder
&lt;/h3&gt;

&lt;p&gt;Lastly, it also helps your users see you in a positive light. Trust and credibility go a long way when it comes to nurturing and fostering positive relationships with your users, and creating a mobile-friendly website is a step in the right direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  10 Tips To Make Your Website Mobile-Friendly
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--emwyS7om--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/mobile-friendly-how-to-w-1024x560.jpg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--emwyS7om--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prerender.io/wp-content/uploads/2022/06/mobile-friendly-how-to-w-1024x560.jpg.webp" alt="checklist for how to make a website mobile friendly" width="880" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we’ve gone over what it means to be mobile-friendly and why it’s important, let’s take a look at some tips on how to make a website mobile-friendly.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Choose A Responsive Layout
&lt;/h3&gt;

&lt;p&gt;A responsive layout is a must if you want your website to be mobile-friendly.&lt;/p&gt;

&lt;p&gt;It allows the website to adapt and change its layout based on the device it is being viewed on. Also, with a responsive layout, side-scrolling and zooming are no longer necessary, which makes the experience much smoother for mobile users.&lt;/p&gt;

&lt;p&gt;This means that your website will look great and be easy to use no matter what device your users are on.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Avoid Intrusive Pop-Ups
&lt;/h3&gt;

&lt;p&gt;No one likes pop-ups, especially on mobile. They are intrusive, difficult to close on a small screen, and just plain annoying; a pop-up with a hard-to-see or non-existent close button is aggravating and the hallmark of a poorly designed mobile website.&lt;/p&gt;

&lt;p&gt;If you have pop-ups on your website, make sure they are non-intrusive and can be easily closed on a mobile device.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Optimize Your Mobile Pages
&lt;/h3&gt;

&lt;p&gt;Today’s customers want everything to be fast and easily accessible, and if it’s not, they will quickly move on to something else.&lt;/p&gt;

&lt;p&gt;The same goes for mobile users. If your website is slow and laggy, your users will leave and find a faster one.&lt;/p&gt;

&lt;p&gt;If you want to keep your mobile users happy so they stick around, you need to make sure your website speed is up to par.&lt;/p&gt;

&lt;p&gt;Here are a few ways to do that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use a content delivery network (CDN)&lt;/li&gt;
&lt;li&gt;  Optimize your images&lt;/li&gt;
&lt;li&gt;  Optimize your website code&lt;/li&gt;
&lt;li&gt;  Use browser caching&lt;/li&gt;
&lt;li&gt;  Minify your CSS and JavaScript files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a full breakdown of how to optimize your website speed, check out our full guide on &lt;a href="https://prerender.io/technical-seo-issues/"&gt;common technical SEO issues&lt;/a&gt;.&lt;/p&gt;
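
&lt;p&gt;As one sketch of the browser-caching item above (assuming an nginx server; Apache and CDNs have equivalent settings), you can tell browsers to cache static assets with a Cache-Control response header:&lt;/p&gt;

```nginx
# nginx: let browsers cache static assets for one year
location ~* \.(css|js|jpg|jpeg|png|webp|svg)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```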

&lt;h3&gt;
  
  
  4. Use Large Font Sizes
&lt;/h3&gt;

&lt;p&gt;One of the main differences between desktop and mobile devices is the screen size.&lt;/p&gt;

&lt;p&gt;Mobile screens are much smaller, which can make it difficult to read small font sizes.&lt;/p&gt;

&lt;p&gt;To make your website mobile-friendly, use large font sizes that are easy to read on a small screen. This will make your website much easier to use for mobile users. Once you have done this, don’t forget to test it out on a few different devices to make sure it looks good and is easy to read.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Stick to a Minimalist Design
&lt;/h3&gt;

&lt;p&gt;Less is more when it comes to mobile design.&lt;/p&gt;

&lt;p&gt;A minimalist design is much easier to navigate and use on a small screen. Plus, it will help your website load faster, which is always a bonus.&lt;/p&gt;

&lt;p&gt;So, when you are designing your mobile-friendly website, stick to a minimalist design and only include the essentials such as your branding, navigation, and content.&lt;/p&gt;

&lt;p&gt;You can always add more bells and whistles later, but for now, keep it simple.&lt;/p&gt;

&lt;p&gt;This will make your website much easier to use on a mobile device and ensure that your users have a positive experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Make Sure Button Size and Placement Work on Mobile
&lt;/h3&gt;

&lt;p&gt;One of the most important things to consider when making your website mobile-friendly is button size and placement.&lt;/p&gt;

&lt;p&gt;On a desktop, it’s easy to click on a small button or in a difficult-to-reach place, but on mobile, it’s much more difficult.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.smashingmagazine.com/2016/09/the-thumb-zone-designing-for-mobile-users/#:~:text=Hoober's%20research%20shows%20that%2049,of%20interactions%20are%20thumb%2Ddriven."&gt;49% of mobile users use their thumbs&lt;/a&gt;  to navigate and click on elements that are within their thumbs’ reach. That means that any element placed in the upper corners of your website will be a struggle to reach for most users.&lt;/p&gt;

&lt;p&gt;To make your website mobile-friendly, make sure all buttons are large and placed in the middle of the screen where they are easy to reach.&lt;/p&gt;

&lt;p&gt;And, if you want to take it a step further, you can add a “hamburger” menu icon in the upper corner of your website that will expand to show all of your navigation options when clicked.&lt;/p&gt;

&lt;p&gt;This is a great way to make sure your mobile users can easily find what they are looking for without having to search through a long list of menu items.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Use the Viewport Meta Tag
&lt;/h3&gt;

&lt;p&gt;The viewport meta tag is an HTML element that tells the browser how to adjust the website’s dimensions and scaling to fit the width of the device’s screen.&lt;/p&gt;

&lt;p&gt;Without this tag, your website will not be responsive and will not adjust to different screen sizes.&lt;/p&gt;

&lt;p&gt;To make your website mobile-friendly, you need to add the viewport meta tag to the HTML code of your website.&lt;/p&gt;

&lt;p&gt;Once you have added the tag, be sure to test your website on different devices to make sure it is responsive and looks good on all screen sizes.&lt;/p&gt;
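&lt;p&gt;The tag itself is a single line inside your page’s &amp;lt;head&amp;gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;meta name="viewport" content="width=device-width, initial-scale=1"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here, width=device-width matches the layout width to the device’s screen, and initial-scale=1 sets the starting zoom level to 100%.&lt;/p&gt;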

&lt;h3&gt;
  
  
  8. Use Media Queries
&lt;/h3&gt;

&lt;p&gt;Now a popular technique, media queries were designed to make responsive design simpler and more effective.&lt;/p&gt;

&lt;p&gt;Media queries are CSS rules that apply different styles depending on characteristics such as screen width, allowing you to tailor your design to each screen size.&lt;/p&gt;

&lt;p&gt;For example, you could use a media query to make sure your website’s font size is large enough to be readable on a mobile device.&lt;/p&gt;

&lt;p&gt;Or, you could use a media query to hide certain elements of your website on mobile devices to make the design simpler and easier to use.&lt;/p&gt;

&lt;p&gt;Media queries are a great way to make sure your website looks good and functions well on all devices.&lt;/p&gt;
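&lt;p&gt;As a sketch of both ideas above, the following media query bumps the base font size and hides a decorative element on narrow screens (the class name is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;/* Applies only when the viewport is 600px wide or narrower */
@media (max-width: 600px) {
  body {
    font-size: 18px; /* larger, easier-to-read text on small screens */
  }
  .decorative-sidebar {
    display: none;   /* simplify the layout on mobile */
  }
}
&lt;/code&gt;&lt;/pre&gt;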

&lt;h3&gt;
  
  
  9. Make Forms Simpler – Turn Off Autocorrect on Mobile
&lt;/h3&gt;

&lt;p&gt;Another important thing to consider when making your website mobile-friendly is forms.&lt;/p&gt;

&lt;p&gt;Forms are often long and complex, which can be difficult to fill out on a mobile device.&lt;/p&gt;

&lt;p&gt;To make your forms simpler and easier to use on mobile, turn off the autocorrect function. This will prevent the keyboard from changing what you type and make it easier to fill out your form.&lt;/p&gt;

&lt;p&gt;In addition, make sure all form fields are large enough to be easily clicked on mobile devices.&lt;/p&gt;

&lt;p&gt;If you have a longer form, consider cutting down on the number of fields or breaking it up into multiple pages so that mobile users don’t have to scroll through a long list of fields.&lt;/p&gt;
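&lt;p&gt;On the markup side, here is a sketch of a mobile-friendly form field: autocorrect and autocapitalize are switched off, and inputmode brings up the appropriate keyboard:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;label for="email"&amp;gt;Email&amp;lt;/label&amp;gt;
&amp;lt;input type="email" id="email" name="email"
       autocorrect="off" autocapitalize="none"
       spellcheck="false" inputmode="email"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that autocorrect is a non-standard attribute (honored mainly by Safari), so treat it as a progressive enhancement rather than a guarantee.&lt;/p&gt;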

&lt;h3&gt;
  
  
  10. Continually Test Your Website on Different Devices and Screen Sizes
&lt;/h3&gt;

&lt;p&gt;Finally, as your work is never done, continually test and optimize your website on different devices and screen sizes, even after development.&lt;/p&gt;

&lt;p&gt;Mobile devices are constantly changing and evolving, as new devices and screen sizes are continuously being released, so it’s important to keep up with the latest trends.&lt;/p&gt;

&lt;p&gt;By using the previously mentioned methods, you can test your website regularly throughout its lifespan. When you take this necessary step, you can identify and tackle any potential mobile-unfriendly issues that may arise.&lt;/p&gt;

&lt;p&gt;You can take it one step further by testing out every user scenario and optimizing your customers’ journey. Taking this step will allow you to identify any potential bottlenecks and make the necessary changes to keep your website running smoothly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make Your Website Look and Perform Its Best On All Fronts
&lt;/h2&gt;

&lt;p&gt;Similar to the desktop, first impressions matter on mobile.&lt;/p&gt;

&lt;p&gt;The average user will make a judgment about your website in just a few seconds, so it’s important to make sure your website looks and acts its best.&lt;/p&gt;

&lt;p&gt;If you follow the above tips and tricks, achieving that will be much more straightforward. Put these tips into practice one step at a time, and make your website mobile-friendly. To kick your website up a notch, factor in JavaScript dynamic rendering so search engine bots can quickly process and index it. &lt;a href="https://dashboard.prerender.io/signup?_gl=1*13psdcm*_ga*NjM4MzMyMzQxLjE2NTQ3MzEzNTY.*_ga_5C99FX76HR*MTY1NDczMTM1NS4xLjEuMTY1NDczMzM5NC4w"&gt;Register with Prerender for free today to get started&lt;/a&gt;. Your users will thank you for it, no matter their device.&lt;/p&gt;

</description>
      <category>mobile</category>
      <category>web</category>
      <category>webdev</category>
      <category>design</category>
    </item>
    <item>
      <title>SEO and Social Media: How Social Media Impacts Your SEO ROI</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Mon, 27 Jun 2022 23:47:40 +0000</pubDate>
      <link>https://dev.to/zoltan/seo-and-social-media-how-social-media-impacts-your-seo-roi-2an1</link>
      <guid>https://dev.to/zoltan/seo-and-social-media-how-social-media-impacts-your-seo-roi-2an1</guid>
      <description>&lt;p&gt;Post originally published on &lt;a href="https://prerender.io/seo-and-social-media/"&gt;Prender&lt;/a&gt;.&lt;br&gt;
Explaining SEO and social media’s relationship to stakeholders can be tricky because of how many different opinions exist on the topic – even among dedicated professionals. There do seem to be, however, a lot of benefits and nuances of using social media to enhance your SEO campaign ROI, although in most cases it won’t directly affect your rankings.&lt;/p&gt;

&lt;p&gt;Today we’re digging deeper into social media’s role in SEO to see if we can provide answers once and for all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Social Media a Ranking Factor?
&lt;/h2&gt;

&lt;p&gt;Google admitted that &lt;a href="https://youtu.be/WszvyRune14?t=1193"&gt;social media signals aren’t a direct ranking factor&lt;/a&gt; for them. However, when you search for some brands on Google, you see that their social media accounts like Twitter, Instagram, and Facebook rank higher on search results than their official company website.&lt;/p&gt;

&lt;p&gt;You can also find recent social media posts on search engine result pages (SERPs), indicating that brands can drive significant traffic to their social media accounts directly from search.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JPcXerWf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zh36wjdcmkciu65jv5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JPcXerWf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zh36wjdcmkciu65jv5f.png" alt="Image description" width="880" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, features like Facebook page reviews are shown in Google search results, which can help brands create trust and authority. So, even though social media doesn’t directly impact search rankings, there is no denying that having a strong social presence gets you more visibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; One thing to keep in mind is that &lt;a href="https://www.seohermit.com/articles/how-social-signals-help-seo/#:~:text=Bing%20Loves%20Social%20Media%20Engagement&amp;amp;text=Likes%20and%20shares%20definitely%20do,steadily%20rising%20for%20several%20years."&gt;Bing does use social signals as a ranking factor&lt;/a&gt;. So even though the majority of traffic comes from Google, by optimizing our social media presence we’re also helping our website to rank higher on Bing’s search results. &lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the Role of Social Media in SEO?
&lt;/h2&gt;

&lt;p&gt;Even though you cannot get direct search engine ranking benefits from social media, there are many ways your social presence helps your website’s SEO efforts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Social media drives referral traffic to your site.&lt;br&gt;
Although getting traffic from your socials doesn’t have the same consistency as organic SEO traffic, social media can help you send targeted visitors interested in reading your content or buying your products/solutions to your website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It increases the time your user spends on your page (Dwell time).&lt;br&gt;
Generally, site visitors who are referred from social media tend to spend more time on your website, which is a very valuable engagement metric. In addition, it signals to search engines that your site is providing valuable content that the users want to engage with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Social media creates opportunities for backlinks.&lt;br&gt;
When posting on social media, you never know who will interact with your content. Even authoritative website owners can find your pages through social media posts and decide to give you a backlink if it provides value for their readers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It might help you get new pages indexed faster.&lt;br&gt;
By promoting your new articles and pages through social media, you’re creating new ways for Google to find and index your content. Social shares can also help Google find these pages and, if traffic is coming to them, it’s a clear indication for search engines that your pages are worth adding to their SERPs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Social media provides you with more psychographic metrics about your audience.&lt;br&gt;
Since people provide more information about themselves on social media, you can really gain an understanding of the beliefs, fears, and pain points of your target audience. These social media metrics help you create useful, engaging website content that caters to your visitors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The 5 Best Social Media Platforms for SEO
&lt;/h2&gt;

&lt;p&gt;To help you choose the right platform to support your SEO campaign, we’ve compiled a list of some of the best social media platforms that can help you scale your SEO efforts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Pinterest&lt;/strong&gt;&lt;br&gt;
Pinterest works more or less like a search engine, even though it is a social media platform. &lt;a href="https://influencermarketinghub.com/pinterest-stats/#toc-7"&gt;Over 1 billion active searches are generated on Pinterest per month&lt;/a&gt;, making it an ideal platform for bloggers, online marketers, and eCommerce stores.&lt;/p&gt;

&lt;p&gt;Unlike other social apps such as Instagram and TikTok, which restrict you to a single link in your profile, Pinterest doesn’t limit a post’s reach to one outbound link.&lt;/p&gt;

&lt;p&gt;To rank on Pinterest search results, you need to make creative pins and optimize them with the right keywords. Don’t forget to create an outbound link to your website on every pin you create. Video pins are getting more viral on Pinterest now, which is something you can leverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Quora&lt;/strong&gt;&lt;br&gt;
It has probably happened to you at least once: you typed a question into Google, and the first result it showed you was from Quora.&lt;/p&gt;

&lt;p&gt;Quora is a social media platform where people ask questions and get the answers in real-time, making it a fantastic platform for you to build authority by providing helpful answers.&lt;/p&gt;

&lt;p&gt;You can generate traffic directly from Google and from the platform itself. &lt;a href="https://www.similarweb.com/website/quora.com/#overview"&gt;Over 500 million people visit Quora every month&lt;/a&gt;, according to Similarweb.&lt;/p&gt;

&lt;p&gt;To make the most out of Quora, you need to provide insightful, genuinely helpful answers. You can also add links to your website if it is relevant to the question and complements your answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are new to Quora, it is better not to provide outbound links for the first few weeks. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. LinkedIn&lt;/strong&gt;&lt;br&gt;
Although LinkedIn was designed as a professional network platform, it has become the number one choice for B2B marketers to generate leads.&lt;/p&gt;

&lt;p&gt;If your target audience is top professionals, like CEOs, CMOs, and decision-makers, LinkedIn is the best platform on which to promote your content and create a presence.&lt;/p&gt;

&lt;p&gt;Compared to platforms like Facebook, Instagram, and Twitter, it is easier to get visibility on LinkedIn if you have a solid content strategy.&lt;/p&gt;

&lt;p&gt;Something to notice is that even though LinkedIn allows you to create a company page, most users prefer to follow and interact with other professionals. In other words, the best way to share and create conversations around your brand and content is to use your CEO’s profile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Facebook&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://datareportal.com/essential-facebook-stats#:~:text=Here's%20what%20the%20latest%20data,"&gt;Facebook has over 10 Billion monthly visits &lt;/a&gt;%3A%202.936%20billion%20(April%202022)&amp;amp;text=Number%20of%20people%20who%20use,)%3A%201.960%20billion%20(April%202022)&amp;amp;text=Share%20of%20Facebook's%20monthly%20active,%3A%2067%25%20(April%202022))and is ranked as the 3rd most-visited website globally. It is very easy to share web content like blog posts, eCommerce products, and web pages on Facebook.&lt;/p&gt;

&lt;p&gt;The great thing about these numbers – besides the big pool of users you can reach – is that Google continuously crawls and indexes Facebook pages as it does other pages. So your company and product pages can rank on SERPs, and it can help Google to find new content faster.&lt;/p&gt;

&lt;p&gt;However, the best Facebook feature is its paid ad campaigns. By investing some budget to promote your web content, you can reach new readers quickly and run CTR experiments for content optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Twitter&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.statista.com/statistics/470038/twitter-audience-reach-visitors/"&gt;Twitter drives over 6 billion monthly visits&lt;/a&gt;, ranking 7th most-visited website globally. Similar to Facebook, posting on Twitter can help you gain more SERP real estate thanks to Twitter carousels now being shown on result pages.&lt;/p&gt;

&lt;p&gt;Twitter also allows you to share links in your tweets, so make sure to add a backlink to your content when promoting it on the social network. It won’t pass any link authority but it will provide relevant referral traffic to your site.&lt;/p&gt;

&lt;p&gt;A good rule of thumb is to invest 40% of your time on Twitter talking about your industry, providing practical, useful tips, and retweeting content from relevant brands; the second 40% should be interacting with other brands and users; the last 20% should be invested in promoting your own content.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Strategies to Bring SEO and Social Media Together
&lt;/h2&gt;

&lt;p&gt;Hopefully, by now you have a clear understanding of how social media can impact your website’s traffic and how it may fit into your overall SEO strategy.&lt;/p&gt;

&lt;p&gt;To help you accomplish your goals, we want to share some best practices you can implement to successfully integrate SEO and social media:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Make Promotion a Key Part of Your Content Strategy&lt;/strong&gt;&lt;br&gt;
Promoting your content on social media has a lot of benefits, but if promotion is disjointed from the content development process, you’ll cripple your reach.&lt;/p&gt;

&lt;p&gt;Instead, bring SEO and social media teams together to plan SEO-relevant content and how to best promote it in a social format.&lt;/p&gt;

&lt;p&gt;Some articles like listicles or step-by-step guides can be easy to break down into LinkedIn posts, Twitter threads, etc. If you keep the teams close, you’ll have a more cohesive marketing strategy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create a Site &amp;amp; Social Content Calendar&lt;/strong&gt;&lt;br&gt;
Creating a content calendar for both website and social content will provide you with foresight on what needs to be done and when. This will also allow your team to work together and better coordinate efforts.&lt;/p&gt;

&lt;p&gt;And, because social media is an almost real-time medium, you can create seasonal content on the platform to drive traffic to old pages that become relevant at specific times of the year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Make Your Web Content Easily Shareable&lt;/strong&gt;&lt;br&gt;
Optimizing on-page SEO and other &lt;a href="https://prerender.io/technical-seo-issues/"&gt;technical SEO &lt;/a&gt;factors only makes your blog ready for landing on search results. If you want to take advantage of the power of social media, you need to make your blog posts easily shareable, too.&lt;/p&gt;

&lt;p&gt;You want to leverage your loyal readers who love your articles and want to share them with their friends and family on social media by adding social share buttons for each popular platform. It is also a good idea to craft a compelling call-to-action to encourage users to share your content.&lt;/p&gt;

&lt;p&gt;On the technical side, you need to pay attention to your &lt;a href="https://prerender.io/social-media-sharing/"&gt;Open Graph Protocol&lt;/a&gt; to avoid any problems when sharing content on social apps.&lt;/p&gt;
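&lt;p&gt;A minimal set of Open Graph tags for a blog post looks like this (the URLs and titles are placeholders to swap for your own):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;meta property="og:title" content="Your Post Title"&amp;gt;
&amp;lt;meta property="og:description" content="A one-sentence summary of the post."&amp;gt;
&amp;lt;meta property="og:image" content="https://example.com/share-image.jpg"&amp;gt;
&amp;lt;meta property="og:url" content="https://example.com/your-post"&amp;gt;
&amp;lt;meta property="og:type" content="article"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;These tags control the title, description, and preview image social apps display when someone shares your page.&lt;/p&gt;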

&lt;p&gt;&lt;strong&gt;4. Repurpose Content Effectively&lt;/strong&gt;&lt;br&gt;
Creating great content is no easy feat. It takes time and creativity to generate and implement a great piece of content. To leverage it effectively, a good idea is to take the best-performing content of one platform and optimize it to be used on a different one.&lt;/p&gt;

&lt;p&gt;For example, if a post on LinkedIn generates a lot of comments, likes, and shares, it is a great indicator that your audience wants to learn or know more about the subject and that it might be worth writing a long-form piece for your blog.&lt;/p&gt;

&lt;p&gt;Vice versa, high-performing articles and pages can be repurposed into social posts to bring new eyes to the pages and keep the conversation going.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Use Social Media To Find Keywords&lt;/strong&gt;&lt;br&gt;
Keeping an eye on the latest trends on social media like Twitter, Facebook, and Instagram can help you identify fresh keywords that can earn you huge traffic wins without having to compete with other websites. Social media can be the best place to find those low-hanging fruits.&lt;/p&gt;

&lt;p&gt;Another source of great keywords is recurring questions your audience sends you through social media. If enough people are asking the same things over and over, it can be an indicator of a content gap you need to fill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Sure, social media doesn’t directly affect your organic rankings, but it definitely has a place in the SEO playbook. After all, content marketing fuels both strategies, so it is only natural that both channels complement each other.&lt;/p&gt;

&lt;p&gt;In the end, it’s not just about improving your rankings but also improving the quality of traffic you drive to your site, and platforms like Quora, Twitter, and LinkedIn can help you find and attract your ideal audience.&lt;/p&gt;

</description>
      <category>seo</category>
      <category>roi</category>
      <category>socialmedia</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Build an eBay Web Scraper: How to Extract Product Data Using Javascript</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Fri, 29 Apr 2022 21:51:53 +0000</pubDate>
      <link>https://dev.to/zoltan/build-an-ebay-web-scraper-how-to-extract-product-data-using-javascript-47k3</link>
      <guid>https://dev.to/zoltan/build-an-ebay-web-scraper-how-to-extract-product-data-using-javascript-47k3</guid>
      <description>&lt;p&gt;Originally published on &lt;a href="https://www.scraperapi.com/blog/ebay-web-scraper-tutorial/"&gt;ScraperAPI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;eBay is one of the largest eCommerce platforms in the world. With over 1 billion active listings on its site, it is also one of the largest data gold mines for pricing analysis, online purchase trends, and more. However, before analyzing their data, you need to extract it.&lt;/p&gt;

&lt;p&gt;Today, we’ll build an eBay web scraper using Node.js and Cheerio and show you the step-by-step process behind it – from idea to execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is It Legal to Scrape eBay?
&lt;/h2&gt;

&lt;p&gt;Yes, scraping eBay is totally legal if you’re not accessing data behind login walls or personal data without consent.&lt;/p&gt;

&lt;p&gt;There’s still a lot of discussion around the legality of web scraping, and the fact that there are so many conflicting interests makes it hard to find reliable information. However, as it is today, it all depends on the type of data you’re scraping, how you’re extracting it, and its end-use.&lt;/p&gt;

&lt;p&gt;In this article and all our tutorials, we’ll only show you ethical web scraping processes, so you can be confident you can apply these strategies without repercussions.&lt;/p&gt;

&lt;p&gt;It’s important to know and understand the legal nuances of web scraping, so we built a complete guide to ensure legal and ethical web scraping practices.&lt;/p&gt;

&lt;p&gt;Now, let’s start coding, shall we?&lt;/p&gt;

&lt;h2&gt;
  
  
  Scrape eBay Product Data with Cheerio
&lt;/h2&gt;

&lt;p&gt;If you’ve been following our tutorial series, by now, we’ve gone through the basics of web scraping in JavaScript and built a more complex LinkedIn scraper using a for loop and the Network tab in Chrome’s DevTools.&lt;/p&gt;

&lt;p&gt;Note: You don’t need to read those first to understand this tutorial, but it might help to get a clearer picture of our thought process.&lt;/p&gt;

&lt;p&gt;To build on top of that, we’ll create an async function to scrape the name, price, and link of 4k TVs on eBay and then export the data into a CSV using the Object-to-CSV package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Understanding eBay’s URL Structure&lt;/strong&gt;&lt;br&gt;
Let’s go to &lt;a href="https://www.ebay.com/"&gt;https://www.ebay.com/&lt;/a&gt; and search for “4k smart tv” on the search bar to grab our initial URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4crOyZB7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngf5l4adjkox0th1rhou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4crOyZB7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngf5l4adjkox0th1rhou.png" alt="Image description" width="880" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It sends us to the following URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_trksid=p2380057.m570.l1312&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0"&gt;https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_trksid=p2380057.m570.l1312&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we were to scrape just this page, we could stop at this point and start writing our code. However, we want to see how it changes when moving through the pagination to understand how we can tell our script to do the same.&lt;/p&gt;

&lt;p&gt;At first glance, it seems like the _sacat parameter stores the page number, so changing it would be enough. But because eBay uses numbered pagination, we can simply click the “Next” button and see how the URL actually changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ot5NQODE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/us3d5f2jicqj7x3kr13e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ot5NQODE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/us3d5f2jicqj7x3kr13e.png" alt="Image description" width="880" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the resulting URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=2"&gt;https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is quite different from what we had before. So let’s go back and see if it returns to the previous version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=1"&gt;https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No, it keeps the “new” version of the URL when we use the pagination. This is great! All we need to do is change the _pgn parameter, and it will move to the next page. We confirmed this by changing the number directly in the address bar.&lt;/p&gt;

&lt;p&gt;Awesome, we’ll use this new version as our base URL for the HTTP request and later on to allow us to scrape every page in the series.&lt;/p&gt;
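&lt;p&gt;To illustrate the idea, here’s a small sketch (not part of the final scraper) that builds the URL for any page in the series by setting the _pgn parameter with Node’s built-in URL class:&lt;/p&gt;

```javascript
// Builds an eBay search-results URL for a given keyword and page number
// by setting the _pgn query parameter on the base search endpoint.
function buildPageUrl(keywords, page) {
  const url = new URL('https://www.ebay.com/sch/i.html');
  url.searchParams.set('_nkw', keywords);     // the search keywords
  url.searchParams.set('_sacat', '0');        // category (0 = all categories)
  url.searchParams.set('_pgn', String(page)); // page number in the series
  return url.toString();
}

console.log(buildPageUrl('4k smart tv', 2));
```

&lt;p&gt;Inside a scraping loop, you’d call this once per page instead of hard-coding each URL by hand.&lt;/p&gt;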

&lt;p&gt;&lt;strong&gt;2. Testing for JavaScript&lt;/strong&gt;&lt;br&gt;
Now that we have a consistent URL structure to work with, we need to test the website’s source code and make sure the data we want is available in the HTML and not injected through JavaScript – which would change our whole approach.&lt;/p&gt;

&lt;p&gt;Of course, we already told you we were using Cheerio, so you know we’re not dealing with JavaScript, but here’s how you can test this for any website in the future.&lt;/p&gt;

&lt;p&gt;Go to the page you want to scrape, right-click, and select “View Page Source”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r_RWzsBT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ge0ahl34t7uvdifycshb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r_RWzsBT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ge0ahl34t7uvdifycshb.png" alt="Image description" width="880" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will show you the site’s source code before any AJAX injection. We’ll copy the name and look for it in the Page Source for the test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TKSchS7y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0mwpfuacfo21h745lq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TKSchS7y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0mwpfuacfo21h745lq8.png" alt="Image description" width="880" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And next is the price.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EM_m_AkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zy3dk9qqroau5ri618aa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EM_m_AkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zy3dk9qqroau5ri618aa.png" alt="Image description" width="880" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We did the same with a few other products just to be sure, and we could find the element every time.&lt;/p&gt;

&lt;p&gt;This step will tell us if we can go ahead and access the data using Axios and Cheerio or if we’ll need to use a tool like ScraperAPI’s JavaScript rendering to load the JS or a headless browser like Puppeteer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Sending Our HTTP Request with Axios&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fun part begins! Let’s create a new folder for our project, open it in VS Code (or your favorite editor), and start it with npm init -y to create the initial package.json file. From there, we can install Axios, a great and simple tool for sending HTTP requests with Node.js, with npm install axios.&lt;/p&gt;

&lt;p&gt;To send and test the request, let’s create a new file called index.js (original, we know), require Axios at the top, and create an async function. Inside it, we’ll send our request using Axios and store the response’s data inside a variable called html for clarity.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const axios = require('axios');

(async function () {
  const response = await axios('https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=1');
  const html = await response.data;
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because we’re using async, we now have access to the await operator which is “used to wait for a Promise,” making it a great tool for web scraping, as our code will be more resilient.&lt;/p&gt;

&lt;p&gt;Let’s console.log() the html variable to verify that our request is working:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;console.log(html);&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k_y_rDpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81lbrpn5bjqtdec0zsr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k_y_rDpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81lbrpn5bjqtdec0zsr2.png" alt="Image description" width="880" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ah yes! A bunch of nonsense, as expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Parsing the Raw HTML with Cheerio&lt;/strong&gt;&lt;br&gt;
Before extracting our elements, we need to parse the raw data we downloaded to give it a structure we can navigate. That’s where Cheerio comes in!&lt;/p&gt;

&lt;p&gt;We’ll create a new variable and pass html to cheerio using cheerio.load():&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const $ = cheerio.load(html);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;From there, let’s head back to the website to find our selectors.&lt;/p&gt;

&lt;p&gt;Note: Don’t forget to install Cheerio with npm install cheerio before using it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Picking the Right Selectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first thing to find is the element that contains all the data we’re after. Every product seems to be contained within a card, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NRiskMM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojfhywzovrah9u35zecq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NRiskMM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojfhywzovrah9u35zecq.png" alt="Image description" width="880" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We want to grab the element that contains all cards so we can then iterate through the list and extract the information we want (name, price, and URL – to refresh your memory).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WJ7l2Ub8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fws1ozgfgh0oo071u6mh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WJ7l2Ub8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fws1ozgfgh0oo071u6mh.png" alt="Image description" width="880" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This ul element wraps all product cards, so it is a great starting point to explore the HTML structure a little bit more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BgkgTRV5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6vxmr2bcmy9g3915imw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BgkgTRV5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6vxmr2bcmy9g3915imw.png" alt="Image description" width="880" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like we thought, every product card is a li element within the ul. All we need to do is grab all the li elements and assign them to a variable, effectively creating an array we can then go through to extract the data.&lt;/p&gt;

&lt;p&gt;For testing, let’s open the browser’s console and use the li element’s class and see what gets returned:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hoGskuB0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gw9vztytzy822cwi3wez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hoGskuB0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gw9vztytzy822cwi3wez.png" alt="Image description" width="880" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unlike Python’s Scrapy, Cheerio doesn’t have a built-in shell for testing, but we can use the console to test the selectors without having to send a request every time. We did the same thing with the rest of the elements.&lt;/p&gt;

&lt;p&gt;For the name, we’ll pick the only h3 tag inside each element of the list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4Q0YUuYb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndlesjt2o9o2upt8ftzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4Q0YUuYb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndlesjt2o9o2upt8ftzl.png" alt="Image description" width="880" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the case of the price, it’s wrapped within a span element with the class “s-item__price.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0Yg6ZtOB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkhei49fb7885w7if4bz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0Yg6ZtOB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkhei49fb7885w7if4bz.png" alt="Image description" width="880" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, for the URL we needed to do something a little different. Although the a tag had a class we could use, it was shared by other elements outside our list. Notice how it returned 64 nodes instead of 59, which is the correct number of li elements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eoezkaK_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoxsl2koc44sw8gtrpdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eoezkaK_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoxsl2koc44sw8gtrpdu.png" alt="Image description" width="880" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Technically it would still work for us because we’ll be telling our scraper to look for the element inside the list. However, just to be sure, we’ll be going up in the hierarchy and grabbing the div containing the URL and then moving down to the a tag itself like this: 'div.s-item__info.clearfix &amp;gt; a'. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---Vo3wZAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0olp2nft3r82qmtdih2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---Vo3wZAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0olp2nft3r82qmtdih2s.png" alt="Image description" width="880" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Extracting eBay Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So the logic would be like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Pick all li elements

// Go through each element within the list and extract the:

// tvName, tvPrice, tvLink
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s put it all together now, as we already know the selectors:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const tvs = $('li.s-item.s-item__pl-on-bottom.s-item--watch-at-corner');

tvs.each((index, element) =&amp;gt; {

   const tvName = $(element).find('h3')

   const tvPrice = $(element).find('span.s-item__price')

   const tvLink = $(element).find('div.s-item__info.clearfix &amp;gt; a')

})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;However, we’re not done yet. We need to chain one more method onto each of these statements; otherwise, we’d get back the whole Cheerio object instead of the data we want.&lt;/p&gt;

&lt;p&gt;We want the text inside the element for the name and price, so all we need to do is add the text() method at the end. For the URL, we want the value stored inside the href attribute, so we use the attr() method and pass the attribute we want the value from.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const tvs = $('li.s-item.s-item__pl-on-bottom.s-item--watch-at-corner');

tvs.each((index, element) =&amp;gt; {

   const tvName = $(element).find('h3').text()

   const tvPrice = $(element).find('span.s-item__price').text()

   const tvLink = $(element).find('div.s-item__info.clearfix &amp;gt; a').attr('href')

})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We could log each variable to the console but we would be getting a lot of messy data. Instead, let’s give it some structure before testing the scraper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Pushing the Extracted Data to an Empty Array&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is actually quite an easy process that will help us organize the data and make it ready to export.&lt;/p&gt;

&lt;p&gt;First, we’ll create an empty array outside our function.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const scrapedTVs = [];&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;From inside the function, we can call this variable and use the push() method to add all elements to our empty array. We’ll add the following snippet of code inside tvs.each(), right after the tvLink variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  scrapedTVs.push({

       'productName': tvName,

       'productPrice': tvPrice,

       'productURL': tvLink,

   })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
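&lt;p&gt;As a standalone illustration of the accumulation step (with made-up values standing in for what Cheerio would extract), the array fills up one object per product:&lt;/p&gt;

```javascript
// Each iteration pushes one plain object into the array; by the end,
// scrapedTVs holds one entry per product card.
const scrapedTVs = [];

// Hypothetical extracted values standing in for Cheerio's output:
const extracted = [
  { name: 'TV A', price: '$299.99', link: 'https://www.ebay.com/itm/1' },
  { name: 'TV B', price: '$549.00', link: 'https://www.ebay.com/itm/2' },
];

extracted.forEach(({ name, price, link }) => {
  scrapedTVs.push({
    productName: name,
    productPrice: price,
    productURL: link,
  });
});

console.log(scrapedTVs.length); // 2
```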

&lt;p&gt;Run the test with a &lt;code&gt;console.log(scrapedTVs)&lt;/code&gt; and see what we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a0YZlyUf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzk7lti629lauv8h15sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a0YZlyUf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzk7lti629lauv8h15sb.png" alt="Image description" width="880" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nothing can beat the feeling of our code working! Our data is structured and clean. In perfect shape to be exported.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Exporting Our Data to a CSV&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Exporting data into a CSV is made super simple with the ObjectsToCsv package. Just npm i objects-to-csv and add it to the dependencies at the top.&lt;/p&gt;

&lt;p&gt;const ObjectsToCsv = require('objects-to-csv');&lt;/p&gt;

&lt;p&gt;ObjectsToCsv has an easy syntax:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const csv = new ObjectsToCsv(scrapedTVs)

await csv.toDisk('./test.csv', { append: true })

console.log("Saved to CSV")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To initiate the export, we create a new ObjectsToCsv() instance and pass it our dataset. Then we await the promise to resolve, saving the result into a CSV file at the path we give it. We’re also setting append to true (it’s false by default) because we’re going to be adding more data to the file from each page of the pagination.&lt;/p&gt;
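&lt;p&gt;If you’re curious what the export boils down to, here’s a simplified sketch of the idea (not the package’s actual implementation): a header row built from the object keys, then one comma-separated line per object.&lt;/p&gt;

```javascript
// Simplified stand-in for objects-to-csv: quote every value, join
// values with commas, and prepend a header row built from the keys.
const rows = [
  { productName: 'TV A', productPrice: '$299.99', productURL: 'https://www.ebay.com/itm/1' },
  { productName: 'TV B', productPrice: '$549.00', productURL: 'https://www.ebay.com/itm/2' },
];

// Doubling embedded quotes keeps values with commas or quotes valid CSV.
const quote = (value) => '"' + String(value).replace(/"/g, '""') + '"';
const header = Object.keys(rows[0]).map(quote).join(',');
const lines = rows.map((row) => Object.values(row).map(quote).join(','));
const csv = [header, ...lines].join('\n');

console.log(csv);
```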

&lt;p&gt;For testing, we’ll log “Saved to CSV” to the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qatvDiuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4oqpkzhfnvz0j5jfw21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qatvDiuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4oqpkzhfnvz0j5jfw21.png" alt="Image description" width="880" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Scrape eBay’s Pagination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We already know we can scrape all pages inside the pagination by changing the _pgn parameter in the URL. So for this project, we can implement a for loop that changes this number after every iteration.&lt;/p&gt;

&lt;p&gt;But we also need to know when to stop. Let’s head back to the website and see how many pages the pagination has.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7mzqkORO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/acogvedfh1llnqfegmt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7mzqkORO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/acogvedfh1llnqfegmt6.png" alt="Image description" width="880" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It caps at 21. However, if we set the parameter to 22, the site still responds with a page, but it loads the last page of the series – in other words, page 21.&lt;/p&gt;

&lt;p&gt;We now can write the three statements for the for loop and put everything inside of it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let pageNumber = 1 //sets the initial state

pageNumber &amp;lt;= 21 //the loop runs as long as pageNumber is less than or equal to 21

pageNumber += 1 //after each run, pageNumber increments by 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here’s how it should look:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (let pageNumber = 1; pageNumber &amp;lt;= 21; pageNumber += 1) {

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we put all our previous code inside this for loop (which is inside our async function), it’ll keep running until the condition fails and the loop exits. Still, there are two changes we need to make before we call it a day.&lt;/p&gt;

&lt;p&gt;First, we need to add the pageNumber variable inside the URL, which can be done with ${} inside a template literal – a string surrounded by backticks (`). Like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=${pageNumber}&lt;/code&gt;&lt;/p&gt;
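&lt;p&gt;A quick standalone sketch of what the loop produces (only the changing part of the URL is shown here; the other query parameters are omitted for brevity):&lt;/p&gt;

```javascript
// Loops pageNumber from 1 through 21 (the loop stops once it reaches
// 22); the template literal fills in ${pageNumber} on every pass.
const urls = [];
for (let pageNumber = 1; pageNumber !== 22; pageNumber += 1) {
  urls.push(`https://www.ebay.com/sch/i.html?_pgn=${pageNumber}`);
}

console.log(urls.length); // 21
console.log(urls[0]);     // https://www.ebay.com/sch/i.html?_pgn=1
```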

&lt;p&gt;The second change we’ll want to make is sending our request through ScraperAPI servers to handle IP rotation and headers automatically. To do so, we’ll need to create a free ScraperAPI account. It’ll provide us with an API key and the string we’ll need to add to the URL for it to work:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://api.scraperapi.com?api_key=51e43be283e4db2a5afb62660xxxxxxx&amp;amp;url=https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=${pageNumber}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This way we can avoid any kind of anti-scraping mechanism that could block our script.&lt;/p&gt;
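&lt;p&gt;One detail worth noting: because the eBay URL carries its own query string, it’s safer to URL-encode it before appending it as the url parameter, so the proxy doesn’t read _nkw, _sacat, and _pgn as its own parameters. A minimal sketch (buildProxiedUrl is a hypothetical helper, and YOUR_API_KEY is a placeholder):&lt;/p&gt;

```javascript
// buildProxiedUrl is a hypothetical helper; YOUR_API_KEY is a placeholder.
// URLSearchParams percent-encodes the target URL, so its own query
// string can't be mistaken for the proxy's parameters.
const buildProxiedUrl = (apiKey, targetUrl) => {
  const params = new URLSearchParams({ api_key: apiKey, url: targetUrl });
  return `http://api.scraperapi.com?${params.toString()}`;
};

const proxied = buildProxiedUrl(
  'YOUR_API_KEY',
  'https://www.ebay.com/sch/i.html?_pgn=1'
);

console.log(proxied);
```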

&lt;p&gt;&lt;strong&gt;10. eBay Web Scraper Finished Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s the finished code ready to use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Dependencies

const axios = require('axios');

const cheerio = require('cheerio');

const ObjectsToCsv = require('objects-to-csv');

//Empty array

const scrapedTVs = [];

(async function () {

//The for loop will keep running until pageNumber is greater than 21

for (let pageNumber = 1; pageNumber &amp;lt;= 21; pageNumber += 1) {

   //Sends the request, store the data and parse it with Cheerio

   const response = await axios(`http://api.scraperapi.com?api_key=51e43be283e4db2a5afb62660xxxxxxx&amp;amp;url=https://www.ebay.com/sch/i.html?_from=R40&amp;amp;_nkw=4k+smart+tv&amp;amp;_sacat=0&amp;amp;_pgn=${pageNumber}`);

   const html = await response.data;

   const $ = cheerio.load(html);



   //Grabs all the li elements containing the product cards

   const tvs = $('li.s-item.s-item__pl-on-bottom.s-item--watch-at-corner');

  //Goes through every element inside tvs to grab the data we're looking for

   tvs.each((index, element) =&amp;gt; {

       const tvName = $(element).find('h3').text()

       const tvPrice = $(element).find('span.s-item__price').text()

       const tvLink = $(element).find('div.s-item__info.clearfix &amp;gt; a').attr('href')

       //Pushes all the extracted data to our empty array

       scrapedTVs.push({

           'productName': tvName,

           'productPrice': tvPrice,

           'productURL': tvLink,

       })

   });

   //Saves the data into a CSV file

   const csv = new ObjectsToCsv(scrapedTVs)

   await csv.toDisk('./scrapedTVs.csv', { append: true })

   //If everything goes well, it logs a message to the console

   console.log('Saved to CSV')
}

})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: Keep in mind that you’ll need to replace the API key in the code with your own key for it to work.&lt;/p&gt;

&lt;p&gt;Great work! You now have a fast and effective eBay web scraper ready to be deployed.&lt;/p&gt;

&lt;p&gt;To make it even more powerful, you could use the _nkw parameter inside the URL to make it easy to plug in a new search term, and with ScraperAPI by your side, you won’t have to worry about your IP getting blocked. However, that’s something we’ll leave to your imagination.&lt;/p&gt;
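&lt;p&gt;As a starting point (buildSearchUrl is a hypothetical helper, not part of the tutorial’s code), wrapping the URL construction in a small function makes swapping search terms trivial:&lt;/p&gt;

```javascript
// buildSearchUrl is a hypothetical helper: it builds a results URL
// for any keyword and page number. URLSearchParams turns spaces into
// "+", the separator eBay itself uses in its URLs.
const buildSearchUrl = (keywords, pageNumber) => {
  const params = new URLSearchParams({
    _from: 'R40',
    _nkw: keywords,
    _sacat: '0',
    _pgn: String(pageNumber),
  });
  return `https://www.ebay.com/sch/i.html?${params.toString()}`;
};

console.log(buildSearchUrl('gaming laptop', 1));
```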

</description>
      <category>javascript</category>
      <category>saas</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>XPath Cheat Sheet for Web Scraping: Full Guide and XPath Examples</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Mon, 25 Apr 2022 22:22:54 +0000</pubDate>
      <link>https://dev.to/zoltan/xpath-cheat-sheet-for-web-scraping-full-guide-and-xpath-examples-316b</link>
      <guid>https://dev.to/zoltan/xpath-cheat-sheet-for-web-scraping-full-guide-and-xpath-examples-316b</guid>
      <description>&lt;p&gt;Originally published on &lt;a href="https://www.scraperapi.com/blog/xpath-cheat-sheet/" rel="noopener noreferrer"&gt;ScraperAPI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;XML Path Language (XPath) is a query language and a major element of the XSLT standard. It uses a path-like syntax (called path expressions) to identify and navigate nodes in an XML and XML-like document.&lt;/p&gt;

&lt;p&gt;In web scraping, we can take advantage of XPath to find and select elements from the DOM tree of virtually any HTML document, allowing us to create more powerful parsers in our scripts.&lt;/p&gt;

&lt;p&gt;By the end of this guide, you’ll have a solid grasp of XPath expressions and how to use them in your scripts to scrape complex websites.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding XPath Syntax
&lt;/h2&gt;

&lt;p&gt;Writing XPath expressions is quite simple because it uses a structure we are all well versed in. You can imagine these path expressions like the ones we use in standard file systems.&lt;/p&gt;

&lt;p&gt;There’s a root folder, and inside it there are several directories, which can themselves contain more folders. XPath uses the relationships between these elements to traverse the tree and find the elements we’re targeting.&lt;/p&gt;

&lt;p&gt;For example, we can use the expression //div to select all the div elements or write //div/p to target all paragraphs inside the divs. We can do this because of the nesting nature of HTML.&lt;/p&gt;
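&lt;p&gt;To make the file-system analogy concrete, here are a few sibling expressions side by side:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//div      selects every div anywhere in the document

//div/p    selects every p that is a direct child of a div

//div//p   selects every p nested anywhere inside a div, at any depth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;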

&lt;p&gt;&lt;strong&gt;Using XPath to Find Elements With Chrome Dev Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s use an example to paint a clearer picture. Navigate to &lt;a href="https://quotes.toscrape.com/" rel="noopener noreferrer"&gt;https://quotes.toscrape.com/&lt;/a&gt; and inspect the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33e0931ktafb1mrmxo4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33e0931ktafb1mrmxo4n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we’ll be able to see the HTML of the website and pick an element using our XPath expressions. If we want to scrape all the quotes displayed on the page, all we need to do is press cmd + f (or ctrl + f on Windows) to initiate a search and write our expression.&lt;/p&gt;

&lt;p&gt;Note: This is a great exercise to test your expressions before spending time on your code editor and without putting any stress on the site’s server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88juc2w4ff1xinxv9smx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88juc2w4ff1xinxv9smx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we take a closer look, we can see that all quotes are wrapped inside a div with the class quote, with the text itself inside a span element with the class text, so let’s follow that structure to write our path:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsquu8p5xjxrij5w5mw8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsquu8p5xjxrij5w5mw8o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;XPath: //div[@class='quote']/span[@class='text']&lt;/p&gt;

&lt;p&gt;It highlighted the first element that matches our search and also tells us that it’s the first of 10 elements, which perfectly matches the number of quotes on the page.&lt;/p&gt;

&lt;p&gt;Note: It would also work fine with //span[@class='text'] because there’s only one span using that class. Still, we want to be as descriptive as possible because, in most cases, we’ll be using XPath on websites with a messier structure.&lt;/p&gt;

&lt;p&gt;Did you notice we’re using the elements’ attributes to locate them? XPath allows us to move in any direction and almost any way through the node tree. We can target classes, IDs, and the relationship between elements.&lt;/p&gt;

&lt;p&gt;For the previous example, we can write our path like this: //div[@class='quote']/span[1] and still locate the element. This last expression translates into finding all the divs with the class quote and picking the first span element inside each.&lt;/p&gt;

&lt;p&gt;Now, to summarize everything we’ve learned so far, here’s the structure of the XPath syntax:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Tagname – which is the name of the HTML element itself. Think of divs, H1s, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attribute – which can be IDs, classes, and any other property of the HTML element we’re trying to locate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Value – which is the value stored in the attribute of the HTML element.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re still having a hard time with this syntax, a great place to start is understanding what data parsing is and how it works. In that article, we go deeper into the DOM and its structure, which in turn will make everything about XPath click.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use XPath vs. CSS for Web Scraping
&lt;/h2&gt;

&lt;p&gt;If you’ve read any of our Beautiful Soup tutorials or Cheerio guides, you’ve noticed by now that we tend to use CSS in pretty much every project. However, that’s mostly due to practicality.&lt;/p&gt;

&lt;p&gt;In real-life projects, things will be a little more complicated, and understanding both will provide you with more tools to face any challenge.&lt;/p&gt;

&lt;p&gt;So let’s talk about the differences between XPath and CSS selectors to understand when you should use one over the other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of CSS for Web Scraping&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CSS selectors tend to be easier to write and read than XPath selectors, making them more beginner-friendly for both learning and implementation.&lt;/p&gt;

&lt;p&gt;For comparison, here’s how we would select a paragraph with the class easy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;XPath: //p[@class='easy']&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CSS: p.easy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another thing to keep in mind is that when working with a website structured with unique IDs and very distinct classes, CSS is the best bet because picking elements based on CSS selectors is more reliable.&lt;/p&gt;

&lt;p&gt;One change to the DOM and our XPath will break, making our script very brittle. But classes and IDs rarely change, so you’ll be able to pick up the element even if its position gets altered.&lt;/p&gt;

&lt;p&gt;Although this might be an opinionated take, we consider CSS our first option for a project, and we only move to XPath if we can’t find an efficient way to use CSS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended:&lt;/strong&gt; &lt;a href="https://www.scraperapi.com/blog/css-selectors-cheat-sheet/" rel="noopener noreferrer"&gt;The Ultimate CSS Selectors Cheat Sheet for Web Scraping&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of XPath for Web Scraping&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike CSS, XPath can traverse the DOM tree up and down, giving you more flexibility when working with less structured websites. This opens many opportunities to interact with the DOM that CSS doesn’t.&lt;/p&gt;

&lt;p&gt;An easy example: imagine you need to pick a specific parent div from a document with 15 different divs that have no class, ID, or any other attribute. We wouldn’t be able to use CSS effectively because there are no good targets to hook onto.&lt;/p&gt;

&lt;p&gt;However, with XPath, we can target a child element of the div we need to select and go up from there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v5tt1ew8s19n6nrvlaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v5tt1ew8s19n6nrvlaj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With //span[@class='text']/.. we’re writing a path that finds the span element with the class text and then moves to that span’s parent element, effectively moving up the DOM.&lt;/p&gt;

&lt;p&gt;Another great use of XPath selectors/expressions is finding an element by matching its text – something that can’t be done with CSS – using the contains() function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6tpldk24h5e97xy7w26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6tpldk24h5e97xy7w26.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XPath:&lt;/strong&gt; //*[contains(text(), 'world')]&lt;/p&gt;

&lt;p&gt;In the example above, our XPath expression matches two elements because both the quote and the tag have the word “world” in them. Although we probably won’t use this function too often in web scraping, it’s a great tool in niche situations.&lt;/p&gt;

&lt;p&gt;If you want to learn more about the differences between these two, we recommend Exadel’s guide on picking selectors for automation.&lt;/p&gt;

&lt;p&gt;Although it’s not directly related to web scraping, there’s a lot of value in learning about automation concepts.&lt;/p&gt;

&lt;p&gt;XPath is a powerful language needed in many cases so let’s check some common expressions you can use while web scraping.&lt;/p&gt;

&lt;h2&gt;
  
  
  XPath Cheat Sheet: Common Expressions for Web Scraping
&lt;/h2&gt;

&lt;p&gt;So if you’re into web scraping, here’s a quick cheat sheet you can use in your daily work. Save it to your bookmarks, and enjoy!&lt;/p&gt;

&lt;p&gt;Note: You can test every expression in the “example” column on Quotes to Scrape for extra clarity and see what you’re selecting. Except for the ID example because the website doesn’t use IDs, lol.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3hxl3v39zabnnp8bnb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3hxl3v39zabnnp8bnb6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are probably the most common XPath expressions you’ll be using to select elements from an HTML document. However, these are not the only ones, and we encourage you to keep learning.&lt;/p&gt;

&lt;p&gt;For example, you can also select elements that don’t contain certain text by using the expression //tagName[not(contains(text(), "someText"))]. This could come in handy if the website adds some text to elements depending on a variable, like adding “out of stock” to product titles inside a category page.&lt;/p&gt;

&lt;p&gt;We can also use OR logic when working with a class that changes depending on a variable, using //tagName[@class="class1" or @class="class2"], telling our scraper to select an element that has one class or the other.&lt;/p&gt;

&lt;p&gt;In a previous entry, we were scraping some stock data, but the “price percentage change” class name changed depending on whether the price was increasing or decreasing. Because the change was consistent, we could easily implement the OR logic with XPath and make our scraper extract the value no matter which class the element was using.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7d0gj24nikfl5ehhk3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7d0gj24nikfl5ehhk3u.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XPath:&lt;/strong&gt; //span[@class="instrument-price_change-percent__19cas ml-2.5 text-positive-main" or @class="instrument-price_change-percent__19cas ml-2.5 text-negative-main"]&lt;/p&gt;

&lt;h2&gt;
  
  
  XPath Web Scraper Example
&lt;/h2&gt;

&lt;p&gt;Before you go, we want to share with you a script written in Puppeteer, so you can see these XPath selectors in action.&lt;/p&gt;

&lt;p&gt;Note: Technologies like Cheerio or Beautiful Soup don’t work well with XPath – and in some cases don’t work with it at all – so we recommend using tools like Scrapy for Python or Puppeteer for JavaScript whenever you need XPath. These tools are more complicated to begin with, but you’ll be an expert in no time.&lt;/p&gt;

&lt;p&gt;Create a new folder called “xpathproject”, open it in VScode (or your preferred editor), initiate a new Node.js project using npm init -y, and install puppeteer inside – npm install puppeteer.&lt;/p&gt;

&lt;p&gt;Next, create a new file with whatever name you’d like (we named it index.js for simplicity) and paste the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const puppeteer = require('puppeteer');

const scrapedText = [];

(async () =&amp;gt; {

    const browser = await puppeteer.launch({headless: false});

    const page = await browser.newPage();

    await page.goto('https://quotes.toscrape.com/');

    await page.waitForXPath('//div[@class="quote"]/span[1]');

    let elements = await page.$x('//div[@class="quote"]/span[1]');

    const elementText = await page.evaluate((...elements) =&amp;gt; {
        return elements.map(el =&amp;gt; el.textContent);
    }, ...elements);

    scrapedText.push({
        'Results': elementText,
    });

    console.log(scrapedText);

    await browser.close();

})();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;If you don’t have experience with Puppeteer, then check our guide on building web scrapers using Cheerio and Puppeteer. However, if you read this script carefully you’ll see how descriptive it is.&lt;/p&gt;

&lt;p&gt;The best part is that you can take any expression from the XPath cheat sheet table, swap it into the script, and it’ll pull the text of the elements it finds.&lt;/p&gt;

&lt;p&gt;It’s important to note that this web scraper is built to pull the text from multiple elements, so it might not work for grabbing a single element like the page title.&lt;/p&gt;

&lt;p&gt;Try different combinations and play a little bit with the script. You’ll soon get the hang of XPath.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>webscraping</category>
    </item>
    <item>
      <title>Cheerio Vs Puppeteer for Web Scraping: Picking the Best Tool for Your Project</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Fri, 15 Apr 2022 22:15:40 +0000</pubDate>
      <link>https://dev.to/zoltan/cheerio-vs-puppeteer-for-web-scraping-picking-the-best-tool-for-your-project-4dkl</link>
      <guid>https://dev.to/zoltan/cheerio-vs-puppeteer-for-web-scraping-picking-the-best-tool-for-your-project-4dkl</guid>
      <description>&lt;p&gt;This post was originally featured on &lt;strong&gt;&lt;a href="https://www.scraperapi.com/blog/cheerio-vs-puppeteer/" rel="noopener noreferrer"&gt;ScraperAPI&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cheerio vs Puppeteer: Differences and When to Use Them
&lt;/h2&gt;

&lt;p&gt;Cheerio and Puppeteer are both libraries made for Node.js (a backend runtime environment for Javascript) that can be used for scraping the web. However, they have major differences that you need to consider before picking a tool for your project.&lt;/p&gt;

&lt;p&gt;Before moving into the details for each library, here’s an overview comparison between Cheerio and Puppeteer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cheerio vs Puppeteer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cheerio was built with web scraping in mind.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Puppeteer was designed for browser automation and testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cheerio is a DOM parser, able to parse HTML and XML files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Puppeteer can execute Javascript, making it able to scrape dynamic pages like single-page applications (SPAs).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cheerio can’t interact with the site or access content behind scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Puppeteer can interact with websites, accessing content behind login forms and scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cheerio has an easy learning curve thanks to its simple syntax. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Puppeteer has a steep learning curve as it has more functionalities and requires Async for better results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cheerio is lightning fast in comparison to Puppeteer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compared to Cheerio, Puppeteer is quite slow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cheerio makes extracting data super simple, using a jQuery-like syntax and CSS selectors to navigate the DOM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Puppeteer can take screenshots, submit forms and make PDFs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you have the big picture, let’s dive deeper into what each library has to offer and how you can use them to extract alternative data from the web.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Cheerio?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cheerio is a Node.js framework that parses raw HTML and XML data and provides a consistent DOM model to help us traverse and manipulate the resulting data structure. To select elements, we can use CSS selectors, making navigating the DOM easier.&lt;/p&gt;

&lt;p&gt;Above all, Cheerio is well known for its speed. Because Cheerio doesn’t render the website like a browser (it doesn’t apply CSS or load external resources), it is lightweight and fast. Although we won’t notice it in small projects, in large scraping tasks it becomes a big time saver.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Puppeteer?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, Puppeteer is actually a browser automation tool, designed to mimic users’ behavior to test websites and web applications. It “provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol.”&lt;/p&gt;

&lt;p&gt;In web scraping, Puppeteer gives our script all the power of a browser engine, allowing us to scrape pages that require Javascript execution (like SPAs), scrape infinite scrolling, dynamic content, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should You Use Cheerio or Puppeteer for Web Scraping?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although you might already have an idea of the best scenarios, let us take all doubts out of the way. If you want to scrape static pages that don’t require any interactions like clicks, JavaScript rendering, or submitting forms, Cheerio is the best option. If the website uses any form of JavaScript to inject new content, however, you’ll need to use Puppeteer.&lt;/p&gt;

&lt;p&gt;The reasoning behind our recommendation is that Puppeteer is just overkill for static websites. Cheerio will help you scrape more pages faster and in fewer lines of code.&lt;/p&gt;

&lt;p&gt;That said, there are multiple cases where using both libraries is actually the best solution. After all, Cheerio can make it easier to parse and select elements, while Puppeteer would give you access to content behind scripts and help you automate events like scrolling down for infinite paginations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building a Scraper with Cheerio and Puppeteer [Code Example]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To make this example easy to follow, we’ll build a scraper using Puppeteer and Cheerio that’ll navigate to &lt;a href="https://quotes.toscrape.com/" rel="noopener noreferrer"&gt;https://quotes.toscrape.com/&lt;/a&gt; and bring back all quotes and authors from page 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0car6m4ijkpzuhej327.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0car6m4ijkpzuhej327.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Node.js, Cheerio, and Puppeteer&lt;/strong&gt;&lt;br&gt;
We’ll download Node.js from the official site and follow the instructions from the installer. Then, we’ll create a new project folder (we named it ‘cheerio-puppeteer-project’) and open it inside VScode – you can use any other editor you’d prefer. Inside your project folder, open a new terminal and type npm init -y to kickstart your project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsejy4ugt7rwhgqouuvz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsejy4ugt7rwhgqouuvz1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open the Target Website Using Puppeteer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we’re ready to install our dependencies using npm install cheerio puppeteer. After a few seconds, we should be ready to go. Create a new file named ‘index.js’ and import our dependencies at the top.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const puppeteer = require('puppeteer');

const cheerio = require('cheerio');
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Next, we’ll create an empty list named scraped_quotes to store all our results, followed by our async function so we have access to the await operator. Just so we don’t forget, we’ll write the browser.close() method at the end of our function.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scraped_quotes = [];

(async () =&amp;gt; {

    await browser.close();

})();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Using Puppeteer, let’s launch a new browser instance, open a new page and navigate to our target website.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const browser = await puppeteer.launch();

const page = await browser.newPage();

await page.goto('https://quotes.toscrape.com/');
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Parsing the HTML with Cheerio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To get access to the HTML of the website, we can use evaluate and return the raw HTML data – this is an important step because Cheerio can only work with HTML or XML data, so we need to access it before being able to parse it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const pageData = await page.evaluate(() =&amp;gt; {
    return {
        html: document.documentElement.innerHTML,
    };
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;For testing purposes, we can use console.log(pageData) to log the response to our terminal. Once we’ve confirmed it works, we’ll send the raw HTML to Cheerio for parsing.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const $ = cheerio.load(pageData.html);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now we can use $ to refer to the parsed version of the HTML file for the rest of our project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selecting Elements with Cheerio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we can actually write our code, we first need to find out how the page is structured. Let’s go to the page itself on our browser and inspect the cards containing the quotes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mnrfg57jw1wzn7c18re.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mnrfg57jw1wzn7c18re.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the elements we’re interested in are inside a div with the class quote. So we can select them and iterate through all of the divs to extract the quote text and the author.&lt;/p&gt;

&lt;p&gt;After inspecting these elements, here are our targets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Divs containing our target elements: $('div.quote')&lt;/li&gt;
&lt;li&gt;Quote text: $(element).find('span.text')&lt;/li&gt;
&lt;li&gt;Quote author: $(element).find('.author')&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s translate this into code: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let quote_cards = $('div.quote');

quote_cards.each((index, element) =&amp;gt; {
    const quote = $(element).find('span.text').text();
    const author = $(element).find('.author').text();
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Using the text() method, we can access the text inside the element instead of returning a string of HTML.&lt;/p&gt;
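&lt;p&gt;The same select-and-extract flow can be sketched with only Python’s standard library (sample quote text, not scraped data; xml.etree stands in for Cheerio here):&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

# Build one quote card programmatically, mirroring the structure of
# quotes.toscrape.com (sample text, not scraped data)
card = ET.Element('div', {'class': 'quote'})
ET.SubElement(card, 'span', {'class': 'text'}).text = 'Quality is not an act, it is a habit.'
ET.SubElement(card, 'small', {'class': 'author'}).text = 'Aristotle'

# The three selectors, translated to ElementTree's XPath subset
quote = card.find(".//span[@class='text']").text
author = card.find(".//*[@class='author']").text
print({'Quote': quote, 'By': author})
```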

&lt;p&gt;&lt;strong&gt;Pushing the Scraped Data Into a Formatted List&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we console.log() our data at this point, it will be a messy chunk of text. Instead, we’ll use the empty list we created outside our function and push the data over there. To do so, add these two new lines to your script, right after your author variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scraped_quotes.push({
    'Quote': quote,
    'By': author,
})
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Finished Code Example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that everything is in place, we can console.log(scraped_quotes) before closing the browser:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//dependencies
const puppeteer = require('puppeteer');
const cheerio = require('cheerio');

//empty list to store our data
const scraped_quotes = [];

//main function for our scraper
(async () =&amp;gt; {

    //launching the browser and opening a new page
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    //navigating to a URL
    await page.goto('https://quotes.toscrape.com/');

    //getting access to the raw HTML
    const pageData = await page.evaluate(() =&amp;gt; {
        return {
            html: document.documentElement.innerHTML,
        };
    });

    //parsing the HTML and picking our elements
    const $ = cheerio.load(pageData.html);
    let quote_cards = $('div.quote');
    quote_cards.each((index, element) =&amp;gt; {
        const quote = $(element).find('span.text').text();
        const author = $(element).find('.author').text();

        //pushing our data into a formatted list
        scraped_quotes.push({
            'Quote': quote,
            'By': author,
        });
    });

    //console logging the results
    console.log(scraped_quotes);

    //closing the browser
    await browser.close();

})();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Resulting in a formatted list of data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1wwok8s7wlg2z5347o7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1wwok8s7wlg2z5347o7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this quick overview of arguably the two best web scraping tools available for JavaScript/Node.js. Although in most cases you’ll want to use Cheerio over Puppeteer, for those extra-complex projects Puppeteer brings the extra tools you’ll need to get the job done.&lt;/p&gt;

</description>
      <category>saas</category>
      <category>javascript</category>
      <category>webscraping</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to Scrape Stock Market Data in Python [Practical Guide, Plus Code]</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Tue, 08 Mar 2022 23:25:04 +0000</pubDate>
      <link>https://dev.to/zoltan/how-to-scrape-stock-market-data-in-python-practical-guide-plus-code-1d2j</link>
      <guid>https://dev.to/zoltan/how-to-scrape-stock-market-data-in-python-practical-guide-plus-code-1d2j</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://www.scraperapi.com/blog/how-to-scrape-stock-market-data-with-python/"&gt;ScraperAPI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Whether you’re an investor tracking your portfolio, or an investment firm looking for a way to stay up-to-date more efficiently, creating a script to scrape stock market data can save you both time and energy.&lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll build a script to track multiple stock prices, organize them into an easy-to-read CSV file that will update itself with the push of a button, and collect hundreds of data points in a few seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Stock Market Scraper With Requests and Beautiful Soup
&lt;/h2&gt;

&lt;p&gt;For this exercise, we’ll be scraping investing.com to extract up-to-date stock prices for Microsoft, Coca-Cola, and Nike, and store them in a CSV file. We’ll also show you how to protect your bot from being blocked by anti-scraping mechanisms and techniques using ScraperAPI.&lt;/p&gt;

&lt;p&gt;Note: The script will scrape stock market data even without ScraperAPI, but ScraperAPI will be crucial for scaling your project later.&lt;/p&gt;

&lt;p&gt;Although we’ll be walking you through every step of the process, having some knowledge of the Beautiful Soup library beforehand is helpful. If you’re totally new to this library, check out our beautiful soup tutorial for beginners. It’s packed with tips and tricks, and goes over the basics you need to know to scrape almost anything.&lt;/p&gt;

&lt;p&gt;With that out of the way, let’s jump into the code so you can learn how to scrape stock market data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Setting Up Our Project&lt;/strong&gt;&lt;br&gt;
To begin, we’ll create a folder named “scraper-stock-project”, and open it from VScode (you can use any text editor you’d like). Next, we’ll open a new terminal and install our two main dependencies for this project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install bs4
pip3 install requests
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;After that, we’ll create a new file named “stockData-scraper.py” and import our dependencies to it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

from bs4 import BeautifulSoup
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With Requests, we’ll be able to send an HTTP request to download the HTML file which is then passed on to BeautifulSoup for parsing. So let’s test it by sending a request to Nike’s stock page:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url = 'https://www.investing.com/equities/nike'

page = requests.get(url)

print(page.status_code)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;By printing the status code of the page variable (which is our request), we’ll know for sure whether or not we can scrape the page. The code we’re looking for is a 200, meaning it was a successful request.&lt;/p&gt;
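&lt;p&gt;As a quick stdlib reference for what a few common status codes mean (using Python’s http.HTTPStatus):&lt;/p&gt;

```python
from http import HTTPStatus

# 200 means the request succeeded; 4xx/5xx codes mean the page
# can't (or won't) be served to us
for code in (200, 403, 404, 503):
    print(code, HTTPStatus(code).phrase)
```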

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P2kZmu3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i2z18uufiopgr280zf6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P2kZmu3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i2z18uufiopgr280zf6f.png" alt="Image description" width="880" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Success! Before moving on, we’ll pass the response stored in page to Beautiful Soup for parsing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;soup = BeautifulSoup(page.text, 'html.parser')
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;You can use any parser you want, but we’re going with html.parser because it’s the one we like.&lt;/p&gt;

&lt;p&gt;Related Resource: What is Data Parsing in Web Scraping? [Code Snippets Inside]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Inspect the Website’s HTML Structure (Investing.com)&lt;/strong&gt;&lt;br&gt;
Before we start scraping, let’s open &lt;a href="https://www.investing.com/equities/nike"&gt;https://www.investing.com/equities/nike&lt;/a&gt; in our browser to get more familiar with the website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TssEdoid--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlonf4czvwyn6n3lq7n2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TssEdoid--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlonf4czvwyn6n3lq7n2.png" alt="Image description" width="880" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the screenshot above, the page displays the company’s name, stock symbol, price, and price change. At this point, we have three questions to answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the data being injected with JavaScript?&lt;/li&gt;
&lt;li&gt;What attribute can we use to select the elements?&lt;/li&gt;
&lt;li&gt;Are these attributes consistent throughout all pages?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Check for JavaScript&lt;/strong&gt;&lt;br&gt;
There are several ways to verify whether a script is injecting a piece of data, but the easiest is to right-click the page and select View Page Source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jyw1kfRG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvyh1ux7mwpev1jzls7j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jyw1kfRG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvyh1ux7mwpev1jzls7j.jpg" alt="Image description" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PsA7hLDI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhllwtf8sfsz80g660k3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PsA7hLDI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhllwtf8sfsz80g660k3.jpg" alt="Image description" width="880" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It looks like there isn’t any JavaScript that could interfere with our scraper. Next, we’ll do the same for the rest of the information. Since we didn’t find any additional JavaScript, we’re good to go.&lt;/p&gt;

&lt;p&gt;Note: Checking for JavaScript is important because Requests can’t execute JavaScript or interact with the website, so if the information is behind a script, we would have to use other tools to extract it, like Selenium.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Picking the CSS Selectors&lt;/strong&gt;&lt;br&gt;
Now let’s inspect the HTML of the site to identify the attributes we can use to select the elements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w0ewmN32--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14dv6lzz7wxi5fw4hxmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w0ewmN32--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14dv6lzz7wxi5fw4hxmm.png" alt="Image description" width="880" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Extracting the company’s name and the stock symbol will be a breeze. We just need to target the H1 tag with class ‘text-2xl font-semibold instrument-header_title__GTWDv mobile:mb-2’.&lt;/p&gt;

&lt;p&gt;However, the price, price change, and percentage change are separated into different spans.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HRfXyTKQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9a3b6n4vg20uy42mxmab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HRfXyTKQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9a3b6n4vg20uy42mxmab.png" alt="Image description" width="880" height="666"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What’s more, depending on whether the change is positive or negative, the class of the element changes, so even if we select each span by using their class attribute, there will still be instances when it won’t work.&lt;/p&gt;

&lt;p&gt;The good news is that we have a little trick to get it out. Because Beautiful Soup returns a parsed tree, we can now navigate the tree and pick the element we want, even though we don’t have the exact CSS class.&lt;/p&gt;

&lt;p&gt;What we’ll do in this scenario is go up in the hierarchy and find a parent div we can exploit. Then we can use find_all(‘span’) to make a list of all the elements containing the span tag – which we know our target data uses. And because it’s a list, we can now easily navigate it and pick those we need.&lt;/p&gt;
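&lt;p&gt;Here’s a minimal stdlib-only sketch of that pick-by-position trick (hypothetical values; Python’s xml.etree stands in for Beautiful Soup):&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

# Build the parent wrapper programmatically (hypothetical values)
wrap = ET.Element('div', {'class': 'price-wrap'})
for value in ('116.74', '+1.54', '(+1.34%)'):
    ET.SubElement(wrap, 'span').text = value

# List every span under the parent, then pick by position: the span
# classes change with price direction, but the order stays the same
spans = wrap.findall('.//span')
price, change = spans[0].text, spans[2].text
print(price, change)  # 116.74 (+1.34%)
```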

&lt;p&gt;So here are our targets:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;company = soup.find('h1', {'class': 'text-2xl font-semibold instrument-header_title__GTWDv mobile:mb-2'}).text

price = soup.find('div', {'class': 'instrument-price_instrument-price__3uw25 flex items-end flex-wrap font-bold'}).find_all('span')[0].text

change = soup.find('div', {'class': 'instrument-price_instrument-price__3uw25 flex items-end flex-wrap font-bold'}).find_all('span')[2].text
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now for a test run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print('Loading: ', url)

print(company, price, change)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And here’s the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hHUlp37V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cms3fxmu16y8qvvmk2jt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hHUlp37V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cms3fxmu16y8qvvmk2jt.png" alt="Image description" width="880" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scrape Multiple Stocks&lt;/strong&gt;&lt;br&gt;
Now that our parser is working, let’s scale this up and scrape several stocks. After all, a script that tracks just one stock is likely not going to be very useful.&lt;/p&gt;

&lt;p&gt;We can make our scraper parse and scrape several pages by creating a list of URLs and looping through them to output the data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;urls = [
    'https://www.investing.com/equities/nike',
    'https://www.investing.com/equities/coca-cola-co',
    'https://www.investing.com/equities/microsoft-corp',
]

for url in urls:
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    company = soup.find('h1', {'class': 'text-2xl font-semibold instrument-header_title__GTWDv mobile:mb-2'}).text
    price = soup.find('div', {'class': 'instrument-price_instrument-price__3uw25 flex items-end flex-wrap font-bold'}).find_all('span')[0].text
    change = soup.find('div', {'class': 'instrument-price_instrument-price__3uw25 flex items-end flex-wrap font-bold'}).find_all('span')[2].text
    print('Loading: ', url)
    print(company, price, change)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Here’s the result after running it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_CCOd8hq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ag4iy8jdcpxs23j5jiwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_CCOd8hq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ag4iy8jdcpxs23j5jiwb.png" alt="Image description" width="880" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome, it works across the board!&lt;/p&gt;

&lt;p&gt;We can keep adding more and more pages to the list but eventually, we’ll hit a big roadblock: anti-scraping techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Integrating ScraperAPI to Handle IP Rotation and CAPTCHAs&lt;/strong&gt;&lt;br&gt;
Not every website likes to be scraped, and for good reason. When scraping a website, we need to keep in mind that we are sending traffic to it, and if we’re not careful, we could limit the bandwidth the website has for real visitors, or even increase hosting costs for the owner. That said, as long as we respect web scraping best practices, we won’t have any problems with our projects, and we won’t cause the sites we’re scraping any issues.&lt;/p&gt;

&lt;p&gt;However, it’s hard for businesses to differentiate between ethical scrapers and those that will break their sites. For this reason, most servers are equipped with different systems like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browser behavior profiling&lt;/li&gt;
&lt;li&gt;CAPTCHAs&lt;/li&gt;
&lt;li&gt;Monitoring the number of requests from an IP address in a time period&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These measures are designed to recognize bots and block them from accessing the website for days, weeks, or even forever.&lt;/p&gt;

&lt;p&gt;Instead of handling all of these scenarios individually, we’ll just add two lines of code to make our requests go through ScraperAPI’s servers and get everything automated for us.&lt;/p&gt;

&lt;p&gt;First, let’s create a free ScraperAPI account to access our API key and 5000 free API credits for our project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rgCe10vU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d51st6d1j1wjonhbqfl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rgCe10vU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d51st6d1j1wjonhbqfl3.png" alt="Image description" width="880" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we’re ready to add a new params variable to our loop to store our key and target URL, and to use urlencode to construct the URL we’ll send the request to inside the page variable.&lt;/p&gt;

&lt;p&gt;params = {'api_key': '51e43be283e4db2a5afb62660xxxxxxx', 'url': url}&lt;/p&gt;

&lt;p&gt;page = requests.get('&lt;a href="http://api.scraperapi.com/"&gt;http://api.scraperapi.com/&lt;/a&gt;', params=urlencode(params))&lt;/p&gt;

&lt;p&gt;Oh! And we can’t forget to add our new dependency to the top of the file:&lt;/p&gt;

&lt;p&gt;from urllib.parse import urlencode&lt;/p&gt;
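&lt;p&gt;To see what urlencode is doing for us, here’s a quick sketch (with a placeholder key standing in for a real one) of the query string it builds from the params dictionary:&lt;/p&gt;

```python
from urllib.parse import urlencode

# 'YOUR_API_KEY' is a placeholder for illustration; use your own ScraperAPI key.
params = {'api_key': 'YOUR_API_KEY', 'url': 'https://www.investing.com/equities/nike'}

query = urlencode(params)
print(query)
# api_key=YOUR_API_KEY&url=https%3A%2F%2Fwww.investing.com%2Fequities%2Fnike
```

&lt;p&gt;Note that the target URL gets percent-encoded, which is exactly why building the query string by hand is error-prone.&lt;/p&gt;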

&lt;p&gt;Every request will now be sent through ScraperAPI, which will automatically rotate our IP after every request, handle CAPTCHAs, and use machine learning and statistical analysis to set the best headers to ensure success.&lt;/p&gt;

&lt;p&gt;Quick Tip: ScraperAPI also allows us to scrape dynamic sites by setting 'render': 'true' in our params variable. ScraperAPI will then render the page before sending back the response. &lt;/p&gt;
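&lt;p&gt;As a sketch (again with a placeholder key), the params dictionary for a JavaScript-heavy page would look like this:&lt;/p&gt;

```python
# 'YOUR_API_KEY' is a placeholder; 'render': 'true' asks ScraperAPI to
# execute the page's JavaScript before returning the HTML.
url = 'https://www.investing.com/equities/nike'
params = {'api_key': 'YOUR_API_KEY', 'url': url, 'render': 'true'}
```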

&lt;p&gt;&lt;strong&gt;5. Store Data in a CSV File&lt;/strong&gt;&lt;br&gt;
To store your data in an easy-to-use CSV file, simply add these three lines between your URL list and your loop:&lt;/p&gt;

&lt;p&gt;file = open('stockprices.csv', 'w', newline='')&lt;/p&gt;

&lt;p&gt;writer = csv.writer(file)&lt;/p&gt;

&lt;p&gt;writer.writerow(['Company', 'Price', 'Change'])&lt;/p&gt;

&lt;p&gt;This will create a new CSV file and pass it to our writer (set in the writer variable) to add the first row with our headers.&lt;/p&gt;

&lt;p&gt;It’s essential to add these lines outside of the loop; otherwise, the file would be rewritten after scraping each page, erasing previous data and leaving a CSV file with only the data from the last URL in our list.&lt;/p&gt;

&lt;p&gt;In addition, we’ll need to add another line to our loop to write the scraped data:&lt;/p&gt;

&lt;p&gt;writer.writerow([company, price, change])&lt;/p&gt;

&lt;p&gt;And one more outside the loop to close the file:&lt;/p&gt;

&lt;p&gt;file.close()&lt;/p&gt;
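&lt;p&gt;Putting these three CSV pieces together, here’s the overall shape with stand-in rows in place of live scraped data (the company names and values below are made up for illustration), so you can verify the file handling on its own:&lt;/p&gt;

```python
import csv

# Stand-in rows playing the role of data scraped inside the URL loop.
rows = [['Nike', '110.00', '-0.53%'], ['Microsoft Corp', '280.00', '+1.20%']]

file = open('stockprices.csv', 'w', newline='')  # newline='' avoids blank lines on Windows
writer = csv.writer(file)
writer.writerow(['Company', 'Price', 'Change'])  # header row: written once, before the loop

for row in rows:  # in the real script, this is the loop over URLs
    writer.writerow(row)

file.close()
```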

&lt;p&gt;&lt;strong&gt;6. Finished Code: Stock Market Data Script&lt;/strong&gt;&lt;br&gt;
You’ve made it! You can now use this script with your own API key and add as many stocks as you want to scrape:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# dependencies
import requests
from bs4 import BeautifulSoup
import csv
from urllib.parse import urlencode

# list of URLs
urls = [
    'https://www.investing.com/equities/nike',
    'https://www.investing.com/equities/coca-cola-co',
    'https://www.investing.com/equities/microsoft-corp',
]

# starting our CSV file
file = open('stockprices.csv', 'w', newline='')
writer = csv.writer(file)
writer.writerow(['Company', 'Price', 'Change'])

# looping through our list
for url in urls:
    # sending our request through ScraperAPI
    params = {'api_key': '51e43be283e4db2a5afb62660xxxxxxx', 'url': url}
    page = requests.get('http://api.scraperapi.com/', params=urlencode(params))

    # our parser
    soup = BeautifulSoup(page.text, 'html.parser')
    company = soup.find('h1', {'class': 'text-2xl font-semibold instrument-header_title__GTWDv mobile:mb-2'}).text
    price = soup.find('div', {'class': 'instrument-price_instrument-price__3uw25 flex items-end flex-wrap font-bold'}).find_all('span')[0].text
    change = soup.find('div', {'class': 'instrument-price_instrument-price__3uw25 flex items-end flex-wrap font-bold'}).find_all('span')[2].text

    # printing to have some visual feedback
    print('Loading:', url)
    print(company, price, change)

    # writing the data into our CSV file
    writer.writerow([company, price, change])

file.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Wrapping Up: considerations when running your stock market data scraper
&lt;/h2&gt;

&lt;p&gt;You need to remember that the stock market isn’t always open. For example, the New York Stock Exchange closes at 4 pm ET on Friday and doesn’t open again until Monday at 9:30 am, so there’s no point in running your scraper over the weekend. Likewise, since regular trading ends at 4 pm each day, you won’t see any changes in the price after that.&lt;/p&gt;
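&lt;p&gt;A simple way to build that into your scheduling is a small guard function. This is only a rough sketch: it assumes the timestamp you pass in is already in Eastern Time and it ignores market holidays.&lt;/p&gt;

```python
from datetime import datetime, time

def nyse_likely_open(dt):
    # Regular NYSE hours: 9:30 am to 4:00 pm ET, Monday through Friday.
    # Sketch only: dt is assumed to be Eastern Time; holidays are ignored.
    return dt.weekday() < 5 and time(9, 30) <= dt.time() <= time(16, 0)

print(nyse_likely_open(datetime(2022, 8, 19, 10, 0)))  # a Friday morning -> True
print(nyse_likely_open(datetime(2022, 8, 20, 10, 0)))  # a Saturday -> False
```

&lt;p&gt;For anything serious you’d likely convert timestamps with zoneinfo's America/New_York zone and consult an exchange holiday calendar instead of hard-coding the hours.&lt;/p&gt;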

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_bFYNNHd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vcbfluomhzj0rkiyazq5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_bFYNNHd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vcbfluomhzj0rkiyazq5.jpg" alt="Image description" width="880" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another variable to keep in mind is how often you need to update the data. The most volatile times for the stock exchange are opening and closing times. So it might be enough to run your script at 9:30 am, at 11 am, and at 4:30 pm to see how the stocks closed. Monday’s opening is also crucial to monitor as many trades occur during this time.&lt;/p&gt;

&lt;p&gt;Unlike other markets like Forex, the stock market typically doesn’t make too many crazy swings. That said, news and business decisions can heavily impact stock prices – take the crash of Meta shares or the rise of the GameStop share price as examples – so reading the news related to the stocks you are scraping is vital.&lt;/p&gt;

&lt;p&gt;We hope this tutorial helped you build your own stock market data scraper, or at least pointed you in the right direction. In a future tutorial, we’ll build on top of this project to create a real-time stock data scraper to monitor your stocks, so stay tuned for that!&lt;/p&gt;

</description>
      <category>python</category>
    </item>
    <item>
      <title>10 Best Technical SEO Tools You Need for Your Next Audit</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Thu, 24 Feb 2022 01:09:42 +0000</pubDate>
      <link>https://dev.to/zoltan/10-best-technical-seo-tools-you-need-for-your-next-audit-55mn</link>
      <guid>https://dev.to/zoltan/10-best-technical-seo-tools-you-need-for-your-next-audit-55mn</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://prerender.io/blog/"&gt;Prerender&lt;/a&gt;.&lt;/em&gt;&lt;br&gt;
You want to do routine technical website audits for the same reason you take your car to see a mechanic – to make sure it’s working as it should.&lt;/p&gt;

&lt;p&gt;Technical SEO tools are like the wrenches and hammers the mechanic keeps in a toolbox – they help you identify any potential errors or issues with your website so that you can fix them before they cause long-term damage. Website audits can also help improve your SEO rankings and visibility. Websites that load quickly and are technically well-optimized tend to get higher search rankings and more organic traffic. &lt;/p&gt;

&lt;p&gt;In this blog post, we will discuss the 10 best technical SEO tools you need to get the most out of your next website audit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are Technical SEO Tools?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Technical SEO tools are specifically designed to help you audit your website for technical errors and issues. They can help you identify problems such as: &lt;/p&gt;

&lt;p&gt;Broken links&lt;br&gt;
Incorrect or missing titles and metadata&lt;br&gt;
Coding errors&lt;/p&gt;

&lt;p&gt;Fixing these issues helps you improve your site’s performance. Plus, these tools offer many benefits that you can use to revamp and optimize your SEO strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimize Your SEO Strategy With Technical SEO Tools
&lt;/h2&gt;

&lt;p&gt;SEO audit tools offer a wide range of benefits, including gaining insights into your website’s inner workings. Also, they help identify how your competitors are optimizing their websites so you can reverse-engineer their strategies.&lt;/p&gt;

&lt;p&gt;Technical SEO software can also be used for keyword and backlink research. You can take what you learn about your website’s keyword optimization and backlink profile, and use it to improve your website technically to make your on-page SEO and link-building efforts more effective. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save Time and Costs on Manual SEO Audits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Technical website tools can also assist you by saving time and hefty costs on manual &lt;a href="https://prerender.io/how-to-conduct-a-technical-seo-audit/"&gt;technical SEO audits&lt;/a&gt;. Getting a technical SEO specialist to audit your website and having your developers implement those recommendations can require a team of specialists working together, and specialists don’t come cheap.&lt;/p&gt;

&lt;p&gt;You can automate the auditing process, which will help speed up results. Many of these tools are free or offer a free trial, so you can test them out before committing to them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Find High-Converting Keywords for Content &amp;amp; Content Marketing
&lt;/h2&gt;

&lt;p&gt;With every SEO expert clamoring to rank for high-converting keywords, technical SEO tools can help you rank for them before they can. Technical SEO tools give you useful information like your competitor’s website performance, backlink profile, and keyword rankings. Ranking for these competitive keywords can get your content to generate more traffic. &lt;/p&gt;

&lt;h2&gt;
  
  
  Gauge SEO Progress &amp;amp; Track KPIs
&lt;/h2&gt;

&lt;p&gt;Most SEO audit tools generate automated reports that track your SEO progress over time, based on a set of practical key performance indicators (KPIs). This data can help you determine whether your website is running well and what still needs improvement. It can also help you compare your website to your competitors’ sites and make strategic adjustments to stay ahead of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Visualize &amp;amp; Conceptualize Data for Analysis
&lt;/h2&gt;

&lt;p&gt;Converting raw data into visual elements like charts and infographics can help you understand the data more easily and identify opportunities for improvement. After analyzing that data, you can identify and address any web issues and apply the information to enhance your strategies. &lt;/p&gt;

&lt;h2&gt;
  
  
  10 Best Technical SEO Tools
&lt;/h2&gt;

&lt;p&gt;To give you an extra edge in your website audits, here are 10 of the best technical SEO tools that we recommend having in your toolkit. These tools will help you identify opportunities, develop a more effective SEO strategy and further optimize your website for top performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wU2ioEb8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xdhics4pefkqi7bkpysf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wU2ioEb8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xdhics4pefkqi7bkpysf.png" alt="Image description" width="381" height="595"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Prerender&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://prerender.io/"&gt;Prerender&lt;/a&gt; boosts search engine rankings by prerendering JavaScript during the search engine crawling process. It improves your &lt;a href="https://prerender.io/crawl-budget-seo/"&gt;crawl budget optimization&lt;/a&gt; and amplifies your &lt;a href="https://prerender.io/google-pagespeed-insights/"&gt;page speed scores&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;A convenient feature of Prerender is it detects when Google is crawling your page and creates an easily-crawled version of your website. It also renders JavaScript websites built with a wide range of popular frameworks and libraries like &lt;a href="https://prerender.io/angular/"&gt;Angular&lt;/a&gt;, &lt;a href="https://prerender.io/react/"&gt;React&lt;/a&gt;, and &lt;a href="https://prerender.io/vue-js/"&gt;Vue.JS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It also detects whether it’s a bot or human user who is accessing your site so it can provide the &lt;a href="https://prerender.io/nicer-user-experience/"&gt;best user experience&lt;/a&gt; possible for human users. Afterward, it renders an HTML version for the search engine crawler and interactive JavaScript for the human, so you can optimize your crawl budget without sacrificing user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screaming Frog&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.screamingfrog.co.uk/seo-spider/"&gt;Screaming Frog&lt;/a&gt; is a heavy-duty, yet flexible, website crawler that helps you audit your website for technical errors and issues. &lt;/p&gt;

&lt;p&gt;Whether you have a large or small website, Screaming Frog can efficiently crawl and analyze your website in real-time. It’s an equally suitable SEO tool for beginner and experienced SEO professionals. &lt;/p&gt;

&lt;p&gt;It helps extract and analyze data for common SEO issues, which can help you find ways to tweak your on-page SEO tactics. &lt;/p&gt;

&lt;p&gt;Screaming Frog will let you crawl up to 500 URLs for free. Buying a license removes this limit and gives you access to more advanced features. If you have a new website, you can take advantage of its free version so you can audit your website without any additional cost, then invest in the paid license as your website grows. However, there are many &lt;a href="https://prerender.io/screaming-frog-alternative/"&gt;Screaming Frog alternatives&lt;/a&gt; that you can use to meet your website’s specific technical SEO needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Search Console&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.searchenginejournal.com/google-search-console-guide/209318/"&gt;Google Search Console&lt;/a&gt; is a free SEO audit tool offered by Google that helps you monitor your website’s performance in the SERPs.  It also allows you to submit and track your website’s XML sitemaps.&lt;/p&gt;

&lt;p&gt;This tool is a must-have for business owners who want to make sure their website is receiving optimal search engine visibility. With Google Search Console, you can identify any errors or issues that may be preventing your website from ranking higher than it could.&lt;/p&gt;

&lt;p&gt;You can also use Google Search Console to analyze and benchmark your site’s performance and compare it to your competitors. This data can help you make necessary adjustments to stay ahead of the competition.&lt;/p&gt;

&lt;p&gt;Google Search Console allows you to:&lt;/p&gt;

&lt;p&gt;Submit your sitemaps to be crawled and indexed&lt;br&gt;
Monitor your website’s overall performance and search traffic&lt;br&gt;
Identify any indexing or crawling errors and issues with your website such as soft 404 errors&lt;br&gt;
Identify keywords and search queries that are bringing traffic to your site&lt;br&gt;
View the landing pages with the most traffic&lt;br&gt;
Check hreflang implementation on a site &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Mobile-Friendly Test&lt;/strong&gt;&lt;br&gt;
Another practical SEO tool offered by Google is the &lt;a href="https://search.google.com/test/mobile-friendly"&gt;Google Mobile-Friendly Test&lt;/a&gt;. With this free tool, you can determine if your website is mobile-friendly. &lt;/p&gt;

&lt;p&gt;Over the years, Google has led the movement towards mobile responsiveness in web development. That’s why they made the Google Mobile-Friendly Test to guide webmasters and &lt;a href="https://prerender.io/how-to-hire-a-technical-seo-consultant/"&gt;technical SEO experts&lt;/a&gt; in giving their users a better experience on mobile devices. The tool also lets you identify any critical alerts or load time issues, and it helps you make sure your site loads and displays correctly on smartphones and tablets.&lt;/p&gt;

&lt;p&gt;The Google Mobile-Friendly Test allows you to:&lt;/p&gt;

&lt;p&gt;Check how visitors interact with the mobile version of your pages&lt;br&gt;
See how well your pages display on different types of mobile devices&lt;br&gt;
Test individual web pages or entire websites&lt;br&gt;
Get feedback on what you can do to make your website more mobile-friendly&lt;br&gt;
Send alerts if critical errors are found on your website&lt;br&gt;
Identify issues that might cause a decrease in load times&lt;/p&gt;

&lt;p&gt;A valuable feature of the Google Mobile-Friendly Test is that it gives you a score and breakdown of how mobile-friendly a page is. This information can help you determine where you need to make changes to improve your site’s mobile-friendliness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google’s Schema.org Structured Data Testing Tool&lt;/strong&gt;&lt;br&gt;
Structured data is information stored in a specific format, which allows search engines to better understand what the page is and what it’s about. With Google’s &lt;a href="https://developers.google.com/search/docs/advanced/structured-data"&gt;Schema.org &lt;/a&gt;Structured Data Testing Tool,  you can identify errors in your site’s &lt;a href="https://prerender.io/structured-data-for-seo/"&gt;structured data&lt;/a&gt; markup.&lt;/p&gt;

&lt;p&gt;It helps you ensure that your website’s structured data markup is accurate and error-free, which in turn can improve your website’s search rankings. It lets you choose between two tools: the Rich Results Test and the Schema Markup Validator. The Rich Results Test checks your website to identify and preview the rich results it qualifies for in search. The Schema Markup Validator validates all structured data embedded in your web pages to make sure it’s formatted correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;W3C Validator&lt;/strong&gt;&lt;br&gt;
The &lt;a href="https://validator.w3.org/"&gt;W3C Validator&lt;/a&gt; is a free, open-source tool that checks the markup validity of your Web documents in HTML, XHTML, SMIL, MathML, and other markup languages.&lt;/p&gt;

&lt;p&gt;By using the W3C Validator, you can ensure that your web pages are debugged and error-free. This can help you avoid penalties that may be associated with incorrect or poorly-written markup and boost your overall search rankings.&lt;/p&gt;

&lt;p&gt;Overall, website maintenance will be more efficient as you can solve any website code error that may arise. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Majestic&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://majestic.com/"&gt;Majestic&lt;/a&gt; is a paid tool that helps you track and analyze your website’s backlinks.&lt;/p&gt;

&lt;p&gt;Majestic is praised in the industry as an efficient performance tool that offers the largest crawler system and extensive backlink data repository for comprehensive backlink analysis. &lt;/p&gt;

&lt;p&gt;By using Majestic, you can get a detailed SEO report of all of your website’s backlinks including:&lt;/p&gt;

&lt;p&gt;Number of backlinks&lt;br&gt;
Referring domains of these links&lt;br&gt;
Anchor text associated with them&lt;/p&gt;

&lt;p&gt;If you’re looking for a way to track and analyze your website’s backlinks, then be sure to try out Majestic. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Barracuda Panguin Tool&lt;/strong&gt;&lt;br&gt;
The &lt;a href="https://barracuda.digital/"&gt;Barracuda Panguin Tool&lt;/a&gt; is a free tool that helps you identify whether a website has been penalized by Google. This can help you determine the cause of your website’s poor rankings and take steps to fix them.&lt;/p&gt;

&lt;p&gt;It also helps you track your website traffic when Google releases an update. With this, you can pinpoint which update penalized your website so you can plan strategies to account for it. &lt;/p&gt;

&lt;p&gt;Barracuda offers a wide range of features to help you make the best results-driven choices to optimize your site:&lt;/p&gt;

&lt;p&gt;Mobile searches&lt;br&gt;
On-site link optimization&lt;br&gt;
Local and regional SEO insights&lt;br&gt;
Data encryptions&lt;br&gt;
Paid and organic visibility&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Search Console XML Sitemap&lt;/strong&gt;&lt;br&gt;
An XML sitemap is a file that contains information about the pages on your website. &lt;a href="https://search.google.com/search-console/not-verified?original_url=/search-console/sitemaps?utm_source%3Dwmx%26utm_medium%3Ddeprecation-pane%26utm_content%3Dsitemap-list&amp;amp;original_resource_id"&gt;The Google Search Console XML Sitemap&lt;/a&gt; is a free tool that helps you diagnose any problems with your sitemap. If an issue has been identified, you can then troubleshoot and monitor it quickly and easily without having to dig through your website’s code.&lt;/p&gt;

&lt;p&gt;With Google Search Console XML Sitemap, you can make sure that your website’s pages are being &lt;a href="https://prerender.io/bigger-crawl-budget/"&gt;crawled&lt;/a&gt; and indexed by Google, which is step 1 in improving your website’s search engine rankings. You can also receive alerts for any critical errors with your site, including indexing errors and spam. If you’re using Accelerated Mobile Pages (AMP), this technical SEO tool helps you to troubleshoot and resolve any issues that happen with them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Botify&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.botify.com/"&gt;Botify&lt;/a&gt; creates the ideal mix of technical SEO, content, and search intent data to create a comprehensive keyword analysis. &lt;/p&gt;

&lt;p&gt;Also, it gathers insights throughout each stage of the crawling and indexing process to efficiently automate any required SEO tasks. &lt;/p&gt;

&lt;p&gt;From start to finish, Botify compiles:&lt;/p&gt;

&lt;p&gt;Internal page rank&lt;br&gt;
Broken links analysis&lt;br&gt;
Page depth&lt;/p&gt;

&lt;h2&gt;
  
  
  Measure, Analyze and Optimize Your Website with These Top Technical SEO Tools
&lt;/h2&gt;

&lt;p&gt;SEO is a complex process that requires careful measurement and analysis to improve your rankings. Using some of the technical SEO tools listed in this article can help you measure, analyze, and optimize your website for better search engine results. &lt;/p&gt;

</description>
      <category>seo</category>
      <category>audit</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Deal with Pagination in Python Step-By-Step Guide</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Fri, 18 Feb 2022 00:20:04 +0000</pubDate>
      <link>https://dev.to/zoltan/how-to-deal-with-pagination-in-python-step-by-step-guide-3hdo</link>
      <guid>https://dev.to/zoltan/how-to-deal-with-pagination-in-python-step-by-step-guide-3hdo</guid>
      <description>&lt;p&gt;Originally posted on &lt;a href="http://www.scraperapi.com/blog"&gt;ScraperAPI&lt;/a&gt;&lt;br&gt;
If you’re working on a large web scraping project (like scraping product information) you have probably stumbled upon paginated pages. It’s standard practice for eCommerce and content sites to break down content into multiple pages to improve user experience. However, web scraping pagination adds some complexity to our work.&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn how to build a pagination web scraper in just a few minutes and without getting blocked by any anti-scraping techniques.&lt;/p&gt;

&lt;p&gt;Although you can follow this tutorial with no prior knowledge, it might be a good idea to check out our Scrapy for beginners guide first for a more in-depth explanation of the framework before you get started.&lt;/p&gt;

&lt;p&gt;Without further ado, let’s jump right into it!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scraping a Website with Pagination Using Python Scrapy&lt;/strong&gt;&lt;br&gt;
For this tutorial, we’ll be scraping the SnowAndRock men’s hats category to extract all product names, prices, and links.&lt;/p&gt;

&lt;p&gt;A little disclaimer: we’re writing this article using a Mac, so you’ll have to adapt things a little bit to make them work on a PC. Other than that, everything should be the same.&lt;/p&gt;

&lt;p&gt;TLDR: here’s a quick snippet to deal with pagination in Scrapy using the “next” button:&lt;/p&gt;

&lt;p&gt;next_page = response.css('a[rel=next]::attr(href)').get()&lt;/p&gt;

&lt;p&gt;if next_page is not None:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   yield response.follow(next_page, callback=self.parse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Keep reading for an in-depth explanation on how to implement this code into your script, along with how to deal with pages without a next button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Set Up Your Development Environment&lt;/strong&gt;&lt;br&gt;
Before we start writing any code, we need to set up our environment to work with Scrapy, a Python library designed for web scraping. It allows us to crawl and extract data from websites, parse the raw data into a structured format, and select elements using CSS and/or XPath selectors.&lt;/p&gt;

&lt;p&gt;First, let’s create a new directory (we’ll call it pagination-scraper) and create a Python virtual environment inside it using the command python -m venv venv, where the second venv is the name of your environment – you can call it whatever you want.&lt;/p&gt;

&lt;p&gt;To activate it, just type source venv/bin/activate. Your command prompt should now show the environment name, (venv), as a prefix.&lt;/p&gt;

&lt;p&gt;Now, installing Scrapy is as simple as typing pip3 install scrapy – it might take a few seconds for it to download and install it.&lt;/p&gt;

&lt;p&gt;Once that’s ready, we’ll input cd venv and create a new Scrapy project: scrapy startproject scrapypagination.&lt;/p&gt;

&lt;p&gt;Now you can see that Scrapy kick-started our project for us by installing all the necessary files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Setting Up ScraperAPI to Avoid Bans&lt;/strong&gt;&lt;br&gt;
The hardest part of handling paginated pages is not writing the script itself; it’s keeping our bot from getting blocked by the server.&lt;/p&gt;

&lt;p&gt;For that, we’ll need to create a function (or set of functions) that rotates our IP address after several attempts (meaning we also need access to a pool of IP addresses). Also, some websites use advanced techniques like CAPTCHAs and browser behavior profiling.&lt;/p&gt;

&lt;p&gt;To save us time and headaches, we’ll use ScraperAPI, an API that uses machine learning, huge browser farms, 3rd party proxies, and years of statistical analysis to handle every anti-bot mechanism our script could encounter automatically.&lt;/p&gt;

&lt;p&gt;Best of all, setting up ScraperAPI into our project is super easy with Scrapy:&lt;/p&gt;

&lt;p&gt;import scrapy&lt;/p&gt;

&lt;p&gt;from urllib.parse import urlencode&lt;/p&gt;

&lt;p&gt;API_KEY = '51e43be283e4db2a5afb62660xxxxxxx'&lt;/p&gt;

&lt;p&gt;def get_scraperapi_url(url):&lt;/p&gt;

&lt;p&gt;payload = {'api_key': API_KEY, 'url': url}&lt;/p&gt;

&lt;p&gt;proxy_url = '&lt;a href="http://api.scraperapi.com/?"&gt;http://api.scraperapi.com/?&lt;/a&gt;' + urlencode(payload)&lt;/p&gt;

&lt;p&gt;return proxy_url&lt;/p&gt;

&lt;p&gt;As you can see, we’re defining the get_scraperapi_url() method to help us construct the URL we’ll send the request to. First, we added our dependencies on the top and then added the API_KEY variable containing our API key – to get your key, just sign up for a free ScraperAPI account and you’ll find it on your dashboard.&lt;/p&gt;

&lt;p&gt;This method will build the URL for the request for each URL our scraper finds, and that’s why we’re setting it up this way instead of the more direct way of just adding all parameters directly into the URL like this:&lt;/p&gt;

&lt;p&gt;start_urls = ['&lt;a href="http://api.scraperapi.com?api_key=%7ByourApiKey%7D&amp;amp;url=%7BURL%7D'"&gt;http://api.scraperapi.com?api_key={yourApiKey}&amp;amp;url={URL}'&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Understanding the URL Structure of the Website&lt;/strong&gt;&lt;br&gt;
URL structure is pretty much unique to each website. Developers tend to use different structures to make navigation easier for themselves and, in some cases, to optimize the experience for search engine crawlers like Google and for real users.&lt;/p&gt;

&lt;p&gt;To scrape paginated content, we need to understand how it works and plan accordingly, and there’s no better way to do it than inspecting the pages and seeing how the URL itself changes from one page to the next.&lt;/p&gt;

&lt;p&gt;So if we go to &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html&lt;/a&gt; and scroll to the last product listed, we can see that it uses a numbered pagination plus a next button.&lt;/p&gt;

&lt;p&gt;This is great news, as selecting the next button on every page will be easier than cycling through each page number. Still, let’s see how the URL changes when clicking on the second page.&lt;/p&gt;

&lt;p&gt;Here’s what we’ve found:&lt;/p&gt;

&lt;p&gt;Page 1: &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&amp;amp;size=48"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&amp;amp;size=48&lt;/a&gt;&lt;br&gt;
Page 2: &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html?page=1&amp;amp;size=48"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html?page=1&amp;amp;size=48&lt;/a&gt;&lt;br&gt;
Page 3: &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html?page=2&amp;amp;size=48"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html?page=2&amp;amp;size=48&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the page-one URL changes when you go back to it using the navigation, gaining the page=0 parameter. Although we’re going to use the next button to navigate this website’s pagination, it is not as simple in every case.&lt;/p&gt;

&lt;p&gt;Understanding this structure will help us build a function to change the page parameter in the URL and increase it by 1, allowing us to go to the next page without a next button.&lt;/p&gt;

&lt;p&gt;Note: not all pages follow this same structure so make sure to always check which parameters change and how.&lt;/p&gt;
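&lt;p&gt;As a sketch of that approach (assuming the page and size parameters shown above), a small helper can bump the page parameter by one:&lt;/p&gt;

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def next_page_url(url):
    # Increment the 'page' query parameter, leaving every other parameter intact.
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query['page'] = [str(int(query.get('page', ['0'])[0]) + 1)]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(next_page_url('https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&size=48'))
# https://www.snowandrock.com/c/mens/accessories/hats.html?page=1&size=48
```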

&lt;p&gt;Now that we know the initial URL for the request we can create a custom spider.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sending the Initial Request Using the start_requests() Method
For the initial request we’ll create a Spider class and give it the name “pagi”:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class PaginationScraper(scrapy.Spider):
    name = "pagi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, we define the start_requests() method:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def start_requests(self):
    start_urls = ['https://www.snowandrock.com/c/mens/accessories/hats.html']
    for url in start_urls:
        yield scrapy.Request(url=get_scraperapi_url(url), callback=self.parse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, after running our script, it will send each new URL found to this method, where the new URL will be merged with the result of the get_scraperapi_url() method, sending the request through the ScraperAPI servers and bullet-proofing our project.&lt;/p&gt;
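&lt;p&gt;For reference, get_scraperapi_url() simply URL-encodes the target page and appends it to the ScraperAPI endpoint. A minimal sketch (with a placeholder API key):&lt;/p&gt;

```python
from urllib.parse import urlencode

API_KEY = 'YOUR_API_KEY'  # placeholder; use your own ScraperAPI key

def get_scraperapi_url(url):
    # Wrap the target URL so the request is routed through ScraperAPI's proxy
    payload = {'api_key': API_KEY, 'url': url}
    return 'http://api.scraperapi.com/?' + urlencode(payload)
```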

&lt;ol&gt;
&lt;li&gt;Building Our Parser
After testing our selectors with Scrapy Shell, these are the selectors we came up with:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def parse(self, response):
    for hats in response.css('div.as-t-product-grid__item'):
        yield {
            'name': hats.css('.as-a-text.as-m-product-tile__name::text').get(),
            'price': hats.css('.as-a-price__value--sell strong::text').get(),
            'link': hats.css('a').attrib['href'],
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you’re not familiar with Scrapy Shell or with Scrapy in general, it might be a good idea to check our full Scrapy tutorial where we cover all the basics you need to know.&lt;/p&gt;

&lt;p&gt;However, we’re basically selecting all the divs containing the information we want (response.css('div.as-t-product-grid__item')) and then extracting the name, the price, and the product’s link.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make Scrapy Move Through the Pagination
Great! We have the information we need from the first page, now what? Well, we’ll need to tell our parser to find the new URL somehow and send it to the start_requests() method we defined before. In other words, we need to find an ID or class we can use to get the link inside the next button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Technically we could use the class ‘.as-a-btn.as-a-btn--pagination as-m-pagination__item’ but luckily for us, there’s a better target: rel=next. It won’t get confused with any other selectors, and picking an attribute with Scrapy is simple.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;next_page = response.css('a[rel=next]').attrib['href']

if next_page is not None:

    yield response.follow(next_page, callback=self.parse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now it will iterate through the pages until there are no more pages in the pagination, so we don’t need to set any other stop mechanism.&lt;/p&gt;
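&lt;p&gt;One caveat worth noting: attrib['href'] raises a KeyError when no rel=next anchor exists, so the safer form in Scrapy is response.css('a[rel=next]::attr(href)').get(), which returns None on the last page and trips the stop condition. Outside of Scrapy, the same rel=next lookup can be sketched with Python’s standard library (the HTML snippets are hypothetical, for illustration only):&lt;/p&gt;

```python
from html.parser import HTMLParser

class NextLinkFinder(HTMLParser):
    # Collects the href of the first anchor with rel="next", mimicking
    # what response.css('a[rel=next]::attr(href)').get() does in Scrapy.
    def __init__(self):
        super().__init__()
        self.next_href = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and attrs.get('rel') == 'next' and self.next_href is None:
            self.next_href = attrs.get('href')

def find_next_page(html):
    finder = NextLinkFinder()
    finder.feed(html)
    return finder.next_href  # None when there is no next page, ending the crawl
```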

&lt;p&gt;If you’ve been following along, your file should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import scrapy

from urllib.parse import urlencode

API_KEY = '51e43be283e4db2a5afb62660xxxxxx'

def get_scraperapi_url(url):
    payload = {'api_key': API_KEY, 'url': url}
    proxy_url = 'http://api.scraperapi.com/?' + urlencode(payload)
    return proxy_url

class PaginationScraper(scrapy.Spider):
    name = "pagi"

    def start_requests(self):
        start_urls = ['https://www.snowandrock.com/c/mens/accessories/hats.html']
        for url in start_urls:
            yield scrapy.Request(url=get_scraperapi_url(url), callback=self.parse)

    def parse(self, response):
        for hats in response.css('div.as-t-product-grid__item'):
            yield {
                'name': hats.css('.as-a-text.as-m-product-tile__name::text').get(),
                'price': hats.css('.as-a-price__value--sell strong::text').get(),
                'link': hats.css('a').attrib['href'],
            }

        next_page = response.css('a[rel=next]').attrib['href']
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It is now ready to run!&lt;/p&gt;

&lt;p&gt;Dealing With Pagination Without a Next Button&lt;br&gt;
So far we’ve seen how to build a web scraper that moves through pagination using the link inside the next button – remember that Scrapy can’t actually interact with the page so it won’t work if the button has to be clicked in order for it to show more content.&lt;/p&gt;

&lt;p&gt;However, what happens when it isn’t an option? In other words, how can we navigate a pagination without a next button to rely on?&lt;/p&gt;

&lt;p&gt;Here’s where understanding the URL structure of the site comes in handy:&lt;/p&gt;

&lt;p&gt;Page 1: &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&amp;amp;size=48"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&amp;amp;size=48&lt;/a&gt;&lt;br&gt;
Page 2: &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html?page=1&amp;amp;size=48"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html?page=1&amp;amp;size=48&lt;/a&gt;&lt;br&gt;
Page 3: &lt;a href="https://www.snowandrock.com/c/mens/accessories/hats.html?page=2&amp;amp;size=48"&gt;https://www.snowandrock.com/c/mens/accessories/hats.html?page=2&amp;amp;size=48&lt;/a&gt;&lt;br&gt;
The only thing changing between URLs is the page parameter, which increases by 1 for each next page. What does it mean for our script? Well, first of all, we’ll have to change the way we’re sending the initial request by adding a new variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class PaginationScraper(scrapy.Spider):
    name = "pagi"
    page_number = 1
    start_urls = ['http://api.scraperapi.com?api_key=51e43be283e4db2a5afb62660xxxxxxx&amp;amp;url=https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&amp;amp;size=48']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this case we’re also using the direct cURL structure of ScraperAPI because we’re just changing a parameter, meaning there’s no need to construct a whole new URL. This way, every time the parameter changes, the request will still be sent through ScraperAPI’s servers.&lt;/p&gt;

&lt;p&gt;Next, we’ll need to change our condition at the end to match the new logic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;next_page = 'http://api.scraperapi.com?api_key=51e43be283e4db2a5afb62660xxxxxxx&amp;amp;url=https://www.snowandrock.com/c/mens/accessories/hats.html?page=' + str(PaginationScraper.page_number) + '&amp;amp;size=48'

if PaginationScraper.page_number &amp;lt; 6:

    PaginationScraper.page_number += 1

    yield response.follow(next_page, callback=self.parse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;What’s happening here is that we’re accessing the page_number class attribute from PaginationScraper to replace the value of the page parameter inside the URL.&lt;/p&gt;

&lt;p&gt;Afterwards, it will check if the value of page_number is less than 6 – because after page 5 there are no more results.&lt;/p&gt;

&lt;p&gt;As long as the condition is met, it will increase the page_number value by 1 and send the URL to be parsed and scraped, and so on until the page_number is 6 or more.&lt;/p&gt;

&lt;p&gt;Here’s the full code to scrape paginated pages without a next button:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import scrapy

class PaginationScraper(scrapy.Spider):
    name = "pagi"
    page_number = 1
    start_urls = ['http://api.scraperapi.com?api_key=51e43be283e4db2a5afb62660xxxxxxx&amp;amp;url=https://www.snowandrock.com/c/mens/accessories/hats.html?page=0&amp;amp;size=48']

    def parse(self, response):
        for hats in response.css('div.as-t-product-grid__item'):
            yield {
                'name': hats.css('.as-a-text.as-m-product-tile__name::text').get(),
                'price': hats.css('.as-a-price__value--sell strong::text').get(),
                'link': 'https://www.snowandrock.com/' + hats.css('a').attrib['href']
            }

        next_page = 'http://api.scraperapi.com?api_key=51e43be283e4db2a5afb62660xxxxxxx&amp;amp;url=https://www.snowandrock.com/c/mens/accessories/hats.html?page=' + str(PaginationScraper.page_number) + '&amp;amp;size=48'
        if PaginationScraper.page_number &amp;lt; 6:
            PaginationScraper.page_number += 1
            yield response.follow(next_page, callback=self.parse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Wrapping Up&lt;br&gt;
Whether you’re compiling real estate data or scraping eCommerce platforms like Etsy, dealing with pagination will be a common occurrence and you need to be prepared to get creative.&lt;/p&gt;

&lt;p&gt;Alternative data has become a must-have for almost every industry in the world, and having the ability to create complex and efficient scrapers will give you a huge competitive advantage.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>2022 SEO Statistics: Industry Stats From Video to Local SEO</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Tue, 01 Feb 2022 04:29:54 +0000</pubDate>
      <link>https://dev.to/zoltan/2022-seo-statistics-industry-stats-from-video-to-local-seo-2b2l</link>
      <guid>https://dev.to/zoltan/2022-seo-statistics-industry-stats-from-video-to-local-seo-2b2l</guid>
<description>&lt;p&gt;SEO is constantly changing in hundreds of little ways. It’s considered one of the most challenging digital marketing disciplines because of its complexity and scope. &lt;/p&gt;

&lt;p&gt;SEO is a highly technical discipline at its heart, but it incorporates the creative and social aspects of marketing as well. SEO professionals need to be good not just at optimizing websites, but also at content creation and relationship-building. It’s not enough to have a website that runs fast and looks good; you also have to provide content that’s valuable to your audience, and have a backlink profile that demonstrates that other websites trust you.&lt;/p&gt;

&lt;p&gt;To keep you up to date on what’s going on in SEO for 2022, we have gathered some of the most important SEO statistics available. Staying informed about SEO trends will help you make the best website you can, provide the best user experience possible, and keep the organic traffic coming in.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;General SEO Stats and Facts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We’ll start with a top-level view on the value of SEO as a marketing strategy. These statistics are a solid foundation that you can use to build and develop an effective SEO strategy and execute it efficiently. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Google is the &lt;a href="https://www.netmarketshare.com/search-engine-market-share.aspx"&gt;most popular search engine&lt;/a&gt; in the market. In 2021, 79% of total desktop search traffic came from Google. The next biggest is Bing at 7.27%, followed by Baidu at 6.55% and Yahoo at 5.06%, with the remaining 2.2% spread across AOL and other small players&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.prnewswire.com/news-releases/70-of-small-businesses-do-not-have-an-seo-strategy-as-online-visibility-becomes-critically-important-301033181.html"&gt;About 70% of small companies&lt;/a&gt; don’t have an SEO strategy in place. Businesses that are not investing in SEO are missing out on a chance to increase their online visibility and ranking on the first page &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://databox.com/seo-friendly-website"&gt;Quality content&lt;/a&gt; and off-page SEO are still considered the most significant aspects of SEO success. Several authoritative sources claim that the combination of these two factors is the effective way to rank content in search engines &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://propecta.com/long-tail-keywords"&gt;50% of all organic search queries&lt;/a&gt; consist of four or more words. If your website does not include long-tail keywords, you’re missing out on vital web traffic that can affect your conversion rates&lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://www.hubspot.com/state-of-marketing?__hstc=20629287.9d1234faf6274f06ac0b2d8dc0d5bd9d.1558380913243.1574887448535.1575403018587.161&amp;amp;__hssc=20629287.1.1575403018587&amp;amp;__hsfp=2484520036"&gt;61% of marketers believe that SEO&lt;/a&gt; is significant for online success. Ongoing SEO optimization is the most effective way to increase traffic and grow your business online &lt;/li&gt;
&lt;li&gt;Google crawls &lt;a href="https://venturebeat.com/2013/03/01/how-google-searches-30-trillion-web-pages-100-billion-times-a-month/"&gt;100 billion pages&lt;/a&gt; in a month &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.aira.net/state-of-link-building/in-house-specific-questions/"&gt;46% of businesses&lt;/a&gt; have reported allocating $10,000 or more of their marketing budget for link building. Companies using SEO best practices have better chances to get on search engine pages and run successful campaigns &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://profitworks.ca/blog/488-conversions-of-longtail-keywords-are-2-5x-higher-than-head-keywords"&gt;Long-tail keywords have a higher click-through-rate&lt;/a&gt; (CTR), about 2.5x more than head terms (queries with one or two words)&lt;/li&gt;
&lt;li&gt;Businesses that have online reviews are trusted more by customers. &lt;a href="https://searchengineland.com/88-consumers-trust-online-reviews-much-personal-recommendations-195803"&gt;Around 88% of customers trust the businesses&lt;/a&gt; that have online reviews. Social reviews on platforms such as Google Business Profile, Facebook, and Yelp are effective trust and conversion boosters&lt;/li&gt;
&lt;li&gt;URLs on page 2 of Google SERPs only get &lt;a href="https://www.forbes.com/sites/forbesagencycouncil/2017/10/30/the-value-of-search-results-rankings/?sh=6595cf8844d3"&gt;6% of clicks&lt;/a&gt;. This means if you are not ranking on the first page of Google, you have less than a 1% chance of receiving organic clicks from users. This highlights the importance of getting your website on the first page of Google &lt;/li&gt;
&lt;li&gt;In 2020, &lt;a href="https://www.worldwidewebsize.com/"&gt;50 billion web pages were indexed&lt;/a&gt; in Google search &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.prnewswire.com/news-releases/nearly-all-consumers-97-now-use-online-media-to-shop-locally-according-to-biakelsey-and-constat-87221242.html"&gt;97% of customers research a company using the internet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://searchengineland.com/2016-state-of-link-building-survey-coverage-246664"&gt;About 35% of companies allocate more than $1000 per month&lt;/a&gt; to their link-building efforts. The top-ranking websites of search engines are constantly working to retain their positions by adding high authority and relevant links to their websites every month &lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://www.statista.com/chart/19058/number-of-websites-online/"&gt;1.88 billion websites&lt;/a&gt; exist in the market today &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aira.net/state-of-link-building/link-building-measurement-and-reporting/"&gt;65% of digital marketers claim that link building is the most challenging aspect of SEO&lt;/a&gt;. Link building is one of the most unpredictable and challenging aspect of SEO because it’s the one that web masters have the least control over&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://databox.com/seo-vs-ppc"&gt;70% of clicks in Google search&lt;/a&gt; results go to organic results, with only 30% going to Google Ads listings. Even after Google’s best result, users are still able to distinguish between the paid and organic listing in the search results &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JlcnB7QG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cjmd6njl6nfy8rlyycgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JlcnB7QG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cjmd6njl6nfy8rlyycgz.png" alt="Image description" width="464" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Search Engine Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;SEO is a field that studies the way that search engines work, first and foremost. This means not just understanding what search engines think is important, but how they decide that.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first result in &lt;a href="https://backlinko.com/google-ctr-stats"&gt;Google’s Organic search has an average click-through-rate (CTR) of 31.7%&lt;/a&gt;. This CTR decreases as you go down the page, with the 2nd result having a CTR of 24.71%, the 3rd result having a CTR of 18.6%, and the 4th result having a CTR of 13.6% &lt;/li&gt;
&lt;li&gt;Relevant search results influence &lt;a href="https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/meet-needs-i-want-to-buy-moments/"&gt;39% of purchase decisions&lt;/a&gt;. In other words, users are more likely to make a purchase from websites they see on their first page of organic results &lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://ahrefs.com/blog/featured-snippets-study/"&gt;12.29% of search queries&lt;/a&gt; find featured snippets in their search results. Featured snippets are popular among users as they allow them to quickly obtain the information they’re searching for&lt;/li&gt;
&lt;li&gt;About &lt;a href="https://www.internetlivestats.com/google-search-statistics/"&gt;3.5 billion searches&lt;/a&gt; take place on Google every day &lt;/li&gt;
&lt;li&gt;Around 34.4% of searches on mobile and &lt;a href="https://sparktoro.com/blog/google-ctr-in-2018-paid-organic-no-click-searches/"&gt;61.5% of searches&lt;/a&gt; on desktop result in absolutely no clicks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TWUz3UMH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/innpqg69f942y59hz0l7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TWUz3UMH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/innpqg69f942y59hz0l7.png" alt="Image description" width="478" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Video SEO Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Video SEO refers to optimizing videos to rank higher in search, increasing each video’s organic viewership, and maximizing its potential reach. It mostly refers to YouTube videos, but can also include other video platforms such as Vimeo and Dailymotion. With these helpful video SEO statistics, you can use your video marketing efforts to augment your SEO strategy. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First-page videos on YouTube have an &lt;a href="https://backlinko.com/youtube-ranking-factors"&gt;average length of 14 minutes and 50 seconds&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Posts that contain videos gain a &lt;a href="https://www.brightcove.com/en/resources/blog/create-compelling-video-experiences/"&gt;157% boost in organic traffic and backlinks&lt;/a&gt;. Multimedia content is best – a mixture of text and videos enhances your content and makes it more engaging for your users&lt;/li&gt;
&lt;li&gt;Engagement metrics such as the number of comments, shares, likes, and view count have a strong correlation with &lt;a href="https://blog.youtube/news-and-events/youtube-search-now-optimized-for-time/"&gt;higher video rankings&lt;/a&gt; on YouTube &lt;/li&gt;
&lt;li&gt;However, &lt;a href="https://www.3playmedia.com/blog/9-quick-tips-for-youtube-seo-strategy/"&gt;there was no link between YouTube rankings and a keyword-optimized video description&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Videos that gain new subscribers tend to rank &lt;a href="https://creatoracademy.youtube.com/page/lesson/engagement-analytics"&gt;higher on YouTube&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://backlinko.com/youtube-ranking-factors#:~:text=SD%20from%20our%20correlation%20data,page%20of%20YouTube's%20search%20results."&gt;68.2% of first-page YouTube results are HD videos&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;YouTube is the &lt;a href="https://www.hootsuite.com/resources/digital-trends"&gt;second most viewed website&lt;/a&gt; on the internet&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PQlgRB3V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/52eguo2t7rs1uiu85mzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PQlgRB3V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/52eguo2t7rs1uiu85mzg.png" alt="Image description" width="324" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Local SEO Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Businesses with a strong local presence in a geographic region are more likely to earn consumer trust. In many industries, like retail and services, people are more likely to gravitate towards a local business with a strong online reputation than a franchise without much visibility in their region. &lt;/p&gt;

&lt;p&gt;Here are a few facts and figures that help demonstrate that.  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You will find a Yelp page in the top 5 results for up to &lt;a href="https://freshchalk.com/blog/150k-small-business-website-teardown-2019"&gt;92% of search queries&lt;/a&gt; containing a city location and a business category &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/mobile-location-searches-to-store-visit-data/"&gt;76% of people who search&lt;/a&gt; on their smartphones for something nearby, visit the same business within a day &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.seroundtable.com/google-46-of-searches-have-local-intent-26529.html"&gt;46% of all searches&lt;/a&gt; on Google are for a local service or a local business &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-search-trends-consumers-to-stores/"&gt;28% of all local searches&lt;/a&gt; result in a purchase &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://freshchalk.com/blog/150k-small-business-website-teardown-2019"&gt;25% of websites&lt;/a&gt; that belong to small businesses do not have an H1 tag&lt;/li&gt;
&lt;li&gt;A survey found that 87% of customers &lt;a href="https://www.brightlocal.com/research/local-consumer-review-survey/"&gt;read online reviews&lt;/a&gt; for businesses during a local search. &lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://www.brightlocal.com/research/local-consumer-review-survey/"&gt;93% of customers&lt;/a&gt; use the internet to find a business in their local area&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NLt84U1Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szf91m4o0fqbw1g71yaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NLt84U1Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szf91m4o0fqbw1g71yaz.png" alt="Image description" width="370" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mobile SEO Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Google rolled out &lt;a href="https://developers.google.com/search/mobile-sites/mobile-first-indexing"&gt;mobile-first indexing in March 2018&lt;/a&gt; to account for the rise of web traffic on mobile devices. Mobile SEO, the practice of optimizing websites for search on mobile devices, has become more important than ever as a result. To help you get started, we’ve gathered some insightful mobile SEO stats to help you effectively target mobile users. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://hitwise.connexity.com/070116_MobileSearchReport_CD_US.html"&gt;58% of all Google searches&lt;/a&gt; happen on mobile devices&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.go-globe.hk/local-seo/"&gt;87% of smartphone owners&lt;/a&gt; conduct a search on mobile at least once a day&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-search-trends-consumers-to-stores/"&gt;30% of all mobile searches&lt;/a&gt; are specific to a location &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/search/being-there-micromoments-especially-mobile/"&gt;51% of smartphone users&lt;/a&gt; found a new company or product while searching on mobile. &lt;/li&gt;
&lt;li&gt;Voice searches account for &lt;a href="https://searchengineland.com/google-reveals-20-percent-queries-voice-queries-249917"&gt;20% of the queries&lt;/a&gt; on mobile &lt;/li&gt;
&lt;li&gt;The organic &lt;a href="https://www.slideshare.net/randfish/the-search-seo-world-in-2018"&gt;click-through rate on mobile is approximately 50% less&lt;/a&gt; when compared to desktop. &lt;/li&gt;
&lt;li&gt;The very first organic search result on mobile receives around &lt;a href="https://www.seoclarity.net/mobile-desktop-ctr-study-11302/"&gt;27.26% of clicks&lt;/a&gt;, which is higher than the 19.3% of clicks received on desktop &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/mobile-near-me-searches-yoy-growth-2016/"&gt;84% of all “near me” queries&lt;/a&gt; take place on mobile &lt;/li&gt;
&lt;li&gt;Around 67% of smartphone users are more likely to buy from businesses that have a &lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/local-search-mobile-search-micro-moments/"&gt;location-optimized mobile website&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Paid clicks on mobile devices now account for almost &lt;a href="https://hitwise.connexity.com/070116_MobileSearchReport_CD_US.html"&gt;62% of Google’s paid ads&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5Z2_QJuv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nls3pdxiuzpggkiy8325.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Z2_QJuv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nls3pdxiuzpggkiy8325.png" alt="Image description" width="444" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Link Building Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When it comes to link building, quality is more important than quantity. You will get more value out of 5 good, strong, authoritative backlinks than 50 spammy backlinks with no traffic. The challenge of finding those strong, quality backlinks consistently and at scale is what makes link building one of the hardest SEO tactics to plan and execute.&lt;/p&gt;

&lt;p&gt;With this in mind, here are a few link building statistics to help give you an edge in your next outreach campaign.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://moz.com/blog/content-shares-and-links-insights-from-analyzing-1-million-articles"&gt;Content with at least 1,000 words&lt;/a&gt; or more gets more links and social media engagement than shorter-form content&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://deck7.com/blog/7-favorite-seo-tools-of-b2b-marketers"&gt;70% of digital marketers&lt;/a&gt; use Ahrefs as a tool for link building&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://databox.com/best-free-seo-tools"&gt;17% of SEO professionals&lt;/a&gt; say they only use free SEO tools&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://ahrefs.com/blog/buy-backlinks/"&gt;average cost of a paid guest post&lt;/a&gt; is $77.80. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aira.net/state-of-link-building/in-house-specific-questions/"&gt;46% of marketers&lt;/a&gt; spend $10,000 or more on linkbuilding&lt;/li&gt;
&lt;li&gt;A quality backlink costs an &lt;a href="https://www.siegemedia.com/seo/link-building-cost"&gt;average of $800-$1,000 to acquire&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://www.semrush.com/blog/link-building-strategies-professionals-choose/"&gt;53% of marketers&lt;/a&gt; consider guest posting as an effective tool for link building.&lt;/li&gt;
&lt;li&gt;More than &lt;a href="https://www.semrush.com/blog/link-building-strategies-professionals-choose/"&gt;63% of marketers&lt;/a&gt; and businesses prefer to outsource their link building routines to a third party&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://databox.com/improving-website-domain-authority"&gt;34% of marketers&lt;/a&gt; report that their website has a Moz domain authority of 40 or higher&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ahrefs.com/blog/search-traffic-study/"&gt;66.31% of web pages&lt;/a&gt; have no backlinks &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://seotribunal.com/blog/stats-to-understand-seo/"&gt;65% of marketers&lt;/a&gt; say that linkbuilding is the hardest SEO tactic to execute&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1ewnxeQI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbhejdrbdmw9ophiurw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1ewnxeQI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbhejdrbdmw9ophiurw2.png" alt="Image description" width="396" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Keyword Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Keyword research is the backbone of any SEO campaign; it is where the actual work begins, so it is critical to understand keywords and their importance. These SEO statistics will bring you up to speed on keyword trends and help you understand how people search on Google. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://ahrefs.com/blog/long-tail-keywords/"&gt;70.87% of keywords&lt;/a&gt; that are searched more than 10,000 times monthly consist of only one or two words. &lt;/li&gt;
&lt;li&gt;Around 8% of search queries are &lt;a href="https://moz.com/blog/state-of-searcher-behavior-revealed"&gt;phrased as questions&lt;/a&gt; on search engines. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ahrefs.com/blog/long-tail-keywords/"&gt;92.42% of keywords&lt;/a&gt; only have ten or fewer monthly searches. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ahrefs.com/blog/long-tail-keywords/"&gt;13.5% of keywords&lt;/a&gt; that have ten or less searches per month consist of only one or two words &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ahrefs.com/blog/long-tail-keywords/"&gt;0.16% of the most popular keywords&lt;/a&gt; are behind 60.67% of all searches taking place on the internet.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---R_XJDre--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bbcokaexlxt8yqqth26y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---R_XJDre--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bbcokaexlxt8yqqth26y.png" alt="Image description" width="500" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Search Engine Trends&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;SEO is constantly evolving, and new trends emerge every day. To stay on top of your SEO game, you must be aware of search engine trends and incorporate them into your SEO strategy. These search engine facts will give you a picture of the current state of search and help you optimize your campaigns to maximize results. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://blog.hubspot.com/insiders/inbound-marketing-stats"&gt;75% of search engine users&lt;/a&gt; never go beyond the first page of Google results &lt;/li&gt;
&lt;li&gt;Around 21% of people surfing the internet click on &lt;a href="https://moz.com/blog/state-of-searcher-behavior-revealed"&gt;more than one search result &lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Organic traffic accounts for more than &lt;a href="https://www.brightedge.com/resources/research-reports/channel_share"&gt;40% of revenue&lt;/a&gt; across the Media and Entertainment, Retail, Hospitality, and Technology/Internet sectors &lt;/li&gt;
&lt;li&gt;SEMrush reported that users visit &lt;a href="https://www.semrush.com/ranking-factors/"&gt;3 to 3.5 pages&lt;/a&gt; every time they land on a website from search engines. &lt;/li&gt;
&lt;li&gt;Ahrefs conducted a study that found that around &lt;a href="https://ahrefs.com/blog/search-traffic-study/"&gt;90.63% of content&lt;/a&gt; gets no traffic from Google. &lt;/li&gt;
&lt;li&gt;Domains ranking in the top 3 spots have an average &lt;a href="https://www.semrush.com/ranking-factors/"&gt;bounce rate of 49% &lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Wordstream reported that &lt;a href="https://www.wordstream.com/blog/ws/2017/02/08/brand-affinity-marketing"&gt;brand affinity helped boost&lt;/a&gt; the CTRs by 2 to 3 times. &lt;/li&gt;
&lt;li&gt;More than &lt;a href="https://www.internetlivestats.com/one-second/#google-band"&gt;86,000 searches&lt;/a&gt; happen on Google every second. &lt;/li&gt;
&lt;li&gt;The &lt;a href="https://www.searchenginewatch.com/2011/04/21/top-google-result-gets-36-4-of-clicks-study/"&gt;1st to 3rd ranking pages&lt;/a&gt; on SERPs generate a clickthrough rate of 36%. &lt;/li&gt;
&lt;li&gt;Google made more than &lt;a href="https://www.searchenginejournal.com/how-google-improves-search-results/377451/"&gt;3,620 improvements&lt;/a&gt; to search in 2018. &lt;/li&gt;
&lt;li&gt;Around 18% of people searching the internet &lt;a href="https://moz.com/blog/state-of-searcher-behavior-revealed"&gt;enter a new query&lt;/a&gt; before they click on any result for the original query. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ahrefs.com/blog/featured-snippets-study/"&gt;99.58% of featured snippets&lt;/a&gt; rank in 1 to 10 positions on Google. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.digitalmarketing.org/blog/topdigitalmarketingtrends"&gt;Organic methods drive around 300% more traffic&lt;/a&gt; to websites when compared to social media. &lt;/li&gt;
&lt;li&gt;The &lt;a href="https://sparktoro.com/blog/the-powerhouses-of-the-internet-are-turning-hostile-to-websites/"&gt;major search engines&lt;/a&gt;, YouTube, Bing, Google and Yahoo account for a total of 70.6% of all website traffic. &lt;/li&gt;
&lt;li&gt;Searchmetrics reported that the &lt;a href="https://seoheronews.com/goto?http://searchengineland.com/searchmetrics-google-ranking-factors-study-says-content-gaining-links-losing-importance-265431"&gt;average time a user spends on a website in the top 10 SERP results&lt;/a&gt; is 3 minutes and 10 seconds. &lt;/li&gt;
&lt;li&gt;Google reported that &lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/search/voice-search-mobile-use-statistics/"&gt;1.1 billion smartphone users&lt;/a&gt; use voice search every week &lt;/li&gt;
&lt;li&gt;OC&amp;amp;C reported that voice search has a &lt;a href="https://www.occstrategy.com/media/1285/the-talking-shop_uk.pdf"&gt;20% market share&lt;/a&gt; of all Google Searches in the US.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jlcfCLrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tt4y7ia8en5997rttz5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jlcfCLrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tt4y7ia8en5997rttz5h.png" alt="Image description" width="286" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Voice Search Stats&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Voice searches have started to replace traditional text-based searches. They have made it easier for people to search the internet and have reduced the need for typing every query. These voice search SEO stats will get you started on an effective SEO strategy. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The average voice search result is only &lt;a href="https://backlinko.com/voice-search-seo-study"&gt;29 words&lt;/a&gt; long &lt;/li&gt;
&lt;li&gt;The average Google voice search result receives around &lt;a href="https://backlinko.com/voice-search-seo-study"&gt;44 tweets and 1,199 Facebook shares&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The average page that ranks in &lt;a href="https://backlinko.com/voice-search-seo-study"&gt;Google voice search contains around 212 words&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://voicebot.ai/2020/04/28/nearly-90-million-u-s-adults-have-smart-speakers-adoption-now-exceeds-one-third-of-consumers/"&gt;ComScore predicted&lt;/a&gt; that search queries would take up half of the searches by 2020 &lt;/li&gt;
&lt;li&gt;Mobile voice searches are &lt;a href="https://www.searchenginewatch.com/2013/09/26/how-will-voice-search-change-seo-for-local-stores-global-enterprises/"&gt;3 times more likely&lt;/a&gt; to be for something local when compared to text searches &lt;/li&gt;
&lt;li&gt;Voice searches load in 4.6 seconds, which is &lt;a href="https://www.machmetrics.com/speed-blog/average-page-load-times-websites-2018/"&gt;52% faster&lt;/a&gt; when compared to the load time of the average page. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://backlinko.com/voice-search-seo-study#:~:text=Indeed%2C%20we%20found%20that%20the,a%209th%20grade%20reading%20level.&amp;amp;text=The%20readability%20of%20that%20result,in%20their%20voice%20search%20algorithm."&gt;9th grade is the average reading level of a voice search result&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://backlinko.com/voice-search-seo-study"&gt;40.7% of all answers that are displayed in a voice search&lt;/a&gt; are pulled from a featured snippet. &lt;/li&gt;
&lt;li&gt;Over &lt;a href="https://techcrunch.com/2018/03/07/47-3-million-u-s-adults-have-access-to-a-smart-speaker-report-says/?guccounter=1&amp;amp;guce_referrer=aHR0cHM6Ly9iYWNrbGlua28uY29tLw&amp;amp;guce_referrer_sig=AQAAAMKMvCFL8hv_PF2gykXZBREPVRbWLRXGgvZtJovD9jOS8oL4bAZamjfJY5LsVVsqibQz00bok0Cw7v_TaIRDE13UA0Ryp_ygBZqn69dYLgMsXhmC-XRTg2gz7uk7vweQqOTsrcgSUMv8bbhf2pnDAjpGQRn79exQpOYrtK06a2KF"&gt;47 million people&lt;/a&gt; use Alexa and Google smart speakers in the US. &lt;/li&gt;
&lt;li&gt;Websites that have a strong link authority are more likely to rank well in voice search. &lt;/li&gt;
&lt;li&gt;HTTPS websites account for &lt;a href="https://backlinko.com/voice-search-seo-study"&gt;70.4% of voice search&lt;/a&gt; result pages. &lt;/li&gt;
&lt;li&gt;While 31.3% of pages globally use schema markup, &lt;a href="https://blog.hubspot.com/marketing/how-to-optimize-for-voice-search"&gt;36.4% of voice search result pages do&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://backlinko.com/voice-search-seo-study"&gt;75% of voice search results&lt;/a&gt; are pulled from any of the top 3 desktop ranking pages for any given query.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RGcGK68N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7fibp19ei0fwawirnx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RGcGK68N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7fibp19ei0fwawirnx0.png" alt="Image description" width="552" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Content Marketing Stats&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Content is king in marketing, and that’s just as true in SEO as anywhere else. Your website is highly unlikely to rank for the keywords you want without high-quality content that’s well-written, well-researched, and provides value to your audience.&lt;/p&gt;

&lt;p&gt;These statistics will help you understand the relationship between SEO and content marketing, and the thinking that goes into a content marketing strategy.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simply updating the existing content with fresh images and written content has the potential to increase your &lt;a href="https://www.safaridigital.com.au/blog/seo-statistics-2019/"&gt;organic traffic by 111.3%&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;About &lt;a href="https://www.smartinsights.com/search-engine-optimisation-seo/seo-strategy/content-creation-still-effective-seo-tactic-also-difficult/"&gt;57% of SEO experts&lt;/a&gt; agree that content creation and marketing are the best way to drive results &lt;/li&gt;
&lt;li&gt;Around &lt;a href="https://www.codeinwp.com/blog/seo-stats/"&gt;65% of searchers&lt;/a&gt; unveil that relevance is the most important factor to curate a successful SEO campaign &lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.emarketer.com/Article/Driving-Engagement-B2B-Marketers-Put-Premium-on-Content/1009790"&gt;60% of marketers are publishing content once a day&lt;/a&gt;. Despite knowing the power of content marketing, only 60% of marketers are posting content once or more every week &lt;/li&gt;
&lt;li&gt;The average 1st-page result in &lt;a href="https://www.searchenginejournal.com/revisiting-word-count/316335/#:~:text=The%20average%20Google%20first%20page,%2C%20authority%2C%20and%20search%20intent."&gt;Google contains around 1,890 words of content&lt;/a&gt;. This means detailed articles and landing pages have the potential to establish you as an industry leader &lt;/li&gt;
&lt;li&gt;Content creation can also improve &lt;a href="http://www.techclient.com/blogging-statistics/"&gt;indexation rates by more than 434%&lt;/a&gt;. Google favors regularly updated websites: the latest SEO stats show that publishing blog content regularly results in 434% higher indexation rates than leaving a site static&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nsramY0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dlco3x3a4tfzy704emyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nsramY0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dlco3x3a4tfzy704emyv.png" alt="Image description" width="594" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Technical SEO Statistics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://prerender.io/how-to-conduct-a-technical-seo-audit/"&gt;Technical SEO&lt;/a&gt; is the process of optimizing a website from a performance standpoint. You need a website to be techically well-optimized for the same reason you need an engine for your car to run.&lt;/p&gt;

&lt;p&gt;Technical SEO gets highly complicated at the granular level, but even a basic understanding of what makes a website run well can help you make your website significantly better. It’s also useful for understanding what search users look for from a user experience and technical standpoint.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Mobile users are more likely to purchase from businesses that offer &lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/local-search-mobile-search-micro-moments/"&gt;custom features for their location&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;The chances of a &lt;a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-new-industry-benchmarks/"&gt;bounce increase&lt;/a&gt; as the &lt;a href="https://prerender.io/how-to-conduct-a-technical-seo-audit/"&gt;page load time grows&lt;/a&gt; from one to three seconds, and go up by around 90% if the load time reaches five seconds&lt;/li&gt;
&lt;li&gt;Your &lt;a href="https://www.portent.com/blog/analytics/research-site-speed-hurting-everyones-revenue.htm#:~:text=The%20first%205%20seconds%20of,(between%20seconds%200%2D5)"&gt;website conversion rates&lt;/a&gt; fall by an average of 4.42% with each additional second of load time&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://go.skimresources.com/?id=143429X1608040&amp;amp;isjs=1&amp;amp;jv=15.2.1-stackpath&amp;amp;sref=https%3A%2F%2Ftime.com%2F3858309%2Fattention-spans-goldfish%2F&amp;amp;url=http%3A%2F%2Fadvertising.microsoft.com%2Fen%2Fcl%2F31966%2Fhow-does-digital-affect-canadian-attention-spans&amp;amp;xs=1&amp;amp;xtz=300&amp;amp;xuuid=9e3f7cc3eedbe98fb8f2fb54a750ff3c&amp;amp;abp=1&amp;amp;xjsf=other_click__auxclick%20%5B2%5D"&gt;average attention span of a human is 8 seconds and dropping&lt;/a&gt;. Attention spans have decreased from 12 seconds in 2000 to 8 seconds in 2018. This means that you now have even less time to capture someone’s attention with your website or content&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://www.portent.com/blog/analytics/research-site-speed-hurting-everyones-revenue.htm#:~:text=The%20first%205%20seconds%20of,(between%20seconds%200%2D5)"&gt;first 5 seconds of page-load time&lt;/a&gt; have a massive effect on your conversion rates. These are times when people are making up their minds whether to stay on the site or not. If you take too long, they’ll leave without seeing your content&lt;/li&gt;
&lt;li&gt;In a recent survey, about &lt;a href="https://unbounce.com/page-speed-report/"&gt;48% of website owners&lt;/a&gt; said they would be willing to drop animation and video for faster load times on their sites&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;SEO Is Fundamental For Success&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;SEO is a multi-faceted discipline that requires analytical skills as well as creative thinking to be successful. It begins with anticipating how your users expect to find you on Google, and meeting that expectation with content that answers their queries while providing a good user experience.&lt;/p&gt;

&lt;p&gt;Your website is the face of your business on the internet, so it’s important to make sure that it represents who you are and what you’re capable of. &lt;a href="https://dashboard.prerender.io/signup"&gt;Sign up for Prerender&lt;/a&gt;, and make your website the best it can possibly be.&lt;/p&gt;

</description>
      <category>seo</category>
      <category>video</category>
      <category>linkbuilding</category>
      <category>statistics</category>
    </item>
    <item>
<title>Step by Step Guide to Scrape Google Using Python &amp; Scrapy</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Thu, 23 Dec 2021 20:28:54 +0000</pubDate>
      <link>https://dev.to/zoltan/step-by-step-guide-to-scrape-google-using-python-scrapy-5hj1</link>
      <guid>https://dev.to/zoltan/step-by-step-guide-to-scrape-google-using-python-scrapy-5hj1</guid>
      <description>&lt;p&gt;Collecting Customer Feedback Data to Inform Your Marketing&lt;br&gt;
In the modern shopping experience, it is common for consumers to look for product reviews before deciding on a purchase.&lt;/p&gt;

&lt;p&gt;With this in mind, a powerful application for a Google SERPs scraper is to collect reviews and customer feedback from your competitor’s products to understand what’s working and what’s not working for them.&lt;/p&gt;

&lt;p&gt;It can be to improve your product, find a way to differentiate yourself from the competition, or to know which features or experiences to highlight in your marketing.&lt;/p&gt;

&lt;p&gt;Keep this in mind because we’ll be building our scraper around this issue exactly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.scraperapi.com/blog/scrape-data-google-search-results/"&gt;https://www.scraperapi.com/blog/scrape-data-google-search-results/&lt;/a&gt; &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Technical SEO Advice from an Industry Veteran</title>
      <dc:creator>SaaS.Group</dc:creator>
      <pubDate>Wed, 01 Dec 2021 03:17:32 +0000</pubDate>
      <link>https://dev.to/zoltan/technical-seo-advice-from-an-industry-veteran-jf3</link>
      <guid>https://dev.to/zoltan/technical-seo-advice-from-an-industry-veteran-jf3</guid>
      <description>&lt;p&gt;It’s not that often that you meet experienced marketers who are nice people and also good at their jobs at the same time.&lt;/p&gt;

&lt;p&gt;Dave Davies is an SEO veteran we featured in our &lt;a href="https://prerender.io/technical-seo-experts-to-follow/"&gt;25 Technical SEO Experts on Twitter roundup&lt;/a&gt; who has been in the industry for longer than almost anyone. Davies has been writing about SEO topics as a contributor to Search Engine Journal and Search Engine Watch for over a decade. He is the founder of &lt;a href="http://www.beanstalk.com/"&gt;Beanstalk Marketing&lt;/a&gt; and is currently the Lead SEO at &lt;a href="https://wandb.ai/site"&gt;Weights &amp;amp; Biases&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Apart from being a skilled SEO professional, Davies is also knowledgeable about web development and machine learning. As such, he has a more intimate understanding of the relationship between internet users and search engines than nearly anyone else in the field today.&lt;/p&gt;

&lt;p&gt;Davies isn’t just an SEO expert with technical chops either – he loves sharing his knowledge and using his experience to make the industry better for everyone. That coupled with his affable personality and sense of humor make him widely respected in the SEO world.&lt;/p&gt;

&lt;p&gt;We sat down with Davies to ask him about technical SEO, the relationship between Google and smaller brands, and what he thinks the next core algorithm update might have in store. Here’s what he had to say.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F4-SPI_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://ucarecdn.com/f2f0624b-3ac1-441c-b070-417805b85315/" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F4-SPI_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://ucarecdn.com/f2f0624b-3ac1-441c-b070-417805b85315/" alt="Screen Shot 2021-11-30 at 10.02.05 PM.png" width="880" height="659"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I. Google’s official stance is that &lt;a href="https://prerender.io/common-javascript-seo-problems/"&gt;Googlebot can crawl and index Javascript&lt;/a&gt; without any issues. The available studies out there show that although technically true, it takes them longer and uses more resources – meaning Javascript SPAs exhaust their crawl budget quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;You’ve been in the SEO industry longer than almost anyone. What is your opinion on this?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;They are wrong.&lt;/p&gt;

&lt;p&gt;I am right now working for a company that has a SPA website that uses prerendering and let me tell you, whenever the slightest thing goes wrong I see it very clearly in the rankings and the caches.&lt;/p&gt;

&lt;p&gt;I noticed just a couple of months ago a &lt;a href="https://twitter.com/beanstalkim/status/1425856004542042112"&gt;hiccup at Google with prerendering&lt;/a&gt;, which was followed pretty closely with a &lt;a href="https://twitter.com/beanstalkim/status/1427298877737209857"&gt;lag in the coverage reports&lt;/a&gt; and a &lt;a href="https://www.seroundtable.com/google-opens-indexing-bugs-reporting-tool-31943.html"&gt;form to submit indexing bugs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In short, I do think they’re working on it – but they’re still a ways off from this being true, and I’m not sure that the solution will ever be &lt;a href="https://prerender.io/crawl-budget-seo/"&gt;crawling&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;II. In recent years it’s become harder and harder for small businesses and startups to get visibility on Google SERPs because of algorithm changes that favor established brands that already have an audience and a web presence. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What can Google do to better support smaller businesses and startups and be their advocates?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While I understand the context of the question and sentiment, when I really think about it I’m not sure that’s true.&lt;/p&gt;

&lt;p&gt;Yes, when we’re fighting battles with national brands on their turf (national SERPs) it does often come to this, but Google is giving local businesses a lot of new tools and visibility options. The national brands can play there, as applicable – but it’s a lot harder for them to stand out and they don’t seem as favored by traditional metrics.&lt;/p&gt;

&lt;p&gt;So if small businesses focus on local markets, which many do, they have serious advantages if they know how to take them. For smaller businesses tackling national markets against sites like Amazon and Walmart, it is true they’ll be fighting an uphill battle. &lt;/p&gt;

&lt;p&gt;They need to find a sub-niche to start, where keywords are easier and start there. In that context, not a lot has changed over the years.&lt;/p&gt;

&lt;p&gt;III. Many SEO professionals make the mistake of making the Google gods happy at the expense of user experience. &lt;/p&gt;

&lt;p&gt;This is a fundamentally flawed approach because Google’s mission statement focuses on the user – to provide the user with the best possible result for a given query.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How do we solve for the user instead? How do we make that user-first mentality the conventional wisdom in SEO?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I have a very short answer to this question because I think we often make it more difficult than it has to be.&lt;/p&gt;

&lt;p&gt;Create the content the user wants. Deliver it in the format they want it in. And make sure Google understands that you’ve done that.&lt;/p&gt;

&lt;p&gt;To expand a touch:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the content the user wants&lt;/strong&gt; – Think about the user as the person entering the query, not your customer. Think about all the things a person entering that query might be looking for and deliver as many as you can while keeping the content clean. With that, you maximize the probability that you will satisfy a user, and that’s what Google wants you to do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deliver it in the format they want it in&lt;/strong&gt; – If they want a video, give them a video. They all want it fast. They all want it secure. They all want to be able to access it on any device from any location. Give people what they want, and you’ll be ahead of the next rule Google throws at you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And make sure Google understands that you’ve done that&lt;/strong&gt; – Make sure you link between your pages logically, add schema where applicable, etc. You’ve done the work for the user, do a bit more to make sure Google understands it, and you’ll be well on your way.&lt;/p&gt;
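To make the schema point concrete: structured data is most often added as a JSON-LD object embedded in a script tag of type application/ld+json. A minimal sketch of an Article object, built here with Python's json module — the field values are illustrative, not taken from any real page.

```python
import json

# Illustrative metadata only -- swap in your page's real values.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Technical SEO Advice from an Industry Veteran",
    "author": {"@type": "Person", "name": "Dave Davies"},
    "datePublished": "2021-12-01",
}

# The serialized object goes in the page head inside a script tag
# with type="application/ld+json" so crawlers can read it.
json_ld = json.dumps(article_schema, indent=2)
print(json_ld)
```

Google's Rich Results Test will tell you whether the markup you ship this way is actually being picked up.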

&lt;p&gt;IV. Even if we take Google’s word for it that their web crawler can crawl and render Javascript, there’s no guarantee that websites made using Javascript frameworks will be well-optimized for both users and search engines.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is the single most important thing that webmasters and technical SEO experts can do to make sure their Javascript web applications are well-optimized for search?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Monitor. Monitor. Monitor.&lt;/p&gt;

&lt;p&gt;Set up alerts on key pages to run daily and alert you to an unexpected drop.&lt;/p&gt;

&lt;p&gt;Manually check pages not just with a crawler, but inspect the cache and inspect the code produced by testing your URL in Google Search Console – see how it renders. Check a variety of pages and page types. Just because one part of the page is fine, doesn’t necessarily mean it all is.&lt;/p&gt;
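The daily-check advice above can be automated in a few lines: snapshot the rendered HTML of key pages, strip the markup, and alert when today's visible text diverges sharply from yesterday's. This is a minimal standard-library sketch; the helper names and the 0.9/0.5 thresholds are arbitrary choices for illustration, not a standard tool.

```python
import difflib
import re

def visible_text(html: str) -> str:
    """Crudely strip tags and collapse whitespace -- good enough for
    change detection, not for real HTML parsing."""
    return " ".join(re.sub(r"<[^>]+>", " ", html).split())

def content_similarity(old_html: str, new_html: str) -> float:
    """Ratio in [0, 1]; a sudden drop suggests rendering broke."""
    return difflib.SequenceMatcher(
        None, visible_text(old_html), visible_text(new_html)
    ).ratio()

yesterday = "<main><h1>Pricing</h1><p>Plans start at $10/mo.</p></main>"
today_ok = "<main><h1>Pricing</h1><p>Plans start at $12/mo.</p></main>"
today_broken = "<main></main>"  # e.g. a prerendering hiccup: content never rendered

print(content_similarity(yesterday, today_ok) > 0.9)      # True: minor edit
print(content_similarity(yesterday, today_broken) < 0.5)  # True: page went empty
```

Wired to a daily fetch of each key URL (and of Google's cached copy), a check like this surfaces exactly the kind of prerendering hiccup Davies describes spotting by hand.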

&lt;p&gt;Beyond that, make sure you have a good dev and good technology.&lt;/p&gt;

&lt;p&gt;V. You’ve been covering Google core algorithm updates on Search Engine Journal for years.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What do you anticipate the next core algorithm update to focus on, and why? What’s missing in the way Google ranks and categorizes web pages that isn’t there already?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This question really got me thinking. &lt;/p&gt;

&lt;p&gt;I think as far as core updates go, the next series will likely focus on infrastructure and keeping an increasingly complex arrangement of pieces working together.&lt;/p&gt;

&lt;p&gt;We’re watching &lt;a href="https://blog.google/products/search/introducing-mum/"&gt;MUM&lt;/a&gt; starting to get used in the wild, and we’ve heard about &lt;a href="https://blog.google/technology/ai/lamda/"&gt;LaMDA&lt;/a&gt;. We’ve read about KELM and the potential it has in creating a more reliable and “honest” picture of the world. &lt;/p&gt;

&lt;p&gt;What we don’t read a lot about (mainly because it’s boring and we don’t want to) is the technology behind it. &lt;a href="https://ai.googleblog.com/2021/05/kelm-integrating-knowledge-graphs-with.html"&gt;KELM&lt;/a&gt; would add verified facts to a picture of the world Google has created from a different system (MUM, for example). Great, but how do you get those two parts communicating and sharing information?&lt;/p&gt;

&lt;p&gt;This is, to me, the biggest of their challenges and why I suspect it will be the focus on their core updates for the foreseeable future.&lt;/p&gt;

&lt;p&gt;I’ve started reading some of the papers on some of the technologies behind the technologies we hear about. &lt;/p&gt;

&lt;p&gt;How &lt;a href="https://wandb.ai/onlineinference/byt5/reports/ByT5-What-It-Might-Mean-For-SEO--Vmlldzo4NzY1NzE"&gt;ByT5&lt;/a&gt; can improve understanding of content in a noisy environment (where noise may be something like misspelled words on social media) by moving away from tokens and working byte-to-byte, an approach that required a lot of work to overcome the hurdle of ballooning compute time.&lt;/p&gt;

&lt;p&gt;Or how &lt;a href="https://wandb.ai/onlineinference/flan/reports/Google-Bakes-A-FLAN-Improved-Zero-Shot-Learning-For-NLP--VmlldzoxMDE0MDEx"&gt;Google FLAN&lt;/a&gt; improves zero-shot NLP across domains (where domains are not sites, but rather tasks) so a system trained on classifying sentiment (for example) can be used to improve a translation model with little additional training required for the new task.&lt;/p&gt;

&lt;p&gt;This, in my mind, is what the core updates need to deal with.&lt;/p&gt;

&lt;p&gt;VI.  Many web developers lack even a basic understanding of SEO. That creates problems down the line when SEO problems become ignored or buried under legacy code which makes them harder to diagnose and fix. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;As an SEO veteran with web development credentials, what do you think we can do to bridge that gap? How can web developers make sure that an SEO infrastructure is in place from the moment they begin development? On the other side, what can marketing teams do to make the developers’ jobs easier?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I honestly believe it’s a two-way street. &lt;/p&gt;

&lt;p&gt;When I was cutting my teeth I used Dreamweaver 4 to put content into tables and upload them page-by-page. I learned a lot on my way, but the pace of change in dev and SEO meant I needed to choose a path and I was never a developer so I stuck with SEO.&lt;/p&gt;

&lt;p&gt;Yes I can still throw together a decent WordPress site, and probably edit the themes without breaking anything, but I wouldn’t consider myself even an intermediate dev. And it’s great that I know that. &lt;/p&gt;

&lt;p&gt;That history and ability though, I think makes me a bit better than some at understanding how to communicate with developers.&lt;/p&gt;

&lt;p&gt;I can’t count the number of times I’ve outlined my needs and how to solve a problem to a capable developer, only to have it bite me in the butt when they followed my instructions to unexpected results.&lt;/p&gt;

&lt;p&gt;Now I isolate what the problem is, describe and send screenshots of how I know and how I’ll know when it’s fixed to the developer, and while I might include a potential fix I found – I try to be clear that it is for illustrative purposes only.&lt;/p&gt;

&lt;p&gt;9 times out of 10, if you are working with a good developer they’ll be able to think of solutions you never would and often solve additional problems you might not have known you had.&lt;/p&gt;

&lt;p&gt;Respect them, respect their knowledge and they will respect yours.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>ux</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
