<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Kang</title>
    <description>The latest articles on DEV Community by Daniel Kang (@daniel2231).</description>
    <link>https://dev.to/daniel2231</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F294864%2Ff3822b60-930b-465f-9b44-8177049db9b9.JPG</url>
      <title>DEV Community: Daniel Kang</title>
      <link>https://dev.to/daniel2231</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/daniel2231"/>
    <language>en</language>
    <item>
      <title>Taking a peak at Django - Part 1</title>
      <dc:creator>Daniel Kang</dc:creator>
      <pubDate>Thu, 18 Jun 2020 02:47:03 +0000</pubDate>
      <link>https://dev.to/daniel2231/taking-a-peak-at-django-part-1-32l9</link>
      <guid>https://dev.to/daniel2231/taking-a-peak-at-django-part-1-32l9</guid>
      <description>&lt;p&gt;Many people might think that python is not for the web, but that's actually not the case. We can use Python to make a backend server with the help of Django and Flask. Today, we are going to look over the basic installation process of Django and a quick guide on writing your first Django app.&lt;/p&gt;

&lt;p&gt;For this project, I will use &lt;a href="https://ide.goorm.io/"&gt;goormIDE&lt;/a&gt; and share my container link down below so anyone can check out the code and see how it works.&lt;/p&gt;

&lt;p&gt;I'm going to set up my project using goormIDE's python stack (which already has python and Django installed), but for those who are using another IDE, feel free to see my previous tutorial on how to set up a Python development environment: &lt;a href="https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg"&gt;https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also use the Django stack directly (it sits to the right of the Python stack) and skip this part of the tutorial.&lt;/p&gt;

&lt;p&gt;For Django, all you have to do (assuming you have Python installed) is type the following command in your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; pip &lt;span class="nb"&gt;install &lt;/span&gt;Django
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you want to check whether you already have Django installed, use the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; django &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Let's begin!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setting up
&lt;/h3&gt;

&lt;p&gt;Assuming you already have Django installed, let's begin by creating a new Django project. Type this in your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;django-admin startproject myblog
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can change &lt;code&gt;myblog&lt;/code&gt; to anything, preferably your project name. For this tutorial, I'm naming my project 'myblog'.&lt;/p&gt;

&lt;p&gt;This will create a 'myblog' project directory containing the files you are going to need. The default project directory looks like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myblog/
    manage.py
    myblog/
        __init__.py
        settings.py
        urls.py
        asgi.py
        wsgi.py
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's go over the files.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;myblog/&lt;/code&gt; is the root folder for your project. Django doesn't care about this folder's name; you can rename it to whatever you want.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;manage.py&lt;/code&gt; will help you create applications, start your development web server, and work with databases.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;myblog/&lt;/code&gt;, the inner directory, is the actual Python package for your project. You will need to use its name when importing anything inside it (e.g. &lt;code&gt;myblog.urls&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;__init__.py&lt;/code&gt; will instruct Python to treat this directory as a Python package.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;settings.py&lt;/code&gt; contains your website's configuration. This is where you store the location of your static files and your database settings, and register any application that you create.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;urls.py&lt;/code&gt; is where you store the URL declarations for this Django project.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;asgi.py&lt;/code&gt; is an entry point for ASGI-compatible web servers to serve your project.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wsgi.py&lt;/code&gt; helps your Django application communicate with the web server.&lt;/p&gt;
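&lt;p&gt;For reference, the default &lt;code&gt;myblog/urls.py&lt;/code&gt; that &lt;code&gt;startproject&lt;/code&gt; generates looks roughly like this (a sketch from a Django 3.x-era project; your version may differ slightly):&lt;/p&gt;

```python
# myblog/urls.py - default URL configuration generated by startproject
from django.contrib import admin
from django.urls import path

urlpatterns = [
    # Routes /admin/ to Django's built-in admin site
    path('admin/', admin.site.urls),
]
```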

&lt;p&gt;Now, let's try running your server. Before we run anything, we must set up goormIDE so that we can run the project on the custom URL goorm provides us with. &lt;/p&gt;

&lt;p&gt;In your settings.py, edit this part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ALLOWED_HOSTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ALLOWED_HOSTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'*'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Go to &lt;strong&gt;Project&lt;/strong&gt; &amp;gt; &lt;strong&gt;Running URL and Port&lt;/strong&gt; and copy the URL provided. If the port is not set to 80, change it to 80.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--soF6HiH6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gvtfr75fcm8ndd4tmc0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--soF6HiH6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gvtfr75fcm8ndd4tmc0d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command in your terminal (&lt;code&gt;0:80&lt;/code&gt; is shorthand for binding to 0.0.0.0 on port 80).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py runserver 0:80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now try accessing the custom goorm link. If successful, you will see this screen:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pE1Kzrsi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ma61jv79xsuw1us35lca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pE1Kzrsi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ma61jv79xsuw1us35lca.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part 2 Coming soon...&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to crawl a website in just 5 minutes</title>
      <dc:creator>Daniel Kang</dc:creator>
      <pubDate>Mon, 15 Jun 2020 09:09:25 +0000</pubDate>
      <link>https://dev.to/daniel2231/how-to-crawl-a-website-in-just-5-minutes-4o23</link>
      <guid>https://dev.to/daniel2231/how-to-crawl-a-website-in-just-5-minutes-4o23</guid>
      <description>&lt;p&gt;Have you ever had that time when you want to just get the information that you want from a website automatically, without going inside the website? What we will make today is a crawler bot that will crawl, or get information for you automatically from any website that you want. This might be useful for those who wants to get updated information from the website, or get data from a website without all those copy and paste hasle.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we are going to use
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we are going to use Scrapy, an open-source Python framework for extracting data from websites. For more information on Scrapy, I recommend visiting their official website: &lt;a href="https://scrapy.org/"&gt;https://scrapy.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our development environment, I am going to use goormIDE so that I can share my container with others. I'll share the container link with the completed code down below. You are free to use any development environment you wish.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scrapy in 5 minutes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setting up the container
&lt;/h3&gt;

&lt;p&gt;For those who wish to use goormIDE, you can check out my previous guide on how to set up a Python development environment in goormIDE here: &lt;a href="https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg"&gt;https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Scrapy
&lt;/h3&gt;

&lt;p&gt;Now, let's install Scrapy. Run the following command in your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;scrapy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Making a new project
&lt;/h3&gt;

&lt;p&gt;Once Scrapy is installed, we will use it to create a new Scrapy project. In your parent directory (or wherever you want to put your project), run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;scrapy startproject scraper
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can change &lt;code&gt;scraper&lt;/code&gt; to any name that you want.&lt;/p&gt;

&lt;p&gt;The default project directory will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scraper/
   scrapy.cfg             # Deploy configuration file
   scraper/               # Project's Python module
      __init__.py         
      items.py            # Project items definition file
      middlewares.py      # Project middlewares file
      pipelines.py        # Project pipelines file
      settings.py         # Project settings file
      spiders/            # A directory for your spiders
         __init__.py
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, let's move into the spiders folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;scraper/spiders/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Making a new spider
&lt;/h3&gt;

&lt;p&gt;A spider is a class that contains instructions on how a certain site (or sites) will be scraped. &lt;br&gt;
"Spiders are the place where you define the custom behaviour for crawling and parsing pages for a particular site (or, in some cases, a group of sites)." (source: &lt;a href="https://docs.scrapy.org/en/latest/topics/spiders.html"&gt;https://docs.scrapy.org/en/latest/topics/spiders.html&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;We will now make a new spider that will crawl this site: &lt;a href="https://www.tiobe.com/tiobe-index/"&gt;https://www.tiobe.com/tiobe-index/&lt;/a&gt;&lt;br&gt;
What I want to get from this site is the ranking of each programming language and its ratings.&lt;/p&gt;

&lt;p&gt;Note: Before you crawl a site, make sure to check its policy on crawling. Some sites restrict what you can and cannot crawl. For clearer guidance, check the site's robots.txt file (you can find it at &lt;a href="http://www.sitename.com/robots.txt"&gt;www.sitename.com/robots.txt&lt;/a&gt;). Check this article out for more information: &lt;a href="https://www.cloudflare.com/learning/bots/what-is-robots.txt/"&gt;https://www.cloudflare.com/learning/bots/what-is-robots.txt/&lt;/a&gt;&lt;/p&gt;
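&lt;p&gt;As a quick illustration, Python's standard library can check a robots.txt policy for you. This sketch parses a made-up robots.txt body directly (no network request), so the rules and URLs shown are purely hypothetical:&lt;/p&gt;

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt body; real sites publish theirs at /robots.txt
rules = """
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The index page is not disallowed for any user agent
print(rp.can_fetch("*", "https://example.com/tiobe-index/"))   # True
# Anything under /private/ is disallowed
print(rp.can_fetch("*", "https://example.com/private/data"))   # False
```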

&lt;p&gt;What we will make is a crawler that will get the rank, name of programming language, and ratings from the website.&lt;/p&gt;

&lt;p&gt;To make a new spider, enter the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;scrapy genspider crawlbot www.tiobe.com/tiobe-index
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To break this down: &lt;code&gt;scrapy genspider&lt;/code&gt; creates a new spider for us, &lt;code&gt;crawlbot&lt;/code&gt; is the name of the spider, and the link tells the spider where to crawl.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finding out what to crawl
&lt;/h3&gt;

&lt;p&gt;Inside your spiders folder, you will see that a new spider has been created.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--souwFHnh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2sgoc1a4ibgtdm6wxu9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--souwFHnh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2sgoc1a4ibgtdm6wxu9e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we must figure out how to tell the scraper what to crawl.&lt;/p&gt;

&lt;p&gt;Go to the website: &lt;a href="https://www.tiobe.com/tiobe-index/"&gt;https://www.tiobe.com/tiobe-index/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right click and select inspect.&lt;/p&gt;

&lt;p&gt;Here, we can see that the information that we need is stored inside a table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tak5EIQy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e5qn2n99wd0r1a1ho4f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tak5EIQy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e5qn2n99wd0r1a1ho4f9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we open the table body tag, we can see that the information is stored inside each table row, as a form of table data.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KACbu_Ta--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i1yszmah6vsdiwb1yajo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KACbu_Ta--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i1yszmah6vsdiwb1yajo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two basic ways to tell our crawler what to crawl. The first way is to tell the crawler what class attribute the tag has. The second way is to get the Xpath of the tag that we would like to crawl. We are going to use the second method.&lt;/p&gt;

&lt;p&gt;To get a tag's XPath, simply right click the tag (in the developer console) and select Copy &amp;gt; Copy XPath.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--COvuY67j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o5r7vqdw91re3dd6bl4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--COvuY67j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o5r7vqdw91re3dd6bl4i.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I copy the XPath of the rank (the first cell of the first row), I get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//*[@id="top20"]/tbody/tr[1]/td[1]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, remove the number after &lt;code&gt;tr&lt;/code&gt; and add &lt;code&gt;/text()&lt;/code&gt; at the end of the XPath.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//*[@id="top20"]/tbody/tr/td[1]/text()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What I just did tells the crawler to get the text of the first table cell (&lt;code&gt;td&lt;/code&gt;) in every table row.&lt;/p&gt;
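&lt;p&gt;If you want to play with this rows-and-cells idea without Scrapy, the standard library can approximate it. Note that &lt;code&gt;xml.etree.ElementTree&lt;/code&gt; only supports a small subset of XPath (no &lt;code&gt;/text()&lt;/code&gt; step, for instance), and the table data below is made up for illustration:&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

# Build a tiny stand-in for the TIOBE table in code (hypothetical data)
table = ET.Element('table', id='top20')
tbody = ET.SubElement(table, 'tbody')
for cells in (['1', '', '', 'C', '17.19%'], ['2', '', '', 'Java', '16.10%']):
    tr = ET.SubElement(tbody, 'tr')
    for text in cells:
        ET.SubElement(tr, 'td').text = text

# Same idea as //*[@id="top20"]/tbody/tr/td[1]/text():
# take a fixed cell position from every row
ranks = [tr.find('td[1]').text for tr in tbody.findall('tr')]
names = [tr.find('td[4]').text for tr in tbody.findall('tr')]
print(ranks)  # ['1', '2']
print(names)  # ['C', 'Java']
```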

&lt;p&gt;Now, let's get the XPaths of the rank, programming language, and ratings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Rank: //*[@id="top20"]/tbody/tr/td[1]/text()
Programming language: //*[@id="top20"]/tbody/tr/td[4]/text()
Ratings: //*[@id="top20"]/tbody/tr/td[5]/text()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, open your crawlbot spider and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# -*- coding: utf-8 -*-
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;scrapy&lt;/span&gt;


&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CrawlbotSpider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scrapy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spider&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'crawlbot'&lt;/span&gt;
    &lt;span class="n"&gt;allowed_domains&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'https://www.tiobe.com/tiobe-index/'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;start_urls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'https://www.tiobe.com/tiobe-index/'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;rank&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;xpath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'//*[@id="top20"]/tbody/tr/td[1]/text()'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;xpath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'//*[@id="top20"]/tbody/tr/td[4]/text()'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;ratings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;xpath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'//*[@id="top20"]/tbody/tr/td[5]/text()'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rank&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ratings&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;scraped_info&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="s"&gt;'Rank'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                &lt;span class="s"&gt;'Name'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                &lt;span class="s"&gt;'Ratings'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;scraped_info&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's break this code up. First, we tell the crawler where to crawl by providing a URL. Then, we store each crawled field in a variable; don't forget to put &lt;code&gt;.extract()&lt;/code&gt; at the end!&lt;br&gt;
Lastly, we return each crawled item with &lt;code&gt;yield scraped_info&lt;/code&gt;.&lt;/p&gt;
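&lt;p&gt;The &lt;code&gt;zip&lt;/code&gt; and &lt;code&gt;.strip()&lt;/code&gt; part can be tried on its own. Extracted text often carries stray whitespace and newlines, which is why each field gets stripped; the raw values here are made up to mimic that:&lt;/p&gt;

```python
# Hypothetical raw values, as XPath extraction might return them
rank = ['1\n', '2\n']
name = [' C ', ' Java ']
ratings = ['17.19%\n', '16.10%\n']

# zip pairs the i-th rank with the i-th name and i-th rating
for item in zip(rank, name, ratings):
    scraped_info = {
        'Rank': item[0].strip(),
        'Name': item[1].strip(),
        'Ratings': item[2].strip(),
    }
    print(scraped_info)
```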

&lt;p&gt;We are now done!&lt;br&gt;
Let's try running the crawler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;scrapy crawl crawlbot
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fKn7T_U6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3d8a3gppdxwgphonf0ee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fKn7T_U6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3d8a3gppdxwgphonf0ee.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the crawler gets all the data that we need from the website. However, it is a bit inconvenient to see the data in our terminal. Let's tell our crawler to export the data in JSON format.&lt;/p&gt;

&lt;h3&gt;
  
  
  Export to JSON
&lt;/h3&gt;

&lt;p&gt;Go to settings.py in your project directory and add these two lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;FEED_FORMAT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"json"&lt;/span&gt;
&lt;span class="n"&gt;FEED_URI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"rank.json"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, run the crawler again and you will see that all the data you crawled shows up in rank.json.&lt;/p&gt;
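&lt;p&gt;Once the feed export has run, the file is ordinary JSON, so you can load it back with the standard library. The sample string below stands in for a (hypothetical) rank.json:&lt;/p&gt;

```python
import json

# Stand-in for the contents of rank.json after a crawl (hypothetical values)
sample = '[{"Rank": "1", "Name": "C", "Ratings": "17.19%"}]'

for row in json.loads(sample):
    print(f"{row['Rank']}. {row['Name']} ({row['Ratings']})")  # 1. C (17.19%)
```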

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FJPbmddx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j247w0tmbie52lj7durq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FJPbmddx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j247w0tmbie52lj7durq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! I'll leave my container link down below for those who want to see the code.&lt;/p&gt;

&lt;p&gt;Container link: &lt;a href="https://goor.me/KqvR6"&gt;https://goor.me/KqvR6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all folks! Happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to upgrade your Python version</title>
      <dc:creator>Daniel Kang</dc:creator>
      <pubDate>Tue, 09 Jun 2020 03:10:22 +0000</pubDate>
      <link>https://dev.to/daniel2231/how-to-upgrade-your-python-version-13mh</link>
      <guid>https://dev.to/daniel2231/how-to-upgrade-your-python-version-13mh</guid>
      <description>&lt;p&gt;On my latest post, I introduced a way to make a python IDE that you can access from anywhere. On this post, I will go over on how we can install the latest python 3.8.3 on our IDE. &lt;/p&gt;

&lt;p&gt;For those who haven't read the previous post, I recommend checking it out: &lt;a href="https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg"&gt;https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial I will use an IDE called goormIDE, but you can follow along in any IDE you wish. However, you must be using a Linux-based terminal to follow this tutorial. &lt;/p&gt;

&lt;p&gt;Personally, I would recommend using apt to install Python, as it is faster and less cumbersome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing the latest Python from Source
&lt;/h2&gt;

&lt;p&gt;First, update your package lists and install the packages required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;libffi-dev libsqlite3-dev wget libbz2-dev build-essential libssl-dev libreadline-dev zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to confirm the installation. Press y to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn24lrem458ic50j0j8j6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn24lrem458ic50j0j8j6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, download the Python version you want. In this tutorial, we are going to install Python 3.8.3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Extract the downloaded package using the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xf&lt;/span&gt; Python-3.8.3.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you are done extracting, switch to the Python source directory and run the following commands.&lt;/p&gt;

&lt;p&gt;Move to the Python source directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;Python-3.8.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./configure &lt;span class="nt"&gt;--enable-optimizations&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, let's start the Python build with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;make &lt;span class="nt"&gt;-j&lt;/span&gt; 8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, install the Python binaries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;make altinstall
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that's finished, check that Python was installed correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3.8 &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo7dho6qzfx2rn8tccmub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo7dho6qzfx2rn8tccmub.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yup, it's correctly installed. 👍 &lt;/p&gt;
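&lt;p&gt;You can also check the version from inside Python itself, which is handy in scripts that depend on 3.8+ features. This sketch runs on whichever interpreter invokes it:&lt;/p&gt;

```python
import sys

# sys.version_info is a named tuple: (major, minor, micro, releaselevel, serial)
print(sys.version_info[:2])

# Comparing against a tuple is the usual way to gate on a minimum version
if sys.version_info >= (3, 8):
    print("Python 3.8+ detected")
else:
    print("older interpreter:", sys.version.split()[0])
```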

&lt;h2&gt;
  
  
  Installing the latest Python using Apt
&lt;/h2&gt;

&lt;p&gt;First off, run the following commands to update your package list and install the prerequisites.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;software-properties-common
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add the deadsnakes PPA to your system's sources list. PPAs (Personal Package Archives) let users upload Ubuntu source packages to be built and published as apt repositories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;add-apt-repository ppa:deadsnakes/ppa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to press Enter. Press Enter to continue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Press &lt;span class="o"&gt;[&lt;/span&gt;ENTER] to &lt;span class="k"&gt;continue &lt;/span&gt;or Ctrl-c to cancel adding it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, install the Python version you want. In this tutorial, we will install Python 3.8.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;python3.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, verify that Python was installed correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3.8 &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Presto! We got it!&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7mlf3agux5q79hpnrljb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7mlf3agux5q79hpnrljb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A python IDE that you can use anywhere</title>
      <dc:creator>Daniel Kang</dc:creator>
      <pubDate>Wed, 03 Jun 2020 10:17:55 +0000</pubDate>
      <link>https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg</link>
      <guid>https://dev.to/daniel2231/a-python-ide-that-you-can-use-anywhere-26cg</guid>
      <description>&lt;p&gt;As a student who travels a lot (pub - campus - cafe - home), I tend to carry around my macbook air because its light (and I hate heavy things). However, macbook air isn't really a powerful machine and it often drives me mad when the code I'm trying to run is too heavy to run.&lt;/p&gt;

&lt;p&gt;I've been trying a lot of things to solve this problem, and recently I found that an online IDE might take the load off my laptop. &lt;/p&gt;

&lt;p&gt;So I did some quick research and found some products that might help.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;glitch&lt;/li&gt;
&lt;li&gt;goormIDE&lt;/li&gt;
&lt;li&gt;Repl.it&lt;/li&gt;
&lt;li&gt;Visual Studio Online&lt;/li&gt;
&lt;li&gt;paiza.io&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, all of these IDEs were OK, but the one that felt easiest to use and set up was goormIDE. So today, I would like to write a short tutorial on how to set up a Python development environment in goormIDE.&lt;/p&gt;

&lt;h2&gt;
  
  
  Registration
&lt;/h2&gt;

&lt;p&gt;First, we need to sign up for a free account.&lt;br&gt;
Link to goormIDE: &lt;a href="https://ide.goorm.io/" rel="noopener noreferrer"&gt;https://ide.goorm.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click Sign Up, and complete the signup process. (It doesn't take long)&lt;/p&gt;

&lt;p&gt;After registration, you will be redirected to your dashboard. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk9wef48a7a0u1hj6b4x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk9wef48a7a0u1hj6b4x7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Now, let's make a new container, where we will install everything we need to work with Python. Click the Create new container button (as this is a free-tier account, you can only make 5 containers).&lt;/p&gt;

&lt;p&gt;Write your container name, select a region, and select the Python stack. If we look at the bottom of the page, we can see that with a single click, everything you will need for Python development gets installed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr48n1klhwpg9t3j2nrpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr48n1klhwpg9t3j2nrpu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click the Create button at the top right of the page. After a moment, a popup will ask if you would like to run your container. Click Run container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr48n1klhwpg9t3j2nrpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr48n1klhwpg9t3j2nrpu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And presto! You have your own online Python IDE! Now, let's check that everything we need is installed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwlmzhrwwdnswjdehqoyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwlmzhrwwdnswjdehqoyz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm going to use TensorFlow, so I will only check whether Python and TensorFlow are installed.&lt;br&gt;
Let's first check that Python is installed correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
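&lt;p&gt;One hedged aside: on some stacks of this era, the bare &lt;code&gt;python&lt;/code&gt; command may still point at Python 2, so it's worth checking &lt;code&gt;python3&lt;/code&gt; explicitly as well:&lt;br&gt;
&lt;/p&gt;

```shell
# `python` may be Python 2 on older stacks; confirm the Python 3 interpreter too
python3 --version
```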



&lt;p&gt;Great!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F679zdistmbfucdpro093.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F679zdistmbfucdpro093.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's try TensorFlow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'import tensorflow as tf; print(tf.__version__)'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F679zdistmbfucdpro093.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F679zdistmbfucdpro093.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Good. Everything is in place.&lt;/p&gt;

&lt;p&gt;Now you can connect to your container from anywhere, which is great for me, since I don't have to buy a new laptop.&lt;/p&gt;

&lt;p&gt;That's all for now. I will post further tutorials as I get used to the new IDE I'll be using for a while.&lt;/p&gt;

</description>
      <category>python</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
