<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: rashi07-hub</title>
    <description>The latest articles on DEV Community by rashi07-hub (@rashi07hub).</description>
    <link>https://dev.to/rashi07hub</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F306038%2Fccede6f5-dc99-49fc-99ff-a97ddab9a416.jpg</url>
      <title>DEV Community: rashi07-hub</title>
      <link>https://dev.to/rashi07hub</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rashi07hub"/>
    <language>en</language>
    <item>
      <title>Web scraping part 4 (professionally)</title>
      <dc:creator>rashi07-hub</dc:creator>
      <pubDate>Fri, 10 Jan 2020 11:15:21 +0000</pubDate>
      <link>https://dev.to/rashi07hub/web-scrapping-part-4-professionally-1262</link>
      <guid>https://dev.to/rashi07hub/web-scrapping-part-4-professionally-1262</guid>
      <description>&lt;p&gt;Hello, dev masters how are you let welcome here with another section of web scrapping here I have e already complete 3 sections here and let's enjoy this 4th one.&lt;/p&gt;

&lt;p&gt;2. Scrapy framework&lt;/p&gt;

&lt;p&gt;Scrapy at a glance!&lt;/p&gt;

&lt;p&gt;Scrapy is a web scraping framework for Python. Scrapy is the best choice if you need to build a web crawler for large-scale scraping needs. Scrapy uses spiders, which are self-contained crawlers given a complete set of instructions. Scrapy makes it much easier to build and scale large crawling projects by letting developers reuse their code. Scrapy also provides an interactive shell, known as the “Scrapy shell”, that developers can use to test their assumptions about a website's behavior.&lt;/p&gt;

&lt;p&gt;First of all, you need to install the following packages, which I have summarized below.&lt;/p&gt;

&lt;p&gt;I would suggest you use a Linux platform instead of others, and I will also explain the Selenium WebDriver.&lt;/p&gt;

&lt;p&gt;Run these commands in your terminal.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Install Scrapy
sudo apt install python-scrapy

# Install pip
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install python-pip

# Install Selenium
sudo pip install selenium

# Install the Chrome WebDriver
sudo apt-get install chromium-chromedriver
sudo ln -s /usr/lib/chromium-browser/chromedriver /usr/bin/chromedriver
sudo apt-get install libxi6 libgconf-2-4
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now what you need to do is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new Scrapy project.&lt;/li&gt;
&lt;li&gt;Write a spider to crawl a site and extract data.&lt;/li&gt;
&lt;li&gt;Export the scraped data using the command line.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To create your first project, open the terminal, change to the directory where you want the project to live, and run the following command: &lt;code&gt;scrapy startproject project_name&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will create a directory which looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;project_name/
    scrapy.cfg            # deploy configuration file
    project_name/         # project's Python module; you'll import your code from here
        __init__.py
        items.py          # project items definition file
        middlewares.py    # project middlewares file
        pipelines.py      # project pipelines file
        settings.py       # project settings file
        spiders/          # a directory where you'll later put your spiders
            __init__.py
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Writing our first Spider&lt;/p&gt;

&lt;p&gt;All spiders in Scrapy are simply Python classes that Scrapy uses to get information from a website. Every spider must inherit from scrapy.Spider and define the initial requests to make: how to follow links in the pages, how to parse the downloaded pages, and how to extract data from them. If that isn't clear, just think of a spider as a class in which we define different functions, each used for parsing URLs.&lt;/p&gt;

&lt;p&gt;Now we will code our first spider. Save it in a file named test.py (or any name with a .py extension) under the project_name/spiders directory of your project:&lt;/p&gt;

&lt;p&gt;Thanks for reading; I hope you enjoyed it. I'll continue this series here. Good luck, everyone!&lt;/p&gt;

</description>
      <category>webscrapping</category>
      <category>java</category>
      <category>python</category>
    </item>
    <item>
      <title>Web scraping through Python (Part 3)</title>
      <dc:creator>rashi07-hub</dc:creator>
      <pubDate>Tue, 07 Jan 2020 05:32:22 +0000</pubDate>
      <link>https://dev.to/rashi07hub/web-scrapping-through-python-part-3-235l</link>
      <guid>https://dev.to/rashi07hub/web-scrapping-through-python-part-3-235l</guid>
      <description>&lt;p&gt;I have already given 2 small session this is 3rd one hope you enjoy it.&lt;/p&gt;

&lt;p&gt;You can scrape a website in Python either with the Beautiful Soup library or with the Scrapy framework. In this article we will learn to scrape websites using the Scrapy framework.&lt;/p&gt;

&lt;p&gt;Table of Contents&lt;/p&gt;

&lt;p&gt;Basic python&lt;br&gt;
Scrapy framework&lt;br&gt;
Practical example&lt;br&gt;
Unique Tips while programming&lt;br&gt;
Selenium webdriver&lt;br&gt;
Applications of web scraping&lt;br&gt;
Conclusions&lt;br&gt;
References&lt;/p&gt;

&lt;p&gt;In this session I'll discuss scraping with Python.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Python is a general-purpose, interpreted, high-level programming language, and the best thing about Python is that it is one of the easiest languages to learn, for beginners as well as professionals. It was created by Guido van Rossum and first released in 1991. Python can be used for many applications, such as web scraping, web and internet development, software development, science and numeric applications, and many others. Python source code is also available under the GNU General Public License (GPL). For this article, knowledge of these topics will be an advantage:&lt;/p&gt;

&lt;p&gt;For loops&lt;br&gt;
Conditional statements&lt;br&gt;
Functions&lt;br&gt;
Lists, tuples, dictionaries&lt;br&gt;
And some basics of strings, numbers and operators&lt;/p&gt;

&lt;p&gt;An example of each topic is given below:&lt;/p&gt;

&lt;p&gt;Example 1:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;country = ["india", "america", "japan", "germany", "france"]
for i in country:
    print(i)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Explanation: this will simply print all the elements of the list one by one.&lt;/p&gt;

&lt;p&gt;Example 2:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;a = 50
b = 55
if b &amp;gt; a:
    print("b is greater than a")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Explanation: as we can see, b is greater than a, so it will print "b is greater than a".&lt;/p&gt;

&lt;p&gt;Example 3:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def my_function(i):
    if i &amp;gt; 5:
        print("Hello from a function")

my_function(i=6)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Explanation: since i = 6 satisfies the if condition, the call will simply print "Hello from a function".&lt;/p&gt;

&lt;p&gt;Example 4:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;list1 = ["apple", "banana", "cherry"]
print(list1)

tuple1 = ("apple", "banana", "cherry")
print(tuple1)

dict1 = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964
}
print(dict1)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Explanation: this will simply print the elements of the list, the tuple and the dictionary.&lt;/p&gt;
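&lt;p&gt;To tie the topics above together, here is a small sketch that combines a for loop, a conditional, a function, and a list of dictionaries. The phone data and the affordable() helper are made up purely for illustration:&lt;/p&gt;

```python
# Hypothetical price data, purely for illustration.
phones = [
    {"model": "Phone A", "price": 300},
    {"model": "Phone B", "price": 150},
    {"model": "Phone C", "price": 220},
]


def affordable(items, budget):
    # Keep the models whose price does not exceed the budget.
    result = []
    for item in items:
        if budget >= item["price"]:
            result.append(item["model"])
    return result


print(affordable(phones, budget=250))  # prints ['Phone B', 'Phone C']
```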

&lt;p&gt;What do you guys think about &lt;a href="https://desklib.com/document/data-visualisation-with-tableau-hsgv"&gt;data visualisation&lt;/a&gt;? Tap to check.&lt;/p&gt;

</description>
      <category>scrapping</category>
      <category>css</category>
      <category>java</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Web scraping part 2 (professionally)</title>
      <dc:creator>rashi07-hub</dc:creator>
      <pubDate>Fri, 03 Jan 2020 09:13:24 +0000</pubDate>
      <link>https://dev.to/rashi07hub/web-scrapping-part-2-professionally-1gae</link>
      <guid>https://dev.to/rashi07hub/web-scrapping-part-2-professionally-1gae</guid>
      <description>&lt;p&gt;Yesterday I gave you a brief idea--------- let's make it continue.&lt;/p&gt;

&lt;p&gt;You can scrape a website in Python either with the Beautiful Soup library or with the Scrapy framework. In this article we will learn to scrape websites using the Scrapy framework.&lt;/p&gt;

&lt;p&gt;Here I will be dealing with these important topics below.&lt;/p&gt;

&lt;p&gt;Table of Contents&lt;/p&gt;

&lt;p&gt;1. Basic Python&lt;br&gt;
2. Scrapy framework&lt;br&gt;
3. Practical example&lt;br&gt;
4. Unique tips while programming&lt;br&gt;
5. Selenium WebDriver&lt;br&gt;
6. Applications of web scraping&lt;br&gt;
7. Conclusions&lt;br&gt;
8. References&lt;/p&gt;

&lt;p&gt;So, are you guys ready to deal with these interesting topics (web scraping, part 2)?&lt;br&gt;
I am very eager to share this course; the only thing I need is your motivation.&lt;br&gt;
Stay tuned for the third part.&lt;br&gt;
Up next: the basics of Python.&lt;/p&gt;

</description>
      <category>webscrapping</category>
      <category>java</category>
      <category>python</category>
    </item>
    <item>
      <title>How to do web scraping part-1 (professionally)</title>
      <dc:creator>rashi07-hub</dc:creator>
      <pubDate>Thu, 02 Jan 2020 09:42:47 +0000</pubDate>
      <link>https://dev.to/rashi07hub/how-to-do-web-scrapping-part-1-professionally-311o</link>
      <guid>https://dev.to/rashi07hub/how-to-do-web-scrapping-part-1-professionally-311o</guid>
      <description>&lt;p&gt;Consider how can you get the data of all mobile phones which&lt;br&gt;
are available on amazon and compare it to phones available on&lt;br&gt;
snapdeal so that you can buy a best phone for you.&lt;/p&gt;

&lt;p&gt;Well, here web scraping can help you a lot.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now we are going to cover all the topics related to web scraping.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note: you can scrape a website in Python either with the Beautiful Soup library or with the Scrapy framework. In this article we will learn to scrape websites using the Scrapy framework.&lt;/p&gt;

&lt;p&gt;If you want to access this free course, kindly give me your feedback. Thanks!&lt;/p&gt;

</description>
      <category>python</category>
      <category>java</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Welcome me, GitHub community</title>
      <dc:creator>rashi07-hub</dc:creator>
      <pubDate>Thu, 02 Jan 2020 05:48:18 +0000</pubDate>
      <link>https://dev.to/rashi07hub/welcome-me-git-hub-community-n3j</link>
      <guid>https://dev.to/rashi07hub/welcome-me-git-hub-community-n3j</guid>
      <description>&lt;p&gt;I really appreciate your effort here, how can we easily communicate and build a genuine community here. I am a programmer and hope I'll learn more skills here at  Dev community.&lt;br&gt;
Thanks,&lt;br&gt;
Dev.to&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
