<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sharan Kumar Paratala Rajagopal </title>
    <description>The latest articles on DEV Community by Sharan Kumar Paratala Rajagopal  (@prsharankumar).</description>
    <link>https://dev.to/prsharankumar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F288897%2F745e3ce2-cba3-4370-a2f7-31155b9d9f2f.jpeg</url>
      <title>DEV Community: Sharan Kumar Paratala Rajagopal </title>
      <link>https://dev.to/prsharankumar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prsharankumar"/>
    <language>en</language>
    <item>
      <title>ADD_MONTHS and TRUNC for calculating dates in Oracle PL/SQL</title>
      <dc:creator>Sharan Kumar Paratala Rajagopal </dc:creator>
      <pubDate>Tue, 04 Aug 2020 16:52:36 +0000</pubDate>
      <link>https://dev.to/prsharankumar/addmonths-and-trunc-for-calculating-dates-in-oracle-pl-sql-2le0</link>
      <guid>https://dev.to/prsharankumar/addmonths-and-trunc-for-calculating-dates-in-oracle-pl-sql-2le0</guid>
      <description>&lt;p&gt;There are business use cases where dates have to be calculated based on certain cut off days. Especially this is important for account payables when invoices have to be paid based on certain criteria which involves cut off days configured for a specific vendor.&lt;/p&gt;

&lt;p&gt;This can be achieved with Oracle PL/SQL&amp;#39;s built-in date functions, which let us apply the cut-off-day criteria to get the expected payment dates for invoices.&lt;br&gt;
In this article, let&amp;#39;s look at a business use case of paying a vendor on the 10th of every month based on the invoice cut-off date.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
Invoice date = 7/25/2020, then the payment date has to be 8/10/2020&lt;br&gt;
Invoice date = 7/26/2020, then the payment date has to be 9/10/2020&lt;/p&gt;

&lt;p&gt;Here we are using cut-off days = 26 and months forward = 1. But a plain one-month offset will not resolve the issue when the invoice falls on the exact cut-off date.&lt;/p&gt;

&lt;p&gt;Hence we will use Oracle PL/SQL built-in date functions to implement this date logic. We will use a CASE expression to determine the current day of the month and then add months using ADD_MONTHS.&lt;br&gt;
TRUNC(date, 'MM') gives the 1st of the month, and adding 9 gives the 10th of the month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL Query:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1_WDb5Cf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0tlnc25vvy3gldcs0qu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1_WDb5Cf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0tlnc25vvy3gldcs0qu8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
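
The logic described above can be sketched as a query. This is a minimal sketch, assuming a hypothetical invoices table with an invoice_date column; the screenshot shows the author's actual query, and the cut-off day of 26 comes from the example.

```sql
SELECT invoice_date,
       CASE
         -- on or after the cut-off day: pay on the 10th two months out
         WHEN EXTRACT(DAY FROM invoice_date) >= 26
           THEN TRUNC(ADD_MONTHS(invoice_date, 2), 'MM') + 9
         -- before the cut-off day: pay on the 10th of the next month
         ELSE TRUNC(ADD_MONTHS(invoice_date, 1), 'MM') + 9
       END AS payment_date
FROM invoices;
```

For 7/25/2020 this yields 8/10/2020, and for 7/26/2020 it yields 9/10/2020, matching the example above.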

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w1TG2GNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yuf54xdf51zng2ye6jmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w1TG2GNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yuf54xdf51zng2ye6jmn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>oracle</category>
      <category>sql</category>
      <category>plsql</category>
      <category>trunc</category>
    </item>
    <item>
      <title>Using SUBSTR and INSTR functions in ORACLE PLSQL</title>
      <dc:creator>Sharan Kumar Paratala Rajagopal </dc:creator>
      <pubDate>Thu, 16 Jul 2020 23:00:42 +0000</pubDate>
      <link>https://dev.to/prsharankumar/using-substr-and-instr-functions-in-oracle-plsql-4lgp</link>
      <guid>https://dev.to/prsharankumar/using-substr-and-instr-functions-in-oracle-plsql-4lgp</guid>
      <description>&lt;p&gt;For reporting purpose there might be multiple occasions where there will be requirement to select only part of a string before or after a specified delimiter is present. And most challenging part is to get the values as it requires some additional effort to find the part of the string itself.&lt;/p&gt;

&lt;p&gt;Here is a simple example of fetching a number field from an address string.&lt;br&gt;
Any value after (# should be fetched as line 2 (including the (# characters), and any value before the (# is line 1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PLSQL query is as below:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W6g-p4N_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6pqrxl3iunr8o1vwjmq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W6g-p4N_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6pqrxl3iunr8o1vwjmq4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
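
The idea can be sketched as follows; the table and column names (addresses, address) are hypothetical, and the screenshot shows the author's actual query:

```sql
SELECT SUBSTR(address, 1, INSTR(address, '(#') - 1) AS line1,  -- text before (#
       SUBSTR(address, INSTR(address, '(#'))        AS line2   -- (# and everything after it
FROM addresses;
```

INSTR returns the position of the first occurrence of (#, and SUBSTR cuts the string on either side of that position.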

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T08NvEdO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/t0vc6qyw503kgd85pkgu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T08NvEdO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/t0vc6qyw503kgd85pkgu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the same example should work with column 2 returning NULL when the matching string is not present in the input string.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cieybxjw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kml2knkvq5c1kelgo1w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cieybxjw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kml2knkvq5c1kelgo1w1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B9lbSTVr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mi4qy15qdfhbsqel6kio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B9lbSTVr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mi4qy15qdfhbsqel6kio.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
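
One way to get this behavior is a CASE expression around the same SUBSTR/INSTR calls. This is a sketch with hypothetical names; when the delimiter is absent, INSTR returns 0, so we return the whole string as line 1 and NULL as line 2:

```sql
SELECT CASE
         WHEN INSTR(address, '(#') = 0 THEN address
         ELSE SUBSTR(address, 1, INSTR(address, '(#') - 1)
       END AS line1,
       CASE
         WHEN INSTR(address, '(#') = 0 THEN NULL
         ELSE SUBSTR(address, INSTR(address, '(#'))
       END AS line2
FROM addresses;
```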

&lt;p&gt;&lt;strong&gt;QUERY:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iEzODCXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ruap0pvza5607cs6cvwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iEzODCXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ruap0pvza5607cs6cvwm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0TNUMuo9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cl8y7m27m2p0uydw3sa3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0TNUMuo9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cl8y7m27m2p0uydw3sa3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;QUERY:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nZxsO_zX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nz46sfdz9eynowetbykv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nZxsO_zX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nz46sfdz9eynowetbykv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ClBbZeW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/761eozolh8w6dt1xhqdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ClBbZeW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/761eozolh8w6dt1xhqdi.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CONCLUSION:&lt;/strong&gt;&lt;br&gt;
You should now be able to get the string before and after a specific character using the SUBSTR and INSTR functions in Oracle.&lt;/p&gt;

</description>
      <category>substring</category>
      <category>plsql</category>
      <category>sql</category>
      <category>instr</category>
    </item>
    <item>
      <title>Transpose rows to columns in Oracle SQL using Oracle PIVOT clause</title>
      <dc:creator>Sharan Kumar Paratala Rajagopal </dc:creator>
      <pubDate>Sun, 12 Jul 2020 22:00:13 +0000</pubDate>
      <link>https://dev.to/prsharankumar/transpose-rows-to-columns-in-oracle-sql-using-oracle-pivot-clause-155b</link>
      <guid>https://dev.to/prsharankumar/transpose-rows-to-columns-in-oracle-sql-using-oracle-pivot-clause-155b</guid>
      <description>&lt;p&gt;In Oracle 11g &lt;em&gt;PIVOT clause&lt;/em&gt; helps to convert the data from row to column. Below are the examples to convert two column table and three column table result sets to cross tab format.&lt;/p&gt;

&lt;p&gt;This is very helpful for reporting, and for queries where data has to be viewed as a cross table. It is similar to Excel&amp;#39;s PIVOT functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two column PIVOT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;INPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe2w1m00eaxt9ww9r7ggm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe2w1m00eaxt9ww9r7ggm.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL QUERY:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;select * from (
  select t.cstore_number, t.attr_value,
         row_number() over (partition by cstore_number order by attr_value) rn
  from STORE_ATTR t)
pivot (
  min(attr_value)
  for rn in (1 as DEALERCODE1, 2 as DEALERCODE2, 3 as DEALERCODE3, 4 as DEALERCODE4, 5 as DEALERCODE5)
);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk1tm7t8aght68nep775u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk1tm7t8aght68nep775u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three column pivot:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Input:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2v6f3rv17ypn09kqgjy5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2v6f3rv17ypn09kqgjy5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL query:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxsfszhopdncngler2w79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxsfszhopdncngler2w79.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
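
A three-column pivot follows the same shape; the sketch below uses hypothetical names (a store_attr table with store_number, attr_name, and attr_value columns, and sample attr_name values), while the screenshot shows the author's actual query. The column that is neither aggregated nor pivoted (store_number) becomes the implicit grouping key:

```sql
select *
from (select store_number, attr_name, attr_value from store_attr)
pivot (
  min(attr_value)
  for attr_name in ('DEALER_CODE' as dealer_code, 'REGION' as region, 'MANAGER' as manager)
);
```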

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzo2121wtnl15qiue1whh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzo2121wtnl15qiue1whh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
You should now be able to transpose rows to columns into cross-tab format using the Oracle &lt;strong&gt;PIVOT&lt;/strong&gt; clause.&lt;/p&gt;

</description>
      <category>transpose</category>
      <category>sql</category>
      <category>oracle</category>
      <category>pivot</category>
    </item>
    <item>
      <title>Web Scraping using Python</title>
      <dc:creator>Sharan Kumar Paratala Rajagopal </dc:creator>
      <pubDate>Tue, 17 Dec 2019 01:14:22 +0000</pubDate>
      <link>https://dev.to/prsharankumar/web-scraping-using-python-2ip6</link>
      <guid>https://dev.to/prsharankumar/web-scraping-using-python-2ip6</guid>
      <description>&lt;p&gt;Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may access the World Wide Web directly using the Hypertext Transfer Protocol, or through a web browser.&lt;/p&gt;

&lt;p&gt;Web scraping is a process of automating data extraction in an efficient and fast way. With the help of web scraping, you can extract data from any website onto your computer, no matter how large the data is. Moreover, websites may have data that you cannot simply copy and paste.&lt;/p&gt;

&lt;p&gt;All the work is carried out by a piece of code called a “scraper”. First, it sends a “GET” request to a specific website. Then, it parses the HTML document it receives.&lt;/p&gt;

&lt;p&gt;One reason Python is a preferred language for web scraping is that Scrapy and Beautiful Soup, two of the most widely used scraping frameworks, are based on Python. Beautiful Soup is a Python library designed for fast and highly efficient data extraction. It is an all-rounder and can handle most web-crawling processes smoothly.&lt;br&gt;
To extract data using web scraping with Python, you need to follow these basic steps:&lt;br&gt;
• Find the URL that you want to scrape.&lt;br&gt;
• Inspect the page.&lt;br&gt;
• Find the data you want to extract.&lt;br&gt;
• Write the code.&lt;br&gt;
• Run the code and extract the data.&lt;br&gt;
• Store the data in the required format.&lt;/p&gt;

&lt;p&gt;Let’s walk through an example.&lt;br&gt;
“The United States collects and analyzes demographic data from the U.S. population. The U.S. Census Bureau provides annual estimates of the population size of each U.S. state and region. Many important decisions are made using the estimated population dynamics, including investments in new infrastructure such as schools and hospitals. The census data and estimates are publicly available on the U.S. census website.”&lt;br&gt;
Steps to extract the weblinks from the HTML code:&lt;br&gt;
a.  Import all the required libraries as shown.&lt;br&gt;
b.  Set the url variable to &lt;a href="https://www.census.gov/programs-surveys/popest.html"&gt;https://www.census.gov/programs-surveys/popest.html&lt;/a&gt;.&lt;br&gt;
c.  Use requests.get to fetch the page from the url.&lt;br&gt;
d.  Use the BeautifulSoup package to parse the HTML code and get the content of the website.&lt;br&gt;
e.  soup.find_all("a") gives all the links on the website.&lt;br&gt;
f.  link.get("href") provides the target of each link on the website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wTzmsuHk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ji2lqbpsmtozifhr81oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wTzmsuHk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ji2lqbpsmtozifhr81oe.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
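
Steps b through f can be sketched as below. This is a minimal sketch, assuming the third-party requests and beautifulsoup4 packages are installed; it fetches the live page, so the printed links depend on the site at the time you run it (the screenshot shows the author's code):

```python
import requests
from bs4 import BeautifulSoup

# step b: the page to scrape
url = "https://www.census.gov/programs-surveys/popest.html"

# step c: fetch the page over HTTP
page = requests.get(url)

# step d: parse the HTML content
soup = BeautifulSoup(page.content, "html.parser")

# steps e and f: list every anchor tag and print its href target
for link in soup.find_all("a"):
    print(link.get("href"))
```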

&lt;p&gt;Let’s determine whether a link is a locator to another HTML page.&lt;br&gt;
The “href” attribute in the HTML code provides the link locator to another HTML page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4Tu1NFJT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9rcwfbdrmd1akpc34s0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4Tu1NFJT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9rcwfbdrmd1akpc34s0j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we will see how relative links are saved as absolute URIs in the output file.&lt;br&gt;
• Create a function unique_links with parameters tags and url.&lt;br&gt;
• Loop over all the links with an “href” attribute.&lt;br&gt;
• If the href is None, continue to the next link.&lt;br&gt;
• If the link ends with ‘/’ or ‘#’, remove those characters from the link.&lt;br&gt;
• Join the actual url (fetched in steps c and d) with the cleaned link to form an absolute URI.&lt;br&gt;
• Return the cleaned urls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BuI4zaAg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jg2o1bpg4xy6yus84cza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BuI4zaAg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jg2o1bpg4xy6yus84cza.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will make sure there are no duplicate links in the output.&lt;br&gt;
The set() function keeps only the unique urls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ytjf8mnz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/njw285w5io1zict3l7ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ytjf8mnz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/njw285w5io1zict3l7ja.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
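
The steps above can be sketched as a self-contained function; the function signature and sample href values are hypothetical, while the trailing-character cleanup, urljoin resolution, and set() de-duplication follow the steps listed:

```python
from urllib.parse import urljoin

def unique_links(hrefs, base_url):
    """Resolve raw href values against base_url and de-duplicate them.

    hrefs is a list of values as returned by link.get("href") in the
    article (possibly None); base_url is the page they were scraped from.
    """
    cleaned = set()  # set() keeps only the unique urls
    for href in hrefs:
        if href is None:          # skip anchors with no href attribute
            continue
        href = href.rstrip("/#")  # drop trailing '/' or '#'
        if not href:              # nothing left after cleaning
            continue
        cleaned.add(urljoin(base_url, href))  # relative link to absolute URI
    return cleaned

base = "https://www.census.gov/programs-surveys/popest.html"
print(unique_links(["/data/tables.html", "/data/tables.html/", None, "#"], base))
# {'https://www.census.gov/data/tables.html'}
```

The two spellings of the same path and the bare '#' collapse to a single absolute URI.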

&lt;p&gt;Write to a csv file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;filename = "output_unique_weblink.csv"
f = open(filename, "w")
header = "Output_Weblinks\n"
f.write(header)
# write all unique links from the website into the csv file
for link in cleaned_links:
    f.write(str(link) + '\n')
f.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To execute the Python code, go to the command prompt and run the saved script:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;python python_code.py
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zC8AwczH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/p1vgzbjjm59hk0fuiwow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zC8AwczH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/p1vgzbjjm59hk0fuiwow.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;br&gt;
Using Python libraries, we were able to scrape the website, extract the links, remove the duplicates, and write the output to a csv file.&lt;/p&gt;

</description>
      <category>python</category>
      <category>webscraping</category>
      <category>csv</category>
      <category>html</category>
    </item>
  </channel>
</rss>
