<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zofia Owsiak</title>
    <description>The latest articles on DEV Community by Zofia Owsiak (@koalabzium).</description>
    <link>https://dev.to/koalabzium</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F390647%2Fa93c2257-e557-4a4c-91f5-c552283c9f8d.jpeg</url>
      <title>DEV Community: Zofia Owsiak</title>
      <link>https://dev.to/koalabzium</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/koalabzium"/>
    <language>en</language>
    <item>
      <title>Websites Validation System</title>
      <dc:creator>Zofia Owsiak</dc:creator>
      <pubDate>Wed, 20 May 2020 13:11:20 +0000</pubDate>
      <link>https://dev.to/koalabzium/websites-validation-system-1no5</link>
      <guid>https://dev.to/koalabzium/websites-validation-system-1no5</guid>
      <description>&lt;h2&gt;
  
  
  Website Validation - my final project
&lt;/h2&gt;

&lt;p&gt;Nowadays, whenever we need something - facts, products, statistics, addresses - we immediately turn to one of the popular search engines and, after a moment, receive a list of potential answers. Research shows that most users choose one of the first five links on the results page. In a competitive market, it is therefore important for website owners that their site ranks highly in the results. This can be achieved by following a number of rules regarding search engine optimization. However, it is not that simple: maintaining the quality of a huge website can be too much of a challenge. For example, checking whether every link on a site works is a tedious task, and nearly impossible to do by hand for large services.&lt;/p&gt;

&lt;p&gt;That's why I decided to create a crawler that automatically analyzes a site and returns the most important tips for keeping it ranked highly. What distinguishes my solution from the tools currently on the market is that it is freely available and open source, and can be used as a step in a CI/CD pipeline.&lt;/p&gt;
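&lt;p&gt;As a rough sketch of the CI/CD use case: a pipeline step could fetch the analysis report and fail the build when issues are found. The endpoint path and the JSON field names below are assumptions for illustration, not the project's actual API.&lt;/p&gt;

```python
import json
import urllib.request

def fetch_report(endpoint):
    """Fetch the JSON analysis report from the validator's REST API.
    The endpoint URL shape is a hypothetical example."""
    with urllib.request.urlopen(endpoint) as resp:
        return json.load(resp)

def should_fail(report, max_issues=0):
    """Return True when the report lists more issues than the pipeline
    tolerates; "issues" is an assumed field name in the report."""
    return len(report.get("issues", [])) > max_issues
```

&lt;p&gt;A CI step would then call these two functions and exit with a non-zero status when the gate fails, which is what makes the build step go red.&lt;/p&gt;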

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;On the website, the user specifies a link and the parameters for the analysis. They then receive a list of issues found on their site, such as broken URLs, an incorrect header structure, a missing robots.txt, or security problems. Developers can also use the REST API directly, which returns all of the information as JSON.&lt;/p&gt;
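&lt;p&gt;One of the listed checks, header structure, can be sketched as a pure function: given the ordered sequence of heading levels found on a page, it flags any heading that skips a level (for example an h4 directly after an h2). This is a minimal illustration, not the project's actual validator.&lt;/p&gt;

```python
def heading_issues(levels):
    """Flag headings that skip a level, e.g. an h4 directly following
    an h2. `levels` is the ordered list of heading levels found on a
    page, e.g. [1, 2, 2, 3]."""
    issues = []
    previous = 0
    for level in levels:
        if previous != 0 and level > previous + 1:
            issues.append(f"h{level} follows h{previous}: skipped a level")
        previous = level
    return issues
```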

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BaDFlDrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/49esm1pojqnl7mud8btk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BaDFlDrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/49esm1pojqnl7mud8btk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;

&lt;p&gt;Here are the links to the GitHub repositories for the back end and front end of my project.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/koalabzium"&gt;
        koalabzium
      &lt;/a&gt; / &lt;a href="https://github.com/koalabzium/SEO_Validator"&gt;
        SEO_Validator
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Tool to crawl and validate websites. My bachelor thesis project
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/koalabzium"&gt;
        koalabzium
      &lt;/a&gt; / &lt;a href="https://github.com/koalabzium/SEO_Validator_Website"&gt;
        SEO_Validator_Website
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Simple front in React for my bachelor project. 
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;&lt;div class="plain"&gt;&lt;pre&gt;
&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/koalabzium/SEO_Validator_Website"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;p&gt;To build the project I used Python: BeautifulSoup for the crawler and validators, and Flask for the API. For the user interface I used React.js.&lt;/p&gt;
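&lt;p&gt;For a flavour of what the BeautifulSoup side looks like, here is a minimal sketch of the first step of such a crawl: collecting the absolute URLs of all anchors on a page. This is my own illustration under assumed inputs, not code from the repository.&lt;/p&gt;

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup  # pip install beautifulsoup4

def extract_links(html, base_url):
    """Collect the absolute URL of every anchor on a page, resolving
    relative hrefs against the page's base URL. These URLs would then
    be queued for crawling and checked for broken links."""
    soup = BeautifulSoup(html, "html.parser")
    return {urljoin(base_url, a["href"]) for a in soup.find_all("a", href=True)}
```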

&lt;h2&gt;
  
  
  Thoughts and feelings &amp;lt;3
&lt;/h2&gt;

&lt;p&gt;The whole project was really fun to make! While building it I was also writing my thesis about it, and I really enjoyed that more literary side of the work. I have to admit that the biggest challenge was gathering all the information about SEO and deciding which parameters to take into consideration; I consulted more than 60 official sources for my thesis. All in all, I am very happy with the result, and I am impatiently awaiting my thesis defense!&lt;/p&gt;

</description>
      <category>octograd2020</category>
    </item>
  </channel>
</rss>
