<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AngelaMunyao</title>
    <description>The latest articles on DEV Community by AngelaMunyao (@kaveku).</description>
    <link>https://dev.to/kaveku</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F793463%2Fc9685048-0f83-48ea-957a-87e714253fb9.jpeg</url>
      <title>DEV Community: AngelaMunyao</title>
      <link>https://dev.to/kaveku</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kaveku"/>
    <language>en</language>
    <item>
      <title>A brief introduction to Quadsort</title>
      <dc:creator>AngelaMunyao</dc:creator>
      <pubDate>Wed, 03 May 2023 15:56:56 +0000</pubDate>
      <link>https://dev.to/kaveku/a-brief-introduction-to-quadsort-4aa6</link>
      <guid>https://dev.to/kaveku/a-brief-introduction-to-quadsort-4aa6</guid>
      <description>&lt;h2&gt;
  
  
  1. Description
&lt;/h2&gt;

&lt;p&gt;Quadsort can be described as a faster merge sort. Unlike merge sort, which compares 2 elements at a time while performing a sorting operation, Quadsort compares 4 elements at a time. In benchmarks it outperforms merge sort and Quicksort, which have long been considered among the best sorting algorithms.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Implementation
&lt;/h2&gt;

&lt;p&gt;Quadsort requires a successful implementation of the following methods:&lt;br&gt;
i) Quad-swap: the main building block of Quadsort.&lt;br&gt;
ii) Merging: a sorted block of 4 elements is merged with another block of 4 elements or, more generally, two sorted arrays are merged into one.&lt;br&gt;
iii) An order check: testing whether a block of 4 elements is already sorted, either in order or in reverse order.&lt;/p&gt;

&lt;p&gt;The whole array is segmented into chunks of 4 elements, and each chunk is sorted using the Quad-swap approach, then all chunks are merged together in sorted order.&lt;/p&gt;

&lt;p&gt;Quad-swap compares and sorts 4 array elements at a go, as can be seen in the Pseudocode below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rtiLcrP8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tz0yh0mx1vdn1eqqmvt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rtiLcrP8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tz0yh0mx1vdn1eqqmvt1.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
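Since the pseudocode above is embedded in an image, here is a minimal Python sketch of a 4-element quad-swap, written as a small sorting network. The function name and the exact comparison order are illustrative assumptions, not taken verbatim from the screenshot:

```python
def quad_swap(block):
    """Sort a list of exactly 4 elements with five fixed
    compare-and-swap steps (a classic 4-input sorting network).

    Illustrative sketch only; the screenshot's exact pseudocode
    may differ.
    """
    a, b, c, d = block
    if a > b:
        a, b = b, a   # sort the first pair
    if c > d:
        c, d = d, c   # sort the second pair
    if a > c:
        a, c = c, a   # smallest of the two pair-minima goes first
    if b > d:
        b, d = d, b   # largest of the two pair-maxima goes last
    if b > c:
        b, c = c, b   # order the two middle elements
    return [a, b, c, d]
```

Applying this to every 4-element chunk yields the sorted blocks that the merge step consumes.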

&lt;p&gt;With the simple Quad-swap above, if an array is already fully sorted, it writes a lot of data back and forth to swap unnecessarily. If an array is fully in reverse order, the algorithm will change the numbers 10, 9, 8, 7, 6, 5, 4 into 7, 8, 9, 10, 6, 5, 4.&lt;br&gt;
This reduces the degree of orderliness rather than increasing it. The problem can be solved by implementing a 'Quad-swap in-place', which checks for and handles reverse order. Reverse order may be handled using a simple reversal function, as seen below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pGPUSkpd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rg437eqpxg2sz8wr7pe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pGPUSkpd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rg437eqpxg2sz8wr7pe3.png" alt="Image description" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;
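The reversal routine in the screenshot can be sketched roughly as follows. The function name and the half-open index convention are my assumptions, not the screenshot's exact code:

```python
def handle_reverse_order(arr, lo, hi):
    """If arr[lo:hi] is strictly descending, reverse that slice in
    place and return True; otherwise leave the array untouched.

    Illustrative sketch of the reverse-order check and fix.
    """
    if all(arr[i] > arr[i + 1] for i in range(lo, hi - 1)):
        arr[lo:hi] = arr[lo:hi][::-1]
        return True
    return False
```

Running this check before a quad-swap turns the fully-reversed worst case into a single cheap reversal.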

&lt;p&gt;After sorting, the blocks need to be merged. If this is done by merging 4 blocks at once, this is known as Quad-merge or Ping-pong merge.&lt;/p&gt;

&lt;p&gt;Our merging approach makes use of Timsort’s galloping merge algorithm to merge two arrays. We create an empty swap array that is repeatedly merged with every new sorted chunk of 4 elements.&lt;br&gt;
As with the Quad-swap above, it is worth checking whether the four blocks are already in order; if they are, the merge operation is skipped. The in-place Quad-swap has already checked for reverse order, so there is no need to re-check it while performing the merge operation.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Other Merge Operations
&lt;/h2&gt;

&lt;p&gt;3.1. Parity Merge&lt;br&gt;
Given two arrays of equal length n, a parity merge performs n merge operations from the start of each array and n merge operations from the end of each array. One needs to ensure that the two arrays are of equal length.&lt;/p&gt;

&lt;p&gt;3.2. Hybrid Parity Merge&lt;br&gt;
When the two arrays are not of equal length, a hybrid parity merge is needed instead. It performs n parity-merge steps, where n is the length of the smaller array, and then switches to a traditional merge for the remainder.&lt;/p&gt;
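To make the parity idea concrete, here is a simplified Python sketch of a parity merge for two sorted arrays of equal length. Quadsort's real implementation is branchless C; the names below are mine:

```python
def parity_merge(left, right):
    """Merge two sorted lists of equal length n by writing n elements
    from the front and n elements from the back of the output in the
    same loop. Simplified sketch of the parity-merge idea.
    """
    n = len(left)
    assert len(right) == n, "parity merge requires equal-length inputs"
    out = [None] * (2 * n)
    lf = rf = 0       # front read positions
    lb = rb = n - 1   # back read positions
    for i in range(n):
        # Front pass: copy the smaller of the two heads.
        if left[lf] <= right[rf]:
            out[i] = left[lf]
            lf += 1
        else:
            out[i] = right[rf]
            rf += 1
        # Back pass: copy the larger of the two tails.
        j = 2 * n - 1 - i
        if left[lb] > right[rb]:
            out[j] = left[lb]
            lb -= 1
        else:
            out[j] = right[rb]
            rb -= 1
    return out
```

The front pass fills positions 0..n-1 and the back pass fills positions 2n-1..n, so the two pointer pairs advance independently, which is what enables the branchless, memory-parallel form described next.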

&lt;p&gt;Parity merge is essential for branchless optimisations, because it helps speed up the sorting of random data. Another advantage of parity merge is that two separate memory regions can be accessed in the same loop, which allows for memory-level parallelism. On most hardware, this increases runtime speed by roughly 2.5 times for random data.&lt;/p&gt;

&lt;p&gt;3.3. Branchless Parity Merge&lt;br&gt;
Branchless parity merge works well for random data but is slower at sorting ordered data. This is where the Quad galloping merge comes in: it makes use of the galloping idea introduced by Timsort.&lt;br&gt;
The galloping merge algorithm may be implemented as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MgjmalWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bw6t2k3bafzcz5bnwbta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MgjmalWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bw6t2k3bafzcz5bnwbta.png" alt="Image description" width="800" height="795"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When merging ABC and XYZ, for instance, it first checks whether B is smaller than or equal to X. If so, A and B are copied to swap. If not, it checks whether A is greater than Y. If so, X and Y are copied to swap.&lt;br&gt;
If neither check succeeds, it is known that the two remaining distributions begin with either A X or X A, which allows the execution of an optimal branchless merge.&lt;br&gt;
Overall, the Quad galloping merge performs well for both ordered and unordered data.&lt;br&gt;
Other merge operations you may also want to look at are the Tail merge and the Rotate merge.&lt;br&gt;
Let’s take a look at a complete Python implementation of the Quadsort algorithm. It calls the sorting and merging routines implemented above, and may appear as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GuFvMvX0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vef5awb0q3p3s42hr8pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GuFvMvX0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vef5awb0q3p3s42hr8pc.png" alt="Image description" width="800" height="931"&gt;&lt;/a&gt;&lt;/p&gt;
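Since the screenshot may be hard to read, the overall structure can also be sketched in a few lines of self-contained Python. This simplified version uses a plain two-way merge and `sorted()` in place of quad-swap, and omits the galloping, branchless, and in-place optimisations:

```python
def merge(a, b):
    """Standard two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def quadsort(arr):
    """Simplified Quadsort sketch: sort each 4-element chunk, then
    merge the sorted runs pairwise until a single run remains.
    """
    # sorted() stands in for the quad-swap sorting network here.
    runs = [sorted(arr[i:i + 4]) for i in range(0, len(arr), 4)]
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs), 2):
            if i + 1 < len(runs):
                merged.append(merge(runs[i], runs[i + 1]))
            else:
                merged.append(runs[i])  # odd run carries over
        runs = merged
    return runs[0] if runs else []
```

The structure (chunk, sort, pairwise merge) matches the description above; all of Quadsort's actual speed comes from the optimisations this sketch leaves out.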

&lt;h2&gt;
  
  
  4. Complexity analysis
&lt;/h2&gt;

&lt;p&gt;4.1. Time Complexity&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jA0dlpnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svdb4ga8brk6rx01yrlr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jA0dlpnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svdb4ga8brk6rx01yrlr.png" alt="Image description" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.2. Space Complexity&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8awKZ1uP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqdiituvs0gh5owxpabt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8awKZ1uP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqdiituvs0gh5owxpabt.png" alt="Image description" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Conclusion
&lt;/h2&gt;

&lt;p&gt;Quadsort performs better than merge sort and Quicksort in most runtime tests, except for generic data. Unlike Quicksort, its performance does not depend on picking a good pivot, hence the better performance for small arrays. For arrays exceeding the L1 cache, Quicksort performs better due to its ability to partition.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to create a web crawler with Beautiful Soup</title>
      <dc:creator>AngelaMunyao</dc:creator>
      <pubDate>Sat, 29 Apr 2023 09:23:37 +0000</pubDate>
      <link>https://dev.to/kaveku/how-to-create-a-web-crawler-with-beautiful-soup-4mpg</link>
      <guid>https://dev.to/kaveku/how-to-create-a-web-crawler-with-beautiful-soup-4mpg</guid>
      <description>&lt;p&gt;Web crawling is an essential technique used in data extraction, indexing, and web scraping. Web crawlers are automated programs that navigate through websites, extract data, and save it for further processing. Python is an excellent language for building web crawlers, and the Beautiful Soup library is one of the most popular tools for web scraping.&lt;/p&gt;

&lt;p&gt;In this article, we will show you how to create a web crawler using Python and Beautiful Soup. We will extract text data from a website and save it to a CSV file.&lt;/p&gt;

&lt;p&gt;First, we need to import the required libraries. We will use the requests library to send GET requests to the website and the Beautiful Soup library to parse the HTML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
import csv
from bs4 import BeautifulSoup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, we set the URL of the website we want to crawl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url = "https://www.businessdailyafrica.com/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use the requests library to send a GET request to the website and store the response&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = requests.get(url)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can create a Beautiful Soup object from the response HTML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;soup = BeautifulSoup(response.text, "html.parser")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use the find_all method to extract all anchor tags from the HTML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;links = soup.find_all("a")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to filter out links that do not belong to the website. We will create an empty list to store the internal links.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;internal_links = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will loop through all the links and append the internal links to the internal_links list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for link in links:
    href = link.get("href")
    if href and href.startswith("/bd"):
        href1 = 'https://www.businessdailyafrica.com'+href
        internal_links.append(href1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
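As a side note, the `startswith("/bd")` check ties the crawler to one URL layout. A more general test (an alternative sketch, not from the article) resolves each href against the base URL and keeps only links on the same host:

```python
from urllib.parse import urljoin, urlparse

def is_internal(base_url, href):
    """Return an absolute URL if href belongs to the same host as
    base_url, else None. Alternative to the prefix check above.
    """
    if not href:
        return None
    absolute = urljoin(base_url, href)  # resolves relative hrefs
    if urlparse(absolute).netloc == urlparse(base_url).netloc:
        return absolute
    return None
```

`is_internal(url, href)` could then replace the prefix check inside the loop above.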



&lt;p&gt;Next, we will visit each internal link and repeat the process. We will create a set to store the visited links and a list to store the text data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;visited_links = set()
p_array = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will open a CSV file to store the text data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with open('bdailyps.csv', mode='w') as csv_file:
    fieldnames = ['words']
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will loop through each internal link and send a GET request to the website. We will create a Beautiful Soup object from the response HTML and extract all the paragraphs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    for link in internal_links:
        if (len(visited_links) &amp;gt;= 100000 and len(p_array) &amp;gt;= 10000001):
            break
        if link not in visited_links:
            visited_links.add(link)
            response = requests.get(link)
            soup = BeautifulSoup(response.text, "html.parser")
            h2s = soup.find_all("h2")
            ps = soup.find_all("p")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will extract all the links from the HTML and append the internal links to the internal_links list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;links = soup.find_all("a")
for sublink in links:
    href = sublink.get("href")
    if href and href.startswith("/bd"):
        href1 = 'https://www.businessdailyafrica.com'+href
        internal_links.append(href1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will loop through all the paragraphs and extract the text. We will split the text into words and save each word to the p_array list and the CSV file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for p in ps:
    paragraph =  p.text.strip()
    for word in paragraph.split():
        p_array.append(word)
        writer.writerow({'words': word})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we will print the number of visited links.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(f'{len(visited_links)} links visited.')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A summary of the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
import csv
from bs4 import BeautifulSoup

# Set the URL of the website to crawl
url = "https://www.businessdailyafrica.com/"

# Send a GET request to the URL and store the response
response = requests.get(url)

# Create a BeautifulSoup object from the response HTML
soup = BeautifulSoup(response.text, "html.parser")

# Find all anchor tags in the HTML
links = soup.find_all("a")

# Filter out links that do not belong to the website
internal_links = []
p_array = []
for link in links:
    href = link.get("href")
    if href and href.startswith("/bd"):
        href1 = 'https://www.businessdailyafrica.com'+href
        internal_links.append(href1)

# Visit each internal link and repeat the process
visited_links = set()
with open('bdailyps.csv', mode='w', newline='') as csv_file:
    fieldnames = ['words']
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()

    for link in internal_links:
        if (len(visited_links) &amp;gt;= 100000 and len(p_array) &amp;gt;= 10000001):
            break
        if link not in visited_links:
            visited_links.add(link)
            response = requests.get(link)
            soup = BeautifulSoup(response.text, "html.parser")
            h2s = soup.find_all("h2")
            ps = soup.find_all("p")
            links = soup.find_all("a")
            for sublink in links:
                href = sublink.get("href")
                if href and href.startswith("/bd"):
                    href1 = 'https://www.businessdailyafrica.com'+href
                    internal_links.append(href1)

            # Extract each paragraph's text and write every word to the CSV
            for p in ps:
                print(link, p)
                paragraph =  p.text.strip()
                for word in paragraph.split():
                    p_array.append(word)
                    writer.writerow({'words': word})

# Print the number of visited links
print(f'{len(visited_links)} links visited.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! We have created a web crawler using Beautiful Soup.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Choosing the right Linux Distro for you.</title>
      <dc:creator>AngelaMunyao</dc:creator>
      <pubDate>Sat, 05 Feb 2022 07:06:19 +0000</pubDate>
      <link>https://dev.to/kaveku/choosing-the-right-linux-distro-for-you-4n3k</link>
      <guid>https://dev.to/kaveku/choosing-the-right-linux-distro-for-you-4n3k</guid>
<description>&lt;p&gt;The word ‘distro’ is short for distribution. Linux distributions come under many different names, such as Debian, Arch Linux, Kali Linux, Ubuntu, etc. As a potential Linux user, you may sometimes find it difficult to choose a distribution. Several questions need to be asked:&lt;br&gt;
• How do I use my computer?&lt;br&gt;
• How well do I know tech stuff?&lt;br&gt;
• Am I using this for business or personal use?&lt;br&gt;
• What hardware is installed on my desktop?&lt;br&gt;
With these questions come many considerations that need to be weighed while choosing a Linux distro, and they include:&lt;/p&gt;

&lt;h1&gt;
  
  
  1 Server versus Desktop.
&lt;/h1&gt;

&lt;p&gt;Stability is a major concern when it comes to servers, so you need to go for Linux distributions that are reliable and backed by strong client support. For servers, you also need to consider the distro's community support. Distros like Ubuntu have huge communities you can always turn to for help in case things break down. Another consideration for servers is ensuring that the distro has long release cycles. Some distros are updated more frequently than others, and there’s a disadvantage in running new software all the time: newly released software is not always the most stable and can cause things to break. You certainly do not want this for your server. You may want to consider distributions like Ubuntu, SolusOS, or OpenSUSE, as they do not update as frequently, yet are not outdated either.&lt;br&gt;
For a desktop environment, you need shorter update cycles to keep your apps on the latest versions. Community support is a consideration here too, but it is not as critical as it is for servers.&lt;/p&gt;

&lt;h1&gt;
  
  
  2 Beginner versus Advanced.
&lt;/h1&gt;

&lt;p&gt;Whether you are a beginner, an intermediate, or an advanced user, there is always a Linux distribution waiting for you; all you need is to make the best choice. For beginners, friendly distributions are Ubuntu, Linux Mint, and Elementary OS. For intermediate users, Solus, OpenSUSE, Fedora, and Debian would be perfect choices. Advanced users are well suited to choose between Arch Linux, Kali Linux, and Gentoo. As an advanced user, you may even build your own distro from scratch on top of the Linux kernel!&lt;/p&gt;

&lt;h1&gt;
  
  
  3 Hardware resources.
&lt;/h1&gt;

&lt;p&gt;When choosing a Linux distribution, it is important to consider the hardware specifications of your whole computer system. &lt;br&gt;
For instance, some Linux distributions, such as Fedora and Red Hat, do not ship anything that is not open-source, so if your computer's hardware lacks open-source driver support from the manufacturer, running distros like Fedora and Red Hat may conflict with your system.&lt;br&gt;
Moreover, if you are using an old machine, or a new one with limited resources, you need to go for lightweight distros such as Puppy Linux, Linux Lite, or Lubuntu; these will run on between 128 MB and 512 MB of RAM. For a system with between 512 MB and 1 GB of RAM, consider Bodhi Linux, Manjaro, or Linux Mint. For between 2 GB and 4 GB of RAM, consider Ubuntu, Fedora, Mageia, or Peppermint OS. If your computer has more than 4 GB of RAM, then use CentOS, Debian, or OpenSUSE.&lt;/p&gt;

&lt;h1&gt;
  
  
  4 Software repositories.
&lt;/h1&gt;

&lt;p&gt;Another consideration that needs to be made is ensuring that the distro you choose has the software that you need. If you intend to use your desktop for daily tasks, Ubuntu would be a great choice, as it has a large software repository and huge third-party support.&lt;/p&gt;

&lt;p&gt;In conclusion, each Linux distribution is unique and serves the purpose it was designed for. With so many Linux distributions available, there is no perfect one, but there is a most suitable one, highly dependent on your expectations, personal preferences, requirements, hardware, and expertise. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Ensuring Cross-browser Website Compatibility</title>
      <dc:creator>AngelaMunyao</dc:creator>
      <pubDate>Thu, 03 Feb 2022 06:22:40 +0000</pubDate>
      <link>https://dev.to/kaveku/ensuring-cross-browser-website-compatibility-4f6i</link>
      <guid>https://dev.to/kaveku/ensuring-cross-browser-website-compatibility-4f6i</guid>
      <description>&lt;p&gt;Introduction. &lt;/p&gt;

&lt;p&gt;When building websites, every developer wants their site to display correctly for all target users. &lt;br&gt;
Users make use of different browsers, such as Chrome, IE, Safari, Firefox, Opera, etc. &lt;br&gt;
Developers must therefore ensure site compatibility across all of these browsers. &lt;br&gt;
I will share some tips on how to enhance cross-browser compatibility: &lt;/p&gt;

&lt;h1&gt;
  
  
  1 Use less complicated coding.
&lt;/h1&gt;

&lt;p&gt;The more complicated the code is, the higher the chances of something going wrong. It is better to choose a simple layout over a complex one. For instance, for layout purposes, it is a better idea to go for “ul” and “li” elements instead of a “table”, because this produces fewer nested elements in the markup. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QopDQu6p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47xw0uz22fappmi5dzzy.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QopDQu6p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47xw0uz22fappmi5dzzy.PNG" alt="Image description" width="585" height="521"&gt;&lt;/a&gt;&lt;br&gt;
In the figure above, for a similar content display, the first block of code, which uses a “ul”, has fewer nested elements than the second, which uses a “table”. &lt;/p&gt;

&lt;h1&gt;
  
  
  2 Make sure to validate the code.
&lt;/h1&gt;

&lt;p&gt;While code validation is no guarantee that the code will work in all browsers, it does minimize issues along the way. A good HTML and CSS validation service is provided by &lt;a href="https://validator.w3.org/"&gt;https://validator.w3.org/&lt;/a&gt;. As a developer, you get to decide how often to validate your code, e.g., after every block or after a whole page. &lt;/p&gt;

&lt;h1&gt;
  
  
  3 Set a Doctype.
&lt;/h1&gt;

&lt;p&gt;A doctype, or document type declaration (DTD), informs the web browser of the markup language in which the current page is written; in this case, it lets the browser know that the document is an HTML document and which version of HTML it uses. This helps eliminate many browser-related rendering errors. &lt;br&gt;
In an HTML document, the doctype should be included at the very beginning, as in the figure below.   &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GxQipnTI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wpkude11wwun0gyu2h5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GxQipnTI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wpkude11wwun0gyu2h5z.png" alt="Image description" width="435" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  4 Keep CSS rewrite rules in mind.
&lt;/h1&gt;

&lt;p&gt;Every browser ships with its own default style rules, so using CSS normalizing rules ensures that your CSS renders consistently, setting a standard for how it will perform in different browsers.&lt;br&gt;&lt;br&gt;
A tool that can be used for this is ‘normalize.css’.&lt;br&gt;&lt;br&gt;
Normalize.css makes browsers render elements more consistently and in line with modern standards, and it precisely targets only the styles that need normalizing. Several companies and projects, including Twitter, TweetDeck, SoundCloud, the Guardian, Medium, GOV.UK, Bootstrap, and HTML5 Boilerplate, make use of normalize.css. &lt;/p&gt;

&lt;h1&gt;
  
  
  5 Use Conditional Comments.
&lt;/h1&gt;

&lt;p&gt;Conditional comments are used when targeting older browsers such as Internet Explorer; they help ensure that your code is compatible with these browsers and functions as expected.&lt;br&gt;&lt;br&gt;
A conditional comment wraps HTML markup in a conditional statement. If the statement evaluates to true, the enclosed HTML is revealed within the document. Because conditional comments are placed within HTML comments, the enclosed HTML also remains hidden from all browsers that don’t support conditional comments. &lt;br&gt;
Conditional comments make use of the operators below: &lt;/p&gt;

&lt;p&gt;Operator: Description &lt;br&gt;
IE: represents Internet Explorer; if a number value is also specified, it represents a version vector &lt;br&gt;
lt: less than &lt;br&gt;
lte: less than or equal to &lt;br&gt;
gt: greater than &lt;br&gt;
gte: greater than or equal to &lt;br&gt;
!: the NOT operator &lt;br&gt;
(): subexpression operator &lt;br&gt;
&amp;amp;: the AND operator &lt;br&gt;
|: the OR operator &lt;br&gt;
true: evaluates to true &lt;br&gt;
false: evaluates to false &lt;/p&gt;

&lt;p&gt;Conditional statements placed inside HTML appear as below: &lt;br&gt;
  &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kyhaXVp_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hm8azl9gtw0q7r37e2w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kyhaXVp_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hm8azl9gtw0q7r37e2w5.png" alt="Image description" width="769" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  6 Use Cross-Browser compatible libraries.
&lt;/h1&gt;

&lt;p&gt;Such libraries include Bootstrap, Foundation, and others. When building websites, it is a better choice to make use of libraries that were built with compatibility in mind, instead of assembling separate elements yourself, as this reduces the chance of errors. &lt;/p&gt;

&lt;h1&gt;
  
  
  7 Use Dedicated Stylesheets for Each Browser.
&lt;/h1&gt;

&lt;p&gt;With dedicated stylesheets, you also include a set of rules that tell each browser to load the stylesheet designed for it. &lt;/p&gt;

&lt;h1&gt;
  
  
  8 Include as many browsers as possible in the testing process
&lt;/h1&gt;

&lt;p&gt;While building websites, it is essential to test in as many browsers as possible. Testing should also cover older browsers such as Internet Explorer 6, Safari 3, and Opera 9.&lt;br&gt;&lt;br&gt;
Cross-browser testing tools will provide better results. A good example of these tools is Comparium, which allows not only cross-browser testing but also testing across multiple operating systems. Comparium will also allow for results comparison and help in finding differences between the browsers used in testing. It allows for both manual and live testing.&lt;br&gt;&lt;br&gt;
If using Comparium for testing, the following steps would be helpful: &lt;br&gt;
• Visit &lt;a href="https://comparium.app/"&gt;https://comparium.app/&lt;/a&gt;.&lt;br&gt;
• Get the URL of the site you need to test. &lt;br&gt;
• Select the browser you intend to test. You may also include the operating system you intend to test on, and even choose the screen width. &lt;br&gt;
• Click “Create Screenshots”. &lt;br&gt;
• Comparium will use your selected settings to display your site and make screenshots of it. &lt;br&gt;
• Additional steps you may perform on your own include deleting, refreshing, or finding differences from Comparium’s menu. To start all over, click “New configuration”.&lt;/p&gt;

&lt;p&gt;Conclusion. &lt;br&gt;
Users make use of different browsers on their devices; hence, developers must do everything possible to ensure that users of different browsers can access their content fully. Site content visibility in different browsers should be considered right from the ground up. The tips outlined above are a great way to ensure the cross-browser compatibility of websites and web applications. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Machine Learning model using Multiple Linear regression</title>
      <dc:creator>AngelaMunyao</dc:creator>
      <pubDate>Thu, 27 Jan 2022 09:52:06 +0000</pubDate>
      <link>https://dev.to/kaveku/building-a-machine-learning-model-using-multiple-linear-regression-19ke</link>
      <guid>https://dev.to/kaveku/building-a-machine-learning-model-using-multiple-linear-regression-19ke</guid>
<description>&lt;p&gt;I spent a couple of hours this morning putting together this article for all Machine Learning enthusiasts, and especially those at the beginner-to-intermediate level.&lt;/p&gt;

&lt;p&gt;One of the very necessary skills for ML engineers is understanding the concept of regression as it relates to data of all volumes, from small data sets to giant ones.&lt;/p&gt;

&lt;p&gt;This article covers a practical example of how to build a Machine Learning model using Multiple Linear Regression.&lt;/p&gt;

&lt;p&gt;For this exercise, make sure you have the Anaconda software installed, and from there, open a Jupyter notebook.&lt;br&gt;
Download the Combined Cycle Power Plant data set from the UCI Machine Learning Repository: &lt;a href="https://archive.ics.uci.edu/ml/datasets/combined+cycle+power+plant"&gt;https://archive.ics.uci.edu/ml/datasets/combined+cycle+power+plant&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Import the libraries below that we will be using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pylab as pl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable inline plotting with Matplotlib&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%matplotlib inline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Import the data: unzip the downloaded folder and make sure the data file is in the same directory as your notebook files:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4tUkHY0L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw6riln4x35348dkgwvp.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4tUkHY0L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw6riln4x35348dkgwvp.PNG" alt="Image description" width="880" height="556"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data_df=pd.read_excel("Folds5x2_pp.xlsx")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now view your data with the command below (by default, it displays the first 5 rows):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data_df.head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O1c0nO9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ba9636zz8kfvt5gy6kir.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O1c0nO9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ba9636zz8kfvt5gy6kir.PNG" alt="Image description" width="577" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To recap the variables defined above:&lt;br&gt;
    Ambient Temperature (AT) in the range 1.81°C - 37.11°C&lt;br&gt;
    Exhaust Vacuum (V) in the range 25.36 - 81.56 cm Hg&lt;br&gt;
    Ambient Pressure (AP) in the range 992.89 - 1033.30 millibar&lt;br&gt;
    Relative Humidity (RH) in the range 25.56% - 100.16%&lt;br&gt;
    Net hourly electrical energy output (PE) in the range 420.26 - 495.76 MW&lt;/p&gt;

&lt;p&gt;Your dependent variable is PE.&lt;/p&gt;
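&lt;p&gt;Under the hood, multiple linear regression models PE as a weighted sum of the four inputs: PE ≈ b0 + b1·AT + b2·V + b3·AP + b4·RH. The sketch below illustrates this form; the weights are made up purely for illustration, not the fitted values:&lt;/p&gt;

```python
import numpy as np

# Illustrative only: a multiple linear regression prediction is an
# intercept plus one weight per feature, multiplied by the feature values.
def predict_pe(row, intercept, coefs):
    """row: array of [AT, V, AP, RH]; coefs: one weight per feature."""
    return intercept + np.dot(coefs, row)

# Made-up weights applied to the first row of the data set:
print(predict_pe(np.array([14.96, 41.76, 1024.07, 73.17]),
                 intercept=450.0,
                 coefs=np.array([-2.0, -0.2, 0.06, -0.16])))
```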

&lt;p&gt;Let's define X and Y, X being the independent variables and Y being the dependent variable.&lt;/p&gt;

&lt;p&gt;To capture the independent variables, we use 'x=data_df.drop(['PE'], axis=1).values'. &lt;br&gt;
The drop function excludes the dependent variable PE, 'axis=1' drops it as a column, and '.values' extracts the remaining values as an array.&lt;/p&gt;

&lt;p&gt;To capture the dependent variable PE, we use 'y=data_df['PE'].values'&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x=data_df.drop(['PE'], axis=1).values
y=data_df['PE'].values
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm X and Y values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(x)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bxqt3yXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5dddhr02vbkkdpb8itg.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bxqt3yXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5dddhr02vbkkdpb8itg.PNG" alt="Image description" width="447" height="171"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(y)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Split the data set into training and test sets:&lt;/p&gt;

&lt;p&gt;We use the 'train_test_split' function from the scikit-learn library.&lt;br&gt;
First, import it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.model_selection import train_test_split
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Divide your data into x_train, x_test, y_train, and y_test.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x_train,x_test,y_train,y_test=train_test_split(x,y)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
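&lt;p&gt;A note on the split: by default, train_test_split holds out 25% of the rows for testing and shuffles them randomly, so the result varies between runs. Passing the optional test_size and random_state arguments (shown here with illustrative values, on a small synthetic stand-in for the plant data) makes the split explicit and repeatable:&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Small synthetic stand-in for the plant data: 40 rows, 4 features.
x = np.arange(160, dtype=float).reshape(40, 4)
y = np.arange(40, dtype=float)

# test_size=0.25 matches the default 75/25 split; random_state fixes the
# shuffle so the same rows land in the test set on every run.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)
print(x_train.shape, x_test.shape)  # (30, 4) (10, 4)
```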



&lt;p&gt;Voila! Your data is split into training and test sets.&lt;br&gt;
Next, train the model on the training set. We will use linear regression.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
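&lt;p&gt;Once fitted, the model exposes one coefficient per independent variable plus an intercept; these weights are exactly what multiple linear regression learns. A minimal sketch on synthetic data, where the true relationship (y = 3 + 2·x1 − x2) is known so the recovered weights are easy to verify:&lt;/p&gt;

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data with a known, noise-free linear relationship:
# y = 3 + 2*x1 - 1*x2
rng = np.random.default_rng(0)
x = rng.random((50, 2))
y = 3 + 2 * x[:, 0] - 1 * x[:, 1]

model = LinearRegression()
model.fit(x, y)

print(model.coef_)       # approximately [ 2. -1.]
print(model.intercept_)  # approximately 3.0
```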



&lt;p&gt;After training the model, predict the test set results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y_pred=model.predict(x_test)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's print the prediction results&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(y_pred)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s9NfEseU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imj02s0w2s0s5pbi9h2s.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s9NfEseU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imj02s0w2s0s5pbi9h2s.PNG" alt="Image description" width="778" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The predictions of PE above are generated for every row of the test set, each based on its corresponding set of independent variables in x_test.&lt;/p&gt;

&lt;p&gt;We can also predict PE for a single row of x values (AT, V, AP, RH), as shown below.&lt;br&gt;
This example uses the values from the first row of x.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model.predict([[14.96,41.76,1024.07,73.17]])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Xc3FHhQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/caucqbsxrne7li293hdt.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Xc3FHhQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/caucqbsxrne7li293hdt.PNG" alt="Image description" width="589" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check how accurate our model is.&lt;br&gt;
We need to import the 'r2_score' function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.metrics import r2_score
r2_score(y_test,y_pred)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
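&lt;p&gt;R² is one way to judge the model; mean absolute error and root mean squared error report the error in the same units as PE (MW), which can be easier to interpret. A small example with made-up actual and predicted values, just to show how the metrics are computed:&lt;/p&gt;

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Made-up actual vs. predicted PE values, for illustration only.
y_test = np.array([420.0, 450.0, 495.0])
y_pred = np.array([421.0, 448.0, 496.0])

mae = mean_absolute_error(y_test, y_pred)           # average absolute error
rmse = np.sqrt(mean_squared_error(y_test, y_pred))  # penalizes large errors more
print(mae, rmse)
```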



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l1i5qA7Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hph24dr5mr14q2ybq054.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l1i5qA7Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hph24dr5mr14q2ybq054.PNG" alt="Image description" width="556" height="55"&gt;&lt;/a&gt;&lt;br&gt;
The R² score of our model is about 0.92, meaning it explains roughly 92% of the variance in PE. :)&lt;/p&gt;

&lt;p&gt;Next, let's visualize the predicted results in a scatter plot.&lt;br&gt;
We will use matplotlib, which we imported earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use plt.figure to increase the figure size so the plot doesn't appear too small
plt.figure(figsize=(15, 10))
plt.scatter(y_test,y_pred)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.title('Actual PE Values vs. Predicted')
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FAmrUW4Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qttkathmq57wgllh1kxd.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FAmrUW4Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qttkathmq57wgllh1kxd.PNG" alt="Image description" width="880" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From our dataset, let's also create a comparison of the test values and the predicted values.&lt;/p&gt;

&lt;p&gt;Use pandas (already imported as 'pd') to put the values into a data frame.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pred_comparison=pd.DataFrame({'Actual Value': y_test, 'Predicted Value': y_pred, 'Difference': y_test-y_pred})
pred_comparison
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IjDm6947--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1wgfabxlarzp812bdr9z.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IjDm6947--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1wgfabxlarzp812bdr9z.PNG" alt="Image description" width="628" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above are the first 5 and last 5 rows of the comparison.&lt;/p&gt;

&lt;p&gt;To view the first 40 rows for more clarity, we use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pred_comparison[0:40]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QFEM5P1S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90xqqwhs9fmfu25njpap.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QFEM5P1S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90xqqwhs9fmfu25njpap.PNG" alt="Image description" width="600" height="716"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Awesome, that's our model! Explore more ways to improve it. Bye!
&lt;/h1&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>womenintech</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
