<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sifat</title>
    <description>The latest articles on DEV Community by Sifat (@shhossain).</description>
    <link>https://dev.to/shhossain</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F946031%2Fb7b4fea0-947d-4676-a7ea-399677681156.jpg</url>
      <title>DEV Community: Sifat</title>
      <link>https://dev.to/shhossain</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shhossain"/>
    <language>en</language>
    <item>
      <title>The Evolution of Sorting Algorithms Over the Years (Bubble sort to AI-driven sort)</title>
      <dc:creator>Sifat</dc:creator>
      <pubDate>Sat, 21 Oct 2023 07:28:29 +0000</pubDate>
      <link>https://dev.to/shhossain/the-evolution-of-sorting-algorithms-over-the-years-bubble-sort-to-ai-driven-sort-31pg</link>
      <guid>https://dev.to/shhossain/the-evolution-of-sorting-algorithms-over-the-years-bubble-sort-to-ai-driven-sort-31pg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the fascinating realm of computer science, where data manipulation is king, the sorting algorithm stands as a foundational pillar. Sorting algorithms are like the heroes behind the scenes, tirelessly arranging data in the blink of an eye without taking credit. However, the journey of these algorithms has been far from straightforward. It's a story of innovation, ingenuity, and the relentless pursuit of efficiency. Let's embark on a journey through the evolution of sorting algorithms, from their humble beginnings in the 1950s to their state-of-the-art adaptations in the present.&lt;/p&gt;

&lt;h2&gt;
  
  
  Early Sorting Methods (1950s - 1960s)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sOG3vXmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqgv9efngu00rk28m3le.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sOG3vXmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqgv9efngu00rk28m3le.gif" alt="Bubble Sort" width="300" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The birth of sorting algorithms can be traced back to the 1950s when computers were in their infancy. The earliest sorting methods, such as Bubble Sort and Selection Sort, were simple and inefficient. These algorithms had time complexities of O(n^2), which made them impractical for large datasets. They were like the pioneers trying to build a house with wooden sticks instead of bricks.&lt;/p&gt;
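&lt;p&gt;To make the comparison concrete, here is a minimal Bubble Sort sketch in Python (illustrative, not from the original article; the &lt;code&gt;bubble_sort&lt;/code&gt; name is our own):&lt;/p&gt;

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until no swaps remain."""
    xs = list(items)  # work on a copy so the input is untouched
    n = len(xs)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the last i elements are already in place
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:  # already sorted: stop early
            break
    return xs
```

&lt;p&gt;The nested loops are exactly where the O(n^2) cost comes from: every element may be compared against nearly every other element.&lt;/p&gt;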

&lt;h2&gt;
  
  
  Introduction of Quicksort (1960s)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--da6iLl-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhn12xmkmi17ofb1b4d3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--da6iLl-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhn12xmkmi17ofb1b4d3.gif" alt="Quick Sort" width="300" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The breakthrough came in the 1960s with the invention of Quicksort by Tony Hoare. Quicksort introduced the concept of "divide and conquer," significantly improving sorting efficiency. With an average-case time complexity of O(n log n), it became a game-changer and remains widely used today.&lt;/p&gt;
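&lt;p&gt;The divide-and-conquer idea can be sketched in a few lines of Python (an illustrative, non-in-place version, not Hoare's original in-place partition scheme):&lt;/p&gt;

```python
def quicksort(xs):
    """Divide and conquer: partition around a pivot, then sort each side."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```

&lt;p&gt;Each partition step costs O(n), and a reasonable pivot halves the problem, giving the O(n log n) average case.&lt;/p&gt;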

&lt;p&gt;In 1959, Tony Hoare found himself at Moscow State University working on a machine translation project. The task at hand was to sort Russian words for translation. Hoare initially considered using insertion sort, a well-known sorting algorithm at the time. However, he quickly realized that insertion sort would be too slow for the massive dataset he was dealing with.&lt;/p&gt;

&lt;p&gt;The inspiration for Quicksort struck him while he was wrestling with the unsorted segments of the list: he recognized that a "divide and conquer" approach could make sorting significantly faster. He wrote the partition step of the algorithm in Mercury Autocode, and upon returning to England he was asked to write code for Shellsort.&lt;/p&gt;

&lt;p&gt;Hoare, confident in his new algorithm's speed, told his boss he knew a faster method. His boss was skeptical and bet him sixpence that he didn't. Hoare accepted the bet and set out to prove Quicksort's superiority.&lt;/p&gt;

&lt;p&gt;As he implemented Quicksort and demonstrated its remarkable speed, Hoare's boss had to acknowledge defeat. Quicksort was indeed faster and had the potential to revolutionize the world of sorting. Tony Hoare published his algorithm in a paper in The Computer Journal in 1962, describing the elegance and efficiency of Quicksort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Merge Sort (1945)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_ulOPQgH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmy5ydau5wvm53so75ve.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_ulOPQgH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmy5ydau5wvm53so75ve.gif" alt="Merge Sort" width="300" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Merge Sort is actually older than Quicksort: John von Neumann described it as early as 1945. The algorithm gained popularity in the 1960s and, like Quicksort, embodies the idea of divide and conquer in sorting. Its guaranteed O(n log n) time complexity, even in the worst case, made it another powerful tool in the sorting toolbox.&lt;/p&gt;
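&lt;p&gt;A minimal sketch of the split-merge idea in Python (illustrative, not from the original article):&lt;/p&gt;

```python
def merge_sort(xs):
    """Split in half, sort each half recursively, then merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # one side is empty; append the rest
```

&lt;p&gt;The merge step is linear, and there are log n levels of splitting, which is where the O(n log n) bound comes from.&lt;/p&gt;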

&lt;h2&gt;
  
  
  Heapsort (1964)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ts21vdm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fy68cergz4xuqurgy1l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ts21vdm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fy68cergz4xuqurgy1l.gif" alt="Heap Sort" width="280" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 1964, J. W. J. Williams introduced Heapsort, an in-place sorting algorithm utilizing a binary heap data structure. With its time complexity of O(n log n), Heapsort became a crucial tool for in-place sorting, a method widely used in applications where memory conservation is essential.&lt;/p&gt;
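&lt;p&gt;A compact in-place Heapsort sketch in Python, assuming the standard array encoding of a binary max-heap (children of index i live at 2i+1 and 2i+2); illustrative, not from the original article:&lt;/p&gt;

```python
def heapsort(items):
    """Build a max-heap in the array, then repeatedly move the max to the end."""
    xs = list(items)
    n = len(xs)

    def sift_down(root, end):
        # Push xs[root] down until the max-heap property holds on xs[root..end].
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and xs[child] < xs[child + 1]:
                child += 1  # pick the larger child
            if xs[root] < xs[child]:
                xs[root], xs[child] = xs[child], xs[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):  # heapify: O(n)
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):  # extract max n times: O(n log n)
        xs[0], xs[end] = xs[end], xs[0]
        sift_down(0, end - 1)
    return xs
```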

&lt;h2&gt;
  
  
  Development of Efficient Algorithms (1970s - 1980s)
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NVIjHj-lrT4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The 1970s and 1980s witnessed significant progress in the realm of sorting algorithms, as computer scientists refined the classic algorithms for real-world workloads. That line of work later culminated in Timsort (created by Tim Peters in 2002), a hybrid that blends the strengths of Merge Sort and Insertion Sort and serves as Python's built-in sorting method.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction of Comparison-Based Lower Bounds (1980s - 1990s)
&lt;/h2&gt;

&lt;p&gt;Theoretical computer science established lower bounds for comparison-based sorting algorithms, demonstrating that any such algorithm must perform Ω(n log n) comparisons in the worst case, so O(n log n) is the best achievable. This revelation shifted the focus towards improving constant factors and the practical aspects of sorting.&lt;/p&gt;
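&lt;p&gt;The bound follows from a decision-tree argument: a comparison sort must distinguish all n! possible input orderings, so it needs at least log2(n!) comparisons, which grows like n log2 n. A quick numeric check in Python:&lt;/p&gt;

```python
import math

def comparison_lower_bound(n):
    """log2(n!) comparisons are needed in the worst case (decision-tree argument)."""
    return math.log2(math.factorial(n))

# By Stirling's approximation, log2(n!) grows like n*log2(n),
# so no comparison sort can beat O(n log n) in the worst case.
for n in (10, 100, 1000):
    print(n, round(comparison_lower_bound(n)), round(n * math.log2(n)))
```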

&lt;h2&gt;
  
  
  Parallel Sorting (1990s - 2000s)
&lt;/h2&gt;

&lt;p&gt;With the advent of parallel computing, sorting algorithms adapted to harness the power of multi-core processors and distributed systems. Algorithms like Bitonic Sort and Batcher's Odd-Even Mergesort led the way in parallel sorting, allowing for greater speed and efficiency in the sorting process.&lt;/p&gt;
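&lt;p&gt;As an illustration of a sorting network, here is a sequential Python sketch of Bitonic Sort (input length must be a power of two). The key property is that the compare-exchange pattern is fixed in advance rather than data-dependent, which is what makes it easy to run in parallel on real hardware; this sketch runs it serially:&lt;/p&gt;

```python
def bitonic_sort(xs, ascending=True):
    """Sort a power-of-two-length sequence with a bitonic sorting network."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    up = bitonic_sort(xs[:mid], True)     # first half ascending
    down = bitonic_sort(xs[mid:], False)  # second half descending
    return _bitonic_merge(up + down, ascending)  # merge the bitonic sequence

def _bitonic_merge(xs, ascending):
    if len(xs) <= 1:
        return list(xs)
    xs = list(xs)
    mid = len(xs) // 2
    for i in range(mid):  # data-independent compare-exchange stage
        if (xs[i] > xs[i + mid]) == ascending:
            xs[i], xs[i + mid] = xs[i + mid], xs[i]
    return _bitonic_merge(xs[:mid], ascending) + _bitonic_merge(xs[mid:], ascending)
```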

&lt;h2&gt;
  
  
  External Sorting (2000s - Present)
&lt;/h2&gt;

&lt;p&gt;As data sets continued to expand exponentially, external sorting algorithms became essential. These algorithms were designed to handle data that couldn't fit into memory. Techniques such as External Merge Sort and Quick Sort were adapted for external memory and distributed computing environments, facilitating the processing of vast datasets.&lt;/p&gt;
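&lt;p&gt;A toy External Merge Sort in Python (the &lt;code&gt;external_merge_sort&lt;/code&gt; helper, the one-integer-per-line file format, and the chunk size are all assumptions made for this sketch): sort bounded-size chunks into temporary files, then k-way merge the sorted runs with a heap, so memory use stays proportional to the chunk size rather than the file size.&lt;/p&gt;

```python
import heapq
import os
import tempfile

def external_merge_sort(input_path, output_path, chunk_size=1000):
    """Sort a file of one integer per line using bounded memory."""
    chunk_paths = []
    with open(input_path) as f:
        while True:
            # Read at most chunk_size lines; this is all we hold in memory.
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as out:
                out.writelines(f"{x}\n" for x in chunk)
            chunk_paths.append(path)

    files = [open(p) for p in chunk_paths]
    try:
        # heapq.merge lazily merges the already-sorted runs.
        streams = [(int(line) for line in fh) for fh in files]
        with open(output_path, "w") as out:
            out.writelines(f"{x}\n" for x in heapq.merge(*streams))
    finally:
        for fh in files:
            fh.close()
        for p in chunk_paths:
            os.remove(p)
```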

&lt;h2&gt;
  
  
  Hybrid and Adaptive Algorithms (2000s - Present)
&lt;/h2&gt;

&lt;p&gt;In the modern era, sorting algorithms have become more versatile. Modern libraries utilize hybrid and adaptive sorting algorithms that can switch between various methods depending on the data's characteristics. These algorithms aim to strike a balance between speed and resource utilization, ensuring optimal performance in various scenarios.&lt;/p&gt;
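&lt;p&gt;As a simplified illustration of the switching idea (not any library's actual implementation), here is a Quicksort that falls back to Insertion Sort on small subarrays, where the simpler algorithm's low overhead wins:&lt;/p&gt;

```python
CUTOFF = 16  # below this size, insertion sort's low overhead wins

def insertion_sort(xs, lo, hi):
    """Insertion-sort xs[lo..hi] in place (hi inclusive)."""
    for i in range(lo + 1, hi + 1):
        val, j = xs[i], i - 1
        while j >= lo and xs[j] > val:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = val

def hybrid_sort(xs, lo=0, hi=None):
    """Quicksort that switches to insertion sort on small subarrays."""
    if hi is None:
        hi = len(xs) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(xs, lo, hi)
        return xs
    pivot = xs[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:  # Hoare-style partition
        while xs[i] < pivot:
            i += 1
        while xs[j] > pivot:
            j -= 1
        if i <= j:
            xs[i], xs[j] = xs[j], xs[i]
            i += 1
            j -= 1
    hybrid_sort(xs, lo, j)
    hybrid_sort(xs, i, hi)
    return xs
```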

&lt;h2&gt;
  
  
  Machine Learning and AI Sorting Techniques (Present)
&lt;/h2&gt;

&lt;p&gt;In the present, sorting algorithms are experiencing yet another revolution through the integration of machine learning and artificial intelligence. These cutting-edge techniques adapt and optimize sorting algorithms based on the specific characteristics of the data, predicting the most efficient sorting method for a given dataset. This development has opened up new frontiers for sorting in the age of AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The evolution of sorting algorithms is a remarkable journey through the annals of computer science. From the rudimentary beginnings of Bubble Sort to the cutting-edge adaptability of machine learning-based sorting techniques, these algorithms have continuously strived for efficiency. As data continues to grow in complexity and volume, sorting algorithms remain at the forefront of data management, proving that even in the digital world, the art of arranging information is an ever-evolving science.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>sortingalgorithms</category>
      <category>computerscience</category>
      <category>learning</category>
    </item>
    <item>
      <title>Face Recognition on a Large Collection of Faces with Python</title>
      <dc:creator>Sifat</dc:creator>
      <pubDate>Tue, 05 Sep 2023 18:57:40 +0000</pubDate>
      <link>https://dev.to/shhossain/face-recognition-on-a-large-collection-of-faces-with-python-4e36</link>
      <guid>https://dev.to/shhossain/face-recognition-on-a-large-collection-of-faces-with-python-4e36</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6amADdbx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tpcyifvqtdojilllkcow.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6amADdbx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tpcyifvqtdojilllkcow.jpg" alt="Lots of faces" width="720" height="405"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://edition.cnn.com/2019/04/19/tech/ai-facial-recognition/index.html"&gt;cnn.com&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hey there!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ever wondered how computers can recognize faces? Well, nowadays, it’s not as complicated as it used to be, thanks to the amazing advancements in computer vision. There are libraries like “face_recognition” and “deepface” that make face recognition tasks quite straightforward. You can easily recognize one or two faces, or even a hundred, with just a few lines of code. However, as you might expect, things get a bit tricky when you’re dealing with a large collection of faces. The more faces you have, the more time and effort it takes.&lt;/p&gt;

&lt;p&gt;But fear not! In this article, we’re going to dive into how you can tackle this challenge and perform face recognition on a big bunch of faces.&lt;/p&gt;
&lt;h2&gt;
  
  
  Understanding Embeddings:
&lt;/h2&gt;

&lt;p&gt;First things first, let’s talk about something called “embeddings.” Think of embeddings as unique signatures for each face. These are arrays of numbers that describe the essence of a face. To get these embeddings using Python’s “face_recognition” library, follow these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;face_recognition&lt;/span&gt;

&lt;span class="c1"&gt;# Load the known image (e.g., Joe Biden's face)
&lt;/span&gt;&lt;span class="n"&gt;known_image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;face_recognition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;load_image_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"biden.jpg"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;biden_embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;face_recognition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;face_encodings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;known_image&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you print out these embeddings, you’ll see an array of numbers, usually with a length of 128. Different deep-learning models might produce embeddings of different lengths.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calculating Similarity:
&lt;/h2&gt;

&lt;p&gt;Now, what’s the use of these embeddings? Well, they help us compare faces. Let’s say we have another face, and we want to see how similar it is to Joe Biden’s face. We can use mathematical measures like “cosine similarity” or “Euclidean distance” for this.&lt;/p&gt;

&lt;p&gt;Here’s how you calculate cosine similarity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;numpy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dot&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;numpy.linalg&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;norm&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;cosine_similarity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;list_1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;list_2&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;cos_sim&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;list_1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;list_2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;norm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;list_1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;norm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;list_2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cos_sim&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In simple terms, the closer the similarity score is to 1, the more alike the faces are. A score of 0.86, for example, indicates a strong match (strictly speaking, the score measures the angle between the embedding vectors rather than a literal percentage of similarity).&lt;/p&gt;
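&lt;p&gt;A quick sanity check of the helper above on toy vectors (not real face embeddings):&lt;/p&gt;

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction, 0 = orthogonal."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```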

&lt;h2&gt;
  
  
  Using Vector Databases:
&lt;/h2&gt;

&lt;p&gt;But wait, when you have a ton of faces, calculating similarity for each pair of faces can be slow and memory-intensive. This is where “vector databases” come to the rescue. Think of a vector database as a smart way to store and quickly retrieve embeddings.&lt;/p&gt;

&lt;p&gt;Let’s take “ChromaDB” as an example. Here’s how you can use it for your face recognition task:&lt;/p&gt;

&lt;p&gt;First, create a collection to store your images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;chromadb&lt;/span&gt;

&lt;span class="c1"&gt;# Choose where to store the database
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chromadb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PersistentClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"face_db"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_or_create_collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'facedb'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"hnsw:space"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'cosine'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can add your embeddings to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;ids&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'1'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;embeds&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your embeddings
&lt;/span&gt;    &lt;span class="n"&gt;metadatas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="s"&gt;'name'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'Joe Biden'&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To search for similar faces in the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;query_embeddings&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;unknown_embeddings&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your unknown embeddings
&lt;/span&gt;    &lt;span class="n"&gt;n_results&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The results will tell you which faces are similar and how close they are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Distance Metrics:
&lt;/h2&gt;

&lt;p&gt;I’ve run some experiments with different distance metrics in ChromaDB. In the plots below, blue indicates that the faces match, and red means they don’t.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Cosine: Cosine similarity measures angles between vectors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CQG3zJJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h7p5sb4vmhgizd3z4os.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CQG3zJJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h7p5sb4vmhgizd3z4os.png" alt="Cosine" width="552" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  L2 (Euclidean): Euclidean distance measures straight-line distances between points.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Cm5IxZl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9torsnp4hkko2ahqze9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Cm5IxZl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9torsnp4hkko2ahqze9.png" alt="Euclidean" width="543" height="413"&gt;&lt;/a&gt;&lt;/p&gt;
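&lt;p&gt;For completeness, Euclidean distance is just as easy to compute with NumPy; note that, unlike cosine similarity, a smaller value means a closer match:&lt;/p&gt;

```python
import numpy as np

def euclidean_distance(a, b):
    """L2 distance between two embedding vectors; 0 means identical."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

print(euclidean_distance([1.0, 2.0], [1.0, 2.0]))  # identical -> 0.0
print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # 3-4-5 triangle -> 5.0
```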

&lt;h2&gt;
  
  
  Using the “facedb” Package:
&lt;/h2&gt;

&lt;p&gt;To make your life easier, I’ve bundled all this functionality into a handy package called “facedb.” You can install it with a simple pip command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;facedb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TrRlq9NT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eexnsg02o6b8owga132q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TrRlq9NT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eexnsg02o6b8owga132q.png" alt="Image description" width="496" height="268"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/ageitgey/face_recognition#identify-faces-in-pictures"&gt;Source: github.com/ageitgey/face_recognition&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s how you can use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Import the FaceDB library
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;facedb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FaceDB&lt;/span&gt;

&lt;span class="c1"&gt;# Create a FaceDB instance and specify where to store the database
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FaceDB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"facedata"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Add a new face to the database
&lt;/span&gt;&lt;span class="n"&gt;face_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Joe Biden"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"joe_biden.jpg"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Recognize a face from a new image
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;recognize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"new_face.jpg"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Check if the recognized face matches the one in the database
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;face_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Recognized as Joe Biden"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Unknown face"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  For More Use Cases:
&lt;/h2&gt;

&lt;p&gt;If you’re interested in exploring more use cases and diving deeper into the code, you can check out the &lt;a href="https://github.com/shhossain/FaceDB"&gt;GitHub repository&lt;/a&gt;. There, you’ll find additional examples and resources to help you with your face recognition projects.&lt;/p&gt;

&lt;p&gt;So, go ahead and give it a try! Goodbye, and I hope you find this information helpful for your face recognition projects!&lt;/p&gt;

</description>
      <category>python</category>
      <category>opensource</category>
      <category>computervision</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why Skills in Algorithmic Problem Solving are More Important Than Ever in 2023</title>
      <dc:creator>Sifat</dc:creator>
      <pubDate>Thu, 08 Dec 2022 15:07:15 +0000</pubDate>
      <link>https://dev.to/shhossain/why-skills-in-algorithmic-problem-solving-are-more-important-than-ever-in-2023-28mf</link>
      <guid>https://dev.to/shhossain/why-skills-in-algorithmic-problem-solving-are-more-important-than-ever-in-2023-28mf</guid>
      <description>&lt;p&gt;As the field of artificial intelligence continues to advance, the importance of competitive programming is becoming increasingly clear. In the year 2023, the rise of AI systems like CoPilot will highlight the need for skilled programmers who can develop and maintain these complex systems.&lt;/p&gt;

&lt;p&gt;Competitive programming, also known as algorithmic problem solving, involves writing computer programs to solve complex problems and puzzles. This type of programming requires a deep understanding of algorithms, data structures, and computer science principles, as well as the ability to think logically and solve problems efficiently.&lt;/p&gt;

&lt;p&gt;In the age of AI, the importance of competitive programming cannot be overstated. AI systems like GitHub Copilot, which is powered by OpenAI's Codex model, can generate many kinds of code, from setting up a server to building a game. These systems require highly skilled programmers to develop and maintain them, and the ability to solve complex algorithmic problems is essential.&lt;/p&gt;

&lt;p&gt;Furthermore, the rise of AI is also creating new opportunities for competitive programmers. As AI systems become more advanced, there is a growing demand for skilled programmers who can work on these systems and help them reach their full potential. Competitive programming provides a valuable skill set that can be applied to a wide range of industries, from finance and healthcare to transportation and gaming.&lt;/p&gt;

&lt;p&gt;In addition to the practical applications of competitive programming, the discipline also has numerous benefits for individuals. Competitive programming can improve problem-solving skills, logical thinking, and attention to detail, as well as provide a sense of accomplishment and achievement. It can also be a valuable tool for personal growth and development, as it encourages continuous learning and improvement.&lt;/p&gt;

&lt;p&gt;In conclusion, the importance of competitive programming will only continue to grow in the age of AI. As AI systems like CoPilot become more advanced and widespread, the need for skilled programmers who can develop and maintain these systems will become increasingly clear. Competitive programming provides a valuable skill set that can be applied to a wide range of industries and offers numerous benefits for individuals.&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
  </channel>
</rss>
