<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oussama Ben Sassi</title>
    <description>The latest articles on DEV Community by Oussama Ben Sassi (@usamaa).</description>
    <link>https://dev.to/usamaa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F670695%2F4fa7af19-cd33-42fa-a285-aadc420f86eb.jpg</url>
      <title>DEV Community: Oussama Ben Sassi</title>
      <link>https://dev.to/usamaa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/usamaa"/>
    <language>en</language>
    <item>
      <title>Why Every Engineer Should Learn to Read Research Papers (And How to Start)</title>
      <dc:creator>Oussama Ben Sassi</dc:creator>
      <pubDate>Mon, 03 Nov 2025 15:19:45 +0000</pubDate>
      <link>https://dev.to/usamaa/why-every-engineer-should-learn-to-read-research-papers-and-how-to-start-11n</link>
      <guid>https://dev.to/usamaa/why-every-engineer-should-learn-to-read-research-papers-and-how-to-start-11n</guid>
      <description>&lt;p&gt;You're debugging a performance issue in your microservices. The database queries are slow, but not obviously wrong. You search Stack Overflow, read a blog post or two, implement what seemed reasonable, and the problem gets worse. Then someone mentions an academic paper from 2012 that directly addresses your situation. You read it. The researchers tested exactly this scenario, documented where the optimization works and where it fails, and included a section on production gotchas you'd already hit.&lt;/p&gt;

&lt;p&gt;That moment sticks with you. It shouldn't have taken a casual mention to find that information. You had the tools available. You just didn't think to look for them.&lt;/p&gt;

&lt;p&gt;This is the core reason engineers should read research papers. Not for some abstract intellectual exercise. Not to impress people at conferences. But because the work that matters in your career has often already been documented, tested, and published. The question is whether you'll find it before or after you burn a week on the wrong approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem Research Papers Solve
&lt;/h2&gt;

&lt;p&gt;Modern engineering involves picking from an enormous set of tools, patterns, and trade-offs. Most engineers make these choices based on blog posts, conference talks, and what their team happened to use last. There's nothing wrong with that as a starting point. The problem is what happens when the obvious choices stop working.&lt;/p&gt;

&lt;p&gt;That's when you need to understand the principles beneath the tools. And that's where research papers live.&lt;/p&gt;

&lt;p&gt;AI, container orchestration, database optimization, distributed systems consensus—these didn't start as commercial products or blog posts. They started as peer-reviewed research. Someone designed the thing, tested it carefully, documented what worked and what didn't, and published it. Years later, companies built products around it. Still later, people wrote quick-start guides.&lt;/p&gt;

&lt;p&gt;If you read only the quick-start guide, you're working from an interpretation of an interpretation. The actual design decisions, the trade-offs, the known limitations—those live in the original paper.&lt;/p&gt;

&lt;p&gt;And here's the practical angle: by the time something becomes common enough for a quick-start guide, you're probably dealing with a problem that guide doesn't cover. That's when you need the research.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Engineers Skip This
&lt;/h2&gt;

&lt;p&gt;The barriers are real. Academic papers are dense. They use unfamiliar notation. They assume prior knowledge. And you have actual work to do.&lt;/p&gt;

&lt;p&gt;But the barrier isn't as high as it feels. The trick is not to read the entire paper thoroughly; it's to read strategically.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Actually Start
&lt;/h2&gt;

&lt;p&gt;Pick a problem from your current work. Something real, something you're actively thinking about. Don't start with a random paper that sounds interesting. Start with a paper directly related to a problem you have.&lt;/p&gt;

&lt;p&gt;If you work with databases, search for papers on indexing strategies or query optimization. If you work with distributed systems, search for papers on consensus algorithms. If you work with deployment infrastructure, search for papers on container orchestration or service discovery.&lt;/p&gt;

&lt;p&gt;Existing domain knowledge is your friend here. You already understand the problem. The paper is just showing you how researchers approached it.&lt;/p&gt;

&lt;p&gt;When you find a paper, don't read it like a book. Read it like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Read the abstract and introduction. This tells you what problem the paper solves and why it matters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Skip to the results section and look at the diagrams. What did they actually test? What were the performance characteristics?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Read the discussion or limitations section. This is where researchers admit what didn't work and why. This is the most practical part.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Skim the technical sections. You don't need to understand every equation. You need to understand the experimental design well enough to know whether their results apply to your situation.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. You don't need to parse all the mathematical derivations or know every citation in the related work section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recognize Academic Conventions
&lt;/h2&gt;

&lt;p&gt;Researchers write with specific constraints. They need to justify everything. They need to be precise even when precision isn't the most readable approach. They structure papers in ways that make sense for peer review, not necessarily for learning.&lt;/p&gt;

&lt;p&gt;So filter accordingly. A section that looks like essential reading might be the authors proving a mathematical property they assumed you'd care about. You probably don't. What you care about is whether their testing scenario matches yours and whether their results make sense.&lt;/p&gt;

&lt;p&gt;The abstract and conclusion are often the most useful parts. They contain the actual story of what the paper did and why it matters. The middle sections prove it works, but you can usually get away with less depth there than you'd expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Resources as Pointers, Not Replacements
&lt;/h2&gt;

&lt;p&gt;There are sites and blogs that summarize research papers. They're useful for finding papers and getting initial context. But they're still interpretations. Once you know a paper matters, go read it.&lt;/p&gt;

&lt;p&gt;This isn't gatekeeping. It's practical. A blog post about a paper might not cover the specific trade-off you're hitting. The original paper will. Or it won't, and you'll know that faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the Habit
&lt;/h2&gt;

&lt;p&gt;Don't try to read a paper a week. That's too ambitious when you're starting. Instead, allocate one hour. Pick one paper. Read it using the strategy above. Write a brief note on what the paper solved, how they tested it, and what limitations they found.&lt;/p&gt;

&lt;p&gt;That's your actual artifact. Not reading the paper. Writing the note. Because you don't retain information just from reading. You retain it by explaining it.&lt;/p&gt;

&lt;p&gt;Over time, papers become easier to read. You recognize the patterns. You get faster at separating what matters from what doesn't. And you start noticing citation chains. This paper cites an older paper that might be even more foundational. Follow those chains. That's how you build real understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for Your Career
&lt;/h2&gt;

&lt;p&gt;Engineers who read research papers debug faster. They make better architectural decisions. They communicate with more confidence about why they're choosing one approach over another. They're less likely to spend a week optimizing something that's fundamentally limited by the approach itself.&lt;/p&gt;

&lt;p&gt;They also tend to get promoted faster. Not because they're smarter. But because they can explain technical decisions in terms of actual trade-offs rather than hunches or hype.&lt;/p&gt;

&lt;p&gt;This is practical professional development. It's not separate from your job. It's literally how you get better at your job.&lt;/p&gt;

&lt;p&gt;Start this week. Find a paper related to something you're working on. Read the abstract and jump to the results. Spend thirty minutes on it. See if it gives you any insight you didn't have before. Most of the time it won't help immediately. Sometimes it changes how you think about a problem.&lt;/p&gt;

&lt;p&gt;That's the point. You're not looking for immediate impact. You're building the habit and the pattern recognition to spot when a paper matters.&lt;/p&gt;

</description>
      <category>computerscience</category>
    </item>
    <item>
      <title>Convolutional Neural Networks: How Does AI Understand Images?</title>
      <dc:creator>Oussama Ben Sassi</dc:creator>
      <pubDate>Sat, 14 Sep 2024 11:29:40 +0000</pubDate>
      <link>https://dev.to/usamaa/convolutional-neural-networks-how-do-computers-understand-images-4ge6</link>
      <guid>https://dev.to/usamaa/convolutional-neural-networks-how-do-computers-understand-images-4ge6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbm945212wsd2eifsaeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbm945212wsd2eifsaeq.png" alt="An image that shows how real-world objects are detected by cnn." width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever asked yourself how AI systems can understand images? Like how they can tell whether an animal in a picture is a cat or a dog? &lt;/p&gt;

&lt;p&gt;Recently, I’ve been learning about deep learning through a TensorFlow course by IBM on edX, and I’ve been really amazed by how powerful Convolutional Neural Networks (CNNs) are in recognizing images.&lt;/p&gt;

&lt;p&gt;If you're a developer, no matter your background, I think you’ll find CNNs both interesting and very useful. &lt;/p&gt;

&lt;p&gt;In this article, I want to guide you through the basics of how CNNs work, breaking down each layer to show you how they can classify images.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Convolutional Neural Network (CNN)?
&lt;/h2&gt;

&lt;p&gt;A CNN is a type of &lt;a href="https://www.geeksforgeeks.org/artificial-neural-networks-and-its-applications/" rel="noopener noreferrer"&gt;artificial neural network&lt;/a&gt; specifically designed to process and classify visual data.&lt;/p&gt;

&lt;p&gt;CNNs are great at capturing patterns like edges, textures, and shapes, which makes them perfect for tasks like identifying objects in pictures or distinguishing between a cat and a dog.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Building Blocks of a CNN
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fun97yqketd9nhzbmpunn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fun97yqketd9nhzbmpunn.png" alt="An example of a convolutional neural network with multiple layers,  convolutional, ReLU, pooling, and fully-connected layers." width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A CNN is made up of several layers, each serving a different purpose:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Convolutional Layer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hfowe7xqi4sficgqcpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hfowe7xqi4sficgqcpp.png" alt="Illustration of grayscale image pixels." width="373" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every image you see is made up of &lt;strong&gt;pixels&lt;/strong&gt;. A pixel is the smallest unit of a digital image, and each one holds a number. &lt;/p&gt;

&lt;p&gt;In a grayscale image, each pixel has a value between 0 and 255, representing a single channel of intensity. For a colored image (RGB), each pixel has three channels (Red, Green, Blue), with each channel holding a value between 0 and 255.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufkmcv2en611nuu8lhe6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufkmcv2en611nuu8lhe6.jpg" alt="Visualization of color channels in an RGB image, showing red, green, and blue layers each holding a value between 0 and 255." width="525" height="300"&gt;&lt;/a&gt;&lt;/p&gt;
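If you want to see this for yourself, here is a tiny sketch using NumPy (the pixel values are made up purely for illustration):

```python
import numpy as np

# A 3x3 grayscale image: one channel, each value between 0 and 255
gray = np.array([
    [0,   128, 255],
    [64,  200, 32],
    [255, 0,   90],
], dtype=np.uint8)

# A 2x2 RGB image: three channels (Red, Green, Blue) per pixel
rgb = np.array([
    [[255, 0, 0],  [0, 255, 0]],        # a red pixel, a green pixel
    [[0, 0, 255],  [255, 255, 255]],    # a blue pixel, a white pixel
], dtype=np.uint8)

print(gray.shape)  # (2 dimensions: height x width)
print(rgb.shape)   # (3 dimensions: height x width x channels)
```

The extra third dimension is exactly the "three channels" described above.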

&lt;p&gt;In this layer, a CNN uses something called a &lt;strong&gt;filter&lt;/strong&gt; or &lt;strong&gt;kernel&lt;/strong&gt; (a small matrix of numbers) that slides over the image, applying a &lt;strong&gt;dot product operation&lt;/strong&gt; to the pixels to detect specific patterns, as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxabrb8j5ek0o1kn5b4a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxabrb8j5ek0o1kn5b4a.gif" alt="Animated diagram of a kernel or filter sliding over an image." width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Typically, we initialize the kernels with random values; during the training phase, those values are updated toward their optimal values.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Sliding Matters
&lt;/h4&gt;

&lt;p&gt;The sliding motion of the filter across the image allows the CNN to learn spatial hierarchies of patterns. &lt;/p&gt;

&lt;p&gt;As the filter moves, it captures basic features like edges in the early layers and more complex patterns, such as shapes or objects, in deeper layers.&lt;/p&gt;

&lt;p&gt;For example, applying different kernels to the digit 2 can highlight various patterns:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5s0zxm3cnianolxatvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5s0zxm3cnianolxatvh.png" alt="Application of different filters to the digit 2, resulting in various highlighted edges and patterns." width="620" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image above, applying those filters to an image highlights its edges, creating a new image that represents the detected features. This process is known as convolution, and its output is called &lt;strong&gt;a feature map&lt;/strong&gt;.&lt;/p&gt;
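If you'd like to see convolution in code, here is a minimal sketch in plain NumPy (stride 1, no padding; the tiny image and the vertical-edge kernel are invented for illustration, not taken from the course):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, taking the dot product at each position
    (stride 1, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # element-wise product, then sum
    return out

# A tiny image with a bright vertical stripe in the middle
image = np.array([
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
], dtype=float)

# A simple vertical-edge kernel
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

feature_map = convolve2d(image, kernel)
print(feature_map)  # [[-765.    0.  765.]]: strong responses on the stripe's edges
```

Notice that the feature map is smaller than the input and has its largest values exactly where the stripe's edges are, which is the "detected features" idea in miniature.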

&lt;h3&gt;
  
  
  2. ReLU Activation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej9gx03k9hu55qei8v6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej9gx03k9hu55qei8v6u.png" alt="Comparison of an image before and after ReLU activation, showing how negative values are discarded, leaving a clearer representation of the important features." width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the convolution step, where the CNN identifies patterns like edges or textures, we need to decide which patterns are &lt;strong&gt;important&lt;/strong&gt;. This is where &lt;strong&gt;ReLU (Rectified Linear Unit)&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;h4&gt;
  
  
  What Does ReLU Do?
&lt;/h4&gt;

&lt;p&gt;Think of ReLU as a filter that cleans up the image further. After the convolution layer finds patterns, some of those patterns are helpful (positive values) and some are not (negative values).&lt;/p&gt;

&lt;p&gt;The ReLU layer takes the output from the convolution layer and simply changes all the negative values to zero as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjflf7mbiok8wnnz1klp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjflf7mbiok8wnnz1klp5.png" alt="Demonstration of ReLU activation, setting all negative pixel values to zero while retaining positive values in a feature map." width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By doing this, ReLU discards the unhelpful parts, keeping only the meaningful patterns. But ReLU&#8217;s impact goes beyond setting negatives to zero: it introduces &lt;strong&gt;non-linearity&lt;/strong&gt; into the model.&lt;/p&gt;
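In code, ReLU is essentially a one-liner. Here is a quick sketch with NumPy (the feature-map values are made up):

```python
import numpy as np

# An invented feature map straight out of a convolution layer
feature_map = np.array([
    [-3.0,  5.0, -1.0],
    [ 2.0, -7.0,  4.0],
])

# ReLU: negative values become 0, positive values pass through unchanged
relu = np.maximum(0, feature_map)
print(relu)
# [[0. 5. 0.]
#  [2. 0. 4.]]
```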

&lt;h4&gt;
  
  
  What Does Non-Linearity Mean?
&lt;/h4&gt;

&lt;p&gt;If you were to only draw straight lines, you’d miss out on capturing the curves and complexities of real-world objects. A linear function sees only straightforward, flat changes, like a gradual slope.&lt;/p&gt;

&lt;p&gt;For example, picture a shadow or a gradient on an image. A linear function would only see it as a &lt;strong&gt;flat&lt;/strong&gt;, dull line of change. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReLU&lt;/strong&gt;, by cutting off negative values, highlights differences more sharply, helping the CNN to &#8220;see&#8221; and react to those changes in more dynamic ways. In other words, it can start capturing rounded shapes, complex textures, and subtle variations.&lt;/p&gt;

&lt;p&gt;So, ReLU helps the CNN "break out" of basic, straight-line thinking, allowing the model to process images in a richer, more detailed way, much like how our brains pick up the fine differences between shadows, colors, and textures.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Pooling Layer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfq8f5q8hamzjcg1r146.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfq8f5q8hamzjcg1r146.png" alt="Comparison of an image before and after Pooling Layer, showing how an image is reduce in size" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next up is the &lt;strong&gt;Pooling layer&lt;/strong&gt;, which is used to reduce the spatial dimensions (height and width) of the feature maps as shown in the image above.&lt;/p&gt;

&lt;p&gt;This layer helps to make the CNN more &lt;strong&gt;computationally efficient&lt;/strong&gt; by reducing the number of parameters and ensuring that the model focuses on the most important features.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does it work?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj5z7gv1updce1eowvlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj5z7gv1updce1eowvlz.png" alt="Example of a pooling operation, reducing the size of the feature map by selecting the highest value in small windows across the map." width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most common pooling method is &lt;strong&gt;Max-Pooling&lt;/strong&gt;, which slides a small window (e.g., 2x2) across the feature map and keeps only the highest value in each window, creating a smaller, downsampled version of the feature map.&lt;/p&gt;
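Here is a rough sketch of 2x2 Max-Pooling with stride 2 in plain NumPy (the feature-map values are invented):

```python
import numpy as np

def max_pool(feature_map, size=2):
    """2x2 max-pooling with stride 2: keep the largest value in each window."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = feature_map[i*size:(i+1)*size, j*size:(j+1)*size]
            out[i, j] = window.max()  # the strongest activation wins
    return out

fm = np.array([
    [1, 3, 2, 0],
    [4, 6, 1, 2],
    [0, 1, 9, 8],
    [2, 3, 4, 7],
], dtype=float)

print(max_pool(fm))
# [[6. 2.]
#  [3. 9.]]
```

The 4x4 map shrinks to 2x2, but the strongest responses (6 and 9) survive, which is exactly the "keep the most important features" behavior described above.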

&lt;h3&gt;
  
  
  4. Fully-Connected Layer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqp4c0w8xse7q0xsm24r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqp4c0w8xse7q0xsm24r.png" alt="Visualization of the fully-connected layer in a CNN, showing how flattened feature data is passed through neurons for final classification." width="750" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the convolution and pooling layers have extracted the relevant features from the image, the final step is &lt;strong&gt;classification&lt;/strong&gt;. This is done by the &lt;strong&gt;Fully-Connected layer&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does it actually work?
&lt;/h4&gt;

&lt;h4&gt;
  
  
  1. Flattening
&lt;/h4&gt;

&lt;p&gt;First, the high-level features (like shapes and patterns) extracted by the previous layers are &lt;strong&gt;flattened&lt;/strong&gt; into a single, long list of numbers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1funpch6ei1npfl0lt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1funpch6ei1npfl0lt8.png" alt="Illustration of the flattening process in a CNN, converting high-level feature maps into a single list of numbers for classification." width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why do we need flattening? So that the network can consider all the features together, which helps it make a final decision or classification.&lt;/p&gt;
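In code, flattening is just a reshape. A quick sketch (the values are invented):

```python
import numpy as np

# Suppose pooling left us with a 2x2 feature map
feature_map = np.array([
    [6.0, 2.0],
    [3.0, 9.0],
])

# Flatten it into one long vector that the dense layers can consume
flat = feature_map.flatten()
print(flat)        # [6. 2. 3. 9.]
print(flat.shape)  # (4,)
```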

&lt;h4&gt;
  
  
  2. Processing Through Neurons
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzshvys04pk241spfcmax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzshvys04pk241spfcmax.png" alt="Fully-connected layers with neurons processing inputs, representing how features are combined and weighted to classify an image." width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once flattened, the data is passed through the Fully-Connected layers. Each neuron in these layers is connected to every neuron in the previous layer, which means the FC layer can look at all the features at once and understand how they relate to each other.&lt;/p&gt;

&lt;p&gt;Each neuron receives inputs from all the neurons in the previous layer. These inputs are multiplied by &lt;strong&gt;weights&lt;/strong&gt;: numbers that tell the network how important each feature is. The weights are learned during training to improve the network's accuracy.&lt;/p&gt;

&lt;p&gt;The results are then summed up together with a small value called a &lt;strong&gt;bias&lt;/strong&gt;. The bias shifts the neuron's &lt;strong&gt;activation threshold&lt;/strong&gt;, allowing it to respond appropriately even when the input values aren't perfect. &lt;/p&gt;

&lt;p&gt;This flexibility helps the model adjust better to the data.&lt;/p&gt;
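The weighted-sum-plus-bias step above is a single matrix operation. Here is a minimal sketch (the weights are random, as they would be before training, and the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

flat = np.array([6.0, 2.0, 3.0, 9.0])  # the flattened features (4 inputs)
W = rng.normal(size=(4, 3))            # weights: 4 inputs feeding 3 neurons
b = np.zeros(3)                        # one bias per neuron

# Each neuron's output: weighted sum of all inputs, shifted by its bias
scores = flat @ W + b
print(scores.shape)  # (3,): one raw score per neuron
```

During training, backpropagation nudges `W` and `b` so those scores become useful for classification.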

&lt;h4&gt;
  
  
  3. Decision Making with the Softmax Activation Function
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrok23979rnhb5g6kfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrok23979rnhb5g6kfz.png" alt="Diagram showing the softmax function converting scores from the fully-connected layer into probabilities for image classification." width="640" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the final step, the last FC layer outputs a set of scores, one for each possible class. The &lt;strong&gt;Softmax&lt;/strong&gt; function is then applied to these scores, converting them into &lt;strong&gt;probabilities&lt;/strong&gt;. The class with the &lt;strong&gt;highest probability&lt;/strong&gt; is selected as the network’s final prediction.&lt;/p&gt;
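Softmax itself is only a few lines. A sketch (the scores are made up; the class names are just an example):

```python
import numpy as np

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exp = np.exp(scores - scores.max())  # subtract the max for numerical stability
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw scores for, say, cat / dog / bird
probs = softmax(scores)
print(probs.round(3))  # [0.659 0.242 0.099]: probabilities summing to 1
print(probs.argmax())  # 0: the "cat" class wins
```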

&lt;p&gt;In summary, the Fully-Connected layers integrate all extracted features, perform complex transformations, and make the final classification decision using probabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion ✨
&lt;/h2&gt;

&lt;p&gt;At its core, a CNN is just a series of numbers and math, but those numbers allow it to "see" and interpret images in incredible ways. I hope you now have a good idea of the deep learning world, especially CNNs.&lt;/p&gt;
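As a rough sketch of how all these layers fit together, here is what a minimal model might look like in Keras (the layer sizes, input shape, and 10-class output are arbitrary choices for illustration, not from a specific course lab):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                # e.g. 28x28 grayscale images
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution + ReLU
    layers.MaxPooling2D((2, 2)),                   # max-pooling
    layers.Flatten(),                              # flattening
    layers.Dense(64, activation="relu"),           # fully-connected layer
    layers.Dense(10, activation="softmax"),        # one probability per class
])

model.summary()
```

Each line maps directly onto one of the building blocks covered above.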

&lt;p&gt;As a bonus, if you’re looking to dive deeper, feel free to check out my &lt;a href="https://github.com/Oussama1403/Deep-Learning-with-Tensorflow" rel="noopener noreferrer"&gt;Deep Learning with TensorFlow&lt;/a&gt; GitHub repo, which contains Jupyter Notebook labs from the "Deep Learning with TensorFlow" course by IBM on edX, and my &lt;a href="https://github.com/Oussama1403/Machine-Learning-with-Python" rel="noopener noreferrer"&gt;Machine Learning with Python&lt;/a&gt; repo, featuring labs from IBM's Coursera course.&lt;/p&gt;

&lt;p&gt;If you have any questions or just want to chat more about it, feel free to reach out. Thanks for reading 😊.&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>ai</category>
      <category>computervision</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Beginner’s Guide to Implementing a Simple Machine Learning Project</title>
      <dc:creator>Oussama Ben Sassi</dc:creator>
      <pubDate>Thu, 25 Jul 2024 09:25:28 +0000</pubDate>
      <link>https://dev.to/usamaa/beginners-guide-to-implementing-a-simple-machine-learning-project-3al7</link>
      <guid>https://dev.to/usamaa/beginners-guide-to-implementing-a-simple-machine-learning-project-3al7</guid>
      <description>&lt;p&gt;I completed few days ago the "Machine Learning with Python" course by IBM on Coursera. And Because this field can seem quite challenging for many, I decided to write a set of simple, concise, and fun articles to share my knowledge and guide new learners in the machine learning field!&lt;/p&gt;

&lt;p&gt;In this easy, fun, and simple-to-understand guide, I will unravel the mystery of machine learning by helping you create your first project: predicting customer categories for a telecommunications provider company.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting Up Your Environment
&lt;/h1&gt;

&lt;p&gt;Before we begin, you can use &lt;a href="https://colab.research.google.com/" rel="noopener noreferrer"&gt;Google Colab&lt;/a&gt; to run all the code provided here. This will allow you to execute the code in the cloud and analyze the output directly without using your machine's resources.&lt;/p&gt;

&lt;h1&gt;
  
  
  Project Overview
&lt;/h1&gt;

&lt;p&gt;Imagine a telecommunications provider has &lt;strong&gt;segmented&lt;/strong&gt; its customer base by service usage patterns, categorizing the customers into &lt;strong&gt;four groups&lt;/strong&gt;: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Basic Service&lt;/li&gt;
&lt;li&gt;E-Service&lt;/li&gt;
&lt;li&gt;Plus Service&lt;/li&gt;
&lt;li&gt;Total Service&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why Categorize (or Classify) Customers?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F104fsslwak4l54zsitft.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F104fsslwak4l54zsitft.gif" alt="classify" width="700" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Categorizing customers allows a company to tailor offers based on specific needs. For example, new customers might receive welcome discounts, while loyal customers could get exclusive early access to sales.&lt;/p&gt;

&lt;p&gt;Our objective in this project is to build a &lt;strong&gt;classifier&lt;/strong&gt; that can &lt;strong&gt;classify new customers&lt;/strong&gt; into one of these four groups based on previously categorized data. For this, we will use a specific type of classification algorithm called &lt;strong&gt;K-Nearest Neighbors (kNN)&lt;/strong&gt;.&lt;/p&gt;
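As a tiny preview of the idea (this is not the project's code, and the customer data here is made up), a kNN classifier in scikit-learn looks like this:

```python
from sklearn.neighbors import KNeighborsClassifier

# Made-up training data: [age, income] -> customer category (1 = Basic Service, 4 = Total Service)
X_train = [[25, 30], [30, 40], [45, 90], [50, 100], [23, 25], [48, 95]]
y_train = [1, 1, 4, 4, 1, 4]

knn = KNeighborsClassifier(n_neighbors=3)  # look at the 3 most similar customers
knn.fit(X_train, y_train)

# A new young, modest-income customer lands near the class-1 group
print(knn.predict([[28, 35]]))  # [1]
```

kNN classifies a new point by a majority vote among its nearest neighbors in the training data, which is exactly what we will do with the telecom dataset below.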

&lt;p&gt;To learn more about different ML algorithms, check out this informative article: &lt;a href="https://www.geeksforgeeks.org/types-of-machine-learning/" rel="noopener noreferrer"&gt;Types of Machine Learning&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Anyway, let the fun begin!&lt;/p&gt;

&lt;h1&gt;
  
  
  Data
&lt;/h1&gt;

&lt;p&gt;Machines learn from the data you give them; that is the essence of machine learning algorithms. They analyze data, learn patterns and relationships, and make predictions based on what they've learned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Downloading and Understanding Our Dataset
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw67qy4zvgsho7qb7rd0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw67qy4zvgsho7qb7rd0e.png" alt="dataset" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make it easier to understand our dataset, let's first download it and output the first five rows using the &lt;strong&gt;Pandas&lt;/strong&gt; library.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pandas is a Python library used for working with data sets. It has functions for analyzing, cleaning, exploring, and manipulating data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, go ahead and open &lt;strong&gt;Google Colab&lt;/strong&gt;, create a new notebook, and in the first code cell, type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt; &lt;span class="c1"&gt;# For creating visualizations and plots
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then run it to import the libraries. &lt;br&gt;
Next, in the second code cell, type the following code to read the dataset from the provided URL and display the first five rows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/teleCust1000t.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;head&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c9wf5ilkh5v253hnsf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c9wf5ilkh5v253hnsf3.png" alt="data" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the first five rows, each customer has attributes such as &lt;strong&gt;region&lt;/strong&gt;, &lt;strong&gt;age&lt;/strong&gt;, and &lt;strong&gt;marital status&lt;/strong&gt;. We will use these attributes to predict a new case. In machine learning terminology, the attributes used for prediction are called &lt;strong&gt;features&lt;/strong&gt;, while the target field (the attribute we want to predict) is known as the &lt;strong&gt;label&lt;/strong&gt;. Here the label is &lt;code&gt;custcat&lt;/code&gt; (short for customer category), which has four possible values corresponding to the four customer groups we discussed earlier.&lt;/p&gt;

&lt;p&gt;Let's perform a quick analysis with Pandas to see how many customers fall into each class. Type the following in a new code cell to get the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;custcat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;value_counts&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Extracting Features and Labels
&lt;/h2&gt;

&lt;p&gt;We will use the &lt;code&gt;X&lt;/code&gt; variable to store our feature set, and &lt;code&gt;y&lt;/code&gt; to store our labels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;tenure&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;age&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;marital&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;address&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;income&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ed&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;employ&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;retire&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gender&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;reside&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]].&lt;/span&gt;&lt;span class="n"&gt;values&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The line of code above selects the columns of the DataFrame &lt;code&gt;df&lt;/code&gt; that we will use as features for our machine learning model.&lt;br&gt;
To see a sample of the data, let's output the first five rows of the array &lt;code&gt;X&lt;/code&gt; using &lt;code&gt;X[0:5]&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([[&lt;/span&gt;  &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;13.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;44.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;9.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;64.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;4.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;5.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;[&lt;/span&gt;  &lt;span class="mf"&gt;3.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;11.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;33.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;7.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;136.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;5.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;5.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;6.&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;[&lt;/span&gt;  &lt;span class="mf"&gt;3.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;68.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;52.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;24.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;116.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;29.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;[&lt;/span&gt;  &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;33.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;33.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;12.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;33.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;[&lt;/span&gt;  &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;23.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;30.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;9.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="mf"&gt;30.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;0.&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="mf"&gt;4.&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's store the values of our label in &lt;code&gt;y&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;custcat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;values&lt;/span&gt;
&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Data Standardization
&lt;/h2&gt;

&lt;p&gt;Standardizing our data puts all features on a comparable scale. Why does that matter? Imagine our raw data has age in years, income in dollars, and height in centimeters; the algorithm may give more importance to features with larger numeric ranges. This can skew the results and lead to a biased model, and it matters especially for kNN, which relies on distances between data points.&lt;/p&gt;

&lt;p&gt;The line of code below scales and normalizes the data. It standardizes the features so they have a mean of 0 and a standard deviation of 1. This is a good practice in general. &lt;/p&gt;

&lt;p&gt;First, make sure the &lt;strong&gt;scikit-learn&lt;/strong&gt; library is installed (in a notebook cell, prefix the shell command with &lt;code&gt;!&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;scikit&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;learn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Scikit-learn is a library in Python that provides many unsupervised and supervised learning algorithms&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then, go ahead and run the code below, and see in the output how all the values are brought onto a similar scale:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;preprocessing&lt;/span&gt;
&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StandardScaler&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, don't worry if the line above seems complex; the important thing right now is to understand why it matters 😊.&lt;/p&gt;
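&lt;p&gt;If you're curious what &lt;code&gt;StandardScaler&lt;/code&gt; does under the hood, here is a minimal sketch using only NumPy and hypothetical toy values: each column has its mean subtracted and is then divided by its standard deviation.&lt;/p&gt;

```python
import numpy as np

# Hypothetical toy features: a column of ages (years) and a column of incomes (dollars)
data = np.array([[25.0, 30000.0],
                 [40.0, 90000.0],
                 [55.0, 60000.0]])

# Standardize each column: subtract its mean, then divide by its standard deviation
standardized = (data - data.mean(axis=0)) / data.std(axis=0)

print(standardized.mean(axis=0))  # each column's mean is now ~0
print(standardized.std(axis=0))   # each column's std is now 1
```

&lt;p&gt;With its default settings, this per-feature transformation is what &lt;code&gt;StandardScaler&lt;/code&gt; computes for us.&lt;/p&gt;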

&lt;h1&gt;
  
  
  Train-Test Split
&lt;/h1&gt;

&lt;p&gt;Since we only work with one dataset, the Train/Test Split involves splitting it into training data (the data our model will learn patterns from) and testing data (the data we will use to evaluate the model's predictions).&lt;/p&gt;

&lt;p&gt;In future articles, I’ll cover various and more in-depth evaluation techniques for models, including Train/Test Split, K-Fold Cross-Validation, and more.&lt;/p&gt;

&lt;p&gt;So, to perform the Train/Test split, we import it from the &lt;strong&gt;scikit-learn&lt;/strong&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.model_selection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now let's use our imported function and explain what is happening in the line below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;train_test_split()&lt;/code&gt; divides the features (&lt;code&gt;X&lt;/code&gt;) and labels (&lt;code&gt;y&lt;/code&gt;) into two parts: 80% for training the model (&lt;code&gt;X_train&lt;/code&gt; and &lt;code&gt;y_train&lt;/code&gt;) and 20% for testing it (&lt;code&gt;X_test&lt;/code&gt; and &lt;code&gt;y_test&lt;/code&gt;). Setting &lt;code&gt;random_state=4&lt;/code&gt; ensures that each time you run the code, the split result is the same.&lt;/p&gt;

&lt;p&gt;To see the size (number of rows and columns) of our train and test dataset, we will use the &lt;code&gt;.shape&lt;/code&gt; attribute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Train set:&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test set:&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Classification
&lt;/h1&gt;

&lt;p&gt;As I mentioned before, we will use the K-Nearest Neighbors (kNN) algorithm to train our model and predict the classes of new customers.&lt;br&gt;
(If you want to know how this algorithm works in detail, just leave a comment; I’ll be glad to write an article about it 😉)&lt;/p&gt;
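&lt;p&gt;In the meantime, the core idea of kNN fits in a few lines. This is a minimal sketch (not scikit-learn's implementation) using hypothetical toy data: to classify a new point, find the &lt;em&gt;k&lt;/em&gt; training points closest to it and take a majority vote of their labels.&lt;/p&gt;

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # 1. Compute the distance from x_new to every training point
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # 2. Take the labels of the k closest training points
    nearest_labels = y_train[np.argsort(distances)[:k]]
    # 3. Predict the most common label among them (majority vote)
    return Counter(nearest_labels).most_common(1)[0][0]

# Hypothetical toy data: two features, two classes
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.5]])
y_train = np.array([1, 1, 2, 2])
print(knn_predict(X_train, y_train, np.array([1.2, 1.5])))  # prints 1 (closest to the class-1 points)
```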

&lt;p&gt;Good news! We don't need to implement the algorithm ourselves, as we can easily import the model from the scikit-learn library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.neighbors&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;KNeighborsClassifier&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Training
&lt;/h2&gt;

&lt;p&gt;The block of code below creates a K-Nearest Neighbors model. The &lt;code&gt;fit()&lt;/code&gt; method trains the model with the training data (&lt;code&gt;X_train&lt;/code&gt;) and its corresponding labels (&lt;code&gt;y_train&lt;/code&gt;), allowing the model to learn from this data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;
&lt;span class="n"&gt;neigh&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;KNeighborsClassifier&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n_neighbors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Predicting
&lt;/h2&gt;

&lt;p&gt;Now, let's make some predictions!&lt;br&gt;
We'll create a variable named &lt;code&gt;y_predict&lt;/code&gt; to store our model’s predictions on the test data we created earlier. We will then compare these predictions with &lt;code&gt;y_test&lt;/code&gt;, which holds the real, correct values, to measure our model’s accuracy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;y_predict&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;neigh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;y_predict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Accuracy evaluation
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;y_predict[0:5]&lt;/code&gt; outputs the first five prediction results; you can do some comparison yourself with &lt;code&gt;y_test&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But comparing results manually is error-prone and inconsistent. That’s why scikit-learn offers various accuracy evaluation methods that standardize this process. For example, &lt;code&gt;metrics.accuracy_score&lt;/code&gt; calculates the proportion of correct predictions, providing a clear and objective measure of how well your model performs.&lt;/p&gt;
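&lt;p&gt;To make that concrete, here is a tiny self-contained check with hypothetical labels showing that accuracy is simply the fraction of predictions that match the true values:&lt;/p&gt;

```python
import numpy as np

# Hypothetical true labels and model predictions for 8 customers
y_true = np.array([1, 2, 3, 4, 1, 2, 3, 4])
y_pred = np.array([1, 2, 3, 3, 1, 4, 3, 4])

# Accuracy = number of correct predictions / total number of predictions
accuracy = np.mean(y_true == y_pred)
print(accuracy)  # 6 of the 8 predictions are correct -> 0.75
```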

&lt;p&gt;Let's use &lt;code&gt;metrics.accuracy_score&lt;/code&gt; to see how well our trained model is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Train set Accuracy: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accuracy_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;neigh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Test set Accuracy: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accuracy_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_predict&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The accuracy scores in our output say that the model correctly predicted the labels for about 54.75% of the training data, but only for about 32% of the test data. &lt;/p&gt;

&lt;p&gt;The large gap between training and test accuracy might suggest &lt;strong&gt;overfitting&lt;/strong&gt;, where the model learns the training data too well but struggles with new, unseen data.&lt;/p&gt;
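&lt;p&gt;One quick experiment you can already try is re-training the model with several values of &lt;em&gt;k&lt;/em&gt; and comparing the test accuracy of each. Here is a sketch on synthetic stand-in data (the real tutorial uses the teleCust features, so your exact numbers will differ):&lt;/p&gt;

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

# Synthetic stand-in for the customer data: 8 features, 4 classes
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           n_classes=4, random_state=4)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)

# Train one model per value of k and record its test accuracy
scores = {}
for k in range(1, 10):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    scores[k] = metrics.accuracy_score(y_test, model.predict(X_test))

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```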

&lt;p&gt;If time permits, in future articles I will discuss strategies to improve a model’s accuracy on new, unseen data.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion ✨
&lt;/h1&gt;

&lt;p&gt;I really hope this guide was easy to follow and I hope it helped you learn something new.&lt;/p&gt;

&lt;p&gt;Always keep experimenting and exploring new techniques. There’s always something new to learn in machine learning, and I’m excited to share more with you in future articles.&lt;/p&gt;

&lt;p&gt;Feel free to drop your thoughts or questions in the comments. I’m here to help and would love to hear about your experiences and progress.&lt;/p&gt;

&lt;p&gt;Happy coding 👨‍💻&lt;/p&gt;

&lt;p&gt;Bye for now! 😊&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>python</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Should Sensitive Systems Be Open Source ?</title>
      <dc:creator>Oussama Ben Sassi</dc:creator>
      <pubDate>Mon, 22 Jan 2024 13:16:05 +0000</pubDate>
      <link>https://dev.to/usamaa/should-sensitive-systems-be-open-source--4j56</link>
      <guid>https://dev.to/usamaa/should-sensitive-systems-be-open-source--4j56</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5tpev98h2qk7hu9ptda.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5tpev98h2qk7hu9ptda.jpg" alt="Being open source means anyone can independently review the code. If it was closed source, nobody could verify the security. I think it’s essential for a program of this nature to be open source." width="700" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other day, I was talking to a friend who's studying computer engineering. We were discussing my open-source project, a small online tool that helps people encrypt their data. During our conversation, my friend dropped a bombshell: &lt;strong&gt;"Your app shouldn't be open source."&lt;/strong&gt; He argued that because my web app's source code is public, anyone can examine it, find weak points, and potentially break into the encryption system.&lt;/p&gt;

&lt;p&gt;This led to a lively debate. I'm a big fan of open-source software and had lots to say in its defense. But my friend's comment got me thinking, and I decided to dig deeper into the topic.&lt;/p&gt;

&lt;p&gt;In this article, I'll explore whether sharing my encryption app's code with the public is a risk. I'll also look at the bigger question of whether it's better for sensitive systems like encryption apps to be open for everyone to see or kept private. &lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Closed Source for Sensitive Software &lt;/li&gt;
&lt;li&gt;Why Hiding System Design Isn't a Reliable Security Strategy&lt;/li&gt;
&lt;li&gt;Should Sensitive Systems Be Open Source ?&lt;/li&gt;
&lt;li&gt;
Defense in Depth principle

&lt;ul&gt;
&lt;li&gt;So how can I apply Defense in Depth approach in open-source model ?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;



&lt;h2&gt;
  
  
  Closed Source for Sensitive Software
&lt;/h2&gt;

&lt;p&gt;The term is self-explanatory, but let's look at its definition:&lt;/p&gt;

&lt;p&gt;Closed source refers to a type of software whose source code is &lt;strong&gt;not made available to the public or users&lt;/strong&gt;. In closed-source software, users are not allowed to copy, delete, or modify parts of the source code.&lt;/p&gt;

&lt;p&gt;Of course, closed-source software comes with several advantages, particularly when dealing with sensitive applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Security:&lt;/strong&gt; Closed-source software provides an extra layer of security by keeping the source code confidential, making it much harder for attackers &lt;strong&gt;to find weaknesses.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control:&lt;/strong&gt; With closed source, you get to decide who can use your software. This helps keep it away from unauthorized users and protects sensitive data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quick Issue Fixes:&lt;/strong&gt; If a problem is found in your software, you can fix it quickly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Focused Development:&lt;/strong&gt; Without constant public feedback, developers can focus on building a strong and secure foundation for your software.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But even with these advantages, it's worth questioning whether relying solely on the secrecy of a system's design is a reliable security strategy. This poses the question:&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Hiding System Design Isn't a Reliable Security Strategy
&lt;/h2&gt;

&lt;p&gt;Imagine you have a secret, like a special hiding place for your favorite toy. You don't tell anyone about it, and you hope that because it's a secret, nobody will find your toy.&lt;/p&gt;

&lt;p&gt;Now, the problem with this approach is that if someone accidentally stumbles upon your hiding place or figures out your secret, they can easily take your toy. So, it's not a reliable way to keep things safe.&lt;/p&gt;

&lt;p&gt;So, when a system relies on secrecy rather than the &lt;strong&gt;strength of its security measures&lt;/strong&gt;, it hides vulnerabilities and weaknesses from detection. This lack of transparency means that issues may remain undiscovered until &lt;strong&gt;a breach occurs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A determined attacker (like me, lol) can go to great lengths to discover hidden secrets. Just like someone might search diligently for your hidden toy, hackers can use various techniques to uncover hidden system details. Once they do, &lt;strong&gt;the security crumbles.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, to sum it up, security through obscurity is discouraged because it relies on secrecy alone, &lt;strong&gt;which can be easily breached if discovered.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, does this mean&lt;/p&gt;

&lt;h2&gt;
  
  
  Should Sensitive Systems Be Open Source ?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft16l1vx1k28q253ou9xm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft16l1vx1k28q253ou9xm.jpg" alt="one_does_not_simply_open_source" width="651" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a complex question, and there isn't a simple answer. However, I've done my best to provide a thoughtful response.&lt;/p&gt;

&lt;p&gt;Let's begin by briefly comparing the advantages and disadvantages of open-sourcing sensitive systems, starting with the advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Community Auditing:&lt;/strong&gt; With an open-source approach, a global community of diverse developers and security professionals can actively review and audit the code, resulting in a &lt;strong&gt;more robust and secure system.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Faster Issue Resolution:&lt;/strong&gt; In an open-source model, if a vulnerability is discovered, it can be addressed quickly by the community.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transparency:&lt;/strong&gt; Open-source software allows users to see how the software works, which can build trust and confidence in its functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unfortunately, there are disadvantages, with one of the biggest being:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Target for Attackers:&lt;/strong&gt; Exposing the source code of sensitive software can make it easier for attackers to find and exploit vulnerabilities, so it requires a proactive and vigilant community to stay ahead of potential threats.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Potential Misuse:&lt;/strong&gt; Open-sourcing sensitive software, like encryption apps, exposes the code to &lt;strong&gt;potential misuse by attackers.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintenance Challenges:&lt;/strong&gt; Maintaining a large open-source project can be challenging: it requires coordinating the work of a diverse group of developers, ensuring continuous updates, responding to issues, and managing community involvement, all of which can be resource-intensive.&lt;br&gt;
This can be particularly difficult for sensitive software, where security and reliability are paramount.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, we understand that open-sourcing software can be both an &lt;strong&gt;advantage&lt;/strong&gt; and a &lt;strong&gt;disadvantage&lt;/strong&gt; for security. Therefore, there is &lt;strong&gt;no direct answer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you have sensitive software and are unsure whether to open source it, consider factors like &lt;strong&gt;the level of community support available&lt;/strong&gt;, &lt;strong&gt;the maturity of the software&lt;/strong&gt; (is the software still under development, or is it stable?), and &lt;strong&gt;the capabilities of the development team&lt;/strong&gt;. Think carefully before making your decision.&lt;/p&gt;

&lt;p&gt;Wait, the article isn't finished yet! 😃&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawexip8036xb4gpfw58w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawexip8036xb4gpfw58w.jpg" alt="open_source_all_the_things" width="320" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because, as an open-source lover and advocate, I encourage open sourcing. So let me introduce:&lt;/p&gt;

&lt;h2&gt;
  
  
  Defense in Depth principle
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu777tnre9x2ylaz3c84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu777tnre9x2ylaz3c84.png" alt="defense_in_depth" width="200" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Defense in Depth principle is a cybersecurity strategy that involves &lt;strong&gt;implementing multiple layers of security measures to protect a system&lt;/strong&gt;. Rather than relying on a single safeguard, multiple layers work together to create a more resilient defense against potential threats.&lt;br&gt;
These layers include &lt;strong&gt;technology&lt;/strong&gt;, &lt;strong&gt;procedures&lt;/strong&gt;, and &lt;strong&gt;people&lt;/strong&gt;. The goal is to &lt;strong&gt;prevent&lt;/strong&gt;, &lt;strong&gt;detect&lt;/strong&gt;, and &lt;strong&gt;mitigate&lt;/strong&gt; attacks at different layers of the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  So how can I apply the Defense in Depth approach in an open-source model?
&lt;/h3&gt;

&lt;p&gt;We can define the layers of an open-source software as the Community Layer, Operational or Procedural Layer, and the Code Layer.&lt;/p&gt;

&lt;p&gt;1- &lt;strong&gt;Community Layer:&lt;/strong&gt;&lt;br&gt;
As mentioned earlier, the community can conduct &lt;strong&gt;code reviews&lt;/strong&gt; and &lt;strong&gt;security audits&lt;/strong&gt; to identify potential issues and improve overall security.&lt;br&gt;
So encourage your community members &lt;strong&gt;to prioritize writing secure code&lt;/strong&gt; and actively report any vulnerabilities they discover.&lt;br&gt;
2- &lt;strong&gt;Operational Layer:&lt;/strong&gt;&lt;br&gt;
Conduct regular security audits and use automated tools like static code analyzers and dependency scanners to catch common vulnerabilities early in the development process, reducing the risk of exploitation.&lt;br&gt;
3- &lt;strong&gt;Code Layer:&lt;/strong&gt;&lt;br&gt;
Learn how to write &lt;strong&gt;secure code&lt;/strong&gt; by applying Secure Coding Practices like input validation, error handling and logging, encrypting sensitive data, and much more.&lt;/p&gt;
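&lt;p&gt;As a tiny sketch of the Code Layer practices above (input validation plus error handling and logging), here is what a conservative validator might look like. All names here are hypothetical, purely for illustration:&lt;/p&gt;

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("example-app")  # hypothetical logger name

def validate_username(name):
    # Input validation: allow only a conservative character set and
    # a bounded length, so malformed or injection-style input is rejected.
    if not name or len(name) > 32 or not name.replace("_", "").isalnum():
        # Error handling and logging: record the rejection before failing.
        log.warning("rejected invalid username: %r", name)
        raise ValueError("invalid username")
    return name

print(validate_username("alice_01"))  # passes validation
```

&lt;p&gt;The exact rules will differ per system; the point is that each layer (here, the code itself) fails safely and leaves a trail.&lt;/p&gt;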

&lt;p&gt;Another crucial aspect: In case of an attack, develop an &lt;strong&gt;Incident Response Plan&lt;/strong&gt; where you define the steps to take in the event of a security incident to mitigate potential damage.&lt;/p&gt;

&lt;p&gt;By applying these measures, you can confidently open source your software and enjoy the benefits of community collaboration, security, rapid development, continuous learning and skill development, and much more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The article may seem simple to write, but trust me, it was a bit challenging due to the complexity of the topic. The decision to open source a project involves &lt;strong&gt;careful consideration&lt;/strong&gt;. However, you can always embrace &lt;strong&gt;the Defense in Depth strategy&lt;/strong&gt; for securing sensitive systems when opting for open source.&lt;/p&gt;

&lt;p&gt;Thank you for investing your precious time in reading my article. I truly appreciate your engagement, whether through questions, critiques, or sharing your opinions and ideas in the comments. I look forward to hearing your perspectives below. Happy coding and securing! 😊&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>opensource</category>
      <category>security</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Every change breaks someone's workflow</title>
      <dc:creator>Oussama Ben Sassi</dc:creator>
      <pubDate>Sun, 01 Oct 2023 11:53:31 +0000</pubDate>
      <link>https://dev.to/usamaa/every-change-breaks-someones-workflow-2052</link>
      <guid>https://dev.to/usamaa/every-change-breaks-someones-workflow-2052</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa73o0gryvsqzz7wqgjcd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa73o0gryvsqzz7wqgjcd.jpg" alt=" " width="639" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During my recent internship, I found myself working on the company's human resource management system. They trusted me enough to give me access to the whole project and even allowed me to make changes directly to the live system. One seemingly simple task, however, turned into a valuable lesson about the unintended consequences of software changes, a lesson perfectly encapsulated by Hyrum's Law.&lt;/p&gt;

&lt;p&gt;In this article, I'll share with you my experience and how I came to understand Hyrum's Law while working on this HR system.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;One of my tasks involved the section responsible for handling leave and time-off requests. The system had a specific behavior: when an employee applied for leave while there was already a pending request, it displayed an error message to the user, saying there was a pending leave request. Simultaneously, the system sent an error code and a message to the admin dashboard, informing them of the pending request and prompting them to accept or reject it.&lt;/p&gt;

&lt;p&gt;As I started refactoring the code, I realized that the system was in a bit of a mess. It was outdated, and the prevailing mindset was "if it works, don't touch it." This mindset, I soon discovered, was not always correct, and I will write an article about it. Anyway, during the refactoring, I made a seemingly small change to the admin's notification message.&lt;/p&gt;

&lt;p&gt;The original message was vague, merely indicating a pending leave request without mentioning the employee's name or the request date. In an attempt to make it clearer and more professional, I added these details to the message. Little did I know, &lt;strong&gt;this small change would lead to significant issues.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Every Change Breaks Someone's Workflow
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoehmmq0hib3ra19p2il.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoehmmq0hib3ra19p2il.png" alt=" " width="278" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It turned out that one of the admins responsible for managing leave requests rarely logged into the system. Instead, he had written a script that sent him an email notification whenever a leave request was pending. The catch was that his script relied on the specific text in the message, not the error code. So, when I changed the message text, he stopped receiving emails and, consequently, had no idea about pending requests.&lt;/p&gt;
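&lt;p&gt;To make the fragility concrete, here is a hypothetical sketch of the two ways the admin's script could have detected a pending request. The field names and the &lt;strong&gt;LEAVE_PENDING&lt;/strong&gt; code are invented for illustration, not taken from the actual system:&lt;/p&gt;

```python
# Hypothetical response the HR system might return for a duplicate request.
response = {
    "code": "LEAVE_PENDING",  # stable, machine-readable error code
    "message": "Pending leave request from John Doe (2023-09-28)",
}

# Fragile: depends on the exact wording, so any copy edit breaks it.
fragile = "there was a pending leave request" in response["message"].lower()

# Robust: the error code is part of the contract, so wording can change freely.
robust = response["code"] == "LEAVE_PENDING"

print(fragile, robust)
```

&lt;p&gt;Had the script matched on the code instead of the prose, my rewording of the message would have been invisible to it.&lt;/p&gt;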

&lt;h2&gt;
  
  
  Fixing the Problem
&lt;/h2&gt;

&lt;p&gt;Around two weeks after my change, the admin finally noticed the issue and reported it to my supervisor. Fortunately, I had been diligently maintaining a changelog file for every modification I made. With the help of this changelog file, my supervisor quickly identified the problem and updated the admin's code to match the new message format. Finally, everything started working smoothly again.&lt;/p&gt;

&lt;h1&gt;
  
  
  Hyrum's Law in Action
&lt;/h1&gt;

&lt;p&gt;This experience led me to explore how others manage to work on complex systems without inadvertently causing disruptions. That's when I stumbled upon Hyrum's Law, which perfectly described what I had gone through.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The law says that:&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviours of your system will be depended on by somebody.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In simple terms, Hyrum's Law states that when a sufficient number of users rely on an aspect of your system, regardless of what your official documentation or promises say, &lt;strong&gt;all observable behaviors of your system will become essential to someone.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding My Situation Through Hyrum's Law
&lt;/h2&gt;

&lt;p&gt;Hyrum's Law helped me make sense of what happened during my internship. Here's how it worked in my situation:&lt;/p&gt;

&lt;p&gt;The admin had, without formal documentation, established a dependency on the message text. He had come to expect the message in a specific format, and my modification broke that unwritten contract. This was the essence of Hyrum's Law in action: &lt;strong&gt;the observable behavior of the system, regardless of official documentation, became critical to someone's workflow.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From my experience, I learned that you should be very careful when you make changes: &lt;strong&gt;you should understand how things are connected and communicate your changes to others.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion and Advice
&lt;/h1&gt;

&lt;p&gt;My internship and this exact experience taught me some important lessons, and I want to share them with you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Write Down What You Change: Keep a list of all the things you change in a system. It can help you fix problems later, like it did for me.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Respect What's Already There: Remember that people might be using things you didn't even know were important. Be careful when you change stuff.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Talk to Your Team:&lt;/strong&gt; Don't keep your changes a secret. Talk to your team and let them know what you're doing. They might have good ideas or warn you about problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test Before You Make It Official: Try out your changes in a safe place first. That way, you can see if they cause any problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Balance New and Old: It's good to make things better, but don't break what's already working. Find a balance between making new stuff and keeping the old stuff working.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hyrum's Law reminds us that even &lt;strong&gt;small changes can have big consequences.&lt;/strong&gt; So, be careful and think about how your changes will affect everything around them. This way, you can make sure everything keeps working smoothly.&lt;/p&gt;

&lt;p&gt;That's it for today! 😊 I hope you found this article helpful and gained some useful insights. If you have questions or if you've faced similar challenges, don't hesitate to share your thoughts in the comments below. I'm excited to read your comments and eager to hear any suggestions you might have for making this article even better. Happy coding!&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Your process is never faster than its slowest part.</title>
      <dc:creator>Oussama Ben Sassi</dc:creator>
      <pubDate>Tue, 26 Sep 2023 13:36:59 +0000</pubDate>
      <link>https://dev.to/usamaa/your-process-is-never-faster-than-its-slowest-part-1853</link>
      <guid>https://dev.to/usamaa/your-process-is-never-faster-than-its-slowest-part-1853</guid>
      <description>&lt;h1&gt;
  
  
  The Problem
&lt;/h1&gt;

&lt;p&gt;The other day, I was working on an old project of mine on GitHub. This project had a lot of tasks to do, but the whole program was designed to run one task at a time, one after the other. It was slow, and I wasn't happy with how I had programmed it in the past (maybe because I was new to programming back then). So, I decided to break the program into smaller tasks, hoping to make it faster and improve its performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Before and After Measurement
&lt;/h2&gt;

&lt;p&gt;Before making any changes, I measured how long it took for the program to finish its job. After I finished optimizing it, I measured the time again, and I was surprised to find that the execution time remained the same both before and after the changes. Why did this happen?&lt;/p&gt;

&lt;h2&gt;
  
  
  Amdahl's Law: The Key Insight
&lt;/h2&gt;

&lt;p&gt;As I looked into why this happened, I came across something called &lt;strong&gt;"Amdahl's Law."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;From Wikipedia:&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, let's simplify this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amdahl's Law&lt;/strong&gt; tells us that the overall speedup we can achieve by parallelizing a program is &lt;strong&gt;limited by the portion of the program that cannot be parallelized&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So let's take a simple example:&lt;/p&gt;

&lt;p&gt;imagine you have a race with different parts, like running and swimming. Amdahl's Law is like saying, "Your overall race time can never be faster than your slowest part of the race." In other words, even if you're really fast at running, if you're slow at swimming, your race won't be faster overall&lt;/p&gt;

&lt;h2&gt;
  
  
  The Parallelism Pitfall
&lt;/h2&gt;

&lt;p&gt;In my case, the program was initially entirely sequential, meaning it had no inherent parallelism. When I divided it into smaller tasks, I introduced parallelism, which should theoretically lead to faster execution. However, here's the catch: Amdahl's Law reminds us that if a significant portion of the program remains sequential and cannot be parallelized, the speedup will be limited.&lt;/p&gt;

&lt;h2&gt;
  
  
  Confronting Sequential Bottlenecks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgs8m2x1ukl0dza2cpasx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgs8m2x1ukl0dza2cpasx.png" alt=" " width="730" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my scenario, I found that a part of my program couldn't be broken down into smaller parallel tasks due to dependencies between tasks. This non-parallelizable portion acted as a bottleneck, preventing the program from running significantly faster, no matter how well-optimized the parallelizable parts were.&lt;/p&gt;

&lt;p&gt;So, Amdahl's Law helped me understand that simply breaking down the program into smaller tasks and parallelizing it wouldn't yield the desired performance improvements. To make the program faster, I needed to identify and address the sequential bottlenecks that were holding it back. In essence, Amdahl's Law taught me that true performance gains &lt;strong&gt;require a comprehensive understanding of a program's limitations&lt;/strong&gt;, not just a superficial attempt at parallelism.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, let's sum up the important lessons I've learned about parallel programming in simpler terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Before you try to make your software faster with parallelism, check whether the program can be divided into smaller independent tasks that can be executed concurrently. If it can, the program can take advantage of parallel computing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If your program can't be split into independent tasks, then parallelism won't help much. Some programs just can't be made parallel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify the performance bottlenecks: they often hide in the sequential portions of your program and, if left unaddressed, can limit your optimization efforts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need a deep understanding of your program's limitations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remember that parallelism isn't always the answer. If your program is small and simple, it might be better to keep it sequential. Parallel programming can be a lot more complicated, with threads and processes to manage, and tricky problems to solve.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all for now! 😊 I hope you've gained some valuable insights from this article. If you have any questions or if you've encountered similar challenges, please feel free to share them in the comments. I'm eager to read and respond to your comments, and I'm open to any suggestions you may have to improve this article. Happy coding!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>softwaredevelopment</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
