<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deepesha Burse</title>
    <description>The latest articles on DEV Community by Deepesha Burse (@deepeshaburse).</description>
    <link>https://dev.to/deepeshaburse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F705323%2Fa61866d5-3ca6-419c-8c8f-c7c32dfbf438.png</url>
      <title>DEV Community: Deepesha Burse</title>
      <link>https://dev.to/deepeshaburse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deepeshaburse"/>
    <language>en</language>
    <item>
      <title>What is eBPF?</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Sun, 19 Feb 2023 19:09:51 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/what-is-ebpf-2bkb</link>
      <guid>https://dev.to/deepeshaburse/what-is-ebpf-2bkb</guid>
      <description>&lt;p&gt;Before diving into any concepts, let us first get the full form of eBPF out of the way. eBPF stands for,&lt;br&gt;
e   - extended &lt;br&gt;
B   - Berkeley &lt;br&gt;
P   - Packet &lt;br&gt;
F   - Filter&lt;/p&gt;

&lt;p&gt;This expansion, however, does not really help us understand what eBPF is, other than that it helps with filtering network packets (and since it is “extended”, what all does that include?), which is why eBPF is now treated as a standalone term. So why is it one of the hottest technology areas in modern infrastructure computing?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;eBPF allows us to load the kernel of the operating system with custom code dynamically. That means it can extend or even modify the way the kernel behaves.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijp6vexvawxg04yb7mzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijp6vexvawxg04yb7mzi.png"&gt;&lt;/a&gt;&lt;br&gt;Operation of eBPF
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Kernel
&lt;/h2&gt;

&lt;p&gt;The Linux kernel is the software layer that sits between your applications and the hardware on which they execute. Programs run in an unprivileged layer known as user space, which has no direct access to hardware. Instead, an application uses the system call (syscall) interface to request that the kernel operate on its behalf. This hardware access can include reading and writing files, sending and receiving network data, or simply accessing memory. The kernel is also in charge of coordinating concurrent processes, which allows multiple programs to operate at the same time. We need over a hundred system calls just to print &lt;em&gt;hello&lt;/em&gt; from a file using &lt;em&gt;cat&lt;/em&gt;!&lt;/p&gt;
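&lt;p&gt;A small sketch of what this looks like from user space: each of the following Python calls is a thin wrapper over a single kernel system call.&lt;/p&gt;

```python
import os

# User-space code cannot touch hardware directly; it asks the kernel via
# system calls. Each os.* call below maps to one syscall.
r, w = os.pipe()           # pipe2: kernel creates an in-kernel byte channel
os.write(w, b"hello")      # write: kernel copies the bytes in
data = os.read(r, 5)       # read:  kernel copies the bytes back out
os.close(r)                # close: kernel releases each descriptor
os.close(w)
print(data.decode())       # prints: hello
```

&lt;p&gt;Tools like &lt;em&gt;strace&lt;/em&gt; make the same point empirically by listing every syscall a process issues.&lt;/p&gt;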

&lt;p&gt;The Linux kernel is very complex and comprises around 30 million lines of code. If we ever want to introduce a new function to the kernel, we need familiarity with this codebase, so unless you’re a kernel developer, this poses a serious challenge. Even if we come up with an amazing solution, it takes around 5-8 years for it to actually reach most users. There may be new releases of the Linux kernel every 2-3 months, but most of us don’t use the Linux kernel directly; we use Linux distributions such as Ubuntu, Fedora, Debian, etc. These distributions ship older versions of the Linux kernel, which is why our feature only reaches the end user after a few years.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Alternative: Kernel Modules
&lt;/h2&gt;

&lt;p&gt;The Linux kernel was built to support kernel modules that can be loaded and unloaded as needed. If you wish to update or enhance kernel behavior, developing a module is one option. The biggest challenge is that writing one still requires kernel programming expertise, and if the module’s code crashes, it takes the entire machine, and every process running on it, down with it. Security is another major factor: a kernel module could contain malicious code or include vulnerabilities that an attacker might exploit. To use any kernel module, we need to be confident that it is “safe to run”. &lt;/p&gt;

&lt;p&gt;eBPF offers a very different approach to safety: the eBPF verifier, which ensures that an eBPF program is only loaded if it’s safe to run.&lt;/p&gt;

&lt;h2&gt;
  
  
  eBPF Verifier and Security
&lt;/h2&gt;

&lt;p&gt;Because eBPF allows us to run arbitrary code in the kernel, there must be a method in place to ensure that it is safe to run, will not crash users' machines, and will not jeopardize their data. That method is the eBPF verifier.&lt;/p&gt;

&lt;p&gt;The verifier examines an eBPF program to confirm that it will always terminate safely and within a bounded number of instructions, regardless of input. Verification also ensures that eBPF programs only access memory that they are authorized to access.&lt;/p&gt;

&lt;p&gt;Of course, writing a malicious eBPF program is still possible. If data can be observed for legitimate reasons, it can also be observed for illegitimate ones. Only load eBPF programs from sources you trust, and grant eBPF tool management permissions only to people you would trust with root access.&lt;/p&gt;

&lt;h2&gt;
  
  
  eBPF Programs
&lt;/h2&gt;

&lt;p&gt;eBPF programs can be dynamically loaded into and unloaded from the kernel. Once associated with an event, a program is triggered every time that event occurs, regardless of what caused it. If you attach a program to the syscall for opening files, for example, it will be activated anytime any process tries to open a file. It makes no difference whether or not that process was already running when the program was loaded.&lt;/p&gt;

&lt;p&gt;This leads to one of the most significant advantages of observability or security tooling that employs eBPF: &lt;em&gt;it instantaneously gains visibility into everything that is happening on the computer.&lt;/em&gt;&lt;/p&gt;
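&lt;p&gt;Real eBPF programs are attached through kernel hooks (for example via toolkits like bcc), but the attach-to-event model itself can be sketched in plain Python. Everything below, the event name, the &lt;em&gt;attach&lt;/em&gt; and &lt;em&gt;fire&lt;/em&gt; functions, is a toy illustration of the model, not an eBPF API.&lt;/p&gt;

```python
# Toy sketch of the attach-to-event model (NOT real eBPF): handlers are
# registered against an event name, and every later occurrence of that
# event triggers them, no matter which "process" raises it.
hooks = {}

def attach(event, program):
    """Register a handler to run whenever `event` fires."""
    hooks.setdefault(event, []).append(program)

def fire(event, ctx):
    """Deliver an event to every attached handler, collecting their output."""
    return [program(ctx) for program in hooks.get(event, [])]

# Attach a "program" to the hypothetical file-open event...
attach("sys_open", lambda ctx: f"open: {ctx['path']} by pid {ctx['pid']}")

# ...and it fires for any process, including ones started before the attach.
events = fire("sys_open", {"path": "/etc/passwd", "pid": 4242})
print(events[0])  # prints: open: /etc/passwd by pid 4242
```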

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o3gmnasz6o4f59vjgdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o3gmnasz6o4f59vjgdn.png"&gt;&lt;/a&gt;&lt;br&gt;Source: What is eBPF? by Liz Rice
  &lt;/p&gt;

&lt;h2&gt;
  
  
  eBPF in Cloud Native Environments
&lt;/h2&gt;

&lt;p&gt;Before we get into the details of why eBPF is so widely used in cloud-native environments, let us be clear on one thing: there is only one kernel per machine (or virtual machine), and all the containers running on it share that same kernel. &lt;/p&gt;

&lt;p&gt;Containers may be grouped into different pods, but they all still share the same kernel. Whenever the application code in those pods wants to do anything interesting, like accessing the network or creating more containers, the kernel gets involved. The kernel is therefore aware of everything happening in every application running on that node, which means we can write eBPF programs that hook into the kernel to observe, and possibly even modify, the behavior of all of our applications. This is very powerful. We see it used in Pixie, a CNCF sandbox project that helps extract stack information from running applications, and in Cilium, a CNCF graduated project that provides eBPF-enabled networking, observability, and security for cloud native environments. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this blog gave you a small introduction to what eBPF is. For a more technical understanding, you may refer to ebpf.io, and for beginner guides you can refer to Liz Rice’s GitHub repository.&lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://isovalent.com/ebpf/#:~:text=What%20is%20eBPF%3F,the%20way%20the%20kernel%20behaves." rel="noopener noreferrer"&gt;What is eBPF? by Liz Rice&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=5t7-HM2jlTM&amp;amp;t=1993s" rel="noopener noreferrer"&gt;WTF are eBPF &amp;amp; Cilium? with Liz Rice and Christopher Luciano&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>kubernetes</category>
      <category>linux</category>
      <category>networking</category>
    </item>
    <item>
      <title>Cloud Native Networking Using eBPF</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Sun, 12 Feb 2023 18:16:34 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/cloud-native-networking-using-ebpf-4p9h</link>
      <guid>https://dev.to/deepeshaburse/cloud-native-networking-using-ebpf-4p9h</guid>
      <description>&lt;h2&gt;
  
  
  Cloud Native? What’s that?
&lt;/h2&gt;

&lt;p&gt;Nowadays we expect software to be up 24/7, to release frequent updates that keep up with competitors, to scale up and down as required, and much more. The limitations of monolithic architectures prevent us from keeping up with these ever-growing client needs. &lt;/p&gt;

&lt;p&gt;The above mentioned requirements, combined with the availability of new platforms on which we run software, have directly resulted in the emergence of a new architectural style for software: cloud-native software. We focus on making the applications more stable and resilient than the infrastructure we run them on. There is a growing awareness of the benefits of deploying loosely coupled microservices using containers, most of which are orchestrated with Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cloud-native software is highly distributed, must operate in a constantly changing environment, and is itself constantly changing.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cloud Native Networking?
&lt;/h2&gt;

&lt;p&gt;Cloud native architectures have specific network requirements, the majority of which are not met by traditional network infrastructure. Containers are altering not only how applications are developed but also how applications communicate with one another. The network is critical for these production deployments and must itself have cloud-native characteristics: intelligent automation, elastic scalability, and security are all necessary. And because containers are so dynamic, there is a greater need in this environment for visibility and observability.&lt;/p&gt;

&lt;p&gt;Cloud Native Networking allows containers to communicate with other containers or hosts to share resources and data. Typically, it is based on the standards set by the Container Network Interface (CNI). The Container Network Interface was designed as a simple contract between network plugins and the container runtime. Many projects, including Apache Mesos, Kubernetes, and rkt, have adopted CNI.&lt;/p&gt;

&lt;p&gt;CNI has the following key characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CNI defines the desired input and output from CNI network plugins using a JSON schema.&lt;/li&gt;
&lt;li&gt;CNI allows you to run multiple plugins in a container that connects networks driven by various plugins.&lt;/li&gt;
&lt;li&gt;When CNI plugins are invoked, CNI describes networks in configuration JSON files and instantiates them as new namespaces.&lt;/li&gt;
&lt;li&gt;CNI plugins can support the addition and removal of container network interfaces to and from networks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;A project that implements CNI specs is a CNI Plugin.&lt;/em&gt;&lt;/p&gt;
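&lt;p&gt;As an illustration of the JSON configuration CNI describes, a minimal configuration for the reference &lt;em&gt;bridge&lt;/em&gt; plugin might look like the following (the network name, bridge name, and subnet here are placeholder values):&lt;/p&gt;

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

&lt;p&gt;The container runtime hands this file to the plugin named in &lt;em&gt;type&lt;/em&gt;, which creates the container interface and asks the &lt;em&gt;ipam&lt;/em&gt; plugin for an address.&lt;/p&gt;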

&lt;h2&gt;
  
  
  More on CNI Plugins
&lt;/h2&gt;

&lt;p&gt;CNI plugins fall under 3 major categories, i.e, routed networks, VXLAN overlays and additional features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NyZ9dS1---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ji7ifsji6vdsmge3idg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NyZ9dS1---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ji7ifsji6vdsmge3idg7.png" width="820" height="583"&gt;&lt;/a&gt;&lt;br&gt;Categories of CNI Plugins
  &lt;/p&gt;

&lt;p&gt;Routed Networks:&lt;br&gt;
A typical implementation in this category is Kube-router. It works by installing routes on your hosts that point into your containers and then propagating those&lt;br&gt;
routes throughout the cluster. We can use Project Calico if we require support for more advanced features (like network policy in Kubernetes).&lt;/p&gt;

&lt;p&gt;VXLAN Overlays:&lt;br&gt;
These, too, connect containers and help them communicate with one another. The simplest implementation can be seen in flannel, but if we want a VXLAN-based CNI plugin that supports advanced features, like using a gossip protocol to connect all the nodes and share information about the nodes inside the cluster, we can use Weave Net.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Cilium?
&lt;/h2&gt;

&lt;p&gt;As we can see, Cilium spans all three major categories of CNI plugins. It offers a great deal of flexibility and features, such as Layer 7 network policy (the application layer where microservices communicate with one another), policy decisions based on application-level traffic, and improved observability.&lt;/p&gt;

&lt;p&gt;Cilium implements the CNI specification using eBPF and XDP: XDP lets Cilium hook in as close to the physical network interface as possible, while BPF programs allow highly efficient packet processing inside the kernel. Cilium loads endpoint/IP mappings into BPF maps so that BPF programs can access them quickly in the kernel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cilium Component Overview
&lt;/h2&gt;

&lt;p&gt;Without getting into much detail, let us trace the flow of how Cilium works. Suppose, in our case, we take Kubernetes as the input to Cilium’s policy repository. This input then goes to the Cilium daemon, where it is compiled into bytecode. The generated code is injected into the BPF programs running in the kernel, which connect our physical interface to our containers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zqEsFvr6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcjjp2r6c4n3gtv1u0yl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zqEsFvr6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcjjp2r6c4n3gtv1u0yl.jpg" width="880" height="784"&gt;&lt;/a&gt;&lt;br&gt;Source: &lt;a href="https://docs.cilium.io/en/stable/concepts/overview/"&gt;https://docs.cilium.io/en/stable/concepts/overview/&lt;/a&gt;&lt;br&gt;

  &lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=yXm7yZE2rk4&amp;amp;t=342s"&gt;Talk on "Cloud Native Networking with eBPF" by Raymond Maika&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oreilly.com/library/view/cloud-native-patterns/9781617294297/"&gt;Cloud Native Patterns by Cornelia Davis &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.netevents.org/demystifying-cloud-native-networking/"&gt;Demystifying Cloud-Native Networking&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cisco.com/c/en/us/solutions/service-provider/industry/cable/cloud-native-network-functions.html"&gt;Cloud-Native Network Functions&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>cilium</category>
      <category>cni</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Introduction to CNN</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Fri, 15 Apr 2022 18:13:54 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/introduction-to-cnn-1i03</link>
      <guid>https://dev.to/deepeshaburse/introduction-to-cnn-1i03</guid>
      <description>&lt;p&gt;Convolutional neural network (ConvNet/CNN) is an algorithm used particularly in Computer Vision with Deep Learning and classification. It is a type of artificial neural network. We’ve used too many terms, let’s break it down one by one.&lt;/p&gt;

&lt;p&gt;Computer Vision - Computer vision is a branch of artificial intelligence (AI) that allows computers and systems to extract useful information from digital photos, videos, and other visual inputs, as well as to conduct actions or make suggestions based on that data.&lt;/p&gt;

&lt;p&gt;Deep Learning - Deep learning is a machine learning approach that allows computers to learn by example in the same way that people do.&lt;/p&gt;

&lt;p&gt;Classification - A predictive modelling task in which a class label is predicted for a given example of input data.&lt;/p&gt;

&lt;p&gt;Artificial Neural Network - A computing system inspired by the biological neural networks that make up the human brain. Like a brain, an artificial neural network contains neurons that are coupled to each other across the various levels of the network. These neurons are called nodes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ywfoeA0h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5sbm6m18vzzscwrzbkst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ywfoeA0h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5sbm6m18vzzscwrzbkst.png" alt="Neural Network" width="275" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do CNNs work?
&lt;/h2&gt;

&lt;p&gt;CNNs are distinguished from other neural networks by their superior performance with image, speech, or audio signal inputs. There are 3 main layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Convolutional layer&lt;/li&gt;
&lt;li&gt; Pooling layer&lt;/li&gt;
&lt;li&gt; Fully connected layer (FC layer)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The CNN becomes more complicated with each layer, detecting larger areas of the picture. Earlier layers concentrate on basic elements like colors and borders. As the visual data travels through the CNN layers, it begins to distinguish bigger components or features of the item, eventually identifying the target object.&lt;/p&gt;

&lt;p&gt;Let us dive deeper into what happens in each layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Convolutional Layer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7iS8G9sX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dt3i8j08otfnouoph2ic.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7iS8G9sX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dt3i8j08otfnouoph2ic.gif" alt="Convolutional layer - 1" width="526" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the major computation occurs in this layer. We require 3 components: the input data, a filter, and a feature map. The filter is also known as a kernel or feature detector. It moves across receptive fields of the image to check whether a particular feature is present. This process is called convolution.&lt;/p&gt;

&lt;p&gt;Let us assume that the input is a color image: a 3D matrix of pixels with height, width, and depth, where the depth corresponds to the three RGB channels. The feature detector is a 2D array of weights which represents a part of the image. The filter is typically a 3x3 matrix and determines the size of the receptive field.&lt;/p&gt;

&lt;p&gt;The filter is applied to an area of the image, and a dot product is calculated between the input pixels and the filter. This dot product is then fed into the output array. The filter then shifts by a stride and repeats the process until the entire input image has been covered. The final output of this series of dot products is known as a feature map, activation map, or convolved feature.&lt;/p&gt;
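&lt;p&gt;The sliding-window process described above can be sketched in a few lines of NumPy (a minimal illustration; real frameworks add padding, channels, and many filters):&lt;/p&gt;

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide `kernel` over `image`, taking a dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1   # output height
    ow = (iw - kw) // stride + 1   # output width
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            feature_map[i, j] = np.sum(patch * kernel)  # dot product
    return feature_map
```

&lt;p&gt;For a 4x4 image and a 2x2 filter with stride 1, this produces a 3x3 feature map, matching the shrinking effect described above.&lt;/p&gt;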

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bWCGeHoD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kk426frrqlht6yqu40m2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bWCGeHoD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kk426frrqlht6yqu40m2.jpg" alt="Convolutional Layer - 2" width="880" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Pooling Layer
&lt;/h3&gt;

&lt;p&gt;This layer, also known as downsampling, conducts dimensionality reduction, reducing the number of parameters in the input. It decreases the computational power required to process the data. Furthermore, it aids in extracting dominant features, thus keeping the training of the model effective.&lt;/p&gt;

&lt;p&gt;Similar to the convolutional layer, it sweeps the filter across the entire input data, but the difference is this filter does not have weights. Instead, the kernel uses aggregation functions on values within the receptive field, populating the output array. &lt;/p&gt;

&lt;p&gt;There are 2 types of pooling:&lt;br&gt;
a.  Max Pooling: The filter returns the maximum value from the portion of the image covered by the kernel. This type is used more often.&lt;br&gt;
b.  Average Pooling: The filter returns the average of all the values from the portion of the image covered by the kernel. &lt;/p&gt;
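&lt;p&gt;Max pooling can be sketched the same way as convolution: the window slides across the input, but it takes an aggregate (here the maximum) instead of a weighted sum.&lt;/p&gt;

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Downsample `x` by taking the max over each size-by-size window."""
    h, w = x.shape
    oh = (h - size) // stride + 1
    ow = (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out
```

&lt;p&gt;Swapping &lt;em&gt;.max()&lt;/em&gt; for &lt;em&gt;.mean()&lt;/em&gt; gives average pooling.&lt;/p&gt;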

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aK-AS0Jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/368zd08a9qp9rd0cl4bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aK-AS0Jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/368zd08a9qp9rd0cl4bm.png" alt="Pooling Layer" width="596" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Through the method above, we have successfully enabled the model to grasp the features. After that, we flatten the final output and feed it into a standard neural network for classification.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Fully Connected Layer
&lt;/h3&gt;

&lt;p&gt;In partially connected layers, the pixel values of the input image are not directly connected to the output layer. In this layer, however, each node in the output layer connects directly to a node in the previous layer. This layer performs the task of classification based on the features extracted by the previous layers and their various filters. &lt;/p&gt;

&lt;p&gt;Convolutional layers and pooling layers usually use ReLU activation functions, while this layer usually uses the softmax activation function to classify inputs properly. Softmax returns a probability between 0 and 1 for each class.&lt;/p&gt;
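&lt;p&gt;As a small illustration (not tied to any particular framework), softmax turns the FC layer’s raw scores into class probabilities that sum to 1:&lt;/p&gt;

```python
import numpy as np

def softmax(z):
    """Map raw scores to probabilities in (0, 1) that sum to 1."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()
```

&lt;p&gt;The class with the largest raw score always receives the highest probability.&lt;/p&gt;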

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EdFNJHgG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c9uiy2ks9wjgrg84xtz4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EdFNJHgG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c9uiy2ks9wjgrg84xtz4.jpeg" alt="FC Layer" width="802" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope this gave you a brief introduction on what Convolutional Neural Networks are. :)&lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://www.ibm.com/cloud/learn/convolutional-neural-networks"&gt;Convolutional Neural Networks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53"&gt;A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://insightsimaging.springeropen.com/articles/10.1007/s13244-018-0639-9"&gt;Convolutional neural networks: an overview and application in radiology&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cnn</category>
      <category>deeplearning</category>
      <category>computervision</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How does LSTM work?</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Wed, 30 Mar 2022 17:29:24 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/how-does-lstm-work-475</link>
      <guid>https://dev.to/deepeshaburse/how-does-lstm-work-475</guid>
      <description>&lt;p&gt;One of the most popular models in the time series domain is LSTM – Long Short-Term Memory model. It is a type of recurrent neural network and is heavily used in sequence prediction. In this blog, we will go through why LSTM is preferred and how it works. Before jumping into LSTM, let us dive a little deeper into what these terms mean.&lt;/p&gt;

&lt;p&gt;Time Series Analysis – Data points are analyzed over specific intervals of time to understand patterns over a period, which could be daily, monthly, or even yearly. This kind of analysis can be seen in stock price prediction or in business forecasting. &lt;/p&gt;

&lt;p&gt;Neural Network – A neural network consists of multiple algorithms and is mainly used to analyze the underlying relationships between data points. It is inspired by biological neural networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjezap0rgnfr3oztqfzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjezap0rgnfr3oztqfzv.png" alt="Neural Network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recurrent Neural Network (RNN) – If we have data points which are related, then we use RNNs. RNNs use the concept of memory, where they store certain data points. The problem with traditional RNNs is that as the number of data points increases, they become unable to remember earlier data. Say we want to process a paragraph of text to make predictions: an RNN may leave out important information from the beginning. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d0yuq9ekf6v50cbcbog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d0yuq9ekf6v50cbcbog.png" alt="Recurring Neural Network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LSTM is a special type of RNN that stores data long term. It overcomes two technical problems: vanishing gradients and exploding gradients. An LSTM module consists of a cell state and 3 gates. The cell state is like a conveyor belt: it allows information to flow along the sequence with only minor changes, and this linear flow keeps information moving easily. The model does have the ability to remove or add information, and this is done using the 3 gates, which regulate the information. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jaadk4x4nyd54bauhr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jaadk4x4nyd54bauhr4.png" alt="Cell in LSTM"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture of LSTM:
&lt;/h3&gt;

&lt;p&gt;In LSTM, there are three main steps. We either forget, input or output. An analogy for this would be how news channels work. Say, there is a murder case that they are broadcasting and initially it is suspected that the cause of death is poisoning, but once the post mortem report comes through, the cause of death turns out to be an injury on the head, the information about the poisoning is “forgotten”. &lt;/p&gt;

&lt;p&gt;Similarly, if there were 3 suspects and then another person comes under suspicion, this person is added, or “inputted”. &lt;/p&gt;

&lt;p&gt;Finally, after the investigation of the police, there is a prime suspect, this information will be “outputted”. &lt;/p&gt;

&lt;p&gt;To carry out these three steps, we have 3 gates. Let us look at each one of them in detail:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Forget gate:
&lt;/h4&gt;

&lt;p&gt;A forget gate is responsible for removing information. It removes information that is no longer needed for analysis and vacates space for the next information. This helps the model to become more efficient. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93tt3wo4dm4dd6hm3epy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93tt3wo4dm4dd6hm3epy.png" alt="Output Gate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gate takes in 2 inputs, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;h_t-1: Hidden state from the previous cell&lt;/li&gt;
&lt;li&gt;x_t: Input at the particular step&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These inputs are multiplied by the weight matrices and then a bias is added. Following this, the sigmoid function is applied to the calculated value. The sigmoid function gives an output between 0 and 1, which helps the model decide which information to “forget”. If the output is 0, the corresponding information in the cell state is forgotten completely; if the output is 1, it is kept entirely. This vector output from the sigmoid function is multiplied element-wise with the cell state. &lt;/p&gt;

&lt;h4&gt;
  
  
  2. Input gate:
&lt;/h4&gt;

&lt;p&gt;This gate, as the name suggests, is used to add information to the cell state. Here is its structure,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57hsemr00h79zbmi32ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57hsemr00h79zbmi32ua.png" alt="Input Gate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, the values to be added are regulated using the sigmoid function; the inputs are again h_t-1 and x_t. Next, a vector of candidate values that could be added to the cell state is created using the tanh function, which outputs values between -1 and +1. The output of the regulatory (sigmoid) function is multiplied element-wise with this candidate vector, and the resulting useful information is added to the cell state using the addition operation. &lt;/p&gt;

&lt;p&gt;This ensures that only filtered, important information enters the cell state.&lt;/p&gt;
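&lt;p&gt;The input-gate step can be sketched the same way. Again, all weights, dimensions, and random values below are illustrative placeholders; the forget-gate output is simulated rather than carried over from a real previous step:&lt;/p&gt;

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_i = rng.standard_normal((3, 5))   # input-gate (regulator) weights
W_c = rng.standard_normal((3, 5))   # candidate-vector weights
b_i, b_c = np.zeros(3), np.zeros(3)

h_prev = rng.standard_normal(3)     # h_t-1
x_t = rng.standard_normal(2)        # x_t
c_prev = rng.standard_normal(3)     # previous cell state
f_t = sigmoid(rng.standard_normal(3))  # stand-in for the forget gate's output

hx = np.concatenate([h_prev, x_t])
i_t = sigmoid(W_i @ hx + b_i)       # regulator: how much of each value to admit
c_tilde = np.tanh(W_c @ hx + b_c)   # candidate values, each in (-1, 1)

# New cell state: forget old information, then add the regulated candidates.
c_t = f_t * c_prev + i_t * c_tilde
```

The final line is the cell-state update the paragraph describes: the forget gate scales the old state, and the input gate decides how much of the tanh candidate vector is added.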

&lt;h4&gt;
  
  
  3. Output gate:
&lt;/h4&gt;

&lt;p&gt;This gate uses the information currently available in the cell to produce the most relevant output. It looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f3ix63mcvyc501sodzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f3ix63mcvyc501sodzj.png" alt="Output Gate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A vector is created by applying the tanh function to the cell state; its values range between -1 and +1. The sigmoid function, again taking h_t-1 and x_t, regulates which values from this vector should be output. The value of the regulatory function is multiplied element-wise with the vector and sent as the output, which is also passed on as the hidden state of the next cell. &lt;/p&gt;
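&lt;p&gt;The output-gate step completes the picture. As with the earlier sketches, the weights, dimensions, and the cell state below are made-up placeholders standing in for the results of the previous gates:&lt;/p&gt;

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
W_o = rng.standard_normal((3, 5))   # output-gate weights
b_o = np.zeros(3)

h_prev = rng.standard_normal(3)     # h_t-1
x_t = rng.standard_normal(2)        # x_t
c_t = rng.standard_normal(3)        # cell state after the forget and input gates

# o_t regulates which parts of tanh(c_t) are exposed as output.
o_t = sigmoid(W_o @ np.concatenate([h_prev, x_t]) + b_o)

# Hidden state / output, passed on to the next cell as h_t.
h_t = o_t * np.tanh(c_t)
```

Since both `o_t` and `tanh(c_t)` are bounded, every entry of `h_t` stays strictly between -1 and +1.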

&lt;p&gt;LSTMs have proven to give state-of-the-art results in sequence prediction. They are used in complex problem domains like machine translation, speech recognition, and text generation. I hope this gave you a basic idea of how LSTM models work. &lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://www.analyticsvidhya.com/blog/2017/12/fundamentals-of-deep-learning-introduction-to-lstm/" rel="noopener noreferrer"&gt;Essentials of Deep Learning : Introduction to Long Short Term Memory&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noopener noreferrer"&gt;Understanding LSTM Networks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21" rel="noopener noreferrer"&gt;Illustrated Guide to LSTM’s and GRU’s: A step by step explanation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>lstm</category>
      <category>rnn</category>
      <category>datascience</category>
    </item>
    <item>
      <title>2021 Wrapped </title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Sat, 01 Jan 2022 18:00:21 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/2021-wrapped-1n41</link>
      <guid>https://dev.to/deepeshaburse/2021-wrapped-1n41</guid>
      <description>&lt;p&gt;What a year 2021 has been! It has easily been the year I’ve upgraded the most in tech, from undertaking various courses to taking part in events to my first ever kaggle competition, last year was absolutely mind blowing! &lt;/p&gt;

&lt;p&gt;I started the year with an introductory course on AI, followed by a course on ML and introduction to Data Science in Python. Amidst these courses, I also attended Google I/O, which honestly was such a turning point for me! Attending keynotes and other events was exhilarating but nothing can match the experience I had of being able to interact with women in tech from all around the world! Being the youngest in all the groups, I received lots of advice on how to advance in the field. I connected with quite a few of them on LinkedIn and they were all such incredible women that have definitely been one of the major inspirations for me to keep learning this year! &lt;/p&gt;

&lt;p&gt;I also made a few projects, used GitHub for the first time (took me over a day to figure out the basics but it was so worth it!!), and hosted my projects there. Following this, I attended a Kaggle event on ML through which I got to take part in my first ever Kaggle competition and a few mini courses along the way. &lt;/p&gt;

&lt;p&gt;I did a few labs on GCP throughout the year. I also took part in my first ever open source program, GirlScript Winter of Contributing where I mainly contributed under the domain of Data Science With Python. I have learnt so much from my mentors and supervisors, it’s honestly quite hard to put into words! Hacktoberfest’21 was a whole new experience too. Finding repos and making valid and meaningful contributions was incredibly fun. &lt;/p&gt;

&lt;p&gt;Oh and I also published my first ever blog! I’ve always been passionate about writing but clubbing that with my love for tech was just something else. &lt;/p&gt;

&lt;p&gt;Last year I mainly expanded my knowledge in ML, AI and data science, so carrying that forward, this year I would love to explore a few more fields and decide what I would like to begin my career in. &lt;/p&gt;

&lt;p&gt;Thank you so much for this wonderful year, with even more wonderful experiences! I have grown so much this year, and the credit goes to the entire tech community! &lt;/p&gt;

&lt;p&gt;Wishing you all the very best for 2022. &lt;/p&gt;

&lt;p&gt;I would love to connect with you all, please drop me a message if you’d like! :))&lt;/p&gt;

</description>
      <category>learning</category>
      <category>growth</category>
      <category>career</category>
      <category>motivation</category>
    </item>
    <item>
      <title>My First PR</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Thu, 30 Sep 2021 18:41:14 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/my-first-pr-7mg</link>
      <guid>https://dev.to/deepeshaburse/my-first-pr-7mg</guid>
      <description>&lt;p&gt;Making your first pull request is definitely daunting. Whether you are taking part in some open source program or not, there are multiple things that go through your mind while making it. This blog is my experience and everything I learnt.&lt;/p&gt;

&lt;p&gt;I made my first PR through an open source program, so I was given a basic format in which we had to document everything. Now, I was familiar with putting my personal projects on GitHub, but I had never tried to make a contribution. Making the appropriate documentation/files wasn’t too hard for me. Sure, I had my doubts about whether it was good enough and whether I was ‘qualified’ enough to contribute, but lots of research and my basic knowledge of the topic got me through it.&lt;/p&gt;

&lt;p&gt;The part I procrastinated on the most was making the PR. I had a mental block about creating a PR and had somehow decided that it would be very complicated. As someone who’s still new to open source, let me tell you, it is not! It is actually one of the easiest parts of contributing (if not the easiest!).&lt;/p&gt;

&lt;p&gt;Another huge learning for me was patience. Having worked only on personal projects, I never had to think about others’ views too much. I would ask a few people to review my project once, but that was it. When making contributions, we need to remember that it is not our project; we are only fixing a bug or adding something to a project that belongs to someone else. They may have different expectations even for files as simple as a README, the documentation, or whatever it is you are contributing to. Something that looks okay to you may not look so to your mentor, supervisor, or maintainer. Instead of taking it negatively, try to understand what they expect and tweak your files accordingly.&lt;/p&gt;

&lt;p&gt;Taking part in an open source program has helped me grow a lot, and I would definitely recommend you try it out! The best part about it is that there is always something you can contribute to. Giving back to a community that has given us so much is such an amazing feeling; it made me feel so grateful to everyone who has directly or indirectly helped me. Yes, it is a little scary, but once you make your first PR, there is no going back!&lt;/p&gt;

&lt;p&gt;Here are a few articles that helped me to understand the process better:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.freecodecamp.org/news/how-to-make-your-first-pull-request-on-github-3/"&gt;How to make your first pull request on GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.better.dev/create-your-first-github-pull-request"&gt;Create Your First GitHub Pull Request&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/doctolib/make-your-first-pull-request-to-an-open-source-project-1m57"&gt;Make your first pull request to an open-source project&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this gave you a brief idea of what it is like to make your first PR. &lt;/p&gt;

&lt;p&gt;If there are any tips you would like to share, please leave a comment!&lt;/p&gt;

&lt;p&gt;Until next time, Happy Coding! :))&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>beginners</category>
      <category>github</category>
    </item>
    <item>
      <title>Not Feeling Like You're Enough When in Tech</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Mon, 27 Sep 2021 18:23:15 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/not-feeling-like-enough-when-in-tech-234c</link>
      <guid>https://dev.to/deepeshaburse/not-feeling-like-enough-when-in-tech-234c</guid>
      <description>&lt;p&gt;Most of us, if not all, feel as though we’re not enough at some point in some context. Since I’ve decided to make a career in tech, I have felt so at multiple points and so thought of writing about it. This blog is my point of view, how I deal with the feeling and what I would expect from the other person if I were to express to someone. My sole aim of writing this blog is to start a conversation and not just be more in touch with our feelings but also become more sensitive towards others.&lt;/p&gt;

&lt;p&gt;Social media has definitely made a huge impact on me, and the large number of amazing, supportive tech communities is something that I am so grateful for. Unfortunately, just like everything else, I feel like the presence of so many communities and social media platforms is a huge contributor to this feeling. I’ll feel perfectly fine until I see that one tweet saying someone has learnt some new technology, or a post on LinkedIn about the exceptional internship they got at a famous company, and although I am very happy to read such news, it starts the ‘Am I doing enough? Can I achieve that?’ loop. It makes me criticize everything I’ve achieved so far and question whether I really belong in this field. &lt;/p&gt;

&lt;p&gt;It’s not a good feeling to say the least! &lt;/p&gt;

&lt;p&gt;Even though I absolutely understand why we feel this way, and it is very tough to think of it in any other way, it is really worth considering what, and whom, we’re comparing ourselves to. Tech is such a vast field. New things are being made and launched every day, we are making progress in leaps and bounds, and we need to be mindful of that. Keeping up with all of that and expecting ourselves to know everything is, honestly, a little unfair to ourselves. As long as we’re making some sort of progress every day, and that includes taking breaks every once in a while (especially when we need one!), scrutinizing ourselves over every little thing is not a good idea.&lt;/p&gt;

&lt;p&gt;Another little thing I love to remind myself is that someone will always be better than me, and I will always be better than someone. This is also very contextual: I might be better than someone at one technology, and that same person may be much better than me at another. &lt;/p&gt;

&lt;p&gt;Talking to loved ones or anyone you are close to helps a lot too! They remind you of the efforts you are putting in and that at the end of the day that’s all that matters. I have been very lucky with the people in my life, they have always been very supportive of me and not just reminded me that I’m good at what I do but also that I am much more than just my career. It’s a great reminder that our career is very important, but it is not the only thing that matters. &lt;/p&gt;

&lt;p&gt;Even though we feel this ourselves, I think we are not as sensitive to, or considerate of, the fact that others go through this too. When someone confides in you that they feel this way, the most common reply I’ve seen is ‘Don’t worry, you’ll get there some day’, and although that’s a nice thing to say, I don’t really think it helps. Talking from experience, instead of being told I will probably get there some day, I would love one addition to that sentence: ‘You’re doing great right now, don’t compare yourself to them, but, if you do wish to achieve what they did, I’m sure you can and will get there some day.’ When someone is already feeling low, one ‘you’re doing great’ can go a long way, especially when they trust you with their feelings. &lt;/p&gt;

&lt;p&gt;And in case no one has told you today, I think you’re doing great! :))&lt;/p&gt;

&lt;p&gt;I hope this article pushes you to be kinder to yourself and those around you!&lt;/p&gt;

&lt;p&gt;I would love to hear how you all cope with this feeling, please drop a comment and share how you do, it might help someone!&lt;/p&gt;

</description>
      <category>mentalhealth</category>
      <category>career</category>
      <category>productivity</category>
      <category>growth</category>
    </item>
    <item>
      <title>Getting Started with Python</title>
      <dc:creator>Deepesha Burse</dc:creator>
      <pubDate>Mon, 13 Sep 2021 17:12:51 +0000</pubDate>
      <link>https://dev.to/deepeshaburse/getting-started-with-python-3a7a</link>
      <guid>https://dev.to/deepeshaburse/getting-started-with-python-3a7a</guid>
      <description>&lt;p&gt;Everyone’s journey of learning a programming language, or anything new for that matter is different. The beauty of Python is how easy it is to read and understand it whether you have experience in programming or not. I started learning Python last year and it became one of my favorite languages in no time! So, how should you get started with Python? Here are some of the things that worked for me.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Join some course on Python:
&lt;/h4&gt;

&lt;p&gt;Although this is very subjective, I feel that following a course helps you learn the language in a proper order. It’s easier to learn a programming language in a structured way. There are many paid and free courses out there to learn from. I have linked a few below:&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=rfscVS0vtbw"&gt;Course on YouTube by freecodecamp&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.udemy.com/course/complete-python-bootcamp/"&gt;Course on Udemy&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.coursera.org/specializations/python"&gt;Course on Coursera&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Don’t skip the basics:
&lt;/h4&gt;

&lt;p&gt;This one is especially for those who are familiar with other programming languages. The syntax of Python is extremely easy and readable, and it is very tempting to skip through the basics. Python has a lot of features that you might miss out on if you do. Code you would have written in five lines might need just one or two. This may not seem like a lot, but it matters when you’re making larger projects.&lt;/p&gt;
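&lt;p&gt;As a small (illustrative) example of the kind of shortcut that is easy to miss, here is the same logic written as a plain loop and as a list comprehension, one of the features covered in the basics:&lt;/p&gt;

```python
# Collecting the squares of the even numbers below 10, the long way:
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# The same logic as a single list comprehension:
squares = [n * n for n in range(10) if n % 2 == 0]

print(squares)  # [0, 4, 16, 36, 64]
```

Both versions produce the same list; the comprehension simply says it in one line.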

&lt;h4&gt;
  
  
  3. Practice as you learn the concepts:
&lt;/h4&gt;

&lt;p&gt;As I mentioned earlier, it is very easy to get carried away and not put time into practicing every concept. But to achieve fluency in programming, it is crucial to practice the smallest of concepts so you remember them when you need them. You do not need to remember everything, but it helps to explore the different features available. I would also suggest looking up the solutions to some exercises to find more efficient and innovative ways to solve a particular problem.  &lt;/p&gt;

&lt;h4&gt;
  
  
  4. What are you learning Python for?
&lt;/h4&gt;

&lt;p&gt;Contrary to what most people think, Python can be used for a lot more than data science. There are lots of open-source libraries and modules for various purposes. As you advance in the language, it is very helpful to acquaint yourself with libraries and modules like Beautiful Soup for web scraping or Pillow (PIL) for image manipulation. It helps us understand what exactly we need to code ourselves and what is already available for us.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Make some fun projects!
&lt;/h4&gt;

&lt;p&gt;Implementing all the concepts you have learnt is just as important as learning them. Exercises help us practice every concept separately but when you make projects, you learn how to put them all together and make something out of it. You could make games like tic-tac-toe, card games or as you go further, bigger projects too! This will give you the confidence that you can build something of your own.&lt;/p&gt;

&lt;p&gt;Learning anything new is not easy, and the journey is full of ups and downs. You might feel demotivated at some points, but remember that we all do too! Some concepts might look daunting: break them into smaller pieces, look them up on the internet, and give yourself time to absorb them. &lt;/p&gt;

&lt;p&gt;I hope this gave you a brief idea on how to get started with one of the most popular programming languages right now!&lt;/p&gt;

&lt;p&gt;Happy Coding! :))&lt;/p&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
