<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sarvesh Kesharwani</title>
    <description>The latest articles on DEV Community by Sarvesh Kesharwani (@sarvesh42).</description>
    <link>https://dev.to/sarvesh42</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1009844%2F0834cc98-5e79-4c3d-bcd9-82b720affbf4.jpeg</url>
      <title>DEV Community: Sarvesh Kesharwani</title>
      <link>https://dev.to/sarvesh42</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sarvesh42"/>
    <language>en</language>
    <item>
      <title>What's the difference b/w Path and Query in FastAPI?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Sat, 07 Jun 2025 15:39:11 +0000</pubDate>
      <link>https://dev.to/sarvesh42/whats-the-difference-bw-path-and-query-in-fastapi-1c3e</link>
      <guid>https://dev.to/sarvesh42/whats-the-difference-bw-path-and-query-in-fastapi-1c3e</guid>
      <description>&lt;p&gt;Path parameters are part of the URL path and are &lt;strong&gt;required&lt;/strong&gt;.&lt;br&gt;
Example: /get_patient_data/1&lt;br&gt;
— here, 1 is a path parameter.&lt;/p&gt;

&lt;p&gt;Query parameters come after the ? in the URL and are &lt;strong&gt;optional&lt;/strong&gt; by default.&lt;br&gt;
Example: /sorted_patient_list?sort_by=asc&lt;br&gt;
— here, sort_by=asc is a query parameter.&lt;/p&gt;
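&lt;p&gt;A quick standard-library sketch of where each kind of parameter lives in a URL (the URL below is illustrative; in FastAPI you would declare these in the endpoint's function signature):&lt;/p&gt;

```python
from urllib.parse import urlparse, parse_qs

# Illustrative URL combining both kinds of parameter.
url = "https://api.example.com/get_patient_data/1?sort_by=asc"

parts = urlparse(url)
print(parts.path)             # /get_patient_data/1  -- the path parameter lives here
print(parse_qs(parts.query))  # {'sort_by': ['asc']} -- query parameters live here
```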

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>fastapi</category>
      <category>programming</category>
    </item>
    <item>
      <title>How does Reinforcement learning relate to CNN and RNN?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Sat, 22 Apr 2023 08:08:44 +0000</pubDate>
      <link>https://dev.to/sarvesh42/how-does-reinforcement-learning-relate-to-cnn-and-rnn-3bn8</link>
      <guid>https://dev.to/sarvesh42/how-does-reinforcement-learning-relate-to-cnn-and-rnn-3bn8</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KQ_rGRKA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6mrhw5ajjykyb3vmhje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KQ_rGRKA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6mrhw5ajjykyb3vmhje.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
[&lt;a href="https://s.yimg.com/ny/api/res/1.2/L1OX80x3gaRMlnO9CMpP0A--/YXBwaWQ9aGlnaGxhbmRlcjtoPTY2Ng--/https://o.aolcdn.com/images/dar/5845cadfecd996e0372f/4b6df289c1dc7d95e9ac554cc3b882a59e74491e/aHR0cDovL28uYW9sY2RuLmNvbS9oc3Mvc3RvcmFnZS9taWRhcy82YzZlNTIyNTNkODJhMTU1MjIyNDdmNjNhMGU3MmYxMS8yMDIxNTE1ODEvbWFyaW8uanBn"&gt;Image Source&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Reinforcement learning is when we teach a robot by giving it treats or stickers when it does a good job. The robot learns what to do to get the treats and tries to get as many treats as possible by doing the right things. It's like playing a game to learn how to make good choices.&lt;/p&gt;

&lt;p&gt;CNN and RNN are like helpers for the robot. They help the robot see and understand things better. They can look at pictures and tell the robot what's in them, like a cat or a dog, or they can help the robot understand words and sentences. They are like tools to help the robot learn even more about the world around it.&lt;/p&gt;

&lt;p&gt;This is similar to how you may have seen models trained with reinforcement learning play games like Mario or Flappy Bird, learning how to clear levels and earn as many achievement points (treats) as possible.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Implementing max heap from scratch in Python!</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Sat, 18 Mar 2023 21:02:00 +0000</pubDate>
      <link>https://dev.to/sarvesh42/implementing-max-heap-from-scratch-in-python-35ep</link>
      <guid>https://dev.to/sarvesh42/implementing-max-heap-from-scratch-in-python-35ep</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class MaxHeap:
    def __init__(self):
        self.heap = []

    def insert(self, val):
        self.heap.append(val)
        self._percolate_up(len(self.heap)-1)

    def extract_max(self):
        if len(self.heap) &amp;gt; 1:
            max_val = self.heap[0]
            self.heap[0] = self.heap.pop()
            self._percolate_down(0)
        elif len(self.heap) == 1:
            max_val = self.heap.pop()
        else:
            max_val = None
        return max_val

    def _percolate_up(self, idx):
        parent_idx = (idx - 1) // 2
        if parent_idx &amp;lt; 0:
            return
        if self.heap[idx] &amp;gt; self.heap[parent_idx]:
            self.heap[idx], self.heap[parent_idx] = self.heap[parent_idx], self.heap[idx]
            self._percolate_up(parent_idx)

    def _percolate_down(self, idx):
        child_idx1 = idx * 2 + 1
        child_idx2 = idx * 2 + 2
        max_idx = idx
        if child_idx1 &amp;lt; len(self.heap) and self.heap[child_idx1] &amp;gt; self.heap[max_idx]:
            max_idx = child_idx1
        if child_idx2 &amp;lt; len(self.heap) and self.heap[child_idx2] &amp;gt; self.heap[max_idx]:
            max_idx = child_idx2
        if max_idx != idx:
            self.heap[idx], self.heap[max_idx] = self.heap[max_idx], self.heap[idx]
            self._percolate_down(max_idx)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, we define a MaxHeap class with methods to insert elements into the heap and extract the maximum value from the heap. The _percolate_up and _percolate_down methods are used to maintain the heap property after inserting or extracting elements.&lt;/p&gt;

&lt;p&gt;To create a new MaxHeap object and insert values into it, you can do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;heap = MaxHeap()
heap.insert(10)
heap.insert(30)
heap.insert(20)
heap.insert(5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To extract the maximum value from the heap, you can call the extract_max method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;max_val = heap.extract_max()
print(max_val) # Output: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
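&lt;p&gt;For comparison, Python's standard-library heapq module (a min-heap) can emulate a max-heap by pushing negated values; this is a common idiom, separate from the class above:&lt;/p&gt;

```python
import heapq

values = [10, 30, 20, 5]
heap = []
for v in values:
    heapq.heappush(heap, -v)   # negate so the smallest entry is the largest value

max_val = -heapq.heappop(heap)
print(max_val)  # 30
```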



</description>
    </item>
    <item>
      <title>C# code to create a bullet and shoot it in Unity</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Fri, 17 Mar 2023 19:18:00 +0000</pubDate>
      <link>https://dev.to/sarvesh42/c-code-to-create-a-bullet-and-shoot-it-in-unity-3b9f</link>
      <guid>https://dev.to/sarvesh42/c-code-to-create-a-bullet-and-shoot-it-in-unity-3b9f</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using UnityEngine;

public class ShootBullet : MonoBehaviour
{
    // Reference to the bullet prefab
    public GameObject bulletPrefab;

    // Speed of the bullet
    public float bulletSpeed = 10.0f;

    // Update is called once per frame
    void Update()
    {
        // Check if the player pressed the fire button (left mouse button)
        if (Input.GetButtonDown("Fire1"))
        {
            // Create a new bullet instance
            GameObject bullet = Instantiate(bulletPrefab, transform.position, Quaternion.identity);

            // Get the rigidbody component of the bullet
            Rigidbody rb = bullet.GetComponent&amp;lt;Rigidbody&amp;gt;();

            // Add force to the bullet to shoot it forward
            rb.AddForce(transform.forward * bulletSpeed, ForceMode.Impulse);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>How to build an image from Dockerfile?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Thu, 16 Mar 2023 12:50:00 +0000</pubDate>
      <link>https://dev.to/sarvesh42/how-to-build-an-image-from-dockerfile-1i7j</link>
      <guid>https://dev.to/sarvesh42/how-to-build-an-image-from-dockerfile-1i7j</guid>
      <description>&lt;p&gt;To build a Docker image from a Dockerfile, you can follow these steps:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Open a terminal or command prompt and navigate to the directory where your Dockerfile is located.

Run the following command: docker build -t &amp;lt;image_name&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The docker build command is used to build a Docker image, and the -t flag is used to give the image a name. The . at the end of the command tells Docker to use the current directory as the build context.&lt;/p&gt;

&lt;p&gt;For example, if your Dockerfile is located in a directory called myapp, and you want to name the image myapp-image, you can run the following from myapp's parent directory (here myapp, rather than the current directory, is the build context):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t myapp-image myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Docker will start building the image based on the instructions in your Dockerfile. It will download any necessary dependencies and packages and execute any commands specified in the Dockerfile.

Once the build process is complete, you can verify that the image was created by running the docker images command. This will show a list of all the Docker images on your system, including the one you just built.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's it! You have successfully built a Docker image from a Dockerfile. You can now use this image to create and run Docker containers.&lt;/p&gt;
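&lt;p&gt;For reference, a minimal hypothetical Dockerfile for a small Python app might look like this (file names are illustrative):&lt;/p&gt;

```dockerfile
# Hypothetical Dockerfile for a small Python app (file names illustrative).
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```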

</description>
    </item>
    <item>
      <title>Understanding Conv2d Layer with Tensorflow implementation?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Thu, 16 Mar 2023 04:07:00 +0000</pubDate>
      <link>https://dev.to/sarvesh42/understanding-conv2d-layer-with-tensorflow-implementation-32h8</link>
      <guid>https://dev.to/sarvesh42/understanding-conv2d-layer-with-tensorflow-implementation-32h8</guid>
      <description>&lt;p&gt;Conv2D is a type of convolutional layer commonly used in deep learning for image recognition tasks. It applies a set of filters to the input image to detect specific features and patterns.&lt;/p&gt;

&lt;p&gt;Here's an example implementation of a Conv2D layer using TensorFlow in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import tensorflow as tf

# Define the input shape of the image
input_shape = (28, 28, 1)  # 28x28 grayscale image

# Define the number of filters and their size
filters = 32
kernel_size = (3, 3)

# Define the input tensor
inputs = tf.keras.Input(shape=input_shape)

# Define the Conv2D layer
x = tf.keras.layers.Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(inputs)

# Print the output shape
print(x.shape)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we define the input shape of our image as a 28x28 grayscale image. We then define the number of filters and their size for our Conv2D layer. We create an input tensor using tf.keras.Input() and pass in the input_shape.&lt;/p&gt;

&lt;p&gt;Next, we define the Conv2D layer using tf.keras.layers.Conv2D() and pass in the filters, kernel_size, and the relu activation function. We apply this layer to the inputs tensor using the Keras functional API.&lt;/p&gt;

&lt;p&gt;Finally, we print the shape of the output tensor. The output shape of a Conv2D layer is (batch_size, height, width, filters); here, with the default 'valid' padding and stride 1, a 28x28 input and a 3x3 kernel give (None, 26, 26, 32), since 28 - 3 + 1 = 26.&lt;/p&gt;

&lt;p&gt;Note that this is just a basic example, and in real-world applications, there may be many other parameters and layers involved in the model.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is the venn-diagrammatic relation b/w container, venv and git_branches?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Wed, 15 Mar 2023 13:33:04 +0000</pubDate>
      <link>https://dev.to/sarvesh42/what-is-the-venn-diagrammatic-relation-bw-container-venv-and-gitbranches-1mig</link>
      <guid>https://dev.to/sarvesh42/what-is-the-venn-diagrammatic-relation-bw-container-venv-and-gitbranches-1mig</guid>
      <description>&lt;p&gt;The order of setting up these tools can vary depending on your specific project needs and workflow.&lt;/p&gt;

&lt;p&gt;Typically, you would start by setting up Git branches to manage different versions of your codebase. Once you have set up your Git repository, you can create a Dockerfile to define the environment for your Python application, including any necessary dependencies, tools, and configurations. The Docker container can be used to provide a consistent and reproducible environment for running your Python application.&lt;/p&gt;

&lt;p&gt;Next, you can create a Python virtual environment inside the Docker container to manage your Python dependencies. This can help ensure that your project dependencies are consistent and reproducible across different environments.&lt;/p&gt;

&lt;p&gt;Once you have set up your Docker container and Python venv, you can start developing your code in a Git branch. You can activate the Python virtual environment and start installing any necessary project dependencies using pip. When you are ready to commit your changes, you can use Git to manage and version control your code.&lt;/p&gt;

&lt;p&gt;Overall, the order of setting up Git branches, Docker containers, and Python venv can vary depending on your specific project needs and workflow. However, it's important to ensure that all three tools are integrated and working together to provide a consistent and reproducible development and deployment environment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>physical memory, virtual address space, size of the page table</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Wed, 15 Mar 2023 07:29:48 +0000</pubDate>
      <link>https://dev.to/sarvesh42/physical-memory-virtual-address-space-size-of-the-page-table-581o</link>
      <guid>https://dev.to/sarvesh42/physical-memory-virtual-address-space-size-of-the-page-table-581o</guid>
      <description>&lt;p&gt;&lt;a href="https://gateoverflow.in/739/gate-cse-2001-question-2-21?show=739#q739"&gt;https://gateoverflow.in/739/gate-cse-2001-question-2-21?show=739#q739&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YMwcuQTT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98hoh8sk2tjxym9kot8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YMwcuQTT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98hoh8sk2tjxym9kot8u.png" alt="Image description" width="441" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To solve this question, you need to have a basic understanding of the following concepts of memory management:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Memory management: Memory management refers to the process of managing the memory resources of a computer system, including physical memory and virtual memory. In a virtual memory system, the memory addresses used by a program are virtual addresses, which are mapped to physical addresses by the operating system.

Paging: Paging is a memory management technique used in virtual memory systems to divide the virtual address space into fixed-size blocks called pages. Each page is mapped to a corresponding page frame in physical memory.

Page tables: A page table is a data structure used by the operating system to keep track of the mapping between virtual addresses and physical addresses. It contains information about which pages are mapped to which page frames, and it is used by the memory management unit (MMU) to translate virtual addresses to physical addresses.

Page size: The page size is the size of each page in a paging system. It is typically a power of two, and it determines the granularity of memory allocation in the system.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
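&lt;p&gt;As a worked illustration of how these pieces combine (using common textbook numbers, not necessarily the figures from the linked question): the page table needs one entry per page of the virtual address space.&lt;/p&gt;

```python
# Classic page-table size calculation with illustrative numbers:
# 32-bit virtual address space, 4 KiB pages, 4-byte page-table entries.
virtual_address_bits = 32
page_size = 4 * 1024          # 4 KiB per page
entry_size = 4                # bytes per page-table entry

num_pages = (2 ** virtual_address_bits) // page_size   # 2^20 pages
page_table_size = num_pages * entry_size               # bytes

print(num_pages)        # 1048576
print(page_table_size)  # 4194304 bytes = 4 MiB
```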

</description>
    </item>
    <item>
      <title>Relation between Docker and Kubernetes?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Wed, 15 Mar 2023 07:23:00 +0000</pubDate>
      <link>https://dev.to/sarvesh42/relation-between-docker-and-kubernetes-2jg1</link>
      <guid>https://dev.to/sarvesh42/relation-between-docker-and-kubernetes-2jg1</guid>
      <description>&lt;p&gt;Docker and Kubernetes are two complementary technologies that are often used together in modern software development and deployment.&lt;/p&gt;

&lt;p&gt;Docker is a platform for creating and managing containers, which are a lightweight and portable way to package and distribute applications, along with their dependencies and configurations, as a single unit that can run consistently across different environments.&lt;/p&gt;

&lt;p&gt;Kubernetes, on the other hand, is a container &lt;a href="https://dev.to/sarveshkesharwani/what-is-container-orchestration-platform-3h2o"&gt;orchestration platform&lt;/a&gt; that automates the deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes provides a declarative API for managing containers, and it can run on a wide range of platforms, from local development environments to public clouds.&lt;/p&gt;

&lt;p&gt;Docker and Kubernetes work together in the following way:&lt;/p&gt;

&lt;p&gt;Developers use Docker to create container images that contain their applications and dependencies.&lt;/p&gt;

&lt;p&gt;These container images are then pushed to a Docker registry, where they can be easily shared and deployed.&lt;/p&gt;

&lt;p&gt;Operators use Kubernetes to deploy and manage these container images on a cluster of hosts, using Kubernetes' declarative API and its built-in features for scaling, load balancing, and self-healing.&lt;/p&gt;

&lt;p&gt;Kubernetes can also manage the configuration, secrets, and networking of the containers, as well as monitor their health and performance.&lt;/p&gt;
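&lt;p&gt;As an illustration of that hand-off, a minimal hypothetical Kubernetes Deployment manifest referencing such a pushed image might look like this (names and registry are made up):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # Kubernetes keeps 3 copies running (self-healing)
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # image pushed to a Docker registry
        ports:
        - containerPort: 8080
```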

&lt;p&gt;In summary, Docker and Kubernetes are two complementary technologies that work together to enable modern software development and deployment. Docker provides a way to create and manage containers, while Kubernetes provides a way to automate the deployment and management of containerized applications across clusters of hosts.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is container orchestration platform?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Wed, 15 Mar 2023 07:22:00 +0000</pubDate>
      <link>https://dev.to/sarvesh42/what-is-container-orchestration-platform-3h2o</link>
      <guid>https://dev.to/sarvesh42/what-is-container-orchestration-platform-3h2o</guid>
      <description>&lt;p&gt;A container orchestration platform is a tool or framework that automates the deployment, scaling, and management of containerized applications across a cluster of hosts. Container orchestration platforms provide a set of APIs, tools, and services that enable developers and operators to manage containers at scale, while ensuring high availability, scalability, and reliability.&lt;/p&gt;

&lt;p&gt;Container orchestration platforms are designed to address the challenges of managing large and complex container deployments, which can involve thousands or even tens of thousands of containers running on multiple hosts or cloud instances. Container orchestration platforms provide features such as:&lt;/p&gt;

&lt;p&gt;Container deployment and scaling: The ability to deploy containers across multiple hosts or cloud instances, and to scale containers up or down based on demand.&lt;/p&gt;

&lt;p&gt;Load balancing: The ability to distribute traffic across containers, and to automatically route traffic to healthy containers.&lt;/p&gt;

&lt;p&gt;Service discovery: The ability to automatically discover and manage the network addresses of containers, and to route traffic to the appropriate container based on service name or label.&lt;/p&gt;

&lt;p&gt;Container health monitoring and self-healing: The ability to monitor the health and performance of containers, and to automatically restart or replace containers that are unhealthy or failing.&lt;/p&gt;

&lt;p&gt;Configuration and secret management: The ability to manage the configuration and secrets of containers, and to update them automatically when needed.&lt;/p&gt;

&lt;p&gt;Some popular container orchestration platforms include Kubernetes, Docker Swarm, Mesos, and Nomad. These platforms provide a powerful set of tools and services for managing containers at scale, and are widely used in modern software development and deployment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>keras.applications.preprocess_input(x) to preprocess the input image for Neural Network.</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Wed, 15 Mar 2023 07:00:39 +0000</pubDate>
      <link>https://dev.to/sarvesh42/kerasapplications-preprocessinputx-to-preprocess-the-input-image-for-neural-network-21fc</link>
      <guid>https://dev.to/sarvesh42/kerasapplications-preprocessinputx-to-preprocess-the-input-image-for-neural-network-21fc</guid>
      <description>&lt;p&gt;preprocess_input(x) is a function that applies some preprocessing to an input image before feeding it to a neural network model. It is part of the keras.applications module and is typically used when working with pre-trained models such as ResNet50.&lt;/p&gt;

&lt;p&gt;The preprocess_input function takes a single argument x, a NumPy array representing an image, and applies a series of transformations to it. The exact transformations depend on the model family: some models (e.g. MobileNet) scale pixel values to the range -1 to 1, while others (e.g. ResNet50) zero-center each colour channel against the ImageNet means without scaling.&lt;/p&gt;

&lt;p&gt;The purpose of this preprocessing step is to ensure that the input image is in the same format as the images used to train the pre-trained model. This is important because the pre-trained model has learned to recognize patterns in images of a specific format, and if the input images are not in that format, the model's performance may suffer.&lt;/p&gt;

&lt;p&gt;Once the input image has been preprocessed, it can be passed through the neural network model to obtain a prediction.&lt;/p&gt;
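&lt;p&gt;As an illustration, the 'caffe'-style transform that ResNet50's preprocess_input applies (RGB to BGR, then subtracting the ImageNet channel means) can be sketched in plain NumPy. This is a simplified re-implementation for intuition, not a substitute for calling the Keras function:&lt;/p&gt;

```python
import numpy as np

# ImageNet per-channel means in B, G, R order, as used by 'caffe'-style models.
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68])

def caffe_style_preprocess(x):
    """Sketch of ResNet50-style preprocessing for an RGB image array.

    x: float array of shape (height, width, 3) with values in 0..255.
    """
    x = x[..., ::-1]               # RGB to BGR
    return x - IMAGENET_MEANS_BGR  # zero-center each channel

img = np.full((2, 2, 3), 128.0)    # dummy grey image
out = caffe_style_preprocess(img)
print(out.shape)  # (2, 2, 3)
```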

</description>
    </item>
    <item>
      <title>What is a batch in the context of neural networks?</title>
      <dc:creator>Sarvesh Kesharwani</dc:creator>
      <pubDate>Wed, 15 Mar 2023 06:55:43 +0000</pubDate>
      <link>https://dev.to/sarvesh42/what-is-a-batch-in-the-context-of-neural-networks-b77</link>
      <guid>https://dev.to/sarvesh42/what-is-a-batch-in-the-context-of-neural-networks-b77</guid>
      <description>&lt;p&gt;In machine learning, a batch refers to a subset of the training data that is processed at once during the training process. Instead of processing the entire dataset in one go, it is often more efficient to break it down into smaller batches and feed them into the model sequentially. This allows the model to update its parameters more frequently, which can help it converge to a better solution faster.&lt;/p&gt;

&lt;p&gt;For example, if you have a training set of 1000 examples, you might choose to process it in batches of 100. This means that the model will see the data in 10 passes, with each pass updating its parameters based on the loss calculated for the 100 examples in that batch. The size of the batch is a hyperparameter that can be tuned to find the optimal balance between training time and performance.&lt;/p&gt;
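&lt;p&gt;The arithmetic above can be sketched in plain Python (no ML framework assumed):&lt;/p&gt;

```python
def make_batches(data, batch_size):
    """Split a dataset into consecutive batches of at most batch_size items."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

training_set = list(range(1000))      # stand-in for 1000 training examples
batches = make_batches(training_set, 100)

print(len(batches))      # 10 parameter updates per epoch
print(len(batches[0]))   # 100 examples per batch
```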

</description>
    </item>
  </channel>
</rss>
