<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ravi Bhuvan</title>
    <description>The latest articles on DEV Community by Ravi Bhuvan (@algon31).</description>
    <link>https://dev.to/algon31</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3377105%2F3c8730f6-5dfb-4694-b191-6ab7711a345b.jpg</url>
      <title>DEV Community: Ravi Bhuvan</title>
      <link>https://dev.to/algon31</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/algon31"/>
    <language>en</language>
    <item>
      <title>Neural Networks</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Mon, 13 Apr 2026 16:46:50 +0000</pubDate>
      <link>https://dev.to/algon31/neural-networks-fod</link>
      <guid>https://dev.to/algon31/neural-networks-fod</guid>
      <description>&lt;p&gt;what it is - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is a system of connected nodes that learns patterns by adjusting weights.&lt;/li&gt;
&lt;li&gt;it is a type of model used in machine learning.&lt;/li&gt;
&lt;li&gt;it is made of layers of nodes, where each layer can have multiple nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Other things.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a node is a small unit.&lt;/li&gt;
&lt;li&gt;a neural network is trained on data from books, websites and more; not just text, but also images and audio (everything becomes numbers/vectors so that the model can learn).&lt;/li&gt;
&lt;li&gt;some architectures (like CNNs) are mainly used for images.&lt;/li&gt;
&lt;li&gt;each node performs a calculation, and predictions are made using these nodes.&lt;/li&gt;
&lt;li&gt;nodes are responsible for the simple calculations in the network.&lt;/li&gt;
&lt;li&gt;node count/layers are fixed.

&lt;ul&gt;
&lt;li&gt;meaning the number of layers, the neurons per layer and how they are connected (the architecture of the network).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;weight and bias values change throughout training.&lt;/li&gt;

&lt;li&gt;the problem is processed in parallel within layers.

&lt;ul&gt;
&lt;li&gt;neurons in the same layer computing simultaneously is what "parallel" means here.&lt;/li&gt;
&lt;li&gt;the same input is sent to all neurons in a layer, and each processes it differently (basically with different weights).&lt;/li&gt;
&lt;li&gt;as the input passes through more layers it is refined and transformed, and finally an output is created.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;steps in processing an input.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;when we give an input.

&lt;ul&gt;
&lt;li&gt;the input is converted into vectors.&lt;/li&gt;
&lt;li&gt;the first layer processes the vectors.&lt;/li&gt;
&lt;li&gt;the output = a new set of numbers or vectors.&lt;/li&gt;
&lt;li&gt;next, this output is taken and processed again (more patterns are extracted).&lt;/li&gt;
&lt;li&gt;going through each layer refines the output.&lt;/li&gt;
&lt;li&gt;finally an output is generated.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
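&lt;p&gt;The steps above can be sketched in plain Python: a toy two-layer forward pass where each layer turns the current vector into a new one (the weights here are made-up illustrative numbers, and ReLU is assumed as the activation):&lt;/p&gt;

```python
def relu(z):
    return max(0.0, z)

def layer_forward(inputs, layer_weights, biases):
    # each neuron gets the SAME input vector, but applies its own weights
    return [relu(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(layer_weights, biases)]

vector = [1.0, 0.5]                                # input converted to a vector
layers = [
    ([[0.2, 0.8], [0.5, -0.1]], [0.0, 0.1]),       # layer 1: two neurons
    ([[1.0, -0.5]],             [0.0]),            # layer 2: one output neuron
]
for weights, biases in layers:
    vector = layer_forward(vector, weights, biases)  # output refined each layer
output = vector[0]                                   # final output
```

Each pass through the loop is one "layer refines the output" step from the list above.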

&lt;p&gt;Key Words in Neural Networks&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;neurons&lt;/strong&gt; == nodes, small units that perform the math.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weights&lt;/strong&gt; tell how much influence one neuron has on the next neuron.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connections&lt;/strong&gt; are the links between the nodes.

&lt;ul&gt;
&lt;li&gt;the connection is also responsible for the importance of that specific node.&lt;/li&gt;
&lt;li&gt;say nodes A and B are both connected to node C: after each node's calculation, the connection weights decide the importance; if the connection between A and C has a larger weight, A's output is given more importance in C's result.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Propagation function&lt;/strong&gt; is the weighted sum + activation  that happens inside the neuron as the data moves forward.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Learning rule&lt;/strong&gt; tells how the model updates the weights after a prediction, using the loss and gradients.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Learning rate&lt;/strong&gt; tells the model how big each weight update is after an error.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Hidden Layers&lt;/strong&gt; are the middle layers that process the data and learn patterns.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;gradient&lt;/strong&gt; is a number which tells how the loss changes with respect to a weight; the weight is moved against its sign (if the gradient is +ve, decrease the weight, and vice versa).&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Basic Flow in Neural Network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Input (a)

2. For each connection:
   input × weight

3. At neuron:
   sum all inputs + bias → z

4. Apply activation:
   output = activation(z)

5. Pass output to next layer

6. Final output → compare with actual

7. Loss calculated

8. Backpropagation (error goes back)

9. Gradient Descent (update weights)

10. Repeat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
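&lt;p&gt;The ten steps above, for a single neuron with a sigmoid activation and a squared-error loss, look like this in plain Python (all numbers are illustrative; the gradient line is the chain rule worked out for this exact setup):&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# 1-2. input values and their connection weights
inputs = [0.5, -0.2]
weights = [0.8, 0.4]
bias = 0.1

# 3. at the neuron: sum of input*weight plus bias -> z
z = sum(x * w for x, w in zip(inputs, weights)) + bias   # 0.42

# 4. apply activation
output = sigmoid(z)

# 6-7. compare with the actual value -> loss
actual = 1.0
loss = (output - actual) ** 2

# 8. backpropagation: gradient of the loss w.r.t. each weight (chain rule)
d_z = 2 * (output - actual) * output * (1 - output)
gradients = [d_z * x for x in inputs]

# 9. gradient descent: move each weight against its gradient
lr = 0.1
weights = [w - lr * g for w, g in zip(weights, gradients)]
# 10. repeat over many examples
```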



&lt;p&gt;Activation Function&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is the math that runs inside the node.&lt;/li&gt;
&lt;li&gt;in each neuron the output is transformed; this happens in each layer and helps the network reach a meaningful output.&lt;/li&gt;
&lt;li&gt;the activation function introduces a non-linear transformation.&lt;/li&gt;
&lt;li&gt;this avoids the straight-line problem (avoids the linear limitation).&lt;/li&gt;
&lt;li&gt;each layer changes the output differently.&lt;/li&gt;
&lt;li&gt;this lets the model learn and understand complex patterns like images, speech and language.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Loss Function&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;measures the error.&lt;/li&gt;
&lt;li&gt;a function that tells you how wrong the output is.&lt;/li&gt;
&lt;li&gt;it sits at the output layer of the neural network.&lt;/li&gt;
&lt;li&gt;it is used in the training phase and can also be used for evaluation.&lt;/li&gt;
&lt;li&gt;input given -&amp;gt; prediction -&amp;gt; loss function (compares it with the actual value) -&amp;gt; loss calculated -&amp;gt; output.&lt;/li&gt;
&lt;li&gt;after each loss is calculated, the weights are updated using backpropagation + gradient descent.&lt;/li&gt;
&lt;/ul&gt;
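&lt;p&gt;As a small sketch of the flow in the last bullet, here is a mean squared error loss in plain Python (MSE is just one common choice of loss function; the numbers are illustrative):&lt;/p&gt;

```python
def mse_loss(predictions, actuals):
    # mean squared error: average of the squared differences
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(predictions)

# input given -> prediction -> compare with actual -> loss calculated
predictions = [0.9, 0.2, 0.8]
actuals     = [1.0, 0.0, 1.0]
loss = mse_loss(predictions, actuals)   # (0.01 + 0.04 + 0.04) / 3 = 0.03
```

A loss of 0 would mean the predictions match the actual values exactly; the bigger the loss, the more the weights need to change.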

&lt;p&gt;Backpropagation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;computes gradients (how the error changes when weights are updated).&lt;/li&gt;
&lt;li&gt;the process of sending the error back to update the weights.&lt;/li&gt;
&lt;li&gt;the error flows backward through the layers.&lt;/li&gt;
&lt;li&gt;it works together with gradient descent: backpropagation computes the gradients, and gradient descent uses them to update the weights.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Gradient Descent&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a method to update the weights to reduce the loss.&lt;/li&gt;
&lt;li&gt;basically it moves the weights toward where the error reduces.&lt;/li&gt;
&lt;li&gt;the gradients tell which connections influenced the error, and the weights are updated accordingly (all the connections' weights are updated).&lt;/li&gt;
&lt;li&gt;it computes the gradients mathematically, finds which connections affect the error, and changes them accordingly.
&lt;/li&gt;
&lt;/ul&gt;
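&lt;p&gt;A minimal sketch of the update rule (this is plain gradient descent with a fixed learning rate; the weight and gradient values are made up):&lt;/p&gt;

```python
def gradient_descent_step(weights, gradients, learning_rate):
    # each weight moves a small step against its gradient, reducing the loss
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

weights = [0.5, -0.3]
gradients = [0.2, -0.1]   # positive gradient -> decrease that weight
new_weights = gradient_descent_step(weights, gradients, learning_rate=0.1)
# new_weights is approximately [0.48, -0.29]
```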

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Types of Activation Function

ReLU → hidden layers
Sigmoid → binary output
Tanh → alternative hidden layer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ReLU - Rectified Linear Unit&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;this is an activation function that keeps positive values and sets negative values to zero.

&lt;ul&gt;
&lt;li&gt;this helps with faster computation (the function is simple).&lt;/li&gt;
&lt;li&gt;it is fast not just because negatives are zeroed, but because it is computationally simple.&lt;/li&gt;
&lt;li&gt;negative values become 0.&lt;/li&gt;
&lt;li&gt;basically it acts like a filter.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If z &amp;lt; 0 → output = 0  
If z ≥ 0 → output = z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sigmoid&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a function that converts any value into the 0 to 1 range.&lt;/li&gt;
&lt;li&gt;easy to interpret for a proper yes-or-no classification.&lt;/li&gt;
&lt;li&gt;converts the raw output into a confidence score.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;f(x) = 1 / (1 + e^(-x))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tanh - Hyperbolic tangent&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;converts the output to between -1 and 1.&lt;/li&gt;
&lt;li&gt;an exponential-based function.&lt;/li&gt;
&lt;li&gt;centered at 0, meaning well balanced.&lt;/li&gt;
&lt;li&gt;often better than sigmoid (negative values are considered).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
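&lt;p&gt;The three activation functions above, written directly from their formulas in plain Python (nothing beyond the standard &lt;code&gt;math&lt;/code&gt; module):&lt;/p&gt;

```python
import math

def relu(z):
    return max(0.0, z)              # negatives become 0, positives pass through

def sigmoid(z):
    return 1 / (1 + math.exp(-z))   # squashes any value into (0, 1)

def tanh(z):
    return math.tanh(z)             # squashes any value into (-1, 1), centered at 0

# compare how each function treats a negative, zero, and positive input
for z in (-2.0, 0.0, 2.0):
    print(f"z={z:+.1f}  relu={relu(z):.3f}  sigmoid={sigmoid(z):.3f}  tanh={tanh(z):.3f}")
```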



&lt;p&gt;Types of Neural Network&lt;/p&gt;

&lt;p&gt;Feedforward Neural Network (FNN)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the input flows in one direction; there is no memory saved.&lt;/li&gt;
&lt;li&gt;this is the simplest form of neural network.&lt;/li&gt;
&lt;li&gt;Used in

&lt;ul&gt;
&lt;li&gt;basic classification&lt;/li&gt;
&lt;li&gt;Regression&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Convolutional Neural Network(CNN)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;best for images, but can be used for other data too.&lt;/li&gt;
&lt;li&gt;designed to process grid-like data.&lt;/li&gt;
&lt;li&gt;uses the grid of pixel values (numbers) to understand the image.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;key components of CNN&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;input layer - gets the raw image data and passes it to the network.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Redis</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Tue, 04 Nov 2025 20:30:51 +0000</pubDate>
      <link>https://dev.to/algon31/redis-4abo</link>
      <guid>https://dev.to/algon31/redis-4abo</guid>
      <description>&lt;p&gt;what is it?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;is a &lt;strong&gt;RE&lt;/strong&gt;mote &lt;strong&gt;DI&lt;/strong&gt;ctionary &lt;strong&gt;S&lt;/strong&gt;erver.&lt;/li&gt;
&lt;li&gt;it is an open-source, in-memory data store.&lt;/li&gt;
&lt;li&gt;it works like a cache kept in memory; reading from it is usually much faster than querying the database.&lt;/li&gt;
&lt;li&gt;usually when a query is made, Redis (the cache) is checked first; on a miss the query is sent to the DB, and the data is sent back to the user while also being saved in Redis.&lt;/li&gt;
&lt;li&gt;it is a structured key-value store, with various data structures for values.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis port number : 6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;installing redis&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;redis can run as a standalone application, but in production we usually run it through Docker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;first you need to pull the redis-stack image in Docker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;map the container port to a port on your computer so that you can use it there.&lt;br&gt;
to get a shell inside the running redis-stack container:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;continer_ID&amp;gt; bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;this gives you a bash shell inside the redis stack; to test the Redis CLI, run &lt;code&gt;redis-cli ping&lt;/code&gt;, which will return &lt;code&gt;pong&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;instead we can simply run &lt;code&gt;redis-cli&lt;/code&gt; once to open an interactive prompt and skip writing redis-cli each time&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data types in redis&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Redis Strings
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set name bhuv # setting the key-value pair
ok # output...
get name
"bhuv" # value which was saved 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in a redis DB it is not recommended to use a plain &lt;code&gt;name&lt;/code&gt; as the key; we should always use the convention mentioned below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set &amp;lt;entity&amp;gt;:&amp;lt;id&amp;gt; &amp;lt;key&amp;gt; &amp;lt;value&amp;gt;
set name:1 bhuv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;now these are grouped according to the name (for visualization purposes)&lt;/p&gt;

&lt;p&gt;for setting multiple values&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mset name:1 bhu name:2 rav name:3 olaa name:3 bhoo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;for using redis in a nodejs application&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {Redis} = require('ioredis'); // uses ioredis for connecting DB

const client = new Redis(); // creates a redis instance

module.exports = client; 
// exports it, so that it can be used in other modules.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;for setting and getting a value in node js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const client = require('./client') // import connected redis module

async function init() {
    await client.set("name:4", "boy"); // setter
    const res = await client.get("name:4"); // getter

    console.log('Result :', res);
}
init();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in redis there is TTL&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time To Live sets an expiry time on a key in redis memory; it specifies how long the key will live in redis.&lt;/li&gt;
&lt;li&gt;helps in freeing up used memory and improves data retrieval.&lt;/li&gt;
&lt;li&gt;to set an expiry: &lt;code&gt;expire &amp;lt;key&amp;gt; &amp;lt;time in seconds&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Redis Lists

&lt;ul&gt;
&lt;li&gt;lists are similar to vectors in cpp; in redis we use lists for stacks and queues.&lt;/li&gt;
&lt;li&gt;we push the data from the left or right according to the command.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lpush&lt;/code&gt; and &lt;code&gt;rpush&lt;/code&gt; push data from the left and right respectively.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lpop&lt;/code&gt; and &lt;code&gt;rpop&lt;/code&gt; pop an element from the left and right respectively.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;llen&lt;/code&gt; gives you the length of a list&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;blpop&lt;/code&gt; pops an element from the left; if the list is empty, it waits up to the specified time for an element to arrive
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;blpop &amp;lt;list&amp;gt; &amp;lt;time_in_sec&amp;gt;
blpop msg 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>beginners</category>
      <category>database</category>
      <category>performance</category>
    </item>
    <item>
      <title>Docker - one read to understand</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Sun, 26 Oct 2025 22:55:23 +0000</pubDate>
      <link>https://dev.to/algon31/docker-one-read-to-understand-4f8i</link>
      <guid>https://dev.to/algon31/docker-one-read-to-understand-4f8i</guid>
      <description>&lt;p&gt;&lt;strong&gt;problem statement&lt;/strong&gt;&lt;br&gt;
developer working on a project, he has his own configuration. he installs all dependencies (Nodejs, MongoDB and so on). &lt;br&gt;
when other developer comes into picture then, he want to install all dependencies.&lt;br&gt;
he may also have diff OS, he installed all the latest versions when he installed. both has different versions and environment.&lt;br&gt;
he can't use OS-specific software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt; - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;these containers have their own environment with the versions specified.&lt;/li&gt;
&lt;li&gt;these containers can be shared by sharing the image.&lt;/li&gt;
&lt;li&gt;these are lightweight, sharable and have their own env.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Setup&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Daemon&lt;/strong&gt; - this is the brain of docker: creating containers, pulling images.
&lt;code&gt;docker run -it ubuntu&lt;/code&gt; run from a normal CLI in windows says: start an ubuntu container, i.e. a container with ubuntu as the OS.
this checks for an ubuntu image locally on your machine; if it is not there, it is downloaded from docker hub, and you get an image of ubuntu.
now you have an ubuntu container running in docker, and you can use it.
after the container is created, the docker daemon uses your host kernel, not a full OS. whatever changes you make in the container are specific to the container: they do not affect your local OS, and anything installed exists only in the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Containers and Images&lt;br&gt;
you can create as many containers as you want from a single image. An image is like a blueprint for containers. the containers are isolated from each other: data created in one container is not accessible in another container created from the same image, meaning each container's data is isolated to itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containerizing a app&lt;/strong&gt;&lt;br&gt;
create your  application.&lt;br&gt;
now create a file named &lt;code&gt;Dockerfile&lt;/code&gt; without extension.&lt;br&gt;
this is your image configuration basically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu
RUN apt-get update
RUN apt-get install -y curl
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
RUN apt-get upgrade -y
RUN apt-get install -y nodejs

COPY package.json package.json
COPY package-lock.json package-lock.json
COPY server.js server.js

RUN npm install
ENTRYPOINT ["node" , "server.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;FROM ubuntu&lt;/code&gt; - this is your base image; we build on this image.&lt;br&gt;
&lt;code&gt;RUN apt-get update&lt;/code&gt; - this updates the package lists.&lt;br&gt;
&lt;code&gt;RUN apt-get install -y curl&lt;/code&gt; - we are installing the curl tool inside the container, so we can download files using the curl command in the container.&lt;br&gt;
&lt;code&gt;RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash -&lt;/code&gt; this sets up the nodejs package source&lt;br&gt;
&lt;code&gt;RUN apt-get upgrade -y&lt;/code&gt; upgrades the packages.&lt;br&gt;
&lt;code&gt;RUN apt-get install -y nodejs&lt;/code&gt; this installs nodejs.&lt;/p&gt;

&lt;p&gt;till here we set up the ubuntu OS and then installed nodejs on top of it&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY package.json package.json&lt;/code&gt;&lt;br&gt;
&lt;code&gt;COPY package-lock.json package-lock.json&lt;/code&gt;&lt;br&gt;
&lt;code&gt;COPY server.js server.js&lt;/code&gt;&lt;br&gt;
this says: copy fileA from the project folder to fileA inside the container.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN npm install&lt;/code&gt; installs and creates node_modules folder with packages.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ENTRYPOINT ["node" , "server.js"]&lt;/code&gt; - this starts node with server.js&lt;/p&gt;

&lt;p&gt;Now you have created a basic image configuration for your application.&lt;br&gt;
in your terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t image_name .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;this builds your image, and &lt;code&gt;.&lt;/code&gt; means the Dockerfile is in the &lt;code&gt;pwd&lt;/code&gt;.&lt;br&gt;
&lt;code&gt;-t&lt;/code&gt; means the tag name you are giving&lt;br&gt;
&lt;code&gt;build&lt;/code&gt; builds your docker image using the configuration in the Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;container_ID&amp;gt; bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;this gives you an interactive (&lt;code&gt;i&lt;/code&gt;) terminal (&lt;code&gt;t&lt;/code&gt;) inside the container;&lt;br&gt;
say you built the image from ubuntu, then it gives you an ubuntu terminal to interact with the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it -e PORT=5000 -p 8000:5000 &amp;lt;image_tag&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;run&lt;/code&gt; runs your image&lt;br&gt;
&lt;code&gt;-it&lt;/code&gt; interactive terminal&lt;br&gt;
&lt;code&gt;-e&lt;/code&gt; environment variable&lt;br&gt;
&lt;code&gt;PORT=5000&lt;/code&gt; we are defining the env variable PORT as 5000&lt;br&gt;
&lt;code&gt;-p&lt;/code&gt; this is port mapping&lt;br&gt;
&lt;code&gt;8000:5000&lt;/code&gt; here, 8000 ---&amp;gt; host port and 5000 --&amp;gt; container's port&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build test .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;the &lt;code&gt;test:latest&lt;/code&gt; tag is used as a reference to the built image.&lt;br&gt;
Docker images are immutable; rebuilding creates a new image and moves the tag to the new image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer Caching&lt;/strong&gt; (each command in the Dockerfile can be called a layer of the image build).&lt;br&gt;
so, basically the order of the Dockerfile commands is important. say you change something in server.js: docker checks the Dockerfile for the last line up to which the layers are cached, and every command after that line is executed again on rebuild. so keep the common commands (installing node or other dependencies) before the code that keeps changing, to avoid unnecessary re-downloading of dependencies on every build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publishing to Docker hub&lt;/strong&gt;&lt;br&gt;
first create a repo on docker hub&lt;br&gt;
now create a local image using the name given in the repo&lt;br&gt;
finally &lt;code&gt;docker push &amp;lt;image_name&amp;gt;&lt;/code&gt; to push it to your repo&lt;br&gt;
you need to be logged in to push the image.&lt;/p&gt;

&lt;p&gt;say you are a developer working on multiple containers that work together, like Nodejs, PostgreSQL or MongoDB. manually managing these containers would be a huge headache. so, &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt; is a tool used to define, configure and run multi-container applications easily.&lt;br&gt;
it lets you configure your application to run with multiple containers.&lt;br&gt;
&lt;code&gt;docker-compose.yml&lt;/code&gt; - just like the Dockerfile, this file is used to configure and manage the containers so they can interact and communicate seamlessly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"

services: 
  postgres: 
    image: postgres # takes the postgres image from docker hub
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: username
      POSTGRES_DB: review
      POSTGRES_PASSWORD: password

  redis:
    image: redis
    ports: ["6379:6379"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;version&lt;/code&gt; can be anything.&lt;br&gt;
&lt;code&gt;services:&lt;/code&gt; says what services you want&lt;br&gt;
&lt;code&gt;postgres:&lt;/code&gt; which service you want&lt;br&gt;
&lt;code&gt;image:&lt;/code&gt; image name on docker hub. you can use your own image too.&lt;br&gt;
&lt;code&gt;ports:&lt;/code&gt; port mapping&lt;br&gt;
&lt;code&gt;environment:&lt;/code&gt; env variables&lt;/p&gt;

&lt;p&gt;this is a basic setup for docker compose.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;this starts the configured services with the port mapping.&lt;br&gt;
it creates a stack of containers with your services, visible in the containers page in docker desktop.&lt;br&gt;
&lt;code&gt;docker compose down&lt;/code&gt; stops and removes the stack of containers&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Fri, 24 Oct 2025 07:41:38 +0000</pubDate>
      <link>https://dev.to/algon31/kubernetes-3nab</link>
      <guid>https://dev.to/algon31/kubernetes-3nab</guid>
      <description>&lt;p&gt;basic prerequisites &lt;/p&gt;

&lt;p&gt;how development evolved:&lt;br&gt;
at first, people used to buy physical servers to deploy their applications.&lt;br&gt;
then AWS came into the picture, and developers and startups started to use the AWS cloud.&lt;br&gt;
&lt;strong&gt;AWS&lt;/strong&gt; - a cloud computing platform used by developers and startups to build their applications on the cloud; it provides on-demand resources and is easy to use.&lt;br&gt;
now everything was on the cloud; everything became cloud native, running on a different machine than before.&lt;br&gt;
but code written on windows and deployed on some other OS, like a cloud platform, was hard to run. This was solved by VMs.&lt;br&gt;
but there was a problem: VMs are too heavyweight for the system, each with its own OS. this was solved by docker's containerized applications, which are lightweight.&lt;/p&gt;

&lt;p&gt;still, managing all these containers by hand was a problem, so Kubernetes came into the picture.&lt;/p&gt;

&lt;p&gt;what is it?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is an orchestration platform that automates deployment, scaling and management of containerized applications.&lt;/li&gt;
&lt;li&gt;so, basically it is like a conductor (Kubernetes) managing the musicians (containers) to ensure a coordinated and optimized performance.&lt;/li&gt;
&lt;li&gt;it also makes development of containerized applications generic, meaning applications are not tied to a specific cloud.&lt;/li&gt;
&lt;li&gt;it can run on AWS EKS (Elastic Kubernetes Service), DO (DigitalOcean) and GCP (Google Cloud Platform).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubernetes Architecture - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cluster&lt;/strong&gt; - a group of nodes (control plane, compute nodes) where Kubernetes runs workloads. these are the core units of Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control plane&lt;/strong&gt; is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. its like the brain of the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API server&lt;/strong&gt; is the central interface, all commands pass through it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduler&lt;/strong&gt; assigns the pods to the nodes based on availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller Manager&lt;/strong&gt; monitors cluster state and ensures desired matches the actual state. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;etcd&lt;/strong&gt; key-value store maintaining cluster configuration data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Worker nodes&lt;/strong&gt; are the machines in a Kubernetes cluster that actually run your containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Kubelet&lt;/code&gt; Agents ensuring containers in pods are running as instructed.&lt;/li&gt;
&lt;li&gt;kube-proxy manages networking, routing traffic between pods and services.&lt;/li&gt;
&lt;li&gt;Nodes use container runtimes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Objects and Resources &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pods/Containers&lt;/strong&gt; Smallest deployable units, hold one or more containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Services&lt;/strong&gt; exposes pods to network traffics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt; manages pod replicas and rolling updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Namespace&lt;/strong&gt; divides cluster resources logically for isolation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febfx719k4wuxlkq1qmp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febfx719k4wuxlkq1qmp7.png" alt=" " width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;basic flow, &lt;br&gt;
there is a physical server running the control plane; it is responsible for managing the worker nodes.&lt;br&gt;
the control plane runs separately from the worker nodes (it can run on the same machine in the development phase).&lt;br&gt;
say you want to run two nginx containers:&lt;br&gt;
the instruction is sent to the control plane through the API server (authentication is done: whether the request is made by an authenticated system).&lt;br&gt;
now, the &lt;strong&gt;API server&lt;/strong&gt; tells the &lt;strong&gt;Controller&lt;/strong&gt; to create two pods with the nginx container, and the controller creates the pods. next we need a physical machine to run these pods.&lt;br&gt;
the &lt;strong&gt;Scheduler&lt;/strong&gt; checks for unassigned pods and assigns them to worker nodes, distributing the pods across the worker nodes.&lt;br&gt;
the worker node is basically where your actual code runs.&lt;br&gt;
the scheduler, through the API server, sends a message to the kubelet on the worker node to start the pod that was unassigned.&lt;br&gt;
&lt;strong&gt;Kubelet&lt;/strong&gt;'s main job is to ensure that the containers that are scheduled are running on the node.&lt;br&gt;
&lt;strong&gt;Kube proxy&lt;/strong&gt; - redirects network traffic.&lt;br&gt;
here the desired state is matched with the current state.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Wed, 22 Oct 2025 11:03:11 +0000</pubDate>
      <link>https://dev.to/algon31/docker-3mjm</link>
      <guid>https://dev.to/algon31/docker-3mjm</guid>
      <description>&lt;p&gt;what is Docker?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is a platform for developing, shipping and running applications.&lt;/li&gt;
&lt;li&gt;it separates your application from your infrastructure for faster building and delivery.&lt;/li&gt;
&lt;li&gt;containers are great for Continuous Integration and Continuous Delivery, i.e. CI/CD workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Platform&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker provides a way to package and run applications in loosely isolated environments called containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containers&lt;/strong&gt; are lightweight and contain everything needed for application to run, so that you wont need to rely on what's installed in host.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker Architecture &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker uses a &lt;strong&gt;client-server architecture&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Containers are lightweight because they share the host kernel, do not include a full OS, and still have an isolated user space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oioqlkf5zgmlygn0qo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oioqlkf5zgmlygn0qo9.png" alt=" " width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker daemon (&lt;code&gt;dockerd&lt;/code&gt;)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Listens for Docker API and manages docker objects such as images, containers, networks and volumes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Client&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is the CLI used to communicate with the Docker daemon process.&lt;/li&gt;
&lt;li&gt;It talks to the daemon using the REST API, over a UNIX socket or TCP.&lt;/li&gt;
&lt;li&gt;When you use commands such as &lt;code&gt;docker run&lt;/code&gt;, the client sends these commands to &lt;code&gt;dockerd&lt;/code&gt;, which carries them out. The &lt;code&gt;docker&lt;/code&gt; command uses the Docker API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Objects&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.

&lt;ul&gt;
&lt;li&gt;Images (the blueprint) - a ready-made template for creating Docker containers. you usually create an image on top of other images with your own customizations, defined in a &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Containers (a running instance of that blueprint) - a runnable instance of an image. You can create, start, stop, move or delete a container using the Docker API or CLI.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Registries&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A registry stores Docker images. Docker Hub is a public registry that anybody can use. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;so, here is &lt;strong&gt;the basic flow&lt;/strong&gt;:&lt;br&gt;
first, a command is issued via the Docker client (CLI); the client turns it into a REST API request.&lt;br&gt;
the client communicates with &lt;code&gt;dockerd&lt;/code&gt; via a UNIX socket or TCP.&lt;br&gt;
the daemon checks its local image cache.&lt;br&gt;
if the image is not cached, the daemon pulls it from a Docker registry (Docker Hub or a private registry).&lt;br&gt;
the daemon then creates a writable layer on top of the retrieved image for the container.&lt;br&gt;
it sets up the container's resources using Docker objects (networks, volumes, etc.).&lt;br&gt;
to start the container, &lt;code&gt;dockerd&lt;/code&gt; launches it as a process on the host OS using the shared kernel.&lt;br&gt;
while the container runs, it executes its defined processes.&lt;br&gt;
the daemon also tracks lifecycle, resource usage and logs.&lt;br&gt;
finally, a response (container ID, logs) is sent back to the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User (CLI)
   ↓
Docker Client → API Request
   ↓
Docker Daemon (Server)
   ↓
Check local image → Pull from Registry if absent
   ↓
Create container using Docker objects
   ↓
Run container (process on host kernel)
   ↓
Return output to Client

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
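&lt;p&gt;as a concrete example of this flow (a sketch, assuming Docker is installed and &lt;code&gt;dockerd&lt;/code&gt; is running):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# client sends the request to dockerd; the nginx image is pulled
# from Docker Hub if it is not in the local cache
docker run -d -p 8080:80 --name web nginx

# list running containers (ID, image, status, ports)
docker ps

# the daemon tracks lifecycle and logs
docker logs web

# stop and remove the container
docker stop web
docker rm web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;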



&lt;p&gt;Extras: &lt;br&gt;
&lt;strong&gt;Nginx&lt;/strong&gt; - open-source software that acts as a web server, reverse proxy, load balancer and HTTP cache.&lt;br&gt;
&lt;strong&gt;Reverse proxy&lt;/strong&gt; - the client does not send requests to the backend server directly (maintaining abstraction); it sends them to a middle server (like Nginx) that decides which server should handle the request (the routing rules are defined in the Nginx configuration).&lt;br&gt;
&lt;strong&gt;Load balancer&lt;/strong&gt; - distributes incoming traffic across multiple servers to ensure no single server is overwhelmed.&lt;br&gt;
&lt;strong&gt;HTTP caching&lt;/strong&gt; - refers to Nginx's ability to store copies of responses from backend (origin) servers and serve those cached responses directly to clients for subsequent requests&lt;/p&gt;
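&lt;p&gt;a minimal sketch of an Nginx config that acts as both a reverse proxy and a load balancer (the backend hostnames here are made-up placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream backend {
    server app1.example.com;   # incoming traffic is spread
    server app2.example.com;   # across these servers
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # reverse proxy: forward to the upstream group
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;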

</description>
    </item>
    <item>
      <title>DevOps</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Mon, 20 Oct 2025 08:03:49 +0000</pubDate>
      <link>https://dev.to/algon31/devops-4ofc</link>
      <guid>https://dev.to/algon31/devops-4ofc</guid>
      <description>&lt;p&gt;what is it?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a set of practices and tools that automate and integrate the processes between the software development and IT operations teams.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Delivery&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;improving the &lt;strong&gt;Quality&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;continuous  &lt;strong&gt;Monitoring&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;continuous  &lt;strong&gt;Testing&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ok, but why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;for automating the testing&lt;/li&gt;
&lt;li&gt;for faster delivery &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SDLC - Software Development Life Cycle&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it's a structured process of designing, building, testing and maintaining software.&lt;/li&gt;
&lt;li&gt;it consists of distinct phases, each responsible for a different stage of the software's development.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffox3xjq22pgzoqav3i0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffox3xjq22pgzoqav3i0p.png" alt=" " width="660" height="330"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;so, basically Dev-ops engineer will make sure the 

Building ----&amp;gt; Testing ----&amp;gt; Deployment

is automated and not manual.(usually dev-ops part)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Virtual Machines&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a compute resource that uses software instead of a physical computer to run programs and deploy apps.&lt;/li&gt;
&lt;li&gt;a machine that uses resources that are on another computer, treating them as its own.&lt;/li&gt;
&lt;li&gt;basically there is a host machine (the computer with the physical resources) and a guest (the virtual computer running on the host)&lt;/li&gt;
&lt;li&gt;this provides logical isolation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Hypervisor&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;is a virtual machine monitor (also known as VMM).&lt;/li&gt;
&lt;li&gt;is a software that creates and runs VMs.&lt;/li&gt;
&lt;li&gt;it allows one host machine to run multiple VMs by virtually sharing resources, like memory and processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS (Amazon Web Services)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is a cloud computing platform offered by Amazon that provides the required resources on demand.&lt;/li&gt;
&lt;li&gt;they build physical servers and data centers, and provide VMs (EC2 in AWS) with the specified compute, databases and storage.&lt;/li&gt;
&lt;li&gt;users can only interact with it online.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EC2: Virtual servers for running applications.
S3: Object storage for files and media.
RDS: Managed relational databases.
Lambda: Run code without managing servers.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CLI&lt;/strong&gt; - a Command Line Interface is a text-based tool used to interact with the computer using typed commands&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What happens when u click google.com - Computer Networks</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Sat, 18 Oct 2025 10:32:53 +0000</pubDate>
      <link>https://dev.to/algon31/what-happens-when-u-click-googlecom-computer-networks-pmh</link>
      <guid>https://dev.to/algon31/what-happens-when-u-click-googlecom-computer-networks-pmh</guid>
      <description>&lt;p&gt;Lets start from beginning - &lt;/p&gt;

&lt;p&gt;When you install a router, it receives a public IP from the ISP on its WAN interface. &lt;br&gt;
Usually, when you set up a router in your home or anywhere, basic default settings will already be in place, like&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DHCP enabled &lt;/li&gt;
&lt;li&gt;NAT enabled &lt;/li&gt;
&lt;li&gt;Default routing connecting LAN to WAN and back&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DHCP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a host connects via Ethernet or Wi-fi, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The host sends a broadcast &lt;code&gt;DHCPDISCOVER&lt;/code&gt; to the network&lt;/li&gt;
&lt;li&gt;This asks the router to assign an IP address to the host&lt;/li&gt;
&lt;li&gt;when the router (DHCP server) receives the request,&lt;/li&gt;
&lt;li&gt;the DHCP server sends back a &lt;code&gt;DHCPOFFER&lt;/code&gt;, which contains
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IP Address - host expected IP address
Subnet Mask - the local networks range
Default Gateway - if its outside the local to external IPs
DNS server - usually the router itself
Lease Time - duration of the IP address 
some other parameters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;now, after the host receives the offer, it responds with &lt;code&gt;DHCPREQUEST&lt;/code&gt;, confirming the IP address&lt;/li&gt;
&lt;li&gt;the DHCP server confirms the assignment by sending &lt;code&gt;DHCPACK&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
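&lt;p&gt;the DISCOVER → OFFER → REQUEST → ACK exchange above can be sketched as a toy simulation (not a real DHCP client; the pool and addresses are made-up values):&lt;/p&gt;

```python
# Toy model of the DHCP DORA handshake (Discover, Offer, Request, Ack).
class DHCPServer:
    def __init__(self, pool):
        self.pool = list(pool)   # free addresses to hand out
        self.leases = {}         # MAC address -> leased IP

    def offer(self, mac):
        # reply to a DHCPDISCOVER broadcast with an offer
        return {"type": "DHCPOFFER", "ip": self.pool[0], "lease_time": 86400}

    def ack(self, mac, ip):
        # reply to the host's DHCPREQUEST, confirming the assignment
        self.pool.remove(ip)
        self.leases[mac] = ip
        return {"type": "DHCPACK", "ip": ip}

server = DHCPServer(["192.168.1.10", "192.168.1.11"])
host_mac = "00:1A:2B:3C:4D:5E"

offer = server.offer(host_mac)           # DHCPDISCOVER -> DHCPOFFER
ack = server.ack(host_mac, offer["ip"])  # DHCPREQUEST  -> DHCPACK
print(ack["ip"])                         # 192.168.1.10 now leased to the host
```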

&lt;p&gt;&lt;strong&gt;IP Address&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an Internet Protocol address is a unique address that identifies a device on the internet.&lt;/li&gt;
&lt;li&gt;DHCP assigns an IP address by selecting one from an address pool&lt;/li&gt;
&lt;li&gt;It is a uniquely assigned numerical label, given to each device that is connected to the internet.&lt;/li&gt;
&lt;li&gt;two types

&lt;ul&gt;
&lt;li&gt;Public IP address - an address assigned to a device that is connected to the internet directly.&lt;/li&gt;
&lt;li&gt;Private IP address - addresses used inside private networks; these cannot interact with the internet directly, but talk to it through a mechanism called NAT.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;based on version, there are two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IPv4(32 bit addressing)

&lt;ul&gt;
&lt;li&gt;it is the most common form of IP address; it consists of four numbers (octets) separated by dots
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1 byte = 8 bits 
so,
32 bits --&amp;gt; 4 bytes
example IP address :
    192.168.1.1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;IPv6(128 bits addressing)

&lt;ul&gt;
&lt;li&gt;this version was created to overcome the shortage of IPv4 addresses; it gives us a vastly larger address space.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;example:
  2001:0db8:85a3:0000:0000:8a2e:0370:7334
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Subnet Mask&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a subnet is a portion of a larger network; the subnet mask tells you the network boundary, i.e. the range of IP addresses in the local subnet&lt;/li&gt;
&lt;li&gt;a subnet mask is a 32-bit number used to differentiate between the network part and the host part of an IP address&lt;/li&gt;
&lt;/ul&gt;
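&lt;p&gt;a quick way to see the network/host split (a small sketch using Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module, with the usual example addresses):&lt;/p&gt;

```python
import ipaddress

# 192.168.1.0/24 means the first 24 bits are the network part,
# leaving 8 bits for hosts on this subnet.
net = ipaddress.ip_network("192.168.1.0/24")

print(net.netmask)        # 255.255.255.0  (the 32-bit subnet mask)
print(net.num_addresses)  # 256 addresses in the range

# an address is "local" if it falls inside the subnet
print(ipaddress.ip_address("192.168.1.42") in net)   # True
print(ipaddress.ip_address("10.0.0.5") in net)       # False
```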

&lt;p&gt;&lt;strong&gt;DNS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the Domain Name System is the internet's phonebook, mapping/translating domain names (which people can remember) to IP addresses (which systems/routers understand) via hierarchical name servers.&lt;/li&gt;
&lt;li&gt;a server for translating names to IP addresses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lease Time&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the duration for which the IP is valid before renewal; it is sent in the &lt;code&gt;DHCPOFFER&lt;/code&gt; to the host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;after assignment, some time later the host sends a renewal request (&lt;code&gt;DHCPREQUEST&lt;/code&gt;) to the DHCP server to keep its IP address&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you enter google.com in the browser, the OS first checks the local DNS cache on your system. If the name is not found, the host sends a DNS query to the router; the router forwards it to the actual DNS server (usually at the ISP), and the ISP's DNS server sends the IP address back to the host.&lt;/p&gt;

&lt;p&gt;let's say the host doesn't know the MAC address of the router; it only knows the router's IP address, so it can't send the DNS lookup yet. this is where the ARP request/reply exchange comes in&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ARP - request/reply Process&lt;/strong&gt;&lt;br&gt;
basic analogy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   Host----switch----router-----(ISP)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;host  - ARP Table (mapping of MAC address to IP Addresses)&lt;/li&gt;
&lt;li&gt;switch - MAC table (mapping of MAC address to switch ports)&lt;/li&gt;
&lt;li&gt;router - ARP table (mapping of MAC addresses to IP addresses)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;lets say all tables are empty&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the host sends a broadcast ARP request asking for the MAC address of the default gateway,
something like this -
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source IP address : 192.xx.xx.xx (host's IP)
source MAC address : 00:1A:2B:3C:4D:5E (host's mac)
destination IP address : 192.1xx.x.1 (this will be the router's IP)
destination MAC address : FF:FF:FF:FF:FF:FF (broadcast - delivered to everyone on the LAN)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;the switch doesn't know the destination, but it is a broadcast, so it forwards the message to the whole LAN and updates its MAC table, mapping the host's MAC address to its port number.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the other hosts check and discard the packet, because the destination IP address in the ARP request is not their own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the router also gets the ARP broadcast and learns that the host wants the MAC address of the router (the default gateway). The router updates its own ARP table, mapping the host's MAC address to its IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the router sends a unicast reply back to the host, as it knows the host's MAC address from the ARP request. here the switch also updates its MAC table, adding the router's MAC address and mapping it to the router's port.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;6. the host updates its ARP table and adds the router's MAC address&lt;/p&gt;

&lt;p&gt;now the host has the router's MAC address and can communicate with the internet through the router.&lt;/p&gt;
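&lt;p&gt;the ARP caching behaviour above can be sketched as a toy model (not a real ARP implementation; the addresses are made up):&lt;/p&gt;

```python
# Toy model of an ARP cache: before sending to an IP we have no MAC for,
# broadcast a request, then cache the answer (step 6 above).

arp_table = {}                       # IP -> MAC (the host's ARP cache)
GATEWAY_IP = "192.168.1.1"
GATEWAY_MAC = "AA:BB:CC:DD:EE:01"    # what the router would answer with

def resolve(ip):
    if ip in arp_table:              # cache hit: no broadcast needed
        return arp_table[ip]
    # cache miss: broadcast "who has <ip>?" -- here only the router replies
    reply_mac = GATEWAY_MAC if ip == GATEWAY_IP else None
    if reply_mac:
        arp_table[ip] = reply_mac    # update the ARP table
    return reply_mac

first = resolve(GATEWAY_IP)    # triggers the broadcast
second = resolve(GATEWAY_IP)   # answered straight from the cache
```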

&lt;p&gt;the browser sends an HTTP or HTTPS (secure) request to the server.&lt;br&gt;
the router performs NAT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NAT&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is the process of allowing multiple private IPs to share a single public IP address to interact with the WAN, i.e. to access the internet. it runs in the router&lt;/li&gt;
&lt;li&gt;NAT maintains a translation table, which maps each internal host's address and port to the public address and port used on the internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;let me explain: when host A sends a request to an external server via the router, NAT maps the connection to a public port number before forwarding it to the actual server. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host A (192.168.1.2:5000) sends a request to a web server.&lt;/li&gt;
&lt;li&gt;Router replaces sourceIP:port with publicIP:10000 in the NAT table.
then, for the same server, host B also sends a request&lt;/li&gt;
&lt;li&gt;Host B (192.168.1.3:5000) sends a request to the same server&lt;/li&gt;
&lt;li&gt;Router replaces source IP:port with publicIP:10001 in NAT table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the server replies to publicIP:10000, so the router checks the table and forwards the reply to host A.&lt;/p&gt;

&lt;p&gt;This process is called Port Address Translation (PAT).&lt;/p&gt;

&lt;p&gt;so, like this, NAT replaces the private IP address with the public IP address using the NAT table. when the router receives the reply from the server, it checks the table and forwards it back to the host.&lt;/p&gt;
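&lt;p&gt;the PAT steps above can be sketched as a toy model (the public IP here is a made-up address from the documentation range):&lt;/p&gt;

```python
# Toy model of Port Address Translation (PAT): two private hosts share
# one public IP, distinguished by the public port. Values mirror the
# example above.

PUBLIC_IP = "203.0.113.7"      # made-up public address

nat_table = {}                 # public port -> (private IP, private port)
next_port = 10000

def outbound(private_ip, private_port):
    """Rewrite a private source address to the shared public IP:port."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return (PUBLIC_IP, public_port)

def inbound(public_port):
    """A reply arrives at the public IP: look up which host it belongs to."""
    return nat_table[public_port]

a = outbound("192.168.1.2", 5000)   # Host A -> publicIP:10000
b = outbound("192.168.1.3", 5000)   # Host B -> publicIP:10001

# the server replies to publicIP:10000 -> forwarded back to Host A
print(inbound(10000))               # ('192.168.1.2', 5000)
```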

&lt;p&gt;Finally, the host’s transport layer checks the destination port to deliver the payload to the proper application/session(here the browser), with TCP/UDP connections potentially reused for multiple requests.&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>computernetworks</category>
    </item>
    <item>
      <title>Important OS</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Tue, 26 Aug 2025 10:54:44 +0000</pubDate>
      <link>https://dev.to/algon31/important-os-4g6h</link>
      <guid>https://dev.to/algon31/important-os-4g6h</guid>
      <description>&lt;h4&gt;
  
  
  Types of OS
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Single Process &lt;/li&gt;
&lt;li&gt;Batch Processing &lt;/li&gt;
&lt;li&gt;Multiprogramming - multiple jobs/programs loaded and running&lt;/li&gt;
&lt;li&gt;Multi-tasking - logically an extension of (3) - a single CPU appears to run multiple tasks simultaneously by switching between them.&lt;/li&gt;
&lt;li&gt;Multiprocessing - more than 1 CPU in a single computer&lt;/li&gt;
&lt;li&gt;Distributed - the CPUs, memory and GPUs are (or may be) in more than one place; the system is distributed.&lt;/li&gt;
&lt;li&gt;Real Time - needs instant, reliable responses within strict deadlines&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Multi-tasking vs Multi-Threading
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Multi-Tasking
&lt;/h5&gt;

&lt;p&gt;working on many tasks seemingly simultaneously.&lt;br&gt;
the CPU switches between multiple processes.&lt;br&gt;
e.g. employees working in the same company (CPU) in different departments (processes) in their own offices (their memory).&lt;/p&gt;

&lt;h5&gt;
  
  
  Multi-Threading
&lt;/h5&gt;

&lt;p&gt;Breaking a process into several threads, each with its own path of execution.&lt;br&gt;
the CPU switches between threads of the same process, which share memory.&lt;br&gt;
e.g. employees working in the same company (CPU) on the same project (process) in the same office (shared memory).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: in multithreading, each thread executes based on its priority. Each thread gets its own time slice within the process's runtime (the whole process shares the same runtime).&lt;/p&gt;
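&lt;p&gt;a small sketch of multi-threading in Python: several threads of one process share the same memory (here, a counter), which is why a lock is needed:&lt;/p&gt;

```python
import threading

counter = 0                 # shared memory: visible to every thread
lock = threading.Lock()     # protects the shared counter

def work(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread updates at a time
            counter += 1

# four threads of the same process, each incrementing 10_000 times
threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000 -- all threads saw and updated the same memory
```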

</description>
    </item>
    <item>
      <title>OS Concepts and terms</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Tue, 22 Jul 2025 15:20:03 +0000</pubDate>
      <link>https://dev.to/algon31/os-concepts-and-terms-3d6g</link>
      <guid>https://dev.to/algon31/os-concepts-and-terms-3d6g</guid>
      <description>&lt;h3&gt;
  
  
  Firmware
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Permanently stored software&lt;/strong&gt; on a hardware device. it gives low-level control for that specific hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  BIOS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Basic Input-Output System&lt;/strong&gt;, a low-level firmware embedded on the motherboard. it is responsible for booting up the system, running in 16-bit real mode.&lt;/p&gt;

&lt;h3&gt;
  
  
  UEFI
&lt;/h3&gt;

&lt;p&gt;BIOS is old; the &lt;strong&gt;Unified Extensible Firmware Interface&lt;/strong&gt; is a replacement for the traditional BIOS. it boots faster, as it can run in both 32- and 64-bit mode.&lt;/p&gt;




&lt;h2&gt;
  
  
  Important Terms
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Compiler
&lt;/h3&gt;

&lt;p&gt;A program that converts a high-level language (C++, Java) into binary code (0s and 1s) so that the computer can understand it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Loader
&lt;/h3&gt;

&lt;p&gt;the part of the OS that loads an executable file from disk into memory and starts it, so the PC can run it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Assembler
&lt;/h3&gt;

&lt;p&gt;a program that converts low-level (assembly) language into binary instructions (machine code)&lt;/p&gt;

&lt;h3&gt;
  
  
  Interpreter
&lt;/h3&gt;

&lt;p&gt;a program that translates and runs code line by line (unlike a compiler, which translates the whole program first and then runs it)&lt;/p&gt;

&lt;h3&gt;
  
  
  System Calls
&lt;/h3&gt;

&lt;p&gt;it is the way programs request services from the OS kernel, like &lt;code&gt;open()&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;unistd.h&amp;gt;

int main() {
    const char *message = "Hello, World\n";
    write(1, message, 13);  //here write() is a system call
    return 0;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  API (Application Programing Interface)
&lt;/h3&gt;

&lt;p&gt;it is a set of rules/protocols that programs follow, through which two different pieces of software interact/communicate with each other.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kernel
&lt;/h3&gt;

&lt;p&gt;it is the core of an OS, which controls all the hardware&lt;/p&gt;

&lt;h3&gt;
  
  
  Shell
&lt;/h3&gt;

&lt;p&gt;it's a program that helps the user interact with the kernel via commands&lt;/p&gt;

&lt;h3&gt;
  
  
  JVM (Java Virtual Machine)
&lt;/h3&gt;

&lt;p&gt;it is a VM which acts as a bridge between compiled Java bytecode and the OS. that's why Java is referred to as &lt;strong&gt;Write Once, Run Anywhere&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Booting
&lt;/h3&gt;

&lt;p&gt;the process of starting and loading the OS into the main memory.&lt;/p&gt;




&lt;h3&gt;
  
  
  Multi -
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf4xrqbkgu0vu4jypapw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf4xrqbkgu0vu4jypapw.png" alt=" " width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;difference between multiprogramming and multitasking is&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;in multitasking - programs appear to run simultaneously (the CPU switches rapidly between them)&lt;br&gt;
in multiprogramming - programs are either ready or waiting for their turn.&lt;/p&gt;




&lt;h3&gt;
  
  
  Process vs Program
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Program -&lt;/strong&gt;&lt;br&gt;
a set of instructions which is yet to be executed. it is a passive entity that stays in secondary storage, i.e. the contents of the disk&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a single user can execute multiple programs&lt;/li&gt;
&lt;li&gt;it is a file that needs to be executed&lt;/li&gt;
&lt;li&gt;a single program can be linked to multiple processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Process -&lt;/strong&gt;&lt;br&gt;
active instance of a program. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is an active entity which is running in the main memory&lt;/li&gt;
&lt;li&gt;remains for a specific time &lt;/li&gt;
&lt;li&gt;dynamic entity&lt;/li&gt;
&lt;li&gt;sequence of execution of instructions&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  States of Processes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ha9a5bvynq1jmd5ae1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ha9a5bvynq1jmd5ae1.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;




</description>
    </item>
    <item>
      <title>Operating System</title>
      <dc:creator>Ravi Bhuvan</dc:creator>
      <pubDate>Tue, 22 Jul 2025 07:21:53 +0000</pubDate>
      <link>https://dev.to/algon31/operating-system-1f4o</link>
      <guid>https://dev.to/algon31/operating-system-1f4o</guid>
      <description>&lt;h1&gt;
  
  
  What is OS?
&lt;/h1&gt;

&lt;p&gt;An OS (Operating System) interacts with the system software and hardware on behalf of the user, so the user is not required to understand the system hardware in order to use the system.&lt;/p&gt;




&lt;h2&gt;
  
  
  Services Provided by an OS -
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Program Execution &lt;/li&gt;
&lt;li&gt;Memory Management&lt;/li&gt;
&lt;li&gt;Process Management&lt;/li&gt;
&lt;li&gt;File System Management&lt;/li&gt;
&lt;li&gt;I/O Device Management&lt;/li&gt;
&lt;li&gt;Security Access Control&lt;/li&gt;
&lt;li&gt;Resource Management&lt;/li&gt;
&lt;li&gt;Time Management&lt;/li&gt;
&lt;li&gt;Error Detection &lt;/li&gt;
&lt;li&gt;Communication Services&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Program Execution
&lt;/h3&gt;

&lt;p&gt;The OS runs the application which is installed in the system.&lt;/p&gt;

&lt;h4&gt;
  
  
  Memory Management
&lt;/h4&gt;

&lt;p&gt;The OS is responsible for allocating and deallocating memory for processes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Process Management
&lt;/h4&gt;

&lt;p&gt;The OS decides which program runs first and for how long - multitasking (the creating and scheduling of processes).&lt;/p&gt;

&lt;h4&gt;
  
  
  File System Management
&lt;/h4&gt;

&lt;p&gt;Managing and organizing the files in the system.&lt;/p&gt;

&lt;h4&gt;
  
  
  I/O Device Management
&lt;/h4&gt;

&lt;p&gt;Devices like the mouse, keyboard, scanner and so on are managed by the OS.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security and Access Control
&lt;/h4&gt;

&lt;p&gt;Protects from unauthorized access of data, manages the permission controls of other software.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resource Management
&lt;/h4&gt;

&lt;p&gt;manages resources like the CPU, RAM and other hardware, and assigns them to processes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Time management
&lt;/h4&gt;

&lt;p&gt;keeps timers for running processes, so the execution of programs can be stopped or made to wait.&lt;/p&gt;

&lt;h4&gt;
  
  
  Error Detection
&lt;/h4&gt;

&lt;p&gt;Detects hardware and software errors and handles them - errors like a software crash or disk failure.&lt;/p&gt;

&lt;h4&gt;
  
  
  Communication service
&lt;/h4&gt;

&lt;p&gt;Helps processes interact with each other to perform specific tasks.&lt;/p&gt;




&lt;h3&gt;
  
  
  Types of OS
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Batched OS
&lt;/h4&gt;

&lt;p&gt;runs multiple jobs in batches and does not interact with the user. output comes after all jobs (programs that need to be executed) are done.&lt;/p&gt;

&lt;h4&gt;
  
  
  Time-Sharing OS
&lt;/h4&gt;

&lt;p&gt;shares CPU time among multiple users or tasks. multiple users seem to use the OS at the same time, but actually the system is switching between them very fast.&lt;/p&gt;

&lt;h5&gt;
  
  
  Distributed OS
&lt;/h5&gt;

&lt;p&gt;here multiple computers act as a single computer. tasks are managed across all nodes.&lt;/p&gt;

&lt;h5&gt;
  
  
  Network OS
&lt;/h5&gt;

&lt;p&gt;Computers working together on the same network (LAN), each running its own independent Network OS, sharing files and user management.&lt;/p&gt;

&lt;h4&gt;
  
  
  Real-Time OS
&lt;/h4&gt;

&lt;p&gt;a system that needs instant, reliable responses. the system is very fast, as the OS is built for real-time decision-making.&lt;/p&gt;




&lt;h3&gt;
  
  
  RAM and ROM (types of memory)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  RAM(Random Access Memory)
&lt;/h4&gt;

&lt;p&gt;it is also known as main memory and is fast. information stored in it is temporary (volatile memory): if the power goes off, the stored information is gone too.&lt;/p&gt;

&lt;h4&gt;
  
  
  ROM(Read Only Memory)
&lt;/h4&gt;

&lt;p&gt;it is non-volatile memory (contents are stored permanently and are not lost when the power is off). the contents of a ROM are written by the manufacturer during manufacturing.&lt;/p&gt;

&lt;h2&gt;
  
  
  SRAM and DRAM (Types of RAM)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Static RAM (SRAM) -&lt;/strong&gt; used as the cache memory of a system. it is faster and more reliable than DRAM, does not need refreshing, and is more expensive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic RAM (DRAM) -&lt;/strong&gt; mainly used as main memory. it stores data as bits in capacitors, value 1 (charged) and 0 (discharged), and needs periodic refreshing to retain data.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  (Types of ROM)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Programmable ROM (PROM) -&lt;/strong&gt; can be programmed once by the user after manufacturing, but not erased.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Erasable PROM (EPROM) -&lt;/strong&gt; can be erased by UV light and reprogrammed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Electrically Erasable PROM (EEPROM) -&lt;/strong&gt; can be erased and reprogrammed electrically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mask ROM -&lt;/strong&gt; programmed during manufacturing only.&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Virtualization
&lt;/h3&gt;

&lt;p&gt;using a whole set of virtual hardware (extra compute power, RAM and other resources) on top of the current system's OS, using software like hypervisors. example - VMware&lt;/p&gt;

&lt;h3&gt;
  
  
  Containerization
&lt;/h3&gt;

&lt;p&gt;running a specific app/web app in a container, isolated from the OS. example - Docker&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
