<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ishan Mishra</title>
    <description>The latest articles on DEV Community by Ishan Mishra (@ishanextreme).</description>
    <link>https://dev.to/ishanextreme</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F614462%2F95685ea6-9db8-4ee0-b45d-f2d732f62e2b.jpg</url>
      <title>DEV Community: Ishan Mishra</title>
      <link>https://dev.to/ishanextreme</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ishanextreme"/>
    <language>en</language>
    <item>
      <title>❌ A2A "vs" MCP | ✅ A2A "and" MCP - Tutorial with Demo Included!!!</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Fri, 16 May 2025 04:53:23 +0000</pubDate>
      <link>https://dev.to/ishanextreme/a2a-vs-mcp-a2a-and-mcp-tutorial-with-demo-included-4i4c</link>
      <guid>https://dev.to/ishanextreme/a2a-vs-mcp-a2a-and-mcp-tutorial-with-demo-included-4i4c</guid>
      <description>&lt;p&gt;Hello Readers!!&lt;/p&gt;

&lt;p&gt;[Code &lt;a href="https://github.com/ishanExtreme/a2a_mcp-example" rel="noopener noreferrer"&gt;github link&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;You must have heard about &lt;strong&gt;MCP&lt;/strong&gt;, an emerging protocol: "&lt;em&gt;Razorpay's MCP server is out&lt;/em&gt;", "&lt;em&gt;Stripe's MCP server is out&lt;/em&gt;"... But have you heard about &lt;strong&gt;A2A&lt;/strong&gt;, a protocol designed by Google engineers? Together with MCP, these two protocols can help you build complex applications.&lt;/p&gt;

&lt;p&gt;Let me walk you through both of these protocols, their objectives, and when to use each!&lt;/p&gt;

&lt;p&gt;Let's start with MCP. What is MCP, in very simple terms? [&lt;a href="https://modelcontextprotocol.io/introduction" rel="noopener noreferrer"&gt;docs&lt;/a&gt;]&lt;br&gt;
It is the Model Context &lt;strong&gt;Protocol&lt;/strong&gt;, where &lt;em&gt;protocol&lt;/em&gt; means a set of predefined rules that a server follows to communicate with a client. In the context of LLMs, this means that if I build a server with any framework (Django, Node.js, FastAPI...) and it follows the rules laid out by the MCP specification, then I can connect that server to any supported LLM client. The LLM, when required, can fetch information from my server's database or use any tool defined in my server's routes.&lt;/p&gt;

&lt;p&gt;Let's take a simple example to make things clearer [&lt;strong&gt;see the &lt;a href="https://youtu.be/nSjj1ZaNP2c" rel="noopener noreferrer"&gt;YouTube video&lt;/a&gt; for an illustration&lt;/strong&gt;]:&lt;br&gt;
I want to personalize an LLM for myself, which requires the LLM to have relevant context about me when needed. So I define some routes on a server, such as &lt;em&gt;/my_location&lt;/em&gt;, &lt;em&gt;/my_profile&lt;/em&gt;, &lt;em&gt;/my_fav_movies&lt;/em&gt;, and a tool &lt;em&gt;/internet_search&lt;/em&gt;. Because this server follows MCP, I can connect it seamlessly to any LLM platform that supports MCP (like Claude Desktop, LangChain, and perhaps even ChatGPT in the near future). Now, if I ask "what movies should I watch today?", the LLM can fetch the context of movies I like and suggest similar ones. Or I can ask for the best non-vegan restaurant near me, and by combining the search tool with my fetched location it can suggest some restaurants.&lt;/p&gt;
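&lt;p&gt;To make the idea concrete, here is a minimal, illustrative Python sketch of such a tool/context registry. This is &lt;em&gt;not&lt;/em&gt; the real MCP SDK; the route names (&lt;em&gt;my_location&lt;/em&gt;, &lt;em&gt;my_fav_movies&lt;/em&gt;) and return values are just the hypothetical examples from above:&lt;/p&gt;

```python
# Illustrative sketch only: an MCP-style server exposes named tools/resources
# with descriptions, and the client dispatches the LLM's tool calls to them.
TOOLS = {}

def tool(name, description):
    """Register a function so the client can advertise it to the LLM."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@tool("my_fav_movies", "Return the user's favourite movies")
def my_fav_movies():
    return ["Interstellar", "Inception"]  # hypothetical personal context

@tool("my_location", "Return the user's current city")
def my_location():
    return "Bengaluru"  # hypothetical personal context

def call_tool(name, **kwargs):
    """What the client does when the LLM emits a tool call."""
    return TOOLS[name]["fn"](**kwargs)

print(call_tool("my_fav_movies"))  # -> ['Interstellar', 'Inception']
```

&lt;p&gt;The key point is that the wire format for listing and calling these tools is standardized by MCP, so any MCP-capable client can discover and use them without custom glue code.&lt;/p&gt;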

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: &lt;em&gt;I keep saying that an MCP server connects to a supported client (not to a supported LLM). That is because it makes no sense to say &lt;del&gt;Llama-4 supports MCP and Llama-3 doesn't&lt;/del&gt;; to the LLM it is just a tool call internally. It is the client's responsibility to communicate with the server and hand the LLM its tool calls in the required format.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to look at the A2A protocol [&lt;a href="https://google.github.io/A2A/topics/what-is-a2a/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;]&lt;br&gt;
Similar to MCP, A2A is also a set of rules that, when followed, lets a server communicate with any A2A client. By definition: A2A standardizes how independent, often opaque, &lt;em&gt;&lt;strong&gt;AI agents communicate and collaborate with each other as peers&lt;/strong&gt;&lt;/em&gt;. In simple terms, where MCP lets an LLM client connect to tools and data sources, A2A enables back-and-forth communication between a host (the client) and different A2A servers (which are themselves LLM agents) via a task object. This task object carries a state such as completed, input_required, or errored.&lt;/p&gt;
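&lt;p&gt;A hedged sketch of that task lifecycle, in plain Python. The field and state names follow the description above in simplified form; this is not the exact A2A schema:&lt;/p&gt;

```python
from dataclasses import dataclass, field
import uuid

# Simplified model of an A2A task: a unique id, the instruction, and a
# status that moves through the states mentioned in the text.
@dataclass
class Task:
    instruction: str
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    status: str = "submitted"  # -> working -> completed / input_required / errored

    def transition(self, new_status: str) -> "Task":
        allowed = {"submitted", "working", "completed", "input_required", "errored"}
        if new_status not in allowed:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status
        return self

task = Task("delete readme.txt located in Desktop")
task.transition("working").transition("completed")
print(task.status)  # -> completed
```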

&lt;p&gt;Let's take a simple example involving both A2A and MCP [&lt;strong&gt;see the &lt;a href="https://youtu.be/nSjj1ZaNP2c" rel="noopener noreferrer"&gt;YouTube video&lt;/a&gt; for an illustration&lt;/strong&gt;]:&lt;br&gt;
I want to build an LLM application that can run command-line instructions regardless of the operating system, i.e. on Linux, Mac, or Windows. First, there is a client that interacts with the user as well as with other A2A servers, which are themselves LLM agents. Our client is connected to three A2A servers: a Mac agent server, a Linux agent server, and a Windows agent server, all three following the A2A protocol.&lt;br&gt;
When the user sends a command like "delete readme.txt located in Desktop on my Windows system", the client first checks the agent cards; having found a relevant agent, it creates a task with a unique id and sends the instruction, in this case to the Windows agent server. That Windows agent server is in turn connected to MCP servers that provide it with up-to-date command-line instructions for Windows and can execute the command in CMD or PowerShell. Once the task is done, the server responds with a "completed" status and the host marks the task as completed.&lt;br&gt;
Now imagine another scenario where the user asks "please delete a file for me on my Mac system". The host creates a task and sends the instruction to the Mac agent server as before, but this time the Mac agent raises an "input_required" status, since it doesn't know which file to delete. This goes back to the host, the host asks the user, and once the user answers, the instruction returns to the Mac agent server. This time the agent fetches context and calls its tools, then reports a task status of completed.&lt;/p&gt;
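&lt;p&gt;The host/agent round trip above can be sketched as a toy loop. Everything here is an assumed shape for illustration (the agent card contents, the &lt;em&gt;mac_agent&lt;/em&gt; handler, the &lt;em&gt;ask_user&lt;/em&gt; callback), not the real A2A SDK:&lt;/p&gt;

```python
# Toy host loop: route to an agent via its card, and if the agent answers
# "input_required", relay its question to the user and retry with the answer.
def mac_agent(instruction, extra_input=None):
    if extra_input is None:
        return {"status": "input_required", "question": "Which file should I delete?"}
    return {"status": "completed", "result": f"deleted {extra_input}"}

AGENT_CARDS = {"mac": {"skills": ["mac shell commands"], "handler": mac_agent}}

def host(user_message, ask_user):
    agent = AGENT_CARDS["mac"]["handler"]      # matched via the agent card
    reply = agent(user_message)
    if reply["status"] == "input_required":    # bounce the question to the user
        answer = ask_user(reply["question"])
        reply = agent(user_message, extra_input=answer)
    return reply

out = host("please delete a file", ask_user=lambda q: "notes.txt")
print(out)  # -> {'status': 'completed', 'result': 'deleted notes.txt'}
```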

&lt;p&gt;A more detailed explanation, with illustrations and a code walkthrough, can be found in this &lt;a href="https://youtu.be/nSjj1ZaNP2c" rel="noopener noreferrer"&gt;YouTube video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I hope I was able to make it clear that it's not &lt;del&gt;A2A vs MCP&lt;/del&gt; but A2A and MCP, working together to build complex applications.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Set Up Your Own Free Coding Copilot with Continue, Deepseek &amp; Open Models – No Limits!</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sat, 15 Feb 2025 15:15:08 +0000</pubDate>
      <link>https://dev.to/ishanextreme/set-up-your-own-free-coding-copilot-with-continue-deepseek-open-models-no-limits-1p7j</link>
      <guid>https://dev.to/ishanextreme/set-up-your-own-free-coding-copilot-with-continue-deepseek-open-models-no-limits-1p7j</guid>
      <description>&lt;p&gt;&lt;a href="https://youtu.be/Q-4n9CCYgr0" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intended Audience 👤
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Every software engineer!!&lt;/li&gt;
&lt;li&gt;Unlimited autocomplete and chat without paying anything extra!!&lt;/li&gt;
&lt;li&gt;VRAM (or RAM on macOS) required: 8GB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NOTE: Click on the video &lt;a href="https://youtu.be/Q-4n9CCYgr0" rel="noopener noreferrer"&gt;link&lt;/a&gt; to see a comparison between Mac and Linux, as well as hosted DeepSeek vs. Qwen2.5-Coder&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro 👋
&lt;/h2&gt;

&lt;p&gt;We are going to set up our own custom coding copilot on both &lt;strong&gt;Linux&lt;/strong&gt; and &lt;strong&gt;Mac&lt;/strong&gt; (tested on M1 Pro), which will be &lt;strong&gt;&lt;u&gt;free and unlimited&lt;/u&gt;&lt;/strong&gt; and hosted on your local machine, so no sensitive data ever leaves it. We will use an open-source tool called &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;continue&lt;/a&gt; for our setup and &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;ollama&lt;/a&gt; to run the model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing continue 📥
&lt;/h2&gt;

&lt;p&gt;Continue comes with VS Code and JetBrains extensions; simply search for "continue" in the extensions bar and install it. &lt;strong&gt;(NOTE: disable GitHub Copilot or any other coding assistant.)&lt;/strong&gt; Below is a VS Code example of installing Continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feojv6nq0ogygoc30kxux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feojv6nq0ogygoc30kxux.png" alt="Image description" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing ollama 📥
&lt;/h2&gt;

&lt;p&gt;Ollama is also very simple to install: just visit the &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;download&lt;/a&gt; section, select your OS, and run the installer. It is available for Linux and macOS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up continue 🛠️
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rzw46nsdqpd4xxbceb0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rzw46nsdqpd4xxbceb0.png" alt="Image description" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the image above, click on the Continue icon (if it is not visible, reload VS Code), then choose "configure autocomplete options" from the drop-down; this opens Continue's config file. Replace its contents with the following snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4o"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GPT-4o"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"systemMessage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You are an expert software developer. You give helpful and concise responses."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4o-mini"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GPT-4o Mini"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"systemMessage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You are an expert software developer. You give helpful and concise responses."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"qwen2.5-coder:latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Qwen2.5 coder:7B"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"systemMessage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You are an expert software developer. You give helpful and concise responses."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ollama"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tabAutocompleteModel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"qwen2.5-coder:latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Qwen 2.5 Coder 7b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ollama"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"customCommands"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{{{ input }}}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Write unit tests for highlighted code"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"contextProviders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"diff"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"terminal"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"problems"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"folder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"codebase"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"web"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"params"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"n"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"file"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"currentFile"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"open"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"search"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"url"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"clipboard"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"slashCommands"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"edit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Edit selected code"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"comment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Write comments for the selected code"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"share"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Export the current chat session to markdown"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cmd"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generate a shell command"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generate a git commit message"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds Qwen2.5-Coder as both the chat and the autocomplete model; there is also commented-out code for adding OpenAI's GPT models. To use different models for chat and autocomplete, from different providers, follow these docs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.continue.dev/chat/model-setup" rel="noopener noreferrer"&gt;chat model-setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.continue.dev/autocomplete/model-setup" rel="noopener noreferrer"&gt;autocomplete model-setup&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Running Qwen2.5 coder(7B) 🤖
&lt;/h2&gt;

&lt;p&gt;After installing Ollama, open your terminal, run &lt;code&gt;ollama pull qwen2.5-coder&lt;/code&gt;, and wait for the download to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chat and Edit features ✨
&lt;/h2&gt;

&lt;p&gt;Open VS Code and press &lt;code&gt;ctrl+L&lt;/code&gt; on Linux or &lt;code&gt;cmd+shift+L&lt;/code&gt; on Mac. This opens the Continue window; drag it to the right and select the Qwen model. The first request may take longer since the model is being loaded into memory; after that, you can ask your queries. Let me walk you through some interesting features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;@docs&lt;/code&gt; in query box to reference the already provided docs, or you can add a new one as well.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;@web&lt;/code&gt; to search and reference the web (this doesn't work very well; consider setting up Google search instead)&lt;/li&gt;
&lt;li&gt;Other context providers: &lt;a href="https://docs.continue.dev/customize/context-providers" rel="noopener noreferrer"&gt;docs&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;ctrl+I&lt;/code&gt; to open the selected code in &lt;a href="https://docs.continue.dev/edit/how-to-use-it" rel="noopener noreferrer"&gt;edit mode&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Autocomplete 🚀
&lt;/h2&gt;

&lt;p&gt;Go to the Continue icon at the bottom right and enable the autocomplete option. Now simply move through some code or write a comment, and autocomplete suggestions should start to appear; press &lt;code&gt;tab&lt;/code&gt; to accept the whole suggestion or &lt;code&gt;ctrl + -&amp;gt;&lt;/code&gt; to accept it line by line.&lt;br&gt;
On my system with an M1 Pro chip, qwen2.5-coder:7b was slow and not very usable for autocomplete, though it was fine for chat. On Linux with an RTX 4060, both features were fast and usable. See the &lt;a href="https://youtu.be/Q-4n9CCYgr0" rel="noopener noreferrer"&gt;video&lt;/a&gt; for a comparison.&lt;/p&gt;

&lt;p&gt;And done 🥳🥳!! You now have your unlimited, free coding copilot running on your local system!! 🥂🥂&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>continue</category>
      <category>deepseek</category>
      <category>ai</category>
    </item>
    <item>
      <title>Supercharging Deepseek-R1 with Ray + vLLM: A Distributed System Approach</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sun, 02 Feb 2025 19:18:50 +0000</pubDate>
      <link>https://dev.to/ishanextreme/supercharging-deepseek-r1-with-ray-vllm-a-distributed-system-approach-40d4</link>
      <guid>https://dev.to/ishanextreme/supercharging-deepseek-r1-with-ray-vllm-a-distributed-system-approach-40d4</guid>
      <description>&lt;p&gt;&lt;a href="https://youtu.be/WaqXThCKbUA" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intended Audience 👤
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Everyone who is curious and ready to explore extra links, OR&lt;/li&gt;
&lt;li&gt;Those familiar with Ray&lt;/li&gt;
&lt;li&gt;Those familiar with vLLM&lt;/li&gt;
&lt;li&gt;Those familiar with Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Intro 👋
&lt;/h2&gt;

&lt;p&gt;We are going to explore how we can run a 32B DeepSeek-R1 model quantized to 4 bits (&lt;a href="https://huggingface.co/Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ" rel="noopener noreferrer"&gt;model link&lt;/a&gt;). We will be using 2 Tesla T4 GPUs, each with 16GB of VRAM, and Azure for our Kubernetes setup and VMs, but the same setup can be done on any platform, or locally as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up kubernetes ☸️
&lt;/h2&gt;

&lt;p&gt;Our Kubernetes cluster will have 1 CPU node and 2 GPU nodes. Let's start by creating a resource group in Azure; once that's done, we can create our cluster with the following command (change the name, resource group and VM sizes accordingly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks create --resource-group rayBlog \  
    --name rayBlogCluster \  
    --node-count 1 \  
    --enable-managed-identity \  
    --node-vm-size Standard_D8_v3 \  
    --generate-ssh-keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I am using the &lt;code&gt;Standard_D8_v3&lt;/code&gt; VM; it has 8 vCPUs and 32GB of RAM. After the cluster creation is done, let's add two more GPU nodes using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks nodepool add \  
    --resource-group rayBlog \  
    --cluster-name rayBlogCluster \  
    --name gpunodepool \  
    --node-count 2 \  
    --node-vm-size Standard_NC4as_T4_v3 \  
    --labels node-type=gpu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have chosen the &lt;code&gt;Standard_NC4as_T4_v3&lt;/code&gt; VM for the GPU nodes and kept the count at 2, so in total we will have 32GB of VRAM (16+16). Let's now add the Kubernetes config to our system: &lt;code&gt;az aks get-credentials --resource-group rayBlog --name rayBlogCluster&lt;/code&gt;.&lt;br&gt;
We can now use k9s&lt;a href="https://k9scli.io/" rel="noopener noreferrer"&gt;(want to explore k9s?)&lt;/a&gt; to view our nodes and check if everything is configured correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq57k7putl7bmupb47l21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq57k7putl7bmupb47l21.png" alt="k9s node description" width="800" height="460"&gt;&lt;/a&gt;&lt;br&gt;
As shown in the image above, GPU resources are not yet visible on the GPU nodes; this is because we have to install the NVIDIA device plugin. Let's do that using kubectl&lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;(explore!)&lt;/a&gt;:&lt;br&gt;
&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.17.0/deployments/static/nvidia-device-plugin.yml&lt;/code&gt;&lt;br&gt;
Now lets check again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhje04c36mmfup08rlt5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhje04c36mmfup08rlt5z.png" alt="k9s node description gpu available" width="800" height="460"&gt;&lt;/a&gt;&lt;br&gt;
Great! But before creating our Ray cluster we still have one step to do: apply taints to the GPU nodes so that their resources are not consumed by other workloads: &lt;code&gt;kubectl taint nodes &amp;lt;gpu-node-1&amp;gt; gpu=true:NoSchedule&lt;/code&gt;, and the same for the second GPU node.&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating ray cluster 👨‍👨‍👦‍👦
&lt;/h2&gt;

&lt;p&gt;We are going to use the kuberay operator&lt;a href="https://docs.ray.io/en/latest/cluster/kubernetes/index.html" rel="noopener noreferrer"&gt;(🤔)&lt;/a&gt; and the kuberay apiserver&lt;a href="https://ray-project.github.io/kuberay/components/apiserver/" rel="noopener noreferrer"&gt;(❓)&lt;/a&gt;. The kuberay apiserver lets us create the Ray cluster through a REST API instead of raw Kubernetes manifests, which is a convenience, so let's install both with Helm&lt;a href="http://helm.sh/" rel="noopener noreferrer"&gt;(what is helm?)&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add kuberay https://ray-project.github.io/kuberay-helm/

helm install kuberay-operator kuberay/kuberay-operator --version 1.2.2

helm install kuberay-apiserver kuberay/kuberay-apiserver --version 1.2.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's port-forward our kuberay apiserver using this command: &lt;code&gt;kubectl port-forward &amp;lt;api server pod name&amp;gt; 8888:8888&lt;/code&gt;. Now let's create a common namespace where the Ray cluster related resources will reside: &lt;code&gt;kubectl create namespace ray-blog&lt;/code&gt;. Finally, we are ready to create our cluster!&lt;br&gt;
First we create the compute templates that specify the resources for the head and worker groups.&lt;br&gt;
Send a &lt;strong&gt;POST&lt;/strong&gt; request with the payload below to &lt;code&gt;http://localhost:8888/apis/v1/namespaces/ray-blog/compute_templates&lt;/code&gt; &lt;br&gt;
For the head:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name": "ray-head-cm",
    "namespace": "ray-blog",
    "cpu": 5,
    "memory": 20
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For worker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name": "ray-worker-cm",
    "namespace": "ray-blog",
    "cpu": 3,
    "memory": 20,
    "gpu": 1,
    "tolerations": [
    {
      "key": "gpu",
      "operator": "Equal",
      "value": "true",
      "effect": "NoSchedule"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
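&lt;p&gt;The two compute-template requests above can also be scripted. A minimal sketch (not from the article, stdlib only), assuming the kuberay apiserver is port-forwarded on localhost:8888 as described:&lt;/p&gt;

```python
import json
import urllib.request

# kuberay apiserver endpoint, assuming the port-forward above is running
API_URL = "http://localhost:8888/apis/v1/namespaces/ray-blog/compute_templates"

head_template = {"name": "ray-head-cm", "namespace": "ray-blog", "cpu": 5, "memory": 20}

worker_template = {
    "name": "ray-worker-cm",
    "namespace": "ray-blog",
    "cpu": 3,
    "memory": 20,
    "gpu": 1,
    # matches the taint we applied to the GPU nodes earlier
    "tolerations": [
        {"key": "gpu", "operator": "Equal", "value": "true", "effect": "NoSchedule"}
    ],
}

def create_template(template):
    # POST one compute template to the kuberay apiserver
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(template).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# once the port-forward is up:
# create_template(head_template); create_template(worker_template)
```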



&lt;p&gt;&lt;strong&gt;NOTE: We have added tolerations to our worker spec since we tainted our GPU nodes earlier.&lt;/strong&gt;&lt;br&gt;
Now let's create the Ray cluster: send a &lt;strong&gt;POST&lt;/strong&gt; request with the payload below to &lt;code&gt;http://localhost:8888/apis/v1/namespaces/ray-blog/clusters&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "name":"ray-vllm-cluster",
   "namespace":"ray-blog",
   "user":"ishan",
   "version":"v1",
   "clusterSpec":{
      "headGroupSpec":{
         "computeTemplate":"ray-head-cm",
         "rayStartParams":{
            "dashboard-host":"0.0.0.0",
            "num-cpus":"0",
            "metrics-export-port":"8080"
         },
         "image":"ishanextreme74/vllm-0.6.5-ray-2.40.0.22541c-py310-cu121-serve:latest",
         "imagePullPolicy":"Always",
         "serviceType":"ClusterIP"
      },
      "workerGroupSpec":[
         {
            "groupName":"ray-vllm-worker-group",
            "computeTemplate":"ray-worker-cm",
            "replicas":2,
            "minReplicas":2,
            "maxReplicas":2,
            "rayStartParams":{
               "node-ip-address":"$MY_POD_IP"
            },
            "image":"ishanextreme74/vllm-0.6.5-ray-2.40.0.22541c-py310-cu121-serve:latest",
            "imagePullPolicy":"Always",
            "environment":{
               "values":{
                  "HUGGING_FACE_HUB_TOKEN":"&amp;lt;your_token&amp;gt;"
               }
            }
         }
      ]
   },
   "annotations":{
      "ray.io/enable-serve-service":"true"
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Things to understand here:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We passed the compute templates that we created above&lt;/li&gt;
&lt;li&gt;The Docker image &lt;code&gt;ishanextreme74/vllm-0.6.5-ray-2.40.0.22541c-py310-cu121-serve:latest&lt;/code&gt; sets up Ray and vLLM on both the head and the workers; refer to the &lt;a href="https://github.com/ishanExtreme/ray-serve-vllm" rel="noopener noreferrer"&gt;code repo&lt;/a&gt; for a more detailed understanding. The code is an update of the vLLM sample already present in the Ray examples; I have added a few params and changed the vLLM version and code to support it&lt;/li&gt;
&lt;li&gt;Replicas are set to 2 since we are going to shard our model between two workers (1 GPU each)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;HUGGING_FACE_HUB_TOKEN&lt;/code&gt; is required to pull the model from Hugging Face; create one and pass it here&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"ray.io/enable-serve-service":"true"&lt;/code&gt; this exposes 8000 port where our fast-api application will be running&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploy ray serve application 🚀
&lt;/h2&gt;

&lt;p&gt;Once our Ray cluster is ready (use k9s to see the status) we can create a Ray Serve application which will contain our FastAPI server for inference. First, let's port-forward port &lt;code&gt;8265&lt;/code&gt; of the head-svc, where Ray Serve is running; once done, send a &lt;strong&gt;PUT&lt;/strong&gt; request with the payload below to &lt;code&gt;http://localhost:8265/api/serve/applications/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "applications":[
     {
         "import_path":"serve:model",
         "name":"deepseek-r1",
         "route_prefix":"/",
         "autoscaling_config":{
            "min_replicas":1,
            "initial_replicas":1,
            "max_replicas":1
         },
         "deployments":[
            {
               "name":"VLLMDeployment",
               "num_replicas":1,
               "ray_actor_options":{

               }
            }
         ],
         "runtime_env":{
            "working_dir":"file:///home/ray/serve.zip",
            "env_vars":{
               "MODEL_ID":"Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ",
               "TENSOR_PARALLELISM":"1",
               "PIPELINE_PARALLELISM":"2",
               "MODEL_NAME":"deepseek_r1"
            }
         }
      }
   ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Things to understand here:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ray_actor_options&lt;/code&gt; is empty because whenever we pass tensor-parallelism or pipeline-parallelism &amp;gt; 1, it should either be empty or have num_gpus set to zero; refer to this &lt;a href="https://github.com/ray-project/kuberay/issues/2354" rel="noopener noreferrer"&gt;issue&lt;/a&gt; and this &lt;a href="https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/distributed.py" rel="noopener noreferrer"&gt;sample&lt;/a&gt; for further understanding.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MODEL_ID&lt;/code&gt; is the Hugging Face model id, i.e. which model to pull.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PIPELINE_PARALLELISM&lt;/code&gt; is set to 2, since we want to shard our model across the two worker nodes.
After sending the request we can visit localhost:8265; under "Serve" our application will be deploying, which usually takes some time depending on the system.&lt;/li&gt;
&lt;/ul&gt;
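&lt;p&gt;The PUT request above can be scripted too. A minimal sketch (not from the article, stdlib only), assuming the head-svc's 8265 port is port-forwarded locally:&lt;/p&gt;

```python
import json
import urllib.request

# Ray Serve's REST API, reachable via the 8265 port-forward
SERVE_API = "http://localhost:8265/api/serve/applications/"

# same declarative config as the payload shown above
app_config = {
    "applications": [
        {
            "import_path": "serve:model",
            "name": "deepseek-r1",
            "route_prefix": "/",
            "autoscaling_config": {
                "min_replicas": 1,
                "initial_replicas": 1,
                "max_replicas": 1,
            },
            "deployments": [
                {"name": "VLLMDeployment", "num_replicas": 1, "ray_actor_options": {}}
            ],
            "runtime_env": {
                "working_dir": "file:///home/ray/serve.zip",
                "env_vars": {
                    "MODEL_ID": "Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ",
                    "TENSOR_PARALLELISM": "1",
                    "PIPELINE_PARALLELISM": "2",
                    "MODEL_NAME": "deepseek_r1",
                },
            },
        }
    ]
}

def deploy():
    # Ray Serve expects PUT for declarative application deploys
    req = urllib.request.Request(
        SERVE_API,
        data=json.dumps(app_config).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# deploy()  # once the 8265 port-forward is running
```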

&lt;h2&gt;
  
  
  Inference 🎯
&lt;/h2&gt;

&lt;p&gt;After the application reaches the "healthy" state we can finally run inference against our model. To do so, first port-forward port &lt;code&gt;8000&lt;/code&gt; from the same head-svc that we port-forwarded Ray Serve on, and then send a &lt;strong&gt;POST&lt;/strong&gt; request with the payload below to &lt;code&gt;http://localhost:8000/v1/chat/completions&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "model": "deepseek_r1",
    "messages": [
        {
            "role": "user",
            "content": "think and tell which shape has 6 sides?"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE: the model &lt;code&gt;deepseek_r1&lt;/code&gt; is the same MODEL_NAME that we passed to Ray Serve&lt;/strong&gt;&lt;/p&gt;
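&lt;p&gt;For completeness, here is a small Python sketch (not from the article, stdlib only) of the same call, assuming the 8000 port-forward is running; vLLM exposes an OpenAI-compatible chat completions endpoint:&lt;/p&gt;

```python
import json
import urllib.request

def build_payload(prompt, model="deepseek_r1"):
    # "model" must match the MODEL_NAME we passed to Ray Serve
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, base_url="http://localhost:8000"):
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response: take the first choice's message text
    return body["choices"][0]["message"]["content"]

# chat("think and tell which shape has 6 sides?")
```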

&lt;p&gt;And done 🥳🥳!!! Congrats on running a 32B deepseek-r1 model 🥂🥂&lt;/p&gt;

</description>
      <category>deepseek</category>
      <category>ray</category>
      <category>distributedsystems</category>
      <category>ai</category>
    </item>
    <item>
      <title>What is Apache arrow and when to use it?</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sun, 21 Jan 2024 19:32:47 +0000</pubDate>
      <link>https://dev.to/ishanextreme/what-is-apache-arrow-and-when-to-use-it-2i2j</link>
      <guid>https://dev.to/ishanextreme/what-is-apache-arrow-and-when-to-use-it-2i2j</guid>
      <description>&lt;h2&gt;
  
  
  What is Apache arrow(Bookish definition)? 🤔
&lt;/h2&gt;

&lt;p&gt;Apache Arrow defines a language-independent, columnar, in-memory data format designed to exploit modern hardware like CPUs and GPUs. The Arrow memory format also supports zero-copy reads for lightning-fast data access without &lt;strong&gt;serialization&lt;/strong&gt; overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is it a database? 💾
&lt;/h2&gt;

&lt;p&gt;No, Apache Arrow is not a database. Instead, it's a cross-language development platform for in-memory data. Arrow's primary role is to act as a "universal data language" allowing different systems and programming languages to work with large datasets more efficiently and quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Then what is Apache arrow...
&lt;/h2&gt;

&lt;p&gt;My use case for Apache Arrow arrived when I wanted to create a data streaming pipeline over a very large dataset with almost zero latency for ML-related processing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why not SQL databases (PG, MySQL etc.)&lt;/strong&gt;: My primary use case is very low streaming latency, and the transactional nature and disk-based storage of many SQL databases can introduce significant latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why not a database plus Kafka or Redis streams&lt;/strong&gt;: This setup is good for production applications that need distributed-system scaling. But for a data streaming pipeline consumed by a processing service for data analysis, setting up Kafka and configuring it for the required throughput becomes complex, or overkill. Plus, I didn't want data transferred over the network between my compute nodes, so I would eventually have had to set up Kafka on each system, giving me no advantage from its distributed architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why Apache Arrow&lt;/strong&gt;: Apache Arrow stores data in memory and loads it lazily when iterated over, keeping latency very small, and its table format also lets me apply simple filters. Plus, it can interact with GPU memory for data processing. And it doesn't have any serialization/deserialization overhead, unlike Redis, which stores data in memory as bytes and, for my use case, would have required serialization and deserialization on every read and write.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion 👋
&lt;/h2&gt;

&lt;p&gt;In conclusion, Apache Arrow emerges as an ideal solution for specific use cases where low latency and efficient in-memory data processing are paramount. It's particularly well suited for scenarios where the goal is to create a high-performance data streaming pipeline for machine learning and data analysis purposes.&lt;/p&gt;

</description>
      <category>database</category>
      <category>datascience</category>
      <category>programming</category>
      <category>python</category>
    </item>
    <item>
      <title>Weekly Awesome JS KT Part-2: Promise.all() vs multiple awaits</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sun, 19 Mar 2023 14:43:02 +0000</pubDate>
      <link>https://dev.to/ishanextreme/weekly-awesome-js-kt-part-2-promiseall-vs-multiple-awaits-mo4</link>
      <guid>https://dev.to/ishanextreme/weekly-awesome-js-kt-part-2-promiseall-vs-multiple-awaits-mo4</guid>
      <description>&lt;p&gt;Imagine you have a case where you want to retrieve some information about a list of users, and for each user you have to hit the endpoint and await a promise, how will you do it? I came across a similar problem(I am giving a simplified version of the problem I came across not the full code), and the approach I took was something like this:&lt;/p&gt;

&lt;p&gt;First I created an async function which handles all the logic of updating a particular user given its id:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const updateUser = async ({userId}:{userId:string})=&amp;gt;{

await updateUserDetails(userId)

}

userIds = [....]

// loop for each userId and update it
for(const id of userIds)
{
 await updateUser({userId: id})
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But there is a problem with this code, let me explain with a more simplified example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const test1 = async () =&amp;gt; {
  const delay1 = await Promise.delay(600); 
  const delay2 = await Promise.delay(600); 
  const delay3 = await Promise.delay(600); 
};

const test2 = async () =&amp;gt; {
  await Promise.all([
  Promise.delay(600), 
  Promise.delay(600), 
  Promise.delay(600)]);
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here &lt;code&gt;test1&lt;/code&gt; takes almost ~1800ms to run, since it awaits each promise sequentially, whereas &lt;code&gt;test2&lt;/code&gt; takes ~600ms, since it runs all the promises in parallel.&lt;/p&gt;

&lt;p&gt;So a faster version of the code snippet one would be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const updateUser = async ({userId}:{userId:string})=&amp;gt;{

await updateUserDetails(userId)

}

userIds = [....]

await Promise.all(userIds.map((userId)=&amp;gt;updateUser({userId})))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Keep Exploring 🛥️&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/45285129/any-difference-between-await-promise-all-and-multiple-await" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/45285129/any-difference-between-await-promise-all-and-multiple-await&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Application to make short gifs for your videos</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sat, 18 Mar 2023 11:57:09 +0000</pubDate>
      <link>https://dev.to/ishanextreme/application-to-make-short-gifs-for-your-videos-4hek</link>
      <guid>https://dev.to/ishanextreme/application-to-make-short-gifs-for-your-videos-4hek</guid>
      <description>&lt;h2&gt;
  
  
  💡 Introduction
&lt;/h2&gt;

&lt;p&gt;This article is related to my &lt;a href="https://dev.to/ishanextreme/from-idea-to-production-in-just-four-hours-n54"&gt;previous&lt;/a&gt; article; in this one I will discuss and share the code to build a GIF maker of your own. If you want to know the idea behind this project, do read the previous article.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤔 Why is it useful or what's the importance of this project?
&lt;/h2&gt;

&lt;p&gt;Have you tried converting a video to a GIF using an online tool? Even a 5-second GIF can go up to tens of MBs. Yet GIFs can improve the overall look of an application by replacing static images with a moving short glimpse of the video or reel.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏛️ Architecture
&lt;/h2&gt;

&lt;p&gt;Our current architecture for the GIF maker is as follows: we have a consumer (RabbitMQ) written in Python, and when a video is uploaded and processed successfully, an event is fired with the params "mp4Url" and "s3Destination". The script then downloads the video from the given URL, converts it to a compressed GIF and uploads it to the S3 destination.&lt;/p&gt;

&lt;h2&gt;
  
  
  🥸 Code and Explanation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from moviepy.editor import VideoFileClip
import os
import urllib.request
import uuid
import logging
import boto3
from pathlib import Path
from dotenv import dotenv_values

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
config = dotenv_values(".env")


logging.basicConfig(
    filename=str(Path(__file__).parents[2]) + "/logs/app.log",
    format="%(name)s - %(levelname)s - %(message)s",
)

mp4UrlPrefix = config["CLOUDFRONT_VIDEO_PREFIX"]


def getGifUrlFromMp4Driver(params):

    logging.info("Started getGifUrlFromMp4Driver")

    if "mp4Url" not in params:
        logging.error("mp4Url not in params")
        return {"gifUrl": None}
    if "s3Destination" not in params:
        logging.error("s3Destination not in params")
        return {"gifUrl": None}

    # get full mp4 url
    mp4Url = mp4UrlPrefix + "/" + params["mp4Url"]
    s3Destination = params["s3Destination"]

    # if tweaking params are not passed, then use the default 
    # value
    fps = params.get("fps", 20)
    fuzz = params.get("fuzz", 1)
    start = params.get("start", 5)
    duration = params.get("duration", 5)
    videoWidth = params.get("videoWidth", 144)
    videoHeight = params.get("videoHeight", 256)

    try:
        # initializing s3 session
        session = boto3.Session(
            aws_access_key_id=config["AWS_KEY"],
            aws_secret_access_key=config["AWS_SECRET"],
        )

        s3 = session.resource("s3")

        mp4_folder = ROOT_DIR + "/videos/"
        gif_folder = ROOT_DIR + "/gifs/"
        # creating a unique name for mp4 video download path 
        # and gif
        name = str(uuid.uuid4())

        # download mp4
        downloadedVideoPath = f"{mp4_folder}{name}.mp4"
        urllib.request.urlretrieve(mp4Url, downloadedVideoPath)

        # to reduce size of gif as well as to not take a long 
        # time we will try 3 times with reduced frame rates 
        # and increased fuzz to reduce size of gif
        counter = 0
        convertedGifPath = f"{gif_folder}{name}.gif"
        while True:
            counter += 1
            videoClip = VideoFileClip(downloadedVideoPath)

            # take a clip of video from x to y
            videoClip = videoClip.subclip(start, start + duration)
            # resizing video dimensions to desired width and 
            # height, this also reduces gif size to choose it 
            # wisely
            videoClip = videoClip.resize((videoWidth, videoHeight))

            # setting video fps, this also reduces gif size
            videoClip = videoClip.set_fps(fps)
            videoClip.write_gif(
                filename=convertedGifPath,
                program="ImageMagick",
                opt="optimizeplus",
                tempfiles=True,
                verbose=False,
                fuzz=fuzz,
                logger=None,
            )

            # get size of converted gif
            file_size = os.path.getsize(convertedGifPath)

            # greater than 500Kb then reduce fps
            if file_size &amp;gt; 500000 and counter &amp;lt;= 3:
                if counter == 1:
                    fps = 15
                elif counter == 2:
                    fps = 10
                elif counter == 3:
                    fps = 5
                continue
            break

        # remove downloaded video from disk
        os.remove(downloadedVideoPath)
        destBucketName = config["AWS_BUCKET_IMAGE_NAME"]

        if s3Destination[-1] != "/":
            s3Destination += "/"

        gifPath = "gif" + ".gif"

        # upload gif to s3 bucket
        s3.Bucket(destBucketName).upload_file(
            convertedGifPath, s3Destination + gifPath
        )

        gifUrl = f"{s3Destination}{gifPath}"
        os.remove(convertedGifPath)

        # return back the uploaded gif url
        return {"gifUrl": gifUrl}

    except Exception as e:
        logging.error(f"Error in getGifUrlFromMp4Driver: {e}")
        os.remove(downloadedVideoPath)
        return {"gifUrl": None}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above code is executed when the consumer listening to the event is triggered, here's an example of consumer&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from dotenv import dotenv_values
import logging
from getGifUrlFromMp4 import getGifUrlFromMp4Driver
import requests

config = dotenv_values(".env")


def gifConsumer(ch, method, properties, body):
    try:
        params = json.loads(body)
        print(f" Received: {params}")

        res = getGifUrlFromMp4Driver(params)
        print(f" Gif created, url: {res}")

        if res["gifUrl"] is None:
            print("Gif url not found")
            ch.basic_ack(delivery_tag=method.delivery_tag)
            return

        res["id"] = res["gifUrl"].split("/")[1]
        mediaResponse = requests.post(
            &amp;lt;url&amp;gt;, data=res
        )

        print(f"Response from media endpoint status code: {mediaResponse.status_code}")
        print(f"Response from endpoint: {mediaResponse.json()}")

        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception as e:
        print("Error in gifConsumer: ", e)
        logging.error(f"Error in gifConsumer: {e}")
        ch.basic_ack(delivery_tag=method.delivery_tag)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This consumer calls &lt;code&gt;getGifUrlFromMp4Driver&lt;/code&gt; with mp4Url and s3Destination; the code is well explained in the comments, so do read them. After a successful or unsuccessful conversion of the GIF, control comes back to the consumer function. If there is a GIF URL, we make an API call to our endpoint, informing it of a successful conversion and asking it to save the GIF URL with the reel.&lt;/p&gt;

&lt;p&gt;Here's the gist code in case formatting is messed up here: &lt;a href="https://gist.github.com/ishanExtreme/e414c7a9d30c78c2a282312a5adc0d7a" rel="noopener noreferrer"&gt;gist&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep Exploring 🛥️
&lt;/h3&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>startup</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>From Ideation to Production In Just Four Hours</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Fri, 10 Mar 2023 06:03:28 +0000</pubDate>
      <link>https://dev.to/ishanextreme/from-idea-to-production-in-just-four-hours-n54</link>
      <guid>https://dev.to/ishanextreme/from-idea-to-production-in-just-four-hours-n54</guid>
      <description>&lt;h3&gt;
  
  
  Second Part: &lt;a href="https://dev.to/ishanextreme/application-to-make-short-gifs-for-your-videos-4hek"&gt;link&lt;/a&gt;. This part contains the code and explanation.
&lt;/h3&gt;

&lt;p&gt;During my internship with YellowClass, I got to develop an interesting project. Little did I know that it would end up feeling like a hackathon. In this post, I'll be sharing the journey I went through while developing it: from the initial architecture and the thought process behind it, to the challenges I faced and how I overcame them, I'll be detailing my experiences step by step. So buckle up and join me as I take you through the journey of developing this exciting project.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it started?
&lt;/h2&gt;

&lt;p&gt;It was a Friday evening around 7PM when our team manager brought up an interesting idea: the possibility of creating an Instagram-style auto-play section for the reels on our app's home page. Instead of displaying a static image, the reel would showcase a short snippet of the content, enticing the user to click and watch the full reel. This sparked a conversation between the app team and product managers as they brainstormed and debated the best approach for implementing this new feature. I was sitting and doing the Jiras assigned to me while listening to the whole conversation (not very focused on work that day 😅).&lt;/p&gt;

&lt;p&gt;As everyone discussed the possibility of creating an auto-play section for the reels on our app's home page, the team began brainstorming different ways to display short snippets of content. Someone suggested using GIFs, as it would require no app change, but there was a catch: they had previously tried GIFs and found the converted files too large for our needs. I began to explore different libraries in Python to see if I could find a way to create smaller, more compressed GIFs. After some experimentation, I discovered a library that allowed me to tweak certain parameters and achieve the desired result: it was &lt;em&gt;moviepy&lt;/em&gt;. My initial code was a simple snippet that would take a video path and create a GIF of the first 5 seconds of the video. I shared this with my manager and the engineering director, and to our delight, the snippet was able to generate GIFs under 1MB for many of our videos. While we still needed some tweaks to get under the 500KB limit, we finally had a viable solution. Best of all, since it was a script, we could simply give it the links to all of our reels and upload the resulting GIFs to our AWS S3 bucket without having to involve the content team, saving them a week of work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hackathon begins...
&lt;/h2&gt;

&lt;p&gt;With the decision made to use GIFs as our solution, it was time to create a roadmap for the project. It was already 8PM, and most of the office had emptied out for the day. We knew that we needed to act quickly to make progress. The project was divided into two distinct steps. The first step was to replace the existing static images for the reels with the newly generated GIFs. The second step was to automate the process of generating and storing GIFs for any new video that was uploaded to the app. This would be a more complex undertaking.&lt;/p&gt;

&lt;p&gt;With step one of the project in motion, our manager and engineering director began to write a query to get the reels and their uploaded paths in a proper CSV format. They also worked on changing the backend to accommodate the new GIF format. Meanwhile, I was tasked with writing a Python script that would read the CSV and generate a GIF for each video, then upload it to our S3 bucket. To ensure that the GIFs were compressed to the desired size, we added an additional step that would retry the process 3-4 times if a particular GIF was larger than 500KB. We also included a feature to reduce the frame rate until the file was within the desired size range. Additionally, we knew that we would need to make the script multithreaded in order to generate GIFs for the approximately 900 reels in a reasonable amount of time. This was a complex task, but we were determined to see it through to the end.&lt;/p&gt;

&lt;p&gt;After one hour of hard work, our script, CSV, and backend were finally ready to be put to the test. With a deep breath and a sense of excitement, we ran the script, watching as the timer ticked down to the finish line. Two hours was the estimated time to complete the task, and we eagerly watched the number of generated GIFs climb higher and higher on the large screen in front of us. As we waited for the script to finish, we took a break to order and enjoy a much-needed dinner, our eyes glued to the screen the entire time. The anticipation was almost too much to bear. But finally, the script came to a close, and we could see the results of our hard work displayed in front of us.&lt;br&gt;
With eager anticipation, I opened the app and navigated to the home page. But to my surprise and shock, none of the videos were displaying GIFs. My heart sank, and I was overcome with disappointment. Then my manager opened his phone, and to my relief, every video was displaying a GIF. I quickly realized we were in the middle of an A/B test, and that our hard work had indeed paid off. We tested everything thoroughly and found it all working perfectly. Finally, as the clock approached midnight, we wrapped up our hackathon. Now we could monitor the click rates on reels over the weekend and make informed decisions based on the data we collected.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SNpEbNHw9pY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This post is already running longer than I expected 😅. In part 2 of this post I will share the code snippets as well as the working process.&lt;/p&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>writing</category>
      <category>startup</category>
    </item>
    <item>
      <title>Weekly Awesome JS KT Part-1</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Thu, 09 Mar 2023 15:03:19 +0000</pubDate>
      <link>https://dev.to/ishanextreme/weekly-awesome-js-kt-part-1-4104</link>
      <guid>https://dev.to/ishanextreme/weekly-awesome-js-kt-part-1-4104</guid>
      <description>&lt;p&gt;1=&amp;gt; ⚡️ Spread Operator(...)&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let numbers = [1, 2, 3, 4];
let newArray = [...numbers, 5, 6];
console.log(newArray)
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;code&gt;Output: [1,2,3,4,5,6]&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;2=&amp;gt; ⚡️ Closures in javascript with real world example&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Closures: a closure is a function that retains access to its parent (enclosing) scope even after the parent function has returned. Example below:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function outerFunction(outerValue) {
  return function innerFunction(innerValue) {
    console.log(`Outer value: ${outerValue}, Inner value: ${innerValue}`);
  };
}
const innerFunc = outerFunction("Hello");
innerFunc("World");
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;code&gt;Output: "Outer value: Hello, Inner value: World"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here we can see that the parent function's variables are preserved and remain accessible to the inner function even though the parent function has already finished executing.&lt;/p&gt;

&lt;p&gt;Now, let's see a real world use case:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function createCounter() {
  let count = 0;

  function increment() {
    count++;
    console.log(`Count: ${count}`);
  }

  function decrement() {
    count--;
    console.log(`Count: ${count}`);
  }

  return {
    increment,
    decrement,
  };
}

const counter = createCounter();
counter.increment(); // Output: "Count: 1"
counter.increment(); // Output: "Count: 2"
counter.decrement(); // Output: "Count: 1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;In this example, the createCounter function returns an object with two methods, increment and decrement. The count variable is defined inside the createCounter function and is not accessible outside of it. The increment and decrement methods have access to the count variable due to the closure created by the createCounter function.&lt;/p&gt;

&lt;p&gt;This allows us to create an object with private variables and methods that are not accessible from outside the object, providing a way to encapsulate functionality and prevent outside code from interfering with the object's state.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;3=&amp;gt; ⚡️ Prototypes in javascript with example&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Prototypes: every JavaScript function has a &lt;code&gt;prototype&lt;/code&gt; object, and every object inherits properties and methods from its prototype. Example:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Person(name, age) {
  this.name = name;
  this.age = age;
}

Person.prototype.greet = function() {
  console.log(`Hello, my name is ${this.name}`);
};

const john = new Person("John", 30);
const jane = new Person("Jane", 25);

john.greet(); // Output: "Hello, my name is John"
jane.greet(); // Output: "Hello, my name is Jane"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;In this example, we define a Person constructor function that takes a name and an age parameter and sets them as properties on the newly created Person object using this. We then define a greet method on the Person.prototype object, which is shared by all Person objects.&lt;/p&gt;

&lt;p&gt;By using the prototype object to define methods on the Person class, we can create new Person objects that inherit these methods, without having to define them again for each new object. This makes our code more efficient and easier to maintain, as we can make changes to the Person class and have those changes reflected in all instances of the class.&lt;/p&gt;

&lt;p&gt;In addition, we can also use prototypes to add new methods and properties to existing objects, allowing us to extend their functionality dynamically.&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>ai</category>
      <category>gemini</category>
      <category>bolt</category>
      <category>networking</category>
    </item>
    <item>
      <title>Awesome &amp; Productive Terminal For MacOS</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sun, 15 Jan 2023 17:17:10 +0000</pubDate>
      <link>https://dev.to/ishanextreme/awesome-productive-terminal-for-macos-11ph</link>
      <guid>https://dev.to/ishanextreme/awesome-productive-terminal-for-macos-11ph</guid>
      <description>&lt;h2&gt;
  
  
  1. Intended audience.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This article is for every programmer who wants to make their day-to-day command-line experience cooler as well as more productive.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  2. Overview.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;As programmers, we use the terminal every day for git, navigating through directories, running servers, and more. Why not take some time out and make this basic tool more productive and cool? That's what we are going to do in this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Note: To save in nano, hit &lt;em&gt;control+o&lt;/em&gt; and then return; to exit, hit &lt;em&gt;control+x&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Step-1: Download iTerm2 and Oh My Zsh.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Download iTerm2 from its &lt;a href="https://iterm2.com" rel="noopener noreferrer"&gt;official website&lt;/a&gt; and install it. &lt;strong&gt;iTerm2 is a replacement for the default macOS Terminal.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;macOS now uses zsh by default, so we just need to install &lt;a href="https://ohmyz.sh/#install" rel="noopener noreferrer"&gt;oh-my-zsh&lt;/a&gt;, a framework for managing zsh configuration. Install it by entering the following command &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. Add VsCode Shortcut.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;nano ~/.zshrc&lt;/code&gt; to open and edit zsh configuration files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;To open vscode in PWD using "code ./"&lt;/strong&gt; add the following line in your .zshrc:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;# vscode&lt;br&gt;
code () { VSCODE_CWD="$PWD" open -n -b "com.microsoft.VSCode" --args $* ;}&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. Add theme
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Theme Credit: &lt;a href="https://github.com/stefanjudis/dotfiles/blob/primary/config/oh-my-zsh/stefanjudis.zsh-theme" rel="noopener noreferrer"&gt;https://github.com/stefanjudis/dotfiles/blob/primary/config/oh-my-zsh/stefanjudis.zsh-theme&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This theme will show a random emoji at the prompt and will also display the current git branch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make a new theme file using &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;nano ~/.oh-my-zsh/themes/ishanextreme.zsh-theme&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Copy and paste theme code from &lt;a href="https://github.com/ishanExtreme/AwesomeTerminal/blob/main/theme/ishanextreme.zsh-theme" rel="noopener noreferrer"&gt;here&lt;/a&gt; to "~/.oh-my-zsh/themes/ishanextreme.zsh-theme" file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally run &lt;code&gt;nano ~/.zshrc&lt;/code&gt; and set &lt;code&gt;ZSH_THEME="ishanextreme"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;source ~/.zshrc&lt;/code&gt; to reload and apply changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  6. Plugins
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Here's the list of plugins I use&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;git&lt;/li&gt;
&lt;li&gt;zsh-autosuggestions&lt;/li&gt;
&lt;li&gt;sudo&lt;/li&gt;
&lt;li&gt;zsh-syntax-highlighting &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;tutorial to add plugins &lt;a href="https://opensource.com/article/19/9/adding-plugins-zsh" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. Style ITerm2
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Open ITerm2 and hit &lt;code&gt;⌘ ,&lt;/code&gt; go to "Appearance" tab and set theme to "Minimal".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y41xjdi86rw630t8bgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y41xjdi86rw630t8bgj.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Color theme used: &lt;a href="https://github.com/mbadolato/iTerm2-Color-Schemes/blob/master/schemes/Blue%20Matrix.itermcolors" rel="noopener noreferrer"&gt;Blue Matrix&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, let's set up the status bar. Open settings (⌘ ,), go to the Profiles tab, select the "Default" profile, open its "Session" tab, check "Status bar enabled", and then start configuring it. Here's the list of things we will add to our status bar:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Github current branch: drag "git state" from status bar component menu to active components(down). You can configure colors of component by selecting them and clicking "configure component" option at bottom.&lt;/li&gt;
&lt;li&gt;Current Working Directory: drag "Current Directory" component to active components list.&lt;/li&gt;
&lt;li&gt;Current conda environment (if you have Anaconda installed); you can use the same method to display any other information you like (node version, etc.). &lt;/li&gt;
&lt;li&gt;Enable Shell Integration from iTerm2-&amp;gt;Install Shell Integration. &lt;em&gt;Note: this enables a blue prompt mark in the terminal by default; to turn it off, search for the "show mark indicators" setting and disable it.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Now you can create your own variable and display them using "Interpolated Strings".&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;nano ~/.zshrc&lt;/code&gt; and add the following&lt;/li&gt;
&lt;/ul&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iterm2_print_user_vars() {
  # extend this to add whatever
  # you want to have printed out in the status bar
  # if you don't have anaconda installed, use this to display
  # your node version ("node -v") or anything you like
  iterm2_set_user_var condaEnv $(echo $CONDA_DEFAULT_ENV)
  iterm2_set_user_var pwd $(pwd)
}
&lt;/code&gt;&lt;/pre&gt;


&lt;ul&gt;
&lt;li&gt;Now use this variable in status bar, again open configuration setting for status bar, drag "Interpolated String", click on configure component and input &lt;code&gt;\(user.condaEnv)&lt;/code&gt; in "String value" field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4wtaedtraajdwlsa30q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4wtaedtraajdwlsa30q.png" alt="Image description" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  8. Adding and enabling scripts.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In this step we will add some useful scripts to our ITerm and will also add shortcuts for the same.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7k2pdf4srybiirtdl2c.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7k2pdf4srybiirtdl2c.jpeg" alt="Image description" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;cd /Users/"your user name"/Library/Application\ Support/iTerm2/Scripts&lt;/code&gt;, open any editor in this directory, create a new file "awesome_scripts.py", copy the code from &lt;a href="https://github.com/ishanExtreme/AwesomeTerminal/blob/main/scripts/awesome_scripts.py" rel="noopener noreferrer"&gt;here&lt;/a&gt; into it, and save the file &lt;strong&gt;(take care of indentation, it's Python code)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This adds globally callable functions that can be bound to shortcuts. Note that every time you open an iTerm window, you need to enable the script from Scripts-&amp;gt;awesome_scripts.&lt;/li&gt;
&lt;li&gt;Now, to add a shortcut to the Touch Bar, hit &lt;code&gt;⌘ ,&lt;/code&gt;, go to the Keys tab, and click "Add Touch Bar Item". &lt;/li&gt;
&lt;li&gt;Give it a label (use fn for emojis), choose "Invoke Script Function" under the Action field, then enter the function name in the "Function Call" field. &lt;strong&gt;(See the function names and their actions below)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;To customize the Touch Bar, go to View-&amp;gt;Customize Touch Bar; you can see all your newly created shortcuts, so drag and add them. &lt;em&gt;(Tip: Remove the Siri icon to make more space on the Touch Bar 😉)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;List of function calls and their actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;clear()&lt;/code&gt;: It will run "clear" command on all the sessions in a particular tab.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;start_server()&lt;/code&gt;: It will run "yarn start:qa" on all the sessions in a particular tab.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stop_servers()&lt;/code&gt;: It will run "killall node" and "clear" on all the sessions in a particular tab.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;shutdown()&lt;/code&gt;: It will kill all processes in every tab and force-close the iTerm window.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;git_push()&lt;/code&gt;: &lt;strong&gt;To use this make sure you have the theme in Step-5 installed&lt;/strong&gt;. It will run "git add -A", "git commit -m'random emoji in your current line of ITerm'" and "git push origin 'your current branch'" in only your current session.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;git_pull_staging()&lt;/code&gt;: It will run "git pull origin @staging --ff" in all the sessions in your current tab.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  That's It!!!!!
&lt;/h3&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>What is Tor and how to use it?</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Mon, 31 Oct 2022 11:17:51 +0000</pubDate>
      <link>https://dev.to/ishanextreme/what-is-tor-and-how-to-use-it-411n</link>
      <guid>https://dev.to/ishanextreme/what-is-tor-and-how-to-use-it-411n</guid>
      <description>&lt;h2&gt;
  
  
  1. Intended audience.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This article is for everyone who browses the internet and wants to learn how to surf the internet anonymously to protect one's identity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  2. Overview.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This article is an introduction to the open-source project "Tor". We will get an overview of how Tor works, how it can help protect one's identity while browsing the internet, and how to download, set up, and use it. I will try to keep this article as simple and free of technical terms as possible. This article is for educational purposes only.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  3. What is Tor?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The United States Naval Research Laboratory developed The Onion Routing Protocol (Tor) to protect U.S. intelligence communications online. Today, Tor is an open-source project maintained by the Tor Project, Inc., a nonprofit organization. According to the official website, Tor helps you &lt;em&gt;"Browse privately, explore freely. Defend yourself against tracking and surveillance. Circumvent censorship."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. Tor vs VPN.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Normal request&lt;/strong&gt;&lt;br&gt;
When we visit a website, say "&lt;a href="http://www.google.com" rel="noopener noreferrer"&gt;www.google.com&lt;/a&gt;", a packet similar to the one shown in the image below is generated and sent to the requested website's server. This packet contains our (source) IP address, the destination IP address, and the request data. If we are using an HTTPS connection (see the lock icon in the URL bar), the data is encrypted and cannot be read by any middleman, but the source and destination IP addresses can still be sniffed by hackers and by advertising agencies, for example to show a user ads based on the websites they visit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjxi7b8l9uf61hq9v0uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjxi7b8l9uf61hq9v0uc.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Through VPN(Centralised)&lt;/strong&gt;&lt;br&gt;
When a request is sent through a VPN, the client's IP address is hidden and replaced by the VPN server's IP, so no middleman can link the client's source and destination. However, the VPN service provider knows exactly which websites a user visits, so there must be trust between the user and the provider; on top of that, VPN services are usually not free of cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflnlc6zbqwa1zd19q5da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflnlc6zbqwa1zd19q5da.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Through Tor(Decentralised)&lt;/strong&gt;&lt;br&gt;
When a request is sent through Tor, it passes through a chain of Tor nodes before reaching the destination. The client's traffic shows "Tor Node-1" as its destination, while the destination server sees the exit node (e.g. "Tor Node-3") as the source, so it is very difficult for a middleman to tell which website a request is going to or which user a particular request came from. The drawback of using Tor is that it is very slow compared to a direct or VPN connection. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7mtjcsrqedp8cz4mtvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7mtjcsrqedp8cz4mtvv.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. How to download and setup Tor.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Visit the Tor Project's official website, &lt;a href="https://www.torproject.org/download/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and choose your operating system; &lt;strong&gt;on Android, the Tor Browser can also be downloaded directly from the Play Store&lt;/strong&gt;. Tor is currently available for Windows, macOS, Linux, and Android. This downloads the Tor Browser; whenever you want to surf the internet anonymously, simply open the browser and click the "Connect" button.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  6. Is Tor legal?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;So, is Tor legal? In short: yes, using Tor is completely legal and is not banned in India. However, if you use Tor for illicit activities, like buying drugs or weapons, then you can face legal scrutiny if those activities are traced back to you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. Conclusion.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Tor is a great project for protecting one's identity online and can be used like a free, unlimited-data VPN, if you can tolerate the slow loading 😅.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt; [1]: &lt;a href="https://www.torproject.org" rel="noopener noreferrer"&gt;Tor project official website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; [2]: &lt;a href="https://vpnoverview.com/privacy/anonymous-browsing/is-tor-legal/" rel="noopener noreferrer"&gt;Is Tor legal?[VPNOverview]&lt;/a&gt;&lt;/p&gt;




</description>
      <category>tor</category>
      <category>opensource</category>
      <category>vpn</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>TCP Pros and Cons with working example</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Sat, 24 Jul 2021 07:40:30 +0000</pubDate>
      <link>https://dev.to/ishanextreme/tcp-pros-and-cons-with-working-example-47mb</link>
      <guid>https://dev.to/ishanextreme/tcp-pros-and-cons-with-working-example-47mb</guid>
      <description>&lt;h1&gt;
  
  
  Backend Essentials-2 [TCP]
&lt;/h1&gt;




&lt;h3&gt;
  
  
  1. Overview
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;This article covers the basics of TCP(Transmission Control Protocol) in terms of its pros and cons. This article also provides the working example of TCP.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  2. TCP
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;TCP is a transport protocol that is used on top of IP to ensure reliable transmission of packets. TCP includes mechanisms to solve many of the problems that arise from packet-based messaging, such as lost packets, out of order packets, duplicate packets, and corrupted packets.&lt;/p&gt;
&lt;h5&gt;
  
  
  Pros:
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Acknowledgement: this header is used to ensure that data is received successfully. After sending the data, the sender starts a timer; if the receiver doesn't send back an acknowledgement before the timer expires, the sender transmits the packet again. This ensures no data loss.&lt;/li&gt;
&lt;li&gt;Guaranteed delivery.&lt;/li&gt;
&lt;li&gt;Connection Based: Client and server needs to establish a unique connection.&lt;/li&gt;
&lt;li&gt;Congestion Control: data waits (if the network is busy) and is sent when the network can handle it.&lt;/li&gt;
&lt;li&gt;Ordered Packets: TCP connections can detect out of order packets by using the sequence and acknowledgement numbers.&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;
  
  
  Cons:
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Larger Packets: Because of all the above features.&lt;/li&gt;
&lt;li&gt;Slower than UDP.&lt;/li&gt;
&lt;li&gt;Stateful.&lt;/li&gt;
&lt;li&gt;Server Memory (DoS): because of statefulness, the server needs to store connection information. &lt;em&gt;If a misbehaving client keeps sending requests to the server without ever acknowledging, the server keeps waiting and the data stays in memory (until timeout), so the server can eventually die if many such clients send requests.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3. Code Example
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Requirements: nodejs, code editor, telnet&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;net&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;net&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;net&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the script by executing the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node &amp;lt;filename&amp;gt;.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to connect to the TCP server; to do so, execute the following command in a terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;telnet localhost 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and now a "Hello" message will be printed on terminal screen, type any message in terminal and it will be printed on server's console screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusbhgnf6nrlwg1p906gd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusbhgnf6nrlwg1p906gd.png" alt="Alt Text" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoxcospn3vxfqzn4bbnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoxcospn3vxfqzn4bbnv.png" alt="Alt Text" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt; [1]: &lt;a href="https://youtu.be/qqRYkcta6IE" rel="noopener noreferrer"&gt;TCP vs UDP Crash Course[Hussein Nasser]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; [2]: &lt;a href="https://www.khanacademy.org/computing/computers-and-internet/xcae6f4a7ff015e7d:the-internet/xcae6f4a7ff015e7d:transporting-packets/a/transmission-control-protocol--tcp#:~:text=The%20Transmission%20Control%20Protocol%20(TCP,duplicate%20packets%2C%20and%20corrupted%20packets." rel="noopener noreferrer"&gt;Transmission Control Protocol[Khan Academy]&lt;/a&gt;&lt;/p&gt;




</description>
      <category>backendessentials</category>
      <category>webdev</category>
      <category>udp</category>
      <category>tcp</category>
    </item>
    <item>
      <title>Why your data is not secure in public WIFI?-The OSI Model</title>
      <dc:creator>Ishan Mishra</dc:creator>
      <pubDate>Fri, 21 May 2021 07:08:57 +0000</pubDate>
      <link>https://dev.to/ishanextreme/why-your-data-is-not-secure-in-public-wifi-the-osi-model-270k</link>
      <guid>https://dev.to/ishanextreme/why-your-data-is-not-secure-in-public-wifi-the-osi-model-270k</guid>
      <description>&lt;h1&gt;
  
  
  Backend Essentials-1 [The OSI Model]
&lt;/h1&gt;




&lt;h3&gt;
  
  
  1. Overview
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;This article gives basic information about the OSI (Open Systems Interconnection) model. The OSI model is a conceptual framework used to describe the functions of a networking system; it consists of 7 layers. It was published in 1984 and is still used today as a means of describing network architecture.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;2. The 7 Layers of the OSI Model&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;To understand the layers of the OSI model, let's follow a simple GET request on its way to a server.&lt;/p&gt;
&lt;h4&gt;Layer-7 Application&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkbuukm73jc56rhelbhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkbuukm73jc56rhelbhf.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This layer holds the request as text: the request line, headers, cookies and so on, prepared by the network application. On the way back it receives the response for the application to process and display.&lt;/p&gt;
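&lt;p&gt;A quick sketch of what the application layer actually produces for a GET request: just a block of text with a request line, headers and cookies. The host name and cookie values below are made up for illustration.&lt;/p&gt;

```python
# Sketch of an application-layer payload: a raw HTTP GET request string.
# Host name and cookie values are hypothetical examples.

def build_get_request(host, path="/", cookies=None):
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}", "Accept: text/html"]
    if cookies:
        cookie_str = "; ".join(f"{k}={v}" for k, v in cookies.items())
        lines.append(f"Cookie: {cookie_str}")
    # HTTP separates headers from the body with a blank line
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_get_request("example.com", cookies={"session": "abc123"})
print(request)
```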
&lt;h4&gt;Layer-6 Presentation&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldoq6o7r6b4u1sgt9jpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldoq6o7r6b4u1sgt9jpq.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If using HTTPS, this layer encrypts/decrypts the above data.&lt;/p&gt;
&lt;h4&gt;Layer-5 Session&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojvtr8lqm2tggxu6aqil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojvtr8lqm2tggxu6aqil.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Controls the connection between the two computers. For HTTP, for example, we require a TCP connection. It assigns a session tag that is sent along with the data.&lt;/p&gt;
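&lt;p&gt;A minimal sketch of that idea, assuming a loopback-only setup: before any HTTP text moves, a TCP connection has to be opened. Here a tiny server and client talk over localhost; the response text is a made-up example.&lt;/p&gt;

```python
# Sketch: the TCP connection that HTTP rides on, over localhost.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()          # connection established here
    conn.recv(1024)
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # TCP handshake happens here
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```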
&lt;h4&gt;Layer-4 Transport&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9vhi5g5la7rgcx0a5bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9vhi5g5la7rgcx0a5bg.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On receiving data from the layers above, this layer performs segmentation, i.e. it breaks the data into smaller units and attaches the source and target ports (the source port is usually auto-generated for the application). It also implements error control.&lt;/p&gt;
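&lt;p&gt;Segmentation can be sketched like this: split a payload into fixed-size chunks, tag each with a sequence number, and reassemble on the other side. The 1460-byte size is a rough stand-in for a typical TCP segment payload, not a fixed rule.&lt;/p&gt;

```python
# Sketch of transport-layer segmentation and reassembly.
SEGMENT_SIZE = 1460  # roughly one TCP payload over standard Ethernet

def segment(payload):
    # Each segment carries a sequence number so order can be restored.
    return [
        (seq, payload[offset:offset + SEGMENT_SIZE])
        for seq, offset in enumerate(range(0, len(payload), SEGMENT_SIZE))
    ]

def reassemble(segments):
    # Segments may arrive out of order; sort by sequence number first.
    return b"".join(chunk for _, chunk in sorted(segments))

payload = b"x" * 5000
segments = segment(payload)
print(len(segments), "segments")
```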
&lt;h4&gt;Layer-3 Network&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5fiisso8psfpju48rbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5fiisso8psfpju48rbs.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This layer handles the routing of the data. It does not check for errors; its job is simply to get the data to the destination IP address.&lt;/p&gt;
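&lt;p&gt;A tiny sketch of the kind of decision this layer makes: is the destination on my own subnet (deliver directly) or not (hand the packet to the gateway)? The subnet and addresses below are made-up private-range examples.&lt;/p&gt;

```python
# Sketch of a network-layer routing decision using Python's stdlib.
import ipaddress

local_net = ipaddress.ip_network("192.168.1.0/24")  # hypothetical LAN

def next_hop(dest_ip, gateway="192.168.1.1"):
    if ipaddress.ip_address(dest_ip) in local_net:
        return dest_ip   # same subnet: deliver straight to the host
    return gateway       # different network: hand off to the router

print(next_hop("192.168.1.42"))   # local delivery
print(next_hop("8.8.8.8"))        # goes via the gateway
```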
&lt;h4&gt;Layer-2 Data Link&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk6eac8zbqcpvouplrmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk6eac8zbqcpvouplrmg.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This layer performs node-to-node transmission. It divides the data into frames and attaches MAC addresses to the frames. It also performs error detection.&lt;/p&gt;
&lt;h4&gt;Layer-1 Physical&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F746e9qfvp5cpdv1wh1us.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F746e9qfvp5cpdv1wh1us.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This layer carries the information as raw bits and transmits them from one node to another in the form of electrical, light, or radio signals.&lt;/p&gt;

&lt;p&gt;At the receiver's end the data is then processed in reverse, from Layer-1 up to Layer-7.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;3. Why Is Public WIFI Unsafe?&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F956qo2t5zhd14hl0itr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F956qo2t5zhd14hl0itr7.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Frames sent by a client on a WIFI network reach every user connected to that network. A network card normally discards frames that are not addressed to it, but an attacker can run an application that captures every frame that arrives. If the data is not encrypted (for example, the site does not use HTTPS), the attacker can read all of it. This technique is known as "network sniffing".&lt;/p&gt;
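&lt;p&gt;The filtering step can be sketched in a few lines: a normal card keeps only frames addressed to its own MAC, while a card in "promiscuous mode" keeps everything. The MAC addresses and payloads below are made up for illustration.&lt;/p&gt;

```python
# Sketch of NIC frame filtering vs. a promiscuous-mode sniffer.
MY_MAC = "aa:bb:cc:dd:ee:01"  # hypothetical address of this machine

frames = [
    {"dest": "aa:bb:cc:dd:ee:01", "data": "frame addressed to me"},
    {"dest": "aa:bb:cc:dd:ee:02", "data": "someone else's unencrypted login"},
]

def received(frames, promiscuous=False):
    if promiscuous:
        return [f["data"] for f in frames]  # sniffer keeps every frame
    # normal behaviour: keep only frames addressed to this card
    return [f["data"] for f in frames if f["dest"] == MY_MAC]

print(received(frames))                     # normal card: one frame
print(received(frames, promiscuous=True))   # sniffer sees both
```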
&lt;/blockquote&gt;

&lt;h3&gt;4. Conclusion&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;This was an overview of the OSI model from a software engineering perspective.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt; [1]: &lt;a href="https://youtu.be/7IS7gigunyI" rel="noopener noreferrer"&gt;The OSI Model-Explained by Example[Hussein Nasser]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; [2]: &lt;a href="https://www.geeksforgeeks.org/layers-of-osi-model/" rel="noopener noreferrer"&gt;Layers of OSI Model&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; [3]: &lt;a href="https://www.forcepoint.com/cyber-edu/osi-model" rel="noopener noreferrer"&gt;What is the OSI Model?&lt;/a&gt;&lt;/p&gt;




</description>
      <category>backendessentials</category>
      <category>webdev</category>
      <category>networking</category>
      <category>osi</category>
    </item>
  </channel>
</rss>
