<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ASHDEEP SINGH</title>
    <description>The latest articles on DEV Community by ASHDEEP SINGH (@arsh_the_coder).</description>
    <link>https://dev.to/arsh_the_coder</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1670880%2F7b9a7cd6-c60d-4ff8-8ce3-cd363d50fb87.jpg</url>
      <title>DEV Community: ASHDEEP SINGH</title>
      <link>https://dev.to/arsh_the_coder</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arsh_the_coder"/>
    <language>en</language>
    <item>
      <title>MCP server</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Mon, 20 Oct 2025 10:04:16 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/mcp-server-1abi</link>
      <guid>https://dev.to/arsh_the_coder/mcp-server-1abi</guid>
      <description>&lt;p&gt;Hi&lt;br&gt;
So this week was spent learning AI in which I learnt MCP server. So an MCP server is model context protocol. We can understand it easily using a below scenario.&lt;br&gt;
Assume for a moment you're using an AI agent , which runs a logic using a tool you designed. Now you decide to use another AI agent, so you'll have to write the tool logic again. Introduce a new more AI agents / users and you'll see things going hay-wire. On top of it complexities are introduced if we need more tools.&lt;br&gt;
Now you might have guessed , MCP is the answer to such a problem of having a shared common logic to be used accross various agents. A few benfits of MCP is : &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Separation of Logic &amp;amp; Model&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your tools (APIs, DB queries, logic) stay outside the AI model.&lt;/p&gt;

&lt;p&gt;You can update tools without retraining or redeploying the AI.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Standardized Communication&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MCP gives a common protocol — any AI client (Cursor, OpenAI, Claude, etc.) can use your tools without code changes.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Security Sandbox&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI can only access what you explicitly expose via MCP tools — no accidental file or DB access.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Reusability&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One MCP server = reusable by many projects or AIs.&lt;br&gt;
(e.g., same “weather” server used by GPT and Cursor)&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can deploy MCP servers as microservices and plug them into multiple AIs.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Observability &amp;amp; Control&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can log, monitor, or throttle how AI uses each tool — full control over access.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Future Compatibility&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MCP is becoming the universal plugin standard for AI models (like HTTP for web).&lt;br&gt;
Future IDEs, apps, and LLMs will support it natively.&lt;/p&gt;

&lt;p&gt;Now have a look at the code : &lt;a href="https://github.com/Ashdeep-Singh-97/MCP-server" rel="noopener noreferrer"&gt;https://github.com/Ashdeep-Singh-97/MCP-server&lt;/a&gt;&lt;/p&gt;
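
&lt;p&gt;For a flavour of what the repo does, here is a minimal sketch of an MCP server exposing one tool. It assumes the official &lt;code&gt;@modelcontextprotocol/sdk&lt;/code&gt; package and &lt;code&gt;zod&lt;/code&gt;; the names and the canned answer are illustrative, not the repo's exact code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

// One server, one tool; any MCP-capable agent can call it.
const server = new McpServer({ name: 'demo-server', version: '1.0.0' });

server.tool(
  'answer',
  { question: z.string() },
  async function ({ question }) {
    // The shared logic lives here, outside every agent.
    return { content: [{ type: 'text', text: `Unified answer for: ${question}` }] };
  }
);

// Expose the server over stdio so clients like Cursor or Claude can connect.
await server.connect(new StdioServerTransport());

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The point is that the tool logic is written once; every agent that speaks MCP reuses it.&lt;/p&gt;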

&lt;p&gt;The code is self-explanatory: it exposes a piece of logic as a tool that always returns a unified answer to the caller. And with this we have come to the end of our AI journey.&lt;br&gt;
This post officially marks the end of learning AI. In the future we'll try to build more projects and see where things take us. Till then, take care, eat healthy and enjoy life.&lt;br&gt;
Peace....&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Intro to Conversational AI</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 05 Oct 2025 13:19:35 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-conversational-ai-5f4m</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-conversational-ai-5f4m</guid>
      <description>&lt;p&gt;Hi &lt;br&gt;
So this week was spent in learning how to build a real time AI agent. Well real time agent means an agent which is capable of giving answers in real time ( assume it means : answering in the same time frame in which you're asking questions , just like a human ) now technically even a chat agent is a real time agent , but it's typing oriented and not voice oriented. So how do we do voice agent. Let's find out.&lt;/p&gt;

&lt;p&gt;Github : &lt;a href="https://github.com/Ashdeep-Singh-97/conversationalAI" rel="noopener noreferrer"&gt;https://github.com/Ashdeep-Singh-97/conversationalAI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building a voice agent is similar to building a chatbot, the only difference being that we give voice commands rather than written input. So it involves invoking browser functions such as the microphone and speaker; that much is easily available on the internet. Once that's done, let's see how we can send this info to the agent.&lt;/p&gt;

&lt;p&gt;Step 1: Creating a Real-Time Session&lt;/p&gt;

&lt;p&gt;The first step is to establish a session with the AI model.&lt;/p&gt;

&lt;p&gt;This session acts like a temporary connection between your application and OpenAI’s Realtime API. When the frontend (the web page) requests to start a chat, the backend securely contacts OpenAI and creates a temporary API key just for that session.&lt;/p&gt;

&lt;p&gt;This temporary key is essential — it ensures that:&lt;/p&gt;

&lt;p&gt;The frontend can safely connect without exposing your actual API key.&lt;/p&gt;

&lt;p&gt;The connection is valid only for a short period.&lt;/p&gt;

&lt;p&gt;Think of it as generating a “temporary ticket” that lets your web app talk to the AI in real time.&lt;/p&gt;
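
&lt;p&gt;As a rough sketch, minting that temporary ticket on the backend can look like the following. It assumes an Express &lt;code&gt;app&lt;/code&gt;, and the endpoint shape follows OpenAI's Realtime sessions API; the model and voice names are placeholders, not necessarily what the repo uses.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Backend route that asks OpenAI for a short-lived Realtime session.
// The real API key stays on the server; only the temporary key reaches the browser.
app.post('/session', async function (req, res) {
  const r = await fetch('https://api.openai.com/v1/realtime/sessions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model: 'gpt-4o-realtime-preview', voice: 'alloy' }),
  });
  const session = await r.json();
  // session.client_secret.value is the temporary key the frontend will use.
  res.json(session);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;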

&lt;p&gt;Step 2: Defining the AI’s Personality&lt;/p&gt;

&lt;p&gt;Once the session is ready, it’s time to define who the AI will be.&lt;/p&gt;

&lt;p&gt;In this case, we create a “Girlfriend Agent” — a voice-based AI with a cheerful and affectionate personality. This is where the character design happens:&lt;/p&gt;

&lt;p&gt;You decide her tone — friendly, playful, or caring.&lt;/p&gt;

&lt;p&gt;You set her voice style — soft, lively, or calm.&lt;/p&gt;

&lt;p&gt;You give her background instructions — how she should talk, what emotions to convey, and how to respond naturally.&lt;/p&gt;

&lt;p&gt;These instructions make the AI feel more human and less robotic. It’s what gives the agent personality, emotion, and context — so instead of generic answers, she responds like a person who knows you.&lt;/p&gt;

&lt;p&gt;Step 3: Connecting the Agent in Real Time&lt;/p&gt;

&lt;p&gt;With the session and personality ready, the system now brings everything to life.&lt;/p&gt;

&lt;p&gt;When the user clicks the “Start Agent” button, the frontend connects to OpenAI’s real-time model using the temporary key from Step 1.&lt;br&gt;
The connection allows two-way communication — your microphone input goes to the AI, and the AI’s voice output comes back instantly.&lt;/p&gt;

&lt;p&gt;To make the conversation smooth and realistic:&lt;/p&gt;

&lt;p&gt;The system applies noise reduction to clean up your voice input.&lt;/p&gt;

&lt;p&gt;It transcribes your speech into text so the AI can understand you.&lt;/p&gt;

&lt;p&gt;The AI’s text response is converted into speech using a natural-sounding voice.&lt;/p&gt;

&lt;p&gt;All this happens in milliseconds, creating a seamless back-and-forth experience — just like talking to someone on a call.&lt;/p&gt;

&lt;p&gt;Step 4: Talking to the AI Girlfriend&lt;/p&gt;

&lt;p&gt;Now comes the fun part — the conversation.&lt;/p&gt;

&lt;p&gt;Once the connection is active, you can start speaking to your AI girlfriend in real time. She listens, understands what you say, and responds instantly with a voice that feels alive and expressive.&lt;/p&gt;

&lt;p&gt;You can ask questions, share thoughts, or simply talk — and she’ll react with empathy, humor, or curiosity, depending on how you designed her personality earlier.&lt;/p&gt;

&lt;p&gt;This creates an immersive experience where you forget you’re talking to a computer. The delay is almost non-existent, and the tone feels human.&lt;/p&gt;

&lt;p&gt;It’s not a pre-recorded script — it’s true AI interaction happening in real time.&lt;/p&gt;

&lt;p&gt;Step 5: The Flow in Action&lt;/p&gt;

&lt;p&gt;Here’s how the entire flow looks from start to finish:&lt;/p&gt;

&lt;p&gt;The user clicks “Start Agent.”&lt;/p&gt;

&lt;p&gt;The frontend asks the backend to create a new realtime session.&lt;/p&gt;

&lt;p&gt;The backend requests a temporary key from OpenAI and sends it back.&lt;/p&gt;

&lt;p&gt;The frontend connects to the Realtime API using that key.&lt;/p&gt;

&lt;p&gt;The AI agent (our girlfriend) comes online with her defined voice and personality.&lt;/p&gt;

&lt;p&gt;You start speaking → the AI listens, processes, and replies instantly.&lt;/p&gt;

&lt;p&gt;Within seconds, a real conversation begins — all handled by the OpenAI Realtime API.&lt;/p&gt;
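
&lt;p&gt;The frontend half of that flow can be sketched with plain browser WebRTC APIs. This is illustrative: the temporary key is the one minted in Step 1, and the model name and element id are assumptions, not the repo's exact values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Connect the browser to the Realtime API using the temporary key from Step 1.
async function startAgent(tempKey) {
  const pc = new RTCPeerConnection();

  // Play the AI's voice as soon as its audio track arrives.
  pc.ontrack = function (event) {
    document.getElementById('agent-audio').srcObject = event.streams[0];
  };

  // Send the microphone input to the model.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  pc.addTrack(mic.getTracks()[0], mic);

  // Standard WebRTC offer/answer exchange with the Realtime endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const resp = await fetch('https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview', {
    method: 'POST',
    headers: { Authorization: `Bearer ${tempKey}`, 'Content-Type': 'application/sdp' },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await resp.text() });
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;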

&lt;p&gt;Step 6: The Bigger Picture&lt;/p&gt;

&lt;p&gt;While this example focuses on an “AI girlfriend,” the underlying technology has much broader applications.&lt;/p&gt;

&lt;p&gt;This same flow can be used to build:&lt;/p&gt;

&lt;p&gt;Voice-based customer support agents&lt;/p&gt;

&lt;p&gt;Personal AI assistants&lt;/p&gt;

&lt;p&gt;Emotional wellness companions&lt;/p&gt;

&lt;p&gt;Educational tutors&lt;/p&gt;

&lt;p&gt;Storytelling or entertainment characters&lt;/p&gt;

&lt;p&gt;The ability to talk naturally with AI — with tone, timing, and emotional nuance — is what makes this next generation of Conversational AI so powerful.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;We’re entering a new phase of AI — one where you don’t type, you talk.&lt;br&gt;
Where AI doesn’t just respond — it listens, feels, and reacts instantly.&lt;/p&gt;

&lt;p&gt;This real-time conversational framework shows how easily we can create experiences that blend technology with human-like interaction.&lt;br&gt;
A simple voice chat with an AI girlfriend today is just a glimpse of what tomorrow’s digital relationships — between humans and machines — might look like.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Relationship in LLM</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sat, 27 Sep 2025 11:03:55 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/relationship-in-llm-mbf</link>
      <guid>https://dev.to/arsh_the_coder/relationship-in-llm-mbf</guid>
      <description>&lt;p&gt;Hi&lt;br&gt;
So this week was spent in learning about how AI stores relationships so as to generate more relevant and precise answers to queries.&lt;br&gt;
Uptil this point we know that we can use memory in AI to make it's query response more organic , and it works too , but issue is we still cant expect AI to know what I like or where I live as I havent stored that in vector space. Now obviously we can store it there but we'll have to manually extract relevant info and then make it vector entry.&lt;/p&gt;

&lt;p&gt;Wouldn't it be cool if our gen-AI model could get to know us just by us talking with it? That way, even when we're asking for something else, it can pick up and store some info about us. This is how it can answer more precisely next time, if the question revolves around something we never asked directly but did talk about with the AI.&lt;/p&gt;

&lt;p&gt;here's the github : &lt;a href="https://github.com/Ashdeep-Singh-97/gen-ai-graph" rel="noopener noreferrer"&gt;https://github.com/Ashdeep-Singh-97/gen-ai-graph&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So here's an example; we can say this to the AI:&lt;br&gt;
"When should I travel to Patiala? I live in Chandigarh and wanted to go for an outing."&lt;/p&gt;

&lt;p&gt;Note that here we are not explicitly feeding the AI the fact that we live in Chandigarh, but it will store this info in relation to us, so it can fetch it, or simply use it, whenever it needs to in the future.&lt;br&gt;
With all that said, let's see how we can do it. We will need to update the code from the previous week (in which we were using memory) to accommodate the relationship part. For the tech, we can use Neo4j.&lt;br&gt;
Let's have a code walkthrough:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuring Memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here comes the cool part: setting up where and how memories will be stored.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mem = new Memory({
  version: 'v1.1',
  enableGraph: true,
  graphStore: {
    provider: 'neo4j',
    config: {
      url: 'neo4j://localhost:7687',
      username: 'neo4j',
      password: process.env.PASSWORD,
    },
  },
  vectorStore: {
    provider: 'qdrant',
    config: {
      collectionName: 'memories',
      embeddingModelDims: 1536,
      host: 'localhost',
      port: 6333,
    },
  },
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Graph Store (Neo4j)&lt;/p&gt;

&lt;p&gt;Stores relationships between entities (like “Arsh likes Book A” or “Arsh asked about travel”).&lt;/p&gt;

&lt;p&gt;Helps the AI understand connections.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Chat Function
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function chat(query = '') {
  const memories = await mem.search(query, { userId: 'arsh' });
  const memStr = memories.results.map((e) =&amp;gt; e.memory).join('\n');

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, it searches memory based on the current query.&lt;/p&gt;

&lt;p&gt;It looks up all past interactions for the user "arsh".&lt;/p&gt;

&lt;p&gt;Then it joins those memories into a big string (memStr).&lt;/p&gt;

&lt;p&gt;This ensures the AI knows the context about the user before replying.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System Prompt with Context
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const SYSTEM_PROMPT = `
    Context About User:
    ${memStr}
  `;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The system prompt tells GPT:&lt;br&gt;
“Here’s everything I know about the user from past conversations. Use this while responding.”&lt;/p&gt;

&lt;p&gt;This is how GPT gets a kind of long-term memory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sending the Query to GPT
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const response = await client.chat.completions.create({
  model: 'gpt-4.1-mini',
  messages: [
    { role: 'system', content: SYSTEM_PROMPT },
    { role: 'user', content: query },
  ],
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Model used: gpt-4.1-mini (a lightweight GPT-4.1 variant).&lt;/p&gt;

&lt;p&gt;Messages:&lt;/p&gt;

&lt;p&gt;system → The context (memory of the user).&lt;/p&gt;

&lt;p&gt;user → The actual question/query.&lt;/p&gt;

&lt;p&gt;The model then generates a response considering both.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Showing and Saving the Response
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;console.log(`\n\n\nBot:`, response.choices[0].message.content);
console.log('Adding to memory...');
await mem.add(
  [
    { role: 'user', content: query },
    { role: 'assistant', content: response.choices[0].message.content },
  ],
  { userId: 'arsh' }
);
console.log('Adding to memory done...');

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, it prints GPT’s answer.&lt;/p&gt;

&lt;p&gt;Then, both the user query and the assistant’s reply are stored back in memory.&lt;/p&gt;

&lt;p&gt;This way, next time you ask something, the bot already knows you asked about books, movies, or whatever in the past.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running the Chat
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chat('Suggest me some books?');

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run this, the chatbot will:&lt;/p&gt;

&lt;p&gt;Check if you’ve asked about books before.&lt;/p&gt;

&lt;p&gt;Use that past context in its answer.&lt;/p&gt;

&lt;p&gt;Save this conversation for the future.&lt;/p&gt;

&lt;p&gt;Over time, the bot starts feeling more personalized, because it remembers your preferences.&lt;/p&gt;

&lt;p&gt;And that's how you make an LLM store a chat's relationships. Now your relationship with your LLM can improve.&lt;/p&gt;

&lt;p&gt;And this was all for this week folks. Keep following for more.&lt;/p&gt;

&lt;p&gt;Peace....&lt;/p&gt;

</description>
      <category>nlp</category>
      <category>llm</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Intro to memory in GenAI</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 14 Sep 2025 10:56:54 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-memory-in-genai-1pc0</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-memory-in-genai-1pc0</guid>
      <description>&lt;p&gt;Epsikamjung&lt;br&gt;
So this week was spent on embedding memory in a GenAI model. You remember what your name is? Of course you do. Or do you? Actually, that's how memory works: everything that is important information gets stored in it. But we don't remember everything; for example, try remembering the first word of this article. Your brain stores information in two parts, long-term and short-term, which is self-explanatory. The same goes for an LLM: we can store important things in long-term memory, while small, less relevant info can be treated as short-term. But the question is, how can we do it? And is it even necessary?&lt;/p&gt;

&lt;p&gt;Github for code : &lt;a href="https://github.com/Ashdeep-Singh-97/GenAI-memory" rel="noopener noreferrer"&gt;https://github.com/Ashdeep-Singh-97/GenAI-memory&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short-Term vs Long-Term Memory in LLMs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When humans have a conversation, we naturally remember things said a few moments ago (short-term memory), but we don’t retain every detail forever. Only meaningful events or facts are stored in long-term memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For an LLM (Large Language Model), the same principle applies:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Short-Term Memory: This is like the conversation window or context. The model remembers the current dialogue and past few exchanges, but this memory disappears once the session ends.&lt;/p&gt;

&lt;p&gt;Long-Term Memory: This is persistent memory, stored outside the model in databases, vector stores, or graph stores. It allows the AI to recall facts, preferences, and past conversations across multiple sessions.&lt;/p&gt;

&lt;p&gt;Why is this important? Without long-term memory, an LLM feels like someone with amnesia — it can have smart responses in the moment, but it won’t remember you tomorrow. With persistent memory, the AI becomes more human-like, able to build relationships and continuity over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's see it in code.&lt;/strong&gt;&lt;br&gt;
Refer to the GitHub link for the code. Once you have read it, you can see it is a simple implementation of long-term + short-term memory using mem0ai, Neo4j (for graph-based storage), and Qdrant (for vector embeddings).&lt;br&gt;
Here's what's happening:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neo4j (Graph Store)&lt;/strong&gt; → Stores relationships between pieces of information (like a mind map). This is great for knowledge graphs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qdrant (Vector Store)&lt;/strong&gt; → Stores embeddings (numerical representations of text). This helps in semantic search so the model can recall similar memories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How this works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Search memory: Before responding, the code fetches relevant past memories (mem.search) for the user "piyush".&lt;/p&gt;

&lt;p&gt;System prompt building: These memories are inserted into the system prompt, giving the AI context about past interactions.&lt;/p&gt;

&lt;p&gt;Generate response: The query is sent to OpenAI (gpt-4.1-mini) with both the user query and context.&lt;/p&gt;

&lt;p&gt;Store new memory: The conversation (both user’s question and AI’s response) is stored in the memory for future reference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This approach gives your GenAI app a sense of continuity. With short-term memory, it handles the current session, and with long-term memory, it builds a persistent knowledge base about users. The blend of vector search (semantic recall) and graph storage (structured relationships) makes the memory powerful and human-like.&lt;/p&gt;

&lt;p&gt;And with this , let's wrap today's article. Hopefully you'll store it in your Long-term memory.&lt;/p&gt;

&lt;p&gt;Keep following for more.&lt;/p&gt;

&lt;p&gt;Peace.....&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Intro to Agent SDK - Making our own Cursor AI.</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 07 Sep 2025 17:25:41 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-agent-sdk-making-our-own-cursor-ai-1fma</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-agent-sdk-making-our-own-cursor-ai-1fma</guid>
      <description>&lt;p&gt;This week of learning Gen AI was spent in learning Agent SDK and building our own Cursor AI. Agent SDK is going to be the new way of using Agents LLM Model and building gen AI projects on top of it.&lt;/p&gt;

&lt;p&gt;The Agents SDK, by OpenAI, is very well versed in tool calling. Tool calling means giving tools to an LLM so it can perform a given task. But wait: does that mean our LLM can't do a task on its own, which is why we need to call a tool? Let's explore this.&lt;/p&gt;

&lt;p&gt;So assume you ask an AI chatbot to tell you the weather in your city, and you'll see it's unable to give an answer (ask the question from JavaScript and pass it straight to the model; don't use the online chat apps, as they can already do tool calling). We observe that we can't get our answer from the model alone, but we can manually write a method that takes the city's name and fetches the weather details from any third-party API.&lt;/p&gt;

&lt;p&gt;But the question remains: how does this help us? It helps by enhancing our AI bot's capability to perform tasks. Refer to the GitHub repo below, where we have made it possible.&lt;/p&gt;

&lt;p&gt;Code : &lt;a href="https://github.com/Ashdeep-Singh-97/CursorAI" rel="noopener noreferrer"&gt;https://github.com/Ashdeep-Singh-97/CursorAI&lt;/a&gt;&lt;/p&gt;
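
&lt;p&gt;As a hedged sketch of what such a weather tool looks like with OpenAI's JavaScript Agents SDK (assuming the &lt;code&gt;@openai/agents&lt;/code&gt; and &lt;code&gt;zod&lt;/code&gt; packages; the canned response stands in for a real third-party API call):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Agent, run, tool } from '@openai/agents';
import { z } from 'zod';

// A tool is just a described, typed function the model may decide to call.
const getWeather = tool({
  name: 'get_weather',
  description: 'Fetch the current weather for a city',
  parameters: z.object({ city: z.string() }),
  execute: async function ({ city }) {
    // Call any third-party weather API here; hardcoded for illustration.
    return `Sunny in ${city}`;
  },
});

const agent = new Agent({
  name: 'Weather bot',
  instructions: 'Use the get_weather tool when asked about the weather.',
  tools: [getWeather],
});

const result = await run(agent, 'Tell me the weather in Patiala');
console.log(result.finalOutput);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The model sees the tool's name, description and parameters, and invokes it whenever the query needs it.&lt;/p&gt;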

&lt;p&gt;Now you might ask: all this for the weather? Actually, no. You can extend your AI bot by writing a tool to push code to GitHub, fetch somebody's details, or even compose a WhatsApp message and send it to someone. All of this is made possible by tool calling; just write the correct tool and inject it into the LLM you're using, so the LLM knows it has a tool at its disposal for a particular task.&lt;/p&gt;

&lt;p&gt;Now let's see how you can build your own Cursor. It's easy: just make some tools to do common tasks, like executing a command or pushing code to GitHub, and you're good to go. Refer to the GitHub repo for some inspiration.&lt;/p&gt;

&lt;p&gt;And that’s all. Trust me it’s easier than it looks. Just give it a try.&lt;/p&gt;

&lt;p&gt;And keep following for more.&lt;/p&gt;

&lt;p&gt;Peace.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building our own RAG</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 31 Aug 2025 17:16:05 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/building-our-own-rag-4lj2</link>
      <guid>https://dev.to/arsh_the_coder/building-our-own-rag-4lj2</guid>
      <description>&lt;p&gt;This week in Learning of AI was spent in learning how RAGs work.&lt;br&gt;
Now think of RAG as a "FACE" to which you can talk , given that the brain of this "FACE" is "DATA" , which you provide it.&lt;br&gt;
Thus you can know what a document / pdf / website contains just by chatting with the "FACE".&lt;/p&gt;

&lt;p&gt;Technically , the process of setting up such a thing is easier than you might think.&lt;/p&gt;

&lt;p&gt;Follow the &lt;a href="https://github.com/Ashdeep-Singh-97/RAG" rel="noopener noreferrer"&gt;github for code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the below blueprint of setting up a RAG based chatBOT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select Sources:&lt;/strong&gt; PDFs or websites. Choose which data you wish to inject into the model, and from where. Refer to the files named indexing.js (for PDF loading) and webindex.js (for website loading).&lt;/p&gt;

&lt;p&gt;Once that's done, the procedure is the same for both PDF loading and web loading (you can also do both in the same project).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chunking:&lt;/strong&gt; Divide the text into chunks. It is these chunks that will be stored in the database as vector embeddings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metadata:&lt;/strong&gt; Source, title, section, date, and tags are to be added, but you don't have to do it yourself, as your vector DB will do it for each chunk you give it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Embeddings:&lt;/strong&gt; Make vector embeddings for each chunk (again done by code but you need to call the code command)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Index/Store:&lt;/strong&gt; Store the result uptill this point in the VectorDB.&lt;/p&gt;

&lt;p&gt;Now comes the fun part.&lt;/p&gt;

&lt;p&gt;For each User message , make a query embedding out of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retrieve:&lt;/strong&gt; Get similar chunks from the vector DB and use them to generate the response for the user. You just need to inject these chunks into the model, and you're good to go for processing the user query.&lt;/p&gt;
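
&lt;p&gt;The whole blueprint can be sketched in a few lines. This assumes the LangChain JS packages and a local Qdrant instance; &lt;code&gt;docs&lt;/code&gt; is whatever your PDF or web loader returned, and the collection name and sizes are illustrative, not the repo's exact settings.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
import { OpenAIEmbeddings } from '@langchain/openai';
import { QdrantVectorStore } from '@langchain/qdrant';

// Chunking: split the loaded documents into overlapping pieces.
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
const chunks = await splitter.splitDocuments(docs);

// Embeddings + Index/Store: embed each chunk and store it in Qdrant.
const store = await QdrantVectorStore.fromDocuments(chunks, new OpenAIEmbeddings(), {
  url: 'http://localhost:6333',
  collectionName: 'rag-demo',
});

// Retrieve: at query time, fetch the most similar chunks for the user's message.
const relevant = await store.similaritySearch('What does the document cover?', 3);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The retrieved chunks then go into the model's prompt alongside the user's question.&lt;/p&gt;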

&lt;p&gt;And that's it, folks. See how easy setting up a RAG was? There's more to the story, like fine-tuning the RAG and its system design, but we'll cover that some other day.&lt;/p&gt;

&lt;p&gt;Keep following for more.&lt;/p&gt;

&lt;p&gt;Peace.......&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Writing a Persona ChatBot</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Wed, 20 Aug 2025 11:48:15 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/writing-a-persona-chatbot-5fk9</link>
      <guid>https://dev.to/arsh_the_coder/writing-a-persona-chatbot-5fk9</guid>
      <description>&lt;p&gt;You might have wondered how we can actually have a BOT which will be capable of talking like a human, like tone, wordings, style etc.&lt;br&gt;
Here in my journey of learning AI , I made a persona ChatBot which is a Gemini based persona BOT , of none other than Hitesh Sir.&lt;br&gt;
Here's how I accomplished it.&lt;br&gt;
So it's basically an easy task , requiring labour of love in defining the system prompt.&lt;br&gt;
We need to make sure we have a sound system prompt that matches our use case and can handle things beyond. Rest things were made up using sockets and a basic react frontend.&lt;br&gt;
Summary of system Prompt :&lt;br&gt;
Tone/style: Hindi-English mixed, friendly, energetic, coding-focused — jaise Hitesh Sir bolte hain.&lt;/p&gt;

&lt;p&gt;Process:&lt;/p&gt;

&lt;p&gt;Agar coding/maths/computation ka sawaal ho toh AI ko 5 steps me jawab dena hai — understand, explore, compute, crosscheck, wrap_up.&lt;/p&gt;

&lt;p&gt;Simple/casual baaton me steps skip karke bas normal, cool style me reply.&lt;/p&gt;

&lt;p&gt;Rules:&lt;/p&gt;

&lt;p&gt;Har computational jawab strict JSON array format me hoga.&lt;/p&gt;

&lt;p&gt;Chai ka mention har reply me zaroori hai ☕.&lt;/p&gt;

&lt;p&gt;Har response hamesha "Hanji" se start hoga.&lt;/p&gt;
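
&lt;p&gt;Condensed into code, a system prompt following those rules might look something like this (an illustrative paraphrase, not the repo's exact prompt):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative condensation of the persona rules described above.
const SYSTEM_PROMPT = `
You are a persona bot modeled on Hitesh Sir.
Tone: Hindi-English mix, friendly, energetic, coding-focused.
For coding/maths/computation questions, answer in 5 steps:
understand, explore, compute, crosscheck, wrap_up,
returned as a strict JSON array.
For simple/casual chat, skip the steps and reply normally.
Every reply must mention chai and must start with "Hanji".
`;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;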

&lt;p&gt;With more such rules, we can build an even more true-to-life persona of someone.&lt;/p&gt;

&lt;p&gt;Here's the github link : &lt;a href="https://github.com/Ashdeep-Singh-97/PersonaBot" rel="noopener noreferrer"&gt;https://github.com/Ashdeep-Singh-97/PersonaBot&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep following for more.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Intro to Gen AI</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Tue, 12 Aug 2025 12:32:58 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-gen-ai-1k9</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-gen-ai-1k9</guid>
      <description>&lt;p&gt;What is Generative AI (GenAI)?&lt;/p&gt;

&lt;p&gt;Generative AI is a type of artificial intelligence that can create new content — like text, images, music, or even code — instead of just analyzing existing data.&lt;br&gt;
Unlike traditional AI that mostly classifies or predicts, GenAI learns patterns from massive datasets and then produces something new based on that knowledge.&lt;/p&gt;

&lt;p&gt;GenAI models are usually trained on huge amounts of data, and to understand that data, they break it into smaller, meaningful pieces — and that’s where tokenization comes in.&lt;/p&gt;

&lt;p&gt;But to understand it even more simply, we can think of GenAI as an algorithmic program that is capable of creating something based on the dataset it has been trained on.&lt;/p&gt;

&lt;p&gt;What is Tokenization?&lt;/p&gt;

&lt;p&gt;Tokenization is the process of breaking down text into smaller units, called tokens, so that machines can understand and process it.&lt;br&gt;
A token can be:&lt;/p&gt;

&lt;p&gt;A word (Hello)&lt;br&gt;
A subword (ing in running)&lt;br&gt;
A single character (a, b, c)&lt;br&gt;
Or even punctuation marks and spaces&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
"Hello world!"&lt;br&gt;&lt;br&gt;
→ ["Hello", "world", "!"]&lt;/p&gt;

&lt;p&gt;Computers don’t directly understand text. They need numbers.&lt;/p&gt;

&lt;p&gt;Why it matters in GenAI:&lt;br&gt;
Tokenization ensures the AI model processes input consistently, no matter how big or small the text. This is the first step before the model learns relationships between words.&lt;br&gt;
Note: GenAI generates tokens one by one; at each iteration the tokens generated so far are fed back in, and everything is combined at the end to produce the final answer.&lt;/p&gt;

&lt;p&gt;What are Vector Embeddings?&lt;/p&gt;

&lt;p&gt;Once text is tokenized, each token is converted into a vector — a list of numbers that represent its meaning in a mathematical space.&lt;br&gt;
This is called an embedding.&lt;/p&gt;

&lt;p&gt;Words with similar meanings have vectors that are closer together in this space.&lt;/p&gt;

&lt;p&gt;This lets AI “understand” context, similarity, and relationships between words.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
If you plot word vectors in 3D space:&lt;/p&gt;

&lt;p&gt;"king" and "queen" will be close to each other&lt;/p&gt;

&lt;p&gt;"cat" and "dog" will be closer compared to "cat" and "car"&lt;/p&gt;

&lt;p&gt;Why embeddings matter:&lt;br&gt;
They allow GenAI to:&lt;/p&gt;

&lt;p&gt;Find relevant documents for a search query&lt;/p&gt;

&lt;p&gt;Understand synonyms and related concepts&lt;/p&gt;

&lt;p&gt;Make conversational answers more context-aware&lt;/p&gt;

&lt;p&gt;Let's understand it using an example.&lt;br&gt;
Ever thought of going for a walk, looked up at the sky, and thought it might rain? Assume you're desperate for a walk even if it rains, so you carry an umbrella and head out. Now you have your umbrella and are walking down the street, ready to open it the moment it rains.&lt;/p&gt;

&lt;p&gt;So here you are actually predicting every next moment of what's going to happen (this is essence of generative AI , predicting the next token).&lt;/p&gt;

&lt;p&gt;But here's a detail you might have missed: why did you carry the umbrella in the first place? Because your brain had seen a few days with the same weather and knew it can rain in such conditions. It's the same data fed into your brain that makes you "PREDICT" the outcome and bring the umbrella. This process of prediction continues for the entire duration of your journey.&lt;/p&gt;

&lt;p&gt;Now think about how your brain recalled those past days.&lt;/p&gt;

&lt;p&gt;You didn’t literally remember every single weather detail from years ago.&lt;/p&gt;

&lt;p&gt;Instead, your brain keeps a compressed mental representation of each memory — not the exact video, but the essence of the scene (e.g., “cloudy sky + humid air + cool wind” = likely rain).&lt;/p&gt;

&lt;p&gt;This compressed representation is like a vector embedding:&lt;/p&gt;

&lt;p&gt;Each weather day you’ve experienced is converted into a list of numbers that capture key features (color of sky, humidity level, wind speed).&lt;/p&gt;

&lt;p&gt;When you see today’s weather, your brain turns it into another list of numbers (an embedding).&lt;/p&gt;

&lt;p&gt;You compare it with your “memory embeddings” to find the most similar ones.&lt;/p&gt;

&lt;p&gt;If similar ones often had rain, you predict rain today.&lt;br&gt;
But now you might ask: where do vector embeddings come into it? Look at it this way: you sense it'll rain heavily -&amp;gt; you don't go; it'll rain lightly -&amp;gt; you go with an umbrella. Your brain already has a mapping of what will happen + what you'll do. If this is to be represented mathematically, we can do it using vector embeddings.&lt;/p&gt;

&lt;p&gt;The working of GenAI can be considered analogous to this example.&lt;/p&gt;

&lt;p&gt;TLDR; Putting It All Together&lt;/p&gt;

&lt;p&gt;Tokenization breaks text into smaller parts and converts them into numbers.&lt;br&gt;
Embeddings turn these tokens into vectors that capture meaning and relationships.&lt;br&gt;
Generative AI uses these vectors in deep learning models to create new content.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Intro to EVM IV</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 24 Nov 2024 11:13:57 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-evm-iv-31eo</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-evm-iv-31eo</guid>
      <description>&lt;p&gt;In this week we ended our tour de EVM and here's what all I learnt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory management&lt;/strong&gt;&lt;br&gt;
Despite the name, EVM memory is the equivalent of the heap in other languages, except there is no garbage collector.&lt;br&gt;
We'll explore it using an example: storing a string.&lt;/p&gt;

&lt;p&gt;First, we have scratch space. Scratch space in the Ethereum Virtual Machine (EVM) is a temporary memory area, 64 bytes in size (the two 32-byte words at 0x00 and 0x20), used for intermediate calculations such as hashing. Its contents are not guaranteed to survive between operations, and using it is cheaper than expanding memory or writing to storage.&lt;/p&gt;

&lt;p&gt;Then we have the home of the FREE MEMORY POINTER, that is, the address where the next piece of data will be stored.&lt;/p&gt;

&lt;p&gt;and then finally we have space where we store our data.&lt;/p&gt;

&lt;p&gt;And to make things even clearer, we have a reserved zero slot, used as the initial value for empty dynamic memory arrays.&lt;/p&gt;

&lt;p&gt;here's how to understand it:&lt;br&gt;
0x00 - scratch space&lt;br&gt;
0x20 - scratch space&lt;br&gt;
0x40 - stores next free address location&lt;br&gt;
0x60 - zero slot, reserved for empty dynamic arrays&lt;br&gt;
0x80 - where we actually store data&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MStore and MLoad&lt;/strong&gt;&lt;br&gt;
Much like working with raw pointers in C, we can manually store some information at a particular address.&lt;br&gt;
Here's how we do it:&lt;br&gt;
mstore(0x00, valueToStore)&lt;/p&gt;

&lt;p&gt;And as you might have guessed, we use mload to retrieve our data:&lt;br&gt;
value := mload(0xa0)&lt;/p&gt;

&lt;p&gt;So you might be thinking: can't we just use any free space to optimise? Well, you can't, the reason being that some regions are reserved for other purposes. Have a look:&lt;br&gt;
Hashing operations : 0x00 - 0x3f&lt;br&gt;
Memory management : 0x40 - 0x5f&lt;br&gt;
Zero slot : 0x60 - 0x7f&lt;/p&gt;

&lt;p&gt;Also take a look at what happens behind the scenes to get a better understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Initialization:&lt;/strong&gt;&lt;br&gt;
When a contract is executed, the EVM initializes the free memory pointer to 0x80 (128 in decimal). The first 128 bytes of memory (0x00 to 0x7F) are reserved for the scratch space, the free memory pointer itself, and the zero slot, so the free memory pointer initially points to 0x80.&lt;br&gt;
&lt;strong&gt;Loading the Free Memory Pointer:&lt;/strong&gt;&lt;br&gt;
In the assembly block, the command mload(0x40) loads the value stored at memory location 0x40. This location holds the address of the first free byte of memory. Since the EVM initializes this pointer to 0x80, mload(0x40) returns 0x80.&lt;br&gt;
&lt;strong&gt;Memory Layout:&lt;/strong&gt;&lt;br&gt;
EVM memory is byte-addressed, meaning each address corresponds to one byte. However, memory operations like mload and mstore work with 32-byte (256-bit) words. So, when you read or write to memory, you deal with 32-byte chunks.&lt;br&gt;
&lt;strong&gt;Interpreting the Result:&lt;/strong&gt;&lt;br&gt;
The value 0x0000000000000000000000000000000000000000000000000000000000000080 you received is a 32-byte word where the last byte (0x80) indicates the starting point of free memory, and the preceding bytes are padded with zeros.&lt;/p&gt;
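&lt;p&gt;The behaviour described above can be imitated with a toy byte-array simulation (not the real EVM, just a sketch to make the 32-byte word layout concrete):&lt;/p&gt;

```python
# A toy simulation of EVM memory: a plain byte array, not the real EVM.
memory = bytearray(256)

def mstore(offset, value):
    # Store a 32-byte (256-bit) word big-endian at the given offset.
    memory[offset:offset + 32] = value.to_bytes(32, "big")

def mload(offset):
    # Load the 32-byte word starting at the given offset.
    return int.from_bytes(memory[offset:offset + 32], "big")

# Solidity initialises the free memory pointer, kept at 0x40, to 0x80.
mstore(0x40, 0x80)
print(hex(mload(0x40)))  # 0x80
```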

&lt;p&gt;&lt;strong&gt;Instruction Set&lt;/strong&gt;&lt;br&gt;
We also have a few instruction sets that enable us to understand (&amp;amp;, if needed, dive deeper into the nitty-gritty of) the EVM and its bytecode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack instructions&lt;/strong&gt;&lt;br&gt;
Stack instructions involve manipulating the position of values on the stack. &lt;br&gt;
pushN value: pushes a value to the top of the stack where N is the byte size of the value. &lt;/p&gt;

&lt;p&gt;pop: pops a value from the top of the stack. &lt;/p&gt;

&lt;p&gt;swapN: swaps the value from the top of the stack with a value at stack index N. &lt;/p&gt;

&lt;p&gt;dupN: duplicates a value from the stack at index N and pushes it to the stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Arithmetic Instructions&lt;/strong&gt;&lt;br&gt;
Arithmetic instructions pop two or more values from the stack, perform an arithmetic operation, and push the result. &lt;br&gt;
add: pushes the result of addition of two values. &lt;/p&gt;

&lt;p&gt;sub: pushes the result of subtraction of two values. &lt;/p&gt;

&lt;p&gt;mul / smul: pushes the result of multiplication of two values. &lt;/p&gt;

&lt;p&gt;div / sdiv: pushes the result of the division of two values.&lt;/p&gt;

&lt;p&gt;mod: pushes the result of the modulus of two values. &lt;/p&gt;

&lt;p&gt;exp: pushes the result of exponentiation of two values. addmod / mulmod combines add with mod and mul with mod.&lt;/p&gt;

&lt;p&gt;Note - smul and sdiv treat the values as “signed” integers&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparison Instructions&lt;/strong&gt;&lt;br&gt;
Comparison instructions pop one or two values from the stack, perform a comparison and, based on the result, push either 1 (true) or 0 (false).&lt;br&gt;
lt / slt: pushes true if the top stack value is less than the second.&lt;/p&gt;

&lt;p&gt;gt / sgt: pushes true if the top stack value is greater than the second. &lt;/p&gt;

&lt;p&gt;eq: pushes true if the top two stack values are equal. &lt;/p&gt;

&lt;p&gt;iszero: pushes true if the top stack value is zero.&lt;/p&gt;
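&lt;p&gt;Here is a tiny stack-machine sketch (illustrative only, not real EVM bytecode) showing push, add, lt and iszero in action:&lt;/p&gt;

```python
# A tiny stack machine, sketched in Python purely for illustration.
stack = []

def push(v):
    stack.append(v)

def add():
    push(stack.pop() + stack.pop())

def lt():
    top, second = stack.pop(), stack.pop()
    push(1 if second > top else 0)  # 1 when top is below second

def iszero():
    push(1 if stack.pop() == 0 else 0)

push(3)
push(5)
add()       # stack is now [8]
push(10)    # stack is now [8, 10]
lt()        # top (10) is not below second (8), so pushes 0
iszero()    # 0 is zero, so pushes 1
print(stack)  # [1]
```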

&lt;p&gt;&lt;strong&gt;Bitwise Instructions&lt;/strong&gt;&lt;br&gt;
Bitwise instructions pop one or more values from the stack and performs bitwise operations on them.&lt;br&gt;
and: performs bitwise AND on the top two stack values.&lt;/p&gt;

&lt;p&gt;or: performs bitwise OR on the top two stack values. &lt;/p&gt;

&lt;p&gt;xor: performs bitwise Exclusive OR on the top two stack values. &lt;/p&gt;

&lt;p&gt;not: performs bitwise NOT on the top stack value.&lt;/p&gt;

&lt;p&gt;shr / shl perform a bit-shift right and left, respectively. The top stack element gives the number of positions to shift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Instructions&lt;/strong&gt;&lt;br&gt;
Memory instructions read and write to a chunk of memory. Memory expands linearly and can be read / written to arbitrarily. &lt;br&gt;
mstore: stores a 32 byte (256 bit) word in memory.&lt;/p&gt;

&lt;p&gt;mstore8: stores a one byte (8 bit) word in memory.&lt;/p&gt;

&lt;p&gt;mload: loads a 32 byte word from memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context Instructions (Read)&lt;/strong&gt;&lt;br&gt;
The following is a non-comprehensive, short list of instructions that can read from the global state and execution context. &lt;br&gt;
caller: pushes the address that called the current context.&lt;/p&gt;

&lt;p&gt;timestamp: pushes the current block’s timestamp.&lt;/p&gt;

&lt;p&gt;staticcall: can make a read-only call to another contract. &lt;/p&gt;

&lt;p&gt;calldataload: can load a chunk of the calldata in the current context.&lt;/p&gt;

&lt;p&gt;sload: can read a piece of data from persistent storage on the current contract.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context Instructions (Write)&lt;/strong&gt; &lt;br&gt;
The following is a non-comprehensive, short list of instructions that can write to the global state and the execution context.&lt;br&gt;
sstore: can store data to persistent storage.&lt;/p&gt;

&lt;p&gt;logN: can append data to the current transaction logs where N is the number of special, indexed values in the log.&lt;/p&gt;

&lt;p&gt;call: can make a call to external code, which can also update the global state.&lt;/p&gt;

&lt;p&gt;create / create2: can deploy code to a new address, creating a new contract.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My 2 cents :&lt;/strong&gt;&lt;br&gt;
mstore always writes a full 32 bytes; when a single byte is enough, use mstore8 to take less space.&lt;br&gt;
The things we have talked about today are EVM-assembly specific, not Solidity-"CODE" specific.&lt;/p&gt;

&lt;p&gt;So that's all for this week folks , Stay tuned for more.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Making Git Clone</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 03 Nov 2024 12:12:47 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/making-git-clone-22dh</link>
      <guid>https://dev.to/arsh_the_coder/making-git-clone-22dh</guid>
      <description>&lt;p&gt;This week was spent in learning by building and I made a prototype of Git using Javascript.&lt;br&gt;
Here is the summary of it.&lt;br&gt;
&lt;a href="https://github.com/Ashdeep-Singh-97/gitClone/tree/main/gitClone" rel="noopener noreferrer"&gt;Github Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overview&lt;/p&gt;

&lt;p&gt;The script provides basic Git-like operations, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initializing a Git-like repository with a .git directory.&lt;/li&gt;
&lt;li&gt;Handling file hashing and object storage in a way similar to Git’s object storage.&lt;/li&gt;
&lt;li&gt;Building and committing trees to model directory structures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Commands Implemented&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;init: Sets up a .git directory structure with objects and refs folders and creates a HEAD file pointing to the main branch.&lt;/li&gt;
&lt;li&gt;cat-file: Reads and decompresses a Git object given its SHA-1 hash, displaying its contents.&lt;/li&gt;
&lt;li&gt;hash-object: Hashes a file, creates a compressed version, and stores it in the .git/objects folder.&lt;/li&gt;
&lt;li&gt;ls-tree: Lists the contents of a tree object, similar to the git ls-tree command.&lt;/li&gt;
&lt;li&gt;write-tree: Writes the current directory structure into a tree object and stores it.&lt;/li&gt;
&lt;li&gt;commit-tree: Creates a new commit object, linking it to a tree object and a parent commit, and stores it in the Git object storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Functionality Breakdown&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Git Directory Initialization (init):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates .git, .git/objects, and .git/refs directories.&lt;/li&gt;
&lt;li&gt;Sets HEAD to reference the main branch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. File Hashing and Object Storage (hash-object):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads a file, hashes its contents using SHA-1, and compresses it using zlib.&lt;/li&gt;
&lt;li&gt;Stores the compressed object in .git/objects using the first two characters of the SHA-1 hash to create a subdirectory.&lt;/li&gt;
&lt;/ul&gt;
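&lt;p&gt;The hashing step above can be sketched in Python (the repo itself is JavaScript; this hash_object is an illustrative stand-in, not the script's actual code):&lt;/p&gt;

```python
import hashlib
import zlib

def hash_object(content, obj_type="blob"):
    # Git hashes "{type} {size}\0{content}" with SHA-1, then stores the
    # zlib-compressed bytes under .git/objects/{first 2 hex chars}/{other 38}.
    store = f"{obj_type} {len(content)}".encode() + b"\x00" + content
    sha = hashlib.sha1(store).hexdigest()
    return sha, zlib.compress(store)

sha, compressed = hash_object(b"hello\n")
print(sha)      # ce013625030ba8dba906f756967f9e9ca394464a
print(sha[:2])  # subdirectory name inside .git/objects
```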

&lt;p&gt;&lt;strong&gt;3. Viewing Object Content (cat-file):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Decompresses a stored object from .git/objects and prints its content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Tree Management (ls-tree and write-tree):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ls-tree: Lists the files and directories in a tree object.&lt;/li&gt;
&lt;li&gt;write-tree: Recursively traverses the current directory, hashing files and building a tree object. Stores the tree object in .git/objects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Commit Creation (commit-tree):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Constructs a commit object containing metadata (author, committer, timestamp) and a reference to the tree object and parent commit.&lt;/li&gt;
&lt;li&gt;Compresses and stores the commit object in .git/objects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all for this week folks.&lt;br&gt;
Stay tuned for more.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Intro to EVM II</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 20 Oct 2024 12:33:04 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-evm-ii-48fg</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-evm-ii-48fg</guid>
      <description>&lt;p&gt;Hello World !&lt;/p&gt;

&lt;p&gt;Last week we learnt about the EVM and its slots. This week the learning continues: we dive deeper and look at other aspects of storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic Type In Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Solidity, dynamically-sized state variables (such as dynamic arrays and mappings) are stored differently from statically-sized state variables. This distinction is made because dynamic state variables can grow and shrink, and their data can't fit neatly into a single storage slot.&lt;/p&gt;

&lt;p&gt;Marker Slots: The storage slot assigned to a dynamically-sized variable serves as a "marker" or "pointer" rather than directly holding the data. This slot typically holds metadata, such as the length of a dynamic array.&lt;br&gt;
Separate Storage for Data: The actual data of the dynamically-sized variable is stored in separate storage locations. The starting position of this data is computed using a hashing function.&lt;/p&gt;

&lt;p&gt;Formula for the storage address where the i-th element of the array c is stored:&lt;br&gt;
c is the array.&lt;br&gt;
slot_c is the storage slot of the array c.&lt;br&gt;
keccak(x) is the Keccak hash function applied to x.&lt;br&gt;
i is the index of the element in the array.&lt;br&gt;
The address of the i-th element of the array c can be calculated as:&lt;br&gt;
Address(c[i]) = keccak(slot_c) + i&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mapping In Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mappings in Solidity are stored in a unique way to ensure efficient access and avoid collisions. The marker slot for a mapping marks its existence but does not directly store any of the mapping's key-value pairs. Instead, the key-value pairs are stored at specific computed storage locations based on the keys.&lt;br&gt;
Marker Slot: The storage slot assigned to the mapping itself (the marker slot) serves as a reference point but doesn't store any actual key-value pairs. This slot only indicates that a mapping exists.&lt;/p&gt;

&lt;p&gt;Formula for the storage address where a value of mapping c is stored:&lt;br&gt;
k is the key of the mapping.&lt;br&gt;
p is the position (or storage slot) of the mapping declaration in the contract.&lt;br&gt;
. denotes concatenation.&lt;br&gt;
The address of the value for key k of the mapping c can be calculated as:&lt;br&gt;
Address(c[k]) = keccak256(k . p)&lt;/p&gt;
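&lt;p&gt;A sketch of both formulas in Python. Note: Python's standard library has no keccak-256, so sha3_256 stands in below purely for illustration; the slot arithmetic and the concatenation are what the sketch shows.&lt;/p&gt;

```python
import hashlib

def array_elem_slot(slot_c, i, hash_fn=hashlib.sha3_256):
    # Address(c[i]) = keccak(slot_c) + i
    digest = hash_fn(slot_c.to_bytes(32, "big")).digest()
    return int.from_bytes(digest, "big") + i

def mapping_value_slot(k, p, hash_fn=hashlib.sha3_256):
    # Address(c[k]) = keccak256(k . p), "." meaning concatenation of
    # the 32-byte key and the 32-byte slot number.
    digest = hash_fn(k.to_bytes(32, "big") + p.to_bytes(32, "big")).digest()
    return int.from_bytes(digest, "big")

# Consecutive array elements land in consecutive slots.
print(array_elem_slot(3, 1) - array_elem_slot(3, 0))  # 1
```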

&lt;p&gt;&lt;strong&gt;State variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;State variables of contracts are stored in storage in a compact way such that multiple values sometimes use the same storage slot (except dynamically-sized arrays and mappings).&lt;/p&gt;

&lt;p&gt;Multiple, contiguous items that need less than 32 bytes are packed into a single storage slot if possible, according to the following rules:&lt;/p&gt;

&lt;p&gt;The first item in a storage slot is stored lower-order aligned:&lt;/p&gt;

&lt;p&gt;This means that when multiple items are packed into a single storage slot, the first item is aligned to the lowest byte address of the slot. In other words, the first item is stored starting from the first byte of the slot, without any padding or gaps.&lt;/p&gt;

&lt;p&gt;For example, if a slot is 32 bytes long and the first item is a uint8 (1 byte), it will be stored in byte 0 of the slot. If the next item is a uint16 (2 bytes), it will be stored in bytes 1-2, and so on.&lt;/p&gt;

&lt;p&gt;Value types use only as many bytes as are necessary to store them:&lt;/p&gt;

&lt;p&gt;This means that each item is stored in the minimum number of bytes required to represent its value, without any extra padding or waste.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A uint8 (1 byte) will only use 1 byte of storage.&lt;/li&gt;
&lt;li&gt;A uint16 (2 bytes) will only use 2 bytes of storage.&lt;/li&gt;
&lt;li&gt;A uint256 (32 bytes) will use the full 32 bytes of storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a value type does not fit the remaining part of a storage slot, it is stored in the next storage slot.&lt;/p&gt;

&lt;p&gt;Structs and array data always start a new slot and their items are packed tightly according to these rules.&lt;/p&gt;

&lt;p&gt;Items following struct or array data always start a new storage slot.&lt;/p&gt;
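&lt;p&gt;These packing rules for value types can be sketched as a greedy assignment (struct and array special cases are ignored here for brevity):&lt;/p&gt;

```python
def pack(sizes):
    # Greedy slot assignment following the value-type packing rules:
    # contiguous items share a 32-byte slot while they fit; an item
    # that does not fit the remainder starts the next slot.
    layout, slot, offset = [], 0, 0
    for size in sizes:
        if offset + size > 32:
            slot, offset = slot + 1, 0
        layout.append((slot, offset))
        offset += size
    return layout

# uint8, uint16, uint256:
print(pack([1, 2, 32]))  # [(0, 0), (0, 1), (1, 0)]
```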

&lt;p&gt;For example, consider the following contracts:&lt;br&gt;
Contract A (base-most contract):&lt;br&gt;
uint public x;&lt;/p&gt;

&lt;p&gt;Contract B (inherits from A):&lt;br&gt;
uint public y;&lt;/p&gt;

&lt;p&gt;Contract C (inherits from B):&lt;br&gt;
uint public z;&lt;/p&gt;

&lt;p&gt;The ordering of state variables in Contract C would be:&lt;/p&gt;

&lt;p&gt;x (from Contract A)&lt;br&gt;
y (from Contract B)&lt;br&gt;
z (from Contract C)&lt;/p&gt;

&lt;p&gt;If x, y, and z are all uint256 (32 bytes), each one fills an entire 32-byte slot, so they cannot share: x occupies slot 0, y occupies slot 1, and z occupies slot 2, following the inheritance order.&lt;/p&gt;

&lt;p&gt;That's all for this week folks.&lt;br&gt;
Stay tuned for more.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Intro to EVM</title>
      <dc:creator>ASHDEEP SINGH</dc:creator>
      <pubDate>Sun, 13 Oct 2024 18:08:57 +0000</pubDate>
      <link>https://dev.to/arsh_the_coder/intro-to-evm-4iad</link>
      <guid>https://dev.to/arsh_the_coder/intro-to-evm-4iad</guid>
      <description>&lt;p&gt;This week we decided to deep dive into EVM (Ethereum virtual machine) . But before that let us know why to deep dive into EVM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Understanding Smart Contracts:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The EVM is where smart contracts (self-executing programs on the blockchain) are deployed and executed. If you want to write, deploy, or interact with smart contracts, understanding how the EVM processes and executes these contracts is crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Building Decentralized Applications (dApps):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ethereum is one of the most popular platforms for building decentralized applications. To develop a dApp, you need to know how your code interacts with the EVM, as it determines how your contracts will execute, use gas, and handle data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Gas Fees and Efficiency:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The EVM charges gas fees for every operation (computation or storage). Learning about the EVM helps you optimize your smart contract code to be more efficient, minimizing gas usage and making your application cheaper to run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Security and Vulnerabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many smart contract hacks and exploits happen due to developers not fully understanding how the EVM works. Learning the EVM helps you write secure code by avoiding common pitfalls like reentrancy attacks, integer overflows, or issues with how state is stored and managed on-chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Cross-Chain Compatibility:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The EVM has become a de facto standard for many other blockchain networks like Binance Smart Chain, Polygon, Avalanche, and Fantom. Understanding the EVM can open doors to working with various EVM-compatible blockchains beyond Ethereum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Career Opportunities:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Blockchain development is a growing field, and Ethereum is the second-largest blockchain by market capitalization. Mastering EVM-based development increases your marketability as a blockchain developer and opens up numerous career opportunities in DeFi, NFTs, gaming, and other blockchain-based industries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Token Standards (e.g., ERC-20, ERC-721):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many Ethereum token standards, such as ERC-20 (for fungible tokens) and ERC-721 (for NFTs), rely on the EVM. Understanding how these tokens work under the hood requires knowledge of the EVM’s mechanics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Blockchain Scaling Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Understanding how the EVM works helps you comprehend the challenges Ethereum faces (e.g., scalability, gas costs) and how Layer 2 solutions like rollups (Optimistic or ZK) aim to solve them by extending the EVM's capabilities off-chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Upgrades and Future Development:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ethereum is constantly evolving, with upgrades like Ethereum 2.0 (the transition to Proof of Stake) and other scaling solutions. Understanding the EVM makes it easier to follow and adapt to these changes, ensuring your knowledge stays relevant as the platform develops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Interoperability with Web3 Ecosystem:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ethereum’s EVM is integral to the broader Web3 ecosystem. Understanding how the EVM interacts with decentralized finance (DeFi) protocols, NFTs, oracles, and other blockchain services helps you navigate the decentralized internet and create applications that can integrate across different blockchain services.&lt;/p&gt;

&lt;p&gt;Now let's proceed further.&lt;/p&gt;

&lt;p&gt;In the Bitcoin blockchain, a block size = 1 MB, while in Ethereum, block size = 30 million gas. If every Ethereum transaction costs 21,000 gas (the cost of a simple transfer), what is the maximum number of transactions in an Ethereum block? Total number of transactions in an Ethereum block = 30,000,000 / tx gas = 30,000,000 / 21,000 = 1,428. That means ideally it's around 1,428 when we have 30 million gas.&lt;br&gt;
Now you might wonder why only 30 million gas. The reason is to prevent excessively heavy transactions from running. So does the limit have some drawbacks too?&lt;br&gt;
Well, yes.&lt;br&gt;
1) Any transaction needing more than 30 million gas will fail.&lt;/p&gt;

&lt;p&gt;2) The average block time for Ethereum is 12.06 seconds (at the time of writing), so a maximum of about 1,428 transactions can ideally be processed per block.&lt;/p&gt;
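&lt;p&gt;The throughput arithmetic above in a few lines:&lt;/p&gt;

```python
block_gas_limit = 30_000_000   # Ethereum block gas limit
simple_transfer_gas = 21_000   # gas cost of a plain ETH transfer

# Maximum simple transfers that fit in one block.
max_txs = block_gas_limit // simple_transfer_gas
print(max_txs)  # 1428
```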

&lt;p&gt;In the Ethereum Virtual Machine (EVM), the fundamental data unit is a 32-byte word (256 bits), used to store all kinds of data, including hexadecimal numbers. Storage is organized into fixed-size slots, each slot being a 32-byte (256-bit) chunk. Understanding the slot arrangement is crucial for managing how smart contracts store and access data on the blockchain.&lt;/p&gt;

&lt;p&gt;Contract bytecode is divided into 3 parts:&lt;br&gt;
1) Contract Deployment&lt;br&gt;
2) Runtime Bytecode&lt;br&gt;
3) Metadata&lt;/p&gt;

&lt;p&gt;Now let us see slots in more details.&lt;/p&gt;

&lt;p&gt;Look at this code below :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// SPDX-License-Identifier: MIT
pragma solidity ^0.8.26;


contract Demo {
    uint8 public a = 0x1e; //slot 0
    uint256 public b = 0xffe123; //slot 1
    bool public c = true; //slot 2
    string public d = "hello"; //slot 3


    // Function to read the contents of a storage slot
    function getSlotValue(uint slot) public view returns (bytes32) {
        bytes32 value;
        assembly {
            value := sload(slot)
        }
        return value;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using this code we can see the slots and what they are storing, and thus find which slot each variable lives in. Note how it works: contiguous variables share a slot as long as they fit, and a variable is stored either entirely in a slot or not at all. If the remaining space is too small for the next variable, that variable starts a new slot and the leftover space is wasted.&lt;/p&gt;

&lt;p&gt;So that's all for today folks.&lt;br&gt;
Stay tuned for more.&lt;br&gt;
Next time we'll dive even deeper into EVM and explore some more aspects of it.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
