<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dhruv</title>
    <description>The latest articles on DEV Community by Dhruv (@dhruvik_15).</description>
    <link>https://dev.to/dhruvik_15</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3638592%2F74258fb1-0113-40c6-8fea-e15cbb693ad5.jpg</url>
      <title>DEV Community: Dhruv</title>
      <link>https://dev.to/dhruvik_15</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dhruvik_15"/>
    <language>en</language>
    <item>
      <title>My journey optimizing TensorFlow.js to run neural networks directly in the client's browser.</title>
      <dc:creator>Dhruv</dc:creator>
      <pubDate>Mon, 01 Dec 2025 08:56:25 +0000</pubDate>
      <link>https://dev.to/dhruvik_15/my-journey-optimizing-tensorflowjs-to-run-neural-networks-directly-in-the-clients-browser-583</link>
      <guid>https://dev.to/dhruvik_15/my-journey-optimizing-tensorflowjs-to-run-neural-networks-directly-in-the-clients-browser-583</guid>
      <description>&lt;p&gt;As a computer science student, most of my AI coursework was strictly Python-based. We used PyTorch or TensorFlow in Jupyter notebooks, running on Google Colab GPUs. It was powerful, but it felt disconnected from the web apps I loved building.&lt;/p&gt;

&lt;p&gt;For my capstone project, I wanted to bridge that gap. I asked myself: Can I run a neural network directly in the browser, without a backend server?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Discovery: TensorFlow.js&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I stumbled upon TensorFlow.js, a library that lets you define, train, and run models in JavaScript. The coolest part? It uses WebGL to tap into the user's GPU. This means the heavy math of machine learning happens on the client's graphics card, not their CPU.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;&lt;br&gt;
My first attempt was a simple object detection app using a pre-trained model (COCO-SSD).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Loading the Model&lt;/strong&gt;: The model files were huge (several megabytes), and loading them over a slow network took forever.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Lag&lt;/strong&gt;: Initial inference took 2-3 seconds. The UI froze.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How I Fixed It&lt;/strong&gt;&lt;br&gt;
To optimize performance, I learned about:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Quantization&lt;/strong&gt;: Using smaller, slightly less accurate models whose weights are stored as 8-bit integers (Int8) instead of 32-bit floats (Float32), drastically reducing the download size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warmups&lt;/strong&gt;: Running a "dummy" prediction through the network when the page loads to compile the WebGL shaders before the user actually interacts with it.&lt;/li&gt;
&lt;/ol&gt;
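&lt;p&gt;To make the quantization idea concrete, here is a sketch of the trade-off in plain JavaScript. The array values and scale are made up for illustration; with TensorFlow.js you download pre-quantized model variants rather than quantizing by hand:&lt;/p&gt;

```javascript
// Illustrative sketch of 8-bit quantization (values are made up; real
// TF.js models ship pre-quantized, you don't do this by hand).
const weights = new Float32Array([0.12, -0.5, 0.98, 0.0]); // 4 bytes each

// Map the float range onto 0..255 with a scale and a minimum (zero point).
const min = Math.min(...weights);
const max = Math.max(...weights);
const scale = (max - min) / 255;

const quantized = Uint8Array.from(weights, (w) => Math.round((w - min) / scale));

// To use a weight, dequantize it back (with a small rounding error):
const restored = Array.from(quantized, (q) => q * scale + min);

console.log(weights.byteLength);   // 16 bytes as Float32
console.log(quantized.byteLength); // 4 bytes as Uint8 (4x smaller)
```

&lt;p&gt;Storing each weight in one byte instead of four is where the roughly 4x download saving comes from; the small rounding error in the restored values is the accuracy cost.&lt;/p&gt;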

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;&lt;br&gt;
I built a webcam filter that detects faces in real time at 30 FPS. It was messy, but it worked. It taught me that the future of AI isn't just massive server farms; it's also edge devices and browsers.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webgl</category>
      <category>ai</category>
      <category>tensorflow</category>
    </item>
    <item>
      <title>Understanding the logic behind 'Chat with PDF' apps by building a Retrieval-Augmented Generation agent manually.</title>
      <dc:creator>Dhruv</dc:creator>
      <pubDate>Mon, 01 Dec 2025 08:53:13 +0000</pubDate>
      <link>https://dev.to/dhruvik_15/understanding-the-logic-behind-chat-with-pdf-apps-by-building-a-retrieval-augmented-generation-1p0i</link>
      <guid>https://dev.to/dhruvik_15/understanding-the-logic-behind-chat-with-pdf-apps-by-building-a-retrieval-augmented-generation-1p0i</guid>
      <description>&lt;p&gt;Everyone is talking about "Chat with PDF" apps. I wanted to build one, not by using a wrapper library that hides everything, but by understanding the logic underneath. This is my journey building a RAG &lt;strong&gt;(Retrieval-Augmented Generation)&lt;/strong&gt; agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem with LLMs&lt;/strong&gt;&lt;br&gt;
Large Language Models like GPT-4 are amazing, but they have two flaws:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;They don't know my private data.&lt;/li&gt;
&lt;li&gt;They hallucinate (make things up) when they don't know the answer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Architecture&lt;/strong&gt;&lt;br&gt;
RAG solves this by giving the LLM a "textbook" to study before answering.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Chunking&lt;/strong&gt;: I split my document into small paragraphs (chunks).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings&lt;/strong&gt;: I used an embedding model (text-embedding-3-small) to turn those text chunks into lists of numbers (vectors). Similar concepts have mathematically similar vectors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Store&lt;/strong&gt;: I stored these vectors in a database (I used Pinecone, but a simple local JSON file works for testing).&lt;/li&gt;
&lt;/ol&gt;
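&lt;p&gt;The chunking step can be sketched in a few lines of JavaScript. This word-based splitter is deliberately naive (real pipelines usually split on tokens or sentences, often with overlap between chunks), and the sample text is made up:&lt;/p&gt;

```javascript
// Naive word-based chunker: pull wordsPerChunk words at a time off the front.
function chunkText(text, wordsPerChunk) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  while (words.length > 0) {
    chunks.push(words.splice(0, wordsPerChunk).join(" "));
  }
  return chunks;
}

const doc = "Refunds are processed within 14 days. Shipping is free over 50 dollars.";
console.log(chunkText(doc, 7));
// [ 'Refunds are processed within 14 days. Shipping',
//   'is free over 50 dollars.' ]
```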

&lt;p&gt;&lt;strong&gt;The Flow&lt;/strong&gt;&lt;br&gt;
When a user asks: "What is the refund policy?"&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I convert that question into a vector.&lt;/li&gt;
&lt;li&gt;I search my database for the chunk of text mathematically closest to that question vector.&lt;/li&gt;
&lt;li&gt;I find the text: "Refunds are processed within 14 days."&lt;/li&gt;
&lt;li&gt;I send a prompt to GPT-4: "Using this context: 'Refunds are processed within 14 days', answer the question: 'What is the refund policy?'"&lt;/li&gt;
&lt;/ol&gt;
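&lt;p&gt;The "mathematically closest" step above is usually cosine similarity. Here is a toy version of the lookup; the 3-number vectors are invented stand-ins for real embeddings, which have hundreds of dimensions, but the math is identical:&lt;/p&gt;

```javascript
// Cosine similarity: dot product of the vectors divided by their lengths.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  a.forEach((x, i) => {
    dot += x * b[i];
    normA += x * x;
    normB += b[i] * b[i];
  });
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A tiny stand-in for the vector store (vectors are made up).
const store = [
  { text: "Refunds are processed within 14 days.", vector: [0.9, 0.1, 0.0] },
  { text: "Shipping is free over 50 dollars.", vector: [0.1, 0.8, 0.3] },
];

// Pretend this is the embedded question "What is the refund policy?"
const questionVector = [0.85, 0.2, 0.05];

// Score every chunk and take the closest one.
const best = store
  .map((item) => ({ text: item.text, score: cosineSimilarity(questionVector, item.vector) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.text); // "Refunds are processed within 14 days."
```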

&lt;p&gt;It felt like magic when it finally worked. The hardest part wasn't the AI—it was cleaning the data before feeding it in!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>python</category>
    </item>
    <item>
      <title>A breakdown of why MERN is the best starting point for new developers.</title>
      <dc:creator>Dhruv</dc:creator>
      <pubDate>Mon, 01 Dec 2025 08:49:23 +0000</pubDate>
      <link>https://dev.to/dhruvik_15/a-breakdown-of-why-mern-is-the-best-starting-point-for-new-developers-1o7p</link>
      <guid>https://dev.to/dhruvik_15/a-breakdown-of-why-mern-is-the-best-starting-point-for-new-developers-1o7p</guid>
      <description>

&lt;p&gt;In university, we learned Java and C++. When I started looking at web development, the number of choices was overwhelming. Django? Laravel? Spring Boot? Ruby on Rails?&lt;/p&gt;

&lt;p&gt;Eventually, I settled on the MERN Stack (MongoDB, Express, React, Node.js). Here is why I think it’s the best choice for new developers in 2024.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. One Language to Rule Them All&lt;/strong&gt;&lt;br&gt;
The biggest benefit is the lack of context switching. In MERN, everything is JavaScript (or TypeScript).&lt;br&gt;
Database query? JSON/JS objects.&lt;br&gt;
Backend logic? JavaScript.&lt;br&gt;
Frontend UI? JavaScript.&lt;br&gt;
I didn't have to mentally switch between Python indentation rules and SQL syntax every 10 minutes. This helped me master the concepts of web dev (HTTP, REST, State) without fighting syntax errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. JSON is Native&lt;/strong&gt;&lt;br&gt;
MongoDB stores data in BSON, which is basically binary JSON. In React, we consume data as JSON. In Node APIs, we send data as JSON. The data flow is seamless. I didn't need complex ORMs (Object-Relational Mappers) just to read a user profile.&lt;/p&gt;
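&lt;p&gt;Here is a tiny sketch of what "seamless" means in practice. The profile fields are made up; the point is that one object shape survives every hop:&lt;/p&gt;

```javascript
// The same plain-object shape flows through every layer of MERN.
// (The profile fields here are invented for illustration.)
const profile = { name: "Dhruv", role: "student" };

// MongoDB: a query filter is just another JS object.
const filter = { name: "Dhruv" };

// Express/Node: res.json(profile) serializes it for the wire...
const wire = JSON.stringify(profile);

// ...and React gets the identical structure back from fetch().json().
const state = JSON.parse(wire);

console.log(state.role); // "student"
```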

&lt;p&gt;&lt;strong&gt;3. The React Ecosystem&lt;/strong&gt;&lt;br&gt;
React is massive. If I had a problem (e.g., "How to handle form validation"), there were 50 tutorials and 3 libraries for it. As a beginner, having a huge community is a safety net.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Is MERN perfect? No. SQL is often better for complex data relationships. But for building a portfolio and getting hired fast? It’s hard to beat.&lt;/p&gt;

</description>
      <category>mern</category>
      <category>react</category>
      <category>beginners</category>
      <category>node</category>
    </item>
    <item>
      <title>Authentication: JWTs vs. Sessions (My Study Notes)</title>
      <dc:creator>Dhruv</dc:creator>
      <pubDate>Mon, 01 Dec 2025 08:40:04 +0000</pubDate>
      <link>https://dev.to/dhruvik_15/authentication-jwts-vs-sessions-my-study-notes-5ajc</link>
      <guid>https://dev.to/dhruvik_15/authentication-jwts-vs-sessions-my-study-notes-5ajc</guid>
      <description>

&lt;p&gt;Authentication was the scariest part of my backend journey. I kept hearing terms like "Session Cookies," "JWTs," "OAuth," and "Salting." Here is my attempt to de-mystify JWTs (JSON Web Tokens) based on what I implemented in my recent project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Old Way: Sessions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the past, when you logged in, the server created a "session ID" and stored it in the server's memory. It gave you a cookie with that ID. Every time you clicked a page, the server looked up your ID in its memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;: If you have 1 million users, that's a lot of server memory. And if you have two servers, you need to sync that memory between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The New Way: JWT (Stateless)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JWTs are like a stamped movie ticket.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Header&lt;/strong&gt;: "I am a JWT."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payload&lt;/strong&gt;: "User ID is 123. Role is Admin."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signature&lt;/strong&gt;: A cryptographic stamp created with the server's secret key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I log in, the server creates this string and sends it to me. The server does not save anything.&lt;br&gt;
When I request data later, I show my JWT. The server checks the Signature. If the math works out, it knows the token is real and hasn't been tampered with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where to store it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I learned this the hard way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LocalStorage&lt;/strong&gt;: Easy to use, but vulnerable to XSS attacks (any JavaScript running on your page, including injected scripts, can read it).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HttpOnly Cookies&lt;/strong&gt;: Harder to set up, but safer because JavaScript cannot read them.&lt;/li&gt;
&lt;/ul&gt;
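&lt;p&gt;For the cookie route, the protection comes from flags on the Set-Cookie response header. A sketch of what the server sends (the token value is a placeholder):&lt;/p&gt;

```javascript
// Building a hardened Set-Cookie header value for a JWT.
// The flag meanings are real; the token value is a placeholder.
const token = "header.payload.signature";

const cookie = [
  "token=" + token,
  "HttpOnly",        // document.cookie (and thus XSS payloads) cannot read it
  "Secure",          // only sent over HTTPS
  "SameSite=Strict", // not sent on cross-site requests (CSRF protection)
  "Max-Age=3600",    // expires in one hour
].join("; ");

// In Express this would be sent with: res.setHeader("Set-Cookie", cookie)
console.log(cookie);
```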

&lt;p&gt;Authentication is complex, but understanding the difference between "Stateful" (Sessions) and "Stateless" (JWT) was the lightbulb moment for me.&lt;/p&gt;

</description>
      <category>security</category>
      <category>auth0challenge</category>
      <category>jwt</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
