<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Naitik Verma</title>
    <description>The latest articles on DEV Community by Naitik Verma (@naitik23verma).</description>
    <link>https://dev.to/naitik23verma</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3810433%2Fcd1ae92e-d667-4a70-af35-bc369f6ccef2.jpeg</url>
      <title>DEV Community: Naitik Verma</title>
      <link>https://dev.to/naitik23verma</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/naitik23verma"/>
    <language>en</language>
    <item>
      <title>Help me learn AI agents</title>
      <dc:creator>Naitik Verma</dc:creator>
      <pubDate>Sat, 07 Mar 2026 03:46:28 +0000</pubDate>
      <link>https://dev.to/naitik23verma/help-me-learn-ai-agents-2h6p</link>
      <guid>https://dev.to/naitik23verma/help-me-learn-ai-agents-2h6p</guid>
      <description>&lt;p&gt;This is not exactly a post, just a question: I am starting my journey learning AI agents. What would be the best roadmap or resources from which I can learn and do practical implementation for free?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Built a RAG AI Teaching Assistant for Video Lectures</title>
      <dc:creator>Naitik Verma</dc:creator>
      <pubDate>Sat, 07 Mar 2026 03:28:12 +0000</pubDate>
      <link>https://dev.to/naitik23verma/built-a-rag-ai-teaching-assistant-for-video-lectures-pgd</link>
      <guid>https://dev.to/naitik23verma/built-a-rag-ai-teaching-assistant-for-video-lectures-pgd</guid>
      <description>&lt;p&gt;When watching long lecture videos, finding a specific concept later is often difficult: you either rewatch the entire lecture or manually search through timestamps. I wanted a system where you could simply ask questions about a lecture video and get answers instantly.&lt;/p&gt;

&lt;p&gt;So I built a Retrieval-Augmented Generation (RAG) based AI Teaching Assistant for video lectures.&lt;/p&gt;

&lt;p&gt;The idea is simple: convert lecture videos into searchable knowledge.&lt;/p&gt;

&lt;p&gt;Pipeline:&lt;/p&gt;

&lt;p&gt;Video → Audio&lt;br&gt;
The lecture video (MP4) is first converted into audio (MP3).&lt;/p&gt;

&lt;p&gt;Audio → Transcript&lt;br&gt;
The audio is transcribed into text so the system can understand the lecture content.&lt;/p&gt;

&lt;p&gt;Chunking + Embeddings&lt;br&gt;
The transcript is split into smaller chunks and converted into embeddings.&lt;/p&gt;

&lt;p&gt;Vector Retrieval&lt;br&gt;
The embeddings are stored in a vector index. When a question is asked, the system retrieves the most relevant lecture segments.&lt;/p&gt;

&lt;p&gt;LLM Answer Generation&lt;br&gt;
The retrieved lecture context is passed to a local LLM running with Ollama, which generates the final answer grounded in the lecture content.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>showdev</category>
    </item>
    <item>
      <title>This could reduce traffic and pollution in densely populated countries</title>
      <dc:creator>Naitik Verma</dc:creator>
      <pubDate>Fri, 06 Mar 2026 19:05:15 +0000</pubDate>
      <link>https://dev.to/naitik23verma/this-has-the-capability-to-reduce-traffic-and-pollution-with-densely-populated-countries-3kkk</link>
      <guid>https://dev.to/naitik23verma/this-has-the-capability-to-reduce-traffic-and-pollution-with-densely-populated-countries-3kkk</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/naitik23verma" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3810433%2Fcd1ae92e-d667-4a70-af35-bc369f6ccef2.jpeg" alt="naitik23verma"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/naitik23verma/ai-based-green-light-optimization-using-computer-vision-21ib" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;AI-Based Green Light Optimization using Computer Vision&lt;/h2&gt;
      &lt;h3&gt;Naitik Verma ・ Mar 6&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#deeplearning&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#machinelearning&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#showdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>AI-Based Green Light Optimization using Computer Vision</title>
      <dc:creator>Naitik Verma</dc:creator>
      <pubDate>Fri, 06 Mar 2026 18:58:26 +0000</pubDate>
      <link>https://dev.to/naitik23verma/ai-based-green-light-optimization-using-computer-vision-21ib</link>
      <guid>https://dev.to/naitik23verma/ai-based-green-light-optimization-using-computer-vision-21ib</guid>
      <description>&lt;p&gt;Urban traffic systems still rely largely on fixed timer traffic lights. These timers do not adapt to real-time traffic conditions, which often leads to congestion, unnecessary waiting time, and increased fuel consumption.&lt;/p&gt;

&lt;p&gt;To explore a more intelligent approach, I built the Metropolitan AI Control Center, a traffic signal optimization system that combines Deep Reinforcement Learning, Computer Vision, and traffic simulation.&lt;/p&gt;

&lt;p&gt;The goal of the project is to replace static traffic signal timers with an AI agent that continuously learns how to manage intersections based on real-time traffic conditions.&lt;/p&gt;

&lt;p&gt;Project Overview&lt;/p&gt;

&lt;p&gt;The system operates on a simulated 10-intersection city grid using Eclipse SUMO (Simulation of Urban Mobility). A Deep Q-Network (DQN) agent learns how to control signal phases in order to reduce overall waiting time and prevent congestion from spreading across intersections.&lt;/p&gt;

&lt;p&gt;Vehicle density is estimated using YOLOv8-based computer vision, while the entire system is monitored through a Flask-based web dashboard.&lt;/p&gt;

&lt;p&gt;This setup allows the AI to interact with a realistic traffic environment and continuously improve its signal control strategy.&lt;/p&gt;

&lt;p&gt;Technology Stack&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI and Learning: Python 3.9+, PyTorch, Deep Q-Network (DQN)&lt;/li&gt;
&lt;li&gt;Computer Vision: Ultralytics YOLOv8 for vehicle detection and classification&lt;/li&gt;
&lt;li&gt;Traffic Simulation: Eclipse SUMO, TraCI API&lt;/li&gt;
&lt;li&gt;Web Interface: Flask, HTML5, CSS3, JavaScript&lt;/li&gt;
&lt;li&gt;Analytics: NumPy, Matplotlib&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key Features&lt;br&gt;
Deep Reinforcement Learning Signal Control&lt;/p&gt;

&lt;p&gt;A Deep Q-Network agent learns optimal traffic signal policies by interacting with the SUMO simulation environment. The objective is to minimize waiting time across all intersections while preventing traffic spillback.&lt;/p&gt;
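&lt;p&gt;To show the shape of the learning loop, here is a deliberately simplified sketch: a tabular Q-learning agent (standing in for the project's PyTorch DQN) on a toy two-approach intersection (standing in for SUMO). All numbers and the discretized state are illustrative:&lt;/p&gt;

```python
import random

random.seed(0)
ACTIONS = [0, 1]   # 0 gives green to north-south, 1 to east-west
q_table = {}       # discretized state -> [value of action 0, value of action 1]

def step(queues, action):
    """Serve the chosen approach, let cars arrive on the other, return reward."""
    queues[action] = max(0, queues[action] - 3)     # green clears up to 3 cars
    queues[1 - action] += random.choice([0, 1, 2])  # red approach accumulates
    return -sum(queues)                             # fewer waiting cars is better

def state_of(queues):
    # Discretize the state: which approach currently has the longer queue.
    return 0 if queues[0] >= queues[1] else 1

alpha, gamma, eps = 0.5, 0.9, 0.1
for episode in range(200):
    queues = [random.randint(0, 10), random.randint(0, 10)]
    for t in range(30):
        s = state_of(queues)
        q_table.setdefault(s, [0.0, 0.0])
        if random.random() >= 1.0 - eps:            # explore with probability eps
            a = random.choice(ACTIONS)
        else:                                       # otherwise act greedily
            a = q_table[s].index(max(q_table[s]))
        r = step(queues, a)
        s2 = state_of(queues)
        q_table.setdefault(s2, [0.0, 0.0])
        q_table[s][a] += alpha * (r + gamma * max(q_table[s2]) - q_table[s][a])

# Greedy policy after training: the highest-valued action per state.
policy = {s: q_table[s].index(max(q_table[s])) for s in q_table}
```

&lt;p&gt;The real system replaces the Q-table with a neural network over a much richer state (per-lane queues across ten intersections) and steps the environment through TraCI instead of this toy queue model.&lt;/p&gt;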

&lt;p&gt;Computer Vision Traffic Monitoring&lt;/p&gt;

&lt;p&gt;Simulated camera feeds are processed using YOLOv8, which detects different types of road users such as cars, trucks, buses, bikes, and pedestrians. This information is used to estimate traffic load.&lt;/p&gt;
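&lt;p&gt;One way the per-class detections could feed the controller is a weighted load estimate. The detection tuples below stand in for YOLOv8 output, and the class weights are illustrative (heavier vehicles occupy more road space):&lt;/p&gt;

```python
# Illustrative weights: how much road capacity each class consumes.
LOAD_WEIGHTS = {"car": 1.0, "bike": 0.5, "bus": 2.5, "truck": 2.5, "pedestrian": 0.0}

def traffic_load(detections, min_conf=0.5):
    """Sum class weights over confident detections for one camera frame."""
    return sum(LOAD_WEIGHTS.get(label, 1.0)
               for label, conf in detections
               if conf >= min_conf)

# Hypothetical (label, confidence) pairs for a single frame.
frame = [("car", 0.92), ("car", 0.88), ("truck", 0.81),
         ("bike", 0.67), ("pedestrian", 0.95), ("car", 0.31)]
```

&lt;p&gt;The low-confidence car is filtered out, and the remaining detections collapse into a single load number per approach that the signal controller can compare.&lt;/p&gt;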

&lt;p&gt;Emergency Vehicle Priority&lt;/p&gt;

&lt;p&gt;The system includes logic to detect emergency vehicles such as ambulances and temporarily prioritize their routes by adjusting signal phases.&lt;/p&gt;
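&lt;p&gt;A minimal sketch of that priority logic, with hypothetical approach names and load values:&lt;/p&gt;

```python
def choose_phase(loads, emergency_approach=None):
    """Give green to an emergency approach if present, else to the heaviest load."""
    if emergency_approach is not None:
        return emergency_approach
    return max(loads, key=loads.get)

# Hypothetical per-approach load estimates (e.g. from the vision step).
loads = {"ns": 3.0, "ew": 5.0}
```

&lt;p&gt;Under normal conditions the busiest approach wins; a detected ambulance overrides that choice until it has cleared the intersection.&lt;/p&gt;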

&lt;p&gt;Environmental Impact Monitoring&lt;/p&gt;

&lt;p&gt;The simulation also tracks metrics such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CO2 emissions&lt;/li&gt;
&lt;li&gt;fuel consumption&lt;/li&gt;
&lt;li&gt;acoustic noise levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helps evaluate the environmental impact of improved traffic flow.&lt;/p&gt;

&lt;p&gt;Comparative Testing Mode&lt;/p&gt;

&lt;p&gt;The dashboard includes a testing feature that compares traditional traffic control with the AI system.&lt;/p&gt;

&lt;p&gt;Phase 1 – Baseline Simulation&lt;br&gt;
A SUMO simulation runs with traditional fixed traffic timers.&lt;/p&gt;

&lt;p&gt;Phase 2 – AI Optimization&lt;br&gt;
A new isolated SUMO instance runs with the trained DQN agent controlling the signals.&lt;/p&gt;

&lt;p&gt;The dashboard then visualizes differences in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;average waiting time&lt;/li&gt;
&lt;li&gt;traffic flow efficiency&lt;/li&gt;
&lt;li&gt;CO2 emissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/naitik23verma/Green_light_optimization" rel="noopener noreferrer"&gt;https://github.com/naitik23verma/Green_light_optimization&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
