<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yash Maheshwari</title>
    <description>The latest articles on DEV Community by Yash Maheshwari (@yashmaheshwari).</description>
    <link>https://dev.to/yashmaheshwari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3227579%2F6b900a95-5326-4523-9029-cc6a60a39994.jpeg</url>
      <title>DEV Community: Yash Maheshwari</title>
      <link>https://dev.to/yashmaheshwari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yashmaheshwari"/>
    <language>en</language>
    <item>
      <title>We Built a RAG-Powered AI Question Engine Into a JavaScript Interview Platform - Here's Exactly How It Works</title>
      <dc:creator>Yash Maheshwari</dc:creator>
      <pubDate>Wed, 08 Apr 2026 09:36:21 +0000</pubDate>
      <link>https://dev.to/yashmaheshwari/we-built-a-rag-powered-ai-question-engine-into-a-javascript-interview-platform-heres-exactly-how-4ijo</link>
      <guid>https://dev.to/yashmaheshwari/we-built-a-rag-powered-ai-question-engine-into-a-javascript-interview-platform-heres-exactly-how-4ijo</guid>
      <description>&lt;p&gt;This is what happens when you stop treating an interview prep platform like a CRUD app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nxs3gfho7yq64rcu9es.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nxs3gfho7yq64rcu9es.png" alt="JsPrepPro time bound interview feature"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Someone writes questions. Someone formats them. They get pasted into a database. Users read them.&lt;/p&gt;

&lt;p&gt;That's it. That's the entire "AI" story at most platforms - a ChatGPT wrapper that answers whatever you type, with zero context about what you've already practiced, zero awareness of duplicate content, and zero intelligence about what question should exist next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jsprep.pro" rel="noopener noreferrer"&gt;JSPrep Pro&lt;/a&gt; went a different direction.&lt;/p&gt;

&lt;p&gt;We built an actual AI pipeline - one that generates questions using RAG (Retrieval-Augmented Generation), checks for semantic duplicates using embedding-based cosine similarity, runs on an automated weekly cron, and feeds into a manual QA + approval workflow before anything touches the question bank.&lt;/p&gt;

&lt;p&gt;This article provides a comprehensive technical breakdown. If you're a developer who wants to understand how these systems work - and wants to try a JavaScript interview prep platform that's actually intelligent - keep reading.&lt;/p&gt;

&lt;h3&gt;
  
  
  🤔 The Problem With Static Question Banks
&lt;/h3&gt;

&lt;p&gt;Here's what every other JS prep platform does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Someone writes 100 questions manually&lt;/li&gt;
&lt;li&gt;They get seeded into a database&lt;/li&gt;
&lt;li&gt;They sit there forever&lt;/li&gt;
&lt;li&gt;The platform has 100 questions for the next 3 years&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Questions go stale&lt;/strong&gt;. JavaScript evolves. &lt;code&gt;structuredClone&lt;/code&gt;, &lt;code&gt;Array.at()&lt;/code&gt;, &lt;code&gt;Promise.withResolvers()&lt;/code&gt; - if your question bank was written in 2021, it doesn't reflect what interviewers are asking in 2026.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duplicate questions are everywhere&lt;/strong&gt;. When you manually write hundreds of questions across categories, you inevitably repeat yourself. Two different phrasings of "what is a closure?" pollute the same bank.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No semantic intelligence&lt;/strong&gt;. The AI doesn't know what questions already exist when generating new ones, so it keeps regenerating what's already covered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We wanted to fix all three. Here's how.&lt;/p&gt;

&lt;h3&gt;
  
  
  🧠 The Architecture: Four Layers
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Firestore          → Source of truth (questions + embeddings)
Embeddings         → Intelligence layer (what does each question mean?)
Similarity Search  → Retrieval engine (what already exists nearby?)
AI (Groq/LLaMA)    → Reasoning layer (generate, evaluate, explain)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each layer has a specific job. None of them does too much. This separation is why the system is actually maintainable and extensible.&lt;/p&gt;

&lt;h3&gt;
  
  
  📐 Layer 1: Embeddings - Giving Every Question a Mathematical Identity
&lt;/h3&gt;

&lt;p&gt;An embedding converts text into a vector - a list of numbers that represents the semantic meaning of that text in multi-dimensional space.&lt;/p&gt;

&lt;p&gt;Two questions that mean the same thing, even if phrased differently, will have similar vectors. Two questions about completely different concepts will be far apart in that space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// What an embedding looks like (384 numbers for MiniLM-L6-v2)&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.023&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.147&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.891&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.034&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.562&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...]&lt;/span&gt; &lt;span class="c1"&gt;// 384 dimensions&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tricky part: you can't just embed the question title. That loses too much signal. We embed type-aware inputs - different fields combined based on what type of question it is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;buildEmbeddingInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;output&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// What matters: the title + the code + the expected output&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; Output: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;expectedOutput&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;debug&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// What matters: the title + what the bug is + the broken code&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; Bug: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bugDescription&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;brokenCode&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// Theory: the title + the full answer + the explanation&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stripHTML&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;explanation&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why does this matter? Consider two output questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What does this print?" + code that tests var hoisting&lt;/li&gt;
&lt;li&gt;"What does this print?" + code that tests Promise resolution order&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both have the same title. But their embeddings are far apart because the code and output are semantically very different. If you only embedded the title, the similarity search would think they're duplicates. Embedding the full context makes it accurate.&lt;/p&gt;
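&lt;p&gt;A minimal sketch of that idea (field names like &lt;code&gt;expectedOutput&lt;/code&gt; are assumptions for illustration): two output questions with identical titles produce clearly different embedding inputs once the code and expected output are folded in.&lt;/p&gt;

```javascript
// Same title, different code → different embedding inputs.
// Field names (code, expectedOutput) are assumptions for illustration.
const buildInput = q => `${q.title} ${q.code} Output: ${q.expectedOutput}`

const q1 = {
  title: 'What does this print?',
  code: 'var x = 1; { var x = 2 } console.log(x)',
  expectedOutput: '2'
}
const q2 = {
  title: 'What does this print?',
  code: 'Promise.resolve().then(() => console.log("a")); console.log("b")',
  expectedOutput: 'b a'
}

// Identical titles, but the full inputs differ - so their embeddings will too.
console.log(buildInput(q1) === buildInput(q2)) // false
```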

&lt;p&gt;&lt;strong&gt;Model choice:&lt;/strong&gt; We use embed-english-light-v3.0 from Cohere - 384 dimensions, completely free on the trial tier, works on Vercel serverless without any model download or cold start issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔍 Layer 2: Cosine Similarity - Finding What Already Exists
&lt;/h3&gt;

&lt;p&gt;Once every question has an embedding, we can find "nearby" questions mathematically. The metric is cosine similarity - it measures the angle between two vectors, returning a score from -1 to 1 (in practice, scores for these text embeddings land between 0 and 1, where 1 means identical meaning).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;cosineSimilarity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;[]):&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;dot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;normA&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;normB&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;dot&lt;/span&gt;   &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;normA&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;normB&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;dot&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;normA&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;normB&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Score &amp;gt; 0.85 → near-duplicate → reject&lt;/li&gt;
&lt;li&gt;Score 0.5–0.85 → related question → show as context&lt;/li&gt;
&lt;li&gt;Score &amp;lt; 0.5 → distinct question → safe to add&lt;/li&gt;
&lt;/ul&gt;
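&lt;p&gt;Those cutoffs can be written as a tiny helper (the function name is illustrative; the thresholds are the ones listed above):&lt;/p&gt;

```javascript
// Maps a cosine similarity score to the three actions above.
// Name is illustrative; the 0.85 and 0.5 cutoffs are from the pipeline.
function classifySimilarity(score) {
  if (score > 0.85) return 'duplicate' // near-duplicate → reject
  if (score >= 0.5) return 'related'   // related → useful as RAG context
  return 'distinct'                    // distinct → safe to add
}

console.log(classifySimilarity(0.91)) // 'duplicate'
console.log(classifySimilarity(0.62)) // 'related'
console.log(classifySimilarity(0.2))  // 'distinct'
```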

&lt;p&gt;We don't use a vector database for this. With ~200 questions, pure in-memory cosine similarity runs in under 10ms. The entire similarity search is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;findSimilarQuestions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetEmbedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;topK&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;questions&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;q&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;cosineSimilarity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetEmbedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;q&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;topK&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;No LangChain&lt;/strong&gt;. &lt;strong&gt;No Pinecone&lt;/strong&gt;. &lt;strong&gt;No infrastructure overhead&lt;/strong&gt;. Pure math that works perfectly at this scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  🤖 Layer 3: RAG - Making the AI Context-Aware
&lt;/h3&gt;

&lt;p&gt;This is where it gets interesting.&lt;/p&gt;

&lt;p&gt;RAG (Retrieval-Augmented Generation) means: before asking the AI to generate something, retrieve relevant existing content and include it in the prompt. The AI now knows what already exists and won't repeat it.&lt;/p&gt;

&lt;p&gt;Here's the pipeline for generating a new question:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Step 1: Get a seed embedding for the topic&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;seedEmbedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getEmbedding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; JavaScript interview`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// Step 2: Find the most similar existing questions&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;similar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;findSimilarQuestions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;seedEmbedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;allQuestions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// Step 3: Build RAG context string&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ragContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Related questions already in the database:
- [output] What happens when you use var inside a for loop? (Closures &amp;amp; Scope)
- [theory] Explain closure with a practical example (Closures &amp;amp; Scope)
- [debug] Fix the stale closure in this React useEffect (Closure Traps)`&lt;/span&gt;
&lt;span class="c1"&gt;// Step 4: Inject into the AI prompt&lt;/span&gt;
&lt;span class="nx"&gt;systemPrompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s2"&gt;`
&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ragContext&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IMPORTANT: Do NOT generate a question similar to any of the above.&lt;br&gt;
Cover a distinct angle, edge case, or sub-concept.&lt;/p&gt;

&lt;p&gt;The AI now generates a question that's aware of &lt;strong&gt;what already exists&lt;/strong&gt;. Without RAG, you'd inevitably get question #47 about closures, which is basically question #12 again with different wording. With RAG, the AI actively avoids that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-model approach:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Groq&lt;/strong&gt; + &lt;strong&gt;LLaMA 3.3 70B&lt;/strong&gt; → Question generation, answer evaluation, AI tutoring (free, 500 tokens/sec)&lt;br&gt;
&lt;strong&gt;Cohere&lt;/strong&gt; embed-english-light-v3.0 → Embeddings for similarity (free trial)&lt;/p&gt;

&lt;p&gt;Two models, two jobs. Neither does the other's job.&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚙️ Layer 4: The Generation Pipeline
&lt;/h3&gt;

&lt;p&gt;Here's the full flow when a new question gets generated, from click to Firestore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Admin clicks Generate
        ↓
Fetch all existing questions + embeddings from Firestore
        ↓
Generate seed embedding for the topic (Cohere)
        ↓
Run similarity search → find top 5 related questions
        ↓
Build RAG context from similar questions
        ↓
Call Groq/LLaMA with RAG-enriched prompt
        ↓
Parse the generated question JSON
        ↓
Generate embedding for the new question (Cohere)
        ↓
Dedup check: cosine similarity against all existing questions
        ↓
If similarity &amp;gt; 0.85 → flag as duplicate
If similarity ≤ 0.85 → mark as safe
        ↓
Return candidate with similarity score + top similar questions
        ↓
Admin reviews in UI → Approve or Reject
        ↓
On Approve: save to Firestore with embedding + auto-generated slug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Every step has a purpose. The dedup check means a human never has to manually check "does this already exist?" The RAG context means the AI rarely generates something repetitive in the first place. The manual approval step means nothing bad gets into the question bank automatically.&lt;/p&gt;
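&lt;p&gt;The dedup step in that flow boils down to one max-similarity scan. A self-contained sketch (document shapes and names are assumptions; &lt;code&gt;cosine&lt;/code&gt; repeats the math shown earlier):&lt;/p&gt;

```javascript
// Sketch of the dedup check from the flow above. Document shapes and
// names are assumptions; cosine() repeats the math from the article.
const cosine = (a, b) => {
  let dot = 0, na = 0, nb = 0
  a.forEach((v, i) => { dot += v * b[i]; na += v * v; nb += b[i] * b[i] })
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

function dedupCheck(candidateEmbedding, existingQuestions) {
  const scores = existingQuestions.map(q => cosine(candidateEmbedding, q.embedding))
  const similarityScore = Math.max(0, ...scores)
  return { similarityScore, isDuplicate: similarityScore > 0.85 }
}

// Toy 3-d embeddings: the candidate is nearly parallel to the first one.
const existing = [{ embedding: [1, 0, 0] }, { embedding: [0, 1, 0] }]
console.log(dedupCheck([0.99, 0.01, 0], existing).isDuplicate) // true
```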



&lt;h3&gt;
  
  
  ⏰ The Weekly Cron Pipeline
&lt;/h3&gt;

&lt;p&gt;The real power is when this runs automatically.&lt;br&gt;
Every Monday at 9:00 UTC, a Vercel cron job triggers &lt;code&gt;/api/cron&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;vercel.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"crons"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/cron"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"schedule"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0 9 * * 1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cron iterates through a set of generation targets - categories that need more content:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const GENERATION_TARGETS = [
  { type: 'theory', category: 'Async JS',     topics: ['Promise.allSettled', 'AbortController'] },
  { type: 'output', category: 'Event Loop',   topics: ['microtask queue order', 'Promise chaining'] },
  { type: 'debug',  category: 'Async Bugs',   topics: ['missing await', 'Promise rejection'] },
  // ... 7 categories total
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For each target, it runs the full RAG pipeline and saves the result to a &lt;code&gt;questions_pending&lt;/code&gt; collection, not &lt;code&gt;questions&lt;/code&gt; directly. Nothing goes live automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;addDoc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;questions_pending&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;generatedQuestion&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;topSimilar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c1"&gt;// the 3 most similar existing questions&lt;/span&gt;
  &lt;span class="nx"&gt;similarityScore&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// how close to the nearest duplicate (0–1)&lt;/span&gt;
  &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pending&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// pending | approved | rejected&lt;/span&gt;
  &lt;span class="na"&gt;generatedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  🎛️ The Admin QA Interface
&lt;/h3&gt;

&lt;p&gt;Monday morning, the admin visits /admin/generate and clicks "Cron Queue".&lt;br&gt;
They see everything generated overnight:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────┐
│ [output] [core]  23% similar                                │
│ What does Promise.allSettled return when promises reject?   │
│ Similar to: "What does Promise.all do?" (23%)               │
│                        [Preview ▾]  [✓ Approve]  [✗ Reject] │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ [debug] [medium]  ⛔ DUPLICATE  91% similar                 │
│ Fix the missing await in this async function                │
│ Similar to: "Debug: async function not returning" (91%)     │
│                                            [✗ Reject]       │
└─────────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The similarity score tells you immediately whether something is worth reading carefully or should just be rejected. Duplicates are auto-flagged and can't be approved - the Approve button is hidden if &lt;code&gt;isDuplicate === true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For non-duplicates, clicking Preview expands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The full question (code, answer, explanation)&lt;/li&gt;
&lt;li&gt;The top 3 most similar existing questions (so you can judge fit)&lt;/li&gt;
&lt;li&gt;The exact bug description / expected output / full answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One click to approve → it saves to Firestore with embedding, auto-generated slug, and publishes immediately. The whole review process takes about 5 minutes per batch.&lt;/p&gt;
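&lt;p&gt;The auto-generated slug can be as simple as a normalization pass - a sketch, not necessarily the production implementation:&lt;/p&gt;

```javascript
// Minimal slug sketch: lowercase, collapse non-alphanumeric runs into
// hyphens, strip leading/trailing hyphens. Production logic may differ.
const toSlug = title =>
  title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')

console.log(toSlug('What does Promise.allSettled return when promises reject?'))
// 'what-does-promise-allsettled-return-when-promises-reject'
```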

&lt;h3&gt;
  
  
  🔬 How This Makes the AI Tutor Smarter
&lt;/h3&gt;

&lt;p&gt;The embeddings don't just serve the generation pipeline. They improve the real-time AI features, too.&lt;/p&gt;

&lt;p&gt;When a user opens a question and asks the AI Tutor a follow-up, the system now runs a similarity search first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Find questions related to what the user is asking about&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;similar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;findSimilarQuestions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;questionEmbedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;allQuestions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// Inject as context into the AI tutor prompt&lt;/span&gt;
&lt;span class="nx"&gt;systemPrompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s2"&gt;`
Related questions in the database:
&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;similar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`- [&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;] &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;
Build on or contrast these. Avoid repeating what they already cover.
`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI tutor now gives answers that connect to related concepts instead of explaining each question in isolation. Ask about closures, and the AI knows you've probably also seen the closure-in-loops question and won't re-explain the same thing.&lt;/p&gt;

&lt;p&gt;Same for Evaluate Me - when your answer is being scored, the evaluator has context about related concepts and can probe whether you understand the broader picture, not just the specific question.&lt;/p&gt;

&lt;h3&gt;
  
  
  📊 The Numbers
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;Stack&lt;/th&gt;&lt;th&gt;Cost&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Question generation&lt;/td&gt;&lt;td&gt;Groq LLaMA 3.3 70B&lt;/td&gt;&lt;td&gt;Free&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Embeddings&lt;/td&gt;&lt;td&gt;Cohere embed-english-light&lt;/td&gt;&lt;td&gt;Free&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Similarity search&lt;/td&gt;&lt;td&gt;Pure JS cosine similarity&lt;/td&gt;&lt;td&gt;$0&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Cron scheduling&lt;/td&gt;&lt;td&gt;Vercel Cron&lt;/td&gt;&lt;td&gt;Free&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Database&lt;/td&gt;&lt;td&gt;Firestore&lt;/td&gt;&lt;td&gt;Free tier&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Hosting&lt;/td&gt;&lt;td&gt;Vercel&lt;/td&gt;&lt;td&gt;Free tier&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Total infrastructure cost for the AI pipeline: $0/month.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is important because it means the system scales. As the question bank grows from 200 to 500 to 2000 questions, the only thing that changes is the in-memory similarity computation, which stays under 50ms for up to 5,000 questions.&lt;/p&gt;
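&lt;p&gt;That in-memory computation is small enough to sketch in a few lines. The 0.85 threshold comes from the numbers above; the function and field names here (like &lt;code&gt;embedding&lt;/code&gt;) are illustrative, not the actual implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Cosine similarity between two embedding vectors (plain number[] arrays)
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i &amp;lt; a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Reject a candidate question if it is too close to anything already in the bank
function isDuplicate(candidateEmbedding, bank, threshold = 0.85) {
  return bank.some(q =&amp;gt; cosineSimilarity(candidateEmbedding, q.embedding) &amp;gt;= threshold);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Scanning a few thousand stored vectors this way is one pass of multiply-adds, which is why no vector database is needed at this scale.&lt;/p&gt;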

&lt;h3&gt;
  
  
  🆚 How This Compares to Other Platforms
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;&lt;/th&gt;&lt;th&gt;Other platforms&lt;/th&gt;&lt;th&gt;JSPrep Pro&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Question quality control&lt;/td&gt;&lt;td&gt;Manual only&lt;/td&gt;&lt;td&gt;RAG dedup + manual approval&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;AI answers&lt;/td&gt;&lt;td&gt;Generic ChatGPT wrapper&lt;/td&gt;&lt;td&gt;Context-aware with RAG&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;New questions&lt;/td&gt;&lt;td&gt;Written by humans occasionally&lt;/td&gt;&lt;td&gt;Weekly automated pipeline&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Duplicate detection&lt;/td&gt;&lt;td&gt;None&lt;/td&gt;&lt;td&gt;Cosine similarity (0.85 threshold)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;AI aware of question bank&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Yes - via embeddings&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Question format&lt;/td&gt;&lt;td&gt;Theory only&lt;/td&gt;&lt;td&gt;Theory + output + debug&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Interview simulation&lt;/td&gt;&lt;td&gt;None&lt;/td&gt;&lt;td&gt;Timed sprint, 3 question types&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The biggest difference isn't any single feature. It's that the AI actually knows what it's working with. Most platforms have an AI bolted on the side. JSPrep Pro has AI embedded in the core loop - generation, deduplication, retrieval, and evaluation.&lt;/p&gt;

&lt;h3&gt;
  
  
  🚀 What This Means For You As a Developer Preparing for Interviews
&lt;/h3&gt;

&lt;p&gt;All of this infrastructure serves one goal: you get better question quality, faster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Questions are semantically diverse&lt;/strong&gt; - the similarity search prevents redundant coverage of the same concept&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Questions stay current&lt;/strong&gt; - the weekly pipeline adds new questions automatically as JavaScript evolves&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The AI tutor is context-aware&lt;/strong&gt; - it knows what you're practicing and what related concepts you might be missing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The answer evaluator is more precise&lt;/strong&gt; - it evaluates depth, not just surface-level coverage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And you get to practice with three question types that test completely different skills:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Theory → Can you explain it clearly? (AI-evaluated)&lt;/li&gt;
&lt;li&gt;Output → Can you execute JavaScript mentally?&lt;/li&gt;
&lt;li&gt;Debug → Can you diagnose broken code?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All three in a single timed sprint that mirrors what a real JavaScript interview actually feels like.&lt;/p&gt;

&lt;h3&gt;
  
  
  💻 Try It Now - No Account Required
&lt;/h3&gt;

&lt;p&gt;The sprint is completely free, no signup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jsprep.pro/sprint" rel="noopener noreferrer"&gt;👉 Try Sprint&lt;/a&gt;&lt;br&gt;
5 questions, fully timed, all three question types. When you finish, you'll see a shareable scorecard with your accuracy, strengths, and weak areas.&lt;/p&gt;

&lt;p&gt;That 10-minute sprint will tell you more about your JavaScript interview readiness than an hour of reading documentation.&lt;/p&gt;
&lt;h3&gt;
  
  
  🧵 The Stack Summary
&lt;/h3&gt;

&lt;p&gt;If you want to replicate this architecture for your own project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Groq API (llama-3.3-70b-versatile)  → LLM inference, free
Cohere API (embed-english-light-v3)  → Embeddings, free trial
Pure JS cosine similarity            → Vector search, no infrastructure
Firestore                            → Stores embeddings as number[]
Vercel Cron                          → Weekly generation trigger
Next.js API routes                   → Pipeline endpoints
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
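&lt;p&gt;The weekly trigger needs no orchestration framework either. A minimal sketch of a &lt;code&gt;vercel.json&lt;/code&gt; cron configuration, assuming a Vercel deployment: the endpoint path below is a placeholder, not JSPrep Pro's actual route. This schedule fires every Monday at 06:00 UTC.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "crons": [
    { "path": "/api/generate-questions", "schedule": "0 6 * * 1" }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;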



&lt;p&gt;No LangChain. No vector database. No complex orchestration framework. The concepts (RAG, embeddings, similarity search) are powerful. The implementation doesn't need to be complicated.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>programming</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Yash Maheshwari</dc:creator>
      <pubDate>Sat, 14 Mar 2026 14:26:59 +0000</pubDate>
      <link>https://dev.to/yashmaheshwari/-abn</link>
      <guid>https://dev.to/yashmaheshwari/-abn</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/yashmaheshwari" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3227579%2F6b900a95-5326-4523-9029-cc6a60a39994.jpeg" alt="yashmaheshwari"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/yashmaheshwari/stop-reading-docs-start-passing-javascript-interviews-1n8l" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Stop Reading Docs. Start Passing JavaScript Interviews.&lt;/h2&gt;
      &lt;h3&gt;Yash Maheshwari ・ Mar 13&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>javascript</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Stop Reading Docs. Start Passing JavaScript Interviews.</title>
      <dc:creator>Yash Maheshwari</dc:creator>
      <pubDate>Fri, 13 Mar 2026 10:43:45 +0000</pubDate>
      <link>https://dev.to/yashmaheshwari/stop-reading-docs-start-passing-javascript-interviews-1n8l</link>
      <guid>https://dev.to/yashmaheshwari/stop-reading-docs-start-passing-javascript-interviews-1n8l</guid>
      <description>&lt;p&gt;You've been preparing for weeks.&lt;br&gt;
You've read MDN cover to cover. You've gone through JavaScript.info. You've watched hours of YouTube tutorials. You feel like you know JavaScript — closures, the event loop, prototypes, async/await.&lt;br&gt;
Then the interviewer asks: "What does this code output?"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;javascriptfor &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You freeze.&lt;br&gt;
Not because you don't know JavaScript. Because you've never actually practiced JavaScript the way interviews test it.&lt;br&gt;
That's the gap. And it's the exact gap JSPrep Pro was built to close.&lt;/p&gt;
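&lt;p&gt;(For the record, that snippet logs &lt;code&gt;3&lt;/code&gt; three times: &lt;code&gt;var&lt;/code&gt; creates one binding shared by every callback, and the callbacks run after the loop finishes. The same capture behavior can be shown synchronously:)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// var: one shared binding, so every closure sees the final value
const withVar = [];
for (var i = 0; i &amp;lt; 3; i++) withVar.push(() =&amp;gt; i);

// let: a fresh binding per iteration, so each closure keeps its own value
const withLet = [];
for (let j = 0; j &amp;lt; 3; j++) withLet.push(() =&amp;gt; j);

console.log(withVar.map(f =&amp;gt; f())); // [ 3, 3, 3 ]
console.log(withLet.map(f =&amp;gt; f())); // [ 0, 1, 2 ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;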

&lt;h3&gt;The Real Reason Developers Fail JavaScript Interviews&lt;/h3&gt;

&lt;p&gt;Here's something nobody tells you: interview performance is a separate skill from JavaScript knowledge.&lt;br&gt;
You can deeply understand closures and still stumble explaining one under pressure. You can know the event loop inside out and still blank on predicting output from a nested setTimeout. You can have built real-world apps and still get caught on a hoisting question.&lt;br&gt;
The problem isn't your knowledge. It's that you've been training for a marathon by reading about running.&lt;/p&gt;

&lt;p&gt;MDN is documentation — it tells you what things are. JavaScript.info is a textbook — it teaches you how things work. Neither of them trains you to perform under interview conditions. Neither simulates the pressure. Neither gives you feedback on your actual answers. Neither tells you which topics to focus on or how you're improving week over week.&lt;/p&gt;

&lt;p&gt;For that, you need a platform built specifically for JavaScript interview preparation.&lt;br&gt;
That platform is JSPrep Pro.&lt;/p&gt;

&lt;h3&gt;What JSPrep Pro Is — And Why It's Different&lt;/h3&gt;

&lt;p&gt;JSPrep Pro is not another documentation site with a quiz slapped on top. Every single feature on the platform exists for one purpose: to make you perform better in your next JavaScript interview.&lt;br&gt;
Here's what makes it different.&lt;/p&gt;

&lt;h3&gt;⚡ The Sprint — The Closest Thing to a Real Interview&lt;/h3&gt;

&lt;p&gt;This is the feature that changes everything.&lt;br&gt;
The JavaScript Interview Sprint drops you into a timed, mixed-format session that replicates exactly what a real technical interview feels like. You pick your length — 5, 10, 15, or 20 questions — and the clock starts.&lt;br&gt;
You face three types of questions in a single session:&lt;br&gt;
Theory — "Explain event delegation and why it matters." You write your answer as you would in an interview. No multiple choice. No hints. Just you and a text box.&lt;/p&gt;

&lt;p&gt;Output Prediction — "What does this print?" You type your exact prediction. This is the most brutal and most valuable practice format that exists because it forces your brain to execute code mentally, step by step.&lt;/p&gt;

&lt;p&gt;Debug Challenges — You're shown broken code. Find the bug. Explain what's wrong. These are the async race conditions, closure traps, and this context losses that trip up even experienced developers.&lt;br&gt;
When the sprint ends, you get a full debrief: your score, your accuracy percentage, which categories you're strong in, which are weak, and a shareable score card you can screenshot and post.&lt;br&gt;
Every sprint is saved to your history. Run one sprint a day for two weeks and your Analytics page tells you a story — where you started, where you are, which categories improved, which still need work.&lt;br&gt;
No other prep platform does this. LeetCode has algorithmic challenges. Flashcard apps have definitions. Nothing combines theory, output prediction, and debugging into a single timed session that mirrors the real thing.&lt;/p&gt;

&lt;p&gt;Try it free, no account needed: &lt;a href="https://jsprep.pro/sprint"&gt;jsprep.pro/sprint&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;📚 150+ Questions. Three Formats. Zero Filler.&lt;/h3&gt;

&lt;p&gt;JSPrep Pro has 150+ questions across three distinct modes — each targeting a different interview skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory Questions (91 questions)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"What is a closure?" "Explain this binding." "What's the difference between == and ===?"&lt;br&gt;
Every question has a full written answer. But more importantly, every question has the Evaluate Me feature — you type your answer as you'd give it in an interview, and the AI scores it 1–10, tells you what you got right, what you missed, and gives you a better phrasing to use next time.&lt;br&gt;
This turns passive reading into active recall. Active recall is what actually builds memory.&lt;br&gt;
Output Quiz (66 questions)&lt;br&gt;
Short code snippets across closures, hoisting, the event loop, type coercion, and this binding. You type the exact output. No hints, no multiple choice.&lt;br&gt;
This is the single best exercise for building an accurate mental model of how JavaScript executes. If you can consistently predict output correctly, you can handle almost any JavaScript interview question — because you understand how the engine actually thinks.&lt;br&gt;
Debug Lab (38 questions)&lt;br&gt;
Real broken code with real bugs: async functions that miss await, closures that capture the wrong value, promises that never resolve, this that points nowhere. You spot the issue, and the fix is shown with a full explanation of why it broke and how to avoid it.&lt;br&gt;
Questions are filterable by category and difficulty. Bookmark any question to revisit. Every question links to its topic hub for deeper context.&lt;/p&gt;

&lt;h3&gt;🗺️ Topic Hubs — The Curriculum MDN Never Gave You&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://jsprep.pro/topics"&gt;jsprep.pro/topics&lt;/a&gt; and you'll find something no documentation site offers: a structured interview curriculum.&lt;br&gt;
Every major JavaScript concept has a dedicated hub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Closures &amp;amp; Scope&lt;/li&gt;
&lt;li&gt;The Event Loop&lt;/li&gt;
&lt;li&gt;Promises &amp;amp; Async/Await&lt;/li&gt;
&lt;li&gt;Prototypal Inheritance&lt;/li&gt;
&lt;li&gt;this Binding&lt;/li&gt;
&lt;li&gt;Generators &amp;amp; Iterators&lt;/li&gt;
&lt;li&gt;ES6+ Features&lt;/li&gt;
&lt;li&gt;JavaScript Modules&lt;/li&gt;
&lt;li&gt;Memory Management&lt;/li&gt;
&lt;li&gt;And 25+ more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each hub tells you the difficulty level (Beginner → Senior), how often this topic appears in interviews at different company tiers, and gives you a full explanation written specifically for interview contexts — not as documentation, but as how to understand and explain this concept when someone is evaluating you.&lt;br&gt;
Every hub links directly to the related questions in the question bank and to relevant blog posts. It's a complete learning path, not a reference sheet.&lt;br&gt;
This is the structure developers spend days trying to build themselves from scattered blog posts. JSPrep Pro gives it to you in one place.&lt;/p&gt;

&lt;h3&gt;🤖 AI That Actually Helps You Get Better&lt;/h3&gt;

&lt;p&gt;JSPrep Pro has AI built into the core practice loop — not bolted on as a chatbot feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluate Me&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Write your answer to any theory question. The AI evaluates it like a senior engineer would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scores it 1–10&lt;/li&gt;
&lt;li&gt;Identifies what you got right&lt;/li&gt;
&lt;li&gt;Points out what was missing&lt;/li&gt;
&lt;li&gt;Suggests a better answer with improved phrasing&lt;/li&gt;
&lt;li&gt;Gives you a letter grade&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is practicing with feedback. It's the difference between shooting free throws alone versus shooting with a coach who tells you what you're doing wrong after every attempt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Follow-Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On any question, open the AI Tutor and ask anything. "Show me a real-world example." "How does this relate to React?" "What edge cases do interviewers ask about?"&lt;br&gt;
It has full context of the question and answer. This turns every question into a conversation instead of a dead end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Study Plan&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://jsprep.pro/study-plan"&gt;jsprep.pro/study-plan&lt;/a&gt;, enter your interview date, and the AI generates a personalized preparation schedule based on your actual performance data. It knows which categories you're weak in. It knows how many days you have. It tells you what to study when.&lt;/p&gt;

&lt;h3&gt;📊 Analytics — Know Exactly Where You Stand&lt;/h3&gt;

&lt;p&gt;Most developers prep blindly — they study everything and hope it sticks. The Analytics page ends that.&lt;br&gt;
You get a real dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overall progress across all three question types&lt;/li&gt;
&lt;li&gt;Category breakdown — you might be 90% on Arrays and 35% on Closures&lt;/li&gt;
&lt;li&gt;Sprint history with accuracy, score, and trend over time&lt;/li&gt;
&lt;li&gt;Strengths and weak areas automatically identified from your sprint results&lt;/li&gt;
&lt;li&gt;Streak tracking — consecutive days of practice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data tells you something no amount of passive studying can: exactly what you need to work on next.&lt;/p&gt;

&lt;h3&gt;🏆 Leaderboard — Accountability That Actually Works&lt;/h3&gt;

&lt;p&gt;There's a weekly XP leaderboard on JSPrep Pro. You earn XP by mastering questions, completing sprints, solving output challenges, and maintaining your streak.&lt;br&gt;
This sounds like a small thing. It isn't.&lt;br&gt;
Knowing you're in 4th place on a Tuesday afternoon and someone just overtook you is a surprisingly effective motivator. Consistency beats cramming — and the leaderboard rewards consistency.&lt;/p&gt;

&lt;h3&gt;📅 Question of the Day&lt;/h3&gt;

&lt;p&gt;Every time you open your dashboard, there's one question waiting for you. Read it, write your answer, evaluate it, move on. Five minutes.&lt;br&gt;
Do this every day for a month and you've actively recalled 30 questions with written, evaluated answers. Research is unambiguous on this: spaced, active recall beats passive re-reading by a significant margin for long-term retention.&lt;br&gt;
The Question of the Day is JSPrep Pro's implementation of that principle in the smallest possible time commitment.&lt;/p&gt;

&lt;h3&gt;📋 The JavaScript Interview Cheatsheet&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://jsprep.pro/javascript-interview-cheatsheet"&gt;jsprep.pro/javascript-interview-cheatsheet&lt;/a&gt;&lt;br&gt;
A single, dense, downloadable PDF covering every topic that regularly surfaces in JavaScript interviews. Not "everything about JavaScript" — just what interviewers actually ask about, organized so you can scan it in 20 minutes the night before an interview.&lt;br&gt;
Zero fluff. Zero filler. Exactly what you need and nothing you don't.&lt;/p&gt;

&lt;h3&gt;✍️ The Blog — Interview-Focused, Not Tutorial-Focused&lt;/h3&gt;

&lt;p&gt;JSPrep Pro's blog doesn't explain what closures are. It covers things like:&lt;/p&gt;

&lt;p&gt;"The 5 Closure Mistakes That Kill Technical Interviews"&lt;br&gt;
"How Senior Developers Answer Event Loop Questions"&lt;br&gt;
"Why Output Prediction Practice is the Most Underrated Interview Prep Technique"&lt;/p&gt;

&lt;p&gt;Every post is written with one goal: to make you give better answers in your next interview. Practical, specific, and organized by the topic hubs they support.&lt;/p&gt;

&lt;h3&gt;Free vs Pro — Exactly What You Get&lt;/h3&gt;

&lt;p&gt;Free forever:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First 5 questions in every category&lt;/li&gt;
&lt;li&gt;Full 5-question Sprint (no account required)&lt;/li&gt;
&lt;li&gt;Question of the Day&lt;/li&gt;
&lt;li&gt;All Topic Hubs&lt;/li&gt;
&lt;li&gt;Blog&lt;/li&gt;
&lt;li&gt;Cheatsheet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pro unlocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All 150+ questions (theory, output, debug)&lt;/li&gt;
&lt;li&gt;Full 10/15/20-question Sprints&lt;/li&gt;
&lt;li&gt;AI Evaluate Me and AI Follow-Up on every question&lt;/li&gt;
&lt;li&gt;Analytics dashboard&lt;/li&gt;
&lt;li&gt;Sprint history&lt;/li&gt;
&lt;li&gt;AI Study Plan&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pro is a one-time payment. No subscription. No monthly fee. Prep for your interview, pass, and keep access forever.&lt;/p&gt;

&lt;h3&gt;The 10-Minute Reality Check&lt;/h3&gt;

&lt;p&gt;Here's my challenge to you.&lt;br&gt;
Close this article. Go to &lt;a href="https://jsprep.pro/sprint"&gt;jsprep.pro/sprint&lt;/a&gt;. Start a free 5-question sprint. No account. No credit card. Takes 10 minutes.&lt;br&gt;
In those 10 minutes you'll learn more about your actual interview readiness than 3 hours of reading MDN. You'll know which question types are comfortable and which make you hesitate. You'll know whether you can predict output under time pressure. You'll know whether you can explain concepts clearly in writing.&lt;br&gt;
That's the point of JSPrep Pro: to show you where you actually are, not where you think you are.&lt;br&gt;
And if the sprint shows gaps — which it will, for almost everyone — the platform gives you exactly what you need to close them.&lt;/p&gt;

&lt;h3&gt;The Bottom Line&lt;/h3&gt;

&lt;p&gt;MDN is for looking things up. Use it when you're building.&lt;br&gt;
JavaScript.info is for learning the language. Use it when you're starting out.&lt;br&gt;
JSPrep Pro is for passing JavaScript interviews. Use it when you have an interview coming up — or when you want to be ready for one before it's scheduled.&lt;br&gt;
The platform was built by a developer who failed JavaScript interviews despite knowing JavaScript well. Every feature exists because it would have helped then.&lt;br&gt;
It might be exactly what helps you now.&lt;br&gt;
Start your free sprint at &lt;a href="https://jsprep.pro"&gt;jsprep.pro&lt;/a&gt; →&lt;/p&gt;

&lt;p&gt;JSPrep Pro — JavaScript Interview Preparation Platform | &lt;a href="https://jsprep.pro"&gt;jsprep.pro&lt;/a&gt;&lt;br&gt;
Free sprint available with no account required. Pro access is a one-time payment.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Mastering React Query with getServerSideProps in Next.js</title>
      <dc:creator>Yash Maheshwari</dc:creator>
      <pubDate>Fri, 30 May 2025 19:13:35 +0000</pubDate>
      <link>https://dev.to/yashmaheshwari/mastering-react-query-with-getserversideprops-in-nextjs-1o42</link>
      <guid>https://dev.to/yashmaheshwari/mastering-react-query-with-getserversideprops-in-nextjs-1o42</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe23k4hhphcqq1ezhbvg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe23k4hhphcqq1ezhbvg2.png" alt="React Query with getServerSideProps in Next.js" width="800" height="443"&gt;&lt;/a&gt;&lt;br&gt;
In this article, I walk through how to integrate React Query with getServerSideProps in a Next.js app. Learn how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Hydrate and dehydrate for SSR&lt;/li&gt;
&lt;li&gt;Manage server-side caching&lt;/li&gt;
&lt;li&gt;Optimize for SEO and performance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Read the full article on &lt;a href="https://medium.com/devmap/mastering-react-query-with-getserversideprops-in-next-js-a-complete-guide-300407ef7592" rel="noopener noreferrer"&gt;Medium: Mastering React Query with getServerSideProps&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love your thoughts, feedback, and improvements!&lt;/p&gt;

</description>
      <category>react</category>
      <category>nextjs</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
