<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Krishna kant singh</title>
    <description>The latest articles on DEV Community by Krishna kant singh (@afkkrishna).</description>
    <link>https://dev.to/afkkrishna</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3934443%2F61e5b005-e105-4977-9f75-0283bb956566.png</url>
      <title>DEV Community: Krishna kant singh</title>
      <link>https://dev.to/afkkrishna</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/afkkrishna"/>
    <language>en</language>
    <item>
      <title>How I Stopped My AI Coding Assistant from Hallucinating (and Saved My Token Budget)</title>
      <dc:creator>Krishna kant singh</dc:creator>
      <pubDate>Sun, 17 May 2026 07:04:59 +0000</pubDate>
      <link>https://dev.to/afkkrishna/how-i-stopped-my-ai-coding-assistant-from-hallucinating-and-saved-my-token-budget-2kf2</link>
      <guid>https://dev.to/afkkrishna/how-i-stopped-my-ai-coding-assistant-from-hallucinating-and-saved-my-token-budget-2kf2</guid>
      <description>&lt;p&gt;Every developer using tools like Claude Engineer, ChatGPT, or Lovable eventually hits the exact same wall.&lt;/p&gt;

&lt;p&gt;You start a new project, and everything feels like magic. The AI understands your vision, writes clean components, and you’re moving at warp speed. Then week two hits. The codebase gets larger, you add a few nested directories, and suddenly, the AI goes sideways. It forgets how your routing works. It tries to reinstall dependencies you already settled days ago. Worst of all, it accidentally overwrites a feature you already fixed.&lt;/p&gt;

&lt;p&gt;If you like switching between models—say, bouncing from Claude 3.5 Sonnet to Gemini 1.5 Pro depending on usage limits—onboarding the new model becomes an absolute nightmare. You waste hundreds of tokens just trying to explain, "No, don't use that database library, use this one."&lt;/p&gt;

&lt;p&gt;To solve this, I built a lightweight framework in my root directory called the .ai_context protocol. It keeps the AI grounded, enforces strict guardrails, and drops token bills significantly.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswbn58l3if8y2s6ikwsj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswbn58l3if8y2s6ikwsj.png" alt=" " width="374" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is exactly how it works and why you should steal it for your current projects.&lt;/p&gt;

&lt;p&gt;The Core Fix: A Router for AI Context&lt;br&gt;
Most people let their AI tools blindly scan their whole workspace or pass massive chunks of code back and forth in the prompt. This burns through your token limit and clutters the LLM's working memory with noise it doesn't need for simple tasks.&lt;/p&gt;

&lt;p&gt;The .ai_context protocol changes that by introducing five simple Markdown files at your project root:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;your-project-root/
├── .ai_context/
│   ├── README.md               &amp;lt;-- The "Router" &amp;amp; rules
│   ├── completed_features.md   &amp;lt;-- Read-only historical log
│   ├── future_roadmap.md       &amp;lt;-- The strict backlog
│   ├── architecture_map.md     &amp;lt;-- File tree &amp;amp; structural flow
│   └── secrets_manifest.md     &amp;lt;-- Tracks env variables safely
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The real magic here is the README.md. It acts as a traffic controller: instead of the AI loading all the files simultaneously, the README explicitly dictates when the agent is allowed to open each of the other files.&lt;/p&gt;

&lt;p&gt;If you are just asking for a small CSS bug fix, the AI reads the README, realizes it doesn't need to touch the roadmap or secrets log, and stops right there. Huge token savings.&lt;/p&gt;
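&lt;p&gt;To make the router idea concrete, here is a minimal sketch of what the README.md could contain. The five file names come from the structure above; the specific rule wording is illustrative, not a prescribed format:&lt;/p&gt;

```markdown
# AI Context Router

Read this file first. Do not open the other context files
unless the current task matches a rule below.

## Routing rules
- UI or styling bug fix: read nothing else; work from the files the user names.
- New feature: read future_roadmap.md, then architecture_map.md.
- Refactor or code reuse: read completed_features.md to avoid duplicating helpers.
- Anything touching credentials or config: read secrets_manifest.md first.

## Hard rules
- Never write literal secret values into source files.
- Never reimplement a feature listed in completed_features.md.
- Append to completed_features.md only after the user confirms a task is done.
```

&lt;p&gt;Keeping this file short matters: it is the one file the agent reads on every task, so every line in it costs tokens on every request.&lt;/p&gt;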

&lt;p&gt;Why This Actually Works (From a Human Perspective)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Zero-Friction Model Handoffs&lt;br&gt;
When you switch to a brand new AI agent, you don't need to write a massive explanation. You simply prompt it: "Read the .ai_context/README.md and tell me what our next task is." The new model is instantly on track without guessing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Guardrails Against Hallucination&lt;br&gt;
Because the AI maintains a read-only ledger of what is already built (completed_features.md), it stops inventing weird, duplicate utility functions. It knows exactly what tools are available in the codebase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bulletproof Security&lt;br&gt;
We’ve all seen AI agents accidentally hardcode a secret token or an API key right into a client-side file. The secrets_manifest.md keeps a strict map of environment variable locations without ever exposing the actual values. It forces the AI to check your .gitignore configuration before writing backend logic.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
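&lt;p&gt;The .gitignore check from point 3 can even be automated outside the AI loop. Below is a hypothetical helper sketch; the manifest line format (one &lt;code&gt;VAR_NAME: path&lt;/code&gt; entry per variable) is my assumption, not part of the article's protocol:&lt;/p&gt;

```python
# Hypothetical helper: verify that every file listed in secrets_manifest.md
# is mentioned in .gitignore before an agent writes backend logic.
# Assumed manifest line format (not from the article): "VAR_NAME: relative/path".


def manifest_paths(manifest_text):
    """Extract the file paths named in a secrets manifest."""
    paths = []
    for line in manifest_text.splitlines():
        if ":" in line and not line.lstrip().startswith("#"):
            _, _, path = line.partition(":")
            paths.append(path.strip())
    return paths


def unignored_secret_files(manifest_text, gitignore_text):
    """Return manifest paths that .gitignore does not mention.

    Naive literal comparison; real .gitignore matching uses glob
    patterns, so treat a pass here as advisory (see git check-ignore).
    """
    ignored = {line.strip() for line in gitignore_text.splitlines() if line.strip()}
    return [p for p in manifest_paths(manifest_text) if p not in ignored]


# Example: the manifest names .env.local, but .gitignore only lists .env,
# so the helper flags .env.local before the AI touches backend code.
leaks = unignored_secret_files("SUPABASE_KEY: .env.local", ".env\nnode_modules/")
```

&lt;p&gt;A pre-commit hook that runs a check like this gives you a second line of defense if the agent ignores the manifest.&lt;/p&gt;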

&lt;p&gt;How to Set It Up Instantly&lt;br&gt;
If you want to try this out, I made a single-prompt setup script. You just copy the prompt, drop it into your workspace AI agent, and it generates the entire folder structure automatically, populated from your current repository layout.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://buildbykrishna.netlify.app/digital-gerden-blog/30-ai-prompt-library/project-initialization/ai-context-promt/" rel="noopener noreferrer"&gt;Full prompt link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether you are maintaining legacy projects, building side hustles, or fully embracing the AI-assisted development loop, this is the missing manual. It takes two minutes to set up, but it completely changes how reliably your AI handles your code.&lt;/p&gt;

&lt;p&gt;How are you keeping your workspace agents from drifting out of context? Drop a comment below—I’d love to see how other people are organizing this.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why I Started Using Anti-Gravity with Supabase and Clerk for My Projects</title>
      <dc:creator>Krishna kant singh</dc:creator>
      <pubDate>Sat, 16 May 2026 08:24:05 +0000</pubDate>
      <link>https://dev.to/afkkrishna/why-i-started-using-anti-gravity-with-supabase-and-clerk-for-my-projects-5501</link>
      <guid>https://dev.to/afkkrishna/why-i-started-using-anti-gravity-with-supabase-and-clerk-for-my-projects-5501</guid>
      <description>&lt;p&gt;While working on modern web projects, I realized that setting up authentication, backend services, and databases separately takes a lot of time. That’s when I came across Anti-Gravity. It made the whole workflow much simpler by working smoothly with Supabase and Clerk.&lt;/p&gt;

&lt;p&gt;Instead of spending hours configuring everything manually, I could focus more on building the actual project. The integration felt clean, beginner-friendly, and surprisingly fast. Whether you are building a SaaS product, dashboard, or personal project, Anti-Gravity helps reduce unnecessary setup work and keeps development organized.&lt;/p&gt;

</description>
      <category>antigravity</category>
      <category>supabase</category>
      <category>webdeveloper</category>
      <category>clerk</category>
    </item>
  </channel>
</rss>
