<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harsh Gosavi</title>
    <description>The latest articles on DEV Community by Harsh Gosavi (@harsh_gosavi_).</description>
    <link>https://dev.to/harsh_gosavi_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3906128%2F0726d311-ac1b-4b21-ae6c-80283061907b.png</url>
      <title>DEV Community: Harsh Gosavi</title>
      <link>https://dev.to/harsh_gosavi_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/harsh_gosavi_"/>
    <language>en</language>
    <item>
      <title>Why Using AI with Real Data is Riskier Than You Think (And How I Built a Fix)</title>
      <dc:creator>Harsh Gosavi</dc:creator>
      <pubDate>Thu, 30 Apr 2026 12:44:25 +0000</pubDate>
      <link>https://dev.to/harsh_gosavi_/why-using-ai-with-real-data-is-riskier-than-you-think-and-how-i-built-a-fix-35cg</link>
      <guid>https://dev.to/harsh_gosavi_/why-using-ai-with-real-data-is-riskier-than-you-think-and-how-i-built-a-fix-35cg</guid>
      <description>&lt;p&gt;We are using AI tools everywhere — from coding assistants to resume builders to business workflows.&lt;/p&gt;

&lt;p&gt;But there’s a problem most people ignore:&lt;/p&gt;

&lt;p&gt;We are pasting sensitive data into AI systems without thinking twice.&lt;/p&gt;

&lt;p&gt;Emails. API keys. Client details. Internal documents.&lt;/p&gt;

&lt;p&gt;And once that data is sent, we lose control over how it’s processed.&lt;/p&gt;




&lt;h2&gt;The Problem Nobody Talks About&lt;/h2&gt;

&lt;p&gt;AI tools are powerful, but they are not designed with user-side privacy protection in mind.&lt;/p&gt;

&lt;p&gt;Most users either:&lt;/p&gt;

&lt;p&gt;• Manually remove sensitive data before using AI&lt;br&gt;
• Or ignore the risk completely&lt;/p&gt;

&lt;p&gt;Neither approach is reliable.&lt;/p&gt;

&lt;p&gt;Manual editing is slow and error-prone. Ignoring the risk can lead to serious consequences.&lt;/p&gt;




&lt;h2&gt;The Idea: A Privacy Layer for AI&lt;/h2&gt;

&lt;p&gt;Instead of changing how AI works, I asked:&lt;/p&gt;

&lt;p&gt;What if we add a security layer before the data ever reaches the LLM?&lt;/p&gt;

&lt;p&gt;That’s how ARGUS OBSIDIAN was built.&lt;/p&gt;




&lt;h2&gt;How ARGUS Works&lt;/h2&gt;

&lt;p&gt;ARGUS sits between the user and the AI model and processes data in real time.&lt;/p&gt;

&lt;p&gt;The system follows a simple pipeline:&lt;/p&gt;

&lt;p&gt;Input → Detect → Mask → Send → Restore → Display&lt;/p&gt;
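
&lt;p&gt;To make the pipeline concrete, here’s a minimal Python sketch of the flow. The names mask_sensitive, llm_call, and restore are hypothetical stand-ins I’m using for illustration, not ARGUS’s actual code; each step is sketched in the sections below.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of the pipeline, not the real implementation.
def protected_query(user_text, llm_call):
    masked_text, mapping = mask_sensitive(user_text)  # Detect + Mask
    raw_reply = llm_call(masked_text)                 # Send: placeholders only
    return restore(raw_reply, mapping)                # Restore, then Display
&lt;/code&gt;&lt;/pre&gt;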




&lt;h3&gt;1. Detection&lt;/h3&gt;

&lt;p&gt;The system identifies sensitive data such as the following (a rough detection sketch appears after the list):&lt;/p&gt;

&lt;p&gt;• API keys&lt;br&gt;
• Emails&lt;br&gt;
• Phone numbers&lt;br&gt;
• Passwords&lt;br&gt;
• Addresses&lt;/p&gt;
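
&lt;p&gt;As a rough illustration, this kind of detection can be sketched with regular expressions. These patterns are simplified examples I wrote for this post, not ARGUS’s actual rules:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

# Simplified example patterns; production detection needs far more care.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "API_KEY": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def detect(text):
    # Return (label, value) pairs for every sensitive span found.
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group()))
    return hits
&lt;/code&gt;&lt;/pre&gt;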




&lt;h3&gt;2. Masking&lt;/h3&gt;

&lt;p&gt;Sensitive data is replaced with placeholders:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[EMAIL_1]&lt;/code&gt;, &lt;code&gt;[API_KEY_1]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;A placeholder-to-original mapping is stored internally so the real values can be restored later.&lt;/p&gt;
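
&lt;p&gt;In code, the masking step might look like this. It builds on the detect() sketch above and is, again, illustrative only:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def mask_sensitive(text):
    # Replace each detected value with a numbered placeholder like [EMAIL_1]
    # and remember the placeholder-to-value mapping for later restoration.
    mapping = {}
    counters = {}
    for label, value in detect(text):
        if value in mapping.values():
            continue  # this value was already masked
        counters[label] = counters.get(label, 0) + 1
        placeholder = "[{}_{}]".format(label, counters[label])
        mapping[placeholder] = value
        text = text.replace(value, placeholder)
    return text, mapping
&lt;/code&gt;&lt;/pre&gt;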




&lt;h3&gt;3. Secure Processing&lt;/h3&gt;

&lt;p&gt;Only the masked version is sent to the AI model.&lt;/p&gt;

&lt;p&gt;This ensures that raw sensitive data never leaves the system.&lt;/p&gt;
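
&lt;p&gt;One way to enforce that guarantee is to fail closed at the send step: if a raw value somehow survives masking, the prompt is never forwarded. A sketch, assuming the mapping from the masking step:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def send_masked(masked_text, mapping, llm_call):
    # Refuse to send if any original value is still present in the prompt.
    for value in mapping.values():
        if value in masked_text:
            raise RuntimeError("raw sensitive value about to leave the system")
    return llm_call(masked_text)  # the model only ever sees placeholders
&lt;/code&gt;&lt;/pre&gt;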




&lt;h3&gt;4. Restoration&lt;/h3&gt;

&lt;p&gt;After receiving the response, ARGUS restores the original data seamlessly.&lt;/p&gt;

&lt;p&gt;The user sees a clean, natural output without any loss of meaning.&lt;/p&gt;
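
&lt;p&gt;Restoration is the simplest step: walk the stored mapping and swap each placeholder back before display. A sketch, with a commented round trip using the hypothetical helpers above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def restore(response_text, mapping):
    # Swap every placeholder back to its original value for display.
    for placeholder, value in mapping.items():
        response_text = response_text.replace(placeholder, value)
    return response_text

# Example round trip:
#   masked, mapping = mask_sensitive("Reach me at jane@example.com")
#   masked is now "Reach me at [EMAIL_1]"
#   reply = llm_call(masked)          # model sees only the placeholder
#   print(restore(reply, mapping))    # user sees the real address again
&lt;/code&gt;&lt;/pre&gt;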




&lt;h2&gt;The Experience&lt;/h2&gt;

&lt;p&gt;To make the system usable, I designed it as a chat interface.&lt;/p&gt;

&lt;p&gt;The goal was simple:&lt;/p&gt;

&lt;p&gt;The user should not have to think about privacy.&lt;/p&gt;

&lt;p&gt;They type normally. The system protects automatically.&lt;/p&gt;




&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;AI adoption is increasing rapidly, but privacy practices are not keeping up.&lt;/p&gt;

&lt;p&gt;If we want AI to be trusted in real-world workflows, we need systems that protect users by default.&lt;/p&gt;

&lt;p&gt;ARGUS is a step in that direction.&lt;/p&gt;




&lt;h2&gt;What’s Next&lt;/h2&gt;

&lt;p&gt;• Support for more sensitive data types&lt;br&gt;
• Local model integration for full privacy&lt;br&gt;
• Browser-level protection for all AI tools&lt;/p&gt;




&lt;h2&gt;Final Thought&lt;/h2&gt;

&lt;p&gt;AI should not force users to choose between convenience and privacy.&lt;/p&gt;

&lt;p&gt;It should give both.&lt;/p&gt;




&lt;p&gt;ARGUS is an attempt to make that possible.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
