<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aditya Tiwari</title>
    <description>The latest articles on DEV Community by Aditya Tiwari (@aditya_tiwari_techo).</description>
    <link>https://dev.to/aditya_tiwari_techo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3601026%2F8a63575a-4bae-4944-9da2-af6eec711a5e.jpg</url>
      <title>DEV Community: Aditya Tiwari</title>
      <link>https://dev.to/aditya_tiwari_techo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aditya_tiwari_techo"/>
    <language>en</language>
    <item>
      <title>Chatlectify: turn your chat history into a writing style your LLM can reuse</title>
      <dc:creator>Aditya Tiwari</dc:creator>
      <pubDate>Mon, 20 Apr 2026 06:21:56 +0000</pubDate>
      <link>https://dev.to/aditya_tiwari_techo/chatlectify-turn-your-chat-history-into-a-writing-style-your-llm-can-reuse-4d3e</link>
      <guid>https://dev.to/aditya_tiwari_techo/chatlectify-turn-your-chat-history-into-a-writing-style-your-llm-can-reuse-4d3e</guid>
      <description>&lt;p&gt;&lt;strong&gt;Two years of daily Claude + ChatGPT.&lt;/strong&gt; They've seen probably a million tokens of my writing. Every response still opens with &lt;em&gt;"Certainly!"&lt;/em&gt; or &lt;em&gt;"Great question!"&lt;/em&gt; and closes with &lt;em&gt;"In conclusion…"&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Nobody writes like that. The model has no idea who you are — you're just another session.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;chatlectify&lt;/strong&gt;. Point it at your exported chat history (Claude / ChatGPT / Gemini JSON, or a folder of your own writing — blog posts, emails, notes). It outputs a &lt;code&gt;SKILL.md&lt;/code&gt; + &lt;code&gt;system_prompt.txt&lt;/code&gt; that makes the model write like you.&lt;/p&gt;

&lt;h2&gt;How it works&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Extracts ~20 stylometric features from your messages — sentence-length distribution, contraction rate, bullet usage, hedge words, typo rate, punctuation histograms, question-vs-imperative ratio, top sentence starters (a toy sketch of this step follows the list)&lt;/li&gt;
&lt;li&gt;Picks a stratified sample of your messages across length buckets as exemplars&lt;/li&gt;
&lt;li&gt;One LLM call distills it all into a portable style file&lt;/li&gt;
&lt;/ul&gt;
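
&lt;p&gt;To make the extraction step concrete, here's a toy sketch of the idea. It is not chatlectify's actual code; the function and feature names are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import re
from collections import Counter

# Rough contraction matcher: "don't", "it's", "we're", ...
CONTRACTION = re.compile(r"\b\w+'(?:t|s|re|ve|ll|d|m)\b", re.IGNORECASE)

def style_features(messages):
    """Toy stylometric summary of a list of chat messages."""
    text = " ".join(messages)
    sentences = [s.strip() for s in re.split(r"[.!?\n]+", text) if s.strip()]
    words = text.split()
    starters = Counter(s.split()[0].lower() for s in sentences if s.split())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "contraction_rate": len(CONTRACTION.findall(text)) / max(len(words), 1),
        "questions_per_sentence": text.count("?") / max(len(sentences), 1),
        "top_starters": starters.most_common(5),
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;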

&lt;h2&gt;Privacy&lt;/h2&gt;

&lt;p&gt;Runs locally. Exactly one outbound LLM call to your configured model — the synth step that writes the style file. That call includes your feature summary and ~40 exemplar messages (the stratified sample). Nothing else leaves your machine. No telemetry, no cloud backend, no account.&lt;/p&gt;

&lt;h2&gt;Usage&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install chatlectify
chatlectify all ./conversations.json --out-dir ./my_skill
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Drop the folder into &lt;code&gt;~/.claude/skills/&lt;/code&gt; or paste &lt;code&gt;system_prompt.txt&lt;/code&gt; into any model that takes one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/0x1Adi/chatlectify" rel="noopener noreferrer"&gt;https://github.com/0x1Adi/chatlectify&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Curious what people think. Also — which export formats should I add next? Slack, iMessage, email, Discord, Obsidian vault?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>showdev</category>
      <category>writing</category>
    </item>
    <item>
      <title>🎭 Slopsquatting: The Supply Chain Attack Hiding in Plain Sight</title>
      <dc:creator>Aditya Tiwari</dc:creator>
      <pubDate>Fri, 07 Nov 2025 11:21:33 +0000</pubDate>
      <link>https://dev.to/aditya_tiwari_techo/slopsquatting-the-supply-chain-attack-hiding-in-plain-sight-5ai8</link>
      <guid>https://dev.to/aditya_tiwari_techo/slopsquatting-the-supply-chain-attack-hiding-in-plain-sight-5ai8</guid>
      <description>&lt;p&gt;Your AI coding assistant just suggested a package that doesn't exist. An attacker is about to register it with malware.&lt;/p&gt;

&lt;p&gt;Researchers analyzed 576,000 AI-generated code samples and found something terrifying:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;205,474 unique "phantom packages" that don't exist on PyPI/npm&lt;/li&gt;
&lt;li&gt;43% repeat &lt;strong&gt;perfectly&lt;/strong&gt; across identical prompts&lt;/li&gt;
&lt;li&gt;Commercial AI (GPT-4, Claude, Copilot): 5.2% hallucination rate&lt;/li&gt;
&lt;li&gt;Open-source LLMs: 21.7% hallucination rate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't typosquatting. It's slopsquatting: exploiting systematic AI behavior rather than one-off human typos.&lt;/p&gt;

&lt;p&gt;The attack is trivial:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Query an AI assistant with common prompts&lt;/li&gt;
&lt;li&gt;Collect the hallucinated package names&lt;/li&gt;
&lt;li&gt;Register them on PyPI/npm with malicious code&lt;/li&gt;
&lt;li&gt;Wait for developers to &lt;code&gt;pip install&lt;/code&gt; your malware&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's what makes this surreal: despite six months of security research, 205K identified targets, and trivial exploitation, zero confirmed attacks exist in the wild. The window for defense is open, but it's closing fast.&lt;/p&gt;

&lt;p&gt;I wrote a deep-dive covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why AI reliably hallucinates the same phantom packages&lt;/li&gt;
&lt;li&gt;Which security tools detect this (spoiler: almost none)&lt;/li&gt;
&lt;li&gt;A 15-minute scanner you can deploy today&lt;/li&gt;
&lt;li&gt;Why zero attacks won't last much longer&lt;/li&gt;
&lt;/ul&gt;
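
&lt;p&gt;To illustrate the defensive idea, here's my own minimal sketch (not the scanner from the article): it asks PyPI's public JSON API whether each dependency actually exists before you install it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys
import urllib.error
import urllib.request

def exists_on_pypi(name):
    """True if `name` resolves on PyPI's JSON API, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    # Usage: python phantom_check.py requirements.txt
    for line in open(sys.argv[1]):
        pkg = line.split("==")[0].split("[")[0].strip()
        if pkg and not pkg.startswith("#") and not exists_on_pypi(pkg):
            print(f"PHANTOM? {pkg!r} is not on PyPI. Verify before installing.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;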

&lt;p&gt;&lt;strong&gt;Read the full analysis:&lt;/strong&gt; &lt;a href="https://lnkd.in/dw6R2qSN" rel="noopener noreferrer"&gt;https://lnkd.in/dw6R2qSN&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're using AI coding assistants (and you probably are), this affects you.&lt;/p&gt;

&lt;p&gt;Read it before the first confirmed attack makes headlines.&lt;/p&gt;

&lt;p&gt;Disclaimer: Personal analysis based on my cybersecurity background. Not legal advice. Views are my own.&lt;/p&gt;


</description>
      <category>ai</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Are you fuzzing?</title>
      <dc:creator>Aditya Tiwari</dc:creator>
      <pubDate>Fri, 07 Nov 2025 11:17:02 +0000</pubDate>
      <link>https://dev.to/aditya_tiwari_techo/are-you-fuzzing-2j98</link>
      <guid>https://dev.to/aditya_tiwari_techo/are-you-fuzzing-2j98</guid>
      <description>&lt;p&gt;Why Fuzzing Matters More Than Ever in the AI Code Generation Era&lt;/p&gt;

&lt;p&gt;By 2025, nearly half of all code in AI-assisted projects is generated by LLMs. 63% of developers now use AI tools daily. &lt;/p&gt;

&lt;p&gt;We've embraced the productivity gains without updating our testing practices.&lt;/p&gt;

&lt;p&gt;Result? We're testing AI-generated code with techniques designed to catch the mistakes humans make.&lt;/p&gt;

&lt;p&gt;The data is stark: Google's AI-powered fuzzer found a vulnerability in OpenSSL that had existed for 20 years. Another AI system found a SQLite bug that 150 CPU-hours of traditional fuzzing missed entirely.&lt;/p&gt;

&lt;p&gt;These aren't edge cases. As of May 2025, OSS-Fuzz has identified 13,000+ vulnerabilities across 1,000+ projects, and the 26 found by AI-enhanced fuzz targets were all unreachable by human-written test harnesses.&lt;/p&gt;

&lt;p&gt;I spent time researching what actually works for testing AI-generated code. The answer: automated fuzzing. Not because it's trendy, but because it's the only technique that doesn't rely on human assumptions about how code should behave.&lt;/p&gt;
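
&lt;p&gt;A coverage-guided harness can be a few lines. Here's a minimal sketch using Google's Atheris fuzzer (&lt;code&gt;pip install atheris&lt;/code&gt;), with the stdlib JSON parser standing in for whatever module your assistant just generated:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys
import atheris

# Instrument imports so coverage feedback can guide the fuzzer.
with atheris.instrument_imports():
    import json  # stand-in for the AI-generated module under test

def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(1024)
    try:
        json.loads(text)
    except ValueError:
        pass  # malformed input is expected; crashes and hangs are not

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;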

&lt;p&gt;Wrote up the full analysis with implementation guide and cost breakdown: &lt;a href="https://lnkd.in/dYRNQxEB" rel="noopener noreferrer"&gt;https://lnkd.in/dYRNQxEB&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools are free. Techniques are proven. What's missing is organizational adoption.&lt;/p&gt;

&lt;p&gt;Disclaimer: Views are my own.&lt;/p&gt;


</description>
      <category>cybersecurity</category>
      <category>security</category>
      <category>go</category>
      <category>ai</category>
    </item>
    <item>
      <title>Announcing SlopGuard — Open-Source Defence Against AI Supply Chain Attacks</title>
      <dc:creator>Aditya Tiwari</dc:creator>
      <pubDate>Fri, 07 Nov 2025 11:16:02 +0000</pubDate>
      <link>https://dev.to/aditya_tiwari_techo/announcing-slopguard-open-source-defence-against-ai-supply-chain-attacks-46ph</link>
      <guid>https://dev.to/aditya_tiwari_techo/announcing-slopguard-open-source-defence-against-ai-supply-chain-attacks-46ph</guid>
      <description>&lt;p&gt;Your AI coding assistant just suggested installing a package. It doesn't exist. You install it anyway. Now you're compromised.&lt;/p&gt;

&lt;p&gt;This isn't hypothetical—AI models hallucinate non-existent package names in 5-21% of generated code. Research analyzing 576,000 code samples found 205,000+ unique phantom packages, with 58% recurring predictably across sessions.&lt;/p&gt;

&lt;p&gt;Attackers exploit this by monitoring AI outputs, registering these hallucinated packages with malware, and waiting for developers to blindly install them. It's called "slopsquatting."&lt;/p&gt;

&lt;p&gt;While exploring AI supply chain risks (wrote about it here: &lt;a href="https://lnkd.in/dS3D-zwt" rel="noopener noreferrer"&gt;https://lnkd.in/dS3D-zwt&lt;/a&gt;), I built SlopGuard to detect these attacks before they reach production.&lt;/p&gt;

&lt;p&gt;🔍 Technical approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3-stage lazy-loading trust scorer (downloads → dependents → maintainer/GitHub)&lt;/li&gt;
&lt;li&gt;87% of packages exit at stage 1 (basic trust); only 3% need full analysis&lt;/li&gt;
&lt;li&gt;Detects typosquatting (Levenshtein distance; a toy version is sketched below), namespace squatting, download inflation, and ownership changes&lt;/li&gt;
&lt;li&gt;Automated scoring from verifiable signals, with no manual whitelists that break&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚡ Validated performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&amp;lt;3% false positives (tested against 500 legitimate + 500 malicious packages)&lt;/li&gt;
&lt;li&gt;96% detection rate on documented supply chain attacks&lt;/li&gt;
&lt;li&gt;7 seconds to scan 700+ packages (warm cache)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Built in Ruby, ~2,500 lines, fully open source (MIT).&lt;/p&gt;

&lt;p&gt;Current stage: early development, personal research project. RubyGems/PyPI/Golang support for now. Detection is metadata-based, so it can't analyze behavioral patterns.&lt;/p&gt;
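
&lt;p&gt;SlopGuard itself is Ruby, but for a language-agnostic picture, here's a toy Python version of the edit-distance check. The popularity list is a hypothetical stand-in; the real scorer uses live registry signals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Hypothetical shortlist; the real scorer uses live download/dependent data.
POPULAR = {"requests", "numpy", "pandas", "django", "flask"}

def typosquat_suspects(name, max_distance=2):
    """Popular names within max_distance edits of `name`, excluding exact matches."""
    return sorted(p for p in POPULAR
                  if p != name and levenshtein(name, p) in range(1, max_distance + 1))

print(typosquat_suspects("reqeusts"))  # ['requests'] -- likely a typosquat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;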

&lt;p&gt;Looking for technical peer review:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are the trust thresholds (80/70/40) optimal for production?&lt;/li&gt;
&lt;li&gt;What attack patterns am I missing?&lt;/li&gt;
&lt;li&gt;Would you deploy this in CI/CD, or does it solve the wrong problem?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try it: &lt;a href="https://lnkd.in/dHyvucVQ" rel="noopener noreferrer"&gt;https://lnkd.in/dHyvucVQ&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://lnkd.in/d2TsnQ3s" rel="noopener noreferrer"&gt;https://lnkd.in/d2TsnQ3s&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you work on supply chain security or AI security research—where are the blind spots in this approach?&lt;/p&gt;

&lt;p&gt;Disclaimer: This is my personal project in early development. The algorithms are based on academic research (TypoSmart, Sonatype data).&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>ruby</category>
      <category>go</category>
    </item>
  </channel>
</rss>
