<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deviprasad Shetty</title>
    <description>The latest articles on DEV Community by Deviprasad Shetty (@deviprasadshetty).</description>
    <link>https://dev.to/deviprasadshetty</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1630028%2F79a4d0c3-0379-4345-9029-78b75032e23c.jpg</url>
      <title>DEV Community: Deviprasad Shetty</title>
      <link>https://dev.to/deviprasadshetty</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deviprasadshetty"/>
    <language>en</language>
    <item>
      <title>How Code-Executing AI Agents are Making 128K Context Windows Obsolete</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Sat, 10 Jan 2026 16:02:00 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/how-code-executing-ai-agents-are-making-128k-context-windows-obsolete-5dk</link>
      <guid>https://dev.to/deviprasadshetty/how-code-executing-ai-agents-are-making-128k-context-windows-obsolete-5dk</guid>
      <description>&lt;h1&gt;
  
  
  🧠 Recursive Language Models: How Code-Executing AI Agents Will Make 128K Context Windows Obsolete
&lt;/h1&gt;

&lt;p&gt;We've spent years chasing a mythical number: the &lt;strong&gt;context window&lt;/strong&gt;. 8K. 32K. 128K. A million. The assumption was simple—bigger context equals smarter model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That assumption is wrong.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Context window expansion is a brute-force solution to a nuanced problem. While researchers race to cram more tokens into a single forward pass, a different paradigm is emerging: the &lt;strong&gt;Recursive Language Model (RLM)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It doesn't need a larger context. It needs a smaller, &lt;em&gt;smarter&lt;/em&gt; one.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔍 The Problem: Context Rot
&lt;/h2&gt;

&lt;p&gt;Here's what the benchmarks don't tell you: &lt;strong&gt;long context is expensive, slow, and often wasted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A typical agent tasked with analyzing a lengthy document will load the entire 50,000-word text into its context window, process it once, and then struggle to recall a specific sentence from the middle. This is &lt;strong&gt;"context rot"&lt;/strong&gt; in action—attention scores dilute, and the model forgets what it just read.&lt;/p&gt;

&lt;p&gt;Buying a larger context window is like buying a larger suitcase because you can't decide what to pack. It doesn't solve the organizational problem.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔄 The RLM Inversion: Don't Process, Orchestrate
&lt;/h2&gt;

&lt;p&gt;The Recursive Language Model flips the script. Instead of ingesting data, it &lt;strong&gt;interacts&lt;/strong&gt; with data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The LLM's context is not a storage tank. It's a workbench."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An RLM is given a persistent &lt;strong&gt;Python REPL&lt;/strong&gt;. The data—whether it's a 10,000-page PDF or a massive database—is not loaded into the model's context. It exists as a variable, &lt;code&gt;input_data&lt;/code&gt;, accessible only through code.&lt;/p&gt;

&lt;p&gt;This forces a fundamental shift in behavior:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. 🔎 Search, Don't Read
&lt;/h3&gt;

&lt;p&gt;The RLM can't "see" the data directly. It must write Python code to search for keywords, filter for entities, or slice into specific sections. It retrieves only what it needs.&lt;/p&gt;
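&lt;p&gt;A minimal sketch of that search step (&lt;code&gt;input_data&lt;/code&gt; is the variable described above; the &lt;code&gt;find_snippets&lt;/code&gt; helper and its contents are purely illustrative):&lt;/p&gt;

```python
# Hypothetical REPL turn: the model writes code like this instead of "reading" the data.
# input_data lives in the interpreter, never in the model's context window.
input_data = "... 50,000 words of text ... total revenue grew 12% ... more text ..."

def find_snippets(text, keyword, window=60):
    """Return small windows around each keyword hit, not the whole document."""
    snippets, start = [], 0
    while True:
        i = text.find(keyword, start)
        if i == -1:
            break
        snippets.append(text[max(0, i - window):i + window])
        start = i + len(keyword)
    return snippets

# Only these short snippets are pasted back into the model's context.
hits = find_snippets(input_data, "revenue")
```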

&lt;h3&gt;
  
  
  2. 💾 Store in RAM, Not in Neurons
&lt;/h3&gt;

&lt;p&gt;Intermediate findings are stored in Python variables, not in the model's context history. This acts as an "extended memory" that doesn't suffer from attention decay.&lt;/p&gt;
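&lt;p&gt;For example (the values below are purely illustrative), findings accumulated over several turns survive intact as interpreter state:&lt;/p&gt;

```python
# Extended memory: facts live as REPL state between reasoning turns,
# not as tokens subject to attention decay. All values here are illustrative.
findings = {}                                       # turn 1: initialize
findings["q3_revenue"] = "$4.2M"                    # turn 2: store an extracted fact
findings["risks"] = ["litigation", "supply chain"]  # turn 3: store a list
# Turn N: recall is exact, regardless of how many turns have passed.
summary = f"Q3 revenue was {findings['q3_revenue']}."
```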

&lt;h3&gt;
  
  
  3. 🤖 Delegate, Don't Deliberate
&lt;/h3&gt;

&lt;p&gt;For large datasets, the RLM can spawn &lt;strong&gt;"sub-LLMs"&lt;/strong&gt;—fresh model instances with clean contexts. It can batch-process 100 document chunks in parallel via &lt;code&gt;llm_batch()&lt;/code&gt;. The main RLM only sees the summaries, keeping its own context crystal clear.&lt;/p&gt;
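&lt;p&gt;A sketch of the delegation pattern (&lt;code&gt;llm_batch()&lt;/code&gt; is the call named above; it is stubbed here so the example runs standalone, and the chunking helper is an assumption):&lt;/p&gt;

```python
# llm_batch() would fan each prompt out to a fresh sub-LLM with a clean context;
# it is stubbed here so this sketch is self-contained.
def llm_batch(prompts):
    return [f"summary {n}" for n, _ in enumerate(prompts, 1)]

def chunk(text, size=1000):
    """Split a large document into fixed-size pieces for delegation."""
    return [text[i:i + size] for i in range(0, len(text), size)]

document = "x" * 5000                 # stand-in for a large document
chunks = chunk(document)              # 5 chunks of 1000 chars each
summaries = llm_batch(f"Summarize: {c}" for c in chunks)
# Only the summaries, not the raw chunks, re-enter the main RLM's context.
```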




&lt;h2&gt;
  
  
  ✨ The "Diffusion" Answer: Multi-Turn Reasoning
&lt;/h2&gt;

&lt;p&gt;Perhaps the most radical feature is the &lt;strong&gt;Diffusion Answer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a traditional chat model, the response is one-shot. Once a sentence is written, it's locked in. An RLM operates differently. It initializes an &lt;code&gt;answer&lt;/code&gt; state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ready&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model doesn't "respond"—it &lt;strong&gt;diffuses&lt;/strong&gt; its answer over multiple reasoning turns. It drafts, fact-checks, revises, and only sets &lt;code&gt;ready=True&lt;/code&gt; when the artifact is refined.&lt;/p&gt;
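&lt;p&gt;The control flow can be sketched as a loop over the &lt;code&gt;answer&lt;/code&gt; state (&lt;code&gt;revise()&lt;/code&gt; stands in for one model reasoning turn and is hypothetical):&lt;/p&gt;

```python
# Illustrative control loop for the diffusion-answer pattern. revise() stands in
# for one model turn (draft, fact-check, edit); here it is a fixed stub.
answer = {"content": "", "ready": False}

def revise(state, turn):
    state["content"] = f"draft v{turn}"
    state["ready"] = turn >= 3   # the model, not a rule, decides when it's refined
    return state

turn = 0
while not answer["ready"]:
    turn += 1
    answer = revise(answer, turn)
# Only now is answer["content"] released to the user.
```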




&lt;h2&gt;
  
  
  📊 Traditional Context vs. RLM
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Traditional Long-Context&lt;/th&gt;
&lt;th&gt;Recursive Language Model (RLM)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Handling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Load everything into context&lt;/td&gt;
&lt;td&gt;Access programmatically via code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Attention-based (decays)&lt;/td&gt;
&lt;td&gt;Python variables (persistent)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scaling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Larger context window&lt;/td&gt;
&lt;td&gt;Parallel sub-LLM delegation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transparency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Black box&lt;/td&gt;
&lt;td&gt;Fully auditable code trace&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🚀 Get Involved
&lt;/h2&gt;

&lt;p&gt;The RLM paradigm isn't just a theory—it's an architecture you can explore today.&lt;/p&gt;

&lt;p&gt;We've open-sourced a reference implementation of the RLM system, built with &lt;strong&gt;PydanticAI&lt;/strong&gt; and &lt;strong&gt;FastAPI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Check out the Repository on GitHub:&lt;/strong&gt; &lt;a href="https://github.com/deviprasadshetty-dev/Recursive-LLM" rel="noopener noreferrer"&gt;https://github.com/deviprasadshetty-dev/Recursive-LLM&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The future doesn't belong to the model with the longest memory. It belongs to the one that knows it doesn't need to remember everything.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this interesting, feel free to ⭐ the repo and share your thoughts on the RLM paradigm in the comments!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
      <category>architecture</category>
    </item>
    <item>
      <title>SCAR: A High-Trust Operating System for AI Coding Assistants (Stop Package Hallucinations in Your Repo)</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Sun, 09 Nov 2025 05:52:50 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/scar-a-high-trust-operating-system-for-ai-coding-assistants-stop-package-hallucinations-in-your-2g81</link>
      <guid>https://dev.to/deviprasadshetty/scar-a-high-trust-operating-system-for-ai-coding-assistants-stop-package-hallucinations-in-your-2g81</guid>
      <description>&lt;p&gt;AI coding assistants are everywhere—but trust is not.&lt;/p&gt;

&lt;p&gt;We’ve all seen it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Invented npm/PyPI packages that don’t exist.&lt;/li&gt;
&lt;li&gt;Confident code that ignores your architecture.&lt;/li&gt;
&lt;li&gt;“TODO: implement later” mocks accidentally shipped to production.&lt;/li&gt;
&lt;li&gt;Long context windows wasted because the model never actually reads your repo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SCAR fixes this.&lt;/p&gt;

&lt;p&gt;SCAR (Specification for Code Assistant Reliability) is a high-trust operating system for AI coding assistants. It’s an open specification powered by a single &lt;code&gt;prompt.yaml&lt;/code&gt; file that turns generic models into governed, senior-level engineering copilots.&lt;/p&gt;

&lt;p&gt;Get SCAR:&lt;br&gt;
&lt;a href="https://github.com/redmoon0x/scar-spec.git" rel="noopener noreferrer"&gt;https://github.com/redmoon0x/scar-spec.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What SCAR Solves&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Package hallucination&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enforces strict package verification rules.&lt;/li&gt;
&lt;li&gt;No suggesting libraries that don’t exist.&lt;/li&gt;
&lt;li&gt;Encourages verified, documented, actively maintained dependencies.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Misunderstanding developer intent&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Forces assistants to ask clarifying questions instead of guessing.&lt;/li&gt;
&lt;li&gt;Encourages breaking work into concrete steps.&lt;/li&gt;
&lt;li&gt;Aligns with your codebase, not generic examples.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mock vs. real implementation abuse&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defaults to production-grade, runnable code.&lt;/li&gt;
&lt;li&gt;Only uses mocks when explicitly requested or clearly appropriate.&lt;/li&gt;
&lt;li&gt;Bans empty stubs and TODOs in core paths.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Poor context usage in large codebases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires reading &lt;code&gt;package.json&lt;/code&gt;, &lt;code&gt;requirements.txt&lt;/code&gt;, README, styles, and structure.&lt;/li&gt;
&lt;li&gt;Forces alignment with existing patterns, naming, design systems, and architecture.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;Every hallucinated package, broken abstraction, and fake implementation is a drag on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delivery stability&lt;/li&gt;
&lt;li&gt;Developer trust&lt;/li&gt;
&lt;li&gt;Incident rate&lt;/li&gt;
&lt;li&gt;Onboarding and code review time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SCAR is designed to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple to adopt&lt;/li&gt;
&lt;li&gt;Transparent&lt;/li&gt;
&lt;li&gt;Compatible with any LLM / AI coding tool&lt;/li&gt;
&lt;li&gt;Auditable as part of your engineering governance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How to Use SCAR&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add SCAR to your tooling.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repo: &lt;code&gt;git clone https://github.com/redmoon0x/scar-spec.git&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Open &lt;code&gt;prompt.yaml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Load it as a system-level prompt.&lt;/strong&gt; Use SCAR as the non-editable “system message” for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your in-IDE assistant&lt;/li&gt;
&lt;li&gt;Your internal AI dev tools&lt;/li&gt;
&lt;li&gt;ChatOps bots that propose or edit code&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Layer your org rules on top.&lt;/strong&gt; Add framework choices, architecture constraints, and security rules, keeping SCAR as the foundation for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No hallucinated packages&lt;/li&gt;
&lt;li&gt;No incomplete implementations&lt;/li&gt;
&lt;li&gt;No design-system drift&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitor compliance.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log violations (e.g., hallucinated dependencies, missing error handling).&lt;/li&gt;
&lt;li&gt;Use SCAR as a standard in code reviews for AI-generated changes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
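&lt;p&gt;The "system-level prompt" step above can be sketched in a few lines (the message shape follows common chat-completion APIs and is an assumption; adapt it to your client library):&lt;/p&gt;

```python
# Minimal sketch: use SCAR's prompt.yaml as a non-editable system message,
# with org-specific rules layered on top of it.
def build_messages(scar_text, user_request, org_rules=""):
    """SCAR is the foundation; org rules are appended, never replace it."""
    system = scar_text
    if org_rules:
        system += "\n\n# Org rules (layered on top)\n" + org_rules
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_request},
    ]

# Usage after cloning the repo:
#   scar_text = open("scar-spec/prompt.yaml", encoding="utf-8").read()
#   messages = build_messages(scar_text, "Add retry logic to the HTTP client")
```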

&lt;h2&gt;Who Should Use SCAR?&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Teams running AI coding copilots at scale&lt;/li&gt;
&lt;li&gt;Platform / DevEx engineers designing internal AI tools&lt;/li&gt;
&lt;li&gt;Security and compliance teams who need enforceable guardrails&lt;/li&gt;
&lt;li&gt;Solo devs who want their AI to behave like a senior engineer, not a code jukebox&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re serious about AI in your engineering stack, SCAR gives you a pragmatic, enforceable baseline.&lt;/p&gt;

&lt;p&gt;Start here:&lt;br&gt;
&lt;a href="https://github.com/redmoon0x/scar-spec.git" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>tooling</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Clickbait Detection with Machine Learning: A Complete Python Tutorial</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Sat, 04 Oct 2025 17:07:14 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/clickbait-detection-with-machine-learning-a-complete-python-tutorial-hf8</link>
      <guid>https://dev.to/deviprasadshetty/clickbait-detection-with-machine-learning-a-complete-python-tutorial-hf8</guid>
      <description>&lt;p&gt;Hey devs! 👋 Ever wondered how to build a real-world NLP classifier? Today, we're diving into clickbait detection using scikit-learn, TF-IDF, and Random Forest. I'll walk you through the entire process, from data prep to deployment on Hugging Face.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Clickbait Detection Matters
&lt;/h2&gt;

&lt;p&gt;In the age of social media, clickbait wastes time and spreads misinformation. As developers, we can build tools to combat this. My model achieves 91.45% accuracy on 32,000 headlines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dataset &amp;amp; Setup
&lt;/h2&gt;

&lt;p&gt;We're using the &lt;a href="https://www.kaggle.com/datasets/amananandrai/clickbait-dataset" rel="noopener noreferrer"&gt;Clickbait Dataset&lt;/a&gt; from Kaggle. Balanced classes: 16K clickbait, 16K real news.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pandas scikit-learn matplotlib seaborn joblib huggingface_hub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Data Loading &amp;amp; Preprocessing
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.model_selection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;

&lt;span class="c1"&gt;# Load data
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clickbait_data.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dropna&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inplace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Map labels
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rename&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;clickbait&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;inplace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;real&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;clickbait&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dataset shape: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;head&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Train-Test Split
&lt;/h2&gt;

&lt;p&gt;Stratified split to maintain class balance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;headline&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; 
    &lt;span class="n"&gt;test_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;stratify&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Train: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, Test: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Feature Extraction with TF-IDF
&lt;/h2&gt;

&lt;p&gt;Convert text to numerical features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.feature_extraction.text&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TfidfVectorizer&lt;/span&gt;

&lt;span class="n"&gt;vectorizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TfidfVectorizer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stop_words&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;english&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_features&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;X_train_vec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;X_test_vec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Feature matrix shape: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;X_train_vec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Model Training
&lt;/h2&gt;

&lt;p&gt;Random Forest for robust classification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.ensemble&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RandomForestClassifier&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RandomForestClassifier&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n_estimators&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n_jobs&lt;/span&gt;&lt;span class="o"&gt;=-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train_vec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Model trained! ✅&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Evaluation
&lt;/h2&gt;

&lt;p&gt;Check performance on test set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;classification_report&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;accuracy_score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;confusion_matrix&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;seaborn&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sns&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;

&lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test_vec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Accuracy: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;accuracy_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;%&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;classification_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;# Confusion Matrix
&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;heatmap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;annot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;d&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Blues&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Confusion Matrix&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accuracy: 91.45%&lt;/li&gt;
&lt;li&gt;Macro F1: 0.91&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing on Real Headlines
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;test_headlines&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You won&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t believe what this celebrity did!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;New study reveals surprising health benefits&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10 hacks to boost your productivity&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vectorizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_headlines&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pred&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_headlines&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;predictions&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; → &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pred&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy to Hugging Face
&lt;/h2&gt;

&lt;p&gt;Save and upload the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;joblib&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;huggingface_hub&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;HfApi&lt;/span&gt;

&lt;span class="c1"&gt;# Save locally
&lt;/span&gt;&lt;span class="n"&gt;joblib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clickbait_detector.pkl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;joblib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vectorizer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tfidf_vectorizer.pkl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Upload
&lt;/span&gt;&lt;span class="n"&gt;api&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HfApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upload_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;path_or_fileobj&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clickbait_detector.pkl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;path_in_repo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clickbait_detector.pkl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;repo_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Devishetty100/clickbait-detector&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-hf-token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Same for vectorizer
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Usage in Production
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;huggingface_hub&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hf_hub_download&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;joblib&lt;/span&gt;

&lt;span class="c1"&gt;# Load from HF
&lt;/span&gt;&lt;span class="n"&gt;model_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hf_hub_download&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Devishetty100/clickbait-detector&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clickbait_detector.pkl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;vectorizer_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hf_hub_download&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Devishetty100/clickbait-detector&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tfidf_vectorizer.pkl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;joblib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;vectorizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;joblib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vectorizer_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Predict
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;detect_clickbait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;headline&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;headline&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;features&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;detect_clickbait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Shocking truth about coffee!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Next Steps &amp;amp; Improvements
&lt;/h2&gt;

&lt;p&gt;While this model performs well, there are several directions worth exploring next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try BERT or other transformers for better accuracy&lt;/li&gt;
&lt;li&gt;Add multilingual support&lt;/li&gt;
&lt;li&gt;Build a web API with FastAPI&lt;/li&gt;
&lt;li&gt;Integrate into browser extensions&lt;/li&gt;
&lt;/ul&gt;
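
&lt;p&gt;As a sketch of the FastAPI idea above (illustrative only; the tiny inline model here stands in for the trained artifacts, which in practice you would load with &lt;code&gt;joblib&lt;/code&gt;), a minimal prediction endpoint might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from fastapi import FastAPI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny stand-in model so the sketch is self-contained; in practice you
# would joblib.load the clickbait_detector.pkl and tfidf_vectorizer.pkl
# artifacts saved earlier.
train_texts = ["You won't believe this one trick!", "Fed raises interest rates by 0.25%"]
train_labels = [1, 0]  # 1 = clickbait, 0 = legitimate
vectorizer = TfidfVectorizer().fit(train_texts)
model = LogisticRegression().fit(vectorizer.transform(train_texts), train_labels)

app = FastAPI()

@app.get("/predict")
def predict(headline: str):
    features = vectorizer.transform([headline])
    return {"headline": headline, "clickbait": int(model.predict(features)[0])}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Served with uvicorn, this exposes the classifier as &lt;code&gt;GET /predict?headline=...&lt;/code&gt;.&lt;/p&gt;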

&lt;p&gt;Feel free to fork the notebook and experiment!&lt;/p&gt;

&lt;p&gt;What do you think? Have you built similar classifiers? Share your projects in the comments!&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://www.kaggle.com/code/deviprasadshetty666/notebookb1c2f4f893" rel="noopener noreferrer"&gt;Kaggle Notebook&lt;/a&gt; | &lt;a href="https://huggingface.co/Devishetty100/clickbait-detector" rel="noopener noreferrer"&gt;HF Model&lt;/a&gt; | &lt;a href="https://huggingface.co/spaces/Devishetty100/clickbait-detector" rel="noopener noreferrer"&gt;Demo Space&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7nmgosxc23gqy77c2f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7nmgosxc23gqy77c2f5.png" alt=" " width="665" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>tutorial</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>SpecPilot – A No-Bloat GitHub Specs Alternative You’ll Actually Use</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Mon, 08 Sep 2025 10:37:41 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/specpilot-a-no-bloat-github-specs-alternative-youll-actually-use-4baf</link>
      <guid>https://dev.to/deviprasadshetty/specpilot-a-no-bloat-github-specs-alternative-youll-actually-use-4baf</guid>
      <description>&lt;p&gt;Specs are necessary—but bloat isn’t.&lt;/p&gt;

&lt;p&gt;GitHub's Spec Kit is good, but it's still too heavyweight for most developers. That's why I built SpecPilot – markdown-first, lightweight, and distraction-free.&lt;/p&gt;

&lt;p&gt;What you get:&lt;br&gt;
✔ Clean and simple&lt;br&gt;
✔ Easy to version&lt;br&gt;
✔ No fancy UI distractions&lt;br&gt;
✔ Fully open-source&lt;br&gt;
✔ Designed for developers, not managers&lt;/p&gt;

&lt;p&gt;Use it for APIs, features, or any spec that needs clarity and structure—without the fluff.&lt;/p&gt;

&lt;p&gt;Check it out here ➡ &lt;a href="https://github.com/redmoon0x/SpecPilot.git" rel="noopener noreferrer"&gt;SpecPilot GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Document better. Work smarter.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introducing Claude 3.7 Sonnet and GPT-3.5 in noir-llm v0.2.3</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Fri, 25 Apr 2025 12:50:59 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/introducing-claude-37-sonnet-and-gpt-35-in-noir-llm-v023-367i</link>
      <guid>https://dev.to/deviprasadshetty/introducing-claude-37-sonnet-and-gpt-35-in-noir-llm-v023-367i</guid>
      <description>&lt;h1&gt;
  
  
  Introducing Claude 3.7 Sonnet and GPT-3.5 in noir-llm v0.2.3
&lt;/h1&gt;

&lt;p&gt;I'm excited to announce the release of noir-llm v0.2.3, which now includes support for Anthropic's Claude 3.7 Sonnet and OpenAI's GPT-3.5 Turbo models! This update brings two powerful AI models to the noir-llm package, making them freely accessible for educational and personal projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is noir-llm?
&lt;/h2&gt;

&lt;p&gt;noir-llm is a Python package that provides a unified interface for accessing various LLM models freely. It offers a simple API for interacting with different language models, allowing you to easily switch between models without changing your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Models in v0.2.3
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Claude 3.7 Sonnet
&lt;/h3&gt;

&lt;p&gt;Claude 3.7 Sonnet is Anthropic's latest model, known for its advanced reasoning capabilities and nuanced responses. Our implementation includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Token refresh mechanism to maintain session validity&lt;/li&gt;
&lt;li&gt;Rate limit bypass with user agent rotation&lt;/li&gt;
&lt;li&gt;Exponential backoff for retries&lt;/li&gt;
&lt;li&gt;Clean response handling without debug information&lt;/li&gt;
&lt;li&gt;System prompt support&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GPT-3.5 Turbo
&lt;/h3&gt;

&lt;p&gt;GPT-3.5 Turbo is OpenAI's widely used model known for its fast response times and good general capabilities. Our implementation features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cookie refresh for session maintenance&lt;/li&gt;
&lt;li&gt;Rate limit detection and handling&lt;/li&gt;
&lt;li&gt;Streaming response parsing&lt;/li&gt;
&lt;li&gt;Clean response output&lt;/li&gt;
&lt;li&gt;System prompt support&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Use
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;noir-llm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Python API
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;noir&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;NoirClient&lt;/span&gt;

&lt;span class="c1"&gt;# Create a client
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;NoirClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# List available models
&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_available_models&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Available models: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Select Claude 3.7 model
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude-3-7-sonnet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Or select GPT-3.5 model
# client.select_model("gpt-3.5-turbo")
&lt;/span&gt;
&lt;span class="c1"&gt;# Set a system prompt
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_system_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Send a message
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the capital of France?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Command Line Interface
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start a chat session with Claude 3.7&lt;/span&gt;
noir-llm chat &lt;span class="nt"&gt;--model&lt;/span&gt; claude-3-7-sonnet

&lt;span class="c"&gt;# Or with GPT-3.5&lt;/span&gt;
noir-llm chat &lt;span class="nt"&gt;--model&lt;/span&gt; gpt-3.5-turbo

&lt;span class="c"&gt;# Send a single message&lt;/span&gt;
noir-llm send &lt;span class="s2"&gt;"What is the capital of France?"&lt;/span&gt; &lt;span class="nt"&gt;--model&lt;/span&gt; claude-3-7-sonnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Technical Implementation Details
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Claude 3.7 Integration
&lt;/h3&gt;

&lt;p&gt;The Claude 3.7 implementation connects to a proxy API that provides access to Anthropic's model. Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cookie management with automatic refresh every 5 minutes&lt;/li&gt;
&lt;li&gt;User agent rotation from a pool of common browser agents&lt;/li&gt;
&lt;li&gt;Exponential backoff with jitter for rate limit handling&lt;/li&gt;
&lt;li&gt;Response cleaning to remove artifacts and debug information&lt;/li&gt;
&lt;li&gt;Conversation history tracking
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example of the rate limit bypass implementation
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;403&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="c1"&gt;# Clear session and get new cookies
&lt;/span&gt;    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;refresh_cookies&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Rotate user agent
&lt;/span&gt;    &lt;span class="n"&gt;random_user_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_agents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;random_user_agent&lt;/span&gt;

    &lt;span class="c1"&gt;# Wait with exponential backoff
&lt;/span&gt;    &lt;span class="n"&gt;backoff_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;retry_delay&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;backoff_time&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  GPT-3.5 Integration
&lt;/h3&gt;

&lt;p&gt;The GPT-3.5 implementation uses a similar approach with some model-specific optimizations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streaming response handling for faster user experience&lt;/li&gt;
&lt;li&gt;Special parsing for the chunked response format&lt;/li&gt;
&lt;li&gt;Automatic retry logic for various error conditions&lt;/li&gt;
&lt;li&gt;Clean response formatting to remove debug information&lt;/li&gt;
&lt;/ul&gt;
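
&lt;p&gt;To make the chunked-response parsing concrete, here is a minimal sketch; the &lt;code&gt;data: {...}&lt;/code&gt; line format and the &lt;code&gt;choices&lt;/code&gt;/&lt;code&gt;delta&lt;/code&gt; field names are assumptions for illustration, not necessarily the package's actual wire format:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

def parse_stream_chunks(raw_lines):
    """Collect the text deltas from an SSE-style chunked response."""
    parts = []
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[5:].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk.get("choices", [{}])[0].get("delta", {}).get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Feeding each line of the stream through this as it arrives is what lets the CLI print tokens incrementally instead of waiting for the full response.&lt;/p&gt;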

&lt;h2&gt;
  
  
  Available Models
&lt;/h2&gt;

&lt;p&gt;noir-llm now provides access to a growing list of models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude 3.7 Sonnet: Anthropic's advanced reasoning model&lt;/li&gt;
&lt;li&gt;GPT-3.5 Turbo: OpenAI's reliable general-purpose model&lt;/li&gt;
&lt;li&gt;GLM-4-32B: A powerful language model with web search capabilities&lt;/li&gt;
&lt;li&gt;Z1-32B: Another powerful language model with web search capabilities&lt;/li&gt;
&lt;li&gt;Z1-Rumination: A model optimized for deep research and analysis&lt;/li&gt;
&lt;li&gt;Mistral-31-24B: A high-quality language model from Venice AI&lt;/li&gt;
&lt;li&gt;Llama-3.2-3B: A compact but powerful model from Venice AI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Disclaimer
&lt;/h2&gt;

&lt;p&gt;This package is for educational purposes only. Use at your own risk. The package accesses third-party APIs without official authorization, which may violate terms of service of the respective providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;We're continuously working to add more models and improve the existing implementations. Stay tuned for future updates!&lt;/p&gt;

&lt;p&gt;Have you tried noir-llm? What models would you like to see added next? Let me know in the comments!&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Introducing noir-llm: Free Access to Premium AI Models for Everyone 🚀</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Thu, 24 Apr 2025 09:58:13 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/introducing-noir-llm-free-access-to-premium-ai-models-for-everyone-36jn</link>
      <guid>https://dev.to/deviprasadshetty/introducing-noir-llm-free-access-to-premium-ai-models-for-everyone-36jn</guid>
      <description>&lt;h1&gt;
  
  
  Introducing noir-llm: Free Access to Premium AI Models for Everyone 🚀
&lt;/h1&gt;

&lt;h2&gt;
  
  
  The Problem 🤔
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhltqod6if7fkslh76rfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhltqod6if7fkslh76rfg.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As AI continues to transform our world, access to powerful language models has become increasingly important for education, research, and development. However, many premium AI models are locked behind expensive API subscriptions or require complex setup processes.&lt;/p&gt;

&lt;p&gt;This creates a significant barrier for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👨‍🎓 Students trying to learn about AI&lt;/li&gt;
&lt;li&gt;🔬 Researchers with limited budgets&lt;/li&gt;
&lt;li&gt;👨‍💻 Developers building prototypes&lt;/li&gt;
&lt;li&gt;👩‍🏫 Educators teaching AI concepts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution: noir-llm ✨
&lt;/h2&gt;

&lt;p&gt;I'm excited to introduce &lt;strong&gt;noir-llm&lt;/strong&gt; 🎉, a Python package that democratizes access to premium AI models by providing a simple, unified interface - completely free of charge.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;noir-llm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;noir-llm allows you to access powerful language models like GLM-4-32B, Mistral-31-24B, and Llama-3.2-3B without requiring API keys or subscriptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features 💫
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Simple, Unified API 🔌
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;noir&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;NoirClient&lt;/span&gt;

&lt;span class="c1"&gt;# Create a client
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;NoirClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# List available models
&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_available_models&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Available models: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Select a model
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mistral-31-24b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Send a message
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the capital of France?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. System Prompts 🎯
&lt;/h3&gt;

&lt;p&gt;Customize model behavior with system prompts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_system_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant specialized in explaining complex topics in simple terms.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Web Search Capabilities 🌐
&lt;/h3&gt;

&lt;p&gt;Enable models to search the web for up-to-date information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are the latest developments in AI?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;websearch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Rate Limit Bypassing 🛡️
&lt;/h3&gt;

&lt;p&gt;noir-llm works around rate limits with techniques such as user-agent rotation and exponential backoff with jitter, helping keep access to the models reliable.&lt;/p&gt;
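
&lt;p&gt;One common ingredient of such techniques is exponential backoff with jitter; a minimal sketch of the idea (illustrative only, not the package's actual code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import random

def backoff_delay(attempt, base=1.0):
    """Wait roughly base * 2**attempt seconds, with +/-20% random jitter
    so that retrying clients don't all hit the server in lockstep."""
    return base * (2 ** attempt) * random.uniform(0.8, 1.2)

# attempts 0, 1, 2, ... yield delays of roughly 1s, 2s, 4s, ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;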

&lt;h3&gt;
  
  
  5. Command-Line Interface 💻
&lt;/h3&gt;

&lt;p&gt;For those who prefer working in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List available models&lt;/span&gt;
noir-llm list

&lt;span class="c"&gt;# Start an interactive chat session&lt;/span&gt;
noir-llm chat &lt;span class="nt"&gt;--model&lt;/span&gt; glm-4-32b

&lt;span class="c"&gt;# Enable web search&lt;/span&gt;
noir-llm chat &lt;span class="nt"&gt;--model&lt;/span&gt; mistral-31-24b &lt;span class="nt"&gt;--websearch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Available Models 🤖
&lt;/h2&gt;

&lt;p&gt;noir-llm currently supports the following models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GLM-4-32B&lt;/strong&gt;: A powerful language model with web search capabilities and excellent reasoning skills&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mistral-31-24B&lt;/strong&gt;: A high-quality language model with web search capabilities and strong performance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Z1-32B&lt;/strong&gt;: A powerful language model with web search capabilities and excellent context handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Llama-3.2-3B&lt;/strong&gt;: A compact but powerful model with web search capabilities and fast responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Z1-Rumination&lt;/strong&gt;: A model optimized for deep research and analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Coming Soon! 🔜
&lt;/h3&gt;

&lt;p&gt;We're excited to announce that the following models will be added in upcoming releases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude-3 Opus&lt;/strong&gt;: State-of-the-art model with exceptional reasoning and analysis capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-4 Turbo&lt;/strong&gt;: Latest version with enhanced performance and real-time knowledge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini Ultra&lt;/strong&gt;: Google's most capable model with advanced multimodal abilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mixtral-8x7B&lt;/strong&gt;: Powerful mixture-of-experts model with broad capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases 🎯
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For Students 👨‍🎓
&lt;/h3&gt;

&lt;p&gt;Access to the same powerful AI tools used in industry, without the cost barrier. Learn, experiment, and build your skills with state-of-the-art models.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Researchers 🔬
&lt;/h3&gt;

&lt;p&gt;Conduct experiments and advance the field without being limited by API costs or rate limits. Focus on your research, not your budget.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Developers 👨‍💻
&lt;/h3&gt;

&lt;p&gt;Build innovative applications and prototypes with premium AI capabilities. Test your ideas without worrying about usage costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Educators 👩‍🏫
&lt;/h3&gt;

&lt;p&gt;Teach AI concepts with practical, hands-on examples using real, powerful models. Prepare students for the AI-driven future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important Disclaimer ⚠️
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;noir-llm is for educational purposes only.&lt;/strong&gt; Be aware that using this package may violate the terms of service of the respective providers; use it at your own risk. The source code is kept private to prevent potential misuse.&lt;/p&gt;

&lt;h2&gt;
  
  
  Community 👥
&lt;/h2&gt;

&lt;p&gt;Join our WhatsApp community to chat with other users, get help, and stay updated on the latest developments: &lt;a href="https://chat.whatsapp.com/DDAPE3o5wv500oe02cYLF7" rel="noopener noreferrer"&gt;Join WhatsApp Group&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started 🚀
&lt;/h2&gt;

&lt;p&gt;Installation is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;noir-llm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check out the package on PyPI: &lt;a href="https://pypi.org/project/noir-llm/" rel="noopener noreferrer"&gt;noir-llm&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion 🎉
&lt;/h2&gt;

&lt;p&gt;noir-llm was created to bridge the gap between expensive commercial AI services and the need for high-quality language models in education, research, and personal projects.&lt;/p&gt;

&lt;p&gt;By providing free access to premium AI models, we hope to democratize access to this transformative technology and empower the next generation of AI researchers, developers, and enthusiasts.&lt;/p&gt;

&lt;p&gt;Try it out and let me know what you think in the comments below!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Remember: This package is for educational purposes only. Use responsibly and at your own risk.&lt;/em&gt; ⚠️&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>freetouse</category>
    </item>
    <item>
      <title>🛡️ NeoGuardianAI: Building an Advanced URL Phishing Detection System with Machine Learning</title>
      <dc:creator>Deviprasad Shetty</dc:creator>
      <pubDate>Wed, 23 Apr 2025 05:47:45 +0000</pubDate>
      <link>https://dev.to/deviprasadshetty/neoguardianai-building-an-advanced-url-phishing-detection-system-with-machine-learning-5g0n</link>
      <guid>https://dev.to/deviprasadshetty/neoguardianai-building-an-advanced-url-phishing-detection-system-with-machine-learning-5g0n</guid>
      <description>&lt;p&gt;In today's digital landscape, phishing attacks remain one of the most prevalent cyber threats, affecting millions of users worldwide. As a developer passionate about cybersecurity and machine learning, I embarked on a journey to create NeoGuardianAI, a sophisticated URL phishing detection system that achieves over 96% accuracy. In this comprehensive article, I'll share my experience building this tool, the challenges faced, and how artificial intelligence not only powered the final product but also assisted in the development process itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 The Genesis of NeoGuardianAI
&lt;/h2&gt;

&lt;p&gt;The idea for NeoGuardianAI emerged from a simple observation: despite numerous existing solutions, phishing attacks continue to evolve and succeed. I wanted to create a tool that would not only be highly accurate but also accessible to everyone, from individual users to developers looking to integrate phishing detection into their applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏗️ Technical Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  📊 1. Data Foundation
&lt;/h3&gt;

&lt;p&gt;The project is built on the robust pirocheto/phishing-url dataset from Hugging Face, which provided a comprehensive collection of both legitimate and phishing URLs. This high-quality dataset was crucial for training a reliable model.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔍 2. Feature Engineering
&lt;/h3&gt;

&lt;p&gt;One of the most critical aspects of the project was feature engineering. NeoGuardianAI analyzes over 30 different URL characteristics, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📏 Basic URL properties (length, special character counts)&lt;/li&gt;
&lt;li&gt;🌐 Domain-specific features (hostname length, IP presence)&lt;/li&gt;
&lt;li&gt;🔠 TLD analysis&lt;/li&gt;
&lt;li&gt;🔄 Subdomain characteristics&lt;/li&gt;
&lt;li&gt;🛤️ Path and query parameter analysis&lt;/li&gt;
&lt;li&gt;📈 Statistical patterns&lt;/li&gt;
&lt;li&gt;🏢 Brand-related indicators&lt;/li&gt;
&lt;/ul&gt;
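&lt;p&gt;To make the list above concrete, here is a minimal sketch of this kind of feature extraction using only the Python standard library. It is illustrative only: the function name and the handful of features shown are my own simplification, not NeoGuardianAI's actual 30+-feature pipeline.&lt;/p&gt;

```python
import re
from urllib.parse import urlparse, parse_qs

def extract_features(url: str) -> dict:
    # Illustrative sketch: the real model computes 30+ features.
    parsed = urlparse(url)
    hostname = parsed.hostname or ""
    # A raw IPv4 address in place of a domain is a classic phishing signal.
    has_ip = bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", hostname))
    return {
        "url_length": len(url),
        "hostname_length": len(hostname),
        "num_dots": url.count("."),
        "num_special": sum(not c.isalnum() for c in url),
        "has_ip": has_ip,
        "num_subdomains": max(hostname.count(".") - 1, 0),
        "path_length": len(parsed.path),
        "num_query_params": len(parse_qs(parsed.query)),
    }

print(extract_features("http://192.168.0.1/login?user=a"))
```

&lt;p&gt;Each feature is a cheap, deterministic function of the URL string, which is what makes real-time inference feasible later in the pipeline.&lt;/p&gt;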

&lt;h3&gt;
  
  
  🤖 3. Model Selection and Training
&lt;/h3&gt;

&lt;p&gt;After experimenting with various algorithms, I chose XGBoost for its:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🚀 Superior performance on structured data&lt;/li&gt;
&lt;li&gt;📊 Excellent handling of non-linear relationships&lt;/li&gt;
&lt;li&gt;📋 Built-in feature importance analysis&lt;/li&gt;
&lt;li&gt;⚡ Fast training and inference times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model achieved impressive metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Accuracy: 96.31%&lt;/li&gt;
&lt;li&gt;🎯 Precision: 96.00%&lt;/li&gt;
&lt;li&gt;🔍 Recall: 96.66%&lt;/li&gt;
&lt;li&gt;📊 F1 Score: 96.33%&lt;/li&gt;
&lt;/ul&gt;
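&lt;p&gt;As a quick sanity check, the reported F1 score follows directly from the precision and recall figures, since F1 is their harmonic mean:&lt;/p&gt;

```python
# Reported metrics from the article.
precision = 0.9600
recall = 0.9666

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1 * 100, 2))  # 96.33, matching the reported F1 score
```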

&lt;h2&gt;
  
  
  🛠️ Implementation Details
&lt;/h2&gt;

&lt;p&gt;The implementation process involved several key components:&lt;/p&gt;

&lt;h3&gt;
  
  
  🧠 1. Core Model Development
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🧹 Data preprocessing and cleaning&lt;/li&gt;
&lt;li&gt;🔄 Feature extraction pipeline&lt;/li&gt;
&lt;li&gt;⚙️ Model training and optimization&lt;/li&gt;
&lt;li&gt;✓ Cross-validation and testing&lt;/li&gt;
&lt;li&gt;📊 Performance metric analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🖥️ 2. Web Interface
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🎨 Gradio-based user interface&lt;/li&gt;
&lt;li&gt;⚡ Real-time URL analysis&lt;/li&gt;
&lt;li&gt;📊 Confidence score visualization&lt;/li&gt;
&lt;li&gt;🚦 Status indicators and explanations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔌 3. API Integration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🤗 Hugging Face Inference API implementation&lt;/li&gt;
&lt;li&gt;🌐 RESTful endpoint creation&lt;/li&gt;
&lt;li&gt;📨 Response formatting and error handling&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🚀 4. Deployment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;☁️ Hugging Face Spaces hosting&lt;/li&gt;
&lt;li&gt;🔄 Model versioning&lt;/li&gt;
&lt;li&gt;📈 Performance monitoring&lt;/li&gt;
&lt;li&gt;📝 Error logging and tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🤝 How AI Assisted in Development
&lt;/h2&gt;

&lt;p&gt;One unique aspect of this project was how AI tools, particularly large language models, assisted in the development process:&lt;/p&gt;

&lt;h3&gt;
  
  
  📐 1. Architecture Planning
&lt;/h3&gt;

&lt;p&gt;AI helped in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🏗️ Designing the system architecture&lt;/li&gt;
&lt;li&gt;🚧 Identifying potential bottlenecks&lt;/li&gt;
&lt;li&gt;✅ Suggesting best practices&lt;/li&gt;
&lt;li&gt;🔄 Planning the feature extraction pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💻 2. Code Development
&lt;/h3&gt;

&lt;p&gt;AI assisted with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⚡ Code optimization&lt;/li&gt;
&lt;li&gt;🐛 Bug identification&lt;/li&gt;
&lt;li&gt;📚 Documentation generation&lt;/li&gt;
&lt;li&gt;🧪 Test case creation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 3. Feature Engineering
&lt;/h3&gt;

&lt;p&gt;AI provided insights for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔎 Identifying relevant URL characteristics&lt;/li&gt;
&lt;li&gt;🛠️ Implementing extraction methods&lt;/li&gt;
&lt;li&gt;⚙️ Optimizing feature calculations&lt;/li&gt;
&lt;li&gt;🧩 Handling edge cases&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ⚙️ 4. Model Optimization
&lt;/h3&gt;

&lt;p&gt;AI helped in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🎛️ Hyperparameter tuning&lt;/li&gt;
&lt;li&gt;🚀 Performance optimization&lt;/li&gt;
&lt;li&gt;🔍 Error analysis&lt;/li&gt;
&lt;li&gt;🤖 Model selection&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧗 Challenges and Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔄 1. Feature Extraction Complexity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge:&lt;/strong&gt; 🤔 URLs can have vastly different structures and characteristics.&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; 💪 Implemented a robust feature extraction system that handles various URL formats and edge cases.&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚡ 2. Performance Optimization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge:&lt;/strong&gt; 🏎️ Needed real-time analysis capabilities.&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; 🚀 Optimized the feature extraction pipeline and model inference for speed.&lt;/p&gt;
&lt;h3&gt;
  
  
  🎯 3. False Positive Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge:&lt;/strong&gt; ⚖️ Minimizing false positives while maintaining high detection rates.&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; 🔍 Implemented a trusted domain system and confidence thresholds.&lt;/p&gt;
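&lt;p&gt;A minimal sketch of how a trusted-domain allowlist and confidence thresholds can combine. The domains, threshold values, and labels here are illustrative assumptions, not the project's tuned values:&lt;/p&gt;

```python
# Illustrative values only: the real allowlist and thresholds differ.
TRUSTED_DOMAINS = {"google.com", "github.com", "wikipedia.org"}
PHISHING_THRESHOLD = 0.70

def classify(hostname: str, phishing_score: float) -> str:
    # Trusted domains (and their subdomains) short-circuit the model,
    # eliminating false positives on well-known sites.
    if any(hostname == d or hostname.endswith("." + d) for d in TRUSTED_DOMAINS):
        return "safe"
    # Otherwise, apply confidence thresholds to the model's score.
    if phishing_score >= PHISHING_THRESHOLD:
        return "phishing"
    return "suspicious" if phishing_score >= 0.40 else "safe"

print(classify("mail.google.com", 0.95))  # safe: allowlisted despite a high score
```

&lt;p&gt;The allowlist check runs before the model's verdict is consulted, so a high phishing score on a known-good domain can never produce a false positive.&lt;/p&gt;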
&lt;h3&gt;
  
  
  📈 4. Deployment Scalability
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge:&lt;/strong&gt; 🌐 Ensuring consistent performance under load.&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; ☁️ Utilized Hugging Face's infrastructure for reliable scaling.&lt;/p&gt;
&lt;h2&gt;
  
  
  🔮 Future Developments
&lt;/h2&gt;

&lt;p&gt;NeoGuardianAI is an ongoing project with several planned enhancements:&lt;/p&gt;
&lt;h3&gt;
  
  
  🔧 1. Technical Improvements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🔍 Enhanced feature extraction methods&lt;/li&gt;
&lt;li&gt;🔄 Real-time model updates&lt;/li&gt;
&lt;li&gt;🤖 Additional ML model implementations&lt;/li&gt;
&lt;li&gt;🔌 Improved API capabilities&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  👤 2. User Experience
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🧩 Browser extension development&lt;/li&gt;
&lt;li&gt;📱 Mobile app integration&lt;/li&gt;
&lt;li&gt;📊 Enhanced visualization tools&lt;/li&gt;
&lt;li&gt;📝 Detailed threat analysis reports&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  👥 3. Community Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🤝 Collaborative threat detection&lt;/li&gt;
&lt;li&gt;💬 User feedback integration&lt;/li&gt;
&lt;li&gt;📋 Community-driven trusted domain list&lt;/li&gt;
&lt;li&gt;🔄 Integration with other security tools&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  🚀 Try It Yourself
&lt;/h2&gt;

&lt;p&gt;You can experience NeoGuardianAI through multiple channels:&lt;/p&gt;
&lt;h3&gt;
  
  
  🌐 1. Web Interface
&lt;/h3&gt;

&lt;p&gt;Visit: &lt;a href="https://devishetty100-neoguardianai-space.hf.space" rel="noopener noreferrer"&gt;https://devishetty100-neoguardianai-space.hf.space&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  🏠 2. Project Website
&lt;/h3&gt;

&lt;p&gt;Learn more at: &lt;a href="https://neoguardianai.pages.dev" rel="noopener noreferrer"&gt;https://neoguardianai.pages.dev&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  💻 3. GitHub Repository
&lt;/h3&gt;

&lt;p&gt;Explore the code: &lt;a href="https://github.com/redmoon0x/NeoGuardianAI---URL-Phishing-Detection" rel="noopener noreferrer"&gt;https://github.com/redmoon0x/NeoGuardianAI---URL-Phishing-Detection&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  🔌 4. API Integration
&lt;/h3&gt;

&lt;p&gt;Integrate with your projects using the Hugging Face Inference API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;API_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api-inference.huggingface.co/models/Devishetty100/neoguardianai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bearer YOUR_API_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;inputs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;API_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Example usage
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  📊 Impact and Results
&lt;/h2&gt;

&lt;p&gt;Since its launch, NeoGuardianAI has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔍 Analyzed thousands of URLs&lt;/li&gt;
&lt;li&gt;🛡️ Protected users from numerous phishing attempts&lt;/li&gt;
&lt;li&gt;👍 Received positive feedback from the security community&lt;/li&gt;
&lt;li&gt;💡 Demonstrated the potential of AI in cybersecurity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🤝 Contributing to the Project
&lt;/h2&gt;

&lt;p&gt;NeoGuardianAI is open-source, and contributions are welcome! You can contribute by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📝 Submitting pull requests&lt;/li&gt;
&lt;li&gt;🐛 Reporting issues&lt;/li&gt;
&lt;li&gt;💡 Suggesting improvements&lt;/li&gt;
&lt;li&gt;📢 Sharing the project&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔗 Connect and Learn More
&lt;/h2&gt;

&lt;p&gt;You can find me and learn more about the project through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;💻 GitHub: &lt;a href="https://github.com/redmoon0x" rel="noopener noreferrer"&gt;https://github.com/redmoon0x&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;👔 LinkedIn: &lt;a href="https://www.linkedin.com/in/deviprasadshetty2003" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/deviprasadshetty2003&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🤗 Hugging Face: &lt;a href="https://huggingface.co/Devishetty100" rel="noopener noreferrer"&gt;https://huggingface.co/Devishetty100&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📱 Telegram: &lt;a href="https://t.me/redmoon0x" rel="noopener noreferrer"&gt;https://t.me/redmoon0x&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🎯 Conclusion
&lt;/h2&gt;

&lt;p&gt;Building NeoGuardianAI has been an enlightening journey that showcases the potential of combining machine learning with cybersecurity. The project demonstrates how AI can be both the end product and a valuable development tool, leading to more efficient and effective solutions for real-world problems.&lt;/p&gt;

&lt;p&gt;The success of this project highlights the importance of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔍 Thorough feature engineering&lt;/li&gt;
&lt;li&gt;🤖 Robust model selection and training&lt;/li&gt;
&lt;li&gt;👤 User-friendly implementation&lt;/li&gt;
&lt;li&gt;👥 Community engagement&lt;/li&gt;
&lt;li&gt;🔄 Continuous improvement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As cyber threats continue to evolve, tools like NeoGuardianAI will play an increasingly important role in protecting users online. I invite you to try the tool, contribute to its development, and join the conversation about the future of AI-powered cybersecurity.&lt;/p&gt;

&lt;h2&gt;
  
  
  📜 License
&lt;/h2&gt;

&lt;p&gt;NeoGuardianAI is available under the MIT License, making it freely available for both personal and commercial use.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; #MachineLearning #CyberSecurity #AI #Python #OpenSource #HuggingFace #DataScience #WebSecurity #PhishingDetection&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>machinelearning</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
