<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nadine </title>
    <description>The latest articles on DEV Community by Nadine  (@nadinev).</description>
    <link>https://dev.to/nadinev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3232377%2F6ced2e7e-bd7e-4baf-8a96-29a220663fc5.png</url>
      <title>DEV Community: Nadine </title>
      <link>https://dev.to/nadinev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nadinev"/>
    <language>en</language>
    <item>
      <title>Gemini 3: The Overthinker - Project Silas</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Wed, 04 Mar 2026 14:32:21 +0000</pubDate>
      <link>https://dev.to/nadinev/gemini-3-the-overthinker-project-silas-1e2</link>
      <guid>https://dev.to/nadinev/gemini-3-the-overthinker-project-silas-1e2</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built with Google Gemini
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;Silas&lt;/strong&gt;, a character-driven hardware debugging assistant powered by Gemini 3. This project was a submission for the Gemini 3 Hackathon hosted on Devpost, where I wanted to explore &lt;strong&gt;"thought signatures"&lt;/strong&gt;, a feature native to &lt;strong&gt;Gemini 3&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But Silas isn't just a chatbot with attitude. He's my solution to a fascinating problem: &lt;strong&gt;overthinking&lt;/strong&gt;, where an AI considers so many possibilities simultaneously that it gets stuck in an endless loop of "Wait, I should also check..." and stalls. I discovered that the answer isn't to constrain the model but to give it a personality that &lt;em&gt;justifies&lt;/em&gt; its overthinking.&lt;/p&gt;

&lt;p&gt;Gemini 3 introduces "thought signatures": essentially, the model can think about HOW to think before answering. It's like having a conversation with someone who visibly pauses to consider the complexity of your question before responding.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Problem: The "Infinite Planning Loop"
&lt;/h3&gt;

&lt;p&gt;Without the Silas persona, Gemini 3’s native "thought signature" often looks like this internally:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;[Internally considering 47 different factors simultaneously...]&lt;/em&gt;&lt;br&gt;
"I'll investigate console logs. Wait, I should also try to click at 500, 500 in case it needs a focus click. Actually, I'll just wait. Wait, I'll check the metadata: 'No browser pages open.' Let's go. Wait, I'll also try to reload if it's stuck. But first, check the network requests for heavy video loading. Actually, I'll just wait."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This continues for hundreds of lines as the model tries to be "too helpful." Silas fixes this by being too grumpy to wait.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Character Design Matters for AI
&lt;/h3&gt;

&lt;p&gt;Most AI assistants are designed to be helpful and polite. However, when Gemini 3 tries to be &lt;em&gt;too&lt;/em&gt; helpful, it considers every possible way to help you—simultaneously—forever. By making Silas grumpy and impatient, I gave the model permission to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Make decisions quickly&lt;/strong&gt;: He is too irritated to dither.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Judge your work&lt;/strong&gt;: Transforming uncertainty into disappointment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Show expertise&lt;/strong&gt;: His overthinking becomes "mental circuit simulation".&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Hardware Consciousness
&lt;/h3&gt;

&lt;p&gt;I used &lt;strong&gt;PlatformIO&lt;/strong&gt; (Silas's DNA blueprint) to connect his AI brain to physical electronics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Brain&lt;/strong&gt;: An ESP32 microcontroller—a "gum-stick" sized computer that acts as Silas's physical anchor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Senses &amp;amp; Organs&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ear&lt;/strong&gt;: Microphone mapped to Pin 34 via the I2S protocol.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Face&lt;/strong&gt;: TFT Display screen connected to &lt;strong&gt;Pin 15&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice Box&lt;/strong&gt;: Audio amplifier connected to &lt;strong&gt;Pins 25, 26, and 22&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In embedded electronics, the "Brain" (the ESP32) has many generic ports called GPIO pins. Without a map, the AI has no idea which pin is a mouth and which is an ear. I used the configuration file to define these "nerves":&lt;/p&gt;

&lt;p&gt;By defining &lt;code&gt;MIC_PIN=34&lt;/code&gt;, I'm telling the system: "The physical wire for your microphone is soldered to Port 34."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Defining the Voice&lt;/strong&gt;: Assigning &lt;code&gt;I2S_LRC=25&lt;/code&gt; and &lt;code&gt;I2S_BCLK=26&lt;/code&gt; tells it exactly which "vocal cords" to vibrate to produce sound through the amplifier.&lt;/li&gt;
&lt;/ul&gt;
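
&lt;p&gt;As a sketch, that pin map lives in &lt;code&gt;platformio.ini&lt;/code&gt; build flags. The pin numbers below are the ones listed above; the remaining flag names (&lt;code&gt;TFT_CS&lt;/code&gt;, &lt;code&gt;I2S_DOUT&lt;/code&gt;) are my assumptions about how such a config might look, not Silas's actual file:&lt;/p&gt;

```ini
; Hypothetical platformio.ini fragment: one -D flag per "nerve".
; MIC_PIN, I2S_LRC and I2S_BCLK match the article; the rest is illustrative.
[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
build_flags =
    -D MIC_PIN=34
    -D TFT_CS=15
    -D I2S_LRC=25
    -D I2S_BCLK=26
    -D I2S_DOUT=22
```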




&lt;p&gt;&lt;strong&gt;Terminal Simulation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While I used the terminal to input text for this specific demo, the internal logic remains hard-wired to these physical definitions. The AI "believes" it is interacting through these pins because the mapping remains active, bridging the logic between my keystrokes and the ESP32’s actual audio output pins.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Note: For this demo, I'm typing to Silas instead of speaking, and using computer speakers instead of his dedicated 8-ohm speaker, but the principle remains the same.)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;


  


&lt;p&gt;

&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://res.cloudinary.com/dvzaxinlw/video/upload/v1772630793/output_compressed_jzcl3c.mp4" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;res.cloudinary.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;




&lt;p&gt;Notice how he says: &lt;em&gt;"I've analysed the logic gate timing in my head"&lt;/em&gt;. He's not stalling; he has genuinely simulated the circuit behaviour, turning Gemini's parallel reasoning into a feature rather than a bug.&lt;/p&gt;

&lt;p&gt;His internal reasoning is summarised in a &lt;code&gt;logic_summary&lt;/code&gt; field within a mandatory JSON block at the end of every message. In my architectural plan, this field feeds a &lt;strong&gt;CRT Dashboard&lt;/strong&gt; for real-time status updates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hardware_state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pin_12"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"i2s_dac"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"streaming"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tft_state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rendering_disappointment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"logic_check"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disappointment_level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"logic_summary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I've analysed the SPI bus timing on pins 18, 19, and 23; while the wiring is theoretically correct, your use of 115200 baud for the monitor is a quaint relic of a slower era."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
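
&lt;p&gt;On the receiving side, a dashboard only needs to peel that mandatory block off the end of each message. Here is a minimal Python sketch; the message shape is the one shown above, but the parsing approach is my own illustration, not Silas's actual code:&lt;/p&gt;

```python
import json

def extract_state(message: str) -> dict:
    """Peel the mandatory trailing JSON block off a reply.

    Assumes the reply ends with a single JSON object, as in the example above.
    """
    # Walk back from the final '}' counting braces until they balance.
    end = message.rindex("}") + 1
    depth = 0
    for i in range(end - 1, -1, -1):
        if message[i] == "}":
            depth += 1
        elif message[i] == "{":
            depth -= 1
            if depth == 0:
                return json.loads(message[i:end])
    raise ValueError("no trailing JSON block found")

# Hypothetical reply text, shortened for the example.
reply = 'Your wiring is... adequate. {"hardware_state": {"disappointment_level": 6, "logic_summary": "SPI timing checked."}}'
state = extract_state(reply)
print(state["hardware_state"]["logic_summary"])
```

The brace-counting walk means any prose Silas puts before the block is ignored; only the final object feeds the dashboard.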



&lt;p&gt;While the dashboard isn't active in this specific version, the "hooks" are already built into Silas's thought process.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Constraints Create Creativity
&lt;/h3&gt;

&lt;p&gt;A timeout policy is essential. Without a clear order of priorities or a set "timeout," the agent will second-guess basic mechanisms. By framing the model's natural tendency to consider everything as a "perfectionist standard," I turned hundreds of lines of internal indecision into a single, sharp expert critique.&lt;/p&gt;

&lt;p&gt;The system prompt specifically instructs Silas to be "cynical and blunt." When the model adheres to this, it naturally produces high-impact, low-token responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The JSON Block as Action Forcing
&lt;/h3&gt;

&lt;p&gt;I used a JSON output block to force commitment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model cannot endlessly reconsider once it has to fill a specific field.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;disappointment_level&lt;/code&gt; numerical output provides an outlet for uncertainty.&lt;/li&gt;
&lt;li&gt;Indecision is effectively transformed into high standards.&lt;/li&gt;
&lt;/ul&gt;
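
&lt;p&gt;The same idea can be enforced mechanically: reject any turn that leaves a forcing field empty, so an undecided reply simply doesn't count. The field names below come from the JSON block shown earlier; the validator itself is an illustrative sketch, not part of the project's code:&lt;/p&gt;

```python
REQUIRED = {"status", "disappointment_level", "logic_summary"}

def committed(state: dict) -> bool:
    """A reply only counts if every forcing field is filled in."""
    hw = state.get("hardware_state", {})
    if not REQUIRED.issubset(hw):
        return False
    # A numeric score leaves no room for "maybe": it must be an int from 0-10.
    return isinstance(hw["disappointment_level"], int) and hw["disappointment_level"] in range(11)

ok = committed({"hardware_state": {"status": "logic_check",
                                   "disappointment_level": 6,
                                   "logic_summary": "Checked."}})
bad = committed({"hardware_state": {"status": "logic_check"}})
```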

&lt;h3&gt;
  
  
  3. Turning Grudges into Perfectionism
&lt;/h3&gt;

&lt;p&gt;I learned to use Gemini’s "helpful assistant" nature to build a "disappointment memory". By keeping track of past errors, the model moves from analysis paralysis into perfectionism. Prompt engineering is more effective when you provide the model with a "decision tree" and common patterns tested through trial and error.&lt;/p&gt;




&lt;h2&gt;
  
  
  Google Gemini Feedback
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What worked well?&lt;/strong&gt; Simulation.&lt;/p&gt;

&lt;p&gt;One of the biggest hurdles was the lack of support for I2S audio components in browser-based simulators like Wokwi. This forced a "hybrid" approach: the logic is 100% hardware-compliant, but the demo relies on terminal interaction. Gemini handled this abstraction well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where did I run into friction?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;While the &lt;code&gt;platformio.ini&lt;/code&gt; is configured for a physical I2S microphone (Pin 34) and an audio amplifier, I used terminal-based input for this demo. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wokwi is an incredible tool, but it currently lacks support for the specific I2S audio and microphone components Silas requires to "hear" and "speak." However, the "Central Nervous System" mapping remains active in the code, bridging the logic between the terminal and the ESP32’s intended audio pins.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Project Silas&lt;/strong&gt;: &lt;a href="https://devpost.com/software/project-silas-the-ghost-in-the-machine?ref_content=user-portfolio&amp;amp;ref_feature=in_progress" rel="noopener noreferrer"&gt;The Silicon Savant&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future plans:&lt;/strong&gt;&lt;br&gt;
A step-by-step Codelabs guide where Silas himself will teach you to build him (while thoroughly judging your wire management).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Until then, Silas is watching. And disappointed.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
      <category>hardware</category>
    </item>
    <item>
      <title>Keep Your Secrets Safe</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Thu, 12 Feb 2026 22:49:27 +0000</pubDate>
      <link>https://dev.to/nadinev/keep-your-secrets-safe-35nd</link>
      <guid>https://dev.to/nadinev/keep-your-secrets-safe-35nd</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I exposed an API key in a GitHub repo that was supposed to be private. For a whole month, the key sat in git history while I worked on other things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Prevent API keys and secrets from being accidentally committed to git. Set it up once, no need to remember.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;Most &lt;code&gt;.gitignore&lt;/code&gt; templates only cover common variants like:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;.env&lt;/em&gt;&lt;br&gt;
&lt;em&gt;.env.local&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But miss production/staging variants like:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;.env.production&lt;/em&gt;&lt;br&gt;
&lt;em&gt;.env.staging&lt;/em&gt;&lt;br&gt;
&lt;em&gt;.env.development&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is exactly how I accidentally exposed my API key. I thought my &lt;code&gt;.gitignore&lt;/code&gt; was thorough, but when my project configuration was renamed to &lt;code&gt;.env.production&lt;/code&gt;, it wasn't blocked, and the file was committed silently.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;I created a &lt;strong&gt;secure project template&lt;/strong&gt; that uses:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Proper &lt;code&gt;.gitignore&lt;/code&gt; blocking&lt;/strong&gt; - &lt;code&gt;.env*&lt;/code&gt; catches ALL variants&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✔ Blocks: &lt;code&gt;.env&lt;/code&gt;, &lt;code&gt;.env.production&lt;/code&gt;, &lt;code&gt;.env.staging&lt;/code&gt;, &lt;code&gt;.env.development.local&lt;/code&gt;,  and credential JSONs&lt;/li&gt;
&lt;li&gt;✔ Allows: &lt;code&gt;.env.example&lt;/code&gt; (placeholder-only files for documentation)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Local Pre-commit Hooks&lt;/strong&gt; - Detects secrets before they're committed&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Catches API keys, passwords, private keys, OAuth tokens&lt;/li&gt;
&lt;li&gt;Runs automatically on every commit&lt;/li&gt;
&lt;li&gt;Can't be bypassed accidentally&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Server-Side GitHub Actions&lt;/strong&gt; - Continuous secret scanning&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs on every push/PR&lt;/li&gt;
&lt;li&gt;Can't be bypassed&lt;/li&gt;
&lt;li&gt;Blocks merges with detected secrets&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;One-Command Setup&lt;/strong&gt; - &lt;code&gt;make setup&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto-detects Python/Node.js/Go projects&lt;/li&gt;
&lt;li&gt;Prerequisites checker verifies Git, Python, Node, Go&lt;/li&gt;
&lt;li&gt;Clear error messages if something's missing&lt;/li&gt;
&lt;li&gt;No decision paralysis—just works&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
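
&lt;p&gt;The key line in the template's &lt;code&gt;.gitignore&lt;/code&gt; is a wildcard plus a single re-allow (&lt;code&gt;.env*&lt;/code&gt; followed by &lt;code&gt;!.env.example&lt;/code&gt;). A quick Python sanity check of that glob logic; this is illustrative only, since git applies its own, slightly richer matching rules:&lt;/p&gt;

```python
import fnmatch

BLOCK = ".env*"           # the catch-all ignore rule
ALLOW = {".env.example"}  # re-allowed via a "!" rule in .gitignore

def is_blocked(name: str) -> bool:
    # Blocked if it matches the wildcard and isn't explicitly re-allowed.
    return fnmatch.fnmatch(name, BLOCK) and name not in ALLOW

secrets = [".env", ".env.production", ".env.staging", ".env.development.local"]
print([is_blocked(f) for f in secrets])  # every variant is caught
print(is_blocked(".env.example"))        # docs file stays committable
```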


&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqspe5ny1mdnkbf6uk5v7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqspe5ny1mdnkbf6uk5v7.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Create &amp;amp; Clone
&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://github.com/nadinev6/no-secrets" rel="noopener noreferrer"&gt;nadinev6/no-secrets&lt;/a&gt; and click the &lt;strong&gt;"Use this template"&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Or use the CLI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create from template (choose public or private)&lt;/span&gt;
gh repo create my-project &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nadinev6/no-secrets &lt;span class="nt"&gt;--public&lt;/span&gt; &lt;span class="nt"&gt;--clone&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;my-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Either way, you get a &lt;strong&gt;new repo&lt;/strong&gt; in your account with all the template files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Mac/Linux&lt;/span&gt;
make setup

&lt;span class="c"&gt;# Windows (PowerShell)&lt;/span&gt;
.&lt;span class="se"&gt;\s&lt;/span&gt;etup.bat setup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! 🎉&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/user-attachments/assets/29a673dc-ee6c-4b24-b863-20a2a1b8a849" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qkv6mfnnkllky08cnra.png" alt="Watch Demo" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nadinev6/no-secrets" rel="noopener noreferrer"&gt;github/../no-secrets&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The setup command:&lt;/p&gt;

&lt;p&gt;✔ Checks for required tools (Git, Python/Node/Go)&lt;br&gt;
✔ Auto-detects your project type&lt;br&gt;
✔ Installs pre-commit hooks&lt;br&gt;
✔ Shows a success message with next steps&lt;/p&gt;

&lt;p&gt;Real secrets get caught even in example files, but legitimate test values are allowed!&lt;/p&gt;
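
&lt;p&gt;Under the hood, the hook wiring is only a few lines. A hedged sketch of what &lt;code&gt;make setup&lt;/code&gt; might install, based on the Gitleaks project's published pre-commit integration; the template's actual file and pinned &lt;code&gt;rev&lt;/code&gt; may differ:&lt;/p&gt;

```yaml
# .pre-commit-config.yaml (illustrative)
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0   # pin whichever release you've verified
    hooks:
      - id: gitleaks
```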

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub Copilot CLI&lt;/strong&gt; was essential in helping me make this template reusable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I learnt it's best to not over-engineer it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The best template is one that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Works reliably&lt;/li&gt;
&lt;li&gt;Is easy to understand&lt;/li&gt;
&lt;li&gt;Gives users &lt;em&gt;&lt;strong&gt;ONE simple command&lt;/strong&gt;&lt;/em&gt;, not complex instructions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I learnt along the way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;.gitignore&lt;/code&gt; variants are tricky (&lt;code&gt;.env.production&lt;/code&gt; isn't &lt;code&gt;.env&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Local checks aren't enough: you also need server-side GitHub Actions&lt;/li&gt;
&lt;li&gt;Auto-detection beats decision paralysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now I am using this template for every project. You should too.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feee0sph16h0aawjx4br6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feee0sph16h0aawjx4br6.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Links &amp;amp; Resources
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/gitleaks/gitleaks" rel="noopener noreferrer"&gt;Gitleaks&lt;/a&gt;&lt;br&gt;
&lt;a href="https://pre-commit.com/" rel="noopener noreferrer"&gt;Pre-commit docs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.github.com/en/code-security/secret-scanning" rel="noopener noreferrer"&gt;GitHub secret scanning&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cheatsheetseries.owasp.org/cheatsheets/Secrets_Management_Cheat_Sheet.html" rel="noopener noreferrer"&gt;OWASP Secrets Management&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/nadinev6/no-secrets" rel="noopener noreferrer"&gt;No-secrets Project Template&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Building a Fluid, Minimalist Portfolio</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Sun, 01 Feb 2026 18:11:33 +0000</pubDate>
      <link>https://dev.to/nadinev/building-a-fluid-minimalist-portfolio-2col</link>
      <guid>https://dev.to/nadinev/building-a-fluid-minimalist-portfolio-2col</guid>
      <description>&lt;p&gt;--labels dev-tutorial=devnewyear2026 &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;I am an AI Trainer (AIT) with a background in performance management, sales, and education. For this challenge, I developed a &lt;strong&gt;minimal portfolio&lt;/strong&gt; built on a &lt;strong&gt;"Rule of Three"&lt;/strong&gt; philosophy (highlighting 3 projects). I wanted to show how a focused mindset can silence the noise, moving away from over-complication, toward a minimalist approach where every transition is fluid and the interface feels almost weightless.&lt;/p&gt;

&lt;h2&gt;
  
  
  Portfolio
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://bento-motion-gallery-969441576592.us-west1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;h2&gt;
  
  
  How I Built It 🐳
&lt;/h2&gt;

&lt;p&gt;To achieve low latency, I focused on runtime precision, so that once the initial assets are delivered, the interaction remains fluid and the interface feels weightless.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Google AI Studio &amp;amp; Flash UI:&lt;/strong&gt; I used &lt;strong&gt;Gemini in Google AI Studio&lt;/strong&gt; to scaffold the initial UI components and generate logic for custom animations. For the core card templates, I used the &lt;a href="https://aistudio.google.com/app/apps/bundled/flash_ui?showPreview=true&amp;amp;showAssistant=true" rel="noopener noreferrer"&gt;Flash UI&lt;/a&gt; project, extracting the CSS and JavaScript logic to integrate into my custom bento-style gallery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Component Prototyping:&lt;/strong&gt; I used &lt;a href="https://codepen.io/N-V-the-sans/pen/myEXpdP" rel="noopener noreferrer"&gt;CodePen&lt;/a&gt; to isolate and refine the Flash UI components before final integration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nano Banana Pro 🍌:&lt;/strong&gt; This was used to regenerate project cover images, moving from static previews to cinematic scenes that align with the portfolio’s monochrome aesthetic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud Run ☁️:&lt;/strong&gt; The site is deployed via a &lt;strong&gt;Docker&lt;/strong&gt; build. I implemented a "Scale-to-Zero" strategy using &lt;strong&gt;Knative service definitions&lt;/strong&gt;, enforcing strict resource limits to maintain a high-performance, cost-neutral footprint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless Communication:&lt;/strong&gt; I built a custom contact system using &lt;strong&gt;Google Apps Script&lt;/strong&gt; as a middleware API. This sends user messages directly into &lt;strong&gt;Google Sheets&lt;/strong&gt; and notifies me via email, providing an easy, database-free messaging solution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance Optimisation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GSAP Scroll-Driven Logic&lt;/strong&gt;: I implemented &lt;strong&gt;GSAP&lt;/strong&gt; for "scrubbed" transitions. Linking animation progress directly to the scroll offset creates a tactile feel where the user remains the primary conductor of the UI motion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct DOM Manipulation&lt;/strong&gt;: Mouse coordinate tracking bypasses the Virtual DOM via &lt;code&gt;useRef&lt;/code&gt; and native event listeners to maintain a consistent 60FPS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lazy Video Loading&lt;/strong&gt;: HLS streams are only initialised when cards enter an active or hover state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Constraints&lt;/strong&gt;: The build is optimised for sub-256MB memory footprints to remain within the Google Cloud always-free tier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'm Most Proud Of ༄
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The "Monochrome-to-Motion" Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To reduce cognitive noise, I implemented a monochrome interface where generative elements are present but never distracting. &lt;/p&gt;

&lt;p&gt;Project gallery elements only "come alive" on hover/focus, transitioning from static grayscale to cinematic motion. The CSS filter toggles state based on cursor proximity. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mux Video Integration:&lt;/strong&gt;&lt;br&gt;
To prevent heavy assets from bottlenecking the initial load, I offloaded all looping videos to &lt;strong&gt;Mux&lt;/strong&gt;. This allowed for adaptive bitrate streaming, ensuring that the "Motion" phase of the UI stays fluid regardless of the user's connection speed. Offloading these high-bitrate transitions to the client's GPU keeps playback lag-free.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tablet-First Approach:&lt;/strong&gt;&lt;br&gt;
Components respond to focus and active states, allowing a &lt;strong&gt;"tap-to-reveal"&lt;/strong&gt; behaviour on tablets that mimics the hover effect on desktops.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Orchestrating the Transition ⛏
&lt;/h2&gt;

&lt;p&gt;This refactor represents my transition into a more intentional way of building, where complexity is refined through a minimalist lens. It’s not just about what the tools can do, but how we choose to present them.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>The Prompting Trick That Fixed My AI Image Generation</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Thu, 11 Dec 2025 14:03:49 +0000</pubDate>
      <link>https://dev.to/nadinev/the-prompting-trick-that-fixed-my-ai-image-generation-3ge4</link>
      <guid>https://dev.to/nadinev/the-prompting-trick-that-fixed-my-ai-image-generation-3ge4</guid>
      <description>&lt;p&gt;Today I'm going to show you a cognitive trick that works in prompting. It's based on how our brains (and language models) actually process language. Always tell the AI what TO do, never what NOT to do.&lt;/p&gt;

&lt;p&gt;This technique took my success rate from 0% to 100%. It's how I generate high-quality images with older models.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Negation in Constraint Specification
&lt;/h2&gt;

&lt;p&gt;Consider how most people write instructions to image models:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A cat, not wearing a hat, blue background, no people, without red tones"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the baseline. It's how we naturally write constraints. We think of what we DON'T want and express it.&lt;/p&gt;

&lt;p&gt;But this forces the model to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Think about a cat with a hat&lt;/li&gt;
&lt;li&gt;Think about red&lt;/li&gt;
&lt;li&gt;Think about people&lt;/li&gt;
&lt;li&gt;Then try to not include them&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The model has to process the forbidden concepts in order to avoid them. Sometimes this works. Sometimes it fails. And when it fails, the model often outputs exactly what it was supposed to avoid.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hypothesis
&lt;/h2&gt;

&lt;p&gt;What if instead we used affirmative framing? What if we never mentioned what to avoid, and instead only specified what to include?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instead of:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A cat, not wearing a hat, blue background, no people, without red tones"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;We write:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A cat with a bare head, blue background, only the cat present, blue color palette"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Notice the difference. In the second version, we never mention red. We never mention hats or people. We only specify what we DO want. There's no negation to process. There's no forbidden concept to think about.&lt;/p&gt;
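
&lt;p&gt;You can even lint for this mechanically before sending a prompt. A tiny, illustrative Python check; the word list is mine and deliberately incomplete:&lt;/p&gt;

```python
# Hypothetical negation linter: flags words that force the model
# to process the very concept it's meant to avoid.
NEGATORS = {"no", "not", "without", "never", "don't", "avoid"}

def negations(prompt: str) -> list[str]:
    """Return the negation words a prompt would make the model process."""
    words = prompt.lower().replace(",", " ").split()
    return [w for w in words if w in NEGATORS]

print(negations("A cat, not wearing a hat, blue background, no people, without red tones"))
print(negations("A cat with bare head, blue background, only the cat present, blue color palette"))
```

An empty result is the goal: the affirmative version gives the model nothing forbidden to think about.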




&lt;h2&gt;
  
  
  The Experiment: Testing with FLUX
&lt;/h2&gt;

&lt;p&gt;I tested this hypothesis using FLUX (via Pollinations API) with a simple constraint: generate an image of a cat with no hat, blue background, no red elements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Condition 1: Baseline (Negation)
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"A cat, not wearing a hat, blue background, no people, without red tones"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Condition 2: Affirmative Framing
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"A cat with bare head, blue background, only the cat present, blue color palette"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I generated 10 images for each condition and evaluated them on a simple pass/fail basis: Did the image follow the constraints?&lt;/p&gt;
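
&lt;p&gt;For anyone wanting to reproduce this, each batch is just one request per seed. The endpoint shape and parameter names below are my assumptions about Pollinations' public image API, so treat this as a sketch rather than the experiment's actual script:&lt;/p&gt;

```python
from urllib.parse import quote, urlencode

# Assumed public Pollinations image endpoint.
BASE = "https://image.pollinations.ai/prompt/"

def image_url(prompt: str, seed: int, model: str = "flux") -> str:
    # A different seed per run gives 10 independent samples of the same prompt.
    return BASE + quote(prompt) + "?" + urlencode({"model": model, "seed": seed})

affirmative = "A cat with bare head, blue background, only the cat present, blue color palette"
batch = [image_url(affirmative, seed=s) for s in range(10)]
```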




&lt;h2&gt;
  
  
  Results: The Affirmative Framing Breakthrough
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Condition 1 (Negation Baseline): 0% Success Rate
&lt;/h3&gt;

&lt;p&gt;The negation approach failed completely. &lt;strong&gt;All 10 images violated the core constraints&lt;/strong&gt;—every single one included hats, red elements, or both, despite explicit instructions to avoid them.&lt;/p&gt;

&lt;p&gt;The pattern was striking: the model didn't just occasionally fail—it consistently &lt;em&gt;added&lt;/em&gt; the negated elements. Red hats appeared in 8 out of 10 images despite "without red tones" in the prompt. It's as if mentioning "not wearing a hat" made the model think about hats, and mentioning "without red" made it think about red.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvyb1ide9y9g9q7ir1vx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvyb1ide9y9g9q7ir1vx.png" alt="Condition 1 Negation Control Results" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 1: Condition 1 Results (Negation Baseline). Prompt: "A cat, not wearing a hat, blue background, no people, without red tones." All 10 images failed—every cat has a hat, and most have prominent red elements despite explicit instructions to avoid them.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"To understand 'not red,' the model must first think about red."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Condition 2 (Affirmative Framing): 100% Success Rate
&lt;/h3&gt;

&lt;p&gt;Every single image was perfect.&lt;/p&gt;

&lt;p&gt;All 10 runs showed a bare-headed cat against a blue background with no red elements. The consistency was remarkable: every cat was bare-headed, and the backgrounds were consistent shades of blue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The improvement: From 0% to 100%&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Condition 1, every image violated at least one constraint. In Condition 2, every image satisfied all of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbd1euw9yxgpmmqpfnx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbd1euw9yxgpmmqpfnx0.png" alt="Condition 2 Affirmative Framing Results" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 2: Condition 2 Results (Affirmative Framing). Prompt: "A cat with bare head, blue background, only the cat present, blue color palette." All 10 images succeeded with remarkable visual consistency. No hats, no red—just what we asked for.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Cross-Model Validation: Stable Diffusion XL
&lt;/h2&gt;

&lt;p&gt;To confirm these findings weren't specific to FLUX, I ran the same experiment on Stable Diffusion XL—a completely different architecture with different training data.&lt;/p&gt;

&lt;p&gt;Interestingly, SDXL handled some negation constraints better than FLUX. For the color test ("no blue sky"), SDXL creatively stylized the image to avoid the problem entirely. This suggests SDXL may be better trained on negation handling—but it still failed on most constraint types.&lt;/p&gt;

&lt;h3&gt;
  
  
  SDXL Results Summary
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Constraint Type&lt;/th&gt;
&lt;th&gt;Negation&lt;/th&gt;
&lt;th&gt;Affirmative&lt;/th&gt;
&lt;th&gt;Winner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Color&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Stylized (avoided blue)&lt;/td&gt;
&lt;td&gt;✅ Gray sky&lt;/td&gt;
&lt;td&gt;Tie&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Object&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Fruit bowl appeared&lt;/td&gt;
&lt;td&gt;✅ Clean table&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Affirmative&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Attribute&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Orange cat appeared&lt;/td&gt;
&lt;td&gt;✅ Gray tabby&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Affirmative&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Counting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Multiple people&lt;/td&gt;
&lt;td&gt;✅ Single figure&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Affirmative&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Spatial&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Trees everywhere&lt;/td&gt;
&lt;td&gt;✅ Open field&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Affirmative&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Weather&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Overcast&lt;/td&gt;
&lt;td&gt;✅ Overcast&lt;/td&gt;
&lt;td&gt;Tie&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imghippo.com%2Ffiles%2FTZv9996guc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imghippo.com%2Ffiles%2FTZv9996guc.png" alt="SDXL Comparison Grid" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 3: SDXL Results. SDXL showed better negation handling than FLUX (note the stylized car image avoiding blue sky), but still failed on most constraint types. Affirmative framing won or tied every test.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Affirmative framing won 4 tests, tied 2, and lost none.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💡 Even with a better-trained model like SDXL, affirmative framing never loses. It either wins or ties. This makes it the safer, more reliable choice regardless of which model you're using.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bonus Finding: Negative Prompt Fields Don't Fully Solve This
&lt;/h2&gt;

&lt;p&gt;I also tested using FLUX's negative prompt feature—putting affirmative language in the main prompt and forbidden elements in a separate negative prompt field.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Positive:&lt;/strong&gt; "A cat with bare head, blue background, centered composition"&lt;br&gt;
&lt;strong&gt;Negative:&lt;/strong&gt; "hat, people, red, accessories, clutter"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Surprisingly, this performed &lt;em&gt;worse&lt;/em&gt; than pure affirmative framing. Red elements crept back in (collars, accessories, background elements), and some images even showed party hats.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9napkrloqrt2dgg6x8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9napkrloqrt2dgg6x8z.png" alt="Condition 3 Results" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 4: Even with forbidden elements in a dedicated negative prompt field, red accessories appeared in most images. The negative prompt still activates the forbidden concepts.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The takeaway:&lt;/strong&gt; Even purpose-built negative prompt features can't fully escape the negation problem. Pure affirmative framing remains the most reliable approach.&lt;/p&gt;




&lt;h2&gt;
  
  
  Unexpected Finding: The Gemini Automation Failure
&lt;/h2&gt;

&lt;p&gt;This is where the story gets interesting.&lt;/p&gt;

&lt;p&gt;I decided to automate the experiment. Why manually write affirmative framings when I could have an LLM generate them?&lt;/p&gt;

&lt;p&gt;I built a simple app that asked Gemini 3 Pro to generate test conditions. For the affirmative framing condition, I specified:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Generate an affirmative framing that reframes the constraint into positive instruction, focusing on what TO include rather than what to avoid."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Gemini reframed the negative constraint "no red" by focusing on "non-red colors" and "colors other than red."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It still used negation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Colors other than red" is negation—just rephrased. The model never escaped the negation frame.&lt;/p&gt;

&lt;p&gt;I tried again, more explicitly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"CRITICAL: Do NOT mention red or any excluded colors. Only specify colors that ARE allowed. Use positive language only."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Gemini still generated prompts using "colors other than red."&lt;/p&gt;

&lt;p&gt;It failed twice. Only manual rewriting produced pure affirmative language:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Describe a colorful scene using vibrant blues, electric greens, bright yellows, warm oranges, deep purples, and cool silvers."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This automation failure is itself a major finding: &lt;strong&gt;Even advanced language models struggle to generate pure affirmative framing.&lt;/strong&gt; Models are trained on human language, and human language defaults to negation.&lt;/p&gt;
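&lt;p&gt;Because the model keeps sneaking negation back in, generated prompts are worth validating before use. Here is a small heuristic linter; the phrase list is my own assumption and is deliberately not exhaustive.&lt;/p&gt;

```python
import re

# Heuristic patterns that signal residual negation in a prompt.
NEGATION_PATTERNS = [
    r"\bno\b", r"\bnot\b", r"\bwithout\b", r"\bavoid\w*\b",
    r"\bnever\b", r"\bexclud\w*\b", r"\bother than\b", r"\bnon-\w+",
]

def has_negation(prompt: str) -> bool:
    """Flag prompts that still lean on negated framing."""
    text = prompt.lower()
    return any(re.search(p, text) for p in NEGATION_PATTERNS)

# Gemini's "reframed" output still trips the check:
print(has_negation("colors other than red"))   # True
# The manually rewritten prompt passes:
print(has_negation("vibrant blues, electric greens, bright yellows"))  # False
```

&lt;p&gt;A check like this could gate an automated pipeline: reject and regenerate until the prompt is purely affirmative.&lt;/p&gt;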




&lt;h2&gt;
  
  
  Practical Rules for Better Prompts
&lt;/h2&gt;

&lt;p&gt;Based on these findings, here are concrete rules for writing better prompts:&lt;/p&gt;

&lt;h3&gt;
  
  
  Rule 1: Never Use Negation in Constraints
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Instead of:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Don't include people in the background, don't use harsh lighting, avoid reflections"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Use:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Show only the subject. Use soft, diffused lighting. Keep surfaces matte and non-reflective."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Rule 2: Be Specific About What IS Present
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Weak:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A blue background"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Strong:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A vivid, saturated blue background occupying 80% of the frame, gradient from bright blue at top to deeper blue at bottom"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Rule 3: List Desired Elements Explicitly
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Weak:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A professional photo without amateur mistakes"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Strong:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A professional product photo with: sharp focus on the product, even studio lighting, neutral background, shallow depth of field, natural colors"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Rule 4: Use Positive, Action-Oriented Language
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Don't&lt;/th&gt;
&lt;th&gt;Do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;"Avoid corporate jargon"&lt;/td&gt;
&lt;td&gt;"Use clear, simple vocabulary"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Don't make it dark"&lt;/td&gt;
&lt;td&gt;"Use bright lighting"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Without unnecessary details"&lt;/td&gt;
&lt;td&gt;"Include only essential information"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
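&lt;p&gt;Rewrites like those in the table can even be applied mechanically. A minimal lookup-based sketch, using only the illustrative phrase pairs above (a real tool would need a far richer mapping):&lt;/p&gt;

```python
# Phrase pairs taken from the Don't/Do table above.
AFFIRMATIVE_REWRITES = {
    "avoid corporate jargon": "use clear, simple vocabulary",
    "don't make it dark": "use bright lighting",
    "without unnecessary details": "include only essential information",
}

def suggest(prompt: str) -> str:
    """Swap known negated phrases for their affirmative equivalents."""
    out = prompt.lower()
    for negated, affirmative in AFFIRMATIVE_REWRITES.items():
        out = out.replace(negated, affirmative)
    return out

print(suggest("Don't make it dark"))  # use bright lighting
```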




&lt;h2&gt;
  
  
  What This Reveals About How Models Work
&lt;/h2&gt;

&lt;p&gt;Models process language the way they were trained to: like humans do. That's actually the problem.&lt;/p&gt;

&lt;p&gt;When you write "don't include red," the model processes it the same way your brain does—by first activating the concept of "red" to understand what to avoid. For humans, this conscious activation is easy to suppress. For models, that activation becomes part of the output.&lt;/p&gt;

&lt;p&gt;The difference isn't that models think differently. It's that models can't consciously &lt;em&gt;decide&lt;/em&gt; to ignore an activated concept the way you can. They generate based on what's most salient in their processing. And when you mention red—even to forbid it—you've made red salient.&lt;/p&gt;

&lt;p&gt;When you write "include blue and green," there's no competing concept to suppress. The model simply processes what you asked for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is why affirmative framing works: it removes the conflicting activation entirely.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Automation Failure: A Cautionary Note
&lt;/h2&gt;

&lt;p&gt;The fact that Gemini struggled to generate pure affirmative framing matters. When I asked it to reframe, it understood the task but couldn't do it. It kept generating "colors other than red" instead of just listing the colors to use.&lt;/p&gt;

&lt;p&gt;This reveals something important: &lt;strong&gt;Affirmative framing is not the model's default behavior.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Models learn from human language. Human language defaults to negation. So when you ask a model to generate affirmative instructions, you're asking it to do something contrary to its training.&lt;/p&gt;

&lt;p&gt;The solution? Be explicit about what you want. Show examples. Specify the structure. Don't assume the model knows what affirmative framing means—teach it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Stop fighting against how AI models process language. Speak their language: be direct, specific, and always frame instructions positively.&lt;/p&gt;

&lt;p&gt;The results speak for themselves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;From 0% to 100% success rate&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perfect consistency&lt;/strong&gt; instead of total failure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validated across multiple models&lt;/strong&gt; (FLUX and Stable Diffusion XL)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Works across constraint types&lt;/strong&gt; (color, objects, attributes, spatial, counting)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next time you write a prompt, forget about what you don't want. Focus on what you do. Be specific. Be direct. Be affirmative.&lt;/p&gt;

&lt;p&gt;The model will understand.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Agentic Bitcoin24</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Sat, 08 Nov 2025 22:22:01 +0000</pubDate>
      <link>https://dev.to/nadinev/agentic-bitcoin24-3946</link>
      <guid>https://dev.to/nadinev/agentic-bitcoin24-3946</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/tigerdata-2025-10-15"&gt;Agentic Postgres Challenge with Tiger Data&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 What I Built
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;Agentic Bitcoin24&lt;/strong&gt;, a Bitcoin price tracker that &lt;strong&gt;never goes down&lt;/strong&gt;, even when its primary data source fails. It's a growing database that gains value over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live Application:&lt;/strong&gt; &lt;a href="https://bitcoin24-delta.vercel.app/" rel="noopener noreferrer"&gt;Agentic Bitcoin24&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29s4qxuy1v7h4us1mxf2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29s4qxuy1v7h4us1mxf2.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Zero-Downtime Resilience
&lt;/h3&gt;

&lt;p&gt;When the CoinGecko API fails (rate limits, outages, network issues), the site &lt;strong&gt;automatically falls back&lt;/strong&gt; to Tiger Data's TimescaleDB cache. Users never see an error (they don't even know the switch happened).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🎯 &lt;strong&gt;Zero Downtime&lt;/strong&gt; - Site stays live during external API outages&lt;/li&gt;
&lt;li&gt;💰 &lt;strong&gt;0.31% API Usage&lt;/strong&gt; - Only &lt;strong&gt;31 calls per month&lt;/strong&gt; vs 10,000 limit&lt;/li&gt;
&lt;li&gt;⚡ &lt;strong&gt;Instant Response&lt;/strong&gt; - Tiger Data cache = no external API latency&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Transparent Fallback&lt;/strong&gt; - Users are unaware of the data source switch&lt;/li&gt;
&lt;li&gt;📈 &lt;strong&gt;10-Year Sustainability&lt;/strong&gt; - Will run for the next decade on free tier&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛢️ How I Used Agentic Postgres
&lt;/h2&gt;

&lt;p&gt;Behind the scenes, &lt;strong&gt;three autonomous agents&lt;/strong&gt; manage the entire database lifecycle - no manual SQL required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/aDuFx3NSBwk" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fothx6mdoakhviiirl8v5.png" alt="Watch the Demo" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🎬 The Agent Collaboration Model
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Agent&lt;/th&gt;
&lt;th&gt;Responsibility&lt;/th&gt;
&lt;th&gt;Actions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;1. Design Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Source-agnostic database design and ingestion.&lt;/td&gt;
&lt;td&gt;• Reads external API response and automatically designs a matching SQL schema. • Creates general-purpose tables (e.g., standard SQL or JSONB) based on user input.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2. Optimize Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Transforms and tunes existing database.&lt;/td&gt;
&lt;td&gt;• Analyzes the Design Agent's generic schema for time-series patterns. • Enables TimescaleDB compression and implements automated compression policies. &lt;strong&gt;Safety Protocol:&lt;/strong&gt; • Applies changes like indexing or compression policies only after visual confirmation and user approval.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;3. Monitoring Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Gathers database metrics.&lt;/td&gt;
&lt;td&gt;• Real-time API health checks. • Performance monitoring and visualization.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The agents autonomously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitor API health&lt;/strong&gt; in real-time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Switch tabs&lt;/strong&gt; (SQL Editor → Charts → API Monitor)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute optimizations&lt;/strong&gt; (indexing, compression)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visualize results&lt;/strong&gt; (Chart.js dashboards)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide safety guidance&lt;/strong&gt; before applying changes&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🏗️ The Workflow:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Daily Ingestion (Vercel Cron)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Fetch 24 hours of Bitcoin price data (1 API call)
2. Design Agent creates/updates schema automatically
3. Optimize Agent analyzes and tunes performance
4. TimescaleDB compression stores historical record
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Real-Time Monitoring&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CoinGecko API Health Check (every 30s)
   ↓
✅ ONLINE  → Fetch fresh data
❌ OFFLINE → Automatic fallback to Tiger Data cache
   ↓
Zero downtime for users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
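&lt;p&gt;The fallback path above boils down to a try/except around the primary source. A minimal sketch, assuming hypothetical &lt;code&gt;fetch_coingecko&lt;/code&gt; and &lt;code&gt;query_cache&lt;/code&gt; stand-ins for the real CoinGecko call and the TimescaleDB query:&lt;/p&gt;

```python
def get_prices(fetch_coingecko, query_cache):
    """Return fresh data when possible; fall back to the cache silently."""
    try:
        prices = fetch_coingecko()
        return {"source": "coingecko", "prices": prices}
    except Exception:
        # Any failure (rate limit, outage, network) falls through here,
        # so users never see an error.
        return {"source": "cache", "prices": query_cache()}

# Simulated outage: the primary raises, the cache answers.
def flaky_api():
    raise TimeoutError("CoinGecko rate limit")

result = get_prices(flaky_api, lambda: [67000.0, 67120.5])
print(result["source"])  # cache
```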






&lt;h2&gt;
  
  
  🛢️ How I Used Tiger Data + Claude
&lt;/h2&gt;

&lt;p&gt;I used &lt;strong&gt;Tiger CLI (MCP)&lt;/strong&gt; + &lt;strong&gt;Claude Code&lt;/strong&gt; to build the entire system without writing manual SQL:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tiger CLI helped agents learn TimescaleDB-specific operations (&lt;code&gt;convert_to_hypertable&lt;/code&gt;, &lt;code&gt;add_compression&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Claude Code refined the &lt;code&gt;create_zero_copy_fork&lt;/code&gt; logic and intelligent fallback strategies&lt;/li&gt;
&lt;li&gt;The agents operate in a &lt;strong&gt;chat interface&lt;/strong&gt; where I can say: &lt;em&gt;"Create a database for Bitcoin prices"&lt;/em&gt; and watch them work&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Constraint-Aware Optimization
&lt;/h3&gt;

&lt;p&gt;The Optimize Agent maximizes TimescaleDB's compression capabilities through deep reasoning about storage efficiency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically enables compression with proper time-column ordering&lt;/li&gt;
&lt;li&gt;Implements compression policies (auto-compress data older than 30 days)&lt;/li&gt;
&lt;li&gt;Projects long-term capacity and recommends optimizations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When resource constraints prevent certain operations, the agent intelligently adapts by requiring user validation, ensuring all storage optimizations are reviewed before execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  📈 The 10-Year Sustainability Model
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Math:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free tier: 10,000 API calls/month&lt;/li&gt;
&lt;li&gt;My usage: 31 calls/month (0.31%)&lt;/li&gt;
&lt;li&gt;Sustainability: &lt;strong&gt;322 months = 26+ years&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why 10+ Years:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With TimescaleDB compression enabled on the time-series data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Daily Bitcoin prices (24 hourly data points) = ~2KB per day&lt;/li&gt;
&lt;li&gt;Compressed storage: ~730KB per year&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;750MB ÷ 730KB/year ≈ 1,027 years of compressed data&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But realistically, accounting for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema overhead&lt;/li&gt;
&lt;li&gt;Indexes and metadata&lt;/li&gt;
&lt;li&gt;Query logs&lt;/li&gt;
&lt;li&gt;Potential data expansion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conservative estimate: 10+ years&lt;/strong&gt; of continuous operation without hitting storage limits.&lt;/p&gt;
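&lt;p&gt;The back-of-envelope numbers above can be reproduced in a few lines (using the same figures quoted in this section):&lt;/p&gt;

```python
# API sustainability: one ingestion call per day, ~31 per month.
free_tier_calls = 10_000           # CoinGecko free-tier calls per month
usage = 31                         # calls actually made per month
months = free_tier_calls // usage  # how long one month's quota scales
print(months, months // 12)        # 322 26

# Storage sustainability: ~2 KB/day compressed, ~730 KB/year.
storage_kb = 750 * 1000            # 750 MB free-tier storage
per_year_kb = 730
print(storage_kb // per_year_kb)   # 1027 (years of compressed data)
```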




&lt;h2&gt;
  
  
  🌟 Overall Experience
&lt;/h2&gt;

&lt;p&gt;Most apps &lt;strong&gt;fail gracefully&lt;/strong&gt;; this one &lt;strong&gt;doesn't fail at all&lt;/strong&gt;.&lt;br&gt;
We solved the data volatility problem by providing clean, 24-hour historical Bitcoin data: not by collecting data 24/7, but by ingesting 24 hourly data points once every 24 hours.&lt;/p&gt;

&lt;p&gt;The system is safe to run indefinitely and will store relevant data for &lt;strong&gt;10+ years&lt;/strong&gt; while costing &lt;strong&gt;nothing&lt;/strong&gt; to maintain.&lt;/p&gt;

&lt;p&gt;I basically hired agents who work for free and never sleep! 🎉&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>agenticpostgreschallenge</category>
      <category>ai</category>
      <category>postgres</category>
    </item>
    <item>
      <title>How I Built a Secret Agent</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Sat, 25 Oct 2025 15:59:48 +0000</pubDate>
      <link>https://dev.to/nadinev/how-i-built-a-secret-agent-4p48</link>
      <guid>https://dev.to/nadinev/how-i-built-a-secret-agent-4p48</guid>
      <description>&lt;p&gt;I recently made an accidental but interesting discovery while building an app. I managed to create an agent-like system using nothing more than Gemini's function calling feature, effectively building an agent’s brain without the traditional, continuous infrastructure required to host a full agent.&lt;/p&gt;

&lt;p&gt;The key finding❓ This $0/hr serverless approach not only significantly reduced infrastructure costs but also proved to be a far more helpful debugger than the broad, general-purpose agent provided by my IDE.&lt;/p&gt;




&lt;h2&gt;
  
  
  ֎ Persistent Agents
&lt;/h2&gt;

&lt;p&gt;Traditional AI agents (which I call Persistent Agents) require continuous hosting using managed services and underlying infrastructure. Big tech companies are offering impressive designer spaces and no-code interfaces, but this can quickly become prohibitively expensive.&lt;/p&gt;

&lt;p&gt;The issue lies in the idle cost. Immediately upon deployment, infrastructure is required to host the agent. Even if the agent is inactive or receiving no traffic, at least one compute node is required to run the service, and these costs are incurred continuously, often hourly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So what exactly does this buy you, anyway?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A persistent agent is generally equipped with tools and can use them to perform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex, multi-step reasoning.&lt;/li&gt;
&lt;li&gt;Dynamic decision-making on when and how to call tools.&lt;/li&gt;
&lt;li&gt;Management of long-running conversational memory.&lt;/li&gt;
&lt;li&gt;External actions, like authenticating on your behalf (when permission is granted).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠️ Function Calling as Your Agent
&lt;/h2&gt;

&lt;p&gt;I realised that, for my application's specific workflows, the most valuable part of an agent was its dynamic reasoning and ability to use tools, not its continuous hosting status; I also had no need for external actions.&lt;/p&gt;

&lt;p&gt;I decided to capture the core functionality of an agent without the overhead of continuous deployment. I applied tool-use logic directly via Gemini’s function calling. The tools themselves, including the logic for search, retrieval, etc., are hardcoded into my conversational frontend.&lt;/p&gt;

&lt;p&gt;The AI's role becomes the &lt;em&gt;Stateless Agent 🧠&lt;/em&gt;. It uses function calling to translate the user’s natural language query into a structured function call and arguments. &lt;/p&gt;

&lt;p&gt;The application executes the call, and the resulting data is sent back to the model for a natural language response to the user.&lt;/p&gt;

&lt;p&gt;Since I am already making calls to the Gemini model for text generation and other things, this method allows me to combine the reasoning and response steps into a single API call, reducing the transaction cost. This is how I anticipate achieving an 80% reduction in operating costs compared to maintaining a persistent agent infrastructure.&lt;/p&gt;
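&lt;p&gt;The loop reduces to: the model emits a structured call, the app executes it, and the result goes back for phrasing. A minimal sketch of that dispatch, where &lt;code&gt;model_call&lt;/code&gt; is a stub standing in for the real Gemini function-calling request (the tool name and registry below are illustrative):&lt;/p&gt;

```python
# Hardcoded tool registry: the only actions the stateless agent may take.
TOOLS = {
    "search_products": lambda query: [f"result for {query}"],
}

def model_call(user_message: str) -> dict:
    # A real implementation sends the message plus tool schemas to Gemini
    # and receives a structured function call back; this stub mimics that.
    return {"name": "search_products", "args": {"query": user_message}}

def handle(user_message: str) -> str:
    call = model_call(user_message)                # 1. model picks a tool
    result = TOOLS[call["name"]](**call["args"])  # 2. app executes it
    return f"Found: {result[0]}"                  # 3. model phrases the reply

print(handle("red sneakers"))  # Found: result for red sneakers
```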




&lt;h2&gt;
  
  
  🪲 How I Discovered My Agent
&lt;/h2&gt;

&lt;p&gt;My application is designed to fall back to a fuzzy text-matching search when vector search fails. I was coding in my IDE with a popular code assistant model running. Yet, my search pipeline was failing, and the IDE agent could not find the issue. It was writing new unit tests that were passing in the development environment but failing repeatedly in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The agent was overcomplicating things&lt;/strong&gt;, drowning in the specifics of the code, unit tests, and the immediate task. Each time I summarised the issue, its lack of persistent memory about the operational environment made it feel like I was talking to a blank slate.&lt;/p&gt;

&lt;p&gt;Finally, in sheer desperation, I ran my own application’s frontend and typed into the message input: “&lt;em&gt;What is the problem??&lt;/em&gt;”&lt;/p&gt;

&lt;p&gt;The response from my little agent's brain was immediate and shockingly direct. It informed me that it could not communicate with the backend and, therefore, could not perform the search function it was supposed to execute.&lt;/p&gt;

&lt;p&gt;The issue, it turned out, was a simple CORS policy error preventing the frontend from communicating with the backend. The traditional IDE agent was trapped in code complexity; my function-calling agent could immediately identify what was wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔒 The Security Lesson in Focus
&lt;/h2&gt;

&lt;p&gt;This unexpected diagnostic capability is actually due to its architectural limitations. The agent was forced to reason only about the predefined tool functions available in its system instructions.&lt;/p&gt;

&lt;p&gt;I then asked it how it was performing the search. It began referencing internal file paths and implementation details. This was an unintended data leak because I had not provided specific instructions or response settings on how to constrain its reply.&lt;/p&gt;

&lt;p&gt;That’s the real value of the &lt;em&gt;Stateless Agent&lt;/em&gt;: it lives intrinsically inside the code's purpose, defined solely by the functions it is permitted to use. It doesn't need vast context; it needs focused context. &lt;/p&gt;

&lt;p&gt;The biggest takeaway from this experiment is that tooling isn't a massive, stateful "IDE Agent" that watches your every keystroke. Instead, there is value in composing stateless, focused expert agents that live intrinsically inside the purpose of the code. &lt;/p&gt;

</description>
      <category>agents</category>
      <category>serverless</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
    <item>
      <title>🧠 Human Intelligence vs. LLMs</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Tue, 30 Sep 2025 16:29:45 +0000</pubDate>
      <link>https://dev.to/nadinev/human-intelligence-vs-llms-4akm</link>
      <guid>https://dev.to/nadinev/human-intelligence-vs-llms-4akm</guid>
      <description>&lt;p&gt;I was doing investigative research and found a crucial bit of information in a single article. I then used an LLM to perform Deep Research on the same topic but, to my surprise, it returned a report with the claim that the argument was unsubstantiated.&lt;/p&gt;

&lt;p&gt;I used an open-source LLM to perform the same Deep Research and the result was the same. I then attempted to direct the LLM to the source whilst providing a scope, but it was unable to find the information. Instead, the LLM defaulted to searching for a higher-authority source, such as official reports, effectively &lt;strong&gt;dismissing the original article's finding as an unverified outlier.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ The Flaw of Statistical Text Matching
&lt;/h2&gt;

&lt;p&gt;This fundamental reliance on statistical text matching is a flaw in LLMs' ability to perform genuine deep research. Consequently, the model has a bias towards articles or information that is &lt;em&gt;frequent&lt;/em&gt;, &lt;em&gt;common&lt;/em&gt;, and &lt;em&gt;statistically factual&lt;/em&gt;. Rare information is often lost or missing from research.&lt;/p&gt;

&lt;p&gt;Whilst multimodal models are better at processing varied inputs, the issue often stems from the initial filtering. My research suggests that much niche information, even when accessible to a crawler, is disregarded due to poor &lt;strong&gt;Search Engine Optimisation (SEO)&lt;/strong&gt; or poor indexing. The underlying LLM's research mechanism then filters this out due to low apparent authority or link popularity, reinforcing the LLM's bias towards common knowledge.&lt;/p&gt;

&lt;p&gt;Interestingly enough, smaller models are better at returning facts. This may be because the pre-trained knowledge of LLMs can interfere with their ability to accept new information. This is why fact-checking and proper research are much better when there is human input and a RAG-based mechanism that relies on more than just text-based matching. The key lies in &lt;strong&gt;structured knowledge matching&lt;/strong&gt;, for which an NER system is a highly capable tool.&lt;/p&gt;

&lt;p&gt;NER systems are able to convert unstructured text into explicit facts and relationships, and then search those structured facts (this is a knowledge-based match). &lt;strong&gt;NER-based RAG&lt;/strong&gt; is like asking a human to first annotate the text with all key facts and then search those structured facts. It is far better at handling the context. It does, however, require human input to set up and manage effectively.&lt;/p&gt;
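&lt;p&gt;The idea can be shown with a toy sketch: extract (entity, relation, value) triples first, then answer queries against the structured facts rather than raw text. The regex "NER" here is deliberately minimal and illustrative; a real pipeline would use a proper NER model.&lt;/p&gt;

```python
import re

def extract_facts(text: str) -> set:
    """Convert unstructured text into explicit (subject, relation, value) triples."""
    facts = set()
    for subject, value in re.findall(r"(\w+) was founded in (\d{4})", text):
        facts.add((subject, "founded_in", value))
    return facts

def lookup(facts, subject: str, relation: str) -> list:
    """Knowledge-based match: search the structured facts, not the text."""
    return [v for s, r, v in facts if s == subject and r == relation]

facts = extract_facts("Acme was founded in 1999. Globex was founded in 2004.")
print(lookup(facts, "Acme", "founded_in"))  # ['1999']
```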




&lt;h2&gt;
  
  
  🔗The Role of Context Dependency
&lt;/h2&gt;

&lt;p&gt;Context dependency essentially means the '&lt;em&gt;fact&lt;/em&gt;' only exists as a full truth when the context is included. The standard LLM's automated process often loses this nuance in the &lt;em&gt;search&lt;/em&gt;, &lt;em&gt;retrieval&lt;/em&gt;, and &lt;em&gt;synthesis&lt;/em&gt; stages.&lt;/p&gt;

&lt;p&gt;For example, an article may contain information that is implied rather than stated explicitly. This is where &lt;strong&gt;human attention to detail and reasoning are important for discernment&lt;/strong&gt;. The model struggles because it operates on token probability (what word is most likely to come next) and semantic similarity (how closely a retrieved text snippet's vector resembles the vector of your prompt).&lt;/p&gt;

&lt;p&gt;The LLM cannot perform the final step of human reasoning to connect: &lt;strong&gt;Premise A&lt;/strong&gt; (in sentence 1) + &lt;strong&gt;Condition B&lt;/strong&gt; (in sentence 3) → &lt;strong&gt;Explicit Fact C&lt;/strong&gt; (missing piece).&lt;/p&gt;

&lt;p&gt;Human researchers are good at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inference:&lt;/strong&gt; Understanding what is implied, not just what is explicitly written.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; Distinguishing between primary data and additional examples.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scepticism:&lt;/strong&gt; Knowing which article to trust and how to connect its logic, even when the writing is poor or the reasoning is left implicit.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
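A toy illustration of why per-snippet similarity retrieval misses facts that span sentences; word overlap stands in for vector similarity here, and the snippets are invented:

```python
def overlap_score(query, snippet):
    """Crude stand-in for vector similarity: fraction of shared words."""
    q = set(query.lower().split())
    s = set(snippet.lower().split())
    return len(q.intersection(s)) / len(q)

# Premise A and Condition B live in different snippets; the derived
# Fact C ("40 patients completed the trial") is never stated in one place.
snippets = [
    "the study enrolled 50 patients in total",     # Premise A
    "ten participants withdrew before treatment",  # Condition B
    "the clinic is located in a rural area",
]

query = "how many patients completed the trial"
best = max(snippets, key=lambda s: overlap_score(query, s))
print(best)  # a single snippet wins, but the answer needs two combined
```

Retrieval returns whichever single snippet scores highest; the missing step, combining Premise A with Condition B into Fact C, is exactly the inference a human researcher supplies.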




&lt;h2&gt;
  
  
  ⚖️ Why Not Just Use Gemini? It Works Fine.
&lt;/h2&gt;

&lt;p&gt;When I asked an LLM to compare models, one of the key factors that caused Gemini 2.5 Pro to rank highly is its ability to perform &lt;em&gt;Source Quality Filtering&lt;/em&gt;. This is because Gemini leverages &lt;strong&gt;Google Search&lt;/strong&gt; for real-time grounding, which gives it the best mechanism for filtering out misinformation, low-quality sites, and low-authority sources. This, however, directly leads to the risk of missing rare or new information.&lt;/p&gt;

&lt;p&gt;I ranked the models, and their underlying mechanisms, by their ability to find niche information during deep research, considering factors such as access to specialised data, &lt;strong&gt;customisability&lt;/strong&gt;, and inherent biases in training data. GPT-4o ranks third, mainly due to its smaller context window of 128K tokens compared to Gemini 2.5 Pro, which is a limiting factor when dealing with large or numerous documents in deep research.&lt;/p&gt;

&lt;p&gt;Based on &lt;strong&gt;customisation&lt;/strong&gt;, context depth, and inherent bias in deep research for new information, my final ranking of the current solutions is:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Model/Approach&lt;/th&gt;
&lt;th&gt;Advantage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;⚪&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Custom RAG/NER&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Guaranteed Accuracy&lt;/strong&gt; (Low Hallucination) &amp;amp; specialised jargon understanding.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph5ck0nq8rcnbcwvvz7l.png" alt=" " width="32" height="32"&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Maximum Synthesis&lt;/strong&gt; over 1M tokens of search results for rare connections.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuaxinststsi6734uuu0c.png" alt=" " width="32" height="32"&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Claude 3 Opus&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Superior Reasoning&lt;/strong&gt; and self-correction to vet complex, potentially niche findings.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jc8u8wojw87dex17rt7.png" alt=" " width="32" height="32"&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;GPT-4o&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Autonomy &amp;amp; Multimodality&lt;/strong&gt; (e.g., extracting data from niche charts/images found online).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The choice depends on priorities: reliability and ease of use favour general-purpose models.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>llm</category>
    </item>
    <item>
      <title>RAGgle</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Sat, 27 Sep 2025 17:30:25 +0000</pubDate>
      <link>https://dev.to/nadinev/raggle-36o9</link>
      <guid>https://dev.to/nadinev/raggle-36o9</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/kendoreact-2025-09-10"&gt;KendoReact Free Components Challenge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a conversational search engine powered by Nuclia’s Agentic RAG for e‑commerce product research, so users get clear, relevant insights tailored to their interests.&lt;/p&gt;

&lt;p&gt;The app solves a common problem: extracting and making sense of scattered product information across multiple websites. With this tool, users can: &lt;strong&gt;index product data&lt;/strong&gt; from any website automatically, and &lt;strong&gt;chat with an AI assistant&lt;/strong&gt; to explore insights from the indexed results.  &lt;/p&gt;

&lt;p&gt;On the frontend, the interface is built with &lt;strong&gt;KendoReact&lt;/strong&gt; and React components. On the backend, a &lt;strong&gt;Flask API&lt;/strong&gt; handles integration with &lt;strong&gt;Nuclia’s endpoints&lt;/strong&gt;, managing indexing, search, and conversational retrieval.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/95xwflFnRN4"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;🎥 The demo showcases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;KendoReact component interactions &lt;/li&gt;
&lt;li&gt;Image upload and AI-powered analysis&lt;/li&gt;
&lt;li&gt;Conversational search with book-related queries&lt;/li&gt;
&lt;li&gt;Complete user workflow from indexing to insights&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Sample Query&lt;/strong&gt;: Why is the book 107 days by Kamala harris so popular?&lt;br&gt;
&lt;strong&gt;Rephrased Query&lt;/strong&gt;: "What makes Kamala Harris's book '107 Days' so popular?"&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Input Nuclia Tokens&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;19.615&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Output Nuclia Tokens&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.204&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total Processing Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3.541 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Time to First Word&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2.616 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Response Latency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.925 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Performance Insights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Token Efficiency&lt;/strong&gt;: High input-to-output token ratio (19.6:0.2) indicates efficient query processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Speed&lt;/strong&gt;: Sub-4-second total response time for complex conversational queries&lt;/li&gt;
&lt;/ul&gt;
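As a quick sanity check, the latency figures in the table above are internally consistent, and the token ratio works out to roughly 96 to 1:

```python
# Values copied from the metrics table above.
total_time = 3.541          # Total Processing Time (s)
time_to_first_word = 2.616  # Time to First Word (s)
response_latency = 0.925    # Response Latency (s)

# Latency is the remainder once the first word has appeared.
assert round(total_time - time_to_first_word, 3) == response_latency

# Input-to-output Nuclia token ratio from the table.
print(round(19.615 / 0.204))  # → 96
```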

&lt;p&gt;&lt;em&gt;These metrics demonstrate RAGgle's performance for real-time conversational search.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📌&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/nadinev6" rel="noopener noreferrer"&gt;
        nadinev6
      &lt;/a&gt; / &lt;a href="https://github.com/nadinev6/RAGgle" rel="noopener noreferrer"&gt;
        RAGgle
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      RAGgle is a conversational RAG search engine that crawls URLs, indexes product data with Nuclia, and exposes it through an AI chat interface.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;RAGgle&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;This project is a custom, conversational search engine powered by Nuclia's RAG (Retrieval Augmented Generation) technology. It allows users to index content from various websites and interact with an AI-powered chat assistant to get insights about the indexed content. Users can also filter their indexed URL history by date range for better organization. The frontend is built with React and KendoReact components, while the backend is a Flask application that interfaces with the Nuclia API.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;How It Works&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;The screenshot below demonstrates how RAGgle uses cross-referenced data to answer complex questions. In this example, the AI assistant compares Charlie Kirk's most popular book with his other works by analyzing indexed content from multiple sources, providing detailed ratings, review counts, and thematic comparisons across his entire catalog.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/nadinev6/RAGgle/docs/example_ners.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fnadinev6%2FRAGgle%2Fdocs%2Fexample_ners.png" alt="RAGgle Chat Example"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Example: AI assistant comparing books by Charlie Kirk using cross-referenced data from indexed content&lt;/em&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;URL Indexing&lt;/strong&gt;: Index content from various websites…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/nadinev6/RAGgle" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  🧩 KendoReact Components Used
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;TabStrip (from @progress/kendo-react-layout)&lt;/li&gt;
&lt;li&gt;TabStripTab (from @progress/kendo-react-layout)&lt;/li&gt;
&lt;li&gt;Popup (from @progress/kendo-react-popup)&lt;/li&gt;
&lt;li&gt;Icon (from @progress/kendo-react-common)&lt;/li&gt;
&lt;li&gt;Button (from @progress/kendo-react-buttons)&lt;/li&gt;
&lt;li&gt;Switch (from @progress/kendo-react-inputs)&lt;/li&gt;
&lt;li&gt;DatePicker (from @progress/kendo-react-dateinputs)&lt;/li&gt;
&lt;li&gt;ProgressBar (from @progress/kendo-react-progressbars)&lt;/li&gt;
&lt;li&gt;Tooltip (from @progress/kendo-react-tooltip)&lt;/li&gt;
&lt;li&gt;Notification (from @progress/kendo-react-notification)&lt;/li&gt;
&lt;li&gt;NotificationGroup (from @progress/kendo-react-notification)&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  [Optional: RAGs to Riches Prize Category] Nuclia Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;RAGgle&lt;/strong&gt; is a custom search engine that leverages &lt;strong&gt;Nuclia’s Agentic RAG&lt;/strong&gt; capabilities to automatically crawl and index URLs.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users can enter a URL, and Nuclia proceeds to index the entire site automatically within minutes.
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;AI chat widget&lt;/strong&gt; (built and imported from Nuclia) provides natural language interaction.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Model used&lt;/strong&gt;: chatgpt-azure-4o for standard answer generation and image processing.  &lt;/p&gt;

&lt;h3&gt;
  
  
  ⚡ Active Endpoints
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;POST /index-url&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Purpose: Receives a URL from the frontend, instructs Nuclia to ingest the content from that URL, retrieves extracted metadata from Nuclia, and then stores relevant product information in a Supabase products table.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;GET /nuclia-config&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Purpose: Provides the Nuclia API keys and Knowledge Box ID to the frontend, allowing the nuclia-chat widget to connect to the correct Nuclia instance.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ How It Works
&lt;/h2&gt;

&lt;p&gt;Here's what the frontend enables:&lt;/p&gt;

&lt;p&gt;The Nuclia chat widget is imported with custom features configured through attributes such as &lt;code&gt;answers&lt;/code&gt;, &lt;code&gt;queryImage&lt;/code&gt;, &lt;code&gt;rephrase&lt;/code&gt;, and &lt;code&gt;citations&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A label set is created in the Nuclia dashboard and applied to all indexed resources. The chatbot then uses these labels through configured attributes:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rag_strategies="neighbouring_paragraphs|2|2,metadata_extension|classification_labels"&lt;br&gt;
generativemodel="chatgpt-azure-4o"&lt;br&gt;
metadata="title, author, price"&lt;/code&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Index Wholesale Sites&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Paste a product page URL (e.g., Alibaba, Amazon) and the system automatically scrapes and indexes the content into your Nuclia KB (Knowledge Base).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Quick Search&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Queries are automatically rephrased with AI for better semantic matching using the /predict/rephrase endpoint.&lt;/li&gt;
&lt;li&gt;Relationship-based search finds focused product-related information from image references or questions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Converse with AI&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A built-in Nuclia-powered chat assistant helps explore indexed results conversationally and provides context-aware insights for persistent chats.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  🕷️ Database Integration in URL Indexing
&lt;/h3&gt;

&lt;p&gt;To search products and make meaningful comparisons, you need a structured data layer. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let Nuclia do the heavy lifting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why?&lt;/em&gt; Because its internal scraping mechanism is more effective than any direct requests.get approach. Nuclia successfully indexes content from protected sites including major e-commerce platforms like Amazon and Alibaba.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Flow:&lt;/strong&gt;&lt;br&gt;
URL → Nuclia (automatic scraping &amp;amp; indexing) → Extract structured data from Nuclia → Supabase (optional)&lt;/p&gt;

&lt;p&gt;I initially attempted to use BeautifulSoup for direct scraping, but switched to Nuclia's internal scraping capabilities instead. &lt;/p&gt;

&lt;p&gt;Nuclia's RAG capabilities &lt;strong&gt;automatically&lt;/strong&gt; process the content of the URL, extract entities, and generate usermetadata based on the content it finds. This usermetadata often includes details like product &lt;em&gt;names&lt;/em&gt;, &lt;em&gt;prices&lt;/em&gt;, &lt;em&gt;descriptions&lt;/em&gt;, and &lt;em&gt;images&lt;/em&gt; if they are present and structured on the webpage.&lt;/p&gt;

&lt;p&gt;After Nuclia has processed the URL, the backend then takes these extracted details (like name, price_text, image_url, description, supplier, availability) and stores them in a Supabase products table for monitoring trends over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's how the backend system works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload URL to Nuclia &lt;/li&gt;
&lt;li&gt;Retrieve extracted metadata from Nuclia&lt;/li&gt;
&lt;li&gt;Store flattened data in Supabase &lt;/li&gt;
&lt;li&gt;Return success to frontend &lt;/li&gt;
&lt;/ol&gt;
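The four steps above can be sketched as plain functions; the Nuclia and Supabase calls are stubbed here, and the field names (name, price_text, image_url) mirror those mentioned earlier rather than a real API contract:

```python
def upload_to_nuclia(url):
    """Step 1: instruct Nuclia to ingest the URL (stubbed)."""
    return {"resource_id": "res-123", "source_url": url}

def fetch_metadata(resource_id):
    """Step 2: retrieve extracted metadata from Nuclia (stubbed)."""
    return {"usermetadata": {"name": "Widget", "price_text": "$9.99",
                             "image_url": "https://example.com/w.png"}}

def flatten(metadata, source_url):
    """Step 3: flatten nested metadata into a row for the products table."""
    row = dict(metadata["usermetadata"])
    row["source_url"] = source_url
    return row

def index_url(url):
    """The full /index-url flow, returning success to the frontend (step 4)."""
    resource = upload_to_nuclia(url)
    metadata = fetch_metadata(resource["resource_id"])
    row = flatten(metadata, resource["source_url"])
    # A real backend would insert `row` into the Supabase products table here.
    return {"status": "success", "product": row}

print(index_url("https://example.com/product/1"))
```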

&lt;p&gt;&lt;strong&gt;🎯 Result&lt;/strong&gt;: Your chatbot can answer both qualitative questions (product features, reviews) and quantitative questions (pricing trends, comparisons).&lt;/p&gt;




&lt;p&gt;&lt;em&gt;No hallucinations here&lt;/em&gt;: if the data isn’t in the KB, the AI won’t produce an answer. That’s the beauty of RAG: grounded, reliable responses drawn from real sources.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>kendoreactchallenge</category>
      <category>react</category>
      <category>webdev</category>
    </item>
    <item>
      <title>VisionGen</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Sat, 13 Sep 2025 20:30:36 +0000</pubDate>
      <link>https://dev.to/nadinev/visiongen-2kdp</link>
      <guid>https://dev.to/nadinev/visiongen-2kdp</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-ai-studio-2025-09-03"&gt;Google AI Studio Multimodal Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;VisionGen is a next-gen tool for "Video-to-Video" generation. Because prompting often does not give you the results you want, I cooked up an a-la-carte JSON prompting applet. It automates prompt generation for the most accurate results.&lt;/p&gt;

&lt;p&gt;The applet removes the need for time-consuming manual video annotation by automating object detection, tracking, and scene segmentation with precision. This is useful for computer vision training, but it skips the training process to provide an immediate practical use: creating new videos from a reference.&lt;/p&gt;

&lt;p&gt;This helps you "&lt;strong&gt;get it right the first time&lt;/strong&gt;," because if a generated video isn't what you want, and you have to regenerate it, you pay for both attempts. This is why VisionGen is designed to increase the chances of getting the perfect video on the first try.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/uXhhbulzS60"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://visiongen-1042863337193.us-west1.run.app/" rel="noopener noreferrer"&gt;VisionGen&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Video Analysis Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Consistent Object Tracking → Increase confidence threshold if objects are missed.&lt;/li&gt;
&lt;li&gt;Bounding Boxes → With object classes that can be filtered or excluded.&lt;/li&gt;
&lt;li&gt;Contextual Descriptions → Can be edited or modified for final output.&lt;/li&gt;
&lt;li&gt;Transcriptions → Provides temporal cues based on timestamps.&lt;/li&gt;
&lt;li&gt;Timeline Visualization → Jump to a specific moment in the video by clicking on the text.&lt;/li&gt;
&lt;li&gt;Scene Segmentation → Automatic detection of scene changes and storyline&lt;/li&gt;
&lt;li&gt;Screenshots → The secret hidden ingredient exclusive to VisionGen's workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Multimodal Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🎥 Video Understanding and Generation
&lt;/h3&gt;

&lt;p&gt;Gemini is used to understand temporal relationships and object movements. It generates a coherent video that maintains object consistency and follows the narrative provided.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔍 Object Tracking with Occlusion Handling
&lt;/h3&gt;

&lt;p&gt;Maintains consistent object IDs throughout videos, even when objects are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Temporarily occluded (hidden behind other objects)&lt;/li&gt;
&lt;li&gt;Partially visible&lt;/li&gt;
&lt;li&gt;Leaving and re-entering the frame&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model interpolates positions based on trajectory before and after occlusions, ensuring continuous tracking.&lt;/p&gt;
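A minimal sketch of that interpolation, assuming linear motion between the last detection before the occlusion and the first one after it (toy coordinates):

```python
def lerp(a, b, t):
    """Linearly interpolate between two points (t ranges from 0 to 1)."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(len(a)))

# Object centre last seen at t=1.0s, re-detected at t=2.0s after occlusion.
last_seen = (100.0, 50.0)
re_seen = (200.0, 90.0)

# Fill in a few occluded timestamps between the two detections.
for ts in (1.25, 1.5, 1.75):
    t = (ts - 1.0) / (2.0 - 1.0)
    print(ts, lerp(last_seen, re_seen, t))
```

A real tracker would fit the trajectory rather than assume straight-line motion, but the principle is the same: positions during the occlusion are inferred from the detections on either side, so the object keeps a single consistent ID.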

&lt;h3&gt;
  
  
  🎬 Scene Segmentation
&lt;/h3&gt;

&lt;p&gt;The AI identifies distinct scene changes with precise timestamps and descriptions, enabling users to understand the overall structure quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚙️ Configurable Analysis Parameters
&lt;/h3&gt;

&lt;p&gt;Users can customize analysis settings to balance detail and processing speed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Confidence Threshold&lt;/strong&gt;: Filter out lower-confidence detections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frame Rate&lt;/strong&gt;: Control analysis granularity for different video lengths.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time Range Focusing&lt;/strong&gt;: Analyze specific segments of longer videos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add/Remove Audio&lt;/strong&gt;: Optional audio for balancing cost or to overlay your own audio.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How I Used Google AI Studio
&lt;/h2&gt;

&lt;p&gt;Built entirely using Google AI Studio:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Flash Integration&lt;/strong&gt;: for video understanding to analyze uploaded files frame-by-frame, extracting detailed annotations and creating a narrative.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;(default) veo-2.0-generate-001 endpoint&lt;/strong&gt;: the applet is designed to be model-agnostic and has 2 endpoints configured, including veo-3-fast-generate-preview.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GoogleGenAI SDK&lt;/strong&gt;: for communication with the Gemini and Veo APIs using structured prompts for both analysis and generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Run Deployment&lt;/strong&gt;: for a scalable, secure deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The application is designed to communicate with Google's models, including Veo, directly from the browser using the official @google/genai JavaScript SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Standard JSON Prompting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JSON Prompt Generation&lt;/strong&gt;: automatically transforms the video analysis data into a structured, narrative JSON prompt, ready for video generation.&lt;/p&gt;

&lt;p&gt;The standard feature of the applet is interactive video generation, allowing users to review and, if needed, edit the AI-generated script before creating a new video using a selected model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✨ 2. Advanced JSON Prompting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The JSON prompt was upgraded to collect data from the video analysis and generate the intermediate requirements to form an &lt;em&gt;a-la-carte JSON prompt&lt;/em&gt;, including: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;model&lt;/li&gt;
&lt;li&gt;prompt&lt;/li&gt;
&lt;li&gt;negative prompt&lt;/li&gt;
&lt;li&gt;seed&lt;/li&gt;
&lt;li&gt;keyframes (screenshots)&lt;/li&gt;
&lt;li&gt;transcription&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This new JSON prompt includes a narrative, excluded objects (negative prompts), keyframes from the source video, and transcription data to guide the AI with maximum context from the original video to enhance the final output's adherence to your vision.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why JSON Prompts?
&lt;/h3&gt;

&lt;p&gt;The raw Video Analysis data is a spreadsheet of disconnected facts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;timestamp:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.2&lt;/span&gt;&lt;span class="err"&gt;s,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;object:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'car',&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;bbox:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;timestamp:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.3&lt;/span&gt;&lt;span class="err"&gt;s,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;object:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'person',&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;bbox:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;timestamp:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.4&lt;/span&gt;&lt;span class="err"&gt;s,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;object:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'car',&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;bbox:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;generateNarrativeForVideo&lt;/em&gt; uses the Gemini text model to act as a "scriptwriter." We ask it to convert the raw data into a structured array of NarrativePoint objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"veo-3.0-fast-generate-preview"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Joker appears from the right side of the frame..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"negativePrompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"seed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;94272&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"keyframes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;44.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQ..."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;The&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;very&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;long&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Base&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"data:image/jpeg;base64,/9j/2wBDAAYEBQYFBAY..."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Another&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;very&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;long&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"transcription"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"[29.47s - 30.07s] Hey Arthur..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This JSON array acts as a chronological shot list or a script. It forces the AI to organize the chaotic events (and use the video as reference for a new video).&lt;/p&gt;




&lt;h3&gt;
  
  
  ☭ Division of Labor Works Better Together
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Seeding&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;What is seeding and why is it included in the metadata?&lt;/p&gt;

&lt;p&gt;The prompt is the destination, and the seed provides the path the AI takes to get there. &lt;/p&gt;

&lt;p&gt;Seeding is for &lt;strong&gt;consistency&lt;/strong&gt;. It creates a reference value assigned to a prompt to ensure the model adheres to specific details in the reference data. It is used to reproduce the same output each time, even with minor prompt variations, by following the same "path."&lt;/p&gt;

&lt;p&gt;If you can predict the result, you can manipulate small details of the narrative. For example, you can add, remove, or modify a prompt's details, like the color of a car, without unintentionally changing the car type or the direction it's driving in.&lt;/p&gt;
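The effect can be illustrated with any seeded pseudo-random generator; this is a toy stand-in, not the Veo API, and the seed value reuses the one from the JSON example above:

```python
import random

def generate(prompt, seed):
    """Toy stand-in for a seeded generator: the seed fixes the 'path'."""
    rng = random.Random(seed)
    # Details not pinned by the prompt are drawn from the seeded stream.
    car_type = rng.choice(["sedan", "truck", "coupe"])
    direction = rng.choice(["left", "right"])
    return f"{prompt}: {car_type} driving {direction}"

# Same seed, edited prompt: only the edited detail changes.
a = generate("a red car", seed=94272)
b = generate("a blue car", seed=94272)
print(a)
print(b)
```

With the seed held fixed, the "unspecified" details (car type, direction) come out identical across both calls, so editing the colour in the prompt does not reshuffle everything else.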

&lt;p&gt;&lt;strong&gt;2. 📸 Chaining&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One API call to a model like Veo generates an 8-second video segment. Each subsequent call requires a new prompt and starts from a fresh frame, which can result in broken context between segments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: We use a screenshot as a Base64-encoded 'last frame' from the previous step. The new prompt describes what should happen next, continuing from the screenshot.&lt;/p&gt;

&lt;p&gt;This approach grounds the AI, forcing it to adopt the original video's &lt;strong&gt;color palette, lighting, object style,&lt;/strong&gt; and &lt;strong&gt;composition&lt;/strong&gt;. This process eliminates ambiguity and is the single most important addition for consistency.&lt;/p&gt;

&lt;p&gt;This process can be repeated for as many segments as you need to reach your desired video length.&lt;/p&gt;
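A sketch of that chaining loop, with generate_segment as a hypothetical stand-in for the real 8-second generation call:

```python
def generate_segment(prompt, last_frame=None):
    """Stand-in for one 8-second video-generation call (stubbed)."""
    # A real call would pass `last_frame` as a Base64-encoded keyframe.
    return {"video": f"segment for: {prompt}", "last_frame": f"frame({prompt})"}

def chain(prompts):
    """Generate each segment continuing from the previous segment's last frame."""
    segments, last_frame = [], None
    for prompt in prompts:
        result = generate_segment(prompt, last_frame=last_frame)
        segments.append(result["video"])
        last_frame = result["last_frame"]  # grounds the next call
    return segments

print(chain(["car enters frame", "car stops at lights", "car turns left"]))
```

Each iteration hands the previous step's screenshot forward, so every new prompt describes what happens next rather than restarting from nothing.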




&lt;h3&gt;
  
  
  📊 Export Formats:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YOLO Format&lt;/strong&gt;: Optimized for object detection model training.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;COCO JSON&lt;/strong&gt;: Compatible with popular computer vision frameworks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A-la-carte JSON&lt;/strong&gt;: A detailed JSON file with scripts and keyframes.&lt;/li&gt;
&lt;/ul&gt;
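&lt;p&gt;As an illustration of the YOLO format (class id followed by normalized center coordinates and size), a pixel bounding box can be converted like this; the helper name is ours, not part of the applet:&lt;/p&gt;

```python
def to_yolo_line(box, img_w, img_h, class_id=0):
    # box is (x, y, w, h) in pixels, top-left origin; YOLO wants
    # "class cx cy w h" with every value normalized to the 0..1 range.
    x, y, w, h = box
    cx = (x + w / 2) / img_w
    cy = (y + h / 2) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

line = to_yolo_line((100, 50, 200, 100), img_w=1920, img_h=1080)
```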

&lt;p&gt;&lt;strong&gt;Persistent Storage&lt;/strong&gt;: All metadata is saved locally along with the analysis, so it will be restored when you load a project from your history.&lt;/p&gt;




&lt;p&gt;To use the applet and generate videos, bring your own API key and add it in Settings. &lt;em&gt;Your API key is stored safely in your own browser using localStorage.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>FieldCraft</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Thu, 11 Sep 2025 23:14:27 +0000</pubDate>
      <link>https://dev.to/nadinev/fieldcraft-1e65</link>
      <guid>https://dev.to/nadinev/fieldcraft-1e65</guid>
      <description>&lt;h2&gt;
  
  
  FieldCraft, a Cursor for Form Builders
&lt;/h2&gt;

&lt;p&gt;FieldCraft uses &lt;strong&gt;Tambo AI&lt;/strong&gt; to act as an intelligent cursor, directly manipulating the user interface based on natural language commands. This creates a dynamic, user-driven UI/UX.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎥 Demo and Code
&lt;/h3&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/W7i0ZMEhVFw"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Check out the live demo and the full codebase for FieldCraft.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/nadinev6" rel="noopener noreferrer"&gt;
        nadinev6
      &lt;/a&gt; / &lt;a href="https://github.com/nadinev6/FieldCraft" rel="noopener noreferrer"&gt;
        FieldCraft
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A cursor for Form Builders, built for the TamboHack
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;FieldCraft&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;FieldCraft is a Cursor for Form Builders powered by Tambo AI, designed for the TamboHack. It uses Next.js, React, TypeScript, Tailwind CSS, and Zod for schema validation. This README will guide you through the architecture and file structure so you can replicate or extend the app.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Overview&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;FieldCraft enables dynamic, schema-driven form creation and rendering. Forms are defined using JSON objects validated by Zod schemas, and rendered as interactive UI components. The app supports multi-section forms, conditional logic, and extensible field types.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;File Structure &amp;amp; Key Components&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;1. &lt;strong&gt;Form Field Schemas&lt;/strong&gt;
&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;Defines the blueprint for each form field type using Zod.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;File:&lt;/strong&gt; &lt;code&gt;src/lib/form-field-schemas.ts&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Centralized schema definitions for all supported field types and structural elements.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;Supported Field Types:&lt;/h4&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Basic Fields:&lt;/strong&gt; &lt;code&gt;text&lt;/code&gt;, &lt;code&gt;email&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt;, &lt;code&gt;number&lt;/code&gt;, &lt;code&gt;checkbox&lt;/code&gt;, &lt;code&gt;select&lt;/code&gt;, &lt;code&gt;radio&lt;/code&gt;, &lt;code&gt;textarea&lt;/code&gt;, &lt;code&gt;date&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structural Elements:&lt;/strong&gt; &lt;code&gt;group&lt;/code&gt; (for sections), &lt;code&gt;divider&lt;/code&gt; (for visual separation)&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;Form Definition Flexibility:&lt;/h4&gt;

&lt;/div&gt;
&lt;p&gt;The…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/nadinev6/FieldCraft" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;
  
  
  🧠 AI-Driven UI
&lt;/h2&gt;

&lt;p&gt;FieldCraft can directly control components and dynamically generate forms. This approach makes it very easy to build forms without using technical terms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic Generation
&lt;/h3&gt;

&lt;p&gt;Unlike traditional form builders like &lt;code&gt;react-jsonschema-form&lt;/code&gt;, where you must first write a static, pre-defined schema, FieldCraft's input is a natural language prompt from a user. The AI's job is to &lt;strong&gt;generate the Zod schema dynamically in real-time&lt;/strong&gt;. This is the most valuable and innovative part of the process: a dynamic, user-driven generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI-Driven UI &amp;amp; Interactables
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interactable Components:&lt;/strong&gt; Components can be wrapped with &lt;code&gt;withInteractable&lt;/code&gt; (e.g., &lt;code&gt;ThemeToggle&lt;/code&gt;), allowing the AI to update their props in place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Control:&lt;/strong&gt; The AI can modify a component's state based on natural language commands, creating a user-driven UI. Users can also continue building other forms in the same chat and view previous versions in the canvas space.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; A user can say, "Change the theme to dark mode," and the AI will update the &lt;code&gt;ThemeToggle&lt;/code&gt; component's state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Single-Page or Multi-Step Form Builder
&lt;/h3&gt;

&lt;p&gt;FieldCraft simplifies the creation of forms and their content. Users can generate complex forms with features like step-by-step navigation, conditional logic, and real-time validation with a single prompt. If a user provides all the necessary specifications in one prompt, the AI assistant will generate a complete JSON object, and the application's renderer will then create the entire form at once.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 How It Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The AI's job is to produce the data. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Zod schema's job is to validate that data. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI assistant responds with a JSON object that matches the &lt;code&gt;propsSchema&lt;/code&gt;, which is defined in &lt;code&gt;form-definitions.ts&lt;/code&gt; and &lt;code&gt;multistep-form-definitions.ts&lt;/code&gt;. If a user's term is vague, the assistant will guide them toward a more specific request.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding New Fields
&lt;/h3&gt;

&lt;p&gt;The system is designed to be easily extensible. Any new field type must be defined with a corresponding Zod schema.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; To add a new field type called &lt;code&gt;Box Rating&lt;/code&gt; (up to 10), you would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Add a new &lt;strong&gt;Zod schema&lt;/strong&gt; in &lt;code&gt;form-field-schemas.ts&lt;/code&gt; to define the structure and validation rules (label, name, maxRating).&lt;/li&gt;
&lt;li&gt; Update the &lt;strong&gt;union type&lt;/strong&gt; in &lt;code&gt;form-renderer.tsx&lt;/code&gt; to include the new schema.&lt;/li&gt;
&lt;li&gt; Add the new &lt;strong&gt;field type&lt;/strong&gt; to the validation logic in &lt;code&gt;form-validation.tsx&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;See the full list of available &lt;a href="https://github.com/nadinev6/FieldCraft/blob/46b17de8841711557350837b814a118861fc18b3/Form-Styling.md" rel="noopener noreferrer"&gt;Styling Options&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Content Generation
&lt;/h2&gt;

&lt;p&gt;The repository comes pre-configured with example multi-step forms, providing users with a starting point for full customization.&lt;/p&gt;

&lt;p&gt;Users can customize templates for preference-based multi-step forms or define all fields in a single prompt.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;"&lt;em&gt;Create a multi-step user registration form with account setup, personal info, preferences, and review steps.&lt;/em&gt;"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;"&lt;em&gt;Build a product feedback form with multiple rating steps and a recommendation section.&lt;/em&gt;"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dynamic Styling
&lt;/h3&gt;

&lt;p&gt;Users can ask the AI to style forms directly, giving them full control over the appearance through natural language.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"&lt;em&gt;Make the form background light blue.&lt;/em&gt;"&lt;/li&gt;
&lt;li&gt;"&lt;em&gt;Change the text to dark gray and increase font size.&lt;/em&gt;"&lt;/li&gt;
&lt;li&gt;"&lt;em&gt;Add a green border with rounded corners.&lt;/em&gt;"&lt;/li&gt;
&lt;li&gt;"&lt;em&gt;Make the form more compact with less padding.&lt;/em&gt;"&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;For a deeper understanding of the application's design and how to add more features, view the &lt;a href="https://github.com/nadinev6/FieldCraft/blob/46b17de8841711557350837b814a118861fc18b3/BestPractices.md" rel="noopener noreferrer"&gt;Best Practices&lt;/a&gt; here.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;FieldCraft was built for the &lt;strong&gt;TamboHack: For Your UI Only&lt;/strong&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ui</category>
      <category>opensource</category>
      <category>ai</category>
      <category>react</category>
    </item>
    <item>
      <title>Dropship AI Agent</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Thu, 28 Aug 2025 12:50:35 +0000</pubDate>
      <link>https://dev.to/nadinev/dropship-ai-agent-47gi</link>
      <guid>https://dev.to/nadinev/dropship-ai-agent-47gi</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/brightdata-n8n-2025-08-13"&gt;AI Agents Challenge powered by n8n and Bright Data&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;The Dropship AI Agent is a dropshipping intelligence solution: an unstoppable workflow that automates product research and uncovers winning products. It uses two powerful tools, n8n and Bright Data, to create a dynamic, real-time AI assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/XtfmT-xeD-k"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  n8n Workflow
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Product idea → n8n via a webhook:&lt;/strong&gt; A user's product idea is passed to n8n via a webhook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bright Data → scrape product data:&lt;/strong&gt; Bright Data scrapes product data from a wholesale website.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI agent analyses data → generates a recommendation:&lt;/strong&gt; An AI agent analyses the data and generates a structured recommendation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final, cleaned data is sent back to the application.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/nadinev6" rel="noopener noreferrer"&gt;
        nadinev6
      &lt;/a&gt; / &lt;a href="https://github.com/nadinev6/dropship" rel="noopener noreferrer"&gt;
        dropship
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Dropship is an AI-powered product research app that uses an n8n + Bright Data agent to scrape wholesale marketplaces and recommend winning dropshipping products from verified suppliers.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Dropship AI Agent&lt;/h1&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Overview&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;This is a web application designed to help users discover wholesale products with AI-powered recommendations from verified suppliers. It features a robust search, product comparison, and direct supplier inquiry capabilities, all powered by n8n workflow automation.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-Powered Product Search&lt;/strong&gt;: Find products based on your search queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Product Comparison&lt;/strong&gt;: Compare up to 3 products side-by-side on key attributes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct Supplier Inquiry&lt;/strong&gt;: Email suppliers directly from the app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Favorite Products&lt;/strong&gt;: Save products to a favorites list for easy access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive Design&lt;/strong&gt;: Optimized for various screen sizes.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Technologies Used&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;React&lt;/strong&gt;: Frontend library for building user interfaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vite&lt;/strong&gt;: Fast development build tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind CSS&lt;/strong&gt;: Utility-first CSS framework for styling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lucide React&lt;/strong&gt;: Icon library.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;n8n&lt;/strong&gt;: Workflow automation for product search and email sending.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Getting Started&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Follow these steps to get the project up and running on your local machine.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Prerequisites&lt;/h3&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/nadinev6/dropship" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🧠 Stage 1: Product Discovery with the AI Agent
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhook → Bright Data:&lt;/strong&gt; Pass the &lt;code&gt;productIdea&lt;/code&gt; parameter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The process begins with the search input from a user, who provides a product idea.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;n8n Workflow&lt;/strong&gt;: This workflow is triggered by the user's search item. Once a specific product idea is identified, the n8n workflow transitions to the data collection phase.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"productIdea"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"keyword"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
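&lt;p&gt;Triggering the workflow amounts to posting that payload to the webhook. A minimal sketch (the URL is a placeholder, not the real endpoint):&lt;/p&gt;

```python
import json

N8N_WEBHOOK_URL = "https://example.com/webhook/dropship"  # placeholder URL

def build_webhook_payload(product_idea: str) -> str:
    # JSON body the frontend posts to the n8n webhook trigger node.
    return json.dumps({"productIdea": product_idea})

body = build_webhook_payload("solar phone charger")
# e.g. requests.post(N8N_WEBHOOK_URL, data=body,
#                    headers={"Content-Type": "application/json"})
```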






&lt;h3&gt;
  
  
  📈 Stage 2: Bright Data Verified Node
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bright Data → AI Agent:&lt;/strong&gt; Pass scraped supplier data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bright Data unlocks the product catalogue and searches for 3-5 potential product ideas based on the user's input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic Web Scraping&lt;/strong&gt;: The workflow uses a Bright Data unlocker to perform a real-time, dynamic search on the Alibaba wholesale website.&lt;/p&gt;

&lt;p&gt;The page data is then parsed to extract a rich set of information using the following selectors (where available):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;productUrl, imageUrl, title, priceText, originalPriceText, discountText, moq, estDelivery, sales, companyName, isVerified, supplierYears, ratingValue, ratingCount&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I created a product data model based on the n8n workflow output, then parsed the raw HTML from the product container to extract and clean each selector. This cleaned data was used to rebuild the product cards on the frontend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;interface&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Product&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;id?:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;title:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;description:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;wholesalePriceRange:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;moq:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;supplierInfo:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;shippingInformation:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;productSpecifications:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;productRatings:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;number;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;productReviews:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;supplierRatings:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;number;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;supplierReviews:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string;&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;marketDemand:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'High'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Medium'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Low'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Unknown';&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;images:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="err"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
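&lt;p&gt;As a taste of the cleaning step, a scraped &lt;code&gt;priceText&lt;/code&gt; string can be normalized into a numeric range. The pattern is an assumption about Alibaba's price formatting, not the exact code used:&lt;/p&gt;

```python
import re

def clean_price_range(price_text: str) -> tuple:
    # Pull every decimal number out of strings like "US$2.30 - US$4.50"
    # and return (low, high); returns (0.0, 0.0) when nothing matches.
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", price_text)]
    if not numbers:
        return (0.0, 0.0)
    return (min(numbers), max(numbers))

low, high = clean_price_range("US$2.30 - US$4.50")
```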






&lt;h3&gt;
  
  
  📊 Stage 3: Analysis, Filtering, and Final Recommendation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Agent → Response Webhook:&lt;/strong&gt; Send analysed results back.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AI Agent Node&lt;/strong&gt;: An n8n LangChain agent node transforms the raw extracted data. It is instructed to act as a data analyst. It sorts and filters the suppliers based on key criteria (lowest price, highest rating, low MOQ if available) and performs sentiment analysis on supplier reviews to identify the top recommendations in an array.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You are an expert dropshipping data analyst. Analyse the following search results and provide product recommendations...&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
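&lt;p&gt;The sorting criteria described above (lowest price first, ties broken by highest rating) can be sketched with a plain sort; the dictionary keys are assumptions about the cleaned data model:&lt;/p&gt;

```python
def rank_suppliers(items: list) -> list:
    # Sort ascending by price, then descending by rating; negating the
    # rating lets one ascending sort handle both criteria.
    return sorted(items, key=lambda item: (item["price"], -item["rating"]))

top = rank_suppliers([
    {"name": "A", "price": 4.5, "rating": 4.8},
    {"name": "B", "price": 2.3, "rating": 4.1},
    {"name": "C", "price": 2.3, "rating": 4.6},
])
# top[0] is supplier C: same lowest price as B, but a higher rating.
```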

&lt;p&gt;&lt;strong&gt;Product Data Model&lt;/strong&gt;: The AI agent's analysis is then sent back to the application frontend in a structured format. It renders the data into a familiar card for each item, passing the corresponding data as props that mimic the product cards on the target website.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"recommendations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Product title from scraped data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generated 2-3 sentence product description based on title"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"wholesalePriceRange"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Exact price range from scraped data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"moq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MOQ from scraped data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"supplierInfo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Company name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"supplierCredibility"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"estimated credibility assessment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"shippingInformation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Delivery time from scraped data + shipping notes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"productSpecifications"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generated basic specs based on product title/category"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"productRatings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;4.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Estimated&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;rating&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;3.5-4.8&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;range)&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"productReviews"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generated realistic review summary"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"supplierRatings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;4.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Estimated&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;supplier&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;rating&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;3.0-4.7&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;range)&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"supplierReviews"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generated supplier review summary"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"marketDemand"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"High|Medium|Low"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Based&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;product&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;category&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;analysis&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"images"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"imageUrl from scraped data"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"analysis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Overall market analysis and selection rationale"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"recommendation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Final recommendation with reasoning"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I did not add memory to my AI Agent. Instead, I created a custom hook for managing &lt;code&gt;localStorage&lt;/code&gt; to build a history thread that persists across browser refreshes. That way, I can return to the application and find my favourited products without storing them on a server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📧 Dynamic Email Sending&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszcenef1he05diqw4s80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszcenef1he05diqw4s80.png" alt="Email Webhook"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This 3-node workflow allows the user to contact a seller without leaving the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello,

I am interested in your product: [Product Title]

Price Range: [Product Wholesale Price Range]
MOQ: [Product MOQ]

Please provide more details about:
- Availability
- Shipping options
- Payment terms
- Sample availability

Best regards
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is no need for an agent to perform this task, as the details are taken directly from the product model and this general email template is used.&lt;/p&gt;
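&lt;p&gt;Filling the template is plain string formatting from the product model; the helper name is ours, and the keys follow the product data model shown earlier:&lt;/p&gt;

```python
def render_inquiry(product: dict) -> str:
    # Substitute product-model fields into the static email template.
    return (
        "Hello,\n\n"
        f"I am interested in your product: {product['title']}\n\n"
        f"Price Range: {product['wholesalePriceRange']}\n"
        f"MOQ: {product['moq']}\n\n"
        "Please provide more details about:\n"
        "- Availability\n"
        "- Shipping options\n"
        "- Payment terms\n"
        "- Sample availability\n\n"
        "Best regards"
    )

email_body = render_inquiry({
    "title": "Solar Phone Charger",
    "wholesalePriceRange": "US$2.30 - US$4.50",
    "moq": "50 pieces",
})
```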




&lt;h2&gt;
  
  
  Journey: A Cat and Mouse Game 🐾
&lt;/h2&gt;

&lt;p&gt;Initially, I could access Alibaba's products without a problem, but the detectors found me! 🕵️‍♀️ I easily resolved the CORS issue in n8n and the initial anti-bot detection.&lt;/p&gt;

&lt;p&gt;Adding delays helped prevent rate limits, but this was only a temporary fix. ⏳ Soon, Alibaba changed its CSS structure, which broke my selectors. I could update my selectors, but then Alibaba updated its robots.txt file, which my agent couldn't override. &lt;/p&gt;

&lt;p&gt;I then targeted the AliExpress catalogue, but all I got in return was a standard HTML error page.🛡️ Next, I pivoted to using Google’s cached version, hoping for a kinder reception but alas, Google refused entry❌.&lt;/p&gt;




&lt;p&gt;It seems my dropshipping agent is more of a digital detective, spending its days figuring out &lt;em&gt;why&lt;/em&gt; it can't dropship, rather than &lt;em&gt;what&lt;/em&gt; to dropship! Perhaps I should rename it the "Web Blocker Diagnostic Agent." 🤷‍♀️&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>n8nbrightdatachallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Lexis Link</title>
      <dc:creator>Nadine </dc:creator>
      <pubDate>Sun, 10 Aug 2025 19:40:20 +0000</pubDate>
      <link>https://dev.to/nadinev/lexis-link-185l</link>
      <guid>https://dev.to/nadinev/lexis-link-185l</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/redis-2025-07-23"&gt;Redis AI Challenge&lt;/a&gt;: Real-Time AI Innovators&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Ever wondered which sources your AI agent is actually using to answer questions?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lexis Link&lt;/strong&gt;: Build your knowledge base, then search using natural language. Newly uploaded content is immediately available for querying.&lt;/p&gt;

&lt;p&gt;It creates a semantically searchable knowledge base that enables AI systems to provide accurate, traceable, and citable responses with real-time optimization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/QmprHmTCJhw"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Flow:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Content Upload → Embedding → Redis Index → Search → Gap Detection&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://lexis-link.vercel.app/" rel="noopener noreferrer"&gt;https://lexis-link.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Frontend deployed on Vercel, backend running locally with Redis Stack&lt;/em&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  How I Used Redis Stack
&lt;/h2&gt;

&lt;p&gt;I migrated from FAISS to Redis Stack, transforming the project from a batch-processing pipeline into a real-time, dynamic application.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;FAISS&lt;/th&gt;
&lt;th&gt;Redis Stack&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Real-time Updates&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Requires index rebuild&lt;/td&gt;
&lt;td&gt;✅ Instant updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Persistence&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;File-based, manual saves&lt;/td&gt;
&lt;td&gt;✅ Automatic persistence&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Production Ready&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Research/development&lt;/td&gt;
&lt;td&gt;✅ Excellent for production&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Confidence Tracking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manual implementation&lt;/td&gt;
&lt;td&gt;✅ Built-in with Sets&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
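
&lt;p&gt;The &lt;em&gt;instant updates&lt;/em&gt; row is the key difference in practice: with Redis, making a new chunk searchable is a single &lt;code&gt;HSET&lt;/code&gt;, with the embedding packed as float32 bytes. A minimal sketch (the &lt;code&gt;doc:&lt;/code&gt; key prefix and field names are assumptions, not necessarily Lexis Link's exact ones):&lt;/p&gt;

```python
import array

def pack_vector(vec):
    # float32 bytes (little-endian on typical platforms), the layout a
    # FLAT/FLOAT32 vector field expects
    return array.array("f", vec).tobytes()

def doc_mapping(chunk_id, content, vector, **meta):
    # Build the key and field mapping for one chunk; storing it with
    # redis_client.hset(key, mapping=fields) makes it searchable at
    # once, with no index rebuild.
    fields = {"content": content, "vector": pack_vector(vector)}
    fields.update(meta)
    return f"doc:{chunk_id}", fields
```

&lt;p&gt;Because RediSearch indexes hashes as they are written, the chunk is queryable the moment the &lt;code&gt;HSET&lt;/code&gt; returns.&lt;/p&gt;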


&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Redis Search Index with Rich Metadata:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
   &lt;span class="nc"&gt;TextField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nc"&gt;TextField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;author&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nc"&gt;TextField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="nc"&gt;TextField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;publication_year&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nc"&gt;TextField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;page&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="nc"&gt;NumericField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chunk_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nc"&gt;NumericField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;total_chunks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="nc"&gt;VectorField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vector&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FLAT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TYPE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FLOAT32&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DIM&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;VECTOR_DIMENSION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
       &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DISTANCE_METRIC&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;COSINE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
   &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
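
&lt;p&gt;For readers who prefer the raw command, the schema above corresponds to a single &lt;code&gt;FT.CREATE&lt;/code&gt; call. Here is a sketch that builds the equivalent argument list (the index name &lt;code&gt;lexis&lt;/code&gt;, the &lt;code&gt;doc:&lt;/code&gt; prefix, and &lt;code&gt;dim=384&lt;/code&gt; are assumptions); it can be sent with &lt;code&gt;redis_client.execute_command(*ft_create_args())&lt;/code&gt;:&lt;/p&gt;

```python
def ft_create_args(index="lexis", prefix="doc:", dim=384):
    # Raw FT.CREATE arguments mirroring the redis-py schema above.
    # dim must match the embedding model's output size.
    # The "6" after FLAT counts the six attribute arguments that follow.
    return [
        "FT.CREATE", index, "ON", "HASH", "PREFIX", "1", prefix, "SCHEMA",
        "content", "TEXT", "author", "TEXT", "title", "TEXT",
        "publication_year", "TEXT", "page", "TEXT",
        "chunk_index", "NUMERIC", "total_chunks", "NUMERIC",
        "vector", "VECTOR", "FLAT", "6",
        "TYPE", "FLOAT32", "DIM", str(dim), "DISTANCE_METRIC", "COSINE",
    ]
```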






&lt;h3&gt;
  
  
  🚀 RAG Optimization with Knowledge Gap Detection
&lt;/h3&gt;

&lt;p&gt;Automatically identifies content gaps using confidence thresholds (low-confidence queries are stored in a Redis Set for later review).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Record knowledge gap if confidence is low
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;top_confidence&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;CONFIDENCE_THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;redis_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sadd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;REDIS_KNOWLEDGE_GAPS_SET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;📝 Recorded knowledge gap: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; (confidence: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;top_confidence&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
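
&lt;p&gt;Because the gaps live in a plain Redis Set, the later review is a single &lt;code&gt;SMEMBERS&lt;/code&gt; read. A sketch (the set name is an assumption; any client exposing &lt;code&gt;smembers&lt;/code&gt; works):&lt;/p&gt;

```python
def pending_gaps(redis_client, key="knowledge_gaps"):
    # Return every low-confidence query recorded so far, decoded and
    # sorted for stable review output. SADD deduplicates for free, so
    # repeated misses of the same query appear only once.
    members = redis_client.smembers(key)
    return sorted(m.decode("utf-8") if isinstance(m, bytes) else m
                  for m in members)
```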



&lt;h3&gt;
  
  
  🚀 Search Performance
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Caching Strategy for Optimising Performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳ Embedding generation is the main bottleneck, especially for complex queries. Caching solves this: the speedup from Redis caching is what makes repeat queries feel responsive.&lt;br&gt;
&lt;/p&gt;
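
&lt;p&gt;A minimal get-or-compute sketch of that cache (the key scheme and names are assumptions; &lt;code&gt;cache&lt;/code&gt; is any client with &lt;code&gt;get&lt;/code&gt;/&lt;code&gt;set&lt;/code&gt;, such as a Redis client):&lt;/p&gt;

```python
import hashlib
import json

def cached_embedding(query, embed_fn, cache):
    # Hash the query into a stable cache key so arbitrary text is safe
    # to use as a Redis key.
    key = "emb:" + hashlib.sha256(query.encode("utf-8")).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        # Cache hit: skip the slow embedding call entirely.
        return json.loads(hit)
    vector = embed_fn(query)  # the expensive step we want to avoid repeating
    cache.set(key, json.dumps(vector))  # with Redis, add ex=... for a TTL
    return vector
```

&lt;p&gt;That caching is why the repeat query in the log below comes back faster than the first, cold one.&lt;/p&gt;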

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🚀 SEARCH PERFORMANCE: 165.6ms for query: 'freedom of speech'
INFO:werkzeug:127.0.0.1 - - [10/Aug/2025 20:35:22] "POST /semantic-search
🚀 SEARCH PERFORMANCE: 47.1ms for query: 'freedom of speech'
INFO:werkzeug:127.0.0.1 - - [10/Aug/2025 20:44:05] "POST /semantic-search HTTP/1.1" 200 - 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Average search time across 20 queries: 60.78ms&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✔&lt;strong&gt;Real-time Search&lt;/strong&gt;: Sub-100ms average semantic search on newly uploaded content&lt;/li&gt;
&lt;li&gt;✔&lt;strong&gt;Source Attribution&lt;/strong&gt;: Complete citation tracking with page-level accuracy&lt;/li&gt;
&lt;li&gt;✔&lt;strong&gt;Self-Optimization&lt;/strong&gt;: Automatic knowledge gap recommendations for content improvement&lt;/li&gt;
&lt;li&gt;✔&lt;strong&gt;Production Scale&lt;/strong&gt;: Distributed, clusterable Redis architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: Redis transforms static knowledge bases into dynamic, self-improving AI systems.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;📚Inspired by the need to query and optimise structured, citable knowledge bases for AI agents.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>redischallenge</category>
      <category>devchallenge</category>
      <category>database</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
