<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tanDivina</title>
    <description>The latest articles on DEV Community by tanDivina (@tandivina).</description>
    <link>https://dev.to/tandivina</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3421035%2F92e960c4-d658-4c30-9bfd-8eb7c57cf37b.jpeg</url>
      <title>DEV Community: tanDivina</title>
      <link>https://dev.to/tandivina</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tandivina"/>
    <language>en</language>
    <item>
      <title>The Semantic Bridge: Building a Translator’s Dev Portfolio Entirely in Google AI Studio</title>
      <dc:creator>tanDivina</dc:creator>
      <pubDate>Mon, 02 Feb 2026 02:25:58 +0000</pubDate>
      <link>https://dev.to/tandivina/the-semantic-bridge-building-a-translators-dev-portfolio-entirely-in-google-ai-studio-njb</link>
      <guid>https://dev.to/tandivina/the-semantic-bridge-building-a-translators-dev-portfolio-entirely-in-google-ai-studio-njb</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;About Me&lt;/h2&gt;

&lt;p&gt;I am a &lt;strong&gt;Translator turned Solutions Architect&lt;/strong&gt; and 3x Hackathon Winner who views the world through the lens of linguistics and business logic. My journey into engineering wasn't traditional; it was fueled by a transition from intuitive design as a hobby to high-velocity systems architecture.&lt;/p&gt;

&lt;p&gt;In a world where language has become the primary interface for intelligence, my background in translation provides the unique linguistic edge required to bridge the gap between human intent and machine execution. My goal for this portfolio, MyDevfol.io, was to create a "Semantic Bridge"—a digital space that demonstrates how I map messy human business challenges into deterministic, high-impact AI solutions. I don't just build code; I architect intent.&lt;/p&gt;

&lt;p&gt;This entire project was built from the ground up within Google AI Studio. Leveraging the platform allowed me to move from concept to deployment in a single environment.&lt;/p&gt;

&lt;h2&gt;Portfolio&lt;/h2&gt;

&lt;p&gt;I used Google AI Studio's integrated deployment to host this on Google Cloud Run.&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://mydevfol-io-81532538916.us-west1.run.app/"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;h2&gt;How I Built It&lt;/h2&gt;

&lt;p&gt;The architecture of MyDevfol.io is designed to feel like a high-performance "Mission Control" center.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech Stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Framework: &lt;strong&gt;React with TypeScript&lt;/strong&gt; for type-safe system architecture.&lt;br&gt;
Aesthetics: Tailwind CSS for a dark-mode, &lt;strong&gt;holographic&lt;/strong&gt; design system and Framer Motion for cinematic, 3D-perspective transitions.&lt;br&gt;
Infrastructure: Deployed on Google Cloud Run to ensure low-latency performance and seamless scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google AI Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I leveraged &lt;strong&gt;Google AI Studio&lt;/strong&gt; to move from concept to deployment in record time. Deploying to &lt;strong&gt;Cloud Run&lt;/strong&gt; allowed me to maintain a complex, animation-heavy site while keeping initial load times incredibly fast.&lt;/p&gt;

&lt;h2&gt;What I'm Most Proud Of&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The Holographic Video Experience (A Personal Touch)&lt;/strong&gt;
I am most proud of the custom-built Video Viewport used in the project showcase. I deliberately chose video over static screenshots to add a personal touch to the technical "Mission Control" aesthetic. It humanizes the work by showing the "Deep Listening" and live interaction in real-time. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Semantic Bridge" Narrative&lt;/strong&gt;
Technically, I’m most proud of how the site communicates a complex professional shift. By focusing on "Linguistics" and "Intent", the portfolio serves as a live demonstration of prompt engineering as a core architectural skill.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Pushing Kiro to the Max: UX Schizophrenia &amp; MCP Integration</title>
      <dc:creator>tanDivina</dc:creator>
      <pubDate>Fri, 05 Dec 2025 06:08:16 +0000</pubDate>
      <link>https://dev.to/tandivina/pushing-kiro-to-the-max-ux-schizophrenia-mcp-integration-3no5</link>
      <guid>https://dev.to/tandivina/pushing-kiro-to-the-max-ux-schizophrenia-mcp-integration-3no5</guid>
      <description>&lt;p&gt;SEO is a nightmare. It’s a terrifying landscape of broken links, invisible meta tags, and cryptic Google algorithms.&lt;/p&gt;

&lt;p&gt;For the Kiroween Hackathon, I decided to lean into that fear.&lt;br&gt;
Meet RankBeacon SEO Exorcist. It’s a technical SEO analyzer that treats bugs like ghosts, 404s like zombies, and poor performance like a curse. But I didn't just want to make a "fun" toy; I wanted a tool that could actually fit into a serious developer's workflow. &lt;/p&gt;

&lt;p&gt;Here is how I built a dual-mode React interface and integrated the Model Context Protocol (MCP) to let AI fix the issues for me.&lt;/p&gt;

&lt;h2&gt;💀 The Concept: UX Schizophrenia&lt;/h2&gt;

&lt;p&gt;Most dev tools are either "fun side projects" or "boring enterprise SaaS." I wanted both.&lt;/p&gt;

&lt;p&gt;I built a Dual-Mode Interface that switches instantly with a hotkey (Ctrl+P):&lt;/p&gt;

&lt;p&gt;Costume Mode: A VHS horror aesthetic with CRT scanlines. Issues are labeled as "Ghosts" (Critical), "Zombies" (Warnings), and so on. The "Haunting Score" shows how haunted your site is: 0 means perfect (no ghosts) and 100 means extremely haunted. You see your current score (e.g., 65/100) and work to exorcise the demons down to 0. In Professional Mode, this is inverted into an "SEO Health Score", where 100 is perfect.&lt;/p&gt;

&lt;p&gt;Professional Mode: A clean, blue-and-white UI suitable for sending to a client without getting fired.&lt;/p&gt;
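&lt;p&gt;Under the hood, both numbers can come from a single severity-weighted score. Here is a minimal TypeScript sketch of that idea (the weights and severity names are illustrative assumptions, not RankBeacon's actual code):&lt;/p&gt;

```typescript
// Illustrative sketch: one severity-weighted number drives both display modes.
// The weights and severity names here are assumptions, not the shipped code.
const WEIGHTS = { ghost: 10, zombie: 5, poltergeist: 2 };

type Issue = { severity: keyof typeof WEIGHTS };

function hauntingScore(issues: Issue[]): number {
  // Sum the weights, capped at 100 (0 = no ghosts, 100 = extremely haunted).
  const raw = issues.reduce((sum, issue) => sum + WEIGHTS[issue.severity], 0);
  return Math.min(100, raw);
}

function seoHealthScore(issues: Issue[]): number {
  // Professional Mode simply inverts the same number, so 100 is perfect.
  return 100 - hauntingScore(issues);
}
```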

&lt;h3&gt;How we built the "Theme Engine"&lt;/h3&gt;

&lt;p&gt;Instead of just swapping CSS colors, we built a React context that swaps entire language dictionaries and component styles.&lt;/p&gt;
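&lt;p&gt;Stripped of the React plumbing, the dictionary swap looks roughly like this (a hypothetical sketch; the real dictionaries cover every UI string and the labels differ):&lt;/p&gt;

```typescript
// Sketch of the dictionary-swapping idea behind the theme engine.
// The labels below are illustrative assumptions.
const DICTIONARIES = {
  costume: { critical: "Ghosts", warning: "Zombies", scan: "Summon the Spirits" },
  professional: { critical: "Critical Issues", warning: "Warnings", scan: "Run Audit" },
};

type Mode = keyof typeof DICTIONARIES;
type LabelKey = keyof typeof DICTIONARIES.costume;

// Components read every string through this lookup instead of hardcoding copy.
function label(mode: Mode, key: LabelKey): string {
  return DICTIONARIES[mode][key];
}

// In the app, the current mode lives in a React context; a Ctrl+P keydown
// handler calls a toggle like this, re-rendering consumers with the new dictionary.
function toggleMode(mode: Mode): Mode {
  return mode === "costume" ? "professional" : "costume";
}
```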

&lt;h2&gt;🤖 The Real Magic: MCP Integration&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l2icmxdob2kcm6gvqpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l2icmxdob2kcm6gvqpd.png" alt=" " width="620" height="936"&gt;&lt;/a&gt;&lt;br&gt;
While the spooky UI is fun, the real technical innovation is under the hood. I integrated the Model Context Protocol (MCP).&lt;/p&gt;

&lt;p&gt;If you aren't familiar with it, MCP is a standard that allows AI assistants (like Kiro, Claude, or IDE agents) to connect to external tools. By making RankBeacon an MCP server, I turned it from a website into a native skill for my AI.&lt;/p&gt;

&lt;p&gt;Why does this matter?&lt;/p&gt;

&lt;p&gt;Usually, checking SEO involves deploying your site, copying the URL, pasting it into a tool, and reading a report.&lt;/p&gt;

&lt;p&gt;With RankBeacon's MCP integration, I can debug SEO inside my IDE while working on localhost.&lt;/p&gt;
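&lt;p&gt;For context, wiring a local MCP server into an agent usually comes down to a small JSON config entry. The snippet below is purely illustrative: the server name and Docker image are assumptions, and Kiro's exact config format may differ.&lt;/p&gt;

```json
{
  "mcpServers": {
    "rankbeacon": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "rankbeacon-mcp"]
    }
  }
}
```

&lt;p&gt;With an entry like this, the agent can discover the server's tools on startup and call the SEO scan directly from a chat prompt.&lt;/p&gt;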

&lt;h3&gt;Real-World Use Case: The "Fix It" Loop&lt;/h3&gt;

&lt;p&gt;Here is a real workflow I use with the tool:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Prompt&lt;br&gt;
I ask my IDE agent: "Analyze the SEO of &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Scan&lt;br&gt;
The AI hits my local RankBeacon MCP server. It runs a headless browser scrape (using Playwright) in the background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Result&lt;br&gt;
Instead of a generic AI guess, it returns hard data from the tool.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because the AI has context of my file system and the error report, I can just say: "Fix the meta description and add alt text to the hero image."&lt;/p&gt;

&lt;p&gt;The AI opens app/layout.tsx, inserts the correct meta tag, and patches the image component. No tab switching required.&lt;/p&gt;

&lt;h2&gt;🛠 The Tech Stack&lt;/h2&gt;

&lt;p&gt;Frontend: React, Tailwind CSS (with a custom "Creepster" font configuration).&lt;br&gt;
Backend: FastAPI (Python) for the analysis engine.&lt;br&gt;
Protocol: MCP (Model Context Protocol) via a TypeScript/Docker server.&lt;br&gt;
Crawling: Playwright for rendering JavaScript-heavy sites (SPAs).&lt;br&gt;
Accessibility: We worked hard to ensure that despite the "glitch" visual effects, the site remains WCAG 2.1 AA compliant (screen readers get the professional descriptions, not the spooky ones).&lt;/p&gt;

&lt;h2&gt;👻 Try It Yourself&lt;/h2&gt;

&lt;p&gt;RankBeacon SEO Exorcist is live. You can try the demo to see the "Dual Mode" switching in action, or check the repo to see how we handled the MCP implementation.&lt;/p&gt;

&lt;p&gt;Live Demo: &lt;a href="https://rankbeacon-exorcist.vercel.app" rel="noopener noreferrer"&gt;rankbeacon-exorcist.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fun Trick: Press Ctrl+P (or Cmd+P) on the site to toggle between "Business" and "Horror" modes instantly.&lt;/p&gt;

&lt;p&gt;I’d love to hear what you think about combining gamification with serious dev tools. Does the "Halloween" theme make the tedious work of SEO more bearable? Let me know in the comments! 👇&lt;/p&gt;

&lt;p&gt;Built with #kiro for the Kiroween Hackathon.&lt;/p&gt;

</description>
      <category>kiro</category>
      <category>kiroween</category>
      <category>mcp</category>
    </item>
    <item>
      <title>AI SEO Content Brief Agent powered by BrightData &amp; n8n</title>
      <dc:creator>tanDivina</dc:creator>
      <pubDate>Mon, 01 Sep 2025 03:31:44 +0000</pubDate>
      <link>https://dev.to/tandivina/ai-seo-content-brief-agent-powered-by-brightdata-n8n-1ndb</link>
      <guid>https://dev.to/tandivina/ai-seo-content-brief-agent-powered-by-brightdata-n8n-1ndb</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/brightdata-n8n-2025-08-13"&gt;AI Agents Challenge powered by n8n and Bright Data&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;In the new era of generative search, why not just use a model with built-in Google Search grounding to create an article? This was the first question I asked myself. &lt;/p&gt;

&lt;p&gt;The answer reveals a fundamental limitation: a grounded AI gives you a summary, but you never see the source material. It's a black box. You can't analyze the structure of the top-ranking pages, you can't see the exact "People Also Ask" questions, and you have no control over which sources the AI chooses to value.&lt;/p&gt;

&lt;p&gt;To solve this, I built the &lt;strong&gt;AI SEO Content Brief Agent&lt;/strong&gt;. This is not a chatbot; it's a complete, automated intelligence pipeline that transforms the messy, real-time web into a high-value, strategic asset.&lt;/p&gt;

&lt;p&gt;This agent solves a critical problem for content creators: it provides not just an answer, but a data-driven blueprint for how to create content that wins. It automates the work of an expert SEO strategist who needs to deconstruct the competition, not just summarize them.&lt;/p&gt;

&lt;p&gt;The process is a powerful fusion of live data and specialized AI agents, and it's what makes this approach superior:&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Request:&lt;/u&gt; A user visits the agent's webpage (rankbeacon.dev) and enters a competitive keyword.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Real-Time Data Acquisition (The Crucial Difference):&lt;/u&gt; The n8n workflow triggers. Instead of asking an AI for a summary, it uses Bright Data's Web Unlocker to fetch the raw, complete HTML of the live Google search results page. This provides the unprocessed, unbiased source material that grounding completely hides.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Specialized Analysis (The "Analyst Agent"):&lt;/u&gt; The first AI agent acts as a high-speed parser. Its only job is to deconstruct the raw HTML and extract a clean, structured JSON analysis. It identifies the top-ranking competitor URLs, the exact wording of questions users are asking, and the recurring themes in the titles and descriptions. This structural analysis is impossible with a simple grounded prompt.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Strategic Synthesis (The "Strategist Agent"):&lt;/u&gt; A second, distinct AI agent receives this clean, structured data. It thinks like a senior content strategist, using the real-world data to write a creative and comprehensive content brief in Markdown. It doesn't guess what a good title is; it suggests titles based on what's already performing well.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Professional Delivery:&lt;/u&gt; The final brief, optimized for both human readability and AI synthesis, is delivered directly to the user's email, ready for a writer to create content with a genuine, data-driven competitive advantage. Alternatively, adding an AI Writer Agent and a WordPress node to the n8n workflow would let the AI write and publish the article itself, turning the pipeline into a full AI SEO Content Generator.&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;You can try the live tool yourself on the official project webpage:&lt;/p&gt;

&lt;p&gt;➡️ Live Tool: &lt;a href="https://www.rankbeacon.dev/content-brief-agent.html" rel="noopener noreferrer"&gt;https://www.rankbeacon.dev/content-brief-agent.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have also recorded a short demo video that showcases the complete end-to-end process, from entering a keyword on the webpage to receiving the final, formatted content brief in my email inbox.&lt;/p&gt;

&lt;p&gt;➡️ Demo Video: &lt;a href="https://youtu.be/jxnbHURAkIE" rel="noopener noreferrer"&gt;https://youtu.be/jxnbHURAkIE&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;n8n Workflow&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/tanDivina/1030962215cc5535ac28db72c08d4461" rel="noopener noreferrer"&gt;https://gist.github.com/tanDivina/1030962215cc5535ac28db72c08d4461&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Technical Implementation&lt;/h2&gt;

&lt;p&gt;My agent is architected as an "AI Agent Chain" in n8n. This approach breaks a complex problem into smaller, specialized tasks, leading to more reliable and higher-quality results than a single, monolithic prompt.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;System Instructions (Prompts):&lt;/u&gt; The workflow uses two distinct agents. The first, the "Analyst Agent," is prompted to act as a technical parser, focusing solely on extracting a structured JSON from raw HTML. The second, the "Strategist Agent," is prompted to act as a creative content strategist, taking the clean JSON and generating a human-readable brief in Markdown.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Model Choice:&lt;/u&gt; The workflow is powered by Google Gemini 2.5 Flash Lite. It is designed to be model-agnostic, but the two-agent structure allows for optimization. For instance, a faster, cheaper model can be used for the technical parsing task, while a more powerful, creative model like Gemini 2.5 Pro could be used for the final content generation if desired.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Memory:&lt;/u&gt; The agent is stateless and requires no memory. Each run is a discrete, self-contained task triggered by the user, making it efficient and scalable.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Tools:&lt;/u&gt; The core tools in the n8n workflow are:&lt;/p&gt;

&lt;p&gt;Webhook: receives requests from the webpage.&lt;br&gt;
Bright Data: data acquisition.&lt;br&gt;
AI Agent (x2): the Analyst and Strategist agents, both using Gemini 2.5 Flash Lite.&lt;br&gt;
Code (x2): the most critical part of my solution. I used two separate Code nodes to build a resilient data pipeline: the first reconstructs the raw HTML from the Bright Data node's unique output format, and the second uses a regular expression to reliably parse the JSON from the Analyst Agent's text response, making the workflow robust against AI hallucinations or conversational text.&lt;br&gt;
Markdown: converts the brief to HTML.&lt;br&gt;
Gmail: final delivery.&lt;/p&gt;
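&lt;p&gt;The regex trick in that second Code node can be sketched in a few lines of TypeScript (a hypothetical reimplementation of the idea, not the node's exact code):&lt;/p&gt;

```typescript
// Hypothetical sketch of the JSON-parsing Code node's core idea:
// pull the first JSON object out of a model response that may include prose.
function extractJson(aiResponse: string): unknown {
  // Greedy match from the first "{" to the last "}" survives conversational
  // preambles, trailing chatter, and markdown fences around the JSON.
  const match = aiResponse.match(/\{[\s\S]*\}/);
  if (!match) {
    throw new Error("No JSON object found in AI response");
  }
  return JSON.parse(match[0]);
}
```

&lt;p&gt;Matching greedily from the first opening brace to the last closing brace tolerates exactly the failure mode LLM responses tend to have: a valid payload wrapped in polite conversation.&lt;/p&gt;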

&lt;h3&gt;Bright Data Verified Node&lt;/h3&gt;

&lt;p&gt;My project's entire data acquisition strategy is built upon the Bright Data Verified Node. I specifically leveraged the Web Unlocker resource within the node. This was the critical component that allowed my agent to reliably bypass blocks, solve any potential CAPTCHAs, and access the raw, real-time HTML of the Google Search Results Page — a task that is notoriously difficult and would be impossible with standard HTTP requests. The Verified Node was the cornerstone that transformed my agent from a theoretical concept into a functional, unstoppable workflow.&lt;/p&gt;

&lt;h2&gt;Journey&lt;/h2&gt;

&lt;p&gt;As someone completely new to n8n, this challenge was an incredible, hands-on learning experience that went from basic node connections to advanced, real-world problem-solving. My journey was defined by building a truly resilient data pipeline, and the most important lesson I learned was how the quality and nature of the input data dramatically affects an AI agent's performance.&lt;/p&gt;

&lt;p&gt;I discovered this through two very different test cases:&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Test Case 1:&lt;/u&gt; "Best &lt;em&gt;Chocolate&lt;/em&gt; Tours Panama" (The Success Case)&lt;br&gt;
When I used this keyword, my workflow performed flawlessly from start to finish. The reason is that the search results for this topic contain strong, coherent signals. Words like "tour," "cacao," "farm," and "experience" are all thematically linked. The SEO Analyst Agent received clear, unambiguous HTML, and was able to confidently extract the correct topics and entities, leading to a perfect, on-topic content brief.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Test Case 2:&lt;/u&gt; "&lt;em&gt;Nano Banana&lt;/em&gt; Image Model" (The Critical Failure)&lt;br&gt;
This is where I encountered the most fascinating challenge. Even though Bright Data was successfully scraping the correct search results page, my AI agents were producing a completely incorrect brief about actual bananas. This wasn't a simple bug; it was a classic case of AI Context Collapse.&lt;/p&gt;

&lt;p&gt;The term "Nano Banana" contains two extremely powerful, common words ("nano" and "banana") that created conflicting signals for the AI. In its vast training data, the signal for "fruit" was stronger than the surrounding technical context of "image model" and "AI." The agent latched onto the stronger, incorrect signal and hallucinated a plausible-sounding but completely wrong analysis.&lt;/p&gt;

&lt;p&gt;The Solution: This discovery was my biggest "aha!" moment. I realized I needed to make my SEO Analyst Agent more robust. I re-engineered its prompt to include explicit context, telling it what the original keyword was and instructing it to prioritize the overall technical theme of the search results over potentially confusing words within a brand name. This "pre-framing" of the AI's task solved the problem and made the agent resilient enough to handle these complex, ambiguous, real-world topics.&lt;/p&gt;
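&lt;p&gt;In code form, the pre-framing amounts to injecting the original keyword and the expected theme into the Analyst Agent's prompt. A hypothetical sketch (the wording is illustrative, not my production prompt):&lt;/p&gt;

```typescript
// Hypothetical sketch of the "pre-framed" Analyst Agent prompt.
// The wording is illustrative, not the production prompt.
function analystPrompt(keyword: string, serpHtml: string): string {
  return [
    "You are a technical SERP parser.",
    "The original search keyword was " + JSON.stringify(keyword) + ".",
    "Treat the keyword as a proper name: prioritize the overall technical",
    "theme of the search results over the literal meaning of individual",
    "words inside the keyword.",
    "Extract a structured JSON analysis from the raw HTML below.",
    "",
    serpHtml,
  ].join("\n");
}
```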

&lt;p&gt;This journey taught me that building an "unstoppable workflow" isn't just about connecting nodes; it's about deeply understanding the entire data lifecycle—from anticipating how a target website will respond, to handling unexpected data formats with custom code, and finally, to engineering AI prompts that are resilient enough to overcome the inherent quirks of language models. I'm extremely proud of the final, functional, and highly useful tool I was able to create.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>n8nbrightdatachallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
