The Neural Command Center: Building a Generative UI Portfolio with Gemini 1.5 & Python

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

About Me

I am Jacob Sandström, a Senior Full-Stack Engineer and AI Architect based in Sweden. I specialize in what I call "The Boring Stack" (Postgres, Docker, Python)—technology that is pragmatic, reliable, and scales without unnecessary complexity.

My philosophy is simple: Technology is a delivery mechanism for value.

For this challenge, I didn't want to build just another static resume site. I wanted to build a Digital Twin, an extension of myself that can discuss my work, visualize my architecture, and represent my personality even when I'm not online.

Portfolio

Here is the live deployment of my Neural Command Center.

How I Built It

The Concept: Generative UI

Most AI chatbots suffer from the "Wall of Text" problem. You ask about a project, and you get three paragraphs of text. I wanted to break that pattern.

I built a Generative UI system. When you ask my Digital Twin about my projects (e.g., "Show me your projects"), it doesn't just describe them. The AI fundamentally alters the interface, rendering interactive Holographic 3D Cards directly into the chat stream.

The Stack

I chose a robust, monolithic architecture to minimize latency and complexity (a minimal wiring sketch follows the list):

  • Brain: Google Gemini 1.5 Flash (via google-generativeai SDK).
  • Backend: Python (FastAPI). Handles the state, context, and GitHub API calls.
  • Frontend: React 19 + Vite + Framer Motion. Provides the "Sci-Fi" aesthetic and fluid animations.
  • Infrastructure: Docker container deployed on Google Cloud Run.
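
To make the flow concrete, here is a minimal wiring sketch of how the Brain and Backend fit together. The endpoint path, request model, and inline prompt are illustrative placeholders, not the exact production code (which also manages chat history and GitHub context):

    # app.py -- illustrative sketch, not the exact production code
    import os

    import google.generativeai as genai
    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse
    from pydantic import BaseModel

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    # The system instruction is what turns Gemini into the Digital Twin
    # (an excerpt of the real prompt is shown in the next section).
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction="Act as Jacob. Be professional but opinionated about simplicity.",
    )

    app = FastAPI()

    class ChatRequest(BaseModel):
        message: str

    @app.post("/chat")
    def chat(req: ChatRequest):
        # Relay Gemini's tokens to the frontend as they arrive. The real app
        # is stateful (chat history, GitHub data); this sketch is not.
        def token_stream():
            for chunk in model.generate_content(req.message, stream=True):
                if chunk.text:
                    yield chunk.text
        return StreamingResponse(token_stream(), media_type="text/plain")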

The "Magic" in the Code

The core innovation lies in how the Frontend and AI communicate through "Intent Tags".

  1. The System Instruction (app.py):
    I instructed Gemini to act as my Digital Twin but gave it a special capability. When discussing specific projects, it injects invisible tags into the response stream:

    # From app.py
    INSTRUCTIONS:
    1. Act as Jacob. Be professional but opinionated about simplicity.
    2. When discussing the project "MemVault", YOU MUST append `[SHOW_PROJECT: memvault]` to your response.
    
  2. The Parsing Engine (App.tsx):
    The frontend parses the streaming response in real time. If it detects these tags, it dynamically renders React components instead of text (a Python mirror of this tag contract is sketched after the list).

    // From App.tsx
    const projectTagRegex = /\[SHOW_PROJECT:\s*([a-zA-Z0-9_-]+)\]/gi;
    // Collect every tag present in the streamed text so far
    const matches = [...cleanContent.matchAll(projectTagRegex)];
    
    // If tags are found, extract the IDs and trigger the UI components
    if (matches.length > 0) {
      projectIds = matches.map(m => m[1]);
      cleanContent = cleanContent.replace(projectTagRegex, ''); // Strip the tags from the visible text
    }
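
For completeness, the same tag contract can be mirrored in Python. The helper below is a sketch (the function name and module layout are illustrative, not from the repo) that duplicates the frontend regex, which makes it easy to unit-test that the model's replies stay parseable:

    # Illustrative helper mirroring the App.tsx regex -- handy for unit tests
    import re

    PROJECT_TAG = re.compile(r"\[SHOW_PROJECT:\s*([a-zA-Z0-9_-]+)\]", re.IGNORECASE)

    def extract_project_tags(text: str) -> tuple[str, list[str]]:
        """Strip intent tags from the display text and return the project IDs."""
        ids = [match.lower() for match in PROJECT_TAG.findall(text)]
        clean = PROJECT_TAG.sub("", text).strip()
        return clean, ids

    clean, ids = extract_project_tags("Here is MemVault. [SHOW_PROJECT: memvault]")
    assert ids == ["memvault"] and clean == "Here is MemVault."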
    

Google Cloud Run Configuration

To participate in the challenge, I deployed the container with the specific contest label:


    gcloud run deploy portfolio \
      --source . \
      --region europe-north1 \
      --allow-unauthenticated \
      --labels dev-tutorial=devnewyear2026
