This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
About Me
My name is Darren, aka Dazbo. I am an Enterprise Cloud Architect and a huge fan of all things Google Cloud and Google AI. I enjoy spreading the word about how cool this technology is, for example by blogging on platforms like this.
I'm also a Google Developer Expert. But don't let that fool you - I don't write much code! Certainly not in my day job; so I spend a lot of time in the evenings and weekends learning, experimenting, building and blogging. I love blogging for a few reasons:
- It's part of my learning process; it helps me assimilate information. (And I'm a learning addict!)
- It's something I can come back to later. It helps me remember what I did, and why.
- I love to share knowledge.
Okay, now you know a little about me. Let's take a look at the portfolio site...
Portfolio
My shiny new portfolio application has been deployed to Cloud Run, as required by this challenge. But please note I have set the minimum instances parameter to 0, meaning that if there are no recent requests for the site, Cloud Run will have spun down to 0. This is good because it means I'm not paying for Cloud Run when it's not in use. But it also means that if it hasn't been used for a while, Cloud Run must "start up from cold", and this takes several seconds. (See more on how I'm mitigating the cold start problem later in this blog.)
Here's the embedded application:
Note that if you view this in a browser on your desktop, you'll see multiple tiles in each horizontal row. But the embedded / mobile view only shows a single tile per row.
A Few Quick Stats
I started this project on Saturday, Jan 17. It was mostly done by Sunday night, except for some tweaks. In short: the bulk of this application was created in one weekend. It's taken approximately 14 hours to design, document, build, test, and deploy this working application. That includes about three hours to write this blog.
Here are some stats:
The important takeaway here: there's no way I could have done this so quickly without massive contributions from my Google AI tools! (More on that in a minute.)
Application Specifics
Funnily enough, building a portfolio website has been on my todo list for a little while. But when this challenge popped up in my feed, I obviously had to move this work to the top of my list!
Broad Goals
The portfolio application should:
- Present a consolidated view of my blogs (from Medium and dev.to), my public GitHub repos, and my deployed applications.
- Provide a chatbot interface with my persona, such that users can ask questions about my professional capabilities or my portfolio.
More Detailed Requirements
Let's take a look at my specific requirements...
Functional
- Each portfolio "source" - i.e. blogs, GitHub repos, and applications - should be presented in the form of an interactive carousel on the UI.
- There should be a tool to ingest source content into the application. I.e. we should not need to upload content manually, or update databases manually.
- Ingestion should be idempotent.
- AI should be used to provide automatic summaries of ingested content.
- AI should be used to create a markdown version of source blogs. This will be used for future use cases:
- As a source for embeddings, which will provide our chatbot with RAG.
- To allow me to create markdown versions that can be used for cross-posting to blogging sites.
- The metadata and AI-generated summaries for the ingested content should be persisted in a database.
- The Chatbot should adhere to the "Dazbo" persona.
- The Chatbot should be informed by the database.
- The Chatbot should only discuss relevant topics.
- The application should implement SEO best practices to ensure public discoverability.
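The idempotent-ingestion requirement above could be met with a content-hash upsert: re-running the tool against unchanged content results in no writes. As a minimal sketch only (the real application persists to Firestore, whereas this uses a plain dict, and the function names are my own):

```python
import hashlib


def content_hash(text: str) -> str:
    """Stable fingerprint of a blog's content, used as the idempotency key."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def ingest(store: dict, url: str, text: str) -> bool:
    """Upsert a document keyed by URL; skip the write when content is unchanged.

    Returns True if the store was modified, False if the ingest was a no-op.
    """
    digest = content_hash(text)
    existing = store.get(url)
    if existing and existing["hash"] == digest:
        return False  # identical content already ingested: idempotent no-op
    store[url] = {"hash": digest, "text": text}
    return True
```

Running the ingest twice with the same content leaves the store untouched the second time, which is exactly the property the requirement asks for.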
Non-Functional / ASRs
- The Chatbot should defend against prompt-injection attacks.
- The application should be autoscaling and elastic, with a pay-as-you go paradigm.
- UI should be fast, responsive and support desktop and mobile views.
- Frontend should interact with backend via API.
- The API must implement rate limiting, with both "global" rate limiting and more aggressive rate limiting for calls to the LLM.
- Infrastructure resources (i.e. Cloud resources) should be deployable in a repeatable, automated way using Infrastructure-as-Code (IaC).
- Application changes should automatically be built, tested and deployed through a CI/CD pipeline.
- Human approval is required before deploying to live.
- Code quality is managed through enforced linting and formatting.
- Include safeguards to prevent unexpected Google Cloud costs.
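On the rate-limiting ASR above: the application itself uses slowapi (see the design decisions later), but the underlying idea can be sketched from scratch as a token bucket. This is an illustrative sketch, not the actual implementation; a second, smaller bucket in front of the LLM endpoint would give the "more aggressive" limit:

```python
import time


class TokenBucket:
    """Minimal in-memory rate limiter: `capacity` requests, refilled at `rate` per second."""

    def __init__(self, capacity: int, rate: float, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock  # injectable for testing
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Being in-memory, this only works because the service runs as (at most) one instance; a multi-instance deployment would need shared state such as Redis.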
Testing
- Unit tests should provide >80% coverage.
- Integration tests should be present for multiple use cases.
How I Built It
Here I'll give you an overview of:
- The development tools and processes I used
- The design decisions and rationale
- The architecture and tech stack
Also, if you want more details, I've included more hands-on information later in this blog. But for now, the quick version...
Tools and Development
Obviously, one important aspect of this challenge is that the portfolio site must be built using the help of Google AI tools. Fortunately, I've been embracing these tools for a little while now, and this challenge presented a great opportunity to describe how we can use some of these tools together in a holistic way. I'll explain why I prefer some tools for certain use cases.
| Tool | Brief Description |
|---|---|
| Google Antigravity | Google's agent-first next generation development environment. It combines the VS Code experience with deeply integrated agentic workflows. These workflows provide detailed implementation plans, and then provide evidence of the actions taken and functional outcomes. We can customise its behaviour, and provide it with reusable rules, workflows (multi-step tasks) and skills (lightweight on-demand context and knowledge). And we can give it access to tools through the addition of MCP servers. I have previously written blogs on using Antigravity, like this one, which talks about using Antigravity in the WSL environment. |
| Gemini CLI | Google's open-source, terminal-based AI agent, sharing much functionality and configuration with Gemini Code Assist. It is aware of your local OS environment and can execute terminal commands. It has multiple built-in tools, such as Google Search, web fetch, and the ability to read and write files. You can configure its behaviour and create repeatable commands. You can provide it with external tools (using MCP), and even package your repeatable commands and tools as sharable and easily deployable extensions. And there's already a huge ecosystem of extensions available. Unlike a simple chatbot, it can perform complex tasks like multi-file repo refactors and system troubleshooting, or even diagnose Google Cloud deployment issues through use of appropriate extensions. I've written several blogs about Gemini CLI, like this one which talks about running automated UI tests with Gemini CLI. |
| Gemini CLI Gcloud Extension | Powers-up interactions with Google Cloud. With this extension in place, Gemini doesn't just know about Google Cloud, but can actually navigate and manage it for us. It is able to run gcloud commands, and perform diagnosis and troubleshooting. For example, I can say "Why did my Cloud Run service fail to deploy?" Gemini will then find the logs, interrogate them, tell me what went wrong, and what I need to do to fix it! |
| Gemini CLI ADK-Docs Extension | This is an extension I created myself. The ADK is big and constantly evolving. Gemini's innate knowledge of ADK will always be stale, and this extension solves this problem by providing Gemini with guidance for how to retrieve the latest, most up-to-date and relevant ADK documentation. The quality, accuracy and usefulness of Gemini CLI's responses to ADK queries is massively improved. You can find my extension here, and you can see it is recommended on the official ADK documentation site, here. |
| Gemini CLI Conductor Extension | A powerful extension for Gemini CLI that introduces Context-Driven Development (CDD) to your terminal. It shifts AI interaction away from ephemeral, "forgetful" chats and into persistent, repo-based artifacts. By forcing a "measure twice, code once" philosophy, Conductor ensures the agent understands your project’s vision, guidelines, and tech stack before a single line of code is written. It excels at both "greenfield" and "brownfield" projects, acting less like a noob vibe agent and more like a senior engineer who respects your architectural standards and goals. The workflow is centered around Tracks — specific objectives that Conductor breaks down into formal specifications (spec.md) and implementation plans (plan.md). As it executes a plan, Conductor provides a high level of accountability: it tracks progress with granular task updates, creates and runs tests to verify each step, enforces human verification, and even manages your Git history by creating commits with detailed notes for every completed action. This creates a permanent, auditable trail of "how and why" changes were made, making your project context a managed, shareable asset. Check out a previous blog I've written about this extension, called Trying Out the New Conductor Extension in Gemini CLI — We’re Gonna Add Auth to Our Full Stack. |
| Gemini Code Assist on GitHub | Enterprise-grade integration that brings Gemini's power directly into your GitHub repos. Unlike simple "PR summarizers," this integration acts as an extremely skilled senior engineer that performs tasks like issues triage and code reviews. It performs these tasks with full awareness of the entire repo. And it is trivial to integrate this capability into your CI/CD workflows. You create a PR, and Gemini Code Assist automatically performs the in-depth review. It then makes recommendations, and categorises them based on importance and severity. For example, if you push code that introduces a prompt injection vulnerability, then the Gemini PR review will tell you, and suggest how you can fix it. You can then make your fixes, commit them, and then re-run the review by simply commenting on your PR with the instruction /gemini review. Cool, right? |
| Agent Starter Pack | An open source framework to speed up the process of making production-ready agentic solutions. It has a number of pre-canned ADK-based template solutions, and fast-tracks the setup of infrastructure, CI/CD, observability and security. |
| Nano Banana Pro | Insanely powerful image generation. I can't live without it! |
Great, so we've got all these AI tools. So how do we decide which to use for a particular scenario?
Here's my general approach...
Use Agent-Starter-Pack to Create Your Project Folder
Start with ASP to create your initial project workspace. It creates a folder with a working agent, Python pyproject.toml (to manage dependencies and testing configuration), initial README.md, initial .gitignore, a Makefile, Terraform IaC, CI/CD pipeline, and so much more.
For me, this saves quite a bit of time that would otherwise be spent in initial project creation.
Provide Reusable Context for Gemini
Create a project-specific GEMINI.md file. This provides persistent context to Gemini, and it is used by both Antigravity and Gemini CLI. Note that Agent-Starter-Pack creates this file for us, but I always just replace its contents with what I want.
Making Significant Changes
When I want to add a new feature or do something significant, I'm using Gemini CLI with the Conductor extension. In case you're wondering, I tend to run Gemini CLI from my editor's integrated terminal window.
When we first use Conductor for a given project, Conductor gathers loads of information about the project. For example, your goals, how you want the application to behave, the tech stack you want to use, and so on. This context is persisted and then available for every major task (which Conductor calls a "track") that you want to implement.
When we initiate a new track, Conductor asks a bunch of relevant context-aware questions. It allows free-form answers, but always provides some ideas of good answers that we can select from.
It always creates tests before implementing changes, and then re-runs those tests when the changes have been implemented.
Another VERY useful feature: it tracks each-and-every step by updating the track's plan.md file. This has two very powerful benefits:
- You can shutdown your machine, come back later, and continue where you left off.
- Because the state is persisted and automatically checked-in to GitHub, you can continue your development on another machine. I'm always swapping between my desktop and laptop, so I find this invaluable. This saves several minutes of context building with each development session.
Also, since my Gemini CLI is empowered with the gcloud extension and my ADK Docs extension, it has deep knowledge of how to work with Google Cloud and ADK. So, for example, if I have any Google Cloud deployment or IAM issues, I know Gemini CLI will help me diagnose and fix in no time at all.
For Simpler Changes
For less significant changes, I tend to use the Agent built-in to Antigravity. Sure, Antigravity still plans, executes and verifies. But if I don't need state to be persisted between sessions (or machines) and if I don't want the overhead of using test-driven development, I'll avoid using Conductor. Conductor is great. But it's much slower than just using the agent in Antigravity.
Code Reviews
Having Gemini Code Assist active as a CI/CD workflow in GitHub is incredibly useful. When I'm finished with my current branch (because I always use a separate branch for features) and I raise my PR, Gemini provides a comprehensive code review in the PR itself! I find this a very useful line of defence. It often picks up stuff that Gemini doesn't spot inside my development environment.
Documentation
I've maintained some key documents throughout, and the rules in my GEMINI.md help ensure they are kept current with each and every change that Gemini helps me with. E.g.
[previous content]
## Key Docs
- @README.md - repo overview and dev guidance
- @GEMINI.md - guidance for you, the agent
- @conductor/product.md - an overview of this portfolio site as a "product"
- @conductor/tech-stack.md - an overview of the tech stack
- @TODO.md - list of tasks to complete
- @Makefile - dev commands
- @docs/design-and-walkthrough.md - design and walkthrough, including design decisions and implementation
- @docs/testing.md - testing docs, including descriptions of all tests
- @deployment/README.md - deployment docs
## Rules
[previous rules]
- ALWAYS use the adk-docs-mcp tools to answer questions about building agents with ADK. If you can't use this MCP, you MUST alert me rather than falling back to what you know.
- Key docs should be updated as you make changes.
- Always include top-of-file docstrings in every Python file you create or edit. This should include a description of what the file does, why it exists, and how it works.
[Rest of this doc]
TODO.md
One thing I always create at the very start of a project is a TODO.md. Initially, the TODO.md I created for this project looked like this:
- [ ] Create README.md and GEMINI.md
- [ ] Create design-and-walkthrough doc
- [ ] Create testing doc
- [ ] Implement Conductor
- [ ] Add Gemini PR GitHub Actions
- [ ] Ensure Firestore in Terraform deployment; remove any Cloud SQL
- [ ] Add Google Cloud billing alerts and killswitch
- [ ] Establish how to ingest/point to source blogs, repos, etc
- [ ] Add GCS bucket for static assets
- [ ] Implement backend services
- [ ] Implement FastAPI
- [ ] Deploy with Terraform
- [ ] Create React UI with carousel for blogs, GitHub repos, applications, etc.
- [ ] Build container image and test locally.
- [ ] Deploy and test on Cloud Run.
- [ ] Add conversational agent using Dazbo persona. Store the persona as a Google Secret.
- [ ] Persona can answer portfolio questions and questions about me.
- [ ] Add rate limiting.
- [ ] Map to my domain name.
- [ ] UI experimentation / aesthetics.
- [ ] Add AI summarisation of ingested material.
- [ ] Add AI creation of markdown and generation of keywords.
- [ ] Add SEO.
- [ ] Implement RAG with Vector Search
It then grew as the project moved towards completion.
Design Decisions
I made several of these decisions up-front. In some cases, AI (particularly the Conductor extension) helped me make a selection from options. And in some cases, design decisions evolved and changed as the application was created.
Here's a summary of some of the more significant decisions:
| Decision | Rationale |
|---|---|
| Gemini for LLM | Google's best-in-class multimodal model with its famous 1M+ token context window. I'm using gemini-3-flash-preview at this time. Being fast is more important than powerful reasoning capabilities. (Not that Gemini 3 Flash can't do powerful reasoning!!) |
| ADK for agent framework | Provides a production-grade foundation for agent orchestration. Provides the ability to orchestrate across multiple agents, manage context and artifacts, provides agentic evaluation tools, and provides convenient developer tools for interacting with agents. |
| AI-Powered Summary Creation | Use Gemini to generate concise technical summaries from ingested blogs. Because why do these by hand? |
| AI-Powered Markdown Creation | Use Gemini to generate structured Markdown from raw blog HTML. Because Gemini is really good at this sort of thing! |
| Terraform for infrastructure deployment | Provides declarative Infrastructure as Code (IaC), allowing automated, repeatable, versioned deployment of my Google configuration, IAM and services. |
| Google Cloud Build for CI/CD | A fully managed, serverless CI/CD platform that integrates seamlessly with both Google Cloud services and with GitHub. So any changes pushed to GitHub automatically result in a new image build, automated testing, and deployment to Cloud Run. |
| Cloud Run for the Application | A fully managed serverless container runtime platform that scales to zero (which is very cost-effective) and handles autoscaling automatically. It also supports custom domains without the need for a Load Balancer. Note that to mitigate the "cold start" that results from me allowing Cloud Run to scale to 0, I'm using CPU boost to speed up startup. |
| Cloud Run Domain Mapping | Allows me to map my custom domain directly to my Cloud Run service, removing the need for a Load Balancer. |
| Cloud Run instances | Set max-instances to 1. I don't expect much demand, and want to limit the application's ability to scale out, to minimise cost. |
| Unified Container Image | Packaging the frontend, API, backend, and agent into a single container allows atomic deployments, and greatly simplifies the overall solution and deployment process. |
| Unified Origin Architecture | Serving React static assets directly from the FastAPI backend (acting as the origin) completely eliminates CORS complexity in production and simplifies cookie handling. |
| Firestore for the Database | A serverless, autoscaling, NoSQL document database chosen for its flexibility with semi-structured data (blogs, projects). Also, we can use it to store embeddings when we implement RAG later. It has a generous free tier which I don't expect to exceed. Consequently, for the relatively low demands of this application, it will be significantly cheaper than deploying, say, a Cloud SQL Postgres database, where we have to pay for the always-on infrastructure and the storage. |
| Cloud Storage (GCS) | Scalable, serverless, no-ops object storage, well-suited for storage of unstructured data. I will use it to store my small number of static assets, e.g. images used by the UI. |
| /api Prefix | Establishes a strict routing namespace: /api for backend services; all other routes fall back to the SPA (index.html). |
| ADK InMemorySessionService | Sessions are designed to be ephemeral and I have no need for any sort of session persistence or HA. An in-memory store offers the lowest possible latency and simplest implementation without needing external persistence like Redis. |
| FastAPI for backend | Chosen for its high-performance async capabilities, automatic OpenAPI documentation, and native Pydantic integration, ensuring strict type validation across the API surface. |
| In-Memory Rate Limiting | Implemented via slowapi to provide essential DoS protection and cost control for the LLM. At our current scale, this avoids the operational overhead of a dedicated Redis cluster. |
| uv for Package Management | Replaces both pip and poetry with a single, ultra-fast (Rust-based) tool for dependency resolution and environment management, ensuring deterministic builds. |
| Hybrid Ingestion for Medium | Medium has an RSS feed that exports blogs. However, the RSS feed only returns the last 10 blogs. To work around this limitation, I'm combining the RSS feed with the ability to read a Medium zip archive. |
| React for frontend | The industry standard for dynamic UIs. Its declarative component model efficiently handles complex states (like real-time chat and dynamic content filters) and benefits from a massive ecosystem. |
| Vite for frontend build | Offers instant Hot Module Replacement and optimized production builds using Javascript ES modules, significantly outperforming legacy Webpack-based tools in developer experience and build speed, while delivering efficiently to the client. |
| React 19 Native Metadata | Leverages built-in hoisting for <title> and <meta> tags, eliminating the need for external libraries like react-helmet. |
| Use budget alerts and my central "Killswitch" project | My Killswitch project automatically disconnects billing from a project, if that project exceeds its spend limit. I use this mechanism to keep a lid on my project spend and avoid unexpected cloud bills. |
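To illustrate the hybrid Medium ingestion decision from the table above: the 10-item RSS feed and the complete zip-archive export overlap, so items need to be merged and deduplicated. A minimal sketch of how that could work, under my own assumptions about URL normalisation (the real toolkit's logic may differ):

```python
def canonical(url: str) -> str:
    """Normalise a Medium URL so feed and archive variants compare equal."""
    return url.split("?")[0].rstrip("/").lower()


def merge_sources(rss_items: list[dict], archive_items: list[dict]) -> list[dict]:
    """Combine the (recent) RSS feed with the (complete) zip-archive export.

    Items are deduped by canonical URL; RSS wins on conflicts, since it is
    the fresher source.
    """
    merged: dict[str, dict] = {}
    for item in archive_items + rss_items:  # RSS processed last, so it overwrites
        merged[canonical(item["url"])] = item
    return list(merged.values())
```

Keying by canonical URL also keeps the overall ingestion idempotent: re-running the merge over the same sources produces the same set of documents.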
Deployment Architecture
This is covered in the design decisions, but to summarise and recap:
- The frontend, API and backend are deployed into a single container, hosted on Google Cloud Run.
- Ingested content, extracted metadata and AI-generated enrichment is stored in Firestore.
- Static assets are stored in Google Cloud Storage.
- Logging is sent to Google Cloud Logging.
Application Architecture
The application contains two fundamental components:
The containerised user-facing application, composed of:
- The React UI (frontend)
- The FastAPI layer
- The service-oriented async backend
- The Gemini agent chatbot
The Ingestion Toolkit - a CLI that allows me to load source material into the database. It is intended for use only by me, and not by end users. For this reason, it is not packaged into the container.
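The blog doesn't show the Ingestion Toolkit's actual interface, so purely as an illustration of the kind of CLI surface such a tool might expose (the subcommand names and flags here are my guesses, not the real tool's):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI surface for a hypothetical ingestion toolkit: `ingest <source> [--dry-run]`."""
    parser = argparse.ArgumentParser(prog="ingest", description="Load portfolio content.")
    parser.add_argument("source", choices=["medium", "devto", "github"],
                        help="Which content source to ingest.")
    parser.add_argument("--dry-run", action="store_true",
                        help="Parse and summarise without writing to the database.")
    return parser
```

Keeping this as a standalone script, outside the container, matches the decision above: only the operator can run it, and end users never see it.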
Recap
Well, that's it for the overview of my new portfolio application! I've made use of a bunch of Google AI tools to create it, and this has made it possible to build the application in a couple of days, rather than months!
In particular, the AI tools have:
- Helped me create the initial project scaffold quickly.
- Helped me create a full-stack portfolio application, with React/Vite frontend, API, Python backend and Gemini-based agentic components.
- Ensured continuous alignment to my goals and requirements, architectural standards and coding rules.
- Ensured that I've kept my documentation complete and up-to-date.
- Ensured that I have an extensive high-coverage testing suite, which is run as part of my CI/CD pipeline.
I should add that I have virtually zero knowledge of building frontends with React. Creating this responsive and cool looking UI by hand would have taken me a lot of effort, time and learning. Yeah - I love to learn. But there's only so many hours in a day, and sometimes I just want to leverage AI so I don't have to!
Well, that's it for the brief! But if you're interested in more detail, feel free to check out the Deep Dive below.
What's Next for the Application?
My ingestion tool creates AI summaries and stores the entire blogs in the database in markdown. The next obvious step is to take these summaries and markdown, and create embeddings from them. We can store these embeddings in the same Firestore database, index them, and then use them to allow my chatbot to perform RAG. That way, the chatbot will be able to have much more meaningful understanding of the details of my blogs and repos.
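The retrieval half of that future RAG step boils down to cosine similarity over embeddings. A toy sketch of the idea, using hand-rolled two-dimensional "embeddings" (in reality the vectors would come from a Gemini embedding model and be indexed in Firestore):

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_k(query: list[float], docs: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the ids of the k documents whose embeddings best match the query."""
    ranked = sorted(docs, key=lambda doc_id: cosine(query, docs[doc_id]), reverse=True)
    return ranked[:k]
```

The chatbot would embed the user's question, fetch the top-k matching blog chunks, and stuff them into the prompt as grounding context.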
What I'm Most Proud Of
From this portfolio experience?
I guess...
- That this isn't just a vibe solution. This is (I hope) well-architected, well-documented, production-ready, with clean, maintainable code.
- That I've managed to pull together (hopefully) a coherent story for how we can make use of a collection of Google AI tools; with clear guidance on which tools are suited for a particular use case. When there are so many tools, it can be hard to know which ones to use.
- That it looks so cool!
AI Comedy Moments
Whilst helping me build this portfolio, Gemini CLI found itself occasionally frustrated. Here's a couple of screenshots that made me laugh:
I am an absolute muppet
And then later, when it made the same mistake again:
I am going to smash my keyboard
It seems that Gemini's internal dialog is pretty similar to mine!!
Diving Deeper
Consider everything from here onwards as erm, bonus content.
I'll provide some more detail for a few steps in this journey. It's not going to be a complete walkthrough, but it should hopefully be useful if you want to replicate parts of the journey.
Let's go!
Create a Google Cloud Project
First we need a Google Cloud project to host our Cloud Run service. I went ahead and created this in the Google Cloud console.
Next we need to configure our gcloud:
export GOOGLE_CLOUD_PROJECT="my-project-name"
gcloud auth login --update-adc
gcloud config set project $GOOGLE_CLOUD_PROJECT
gcloud auth application-default set-quota-project $GOOGLE_CLOUD_PROJECT
Next let's enable a minimal set of APIs, in order to:
- Use Gemini API
- Make use of Gemini Cloud Assist
- Use Cloud Build to push images to Google Artifact Registry, and to deploy to Cloud Run
- Enable Secret Manager
Here's how:
gcloud services enable --project=$GOOGLE_CLOUD_PROJECT \
artifactregistry.googleapis.com \
cloudbuild.googleapis.com \
secretmanager.googleapis.com \
run.googleapis.com \
logging.googleapis.com \
aiplatform.googleapis.com \
serviceusage.googleapis.com \
storage.googleapis.com \
cloudtrace.googleapis.com \
geminicloudassist.googleapis.com
Create an API Key
I'm going to use a Gemini API key in order to use the Gemini API. An easy way to do this is to use Google AI Studio.
- Open https://aistudio.google.com/api-keys.
- From there we import our newly created Google Cloud project. Then, create a new Gemini API key, connected to that project.
Create Your Local Development Workspace
Now we create a local project folder. A really quick way to get up and running is with the Agent Starter Pack. This creates a project for us, deploys a basic agent from a template, sets up initial tests, creates Jupyter notebooks for experimentation, sets up CI/CD, and so on:
uvx agent-starter-pack create dazbo_portfolio -a adk
The agent-starter-pack prompts for input at various stages. These are the choices I selected:
| Consideration | Selected | Rationale |
|---|---|---|
| Agent template | adk | Provides an out-of-the-box Gemini chat agent, using Google ADK |
| Deployment target | Google Cloud Run | Fully-managed, serverless, no-ops, elastic container hosting; and required for this challenge |
| Session type | in_memory | I have no need to persist sessions; this option is the simplest to implement, requiring no additional services |
| CI/CD | Google Cloud Build | Out-of-the-box CI/CD pipeline |
| Region | europe-west1 | Close to home so low latency; also, I know I won't have any Cloud Build quota issues with this region |
Environment Setup
I like to create a .env.template. This includes the environment variables I want, and it's safe to check-in.
export REPO="repo-name"
export SERVICE_NAME="service-name"
export GOOGLE_CLOUD_PROJECT="<your-gcp-project>"
export GOOGLE_CLOUD_REGION="<your-region>" # For Google Cloud resources
export GOOGLE_CLOUD_LOCATION="global" # For Gemini model - you can use "global"
# Firestore
export FIRESTORE_DATABASE_ID="(default)"
# For CI/CD with Cloud Build SA
export CB_SA_EMAIL="${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"
# Agent SA
export SERVICE_SA="${SERVICE_NAME}-app"
export SERVICE_SA_EMAIL="${SERVICE_SA}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com"
# Agent
export GEMINI_API_KEY="<your-gemini-api-key>"
export LOG_LEVEL='DEBUG'
export APP_NAME="app_name"
export AGENT_NAME="chat_agent_name"
export GOOGLE_GENAI_USE_VERTEXAI="False" # Use API key auth, not Vertex AI
export MODEL="gemini-3-flash-preview"
# URLs
export BASE_URL="https://<your-domain>"
export MEDIUM_PROFILE="https://medium.com/@user-name"
export DEVTO_PROFILE="https://dev.to/user-name"
I then typically make a copy called .env that is not checked-in, and populate with my required values.
I've also created a scripts/setup-env.sh to automate setting up the environment:
#!/bin/bash
# This script is meant to be sourced to set up your development environment.
# It configures gcloud, installs dependencies, and activates the virtualenv.
#
# Usage:
# source ./setup-env.sh [--noauth]
#
# Options:
# --noauth: Skip gcloud authentication.
# --- Color and Style Definitions ---
RESET='\033[0m'
BOLD='\033[1m'
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
BLUE='\033[0;34m'
# --- Parameter parsing ---
AUTH_ENABLED=true
while [[ $# -gt 0 ]]; do
case "$1" in
--noauth)
AUTH_ENABLED=false
shift
;;
*)
shift
;;
esac
done
echo -e "${BLUE}${BOLD}--- ☁️ Configuring Google Cloud environment ---${RESET}"
# 1. Check for .env file
if [ ! -f .env ]; then
echo -e "${RED}❌ Error: .env file not found.${RESET}"
echo "Please create a .env file with your project variables and run this command again."
return 1
fi
# 2. Source environment variables and export them
echo -e "Sourcing variables from ${BLUE}.env${RESET} file..."
set -a # automatically export all variables (allexport = on)
source .env
set +a # disable allexport mode
# 3. Authenticate with gcloud and configure project
if [ "$AUTH_ENABLED" = true ]; then
echo -e "\n🔐 Authenticating with gcloud and setting project to ${BOLD}$GOOGLE_CLOUD_PROJECT...${RESET}"
gcloud auth login --update-adc 2>&1 | grep -v -e '^$' -e 'WSL' -e 'xdg-open' # Suppress any annoying WSL messages
gcloud config set project "$GOOGLE_CLOUD_PROJECT"
gcloud auth application-default set-quota-project "$GOOGLE_CLOUD_PROJECT"
else
echo -e "\n${YELLOW}Skipping gcloud authentication as requested.${RESET}"
gcloud config set project "$GOOGLE_CLOUD_PROJECT"
fi
echo -e "\n${BLUE}--- Current gcloud project configuration ---${RESET}"
gcloud config list project
echo -e "${BLUE}------------------------------------------${RESET}"
export PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_CLOUD_PROJECT --format="value(projectNumber)")
echo -e "${BOLD}PROD_PROJECT_NUMBER:${RESET} $PROJECT_NUMBER"
echo -e "${BLUE}------------------------------------------${RESET}"
# For CI/CD with Cloud Build SA
export CB_SA_EMAIL="${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"
# 6. Sync Python dependencies and activate venv
echo "Syncing python dependencies with uv..."
uv sync --dev --extra jupyter
source .venv/bin/activate
echo -e "\n${GREEN}✅ Environment setup complete for project ${BOLD}$GOOGLE_CLOUD_PROJECT${RESET}${GREEN}. Your shell is now configured.${RESET}"
And I can automate running this script whenever I open my workspace, by creating this .envrc file:
# For use with direnv
# - `sudo apt install direnv`
#
# Add hook to end of .bashrc:
# eval "$(direnv hook bash)"
#
# Allow this folder:
# - `direnv allow`
#
# Script as required...
if [ ! -d ".venv" ]; then
uv venv
fi
# Check if gcloud token is still valid to avoid re-authenticating
if gcloud auth print-access-token --quiet > /dev/null 2>&1; then
echo "gcloud token is valid, skipping authentication."
source scripts/setup-env.sh --noauth
else
echo "gcloud token is not valid, re-authenticating."
source scripts/setup-env.sh
fi
Initial Test
At this point, I should have a working agentic sample application. I can test it by running make playground. (This is a handy make target that was created by the ASP.) We can see it start up:
And then we can open our browser:
Okay, so far, so good!
Setting up the Spec with Conductor
To set up Conductor for the first time in a repo, we run this:
/conductor:setup
It guides us through many questions, like this one to set the tone:
It checks what kind of aesthetic I want from the frontend, and what sort of personality my chat bot will have. (I point it to my existing "Dazbo persona" which I have on GitHub.)
It then proposes a tech stack based on what I've already provided in the README.md, such as the design decisions I had already recorded. I needed to make a couple of tweaks to what Conductor suggested. For example, I wanted to use Google Firestore for managing portfolio content, rather than Cloud SQL Postgres.
Once the tech stack is confirmed, Conductor pulls in its relevant pre-canned style guides.
Conductor then proposes an initial track, and then commits everything done so far, as per its standard workflow.
At the end of this process, we now have these files in our workspace:
Set Up Automatic Gemini Reviews of GitHub PRs
A super useful thing to set up is automatic Gemini reviews of any PRs you submit in GitHub. There are a couple of ways to set this up, but since we already have Gemini CLI running, the easiest way is to run /setup-github from within Gemini CLI. Within a few seconds, it's created the necessary GitHub Actions for us:
We also need to ensure we've created a GitHub repo secret called GEMINI_API_KEY which contains the API key we created earlier.
Now, if we commit changes in a new branch and then raise a PR to merge this back into main, Gemini will automatically review it for us. Nice!
Set Up Terraform and CI/CD
I'd like the deployment of my Google Cloud resources to be consistent and repeatable, so I'm going to use Terraform. The Agent Starter Pack has created some initial Terraform for me, but I've tweaked it for my purposes.
State Management
The first thing I want to do is store my Terraform state in a GCS bucket, rather than in a local state file. Note that we need to create the bucket first and then point Terraform to it. I add this backend.tf:
# =========================================================================
# Terraform backend configuration
# =========================================================================
# Expects the bucket has already been created, e.g.
# gcloud storage buckets create gs://${GOOGLE_CLOUD_PROJECT}-tf-state
#
# To migrate from local state to this GCS backend state, simply re-run terraform init
# and answer 'yes' to the prompt to copy the existing state to the new backend.
terraform {
backend "gcs" {
bucket = "dazbo-portfolio-tf-state" # variables not supported here
prefix = "terraform/state"
}
}
Add Firestore to Terraform Config
I want my Terraform to create my Google Firestore database. I could have just added this additional Terraform configuration by hand, but I've decided to let Conductor do it for me. So I created a dedicated Conductor track.
Conductor adds this section to my storage.tf:
resource "google_firestore_database" "database" {
project = var.project_id
name = "(default)"
location_id = var.region
type = "FIRESTORE_NATIVE"
delete_protection_state = "DELETE_PROTECTION_DISABLED"
depends_on = [google_project_service.deploy_project_services]
}
CI/CD
We can obviously just deploy our Cloud Run service manually once the application is built. But I might as well get the CI/CD running so that any code changes I make are automatically pushed to Cloud Run.
The goal here is for code changes pushed to GitHub to automatically call a trigger that runs a Google Cloud Build workflow. The Cloud Build workflow will build the container image, push it to Google Artifact Registry, and from there, deploy it to Google Cloud Run.
ASP has already created initial Cloud Build configuration files, and initial Terraform build_triggers.tf. I just need to tweak them a bit for my purposes.
Create a Connection from Cloud Build to GitHub
In order for GitHub to be able to trigger our Cloud Build, we need to establish a connection between them.
We can set this up in the Cloud Console: open Cloud Build, then Repositories, then "Connect repository". Select "Edit repositories on GitHub" which takes you to Settings > Applications > Cloud Build. From here, we should add our new GitHub repo. Once this is saved in GitHub, we can select it back in the Cloud Console.
Now, update deployment/terraform/vars/env.tfvars with the appropriate configuration information, including:
- `host_connection_name`: as shown in Cloud Build > Repositories
- `github_pat_secret_id`: the name of the corresponding secret that has been created in Secret Manager. It will be called something like `<host_connection_name>-github-oauthtoken-1a2b3c`
I've created env.tfvars.template that describes the variables I need:
# Project name used for resource naming
project_name = "<project-name>"
# Your Google Cloud Project ID for resource deployment
project_id = "<your-project-id>"
# Your Google Cloud project ID that will be used to host the Cloud Build pipelines.
cicd_runner_project_id = "<cicd-runner-project-id>"
# Name of the host connection you created in Cloud Build
connection_already_exists = true # Do not try to create a new connection
host_connection_name = "<gh-conn-name>"
github_pat_secret_id = "<gh-conn-secret-id>"
repository_owner = "<gh-username>"
# Name of the repository you added to Cloud Build
repository_name = "<gh-repo-name>"
# The Google Cloud region you will use to deploy the infrastructure
region = "<gcp-region>"
# Service Configuration
service_name = "<service-name>" # Note hyphens
app_name = "<service_name>" # Note underscore
agent_name = "<service_name>_chat_agent"
google_genai_use_vertexai = "true"
model = "<model>"
location = "global"
app_domain_name = ["some-domain.com", "www.some-domain.com"]
I don't need to create the Cloud Build triggers; my Terraform will do that for me when I run it later.
Run the Terraform
Of course, we can run Terraform the manual way, by running terraform init, terraform plan and terraform apply commands. But I've created convenient Makefile targets. So we can just run this:
# First check it with make tf-plan
make tf-apply
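
The targets themselves are simple. Here's a sketch of roughly what they look like (paths and variable names are illustrative; your Makefile may differ):

```makefile
# Sketch of the Terraform convenience targets (tab-indented recipes)
tf-plan:
	terraform -chdir=deployment/terraform plan -var-file=vars/env.tfvars

tf-apply:
	terraform -chdir=deployment/terraform apply -var-file=vars/env.tfvars
```

The `-chdir` flag saves us from `cd`-ing into the Terraform directory first.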
After the Terraform apply, our resources have been successfully created. Let's verify the Firestore database has been created:
Great!
And now let's check our Cloud Build triggers:
Portfolio Resource Ingestion Strategy
Let's have a look at Gemini CLI Conductor executing a significant change for us. In Gemini CLI, we run the following command:
/conductor:newTrack
Conductor asks me what I want to achieve. I tell it:
My application needs a way to ingest my upstream resources. I want to store the ingested resource metadata in Google Firestore, and any associated images in GCS. I want the application to be able to trawl my sources, such as GitHub, Medium, and dev.to, using the relevant user ID for me. But for simplicity, I've decided I want this to be handled "out-of-band" to the main application. I.e. I'll just run this ingestion process on-demand, as required.
After a number of questions and checks, Conductor creates this initial spec.md:
# Specification: Portfolio Resource Ingestion Strategy
## Overview
This track defines and implements the system for bringing portfolio resources—including blog posts, GitHub repositories, websites, and applications—into the Dazbo Portfolio application. It focuses on a decoupled, "out-of-band" ingestion process via CLI scripts that populate Google Firestore and handle static assets via Google Cloud Storage.
## Goals
- Establish a "Source of Truth" in Firestore for all showcaseable items.
- Provide a hybrid ingestion mechanism (automated connectors + manual entries).
- Enable on-demand triggering via a dedicated CLI tool.
- Support metadata-only entries for paywalled or external content.
## Functional Requirements
### 1. Ingestion Connectors
- **GitHub Connector:** Fetch public repositories, metadata (stars, description), and primary languages.
- **Medium Connector:** Fetch blog post metadata (titles, links, summaries).
- **Dev.to Connector:** Fetch blog post metadata via the Dev.to API.
- **Custom/Manual Connector:** Support for entries not tied to a specific platform (e.g., private projects, paywalled articles, standalone websites).
### 2. Ingestion Triggering
- **CLI Tooling:** A set of Python scripts in `scripts/` or a dedicated module (e.g., `app/tools/ingest.py`) to trigger syncs manually from a developer environment.
### 3. Static Asset Management
- **Image Storage:** All portfolio-related images (thumbnails, profile pics, screenshots) will be stored in a **Public Google Cloud Storage (GCS) Bucket**.
- **Asset Referencing:** Firestore documents will store the public URLs of these GCS objects.
### 4. Data Architecture
- **Decoupled Logic:** The ingestion logic will be kept separate from the main FastAPI request-response cycle to minimize dependencies in the production runtime.
- **Schema Support:** Extend/Verify `Project` and `Blog` models to support "manual" flags and "metadata-only" status.
## Non-Functional Requirements
- **Resilience:** Handle API rate limiting and transient network errors gracefully.
- **Maintainability:** Use a modular "Connector" pattern to allow adding new sources easily.
- **Idempotency:** Ensure that repeated ingestion runs do not create duplicate entries in Firestore.
## Acceptance Criteria
- [ ] A Python script exists that can sync GitHub, Medium, and Dev.to content to Firestore.
- [ ] Ability to manually add a "Custom" resource entry via a configuration file (e.g., YAML) processed by the CLI.
- [ ] Images successfully uploaded to GCS are accessible via public URL in the app.
- [ ] Integration tests verify that ingestion does not create duplicate entries.
## Out of Scope
- A full-blown web-based Admin Dashboard (GUI).
- Admin API endpoints.
- Automated crawling of paywalled content body (metadata/stubs only).
- Real-time Webhook-based syncing.
- LinkedIn Connector.
I confirm I'm happy with this spec. Conductor then goes on to create an execution plan, as plan.md:
# Implementation Plan: Portfolio Resource Ingestion Strategy
## Phase 1: Foundation & Data Models
- [ ] Task: Update Pydantic models to support ingestion metadata
- [ ] Add `source_platform` and `is_manual` fields to `Project` and `Blog` models
- [ ] Add support for "metadata-only" entries
- [ ] Update Firestore service layer to handle these new fields
- [ ] Task: Verify and Update Terraform for GCS
- [ ] Check `deployment/terraform/storage.tf` for the public assets bucket definition
- [ ] Ensure the bucket has correct public access policies (if intended for public serving)
- [ ] Apply Terraform changes if necessary (update `tech-stack.md` if new resources are added)
- [ ] Task: Implement Google Cloud Storage (GCS) Utility
- [ ] Create a utility class for uploading and retrieving URLs for images in GCS
- [ ] Write unit tests for GCS utility with mocked storage client
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Foundation & Data Models' (Protocol in workflow.md)
## Phase 2: Ingestion Connectors
- [ ] Task: Implement GitHub Connector
- [ ] Create `GitHubConnector` to fetch repos using `PyGithub` or `httpx`
- [ ] Implement mapping from GitHub API response to `Project` model
- [ ] Write unit tests with mocked API responses
- [ ] Task: Implement Medium Connector
- [ ] Create `MediumConnector` to fetch post metadata (titles, links, summaries)
- [ ] Implement mapping to `Blog` model
- [ ] Write unit tests with mocked responses
- [ ] Task: Implement Dev.to Connector
- [ ] Create `DevToConnector` to fetch posts via API
- [ ] Implement mapping to `Blog` model
- [ ] Write unit tests with mocked API responses
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Ingestion Connectors' (Protocol in workflow.md)
## Phase 3: CLI Ingestion Tool
- [ ] Task: Implement CLI Harness
- [ ] Create a script in `app/tools/ingest.py` using `typer` or `argparse`
- [ ] Implement the command logic to orchestrate connectors
- [ ] Add logic to ensure idempotency (prevent duplicates in Firestore)
- [ ] Task: Implement YAML-based Manual Entry Support
- [ ] Define YAML schema for manual resource entries
- [ ] Add logic to the CLI to parse YAML and insert entries into Firestore
- [ ] Write unit tests for YAML parsing and ingestion logic
- [ ] Task: Conductor - User Manual Verification 'Phase 3: CLI Ingestion Tool' (Protocol in workflow.md)
## Phase 4: Integration & Documentation
- [ ] Task: End-to-End Integration Testing
- [ ] Create integration tests that run the full ingestion flow against a local Firestore emulator or mock
- [ ] Verify that images are correctly referenced and metadata is accurate
- [ ] Task: Update Documentation
- [ ] Update `README.md` and `docs/design-and-walkthrough.md` with the new ingestion architecture
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration & Documentation' (Protocol in workflow.md)
Excellent! I then ask Conductor to go ahead and do this:
/conductor:implement
Conductor goes ahead with the implementation. It creates tests first, and verifies they fail. Then it writes the code. Then it re-runs the tests and confirms they pass. Finally, it asks for manual verification from me.
Now that this has been implemented, I can try it out:
uv run python -m app.tools.ingest --help
The output looks like this:
Let's load in my GitHub projects:
uv run python -m app.tools.ingest --github-user derailed-dash
This reads in my public repos:
And if we look in Firestore, we can see entries have been created:
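
Under the hood, the spec's idempotency requirement boils down to something simple: derive a deterministic document ID from each item's canonical URL, so a re-run overwrites existing documents rather than creating duplicates. Here's a minimal sketch of the idea (names are illustrative, and a plain dict stands in for the Firestore collection):

```python
import hashlib

def doc_id_for(canonical_url: str) -> str:
    """Stable document ID: hashing the canonical URL maps each item to
    the same document on every run, so writes overwrite, not duplicate."""
    return hashlib.sha256(canonical_url.encode("utf-8")).hexdigest()[:24]

# A dict stands in for the Firestore collection in this sketch:
store: dict = {}

def upsert(item: dict) -> None:
    store[doc_id_for(item["url"])] = item
```

Running `upsert` twice with the same URL updates the one document instead of adding a second.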
Later I spotted that it had ingested repos that I'd forked from upstream. I don't want this; I only want it to ingest MY public repos. I fixed this later.
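
The fix for the forks issue is straightforward, because the GitHub REST API's repository listings include a boolean `fork` field on each repo. A sketch of the filtering step (sample data is illustrative):

```python
def own_repos(repos: list) -> list:
    """Keep only repos the user created, dropping forks.
    Each repo dict is in the shape returned by GET /users/{user}/repos,
    which includes a boolean "fork" field."""
    return [r for r in repos if not r.get("fork", False)]

sample = [
    {"name": "my-project", "fork": False},
    {"name": "upstream-lib", "fork": True},
]
filtered = own_repos(sample)  # keeps only "my-project"
```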
Raising the PR
Having committed my changes to my dedicated branch and pushed them, I then create a PR in GitHub.
Let's see what Gemini makes of it in GitHub. First, we get Gemini's overview of my PR:
Okay, this is a good summary. Now let's see what problems it's found:
Ooh, this is a good catch. It's definitely conceivable that I could have blogs, repos and applications with common names. So I go ahead and fix that straight away. There were a few other issues detected, which I've fixed, committed and pushed.
Then we can re-run the Gemini check of the PR by adding a comment of /gemini review in GitHub.
Add the UI
Here I've set up another Conductor track, to create our React UI.
I've asked for a clean UI, using Vite to implement a super-fast static React UI. I've asked for carousels for my blogs, repos and sites/applications, and I've asked for a button that will trigger an overlay for the chatbot.
After the functionality has been implemented, we have a working UI that looks like this:
Not bad for so little work!
Raise the UI PR
Let's see what the Gemini review makes of this one:
I fix a few issues that are flagged by Gemini, and we're ready to move on to the next track.
Creating the Container Image
Here I create a new Conductor track to build the Docker image. I already had a Dockerfile, but I need to incorporate the new frontend. Also, I want a two-stage Dockerfile in order to create a lightweight container image that starts fast. Here's the resulting Dockerfile:
# Stage 1: Build the React frontend
FROM node:20-slim AS frontend-builder
WORKDIR /app/frontend
COPY frontend/package.json frontend/package-lock.json ./
RUN npm install
COPY frontend/ ./
RUN npm run build
# Stage 2: Final production image
FROM python:3.12-slim
RUN pip install --no-cache-dir uv==0.8.13
WORKDIR /code
# Copy backend requirements and install
COPY ./pyproject.toml ./README.md ./uv.lock ./
RUN uv sync --frozen
# Copy backend code
COPY ./app ./app
# Copy built frontend assets from Stage 1
COPY --from=frontend-builder /app/frontend/dist /code/frontend/dist
ARG COMMIT_SHA=""
ENV COMMIT_SHA=${COMMIT_SHA}
ARG AGENT_VERSION=0.0.0
ENV AGENT_VERSION=${AGENT_VERSION}
EXPOSE 8080
CMD ["uv", "run", "uvicorn", "app.fast_api_app:app", "--host", "0.0.0.0", "--port", "8080"]
And I need to add a couple of targets to my Makefile so we can both build and launch the container.
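
For illustration, the new targets look roughly like this (target and variable names are my guesses at a sensible shape, not the exact Makefile):

```makefile
# Hypothetical Makefile targets for local container builds (tab-indented)
docker-build:
	docker build -t $(SERVICE_NAME):latest \
		--build-arg COMMIT_SHA=$(shell git rev-parse HEAD) .

docker-run:
	docker run --rm -p 8080:8080 --env-file .env $(SERVICE_NAME):latest
```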
Raising the PR
Once again, let's see what Gemini says:
Gemini didn't find any other issues. BUT, the CI/CD failed because one of my integration tests was broken. Oops! I had forgotten to update one of the API endpoints. Easily fixed.
Onwards!
Deploy to Google Cloud Run
Here I made a few updates to my Cloud Build CI/CD configuration, and my Terraform triggers. In particular, to make sure I'm passing in all the required environment variables to deploy my Cloud Run service.
I also updated my Makefile to replicate the Cloud Build workflow. So it now looks like this:
# Build and deploy the agent to Cloud Run (Manual / Development)
# Parameters not specified are inherited from service provisioned by TF
# Builds directly from source; not from Google Artifact Registry
deploy-cloud-run:
gcloud run deploy $(SERVICE_NAME) \
--source . \
--project $(GOOGLE_CLOUD_PROJECT) \
--region $(GOOGLE_CLOUD_REGION) \
--service-account="$$SERVICE_SA_EMAIL" \
--max-instances=1 \
--cpu-boost \
--allow-unauthenticated \
--set-env-vars="COMMIT_SHA=$(shell git rev-parse HEAD),APP_NAME=$(APP_NAME),AGENT_NAME=$(AGENT_NAME),MODEL=$(MODEL),GOOGLE_GENAI_USE_VERTEXAI=$(GOOGLE_GENAI_USE_VERTEXAI),GOOGLE_CLOUD_LOCATION=$(GOOGLE_CLOUD_LOCATION),LOG_LEVEL=DEBUG" \
--labels=dev-tutorial=devnewyear2026 \
$(if $(IAP),--iap)
The service deployed without incident and I was able to run the portfolio application from the public Cloud Run URL. But the service was not showing the information stored in Firestore.
I immediately knew what the problem was: I hadn't granted the Cloud Run service account the right role to use Firestore! But rather than just fixing it myself, I wanted to see how good Gemini CLI is at diagnosing the issue, given that I have the gcloud extension installed.
So, I asked Gemini CLI to diagnose my access issue. It went ahead and used the gcloud MCP server to read the Cloud Run service logs directly. It immediately detected the 403 errors in the logs, and then went ahead and added the missing datastore.user role to this variable in my variables.tf file:
variable "app_sa_roles" {
description = "List of roles to assign to the application service account"
type = list(string)
default = [
"roles/aiplatform.user",
"roles/logging.logWriter",
"roles/cloudtrace.agent",
"roles/storage.objectAdmin",
"roles/serviceusage.serviceUsageConsumer",
"roles/datastore.user",
]
}
I ran my make tf-apply and all was fixed. Woop!
Implementing the Chatbot
I created a new Conductor track for this. We already have a bare-bones agent and a UI widget. What this track needs to do is:
- Add a secret to Secret Manager, for my chatbot's personality.
- Inject the secret as an environment variable into Cloud Run.
- Retrieve the secret in the agent code.
- Add tools in order to query my portfolio data from Firestore.
- Wire the UI widget to the agent.
But I didn't need to tell Conductor all of this. I just told it that the personality will be stored in Google Secret Manager. Conductor then went ahead and worked out what it needed to do. Check out the plan it built:
# Implementation Plan - Chatbot Implementation
This plan outlines the steps to implement the "Dazbo" portfolio chatbot, including backend agent logic, infrastructure updates for secret management, and frontend integration.
## Phase 1: Infrastructure & Secret Management
- [ ] Task: Create Google Secret for Persona Style
- [ ] Create `dazbo-system-prompt` secret in Google Secret Manager
- [ ] Populate with the Dazbo persona and system prompt content
- [ ] Task: Update Terraform Configuration
- [ ] Define `google_secret_manager_secret` in `deployment/terraform/storage.tf` (or dedicated file)
- [ ] Update `google_cloud_run_v2_service` in `deployment/terraform/service.tf` to inject the secret as an environment variable named `DAZBO_SYSTEM_PROMPT`
- [ ] Update `app_sa_roles` in `deployment/terraform/variables.tf` to include `roles/secretmanager.secretAccessor`
- [ ] Task: Apply Infrastructure Changes
- [ ] Run `make tf-apply` to provision resources and update Cloud Run
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure' (Protocol in workflow.md)
## Phase 2: Agent Tooling & Logic [checkpoint: 47cbdae]
- [ ] Task: Implement Portfolio Search Tool [2e50bd9]
- [ ] Write unit tests for `search_portfolio` tool
- [ ] Implement `search_portfolio` in `app/agent.py` or a new tools module
- [ ] Tool should query Firestore `projects` and `blogs` collections based on query/tags
- [ ] Task: Implement Content Detail Tool [b8078da]
- [ ] Write unit tests for `get_content_details` tool
- [ ] Implement `get_content_details` to fetch a full document from Firestore by ID
- [ ] Task: Refine Agent Persona & System Prompt Handling [69691f7]
- [ ] Update `app/config.py` to include the environment variable name
- [ ] Modify `app/agent.py` to read the system prompt from the `DAZBO_SYSTEM_PROMPT` environment variable at runtime
- [ ] Ensure `InMemorySessionService` is correctly integrated for history persistence
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Agent Logic' (Protocol in workflow.md)
## Phase 3: Backend API & Streaming
- [ ] Task: Implement Streaming Endpoint in FastAPI
- [ ] Write integration tests for streaming chat endpoint
- [ ] Update `app/fast_api_app.py` to include an SSE endpoint for the agent
- [ ] Ensure the agent's stream is correctly piped to the SSE response
- [ ] Task: Verify Backend End-to-End
- [ ] Run `make local-backend`
- [ ] Use `curl` to verify the streaming response from the API
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Backend Streaming' (Protocol in workflow.md)
## Phase 4: Frontend Integration
- [ ] Task: Connect ChatWidget to Backend
- [ ] Implement SSE listener in `frontend/src/components/ChatWidget.tsx`
- [ ] Update UI state to handle streaming chunks and display message history
- [ ] Add "typing" indicator and auto-scroll to bottom
- [ ] Task: Verify UI/UX
- [ ] Run `make react-ui` and `make local-backend`
- [ ] Confirm chat feels responsive and correctly reflects the Dazbo persona
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Frontend Integration' (Protocol in workflow.md)
## Phase 5: Documentation & Roadmap
- [ ] Task: Update Roadmap & Design Docs
- [ ] Append future RAG/Vector Search tasks to `TODO.md`
- [ ] Update `docs/design-and-walkthrough.md` with the RAG roadmap details (Vertex AI + Firestore Vector Search)
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Documentation' (Protocol in workflow.md)
Here's a snippet that shows our agent now has access to these new portfolio tools:
root_agent = Agent(
name="root_agent",
description="You are Dazbo's helpful assistant. You can search for content in his portfolio",
model=Gemini(
model=settings.model,
retry_options=types.HttpRetryOptions(attempts=3),
),
instruction=settings.dazbo_system_prompt,
tools=[search_portfolio, get_content_details],
)
We can test the chatbot using the "ADK Web" UI:
We can see it's using the right tools and retrieving the right content. Nice!
Adding the Chatbot to the UI
After making all the required changes and pushing, here's the PR summary from Gemini:
And it raises this concern about prompt injection attacks:
Ooh, that's a good one. I added some defensive code (with Gemini's help, of course) and then created a prompt injection test.
Let's have a look at the Chatbot in action:
Implement Rate Limiting
I wanted API rate limiting for the usual reasons. For example, to prevent abuse, and to avoid unexpected costs. I don't want a denial-of-service attack to cost me loads of money!
I ask Conductor to implement the rate limiting. It offers a few ways to do this, but I've gone with the slowapi library. I included global rate limiting for the API, and a more stringent limit for any calls that interact with Gemini.
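
Conceptually, what slowapi is doing per client is a sliding-window check: track recent request timestamps, drop the ones that have aged out, and reject once the window is full. Here's a stdlib-only illustration of that idea (this is not the slowapi code, just the underlying pattern):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most max_requests per client within a rolling time window."""

    def __init__(self, max_requests: int, window_s: float):
        self.max_requests = max_requests
        self.window_s = window_s
        self._hits = defaultdict(deque)  # client_id -> recent timestamps

    def allow(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._hits[client_id]
        # Evict timestamps that have aged out of the window
        while q and now - q[0] >= self.window_s:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False
```

With slowapi you get this per-route via decorators like `@limiter.limit("5/minute")`, keyed on the client's IP address.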
Working Around the Medium RSS Issue
It turns out the Medium RSS feed only returns the 10 latest blogs. This is no good! I want my portfolio to show all my blogs. Medium doesn't offer an API to retrieve blogs, so I'm going to have to go old-school and scrape.
Additionally, I would like to:
- Use Gemini to create summaries of each blog read. I will store the summary in Firestore.
- Suggest tags for each ingested blog.
- Convert HTML pages into markdown, with some specific layout rules. (I will re-use this code elsewhere, to help me with cross-posting to blogging sites.)
I created a Conductor track to do this, and guided it in creating a plan and spec, as usual. One really cool thing that Conductor tells me is that Medium offers the ability to export all my blogs in a single zip. I didn't know this! This zip contains all my posts in HTML format. Well, it's not an API, but it's very useful.
Here's the spec.md from Conductor:
# Specification: Comprehensive Medium Blog Ingestion
## Overview
The goal of this track is to overcome the 10-post limitation of the Medium RSS feed and enhance the portfolio's content richness. This involves a hybrid ingestion system (RSS + Zip Export), paywall detection, and a new processing pipeline that converts HTML content to structured Markdown and generates AI-powered summaries.
## Functional Requirements
- **Hybrid Ingestion Engine:**
- Support fetching the latest 10 posts via the standard Medium RSS feed.
- Implement a parser for Medium export archives (`posts.zip`).
- The archive parser must extract blog metadata (title, date, content/summary, URL) from HTML files located within the `posts/` directory of the zip.
- **Content Processing & AI Enrichment:**
- **HTML to Markdown Conversion:**
- Convert the blog post HTML content into clean Markdown.
- **Formatting Rules:**
- Title: H1 (`#`)
- Headings: H2 (`##`)
- Subheadings: H3 (`###`)
- **Frontmatter:** Include YAML frontmatter with `subtitle` and `tags` (if available).
- **AI Summarization:**
- Generate a concise summary of the entire blog post using the project's Gemini agent.
- **Storage:** Store the generated Markdown and AI Summary in the `Blog` model in Firestore.
- **Paywall Identification:**
- Implement heuristic analysis to detect paywalled content (e.g., "Member-only story" markers).
- Update the `Blog` model to include an `is_private` boolean field.
- **Duplicate Management & Idempotency:**
- Detect duplicates across RSS and Zip sources using the canonical URL (fallback to Title).
- **Priority:** RSS Feed metadata takes precedence for basic fields (date, title), but the Zip export (processed into Markdown) serves as the source for the full content body.
- **CLI Enhancement:**
- Update the `ingest` CLI tool to accept a `--medium-zip` parameter.
- **UI Presentation:**
- Display the **AI-generated summary** in the portfolio interface.
- Provide a clear link to the original Medium post for full reading.
- Display a "Member-only" badge for paywalled content.
- **Documentation:**
- Update `docs/design-and-walkthrough.md` to reflect the new hybrid ingestion architecture, design decisions, and a detailed walkthrough of the mechanism.
## Non-Functional Requirements
- **Performance:** Zip parsing and AI summarization should handle rate limits gracefully.
- **Maintainability:** Modular parser architecture.
## Acceptance Criteria
- [ ] Ingestion with `posts.zip` populates Firestore with historical posts.
- [ ] Blog content is stored as Markdown in Firestore (for future use/RAG).
- [ ] Each blog entry has an AI-generated summary stored in Firestore.
- [ ] The Portfolio UI displays the AI Summary and links to the full post.
- [ ] Member-only stories are correctly flagged and badged in the UI.
- [ ] `docs/design-and-walkthrough.md` is updated with the new ingestion details.
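
The duplicate-management requirement above hinges on canonicalising URLs, since the RSS feed and the zip export can present the same post with different decorations (tracking query strings, trailing slashes). A sketch of the kind of normalisation involved (the handle and slug below are made up, and the track's actual matching logic may differ):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalise(url: str) -> str:
    """Normalise a blog URL so RSS and zip-export variants compare equal:
    drop the query string and fragment, lowercase the host, and strip any
    trailing slash from the path."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/")
    return urlunsplit((parts.scheme, parts.netloc.lower(), path, "", ""))
```

So `.../my-post-abc123?source=rss----xyz` and `.../my-post-abc123/` both canonicalise to the same key.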
I spotted a number of issues with the initial implementation. For example:
- It takes a lot of time to process a zip with many blog files, so we need some sort of progress indicator.
- The Medium blog export includes comments and replies to comments. We don't want to import these as blogs, so the application needs to ignore these.
- We also need to skip files that are unpublished drafts.
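
The draft-skipping part of that filtering can be done on filenames alone. In my export, draft files in the `posts/` folder carry a `draft_` prefix (that naming convention is an assumption in this sketch; comment/reply detection needs content heuristics and isn't shown):

```python
import io
import zipfile

def published_posts(zip_bytes: bytes) -> list:
    """Return the names of published post HTML files in a Medium export
    zip, skipping drafts (assumed to be prefixed 'draft_') and any
    non-HTML files outside posts/."""
    names = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            base = name.rsplit("/", 1)[-1]
            if not name.startswith("posts/") or not base.endswith(".html"):
                continue
            if base.startswith("draft_"):
                continue
            names.append(name)
    return names
```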
This ends up being a pretty big PR in the end:
After we run the new ingestion tool, we can see Firestore has been properly updated with over 100 blogs from Medium, including AI-generated summaries and AI-extracted tags:
Domain Name Mapping
My goal is to map my Cloud Run URL to my custom domain.
But at this point in the day, I've run out of Gemini quota! Sad times. So I'm doing the next few steps without any AI assistance.
First, I need to verify my domain name. I'm following the guidance here in order to create a TXT record that I can configure with my DNS registrar.
Next, I add this block to my services.tf Terraform:
# Create domain mappings for all listed domains
resource "google_cloud_run_domain_mapping" "app_prod_domain_mapping" {
for_each = toset(var.app_domain_name)
name = each.key
project = var.project_id
location = google_cloud_run_v2_service.app.location
metadata {
namespace = data.google_project.project.project_id
}
spec {
route_name = google_cloud_run_v2_service.app.name
}
}
I add the matching variable in variables.tf:
variable "app_domain_name" {
description = "A list of domain names to be mapped to the service"
type = list(string)
}
And then I add the comma-separated list of domains to my env.tfvars:
app_domain_name = ["darrenlester.net", "www.darrenlester.net"]
Now I apply this Terraform configuration and the domain mappings are created. We can verify this in the Google Cloud Console, under Cloud Run > Domain Mappings:
I need to grab the A (IPv4) and AAAA (IPv6) records for the domain and add them to the DNS configuration at my DNS registrar. I also need to grab the CNAME for the www subdomain and add that to the registrar too.
Shortly after doing this, Google provisioned SSL certificates for the domains; it took about 30 minutes.
UI Aesthetics
The UI is okay, but it's not great. I need to add some style and polish!
I started by creating a banner image using Nano Banana Pro. I gave it a few images I wanted to use as subjects. I provided my profile pic, obviously!
Then I told the agent in Antigravity to help me experiment with the UI, with these requirements:
- Add the new banner
- Add my profile pic to the left of the banner
- Use black background with dark theme
It made some excellent initial UI changes. There were a few aesthetic issues to work through, but eventually I ended up with a result that I was happy with:
(I've iterated the UI a couple of times since then.)
Okay, You Get The Idea
I could show more hands-on stuff from this journey, but I'm knackered, and I'm sure you are too. I hope you've found these examples interesting and useful!
Before You Go
Please engage with this post - even if it's just a like. And please follow me so you don't miss my future content. Thanks!