"Your Books Tell Your Story" — Building an AI Portfolio That Talks Back

New Year, New You Portfolio Challenge Submission

About Me

Hi, I'm Shusuke O. — a Software Engineer from Tokyo. I build technology that harmonizes with nature and human life.

I'm passionate about distributed systems in which independent agents interact to produce an outcome together, rather than being driven by rigid, centralized hierarchies. That belief directly inspired this project.

Tech stack: Go, TypeScript, React, Three.js, Kubernetes, Terraform, and more.

Notable project: Amida-san — a distributed lottery platform with 10,000+ users.


Why a Bookshelf?

I've always believed that what you read shapes who you become. When I look at my bookshelf at home, I don't just see titles — I see turning points. The distributed systems book that changed how I think about architecture. The philosophy book that made me question my assumptions about technology. The novel that reminded me why I started coding in the first place.

A traditional portfolio asks: "What can you do?" But I wanted to answer a deeper question: "How do you think?"

That's why I built Talking Bookshelf — an AI-powered bookshelf character that introduces me through my reading history. Not a static list of skills, but a living conversation about the ideas that shaped me.

The concept is simple: Your books tell your story. The bookshelf knows what I've read, my thoughts on each book, and can have natural conversations with visitors about my interests, skills, and experiences.

Portfolio

https://talkingbookshelf.com/demo

Try it out! Ask the bookshelf anything — about my favorite books, technical skills, or what I'm currently reading.

How I Built It

Tech Stack

  • Backend: Go 1.24 + Gin
  • AI Agent: Google ADK (Agent Development Kit) v0.3.0
  • LLM: Gemini 2.5 Flash + Flash Lite
  • Frontend: React 19 + TypeScript + Vite
  • UI: Material UI 7 + Framer Motion
  • Hosting: Google Cloud Run
  • i18n: i18next (English/Japanese)

Architecture

The core of this project is an AI Agent built with Google's ADK (Agent Development Kit). The bookshelf character is powered by Gemini 2.5 Flash (with Flash Lite for validation) and has access to three function-calling tools:

  1. search_books — Search my reading history by keyword
  2. get_book_details — Retrieve detailed information about a specific book, including my personal notes
  3. get_reading_stats — Get statistics about my reading habits (total books, favorite genres, etc.)

┌─────────────┐     ┌─────────────┐     ┌──────────────┐
│   Visitor   │────▶│  Bookshelf  │────▶│  Gemini 2.5  │
│             │◀────│    Agent    │◀────│ Flash + Lite │
└─────────────┘     └──────┬──────┘     └──────────────┘
                           │
                    ┌──────┴──────┐
                    │  Function   │
                    │   Calling   │
                    │    Tools    │
                    └─────────────┘
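
To make the tools concrete, here's a minimal sketch of the logic behind search_books. The Book struct, the in-memory slice, and the function signature are illustrative assumptions for this post; in the real backend the handler is registered with ADK as a tool rather than called directly.

import "strings"

// Book is an illustrative record for one entry in my reading history.
type Book struct {
    Title string
    Genre string
    Notes string // my raw, unpolished notes on the book
}

// searchBooksHandler sketches the search_books tool: case-insensitive
// keyword matching over titles, genres, and notes in my reading data.
func searchBooksHandler(library []Book, keyword string) []Book {
    var matches []Book
    k := strings.ToLower(keyword)
    for _, b := range library {
        haystack := strings.ToLower(b.Title + " " + b.Genre + " " + b.Notes)
        if strings.Contains(haystack, k) {
            matches = append(matches, b)
        }
    }
    return matches
}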

Why Gemini 2.5 Flash / Flash Lite & ADK?

I chose Gemini 2.5 Flash for its low latency (critical for conversational UX) and strong function-calling capabilities. The model reliably decides when to call tools and handles multi-turn context well.

Google ADK v0.3.0 made building the agent straightforward. Defining tools is declarative and session management is built-in. Here's the core agent setup:

agent := genkit.DefineAgent(
    "bookshelf",
    &genkit.AgentOptions{
        Model: "googleai/gemini-2.5-flash",
        Tools: []genkit.Tool{searchBooks, getBookDetails, getReadingStats},
    },
)

Before ADK, I built agents from scratch — and it was painful. Parsing tool calls from LLM responses, executing them, feeding results back, handling the "should I call another tool?" loop... all manual. Multi-turn conversation? I had to manage session state myself, worry about token limits, and implement context truncation.
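
For contrast, here's roughly what that hand-rolled loop looks like. Everything in this sketch (llmClient, the message and response types, executeTool) is made up for illustration rather than taken from any particular SDK:

import "context"

// Minimal illustrative types; a real SDK has its own versions of these.
type toolCall struct {
    Name string
    Args map[string]any
}

type llmResponse struct {
    Text      string
    ToolCalls []toolCall
}

type message struct {
    Role    string
    Content string
}

type llmClient interface {
    Generate(ctx context.Context, history []message) (llmResponse, error)
}

// runManualAgent is the loop you end up writing by hand: ask the model,
// execute any tool calls it requests, feed the results back, and repeat
// until it answers with plain text.
func runManualAgent(ctx context.Context, llm llmClient, history []message) (string, error) {
    for {
        resp, err := llm.Generate(ctx, history)
        if err != nil {
            return "", err
        }
        if len(resp.ToolCalls) == 0 {
            return resp.Text, nil // no more tools to call; this is the answer
        }
        for _, call := range resp.ToolCalls {
            result, err := executeTool(call.Name, call.Args) // dispatch to search_books, etc.
            if err != nil {
                return "", err
            }
            history = append(history, message{Role: "tool", Content: result})
        }
        // ...plus truncating history by hand to stay inside the context window.
    }
}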

ADK handles all of this. The tool execution loop is automatic. Session management is built-in. I just define my tools declaratively and focus on the actual product experience — not the plumbing.
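
On top of that, the HTTP layer is just a thin Gin handler in front of the agent. Below is a simplified sketch: the chatRequest/chatResponse types and the agent.Run call are placeholders I've made up for illustration, not the actual ADK session API.

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

type chatRequest struct {
    SessionID string `json:"sessionId"`
    Message   string `json:"message"`
}

type chatResponse struct {
    Reply string `json:"reply"`
}

func main() {
    r := gin.Default()
    r.POST("/api/chat", func(c *gin.Context) {
        var req chatRequest
        if err := c.ShouldBindJSON(&req); err != nil {
            c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
            return
        }
        // agent.Run is a placeholder for the ADK call that executes one
        // conversational turn inside the visitor's session.
        reply, err := agent.Run(c.Request.Context(), req.SessionID, req.Message)
        if err != nil {
            c.JSON(http.StatusInternalServerError, gin.H{"error": "agent failed"})
            return
        }
        c.JSON(http.StatusOK, chatResponse{Reply: reply})
    })
    r.Run(":8080")
}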

I also use a two-model strategy for optimal UX:

  • Gemini 2.5 Flash — Main agent for conversation and function calling (quality matters here)
  • Gemini 2.5 Flash Lite — Validation pipeline for hallucination correction (speed matters here)

When the validation pipeline detects a hallucination, Flash Lite generates a corrected response instantly. Users don't notice the extra safety layer because the lightweight model keeps latency low.

Character Design

The bookshelf has a wooden, cozy design with:

  • Eyes (glasses) that blink and react
  • A mouth (made of books) that animates when speaking
  • Hands that wave during greetings
  • Emotional states: idle, thinking, talking, surprised

The character creates an approachable, friendly atmosphere that encourages visitors to explore through conversation.

What I'm Most Proud Of

1. From Static Pages to Personal Conversations

Traditional portfolios are fundamentally one-way communication. You write about yourself, list your projects, and hope visitors read it. But here's the problem: visitors don't know what questions to ask. They skim, maybe click a link or two, and leave.

Talking Bookshelf flips this model. Instead of presenting information, it responds to curiosity. Visitors can ask "What's your favorite book?" or "Do you have any books about AI?" — and get answers grounded in my actual reading history. It's a simple interaction, but it transforms passive browsing into active discovery.

2. The Philosophy: Celebrating Individuality, Not Averages

Most platforms today reduce people to averaged metrics — star ratings, follower counts, standardized skill assessments. Your uniqueness gets flattened into comparable numbers.

This project takes the opposite approach: embrace the individual.

  • The bookshelf has its own distinct personality — not a generic assistant
  • My reading list reflects my specific journey, not algorithmic recommendations
  • Every conversation is unique to the visitor's curiosity

The result? An interaction that feels personal, not templated. You're not browsing a profile — you're meeting a character with opinions, favorites, and quirks.

3. Messy Notes, Coherent Answers

You don't need polished writing to create meaningful conversations.

I just jot down whatever comes to mind after finishing a book:

  • "This finally made distributed consensus click for me"
  • "Reminds me of that paper on event sourcing..."

When a visitor asks about microservices, the AI synthesizes these fragments into a coherent response — reflecting how I actually think, not how I've learned to market myself.

A portfolio that grows naturally as I read, without the pressure of "content creation."

4. Solving AI Hallucination: The Validation Pipeline

One critical challenge with AI-powered portfolios: hallucination. The agent might confidently claim I read books I've never touched, or misrepresent my opinions.

I built a validation pipeline to solve this:

  1. BookAnnotationValidator — Ensures every book reference actually exists in my data
  2. ContentValidator — Cross-checks claims against my actual notes using Flash Lite
  3. ResponseCorrector — Automatically regenerates responses that fail validation

This means visitors can trust what the bookshelf says. It's grounded in real data, not AI imagination.
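
As a concrete example, the first check can be as simple as confirming that every title the agent mentions actually exists in my data. A minimal sketch, reusing the illustrative Book type from earlier (how the referenced titles are extracted from the response is omitted here):

import (
    "fmt"
    "strings"
)

// validateBookAnnotations fails if the agent referenced a book that isn't
// in my reading history; the ResponseCorrector then regenerates the reply
// with Gemini 2.5 Flash Lite.
func validateBookAnnotations(referencedTitles []string, library []Book) error {
    known := make(map[string]bool, len(library))
    for _, b := range library {
        known[strings.ToLower(b.Title)] = true
    }
    for _, title := range referencedTitles {
        if !known[strings.ToLower(title)] {
            return fmt.Errorf("hallucinated book reference: %q", title)
        }
    }
    return nil
}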


Final Thoughts

This project started as a contest entry — but I'm having so much fun building it that I've decided to keep going.

Beyond the technical side, I'm also exploring what it truly means to talk about books — how to craft conversations that feel substantial and rewarding, not just informative.

I'm continuing to develop this project at talkingbookshelf.com — exploring how this concept could help others share their reading journeys too.

Thanks for reading. Feel free to chat with my bookshelf and see what I've been reading.
