Aniket Hingane

Building an AI-Powered E-Shopping Platform with Intelligent Product Recommendations

Introduction

Have you ever wished your online shopping experience felt more like having a knowledgeable sales assistant right beside you? Through an experimental proof-of-concept (PoC) project, I set out to bring that vision to life using cutting-edge AI agent technology.

In this article, I'll walk you through my journey of building an intelligent e-commerce platform that fundamentally reimagines how shoppers interact with online stores. Instead of clicking through endless category pages and search filters, imagine simply chatting with an AI assistant: "Show me wireless headphones under $100" or "Add a laptop backpack to my cart."

This isn't some far-future concept—I built a working prototype using Google's Agent Development Kit (ADK), Next.js, and CopilotKit. The system handles everything from natural language product searches to cart management, all while maintaining conversation context and providing personalized recommendations.

What excited me most about this experiment wasn't just the technology itself, but discovering how conversational AI can genuinely improve the shopping experience. Throughout this article, I'll share the architecture decisions, implementation challenges, and surprising insights I gained while building an agentic e-commerce system.

Background and Context

E-commerce has come a long way from static product catalogs, but the fundamental interaction model hasn't changed much. Users still navigate through hierarchical menus, type keywords into search boxes, and manually manage their shopping carts through button clicks.

Meanwhile, AI agents have been making waves across various domains. From customer service chatbots to code assistants, we're seeing how natural language interfaces can simplify complex workflows. In my view, e-commerce represents a perfect use case for this technology—shopping is inherently conversational, goal-oriented, and benefits from personalized guidance.

Google's Agent Development Kit caught my attention because it provides a structured framework for building production-grade AI agents with tool-calling capabilities. Unlike simple chatbots that just answer questions, ADK agents can take actions—search databases, update state, trigger UI changes, and more.

I paired this with CopilotKit, a framework designed to integrate AI agents into web applications seamlessly. CopilotKit handles the complex orchestration between frontend state, backend agents, and real-time UI updates. From my experience, this combination proved ideal for building interactive, stateful shopping experiences.

The timing feels right for this exploration. Large language models have become sophisticated enough to understand shopping intent reliably, while frameworks like ADK make it practical to build agent-based systems without reinventing the wheel.

Architectural Overview

The Frontend: Next.js with CopilotKit

I chose Next.js 15 for the frontend, leveraging its App Router for modern React development. The key innovation here is how CopilotKit creates a bidirectional connection between the UI and the AI agent.

Here's how I set up the shared state between the shopping interface and the agent:

const { state, setState } = useCoAgent<ShoppingState>({
  name: "shopping_agent",
  initialState: {
    cart: [],
    products: [...],
    total: 0
  },
});

What I found remarkable is that this state synchronizes automatically. When the agent adds a product to the cart via a tool call, the UI updates in real-time. When users click "Add to Cart" buttons, the agent sees the updated state in its next invocation.
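
It helps to pin down the shared contract both sides rely on. Here's a sketch of that state shape as Python TypedDicts, mirroring the frontend's ShoppingState; the type definitions are my own annotation, inferred from the fields the agent tools read and write:

from typing import List, TypedDict

class Product(TypedDict):
    id: str
    name: str
    category: str
    price: float
    stock: int

class CartItem(TypedDict):
    product: Product  # full product record, nested
    quantity: int

class ShoppingState(TypedDict):
    cart: List[CartItem]
    products: List[Product]
    total: float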

I implemented the sidebar chat interface using CopilotKit's built-in CopilotSidebar component, which I customized with shopping-specific messaging:

<CopilotSidebar
  clickOutsideToClose={false}
  defaultOpen={true}
  labels={{
    title: "Shopping Assistant",
    initial: "👋 Welcome to AI E-Shop! I can help you browse products, manage your cart, and find the perfect items..."
  }}
/>

This approach felt natural because it keeps the traditional e-commerce UI intact while adding conversational capabilities as an enhancement, not a replacement.

The Backend: Python Agent with Google ADK

The backend is where things get interesting. I built a FastAPI application that hosts an ADK agent equipped with specialized shopping tools. Here's the core agent definition:

from google.adk.agents import LlmAgent

shopping_agent = LlmAgent(
    name="ShoppingAgent",
    model="gemini-2.0-flash-exp",
    instruction="""
    You are a friendly AI shopping assistant for an e-commerce platform.
    Help users find products, manage their cart, and make purchase decisions.
    """,
    tools=[add_to_cart, remove_from_cart, search_products, 
           get_cart_summary, recommend_product],
    before_agent_callback=on_before_agent,
    before_model_callback=before_model_modifier
)
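
To expose the agent to the frontend, I serve it from FastAPI. The wiring below is a minimal sketch that assumes the ag-ui-adk bridge package used by CopilotKit's ADK integration; the app_name and user_id values are placeholders:

from fastapi import FastAPI
from ag_ui_adk import ADKAgent, add_adk_fastapi_endpoint

app = FastAPI(title="AI E-Shop Agent")

# Wrap the ADK agent so it speaks the AG-UI protocol CopilotKit consumes
adk_shopping_agent = ADKAgent(
    adk_agent=shopping_agent,
    app_name="shopping_app",  # placeholder identifiers
    user_id="demo_user",
)

# Mount the agent endpoint on the FastAPI app
add_adk_fastapi_endpoint(app, adk_shopping_agent, path="/")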

Each tool is a Python function that manipulates the shopping state. Here's my implementation of the add_to_cart tool:

from typing import Any, Dict

from google.adk.tools import ToolContext


def add_to_cart(
    tool_context: ToolContext,
    product_id: str,
    quantity: int = 1
) -> Dict[str, Any]:
    """Add a product to the shopping cart."""
    products = tool_context.state.get("products", [])
    cart = tool_context.state.get("cart", [])

    # Find the product
    product = next((p for p in products if p["id"] == product_id), None)
    if not product:
        return {"status": "error", "message": "Product not found"}

    # Check stock
    if product["stock"] < quantity:
        return {"status": "error", "message": "Insufficient stock"}

    # Add to cart or update quantity
    existing = next((item for item in cart if item["product"]["id"] == product_id), None)
    if existing:
        existing["quantity"] += quantity
    else:
        cart.append({"product": product, "quantity": quantity})

    # Update total
    total = sum(item["product"]["price"] * item["quantity"] for item in cart)
    tool_context.state["cart"] = cart
    tool_context.state["total"] = total

    return {
        "status": "success",
        "message": f"Added {quantity} × {product['name']} to cart",
        "total": total
    }

What I discovered through experimentation is that providing clear return messages helps the agent communicate outcomes effectively to users. The agent doesn't just execute actions—it explains what happened.
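
The other cart tools follow the same shape. Here's a simplified sketch of how remove_from_cart can mirror that pattern:

def remove_from_cart(
    tool_context: ToolContext,
    product_id: str
) -> Dict[str, Any]:
    """Remove a product from the shopping cart."""
    cart = tool_context.state.get("cart", [])

    # Find the matching line item
    item = next((i for i in cart if i["product"]["id"] == product_id), None)
    if not item:
        return {"status": "error", "message": "Item not in cart"}

    cart.remove(item)

    # Recompute the running total after removal
    total = sum(i["product"]["price"] * i["quantity"] for i in cart)
    tool_context.state["cart"] = cart
    tool_context.state["total"] = total

    return {
        "status": "success",
        "message": f"Removed {item['product']['name']} from cart",
        "total": total
    }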

State Management and Context Injection

One challenge I faced was ensuring the agent always has current shopping context. I implemented a before_model_modifier callback that injects cart state into every LLM request:

import json

from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest


def before_model_modifier(callback_context: CallbackContext, llm_request: LlmRequest):
    cart = callback_context.state.get("cart", [])
    total = callback_context.state.get("total", 0.0)

    cart_summary = [
        {"name": item["product"]["name"], 
         "price": item["product"]["price"], 
         "quantity": item["quantity"]}
        for item in cart
    ]

    prefix = f"""CURRENT SHOPPING SESSION STATE:
- Items in cart: {len(cart)}
- Cart total: ${total:.2f}
- Cart contents: {json.dumps(cart_summary)}
"""

    # Prepend the session snapshot to the existing system instruction
    original_instruction = llm_request.config.system_instruction
    original_instruction.parts[0].text = prefix + original_instruction.parts[0].text

    # Returning None tells ADK to proceed with the modified request
    return None

This pattern ensures the agent always knows what's in the user's cart, enabling contextual recommendations and preventing errors like adding out-of-stock items.
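
The on_before_agent callback from the agent definition handles the complementary setup step: seeding session state before the first turn. A minimal sketch, assuming a hypothetical module-level PRODUCT_CATALOG list of product dicts:

def on_before_agent(callback_context: CallbackContext):
    # Seed the catalog and an empty cart into session state on first contact
    if callback_context.state.get("products") is None:
        callback_context.state["products"] = PRODUCT_CATALOG  # hypothetical module-level list
    if callback_context.state.get("cart") is None:
        callback_context.state["cart"] = []
        callback_context.state["total"] = 0.0
    # Returning None lets the agent proceed as usual
    return None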

Implementation Insights and Lessons Learned

Product Search with Natural Language

I implemented a flexible search_products tool that accepts multiple filter types:

def search_products(
    tool_context: ToolContext,
    query: str = "",
    category: str = "",
    max_price: float = 0
) -> Dict[str, Any]:
    """Search for products in the catalog."""
    products = tool_context.state.get("products", [])
    results = []

    for product in products:
        if query and query.lower() not in product["name"].lower():
            continue
        if category and product["category"].lower() != category.lower():
            continue
        if max_price > 0 and product["price"] > max_price:
            continue
        results.append(product)

    return {"status": "success", "results": results, "count": len(results)}

The beauty of this approach is that the LLM handles the intent understanding. Users can ask "Show me affordable electronics" and the agent translates that to appropriate search parameters.
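
To make that concrete, a request like "show me affordable electronics" arrives at the tool as a structured call with parameters the model filled in (the values below are illustrative):

# Roughly what the model emits for "affordable electronics under $50":
result = search_products(
    tool_context,
    query="",                # no keyword constraint
    category="Electronics",  # inferred from "electronics"
    max_price=50.0,          # inferred from "affordable" / "under $50"
)
# -> {"status": "success", "results": [...], "count": ...}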

Generative UI for Recommendations

I experimented with CopilotKit's generative UI feature to display product recommendations dynamically. When the agent makes a recommendation via its recommend_product tool, the frontend's show_product_recommendation action renders a custom UI card:

useCopilotAction({
  name: "show_product_recommendation",
  available: "disabled",
  parameters: [
    { name: "productId", type: "string", required: true },
    { name: "reason", type: "string", required: true },
  ],
  render: ({ args }) => {
    const product = state.products.find(p => p.id === args.productId);
    return <ProductRecommendation product={product} reason={args.reason} />;
  },
});

This creates a richer experience than text alone—users see visually appealing product cards with personalized explanations.
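
On the backend, the recommend_product tool supplies the data behind that card. Here's a sketch using a deliberately naive heuristic (in stock, not already in the cart, cheapest match) as a stand-in for real scoring:

def recommend_product(
    tool_context: ToolContext,
    category: str = ""
) -> Dict[str, Any]:
    """Recommend a product, optionally constrained to a category."""
    products = tool_context.state.get("products", [])
    cart_ids = {item["product"]["id"] for item in tool_context.state.get("cart", [])}

    # Keep products that are in stock, not in the cart, and match the category
    candidates = [
        p for p in products
        if p["stock"] > 0
        and p["id"] not in cart_ids
        and (not category or p["category"].lower() == category.lower())
    ]
    if not candidates:
        return {"status": "error", "message": "No suitable products to recommend"}

    pick = min(candidates, key=lambda p: p["price"])  # naive: cheapest eligible match
    return {
        "status": "success",
        "product_id": pick["id"],
        "reason": f"{pick['name']} is an affordable pick in {pick['category']}"
    }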

Managing Conversation Flow

Initially, my agent would call multiple tools in sequence, creating a chatty back-and-forth that felt unnatural. I implemented an after_model_modifier to end invocations after text responses:

def simple_after_model_modifier(callback_context: CallbackContext, llm_response: LlmResponse):
    # If the model replied with text (not a tool call), end the invocation so
    # the agent waits for the user's next message. Note: end_invocation lives
    # on a private ADK attribute, so this may change in future releases.
    if llm_response.content and llm_response.content.parts and llm_response.content.parts[0].text:
        callback_context._invocation_context.end_invocation = True
    return None

This simple change made conversations feel more natural—the agent takes action, reports the result, then waits for the user's next input.

Key Takeaways

After building this experimental e-commerce platform, here's what stood out:

  • Conversational interfaces lower friction: Users can express shopping intent naturally without learning your UI patterns
  • Tool-calling agents enable real actions: Unlike pure chatbots, ADK agents can modify state, search databases, and trigger UI updates
  • Shared state is powerful: Synchronizing agent state with frontend state creates seamless experiences
  • Context injection is critical: Agents need current session state to make intelligent decisions
  • Generative UI adds polish: Dynamic UI components based on agent actions feel magical to users
  • Start simple, iterate: I began with basic cart operations, then added recommendations and search

Conclusion

Building this AI-powered e-shop taught me that agentic architectures aren't just theoretical—they're practical for real-world applications today. The combination of Next.js, Google ADK, and CopilotKit provides a solid foundation for conversational commerce experiences.

This was an experimental PoC, not a production system, but it demonstrated clear advantages: faster product discovery, reduced cognitive load, and more engaging interactions. From my perspective, we're just scratching the surface of what's possible.

If you're interested in building similar systems, I recommend starting small. Implement one or two agent tools, get the state synchronization working, then expand from there. The technologies are mature enough for serious experimentation.

What shopping experiences could you reimagine with conversational AI? The tools are ready—the question is where you'll take them.

