Building a News and Stock Assistant with LangGraph and Chainlit

In this blog post, I'll walk you through creating a helpful news and stock assistant powered by LangGraph, Chainlit, and some clever tools. This app acts like an upbeat assistant that can fetch headlines, search recent news, perform calculations, and even pull stock data via Yahoo Finance integration.

This post is a companion to my YouTube video where I demo and code the app. Check it out here: https://youtu.be/IXCHj5TGv8U. The video shows the app running live, handling queries like stock prices and news summaries. Now, let's dive into the code step by step. I'll break it down with code blocks so you can follow along, copy-paste, and build it yourself.

Setting Up the Environment and Imports

First things first: we need to import all the necessary libraries and set up our environment. This includes tools for news fetching (via Google News and yfinance), LangChain components for the AI agent, and Chainlit for the chat UI. Don't forget to load your environment variables for API keys.

from typing import Literal, List, Dict
import chainlit as cl
from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain_core.tools import tool
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from dotenv import load_dotenv
import os
from pygooglenews import GoogleNews
import yfinance
import pandas as pd
import json

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools

Load your .env file with keys such as your OpenRouter API key so the model calls can authenticate.
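
For reference, here's a minimal .env sketch with the two variables this code reads — the key value is a placeholder, and OPENROUTER_BASE_URL is optional since the code falls back to the default shown:

# .env — placeholder values; substitute your own key
OPENROUTER_API_KEY=sk-or-...your-key-here...
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1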

Configuration Basics

Next, we configure the model and the Yahoo Finance MCP server. I'm using Llama 3.2 here, but you can swap in other models like GPT variants. The MCP server runs locally as a subprocess and provides the stock data tools.

load_dotenv()
OLLAMA_BASE_URL = 'http://localhost:11434'
openrouter_api_key = os.getenv('OPENROUTER_API_KEY')
MODEL_NAME = 'llama3.2'  # Or try 'openai/gpt-oss-120b'

# Configure the Yahoo Finance MCP server
server_params = StdioServerParameters(
    command='uvx',
    args=['yfmcp@latest'],
    env=None,
)

This block sets up the base URL for local models and prepares the MCP server definition. If you're following along, make sure you have uvx installed (it ships with the uv toolchain) so it can launch the yfmcp server as a subprocess.
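
If you want to confirm the server launches before wiring it into the agent, here's a quick smoke test — a minimal sketch assuming the same server_params defined above — that just prints the names of the tools the server exposes:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_yf_tools():
    # Spawn the yfmcp server as a subprocess and list its tools
    params = StdioServerParameters(command='uvx', args=['yfmcp@latest'], env=None)
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for t in result.tools:
                print(t.name)

asyncio.run(list_yf_tools())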

Defining Local Tools

The magic happens with tools. These are functions the AI can call to fetch real data. We have ones for topic headlines, recent news searches, and a simple calculator. Each is decorated with @tool from LangChain.

@tool
def get_topic_headlines(topic: str) -> str:
    """Get news headlines for a specific topic.

    Accepted topics: WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SCIENCE, SPORTS, HEALTH
    """
    valid_topics = ['WORLD', 'NATION', 'BUSINESS', 'TECHNOLOGY', 'ENTERTAINMENT', 'SCIENCE', 'SPORTS', 'HEALTH']
    topic_upper = topic.upper()

    if topic_upper not in valid_topics:
        return f'Invalid topic. Please use one of: {", ".join(valid_topics)}'

    gn = GoogleNews(lang='en', country='US')
    headlines = gn.topic_headlines(topic_upper)
    articles = [f'{entry.title}\n  {entry.link}' for entry in headlines['entries'][:10]]
    return f'Headlines for {topic}:\n\n' + '\n\n'.join(articles)

@tool
def search_recent_news(query: str, hours: int = 1) -> str:
    """Search for recent news articles within the last specified hours.

    Args:
        query: Search terms
        hours: Number of hours to look back (1-24)
    """
    if hours < 1 or hours > 24:
        return 'Hours must be between 1 and 24'

    gn = GoogleNews(lang='en', country='US')
    search_results = gn.search(query, when=f'{hours}h')

    if not search_results['entries']:
        return f'No recent news found for query: {query} in the last {hours} hours'

    articles = []
    for entry in search_results['entries'][:10]:
        published = getattr(entry, 'published', 'Unknown date')
        articles.append(f'{entry.title}\n  Published: {published}\n  {entry.link}')

    return f"Recent news for '{query}' (last {hours} hours):\n\n" + '\n\n'.join(articles)

@tool
def calculator(expression: str) -> str:
    """Calculate a mathematical expression using a safe eval. (e.g., 10 * 5)"""
    try:
        allowed_chars = set('0123456789+-*/.() ')
        if not expression or any(c not in allowed_chars for c in expression):
            return 'Invalid expression'
        # Strip builtins during eval as an extra guard against code injection
        result = eval(expression, {'__builtins__': {}}, {})
        return f'{expression} = {result}'
    except Exception as e:
        return f'Calculation error: {e}'

These tools run locally and fetch news via pygooglenews, a wrapper around the Google News RSS feeds. The calculator is restricted to a small character whitelist to prevent code injection. Test them out in your script to see headlines for "TECHNOLOGY" or calculate "10*5".
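
LangChain tools can be invoked directly with a dict of arguments, which makes quick testing easy without spinning up the whole agent (the search query here is just an example):

# Exercise the tools directly, outside the agent loop
print(get_topic_headlines.invoke({'topic': 'TECHNOLOGY'}))
print(search_recent_news.invoke({'query': 'artificial intelligence', 'hours': 6}))
print(calculator.invoke({'expression': '10 * 5'}))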

LLM and Prompt Setup

Now, we define the LLM and the system prompt. This gives the assistant its personality—upbeat and engaging, like "AI Jeff."

local_tools = [get_topic_headlines, search_recent_news, calculator]

SYSTEM_PROMPT = """You are a helpful news and stock assistant with the enthusiastic and warm personality of AI Jeff.
Respond to user queries about news and current events in AI Jeff's characteristic style - upbeat, friendly,
and engaging. Use tools when needed to fetch current news and stock market data, then present the results in your distinctive voice."""

def create_llm(model_name: str, temperature: float = 0.1) -> ChatOpenAI:
    # Use the arguments (not the module-level constants) so callers
    # can actually swap models and temperatures
    return ChatOpenAI(
        model=model_name,
        base_url=os.getenv('OPENROUTER_BASE_URL', 'https://openrouter.ai/api/v1'),
        api_key=openrouter_api_key,
        temperature=temperature,
    )

The prompt ensures responses are fun and tool-aware. We're using OpenRouter for the LLM, but you could switch to Ollama for local runs.
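
If you'd rather go fully local, a drop-in alternative using ChatOllama might look like this — a sketch that assumes an Ollama server running at OLLAMA_BASE_URL with the llama3.2 model already pulled (create_local_llm is a name I'm introducing for illustration):

# Hypothetical local alternative: same model served by Ollama
def create_local_llm(model_name: str = MODEL_NAME, temperature: float = 0.1) -> ChatOllama:
    return ChatOllama(
        model=model_name,
        base_url=OLLAMA_BASE_URL,  # http://localhost:11434 from the config above
        temperature=temperature,
    )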

Building the LangGraph Agent

LangGraph handles the agent's state and flow. It decides when to call tools or end the conversation.

def should_continue(state: MessagesState) -> Literal['tools', '__end__']:
    last_message = state['messages'][-1]
    tool_call_count = sum(bool(hasattr(msg, 'tool_calls') and msg.tool_calls) for msg in state['messages'])
    return 'tools' if last_message.tool_calls and tool_call_count < 5 else '__end__'

def build_state_graph_with(llm, tools) -> StateGraph:
    builder = StateGraph(MessagesState)

    def call_llm_node(state: MessagesState) -> Dict[str, List[BaseMessage]]:
        messages = [SystemMessage(content=SYSTEM_PROMPT)] + state['messages']
        response = llm.bind_tools(tools).invoke(messages)
        return {'messages': [response]}

    tool_node = ToolNode(tools=tools)

    builder.add_node('agent', call_llm_node)
    builder.add_node('tools', tool_node)
    builder.add_edge(START, 'agent')
    builder.add_conditional_edges('agent', should_continue)
    builder.add_edge('tools', 'agent')

    return builder.compile(checkpointer=None)

This graph limits tool calls to avoid loops and integrates the LLM with tools seamlessly.
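
Before adding the Chainlit layer, you can sanity-check the compiled graph from a plain script using only the local tools (the MCP tools get added in the message handler below):

# One-shot test of the agent graph with local tools only
llm = create_llm(MODEL_NAME)
graph = build_state_graph_with(llm, local_tools)
result = graph.invoke({'messages': [HumanMessage(content='Calculate 10 * 5')]})
print(result['messages'][-1].content)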

Chainlit UI and Message Handling

Finally, the Chainlit part sets up the chat interface with starters and handles messages asynchronously, integrating MCP tools for stocks.

@cl.set_starters
async def set_starters():
    return [
        cl.Starter(
            label='Get Tesla stock price?',
            message='What is the current stock price of TSLA?',
            icon='https://cdn.simpleicons.org/tesla',
        ),
        cl.Starter(
            label='Summarize and analyze recent Apple news',
            message='Can you provide a summary and analysis of the latest news articles about Apple?',
            icon='https://cdn.simpleicons.org/apple',
        ),
        cl.Starter(
            label='Summarize Tech Headlines',
            message='Can you provide a summary of the latest headlines in technology?',
            icon='https://api.iconify.design/eos-icons/ai.svg',
            command='code',
        ),
        cl.Starter(
            label='Summarize Recent News?',
            message='Can you provide a summary of the latest news articles?',
            icon='https://attic.sh/si2powwhauur4mlts7mqn2e3syz3',
        ),
        cl.Starter(
            label='Available tools?',
            message='Can you provide a list of available tools and their descriptions?',
            icon='https://attic.sh/dhbw2bdxwayue0zgof33fxk8jkn1',
        ),
        cl.Starter(
            label='Calculate 10 shares of AAPL',
            message='Get the current price of AAPL and then calculate how much 10 shares of AAPL would cost at the current price.',
            icon='https://cdn.simpleicons.org/apple',
        ),
        cl.Starter(
            label='What is 10 * 5?',
            message='Calculate 10 * 5',
            icon='https://attic.sh/8v2lqhwii3ape4z4l23vg8s3rez9',
        ),
    ]

@cl.on_message
async def on_message(msg: cl.Message):
    thread_id = cl.context.session.id
    history = [HumanMessage(content=msg.content)]
    reply = cl.Message(content='')

    try:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                mcp_tools = await load_mcp_tools(session)
                tools = local_tools + mcp_tools
                llm = create_llm(MODEL_NAME)
                agent_graph = build_state_graph_with(llm, tools)

                async for chunk in agent_graph.astream(
                    {'messages': history},
                    config={'configurable': {'thread_id': thread_id}, 'recursion_limit': 50},
                    stream_mode='updates',
                ):
                    # Simplified stand-in for the full streaming logic (the
                    # complete version also surfaces tool calls and results in
                    # the UI): stream new assistant text as each node finishes.
                    for node_output in chunk.values():
                        for message in (node_output or {}).get('messages', []):
                            is_ai_text = getattr(message, 'type', '') == 'ai'
                            if is_ai_text and message.content and not getattr(message, 'tool_calls', None):
                                await reply.stream_token(str(message.content))
    except Exception as e:
        await reply.stream_token(f'Error: {str(e)}')

    await reply.send()

This is where the app comes alive. Run it with chainlit run your_script.py and chat away! Starters provide quick queries like summarizing tech headlines.
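
If you're setting up from scratch, the install and launch steps look roughly like this — the package names are my best reading of the imports above, so adjust to your environment (-w enables auto-reload during development):

pip install chainlit langgraph langchain-openai langchain-ollama langchain-mcp-adapters pygooglenews yfinance python-dotenv mcp pandas
chainlit run your_script.py -w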

Wrapping It Up

There you have it—a complete news and stock assistant built with modern AI tools. It's perfect for staying updated on markets and events without leaving your chat window. If you build this and tweak it, let me know in the YouTube comments! Again, don't forget to watch the demo video: https://youtu.be/IXCHj5TGv8U. Happy coding! 🚀
