DEV Community

eugen hoble
Building a Simple AI Agent with Python, LangChain, OpenAI, VS Code, and WSL

AI agents can sound complicated.

But at the basic level, an agent is just a software system that can:

  1. Receive a user request
  2. Decide whether it needs a tool
  3. Call that tool
  4. Use the result to produce a better answer

That is the core idea behind this project.
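
Stripped of any framework, that four-step loop can be sketched in a few lines of plain Python. The keyword-and-regex routing below is only a stand-in for the LLM's reasoning, not how LangChain actually decides:

```python
import re


def monthly_cost(hourly_cost: float) -> float:
    # The "tool": a plain function the agent can call
    return hourly_cost * 24 * 30


def agent(request: str) -> str:
    # 1. Receive a user request
    # 2. Decide whether a tool is needed (naive keyword check here,
    #    standing in for the LLM's reasoning)
    match = re.search(r"(\d+\.?\d*)", request)
    if "cost" in request.lower() and match:
        # 3. Call the tool
        result = monthly_cost(float(match.group(1)))
        # 4. Use the result to produce a better answer
        return f"Estimated monthly cost: ${result:.2f}"
    return "No tool needed; answering directly."
```

In the real project the LLM performs step 2, but the shape of the loop is the same.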

I built a simple local AI agent using:

  • Python
  • LangChain
  • OpenAI
  • VS Code
  • WSL

The goal was not to build a production-grade agent immediately.

The goal was to understand the foundation: how an LLM can use tools inside a real Python application.

What this project does

This project is a command-line AI agent.

You run it locally, ask a question, and the agent decides whether it should call one of the available tools.

Example:

You: If my server costs 0.42 dollars per hour and runs all month, what is the monthly cost?

Agent: Monthly cost = $0.42/hr × 24 hrs/day × 30 days = $302.40.

The agent can also answer questions like:

What is the risk of deploying a database schema change on Friday afternoon?

For that, it can call a custom deployment-risk tool.

The point is simple:

A chatbot only responds.

An agent can use tools.

Project structure

The project uses a standard Python src layout:

simple-ai-agent/
├── .vscode/
│   ├── extensions.json
│   ├── launch.json
│   └── settings.json
├── docs/
│   ├── ARCHITECTURE.md
│   └── ai_agent_system_architecture_overview.png
├── scripts/
│   ├── run.sh
│   └── setup_wsl.sh
├── src/
│   └── agent_app/
│       ├── __init__.py
│       ├── agent.py
│       ├── config.py
│       ├── main.py
│       └── tools.py
├── tests/
│   └── test_tools.py
├── .env.example
├── .gitignore
├── pyproject.toml
├── requirements.txt
└── README.md

Why I used WSL and VS Code

I work on Windows, but I prefer using WSL for Python projects.

That gives me a Linux-like development environment while still using VS Code as the editor.

The project files are stored inside WSL:
~/workspace/langchain-vscode-agent-wsl

Then, from inside that directory, I open the project with:
code .

This makes VS Code connect to the WSL environment directly.

That is important because Python, the virtual environment, and the dependencies all live inside WSL.

Setting up the project

First, create and activate a virtual environment:

python3 -m venv .venv
source .venv/bin/activate

Then install the dependencies:

pip install --upgrade pip
pip install -r requirements.txt

Because the project uses the src layout, install it in editable mode:

pip install -e .

This tells Python where the package lives.

Without this step, running:

python -m agent_app.main

may fail with: ModuleNotFoundError: No module named 'agent_app'

Editable install fixes that while still letting me edit the source files normally.
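
For reference, a src layout typically needs a pyproject.toml along these lines so the editable install knows where to find the package (the repo's actual file may differ; the name and version here are placeholders):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "agent-app"
version = "0.1.0"

[tool.setuptools.packages.find]
where = ["src"]
```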

Environment variables

The project uses a local .env file.

Example:

OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=gpt-5.4-nano
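
The get_settings() helper used later can be as small as this. This is a sketch, not the repo's actual config.py, and it assumes something like python-dotenv has already loaded the .env file into the environment (the gpt-4o-mini fallback is an illustrative placeholder):

```python
import os
from dataclasses import dataclass
from functools import lru_cache


@dataclass(frozen=True)
class Settings:
    openai_api_key: str
    openai_model: str


@lru_cache
def get_settings() -> Settings:
    # Assumes load_dotenv() has already populated os.environ from .env
    return Settings(
        openai_api_key=os.environ["OPENAI_API_KEY"],
        openai_model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
    )
```

Failing fast with a KeyError when OPENAI_API_KEY is missing is deliberate: a misconfigured agent should refuse to start rather than fail mid-conversation.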

The agent tools

The project includes simple tools in tools.py.

One tool calculates monthly cloud cost:

from langchain.tools import tool


@tool
def calculate_monthly_cloud_cost(
    hourly_cost: float,
    hours_per_day: int = 24,
    days: int = 30
) -> float:
    """
    Calculate estimated monthly cloud cost.

    Args:
        hourly_cost: Cost per hour in dollars.
        hours_per_day: Number of hours used per day.
        days: Number of days in the month.

    Returns:
        Estimated monthly cost.
    """
    return hourly_cost * hours_per_day * days

Another tool gives a simple deployment-risk assessment:

from langchain.tools import tool


@tool
def check_deployment_risk(change_type: str) -> str:
    """
    Give a simple deployment risk assessment based on the type of change.
    """
    change_type = change_type.lower()

    if "database" in change_type or "schema" in change_type:
        return (
            "High risk. Use migration scripts, backups, rollback plan, "
            "and test in staging first."
        )

    if "ui" in change_type or "frontend" in change_type:
        return (
            "Medium risk. Use feature flags, browser testing, "
            "and monitor user errors."
        )

    if "config" in change_type:
        return (
            "Medium to high risk. Validate config, use version control, "
            "and prepare rollback."
        )

    return (
        "Low to medium risk. Use automated tests, peer review, "
        "and deployment monitoring."
    )

These are intentionally simple.

The purpose is to show the agent pattern, not to build a full cloud-cost platform or deployment-governance system.

Creating the agent

The agent is created with LangChain and an OpenAI chat model.

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

from agent_app.config import get_settings
from agent_app.tools import calculate_monthly_cloud_cost, check_deployment_risk


def build_agent():
    settings = get_settings()

    model = ChatOpenAI(
        model=settings.openai_model,
        temperature=0
    )

    return create_agent(
        model=model,
        tools=[
            calculate_monthly_cloud_cost,
            check_deployment_risk
        ],
        system_prompt="""
        You are a practical software engineering assistant.
        Use tools only when they are useful.
        Explain answers clearly and concisely.
        Do not invent tool results.
        """
    )

The important part is this:

tools=[
    calculate_monthly_cloud_cost,
    check_deployment_risk
]

That is what gives the model controlled access to actions.

The LLM does not directly execute arbitrary code.

It can only use the tools we expose.

That boundary matters.

Running the app

The app can be run with:

python -m agent_app.main

Example session:

Simple LangChain Agent
Type 'exit' or 'quit' to stop.

You: If my server costs 0.42 dollars per hour and runs all month, what is the monthly cost?

Agent: Monthly cost = $0.42/hr × 24 hrs/day × 30 days = $302.40.
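
The REPL in main.py boils down to a loop like the following. This is a sketch, not the repo's actual file; the agent call is injected as a plain callable (ask) so the loop itself stays easy to test without an API key:

```python
from typing import Callable


def run_repl(
    ask: Callable[[str], str],
    input_fn: Callable[[str], str] = input,
    print_fn: Callable[[str], None] = print,
) -> None:
    """Minimal chat loop: read a question, print the agent's answer."""
    print_fn("Simple LangChain Agent")
    print_fn("Type 'exit' or 'quit' to stop.")
    while True:
        try:
            user = input_fn("You: ").strip()
        except EOFError:
            break  # e.g. Ctrl+D in the terminal
        if user.lower() in {"exit", "quit"}:
            break
        if user:
            print_fn(f"Agent: {ask(user)}")
```

In the real main.py, ask would wrap a call into the agent returned by build_agent().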

Why is this an agent, not just a chatbot?

A normal chatbot receives text and returns text.

This project adds another layer:

User request
    ↓
Agent reasoning
    ↓
Tool selection
    ↓
Tool execution
    ↓
Final answer

That tool-use step is what makes the pattern interesting.

The model is not only generating language.

It is deciding whether to call a function.

In a real system, those tools could be:

  • database queries
  • internal APIs
  • document search
  • ticket creation
  • email drafts
  • deployment checks
  • monitoring lookups
  • report generation

The system architecture

A production-ready agent would need more than this demo.

A more complete architecture would usually include these layers:

1. User & Channel Layer
   Web app, Slack, Teams, CLI, REST API

2. API / Application Layer
   FastAPI, authentication, rate limits, session management

3. Agent Orchestration Layer
   LangChain, LangGraph, planner, tool router, state machine

4. LLM Layer
   OpenAI, Anthropic, Azure OpenAI, local models

5. Tool & Action Layer
   Python functions, external APIs, SQL databases, email, web search

6. Retrieval & Knowledge Layer
   Embeddings, vector database, document store, knowledge base

7. Memory & State Layer
   Short-term memory, long-term memory, Redis, Postgres, checkpointing

8. Infrastructure & Operations Layer
   Containers, queues, secrets, CI/CD, cloud hosting, monitoring

Two concerns apply across all layers:

  • Security & Guardrails
  • Observability & Evaluation

What I learned

The biggest lesson from this project is that a simple agent is not hard to build.

The harder part is designing the system around it.

The agent itself may be only a few files.

But once you think about production, you need to care about:

  • tool boundaries
  • authentication
  • cost control
  • logging
  • error handling
  • state
  • testing
  • security
  • observability
  • deployment

GitHub repository

You can find the project in the linked GitHub repository.
