Gaurav Kumar

Posted on • Originally published at gauravbytes.hashnode.dev

Build a Secure PostgreSQL AI Agent with LangChain + Ollama

🚀 Introduction

Imagine asking your database:

“Show me the top 10 customers by revenue.”

…and instantly getting results without writing a single SQL query.

That’s exactly what an AI-powered database agent can do.

In this tutorial, we’ll build a secure PostgreSQL AI Agent using:

  • 🧩 LangChain for agent orchestration
  • 🦙 Ollama for running local LLMs
  • 🐘 PostgreSQL as the database
  • 🛡️ A custom SQL safety layer to block destructive queries

By the end, you’ll have a local AI assistant capable of converting natural language into SQL queries safely and efficiently.

💻 Source Code:
postgres-agent GitHub Repository

🤖 What is a PostgreSQL AI Agent?

A PostgreSQL AI Agent is essentially an LLM-powered assistant that can:

  • Understand natural language
  • Generate SQL queries
  • Execute them against PostgreSQL
  • Return readable results

Think of it like ChatGPT connected to your database, but with guardrails and controlled execution.

⚙️ Tech Stack

  • LangChain: agent orchestration and tool calling
  • Ollama: run LLMs locally without API costs
  • langchain-ollama: LangChain integration for Ollama
  • psycopg2: PostgreSQL adapter for Python
  • Python: core application runtime

🧱 System Architecture

🔌 Step 1: Connect to PostgreSQL

We’ll start by creating a PostgreSQL connection using psycopg2.

import psycopg2

DB_CONFIG = {
    "host": "localhost",
    "port": 5432,
    "database": "postgres",
    "user": "postgres",
    "password": "root",
}

def get_connection():
    return psycopg2.connect(**DB_CONFIG)

🔐 Production Tip:
Never hardcode credentials in production. Use environment variables or a secret manager.
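
For example, the same DB_CONFIG can be built from environment variables. This is a minimal sketch: the PG* variable names follow the usual libpq conventions, and the fallback values here are illustrative placeholders, not recommendations.

```python
import os

# Read connection settings from the environment instead of hardcoding them.
# The PG* names mirror the standard libpq environment variables.
DB_CONFIG = {
    "host": os.environ.get("PGHOST", "localhost"),
    "port": int(os.environ.get("PGPORT", "5432")),
    "database": os.environ.get("PGDATABASE", "postgres"),
    "user": os.environ.get("PGUSER", "postgres"),
    "password": os.environ.get("PGPASSWORD", ""),
}
```

For anything beyond local development, a dedicated secret manager is still the safer choice.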

🛠️ Step 2: Create LangChain Tools

LangChain agents interact with systems using tools.

These tools expose safe and structured database operations to the LLM.

📋 Tool: List Database Tables

from langchain_core.tools import tool

@tool
def list_tables() -> str:
    """List all tables in the database."""
    conn = get_connection()
    try:
        cur = conn.cursor()
        cur.execute("""
            SELECT table_name FROM information_schema.tables
            WHERE table_schema = 'public'
        """)
        tables = [row[0] for row in cur.fetchall()]
        return f"Tables: {', '.join(tables)}" if tables else "No tables found."
    finally:
        conn.close()

This gives the agent dynamic schema awareness instead of relying on hardcoded table names.

📑 Tool: Fetch Table Schema

@tool
def get_table_schema(table_name: str) -> str:
    """Get the schema (columns and types) of a specific table."""
    conn = get_connection()
    try:
        cur = conn.cursor()
        cur.execute("""
            SELECT column_name, data_type, is_nullable
            FROM information_schema.columns
            WHERE table_schema = 'public' AND table_name = %s
            ORDER BY ordinal_position
        """, (table_name,))
        columns = cur.fetchall()

        if not columns:
            return f"Table '{table_name}' not found."

        schema = "\n".join([
            f"  {col[0]} ({col[1]}, nullable={col[2]})"
            for col in columns
        ])

        return f"Schema for '{table_name}':\n{schema}"
    finally:
        conn.close()

This helps the LLM understand:

  • Column names
  • Data types
  • Nullability
  • Table structure

⚡ Tool: Execute SQL Queries

@tool
def execute_sql(query: str) -> str:
    """Execute a SQL query against the PostgreSQL database and return results. Use this for SELECT queries."""

    is_safe, reason = validate_read_only_sql(query)

    if not is_safe:
        return f"Safety Guard: Blocked query. {reason}"

    conn = get_connection()

    try:
        cur = conn.cursor()
        cur.execute(query)

        if cur.description:
            columns = [desc[0] for desc in cur.description]
            rows = cur.fetchall()

            if not rows:
                return "Query returned no results."

            result = " | ".join(columns) + "\n"
            result += "\n".join([
                " | ".join(str(v) for v in row)
                for row in rows[:50]
            ])

            if len(rows) > 50:
                result += f"\n... ({len(rows)} total rows)"

            return result
        else:
            conn.commit()
            return f"Query executed successfully. Rows affected: {cur.rowcount}"

    except Exception as e:
        conn.rollback()
        return f"SQL Error: {e}"

    finally:
        conn.close()

This becomes the core execution engine of the AI agent.

🛡️ Step 3: Add a SQL Safety Guard

Allowing an LLM to execute unrestricted SQL is dangerous.

That’s why every query should pass through a validation layer before execution.

✅ Allowed vs Blocked Queries

Allowed: SELECT, WITH, SHOW, EXPLAIN
Blocked: INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE

🧼 Query Normalization

Before validation, we sanitize queries by removing:

  • Comments
  • Hidden injections
  • String-based bypass attempts

This significantly improves safety when working with LLM-generated SQL.
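
The execute_sql tool above calls validate_read_only_sql, which lives in the repository. A minimal sketch of what such a guard can look like (the names and exact rules here are assumptions; the repo's version may differ, and defense in depth with a read-only database role is still advisable):

```python
import re

ALLOWED_STARTS = ("select", "with", "show", "explain")
BLOCKED_KEYWORDS = ("insert", "update", "delete", "drop", "alter",
                    "truncate", "create", "grant", "revoke")

def validate_read_only_sql(query: str) -> tuple[bool, str]:
    """Allow only read-only statements; return (is_safe, reason)."""
    # Strip comments so destructive keywords can't hide behind them.
    normalized = re.sub(r"--[^\n]*", " ", query)
    normalized = re.sub(r"/\*.*?\*/", " ", normalized, flags=re.S)
    normalized = normalized.strip().lower()

    if not normalized:
        return False, "Empty query."
    if not normalized.startswith(ALLOWED_STARTS):
        return False, "Only SELECT/WITH/SHOW/EXPLAIN statements are allowed."
    if ";" in normalized.rstrip(";"):
        return False, "Multiple statements are not allowed."
    for keyword in BLOCKED_KEYWORDS:
        if re.search(rf"\b{keyword}\b", normalized):
            return False, f"Destructive keyword detected: {keyword.upper()}"
    return True, "OK"
```

Note that keyword matching is deliberately blunt: it will also reject a harmless SELECT whose string literals happen to contain a blocked word, trading a few false positives for safety.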

🧠 Step 4: Set Up Ollama for Local LLMs

Ollama makes it incredibly easy to run large language models locally.

No OpenAI API.
No usage limits.
No cloud dependency.

🔽 Pull the Model

ollama pull qwen2.5:7b

Verify installation:

ollama list

🧩 Recommended Models for SQL Agents

  • Qwen 2.5 7B (ollama pull qwen2.5:7b): used in this tutorial
  • Llama 3.1 8B (ollama pull llama3.1): strong general-purpose model
  • DeepSeek-R1 7B (ollama pull deepseek-r1): excellent reasoning
  • Mistral 7B (ollama pull mistral): lightweight and fast

📦 Install LangChain Ollama Integration

pip install langchain-ollama

⚙️ Configure ChatOllama

from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="qwen2.5:7b",
    temperature=0
)

Setting temperature=0 helps generate more deterministic and reliable SQL queries.

Why Use Ollama?

  • ✅ Completely local execution
  • ✅ No API cost
  • ✅ Privacy-friendly
  • ✅ GPU acceleration support
  • ✅ Supports many open-source models

🔗 Step 5: Create the LangChain Agent

from langchain.agents import create_agent  # available in LangChain v1+

tools = [list_tables, get_table_schema, execute_sql]

agent = create_agent(llm, tools)

LangChain enables the AI agent to:

  • Select tools dynamically
  • Chain multiple operations
  • Reason step-by-step
  • Generate context-aware SQL

💬 Step 6: Create an Interactive Chat Loop

while True:
    user_input = input("\nYou: ").strip()

    if user_input.lower() in ("exit", "quit"):
        print("Goodbye!")
        break

    if not user_input:
        continue

    # Hand the message to the agent, then print the full turn
    # (tool calls, tool outputs, final answer) via Step 7's helper.
    result = agent.invoke(
        {"messages": [{"role": "user", "content": user_input}]}
    )
    print_turn_details(result["messages"])

This turns the application into a conversational SQL assistant.

🧾 Step 7: Add Debugging & Observability

Debugging AI agents becomes much easier when you can inspect tool calls and outputs.

from langchain_core.messages import AIMessage, BaseMessage, ToolMessage

def print_turn_details(messages: list[BaseMessage]) -> None:
    final_response = ""

    for message in messages:
        if isinstance(message, AIMessage):

            for tool_call in message.tool_calls:
                tool_name = tool_call.get("name", "unknown_tool")
                tool_args = format_tool_payload(
                    tool_call.get("args", {})
                )

                print(f"\nTool call: {tool_name}({tool_args})")

            content = format_content(message.content).strip()

            if content:
                final_response = content

        elif isinstance(message, ToolMessage):
            tool_name = getattr(message, "name", None) or "tool"

            tool_output = (
                format_content(message.content).strip()
                or "(no output)"
            )

            print(f"\nTool response [{tool_name}]: {tool_output}")

    if final_response:
        print(f"\nAgent: {final_response}")
    else:
        print("\nAgent: I couldn't generate a response.")

This helps you inspect:

  • Tool invocations
  • Tool outputs
  • Agent reasoning flow
  • Final responses
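
Note that print_turn_details relies on two small helpers, format_tool_payload and format_content, which aren't shown above. Here is a plausible minimal version (an assumption, not the repository's exact code; LangChain message content can be either a plain string or a list of content blocks):

```python
def format_tool_payload(args: dict) -> str:
    """Render tool-call arguments as a compact key=value list."""
    return ", ".join(f"{key}={value!r}" for key, value in args.items())

def format_content(content) -> str:
    """Flatten message content, which may be a string or a list of blocks."""
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for block in content:
            if isinstance(block, str):
                parts.append(block)
            elif isinstance(block, dict) and block.get("type") == "text":
                parts.append(block.get("text", ""))
        return "".join(parts)
    return str(content)
```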

🧪 Example Queries

Try prompts like:

  • "List all tables"
  • "Show schema of users table"
  • "Get top 5 users by revenue"
  • "How many orders were placed last month?"

🌐 Real-World Use Cases

This architecture can be extended into real production systems.

📊 AI Analytics Dashboards

Allow non-technical users to query business data using plain English.

💬 Internal Data Chatbots

Integrate with Slack or Teams for self-serve analytics.

🧾 Automated Reporting

Generate recurring reports automatically using natural language prompts.

๐Ÿข SaaS Admin Tools

Provide operations teams with an AI-powered database interface.

🤖 AI Copilots for Analysts

Speed up SQL generation and analytics workflows.

🎯 Final Thoughts

You’ve now built a secure and extensible PostgreSQL AI Agent powered by LangChain and Ollama.

The biggest takeaway here is that tool-based AI architecture gives LLMs structured access to databases without exposing unrestricted control.

Key Learnings

  • LangChain simplifies AI agent orchestration
  • Ollama enables local LLM execution
  • SQL validation is essential for security
  • Tool-driven agents are highly extensible

With a few additional improvements like authentication, query caching, and semantic memory, this can evolve into a powerful production-grade AI data assistant.

📦 Full Working Source Code:
postgres-agent GitHub Repository
