Customer support questions span structured data (orders, products), unstructured knowledge (docs/FAQs), and live systems (shipping updates). In this post we'll ship a compact AI agent that handles all three, using:
- Python + smolagents to orchestrate the agent's "brain"
- InterSystems IRIS for SQL, Vector Search (RAG), and Interoperability (a mock shipping status API)
TL;DR (snack-sized)
- Build a working AI Customer Support Agent with Python + smolagents orchestrating tools on InterSystems IRIS (SQL, Vector Search/RAG, Interoperability for a mock shipping API).
- It answers real questions (e.g., "Was order #1001 delivered?", "What's the return window?") by combining tables, documents, and interoperability calls.
- Youโll spin up IRIS in Docker, load schema and sample data, embed docs for RAG, register tools (SQL/RAG/API), and run the agent via CLI or Gradio UI.
What you'll build
An AI Customer Support Agent that can:
- Query structured data (customers, orders, products, shipments) via SQL
- Retrieve unstructured knowledge (FAQs & docs) via RAG on IRIS Vector Search
- Call a (mock) shipping API via IRIS Interoperability, with Visual Trace to inspect every call
Architecture (at a glance)
User → Agent (smolagents CodeAgent)
 ├─ SQL Tool → IRIS tables
 ├─ RAG Tool → IRIS Vector Search (embeddings + chunks)
 └─ Shipping Tool → IRIS Interoperability (mock shipping) → Visual Trace
New to smolagents? It's a tiny agent framework from Hugging Face where the model plans and uses your tools; other alternatives are LangGraph and LlamaIndex.
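If you have never used it, a toy example (separate from this demo's code, assuming the smolagents package is installed and an OpenAI key is in your environment; the model name is just an example) looks roughly like this:

from smolagents import CodeAgent, LiteLLMModel, tool

@tool
def return_window(category: str) -> str:
    """Return the return-window policy for a product category.

    Args:
        category: Product category, e.g. "electronics".
    """
    # Toy stand-in: a real tool would hit a database or an API.
    return "30 days" if category == "electronics" else "14 days"

# The LLM plans short code steps and decides when to call return_window().
model = LiteLLMModel(model_id="gpt-4o-mini")  # reads OPENAI_API_KEY from the environment
agent = CodeAgent(tools=[return_window], model=model)
print(agent.run("What is the return window for electronics?"))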
Prerequisites
- Python 3.9+
- Docker to run IRIS in a container
- VS Code handy to check out the code
- OpenAI API key for the LLM + embeddings, or run locally with Ollama if you prefer
1) Clone & set up Python
git clone https://github.com/intersystems-ib/customer-support-agent-demo
cd customer-support-agent-demo
python -m venv .venv
# macOS/Linux
source .venv/bin/activate
# Windows (PowerShell)
# .venv\Scripts\Activate.ps1
pip install -r requirements.txt
cp .env.example .env # add your OpenAI key
2) Start InterSystems IRIS (Docker)
docker compose build
docker compose up -d
Open the Management Portal (http://localhost:52773 in this demo).
3) Load the structured data (SQL)
From SQL Explorer (Portal) or your favorite SQL client:
LOAD SQL FROM FILE '/app/iris/sql/schema.sql' DIALECT 'IRIS' DELIMITER ';';
LOAD SQL FROM FILE '/app/iris/sql/load_data.sql' DIALECT 'IRIS' DELIMITER ';';
The schema you have just loaded covers customers, orders, products, and shipments.
Run some queries and get familiar with the data. The agent will use this data to resolve questions:
-- List customers
SELECT * FROM Agent_Data.Customers;
-- Orders for a given customer
SELECT o.OrderID, o.OrderDate, o.Status, p.Name AS Product
FROM Agent_Data.Orders o
JOIN Agent_Data.Products p ON o.ProductID = p.ProductID
WHERE o.CustomerID = 1;
-- Shipment info for an order
SELECT * FROM Agent_Data.Shipments WHERE OrderID = 1001;
If you see rows, your structured side is ready.
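You can run the same checks from Python too. A minimal sketch using the intersystems-irispython driver; the port, namespace, and credentials below are assumptions, so use whatever your docker-compose/.env defines:

import iris  # pip install intersystems-irispython

# Assumed connection details for the local Docker instance; adjust to your setup.
conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
cur = conn.cursor()
# Orders for a given customer, same query as above but parameterized
cur.execute(
    "SELECT o.OrderID, o.OrderDate, o.Status, p.Name AS Product "
    "FROM Agent_Data.Orders o "
    "JOIN Agent_Data.Products p ON o.ProductID = p.ProductID "
    "WHERE o.CustomerID = ?",
    [1],
)
for row in cur.fetchall():
    print(row)
conn.close()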
4) Add unstructured knowledge with Vector Search (RAG)
Create an embedding config (the example below uses an OpenAI embedding model; tweak to taste):
INSERT INTO %Embedding.Config
(Name, Configuration, EmbeddingClass, VectorLength, Description)
VALUES
('my-openai-config',
'{"apiKey":"YOUR_OPENAI_KEY","sslConfig":"llm_ssl","modelName":"text-embedding-3-small"}',
'%Embedding.OpenAI',
1536,
'a small embedding model provided by OpenAI');
Need the exact steps and options? Check the documentation.
Then embed the sample content:
python scripts/embed_sql.py
Check that the embeddings are now in the tables:
SELECT COUNT(*) AS ProductChunks FROM Agent_Data.Products;
SELECT COUNT(*) AS DocChunks FROM Agent_Data.DocChunks;
Bonus: Hybrid + vector search directly from SQL with EMBEDDING()
A major advantage of IRIS is that you can perform semantic (vector) search right inside SQL and mix it with classic filters, with no extra microservices needed. The EMBEDDING() SQL function generates a vector on the fly for your query text, which you can compare against stored vectors using operations like VECTOR_DOT_PRODUCT.
Example A - Hybrid product search (price filter + semantic ranking):
SELECT TOP 3
p.ProductID,
p.Name,
p.Category,
p.Price,
VECTOR_DOT_PRODUCT(p.Embedding, EMBEDDING('headphones with ANC', 'my-openai-config')) score
FROM Agent_Data.Products p
WHERE p.Price < 200
ORDER BY score DESC
Example B - Semantic doc-chunk lookup (great for feeding RAG answers):
SELECT TOP 3
c.ChunkID AS chunk_id,
c.DocID AS doc_id,
c.Title AS title,
SUBSTRING(c.ChunkText, 1, 400) AS snippet,
VECTOR_DOT_PRODUCT(c.Embedding, EMBEDDING('warranty coverage', 'my-openai-config')) AS score
FROM Agent_Data.DocChunks c
ORDER BY score DESC
Why this is powerful: you can pre-filter by price, category, language, tenant, dates, etc., and then rank by semantic similarity, all in one SQL statement.
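The agent's RAG tool can lean on exactly this. Here is a hedged Python sketch of a retrieval helper built on the doc-chunk query above; the retrieve_chunks name and connection details are illustrative, not the repo's actual rag_tool.py:

import iris

def retrieve_chunks(question: str, top_k: int = 3) -> list:
    """Return the doc chunks most semantically similar to the question."""
    conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")  # assumed defaults
    cur = conn.cursor()
    # The query text is bound as a normal SQL parameter inside EMBEDDING();
    # if your driver version complains, inline it as a literal like the examples above.
    cur.execute(
        f"SELECT TOP {int(top_k)} c.ChunkID, c.Title, "
        "SUBSTRING(c.ChunkText, 1, 400) AS snippet, "
        "VECTOR_DOT_PRODUCT(c.Embedding, EMBEDDING(?, 'my-openai-config')) AS score "
        "FROM Agent_Data.DocChunks c ORDER BY score DESC",
        [question],
    )
    rows = cur.fetchall()
    conn.close()
    return [{"chunk_id": r[0], "title": r[1], "snippet": r[2], "score": r[3]} for r in rows]

# The snippets become the context the LLM uses to draft its answer.
print(retrieve_chunks("warranty coverage"))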
5) Wire a live (mock) shipping API with Interoperability
The project exposes a tiny /api/shipping/status endpoint through IRIS Interoperability, perfect for simulating "real world" calls:
curl -H "Content-Type: application/json" \
-X POST \
-d '{"orderStatus":"Processing","trackingNumber":"DHL7788"}' \
http://localhost:52773/api/shipping/status
Now open Visual Trace in the Portal to watch the message flow hop-by-hop (it's like airport radar for your integration).
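The same call from Python, which is roughly what a shipping tool would do under the hood (endpoint and payload as in the curl example above):

import requests

resp = requests.post(
    "http://localhost:52773/api/shipping/status",
    json={"orderStatus": "Processing", "trackingNumber": "DHL7788"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the mock shipping status, also visible in Visual Trace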
6) Meet the agent (smolagents + tools)
Peek at these files:
- agent/customer_support_agent.py - boots a CodeAgent and registers tools
- agent/tools/sql_tool.py - parameterized SQL helpers
- agent/tools/rag_tool.py - vector search + doc retrieval
- agent/tools/shipping_tool.py - calls the Interoperability endpoint
The CodeAgent plans with short code steps and calls your tools. You bring the tools; it brings the brains, using an LLM.
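To make that concrete, here is a hedged sketch of registering a single tool with a CodeAgent; the order_status helper, model choice, and connection details are illustrative, not the repo's exact code:

import iris
from smolagents import CodeAgent, LiteLLMModel, tool

@tool
def order_status(order_id: int) -> str:
    """Look up the current status of an order in IRIS.

    Args:
        order_id: The numeric order id, e.g. 1001.
    """
    conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")  # assumed defaults
    cur = conn.cursor()
    cur.execute("SELECT Status FROM Agent_Data.Orders WHERE OrderID = ?", [order_id])
    row = cur.fetchone()
    conn.close()
    return row[0] if row else "order not found"

agent = CodeAgent(
    tools=[order_status],                      # plus the RAG and shipping tools
    model=LiteLLMModel(model_id="gpt-4o-mini"),
)
print(agent.run("Was order #1001 delivered?"))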
7) Run it!
One-shot (quick tests)
python -m cli.run --email alice@example.com --message "Where is my order #1001?"
python -m cli.run --email alice@example.com --message "Show electronics that are good for travel"
python -m cli.run --email alice@example.com --message "Was my headphones order delivered, and what's the return window?"
Interactive CLI
python -m cli.run --email alice@example.com
Web UI (Gradio)
python -m ui.gradio
# open http://localhost:7860
Under the hood
The agent's flow (simplified):
- Plan how to resolve the question and which of the available tools to use, e.g., "check order status → fetch returns policy".
- Call tools as needed:
  - SQL for customers/orders/products
  - RAG over embeddings for FAQs/docs (and remember, you can prototype RAG right inside SQL using EMBEDDING() + vector ops as shown above)
  - Interoperability API for shipping status
- Synthesize: stitch results into a friendly, precise answer.
Add or swap tools as your use case grows: promotions, warranties, inventory, you name it.
Wrap-up
You now have a compact AI Customer Support Agent that blends:
- LLM reasoning (smolagents CodeAgent)
- Structured data (IRIS SQL)
- Unstructured knowledge (IRIS Vector Search + RAG), with the bonus that EMBEDDING() lets you do hybrid + vector search directly from SQL
- Live system calls (IRIS Interoperability + Visual Trace)

