A lot of LangGraph demos prove that graphs can run.
Fewer prove that teams can operate them.
That difference matters.
Once a workflow starts classifying tickets, choosing queues, deciding whether to escalate, and generating internal summaries, the important question is no longer just "did the graph execute?" It becomes "why did it make that decision?"
That is the motivation behind langgraph-ticket-triage, a small Python starter that shows how to build a support triage workflow with LangGraph, FastAPI, and Tokvera trace visibility.
Why LangGraph workflows need observability
LangGraph is useful because it gives you a clean way to model multi-step workflows.
But in production-like systems, graph execution alone is not enough.
Teams still need to understand:
- how a ticket was classified
- why a queue was selected
- whether escalation logic was applied
- what summary was generated for the internal team
- whether the result came from mock mode or a live model call
Without that visibility, graph-based systems can become just as opaque as a large one-shot prompt.
What this starter repo does
The repo focuses on a practical support triage flow instead of a toy graph.
For each incoming ticket, it:
- starts a LangGraph workflow run
- classifies the ticket
- chooses a destination queue
- assigns SLA and suggested ownership
- generates an internal summary
- returns triage metadata, next actions, and Tokvera trace IDs
That makes it a strong reference for teams that want a Python-first agent workflow example with real operational shape.
The workflow structure is intentionally simple
The current graph uses two nodes:
- classify
- summarize
And the workflow path looks like this:
ticket input
-> classify node
-> summarize node
-> triage response + Tokvera trace IDs
That is a good starter shape because it keeps the graph readable while still separating two different responsibilities.
Classification handles routing decisions.
Summarization handles internal communication.
That separation makes the workflow easier to inspect and extend.
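The two-node shape can be sketched in plain Python. This is a simplified stand-in, not the repo's actual LangGraph wiring: the node functions and state keys here are illustrative, but the classify -> summarize path mirrors the graph above.

```python
# Simplified sketch of the two-node triage path. The real repo wires
# these as LangGraph nodes; here they are plain functions over a dict
# state so the shape is easy to see.

def classify(state: dict) -> dict:
    # Decide routing metadata from the raw ticket (placeholder logic).
    subject = state["ticket"]["subject"].lower()
    state["classification"] = "bug" if "bug" in subject else "general"
    return state

def summarize(state: dict) -> dict:
    # Turn the classification into a short internal handoff note.
    state["summary"] = f"{state['classification']} ticket: {state['ticket']['subject']}"
    return state

def run_workflow(ticket: dict) -> dict:
    # ticket input -> classify node -> summarize node
    state = {"ticket": ticket}
    for node in (classify, summarize):
        state = node(state)
    return state
```

The point of the shape is the same as in the graph: each node does one job, and the state that flows between them is inspectable at every step.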
Why this workflow is more realistic than a simple agent demo
A realistic support flow has to do more than produce text.
It has to turn an inbound ticket into operational decisions.
In this starter, that includes:
- classification such as bug, billing, feature, or general
- priority setting
- queue selection
- escalation recommendation
- suggested ownership
- SLA expectations
- next actions for the support team
- an internal summary
That is the kind of output support and platform teams can actually use.
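One way to pin down that output contract is a small dataclass. The field names below follow the example response shown later in this article; the repo's actual models may differ, so treat this as a sketch of the shape rather than the project's real types.

```python
from dataclasses import dataclass, field

@dataclass
class TriageResult:
    """Sketch of the triage output a support team can act on."""
    classification: str                 # e.g. "bug", "billing", "feature", "general"
    priority: str                       # e.g. "high"
    queue: str                          # destination queue, e.g. "engineering"
    should_escalate: bool               # escalation recommendation
    suggested_owner: str                # suggested ownership
    suggested_sla_hours: int            # SLA expectation
    next_actions: list[str] = field(default_factory=list)
    summary: str = ""                   # internal handoff summary
```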
The API surface
The project exposes a small set of routes for health checks, reusable sample payloads, and direct workflow execution:
- GET /health
- GET /api/demo-ticket
- GET /api/sample-tickets
- POST /api/triage
Example request:
curl -X POST http://localhost:3200/api/triage \
-H "Content-Type: application/json" \
-d '{
"subject": "Bug: team members cannot open traces",
"message": "Our support team sees a permissions error whenever they click a trace detail page.",
"plan": "enterprise",
"customer_name": "Ava",
"customer_email": "ava@example.com"
}'
That keeps local evaluation simple and makes the repo easy to demonstrate in articles, screenshots, and developer onboarding flows.
What the response gives you
The output is not just a generated summary.
It returns the data that an internal support workflow actually needs:
{
"trace_id": "trc_123",
"run_id": "run_123",
"ticket": {
"subject": "Bug: team members cannot open traces",
"plan": "enterprise",
"customer_name": "Ava",
"customer_email": "ava@example.com"
},
"triage": {
"classification": "bug",
"priority": "high",
"queue": "engineering",
"should_escalate": true,
"suggested_owner": "support-engineering",
"suggested_sla_hours": 2,
"tone": "urgent",
"short_reason": "incident language detected"
},
"next_actions": [
"Assign to support-engineering",
"Respond within 2 hours",
"Collect reproduction details, timestamps, and failing trace IDs",
"Escalate because the enterprise plan requires faster handling"
],
"summary": "..."
}
That combination of workflow metadata plus trace identifiers is what makes the example useful beyond a basic LangGraph demo.
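Because the response is plain JSON, a support console or script can consume it directly. This sketch pulls out the fields an internal tool would act on, using an abridged version of the example payload above.

```python
import json

# Abridged version of the article's example response.
raw = """{
  "trace_id": "trc_123",
  "triage": {"classification": "bug", "queue": "engineering", "should_escalate": true},
  "next_actions": ["Assign to support-engineering", "Respond within 2 hours"]
}"""

response = json.loads(raw)
trace_id = response["trace_id"]        # link back to the Tokvera trace
queue = response["triage"]["queue"]    # where the ticket should land
actions = response["next_actions"]     # concrete steps for the team
```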
How the workflow behaves
The classification step can run in mock mode or with a live model.
The repo includes heuristic fallback behavior for issues like:
- bugs and incidents
- billing questions
- feature requests
- general support
Then the summarization step turns the classification output into a short internal handoff summary and a set of next actions.
That is a good pattern for real teams because it separates decision logic from communication logic.
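A heuristic fallback of that kind can be as simple as keyword matching over the ticket text. The keyword table and function names below are assumptions for illustration, not the repo's actual heuristics, but they show the decision/communication split described above.

```python
# Illustrative keyword table; the repo's real heuristics may differ.
KEYWORDS = {
    "bug": ("error", "crash", "broken", "incident", "cannot"),
    "billing": ("invoice", "charge", "refund", "payment"),
    "feature": ("feature request", "would be nice", "support for"),
}

def classify_ticket(subject: str, message: str) -> str:
    """Decision logic: map ticket text to a classification bucket."""
    text = f"{subject} {message}".lower()
    for label, words in KEYWORDS.items():
        if any(word in text for word in words):
            return label
    return "general"  # default bucket when nothing matches

def summarize_triage(subject: str, classification: str) -> str:
    """Communication logic: produce a short internal handoff note."""
    return f"[{classification}] {subject}, routed for internal review"
```

Keeping the two functions separate means you can swap the mock classifier for a live model call without touching the summary step.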
Why Tokvera fits well with LangGraph
LangGraph gives you workflow structure.
Tokvera gives you workflow visibility.
This starter uses Tokvera to make the graph inspectable at two useful levels:
- graph root runs
- node-level execution spans
That means you can inspect:
- the overall workflow run
- the classify_ticket decision step
- the model-backed classification call when live mode is enabled
- the summarize_triage step
- the model-backed summary generation call when live mode is enabled
That distinction matters because debugging agent workflows usually requires more than raw model telemetry.
You need to understand the workflow path itself.
What this helps you debug
With node-level visibility, you can answer questions like:
- Did the graph classify a billing issue as a bug?
- Was escalation triggered because of the plan, the message content, or both?
- Did the classification step behave correctly but the summary step produce weak output?
- Did mock mode hide a live-model issue during local testing?
Those are the kinds of questions teams actually hit when they move from demo graphs to production-like workflows.
Running it locally
The project defaults to mock mode, which is the right choice for a starter.
It lets you evaluate the workflow without needing live provider credentials on day one.
python -m venv .venv
. .venv/bin/activate        # Windows: .venv\Scripts\activate
pip install -e .
cp .env.example .env        # Windows: copy .env.example .env
uvicorn app.main:app --reload --port 3200
By default, the API runs on http://localhost:3200.
To use a live provider, set MOCK_MODE=false and provide:
- OPENAI_API_KEY
- TOKVERA_API_KEY
You can also configure TOKVERA_INGEST_URL, TOKVERA_TENANT_ID, and OPENAI_MODEL.
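Reading those switches at startup can look like the sketch below. The variable names match the article; the defaults are assumptions, not necessarily what the repo ships with.

```python
import os

# Sketch of the mock/live switch. Defaults here are assumptions.
MOCK_MODE = os.getenv("MOCK_MODE", "true").lower() == "true"
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
TOKVERA_API_KEY = os.getenv("TOKVERA_API_KEY", "")
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")  # assumed default

if not MOCK_MODE and not (OPENAI_API_KEY and TOKVERA_API_KEY):
    # Fail fast in live mode rather than erroring mid-workflow.
    raise RuntimeError("Live mode requires OPENAI_API_KEY and TOKVERA_API_KEY")
```

Failing fast on missing credentials keeps the mock-mode default safe while making the live-mode requirements explicit.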
Why this repo is valuable for Python-first teams
A lot of OSS AI starter content leans heavily toward JavaScript.
This repo matters because it gives Python teams a concrete example of how to combine:
- FastAPI for the API surface
- LangGraph for workflow orchestration
- OpenAI for model-backed steps
- Tokvera for root-run and node-level visibility
That combination makes it a good reference for teams building internal agents, support flows, and other stateful multi-step workflows in Python.
What to customize next
The starter is intentionally compact, which makes it easy to extend.
The next useful upgrades would be:
- add more graph nodes for knowledge-base lookup or escalation review
- add a human-in-the-loop approval step before escalation
- add queue-specific summary formats
- persist workflow runs to a database
- attach screenshots or payload references to traces
- build a lightweight support console UI on top of the API
Those are natural next steps for any team turning a graph demo into a real workflow surface.
Conclusion
The best LangGraph examples do more than show nodes and edges.
They show how a workflow makes decisions and how a team can inspect those decisions later.
That is why langgraph-ticket-triage is useful.
It gives Python teams a practical support-triage workflow with clear graph structure, useful operational output, and trace visibility that makes the system debuggable instead of opaque.
Related links
- Repo: https://github.com/Tokvera/langgraph-ticket-triage
- LangGraph tracing docs: https://tokvera.org/docs/integrations/langgraph
- Get started: https://tokvera.org/docs/get-started