TL;DR — We built an AI pipeline that turns a baggage damage claim (plain text) into a policy-backed compensation offer. It uses KaibanJS (multi-agent orchestration), Kaiban MCP (card lifecycle as tools), Tavily (real-time product prices), and A2A (platform → agent triggers). Everything is TypeScript, tool-calling based, and runs from a single repo. Here’s the stack, the flow, and how to run it.
## What we’re building
A multi-agent system that:
- Receives a “card” from the Kaiban platform (description = claim text).
- Extracts and validates the claim (passenger, items, damage).
- Calculates compensation using airline policy (mock), historical payouts (mock), and real-time market prices via Tavily (up to 5 items).
- Writes the offer back to the card and moves it to done (or blocked on error).
All platform interaction happens through MCP tools — no Kaiban SDK in the agent code. That makes the agent a pure “tool-calling” pipeline and keeps the same pattern you’d use for any LLM + tools setup.
## Tech stack
| Layer | Tech |
|---|---|
| Runtime | Node.js |
| Language | TypeScript |
| Multi-agent | KaibanJS (open-source) |
| Platform integration | Kaiban MCP (Streamable HTTP), A2A |
| LLM | OpenAI (via KaibanJS) |
| Real-time data | Tavily (product price search) |
| Validation / schemas | Zod |
## High-level flow

```
Kaiban board: card created (description = claim text) in column "todo"
      │
      ▼  A2A request to your endpoint
Executor: parse activity → get_card (MCP) → validate (has description, column = todo)
      │
      ▼  Start KaibanJS team with card_id, board_id, team_id, agent_id
Task 0 → get_card, move_card to "doing", return claim text (userMessage)
Task 1 → Extract & validate claim (passenger, items, damage)
Task 2 → get_airline_policy + get_historical_payouts + search_product_market_price (Tavily × up to 5)
       → apply depreciation & caps → compensation amount + breakdown
Task 3 → Generate offer text (or "please provide X")
Task 4 → update_card (result = offer), move_card to "done", create_card_activities
      │
      ▼  On any error: move_card to "blocked", create_card_activities (audit)
```
The executor is a thin A2A handler: it receives the webhook, validates the card via MCP, runs the team, and on failure moves the card to blocked via MCP so nothing gets lost.
## Why MCP instead of the Kaiban SDK?
We use the Kaiban MCP server so that agents never touch the Kaiban SDK. All they see are tools: get_card, move_card, update_card, create_card_activities. Benefits for devs:
- One mental model — Same “LLM calls tools” pattern for both platform (Kaiban) and data (Tavily). No mixing SDK calls and tool calls in agent code.
- Schema from the server — MCP exposes JSON Schema for each tool; we convert to Zod so KaibanJS (and the LLM) get correct parameter definitions and fewer bad tool calls.
- Protocol, not vendor lock-in — MCP is an open protocol. Your agent code doesn’t depend on `@kaiban/sdk`; it just depends on “something that exposes these tools.”
If you prefer a controller-driven style where your app code uses the Kaiban SDK and the “agent” only does domain logic, the platform supports that too (see other kaiban-agents-starter examples). For this example we wanted to show a fully tool-based integration.
## Code highlights

### 1. MCP client: Streamable HTTP + Zod
We use @modelcontextprotocol/sdk with Streamable HTTP and turn each MCP tool into a KaibanJS-compatible tool with a Zod schema:
```typescript
// kaiban-mcp-client.ts (simplified)
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
import { convertJsonSchemaToZod } from 'zod-from-json-schema';

export async function getKaibanTools(): Promise<KaibanJSTool[]> {
  const client = await getMcpClient();
  const { tools: mcpTools } = await client.listTools();

  return mcpTools.map((t) => {
    const schema = inputSchemaToZod(t.inputSchema, t.name);
    return {
      name: t.name,
      description: t.description ?? `MCP tool: ${t.name}`,
      schema,
      async invoke(input: Record<string, unknown>): Promise<string> {
        const result = await client.callTool({ name: t.name, arguments: input });
        return stringifyToolContent(result.content);
      },
    };
  });
}
```
`getMcpClient()` builds the transport from `KAIBAN_MCP_URL` and sends an `Authorization: Bearer <KAIBAN_API_TOKEN>` header. The important bit is the JSON Schema → Zod conversion: the LLM gets proper tool schemas, so we see far fewer vague or malformed arguments.
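The helpers `inputSchemaToZod` and `stringifyToolContent` are elided in the snippet above. The second is simple enough to sketch — this body is our assumption, not the repo's code: an MCP tool result carries an array of content parts, and we flatten the text parts into one string for the LLM.

```typescript
// Hypothetical body for stringifyToolContent (the repo's helper is not shown above).
// Text parts are joined with newlines; anything else is JSON-stringified so
// no part of the tool result is silently dropped.
type McpContentPart = { type: string; text?: string };

function stringifyToolContent(content: unknown): string {
  if (!Array.isArray(content)) return JSON.stringify(content ?? null);
  return (content as McpContentPart[])
    .map((part) =>
      part.type === 'text' && typeof part.text === 'string' ? part.text : JSON.stringify(part),
    )
    .join('\n');
}
```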
### 2. Tavily as a tool (real-time prices)
The Compensation Calculation Agent has three tools. Two are mocks (policy, historical payouts); the third is Tavily for real product prices:
```typescript
// One call per product; agent is instructed to use max 5 per claim
const searchProductMarketPriceTool = new DynamicStructuredTool({
  name: 'search_product_market_price',
  description:
    'Searches for current market price of a product in USD. Call once per product; use for up to 5 products per claim.',
  schema: z.object({
    productName: z.string().describe('e.g. "Samsonite 28 inch spinner luggage"'),
  }),
  func: async (input) => {
    const client = await getTavilyClient();
    if (!client) return JSON.stringify({ success: false, mockValue: 120, currency: 'USD' });
    const response = await client.search(`${input.productName} current price buy USD`);
    // ... parse and return summary with price info
    return JSON.stringify({ success: true, summary, currency: 'USD' });
  },
});
```
We use LangChain’s DynamicStructuredTool here so it plugs into KaibanJS’s tool layer. The agent is prompted to call this once per damaged item (up to 5), then apply policy (depreciation, caps) and output a breakdown.
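The policy arithmetic the agent is asked to apply is ordinary code. As an illustration only — the field names, depreciation model, and numbers below are invented for this sketch, not taken from the repo's mock policy — it might look like:

```typescript
// Illustrative only: straight-line depreciation with a residual-value floor,
// then per-item and per-claim caps. Field names and rates are made up here.
interface ClaimedItem {
  name: string;
  marketPriceUsd: number; // from search_product_market_price
  ageYears: number;
}

interface Policy {
  annualDepreciationRate: number; // e.g. 0.1 = 10% per year
  perItemCapUsd: number;
  perClaimCapUsd: number;
}

function calculateCompensation(items: ClaimedItem[], policy: Policy) {
  const breakdown = items.map((item) => {
    // Depreciate, but never below 20% of the current market price
    const depreciated = Math.max(
      item.marketPriceUsd * (1 - policy.annualDepreciationRate * item.ageYears),
      item.marketPriceUsd * 0.2,
    );
    return { name: item.name, amountUsd: Math.min(depreciated, policy.perItemCapUsd) };
  });
  const total = breakdown.reduce((sum, b) => sum + b.amountUsd, 0);
  return { breakdown, totalUsd: Math.min(total, policy.perClaimCapUsd) };
}
```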
### 3. Executor: A2A → validate → team → or block
The executor receives the A2A body, pulls Kaiban activity (card_id, board_id, team_id), fetches the card via MCP, and only runs the team if the card has a description and is in todo:
```typescript
const card = await getCard(cardId);
if (!card?.description || card.column_key !== 'todo') {
  logger.debug('Skipping card: no description or not in todo');
  continue;
}

try {
  await processDamagedBaggageCompensationRequest(context);
} catch (error) {
  logger.error('Failed to process card', { error, cardId });
  await moveCardToBlocked(cardId, boardId, teamId, {
    id: ourAgentId,
    type: 'agent',
    name: OUR_AGENT_NAME,
  });
}
```
So: one place for “run the pipeline” and “on failure, move to blocked.” All via MCP.
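What might `moveCardToBlocked` do inside? The name comes from the executor snippet above, but this body is our sketch, not the repo's: two MCP tool calls, one to move the card and one to write the audit activity. We inject `callTool` explicitly here (the repo's version presumably closes over its MCP client) so the helper is trivial to stub.

```typescript
// Sketch only: argument field names (card_id, column_key, actor, ...) are
// inferred from the article's tool descriptions, not copied from the repo.
type CallTool = (req: { name: string; arguments: Record<string, unknown> }) => Promise<unknown>;

interface Actor {
  id: string;
  type: 'agent';
  name: string;
}

async function moveCardToBlocked(
  callTool: CallTool,
  cardId: string,
  boardId: string,
  teamId: string,
  actor: Actor,
): Promise<void> {
  // 1) Park the card where a human will see it
  await callTool({
    name: 'move_card',
    arguments: { card_id: cardId, board_id: boardId, team_id: teamId, column_key: 'blocked', actor },
  });
  // 2) Leave an audit trail on the card
  await callTool({
    name: 'create_card_activities',
    arguments: {
      card_id: cardId,
      board_id: boardId,
      team_id: teamId,
      activities: [{ type: 'comment', text: 'Processing failed; moved to blocked.', actor }],
    },
  });
}
```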
### 4. Team definition (async because of MCP tools)
The team is created asynchronously because we need to fetch Kaiban tools first:
```typescript
export async function createDamagedBaggageCompensationTeam(
  context: DamagedBaggageCompensationTeamContext,
) {
  const kaibanTools = await getKaibanTools();

  const kaibanCardSyncAgent = new Agent({
    name: 'Kaiban Card Sync Agent',
    role: 'Kaiban Platform Sync',
    goal: 'Use Kaiban MCP tools to get card, move card to doing/done, update card result, and create card activities.',
    tools: kaibanTools,
  });

  // ... Claim Extraction, Compensation Calculation (policy + historical + Tavily), Compensation Offer agents
  // ... Tasks 0–4

  const { card_id, board_id, team_id, agent_id, agent_name } = context;
  return new Team({
    name: 'Damaged Baggage Compensation Team',
    agents: [
      kaibanCardSyncAgent,
      claimExtractionValidationAgent,
      compensationCalculationAgent,
      compensationOfferAgent,
    ],
    tasks: [
      getCardAndMoveToDoingTask,
      extractAndValidateClaimTask,
      compensationCalculationTask,
      generateCompensationOfferTask,
      updateCardWithOfferAndMoveToDoneTask,
    ],
    inputs: { card_id, board_id, team_id, agent_id, agent_name },
    env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY || '' },
  });
}
```
Task 0 and Task 4 are both assigned to kaibanCardSyncAgent; the rest are specialized agents. Inputs are the IDs from the A2A activity so the sync agent can call get_card(card_id), move_card(card_id, column_key, actor), etc.
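How do those inputs reach the agents? KaibanJS interpolates `{placeholder}` tokens in task descriptions from the team's `inputs`, so a task can say "get the card {card_id} on board {board_id}". The helper below is ours, written only to illustrate that mechanism — it is not KaibanJS code.

```typescript
// Illustrative re-implementation of {placeholder} interpolation; KaibanJS does
// the equivalent internally when it builds the prompt for each task.
function interpolateDescription(template: string, inputs: Record<string, string>): string {
  // Unknown placeholders are left as-is rather than becoming "undefined"
  return template.replace(/\{(\w+)\}/g, (match, key: string) => inputs[key] ?? match);
}
```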
## Get it running
```shell
git clone https://github.com/kaiban-ai/kaiban-agents-starter.git
cd kaiban-agents-starter/examples/damaged-baggage-compensation-mcp-kaibanjs
npm install
cp .env.example .env
```
Edit .env:
- `KAIBAN_MCP_URL` (or `KAIBAN_TENANT` + `KAIBAN_ENVIRONMENT`) and `KAIBAN_API_TOKEN`
- `OPENAI_API_KEY`
- `TAVILY_API_KEY` (optional; falls back to a mock price if missing)
- `A2A_BASE_URL` (your public URL, e.g. `https://your-server.com`)
- `KAIBAN_AGENT_ID` or `KAIBAN_DAMAGED_BAGGAGE_COMPENSATION_AGENT_ID` (from Kaiban after you register the agent)
Register the agent in Kaiban:
- Agent card: `GET {A2A_BASE_URL}/damagedBaggageCompensation/a2a/.well-known/agent-card.json`
- Agent endpoint: `POST {A2A_BASE_URL}/damagedBaggageCompensation/a2a`
Assign the agent to a board, then create a card with the description set to a claim (e.g. copy from GET .../damagedBaggageCompensation/samples/baggage-claim-example.txt). Put the card in the todo column. Start the server:
```shell
npm run dev
```
The platform will send an A2A request; the executor will run the team and the card will move todo → doing → done (or blocked on error).
## Tests
Tests mock MCP, Tavily, and KaibanJS so you don’t need live credentials:
```shell
npm test
```
Use the same pattern to add tests for new tasks or tools.
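One way to keep new tools easy to mock is dependency injection at the tool boundary: extract the tool body into a plain function that takes its client, then stub the client in tests. The names below are illustrative, not the repo's exact internals — `searchPrice` mirrors the Tavily tool from section 2.

```typescript
// The price lookup factored out so the client is injectable.
interface PriceSearchClient {
  search(query: string): Promise<{ answer?: string }>;
}

async function searchPrice(client: PriceSearchClient | null, productName: string): Promise<string> {
  // Same fallback behavior as the real tool: no client → mock price
  if (!client) return JSON.stringify({ success: false, mockValue: 120, currency: 'USD' });
  const response = await client.search(`${productName} current price buy USD`);
  return JSON.stringify({ success: true, summary: response.answer ?? '', currency: 'USD' });
}

// Hand-rolled stub: records queries instead of calling Tavily
const queries: string[] = [];
const stubClient: PriceSearchClient = {
  async search(query) {
    queries.push(query);
    return { answer: 'about $120' };
  },
};
```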
## Takeaways
- Multi-agent + tools — KaibanJS gives you a clear split: one agent for “platform sync” (MCP tools), others for extraction, calculation, and copy. Each task has a single agent and a defined output schema (Zod).
- MCP for platform — Using Kaiban MCP (instead of the SDK in agent code) keeps the pipeline tool-only and schema-driven; you can reuse the same MCP client in other agents.
- Real-time data in the loop — Tavily shows how to plug a third-party API into an LLM tool (one call per product, cap at 5) so compensation is grounded in current prices.
- Failure = visible — Moving failed cards to blocked via MCP and logging activities means every run is auditable and nothing drops on the floor.
## Links

- Repo: kaiban-agents-starter → `examples/damaged-baggage-compensation-mcp-kaibanjs`
- Example docs: Damaged Baggage Compensation with Kaiban MCP (KaibanJS)
- Kaiban MCP reference: docs.kaiban.io/references/kaiban-mcp
- Use case (product): Kaiban – Automated Damaged Baggage Compensation
Tags: typescript node ai llm multi-agent mcp kaibanjs tavily open-source