Strands Agents has been Python-only since launch. If your application code is TypeScript, that meant either switching languages for the agent layer or building your own tool-use loop from scratch. Neither is great.
The SDK now has TypeScript support, currently in preview. Breaking changes are possible and not all Python SDK features are available yet. But the core agent loop, tool use, and streaming all work, and the patterns feel natural if you're already writing TypeScript. I put together a demo repo with five standalone examples that progressively build from a minimal agent to multi-turn conversation. Each one is a single file you can run directly. No build step, no boilerplate.
The Setup
You'll need Node.js 20+ and AWS credentials configured for Amazon Bedrock access. The TypeScript SDK supports Bedrock, OpenAI, Gemini, and custom providers. These examples use Bedrock.
git clone https://github.com/gunnargrosch/strands-agents-ts-demo.git
cd strands-agents-ts-demo
npm install
All examples use tsx for direct TypeScript execution. No compilation needed.
If you'd rather follow along in your own project:
npm init -y && npm pkg set type=module
npm install @strands-agents/sdk zod
npm install --save-dev @types/node typescript tsx
Example 1: The Simplest Agent
How little code does it take to get an agent running?
import { Agent, BedrockModel } from '@strands-agents/sdk'
const agent = new Agent({
  model: new BedrockModel(),
  systemPrompt: 'You are a helpful assistant that likes to be edgy in a funny way.',
  printer: false,
})
const prompt = process.argv[2] || 'Which city is best, Gothenburg or Stockholm?'
console.log(`Prompt: ${prompt}\n`)
const result = await agent.invoke(prompt)
console.log(result.toString())
That's it. BedrockModel() with no arguments defaults to Claude Sonnet 4.5. printer: false disables the SDK's built-in console output, which clutters your terminal when you want to control the formatting yourself.
npm run simple
Or pass your own prompt:
npm run simple "What's the meaning of life?"
Example 2: Adding a Tool
A bare agent is just a chat wrapper. Tools are where it gets useful:
import { Agent, BedrockModel, tool } from '@strands-agents/sdk'
import { z } from 'zod'
const calculator = tool({
  name: 'calculate',
  description: 'Perform basic math operations',
  inputSchema: z.object({
    operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
    a: z.number(),
    b: z.number(),
  }),
  callback: ({ operation, a, b }) => {
    switch (operation) {
      case 'add': return a + b
      case 'subtract': return a - b
      case 'multiply': return a * b
      case 'divide':
        if (b === 0) throw new Error('Division by zero')
        return a / b
    }
  },
})
const agent = new Agent({
  model: new BedrockModel(),
  systemPrompt: 'You are a helpful math assistant. Always explain how you calculated the result.',
  tools: [calculator],
  printer: false,
})
The agent decides when to call the tool based on the prompt. Ask it "What is 42 times 17?" and it invokes the calculator, then explains the result.
The Zod schema matters more than it might seem. If the model sends parameters that fail validation, the SDK catches the error and sends it back to the model as a tool result, giving the model a chance to correct its input on the next loop iteration. If you've built custom agent loops before, you know how much code goes into handling bad tool inputs and retrying. Here, that's just the agent loop doing its job.
npm run calculator
Example 3: Streaming and Model Configuration
The first two examples wait for the full response before printing anything. That's fine for short answers, but for anything longer you want tokens streaming as they arrive:
import { Agent, BedrockModel } from '@strands-agents/sdk'
const model = new BedrockModel({
  // modelId: 'us.anthropic.claude-sonnet-4-5-20250929-v1:0',
  // region: 'us-east-1',
  temperature: 0.7,
  maxTokens: 1024,
})
const agent = new Agent({
  model,
  systemPrompt: 'You are a helpful assistant. Keep responses concise.',
  printer: false,
})
const prompt = process.argv[2] || 'Explain how a CPU works in five sentences.'
console.log(`Prompt: ${prompt}\n`)
for await (const event of agent.stream(prompt)) {
  if (event.type === 'modelContentBlockDeltaEvent' && event.delta.type === 'textDelta') {
    process.stdout.write(event.delta.text)
  }
}
process.stdout.write('\n')
agent.stream() returns an async iterable of every event in the agent loop: text deltas, tool call events, tool results, metadata. We filter for textDelta because we only want the model's text on stdout, not the tool orchestration noise.
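The filtering logic is easy to pull out into a helper you can unit-test without touching the SDK. Here's a self-contained sketch: the text-delta shape matches the loop above, while the other event-type names in the fake stream are made-up placeholders standing in for the SDK's real event union:

```typescript
// Shape of a text-delta event, matching what the streaming loop filters on.
type TextDeltaEvent = {
  type: 'modelContentBlockDeltaEvent'
  delta: { type: 'textDelta'; text: string }
}
// Everything else in the stream (tool calls, metadata, ...) we ignore here.
type StreamEvent = TextDeltaEvent | { type: string }

function isTextDelta(event: StreamEvent): event is TextDeltaEvent {
  return 'delta' in event && event.delta.type === 'textDelta'
}

// Drains an event stream and keeps only the model's text.
async function collectText(events: AsyncIterable<StreamEvent>): Promise<string> {
  let text = ''
  for await (const event of events) {
    if (isTextDelta(event)) text += event.delta.text
  }
  return text
}

// Fake stream for demonstration; the non-delta event names are invented.
async function* fakeStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'messageStart' }
  yield { type: 'modelContentBlockDeltaEvent', delta: { type: 'textDelta', text: 'Hello, ' } }
  yield { type: 'modelContentBlockDeltaEvent', delta: { type: 'textDelta', text: 'world.' } }
  yield { type: 'messageStop' }
}

const text = await collectText(fakeStream())
console.log(text) // Hello, world.
```

In real code you'd pass `agent.stream(prompt)` instead of `fakeStream()`; the helper stays the same.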
The commented-out lines show how to pin a specific model and region. The us. prefix means cross-region inference, routing requests across US regions. Worth considering for agent workloads: multi-step loops that make several model calls in sequence benefit from the distributed capacity.
npm run streaming
Example 4: Multi-Tool Orchestration
This is where agents start to feel like agents rather than fancy API wrappers. The researcher example has three tools: search, compare, and summarize. The agent decides which to call and in what order:
const search = tool({
  name: 'search',
  description: 'Search the knowledge base for information about a topic',
  inputSchema: z.object({
    query: z.string().describe('The search query'),
  }),
  callback: ({ query }) => {
    // knowledgeBase is a plain topic-to-info object defined earlier in the file
    const key = query.toLowerCase()
    for (const [topic, info] of Object.entries(knowledgeBase)) {
      if (key.includes(topic)) return { found: true, topic, info }
    }
    return { found: false, topic: key, info: 'No results found.' }
  },
})
The compare tool structures side-by-side comparisons, and summarize creates formatted output. Ask the agent to compare TypeScript and Rust, and here's what actually happens:
Prompt → Agent
├─ Agent calls search("TypeScript") → gets results
├─ Agent calls search("Rust") → gets results
├─ Agent calls compare(typescript_data, rust_data) → structured comparison
├─ Agent calls summarize(comparison) → formatted output
└─ Agent returns final response with the summary
This is the part worth paying attention to. The agent isn't following a hardcoded sequence. It's making decisions at each step based on what the previous tool returned. It could search both topics first, then compare. Or it could search one, realize it needs more context, and search again before moving on. The model drives the orchestration.
npm run researcher
The knowledge base here is simulated, but the pattern is real. Replace it with API calls, database queries, or file operations and you have a working research agent.
Example 5: Multi-Turn Conversation
All the previous examples are single-shot: one prompt, one response. Real applications usually need back-and-forth. Call agent.invoke() multiple times on the same instance and it maintains the conversation history using a sliding window manager:
import { createInterface } from 'node:readline/promises'

const rl = createInterface({ input: process.stdin, output: process.stdout })
const ask = (question: string) => rl.question(question)

console.log('Multi-turn chat. Conversation context is preserved across messages.')
console.log('Type "exit" to quit.\n')

while (true) {
  const input = await ask('You: ')
  if (input.trim().toLowerCase() === 'exit') break
  const result = await agent.invoke(input)
  console.log(`\nAgent: ${result.toString()}\n`)
}

rl.close()
Each call adds to the message history. The agent remembers what you discussed and can reference earlier context. This is the foundation for chatbots, interactive assistants, or any agent that needs ongoing interaction.
npm run chat
What's Not Covered
The TypeScript SDK is still catching up to the Python SDK. A few things that aren't available yet: structured output, multi-agent patterns (graph, swarm, workflow orchestration), callback handlers for streaming (TypeScript uses async iterators exclusively), module-based tool loading, and the observability/telemetry stack. The documentation tracks what's available in each language.
What is here covers the core well: model invocation, tool use with type safety, streaming, and conversation management. For most agent use cases, that's the foundation you build on.
These five examples all run locally against Bedrock. The next interesting question is what happens when you deploy an agent as a service: behind an API, on Lambda, handling concurrent requests with proper error handling and observability. That's where the patterns get more nuanced, and it's what I plan to cover next.
Additional Resources
- Strands Agents TypeScript Demo
- Strands Agents Documentation
- Strands Agents GitHub
- TypeScript SDK Quickstart
- Model Providers
- Amazon Bedrock Documentation
If you try any of these examples or build something with the SDK, I'd like to hear about it in the comments.