I wanted to participate in the current AI era, but not just as a user of tools. I wanted to build something. And I thought — what better way to see how Rust as a language can evolve in this space than to build a library for it?
That's how mini-agent started. A minimal, async-first AI agent framework in Rust with support for OpenAI, Anthropic, OpenRouter, and Ollama.
Here's what the build actually looked like.
Why Rust for AI tooling?
Most AI tooling is Python. That's fine — Python is great for fast iteration. But I was learning Rust through Let's Get Rusty and the official Rust book, and I kept wondering — what does it look like to build something real with it?
AI agents felt like the right challenge. They involve async code, HTTP clients, JSON parsing, trait abstractions, and error handling — basically a tour of the language.
The Architecture
The core idea is three things: a Provider, a Tool, and an Agent.
```rust
// Implement this to add a new LLM backend
#[async_trait]
pub trait LlmProvider: Send + Sync {
    fn provider_name(&self) -> &str;

    async fn complete(
        &self,
        messages: &[Message],
        tools: &[&dyn Tool],
        model: &str,
    ) -> Result<Completion, AgentError>;
}
```
```rust
// Implement this to give the agent new capabilities
#[async_trait]
pub trait Tool: Send + Sync + 'static {
    fn name(&self) -> &'static str;
    fn description(&self) -> &'static str;
    fn parameters_schema(&self) -> Value;
    async fn execute(&self, args: Value) -> Result<String, AgentError>;
}
```
The Agent drives a ReAct-style loop — plan, act, observe — until the model returns a final answer or hits the max steps limit.
```
User prompt
    │
    ▼
Agent sends messages + tools → LlmProvider
    │
    ▼
LLM responds with tool call?
    ├── Yes → execute tool → result added to context → loop
    └── No  → return final answer
```
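The loop above can be sketched in a few lines. This is a synchronous simplification with the provider and the tool stubbed as closures; the real loop is async and dispatches to registered tools by name, and all the names here are illustrative.

```rust
// What the provider can come back with on each turn.
enum Step {
    ToolCall { name: String, args: String },
    Final(String),
}

// Plan/act/observe loop: ask the provider, execute any requested tool,
// feed the result back, and stop at a final answer or the step limit.
fn run_agent(
    mut provider: impl FnMut(&[String]) -> Step,
    tool: impl Fn(&str) -> String,
    prompt: &str,
    max_steps: usize,
) -> Result<String, String> {
    let mut messages = vec![prompt.to_string()];
    for _ in 0..max_steps {
        match provider(messages.as_slice()) {
            Step::Final(answer) => return Ok(answer),
            Step::ToolCall { name: _, args } => {
                // Observe: append the tool result and loop again.
                messages.push(tool(&args));
            }
        }
    }
    Err(format!("no final answer after {max_steps} steps"))
}

fn main() {
    // Fake provider: first turn requests a tool, second turn answers
    // using the observed tool result.
    let mut calls = 0;
    let provider = move |msgs: &[String]| {
        calls += 1;
        if calls == 1 {
            Step::ToolCall { name: "add_numbers".into(), args: "4 7".into() }
        } else {
            Step::Final(format!("The answer is {}", msgs.last().unwrap()))
        }
    };
    let tool = |args: &str| {
        args.split_whitespace()
            .map(|s| s.parse::<i64>().unwrap())
            .sum::<i64>()
            .to_string()
    };
    let answer = run_agent(provider, tool, "What is 4 + 7?", 5).unwrap();
    assert_eq!(answer, "The answer is 11");
}
```

The max-steps bound matters: a model that keeps requesting tools would otherwise loop forever, which is why the real framework surfaces this case as a dedicated error.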
The Hardest Part: JSON Responses Across Different Providers
This was genuinely painful. Every LLM provider returns a slightly different JSON shape.
OpenAI and OpenRouter are compatible, so I was able to share helper functions between them:
```rust
pub fn parse_openai_completion(json: &Value) -> Result<Completion, AgentError> {
    let choice = json
        .get("choices")
        .and_then(|v| v.as_array())
        .and_then(|a| a.first())
        .ok_or_else(|| AgentError::invalid("provider", "missing 'choices' array"))?;
    // ...
}
```
But Anthropic is completely different. Instead of `tool_calls`, it returns `tool_use` blocks inside a `content` array:
```json
{
  "content": [
    { "type": "text", "text": "Let me calculate that." },
    { "type": "tool_use", "id": "abc", "name": "add_numbers", "input": { "a": 4, "b": 7 } }
  ]
}
```
So I had to write a separate parser for Anthropic and translate it into the same internal Completion struct. This is where Rust actually helped — the type system forced me to handle every case explicitly rather than letting things silently fail.
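The translation step looks roughly like this. All the type and field names below are illustrative stand-ins, not mini-agent's actual structs, and JSON parsing (via serde_json in the real code) is elided: the content blocks arrive here already deserialized into an enum.

```rust
// Anthropic-style content blocks, already deserialized. The real code
// parses these out of the JSON response with serde_json.
enum ContentBlock {
    Text { text: String },
    ToolUse { id: String, name: String, input: String },
}

#[derive(Debug, PartialEq)]
struct ToolCall {
    id: String,
    name: String,
    arguments: String,
}

// The unified internal shape every provider is normalized into.
#[derive(Debug, PartialEq, Default)]
struct Completion {
    text: String,
    tool_calls: Vec<ToolCall>,
}

// Walk the content array once, accumulating text and tool calls.
// The match is exhaustive: adding a new block variant is a compile
// error until it's handled, which is the "no silent failure" payoff.
fn from_anthropic(blocks: Vec<ContentBlock>) -> Completion {
    let mut out = Completion::default();
    for block in blocks {
        match block {
            ContentBlock::Text { text } => out.text.push_str(&text),
            ContentBlock::ToolUse { id, name, input } => {
                out.tool_calls.push(ToolCall { id, name, arguments: input });
            }
        }
    }
    out
}

fn main() {
    let blocks = vec![
        ContentBlock::Text { text: "Let me calculate that.".into() },
        ContentBlock::ToolUse {
            id: "abc".into(),
            name: "add_numbers".into(),
            input: r#"{"a":4,"b":7}"#.into(),
        },
    ];
    let c = from_anthropic(blocks);
    assert_eq!(c.text, "Let me calculate that.");
    assert_eq!(c.tool_calls[0].name, "add_numbers");
}
```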
I also found a real bug during this process: the Anthropic provider was silently dropping the system prompt because I had hardcoded `let system_prompt: Option<String> = None;` and never actually extracted it from the message history. It compiled fine. It just never worked. That kind of silent failure is exactly what better error handling prevents.
Error Handling — From Strings to Structure
My original errors looked like this:
```rust
#[error("Provider error: {0}")]
ProviderError(String),

#[error("Invalid response from LLM: {0}")]
InvalidResponse(String),
```
The problem is that these are useless for callers. You can't pattern match on the contents of a string, and you can't build retry logic on top of one.
So I restructured everything:
```rust
#[derive(Error, Debug)]
pub enum AgentError {
    #[error("Network error: {0}")]
    Network(#[from] reqwest::Error),

    #[error("Provider '{provider}' error{}: {message}",
        .status.map(|s| format!(" (HTTP {s})")).unwrap_or_default())]
    Provider {
        provider: String,
        message: String,
        status: Option<u16>,
    },

    #[error("Tool '{tool}' failed: {reason}")]
    ToolExecution {
        tool: String,
        reason: String,
    },

    #[error("Agent reached the maximum of {0} steps without a final answer")]
    MaxSteps(usize),

    // ...
}
```
Now callers can actually do something useful:
```rust
impl AgentError {
    pub fn is_retryable(&self) -> bool {
        matches!(
            self,
            Self::Network(_) | Self::Provider { status: Some(500..=599), .. }
        )
    }

    pub fn is_client_error(&self) -> bool {
        matches!(self, Self::Provider { status: Some(s), .. } if *s >= 400 && *s < 500)
    }
}
```
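Here's what "callers can do something useful" looks like in practice: retry only on retryable failures. This is a sketch against a stripped-down stand-in for the error enum (no reqwest or thiserror); `with_retries` and its shape are my illustration, not an API the crate ships.

```rust
// Stripped-down stand-in for the real AgentError (no reqwest/thiserror).
#[allow(dead_code)]
#[derive(Debug)]
enum AgentError {
    Network(String),
    Provider { provider: String, message: String, status: Option<u16> },
}

impl AgentError {
    // Same policy as in the post: network failures and 5xx responses
    // are worth retrying; 4xx means the request itself is wrong.
    fn is_retryable(&self) -> bool {
        matches!(
            self,
            Self::Network(_) | Self::Provider { status: Some(500..=599), .. }
        )
    }
}

// Retry a fallible operation up to max_attempts times, but only while
// the error is retryable. A real version would also back off between
// attempts.
fn with_retries<T>(
    mut attempt: impl FnMut() -> Result<T, AgentError>,
    max_attempts: usize,
) -> Result<T, AgentError> {
    let mut tries = 0;
    loop {
        tries += 1;
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) if e.is_retryable() && tries < max_attempts => continue,
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Fails twice with a 503, then succeeds on the third attempt.
    let mut calls = 0;
    let result = with_retries(
        || {
            calls += 1;
            if calls < 3 {
                Err(AgentError::Provider {
                    provider: "openai".into(),
                    message: "overloaded".into(),
                    status: Some(503),
                })
            } else {
                Ok("done")
            }
        },
        5,
    );
    assert_eq!(result.unwrap(), "done");
    assert_eq!(calls, 3);
}
```

With stringly-typed errors, this policy would have required parsing error messages; with structured variants it's a `matches!` on a status range.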
This was one of the things Rust surprised me with. The borrow checker and the type system feel restrictive at first, but they push you toward designs that are actually better. I ended up with cleaner error handling than I would have written in any other language, not because I planned it that way — but because Rust made the lazy approach painful enough that I did it properly.
What I Learned About Rust Specifically
A few things that genuinely surprised me coming from other languages:
The borrow checker is annoying until it isn't. I spent a lot of time fighting it early on. But every time it stopped me, it was catching something real — a value being used after it was moved, a reference outliving its owner. After a while you stop fighting it and start designing around it.
async fn in traits is still evolving. I hit the issue that an async fn in a trait makes it not dyn-compatible, meaning you can't use `Box<dyn LlmProvider>` directly with an async method. The async-trait crate solves this with a macro, but it's worth knowing it's not zero-cost: it boxes the returned future.
thiserror is excellent. If you're doing any serious error handling in Rust, use it. The #[from] derive and the format string syntax in #[error(...)] make structured errors almost painless.
Memory management clicked later than I expected. The ownership model feels theoretical until you're actually debugging a real async program. Then it becomes obvious why it exists.
Try It
```toml
[dependencies]
mini-agent = { git = "https://github.com/RajMandaliya/mini-agent" }
```
```rust
let mut agent = Agent::new(Box::new(provider), model);
agent.add_tool(AddNumbersTool);

let result = agent.run("What is 42 + 58?").await?;
```
- crates.io: https://crates.io/crates/mini-agent
- GitHub: https://github.com/RajMandaliya/mini-agent
This is v0.1.0. The next thing on my list is typed tool results — right now execute() returns a String which keeps the trait object-safe, but it means callers have to deserialize themselves. I got good feedback on this from the Rust community and it's worth solving properly.
If you're learning Rust and wondering what to build — build something real. The ecosystem needs more Rust-native AI tooling, and the language is genuinely good at it.