Building AI features in mobile apps usually goes like this: send a prompt to a backend server, wait, then pipe the response into a chat bubble. You end up maintaining a Python orchestration server, an API gateway, and a queue, just so your chatbot can tell the user the weather.
What if the entire reasoning engine lived on the client?
Meet Vantura, the first Stateful Agentic AI Framework designed exclusively for Flutter. Vantura brings a full ReAct (Reason + Act) loop to your Dart code, so the agent can think, call local tools, observe results, and iterate, all without a middleware server in between.
## Why Vantura Exists
Most "AI SDKs" for mobile are thin wrappers around an HTTP call. Vantura is an orchestration framework. Here's what that means in practice:
| Feature | What It Actually Does |
|---|---|
| On-Device ReAct Loop | The agent's decision cycle (Thought → Action → Observation → Repeat) runs entirely in your Dart process. It can call sqflite, trigger a GoRouter navigation, or read a sensor, without any network hop to a backend orchestrator. |
| Type-Safe Tooling | Define tool arguments as a plain Dart class. `SchemaHelper` auto-generates the JSON Schema the LLM needs. No hand-written JSON. |
| Dual-Layer Memory | A sliding window of recent messages (short-term) plus automatic LLM-powered summarization of older context (long-term). Your agent remembers, without blowing up token costs. |
| Agent Checkpointing | The ReAct loop serializes its state between tool calls. If the user kills the app mid-reasoning, `agent.resume(resumeFrom: checkpoint)` picks up exactly where it left off. |
| Multi-Agent Coordination | `AgentCoordinator` manages multiple specialized agents. The built-in `transfer_to_agent` tool lets them hand off tasks at runtime. |
| Privacy-First Security | Built-in PII redaction, input sanitization (100 KB limit, control-character stripping), anti-jailbreak system directives, and redacted logging by default. |
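To make the checkpointing row concrete, here is a sketch of persisting a checkpoint across app restarts. Only `agent.resume(resumeFrom: checkpoint)` appears in the table above; the `VanturaCheckpoint` type name, its `toJson`/`fromJson` methods, and the assumption that `resume` streams like `runStreaming` are placeholders for whatever your Vantura version actually exposes:

```dart
import 'dart:convert';

import 'package:shared_preferences/shared_preferences.dart';
import 'package:vantura/vantura.dart';

// Persist the checkpoint whenever the loop pauses between tool calls.
// VanturaCheckpoint, toJson, and fromJson are hypothetical names; check
// the package docs for the real serialization API.
Future<void> saveCheckpoint(VanturaCheckpoint checkpoint) async {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('agent_checkpoint', jsonEncode(checkpoint.toJson()));
}

// On app launch, pick up where the agent left off if a checkpoint exists.
Future<void> resumeIfInterrupted(VanturaAgent agent) async {
  final prefs = await SharedPreferences.getInstance();
  final saved = prefs.getString('agent_checkpoint');
  if (saved == null) return;
  final checkpoint = VanturaCheckpoint.fromJson(jsonDecode(saved));
  await for (final r in agent.resume(resumeFrom: checkpoint)) {
    if (r.textChunk != null) print(r.textChunk);
  }
}
```

The design choice to serialize between tool calls (rather than mid-generation) is what makes this safe: each checkpoint sits at a clean Thought → Action boundary.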
## Show Me the Code

### 1. Spin Up an Agent in Under 20 Lines
```dart
import 'package:vantura/vantura.dart';

// 1. Pick your LLM provider; swap one line to switch models.
final client = VanturaClient(
  apiKey: 'YOUR_OPENAI_KEY',
  baseUrl: 'https://api.openai.com/v1/chat/completions',
  model: 'gpt-4o',
);

// 2. Initialize memory (short-term window + auto-summarization).
final memory = VanturaMemory(sdkLogger, client);

// 3. Create the agent.
final agent = VanturaAgent(
  name: 'inventory_bot',
  instructions: 'You help manage the user\'s local product inventory.',
  memory: memory,
  client: client,
  state: VanturaState(),
  tools: [
    ...getStandardTools(), // Calculator, Connectivity, DeviceInfo, ApiTest
    CheckStockTool(),      // Your custom tool (see below)
  ],
);

// 4. Run with streaming: each chunk arrives as it's generated.
await for (final response in agent.runStreaming('How many MacBooks are in stock?')) {
  if (response.textChunk != null) {
    print(response.textChunk); // stream to your UI
  }
}
```
Note: `VanturaState` is a `ChangeNotifier`. Wrap it with `AnimatedBuilder` or `ListenableBuilder` to drive real-time UI updates: show "Thinking…", "Calling tool: check_stock", or the final answer, all reactively.
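For example, a widget that rebuilds on every agent state change might look like this. `state.status` is a hypothetical getter standing in for whatever fields `VanturaState` actually exposes; the `ChangeNotifier` wiring itself is standard Flutter:

```dart
import 'package:flutter/material.dart';
import 'package:vantura/vantura.dart';

class AgentStatusView extends StatelessWidget {
  const AgentStatusView({super.key, required this.state});

  final VanturaState state;

  @override
  Widget build(BuildContext context) {
    // ListenableBuilder rebuilds whenever the ChangeNotifier fires.
    return ListenableBuilder(
      listenable: state,
      // `state.status` is a placeholder; substitute the real field
      // names from your VanturaState version.
      builder: (context, _) => Text('${state.status}'),
    );
  }
}
```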
### 2. Define a Custom Tool (Type-Safe)
Tools are strongly typed. You define a Dart class for the arguments, and Vantura's SchemaHelper generates the JSON Schema for the LLM automatically:
```dart
// --- Argument class ---
class CheckStockArgs {
  final String productName;
  CheckStockArgs(this.productName);

  factory CheckStockArgs.fromJson(Map<String, dynamic> json) =>
      CheckStockArgs(json['product_name'] as String);
}

// --- Tool definition ---
class CheckStockTool extends VanturaTool<CheckStockArgs> {
  @override
  String get name => 'check_stock';

  @override
  String get description => 'Checks current stock level for a product.';

  @override
  Map<String, dynamic> get parameters => SchemaHelper.generateSchema({
        'product_name': SchemaHelper.stringProperty(
          description: 'The name of the product to look up',
        ),
      });

  @override
  CheckStockArgs parseArgs(Map<String, dynamic> json) =>
      CheckStockArgs.fromJson(json);

  @override
  Future<String> execute(CheckStockArgs args) async {
    // Query your local DB, call an API, read a file: whatever you need.
    final count = await _database.getStockCount(args.productName);
    return 'Product "${args.productName}" has $count units in stock.';
  }
}
```
That's it. No hand-written JSON Schema. No `Map<String, dynamic>` spaghetti. The LLM sees a clean function signature, calls it, and Vantura routes the result back into the reasoning loop.
### 3. Swap Providers With a Single Line
Vantura ships with three deeply integrated clients that all conform to the `LlmClient` interface. Swap providers without changing a single line of agent logic:
```dart
// OpenAI / Groq / TogetherAI / local Ollama (any OpenAI-compatible API)
final LlmClient openai = VanturaClient(
  apiKey: 'sk-...',
  baseUrl: 'https://api.openai.com/v1/chat/completions',
  model: 'gpt-4o',
);

// Anthropic Claude: native Messages API, not a compatibility shim
final LlmClient claude = AnthropicClient(
  apiKey: 'sk-ant-...',
  model: 'claude-3-7-sonnet-latest',
);

// Google Gemini: native REST API with structured tool injection
final LlmClient gemini = GeminiClient(
  apiKey: 'AIza...',
  model: 'gemini-2.0-flash',
);
```
Pass any of these as the `client:` parameter to `VanturaAgent` and everything (streaming, tool calls, memory) just works.
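Since `VanturaClient` targets any OpenAI-compatible endpoint, pointing it at a local Ollama instance is just a different `baseUrl`. A sketch, assuming Ollama's default port and a model you have already pulled:

```dart
// Ollama serves an OpenAI-compatible API under /v1 on port 11434 by default.
// Ollama ignores the API key, but the field still needs a value here.
final LlmClient local = VanturaClient(
  apiKey: 'ollama',
  baseUrl: 'http://localhost:11434/v1/chat/completions',
  model: 'llama3.2', // any locally pulled model tag
);
```

This is the fully on-device configuration: the ReAct loop, the tools, and the model itself all run on the user's machine.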
### 4. Multi-Agent Coordination
Need specialized agents? Create a routing layer with AgentCoordinator:
```dart
final billingAgent = VanturaAgent(
  name: 'BillingBot',
  instructions: 'You handle invoicing, payments, and billing questions.',
  memory: billingMemory,
  client: client,
  state: billingState,
  tools: [CreateInvoiceTool(), RefundTool()],
);

final supportAgent = VanturaAgent(
  name: 'SupportBot',
  instructions: 'You handle general support queries and FAQs.',
  memory: supportMemory,
  client: client,
  state: supportState,
  tools: [SearchFaqTool(), TicketTool()],
);

// The coordinator injects a "transfer_to_agent" tool into each agent automatically.
final coordinator = AgentCoordinator([billingAgent, supportAgent]);

// If the user asks a billing question to SupportBot, it will call
// transfer_to_agent(target_agent: "BillingBot", reason: "...") on its own.
await for (final r in coordinator.runStreaming('I need a refund for order #42')) {
  if (r.textChunk != null) print(r.textChunk);
}
```
### 5. Human-in-the-Loop Confirmation
Prevent the agent from executing sensitive actions without the user's approval:
```dart
class DeleteProductTool extends VanturaTool<DeleteProductArgs> {
  @override
  String get name => 'delete_product';

  @override
  String get description => 'Permanently deletes a product.';

  // Static: always require confirmation
  @override
  bool get requiresConfirmation => true;

  // Dynamic: only confirm for high-value items
  @override
  bool requiresConfirmationFor(DeleteProductArgs args) {
    return args.value > 100.0; // skip confirmation for low-value items
  }

  // ... parseArgs, parameters, execute ...
}
```
When `requiresConfirmationFor` returns true, Vantura pauses the ReAct loop and yields a `CONFIRMATION_REQUIRED` message. Your UI can show a dialog, and the agent resumes only after explicit approval.
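A minimal sketch of how the UI side of that handshake could look. Only the `CONFIRMATION_REQUIRED` message is documented above, so `isConfirmationRequired`, `pendingToolName`, and `agent.confirm(...)` are placeholder names for whatever your Vantura version exposes; the dialog itself is standard Flutter:

```dart
// Field and method names other than runStreaming are hypothetical;
// consult the package docs for the real confirmation API.
await for (final r in agent.runStreaming('Delete product "MacBook Pro"')) {
  if (r.isConfirmationRequired) {
    // Pause here and ask the user before the tool runs.
    final approved = await showDialog<bool>(
      context: context,
      builder: (ctx) => AlertDialog(
        title: const Text('Confirm action'),
        content: Text('Allow the agent to run ${r.pendingToolName}?'),
        actions: [
          TextButton(
            onPressed: () => Navigator.pop(ctx, false),
            child: const Text('Deny'),
          ),
          TextButton(
            onPressed: () => Navigator.pop(ctx, true),
            child: const Text('Approve'),
          ),
        ],
      ),
    );
    agent.confirm(approved ?? false); // hypothetical resume call
  } else if (r.textChunk != null) {
    print(r.textChunk);
  }
}
```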
## Why You Should Try Vantura
- Local-First: Query SQLite, read sensors, navigate screens; no backend round-trip.
- Private by Default: Sensitive data stays on-device. Logging redacts API keys and PII automatically.
- Production-Ready: Built-in retry logic, rate-limit handling, input sanitization, and anti-jailbreak directives.
- Fully Open Source: BSD-3-Clause licensed. PRs welcome.
Get started on pub.dev →
Browse the source on GitHub →