Core Problem Statement
GraphBit targets the hard parts of building reliable, scalable, and maintainable AI-powered workflows. It addresses:
- Orchestrating multi-step AI tasks with clear data dependencies and parallelism
- Integrating multiple LLM providers without lock-in
- Making LLM agents safely call tools and incorporate tool results
- Running workflows with production-grade resilience under variable load and flaky networks
- Giving Python developers a simple API while leveraging a high-performance runtime
In short: GraphBit turns agentic AI from ad-hoc scripts into robust, dependency-aware workflows that can run reliably in production.
Developer Pain Points
- Fragmented orchestration
  - Hand-rolled “call A then B” glue code; no graph-level validation or parallelism
  - Brittle context passing between steps; outputs get lost or misapplied across nodes
- Vendor lock-in and integration drift
  - Different provider SDKs and message formats; feature differences (tool calls, usage accounting, finish reasons)
  - Hard to switch providers for cost or performance, or to run locally vs. in the cloud
- Tool-use complexity
  - A reliable function-calling flow is tricky: deciding when to call tools, validating schemas, and chaining tool results back into the final answer
- Reliability under real-world conditions
  - Handling timeouts, rate limits, and transient network issues
  - Avoiding cascading failures when a node or provider starts erroring
  - Avoiding thundering herds caused by naive concurrency
- Performance and cost control
  - Efficient parallelism across independent steps
  - Ability to run local models (e.g., Ollama) when cost or data locality matters
- Developer ergonomics
  - Python-centric teams want simple APIs but also need serious runtime control and health visibility
  - Packaging and distribution for mixed Rust/Python stacks can be painful
Technical Solutions (as implemented)
- Workflow engine (Rust)
  - Directed acyclic workflow graph with dependency validation (cycle checks, edge correctness, uniqueness constraints)
  - Dependency-aware batch execution and parallelism: independent nodes run together (see the sketch below)
  - Context model automatically captures node outputs by node ID and name, and injects parent outputs into agent prompts as a structured preamble
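To make the scheduling model concrete, here is a minimal Python sketch of wave-based execution over a validated DAG. It is illustrative only; GraphBit's engine is Rust, and `execution_waves` and its arguments are our names, not library API.

```python
# Illustrative wave scheduling: nodes whose dependencies are all satisfied
# form a "wave", and every node within a wave can run in parallel.
from collections import defaultdict

def execution_waves(nodes, edges):
    """nodes: iterable of node IDs; edges: (parent, child) pairs."""
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for parent, child in edges:
        indegree[child] += 1
        children[parent].append(child)

    ready = [n for n, d in indegree.items() if d == 0]
    waves = []
    while ready:
        waves.append(ready)                  # all nodes here are independent
        nxt = []
        for n in ready:
            for c in children[n]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    nxt.append(c)
        ready = nxt
    if sum(len(w) for w in waves) != len(indegree):
        raise ValueError("cycle detected")   # the validation step mentioned above
    return waves

# execution_waves("abcd", [("a","b"), ("a","c"), ("b","d"), ("c","d")])
# -> [['a'], ['b', 'c'], ['d']]
```

Running each wave through a thread pool or an async gather gives exactly the dependency-aware parallelism described above.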
- Agent system with tool orchestration
  - Agents validate LLM configuration at creation time (proactive failure detection)
  - Two-phase tool flow (sketched below):
    - The agent signals the required tool calls
    - Python executes the registered tools and returns the results to the agent for a final answer
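A minimal sketch of that two-phase loop, assuming a hypothetical LLM client whose responses expose `tool_calls` entries with `name`, `arguments`, and `id` fields; none of these names are GraphBit's actual API:

```python
# Phase 1: the model may request tools instead of answering.
# Phase 2: the host runs the registered tools and feeds results back.
REGISTRY = {"add": lambda a, b: a + b}   # tools registered from Python

def run_agent(llm, messages: list) -> str:
    response = llm.complete(messages)            # phase 1
    while response.tool_calls:
        for call in response.tool_calls:
            tool = REGISTRY[call.name]           # lookup (schema validation goes here)
            result = tool(**call.arguments)      # execute the registered tool
            messages.append({"role": "tool",
                             "tool_call_id": call.id,
                             "content": str(result)})
        response = llm.complete(messages)        # phase 2: model writes the final answer
    return response.content
```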
- Multi-LLM support with a unified interface
  - Common request/response abstractions handle messages, tool calls, and usage accounting consistently (see the sketch below)
  - Providers implemented and verified: OpenAI, Anthropic, Ollama (local)
  - Additional providers are wired into the factory for broader coverage
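A sketch of the unification idea, with illustrative names rather than GraphBit's Rust types: each provider adapter maps its vendor payload into one normalized response shape, so orchestration code never branches on vendor.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class LlmResponse:
    content: str
    tool_calls: list = field(default_factory=list)  # normalized tool-call records
    finish_reason: str = "stop"                     # normalized across vendors
    usage: dict = field(default_factory=dict)       # prompt/completion token counts

class LlmProvider(Protocol):
    def complete(self, messages: list[dict]) -> LlmResponse: ...

class EchoProvider:
    """Stand-in for the OpenAI/Anthropic/Ollama adapters: each one
    translates its vendor payload into the shared LlmResponse shape."""
    def complete(self, messages: list[dict]) -> LlmResponse:
        return LlmResponse(content=messages[-1]["content"])
```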
- Production resilience
  - Retries with exponential backoff and jitter, driven by error classification (timeouts, rate limits, etc.)
  - Circuit breakers with Closed/Open/Half-Open states to prevent cascading failures and enable timed recovery (both mechanisms sketched below)
  - Per-node-type concurrency manager to avoid global bottlenecks and adapt concurrency to workload shape
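The two mechanisms, sketched with our own parameter names (not GraphBit's): retries use exponential backoff with full jitter, and the breaker trips Closed → Open after consecutive failures, then probes Half-Open after a cooldown.

```python
import random
import time

TRANSIENT = (TimeoutError, ConnectionError)  # stand-ins for classified transient errors

def with_retries(call, max_attempts=5, base=0.5, cap=30.0):
    for attempt in range(max_attempts):
        try:
            return call()
        except TRANSIENT:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: random sleep up to the exponential bound, so many
            # clients retrying at once don't stampede a recovering provider.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

class CircuitBreaker:
    def __init__(self, threshold=5, cooldown=30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open")  # Open: fail fast
            self.opened_at = None                   # Half-Open: allow a probe call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()   # trip back to Open
            raise
        self.failures = 0                           # success: Closed again
        return result
```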
- Python-first developer experience
  - PyO3 bindings expose a clear API: Workflow, Node, Executor, LlmConfig/Client, Tool registry, Embeddings, Text splitters, Document loader (a hypothetical usage sketch follows below)
  - Health and runtime utilities: init, configure runtime, get system info, health check, shutdown
  - Document loader, text splitters, and embeddings are available from Python for building RAG-like, document-centric pipelines
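To show the intended ergonomics, a hypothetical end-to-end snippet. The class names match the binding list above, but every constructor, argument, and method below is an illustrative assumption, not the documented API:

```python
# Hypothetical usage sketch; signatures are assumptions, not documented API.
from graphbit import Workflow, Node, Executor, LlmConfig

config = LlmConfig.openai(model="gpt-4o-mini")                   # assumed constructor
wf = Workflow("summarize-and-review")

draft = Node.agent(name="draft", prompt="Summarize: {input}")    # assumed signature
review = Node.agent(name="review", prompt="Critique the summary.")

draft_id = wf.add_node(draft)       # assumed: adding a node returns its ID
review_id = wf.add_node(review)
wf.connect(draft_id, review_id)     # review depends on draft's output

result = Executor(config).execute(wf)
```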
- Local and cloud flexibility
  - Seamless use of local LLMs via Ollama (with model-existence checks and auto-pull)
  - Cloud providers supported through the same abstractions; swap or mix as needed (see below)
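Continuing the hypothetical snippet above, swapping local for cloud becomes a one-line configuration change; the constructors and model names here are again assumptions:

```python
import os
from graphbit import LlmConfig  # as above; signatures assumed

if os.environ.get("GRAPHBIT_LOCAL") == "1":
    config = LlmConfig.ollama(model="llama3.1")          # local: data stays on-box
else:
    config = LlmConfig.anthropic(model="claude-3-5-sonnet-latest")  # cloud fallback
# The workflow definition itself is untouched by the swap.
```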
Differentiation
- Rust core + Python interface
  - High performance, safety, and concurrency control in Rust; an ergonomic API in Python
  - Better throughput and lower overhead than Python-only orchestration layers
- Reliability-first agent orchestration
  - Built-in retries, jitter, and circuit breakers at the workflow-engine level, not bolted on
  - Per-node-type concurrency avoids global semaphore bottlenecks and lets you tune hotspots (see the sketch below)
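A toy asyncio version of the per-node-type idea (GraphBit's manager lives in Rust, and the limits here are invented): each node type gets its own semaphore, so one slow agent step cannot starve cheap transform or loader steps behind a single global lock.

```python
import asyncio

LIMITS = {"agent": 4, "transform": 32, "document_loader": 8}  # example numbers
SEMAPHORES = {kind: asyncio.Semaphore(n) for kind, n in LIMITS.items()}

async def run_node(kind: str, work):
    # Acquire only this node type's semaphore; other types are unaffected.
    async with SEMAPHORES[kind]:
        return await work()
```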
- Opinionated context passing for agents
  - Automatic, structured injection of parent outputs into prompts (titled sections plus a JSON block) improves answer quality in multi-step flows (sketched below)
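A sketch of what such a preamble builder might look like; the section titles and formatting are our assumptions, not a dump of GraphBit's template:

```python
import json

FENCE = "`" * 3  # avoids a literal triple backtick inside this example

def build_prompt(parent_outputs: dict, task: str) -> str:
    """Prepend each parent node's output as a titled section plus a JSON block."""
    sections = []
    for name, output in parent_outputs.items():
        body = json.dumps(output, indent=2)
        sections.append(f"## Output of '{name}'\n{FENCE}json\n{body}\n{FENCE}")
    return "\n\n".join(sections + [task])
```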
- Vendor flexibility by design
  - Unified provider model lets you change providers or run locally with minimal code change
  - Providers normalize tool calls and finish reasons for consistent orchestration behavior
- Rich Python surface area beyond LLM calls
  - Embeddings (OpenAI/HuggingFace), text splitting (character, token, sentence, recursive, etc.), and document loading
  - Makes it practical to build RAG and document-centric pipelines without external glue libraries (see the sketch below)
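A hypothetical RAG-prep sketch using those surfaces; the constructor names and arguments below are assumptions, not documented API:

```python
# Hypothetical names based on the surfaces listed above.
from graphbit import DocumentLoader, CharacterSplitter, EmbeddingClient

text = DocumentLoader().load("report.pdf")                          # assumed
chunks = CharacterSplitter(chunk_size=800, chunk_overlap=100).split_text(text)
vectors = EmbeddingClient.openai(model="text-embedding-3-small").embed(chunks)
# `vectors` can now feed a vector store; retrieval results flow back into an
# Agent node's prompt via the context-passing mechanism described earlier.
```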
Use Cases Well Suited to GraphBit
- Multi-step content pipelines
  - Research → Draft → Edit → Review → Format across multiple agents and tools, with parallel branches and dependency-aware joins
- RAG and document workflows
  - Document loading, splitting, embeddings, and agent steps orchestrated in one system
- Tool-augmented agents
  - LLMs that call Python tools (calculators, data access, business logic) in a controlled two-phase flow
- Hybrid local/cloud deployments
  - Use local models via Ollama for cost or data locality, and fall back to cloud providers as needed
- Enterprise-grade AI job runners
  - Long-running, dependency-heavy tasks that must survive provider hiccups, rate limits, and transient network issues while maximizing throughput
Notes on Current Maturity (to set expectations)
- Fully working nodes today: Agent, Condition, Transform, Delay, DocumentLoader
- Additional node types declared but not yet executed in the engine: Split, Join, HttpRequest, Custom
- Providers verified: OpenAI, Anthropic, Ollama; others are factory-wired and may require validation
- Streaming support is scaffolded at the interface level but not consistently implemented across providers
- CI workflows exist but are disabled; tests and performance harnesses exist, with room to expand coverage and automation
GraphBit is designed to give Python developers a powerful yet safe foundation for agentic AI: a validated workflow graph, consistent multi-provider LLM integration, robust tool orchestration, and production-grade reliability controls—without sacrificing the speed and efficiency of a Rust core.