# API Token Speed Benchmark: Compare LLM API Provider Performance
When developing AI applications, understanding the performance characteristics
of different LLM API providers is crucial for making informed decisions. The
API Token Speed Benchmark tool provides comprehensive metrics to compare token
generation speed, latency, and throughput across multiple providers.
## Why Benchmark LLM API Providers?
Different LLM API providers offer varying performance characteristics that can
significantly impact your application's user experience. Factors like Time To
First Token (TTFT), tokens-per-second throughput, and total generation time
vary between providers, models, and even specific API endpoints.
Benchmarking helps you:
- Identify the fastest provider for your specific use case
- Compare latency and throughput across different models
- Verify API connectivity and authentication
- Test new API endpoints or experimental models
- Optimize cost-performance trade-offs
## Key Performance Metrics
The benchmark tool measures several critical performance indicators:
- **TTFT (Time To First Token)**: Measures the latency before the first token arrives, indicating how quickly the model starts generating a response
- **TPS (Tokens Per Second)**: Calculates the generation throughput, showing how fast tokens are produced
- **Total Time**: Captures the complete generation duration from request to final token
- **Input/Output Tokens**: Reports token counts from API usage data, with a fallback estimate of 4 characters per token
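As a sketch of how these metrics relate, all four can be derived from three wall-clock timestamps and a token count. The function names here are illustrative, not the tool's internals:

```python
import time  # in a real run, timestamps would come from time.monotonic()

def estimate_tokens(text):
    """Fallback estimate when the API omits usage data: ~4 characters per token."""
    return max(1, len(text) // 4)

def compute_metrics(t_request, t_first_token, t_done, output_text, output_tokens=None):
    """Derive TTFT, total time, and TPS from request/first-token/final timestamps."""
    tokens = output_tokens if output_tokens is not None else estimate_tokens(output_text)
    ttft = t_first_token - t_request          # time to first token
    total = t_done - t_request                # full request duration
    gen_time = t_done - t_first_token         # pure generation window
    tps = tokens / gen_time if gen_time > 0 else float("inf")
    return {"ttft_s": ttft, "total_s": total, "tps": tps, "output_tokens": tokens}

# Synthetic example: 0.4 s to first token, 2.4 s total, 100 tokens generated.
print(compute_metrics(10.0, 10.4, 12.4, "", output_tokens=100))
```

With those numbers, the generation window is 2.0 s, so throughput works out to roughly 50 tokens per second.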
## Getting Started with Benchmarking
The tool requires Python 3 with the requests library and reads configuration
from ~/.openclaw/openclaw.json. Here's how to begin:
### 1. List Available Targets

Start by checking what API targets are configured:

```bash
python3 main.py --targets
```

### 2. Run Benchmark on Specific Target

Test a particular provider or model:

```bash
python3 main.py run --label <target-label>
```

### 3. Compare All Targets

Run comprehensive benchmarks across all configured providers:

```bash
python3 main.py run --all
```

### 4. Verify API Connectivity

Before running full benchmarks, check if a target is reachable:

```bash
python3 main.py check --label <target-label>
```
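Under the hood, a connectivity check can be as simple as one authenticated request with the `requests` library the tool depends on. The helper below is a hypothetical sketch, not the tool's actual implementation:

```python
import requests

def check_target(base_url, api_key, timeout=10.0):
    """Hypothetical connectivity probe: send one authenticated request and
    report whether the endpoint responds at all (any non-5xx status)."""
    try:
        resp = requests.get(
            base_url,
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=timeout,
        )
        return resp.status_code < 500
    except requests.RequestException:
        # DNS failure, refused connection, timeout, etc.
        return False
```

A check like this catches bad base URLs and network problems early, though a 401/403 response would still need a closer look at the key itself.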
## Configuration and Security
The tool reads configuration from ~/.openclaw/openclaw.json. Targets are
defined in the models.providers section with baseUrl, apiKey, api format, and
model configurations.
**Security Best Practice**: Never hardcode API keys in configuration files.
Use environment variable placeholders like `"apiKey": "${ANTHROPIC_API_KEY}"` to
read keys securely from your environment.
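A minimal sketch of how such `${VAR}` placeholders could be expanded at load time (the helper name is an assumption, not the tool's code):

```python
import os
import re

_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def resolve_placeholders(value):
    """Replace ${VAR} placeholders with environment values; fail loudly if unset."""
    def _sub(match):
        var = match.group(1)
        if var not in os.environ:
            raise KeyError(f"environment variable {var} is not set")
        return os.environ[var]
    return _PLACEHOLDER.sub(_sub, value)

os.environ["ANTHROPIC_API_KEY"] = "sk-demo"  # for illustration only
print(resolve_placeholders("${ANTHROPIC_API_KEY}"))  # → sk-demo
```

Failing loudly on an unset variable is deliberate: a silently empty API key would surface later as a confusing authentication error.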
Example provider configuration:

```json
{
  "models": {
    "providers": {
      "my-provider": {
        "baseUrl": "https://api.example.com",
        "apiKey": "sk-xxx",
        "api": "openai-completions",
        "models": [
          {
            "id": "model-name",
            "api": "openai-completions"
          }
        ]
      }
    }
  }
}
```
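Given a file in that shape, enumerating benchmark targets is straightforward. A sketch (function name assumed, not taken from the tool):

```python
import json
from pathlib import Path

def list_targets(config_path="~/.openclaw/openclaw.json"):
    """Read the config and yield (provider, model-id) pairs as benchmark targets."""
    config = json.loads(Path(config_path).expanduser().read_text())
    providers = config.get("models", {}).get("providers", {})
    for name, provider in providers.items():
        for model in provider.get("models", []):
            yield name, model["id"]
```

Iterating over this generator is all `--targets` would need to print its list, and `run --all` could feed the same pairs into the benchmark loop.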
## Advanced Options
The benchmark tool offers several options for fine-tuning your tests:
- `--repeat N`: Number of runs per prompt level (default: 1)
- `--category`: Run specific prompt categories (short, medium, long)
- `--quiet`: Suppress progress output
- `--timeout N`: Request timeout in seconds (default: 120)
- `--table`: Output as formatted table instead of JSON
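Wired up with `argparse`, the options above might look roughly like this (a sketch; the tool's real parser may differ):

```python
import argparse

def build_parser():
    """CLI mirroring the options listed above; defaults match the documentation."""
    parser = argparse.ArgumentParser(description="API token speed benchmark")
    parser.add_argument("--repeat", type=int, default=1,
                        help="number of runs per prompt level")
    parser.add_argument("--category", choices=["short", "medium", "long"],
                        help="run a specific prompt category")
    parser.add_argument("--quiet", action="store_true",
                        help="suppress progress output")
    parser.add_argument("--timeout", type=int, default=120,
                        help="request timeout in seconds")
    parser.add_argument("--table", action="store_true",
                        help="output a formatted table instead of JSON")
    return parser

args = build_parser().parse_args(["--repeat", "3", "--category", "short"])
print(args.repeat, args.category, args.timeout)  # → 3 short 120
```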
## Interpreting Results
The benchmark output provides detailed metrics for each test run. Pay
attention to:
- Consistency across multiple runs
- Performance differences between prompt lengths
- TTFT vs throughput trade-offs
- Token count accuracy and estimation methods
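Run-to-run consistency is easy to quantify with the standard library, for example as a coefficient of variation over repeated TPS samples (an illustrative helper, not part of the tool):

```python
from statistics import mean, stdev

def consistency(tps_runs):
    """Summarize repeated TPS measurements: mean, standard deviation,
    and coefficient of variation (stdev as a percentage of the mean)."""
    avg = mean(tps_runs)
    sd = stdev(tps_runs) if len(tps_runs) > 1 else 0.0
    return {
        "mean_tps": avg,
        "stdev_tps": sd,
        "cv_pct": 100 * sd / avg if avg else 0.0,
    }

print(consistency([48.2, 51.0, 49.5]))
```

A low coefficient of variation across `--repeat` runs suggests the provider's throughput is stable; a high one means a single measurement should not be trusted.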
## Practical Use Cases
Consider benchmarking when:
- Choosing between API providers for a new project
- Evaluating performance improvements after model updates
- Testing geographic latency differences
- Comparing cost vs performance across different pricing tiers
- Validating API stability before production deployment
## Supported API Formats
The tool supports multiple API formats:
- `anthropic-messages`: Anthropic's message-based API format
- `openai-completions`: OpenAI's completions API format
- `openai-responses`: OpenAI's responses API format
This flexibility allows you to benchmark across different providers using
their native API formats while maintaining consistent testing methodology.
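As an illustration, supporting multiple formats behind one benchmark loop comes down to a small dispatcher that maps each label to an endpoint path and request body. The paths and payload shapes below follow the public Anthropic and OpenAI APIs, but treat the mapping as an assumption rather than the tool's exact behavior:

```python
def build_request(api_format, model, prompt, max_tokens=256):
    """Map an API-format label to (endpoint path, JSON request body)."""
    if api_format == "anthropic-messages":
        return "/v1/messages", {
            "model": model,
            "max_tokens": max_tokens,  # required by the Anthropic Messages API
            "messages": [{"role": "user", "content": prompt}],
        }
    if api_format == "openai-completions":
        return "/v1/chat/completions", {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
    if api_format == "openai-responses":
        return "/v1/responses", {"model": model, "input": prompt}
    raise ValueError(f"unknown api format: {api_format}")
```

Because only the request construction differs, the timing code that measures TTFT and TPS can stay identical across providers, which is what keeps the methodology consistent.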
## Conclusion
API benchmarking is an essential practice for developers working with LLM
services. By systematically measuring and comparing performance across
providers, you can make data-driven decisions that optimize your application's
responsiveness and user experience.
Whether you're building chatbots, content generation tools, or complex AI
applications, understanding the performance characteristics of your chosen API
providers will help you deliver better products to your users.
The skill can be found at `benchmark/SKILL.md`.