At Air Pipe, we faced a common challenge in container orchestration: finding the right balance between functionality and complexity. Kubernetes is powerful but complex, while Docker Swarm lacks crucial features like autoscaling - we needed something that sat squarely in the middle. So we decided to build Orbit, a lightweight container orchestration tool written in Rust that we're actively developing and improving.
Why We Started
Our journey began with a specific need at Air Pipe. Running a shared-nothing architecture at the edge, we needed a simple way to scale HTTP/TCP-based containers without the overhead of complex infrastructure or additional dependencies. We wanted something that could be deployed as a single binary, yet provide robust container management capabilities.
The existing solutions didn't quite fit:
Kubernetes: Too complex for our edge use case, with a large learning curve and resource footprint
Docker Swarm: Too basic, lacking crucial features like autoscaling
Other solutions: Either too complex or too tightly coupled with specific cloud providers or vendors
Starting Simple: Basic Container Management
We started with the basics - a simple Rust application that could manage individual containers. The choice of Rust was deliberate:
Memory safety without garbage collection
High performance and low resource usage
Strong type system and excellent concurrency model
Growing ecosystem of libraries and tools
Our first version could do basic container operations:
```rust
async fn start_containers(
    &self,
    service_name: &str,
    containers: &[Container],
) -> Result<Vec<(String, String)>> {
    // Basic container startup logic
}
```
Evolution to Pod-Based Architecture
As we developed the system, we realized that managing groups of containers together (pods) would provide better isolation and resource management. This led to a more sophisticated container management system:
```rust
pub struct InstanceMetadata {
    pub uuid: Uuid,
    pub created_at: SystemTime,
    pub network: String,
    pub containers: Vec<ContainerMetadata>,
    pub image_hash: HashMap<String, String>,
}
```
This structure allowed us to track related containers and their metadata together, making operations like scaling and updates more coherent.
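As a rough illustration of how this grouping is used, here is a stdlib-only sketch of assembling pod metadata. `ContainerMetadata` and its fields are stand-ins of our own invention, and a plain `String` replaces the `uuid` crate's `Uuid` type so the snippet stays self-contained:

```rust
use std::collections::HashMap;
use std::time::SystemTime;

// Hypothetical stand-in for Orbit's real per-container metadata.
#[derive(Debug)]
struct ContainerMetadata {
    name: String,
    id: String,
}

// Mirrors the InstanceMetadata shown above, with String in place of Uuid.
#[derive(Debug)]
struct InstanceMetadata {
    uuid: String,
    created_at: SystemTime,
    network: String,
    containers: Vec<ContainerMetadata>,
    image_hash: HashMap<String, String>,
}

fn main() {
    let mut image_hash = HashMap::new();
    image_hash.insert("web".to_string(), "sha256:abc123".to_string());

    let pod = InstanceMetadata {
        uuid: "pod-0".to_string(), // real code would generate a Uuid
        created_at: SystemTime::now(),
        network: "web-service-net".to_string(),
        containers: vec![ContainerMetadata {
            name: "web".to_string(),
            id: "container-0".to_string(),
        }],
        image_hash,
    };

    // Grouping containers per pod keeps scaling and update operations coherent:
    // every container in `pod.containers` shares the same network and lifecycle.
    println!("pod {} has {} container(s)", pod.uuid, pod.containers.len());
}
```

Because all containers of a pod hang off one record, a scale-up or rolling update operates on whole `InstanceMetadata` values rather than chasing individual containers.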
The Game Changer: Integrating Pingora
One of our most significant technical decisions was choosing Cloudflare's Pingora framework for our proxy layer. This wasn't just about load balancing - it was about building a robust, high-performance proxy system that could handle production workloads efficiently.
Why Pingora?
Written in Rust, aligning with our technology stack
Battle-tested at Cloudflare's scale
Excellent performance characteristics
Built-in health checking and automatic failover
The integration looked something like this:
```yaml
name: web-service
instance_count:
  min: 2
  max: 5
spec:
  containers:
    - name: web
      image: airpipeio/infoapp:latest
      ports:
        - port: 80
          node_port: 30080 # This enables Pingora proxy
```
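Conceptually, once a `node_port` is set, the proxy layer distributes incoming requests across every healthy instance of the service. We won't reproduce Pingora's actual API here; this stdlib-only sketch (all names ours) just shows the kind of upstream-selection logic that sits behind the port, using a simple round-robin counter:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative round-robin selector over a service's instances.
// Orbit's real proxy layer delegates selection, health checking, and
// failover to Pingora; this is only a conceptual sketch.
struct RoundRobin {
    upstreams: Vec<String>, // e.g. "172.18.0.2:80"
    next: AtomicUsize,
}

impl RoundRobin {
    fn new(upstreams: Vec<String>) -> Self {
        Self { upstreams, next: AtomicUsize::new(0) }
    }

    fn pick(&self) -> Option<&str> {
        if self.upstreams.is_empty() {
            return None;
        }
        // Atomic counter: safe to call from many proxy tasks at once.
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.upstreams.len();
        Some(self.upstreams[i].as_str())
    }
}

fn main() {
    let lb = RoundRobin::new(vec![
        "172.18.0.2:80".to_string(),
        "172.18.0.3:80".to_string(),
    ]);
    // Requests alternate between the two instances.
    println!("{:?}", lb.pick());
    println!("{:?}", lb.pick());
}
```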
Adding Intelligence: Resource-Based Autoscaling
One of our proudest features is the intelligent autoscaling system. Instead of just scaling based on simple metrics, we implemented a sophisticated resource monitoring and scaling system:
```yaml
resource_thresholds:
  cpu_percentage: 80          # CPU usage threshold
  cpu_percentage_relative: 90 # CPU usage relative to limit
  memory_percentage: 85       # Memory usage threshold
metrics_strategy: max         # Strategy for pod metrics
```
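The `metrics_strategy` field controls how per-container readings roll up into one pod-level number before being compared against the thresholds. A minimal sketch of that aggregation, with enum and function names of our own choosing rather than Orbit's internals:

```rust
// How per-container metrics combine into a single pod-level value.
// Names are illustrative; Orbit's internal types may differ.
enum MetricsStrategy {
    Max,     // scale on the hottest container
    Average, // scale on the pod-wide mean
}

fn aggregate(readings: &[f64], strategy: &MetricsStrategy) -> f64 {
    match strategy {
        MetricsStrategy::Max => readings.iter().cloned().fold(0.0, f64::max),
        MetricsStrategy::Average => {
            if readings.is_empty() {
                0.0
            } else {
                readings.iter().sum::<f64>() / readings.len() as f64
            }
        }
    }
}

fn main() {
    let cpu = [45.0, 92.0, 60.0]; // per-container CPU %
    // With `max`, one hot container (92%) is enough to cross an 80% threshold,
    // even though the average sits well below it.
    println!("max = {}", aggregate(&cpu, &MetricsStrategy::Max));
    println!("avg = {:.1}", aggregate(&cpu, &MetricsStrategy::Average));
}
```

Choosing `max` makes scaling react to a single overloaded container, while an averaging strategy smooths out brief per-container spikes.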
The system continuously monitors these metrics and makes scaling decisions based on configurable thresholds. Here's a practical example:
```yaml
name: demo-service
instance_count:
  min: 2
  max: 5
memory_limit: "1Gi"
cpu_limit: "1.0"
resource_thresholds:
  cpu_percentage: 80
  memory_percentage: 85
spec:
  containers:
    - name: infoapp
      image: airpipeio/infoapp:latest
      ports:
        - port: 80
          node_port: 30080
      memory_limit: "512Mi"
      cpu_limit: "0.5"
```
In this configuration, if CPU usage exceeds 80% or memory usage exceeds 85% across instances, Orbit automatically scales up the service (up to the max instance count). When resource usage decreases, it scales down gracefully while maintaining the minimum instance count.
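The decision rule described above can be sketched as a pure function: compare aggregated usage against the thresholds and clamp the result to the configured instance range. This is only an illustration under our own assumptions (the function, the one-step adjustment, and the 50%-of-threshold scale-down point are ours, not Orbit's exact policy):

```rust
// Illustrative scaling decision; not Orbit's actual implementation.
struct Thresholds {
    cpu_percentage: f64,
    memory_percentage: f64,
}

fn desired_instances(
    current: u32,
    min: u32,
    max: u32,
    cpu_pct: f64,
    mem_pct: f64,
    t: &Thresholds,
) -> u32 {
    if cpu_pct > t.cpu_percentage || mem_pct > t.memory_percentage {
        // Either threshold breached: scale up one step,
        // never past the configured maximum.
        (current + 1).min(max)
    } else if cpu_pct < t.cpu_percentage * 0.5 && mem_pct < t.memory_percentage * 0.5 {
        // Well under both thresholds: scale down gracefully,
        // never below the configured minimum.
        current.saturating_sub(1).max(min)
    } else {
        current
    }
}

fn main() {
    let t = Thresholds { cpu_percentage: 80.0, memory_percentage: 85.0 };
    // CPU at 90% pushes a 2-instance service up toward the max of 5.
    println!("{}", desired_instances(2, 2, 5, 90.0, 50.0, &t));
    // Idle load keeps the service clamped at the minimum of 2.
    println!("{}", desired_instances(2, 2, 5, 10.0, 10.0, &t));
}
```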
Where We Are Today
Today, Orbit stands as a testament to what's possible when you focus on solving a specific problem well. It's a single binary under 5MB that provides:
Automated container scaling
Service discovery
High-performance proxying
Volume management
Rolling updates
Prometheus metrics integration
Try It Yourself
Want to see Orbit in action? Here's a simple example to get started:
- Create a configuration file (e.g., `web-service.yml`):

```yaml
name: web-service
instance_count:
  min: 2
  max: 5
resource_thresholds:
  cpu_percentage: 80
  memory_percentage: 85
spec:
  containers:
    - name: web
      image: airpipeio/infoapp:latest
      ports:
        - port: 80
          node_port: 30080
```

- Start Orbit:

```shell
orbit -c /path/to/configs
```
Visit http://localhost:30080 to see your service in action, and watch Orbit automatically manage scaling based on resource usage.
Looking Forward
We're continuing to evolve Orbit based on real-world usage and community feedback. Our focus remains on maintaining simplicity while adding powerful features that make container orchestration easier and more efficient.
Want to learn more or contribute? Check out our GitHub repository or join our Discord community.
About Air Pipe
Orbit is just one piece of the puzzle in our mission to simplify and improve the software development lifecycle. At Air Pipe, we're building tools and platforms that make it easier to build, test, and deploy applications. Our shared-nothing architecture and edge computing approach allows for highly efficient and scalable deployments.
Want to learn more about how Air Pipe can help streamline your development workflow? Visit airpipe.io to discover our complete suite of development tools and solutions. See how our platform can help you build better software, faster.
💡 Visit Air Pipe
💡 Find us on Discord