Longhorn is a lightweight, CNCF-hosted distributed block storage system for Kubernetes. It turns your cluster nodes' disks into a highly available storage pool with snapshots, backups, and disaster recovery.
It is free, open source, and backed by SUSE/Rancher: no license keys, no storage fees.
Why Use the Longhorn API?
- Distributed block storage — replicated across nodes for HA
- Automated snapshots — recurring snapshot schedules via API
- Disaster recovery — backup to S3/NFS with one API call
- Volume management — create, resize, and clone volumes programmatically
Quick Setup
1. Install Longhorn
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
# Access UI
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
2. List Volumes
LONGHORN_URL="http://localhost:8080"
curl -s "$LONGHORN_URL/v1/volumes" | jq '.data[] | {name: .name, size: .size, state: .state, robustness: .robustness, replicas: (.numberOfReplicas)}'
3. Create a Volume
curl -s -X POST "$LONGHORN_URL/v1/volumes" \
-H "Content-Type: application/json" \
-d '{"name":"my-data-vol","size":"10Gi","numberOfReplicas":3,"frontend":"blockdev"}'
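One caveat: depending on your Longhorn version, the REST API may expect `size` as a byte-count string (which is what the Longhorn UI sends) rather than a Kubernetes quantity like "10Gi". A small sketch of a payload builder, with a hypothetical helper name `gib_to_bytes_str`, that mirrors the request above:

```python
def gib_to_bytes_str(gib: int) -> str:
    """Convert a GiB count to a byte-count string (the form the Longhorn UI sends)."""
    return str(gib * 1024**3)

def volume_payload(name: str, gib: int, replicas: int = 3) -> dict:
    """Build the body for POST /v1/volumes; field names match the curl example above."""
    return {
        "name": name,
        "size": gib_to_bytes_str(gib),
        "numberOfReplicas": replicas,
        "frontend": "blockdev",
    }

print(volume_payload("my-data-vol", 10))
```

If "10Gi" is rejected with a parse error on your version, switching to the byte-string form usually resolves it.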
4. Create a Snapshot
curl -s -X POST "$LONGHORN_URL/v1/volumes/my-data-vol?action=snapshotCreate" \
-H "Content-Type: application/json" \
-d '{"name":"backup-before-upgrade"}'
5. Backup to S3
# First configure backup target in settings
curl -s -X PUT "$LONGHORN_URL/v1/settings/backup-target" \
-H "Content-Type: application/json" \
-d '{"value":"s3://my-backups@us-east-1/longhorn"}'
# Create backup from snapshot
curl -s -X POST "$LONGHORN_URL/v1/volumes/my-data-vol?action=snapshotBackup" \
-H "Content-Type: application/json" \
-d '{"name":"backup-before-upgrade"}'
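Backups run asynchronously, so after the `snapshotBackup` call you typically poll the volume until the backup completes. A sketch of that loop; the field names (`backupStatus`, `snapshot`, `state`) are assumptions about the volume object's shape, so check them against a live `/v1/volumes/<name>` response on your version:

```python
import time

def backup_finished(volume: dict, snapshot: str) -> bool:
    """True once the backup for the given snapshot reports Completed.
    Field names here are assumptions; verify against your API response."""
    for b in volume.get("backupStatus", []):
        if b.get("snapshot") == snapshot:
            return b.get("state") == "Completed"
    return False

def wait_for_backup(get_volume, snapshot: str, timeout: int = 600) -> bool:
    """Poll a caller-supplied get_volume() callable (e.g. a requests.get wrapper
    around /v1/volumes/<name>) until the backup completes or timeout elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if backup_finished(get_volume(), snapshot):
            return True
        time.sleep(5)
    return False
```

Passing the fetcher in as a callable keeps the polling logic testable without a live cluster.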
Python Example
import requests

LONGHORN = "http://localhost:8080"

# List volumes
vols = requests.get(f"{LONGHORN}/v1/volumes").json()
for v in vols["data"]:
    size_gib = int(v["size"]) / (1024**3)  # size is reported in bytes
    print(f"Volume: {v['name']} | Size: {size_gib:.0f}GiB | State: {v['state']} | Replicas: {v['numberOfReplicas']}")

# List nodes
nodes = requests.get(f"{LONGHORN}/v1/nodes").json()
for n in nodes["data"]:
    conditions = {c['type']: c['status'] for c in n.get('conditions', [])}
    print(f"Node: {n['name']} | Schedulable: {n.get('allowScheduling')} | Ready: {conditions.get('Ready', '?')}")
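Building on the volume listing above, the `robustness` field is the quickest health signal: a couple of pure helpers can turn the raw list into a fleet summary. A minimal sketch (the healthy/degraded/faulted values match what the volume list returns; `summarize` and `degraded_volumes` are hypothetical names):

```python
from collections import Counter

def summarize(volumes: list[dict]) -> dict:
    """Tally volumes by robustness (healthy / degraded / faulted)."""
    return dict(Counter(v.get("robustness", "unknown") for v in volumes))

def degraded_volumes(volumes: list[dict]) -> list[str]:
    """Names of volumes that have lost at least one replica."""
    return [v["name"] for v in volumes if v.get("robustness") == "degraded"]
```

Feeding `vols["data"]` from the example above into these gives a one-line cluster health check suitable for a cron alert.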
Key Endpoints
| Use Case | Endpoint | Method |
|---|---|---|
| List volumes | /v1/volumes | GET |
| Create volume | /v1/volumes | POST |
| Snapshot | /v1/volumes/{name}?action=snapshotCreate | POST |
| Backup | /v1/volumes/{name}?action=snapshotBackup | POST |
| List nodes | /v1/nodes | GET |
| List backups | /v1/backupvolumes | GET |
| Settings | /v1/settings | GET/PUT |
| Recurring jobs | /v1/recurringjobs | GET/POST |
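The recurring-jobs endpoint in the table above takes a body shaped like the RecurringJob CRD. A sketch of a payload builder for POST /v1/recurringjobs; the field names (`task`, `cron`, `retain`, `concurrency`, `groups`) mirror the CRD spec, but verify them against your Longhorn version before relying on this:

```python
def recurring_job_payload(name: str, cron: str, task: str = "snapshot",
                          retain: int = 7, concurrency: int = 2) -> dict:
    """Body for POST /v1/recurringjobs. task is "snapshot" or "backup";
    retain caps how many snapshots/backups the job keeps."""
    if task not in ("snapshot", "backup"):
        raise ValueError(f"unknown task: {task}")
    return {
        "name": name,
        "task": task,
        "cron": cron,
        "retain": retain,
        "concurrency": concurrency,
        "groups": ["default"],
        "labels": {},
    }

# e.g. nightly snapshots at 02:00, keeping the last 7
print(recurring_job_payload("nightly-snap", "0 2 * * *"))
```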