## TL;DR

Grafana Loki is a horizontally scalable log aggregation system inspired by Prometheus. Unlike Elasticsearch, it indexes only labels, not full text, which makes it roughly 10x cheaper to run while still letting you search logs efficiently with LogQL.
## What Is Loki?

Loki is Grafana's log aggregation solution:
- Label-based indexing — index metadata, not log content
- ~10x cheaper than Elasticsearch at the same log volume
- LogQL — Prometheus-like query language for logs
- Multi-tenant — built for SaaS and shared infrastructure
- S3/GCS storage — use cheap object storage for log data
- Free — Apache 2.0 or Grafana Cloud free tier (50 GB/month)
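To make the label-based model concrete, here is an illustrative sketch (plain JavaScript, not the Loki API; the function name is mine) of the split Loki makes: a small set of labels forms the indexed stream identity, while the log line itself is only compressed, stored, and scanned at query time.

```javascript
// Illustrative only: Loki indexes the labels, never the line content.
function toStream(event) {
  const { app, env, level, ...rest } = event;
  return {
    labels: { app, env, level },  // small, low-cardinality, indexed metadata
    line: JSON.stringify(rest),   // arbitrary content, stored but not indexed
  };
}

const stream = toStream({
  app: "my-service",
  env: "production",
  level: "error",
  message: "Connection timeout to database",
  attempt: 3,
});
```

This is why label choice matters in Loki: every distinct label combination creates a new stream, so high-cardinality values (user IDs, request IDs) belong in the line, not the labels.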
## Quick Start with Docker

```yaml
# docker-compose.yml
version: "3"
services:
  loki:
    image: grafana/loki:latest
    ports: ["3100:3100"]
    command: -config.file=/etc/loki/local-config.yaml
  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
```
## Sending Logs via API

```bash
curl -X POST http://localhost:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  -d '{
    "streams": [{
      "stream": {
        "app": "my-service",
        "env": "production",
        "level": "error"
      },
      "values": [
        ["1711900000000000000", "Connection timeout to database"]
      ]
    }]
  }'
```

Note that timestamps in `values` are strings of nanoseconds since the Unix epoch.
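The same push can be scripted from Node. A minimal sketch, assuming Loki from the compose file above is listening on port 3100 (the payload shape and `/loki/api/v1/push` endpoint are Loki's HTTP API; the helper names are mine, and `fetch` requires Node 18+):

```javascript
// Build a Loki push payload; timestamps are nanosecond strings.
function buildPushPayload(labels, message, timeMs = Date.now()) {
  const ns = `${timeMs}000000`; // ms -> ns by appending six zeros
  return { streams: [{ stream: labels, values: [[ns, message]] }] };
}

// POST the payload to Loki (hypothetical local setup from the compose file).
async function push(payload) {
  const res = await fetch("http://localhost:3100/loki/api/v1/push", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.status;
}

const payload = buildPushPayload(
  { app: "my-service", env: "production", level: "error" },
  "Connection timeout to database"
);
// await push(payload);
```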
## Sending Logs from Node.js

```js
import winston from "winston";
import LokiTransport from "winston-loki";

const logger = winston.createLogger({
  transports: [
    new LokiTransport({
      host: "http://localhost:3100",
      labels: { app: "my-api", env: "production" },
      json: true,
      batching: true,
      interval: 5, // flush batched logs every 5 seconds
    }),
  ],
});

logger.info("User logged in", { userId: 123 });
logger.error("Payment failed", { orderId: "abc", error: "timeout" });
```
## LogQL Queries

```logql
# All error logs from my-api
{app="my-api"} |= "error"

# Parse JSON, then filter on extracted fields
{app="my-api"} | json | level="error" | status >= 500

# Count errors per minute
count_over_time({app="my-api"} |= "error" [1m])

# Top 5 error messages over the last hour
topk(5, sum by (msg) (count_over_time({app="my-api"} | json | level="error" [1h])))

# Logs with latency > 1000 (assuming latency is logged in milliseconds)
{app="my-api"} | json | latency > 1000

# Per-service log rate across production
sum by (app) (rate({env="production"} [5m]))
```
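These queries also run over HTTP, which is handy for dashboards and scripts. A sketch of building a request to Loki's `query_range` endpoint (the endpoint and its `query`/`start`/`end`/`limit` parameters are Loki's HTTP API; the helper name and values are illustrative, and `start`/`end` are nanosecond timestamps):

```javascript
// Build a /loki/api/v1/query_range URL; start and end are nanosecond strings.
function buildQueryUrl(base, query, startNs, endNs, limit = 100) {
  const params = new URLSearchParams({
    query,
    start: startNs,
    end: endNs,
    limit: String(limit),
  });
  return `${base}/loki/api/v1/query_range?${params}`;
}

const url = buildQueryUrl(
  "http://localhost:3100",
  '{app="my-api"} |= "error"',
  "1711900000000000000",
  "1711903600000000000"
);
// Then: const data = await (await fetch(url)).json();
```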
## Promtail Configuration

```yaml
# promtail-config.yml
server:
  http_listen_port: 9080

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: app-logs
    static_configs:
      - targets: [localhost]
        labels:
          app: my-api
          env: production
          __path__: /var/log/my-api/*.log
```
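If the application writes JSON logs, Promtail can extract fields at scrape time. A sketch under that assumption, using Promtail's standard `json` and `labels` pipeline stages (the `level` field name is illustrative; promote only low-cardinality fields to labels to keep the index small):

```yaml
# Add under the app-logs entry in scrape_configs:
pipeline_stages:
  - json:
      expressions:
        level: level   # pull "level" out of each JSON log line
  - labels:
      level:           # promote the extracted field to an indexed label
```

With this in place, queries like `{app="my-api", level="error"}` hit the index directly instead of filtering line content.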
## Loki vs Elasticsearch vs CloudWatch
| Feature | Loki | Elasticsearch | CloudWatch |
|---|---|---|---|
| Cost (1TB/day) | ~$50/mo | ~$500/mo | ~$500/mo |
| Index strategy | Labels only | Full text | Full text |
| Query language | LogQL | KQL/Lucene | Insights |
| Storage backend | S3/GCS | Dedicated | AWS |
| Grafana native | ✅ | Plugin | Plugin |
| Setup complexity | Low | High | None (managed) |
| Free tier | 50 GB/mo Cloud | Self-host | 5 GB |
## Resources
- Loki Documentation
- GitHub Repository — 24K+ stars
- LogQL Reference
- Grafana Cloud Free — 50 GB logs/month
Logging your data extraction pipelines? My Apify tools extract web data at scale; pair them with Loki for cost-effective log analysis. Questions? Email spinov001@gmail.com.