Why InfluxDB?
InfluxDB is one of the most widely used time-series databases. It is purpose-built for metrics, events, and IoT data, with a powerful query language (Flux) and built-in visualization.
InfluxDB Cloud free tier (limits at the time of writing; check the current pricing page): 30-day retention, 5 dashboards, and a 5 MB / 5 min write rate.
Getting Started
Option 1: InfluxDB Cloud (Free)
Sign up at influxdata.com — free forever, no credit card.
Option 2: Local (Docker)
docker run -d --name influxdb \
-p 8086:8086 \
-e DOCKER_INFLUXDB_INIT_MODE=setup \
-e DOCKER_INFLUXDB_INIT_USERNAME=admin \
-e DOCKER_INFLUXDB_INIT_PASSWORD=password123 \
-e DOCKER_INFLUXDB_INIT_ORG=myorg \
-e DOCKER_INFLUXDB_INIT_BUCKET=mybucket \
-e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my-super-token \
influxdb:2
Pin the influxdb:2 tag rather than latest: this article uses Flux and the v2 API, which newer InfluxDB 3 images no longer support.
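Once the container is running, it's worth confirming the instance is healthy and the token works before writing any data. The /health and /api/v2/buckets endpoints are part of the v2 API; the token below matches the setup values above:

```shell
# Check that InfluxDB is up and initial setup completed
curl -s http://localhost:8086/health

# Verify the admin token by listing buckets
curl -s http://localhost:8086/api/v2/buckets \
  -H 'Authorization: Token my-super-token'
```

The health check should report "status": "pass"; an invalid token on the second call returns a 401.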
Python Client
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
from datetime import datetime, timedelta, timezone
import random

client = InfluxDBClient(url="http://localhost:8086", token="my-super-token", org="myorg")
write_api = client.write_api(write_options=SYNCHRONOUS)
query_api = client.query_api()

# Write metrics (datetime.utcnow() is deprecated; use timezone-aware timestamps)
points = []
for i in range(100):
    points.append(
        Point("server_metrics")
        .tag("host", f"server-{random.randint(1, 3)}")
        .tag("region", random.choice(["us-east", "eu-west"]))
        .field("cpu", random.uniform(10, 90))
        .field("memory", random.uniform(30, 85))
        .field("requests", random.randint(100, 5000))
        .time(datetime.now(timezone.utc) - timedelta(minutes=i))
    )
write_api.write(bucket="mybucket", record=points)  # one batched write instead of 100 round trips
print("Wrote 100 data points")
# Query with Flux
query = '''
from(bucket: "mybucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "server_metrics")
  |> filter(fn: (r) => r._field == "cpu")
  |> aggregateWindow(every: 5m, fn: mean)
  |> yield(name: "mean_cpu")
'''
tables = query_api.query(query)
for table in tables:
    for record in table.records:
        print(f"{record.get_time()} | {record.values.get('host')} | CPU: {record.get_value():.1f}%")
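What `aggregateWindow(every: 5m, fn: mean)` does is group points into fixed 5-minute windows and collapse each window to its mean. The same grouping can be sketched in plain Python (function and variable names here are mine, not part of the client API):

```python
from collections import defaultdict

def aggregate_window(points, every_s, fn):
    """points: list of (unix_ts_seconds, value) pairs.

    Returns {window_start: fn(values_in_window)}, mimicking what
    Flux's aggregateWindow does server-side.
    """
    windows = defaultdict(list)
    for ts, value in points:
        windows[ts - ts % every_s].append(value)  # floor to window start
    return {start: fn(vals) for start, vals in sorted(windows.items())}

points = [(0, 10.0), (120, 20.0), (310, 40.0)]
mean = lambda vals: sum(vals) / len(vals)
print(aggregate_window(points, 300, mean))
# {0: 15.0, 300: 40.0}
```

Running the aggregation server-side, as Flux does, means only one value per window crosses the network instead of every raw point.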
Node.js Client
const { InfluxDB, Point } = require("@influxdata/influxdb-client");
const influx = new InfluxDB({ url: "http://localhost:8086", token: "my-super-token" });
const writeApi = influx.getWriteApi("myorg", "mybucket");
const queryApi = influx.getQueryApi("myorg");
// Wrap in an async function: top-level await is not valid in CommonJS modules
async function main() {
  // Write data
  const point = new Point("temperature")
    .tag("sensor", "living-room")
    .floatField("value", 22.5)
    .timestamp(new Date());
  writeApi.writePoint(point);
  await writeApi.close(); // flushes any buffered points

  // Query
  const query = `from(bucket: "mybucket")
    |> range(start: -24h)
    |> filter(fn: (r) => r._measurement == "temperature")
    |> aggregateWindow(every: 1h, fn: mean)`;
  queryApi.queryRows(query, {
    next: (row, tableMeta) => {
      const data = tableMeta.toObject(row);
      console.log(`${data._time}: ${data._value.toFixed(1)}°C`);
    },
    error: (err) => console.error(err),
    complete: () => console.log("Done"),
  });
}

main();
REST API (curl)
# Write data (line protocol)
curl -X POST 'http://localhost:8086/api/v2/write?org=myorg&bucket=mybucket' \
-H 'Authorization: Token my-super-token' \
-d 'temperature,sensor=outdoor value=18.5'
# Query data
curl -X POST 'http://localhost:8086/api/v2/query?org=myorg' \
-H 'Authorization: Token my-super-token' \
-H 'Content-Type: application/json' \
-d '{"query": "from(bucket: \"mybucket\") |> range(start: -1h)"}'
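Line protocol timestamps default to nanosecond precision. If you append your own timestamps in seconds or milliseconds, pass a matching precision query parameter on the write URL (e.g. &precision=s), or convert before sending:

```python
import time

# Current time in seconds -> send with &precision=s
now_s = int(time.time())

# Converted to nanoseconds -> matches the default precision, no parameter needed
now_ns = now_s * 1_000_000_000

print(f"temperature,sensor=outdoor value=18.5 {now_ns}")
```

A mismatched precision silently lands points decades in the past or future, so it is worth double-checking.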
Use Cases
- Server monitoring — CPU, memory, disk, network metrics
- IoT — sensor data, with downsampling via scheduled tasks
- Application performance — request latency, error rates, throughput
- Business metrics — signups, revenue, conversions over time
- Financial data — stock prices, crypto, at up to nanosecond resolution
Need to collect time-series data from the web? I build scrapers for price monitoring, metrics collection, and data pipelines. Check out my Apify actors or email spinov001@gmail.com for custom solutions.
What metrics are you tracking? Share below!