A startup I know was storing IoT sensor readings in regular PostgreSQL. At 10 million rows, queries took 45 seconds. They switched to TimescaleDB — same data, same queries, 200ms response time.
TimescaleDB is a PostgreSQL extension purpose-built for time-series data. And it has a generous free tier on Timescale Cloud.
## What You Get for Free
Timescale Cloud free tier:
- 25 GB storage on their managed service
- Full PostgreSQL compatibility — use any Postgres client/ORM
- Hypertables — automatic time-based partitioning
- Continuous aggregates — materialized views that auto-refresh
- Native compression — often 90%+ storage reduction
- Built-in time-series functions
## Quick Start
```bash
# Connect with standard psql
psql 'postgres://user:password@host.timescaledb.io:5432/tsdb'

# Or use any PostgreSQL driver — it's just Postgres!
```
## Create a Hypertable
```sql
-- Regular table first
CREATE TABLE sensor_data (
    time        TIMESTAMPTZ      NOT NULL,
    sensor_id   INTEGER,
    temperature DOUBLE PRECISION,
    humidity    DOUBLE PRECISION
);

-- Convert to hypertable — this is the magic
SELECT create_hypertable('sensor_data', 'time');

-- Insert data normally
INSERT INTO sensor_data VALUES
    (NOW(), 1, 22.5, 45.0),
    (NOW(), 2, 23.1, 42.0),
    (NOW() - INTERVAL '1 hour', 1, 21.8, 47.0);
```
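The same setup can be scripted from Node. A minimal sketch, assuming a pg-style client (anything with a `.query()` method); the `if_not_exists => TRUE` argument is an addition here to make re-runs safe:

```javascript
// Schema statements mirroring the SQL above; order matters:
// the plain table must exist before create_hypertable converts it.
const setupStatements = [
  `CREATE TABLE IF NOT EXISTS sensor_data (
     time        TIMESTAMPTZ      NOT NULL,
     sensor_id   INTEGER,
     temperature DOUBLE PRECISION,
     humidity    DOUBLE PRECISION
   )`,
  // if_not_exists => TRUE keeps the call idempotent across deploys
  `SELECT create_hypertable('sensor_data', 'time', if_not_exists => TRUE)`
];

async function setupSchema(db) {
  for (const sql of setupStatements) {
    await db.query(sql);
  }
}
```

Passing the client in (rather than importing a pool at module level) keeps the function easy to test with a stub.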
## Time-Series Queries
```sql
-- Average temperature per hour (last 24h)
SELECT time_bucket('1 hour', time) AS hour,
       AVG(temperature) AS avg_temp,
       MAX(temperature) AS max_temp
FROM sensor_data
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY hour
ORDER BY hour DESC;

-- First/last value per sensor
SELECT sensor_id,
       first(temperature, time) AS first_reading,
       last(temperature, time) AS latest_reading
FROM sensor_data
WHERE time > NOW() - INTERVAL '1 day'
GROUP BY sensor_id;
```
## Continuous Aggregates (Auto-Refreshing Materialized Views)
```sql
-- Create a continuous aggregate for hourly stats
CREATE MATERIALIZED VIEW hourly_stats
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS hour,
       sensor_id,
       AVG(temperature) AS avg_temp,
       COUNT(*) AS readings
FROM sensor_data
GROUP BY hour, sensor_id;

-- Auto-refresh policy: every hour, re-materialize the window
-- from 3 hours ago up to 1 hour ago, so recent, still-changing
-- data stays out of the materialization
SELECT add_continuous_aggregate_policy('hourly_stats',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');

-- Query the aggregate (fast even with billions of raw rows)
SELECT * FROM hourly_stats
WHERE hour > NOW() - INTERVAL '7 days'
ORDER BY hour DESC;
```
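Reading the aggregate from Node looks like any other table query. A sketch (the `db` parameter and `getHourlyStats` name are illustrative, not part of any TimescaleDB API; pass a pg `Pool` or anything with a compatible `query()` method):

```javascript
// Query pre-aggregated hourly stats instead of scanning raw readings.
async function getHourlyStats(db, sensorId, days = 7) {
  const { rows } = await db.query(
    `SELECT hour, avg_temp, readings
       FROM hourly_stats
      WHERE sensor_id = $1
        AND hour > NOW() - $2::interval
      ORDER BY hour DESC`,
    [sensorId, `${days} days`]
  );
  return rows;
}
```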
## Node.js Integration
```javascript
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.TIMESCALE_URL
});

// Batch insert: one multi-row INSERT with numbered placeholders
async function insertReadings(readings) {
  const values = readings.map((r, i) =>
    `($${i * 4 + 1}, $${i * 4 + 2}, $${i * 4 + 3}, $${i * 4 + 4})`
  ).join(',');
  const params = readings.flatMap(r =>
    [r.time, r.sensorId, r.temperature, r.humidity]
  );
  await pool.query(
    `INSERT INTO sensor_data (time, sensor_id, temperature, humidity) VALUES ${values}`,
    params
  );
}

async function getHourlyAvg(sensorId, hours = 24) {
  const { rows } = await pool.query(
    `SELECT time_bucket('1 hour', time) AS hour,
            AVG(temperature) AS avg_temp
     FROM sensor_data
     WHERE sensor_id = $1 AND time > NOW() - $2::interval
     GROUP BY hour ORDER BY hour DESC`,
    [sensorId, `${hours} hours`]
  );
  return rows;
}
```
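The trick in `insertReadings` is the placeholder layout: 4 columns per reading, so reading `i` occupies `$((i*4)+1)` through `$((i*4)+4)`. A pure helper (hypothetical, extracted here only to make the layout checkable) shows the generated fragment:

```javascript
// Mirrors the placeholder construction in insertReadings above:
// one parenthesized group per row, numbering continuing across rows.
function buildPlaceholders(count, cols = 4) {
  return Array.from({ length: count }, (_, i) =>
    '(' +
    Array.from({ length: cols }, (_, j) => `$${i * cols + j + 1}`).join(', ') +
    ')'
  ).join(',');
}
```

For two readings this yields `($1, $2, $3, $4),($5, $6, $8, $8)`-style groups, i.e. `($1, $2, $3, $4),($5, $6, $7, $8)`, matching the flat `params` array built by `flatMap`.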
## Compression (90%+ Storage Savings)
```sql
-- Enable compression, segmenting compressed chunks by sensor
ALTER TABLE sensor_data SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'sensor_id'
);

-- Auto-compress chunks older than 7 days
SELECT add_compression_policy('sensor_data', INTERVAL '7 days');

-- Check compression stats
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes) AS after
FROM hypertable_compression_stats('sensor_data');
```
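The `before`/`after` byte columns make the savings easy to compute application-side. A tiny helper (the name and one-decimal rounding are choices made here, not part of any API):

```javascript
// Percent saved, given the byte totals returned by
// hypertable_compression_stats(); guards against a zero/NULL "before".
function compressionSavings(beforeBytes, afterBytes) {
  if (!beforeBytes) return 0;
  return Math.round((1 - afterBytes / beforeBytes) * 1000) / 10;
}
```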
## Use Cases
- IoT: Sensor data, device metrics, environmental monitoring
- DevOps: Application metrics, server monitoring, log analytics
- Finance: Stock prices, trading data, market analytics
- Web Analytics: Page views, user events, session tracking
Need to collect time-series data from websites or APIs? Check out my web scraping actors on Apify — automated data collection pipelines for any source.
Building a custom data pipeline? Email me at spinov001@gmail.com — I specialize in scraping + database solutions.