Smrati
Storing IoT time-series data: InfluxDB vs TimescaleDB for asset tracking

An asset tracking solution generates millions of data points per day: location logs, sensor readings, status messages. How you store that data shapes dashboard performance, query capabilities, and even your infrastructure bill. Here is how the two main contenders compare, so you can make the right choice.

Two options

InfluxDB

Custom-built time-series database
A completely new database optimized to handle time-series workloads. Own query language, own storage engine, own everything – specifically tailored to writing a lot of data.

  • Built from scratch
  • Query language – Flux
  • TSM engine

TimescaleDB

PostgreSQL time-series extension
A PostgreSQL database extension with hypertables and built-in time-series optimization features. SQL support – all your joins, foreign keys and other advanced SQL stuff.

  • PostgreSQL based
  • SQL support
  • Hypertables

Asset location data writing — InfluxDB

InfluxDB uses its line protocol for writes – compact, fast, and built to handle a massive number of devices all streaming at once:

// Writing to InfluxDB v2 using the official JS client
// (ES module syntax, so top-level await is valid)
import { InfluxDB, Point } from '@influxdata/influxdb-client'

const client = new InfluxDB({
  url: 'http://localhost:8086',
  token: process.env.INFLUX_TOKEN
})
const writeApi = client.getWriteApi('myorg', 'asset-tracking')

// Write a location + sensor reading point
const point = new Point('asset_location')
  .tag('asset_id', 'truck-01')
  .tag('fleet', 'north-region')
  .floatField('lat', 43.6532)
  .floatField('lng', -79.3832)
  .floatField('speed_kmh', 62.4)
  .floatField('temp_c', 4.2)
  .timestamp(new Date())

writeApi.writePoint(point)
await writeApi.close() // flushes buffered points and releases resources
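For context, what the client actually sends over the wire is InfluxDB line protocol: measurement, comma-separated tags, fields, and a nanosecond timestamp. Here is a hand-rolled serializer for illustration only – the real client does this internally, including the escaping and type suffixes this sketch omits:

```javascript
// Illustrative line protocol serializer (no escaping, floats only).
// Format: measurement,tag1=v1,... field1=v1,... timestampNs
function toLineProtocol(measurement, tags, fields, timestampNs) {
  const tagStr = Object.entries(tags).map(([k, v]) => `${k}=${v}`).join(',')
  const fieldStr = Object.entries(fields).map(([k, v]) => `${k}=${v}`).join(',')
  return `${measurement},${tagStr} ${fieldStr} ${timestampNs}`
}

const line = toLineProtocol(
  'asset_location',
  { asset_id: 'truck-01', fleet: 'north-region' },
  { lat: 43.6532, lng: -79.3832, speed_kmh: 62.4, temp_c: 4.2 },
  1700000000000000000n
)
// line is:
// asset_location,asset_id=truck-01,fleet=north-region lat=43.6532,lng=-79.3832,speed_kmh=62.4,temp_c=4.2 1700000000000000000
```

One line per point, newline-separated in batches, is why ingest scales so well: the server parses a flat text stream instead of nested JSON.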

Asset location data writing — TimescaleDB

TimescaleDB uses plain SQL INSERT statements against a hypertable – familiar territory for any PostgreSQL developer, with partitioning and compression handled behind the scenes:

-- Create hypertable (run once)
CREATE TABLE asset_locations (
  time       TIMESTAMPTZ      NOT NULL,
  asset_id   TEXT            NOT NULL,
  lat        DOUBLE PRECISION,
  lng        DOUBLE PRECISION,
  speed_kmh  DOUBLE PRECISION,
  temp_c     DOUBLE PRECISION
);

SELECT create_hypertable('asset_locations', 'time');

-- Insert a reading (standard SQL)
INSERT INTO asset_locations
  (time, asset_id, lat, lng, speed_kmh, temp_c)
VALUES
  (NOW(), 'truck-01', 43.6532, -79.3832, 62.4, 4.2);
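Row-at-a-time inserts will not keep up with IoT ingest rates, so in practice you batch. A minimal sketch of building one multi-row parameterized INSERT from the app side – the helper is plain JavaScript; the `pool.query` usage at the bottom assumes the node-postgres (`pg`) driver:

```javascript
// Build a single multi-row parameterized INSERT for asset_locations.
// Each reading contributes one (...) group of $n placeholders.
function buildBatchInsert(readings) {
  const cols = ['time', 'asset_id', 'lat', 'lng', 'speed_kmh', 'temp_c']
  const values = []
  const groups = readings.map((r, i) => {
    values.push(r.time, r.asset_id, r.lat, r.lng, r.speed_kmh, r.temp_c)
    const base = i * cols.length
    return `(${cols.map((_, j) => `$${base + j + 1}`).join(', ')})`
  })
  return {
    text: `INSERT INTO asset_locations (${cols.join(', ')}) VALUES ${groups.join(', ')}`,
    values
  }
}

// Usage (assumption: a connected node-postgres Pool):
//   const { text, values } = buildBatchInsert(batch)
//   await pool.query(text, values)
```

Batching a few hundred readings per statement is usually the difference between "high with tuning" and disappointing write throughput on the Postgres side.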

Querying — that's where the rubber meets the road

Both databases excel at time-range queries but differ once you need to relate asset data to the rest of your business data. Here is roughly the same query – "average speed by asset over the past 24 hours" – in both (the SQL version additionally buckets results by hour via time_bucket):

// InfluxDB — Flux query
from(bucket: "asset-tracking")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "asset_location")
  |> filter(fn: (r) => r._field == "speed_kmh")
  |> group(columns: ["asset_id"])
  |> mean()
-- TimescaleDB — standard SQL
SELECT
  asset_id,
  AVG(speed_kmh) AS avg_speed,
  time_bucket('1 hour', time) AS bucket
FROM asset_locations
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY asset_id, bucket
ORDER BY bucket DESC;
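Whichever database you pick, the dashboard layer usually reshapes the flat result rows into one series per asset before charting. A minimal sketch, assuming row objects shaped like the TimescaleDB query above ({ asset_id, bucket, avg_speed }) and a generic { t, y } point format for the chart:

```javascript
// Group flat query rows into per-asset series for a dashboard chart.
// Row shape assumed: { asset_id, bucket, avg_speed }.
function toSeries(rows) {
  const byAsset = new Map()
  for (const row of rows) {
    if (!byAsset.has(row.asset_id)) byAsset.set(row.asset_id, [])
    byAsset.get(row.asset_id).push({ t: row.bucket, y: row.avg_speed })
  }
  return [...byAsset.entries()].map(([asset_id, points]) => ({ asset_id, points }))
}
```

Doing this reshaping in one pass keeps render times flat even when a fleet-wide query returns thousands of rows.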

Head-to-head comparison

| Criteria | InfluxDB | TimescaleDB |
| --- | --- | --- |
| Write throughput | Extremely high (better) | High (with tuning) |
| Query language | Flux (learning curve) | Full SQL (better) |
| Relational joins | Not supported | Full JOIN support (better) |
| Compression | Automatic, TSM (better) | Manual policy setup |
| Retention policies | Built-in, simple (better) | Via PostgreSQL jobs |
| Ecosystem / tooling | Grafana native | Full PostgreSQL ecosystem (better) |
| Existing PostgreSQL stack | Separate system | Drop-in extension (better) |
| Operational complexity | Medium | Low if you know Postgres (better) |

Tip on compression for TimescaleDB:
Enable column-level compression for chunks older than 7 days – IoT data typically compresses by 90-95% with TimescaleDB's default compression settings, so a 100 GB raw database usually shrinks to under 8 GB. Enable compression on the table once, add the policy, and forget about it: ALTER TABLE asset_locations SET (timescaledb.compress); then SELECT add_compression_policy('asset_locations', INTERVAL '7 days');

Asset tracking-specific requirement – joining
This is where TimescaleDB shines for asset tracking. Your time-series data is not an island: it needs to be joined with the rest of your asset data, such as metadata, fleet membership, geofencing details, and maintenance history. In TimescaleDB that is a single SQL statement:

-- Join location data with asset metadata — trivial in TimescaleDB
SELECT
  a.asset_id,
  a.vehicle_type,
  a.assigned_driver,
  AVG(l.speed_kmh) AS avg_speed,
  MAX(l.temp_c) AS max_temp
FROM asset_locations l
JOIN assets a ON l.asset_id = a.asset_id
WHERE l.time > NOW() - INTERVAL '1 hour'
  AND a.fleet = 'north-region'
GROUP BY a.asset_id, a.vehicle_type, a.assigned_driver;

With InfluxDB, by contrast, you query two separate systems and join the results at the application level. It works, but it means more code and it is rarely as efficient as a single SQL join.
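To make that concrete, here is roughly what the application-level join looks like on the InfluxDB side: one Flux query returns per-asset averages, a second store returns metadata, and the app merges them by asset_id (the data shapes here are hypothetical):

```javascript
// Application-level join: speed averages from InfluxDB merged with asset
// metadata from a second store. Hypothetical shapes:
//   speedRows: [{ asset_id, avg_speed }]
//   assets:    [{ asset_id, vehicle_type, fleet, ... }]
function joinWithMetadata(speedRows, assets) {
  const meta = new Map(assets.map(a => [a.asset_id, a]))
  return speedRows
    .filter(r => meta.has(r.asset_id))         // inner-join semantics
    .map(r => ({ ...meta.get(r.asset_id), ...r }))
}
```

Every filter that would have been a WHERE clause on the metadata side (fleet, vehicle type, assigned driver) now lives in application code, and both result sets must be pulled over the network before the merge.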

Quick decision chart

  • IoT ingest only, highest write speeds → InfluxDB
  • Grafana dashboards, basic time-range queries → InfluxDB
  • PostgreSQL already up and running → TimescaleDB
  • Joining assets to metadata, maintenance, users → TimescaleDB
  • Complex analytics, reports, SQL knowledge in-house → TimescaleDB
  • Large-scale, metrics-only ingestion without relations → InfluxDB

Most asset tracking applications need to combine location data with other kinds of data – asset metadata, geofencing conditions, maintenance records – which makes TimescaleDB the better fit. Pure telemetry ingestion pipelines with no relational requirements will get more out of InfluxDB's raw write throughput.

AssetTrackPro's platform has built-in support for time series ingestion, storage, and querying at scale, so you won't have to build your own database layer from the ground up. Check out our platform →

Building an IoT data pipeline? Let us do the heavy lifting – we provide a scalable asset telemetry backend so you can focus on your core functionality.
Explore AssetTrackPro
