A practical guide to building a live monitoring dashboard for any data source.
Introduction
In a previous organisation, I built and maintained several internal monitoring dashboards. The idea came from patterns I had seen in engineering blogs about observability systems, and I experimented internally with a similar setup to bring different metrics into a single real-time view. The architecture behind those dashboards was straightforward and reusable.
This project is a recreation of that setup using publicly available APIs, tracking weather in two cities, commodity prices, and the EUR→INR exchange rate as a working example. It continuously collects data, stores it in a time-series database, and visualises it in a live Grafana dashboard. It is a starting point for anyone who wants to experiment with real-time data or build similar dashboards for their own projects.
Organisations and individuals often have access to data from multiple sources that updates continuously. The challenge is not finding the data. Most of it is freely available. The challenge is aggregating it into a single, live view that is easy to read and act on.
Without a proper pipeline, data either gets checked manually, sits in spreadsheets, or simply goes unmonitored. A real-time dashboard solves this by automating the collection, storage, and visualisation in one place.
The same problem exists at different scales. Whether it is infrastructure monitoring inside a company, tracking external data feeds, or keeping an eye on any system that produces regular measurements, the approach is the same.
Tech Stack
- InfluxDB — time-series database that stores all collected data
- Grafana — visualisation layer that renders live dashboards
- Docker — runs the entire stack in containers
- Python — polling script that fetches data and writes to InfluxDB
- Telegraf — alternative data collection agent by InfluxData
Note: This project uses a Python script for data collection, but Telegraf or any other tool that can write to InfluxDB works just as well.
Architecture
There are two separate flows in the project.
Live data flow
Data Sources (APIs)
│
▼
poller.py
(runs every 15m)
│
▼
InfluxDB
(storage)
│
▼
Grafana
│
▼
Browser
Dashboard setup flow
dashboard.yml
│
▼
build_dashboard.py
│
▼
dashboard.json
│
▼
Grafana loads it automatically
The live data flow runs continuously in the background. The dashboard setup flow runs once locally, and again whenever the dashboard configuration is updated.
Note: The dashboard can also be created manually through the Grafana UI without using any configuration files or scripts. The code-based approach is used here to keep the setup version controlled and easy to modify.
Step by Step
Prerequisites
Before getting started, make sure the following are installed:
- Docker Desktop — required to run the stack
- Python — required to generate the dashboard configuration
Note: Docker is used here to demonstrate the full setup locally. InfluxDB and Grafana can also be installed directly on any machine or server without Docker if preferred.
1. Data Sources
Three publicly available APIs are used in this project, covering weather, commodity prices, and exchange rates. No API keys or signups are required.
Any API or data source that returns regularly updating values can be used here. The structure of the pipeline does not change regardless of what data is being collected.
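The project does not name its exact endpoints, so as an illustration only: Open-Meteo is one key-free weather API that fits the description. A minimal sketch of fetching a reading and shaping it into the measurement/tags/fields form used later might look like this (the URL, city, and field names are assumptions, not the project's actual code):

```python
import json
import urllib.request

# Hypothetical endpoint: Open-Meteo, a key-free weather API.
WEATHER_URL = (
    "https://api.open-meteo.com/v1/forecast"
    "?latitude=52.52&longitude=13.41&current_weather=true"
)

def fetch_json(url: str, timeout: int = 10) -> dict:
    """Fetch a URL and decode the JSON body."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

def to_point(payload: dict, city: str) -> dict:
    """Shape an Open-Meteo-style response into a measurement/tags/fields record."""
    current = payload["current_weather"]
    return {
        "measurement": "weather",
        "tags": {"city": city},
        "fields": {"temperature_c": float(current["temperature"])},
    }
```

Swapping in a different API only changes the URL and the shaping function; everything downstream stays the same.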
2. Data Collection
The poller is a Python script that queries the APIs every fifteen minutes and writes the results directly to InfluxDB.
One concept worth understanding in InfluxDB before writing data:
- Tags describe the data and are indexed for filtering. Examples: city name, source identifier, category.
- Fields contain the actual numeric values that get stored and plotted.
Getting this distinction right makes querying and building dashboards significantly simpler.
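To make the distinction concrete, here is a hand-rolled formatter for InfluxDB's line protocol, the text format InfluxDB accepts on write. This is a minimal sketch, not the project's actual code: it assumes numeric field values and tag values without spaces or commas (real line protocol requires escaping those).

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, timestamp_s: int) -> str:
    """Format one InfluxDB line-protocol record.

    Tags are indexed string labels used for filtering; fields hold the
    numeric values that actually get stored and plotted.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={float(v)}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_s}"
```

For example, a weather reading for Paris becomes a single line with the tag before the space and the field after it, which is exactly how InfluxDB decides what to index and what to store.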
3. Storage with InfluxDB
InfluxDB is a time-series database built for handling measurements that arrive continuously. Data is written as timestamped points, which makes it well suited for infrastructure metrics, sensor readings, financial feeds, or any other regularly updating values.
Each record written contains a measurement name, tags, numeric fields, and a timestamp.
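Such records can be written with a client library, or directly over InfluxDB 2.x's HTTP write endpoint. The sketch below builds the HTTP request using only the standard library; the org, bucket, and token values are placeholders, and the actual project may use a client library instead.

```python
import urllib.request

def build_write_request(base_url: str, org: str, bucket: str,
                        token: str, lines: list) -> urllib.request.Request:
    """Build a POST request for InfluxDB 2.x's /api/v2/write endpoint.

    `lines` is a list of line-protocol records with second-precision timestamps.
    """
    url = f"{base_url}/api/v2/write?org={org}&bucket={bucket}&precision=s"
    return urllib.request.Request(
        url,
        data="\n".join(lines).encode("utf-8"),
        headers={"Authorization": f"Token {token}"},
        method="POST",
    )

# Sending it requires a running InfluxDB, e.g.:
# urllib.request.urlopen(build_write_request(
#     "http://localhost:8086", "my-org", "live", "my-token",
#     ["weather,city=Paris temperature_c=18.5 1700000000"]))
```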
To verify the connection is working, the Grafana Explore view can be used to query the bucket directly. A simple query returning the last hour of data confirms that the poller is writing successfully and Grafana can read it.
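A minimal Flux query of that shape, assuming a bucket named live and the weather measurement used in this project (both names are placeholders for whatever the setup actually uses), would be:

```flux
from(bucket: "live")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "weather")
```

If this returns rows in the Explore view, the full path from poller to storage to Grafana is working.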
4. Visualisation with Grafana
Grafana reads from InfluxDB and renders the data as a live dashboard. It supports charts, stat panels, colour thresholds, gauges, and auto-refresh at configurable intervals.
It also connects to many other data sources including Prometheus, PostgreSQL, and others, making it flexible beyond just this stack.
One of the more useful features is the time range selector. The same dashboard can show data from the last 5 minutes all the way through to months of history, and the panels update instantly as the range changes.
Grafana also supports alerting. Alerts can be configured on any panel to trigger notifications when a value crosses a threshold — useful for staying on top of data without having to watch the dashboard continuously. Notifications can be sent to email, Slack, PagerDuty, and many other channels.
5. Dashboard as Code
Grafana dashboards are often created manually through the UI. That works for quick experiments, but the exported JSON becomes large and difficult to maintain over time.
In this project the dashboard is defined in a dashboard.yml configuration file. A script converts it into the JSON format Grafana expects, and Grafana loads it automatically on startup.
This keeps the dashboard fully version controlled. Adding a new panel is a matter of updating one line in the config file.
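The internals of build_dashboard.py are not shown here, but the idea can be sketched as a pure function that expands a compact config into the verbose panel JSON Grafana expects. The panel schema below is a hypothetical minimal one, assuming the YAML has already been parsed into a dict (for example with PyYAML); the project's real schema will differ.

```python
def build_dashboard(config: dict) -> dict:
    """Expand a compact dashboard config into Grafana's dashboard JSON.

    Panels are laid out two per row on Grafana's 24-column grid.
    """
    panels = []
    for i, panel in enumerate(config["panels"]):
        panels.append({
            "id": i + 1,
            "title": panel["title"],
            "type": panel.get("type", "timeseries"),
            "gridPos": {"h": 8, "w": 12, "x": (i % 2) * 12, "y": (i // 2) * 8},
            "targets": [{"query": panel["query"]}],
        })
    return {"title": config["title"], "refresh": "15m", "panels": panels}
```

The point of the pattern is that each panel in the config is one short entry, while the generated JSON carries all the layout and rendering detail Grafana needs.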
6. Running the Stack
The entire system starts with one command:
docker compose up -d
This launches three containers: InfluxDB, Grafana, and the poller. The dashboard is immediately available at localhost:3000. Pre-configured, pre-seeded, already collecting.
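The project's compose file is not reproduced here, but a minimal sketch of the three-container layout might look like this (image tags, service names, and paths are assumptions):

```yaml
services:
  influxdb:
    image: influxdb:2.7
    ports: ["8086:8086"]
    volumes: ["influx-data:/var/lib/influxdb2"]
  grafana:
    image: grafana/grafana:latest
    ports: ["3000:3000"]
    volumes:
      - ./provisioning:/etc/grafana/provisioning  # auto-loads the dashboard
    depends_on: [influxdb]
  poller:
    build: ./poller
    depends_on: [influxdb]
volumes:
  influx-data:
```

Mounting a provisioning directory into the Grafana container is what lets the generated dashboard JSON load automatically on startup.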
Result
Once the stack is running, the dashboard provides a continuously updating view of all collected data. It refreshes automatically, is colour-coded by thresholds, and is queryable over any time range.
The same setup can be pointed at any data source without changing the underlying infrastructure. The only part that changes is the polling script.
Conclusion
The pattern covered in this project is simple: collect, store, visualise. It is a solid foundation for building active monitoring dashboards around any kind of live data.
The data collection piece is intentionally flexible. Telegraf works well for infrastructure and system metrics with minimal configuration. A custom script makes sense when working with external APIs that need specific handling. A cloud function or a scheduled job fits just as well depending on the environment.
What stays consistent is the rest of the stack. InfluxDB handles the time-series storage reliably regardless of where the data comes from. Grafana turns it into something usable, with the added ability to set up alerts and query across any time range without much overhead.
The same approach applies whether the goal is monitoring application performance, tracking external data feeds, or building visibility into any system that produces regular measurements.
Resources
- GitHub — github.com/007bsd/live-monitor
- InfluxDB Documentation — docs.influxdata.com
- Grafana Documentation — grafana.com/docs
- Grafana Alerting — grafana.com/docs/grafana/latest/alerting
- Telegraf — influxdata.com/telegraf
- Docker — docs.docker.com
If you try setting this up and encounter any issues, please leave a comment. The complete code for this project is available on GitHub.