Factory machines are a silent source of valuable insights. By collecting just a single metric from different machines, it is possible to visualize the load of the entire factory by hour, identify when shifts start and end, and detect the busiest workstations.
So I decided to create this guide to show how I collect readings from a bending machine UI and store them in a PostgreSQL table.
1. Goal
Capture changes in the machine’s bend counter and store them as time-series records. The goal is to build useful telemetry inside the factory ERP.
2. Structure
I defined a PostgreSQL table structure suitable for storing timestamp-based readings. An example is shown below.
The screenshot is taken from Adminer, which I use as a web interface to interact with my PostgreSQL database. I find it great for basic tasks, although there are probably better alternatives.
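For reference, a minimal sketch of what such a table might look like. The column names are my assumptions based on the payload fields; the actual structure is the one shown in the Adminer screenshot:

```sql
-- Hypothetical sketch of the readings table; the real structure
-- is the one in the Adminer screenshot.
CREATE TABLE metering (
    _id        BIGSERIAL PRIMARY KEY,   -- backed by the metering__id_seq sequence
    machine_id TEXT        NOT NULL,    -- e.g. 'BM301'
    reading    INTEGER     NOT NULL,    -- current bend counter value
    time       TIMESTAMPTZ NOT NULL     -- when the reading was taken
);

-- Most queries slice by machine and time range, so an index helps.
CREATE INDEX metering_machine_time_idx ON metering (machine_id, time);
```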
Next, I created a separate folder that works as an extension inside my ERP system. This is where the API endpoints reside. Its purpose is to accept POST requests:
POST /extension/machine-tracking/store-reading
Then I defined the payload (Codex actually helped generate it):
{
"machine_id": "BM301",
"reading": 1234,
"time": "2026-03-13T10:30:00.000Z"
}
There are three main variables:
- the machine ID
- the reading value
- the timestamp
The machine I started this integration with is designed to bend metal, as shown below.
View from the side.
Technically, there are many metrics that could be captured, such as the total number of cycles, power on/off signals, and others. However, simplifying the integration to a single parameter does not necessarily reduce the quality of telemetry.
So I decided to track just one metric: the number of bends, i.e. the number of times the metal beam moves up and down.
Each time the value changes, an API request is triggered and the backend stores the reading. Once multiple readings are collected over time, charts can be generated to visualize how busy the machine is throughout the day.
3. Building the API
Well, prompts again. I sent the following prompt to Codex.
The metering schema, used to store reading counters from various machines on the factory floor, was already created in the previous step.
The final output consisted of two main files:
- index.js — defines the API route
- src/store.js — handles storing the data
Below is a screenshot of index.js.
Not much is happening here. We simply extend an Express route and perform basic validation of the incoming data before passing the payload to the store method.
const response = await store(req.body, db, logger);
In the store.js file, we pass the three payload parameters to the PostgreSQL database.
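The real files live in the repository; a rough sketch of the two pieces might look like the following. The validateReading helper, the column names, and the logger call are my assumptions, not the repo's actual code:

```javascript
// Basic payload validation, mirroring the checks done in index.js
// (validateReading is a hypothetical name for illustration).
function validateReading(body) {
  return Boolean(
    body &&
    typeof body.machine_id === 'string' && body.machine_id.length > 0 &&
    Number.isInteger(body.reading) &&
    !Number.isNaN(Date.parse(body.time))
  );
}

// store.js: pass the three payload parameters to PostgreSQL.
// `db` is assumed to expose a node-postgres style query(text, params).
async function store(payload, db, logger) {
  const sql =
    'INSERT INTO metering (machine_id, reading, time) VALUES ($1, $2, $3)';
  await db.query(sql, [payload.machine_id, payload.reading, payload.time]);
  logger.info('reading stored for ' + payload.machine_id);
  return { ok: true };
}

// In index.js, the Express route ties the two together, roughly:
// app.post('/extension/machine-tracking/store-reading', async (req, res) => {
//   if (!validateReading(req.body)) return res.status(400).json({ ok: false });
//   res.json(await store(req.body, db, logger));
// });
```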
You can browse these files directly in my repository:
telemetry extension
4. Getting Data From the Machine
Now that the API and database are set up, the next question is: where do we get the data from?
My first thought was to use sensors connected to a Raspberry Pi acting as a gateway that would forward the payload over the internet to my ERP server.
However, since I understand how the machine’s UI operates (it’s a relatively simple browser interface communicating with the PLCs), I decided to tweak the interface itself to extract the reading without installing any additional hardware.
The bending machine’s software still relies on jQuery 2.0, which was released over a decade ago.
The only change I had to make after Codex generated the code was updating the endpoint. I changed it from:
/extension/machine-tracking/store-reading
to
https://mydomain/extension/machine-tracking/store-reading
so that the machine interface could transmit the data to the ERP server publicly.
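The tweak itself boils down to a small hook in the UI script: remember the last counter value and send a POST only when it changes. A sketch under my assumptions (the selector, machine id, and polling interval are made up; only the endpoint path is from the actual setup):

```javascript
var ENDPOINT = 'https://mydomain/extension/machine-tracking/store-reading';

// Returns a function that forwards a reading only when it changes,
// so refreshing the same value does not spam the API.
function makeBendWatcher(send) {
  var last = null;
  return function (value) {
    if (value === last) return; // no change, nothing to send
    last = value;
    send({
      machine_id: 'BM301',              // hypothetical id for this machine
      reading: value,
      time: new Date().toISOString()
    });
  };
}

// In the real UI this would be wired to jQuery 2.0, e.g.:
// var watch = makeBendWatcher(function (payload) {
//   $.ajax({ url: ENDPOINT, method: 'POST', contentType: 'application/json',
//            data: JSON.stringify(payload) });
// });
// setInterval(function () {
//   watch(parseInt($('#bend-counter').text(), 10)); // selector is an assumption
// }, 1000);
```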
5. Debug
Obviously, things did not work out of the box, but I felt I was very close. The first issue surfaced at the endpoint: my default PostgreSQL user did not have permission to write to the newly created metering table.
The fix was relatively simple. Using the Adminer interface, I extended the permissions for my default user.
GRANT USAGE ON SCHEMA public TO skarda_design;
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE public.metering TO skarda_design;
GRANT USAGE, SELECT ON SEQUENCE public.metering__id_seq TO skarda_design;
After that, I sent the request through Postman and it worked. I could hardly believe it, but when I checked the database, the payload had been stored correctly.
Even though I had deployed everything to production, the information was still not being transmitted from the machine. I opened the browser’s inspect panel on the machine interface and discovered that it was a CORS issue.
I simply needed to configure Express to allow requests sent from localhost. Once that was fixed and the browser tab on the machine was refreshed, the data started flowing.
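A minimal way to do this without extra packages is a small middleware that whitelists the machine's origin. The origin list here is an assumption about my setup; the cors npm package is the more common route:

```javascript
// Origins the machine UI is served from (an assumption; adjust as needed).
const ALLOWED_ORIGINS = ['http://localhost', 'http://127.0.0.1'];

function allowMachineOrigins(req, res, next) {
  const origin = req.headers.origin;
  if (ALLOWED_ORIGINS.includes(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  }
  // Answer the browser's preflight request without hitting the route.
  if (req.method === 'OPTIONS') return res.end();
  next();
}

// Wired in before the route: app.use(allowMachineOrigins);
```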
I then connected two more machines using unique machine_id values and checked the metering table.
The results speak for themselves.
6. Visualisation
This is the most fascinating part, because we can now start thinking about meaningful insights from the collected data (I let it accumulate for three days before getting to this part) and how to visualize it. I defined three possible views:
- Shift heatmap
- Production rate trend
- Per-machine counter trend
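As a starting point, the shift heatmap can be computed straight in SQL: bends per machine per hour, taken as the difference between the highest and lowest counter value within the hour. A sketch, assuming the column names from the payload:

```sql
-- The counter is cumulative, so the work done in an hour is
-- max(reading) - min(reading) for that hour.
-- Column names are assumptions based on the payload fields.
SELECT machine_id,
       date_trunc('hour', time) AS hour,
       max(reading) - min(reading) AS bends
FROM metering
GROUP BY machine_id, date_trunc('hour', time)
ORDER BY machine_id, hour;
```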
Then I sent another prompt. Here’s what we got.