Monfort N. Brian

How I Built an Automated Operational Analytics Pipeline from a Queue Management System

Banks don’t struggle because they lack systems.
They struggle because operational data arrives too late to matter.

This project started with a Queue Management System (QMS) used daily in branches across the bank. Kiosks were issuing tickets, services were delivered, staff were active; data was being generated constantly.

Yet operational and management teams were still:

  • Reviewing the previous month's performance
  • Making decisions without visibility into the current month
  • Fighting with unreadable Excel files
  • Rushing reports right before meetings

This wasn’t a dashboard problem.
It was a data pipeline problem.

The Initial Constraint: No API, No Clean Exports

The QMS was not part of core banking, and it came with its own limitations.

There was:

  • ❌ No usable API
  • ❌ No structured, analytics-ready export

However, there was an option:
An FTP backup configuration where you could define:

  • FTP credentials
  • Destination path
  • Backup schedule

This looked promising.

The First Attempt (and Why It Failed)

When enabled, the QMS started pushing CSV backups to the FTP server.

But reality hit quickly:

  • CSV files had no column headers
  • Data was malformed and inconsistent
  • Two expected datasets (users & services) were mixed and incomplete
  • Files were not reliably parseable
  • Business logic could not be inferred

Technically, data existed.
Practically, it was unusable.

At that point, forcing this approach further would have meant:

  • Fragile parsing
  • Endless edge cases
  • Low trust from the business

So I stopped.

The Pivot: Automate What the System Does Well

A few days later, I took a different angle:

Instead of forcing machine integration, I automated the human workflow the system already supported well.

The QMS had a stable, predictable UI for:

  • Exporting Users statistics
  • Exporting Services statistics
  • Downloading them as .xls

So I built a Browserless Chromium automation with custom logic.

This changed everything.

Layer 1: Automated Data Extraction (Daily, Reliable)

Every day at 7:00 PM (UTC+2):

  • Chromium automation logs into the QMS
  • Navigates the export screens
  • Downloads exactly two XLS files (stats_user & stats_serv)
  • Files are deposited into an SFTP directory: /raw_data/

No human action.
No broken CSVs.
Same structure, every single day.

The most reliable integration is often the one that respects how the system was designed to be used.
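
To make this concrete, here is a minimal sketch of what that automation step can look like with puppeteer-core pointed at a Browserless Chromium instance. The endpoint, selectors, and credentials are hypothetical placeholders, not the real QMS values; the production version runs as custom logic orchestrated by n8n.

// Minimal sketch: connect to a Browserless Chromium instance and
// trigger the two statistics exports. Every URL, selector, and
// credential below is a hypothetical placeholder.
const puppeteer = require('puppeteer-core');

async function exportQmsStats() {
  // Attach to the remote Chromium exposed by Browserless
  const browser = await puppeteer.connect({
    browserWSEndpoint: 'ws://browserless:3000', // placeholder endpoint
  });
  const page = await browser.newPage();

  // Route file downloads to a local folder that is later pushed to /raw_data
  const cdp = await page.createCDPSession();
  await cdp.send('Browser.setDownloadBehavior', {
    behavior: 'allow',
    downloadPath: '/tmp/qms_exports',
  });

  // Log into the QMS UI (placeholder URL and selectors)
  await page.goto('https://qms.internal/login', { waitUntil: 'networkidle2' });
  await page.type('#username', process.env.QMS_USER);
  await page.type('#password', process.env.QMS_PASS);
  await Promise.all([page.click('#login-btn'), page.waitForNavigation()]);

  // Trigger the two exports: users statistics, then services statistics
  await page.goto('https://qms.internal/stats/users', { waitUntil: 'networkidle2' });
  await page.click('#export-xls');
  await page.goto('https://qms.internal/stats/services', { waitUntil: 'networkidle2' });
  await page.click('#export-xls');

  // Crude but sufficient for a sketch: give the downloads time to finish
  await new Promise((resolve) => setTimeout(resolve, 10000));
  await browser.disconnect();
}

exportQmsStats().catch(console.error);

In the real pipeline, the downloaded files are then pushed to /raw_data/ on the SFTP server, which is what triggers Layer 2.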

[Diagram: automated data extraction pipeline]

Layer 2: Extraction & Cleaning Pipeline (Event-Driven)

The moment new files land in /raw_data, a second pipeline kicks in.

This is not cron-based.
It’s event-driven via webhook.
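
To illustrate the hand-off: the extraction workflow only has to call the cleaning workflow's webhook once the upload finishes. The URL and payload below are hypothetical; in practice this is an n8n Webhook trigger node receiving the call.

// Hypothetical notification sent by Layer 1 once the raw files have
// landed in /raw_data, so the cleaning pipeline starts immediately.
// Uses the global fetch available in Node 18+.
(async () => {
  await fetch('https://n8n.internal/webhook/qms-raw-data', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      path: '/raw_data/',
      files: ['stats_user_01022025.xls', 'stats_serv_01022025.xls'],
    }),
  });
})();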

What this pipeline does:

  • Reads the XLS files
  • Applies client-defined naming conventions
  • Cleans and normalizes fields
  • Applies business rules and calculations
  • Aligns data to operational definitions (KPIs that actually make sense)
  • Converts outputs to CSV
  • Writes them to /ready_data/

This layer is where raw logs become operational truth.

// Get the source filename from the "Download serv" FTP node
let inputFileName = '';
let fileDate = '';

try {
  const ftpItems = $('Download serv').all();
  if (
    ftpItems &&
    ftpItems[0] &&
    ftpItems[0].binary &&
    ftpItems[0].binary.data
  ) {
    inputFileName = ftpItems[0].binary.data.fileName || '';
  }
} catch (e) {
  console.log('Could not get filename from Download node:', e.message);
}

// Extract the 8-digit date from the filename
if (inputFileName) {
  const dateMatch = inputFileName.match(/(\d{8})/);
  if (dateMatch) {
    fileDate = dateMatch[1];
  }
}

// Fallback: use today's date in DDMMYYYY format
if (!fileDate) {
  const today = new Date();
  const day = today.getDate().toString().padStart(2, '0');
  const month = (today.getMonth() + 1).toString().padStart(2, '0');
  const year = today.getFullYear();
  fileDate = `${day}${month}${year}`;
}

// Header mapping: keep ONLY the columns we want, renamed to clean field names
const headerMap = {
  Utilisateur: 'users',
  '8-9': '8_9',
  '9-10': '9_10',
  // ... remaining hourly buckets omitted ...
  'Durée session': 'session_duration',
  'Latence': 'latency',
  'Ratio (%)': 'service_ratio_pct',
  '# appelés': 'tickets_called',
  '# entretiens': 'clients_served',
  '# annulés': 'tickets_cancelled',
  'T.entretien moy.': 'avg_service_time',
  'T.entretien max.': 'max_service_time',
  'Alarme': 'alert',
};

// Build the CSV content line by line
const csvLines = [];
// ... CSV assembly omitted; a sketch of this step follows below ...
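
The omitted assembly step might look roughly like this. It is a sketch, not the exact production code, and it assumes an upstream node has already parsed the XLS into one item per row, with fields still carrying the original French headers:

// Sketch of the omitted CSV assembly (hypothetical, for illustration).
// Assumes $input.all() returns one item per parsed XLS row.
const rows = $input.all().map((item) => item.json);

// Header row: the clean, renamed column names
csvLines.push(Object.values(headerMap).join(','));

// Data rows: keep only the mapped columns, escape quotes and commas
for (const row of rows) {
  const values = Object.keys(headerMap).map((source) => {
    const text = String(row[source] ?? '').replace(/"/g, '""');
    return text.includes(',') ? `"${text}"` : text;
  });
  csvLines.push(values.join(','));
}

// Hand the finished CSV to the next node, which writes it to /ready_data/
return [
  {
    json: {
      fileName: `stats_serv_${fileDate}.csv`,
      csv: csvLines.join('\n'),
    },
  },
];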


Layer 3: Distribution Pipeline (Decision-Ready Data)

The final pipeline runs on cron at 3:00 AM (UTC).

Its role is simple:

  • Take validated CSVs from /ready_data
  • Push them to an external SFTP
  • Make them instantly consumable by Power BI

By the time teams arrive at work:

  • Data is fresh
  • KPIs are current
  • Dashboards reflect what is happening now, not last month
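
Outside of n8n, the distribution step itself amounts to a few lines with a library like ssh2-sftp-client. The host, credentials, and remote path below are placeholders, and the sketch assumes /ready_data is readable locally; in the actual workflow both ends are handled by n8n.

// Sketch of the distribution step: copy every CSV from the local
// /ready_data zone to the external SFTP consumed by Power BI.
// Host, credentials, and remote path are hypothetical placeholders.
const SftpClient = require('ssh2-sftp-client');
const fs = require('fs');
const path = require('path');

async function distribute() {
  const sftp = new SftpClient();
  await sftp.connect({
    host: 'sftp.external.example',
    port: 22,
    username: process.env.SFTP_USER,
    password: process.env.SFTP_PASS,
  });

  const localDir = '/ready_data';
  for (const file of fs.readdirSync(localDir)) {
    if (!file.endsWith('.csv')) continue;
    // fastPut streams the local file to the remote path
    await sftp.fastPut(path.join(localDir, file), `/powerbi/incoming/${file}`);
  }

  await sftp.end();
}

distribute().catch(console.error);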

[Diagram: distribution pipeline, from /ready_data over external SFTP to Power BI]

Tech Stack (Self-Hosted, Bank-Friendly)

  • n8n (orchestration & automation)
  • Custom JavaScript (data logic & transformations)
  • Browserless Chromium
  • SFTP (raw & ready zones)
  • Ubuntu Server
  • Docker (self-hosted)

No core dependency.
No invasive access.
No vendor lock-in.

What Changed for the Business

Before:

  • Monthly Excel exports
  • Unreadable tables
  • DIY dashboards showing last month
  • Last-minute reporting panic
  • Decisions made in partial darkness

After:

  • Daily operational datasets
  • Current-month visibility
  • Reliable KPIs
  • Zero manual interventions
  • Dashboards that actually support decisions

From a business perspective, this unlocked:

  • Faster operational decision-making
  • Better service performance monitoring
  • Clearer visibility into branch activity
  • A foundation for future automation and Agentic AI

Why This Was an Operational Excellence Project

This wasn’t about tools.
It wasn’t about dashboards.
It wasn’t about IT modernization.

It was about getting the right data, at the right time, in the right shape, so operations teams could actually operate.

Once that exists:

  • Dashboards become useful
  • Forecasting becomes possible
  • Advanced analytics becomes realistic

Final Thought

You can’t improve what you can’t see, and you can’t see it if data arrives too late.

Operational excellence starts before analytics.
It starts with pipelines like this.
