<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Monfort N. Brian</title>
    <description>The latest articles on DEV Community by Monfort N. Brian (@monfortbrian_).</description>
    <link>https://dev.to/monfortbrian_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3748492%2Fc54bd1e3-905a-442d-9687-613a4ef4c8cf.jpg</url>
      <title>DEV Community: Monfort N. Brian</title>
      <link>https://dev.to/monfortbrian_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/monfortbrian_"/>
    <language>en</language>
    <item>
      <title>How I Built an Automated Operational Analytics Pipeline from a Queue Management System</title>
      <dc:creator>Monfort N. Brian</dc:creator>
      <pubDate>Tue, 03 Feb 2026 23:24:31 +0000</pubDate>
      <link>https://dev.to/monfortbrian_/how-i-built-an-automated-operational-analytics-pipeline-from-a-queue-management-system-ecc</link>
      <guid>https://dev.to/monfortbrian_/how-i-built-an-automated-operational-analytics-pipeline-from-a-queue-management-system-ecc</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Banks don’t struggle because they lack systems.&lt;br&gt;
They struggle because operational data arrives too late to matter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This project started with a Queue Management System (QMS) used daily in branches across the bank. Kiosks were issuing tickets, services were being delivered, staff were active; data was being generated constantly.&lt;/p&gt;

&lt;p&gt;Yet operational and management teams were still:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reviewing the previous month’s performance&lt;/li&gt;
&lt;li&gt;Making decisions without visibility into the current month&lt;/li&gt;
&lt;li&gt;Fighting with unreadable Excel files&lt;/li&gt;
&lt;li&gt;Rushing reports right before meetings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wasn’t a dashboard problem.&lt;br&gt;
It was a &lt;strong&gt;data pipeline problem&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;The Initial Constraint: No API, No Clean Exports&lt;/h2&gt;

&lt;p&gt;The QMS was not part of core banking, but it had its own limitations.&lt;/p&gt;

&lt;p&gt;There was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ No usable API&lt;/li&gt;
&lt;li&gt;❌ No structured, analytics-ready export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, there was an option:&lt;br&gt;
An &lt;strong&gt;FTP backup configuration&lt;/strong&gt; where you could define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FTP credentials&lt;/li&gt;
&lt;li&gt;Destination path&lt;/li&gt;
&lt;li&gt;Backup schedule&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This looked promising.&lt;/p&gt;
&lt;h2&gt;The First Attempt (and Why It Failed)&lt;/h2&gt;

&lt;p&gt;When enabled, the QMS started pushing CSV backups to the FTP server.&lt;/p&gt;

&lt;p&gt;But reality hit quickly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CSV files had no column headers&lt;/li&gt;
&lt;li&gt;Data was malformed and inconsistent&lt;/li&gt;
&lt;li&gt;Two expected datasets (users &amp;amp; services) were mixed and incomplete&lt;/li&gt;
&lt;li&gt;Files were not reliably parseable&lt;/li&gt;
&lt;li&gt;Business logic could not be inferred&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Technically, &lt;em&gt;&lt;strong&gt;data existed&lt;/strong&gt;&lt;/em&gt;.&lt;br&gt;
Practically, &lt;strong&gt;&lt;em&gt;it was unusable&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At that point, forcing this approach further would have meant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fragile parsing&lt;/li&gt;
&lt;li&gt;Endless edge cases&lt;/li&gt;
&lt;li&gt;Low trust from the business&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I stopped.&lt;/p&gt;
&lt;h2&gt;The Pivot: Automate What the System Does Well&lt;/h2&gt;

&lt;p&gt;A few days later, I took a different angle:&lt;/p&gt;

&lt;p&gt;Instead of forcing machine integration, I automated the human workflow the system already supported well.&lt;/p&gt;

&lt;p&gt;The QMS had a stable, predictable UI for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exporting Users statistics&lt;/li&gt;
&lt;li&gt;Exporting Services statistics&lt;/li&gt;
&lt;li&gt;Downloading them as .xls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I built a browserless Chromium automation with custom logic.&lt;/p&gt;

&lt;p&gt;This changed everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1: Automated Data Extraction (Daily, Reliable)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every day at 7:00 PM (UTC+2):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Chromium automation logs into the QMS&lt;/li&gt;
&lt;li&gt;Navigates the export screens&lt;/li&gt;
&lt;li&gt;Downloads exactly two XLS files (stats_user &amp;amp; stats_serv)&lt;/li&gt;
&lt;li&gt;Deposits the files into an SFTP directory: /raw_data/&lt;/li&gt;
&lt;/ul&gt;
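&lt;p&gt;For reference, 7:00 PM at UTC+2 is 17:00 UTC. A small illustrative helper (my own sketch; n8n’s Schedule Trigger normally handles timezone conversion for you) shows how a daily local-time trigger maps to a UTC cron expression:&lt;/p&gt;

```javascript
// Convert a daily local-time trigger to a UTC cron expression.
// localHour: hour in local time (0-23); utcOffsetHours: e.g. 2 for UTC+2.
function dailyCronUtc(localHour, utcOffsetHours) {
  const utcHour = ((localHour - utcOffsetHours) % 24 + 24) % 24;
  return `0 ${utcHour} * * *`;
}

console.log(dailyCronUtc(19, 2)); // 7:00 PM UTC+2 fires at "0 17 * * *"
```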

&lt;p&gt;No human action.&lt;br&gt;
No broken CSVs.&lt;br&gt;
Same structure, every single day.&lt;/p&gt;
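&lt;p&gt;The steps above can be pictured with a Puppeteer client driving a remote Browserless Chromium instance. This is a sketch, not the actual workflow (which is built from n8n nodes); the endpoint URL, the login selectors, and the expectedExportNames helper are all hypothetical:&lt;/p&gt;

```javascript
// Sketch: drive a remote Browserless Chromium to fetch the two daily exports.
// URL, selectors, and file names below are assumptions, not the real workflow.

// The two files Layer 1 downloads each day (naming pattern assumed).
function expectedExportNames(dateTag) {
  return [`stats_user_${dateTag}.xls`, `stats_serv_${dateTag}.xls`];
}

async function downloadDailyExports(wsEndpoint) {
  // Loaded lazily so the helper above works without puppeteer-core installed.
  const puppeteer = require('puppeteer-core');
  const browser = await puppeteer.connect({ browserWSEndpoint: wsEndpoint });
  const page = await browser.newPage();
  await page.goto('https://qms.example.internal/login'); // hypothetical URL
  await page.type('#username', process.env.QMS_USER);    // hypothetical selector
  await page.type('#password', process.env.QMS_PASS);
  await page.click('#login-button');
  // ...navigate the Users and Services export screens, trigger both .xls
  // downloads, then hand the files to the SFTP upload step...
  await browser.close();
}

// Only attempt a real session when an endpoint is configured.
if (process.env.BROWSERLESS_WS) {
  downloadDailyExports(process.env.BROWSERLESS_WS).catch(console.error);
}
```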

&lt;blockquote&gt;
&lt;p&gt;The most reliable integration is often the one that respects how the system was designed to be used.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueujeioqhlx08vu9es9a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueujeioqhlx08vu9es9a.jpg" alt="automated-data-extraction-pipeline" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 2: Extraction &amp;amp; Cleaning Pipeline (Event-Driven)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The moment new files land in /raw_data, a second pipeline kicks in.&lt;/p&gt;

&lt;p&gt;This is not cron-based.&lt;br&gt;
It’s event-driven via webhook.&lt;/p&gt;
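&lt;p&gt;Conceptually, the first thing the webhook handler must do is decide which dataset a newly landed file belongs to and recover its date tag. A minimal sketch (file-name shapes assumed from the stats_user and stats_serv exports; the real pipeline is an n8n workflow):&lt;/p&gt;

```javascript
// Route a newly landed raw file and extract its embedded 8-digit date tag.
// File-name conventions are assumed from the stats_user / stats_serv exports.
function routeRawFile(fileName) {
  let dataset = 'unknown';
  if (fileName.startsWith('stats_user')) dataset = 'users';
  else if (fileName.startsWith('stats_serv')) dataset = 'services';
  const dateMatch = fileName.match(/(\d{8})/);
  return { dataset: dataset, dateTag: dateMatch ? dateMatch[1] : null };
}

console.log(routeRawFile('stats_serv_03022026.xls'));
// { dataset: 'services', dateTag: '03022026' }
```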

&lt;p&gt;What this pipeline does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads the XLS files&lt;/li&gt;
&lt;li&gt;Applies client-defined naming conventions&lt;/li&gt;
&lt;li&gt;Cleans and normalizes fields&lt;/li&gt;
&lt;li&gt;Applies business rules and calculations&lt;/li&gt;
&lt;li&gt;Aligns data to operational definitions (KPIs that actually make sense)&lt;/li&gt;
&lt;li&gt;Converts outputs to CSV&lt;/li&gt;
&lt;li&gt;Writes them to /ready_data/&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer is where raw logs become operational truth.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Get filename from Download FTP node
let inputFileName = '';
let fileDate = '';

try {
  const ftpItems = $('Download serv').all();
  if (
    ftpItems &amp;amp;&amp;amp;
    ftpItems[0] &amp;amp;&amp;amp;
    ftpItems[0].binary &amp;amp;&amp;amp;
    ftpItems[0].binary.data
  ) {
    inputFileName = ftpItems[0].binary.data.fileName || '';
  }
} catch (e) {
  console.log('Could not get filename from Download node:', e.message);
}

//Extract date from filename
if (inputFileName) {
  const dateMatch = inputFileName.match(/(\d{8})/);
  if (dateMatch) {
    fileDate = dateMatch[1];
  }
}

//Fallback (use today's date)
if (!fileDate) {
  const today = new Date();
  const day = today.getDate().toString().padStart(2, '0');
  const month = (today.getMonth() + 1).toString().padStart(2, '0');
  const year = today.getFullYear();
  fileDate = `${day}${month}${year}`;
}

//Header mapping (ONLY the columns we want)
const headerMap = {
  Utilisateur: 'users',
  '8-9': '8_9',
  '9-10': '9_10',
   ....
  'Durée session': 'session_duration',
  'Latence': 'latency',
  'Ratio (%)': 'service_ratio_pct',
  '# appelés': 'tickets_called',
  '# entretiens': 'clients_served',
  '# annulés': 'tickets_cancelled',
  'T.entretien moy.': 'avg_service_time',
  'T.entretien max.': 'max_service_time',
  'Alarme': 'alert',
};

//Build CSV content
const csvLines = [];
.....

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
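&lt;p&gt;The elided “Build CSV content” step might continue along these lines. This is my own sketch of the remaining assembly, with standard CSV quoting rules assumed; it is not the workflow’s actual code:&lt;/p&gt;

```javascript
// Quote a field only when it contains a comma, quote, or newline.
function csvEscape(value) {
  const s = String(value == null ? '' : value);
  if (/[",\n]/.test(s)) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}

// Emit a header row from the mapped names, then one line per record.
function buildCsv(headers, rows) {
  const lines = [headers.map(csvEscape).join(',')];
  for (const row of rows) {
    lines.push(headers.map(function (h) { return csvEscape(row[h]); }).join(','));
  }
  return lines.join('\n');
}

const sample = buildCsv(
  ['users', 'tickets_called'],
  [{ users: 'Agent A', tickets_called: 42 }]
);
console.log(sample);
```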



&lt;p&gt;&lt;strong&gt;Layer 3: Distribution Pipeline (Decision-Ready Data)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final pipeline runs on cron at 3:00 AM (UTC).&lt;/p&gt;

&lt;p&gt;Its role is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take validated CSVs from /ready_data&lt;/li&gt;
&lt;li&gt;Push them to an external SFTP&lt;/li&gt;
&lt;li&gt;Make them instantly consumable by Power BI&lt;/li&gt;
&lt;/ul&gt;
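&lt;p&gt;The selection step can be pictured as filtering /ready_data for the CSVs carrying the current date tag (DDMMYYYY, the same shape as fileDate in Layer 2). Function names here are illustrative:&lt;/p&gt;

```javascript
// Build the DDMMYYYY tag embedded in file names (matches Layer 2's fileDate).
function dateTagFor(date) {
  const day = String(date.getDate()).padStart(2, '0');
  const month = String(date.getMonth() + 1).padStart(2, '0');
  return day + month + date.getFullYear();
}

// Pick only the validated CSVs for the given day out of /ready_data.
function selectReadyFiles(fileNames, date) {
  const tag = dateTagFor(date);
  return fileNames.filter(function (name) {
    if (!name.endsWith('.csv')) return false;
    return name.includes(tag);
  });
}

console.log(selectReadyFiles(
  ['stats_user_03022026.csv', 'stats_user_02022026.csv', 'notes.txt'],
  new Date(2026, 1, 3) // months are 0-based: February 3, 2026
));
// [ 'stats_user_03022026.csv' ]
```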

&lt;p&gt;By the time teams arrive at work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data is fresh&lt;/li&gt;
&lt;li&gt;KPIs are current&lt;/li&gt;
&lt;li&gt;Dashboards reflect what is happening now, not last month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y41e281ro0zhr88nr0i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y41e281ro0zhr88nr0i.jpg" alt="distribution-pipeline-ready-data-external-sftp-power-bi" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech Stack (Self-Hosted, Bank-Friendly)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;n8n (orchestration &amp;amp; automation)&lt;/li&gt;
&lt;li&gt;Custom JavaScript (data logic &amp;amp; transformations)&lt;/li&gt;
&lt;li&gt;Browserless Chromium&lt;/li&gt;
&lt;li&gt;SFTP (raw &amp;amp; ready zones)&lt;/li&gt;
&lt;li&gt;Ubuntu Server&lt;/li&gt;
&lt;li&gt;Docker (self-hosted)&lt;/li&gt;
&lt;/ul&gt;
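&lt;p&gt;For context, a minimal self-hosted pairing of n8n and Browserless could look like the docker-compose sketch below. Image names, ports, and environment values are illustrative defaults; the article does not publish its actual deployment details:&lt;/p&gt;

```yaml
# Illustrative sketch, not the project's real configuration.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - GENERIC_TIMEZONE=Africa/Johannesburg  # any UTC+2 zone works here
    volumes:
      - n8n_data:/home/node/.n8n
  chrome:
    image: browserless/chrome
    ports:
      - "3000:3000"
volumes:
  n8n_data:
```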

&lt;p&gt;No core dependency.&lt;br&gt;
No invasive access.&lt;br&gt;
No vendor lock-in.&lt;/p&gt;

&lt;h2&gt;What Changed for the Business&lt;/h2&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monthly Excel exports&lt;/li&gt;
&lt;li&gt;Unreadable tables&lt;/li&gt;
&lt;li&gt;DIY dashboards showing last month&lt;/li&gt;
&lt;li&gt;Last-minute reporting panic&lt;/li&gt;
&lt;li&gt;Decisions made in partial darkness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Daily operational datasets&lt;/li&gt;
&lt;li&gt;Current-month visibility&lt;/li&gt;
&lt;li&gt;Reliable KPIs&lt;/li&gt;
&lt;li&gt;Zero manual interventions&lt;/li&gt;
&lt;li&gt;Dashboards that actually support decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From a business perspective, this unlocked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster operational decision-making&lt;/li&gt;
&lt;li&gt;Better service performance monitoring&lt;/li&gt;
&lt;li&gt;Clearer visibility into branch activity&lt;/li&gt;
&lt;li&gt;A foundation for future automation and Agentic AI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why This Was an Operational Excellence Project&lt;/h2&gt;

&lt;p&gt;This wasn’t about tools.&lt;br&gt;
It wasn’t about dashboards.&lt;br&gt;
It wasn’t about IT modernization.&lt;/p&gt;

&lt;p&gt;It was about &lt;strong&gt;getting the right data, at the right time, in the right shape&lt;/strong&gt;, so operations teams could actually operate.&lt;/p&gt;

&lt;p&gt;Once that exists:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dashboards become useful&lt;/li&gt;
&lt;li&gt;Forecasting becomes possible&lt;/li&gt;
&lt;li&gt;Advanced analytics becomes realistic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Final Thought&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;You can’t improve what you can’t see, and you can’t see it if data arrives too late.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Operational excellence starts before analytics.&lt;br&gt;
It starts with pipelines like this.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>ai</category>
      <category>datascience</category>
      <category>api</category>
    </item>
  </channel>
</rss>
