<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ThresholdIQ</title>
    <description>The latest articles on DEV Community by ThresholdIQ (@thresholdiq).</description>
    <link>https://dev.to/thresholdiq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3815615%2F465af8d0-3a3c-4a89-add1-d9164680acd5.png</url>
      <title>DEV Community: ThresholdIQ</title>
      <link>https://dev.to/thresholdiq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thresholdiq"/>
    <language>en</language>
    <item>
      <title>The 9 ML Anomaly Detection Methods ThresholdIQ Uses — Explained in Plain English</title>
      <dc:creator>ThresholdIQ</dc:creator>
      <pubDate>Tue, 24 Mar 2026 01:53:20 +0000</pubDate>
      <link>https://dev.to/thresholdiq/the-9-ml-anomaly-detection-methods-thresholdiq-uses-explained-in-plain-english-288d</link>
      <guid>https://dev.to/thresholdiq/the-9-ml-anomaly-detection-methods-thresholdiq-uses-explained-in-plain-english-288d</guid>
      <description>&lt;p&gt;When you upload a spreadsheet to &lt;strong&gt;ThresholdIQ&lt;/strong&gt;, nine separate machine learning methods run simultaneously across every column in your data. Each one is looking for a different type of problem. Some catch sudden spikes. Others find slow drift. One looks for sensors that have frozen. Another watches for two metrics that normally move together suddenly moving apart.&lt;/p&gt;

&lt;p&gt;Most people don't need to know how any of this works — they just want the anomaly flagged. But if you've ever wondered "why did &lt;strong&gt;ThresholdIQ&lt;/strong&gt; flag that?" or "what would it miss?", this guide is for you. Each method gets a plain-English explanation, a concrete worked example, and an honest summary of what it catches and what it doesn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contents&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Multi-Window Z-Score — the primary severity driver&lt;/li&gt;
&lt;li&gt;EWMA Spike Detection — sudden event catcher&lt;/li&gt;
&lt;li&gt;SARIMA Seasonal Residuals — seasonality-aware detection&lt;/li&gt;
&lt;li&gt;Isolation Forest — multivariate outlier detection&lt;/li&gt;
&lt;li&gt;Correlation Deviation — correlated failure detection&lt;/li&gt;
&lt;li&gt;DBSCAN Cluster Noise — behavioural outlier detection&lt;/li&gt;
&lt;li&gt;Seasonal Baseline — time-of-day / day-of-week context&lt;/li&gt;
&lt;li&gt;Trend Detection — gradual drift early warning&lt;/li&gt;
&lt;li&gt;Stuck &amp;amp; Zero Detection — sensor failure &amp;amp; line halt&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How the 9 methods work together:&lt;/strong&gt; Each method runs independently and produces a score. ThresholdIQ fuses these scores using a weighted formula — multi-window Z-score drives the primary severity, and the other eight methods can only boost severity, never reduce it. This means a false positive from one method can't override a clean result from the others — but genuine anomalies that multiple methods agree on escalate quickly to Critical or Emergency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Window Z-Score&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The primary severity driver — compares every data point to multiple time horizons simultaneously&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;A Z-score answers one question: "How far is this value from normal, measured in standard deviations?" A Z-score of 0 means perfectly average. A Z-score of 3 means three standard deviations above average — unusual for any distribution. Most single-metric alert systems use one Z-score window. &lt;strong&gt;ThresholdIQ&lt;/strong&gt; runs four windows in parallel: 50 points (short-term), 100 points (mid-term), 200 points (long-term), and 500 points (very long-term). Each window has its own rolling mean and standard deviation.&lt;/p&gt;

&lt;p&gt;Analogy: Imagine four weather forecasters, each looking at a different time period — the last week, the last month, the last quarter, and the last year. If all four agree that today's temperature is unusually high, you can be very confident it really is abnormal. If only the "last week" forecaster flags it, it might just be a warm spell. Multi-window Z-score works the same way: agreement across windows = high confidence = higher severity.&lt;/p&gt;

&lt;p&gt;How severity is determined&lt;/p&gt;

&lt;p&gt;The number of windows simultaneously breached maps directly to severity level:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;W50 breach only → Warning&lt;/li&gt;
&lt;li&gt;W50 + W100 both breached → Critical&lt;/li&gt;
&lt;li&gt;W50 + W100 + W200 all breached → Emergency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Worked example — daily cash flow monitoring&lt;/strong&gt;&lt;br&gt;
Example data: Finance team daily cash inflow (A$000s)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Day&lt;/th&gt;&lt;th&gt;Cash inflow&lt;/th&gt;&lt;th&gt;W50 Z-score&lt;/th&gt;&lt;th&gt;W100 Z-score&lt;/th&gt;&lt;th&gt;W200 Z-score&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Mon&lt;/td&gt;&lt;td&gt;$142k&lt;/td&gt;&lt;td&gt;0.3&lt;/td&gt;&lt;td&gt;0.2&lt;/td&gt;&lt;td&gt;0.1&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Tue&lt;/td&gt;&lt;td&gt;$138k&lt;/td&gt;&lt;td&gt;-0.1&lt;/td&gt;&lt;td&gt;-0.2&lt;/td&gt;&lt;td&gt;-0.1&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Wed&lt;/td&gt;&lt;td&gt;$41k&lt;/td&gt;&lt;td&gt;-3.4&lt;/td&gt;&lt;td&gt;-1.8&lt;/td&gt;&lt;td&gt;-1.2&lt;/td&gt;&lt;td&gt;⚠️ Warning (W50 only)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Thu&lt;/td&gt;&lt;td&gt;$38k&lt;/td&gt;&lt;td&gt;-3.8&lt;/td&gt;&lt;td&gt;-3.1&lt;/td&gt;&lt;td&gt;-1.9&lt;/td&gt;&lt;td&gt;🟠 Critical (W50+W100)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Fri&lt;/td&gt;&lt;td&gt;$29k&lt;/td&gt;&lt;td&gt;-4.2&lt;/td&gt;&lt;td&gt;-3.7&lt;/td&gt;&lt;td&gt;-3.2&lt;/td&gt;&lt;td&gt;🔴 Emergency (all 3)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Wednesday's drop fires a Warning — unusual in the short term but not yet confirmed. By Friday, all three windows agree the cash inflow is far below normal. The anomaly has persisted and escalated to Emergency. This is a structural problem, not a one-day blip.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sudden large spikes or drops&lt;/li&gt;
&lt;li&gt;Sustained deviations that persist over time&lt;/li&gt;
&lt;li&gt;Values that are unusual at any time horizon&lt;/li&gt;
&lt;li&gt;Escalating anomalies (gets worse each period)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Seasonal patterns (Sunday lows look anomalous)&lt;/li&gt;
&lt;li&gt;Gradual drift that's slow enough to shift the mean&lt;/li&gt;
&lt;li&gt;Multi-metric relationships between columns&lt;/li&gt;
&lt;/ul&gt;
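The multi-window logic described above can be sketched in a few lines of Python. This is an illustrative sketch only — not ThresholdIQ's actual implementation; the window sizes and the 3σ breach threshold follow the description above, and the severity ladder maps the breach count exactly as in the list.

```python
import statistics

def multi_window_severity(history, value, windows=(50, 100, 200), threshold=3.0):
    """Count how many trailing windows the new value breaches, then map the
    breach count to the article's severity ladder."""
    breached = 0
    for w in windows:
        if len(history) > w - 1:              # need at least w points for this window
            tail = history[-w:]
            mean = statistics.fmean(tail)
            sd = statistics.pstdev(tail)
            if sd > 0 and abs(value - mean) / sd > threshold:
                breached += 1
    return {0: "Normal", 1: "Warning", 2: "Critical", 3: "Emergency"}[breached]

# A steady series around 142, then a collapse far below every window's mean
history = [140 + (i % 5) for i in range(200)]
print(multi_window_severity(history, 29))     # → Emergency
```

With only 60 points of history, the same collapse would breach just the 50-point window and come back as a Warning — which is exactly the "not yet confirmed" behaviour in the worked example.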

&lt;p&gt;&lt;strong&gt;Method 2 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EWMA Spike Detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Exponentially Weighted Moving Average — catches sudden instantaneous events that resolve quickly&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;EWMA (Exponentially Weighted Moving Average) is a special kind of average that gives more weight to recent data and less weight to older data. It creates a smoothed trend line through your data. Anything that deviates sharply from that smooth line — a sudden spike or crash — gets flagged.&lt;/p&gt;

&lt;p&gt;Think of it as the difference between the trend and the actual value. EWMA subtracts the smoothed trend from each actual reading. What's left (the residual) tells you what's unexpected. When the residual exceeds 3 standard deviations, EWMA fires.&lt;/p&gt;

&lt;p&gt;Analogy: You drive to work every day and the journey takes about 35 minutes. EWMA is like tracking your rolling average journey time, weighted towards recent trips. One day you hit an accident and it takes 90 minutes. The EWMA trend says "expected: 36 mins" — the actual is 90 mins — the residual is massive. Flag. Next day it's back to 34 minutes. The spike resolved quickly, but EWMA caught it the moment it happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — e-commerce daily revenue&lt;/strong&gt;&lt;br&gt;
Example data: Online store daily revenue (A$)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Date&lt;/th&gt;&lt;th&gt;Actual revenue&lt;/th&gt;&lt;th&gt;EWMA trend&lt;/th&gt;&lt;th&gt;Residual&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Mar 18&lt;/td&gt;&lt;td&gt;$14,200&lt;/td&gt;&lt;td&gt;$13,980&lt;/td&gt;&lt;td&gt;+$220&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Mar 19&lt;/td&gt;&lt;td&gt;$13,900&lt;/td&gt;&lt;td&gt;$13,960&lt;/td&gt;&lt;td&gt;-$60&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Mar 20&lt;/td&gt;&lt;td&gt;$2,100&lt;/td&gt;&lt;td&gt;$13,850&lt;/td&gt;&lt;td&gt;-$11,750&lt;/td&gt;&lt;td&gt;🔴 Emergency spike&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Mar 21&lt;/td&gt;&lt;td&gt;$13,600&lt;/td&gt;&lt;td&gt;$13,740&lt;/td&gt;&lt;td&gt;-$140&lt;/td&gt;&lt;td&gt;Normal (resolved)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;EWMA catches the Mar 20 revenue collapse on the same day it happens — before it shows up in any weekly report. It also correctly shows the next day as normal, so you don't get ongoing false alerts once the issue resolves. This was likely a payment gateway outage or checkout failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sudden single-period spikes or crashes&lt;/li&gt;
&lt;li&gt;Events that resolve quickly (transient anomalies)&lt;/li&gt;
&lt;li&gt;Instantaneous sensor readings far from trend&lt;/li&gt;
&lt;li&gt;Fast-reacting — fires within the same reporting period&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gradual drift (the trend line adapts to drift)&lt;/li&gt;
&lt;li&gt;Sustained long-term deviations&lt;/li&gt;
&lt;li&gt;Patterns that emerge across multiple metrics&lt;/li&gt;
&lt;/ul&gt;
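Here's a minimal sketch of the EWMA idea in plain Python — illustrative only, not ThresholdIQ's implementation. It tracks an exponentially weighted mean and an exponentially weighted variance of the residuals (a standard recursion; the α value here is an assumption), and flags any point whose residual exceeds 3σ of that running spread.

```python
def ewma_spike_flags(series, alpha=0.3, z_limit=3.0):
    """Flag points whose residual from the EWMA trend exceeds z_limit sigmas
    of the exponentially weighted residual spread."""
    ewm_mean = series[0]
    ewm_var = 0.0
    flags = [False]
    for x in series[1:]:
        resid = x - ewm_mean
        sd = ewm_var ** 0.5
        flags.append(sd > 0 and abs(resid) > z_limit * sd)
        # Update the estimates AFTER testing, so the spike can't mask itself
        ewm_mean = ewm_mean + alpha * resid
        ewm_var = (1 - alpha) * (ewm_var + alpha * resid ** 2)
    return flags

# Small daily wobble, then a one-day collapse: only the final point is flagged
flags = ewma_spike_flags([100, 101, 99, 100, 101, 99, 100, 101, 99, 100, 500])
print(flags[-1])   # → True
```

Because the trend adapts after each point, the day after a spike produces a modest residual again — which is why EWMA stops alerting once the issue resolves, as in the Mar 21 row above.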

&lt;p&gt;&lt;strong&gt;Method 3 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SARIMA Seasonal Residuals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Seasonal AutoRegressive Integrated Moving Average — separates predictable patterns from real anomalies&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;SARIMA is a statistical forecasting model that understands seasonality — predictable patterns that repeat at regular intervals. It learns from your historical data: "Electricity usage is always high on Monday mornings. Revenue always dips on Sundays. Production throughput is always lower on night shifts." It builds a model of what your data should look like at any given time based on those patterns.&lt;/p&gt;

&lt;p&gt;Once SARIMA has learned the seasonal model, it computes what it expected at each point, and compares that to what actually happened. The difference (the residual) is what gets analysed for anomalies. A Sunday revenue figure that looks low by absolute standards might be perfectly normal for a Sunday — SARIMA knows this and won't flag it.&lt;/p&gt;

&lt;p&gt;Analogy: A supermarket expects long checkout queues on Saturday afternoons — that's just how Saturdays work. SARIMA is the manager who knows all the normal busy and quiet patterns. If queues are long on a Wednesday at 11am (when it's normally quiet), SARIMA flags it. But it never flags Saturday afternoon queues as anomalous, because those are expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — utility gas consumption&lt;/strong&gt;&lt;br&gt;
Example data: Residential gas consumption (GJ/day)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Date&lt;/th&gt;&lt;th&gt;Day&lt;/th&gt;&lt;th&gt;Actual GJ&lt;/th&gt;&lt;th&gt;SARIMA expected&lt;/th&gt;&lt;th&gt;Residual&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Jun 1&lt;/td&gt;&lt;td&gt;Mon&lt;/td&gt;&lt;td&gt;42.1&lt;/td&gt;&lt;td&gt;41.8&lt;/td&gt;&lt;td&gt;+0.3&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Jun 7&lt;/td&gt;&lt;td&gt;Sun&lt;/td&gt;&lt;td&gt;31.4&lt;/td&gt;&lt;td&gt;32.1&lt;/td&gt;&lt;td&gt;-0.7&lt;/td&gt;&lt;td&gt;Normal (Sundays are always lower)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Jun 8&lt;/td&gt;&lt;td&gt;Mon&lt;/td&gt;&lt;td&gt;44.2&lt;/td&gt;&lt;td&gt;42.0&lt;/td&gt;&lt;td&gt;+2.2&lt;/td&gt;&lt;td&gt;Normal (slight winter increase)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Jun 9&lt;/td&gt;&lt;td&gt;Tue&lt;/td&gt;&lt;td&gt;29.3&lt;/td&gt;&lt;td&gt;41.9&lt;/td&gt;&lt;td&gt;-12.6&lt;/td&gt;&lt;td&gt;⚠️ Warning (unexpected low)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Without SARIMA, Sunday's 31.4 GJ might look anomalously low compared to weekday averages of ~43 GJ. SARIMA knows Sundays are always lower and ignores it. But Tuesday's 29.3 GJ is far below the Tuesday expectation of 41.9 GJ — that's a real anomaly (possible meter fault or pipeline pressure issue).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anomalies that break seasonal patterns&lt;/li&gt;
&lt;li&gt;Events that look normal in absolute terms but are wrong for the time period&lt;/li&gt;
&lt;li&gt;Day-of-week and hour-of-day deviations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anomalies that follow the seasonal pattern (a "seasonal" anomaly)&lt;/li&gt;
&lt;li&gt;Very short data histories (needs at least 2 full seasonal cycles)&lt;/li&gt;
&lt;li&gt;Multi-metric relationships&lt;/li&gt;
&lt;/ul&gt;
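A full SARIMA fit needs a library like statsmodels, but the core idea — compare each point to its seasonal expectation and analyse the residual — can be shown with a much simpler seasonal-naive stand-in, where "expected" is just the value from the same slot one cycle earlier. This is a deliberately simplified sketch, not the SARIMA model ThresholdIQ runs:

```python
def seasonal_residual_flags(series, period, z_limit=3.0):
    """Seasonal-naive stand-in for SARIMA: expected value = same slot last
    cycle. Flags residuals that are large relative to the residual spread."""
    residuals = [series[i] - series[i - period] for i in range(period, len(series))]
    n = len(residuals)
    mean_r = sum(residuals) / n
    sd = (sum((r - mean_r) ** 2 for r in residuals) / n) ** 0.5
    flags = [False] * period                  # first cycle has no expectation yet
    for r in residuals:
        flags.append(sd > 0 and abs(r - mean_r) > z_limit * sd)
    return flags

# Weekly cycle: six weekday readings of 42, a Sunday dip to 31, repeated four
# weeks — then a weekday reading of 29 that breaks the pattern
series = ([42] * 6 + [31]) * 4 + [29]
flags = seasonal_residual_flags(series, period=7)
print(flags[-1])   # → True (the Sundays at 31 are never flagged)
```

Note how the low Sunday values never fire: they match their seasonal expectation exactly, which is the whole point of residual-based seasonal detection.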

&lt;p&gt;&lt;strong&gt;Method 4 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolation Forest&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unsupervised ML model — detects globally unusual combinations across all metrics simultaneously&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;Isolation Forest is an unsupervised machine learning algorithm that works by trying to isolate each data point from the rest of the dataset using random cuts. Normal points — those that cluster with similar points — require many cuts to isolate because there are lots of similar points nearby. Anomalous points — those that are unusual — can be isolated in very few cuts because they're far from everything else.&lt;/p&gt;

&lt;p&gt;Crucially, Isolation Forest looks at all your columns simultaneously. A reading might be within normal range on any single metric, but if the combination of values is unusual across three or four metrics together, Isolation Forest finds it. This is the key method for catching multi-metric anomalies that single-column monitoring would never detect.&lt;/p&gt;

&lt;p&gt;Analogy: Imagine a crowd of 1,000 people, and you're trying to find the one person who doesn't belong. You start drawing random lines through the crowd — "everyone on the left of this line, everyone on the right." A normal person surrounded by similar people takes many cuts before they're alone. An oddly-dressed person standing away from the crowd gets isolated in just two or three cuts. Isolation Forest does this mathematically with your data columns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — manufacturing OEE with multiple metrics&lt;/strong&gt;&lt;br&gt;
Example data: Production line metrics (each column looks normal individually)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Shift&lt;/th&gt;&lt;th&gt;OEE %&lt;/th&gt;&lt;th&gt;Temp °C&lt;/th&gt;&lt;th&gt;Cycle time (s)&lt;/th&gt;&lt;th&gt;Reject %&lt;/th&gt;&lt;th&gt;Isolation score&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Day 1&lt;/td&gt;&lt;td&gt;87&lt;/td&gt;&lt;td&gt;68&lt;/td&gt;&lt;td&gt;42&lt;/td&gt;&lt;td&gt;1.2&lt;/td&gt;&lt;td&gt;0.42&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Day 2&lt;/td&gt;&lt;td&gt;86&lt;/td&gt;&lt;td&gt;69&lt;/td&gt;&lt;td&gt;43&lt;/td&gt;&lt;td&gt;1.4&lt;/td&gt;&lt;td&gt;0.40&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Day 3&lt;/td&gt;&lt;td&gt;83&lt;/td&gt;&lt;td&gt;74&lt;/td&gt;&lt;td&gt;47&lt;/td&gt;&lt;td&gt;2.1&lt;/td&gt;&lt;td&gt;0.71&lt;/td&gt;&lt;td&gt;⚠️ Warning (unusual combo)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Day 4&lt;/td&gt;&lt;td&gt;81&lt;/td&gt;&lt;td&gt;79&lt;/td&gt;&lt;td&gt;52&lt;/td&gt;&lt;td&gt;3.8&lt;/td&gt;&lt;td&gt;0.89&lt;/td&gt;&lt;td&gt;🔴 Emergency (globally isolated)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Day 3's OEE of 83 looks only slightly below normal. Temperature of 74°C is within the accepted range. Cycle time of 47s is a little high. Reject rate of 2.1% might pass inspection individually. But the combination of all four metrics shifting together in the same direction is globally unusual — Isolation Forest catches this as a Warning on Day 3, before any single metric triggers an individual alert. By Day 4 it's an Emergency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-metric combinations that are globally unusual&lt;/li&gt;
&lt;li&gt;Anomalies that don't breach any individual threshold&lt;/li&gt;
&lt;li&gt;Patterns invisible to single-column analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anomalies in a single column where other columns are normal&lt;/li&gt;
&lt;li&gt;Seasonal patterns (doesn't account for time)&lt;/li&gt;
&lt;li&gt;Requires enough data to establish "normal" clusters&lt;/li&gt;
&lt;/ul&gt;
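You can reproduce the idea with scikit-learn's off-the-shelf IsolationForest. The training rows and the drifted shift below are illustrative values loosely based on the worked example — the article doesn't specify ThresholdIQ's parameters, so this is a sketch, not its configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one shift: OEE %, temp °C, cycle time s, reject % (illustrative)
normal = np.array([[87, 68, 42, 1.2], [86, 69, 43, 1.4], [88, 68, 42, 1.1],
                   [87, 70, 44, 1.3], [86, 69, 43, 1.2], [88, 68, 42, 1.3]])
model = IsolationForest(contamination="auto", random_state=0).fit(normal)

# A shift where every metric drifts together — unusual as a combination.
# score_samples returns lower values for more isolated (more anomalous) points.
print(model.score_samples([[81, 79, 52, 3.8]]))
print(model.score_samples([[87, 69, 43, 1.2]]))
```

The drifted shift scores markedly lower than a typical one even though no single column is wildly out of range — the combination is what gets isolated.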

&lt;p&gt;&lt;strong&gt;Method 5 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correlation Deviation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitors whether metrics that normally move together have started moving apart — or diverge when they shouldn't&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;Some metrics in your data are naturally correlated — they tend to move up and down together. Revenue and units sold. Power consumption and production output. Delivery volume and fuel cost. Correlation Deviation monitors these relationships over time. When two metrics that have historically moved together suddenly diverge — or when two metrics that normally move independently start moving in lockstep — that's flagged as an anomaly.&lt;/p&gt;

&lt;p&gt;This method is particularly powerful for catching process failures that don't show up in any single column. If your OEE stays flat but your reject rate climbs, something has changed in the relationship between those metrics — even if neither column individually looks alarming.&lt;/p&gt;

&lt;p&gt;Analogy: You track both your heart rate and your step count while exercising. Normally, they move together — more steps = higher heart rate. One day your step count is normal but your heart rate is unusually high. The relationship has broken. Something might be wrong — you might be getting ill, or there's something else stressing your body. Correlation Deviation detects exactly this type of "the relationship broke" signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — operations supplier monitoring&lt;/strong&gt;&lt;br&gt;
Example data: Supplier B over 4 weeks (metrics normally correlated)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Week&lt;/th&gt;&lt;th&gt;Order volume&lt;/th&gt;&lt;th&gt;On-time %&lt;/th&gt;&lt;th&gt;Historical corr.&lt;/th&gt;&lt;th&gt;Deviation&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Week 1&lt;/td&gt;&lt;td&gt;240 units&lt;/td&gt;&lt;td&gt;96%&lt;/td&gt;&lt;td&gt;Strong positive&lt;/td&gt;&lt;td&gt;None&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Week 2&lt;/td&gt;&lt;td&gt;255 units&lt;/td&gt;&lt;td&gt;94%&lt;/td&gt;&lt;td&gt;Strong positive&lt;/td&gt;&lt;td&gt;Slight&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Week 3&lt;/td&gt;&lt;td&gt;270 units&lt;/td&gt;&lt;td&gt;87%&lt;/td&gt;&lt;td&gt;Breaking down&lt;/td&gt;&lt;td&gt;Moderate&lt;/td&gt;&lt;td&gt;⚠️ Warning — volume up, OTD dropping&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Week 4&lt;/td&gt;&lt;td&gt;290 units&lt;/td&gt;&lt;td&gt;74%&lt;/td&gt;&lt;td&gt;Inverted&lt;/td&gt;&lt;td&gt;Large&lt;/td&gt;&lt;td&gt;🟠 Critical — relationship fully inverted&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Historically, higher order volume correlates with better supplier performance (volume customer = priority service). Now volume is rising but on-time delivery is falling — the relationship has inverted. This is an early signal that the supplier is over-committed and struggling. Neither metric individually would have triggered an alert. The relationship breaking is the signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Correlated metrics that diverge unexpectedly&lt;/li&gt;
&lt;li&gt;Process changes that affect metric relationships&lt;/li&gt;
&lt;li&gt;Multi-metric failures invisible to single-column rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single-metric anomalies where all correlations hold&lt;/li&gt;
&lt;li&gt;Very weak or noisy correlations in the data&lt;/li&gt;
&lt;li&gt;Newly added columns with no correlation history&lt;/li&gt;
&lt;/ul&gt;
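One simple way to implement this signal is to compare the Pearson correlation of a historical window against a recent window and flag a large swing. The data below is illustrative (loosely modelled on the supplier example), and the 1.0 swing threshold is an assumption — not a ThresholdIQ parameter:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Historical window: volume and on-time % move together (strong positive corr.)
hist_volume = [240, 231, 255, 238, 260, 245, 252, 236]
hist_otd    = [96, 93, 97, 94, 98, 95, 96, 93]
baseline = pearson(hist_volume, hist_otd)

# Recent window: volume rising while on-time % falls — relationship inverted
recent = pearson([255, 270, 280, 290], [94, 87, 80, 74])

# A large swing between the two correlations is the "relationship broke" flag
drifted = abs(baseline - recent) > 1.0
print(drifted)   # → True
```

Neither column alone breaches a threshold here; the swing from strongly positive to strongly negative correlation is the whole signal.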

&lt;p&gt;&lt;strong&gt;Method 6 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DBSCAN Cluster Noise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Density-Based Spatial Clustering — identifies points that don't belong to any normal behavioural cluster&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;DBSCAN (Density-Based Spatial Clustering of Applications with Noise) groups your data into clusters based on density — regions where data points are close together. Points that belong to a dense cluster are normal. Points that sit far from any cluster, in low-density regions of the data space, are labelled "noise" — and those noise points are anomalies.&lt;/p&gt;

&lt;p&gt;Unlike methods that flag values outside a threshold, DBSCAN doesn't need to know in advance what "normal" looks like. It discovers the natural clusters in your data and then identifies what doesn't fit. This makes it excellent at catching systematic patterns that are unusual — like a specific product SKU that consistently returns at a different rate from all similar products, or a meter that always reads in a pattern no other meter produces.&lt;/p&gt;

&lt;p&gt;Analogy: Imagine dropping 1,000 coloured dots on a map representing where people park their cars. Most dots cluster around shopping centres, stations, and offices. A few dots appear in random fields with no nearby cluster — those are the anomalies. DBSCAN finds those isolated dots without anyone telling it where the normal clusters should be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — e-commerce returns by product&lt;/strong&gt;&lt;br&gt;
Example data: Product return rates and average review score across 8 SKUs&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;SKU&lt;/th&gt;&lt;th&gt;Return rate %&lt;/th&gt;&lt;th&gt;Avg review&lt;/th&gt;&lt;th&gt;Cluster&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;SKU-001&lt;/td&gt;&lt;td&gt;3.2&lt;/td&gt;&lt;td&gt;4.3&lt;/td&gt;&lt;td&gt;Normal cluster A&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SKU-002&lt;/td&gt;&lt;td&gt;4.1&lt;/td&gt;&lt;td&gt;4.1&lt;/td&gt;&lt;td&gt;Normal cluster A&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SKU-003&lt;/td&gt;&lt;td&gt;3.8&lt;/td&gt;&lt;td&gt;4.4&lt;/td&gt;&lt;td&gt;Normal cluster A&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SKU-004&lt;/td&gt;&lt;td&gt;2.9&lt;/td&gt;&lt;td&gt;4.6&lt;/td&gt;&lt;td&gt;Normal cluster B&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SKU-005&lt;/td&gt;&lt;td&gt;3.1&lt;/td&gt;&lt;td&gt;4.5&lt;/td&gt;&lt;td&gt;Normal cluster B&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SKU-006&lt;/td&gt;&lt;td&gt;18.7&lt;/td&gt;&lt;td&gt;2.1&lt;/td&gt;&lt;td&gt;No cluster (noise)&lt;/td&gt;&lt;td&gt;🔴 Emergency — isolated outlier&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;SKU-006 sits completely outside both normal clusters — it has a return rate and review score combination that no other product in the catalogue produces. DBSCAN labels it as noise (an anomaly). This flags a quality issue: the product is defective, mislabelled, or fundamentally not meeting customer expectations. A threshold rule watching return rate alone might catch this eventually, but DBSCAN catches the combined pattern immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Entities (SKUs, meters, suppliers) unlike any normal group&lt;/li&gt;
&lt;li&gt;Systematic defect patterns in quality data&lt;/li&gt;
&lt;li&gt;Reverse-wiring and meter tampering patterns&lt;/li&gt;
&lt;li&gt;No prior knowledge of "normal" required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time-series anomalies (DBSCAN ignores time order)&lt;/li&gt;
&lt;li&gt;Anomalies that cluster with other anomalies&lt;/li&gt;
&lt;li&gt;Sparse datasets with too few points per cluster&lt;/li&gt;
&lt;/ul&gt;
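The SKU example maps almost directly onto scikit-learn's DBSCAN. The `eps` and `min_samples` values below are illustrative choices for this toy dataset, not ThresholdIQ's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Rows: return rate %, average review score per SKU (from the table above)
X = np.array([[3.2, 4.3], [4.1, 4.1], [3.8, 4.4], [2.9, 4.6], [3.1, 4.5],
              [18.7, 2.1]])

# eps = neighbourhood radius; min_samples = density needed to form a cluster
labels = DBSCAN(eps=1.0, min_samples=2).fit(X).labels_
print(labels)   # the last SKU gets label -1: noise, i.e. the anomaly
```

Label `-1` is DBSCAN's "noise" marker — the point belongs to no dense region, which is exactly the SKU-006 situation. Note that no one told the model what a normal return rate looks like.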

&lt;p&gt;&lt;strong&gt;Method 7 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seasonal Baseline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Maintains separate normal ranges per hour-of-day and day-of-week — context-aware thresholds without manual configuration&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;The Seasonal Baseline method builds a separate statistical profile for each time bucket in your data. For hourly data, it calculates a normal mean and standard deviation for each hour of the day and each day of the week independently — so "normal for 3am on a Sunday" and "normal for 3pm on a Tuesday" are tracked separately.&lt;/p&gt;

&lt;p&gt;This is simpler than SARIMA (it doesn't build a full forecasting model) but it's very effective at eliminating false positives caused by predictable time-based patterns. Night-shift throughput, weekend call volumes, Monday morning order surges — all of these are learned as patterns specific to their time bucket and excluded from anomaly detection.&lt;/p&gt;

&lt;p&gt;Analogy: A restaurant manager knows that 20 customers at 1pm on a Tuesday is completely normal, but 20 customers at 1pm on a Saturday is concerningly quiet. They don't need a formula — they just know from experience what each day and time looks like. The Seasonal Baseline does this automatically for every column in your data, learning the expected level for every hour and day combination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — call centre ticket volume&lt;/strong&gt;&lt;br&gt;
Example data: Customer support tickets per hour (operations team)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Time&lt;/th&gt;&lt;th&gt;Day&lt;/th&gt;&lt;th&gt;Tickets&lt;/th&gt;&lt;th&gt;Baseline for this slot&lt;/th&gt;&lt;th&gt;Deviation&lt;/th&gt;&lt;th&gt;Result&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;09:00&lt;/td&gt;&lt;td&gt;Monday&lt;/td&gt;&lt;td&gt;47&lt;/td&gt;&lt;td&gt;Mon 9am avg: 44 (±8)&lt;/td&gt;&lt;td&gt;+3&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;09:00&lt;/td&gt;&lt;td&gt;Sunday&lt;/td&gt;&lt;td&gt;12&lt;/td&gt;&lt;td&gt;Sun 9am avg: 14 (±4)&lt;/td&gt;&lt;td&gt;-2&lt;/td&gt;&lt;td&gt;Normal (Sundays are always quiet)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;14:00&lt;/td&gt;&lt;td&gt;Wednesday&lt;/td&gt;&lt;td&gt;89&lt;/td&gt;&lt;td&gt;Wed 2pm avg: 38 (±9)&lt;/td&gt;&lt;td&gt;+51 (&amp;gt;5σ)&lt;/td&gt;&lt;td&gt;⚠️ Warning — far above Wed 2pm normal&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Without Seasonal Baseline, Sunday's 12 tickets might look anomalously low versus the overall weekly average of ~42 tickets/hour. The Seasonal Baseline knows Sunday at 9am averages 14 and treats 12 as normal. Wednesday at 2pm averaging 38 tickets suddenly receiving 89 is genuinely anomalous — something caused an unusual spike on a normally quiet Wednesday afternoon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Values that are wrong for the time period even if typical overall&lt;/li&gt;
&lt;li&gt;Anomalies on predictably quiet periods (weekends, nights)&lt;/li&gt;
&lt;li&gt;Shift-specific deviations in manufacturing data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-week seasonal patterns (uses fixed day/hour buckets)&lt;/li&gt;
&lt;li&gt;Long-term trends (the baseline adapts slowly to drift)&lt;/li&gt;
&lt;li&gt;Multi-metric patterns&lt;/li&gt;
&lt;/ul&gt;
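The bucketing idea is simple enough to sketch in full: group history by (day, hour), keep a mean and standard deviation per bucket, and test new values against their own bucket only. This is an illustrative sketch — the history values below are made up to mirror the call-centre example:

```python
import statistics
from collections import defaultdict

def build_baselines(rows):
    """rows: (day_of_week, hour, value) tuples. Returns per-(day, hour)
    (mean, std dev) profiles."""
    buckets = defaultdict(list)
    for day, hour, value in rows:
        buckets[(day, hour)].append(value)
    return {k: (statistics.fmean(v), statistics.pstdev(v)) for k, v in buckets.items()}

def is_anomalous(baselines, day, hour, value, z_limit=3.0):
    """Compare a new value against its own time bucket only."""
    mean, sd = baselines[(day, hour)]
    return sd > 0 and abs(value - mean) > z_limit * sd

# Four weeks of history: Mondays at 9am are busy, Sundays at 9am are quiet
history = ([("Mon", 9, v) for v in (44, 41, 47, 44)]
           + [("Sun", 9, v) for v in (14, 13, 15, 14)])
baselines = build_baselines(history)

print(is_anomalous(baselines, "Sun", 9, 12))   # → False (normal quiet Sunday)
print(is_anomalous(baselines, "Mon", 9, 89))   # → True (surge vs Monday norm)
```

The quiet Sunday never fires because it is only ever compared to other Sunday mornings — exactly the false-positive elimination the method exists for.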

&lt;p&gt;&lt;strong&gt;Method 8 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trend Detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Identifies monotonic drift across consecutive rolling windows — the early warning system for gradual degradation&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;Trend Detection compares the mean value across consecutive rolling windows of 50 data points each. If the mean is consistently moving in one direction — each window's average is higher (or lower) than the previous window's average, across three or more consecutive windows — a trend is flagged. This is monotonic drift: not a spike, not a step change, but a steady, persistent movement in one direction.&lt;/p&gt;

&lt;p&gt;This is the method that gives you weeks of advance warning on bearing wear, budget overruns that build slowly, and supplier performance that's eroding imperceptibly. No single period looks alarming. The direction across periods is the signal.&lt;/p&gt;

&lt;p&gt;Analogy: Your bathroom scale shows your weight each morning. No single day looks dramatically different. But Trend Detection is the method that notices "you were 78.2kg three weeks ago, then 78.8kg, then 79.3kg, then 79.9kg this week." Each individual reading looks fine. The four-week upward trajectory is the alert. This is how gradual health problems — and gradual data problems — are caught before they become crises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — equipment spindle temperature drift&lt;/strong&gt;&lt;br&gt;
Example data: CNC machine spindle temperature across 4 weekly windows (50 readings each)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Window&lt;/th&gt;&lt;th&gt;Period&lt;/th&gt;&lt;th&gt;Window mean temp&lt;/th&gt;&lt;th&gt;Change from prev.&lt;/th&gt;&lt;th&gt;Trend status&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;W1&lt;/td&gt;&lt;td&gt;Week 1&lt;/td&gt;&lt;td&gt;67.2°C&lt;/td&gt;&lt;td&gt;—&lt;/td&gt;&lt;td&gt;Baseline&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;W2&lt;/td&gt;&lt;td&gt;Week 2&lt;/td&gt;&lt;td&gt;68.4°C&lt;/td&gt;&lt;td&gt;+1.2°C&lt;/td&gt;&lt;td&gt;Monitoring&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;W3&lt;/td&gt;&lt;td&gt;Week 3&lt;/td&gt;&lt;td&gt;69.9°C&lt;/td&gt;&lt;td&gt;+1.5°C&lt;/td&gt;&lt;td&gt;⚠️ Warning — 3rd consecutive rise&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;W4&lt;/td&gt;&lt;td&gt;Week 4&lt;/td&gt;&lt;td&gt;71.8°C&lt;/td&gt;&lt;td&gt;+1.9°C&lt;/td&gt;&lt;td&gt;🟠 Critical — accelerating upward trend&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;No individual reading breached the "normal operating temperature" threshold of 75°C. A single-threshold rule would have seen nothing. Trend Detection flagged the Warning in Week 3 because three consecutive windows all showed higher averages than the previous — monotonic upward drift. The maintenance team scheduled a bearing inspection, found wear, and replaced the bearing before it caused an unplanned line stoppage. This is what preventive detection looks like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gradual drift that never triggers a single-point threshold&lt;/li&gt;
&lt;li&gt;Slowly accumulating budget or cost overruns&lt;/li&gt;
&lt;li&gt;Equipment wear and sensor calibration drift&lt;/li&gt;
&lt;li&gt;Performance erosion in supplier or sales data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sudden spikes (EWMA handles those)&lt;/li&gt;
&lt;li&gt;Oscillating or reversing trends&lt;/li&gt;
&lt;li&gt;Multi-metric anomalies&lt;/li&gt;
&lt;/ul&gt;
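The window-mean comparison is a few lines of Python. This sketch checks whether the last three window-to-window changes all move in the same direction — an illustration of the idea, not ThresholdIQ's exact rule:

```python
def monotonic_drift(series, window=50, runs=3):
    """True if the means of the last runs+1 consecutive non-overlapping
    windows move strictly in one direction (all rising or all falling)."""
    n = len(series) // window
    if runs + 1 > n:
        return False                      # not enough complete windows yet
    means = [sum(series[i * window:(i + 1) * window]) / window for i in range(n)]
    deltas = [means[i + 1] - means[i] for i in range(n - runs - 1, n - 1)]
    return all(d > 0 for d in deltas) or all(0 > d for d in deltas)

# Four weekly windows of spindle temperature drifting steadily upward
readings = [67.2] * 50 + [68.4] * 50 + [69.9] * 50 + [71.8] * 50
print(monotonic_drift(readings))          # → True
```

An oscillating series (up, down, up) returns False — which matches the "misses" list: this method deliberately ignores anything that isn't a persistent one-direction movement.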

&lt;p&gt;&lt;strong&gt;Method 9 of 9&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stuck &amp;amp; Zero Detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Identifies sensor freeze (repeated identical values) and sudden drop to zero — fires Emergency immediately&lt;/p&gt;

&lt;p&gt;Plain-English explanation&lt;/p&gt;

&lt;p&gt;This is the most straightforward of the nine methods, but it catches some of the most expensive failures. It monitors for two specific patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stuck values:&lt;/strong&gt; The same number appearing repeatedly across a rolling window. A live sensor that outputs 47.3 for 20 consecutive readings isn't measuring anything — it's frozen. This indicates sensor failure, PLC communication loss, or a data pipeline that's stuck replaying stale data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Zero values:&lt;/strong&gt; A metric that has been producing non-zero readings suddenly drops to exactly zero. This indicates a complete equipment stoppage, a service disconnection, a tracking pixel going offline, or a meter that has stopped registering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both patterns immediately escalate to Emergency severity — not Warning, not Critical. They indicate that your monitoring data is no longer trustworthy, which is worse than a bad reading. You can act on an anomalous reading. You can't act on data that secretly stopped updating.&lt;/p&gt;

&lt;p&gt;Analogy: Imagine a speedometer in a car that suddenly locks at 60 km/h even while the car decelerates to a stop. A stuck speedometer is worse than an inaccurate one — it's actively lying. You'd rather know you have no speed data than trust a frozen reading. Stuck detection is your warning that the instrument has locked up. Zero detection is your warning that the engine has stopped entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worked example — production line sensor monitoring&lt;/strong&gt;&lt;br&gt;
Example 1: Stuck sensor — hydraulic pressure readings (bar)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Reading #&lt;/th&gt;&lt;th&gt;Pressure (bar)&lt;/th&gt;&lt;th&gt;Status&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;R-001&lt;/td&gt;&lt;td&gt;142.3&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;R-002&lt;/td&gt;&lt;td&gt;141.8&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;R-003&lt;/td&gt;&lt;td&gt;143.1&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;R-004 to R-024&lt;/td&gt;&lt;td&gt;143.1 (repeated 21×)&lt;/td&gt;&lt;td&gt;🔴 Emergency — sensor frozen&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Example 2: Zero drop — e-commerce checkout completions per hour&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Hour&lt;/th&gt;&lt;th&gt;Completions&lt;/th&gt;&lt;th&gt;Status&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;14:00&lt;/td&gt;&lt;td&gt;47&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;15:00&lt;/td&gt;&lt;td&gt;52&lt;/td&gt;&lt;td&gt;Normal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;16:00&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;🔴 Emergency — checkout dead&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;17:00&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;🔴 Emergency — ongoing&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Both patterns fire Emergency immediately. There's no Warning-then-Critical escalation because there's nothing to investigate — the data source has either stopped or frozen. In the checkout example, the business lost ~A$2,200/hour (52 completions × ~A$42 AOV) for two hours before the IT team was notified via the email alert. Without Stuck &amp;amp; Zero detection, this would only have been caught in the next morning's daily sales review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sensor freeze and PLC communication failure&lt;/li&gt;
&lt;li&gt;Complete equipment or line stoppages&lt;/li&gt;
&lt;li&gt;Data pipeline failures serving stale data&lt;/li&gt;
&lt;li&gt;Tracking pixel and analytics disconnections&lt;/li&gt;
&lt;li&gt;Meter communication failures in utility data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Misses on its own&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Partial failures (low values, not zero)&lt;/li&gt;
&lt;li&gt;Sensors that output random noise instead of zero&lt;/li&gt;
&lt;li&gt;Legitimate zero readings (planned shutdowns)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How all 9 methods combine into a single severity grade&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running nine separate detection methods is only useful if their results are combined intelligently. &lt;strong&gt;ThresholdIQ&lt;/strong&gt; uses a score fusion formula that treats Multi-Window Z-Score as the primary driver, with the other eight methods acting as boosters:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;/* Score fusion formula */
final_score = multiWindow_score + min(0.25, ml_composite × 0.25)

/* ML composite weights */
ml_composite =
  EWMA(0.12) + SARIMA(0.22) + IForest(0.20) +
  Correlation(0.12) + DBSCAN(0.06) +
  Seasonal(0.12) + Trend(0.10) + Stuck(0.06)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The key design principle: ML methods can only boost severity, never reduce it. If Multi-Window Z-Score says Warning, the combined ML composite can escalate it to Critical or Emergency, but it cannot declare it Normal. This prevents false negatives — genuine anomalies can't be overridden by other methods — while also preventing false positives from any single ML method firing alone.&lt;/p&gt;

&lt;p&gt;Example of fusion in action: A meter reading fires a Warning from Multi-Window Z-Score (W50 breach). SARIMA flags it as unexpected for the time of day (+0.22 boost). Isolation Forest confirms it's a globally unusual reading (+0.20 boost). The combined ml_composite pushes the final score above the Critical threshold. The Warning automatically escalates to Critical — with the signals tab showing exactly which methods fired and why.&lt;/p&gt;

&lt;p&gt;Why nine methods and not just one?&lt;/p&gt;

&lt;p&gt;Every single one of these methods has failure modes when used alone. Z-Score fires false positives on seasonal data. SARIMA can't catch sudden spikes on new datasets. Isolation Forest doesn't understand time. EWMA can't detect gradual drift. No single method finds everything — but nine methods running in parallel, with intelligent fusion, catches the anomalies that cost real money.&lt;/p&gt;

&lt;p&gt;The table below shows which method catches which type of anomaly:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Anomaly type&lt;/th&gt;&lt;th&gt;Best method(s)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Sudden single-period spike or crash&lt;/td&gt;&lt;td&gt;EWMA, Multi-Window Z-Score&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Sustained deviation over many periods&lt;/td&gt;&lt;td&gt;Multi-Window Z-Score, Trend Detection&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Seasonally unexpected value&lt;/td&gt;&lt;td&gt;SARIMA, Seasonal Baseline&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Multi-metric combination anomaly&lt;/td&gt;&lt;td&gt;Isolation Forest, Correlation Deviation&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Gradual drift over weeks&lt;/td&gt;&lt;td&gt;Trend Detection&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Behavioural cluster outlier&lt;/td&gt;&lt;td&gt;DBSCAN&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Sensor freeze / line halt&lt;/td&gt;&lt;td&gt;Stuck &amp;amp; Zero Detection&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Any of the above, with time context&lt;/td&gt;&lt;td&gt;Seasonal Baseline + all others&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Try all 9 methods free — upload your spreadsheet → &lt;a href="https://thresholdiq.app"&gt;thresholdiq.app&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>analytics</category>
    </item>
    <item>
      <title>How ThresholdIQ Detects Anomalies Automatically (No Setup Required)</title>
      <dc:creator>ThresholdIQ</dc:creator>
      <pubDate>Tue, 17 Mar 2026 03:45:12 +0000</pubDate>
      <link>https://dev.to/thresholdiq/how-thresholdiq-detects-anomalies-automatically-no-setup-required-5ah9</link>
      <guid>https://dev.to/thresholdiq/how-thresholdiq-detects-anomalies-automatically-no-setup-required-5ah9</guid>
      <description>&lt;p&gt;Most monitoring tools ask you to do the hard work first: decide which columns to watch, pick your thresholds, and configure your rules. But here's the problem — if you already knew exactly what was wrong and where to look, you wouldn't need a monitoring tool.&lt;/p&gt;

&lt;p&gt;ThresholdIQ takes the opposite approach. Upload your file. Click detect. The engine figures out what's unusual. This post explains exactly how that works, in plain English.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem With Manual Thresholds&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine you're a finance analyst monitoring daily revenue. You set a rule: "alert me if revenue drops below £50,000." Sensible. But:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Revenue is always lower on Sundays — does that count?&lt;/li&gt;
&lt;li&gt;Revenue has been gradually declining for six weeks — should you have noticed sooner?&lt;/li&gt;
&lt;li&gt;Revenue and margin both dropped at exactly the same time today — is that more serious?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A static threshold catches the obvious cases. It misses the subtle, the seasonal, and the correlated ones. ThresholdIQ's automatic engine is designed to catch all of them.&lt;/p&gt;
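&lt;p&gt;To make the Sunday problem concrete, here is a small illustrative sketch (not ThresholdIQ's actual code; the numbers and function names are invented) contrasting a fixed floor with a per-weekday baseline:&lt;/p&gt;

```python
# Illustrative only: static floor vs. per-weekday baseline.
from statistics import mean, stdev

def static_alert(value, floor=50_000):
    """Fires whenever revenue dips under the fixed floor,
    including every routine quiet Sunday."""
    return not value >= floor   # True means "alert"

def seasonal_alert(value, weekday, history, z=2.5):
    """Compare the value against history for the SAME weekday only."""
    same_day = [v for d, v in history if d == weekday]
    mu, sigma = mean(same_day), stdev(same_day)
    return abs(value - mu) > z * sigma

# Four weeks of (weekday, revenue): Sundays (weekday 6) are always ~38k.
history = [(d % 7, 55_000 - 17_000 * (d % 7 == 6)) for d in range(28)]

print(static_alert(38_000))                # True: false alarm on a normal Sunday
print(seasonal_alert(38_000, 6, history))  # False: normal for a Sunday
```

&lt;p&gt;The static rule fires on a perfectly ordinary Sunday; the seasonal rule only fires when the value is unusual for that weekday.&lt;/p&gt;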

&lt;p&gt;&lt;strong&gt;What Happens When You Click "Detect Anomalies"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you upload a file and click the button, nine detection methods run simultaneously across every numeric column in your data. Here's what each one does and what it catches:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;&lt;tr&gt;&lt;th&gt;Method&lt;/th&gt;&lt;th&gt;What it does&lt;/th&gt;&lt;th&gt;What it catches&lt;/th&gt;&lt;th&gt;Severity&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;Multi-Window Z-Score&lt;/td&gt;&lt;td&gt;Computes local mean and standard deviation over 50-, 100-, 200- and 500-point rolling windows; flags values that deviate more than 2–3.5 standard deviations from recent history.&lt;/td&gt;&lt;td&gt;Sudden spikes, abrupt drops, sustained departures from the normal range&lt;/td&gt;&lt;td&gt;Warning → Emergency&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;EWMA&lt;/td&gt;&lt;td&gt;Applies exponential weighting so recent values count more than older ones; flags residuals between the raw value and the smoothed trend.&lt;/td&gt;&lt;td&gt;Fast, sharp spikes that a slow rolling average would miss&lt;/td&gt;&lt;td&gt;Boosts score&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;SARIMA&lt;/td&gt;&lt;td&gt;A seasonal ARIMA model learns the regular cycle in your data (e.g. daily/weekly patterns) and flags points that deviate from the seasonal expectation, not just raw magnitude.&lt;/td&gt;&lt;td&gt;Anomalies that look normal in isolation but are wrong for that time of day or day of week&lt;/td&gt;&lt;td&gt;Boosts score&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Isolation Forest&lt;/td&gt;&lt;td&gt;Treats every row as a point in multi-dimensional space (one dimension per metric) and identifies points that are isolated — far from all other points — across all metrics at once.&lt;/td&gt;&lt;td&gt;Global outliers, sensor failures, zero readings when the value should be non-zero&lt;/td&gt;&lt;td&gt;Emergency&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Correlation Deviation&lt;/td&gt;&lt;td&gt;Monitors whether correlated metrics deviate together in the same direction; two or more metrics abnormal simultaneously is a stronger signal than one alone.&lt;/td&gt;&lt;td&gt;Multi-metric failures — e.g. revenue AND margin AND volume all dropping together&lt;/td&gt;&lt;td&gt;Emergency&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;DBSCAN&lt;/td&gt;&lt;td&gt;Groups your data points into "normal behaviour" clusters; points that belong to no cluster are labelled noise and flagged.&lt;/td&gt;&lt;td&gt;Behavioural outliers — patterns that don't match any known operating mode&lt;/td&gt;&lt;td&gt;Critical&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Seasonal Baseline&lt;/td&gt;&lt;td&gt;Builds a separate mean and standard deviation for every hour-of-day and day-of-week bucket, so Sunday overnight readings are compared against Sunday overnight history — not all-time history.&lt;/td&gt;&lt;td&gt;Genuine anomalies within their time context, without false alarms from normal seasonal lows&lt;/td&gt;&lt;td&gt;Warning&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Trend Detection&lt;/td&gt;&lt;td&gt;Compares the averages of three consecutive 50-point windows; a monotonic rising or falling drift across all three windows is flagged.&lt;/td&gt;&lt;td&gt;Gradual budget drift, slow inventory decline, creeping latency — things that look fine today but signal a problem forming&lt;/td&gt;&lt;td&gt;Critical&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Stuck/Zero Detection&lt;/td&gt;&lt;td&gt;Detects when a series that previously had variation becomes constant, or drops to zero from a meaningful non-zero history.&lt;/td&gt;&lt;td&gt;Sensor failures, data pipeline outages, broken integrations that produce zeroes instead of real values&lt;/td&gt;&lt;td&gt;Emergency&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;How the Results Are Combined&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each method produces a score between 0 and 1 for every data point. These scores are combined using a weighted fusion formula:&lt;/p&gt;

&lt;p&gt;Final score = Multi-Window score + min(0.25, ML composite × 0.25)&lt;/p&gt;
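&lt;p&gt;In code, the fusion rule and the severity bands below might look like this (a sketch built only from the formula and ranges in this post; function names are illustrative):&lt;/p&gt;

```python
# Sketch of the fusion rule: the multi-window score drives severity,
# and the other methods can only add a capped boost on top.
def fuse(multi_window_score, ml_composite):
    boost = min(0.25, ml_composite * 0.25)
    return min(1.0, multi_window_score + boost)

def severity(score):
    # Published bands: Warning 0.60-0.79, Critical 0.80-0.89, Emergency 0.90+
    if score >= 0.90:
        return "Emergency"
    if score >= 0.80:
        return "Critical"
    if score >= 0.60:
        return "Warning"
    return "Normal"

print(severity(fuse(0.70, 0.0)))   # Warning: base score alone, no boost
print(severity(fuse(0.70, 1.0)))   # Emergency: full 0.25 boost lifts 0.70 to 0.95
```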

&lt;p&gt;The Multi-Window Z-score is the primary driver of severity. The other eight methods can only boost a score — they can never reduce it. This prevents a single false-positive method from masking a real anomaly.&lt;/p&gt;

&lt;p&gt;The final score maps to a severity level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Warning (0.60–0.79):&lt;/strong&gt; W50 window breached — short-term deviation, may self-resolve&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Critical (0.80–0.89):&lt;/strong&gt; W50 + W100 both breached — confirmed anomaly, investigate&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emergency (0.90+):&lt;/strong&gt; W50 + W100 + W200 all breached — structural shift, escalate immediately&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A Real Example: IoT Sensor Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Suppose you upload a CSV of hourly temperature readings from three factory facilities over 10 days. Here's what happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Schema detection:&lt;/strong&gt; ThresholdIQ finds the timestamp column, the three numeric temperature columns, and the facility dimension. No mapping required.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Seasonal baseline:&lt;/strong&gt; The engine notices temperature is reliably lower overnight (hours 22–06). It builds a separate baseline for each hour slot.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multi-window scoring:&lt;/strong&gt; A spike at 03:00 Tuesday gets a high W50 score. But the seasonal baseline confirms this is within normal overnight range — so the SARIMA method returns a low residual. The final score stays at Warning.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stuck detection:&lt;/strong&gt; Facility-C's temperature reads exactly 21.4°C for 20 consecutive hours on Thursday. Stuck detection fires — the score jumps to Emergency. This is a sensor failure, not a real reading.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Results:&lt;/strong&gt; The timeline shows Facility-C flagged in red from Thursday 08:00. The Detection Signals tab shows "Stuck/Zero: 1" — exactly one method fired, and it fired confidently.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How Much Data Do You Need?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Under 10 rows:&lt;/strong&gt; Detection is blocked — not enough data to learn any baseline&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;10–49 rows:&lt;/strong&gt; Basic mode only — Multi-Window and EWMA run. No SARIMA or clustering.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;50–99 rows:&lt;/strong&gt; Reduced mode — SARIMA skipped (it needs 40+ points to train). All other methods active.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;100+ rows:&lt;/strong&gt; Full detection — all 9 methods active&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most Finance or Operations exports — weekly KPI reports, monthly actuals, daily ops logs — you'll have well over 100 rows and the full engine runs immediately.&lt;/p&gt;
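&lt;p&gt;The stuck-sensor check from step 4 can be sketched in a few lines (illustrative only; the engine's real run length and tolerance are not documented here):&lt;/p&gt;

```python
# Illustrative stuck-sensor check: a series that used to vary
# suddenly repeats a single value (or flatlines at zero).
def is_stuck(values, run=12, tol=1e-9):
    """Flag when the last `run` readings are all (nearly) identical,
    provided the earlier history actually varied."""
    if not len(values) >= 2 * run:
        return False                      # too little history to judge
    tail, head = values[-run:], values[:-run]
    flat = all(tol >= abs(v - tail[0]) for v in tail)
    varied = max(head) - min(head) > tol
    return flat and varied

readings = [20.8, 21.9, 22.4, 21.1, 20.5, 22.0, 21.3, 20.9, 22.1, 21.6,
            20.7, 21.8] + [21.4] * 12
print(is_stuck(readings))   # True: 12 identical readings after real variation
```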

&lt;p&gt;&lt;strong&gt;What About False Positives?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the most common concern with automated detection. ThresholdIQ addresses it in three ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Seasonal awareness:&lt;/strong&gt; SARIMA and the hourly/daily seasonal baseline prevent routine low periods from triggering alerts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multi-window confirmation:&lt;/strong&gt; a Warning only becomes Critical when it persists into the 100-point window. Transient spikes stay at Warning level.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fusion capping:&lt;/strong&gt; ML methods can add at most 0.25 to the base score. A single method can't manufacture a Critical alert on its own.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What You See in the App&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After detection completes, ThresholdIQ gives you four views:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Timeline:&lt;/strong&gt; your data plotted over time with Warning/Critical/Emergency colour bands and anomaly markers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Distribution:&lt;/strong&gt; severity breakdown by metric and dimension group&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Alert Log:&lt;/strong&gt; every anomaly with its score, reason, and the exact data point value&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Detection Signals:&lt;/strong&gt; which of the 9 methods fired, how many times, and at what severity — so you understand what the engine saw&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://thresholdiq.app"&gt;Try it free at thresholdiq.app&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>monitoring</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>We Rebuilt ThresholdIQ From the Ground Up — Here's What Changed</title>
      <dc:creator>ThresholdIQ</dc:creator>
      <pubDate>Sun, 15 Mar 2026 22:57:26 +0000</pubDate>
      <link>https://dev.to/thresholdiq/thresholdiq-we-rebuilt-thresholdiq-from-the-ground-up-heres-what-changed-4mmj</link>
      <guid>https://dev.to/thresholdiq/thresholdiq-we-rebuilt-thresholdiq-from-the-ground-up-heres-what-changed-4mmj</guid>
      <description>&lt;p&gt;&lt;strong&gt;ThresholdIQ Adaptive Detection Engine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A few weeks ago, I shipped ThresholdIQ with a simple premise: monitor your spreadsheet data without needing a BI tool. You uploaded a file, set some threshold rules, and got email alerts when values crossed them.&lt;/p&gt;

&lt;p&gt;It worked. But users kept hitting the same wall: setting thresholds is harder than it sounds. What's the "right" threshold for revenue when your business has seasonality? What about a metric that's been quietly drifting for three months?&lt;/p&gt;

&lt;p&gt;So we scrapped the rule-based approach entirely. ThresholdIQ v2 uses 9 concurrent ML detection methods to find anomalies automatically — no thresholds to configure, no false positives from stale rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the engine actually does&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you press Detect, the engine runs per group, per metric, per row — all inside your browser. Nine methods run in parallel:&lt;/p&gt;

&lt;p&gt;Multi-Window Z-score builds rolling baselines at four window sizes simultaneously. EWMA catches sudden deviations from exponential trends. SARIMA handles weekly and monthly seasonal patterns. Isolation Forest finds global outliers that local windows miss. Correlation deviation checks whether related metrics moved together. DBSCAN labels isolated noise points. The remaining three catch seasonal day/hour patterns, slow trend drift, and stuck or zero-value anomalies.&lt;/p&gt;
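&lt;p&gt;As a rough illustration of the multi-window idea (the window sizes and 3-sigma cut-off here are examples, not the engine's exact parameters):&lt;/p&gt;

```python
# Illustrative multi-window z-score: one test per window size,
# severity inferred from how many windows agree.
import random
from statistics import mean, stdev

def window_breaches(series, windows=(50, 100, 200, 500), z_cut=3.0):
    x = series[-1]                        # the point under test
    breached = 0
    for w in windows:
        if not len(series) > w:
            continue                      # not enough history for this window
        hist = series[-(w + 1):-1]        # the w points before x
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(x - mu) > z_cut * sigma:
            breached += 1
    return breached   # 1 ~ Warning, 2 ~ Critical, 3+ ~ Emergency

random.seed(1)
calm = [random.gauss(100, 2) for _ in range(600)]
print(window_breaches(calm + [140]))   # 4: the spike breaches every window
```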

&lt;p&gt;Each method contributes a weighted score. The final severity — Warning, Critical, or Emergency — is determined by how many window tiers breach simultaneously. More windows breaching means a more persistent, significant anomaly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The window control actually works now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The sidebar has always had a 50 / 100 / 200 / 500 pts window selector. In v1 it only changed the chart viewport. In v2, it scales all four detection window sizes proportionally. Choose 50 pts and the engine uses windows of [25, 50, 100, 250] — reactive, catches fast spikes. Choose 500 pts and it uses [250, 500, 1000, 2500] — stable, only flags sustained anomalies. Same data, meaningfully different detection.&lt;/p&gt;
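&lt;p&gt;The scaling rule implied by those two examples (a factor of selector ÷ 100 applied to the base windows; inferred from the numbers above, not taken from the source code) is simple:&lt;/p&gt;

```python
# Inferred proportional window scaling: selector 100 maps to the
# base windows, selector 50 halves them, selector 500 multiplies by 5.
def scaled_windows(selector, base=(50, 100, 200, 500), reference=100):
    return [w * selector // reference for w in base]

print(scaled_windows(50))    # [25, 50, 100, 250]
print(scaled_windows(500))   # [250, 500, 1000, 2500]
```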

&lt;p&gt;&lt;strong&gt;The alert log shows real numbers now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One piece of feedback we kept getting: "the Threshold column shows 7,484 for a metric that's measured in percentages." That happened because the threshold was inherited from the primary metric's rolling stats rather than computed per metric.&lt;/p&gt;

&lt;p&gt;Now every alert row shows the threshold and deviation for that specific metric — in that metric's own units. Revenue deviation shows in dollars. Profit margin shows in percentage points. A new Z-Score column sits alongside Deviation so you get both the raw difference and the statistical significance.&lt;/p&gt;
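&lt;p&gt;Per-metric stats are cheap to compute. A hypothetical alert row built from one metric's own history (field names invented for illustration) might look like:&lt;/p&gt;

```python
# Illustrative per-metric alert row: deviation in the metric's own
# units plus its z-score, both from that metric's rolling stats alone.
from statistics import mean, stdev

def alert_row(metric, history, value):
    mu, sigma = mean(history), stdev(history)
    deviation = value - mu                  # in the metric's own units
    z = deviation / sigma if sigma > 0 else 0.0
    return {"metric": metric, "threshold": round(mu, 2),
            "deviation": round(deviation, 2), "z_score": round(z, 2)}

# A margin metric stays in percentage points, never dollars.
print(alert_row("margin_pct", [31.0, 32.0, 33.0, 32.0, 31.0, 33.0], 24.0))
```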

&lt;p&gt;&lt;strong&gt;Why browser-side matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every calculation — all 9 ML methods, all rolling windows, all group splits — runs entirely in your browser. Your data never hits a server. For finance and operations teams handling sensitive spreadsheets, this isn't a nice-to-have. It's the whole point.&lt;/p&gt;

&lt;p&gt;The ML libraries (Isolation Forest, simple-statistics) are now bundled locally. No CDN dependencies that can break, no MIME type errors, no third-party calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We're focused on getting the first real external paying customer before we expand the roadmap. If you're a Finance Analyst, FP&amp;amp;A manager, or Operations lead who monitors spreadsheet data regularly — ThresholdIQ is built specifically for your workflow.&lt;/p&gt;

&lt;p&gt;Try it free for 7 days at &lt;a href="https://thresholdiq.app"&gt;thresholdiq.app&lt;/a&gt; — no credit card, no setup, zero configuration.&lt;/p&gt;

&lt;p&gt;#ThresholdIQ #AnomalyDetection #ExcelMonitoring #FinanceOps #MachineLearning #SaaS #ProductUpdate&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nph8gefbfhk6j0iy57s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nph8gefbfhk6j0iy57s.png" alt="ThresholdIQ Detection Engine" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>analytics</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How to Monitor Excel or CSV Files Automatically with Alerts</title>
      <dc:creator>ThresholdIQ</dc:creator>
      <pubDate>Mon, 09 Mar 2026 23:15:19 +0000</pubDate>
      <link>https://dev.to/thresholdiq/how-to-monitor-excel-or-csv-files-automatically-with-alerts-2ddj</link>
      <guid>https://dev.to/thresholdiq/how-to-monitor-excel-or-csv-files-automatically-with-alerts-2ddj</guid>
      <description>&lt;p&gt;Power BI is excellent for complex dashboards — but if you just need threshold alerts on Excel data, you don't need that complexity.&lt;/p&gt;

&lt;p&gt;I've spent years as a senior data analyst watching small and mid-size teams spend $1,200+ per year on BI tools — just to monitor a handful of KPIs in Excel.&lt;/p&gt;

&lt;p&gt;The truth? Most teams don't need dashboards.&lt;/p&gt;

&lt;p&gt;They need alerts.&lt;/p&gt;

&lt;p&gt;"Tell me when inventory drops below 500 units."&lt;br&gt;
"Warn me when the daily refund rate exceeds 3%."&lt;br&gt;
"Flag any customer with an overdue balance over $10k."&lt;/p&gt;

&lt;p&gt;That's it. Simple threshold monitoring. And Excel can't do it automatically.&lt;/p&gt;
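&lt;p&gt;A 3-tier rule of the kind described is only a few lines of logic. This sketch uses made-up inventory tiers, not ThresholdIQ's code:&lt;/p&gt;

```python
# Illustrative 3-tier threshold rule: escalating floors for a metric
# like inventory units. Tier values are invented for the example.
def tier_alert(value, warning=500, critical=300, emergency=100):
    if not value >= emergency:
        return "Emergency"
    if not value >= critical:
        return "Critical"
    if not value >= warning:
        return "Warning"
    return None   # within normal range

print(tier_alert(650))   # None: inventory healthy
print(tier_alert(420))   # Warning: under the 500-unit floor
print(tier_alert(80))    # Emergency: under the 100-unit floor
```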

&lt;p&gt;So I built &lt;strong&gt;ThresholdIQ&lt;/strong&gt; — a browser-based Excel alert dashboard that takes 60 seconds to set up.&lt;/p&gt;

&lt;p&gt;✅ Upload any structured file (Excel, CSV, JSON, XML)&lt;br&gt;
✅ Set 3-tier thresholds (Warning / Critical / Emergency)&lt;br&gt;
✅ See timeline charts, distribution analysis, alert logs&lt;br&gt;
✅ Export reports as PDF or CSV&lt;br&gt;
✅ Zero data upload — everything runs locally in your browser&lt;/p&gt;

&lt;p&gt;Built by a data analyst. Priced for teams, not enterprises. Starts free.&lt;/p&gt;

&lt;p&gt;👉  &lt;a href="https://thresholdiq.app" rel="noopener noreferrer"&gt;ThresholdIQ&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0z4uh6oa0l6v37ermv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0z4uh6oa0l6v37ermv0.png" alt="ThresholdIQ Flow Diagram" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#excel #automation #webdev #productivity&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
