<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amir Shachar</title>
    <description>The latest articles on DEV Community by Amir Shachar (@amir_shachar_bc46a63dda21).</description>
    <link>https://dev.to/amir_shachar_bc46a63dda21</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2974609%2Fa35ad453-4d59-44c8-a78d-4c9ac560a9b1.jpg</url>
      <title>DEV Community: Amir Shachar</title>
      <link>https://dev.to/amir_shachar_bc46a63dda21</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amir_shachar_bc46a63dda21"/>
    <language>en</language>
    <item>
      <title>First-Time Payees, Payouts, and Why Clean Transactions Still Turn Into Fraud Losses</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Wed, 01 Apr 2026 07:10:24 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/first-time-payees-payouts-and-why-clean-transactions-still-turn-into-fraud-losses-5amh</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/first-time-payees-payouts-and-why-clean-transactions-still-turn-into-fraud-losses-5amh</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/first-time-payees-payout-fraud.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some of the worst fraud losses do not look obviously bad at the transaction level.&lt;/p&gt;

&lt;p&gt;The amount may look normal. The device may be familiar. The customer may even pass the basic checks. Then the money leaves anyway, and the loss shows up later.&lt;/p&gt;

&lt;p&gt;That happens because many fraud systems still score the event too narrowly. The real weakness is often in the setup around the event: a first-time payee, a change in payout path, an unusual sequence before release of funds, or a contextual signal that never made it into the decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  The transaction is not always where the risk lives
&lt;/h2&gt;

&lt;p&gt;Event-centric scoring works well when the event itself carries the anomaly. But some fraud patterns are cleaner than that. The transaction can look almost ordinary while the surrounding setup tells a very different story.&lt;/p&gt;

&lt;p&gt;That is especially true in payouts, account changes, and certain APP-style flows where the harmful part is not “this payment is weird” but “this setup makes the payment dangerous.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Why first-time payees deserve separate attention
&lt;/h2&gt;

&lt;p&gt;A first-time payee is not automatically fraud. But it is often a context shift that deserves more weight than teams give it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It changes the trust assumptions around the transaction.&lt;/li&gt;
&lt;li&gt;It is often part of a sequence, not a standalone event.&lt;/li&gt;
&lt;li&gt;When combined with timing, device, or behavior changes, it becomes much more informative.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The mistake is treating it as just another feature instead of a structural change in exposure.&lt;/p&gt;

&lt;p&gt;New payee added. Payout requested soon after. Amount is not extreme. Device looks mostly familiar. The transaction alone may score as clean. The setup around it does not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Payouts are not just purchases in reverse
&lt;/h2&gt;

&lt;p&gt;Teams often reuse too much logic between payment approval and payout approval. That is a mistake.&lt;/p&gt;

&lt;p&gt;Payouts deserve separate thinking because the fraud incentives, timing pressure, and loss mechanics are different. By the time the payout event arrives, the decision window is tighter and the operational cost of being wrong is often higher.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the rules engine usually misses
&lt;/h2&gt;

&lt;p&gt;Rules are still useful, but this is where they often become blunt. A rule can spot “new payee plus large amount” or “payout after account change.” It is less good at combining weaker contextual shifts that only become meaningful together.&lt;/p&gt;

&lt;p&gt;That is also why per-decision explanations matter. If the system can show that the model relied on a first-time payee, timing shift, and payout-path change together, the analyst has something real to work with. If it only emits a generic risk score, the queue gets slower and the pattern stays hard to debug. That operational side is covered here: &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;SHAP Explainability for Fraud Ops&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  One concrete loss pattern
&lt;/h2&gt;

&lt;p&gt;A customer account appears healthy. A new payee is introduced. The payout request lands shortly after, and nothing about the raw amount looks shocking. A simple transaction-level view may let it pass.&lt;/p&gt;

&lt;p&gt;The problem is that the suspicious part was distributed across the setup, not concentrated in one obvious event. If your system only scores the event in isolation, you are asking it to miss exactly the kind of loss you care about.&lt;/p&gt;

&lt;h2&gt;
  
  
  What buyers should evaluate in vendor demos
&lt;/h2&gt;

&lt;p&gt;If you are comparing fraud vendors, ask them to walk through a setup-sensitive case, not just a cartoonishly bad purchase. Ask how the system handles first-time payees, payout timing, and context shifts around release of funds.&lt;/p&gt;

&lt;p&gt;Then ask whether the decision arrives fast enough for the real approval path. That is where the latency article matters: &lt;a href="https://riskernel.com/blog/real-time-fraud-scoring-latency.html" rel="noopener noreferrer"&gt;Real-Time Fraud Scoring Latency: What 47ms Actually Means&lt;/a&gt;. And if you want to validate the whole thing on your own traffic, start with &lt;a href="https://riskernel.com/blog/shadow-testing-fraud-vendor.html" rel="noopener noreferrer"&gt;Shadow Testing a Fraud Vendor Before You Touch Production&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical standard
&lt;/h2&gt;

&lt;p&gt;Good fraud systems do not just ask whether the transaction looks suspicious. They ask whether the setup around the transaction changed the exposure in a way the business should care about before money moves.&lt;/p&gt;

&lt;p&gt;That is a better standard for payouts, first-time payees, and the kinds of clean-looking losses that still hurt teams every day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Note
&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/first-time-payees-payout-fraud.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/first-time-payees-payout-fraud.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;SHAP Explainability for Fraud Ops&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fraud</category>
      <category>fintech</category>
      <category>api</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Handling Extreme Class Imbalance in Fraud Detection</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Wed, 01 Apr 2026 07:09:27 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/handling-extreme-class-imbalance-in-fraud-detection-2i18</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/handling-extreme-class-imbalance-in-fraud-detection-2i18</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/extreme-class-imbalance-fraud-detection.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Fraud is one of the easiest machine learning problems to misunderstand because the target is so rare.&lt;/p&gt;

&lt;p&gt;In many portfolios, fraud is well below one percent of total events. That means a model can look excellent in offline evaluation while still creating a terrible operational outcome once it meets production traffic.&lt;/p&gt;

&lt;p&gt;If you are evaluating a fraud vendor or building your own stack, the first thing to understand is that this is not a standard classification problem. It is a rare-event decisioning problem with operational consequences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the base rate changes everything
&lt;/h2&gt;

&lt;p&gt;When fraud is extremely rare, “accuracy” becomes almost meaningless. Even AUC can look strong while the operating threshold behaves badly in the live queue.&lt;/p&gt;

&lt;p&gt;The real question is not “can the model separate classes in a notebook?” It is “can the model catch enough fraud at a threshold that does not drown the team in false positives?”&lt;/p&gt;
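&lt;p&gt;That question can be made concrete with a small sketch. The numbers below are invented for illustration (a 0.3 percent fraud rate), not real portfolio data:&lt;/p&gt;

```python
# Sketch: why threshold behavior matters more than global ranking on rare fraud.
# All scores, labels, and counts here are illustrative assumptions.

def queue_metrics(scores_labels, threshold):
    """Precision, recall, and flagged volume at one operating threshold."""
    tp = fp = fn = 0
    for score, is_fraud in scores_labels:
        flagged = score >= threshold
        if flagged and is_fraud:
            tp += 1
        elif flagged:
            fp += 1
        elif is_fraud:
            fn += 1
    flagged_total = tp + fp
    precision = tp / flagged_total if flagged_total else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall, flagged_total

# 10,000 events with a 0.3% fraud rate.
events = ([(0.9, True)] * 20 + [(0.6, True)] * 10 +
          [(0.7, False)] * 180 + [(0.2, False)] * 9790)

for t in (0.5, 0.8):
    p, r, n = queue_metrics(events, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f} flagged={n}")
```

&lt;p&gt;The same model looks very different at the two thresholds: one floods the queue with 210 cases to catch everything, the other is clean but misses a third of the fraud. That tradeoff, not the notebook ranking, is what the live base rate exposes.&lt;/p&gt;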

&lt;h2&gt;
  
  
  Why good offline metrics can still mislead you
&lt;/h2&gt;

&lt;p&gt;A vendor can show an impressive offline result and still fail your production test. That usually happens because the evaluation is too abstracted from the actual decision environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The fraud rate in the evaluation set is higher than in the real portfolio.&lt;/li&gt;
&lt;li&gt;The metrics focus on global ranking quality, not threshold behavior.&lt;/li&gt;
&lt;li&gt;The review-cost side of false positives is treated as secondary.&lt;/li&gt;
&lt;li&gt;The result is measured before the model meets missing signals, noisy enrichment, and shifting attack patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The questions that cut through this are concrete:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens at the actual operating threshold?&lt;/li&gt;
&lt;li&gt;How do precision and recall behave on the live base rate?&lt;/li&gt;
&lt;li&gt;How many extra cases hit the review queue for each incremental fraud catch?&lt;/li&gt;
&lt;li&gt;How is performance monitored after launch as the fraud mix shifts?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where oversampling starts to lie
&lt;/h2&gt;

&lt;p&gt;Techniques like oversampling and synthetic minority generation can be useful during model development, but they are easy to over-trust.&lt;/p&gt;

&lt;p&gt;The risk is not that these methods are always wrong. The risk is that they create a neat offline world that smooths over the messiness of production. Fraud does not arrive as clean synthetic clusters. It arrives in bursts, edge cases, and changing patterns that interact with the rest of your decision system.&lt;/p&gt;

&lt;h2&gt;
  
  
  One concrete failure mode
&lt;/h2&gt;

&lt;p&gt;A team evaluates a model on a rebalanced dataset and gets a result that looks excellent. Then they move toward production and discover the threshold that looked fine offline now routes too many cases to manual review.&lt;/p&gt;

&lt;p&gt;The model is not useless. The evaluation was incomplete. The hidden problem is not raw ranking quality. It is that the model was never judged against the real review-cost tradeoff.&lt;/p&gt;
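&lt;p&gt;A rough way to surface that tradeoff is to price each incremental catch in extra manual reviews. This is a toy sketch with invented counts, not a real evaluation:&lt;/p&gt;

```python
# Sketch of the review-cost tradeoff: what each extra fraud catch costs in
# extra flagged cases when the threshold drops. All numbers are invented.

def counts_at(events, t):
    """Total flagged cases and frauds caught at threshold t."""
    flagged = sum(1 for s, _ in events if s >= t)
    caught = sum(1 for s, fraud in events if s >= t and fraud)
    return flagged, caught

events = ([(0.95, True)] * 15 + [(0.65, True)] * 10 +
          [(0.7, False)] * 300 + [(0.1, False)] * 9675)

f_hi, c_hi = counts_at(events, 0.9)   # tight threshold
f_lo, c_lo = counts_at(events, 0.6)   # loose threshold
extra_catches = c_lo - c_hi
extra_clean = (f_lo - f_hi) - extra_catches
print(f"moving 0.9 -> 0.6 catches {extra_catches} more frauds "
      f"at the cost of {extra_clean} extra clean reviews")
```

&lt;p&gt;If that ratio never appears in the evaluation, the team discovers it in the queue instead.&lt;/p&gt;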

&lt;h2&gt;
  
  
  This is why buyer evaluations often go wrong
&lt;/h2&gt;

&lt;p&gt;When buyers compare vendors, they often hear broad claims about AI quality, risk intelligence, or detection performance. Without threshold-level evaluation, those claims stay too vague to be useful.&lt;/p&gt;

&lt;p&gt;That is why a practical buying process should combine the full API checklist in &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt; with a real shadow run on your own traffic. If you want the evaluation workflow itself, start here: &lt;a href="https://riskernel.com/blog/shadow-testing-fraud-vendor.html" rel="noopener noreferrer"&gt;Shadow Testing a Fraud Vendor Before You Touch Production&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operationally, false positives are part of the model
&lt;/h2&gt;

&lt;p&gt;Fraud teams often talk about the model as if it stops at the score. It does not. The model continues into the queue, the analyst experience, the customer support burden, and the approval rules that sit around it.&lt;/p&gt;

&lt;p&gt;That is also why explainability matters. If the false-positive cluster is invisible, fixing it takes longer. If the analyst can see what drove the decision, the team can debug faster. That operational side is covered in &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;SHAP Explainability for Fraud Ops&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical standard
&lt;/h2&gt;

&lt;p&gt;For fraud, the right standard is not one pretty model metric. It is a model that still behaves well when the fraud rate is tiny, the cost of review is real, and the threshold has to survive production conditions.&lt;/p&gt;

&lt;p&gt;That is a harder bar, but it is the one that actually matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Note
&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/extreme-class-imbalance-fraud-detection.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/extreme-class-imbalance-fraud-detection.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/first-time-payees-payout-fraud.html" rel="noopener noreferrer"&gt;First-Time Payees, Payouts, and Why Clean Transactions Still Turn Into Fraud Losses&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fraud</category>
      <category>fintech</category>
      <category>api</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Shadow Testing a Fraud Vendor Before You Touch Production</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Wed, 01 Apr 2026 07:08:07 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/shadow-testing-a-fraud-vendor-before-you-touch-production-5e21</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/shadow-testing-a-fraud-vendor-before-you-touch-production-5e21</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/shadow-testing-fraud-vendor.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The fastest way to make a bad fraud buying decision is to trust a polished demo.&lt;/p&gt;

&lt;p&gt;Every vendor can show a clean dashboard, a few good examples, and a latency number that looks comfortable. None of that tells you how the system behaves on your own traffic, inside your own approval flow, with your own mix of customers, missing signals, payout patterns, and operational constraints.&lt;/p&gt;

&lt;p&gt;If you are evaluating a fraud vendor seriously, the right next step is not production. It is a shadow test.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a shadow test actually is
&lt;/h2&gt;

&lt;p&gt;A shadow test means the vendor scores your real traffic in parallel without changing production behavior. Your current stack still makes the live decision. The candidate system watches the same events and produces its own outputs so you can compare them safely.&lt;/p&gt;

&lt;p&gt;That matters because buying a fraud product is not only about model quality. It is about whether the score distribution, explanations, latency, and queue impact make sense in your operating model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why feature checklists and demos are not enough
&lt;/h2&gt;

&lt;p&gt;The weak part of most evaluations is that teams compare capabilities instead of behavior. “Uses AI,” “has device intelligence,” or “supports rules plus ML” are not useful discriminators once the vendor is inside the shortlist.&lt;/p&gt;

&lt;p&gt;The practical questions are different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the system stay fast enough when traffic is real?&lt;/li&gt;
&lt;li&gt;Do the explanations help analysts move faster or just sound impressive?&lt;/li&gt;
&lt;li&gt;Do false positives cluster in ways that create extra review work?&lt;/li&gt;
&lt;li&gt;Do the scores line up with your actual fraud patterns, not the vendor's favorite demo set?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to compare during the test
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;P50, P95, and P99 latency.&lt;/li&gt;
&lt;li&gt;Score distribution by flow, not just one global average.&lt;/li&gt;
&lt;li&gt;Top explanations or reasons on flagged cases.&lt;/li&gt;
&lt;li&gt;Precision and false-positive patterns at candidate thresholds.&lt;/li&gt;
&lt;li&gt;Analyst feedback on whether the outputs are usable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you skip any of those, you usually end up learning too late that the “good” pilot was never operationally good in the first place. For the broader evaluation checklist, start with &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt;.&lt;/p&gt;
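&lt;p&gt;A first comparison pass over the shadow logs can stay very simple. This sketch assumes you log one (flow, candidate score, latency in ms) record per event; the values are invented:&lt;/p&gt;

```python
# Minimal shadow-run report: per-flow score spread and worst-case latency.
# Record format and values are assumptions for illustration.

records = [
    ("checkout", 0.12, 38), ("checkout", 0.91, 41), ("checkout", 0.07, 35),
    ("payout",   0.48, 52), ("payout",   0.51, 220), ("payout",   0.49, 48),
]

by_flow = {}
for flow, score, ms in records:
    by_flow.setdefault(flow, {"scores": [], "lat": []})
    by_flow[flow]["scores"].append(score)
    by_flow[flow]["lat"].append(ms)

for flow, d in by_flow.items():
    spread = max(d["scores"]) - min(d["scores"])
    worst = max(d["lat"])
    print(f"{flow}: score spread={spread:.2f}, worst latency={worst}ms")
```

&lt;p&gt;Even this crude view catches the classic failure: one flow where the scores compress into a narrow band or the tail latency blows out, while the global averages still look fine.&lt;/p&gt;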

&lt;h2&gt;
  
  
  How long it should run
&lt;/h2&gt;

&lt;p&gt;Long enough to capture real variability. If you run a shadow test for a day or two, you mostly learn how the system behaves on a clean slice of traffic. That is not enough.&lt;/p&gt;

&lt;p&gt;You want enough volume to see quiet periods, peak periods, incomplete enrichment, weird edge cases, and at least a few analyst-reviewed outcomes. In most teams, that means at least one to two weeks of real traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  One concrete failure mode
&lt;/h2&gt;

&lt;p&gt;A vendor can look excellent in the first meeting: strong average latency, modern UI, and plausible reasons on a few example cases.&lt;/p&gt;

&lt;p&gt;Then the shadow run starts. On payout traffic, the scores compress into a narrow range, the reasons are generic, and the tail latency becomes erratic whenever one enrichment provider slows down. None of that was visible in the demo. All of it matters in production.&lt;/p&gt;

&lt;p&gt;If you want the latency part of that problem in isolation, read &lt;a href="https://riskernel.com/blog/real-time-fraud-scoring-latency.html" rel="noopener noreferrer"&gt;Real-Time Fraud Scoring Latency: What 47ms Actually Means&lt;/a&gt;. If you want to know what analysts should actually see, the SHAP article goes deeper: &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;SHAP Explainability for Fraud Ops&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Red flags that should kill the pilot
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The vendor avoids showing score distributions on your own traffic.&lt;/li&gt;
&lt;li&gt;The reasons behind decisions are too generic to be operationally useful.&lt;/li&gt;
&lt;li&gt;The system performs well only after heavy threshold tuning that hides weak base behavior.&lt;/li&gt;
&lt;li&gt;Latency is framed only as an average and the tail is ignored.&lt;/li&gt;
&lt;li&gt;Ops feedback is treated as secondary to offline model metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What a good outcome looks like
&lt;/h2&gt;

&lt;p&gt;A good shadow test ends with something simple: you understand how the new system behaves, what tradeoffs it introduces, and what production changes it would justify.&lt;/p&gt;

&lt;p&gt;That is a much higher standard than “the demo looked good,” but it is the right one. Fraud infrastructure should earn trust on your traffic before it touches your approvals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Note
&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/shadow-testing-fraud-vendor.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/shadow-testing-fraud-vendor.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/extreme-class-imbalance-fraud-detection.html" rel="noopener noreferrer"&gt;Handling Extreme Class Imbalance in Fraud Detection&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fraud</category>
      <category>fintech</category>
      <category>api</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Real-Time Fraud Scoring Latency: What 47ms Actually Means</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Wed, 01 Apr 2026 07:06:40 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/real-time-fraud-scoring-latency-what-47ms-actually-means-3700</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/real-time-fraud-scoring-latency-what-47ms-actually-means-3700</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/real-time-fraud-scoring-latency.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Fraud vendors love to say they are fast. The problem is that “fast” usually means one cherry-picked number with no context.&lt;/p&gt;

&lt;p&gt;If you are evaluating real-time fraud scoring for checkout, instant payments, payout approval, or account takeover flows, the only latency number that matters is the one your customer actually feels when the system is under load.&lt;/p&gt;

&lt;p&gt;That is why “47ms” is useful only if you understand what sits behind it, and why averages by themselves are usually the wrong way to compare vendors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Average latency is the easiest number to game
&lt;/h2&gt;

&lt;p&gt;A product demo can produce a beautiful average. Production traffic usually does not.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;P50 tells you what the middle of the distribution looks like.&lt;/li&gt;
&lt;li&gt;P95 tells you what happens when the system is a bit stressed.&lt;/li&gt;
&lt;li&gt;P99 tells you whether the tail is ugly enough to affect real users and downstream systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A vendor can still claim “sub-100ms average” while giving you a P99 that spikes into the high hundreds. That may be fine for a batch workflow. It is not fine for a hot path.&lt;/p&gt;
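&lt;p&gt;A quick sketch shows how the same samples produce a comfortable average and an ugly tail. The latency values are invented, and the helper uses the simple nearest-rank percentile method:&lt;/p&gt;

```python
# Sketch: average vs percentile view of the same latency samples.
# Sample values are invented to show a clean mean hiding a bad tail.

def percentile(samples, p):
    """Nearest-rank percentile on a sorted copy, stdlib only."""
    ordered = sorted(samples)
    idx = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[idx]

latencies_ms = [40] * 90 + [55] * 5 + [120] * 3 + [480, 900]  # 100 calls

avg = sum(latencies_ms) / len(latencies_ms)
print(f"avg={avg:.0f}ms  P50={percentile(latencies_ms, 50)}ms  "
      f"P95={percentile(latencies_ms, 95)}ms  P99={percentile(latencies_ms, 99)}ms")
```

&lt;p&gt;The average here is well under 100ms, yet one call in a hundred takes roughly half a second. For a hot path, that one call is the product.&lt;/p&gt;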

&lt;h2&gt;
  
  
  Why tail latency is the real product metric
&lt;/h2&gt;

&lt;p&gt;In fraud, the decision rarely happens alone. It sits inside a longer chain: device signals, enrichment calls, policy logic, auth windows, orchestration, and the user waiting on the other side.&lt;/p&gt;

&lt;p&gt;Once the tail gets bad, every other service has less room to breathe. Review queues start to absorb more borderline cases, fallback rules become more aggressive, and the user experiences friction that has nothing to do with model quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fast enough depends on the flow
&lt;/h2&gt;

&lt;p&gt;Not every workflow needs the same latency budget.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Card authorization and instant bank transfers need tight tails.&lt;/li&gt;
&lt;li&gt;Payout approval has slightly more room, but delays still affect conversion and support load.&lt;/li&gt;
&lt;li&gt;Manual review routing can tolerate slower decisions, but only if the output is much richer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The right question is not “is this system fast?” It is “is this system fast enough for our exact flow when traffic is real, inputs are messy, and dependencies are imperfect?”&lt;/p&gt;

&lt;h2&gt;
  
  
  What to ask a vendor besides one number
&lt;/h2&gt;

&lt;p&gt;If you want an honest answer, ask for these things together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;P50, P95, and P99 latency on production traffic.&lt;/li&gt;
&lt;li&gt;Whether enrichment calls are included in the published number.&lt;/li&gt;
&lt;li&gt;What happens when one upstream provider times out.&lt;/li&gt;
&lt;li&gt;How the system degrades when signals are partial or missing.&lt;/li&gt;
&lt;li&gt;Whether you can test latency in a shadow run against your own traffic shape.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answer stays vague, the number is usually marketing, not engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Latency and explainability are connected
&lt;/h2&gt;

&lt;p&gt;Teams often compare speed and explainability as if they trade off automatically. Sometimes they do. Often the real issue is bad product architecture, not physics.&lt;/p&gt;

&lt;p&gt;A fast score that no one can trust creates queue work. A beautiful explanation that arrives too late breaks the flow. The useful system is the one that gives you both inside the budget the business can actually support.&lt;/p&gt;

&lt;p&gt;If you are evaluating the full stack, the broader checklist is in &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt;. If you care about what the ops team sees after the decision lands, read &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;SHAP Explainability for Fraud Ops&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical standard
&lt;/h2&gt;

&lt;p&gt;A median of 47ms is useful because it leaves room for the rest of the transaction path. But it only matters if the tail stays controlled and the system keeps giving usable decisions when the environment is imperfect.&lt;/p&gt;

&lt;p&gt;That is the bar teams should use when they compare fraud vendors: not one pretty number, but a distribution, under real conditions, with enough context to trust it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Note
&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/real-time-fraud-scoring-latency.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/real-time-fraud-scoring-latency.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fraud</category>
      <category>fintech</category>
      <category>api</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>SHAP Explainability for Fraud Ops: What Analysts Actually Need</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Tue, 31 Mar 2026 15:03:55 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/shap-explainability-for-fraud-ops-what-analysts-actually-need-35g1</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/shap-explainability-for-fraud-ops-what-analysts-actually-need-35g1</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;When a fraud vendor says “explainable AI,” the fastest way to test the claim is simple: ask to see one blocked payment.&lt;/p&gt;

&lt;p&gt;Not a dashboard. Not a portfolio-level feature importance chart. One decision that an analyst has to review right now.&lt;/p&gt;

&lt;p&gt;That is where most explainability stories fall apart. A global chart may tell you that transaction amount matters across the portfolio. It does not tell the analyst handling case #47,291 why this payment was blocked and whether the decision looks reasonable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What useful explainability looks like
&lt;/h2&gt;

&lt;p&gt;A useful fraud review screen shows the main reasons behind a specific score, in plain terms, for that specific transaction.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This transaction scored 0.87
first-time payee: +0.31
velocity spike in the last hour: +0.24
device fingerprint mismatch: +0.18
geolocation anomaly: +0.14
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the practical value of per-decision feature attribution. SHAP is one way to do it well. The analyst no longer has to reverse-engineer the model from a black-box score.&lt;/p&gt;
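&lt;p&gt;Rendering that screen takes very little code once per-decision attributions exist. This sketch hard-codes the attribution values; in practice they would come from something like SHAP upstream:&lt;/p&gt;

```python
# Sketch: turning per-decision attributions into a review-screen summary.
# The score and attribution values are hard-coded assumptions for illustration.

def render_case(score, contributions, top_n=4):
    """Format the top contributors to one decision, largest magnitude first."""
    lines = [f"This transaction scored {score:.2f}"]
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    for name, value in ranked[:top_n]:
        lines.append(f"{name}: {value:+.2f}")
    return "\n".join(lines)

case = {
    "first-time payee": 0.31,
    "velocity spike in the last hour": 0.24,
    "device fingerprint mismatch": 0.18,
    "geolocation anomaly": 0.14,
}
print(render_case(0.87, case))
```

&lt;p&gt;The hard part is not the formatting; it is making sure the attribution travels with every decision so the queue can show it without a data science detour.&lt;/p&gt;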

&lt;h2&gt;
  
  
  Why portfolio-level feature importance is not enough
&lt;/h2&gt;

&lt;p&gt;Portfolio-wide summaries answer a different question. They are useful for model development, not for ops execution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They tell you what mattered on average, not what mattered on this case.&lt;/li&gt;
&lt;li&gt;They do not help when a customer calls support about one blocked payout.&lt;/li&gt;
&lt;li&gt;They do not make false-positive clusters obvious at the queue level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fraud operations need explanations that travel with the decision, not a report someone saw three weeks ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this changes the day-to-day work
&lt;/h2&gt;

&lt;p&gt;The first change is review speed. Analysts stop guessing. They can see what the model relied on most, make a faster judgment, and move to the next case.&lt;/p&gt;

&lt;p&gt;The second change is feedback quality. A black-box workflow usually produces vague complaints like “the model feels too aggressive.” A per-decision workflow produces something actionable: “we are over-weighting first-time payees for established customers with stable device history.”&lt;/p&gt;

&lt;p&gt;The third change is trust. Product, risk, support, and compliance all work better when the answer is more specific than “the system said so.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Explainability also helps catch drift
&lt;/h2&gt;

&lt;p&gt;Drift monitoring usually starts with feature distributions, calibration, and loss metrics. That is right, but it is not the whole picture.&lt;/p&gt;

&lt;p&gt;If the reasons behind decisions start shifting in a systematic way over a few months, that can be an early warning that something changed in the event stream, enrichment quality, or attack pattern. The point is not that SHAP replaces the rest of monitoring. It gives you one more operational lens that teams can actually read.&lt;/p&gt;
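&lt;p&gt;One cheap version of that lens: track the mean attribution per feature over time and flag systematic movement. The monthly values here are invented, and the 0.1 cutoff is an arbitrary illustration:&lt;/p&gt;

```python
# Sketch: flagging features whose mean attribution drifts over time.
# Monthly values and the alert cutoff are invented assumptions.

monthly_mean_attr = {
    "first-time payee": {"jan": 0.18, "feb": 0.19, "mar": 0.34},
    "device mismatch":  {"jan": 0.22, "feb": 0.21, "mar": 0.20},
}

drift_candidates = []
for feature, series in monthly_mean_attr.items():
    values = list(series.values())
    shift = values[-1] - values[0]
    if abs(shift) > 0.1:
        drift_candidates.append(feature)
        print(f"drift candidate: {feature} mean attribution moved {shift:+.2f}")
```

&lt;p&gt;This does not replace distribution and calibration monitoring; it adds a readable early warning on top of them.&lt;/p&gt;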

&lt;h2&gt;
  
  
  The real question to ask any vendor
&lt;/h2&gt;

&lt;p&gt;If an analyst opens a case in your stack today, can they see the top reasons behind the score without calling data science?&lt;/p&gt;

&lt;p&gt;If the answer is no, the model may still be smart, but the operating model is weak. For most teams, that gap matters more than another point of offline model performance.&lt;/p&gt;

&lt;p&gt;If you are evaluating vendors more broadly, start with the full API checklist in &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt;, then compare how each system handles explainability in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Note
&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/shap-explainability-fraud-ops.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>security</category>
    </item>
    <item>
      <title>Fraud Detection API: What to Look For in 2026</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Tue, 31 Mar 2026 15:02:51 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/fraud-detection-api-what-to-look-for-in-2026-59a8</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/fraud-detection-api-what-to-look-for-in-2026-59a8</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most fraud API evaluations go wrong for the same reason: teams compare feature lists instead of production behavior.&lt;/p&gt;

&lt;p&gt;A vendor says it uses AI, supports real-time scoring, and integrates quickly. That sounds fine until you try to run it on live payment traffic and discover the score is hard to interpret, the latency spikes when volume jumps, or the ops team still cannot tell why a transaction was blocked.&lt;/p&gt;

&lt;p&gt;If you are buying a fraud detection API in 2026, these are the questions worth asking before you start a pilot.&lt;/p&gt;

&lt;h2&gt;1. Can it decide fast enough for your flow?&lt;/h2&gt;

&lt;p&gt;Latency is not a vanity metric. It changes checkout friction, approval rates, and how much time you have left for downstream steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask for P95 and P99 latency, not just average response time.&lt;/li&gt;
&lt;li&gt;Ask whether the vendor measures latency in production traffic or demo traffic.&lt;/li&gt;
&lt;li&gt;Ask what happens when enrichment providers time out or return partial data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your use case is card authorization, instant transfers, or payout approval, a slow tail will hurt more than a nice average helps.&lt;/p&gt;
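&lt;p&gt;You do not need vendor tooling to sanity-check this. A minimal sketch, assuming you can export per-call latencies from your own logs; all the numbers below are invented:&lt;/p&gt;

```python
# Nearest-rank percentiles over hypothetical per-call latencies (in ms).
# 985 fast calls plus 15 slow ones: the mean stays comfortable while the tail does not.
latencies_ms = [55.0] * 985 + [800.0] * 15

def percentile(values, pct):
    """Nearest-rank percentile: dependency-free and good enough for a sanity check."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

mean = sum(latencies_ms) / len(latencies_ms)
p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
print(f"mean={mean:.1f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")
```

&lt;p&gt;A mean of about 66ms with a P99 of 800ms is exactly the shape that looks fine on a slide and hurts in an authorization flow.&lt;/p&gt;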

&lt;h2&gt;2. Can analysts see what the model relied on most?&lt;/h2&gt;

&lt;p&gt;A raw score is not enough. The ops question is simple: when a case lands in a queue, can someone understand it quickly enough to act?&lt;/p&gt;

&lt;p&gt;A useful interface shows the top drivers behind a decision, not just a confidence number. If the analyst sees only “0.91 high risk,” review speed will stay slow and false-positive patterns will stay hidden.&lt;/p&gt;

&lt;p&gt;Explainability also matters beyond the queue. It gives risk leaders a way to debug drift, justify policy changes, and explain outcomes to internal stakeholders. If you want the operational version of that problem, &lt;a href="https://riskernel.com/blog/shap-explainability-fraud-ops.html" rel="noopener noreferrer"&gt;this SHAP guide for fraud ops&lt;/a&gt; goes deeper.&lt;/p&gt;
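&lt;p&gt;What "top drivers" means in practice can be shown in a few lines. A sketch with invented feature names and per-decision attribution values (for example, SHAP values), sorted into an analyst-readable reason list:&lt;/p&gt;

```python
# Hypothetical per-decision attributions for one flagged payment.
# Positive values push toward "risky"; names and numbers are illustrative only.
attributions = {
    "first_time_payee": 0.31,
    "velocity_spike": 0.24,
    "device_mismatch": 0.18,
    "amount_zscore": -0.05,
    "account_age_days": -0.11,
}

def top_drivers(attrs, n=3):
    """Return the n features with the largest absolute contribution, signed."""
    ranked = sorted(attrs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{name}: {value:+.2f}" for name, value in ranked[:n]]

print(top_drivers(attributions))
```

&lt;p&gt;A queue that shows those three lines reads very differently from "0.91 high risk".&lt;/p&gt;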

&lt;h2&gt;3. How does it behave on heavily imbalanced data?&lt;/h2&gt;

&lt;p&gt;Fraud is rare. That sounds obvious, but it still breaks a lot of vendor claims. A model can look accurate on paper while missing the cases that matter or flooding the team with false positives.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask what fraud rates the model was trained on.&lt;/li&gt;
&lt;li&gt;Ask how the vendor measures precision and recall at the operating threshold, not just offline AUC.&lt;/li&gt;
&lt;li&gt;Ask how performance is monitored after launch as fraud patterns shift.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answer stays high-level, that is usually a warning sign.&lt;/p&gt;
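&lt;p&gt;The gap between offline metrics and the operating threshold is easy to demonstrate. A toy sketch with invented scores and labels:&lt;/p&gt;

```python
# Toy scored transactions: (model_score, is_fraud). Numbers are illustrative.
scored = [(0.95, 1), (0.90, 0), (0.85, 1), (0.80, 0), (0.70, 0),
          (0.60, 1), (0.40, 0), (0.30, 0), (0.20, 0), (0.10, 0)]

def precision_recall_at(scored, threshold):
    """Precision/recall if every transaction scoring at or above threshold is flagged."""
    flagged = [(s, y) for s, y in scored if s >= threshold]
    tp = sum(y for _, y in flagged)
    total_fraud = sum(y for _, y in scored)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / total_fraud if total_fraud else 0.0
    return precision, recall

# The same model looks very different at different operating points.
print(precision_recall_at(scored, 0.85))
print(precision_recall_at(scored, 0.50))
```

&lt;p&gt;The same scores give a precise-but-partial system at one threshold and a catch-everything-but-noisy one at another; a vendor should be able to discuss both.&lt;/p&gt;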

&lt;h2&gt;4. What does integration actually require?&lt;/h2&gt;

&lt;p&gt;“Easy integration” can mean anything from one REST endpoint to a months-long project with schema work, workflow tuning, analyst training, and data mapping across multiple systems.&lt;/p&gt;

&lt;p&gt;The practical questions are these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many endpoints do you need to call in the hot path?&lt;/li&gt;
&lt;li&gt;What minimum payload is required for a useful score?&lt;/li&gt;
&lt;li&gt;Can you launch with your current event stream, or do you need a data project first?&lt;/li&gt;
&lt;li&gt;What happens when signals are missing?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good API-first products let you start narrow and expand. Bad ones front-load the entire project.&lt;/p&gt;
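&lt;p&gt;Those questions translate directly into code. A minimal sketch of a hot-path payload builder; the field names and the missing-signal convention are assumptions, not any specific vendor's schema:&lt;/p&gt;

```python
import json

# Hypothetical minimum payload for a useful score; everything here is illustrative.
MINIMAL_PAYLOAD_FIELDS = ("event_id", "amount", "currency", "payee_id")

def build_payload(event):
    """Send only what is required, and mark missing enrichment explicitly."""
    payload = {field: event.get(field) for field in MINIMAL_PAYLOAD_FIELDS}
    missing = [field for field in MINIMAL_PAYLOAD_FIELDS if payload[field] is None]
    payload["missing_signals"] = missing  # the vendor should define behavior for these
    return json.dumps(payload)

event = {"event_id": "evt_123", "amount": 250.0, "currency": "USD"}  # no payee_id yet
print(build_payload(event))
```

&lt;p&gt;If the vendor cannot tell you what the score means when &lt;code&gt;missing_signals&lt;/code&gt; is non-empty, you have found your answer to question four.&lt;/p&gt;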

&lt;h2&gt;5. Will they let you shadow test before you commit?&lt;/h2&gt;

&lt;p&gt;You should not have to trust a slide deck. A serious vendor should let you run in parallel, inspect decisions, and compare performance against your current setup before you change production behavior.&lt;/p&gt;

&lt;p&gt;That shadow phase is where most evaluation mistakes become obvious. You learn whether the model is fast enough, whether the reasons are usable, and whether the score distribution makes operational sense for your portfolio.&lt;/p&gt;
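&lt;p&gt;Mechanically, a shadow phase can be as simple as tallying how the candidate's decisions line up against the incumbent's on identical traffic. A sketch with invented decisions:&lt;/p&gt;

```python
from collections import Counter

def shadow_compare(events):
    """Count how candidate and incumbent decisions line up on the same traffic."""
    tally = Counter()
    for incumbent, candidate in events:
        tally[(incumbent, candidate)] += 1
    return tally

# Invented paired decisions: (incumbent_decision, candidate_decision).
traffic = ([("approve", "approve")] * 90 + [("approve", "block")] * 6
           + [("block", "block")] * 3 + [("block", "approve")] * 1)

tally = shadow_compare(traffic)
disagreements = sum(n for pair, n in tally.items() if pair[0] != pair[1])
print(tally, disagreements)
```

&lt;p&gt;The disagreement cells are where the evaluation actually happens: each one is a concrete case you can pull and inspect before anything changes in production.&lt;/p&gt;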

&lt;h2&gt;6. Does the product fit your operating model?&lt;/h2&gt;

&lt;p&gt;Some teams need a platform with queues, workflows, and case management. Others just need a clean scoring API they can plug into an existing system.&lt;/p&gt;

&lt;p&gt;If you are a fintech with a small engineering team, buying a full enterprise platform when you only need fast scoring is usually how you end up overpaying for complexity.&lt;/p&gt;

&lt;p&gt;If you are in that camp, the more relevant comparison is often not “best fraud vendor overall” but “best API-first tool for our flow.” That is also why buyers often start with a focused comparison like &lt;a href="https://riskernel.com/blog/nice-actimize-alternatives.html" rel="noopener noreferrer"&gt;this Actimize alternatives breakdown&lt;/a&gt; instead of a general market map.&lt;/p&gt;

&lt;h2&gt;A simple evaluation checklist&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Demand P95 and P99 latency.&lt;/li&gt;
&lt;li&gt;Review one real analyst decision screen.&lt;/li&gt;
&lt;li&gt;Ask how the model handles rare-event imbalance.&lt;/li&gt;
&lt;li&gt;Run a shadow test before any go-live decision.&lt;/li&gt;
&lt;li&gt;Match the product shape to your actual operating model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Note&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/fraud-detection-api-guide.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/nice-actimize-alternatives.html" rel="noopener noreferrer"&gt;NICE Actimize Alternatives for Fintechs&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>api</category>
      <category>performance</category>
      <category>security</category>
    </item>
    <item>
      <title>NICE Actimize Alternatives for Fintechs: 2026 Comparison</title>
      <dc:creator>Amir Shachar</dc:creator>
      <pubDate>Tue, 31 Mar 2026 15:01:46 +0000</pubDate>
      <link>https://dev.to/amir_shachar_bc46a63dda21/nice-actimize-alternatives-for-fintechs-2026-comparison-1oap</link>
      <guid>https://dev.to/amir_shachar_bc46a63dda21/nice-actimize-alternatives-for-fintechs-2026-comparison-1oap</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://riskernel.com/blog/nice-actimize-alternatives.html" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;NICE Actimize is the default answer when enterprise banks need fraud detection. It's comprehensive, battle-tested, and deployed at most of the world's largest financial institutions.&lt;/p&gt;

&lt;p&gt;It's also expensive, slow to implement, and architecturally heavy for fintechs that need to move fast. If you're a payment processor, neobank, or lending platform with 50-500 employees, you're probably paying for capabilities you don't need and waiting months for an integration that should take days.&lt;/p&gt;

&lt;p&gt;I spent four years at NICE Actimize building fraud detection models, including systems for Bank of America. I know the platform well -- both what it does right and where it creates friction for smaller, faster-moving companies. This comparison is based on that experience plus what I've seen in the market since.&lt;/p&gt;

&lt;h2&gt;Why Companies Look for Alternatives&lt;/h2&gt;

&lt;p&gt;Three recurring pain points drive the search:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost.&lt;/strong&gt; Enterprise licenses start at $100K/year and scale quickly from there. For a Series A fintech processing moderate transaction volume, that's a significant chunk of runway allocated to a single vendor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implementation time.&lt;/strong&gt; A typical Actimize deployment takes 3-6 months minimum. You often need Actimize-certified consultants to configure the system, which adds both cost and calendar time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity.&lt;/strong&gt; Actimize is a platform, not an API. It comes with dashboards, workflow engines, case management, and reporting tools. If you need real-time transaction scoring and your team is 3 engineers, that's a lot of surface area to manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Alternative Landscape in 2026&lt;/h2&gt;

&lt;p&gt;The market has split into roughly three tiers: enterprise platforms, startup-friendly API-first vendors, and specialized niche players.&lt;/p&gt;

&lt;h3&gt;Enterprise Platforms&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Feedzai&lt;/strong&gt; is the closest direct competitor to Actimize. AI-first architecture, strong in banking and payments. More modern than Actimize under the hood, but still enterprise-priced and enterprise-complex. If you're replacing Actimize and you're a large bank, Feedzai is the main alternative. If you're a fintech looking for something lighter, you'll hit similar friction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAS Fraud Management&lt;/strong&gt; is the other legacy player. Strong analytics capabilities, but the implementation model is similar to Actimize -- heavy, consultant-driven, measured in months.&lt;/p&gt;

&lt;h3&gt;Startup-Friendly API-First&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;SEON&lt;/strong&gt; focuses on digital footprint enrichment and device intelligence. Strong for e-commerce and online lending where you need to assess the person behind the transaction. Pricing starts around $600/month, which is a different universe from Actimize. The trade-off: SEON is enrichment-heavy but lighter on real-time ML scoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sardine&lt;/strong&gt; was founded by the ex-Coinbase fraud team. Strong on device intelligence and behavioral biometrics. Good fit for fintech and crypto companies. They've raised significant funding and are building a broader platform, but the core strength is still behavioral signals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unit21&lt;/strong&gt; gives you more flexibility to build custom rules and models on top of their risk infrastructure. Good choice if you have in-house fraud expertise and want control over your decisioning logic rather than a black-box score.&lt;/p&gt;

&lt;h3&gt;Niche Players&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Alloy&lt;/strong&gt; sits at the intersection of identity verification and fraud decisioning. Not a pure fraud scoring play, but it overlaps with the KYC/onboarding fraud use case. Strong for identity-centric fraud (synthetic identity, account takeover at onboarding).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kount (Equifax)&lt;/strong&gt; focuses on e-commerce fraud. Acquired by Equifax, which gives it access to credit bureau data. Good for card-not-present fraud in retail. Less relevant for payment processors or lending platforms.&lt;/p&gt;

&lt;h2&gt;What to Actually Evaluate&lt;/h2&gt;

&lt;p&gt;When comparing alternatives, the feature matrices and pricing pages only tell part of the story. Here's what actually matters in production:&lt;/p&gt;

&lt;h3&gt;1. Latency&lt;/h3&gt;

&lt;p&gt;If you need real-time transaction decisioning (card transactions, instant payments, push-to-card), latency is a product requirement. Anything over 200ms adds noticeable friction. Ask for P99 latency, not average. A vendor claiming "sub-100ms average" might have a P99 of 800ms, which means 1 in 100 of your customers experiences a nearly one-second delay.&lt;/p&gt;
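&lt;p&gt;The arithmetic behind that claim is worth making concrete. At an illustrative daily volume (the number is invented), a 1-in-100 tail is not a corner case:&lt;/p&gt;

```python
# If P99 latency is 800ms, then by definition about 1% of calls are at least that slow.
daily_transactions = 200_000  # illustrative volume for a mid-sized processor
p99_share = 0.01              # the slowest 1% of calls

slow_calls_per_day = int(daily_transactions * p99_share)
print(f"roughly {slow_calls_per_day} near-one-second experiences per day")
```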

&lt;h3&gt;2. Explainability&lt;/h3&gt;

&lt;p&gt;Most fraud APIs return a risk score. Fewer can tell you &lt;em&gt;why&lt;/em&gt;. The difference matters operationally: when an analyst reviews a flagged transaction, do they see "risk score: 0.87" or do they see "first-time payee: +0.31, velocity spike: +0.24, device mismatch: +0.18"?&lt;/p&gt;

&lt;p&gt;Per-decision feature attribution (using techniques like SHAP) changes how your ops team works. Review time drops because analysts can see the reasons. False positive patterns become visible. And increasingly, regulators in markets like the UK expect evidence chains behind fraud decisions, not just scores.&lt;/p&gt;
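&lt;p&gt;For intuition: SHAP has a closed form for linear models, where each feature's contribution is its weight times its deviation from the baseline mean. A hand-rolled sketch with invented weights and baselines, chosen so the output matches the reason codes quoted above:&lt;/p&gt;

```python
# Exact per-decision SHAP attributions for a linear model:
# contribution_i = weight_i * (x_i - baseline_mean_i).
# Weights, baselines, and the flagged transaction are illustrative only.
weights = {"first_time_payee": 0.62, "velocity_spike": 0.48, "device_mismatch": 0.36}
baseline = {"first_time_payee": 0.5, "velocity_spike": 0.5, "device_mismatch": 0.5}

def linear_shap(x):
    """Per-feature contributions relative to the portfolio baseline."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

flagged = {"first_time_payee": 1.0, "velocity_spike": 1.0, "device_mismatch": 1.0}
for feature, contribution in linear_shap(flagged).items():
    print(f"{feature}: {contribution:+.2f}")
```

&lt;p&gt;Tree ensembles need TreeSHAP rather than this closed form, but the output an analyst sees should have the same shape: signed, named, per-decision contributions.&lt;/p&gt;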

&lt;h3&gt;3. Class Imbalance Handling&lt;/h3&gt;

&lt;p&gt;Fraud is typically well below 1% of transactions. In many portfolios, it's a rounding error in the data. How the vendor's models handle this extreme imbalance determines whether you get a system that catches fraud or one that drowns you in false positives. Ask specifically what approach they use for class imbalance: naive oversampling, and even SMOTE-style synthetic sampling, has known limitations at production scale.&lt;/p&gt;
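&lt;p&gt;One concrete follow-up question: does the vendor use cost-sensitive training instead of, or alongside, synthetic sampling? The "balanced" class-weight heuristic is a common baseline; a sketch on an invented portfolio:&lt;/p&gt;

```python
# Cost-sensitive weighting as one answer to extreme imbalance. This computes the
# "balanced" heuristic (the same formula as sklearn's class_weight="balanced"):
# weight_c = n_samples / (n_classes * n_samples_in_class_c). Numbers are invented.
n_legit, n_fraud = 99_650, 350  # roughly 0.35% fraud
n_total, n_classes = n_legit + n_fraud, 2

w_legit = n_total / (n_classes * n_legit)
w_fraud = n_total / (n_classes * n_fraud)
print(f"legit weight={w_legit:.3f}  fraud weight={w_fraud:.1f}")
```

&lt;p&gt;A fraud class weighted a few hundred times heavier than the legitimate class is what "well below 1%" implies; a vendor who cannot explain their equivalent of this number has probably not thought hard about imbalance.&lt;/p&gt;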

&lt;h3&gt;4. Integration Complexity&lt;/h3&gt;

&lt;p&gt;API-first vendors (SEON, Sardine, Unit21) can be integrated in days. Platform vendors (Actimize, Feedzai, SAS) take months. The question is whether you need the platform capabilities or whether a clean API with good documentation is sufficient. For most fintechs, it's the latter.&lt;/p&gt;

&lt;h3&gt;5. Shadow Testing&lt;/h3&gt;

&lt;p&gt;The only way to know if a fraud vendor actually works for your transaction patterns is to run it in parallel with your existing stack. Any vendor that doesn't offer a shadow test period is asking you to commit blind. Look for vendors that let you compare decisions side-by-side before going live.&lt;/p&gt;
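&lt;p&gt;"Side-by-side" can be literal. A sketch that joins the two systems' decisions per transaction and surfaces where the candidate would have changed the outcome; the decisions are invented:&lt;/p&gt;

```python
# Shadow-mode evaluation: join incumbent and candidate decisions per transaction.
incumbent = {"tx1": "approve", "tx2": "approve", "tx3": "block", "tx4": "approve"}
candidate = {"tx1": "approve", "tx2": "block", "tx3": "block", "tx4": "approve"}

# Transactions where the candidate would have changed the outcome.
changed = {tx: (incumbent[tx], candidate[tx])
           for tx in incumbent if incumbent[tx] != candidate[tx]}
agreement = 1 - len(changed) / len(incumbent)
print(changed, f"agreement={agreement:.0%}")
```

&lt;p&gt;Each entry in &lt;code&gt;changed&lt;/code&gt; is a reviewable case, which is exactly the evidence a go-live decision should rest on.&lt;/p&gt;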

&lt;h2&gt;The Bottom Line&lt;/h2&gt;

&lt;p&gt;Actimize is still the right choice for large banks that need a comprehensive platform with decades of regulatory validation. It's not the right choice for a 200-person fintech that needs to score transactions in real time and can't wait 6 months to go live.&lt;/p&gt;

&lt;p&gt;If you're evaluating alternatives, the most important questions aren't about features. They're about latency under your actual load, explainability at the individual decision level, how the system handles the extreme class imbalance that defines fraud data, and whether you can test before you commit.&lt;/p&gt;

&lt;h2&gt;Note&lt;/h2&gt;

&lt;p&gt;Canonical version: &lt;a href="https://riskernel.com/blog/nice-actimize-alternatives.html" rel="noopener noreferrer"&gt;https://riskernel.com/blog/nice-actimize-alternatives.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next read: &lt;a href="https://riskernel.com/blog/fraud-detection-api-guide.html" rel="noopener noreferrer"&gt;Fraud Detection API: What to Look For in 2026&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amir Shachar&lt;/strong&gt; holds 12 patents in fraud detection, cybersecurity, and AI. He spent 4 years at NICE Actimize building fraud models for institutions including Bank of America, and served as Chief Data &amp;amp; AI Scientist at Skyhawk Security. He's the founder of &lt;a href="https://riskernel.com" rel="noopener noreferrer"&gt;Riskernel&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>cybersecurity</category>
      <category>machinelearning</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
