James Smith

How Scammers Use Automation to Scale Attacks Globally

Just ten years ago, operating a major fraud ring required a lot of manpower. Now, all it takes is a laptop and a credit card to target hundreds of thousands of victims across several continents. The automation technology used is more advanced than many security measures anticipate.
In February 2025, Interpol's Cybercrime Coordinated Action (IOCA) unit published details of an operation to dismantle a fraud syndicate operating across 14 countries, responsible for an estimated $47 million in losses over 18 months. When authorities gained access to the operation's infrastructure servers, located in four jurisdictions and mostly in the cloud, they discovered it was being monitored and managed by fewer than a dozen people. Victim targeting, campaign launching, communications, payments, and money transfers were all automated; the twelve supervised the automation rather than doing the work themselves.
That ratio - twelve people, fourteen countries, $47 million - is the key figure. It says something about the economics of scams: automation has driven the labor required to produce a dollar of fraudulent output so low that the limiting factor on the scale of an operation is no longer human labor. It is infrastructure, and infrastructure is cheap.
In this article, we explore the layers of automation that make global-scale fraud possible: what they are, how they work together, and where the opportunities lie for detection systems that target them.

The Automation Stack: Five Layers of Scaled Fraud

An advanced fraud automation stack comprises five layers, each addressing a particular scaling challenge that would otherwise require human effort. It is worth thinking of the stack as a system, because detection methods that target individual layers are circumvented more easily than methods that target the interactions between layers.

Layer 1: List Building

The fuel for any large-scale fraud campaign is a target list with enough profiling information to support contextually relevant attacks. List-generation pipelines acquire data from multiple channels at once: dark web marketplace feeds of compromised credentials (bought or extracted), web crawls of professional websites and public listings, data broker APIs for enriching demographic and financial profiles, and social media crawls for behavioral and social-network targeting.
Advanced campaigns apply machine-learning segmentation to target lists before launch. Targets are scored against vulnerability profiles predicted from historical campaign results (age, income indicators, recent credit events, and media channel use patterns) and routed to campaign variants fine-tuned for each segment. A target identified as a recent retiree with investment-account indicators receives a different message than one identified as a consumer with e-commerce indicators. The segmentation runs automatically on each new list purchase and updates the routing.

Layer 2: Infrastructure Provisioning and Rotation

Campaign infrastructure (domains, hosting, email-sending infrastructure, and phone numbers) is automatically provisioned and rotated to stay ahead of detection and blocking. Automated domain registration uses registrar APIs to provision new domains against template patterns at a rate of hundreds of domains per day when needed. New hosting is provisioned through cloud provider APIs, spinning up new instances across geographies and sometimes across multiple cloud providers to distribute detection across multiple ASNs.
Email sending is especially heavily automated because deliverability depends on reputation, and reputation collapses quickly under the weight of a fraud campaign. Provisioning automation spins up new sending domains and IP addresses and warms them up with legitimate-looking activity: sends to owned domains and inboxes, opens, replies, moves between folders, and other user behavior. When a sending domain is blocked by major mail providers, the automation rotates to a pre-warmed domain within minutes and flags the blocked one for later reuse or disposal. No human operator does this; a monitoring daemon does.

Layer 3: Dynamic Personalization

Campaign message bodies (email, text message, and social media) are produced en masse through template engines and dynamic personalization layers. At the most basic level, template variables inject the target's name, institution, and account numbers into a message base. At the more advanced level, LLM-based generation produces contextually rich content from target data, so the message references the target's apparent location, institution, or recent activity in a way that makes it read as individually written rather than broadcast.
Multivariate tests run automatically within campaigns: different subject lines, urgency appeals, and calls to action are tested across message variants, with performance tracked automatically. Click-through and conversion rates feed back into the template-selection logic, and over time traffic shifts to the best-performing variants without operator intervention. The campaign adapts to the victim population on its own.
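The variant-selection loop described above reduces to a standard multi-armed bandit problem, the same mechanism legitimate A/B testing platforms use. A minimal epsilon-greedy sketch (the function names, data shape, and epsilon value are illustrative, not from any real campaign tooling):

```python
import random

def choose_variant(stats, epsilon=0.1, rng=random):
    """stats: dict mapping variant name -> (successes, trials).

    Epsilon-greedy selection: usually pick the variant with the best
    observed conversion rate, occasionally explore a random one."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))

    def rate(variant):
        successes, trials = stats[variant]
        return successes / trials if trials else 0.0

    return max(stats, key=rate)

def record(stats, variant, success):
    """Fold one observed outcome back into the running statistics."""
    successes, trials = stats[variant]
    stats[variant] = (successes + int(success), trials + 1)
```

With each recorded outcome, the selection probability drifts toward the highest-converting variant, which is exactly the "focus shifts without operator intervention" behavior described above.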

Layer 4: Victim Interaction and Credential Processing

When a victim engages with the campaign infrastructure (clicking a link, filling out a form, dialing a phone number), automated processes handle the earliest stages of the interaction. Web-based fraud campaigns present targeted pages, capture credentials via multi-step form submission, and proxy in real time to the legitimate target institution, as detailed previously in the analysis of phishing kits. Captured credentials are immediately validated against the target institution's authentication service, ranked by value indicators visible in the first response of the session, and queued for operator exploitation by expected value.
Voice campaigns rely on automated dialing and pre-recorded or text-to-speech audio to contact victims en masse. IVR-like automation performs the initial phases of the interaction (establishing context, urgency, and identity confirmation) and escalates to a human operator for the exploitation phase based on value or engagement indicators. The machine filters and pre-qualifies; the person closes.

Layer 5: Proceeds Movement and Laundering Automation

The last layer of automation, and the one most heavily invested in operational security, is money movement. Automated networks of mule accounts, cryptocurrency mixing, and multi-tier transfers move funds across jurisdictions and accounts faster than financial institutions' fraud prevention processes can detect and freeze them. Transaction timing is tuned to exploit the lag between a transaction and a fraud flag. Geographic routing capitalizes on differences in inter-institution communication times and regulatory reporting thresholds across jurisdictions. Decisions to move funds are made by scripts, not humans.

The Geographic Arbitrage Dimension

Automation makes global scale possible not only by removing human labor from menial work but also by enabling multi-jurisdictional operations designed specifically to exploit jurisdictional differences. An operation based in one jurisdiction, hosted in two others, with victims in a fourth and financial transfers through a fifth, creates an investigative coordination burden that no single law enforcement agency can resolve quickly; the faster the attack moves, the wider that gap becomes.
The cloud makes jurisdictional diversification easy. Hosting campaigns across AWS regions on three continents, routing traffic through residential proxy networks that present an IP address in the target jurisdiction, and accessing the console via VPN infrastructure hosted in a non-cooperative jurisdiction adds little cost or complexity when provisioning and routing are automated. Jurisdictional complexity that would once have required intensive human operational security now runs as an infrastructure-as-code configuration.

Defending Against Automated Attacks

Detecting automated fraud at scale is not the same problem as detecting and stopping manual attacks. The detection signature of an automated attack is not the pattern of one clever fraudster; it is the statistical pattern of a system operating at scale, and scale produces patterns no individual manual attacker can.
Detection signals that automated scale itself enables:
Domain registration rate clustering: Automated infrastructure setup creates registration spikes with discernible timing and naming-pattern clustering. Bursts of registrations within short time windows, sharing naming conventions or registrar accounts, are a strong indicator of campaign infrastructure setup.
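From the defender's side, registration-burst clustering can be sketched as a sliding window over registrar transaction records. This is a minimal illustration assuming a feed of (domain, registrar account, timestamp) tuples; the data shape and thresholds are hypothetical, and a real system would draw on zone files or registrar logs:

```python
from collections import defaultdict

def find_registration_bursts(records, window_seconds=3600, min_burst=20):
    """Flag registrar accounts that register many domains in a short window.

    records: iterable of (domain, registrar_account, epoch_seconds).
    Returns a list of (account, window_start_ts, burst_size)."""
    by_account = defaultdict(list)
    for domain, account, ts in records:
        by_account[account].append((ts, domain))

    bursts = []
    for account, events in by_account.items():
        events.sort()
        start = 0
        # Slide a window over the sorted timestamps; flag once per account.
        for end in range(len(events)):
            while events[end][0] - events[start][0] > window_seconds:
                start += 1
            if end - start + 1 >= min_burst:
                bursts.append((account, events[start][0], end - start + 1))
                break
    return bursts
```

Naming-convention similarity (e.g. edit distance between domain labels in the same burst) would layer on top of this as a second filter.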
Mail sending anomalies: Automated warm-up and campaign sending leave volume and timing traces that differ detectably from legitimate mail operations. Sends at mathematically precise intervals, velocity ramp-ups that do not correlate with list growth, and bounce-rate increases inconsistent with the sender's history are all discernible through mail provider systems.
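One concrete form of the timing signal is the coefficient of variation of inter-send gaps: human-driven sending has irregular gaps, while a scripted sender on a fixed schedule approaches zero. A minimal sketch, with the threshold chosen purely for illustration:

```python
import statistics

def interval_regularity(send_times):
    """Coefficient of variation (stdev / mean) of gaps between sends.

    send_times: sorted timestamps in seconds. Returns None when there
    are too few sends to measure."""
    gaps = [b - a for a, b in zip(send_times, send_times[1:])]
    if len(gaps) < 2:
        return None
    mean = statistics.mean(gaps)
    if mean == 0:
        return 0.0
    return statistics.stdev(gaps) / mean

def looks_automated(send_times, cv_threshold=0.05):
    """Near-zero variation in send intervals suggests a scripted sender."""
    cv = interval_regularity(send_times)
    return cv is not None and cv < cv_threshold
```

A production system would combine this with volume ramp and bounce-rate features rather than rely on timing alone, since attackers can cheaply add jitter.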
Cross-campaign infrastructure fingerprinting: Automated reuse of infrastructure across campaigns leaves common technical traces, such as shared SSL certificate features, identical name server configurations, identical page template hashes across ostensibly distinct domains, and shared ASN infrastructure. Graph correlation of these commonalities reveals campaign clusters not evident in domain-level analysis.
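The graph correlation step can be sketched as connected components over shared fingerprints, using union-find. The fingerprint labels below (certificate hash, name server, template hash) mirror the traces listed above; the input format is an assumption for illustration:

```python
from collections import defaultdict

def cluster_domains(fingerprints):
    """fingerprints: dict mapping domain -> set of fingerprint strings,
    e.g. 'cert:<hash>', 'ns:<nameserver>', 'tpl:<page-template-hash>'.

    Domains sharing any fingerprint land in the same cluster
    (connected components via union-find). Returns a list of sets."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_fp = defaultdict(list)
    for domain, fps in fingerprints.items():
        find(domain)  # register domains with no shared fingerprints too
        for fp in fps:
            by_fp[fp].append(domain)
    for domains in by_fp.values():
        for d in domains[1:]:
            union(domains[0], d)

    clusters = defaultdict(set)
    for domain in fingerprints:
        clusters[find(domain)].add(domain)
    return list(clusters.values())
```

Because the clustering is transitive, two domains that never share a fingerprint directly still end up in one cluster if a third domain links them, which is what surfaces campaign families invisible at the single-domain level.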
Community intelligence correlation: Automated campaigns generate large volumes of victim reports, a detection signal that scales with the size of the campaign. Community intelligence platforms such as Scam Alerts aggregate these reports in close to real time, producing a threat map of active campaign infrastructure as seen by victim devices rather than by automated scanners that evasion filters can screen out. The bigger the campaign, the more reports, and the faster the community intelligence layer surfaces the infrastructure pattern to future victims.
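A toy version of that report-velocity signal: count victim reports per domain per day and flag any domain whose single-day count far exceeds an assumed background rate. The data shape and both thresholds are illustrative, not drawn from any real platform:

```python
from collections import Counter

def flag_reported_infrastructure(reports, baseline_per_day=2, spike_factor=5):
    """reports: iterable of (domain, day_index) victim reports.

    Flags domains whose report count on any single day reaches
    spike_factor times the assumed per-day baseline."""
    per_day = Counter(reports)  # (domain, day) -> report count
    flagged = set()
    for (domain, day), count in per_day.items():
        if count >= baseline_per_day * spike_factor:
            flagged.add(domain)
    return flagged
```

The property the article highlights falls out directly: the larger the campaign, the more reports per day, and the sooner a domain crosses the flagging threshold.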

The Asymmetry and Its Implications

The dozen operators behind a scam operation spanning 14 countries are not unique; they are the current endpoint of a capability trend that has been extending for a decade. As the cost of automation tooling falls, the number of human actors required falls with it. The limiting factor is no longer human resources; it is infrastructure and the barriers presented by detection systems.
Detection and prevention systems built for manual, low-volume attacks are not fit for purpose against high-volume automated ones. The problem is less about complexity than about volume. A detection system that performs well when fraudulent accounts are created at 100 per day cannot cope with 100,000 per day: the signal-to-noise ratio inverts, and thresholds calibrated for manual attack volumes generate overwhelming false positives at automated volumes.
A detection architecture explicitly tuned for automated attacks invests in signal types that are correlated with attacker activity rather than anti-correlated with it: infrastructure correlation systems that improve as campaign volume increases, community intelligence systems like Scam Alerts that accumulate more victim reports as campaign reach grows, and behavioral anomaly detectors whose confidence rises with the number of automated behavioral signatures observed across sessions. The economy of scale the attacker gains through automation is exactly the data volume that makes statistical detection feasible, if the detection architecture takes it as input.
Twelve operators, fourteen countries, US$47 million. The attacker's ratio is acceptable only because the detection architecture isn't built to exploit the same scale.
