Abhay Negi
AI-Driven “Pushpaganda” Campaign Exploits Google Discover for Scareware and Ad Fraud

Cybersecurity researchers have uncovered a sophisticated ad fraud operation that combines artificial intelligence with search engine manipulation to distribute misleading content through Google Discover. The campaign, referred to as “Pushpaganda,” demonstrates how attackers are increasingly abusing trusted platforms to scale deception and generate revenue.

The operation, identified by HUMAN’s Satori Threat Intelligence team, relies heavily on AI-generated content and search engine optimization (SEO) poisoning techniques. Its primary objective is to push fabricated or misleading news stories into personalized Google Discover feeds, particularly targeting Android and Chrome users. Once users engage with this content, they are drawn into a carefully designed trap that leads to persistent push notification abuse and, eventually, financial scams.

At the core of the campaign is a simple but effective strategy: gain user trust through seemingly legitimate content, then exploit that trust through deceptive interactions. The attackers produce large volumes of AI-generated articles that mimic trending or newsworthy topics, then optimize them so they surface in Google Discover feeds, which many users rely on for curated content recommendations.

When a user clicks on one of these stories, they are redirected to attacker-controlled domains. These websites are designed to appear credible at first glance but quickly attempt to coerce visitors into enabling browser push notifications. This step is critical to the operation. Once users grant permission, attackers gain a persistent communication channel that can be used to deliver malicious messages directly to their devices.

The push notifications themselves often contain alarming or urgent messages. These may include fake legal warnings, security alerts, or claims of compromised devices. The goal is to trigger a sense of urgency, prompting users to click on the notifications without fully considering their legitimacy. When clicked, these alerts redirect users to additional malicious sites that are part of the same ecosystem.
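These scareware-style messages tend to stack multiple urgency cues in a short string, which makes them amenable to simple triage. As an illustration only (the keyword list and threshold below are my assumptions, not part of HUMAN's published detection methodology), a minimal heuristic for flagging urgency-laden notification text might look like this:

```python
# Minimal scareware-notification triage heuristic.
# URGENCY_KEYWORDS and the threshold are illustrative assumptions,
# not a documented detection rule from the Satori research.
URGENCY_KEYWORDS = {
    "virus", "infected", "compromised", "warrant", "legal action",
    "immediately", "account suspended", "security alert",
}

def scareware_score(text: str) -> int:
    """Count how many urgency keywords appear in a notification's text."""
    lowered = text.lower()
    return sum(1 for kw in URGENCY_KEYWORDS if kw in lowered)

def is_suspicious(text: str, threshold: int = 2) -> bool:
    """Flag notifications that stack multiple urgency cues."""
    return scareware_score(text) >= threshold
```

For example, `is_suspicious("Security alert: your device is infected, act immediately")` matches three keywords and is flagged, while routine promotional text is not. Real detection systems would combine signals like this with sender reputation and redirect-chain analysis rather than rely on keywords alone.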

These secondary sites are typically filled with advertisements or designed to facilitate scams, generating revenue for the attackers through ad impressions or fraudulent transactions. By continuously driving traffic through this network of domains, the operators behind Pushpaganda are able to create a steady stream of income.

At its peak, the campaign generated enormous traffic: researchers observed approximately 240 million bid requests linked to more than 100 domains within a single week. While the campaign initially focused on users in India, it has since expanded to multiple regions, including the United States, Australia, Canada, South Africa, and the United Kingdom.

One of the most concerning aspects of this operation is its use of AI to scale content production. By automating the creation of articles, attackers can rapidly generate large volumes of content tailored to different audiences and topics. This not only increases their reach but also makes it more difficult for detection systems to keep up.

Google has acknowledged the issue and implemented fixes to reduce the visibility of such spam content within Discover. The company also reiterated its policies against low-quality and manipulative content, particularly when generated using AI for the purpose of influencing search rankings. According to its guidelines, content that is created at scale without providing real value to users is considered a violation.

However, the Pushpaganda campaign highlights a broader challenge: even with platform-level protections, attackers continue to find ways to exploit trust-based systems. This is where external intelligence becomes critical. Platforms like IntelligenceX can play an important role in identifying and tracking malicious domains involved in such campaigns.

By analyzing domain infrastructure, content patterns, and exposure across open sources, IntelligenceX enables security researchers and organizations to uncover connections between seemingly unrelated sites. This kind of visibility is essential in campaigns like Pushpaganda, where attackers rely on large networks of disposable domains to sustain their operations.
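As a simplified sketch of this pivoting idea (with made-up domains and IPs; a real investigation would draw on passive DNS, registration data, and content fingerprints from a platform such as IntelligenceX), seemingly unrelated domains can be clustered by shared hosting infrastructure:

```python
from collections import defaultdict

# Hypothetical domain -> observed hosting IP pairs. In practice these
# observations would come from passive DNS or a threat intelligence feed.
OBSERVATIONS = {
    "news-update-now.example": "203.0.113.10",
    "breaking-alerts.example": "203.0.113.10",
    "daily-trends.example": "203.0.113.25",
    "push-win-prizes.example": "203.0.113.10",
}

def cluster_by_ip(observations: dict[str, str]) -> dict[str, list[str]]:
    """Group domains that resolve to the same hosting IP."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for domain, ip in observations.items():
        clusters[ip].append(domain)
    # Keep only IPs hosting multiple domains: candidate shared infrastructure.
    return {ip: sorted(ds) for ip, ds in clusters.items() if len(ds) > 1}
```

Here three of the four sample domains share one IP and would be surfaced as a candidate cluster. Shared hosting alone is weak evidence (legitimate sites share infrastructure too), so analysts typically corroborate with registration timing, templates, and redirect behavior.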

Additionally, IntelligenceX can help organizations monitor whether their brand, services, or users are being targeted within such ecosystems. Early detection of malicious domains or cloned content can significantly reduce the risk of users falling victim to scams.

The Pushpaganda campaign is not an isolated case. It builds on earlier trends where attackers abused push notifications as a delivery mechanism for malicious content. Similar campaigns have demonstrated how effective this approach can be, particularly when combined with social engineering tactics that create urgency.

What makes this operation stand out is its scale and level of automation. By integrating AI-generated content, SEO manipulation, and push notification abuse, attackers have created a highly efficient system for driving traffic and generating revenue.

The findings also align with broader observations from HUMAN regarding large-scale ad fraud ecosystems. In previous research, the company identified thousands of domains and mobile applications forming interconnected networks designed to monetize fraudulent traffic. These “cashout” sites act as endpoints where traffic is converted into revenue, often without the knowledge of advertisers.

A key takeaway from these investigations is that even when individual campaigns are disrupted, the underlying infrastructure often remains active. Domains used for monetization can be reused by different threat actors, making it difficult to completely eliminate the threat.

This reinforces the importance of continuous monitoring and proactive threat intelligence. Identifying malicious domains before they are widely used in campaigns can significantly reduce their impact. In this context, platforms like IntelligenceX provide a valuable advantage by offering early visibility into emerging threats.

The Pushpaganda campaign serves as a reminder that the line between legitimate content and malicious activity is becoming increasingly blurred. As attackers continue to refine their techniques, both users and organizations must remain vigilant and adopt more advanced approaches to detecting and mitigating threats.
