Abhay Negi

Pushpaganda: How AI and SEO Abuse Turned Google Discover into a Scam Distribution Channel

A recently uncovered cyber operation highlights a new direction in large-scale online fraud. The campaign, dubbed “Pushpaganda,” demonstrates how attackers combine artificial intelligence with search manipulation tactics to exploit trusted platforms like Google Discover and redirect users into a web of scams and advertising abuse.

Identified by HUMAN’s Satori Threat Intelligence researchers, the campaign specifically targets Android and Chrome users by taking advantage of personalized content feeds. Rather than delivering malware directly, the attackers manipulate what users see, using seemingly legitimate content as an entry point into their ecosystem.

At the heart of this operation is the use of AI-generated articles. These pieces are crafted to resemble genuine news or trending topics and are optimized using search engine poisoning techniques. The goal is to increase their visibility within Google Discover, a feature that many users rely on for curated updates. Because Discover content is algorithmically tailored and widely trusted, it provides an effective gateway for attackers.

When users click on one of these articles, they are redirected to domains controlled by the threat actors. These websites are designed to look credible but quickly shift their focus toward persuading visitors to enable browser notifications. This step is essential to the campaign, as it creates a persistent communication channel that attackers can exploit over time.

Once notification permissions are granted, the nature of the interaction changes. Users begin receiving a stream of deceptive alerts that often mimic urgent situations. These notifications may claim that the user’s device is compromised, that legal action is imminent, or that immediate attention is required. The intention is to create a sense of urgency that encourages quick clicks without verification.
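The lures described above (fake compromise warnings, legal threats, manufactured urgency) follow regular enough patterns that even simple keyword heuristics can flag many of them. A minimal illustrative sketch in Python; the pattern list and sample messages are assumptions for demonstration, not indicators taken from the campaign:

```python
import re

# Illustrative urgency cues modeled on the lures described above;
# this list is an assumption, not an indicator set from the campaign.
URGENCY_PATTERNS = [
    r"device (is )?(compromised|infected)",
    r"legal action",
    r"act now",
    r"account (will be )?(suspended|locked)",
]

def looks_like_scare_notification(text: str) -> bool:
    """Flag notification text that leans on urgency or fear cues."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in URGENCY_PATTERNS)

print(looks_like_scare_notification("Warning: your device is infected! Act now."))  # True
print(looks_like_scare_notification("Weekly digest: 3 new articles for you."))      # False
```

Real detection systems combine signals like sender reputation and click behavior, but text-level cues remain a useful first filter.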

Each click sends users deeper into a network of malicious websites. These sites are typically structured to generate revenue either through advertising impressions or by facilitating financial scams. By repeatedly directing traffic across this network, attackers are able to maintain a steady flow of monetized activity.

The scale of the Pushpaganda campaign is substantial. Researchers observed roughly 240 million bid requests connected to over 100 domains within a single week. While the activity was first identified in India, it has since expanded to multiple regions, including North America, Europe, Africa, and Australia.
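A back-of-envelope calculation gives a sense of per-domain volume, treating the reported "over 100 domains" as a lower bound of 100:

```python
# Back-of-envelope scale estimate from the reported figures:
# ~240 million bid requests across 100+ domains in one week.
bid_requests = 240_000_000
domains = 100          # lower bound; researchers reported "over 100"
days = 7

per_domain_per_day = bid_requests / (domains * days)
print(f"~{per_domain_per_day:,.0f} bid requests per domain per day")
```

Even at this conservative lower bound, each domain would handle hundreds of thousands of bid requests per day, which explains why the activity surfaced in ad-traffic telemetry.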

A key factor behind this scale is the use of AI to automate content generation. By producing large volumes of articles quickly, attackers can adapt to trending topics and target different audiences with minimal effort. This level of automation makes it significantly harder for detection systems to keep up with the pace of content creation.

In response to the findings, Google implemented updates aimed at limiting the visibility of such spam content in Discover. The company reiterated its stance against low-quality or manipulative content, particularly when AI is used to artificially influence rankings. It also emphasized its ongoing efforts to refine spam detection systems and maintain content integrity.

Despite these measures, the campaign reveals a deeper challenge. Attackers no longer rely solely on software vulnerabilities; they exploit trust within digital ecosystems. By embedding themselves into legitimate content flows, they can operate at scale without immediately triggering suspicion.

This is where external intelligence platforms become increasingly valuable. Services like IntelligenceX enable researchers and organizations to map out malicious infrastructure, identify patterns, and track how different domains are interconnected. In campaigns like Pushpaganda, where hundreds of domains may be used simultaneously, this level of visibility is critical.

Using IntelligenceX, security teams can analyze domain relationships, monitor changes in infrastructure, and detect emerging threats earlier in their lifecycle. This proactive approach is essential for limiting the impact of campaigns that rely on rapid scaling and domain rotation.
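One common way to analyze domain relationships is to cluster domains by shared hosting infrastructure, since campaign operators often reuse the same servers across many throwaway sites. The sketch below is a generic illustration of that idea, not IntelligenceX's actual API; all domains and IPs are invented (drawn from reserved documentation ranges):

```python
from collections import defaultdict

# Hypothetical resolution data of the kind an intelligence platform
# might return; the domains and IPs below are invented examples.
resolutions = {
    "news-update-alerts.example": "203.0.113.10",
    "breaking-trends.example":    "203.0.113.10",
    "daily-buzz-feed.example":    "203.0.113.22",
    "hot-topics-now.example":     "203.0.113.22",
    "standalone-site.example":    "198.51.100.5",
}

def cluster_by_ip(resolutions: dict[str, str]) -> dict[str, list[str]]:
    """Group domains that resolve to the same IP: a simple way to
    surface shared infrastructure behind a campaign."""
    clusters = defaultdict(list)
    for domain, ip in resolutions.items():
        clusters[ip].append(domain)
    # Keep only IPs hosting more than one domain: potential clusters.
    return {ip: sorted(ds) for ip, ds in clusters.items() if len(ds) > 1}

for ip, ds in cluster_by_ip(resolutions).items():
    print(ip, "->", ds)
```

In practice analysts layer additional pivots on top of this, such as shared registrants, TLS certificates, or analytics IDs, but co-hosting is often the first link that ties a domain set together.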

Another important aspect is the ability to monitor brand misuse. Threat actors often create content that mimics or references legitimate organizations to gain credibility. With tools such as IntelligenceX, companies can identify instances where their name or services are being used in misleading or fraudulent contexts, allowing them to respond more effectively.

While push notification abuse has been observed in previous campaigns, Pushpaganda demonstrates how much more powerful it becomes when combined with AI and SEO manipulation. The attackers are not just sending spam notifications—they are building an entire ecosystem designed to attract, retain, and exploit user attention.

The campaign also reflects broader trends in ad fraud operations. Investigations have shown that large networks of domains, sometimes numbering in the thousands, are used as “cashout” platforms. These sites convert fraudulent traffic into revenue, often without the knowledge of advertisers. Because this infrastructure is shared and reusable, it can continue to operate even after individual campaigns are disrupted.

This persistence presents a significant challenge for defenders. Removing a single domain or application does little to impact the overall system if the underlying infrastructure remains intact. New campaigns can quickly emerge using the same resources, making long-term mitigation more difficult.

Continuous monitoring and analysis are therefore essential. Identifying malicious domains early, tracking their behavior, and understanding how they connect to broader networks can significantly reduce risk. Platforms like IntelligenceX play a key role in this process by providing access to large datasets and enabling deeper investigation into threat activity.
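One simple form of this continuous monitoring is tracking newly observed domains against a baseline of previously seen infrastructure, so that rotated domains surface quickly for review. A minimal sketch, with invented domain names:

```python
# Minimal newly-observed-domain check: compare today's sightings
# against a baseline of previously seen domains. Domain names are
# invented for illustration.
known_domains = {"breaking-trends.example", "daily-buzz-feed.example"}

def newly_observed(seen_today: set[str], baseline: set[str]) -> set[str]:
    """Return domains seen today that are absent from the baseline;
    these are candidates for closer inspection (registration age,
    registrant, hosting overlap with known-bad infrastructure)."""
    return seen_today - baseline

today = {"breaking-trends.example", "fresh-viral-news.example"}
print(sorted(newly_observed(today, known_domains)))  # ['fresh-viral-news.example']
```

Folding each day's findings back into the baseline keeps the alert volume focused on genuinely new infrastructure rather than repeat sightings.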

The Pushpaganda campaign ultimately illustrates how the threat landscape is evolving. By combining automation, manipulation, and scalable infrastructure, attackers are creating operations that are both efficient and difficult to detect. As these techniques continue to develop, it becomes increasingly important for both users and organizations to adapt their approach to security.

Understanding how information is distributed and how trust is established online is now just as important as protecting systems themselves.
