A newly identified cyber campaign known as “Pushpaganda” highlights how attackers are evolving beyond traditional techniques and exploiting trusted content platforms at scale. By combining artificial intelligence with search engine manipulation, threat actors are successfully pushing malicious content into Google Discover feeds and redirecting users toward scams and ad fraud networks.
This campaign, analyzed by HUMAN’s Satori Threat Intelligence team, primarily targets Android and Chrome users. Instead of relying on malware downloads or phishing emails, attackers are manipulating the way content is delivered to users, making the attack both subtle and highly effective.
At the core of this operation is AI-generated content. Threat actors produce large volumes of articles designed to resemble legitimate news or trending topics. These articles are then optimized using SEO poisoning techniques so they can appear in Google Discover, a platform that many users trust for personalized content.
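One telltale signal of this kind of content farming is that articles are generated from shared templates, so titles and bodies repeat with only small substitutions. As a hedged illustration (the headlines below are invented, not from the campaign), a simple near-duplicate check over article titles can surface templated output:

```python
from difflib import SequenceMatcher

def near_duplicates(titles, threshold=0.8):
    """Return pairs of titles whose similarity ratio exceeds the threshold."""
    pairs = []
    for i in range(len(titles)):
        for j in range(i + 1, len(titles)):
            ratio = SequenceMatcher(None, titles[i].lower(), titles[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((titles[i], titles[j]))
    return pairs

# Hypothetical headlines scraped from a suspected content farm
titles = [
    "10 Shocking Facts About Crypto You Must Know",
    "10 Shocking Facts About Gold You Must Know",
    "Local Weather Update for Tuesday",
]
print(near_duplicates(titles))
```

Real detection pipelines use far more robust signals (embeddings, publishing cadence, infrastructure overlap), but the template-reuse pattern is the same idea at scale.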
When a user clicks on one of these seemingly harmless articles, they are redirected to attacker-controlled websites. These pages are carefully designed to appear legitimate at first glance but quickly attempt to convince users to enable browser notifications.
This step is critical to the attack chain. Once notifications are allowed, attackers gain a persistent communication channel. From that point forward, users begin receiving misleading alerts that often mimic urgent warnings, such as security threats, legal notices, or system failures. These messages are crafted to create panic and drive immediate interaction.
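Because these alerts lean on a recognizable vocabulary of urgency, even a crude keyword heuristic can flag many of them. The sketch below is illustrative only; the keyword list is an assumption, not one extracted from the campaign:

```python
# Hedged heuristic: flag notification text that mimics urgent system or
# security warnings. The pattern list is illustrative, not from Pushpaganda.
SCARE_PATTERNS = [
    "virus detected", "account suspended", "legal notice",
    "system failure", "immediate action required", "security alert",
]

def looks_like_scareware(text):
    """Return True if the notification text matches a known scare pattern."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in SCARE_PATTERNS)

print(looks_like_scareware("WARNING: Virus detected on your device!"))  # True
print(looks_like_scareware("Your weekly newsletter is here"))           # False
```

In practice, attackers rotate wording quickly, which is why such filters are only one layer alongside reputation and infrastructure signals.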
Clicking on these notifications redirects users to additional malicious websites within the attacker’s ecosystem. These sites are typically built to generate advertising revenue or promote financial scams, allowing attackers to continuously monetize user engagement.
The scale of the campaign is significant. Researchers observed nearly 240 million bid requests associated with over 100 domains within a short period. Although the activity was initially concentrated in India, it quickly expanded to other regions, including North America, Europe, Africa, and Australia.
What makes Pushpaganda particularly dangerous is its reliance on automation. By using AI, attackers can rapidly generate content tailored to different audiences and trending topics. This enables them to scale operations quickly while making detection more difficult for traditional security systems.
Google has responded by implementing measures to limit such abuse within its Discover platform. The company has emphasized that using AI-generated content purely to manipulate rankings violates its policies and continues to improve its spam detection systems to address emerging threats.
However, this campaign reflects a broader shift in the cybersecurity landscape. Attackers are increasingly targeting trust-based systems rather than just technical vulnerabilities. By embedding themselves within legitimate content ecosystems, they can reach users more effectively and at a much larger scale.
This is where organizations need a more proactive security approach. Platforms like IntelligenceX play a critical role in identifying and mitigating such threats. With capabilities like threat detection, vulnerability assessments, and infrastructure monitoring, IntelligenceX helps organizations understand how these campaigns operate and where their exposure lies.
For example, identifying malicious domains, tracking infrastructure patterns, and analyzing attacker behavior are essential steps in stopping campaigns like Pushpaganda early. IntelligenceX enables organizations to gain this visibility and respond before large-scale damage occurs.
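Infrastructure-pattern analysis often starts with something as simple as grouping domains that resolve to the same hosting IP. A minimal sketch, using invented resolution data (real pipelines would pull this from passive-DNS or threat-intelligence feeds):

```python
from collections import defaultdict

# Hypothetical domain -> hosting IP resolutions (RFC 5737 example addresses)
resolutions = {
    "breaking-news-today.example": "203.0.113.10",
    "trending-alerts.example":     "203.0.113.10",
    "daily-winner.example":        "203.0.113.10",
    "unrelated-site.example":      "198.51.100.7",
}

def cluster_by_ip(resolutions, min_size=2):
    """Group domains that share hosting infrastructure."""
    by_ip = defaultdict(list)
    for domain, ip in resolutions.items():
        by_ip[ip].append(domain)
    return {ip: sorted(domains) for ip, domains in by_ip.items()
            if len(domains) >= min_size}

print(cluster_by_ip(resolutions))
```

A cluster of otherwise unrelated "news" domains on one host is exactly the kind of lead that lets defenders block an entire campaign rather than chasing domains one at a time.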
Risk management and compliance are another area where this matters. As attackers continue to exploit content platforms and user trust, organizations must ensure that their systems, applications, and user-facing services are secure. IntelligenceX supports this by helping businesses align with security standards while actively reducing their attack surface.
Push notification abuse itself is not a new concept. However, this campaign demonstrates how effective it becomes when combined with AI-generated content and SEO manipulation. The result is a highly scalable attack model that blends seamlessly into everyday user behavior.
The campaign also connects to a larger ecosystem of ad fraud operations. Networks of domains are often reused across multiple campaigns, allowing attackers to maintain revenue streams even when individual operations are disrupted. This persistence makes it difficult to completely eliminate the threat.
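Domain reuse across campaigns is easy to surface once you have observed indicator lists: reused infrastructure is simply the intersection of the sets. A toy sketch with invented domain lists:

```python
# Hypothetical domain lists observed in two separate campaigns. Reused
# infrastructure shows up as the intersection of the two sets.
campaign_a = {"push-alerts.example", "win-big.example", "news-flash.example"}
campaign_b = {"push-alerts.example", "crypto-deals.example", "news-flash.example"}

reused = campaign_a & campaign_b
print(sorted(reused))
```

Tracking these overlaps over time is what turns isolated takedowns into durable disruption of the underlying fraud network.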
Because of this, continuous monitoring and threat intelligence are essential. Organizations must move beyond reactive security and adopt strategies that focus on early detection and proactive defense.
The Pushpaganda campaign ultimately shows that cybersecurity is no longer just about protecting systems. It is about understanding how information is delivered, how trust is built, and how both can be manipulated.
As attackers continue to evolve, organizations must do the same—by investing in visibility, intelligence, and proactive security measures that can keep pace with modern threats.