A recently uncovered cyber campaign shows how artificial intelligence can be weaponized to manipulate trusted content platforms at scale. Security researchers have identified an operation, named “Pushpaganda,” that uses AI-generated articles and search engine manipulation techniques to infiltrate Google Discover feeds and lure users into scams.
The campaign, analyzed by HUMAN’s Satori Threat Intelligence team, targets Android and Chrome users by exploiting how personalized content is delivered. Instead of relying on traditional malware distribution, the attackers focus on shaping what users see, turning legitimate discovery mechanisms into entry points for deception.
At the center of the operation is the use of artificially generated content. Attackers create large volumes of news-style articles that appear relevant and credible. These pieces are optimized using search engine poisoning techniques so they can surface in Google Discover, a feature many users trust for curated updates and trending topics.
Once a user clicks on one of these articles, they are taken to a website controlled by the attackers. These pages are carefully designed to appear authentic but quickly attempt to push users into enabling browser notifications. This step is critical to the campaign’s success.
When users allow notifications, they unknowingly grant attackers a persistent channel to reach their devices. From that point on, the operation shifts into its second phase. Users begin receiving alarming notifications that often mimic legal warnings, security alerts, or urgent system issues. These messages are crafted to trigger immediate reactions, increasing the chances of interaction.
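Defenders sometimes triage such scare-tactic notifications with simple text heuristics. The sketch below is a minimal illustration of that idea; the keyword list and threshold are assumptions for demonstration, not detection logic from the Satori research:

```python
# Illustrative heuristic for flagging scare-tactic push notifications.
# The keyword list and threshold are assumptions for demonstration only.

URGENCY_KEYWORDS = [
    "virus detected", "legal action", "account suspended",
    "immediately", "warning", "your device is infected",
]

def urgency_score(text: str) -> int:
    """Count how many urgency phrases appear in a notification body."""
    lowered = text.lower()
    return sum(1 for phrase in URGENCY_KEYWORDS if phrase in lowered)

def looks_like_scareware(text: str, threshold: int = 2) -> bool:
    """Flag text whose urgency score meets the (illustrative) threshold."""
    return urgency_score(text) >= threshold

print(looks_like_scareware("WARNING: virus detected on your phone! Act immediately."))
```

Real detection systems combine many more signals (sender reputation, landing-page behavior, notification frequency), but keyword-style scoring captures why "legal warning" and "urgent system issue" phrasing is so common in these lures.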
Clicking on these notifications redirects users to additional domains within the same network. These sites are typically built to generate revenue through advertising impressions or to facilitate scams, including fraudulent financial schemes. By continuously cycling users through this ecosystem, the attackers are able to maintain a steady flow of monetized traffic.
At its peak, the Pushpaganda campaign generated enormous traffic volumes: researchers observed around 240 million bid requests tied to more than 100 domains in a single week. Although the campaign initially focused on India, it quickly expanded its reach to regions such as the United States, Canada, Australia, South Africa, and the United Kingdom.
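To put those figures in perspective, a back-of-envelope calculation (assuming an even spread across exactly 100 domains over seven days, a simplification the report itself does not make) gives the average per-domain load:

```python
# Back-of-envelope scale estimate from the reported figures.
# Assumes an even split across exactly 100 domains over 7 days,
# which is a simplification for illustration only.

bid_requests = 240_000_000
domains = 100
days = 7

per_domain_per_day = bid_requests // (domains * days)
print(per_domain_per_day)  # roughly 342,857 bid requests per domain per day
```

Hundreds of thousands of bid requests per domain per day is well beyond what a single legitimate niche news site would typically produce, which is part of what made the network stand out.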
One of the most notable aspects of this campaign is how effectively it uses AI to scale operations. By automating content creation, attackers can rapidly produce articles tailored to different audiences and trending topics. This not only increases visibility but also makes it more difficult for detection systems to identify and filter out malicious content.
Google has responded by implementing measures to limit the spread of such spam within its Discover platform. The company emphasized its policies against low-quality or manipulative content, particularly when generated using AI for the purpose of influencing rankings. It also highlighted ongoing efforts to improve spam detection and enforce stricter content standards.
Despite these measures, the campaign underscores a larger issue. Attackers are no longer just exploiting technical vulnerabilities; they are targeting trust itself. By blending into legitimate content ecosystems, they can operate at scale without immediately raising suspicion.
This is where external threat intelligence becomes increasingly important. Platforms like IntelligenceX provide the ability to track malicious domains, analyze infrastructure, and uncover connections between different parts of a campaign. In cases like Pushpaganda, where hundreds of domains may be involved, such visibility is essential.
Using IntelligenceX, security teams can identify clusters of suspicious domains, monitor how they evolve over time, and detect early signs of similar campaigns. This proactive approach allows organizations to respond before threats fully develop.
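At a high level, clustering suspicious domains often starts from shared infrastructure, such as many domains resolving to the same IP address. The following is a minimal sketch of that grouping step, using fabricated sample resolutions rather than live IntelligenceX data:

```python
from collections import defaultdict

# Minimal sketch of infrastructure-based clustering: group domains that
# resolve to the same IP. The domain/IP pairs below are fabricated
# sample data, not real campaign infrastructure.

resolutions = [
    ("news-update-daily.example", "203.0.113.10"),
    ("breaking-alerts.example", "203.0.113.10"),
    ("trend-stories.example", "198.51.100.7"),
    ("hot-topics-now.example", "203.0.113.10"),
]

def cluster_by_ip(pairs):
    """Map each IP to the sorted list of domains observed resolving to it."""
    clusters = defaultdict(list)
    for domain, ip in pairs:
        clusters[ip].append(domain)
    return {ip: sorted(ds) for ip, ds in clusters.items()}

clusters = cluster_by_ip(resolutions)

# IPs hosting several recently registered domains are candidate clusters
# worth deeper investigation (registration dates, TLS certs, ad tags).
suspicious = {ip: ds for ip, ds in clusters.items() if len(ds) >= 3}
print(suspicious)
```

In practice, analysts pivot on many attributes beyond IPs (registrars, name servers, TLS certificates, shared ad tags), but the grouping logic is the same: find infrastructure reused across otherwise unrelated-looking sites.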
Another advantage is the ability to monitor brand exposure. Campaigns like Pushpaganda often rely on impersonation or misleading content to gain trust. By leveraging platforms such as IntelligenceX, organizations can detect whether their brand or services are being misused within malicious ecosystems, helping them take action quickly.
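One simple way to surface brand misuse is to compare newly observed domain labels against a protected brand name using edit distance. The sketch below uses a standard Levenshtein implementation; the brand name, sample domains, and distance cutoff are illustrative assumptions:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def flag_lookalikes(domains, brand, max_distance=2):
    """Return domains whose first label is within max_distance edits of the brand."""
    flagged = []
    for domain in domains:
        label = domain.split(".")[0]
        if 0 < edit_distance(label, brand) <= max_distance:
            flagged.append(domain)
    return flagged

# "examplebank" is a placeholder brand; the domains are fabricated samples.
candidates = ["examp1ebank.example", "exannplebank.example", "weather-news.example"]
print(flag_lookalikes(candidates, "examplebank"))
```

Production brand-protection tooling also handles homoglyphs, combosquatting (brand plus extra words), and newly registered domain feeds, but edit distance alone already catches the common one-character swaps used in impersonation.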
Push notification abuse itself is not new. Previous campaigns have shown how effective it can be as a delivery mechanism for scams and unwanted content. However, what makes Pushpaganda stand out is the way it combines multiple techniques—AI-generated content, SEO manipulation, and notification-based social engineering—into a single, scalable operation.
The research also aligns with broader findings about large-scale ad fraud ecosystems. Investigations have revealed networks of thousands of domains acting as “cashout” platforms, where fraudulent traffic is converted into revenue. These systems are often designed to persist even after individual campaigns are disrupted, making them difficult to eliminate entirely.
A key challenge highlighted by this campaign is resilience. Even if specific apps or domains are taken down, the underlying infrastructure can be reused by other threat actors. This shared ecosystem allows operations to continue with minimal disruption, increasing the overall effectiveness of such schemes.
Because of this, continuous monitoring becomes essential. Identifying malicious infrastructure early and tracking its evolution over time is critical to reducing risk. Tools like IntelligenceX support this process by providing access to large-scale data and correlations that would otherwise be difficult to uncover.
The Pushpaganda campaign illustrates how the threat landscape is evolving. Attackers are combining automation, social engineering, and infrastructure reuse to create operations that are both scalable and difficult to detect. As these techniques continue to develop, both individuals and organizations must adapt their defenses accordingly.
Ultimately, the campaign serves as a reminder that security is no longer just about protecting systems—it is also about understanding how information is delivered, consumed, and manipulated.