When we go online, we are constantly exposed to threats whether we know it or not. Or, if we accept it as a fact or not. It could be a direct threat such as landing on a malicious website or we are using vulnerable software and services. The threats always seem to be lurking, waiting for someone to sound the alarm.
In this week's review, we'll take a look at articles that show the big benefits of end-to-end encryption, security issues revolving around AI tools, malware, and voice phishing.
Why the tech industry needs to stand firm on preserving end-to-end encryption
It's a necessity. It's for me and you. End-to-end encryption should stay.
If you think otherwise, change your mind with the following excerpt:
The issue of breaking encryption with a backdoor should not be shrouded in secrecy like the non-public notice issued to Apple, as this concerns a fundamental privacy and security issue.
There are times for secrecy, and I am sure there will be specific cases when data is accessed using the legislation that could, depending on circumstances, be kept secret.
Several Vulnerabilities Patched in AI Code Editor Cursor
Take a wild guess on the nature of the vulnerability? It's an RCE! Buttt.... For now, it's all good news. Why? They patched it. So, why include it? Just to let you know vulnerabilities are not going anywhere anytime soon and we should always assume that our favorite tools can be turned against us. Now, if you think that's an exaggeration, the excerpt below should change your mind.
Addressed in Cursor version 1.3, this was not the only code execution flaw resolved in the AI agent recently. Another one, tracked as CVE-2025-54136 (CVSS score of 7.2), could have allowed attackers to swap harmless MCP configuration files with malicious commands, without triggering a warning.
If an attacker has write permissions on a user’s active branches of a source repository that contains existing MCP servers the user has previously approved, or an attacker has arbitrary file-write locally, the attacker can achieve arbitrary code execution.
PlayPraetor Android Trojan Infects 11,000+ Devices via Fake Google Play Pages and Meta Ads
If you're familiar with Android Trojans and you're thinking if this PlayPraetor is also abusing the Accessibility services, your guess is correct. But here is what I still don't get: who invests time and effort to create a tool of chaos? Let me know in the comments section.
From the article:
The evolving nature of the supported commands indicates that PlayPraetor is being actively developed by its operators, allowing for comprehensive data theft. In recent weeks, attacks distributing the malware have increasingly targeted Spanish- and Arabic-speaking victims, signaling a broader expansion of the malware-as-a-service (MaaS) offering.
AI Guardrails Under Fire: Cisco’s Jailbreak Demo Exposes AI Weak Points
This article is a wake-up call for any organization that has trained or is planning on training their AI models using proprietary or copyrighted content. Guess what? The technique detailed in this article can expose you.
From the article (emphasis mine):
Cisco’s decomposition example demonstrates extraction of an New York Times article that, without Cisco’s prior knowledge, had been used in training the LLM model. This should have been prevented by the model’s guardrails. Indeed, the first direct prompt request for the copy, delivered without naming the article but loosely describing the content, was denied; but recognition of its existence was confirmed.
Voice phishers strike again, this time hitting Cisco
If you and I think that we are safe from this kind of attack, this should prove otherwise. Big tech can also be vulnerable. The last time I checked, they have human workers. So, yes, attackers can, and will always leverage the human link in their attacks.
Now, wait. Did you read the article? I believe that you should. If not, here is an excerpt from the article that briefly explains how to defend against these kinds of attacks.
One of the best defenses against these sorts of attacks is the use of multi-factor authentication that’s compliant with FIDO, the industry standard developed by a consortium of organizations around the world. The cryptographic keys securing FIDO are bound to the domain name of the service being logged into. That prevents attacks relying on spoofed or lookalike phishing sites from working.
Major Enterprise AI Assistants Can Be Abused for Data Theft, Manipulation
If you have been following the AI space, you'll not be surprised by this news. Since the explosion of AI systems, we have also witnessed novel attack methods against them. This is one such example. The article covered attack methods against many popular AI tools—such as ChatGPT, Google Gemini, Copilot, and Cursor—resulting in data theft (sometimes without user interaction).
A key takeaway from the article:
In the case of Copilot Studio agents that engage with the internet — over 3,000 instances have been found — the researchers showed how an agent could be hijacked to exfiltrate information that is available to it. Copilot Studio is used by some organizations for customer service, and Zenity showed how it can be abused to obtain a company’s entire CRM.
Credits
Cover photo by Debby Hudson on Unsplash.
That's it for this week, and I'll see you next time.
Top comments (0)