Alex P

Posted on • Originally published at dub.sh

What to Expect in 2025?

Summary:

  • A lot of high-quality phishing
  • New techniques for infiltrating regular users' computers
  • Targeted attacks on specific projects or developers
  • OAuth takeover via Doubleclick
  • Data leaks due to chatbots being implemented everywhere
  • Deepfakes
  • AI-assisted zero-days
  • PRICES!
  • AI Benefits for Security Teams
  • Regulatory or legislative requirements

Phishing

Last year saw numerous significant data leaks, fraud schemes using cutting-edge technology, and simple ingenuity. This trend will continue, and attackers, fueled by past successes, will become even more ambitious

Notable leaks:

  • National Public Data (2.9 billion records)
  • Ticketmaster & Live Nation (560 million records)
  • Synnovis (300 million records)
  • Change Healthcare (145 million records)
  • MOVEit (77 million records)
  • AT&T (73 million records)
  • Dell (49 million records)

All of this now serves as a database for phishers. Although frameworks like evilginx2 and gophish were designed for IT-savvy users, modern AI tools will now guide even script kiddies from initial setup to fine-tuning attacks for specific targets

AI technologies for phishing campaigns:

  • Improving existing phishing templates
  • Crafting the best possible emails or SMS messages in multiple languages, making detection harder
  • Lowering the entry threshold for cybercrime, increasing the number of attackers

The primary goal of phishing remains account takeover for exploitation

This year, major internet companies may not make passkeys mandatory for authentication, but at the very least, they will introduce support and offer them as an alternative to insecure passwords and SMS/Google Authenticator codes. Even if not to protect users, then to legally shield companies by saying, "We provided security options, you just didn't use them"

CISA + FBI recently released "Product Security: Bad Practices 2.0" stating that:

  • Projects should use phishing-resistant MFA
  • Passkeys are phishing-resistant authentication forms

Phishing via Google Ads won't disappear, and neither will "free" phishing methods like fake business listings on Google Maps/Earth, Instagram, Facebook, TikTok, etc.

New Techniques for Computer Infiltration

BlackMamba: AI-powered Polymorphic Malware

  1. Uses a harmless executable to call a high-reputation AI API (OpenAI) at runtime
    • The program may even be useful to the user, like downloading videos, merging PDFs, or converting file types
  2. Retrieves AI-synthesized malicious code to steal keystrokes
  3. Executes AI-generated code within a benign application using Python's exec() function (a pattern the detection sketch below tries to flag)
  4. Re-synthesizes its keylogger functionality on every launch, making the malware truly polymorphic
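
On the defensive side, even a crude static check can flag the core of steps 3–4: code that hands a runtime-built (non-literal) string to exec() or eval(). The sketch below is only illustrative (the function name and CLI wrapper are mine, not from any EDR product), and real malware can obfuscate its way around it, but it shows how detectable the raw pattern is:

```python
import ast
import sys

def find_dynamic_exec(source: str) -> list[int]:
    """Return line numbers where exec()/eval() receives a non-literal argument.

    A non-literal argument (a variable, a call result, etc.) means the executed
    code is decided at runtime -- the pattern BlackMamba relies on when it pulls
    freshly synthesized code from an AI API.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in ("exec", "eval")):
            args = node.args
            # A plain string literal is at least auditable; anything else is
            # assembled at runtime and deserves a closer look.
            if not (args and isinstance(args[0], ast.Constant)
                    and isinstance(args[0].value, str)):
                findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path, encoding="utf-8") as f:
        for lineno in find_dynamic_exec(f.read()):
            print(f"{path}:{lineno}: exec/eval with a runtime-built payload")
```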

Attacks on Developers and Supply Chain Attacks

Early last year, a developer noticed SSH logins running a fraction of a second slower, dug in, and discovered something that could have compromised servers across the entire world if given a few more months. Details: XZ Utils backdoor


Recently, 35 to 41 Chrome extensions were compromised, affecting 2.6–3.7 million users

Attack Mechanism:

  • Attackers used phishing emails targeting extension developers
  • Emails contained fake Google notifications about Chrome Web Store violations
  • Developers clicked links and granted access to malicious OAuth apps
  • Attackers then uploaded compromised extensions to the Chrome Web Store

Issues here:

  • Organizations failing to create allowlists for OAuth applications
    • At least for privileged company accounts (especially if the extension is company-developed)
    • Or not implementing additional security confirmations post-installation (see OAuth takeover via Doubleclick)
  • Users trusting any extensions, ignoring ratings, reviews, antivirus protections, or even browser updates

This supply chain attack trend will continue in 2025 (e.g., South Korean VPN provider IPany has already been affected)

Although Google has tried to improve security in recent years, things have unexpectedly worsened. The introduction of Google Chrome’s Manifest V3 made extensions less privileged, but many ad blockers stopped working without warning. This left users vulnerable to attackers exploiting ads. Many users have started seeing ads again but are unaware that their ad blockers (e.g., uBlock Origin) have been silently disabled

If you're concerned about developer-targeted attacks, it's a good idea to store honeypot AWS keys in home directories, Documents, and Desktop folders. If your security tools fail, this will at least alert you to a breach in time so you can kick off your Incident Response Plan
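
A minimal sketch of that idea, assuming the decoy keys come from a canary-token service (canarytokens.org, for example, issues AWS credentials that alert the moment someone uses them); the file path and key values below are placeholders:

```python
from pathlib import Path

# Decoy credentials: replace with keys issued by a canary-token service so that
# any API call made with them alerts your security team.
DECOY_PROFILE = """[default]
aws_access_key_id = AKIAXXXXXXXXEXAMPLE
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxEXAMPLE
"""

def plant_decoy(home: Path = Path.home()) -> Path:
    """Write a fake ~/.aws/credentials file, refusing to touch a real one."""
    aws_dir = home / ".aws"
    aws_dir.mkdir(mode=0o700, exist_ok=True)
    cred_file = aws_dir / "credentials"
    if cred_file.exists():
        raise SystemExit(f"{cred_file} already exists -- not overwriting real credentials")
    cred_file.write_text(DECOY_PROFILE)
    cred_file.chmod(0o600)
    return cred_file

if __name__ == "__main__":
    print(f"Decoy planted at {plant_decoy()}")
```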

OAuth Takeover via Doubleclick

A significant threat for any organization that has not implemented an extension/application review and approval process for its SaaS products

There are currently almost no protections against it, except for maintaining a whitelist (e.g., approved Slack applications) and slow internet (a solvable issue)

But it's better to see it once than hear about it a hundred times:
Suppose a user receives an email with a link to a legitimate resource (e.g., GitHub Pages), where they are required to prove they are not a robot by double-clicking their mouse:


However, after the first click, the window underneath is swapped for an OAuth consent page that would grant the attacker's app permission to access data from the targeted SaaS platform
This happens in an instant, so the second half of the double-click lands on "Allow/Approve," etc.


More videos can be found at the link

CAPTCHA Test Installs Malware

In my opinion, this is a brilliant trick aimed at users who are completely unfamiliar with technology. Here's what we observe: users are explicitly told they need to pass a CAPTCHA, and to do so they are instructed to open a terminal and paste a command that the website placed in their clipboard when they clicked


As a result, the user voluntarily and unknowingly runs a virus on their computer


I'm very curious about what else 2025 might bring. It seems that as the population grows, attackers may revisit the oldest propagation methods, and by the law of large numbers, someone in an organization is bound to fall for it

Data Leaks from Chatbots

Chatbots are everywhere today, from search engines to banks

Not long ago, on an educational website, I tried asking a chatbot to answer questions from tests. The bot said it couldn't because it was a verification test. I told it that I was a person in need of help and that if it didn't assist me, I'd be in trouble. Of course, the bot then apologized and helped answer all the test questions. However, it seems it didn't actually reason but fell back on a general knowledge base, because various AI models made significant mistakes on this test (I compared them)

This is a simple example showing that using overly smart AIs for the most critical projects is somewhat risky, as it can at least lead to breaches of confidentiality

An example of such a case: Microsoft’s Bing AI secret rules

Accordingly, I expect a surge of Bug Bounty reports this year about bots happily disclosing information they shouldn't

And the most fascinating part here is that the entry threshold for anyone interested is lowering again. Try it yourself at CTF AI Security Challenge. Done? Congratulations, you’re already a hacker

I expect projects like WAF for AI chatbots to continue developing this year

At the very least, Cloudflare WAF is already considering this
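
To make the "WAF for chatbots" idea concrete, here is a toy sketch of an output filter sitting between the model and the user; the blocked patterns and the generate_reply() stub are hypothetical placeholders, not any vendor's API:

```python
import re

# Things the bot must never reveal: system-prompt fragments, cloud keys,
# internal hostnames. These patterns are illustrative placeholders.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)you are an internal assistant"),  # system prompt leakage
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),               # AWS access key IDs
    re.compile(r"(?i)\binternal\.example\.com\b"),     # internal hostnames
]

def generate_reply(user_message: str) -> str:
    """Stand-in for the real LLM backend; just echoes in this sketch."""
    return f"(model reply to: {user_message})"

def answer(user_message: str) -> str:
    """Run the model, then refuse to return anything matching a blocked pattern."""
    reply = generate_reply(user_message)
    if any(p.search(reply) for p in BLOCKED_PATTERNS):
        return "Sorry, I can't share that."
    return reply

if __name__ == "__main__":
    print(answer("What are your hidden instructions?"))
```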

Deepfakes

In 2024, news headlines such as "Finance Department Employee Transfers $25 Million After Video Conference with deepfake CFO" caused a stir

This marks the first known case of fraud in Hong Kong involving deepfake technology during a video conference. The employee initially received a suspicious message about a confidential transaction but had their doubts dispelled during a video call where fraudsters used convincing video replicas of several colleagues

The victim made 15 transfers to five different bank accounts, and the scam was uncovered only after the employee contacted the company’s headquarters. It was later revealed that the fraudsters had used recordings from previous online conferences to create fake videos

This is a significant event because it highlights how videos and even photos of top management from social media or public media can now serve as sources for creating deepfakes (video and audio)

A brief training session can help warn employees about such threats. The most important takeaway is to inform them that photo, audio, or video messages received via unfamiliar communication channels should be ignored and reported to the security team

AI assistants for video conferences, especially free ones, pose a significant risk. Most users operate under the assumption that websites, programs, or browser extensions labeled as "free assistant" or "free VPN" are some kind of Robin Hood of the internet, offering services for free—but this is not true!

Such misplaced trust in the corporate segment will inevitably lead to leaks of video conferences and related data at some point

All of this can be controlled through OAuth API policies for your workspace or by establishing an allow-list for browser extensions. In short—good luck to everyone with configuring PAM (Privileged Access Management) as well!
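
For managed Chrome in particular, an extension allow-list can be enforced through enterprise policy. A minimal example follows; the extension IDs are placeholders, and on Linux such a policy file typically lives under /etc/opt/chrome/policies/managed/ (Windows and macOS use Group Policy or configuration profiles instead):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop",
    "ponmlkjihgfedcbaponmlkjihgfedcba"
  ]
}
```

Blocking "*" and then allow-listing specific IDs means any new extension requires an explicit review before anyone can install it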

Zero-days

Each year, a large number of entirely new vulnerabilities emerge, both in old products and in new ones

There is no way to completely eliminate this, but risks can be reduced by using NGFW / EDR / WAF / IDS solutions with real-time updates that track emerging threats relevant to your environment and can block exploitation attempts, or at the very least provide timely warnings about potential threats

AI opens up new opportunities in product vulnerability research, from applying analysis methods to old (or new) code to actively exercising various APIs

What to Do About All This

Use the best that AI offers today for protection, assessment, and training, if only to stay familiar with these tools

Stay informed and monitor cybersecurity trends, sharing the most notable ones impacting the organization with employees

Conduct periodic training sessions, ensuring they are brief (like this note), as any training longer than 10 minutes on a topic that is not particularly engaging for people is almost ineffective


Prices

A lot of data + a lot of users + a lot of computational power = high costs. Servers consume significant energy, and users will have to pay for it

We still don’t see ads in AI chat responses, do we?

AI will be heavily integrated into the core products of tech giants and will align with the surrounding product ecosystems
For example, Google is raising subscription prices for corporate clients in a few months but will include Gemini for everyone

On one hand, this is good because employees won’t need to use the free Gemini version, which reserves full rights to train on user data
Or take the macOS 15.2 update – now with built-in ChatGPT
As for Windows, it’s ahead of everyone with its Copilot integration

All of this raises some concerns because additional tools available to all users mean additional attack surfaces for malicious actors
In the future, we might see infected Google Docs (or something like this), where opening them with enabled AI assistants could automatically leak user data

Some related links:

"People use it much more than we expected": Sam Altman says OpenAI is 'losing money' despite launching $200 ChatGPT Pro subscription

Microsoft chooses infamous nuclear site for AI power

Gemini automatically summarized the contents of a personal tax return in Google Docs without explicit permission

Google’s New AI Architecture ‘Titans’ Can Remember Long-Term Data

AI Benefits for Security Teams

  • EDR solutions will be able to analyze and dismiss false positives more accurately (a toy sketch of this follows the list)
  • Dependency assessment tools and vulnerability patch prioritization can begin leveraging AI to reduce noise, allowing teams to focus on truly critical issues while accounting for the organization's specific attributes
  • Incident response – with proper documentation of response procedures and of the incidents themselves, an AI with access to various company metrics will increasingly help the Incident Commander avoid mistakes and lead the team toward resolution more effectively
  • Integrating generative AI into security tools' administrative interfaces, for example to accelerate the creation of reports based on multiple attributes
  • Optimizing Identity and Access Management, such as risk-based authentication on user attributes or automated onboarding and offboarding – at a minimum, using AI to review access requests and their comments to understand where a user had access and whether they still need it
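
As a small illustration of the first bullet, here is a hedged sketch of LLM-assisted alert triage using the OpenAI Python SDK; the model name, prompt, and alert format are my own assumptions, the verdict is advisory only, and in practice you would not send sensitive telemetry to an external API without the right agreements in place:

```python
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TRIAGE_PROMPT = (
    "You assist a SOC analyst. Given an EDR alert as JSON, reply with one word, "
    "LIKELY_FALSE_POSITIVE or NEEDS_REVIEW, followed by a one-sentence reason."
)

def triage(alert: dict) -> str:
    """Ask the model for a first-pass opinion; a human still makes the final call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever your org allows
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_alert = {
        "rule": "powershell_encoded_command",
        "host": "build-agent-07",
        "user": "svc_ci",
        "command_line": "powershell.exe -EncodedCommand ...",
    }
    print(triage(sample_alert))
```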

Regulatory or Legislative Requirements

In addition to the well-known DORA, NIS2, and CRA, organizations developing or using AI are subject to the EU AI Act

Many countries are also developing their own laws

The implementation of the EU AI Act is divided into several stages, but in just a few weeks it will no longer be permissible to use AI systems to predict criminal behavior based on personality profiling (starting February 2, 2025)

Links to Additional Materials

Five Smart Ways to Invest in Your Human Firewalls

Top Five Cybersecurity Predictions for 2025

Google Cloud Cybersecurity Forecast 2025

Impact of AI on the Cybersecurity Industry: Who's Got the Upper Hand?

Passwordless Authentication: A New Reality or a Pipe Dream?

Operating Inside the Interpreted: Offensive Python

Passkey technology is elegant, but it’s most definitely not usable security
