Martin Edvardsen

What Scares Companies in 2025?

And, just as importantly, what are we doing to prevent the threats and breaches they fear? This article follows my attendance at a security conference in Denmark.

The conference presented products aimed at companies that want to secure their systems but lack the in-house knowledge to do so. It did not present the tools, knowledge, or frameworks for building such products, which my colleagues and I had hoped for. As a group of software developers, we were clearly not the intended audience.

Nonetheless, it gave us an opportunity to observe, and put into perspective, which vulnerabilities companies fear and which services and products promise protection against them. Here are some of our key observations. Bear in mind that these are impressions from a single conference, not established facts.

Shadow IT becomes Shadow AI

In the industry, Shadow IT is by now a common term for any software or hardware used within an organisation without the explicit approval or control of its IT department. A classic example is employees sharing files, perhaps even sensitive data, through personal Google Drive accounts in an organisation that has a controlled OneDrive instance as its approved means of file storage and sharing.

A big subject of the conference was AI, specifically because skepticism towards AI has intensified dramatically in recent years. This has spawned a new branch called Shadow AI, which refers specifically to the unauthorised and uncontrolled use of AI tools, models, and platforms within organisations. Large language models (LLMs) like ChatGPT and Claude have become integral to many employees' daily work, but uncontrolled usage risks data leakage, compliance violations, and biased or unvetted business decisions that do not align with the direction or values of the company.

Countermeasures usually centre on internal, controlled, and monitored LLMs. At the conference, the focus was on systems called LLM guardrails, which monitor and constrain the behaviour of common LLM systems to ensure safe, ethical, and reliable outputs. The products presented aimed to mitigate the most common ways of abusing LLMs, based on the OWASP Top 10 for LLM Applications (https://owasp.org/www-project-top-10-for-large-language-model-applications/), and were organised around four guardrail categories: security, privacy, safety, and relevancy.
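The conference showed finished products rather than implementations, but the core idea of an input/output guardrail is easy to sketch. Below is a minimal, hypothetical illustration in Python; the injection patterns, the `call_llm` stub, and the redaction rule are all placeholders of my own, not any vendor's actual logic. Prompts are screened before they reach the model, and completions are filtered before they reach the user.

```python
import re

# Hypothetical deny-list of common prompt-injection phrases (OWASP LLM01).
# A real guardrail would use classifiers and far richer rule sets.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

# Naive PII pattern (emails only), standing in for a real redaction engine.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def call_llm(prompt: str) -> str:
    """Placeholder for a real model client, e.g. an internal, monitored endpoint."""
    return "Contact jane.doe@example.com for the full figures."


def check_input(prompt: str) -> str:
    """Input guardrail: reject prompts matching known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt.lower()):
            raise ValueError("Prompt rejected by input guardrail")
    return prompt


def filter_output(completion: str) -> str:
    """Output guardrail: redact PII before the answer reaches the user."""
    return EMAIL_RE.sub("[REDACTED EMAIL]", completion)


# Every round trip to the model is wrapped by both guardrails.
answer = filter_output(call_llm(check_input("Summarise the quarterly report.")))
print(answer)  # Contact [REDACTED EMAIL] for the full figures.
```

Real guardrail products cover far more ground (jailbreak classifiers, topical relevancy scoring, grounding checks), but they all share this wrap-the-round-trip shape.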

Micro-segmented Access Control

Another big focus of the conference was Zero-Trust Architecture (ZTA), in which we no longer rely solely on a firewall as a strong perimeter keeping adversaries out, but instead assume that adversaries are already inside the network. In essence, you trust no one inside your network.

Given this assumption, we need to monitor and limit movement inside the internal network instead. A good illustration of the monitoring half comes from the banking sector, where we also see one of the more beneficial uses of AI. To identify possible money laundering among their customers, banks use AI to recognise anomalies in user behaviour, i.e. deviations from the regular patterns of a specific user, known as behaviour-based detection. An example would be someone suddenly, but consistently, transferring a large amount of money to an offshore bank account every Monday.
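No implementations were shown, but as a minimal sketch of the idea, behaviour-based detection can be as simple as scoring each new transaction against the user's own historical baseline. The z-score approach below is my own illustration of the concept, not a technique presented at the conference; production systems model far more than amounts (recipients, timing, frequency, peer groups).

```python
from statistics import mean, stdev


def is_anomalous(history: list[float], new_amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transfer that deviates strongly from this user's own baseline."""
    if len(history) < 2:
        return False  # too little data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu  # flat history: any deviation is unusual
    return abs(new_amount - mu) / sigma > z_threshold


# Typical weekly transfers for one customer, then a sudden large offshore transfer.
past_transfers = [120.0, 95.0, 140.0, 110.0, 130.0]
print(is_anomalous(past_transfers, 25_000.0))  # True: flag for manual review
print(is_anomalous(past_transfers, 125.0))     # False: within the user's normal pattern
```

The key property is that "normal" is defined per user, so a transfer that is routine for one customer can still be flagged for another.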

One of the many countermeasures presented at the conference was a concept called micro-segmentation, which enforces strict access control between internal systems by segmenting workloads, applications, and individual processes rather than entire subnets or zones, and which can be applied at every level of a system. Even if an adversary, or a piece of malware, gets inside one of your systems, it is contained in that segment and cannot spread laterally to others.

An even more detailed approach was presented, in which micro-segmentation is split into policies "inside the group" and "outside the group": the former governs communication, policies, and access control within a group (for instance, an entire HR system), while the latter governs communication between groups.
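To make the inside/outside distinction concrete, here is a minimal, hypothetical sketch of such a policy check in Python. The group names, rules, and default-deny model are my own illustration of the concept, not a presented product.

```python
# Hypothetical model: workloads belong to groups (e.g. "hr", "finance"),
# and all traffic is denied by default unless an explicit rule allows it.
GROUPS = {
    "hr-frontend": "hr",
    "hr-db": "hr",
    "pay-api": "finance",
}

# Intra-group rules govern traffic inside a group; inter-group rules between groups.
INTRA_GROUP_ALLOW = {("hr-frontend", "hr-db", 5432)}  # frontend may query the HR database
INTER_GROUP_ALLOW = {("hr", "finance", 443)}          # HR may call finance over HTTPS


def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny check mirroring the inside/outside-the-group split."""
    src_group, dst_group = GROUPS[src], GROUPS[dst]
    if src_group == dst_group:
        return (src, dst, port) in INTRA_GROUP_ALLOW
    return (src_group, dst_group, port) in INTER_GROUP_ALLOW


print(is_allowed("hr-frontend", "hr-db", 5432))  # True: allowed inside the HR group
print(is_allowed("hr-db", "pay-api", 22))        # False: lateral SSH movement is blocked
```

The default-deny stance is what makes this zero-trust: a compromised HR database cannot open an SSH session to the finance segment just because both live on the same internal network.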

Social engineering still takes first place

But how does the adversary get into the systems in the first place? One answer is human-induced breaches, the last major subject of the conference. In fact, the human element is involved in around 60% of all breaches, according to the Verizon 2025 Data Breach Investigations Report (https://www.verizon.com/business/resources/Tc61/reports/2025-dbir-data-breach-investigations-report.pdf).

In a fast-moving world, where rapidly evolving tools and systems constantly let us reach our goals faster at the expense of security and privacy, we expose our systems and data to adversaries to an unprecedented degree.

At the conference, the countermeasures proposed for these breaches focused on adapting to users rather than restricting them. One proposed solution targets the primary productivity hub of employees in IT-based companies: the browser. Specifically, a Chromium-based enterprise browser, familiar to most people, that gives the organisation full control and monitoring while also allowing access to private apps or SSH without a VPN. And because it is just a browser, it can be used securely from any device, even unmanaged ones.

In conclusion, as trust diminishes to the point of relying on architectures with "zero trust", the industry is gravitating towards adaptation over restriction. And even though we are only seeing the tip of the iceberg in terms of how AI will affect and change the security and privacy of companies, we are constantly forced to reflect on and adapt to these changes.

Reach out to me for sources on the specific products mentioned and discussed in this article.
