
NLP Cube Technologies

Posted on • Originally published at nlpcube.com

The persistent humanity in AI and cybersecurity

Even as AI technology transforms some aspects of cybersecurity, the intersection of the two remains profoundly human. Although it’s perhaps counterintuitive, humans are front and center in all parts of the cybersecurity triad: the bad actors who seek to do harm, the gullible soft targets, and the good actors who fight back.

Even without the looming specter of AI, the cybersecurity battlefield is often opaque to average users and the technologically savvy alike. Adding a layer of AI, which comprises numerous technologies that can also feel unexplainable to most people, may seem doubly intractable, as well as impersonal. That’s because although the cybersecurity fight is sometimes deeply personal, it’s rarely waged in person.

But it is waged by people. It’s attackers at their computers in one place launching attacks on people in another place, and those attacks are ideally being thwarted by defenders at their computers in yet another place. That dynamic frames how we can understand the roles of people in cybersecurity and why even the advent of AI doesn’t fundamentally change it.

Irreplaceable humans

In a way, AI’s impact on the field of cybersecurity is no different from its impact on other disciplines, in that people often grossly overestimate what AI can do. They don’t understand that AI often works best when it has a narrow application, like anomaly detection, versus a broader one, like engineering a solution to a threat.
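
To make that “narrow application” point concrete, here is a minimal sketch of anomaly detection in its simplest statistical form. The baseline values and the three-sigma threshold are illustrative assumptions, not anything from the article:

```python
# Minimal sketch: anomaly detection as a narrow, well-scoped task.
# Baseline values and the 3-sigma threshold are illustrative assumptions.
import numpy as np

# Hypothetical baseline: bytes transferred per login session for one user
baseline = np.array([1200, 900, 1500, 1100, 800, 1300, 950])
mean, std = baseline.mean(), baseline.std()

def is_anomalous(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    return abs(value - mean) / std > threshold

print(is_anomalous(1250))    # False: an ordinary session
print(is_anomalous(250000))  # True: flagged for a human analyst to review
```

Note that even this toy detector only flags outliers; deciding what the anomaly means and what to do about it is exactly the kind of broader, contextual judgment the article argues still belongs to people.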

Unlike humans, AI lacks ingenuity. It is not creative. It is not clever. It often fails to take into account context and memory, leaving it unable to interpret events like a human brain does.

In an interview with VentureBeat, LogicHub CEO and cofounder Kumar Saurabh illustrated the need for human analysts with a sort of John Henry test for automated threat detection. “A couple of years ago, we did an experiment,” he said. This involved pulling together a certain amount of data — a trivial amount for an AI model to sift through, but a reasonably large amount for a human analyst — to see how teams using automated systems would fare against humans in threat detection.

“I’ve given the data to about 40 teams so far. Not a single team has been able to pick that [threat] up in an automated way,” he said. “In some ways, we know the answer that it doesn’t take much to bypass machine-driven threat detection. How about we give it to really sophisticated analysts?” he asked. According to Saurabh, within one to three hours, 25% of the human security professionals had cracked it. What’s more, they were able to explain to Saurabh how they had figured it out.

The twist: The experiment involved a relatively tiny amount of data, and it still took hours for skilled analysts to find the threat. “At that speed, you’d need 5,000 security analysts [to get through a real-world amount of data],” Saurabh said, as literally billions of data points are generated daily.

“Clearly, that doesn’t work either,” he said. “And this is where the intersection of AI threat detection comes in. We need to take the machine[s] and make them as intelligent as those security analysts who have 10 years, 15 years of experience in threat detection.” He argued that although there’s been progress toward that goal, it’s a problem that hasn’t been solved very well — and likely won’t be for decades.

That’s because what AI can do in cybersecurity right now is narrow. Measured against artificial general intelligence (AGI), the holy grail of thinking machines that does not yet exist, our current AI tools are laughably far from approaching what a skilled security professional can do. “All people have general purpose intelligence,” said Saurabh. “[But] even if you teach an AI to drive, it can’t make coffee.”

Dr. Ricardo Morla, professor at the University of Porto, told VentureBeat that one way to understand the collaboration between humans and machines is in terms of cognitive resources. “As cars get smarter, the human ends up releasing cognitive resources required … to switch on the lights when it’s dark, [control] the clutch on an uphill start, or … actually [drive] the car, and using these resources for other tasks,” he said.

But, he added, “We are not at the point where the human in a security operations centre or the human behind a massive botnet can just go home and leave it to the machine to get the job done.” He pointed to tasks like intrusion detection and automated vulnerability scanning that require security pros to supervise “if not during the actual learning and inference, definitely while reviewing results, choosing relevant learning data scenarios and models, and assessing robustness of the model against attacks through adversarial learning.” He also suggested that humans are needed “to oversee performance and effectiveness and to design attack goals and defence priorities.”

There are some security-related tasks for which AI is better suited. Caleb Fenton is head of innovation for SentinelOne, a company that specializes in using AI and machine learning for endpoint detection. He believes that AI has helped software makers develop their tools faster. “Programmers don’t have to write really complicated functions anymore that might take … many months of iteration and trying,” he said. “Now, the algorithm writes the function for them. And all you need is data and labels.”
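
As a concrete, if toy, illustration of “data and labels” standing in for a hand-written detection rule, the sketch below trains a classifier on made-up static file features. It is a generic supervised-learning example, not SentinelOne’s actual pipeline:

```python
# Minimal sketch of "data and labels" replacing a hand-written detection rule.
# Features, samples, and labels are made up for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static features per file: [size_kb, entropy, imports_count, is_signed]
X = [
    [120, 4.2, 35, 1],   # benign
    [800, 7.8, 3, 0],    # packed malware
    [95, 4.5, 40, 1],    # benign
    [1500, 7.6, 5, 0],   # packed malware
]
y = [0, 1, 0, 1]         # labels: 0 = benign, 1 = malicious

# The "function" a programmer once wrote by hand is now learned from the data.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[700, 7.9, 4, 0]]))  # -> [1], flagged as likely malicious
```

The point of the example is the workflow, not the model: given labeled examples, the algorithm induces the detection logic that previously took months of manual iteration.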

He said using AI has resulted in a “net win” for approaches to threat detection, whether static (i.e., looking at files) or behavioral (i.e., how programs behave). But he allows that a tool’s a tool, and that “it’s only as good as the person using it.”

Steve Kommrusch is a doctoral candidate at Colorado State University who is currently focused on machine learning but has already spent 28 years as a computer engineer at companies such as HP and AMD. He echoed Fenton’s assertions. “The AI can help identify risky software coding styles, and this can allow larger amounts of safe code to be written quickly. Certain tasks — perhaps initial bug triage or simple bug fixing — might be done by AI instead of humans,” he said. “But deciding which problems need solving, architecting data structure access, [and] developing well-parallelizable algorithms will still need humans for quite a while.”

For the foreseeable future, then, the question is not whether machines will replace humans in cybersecurity, but how effectively they can augment what human security professionals do.

Humans are still the weakest link

Ironically, even as human defenders remain crucial to the cybersecurity battle, humans also make persistently soft targets. It doesn’t matter how well hidden a door is, how thick it is, or how many locks it has; the easiest way to break in is to get someone with the keys to unlock it for you.

And the keys are held by people, who can be tricked and manipulated, are sometimes ignorant, often make mistakes, and suffer lapses in judgment. If we open a malicious file by accident or foolishly hand over our sensitive login or financial information to a criminal, the cybersecurity defender’s task becomes difficult or nearly impossible.

AI versus AI

None of the above is to say that targets are only human. “There will be cases where access control mechanisms are implemented using AI and where the AI may become a target,” Morla said. He listed examples, such as efficiently finding malicious samples that look benign to a person but force the AI to misclassify them; poisoning a data set and thus preventing the AI from adequately learning from it; reverse-engineering an AI to extract its models; and watermarking an AI for copyright purposes.

“So while the human may still be the weakest link, bringing AI into cybersecurity adds another weak link to the chain.”
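
One of the attack classes Morla describes, crafting samples that force a model to misclassify them, can be sketched against a toy detector. The model, features, and step size below are illustrative assumptions rather than anything from the interview:

```python
# Minimal sketch of an evasion attack on a toy detector: nudge a malicious
# sample's features against the model's weights until it is misclassified.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])  # toy features
y = np.array([0, 0, 1, 1])                                      # 0 = benign, 1 = malicious
clf = LogisticRegression().fit(X, y)

sample = np.array([[0.85, 0.85]])            # correctly flagged as malicious
w = clf.coef_[0]                             # direction of the decision boundary
adv = sample - 0.6 * w / np.linalg.norm(w)   # step the sample toward the benign side
print(clf.predict(sample), clf.predict(adv)) # [1] [0] -> the evasion succeeded
```

Real detectors are harder to fool than this linear toy, but the principle is the same: once the defender is a model, the model itself becomes an attack surface.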

Simple motivations

Fenton’s comments point to an often overlooked aspect of cybersecurity, which is that attackers are primarily motivated by the same thing that drives all thieves: money.

“Attackers will usually come up with the cheapest, dumbest, most boring solution to a problem that works. Because they’re thinking cost/benefit analysis. They’re not trying to be clever,” Fenton said. That’s key to understanding the cybersecurity world, because it shows how narrow its scope is. Fenton calls it a goal-oriented market, both for attackers and defenders. And for attackers, the goal is largely financial.

People versus people

People are always at both ends of the attacker-victim dyad. There is no software that becomes sentient, turns itself into malware, and then chooses to make an attack; it’s always a person who sets out to accomplish some task at the expense of others. And although it’s true that a cyberattack is about compromising or capturing systems, not people per se, the reason any target is lucrative is that there are humans at the other end who will pay the ransom or inadvertently open a breach into a system that has value for the attacker.

In the end, even as AI enhances some aspects of cyberattacks and some aspects of cyber defense, the stakes are still profoundly human. The tools and attack vectors may change, but there is still a person who attacks, a person who is a target, and a person who defends. Same as it ever was.
