Mohab Gabber
Cybersecurity In The World Of Generative AI

AI is all the rage lately, since it is becoming the "all-in-one tool." You can use generative AI models to solve your math homework, write an article (not this one), or create beautiful paintings. But you can also use them to develop malware, spyware, and spam emails.

Like any new development in computing, AI is presenting new challenges for the cybersecurity industry, so let's discuss this interesting idea.

The obvious threats and mitigations

The most obvious threat that LLMs present is a person with malicious intent prompting the model to generate illegal or harmful content. This was the case for a while after the release of ChatGPT (then based on GPT-3.5), when users asked the model to generate malware, write scams, and so on. This can be mitigated by implementing safeguards and sanitizing prompts before they reach the model, which is what OpenAI did with ChatGPT and Google did with Bard. Of course, it is not that simple: the addition of such restrictions encouraged researchers and malicious users alike to find new ways to bypass them, and this has evolved into a game of cat and mouse with no definitive solution in sight.
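To make the idea concrete, here is a minimal sketch of what such a safeguard layer might look like. Everything in it is an assumption for illustration: the blocklist patterns, the `call_model` callback, and the refusal message are all hypothetical, and real systems use trained moderation models rather than regex matching, but the gating logic is the same.

```python
import re

# Hypothetical blocklist -- production systems use trained moderation
# models instead of simple patterns, but the gating logic is similar.
BLOCKED_PATTERNS = [
    r"\bransomware\b",
    r"\bkeylogger\b",
    r"\bphishing (email|page)\b",
]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def guarded_generate(prompt: str, call_model) -> str:
    """Run the prompt through the safeguard before the model sees it."""
    if not is_allowed(prompt):
        return "This request violates the usage policy."
    return call_model(prompt)

# Example usage with a stand-in model:
if __name__ == "__main__":
    echo_model = lambda p: f"(model output for: {p})"
    print(guarded_generate("Write me a keylogger in C", echo_model))
    print(guarded_generate("Explain how TLS works", echo_model))
```

As the cat-and-mouse point above suggests, filters like this are easy to evade with rephrasing, which is exactly why vendors keep layering on stronger moderation.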

Made to be malicious

Since it is becoming harder and harder to get harmful content out of models like ChatGPT and Bard, cybercriminals moved on to the next option: why not build your own? They don't have to build everything from scratch, since plenty of high-quality LLMs are open source; they can just train a model on malware datasets, and that's it. An example of this idea is "WormGPT", which is built on the open-source GPT-J model.

WormGPT allows users to generate malware, ransomware, scam emails, and much more. The WormGPT example might lead some to believe that malicious AI does not really work in real-world situations, which is true for now. But sooner or later, someone will have the financial capabilities and the expertise to develop a much more dangerous model, and in my opinion that will be an event resembling the launch of the Silk Road.

What should we do?

Script kiddies can now use AI tools like WormGPT to develop convincing scam emails, web pages, or even malware, and in the hands of an expert, such tools speed up the workflow and make it easier to develop and debug complicated scripts and tools. In my opinion, we can only meet this threat with the same weapons: by developing models that help cybersecurity researchers and blue teams protect systems, detect threats, and maybe even write reports (because I hate reports).

Tools like this will help automate routine system checks, package updates, network-traffic analysis, and more; the sketch below shows the flavor of such a check.
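As a toy example of the kind of routine check such a tool could automate, here is a small sketch that flags repeated failed SSH logins in a Linux auth log. The log path, the regex, and the threshold of five attempts are all assumptions for illustration; a defensive model could run checks like this and turn the output into a readable report.

```python
import re
from collections import Counter

# Toy heuristic: flag IPs with repeated failed SSH logins in an auth log.
# An AI assistant could sit on top of checks like this and summarize
# the findings for a blue team.
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_ips(log_lines, threshold=5):
    """Return IPs with at least `threshold` failed login attempts."""
    hits = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            hits[match.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

if __name__ == "__main__":
    # Assumes the default Debian/Ubuntu auth log location.
    with open("/var/log/auth.log") as f:
        for ip, count in suspicious_ips(f).items():
            print(f"{ip}: {count} failed logins -- worth a closer look")
```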

I'm no expert when it comes to AI; I just like to entertain ideas and think about "what ifs?", so I really appreciate any feedback, corrections, or suggestions. If you liked this article, don't forget to like and share <3

Top comments (2)

Jason Ormes

Maybe develop some AI systems to help defend against the malicious ones?

Mohab Gabber

Exactly, I believe Microsoft is already building one: microsoft.com/en-us/security/busin...