Habdul Hazeez
Security news weekly round-up - 27th December 2024

Introduction

We have come a long way this year, from January 2024 to December 2024, and we are still here. Welcome, everyone. How are you doing today?

In this week's edition of our security review here on DEV, the articles that we'll review should not be a surprise if you're a long-term reader of this review. Nonetheless, they are:

  • Phishing (no surprises in this one)
  • Malware (we cover this almost every week)
  • Artificial Intelligence (I'll say abuse of Artificial intelligence, but let's keep it as is)

Take a deep breath (I just did), and let's do some review.


How to Lose a Fortune with Just One Bad Click

After reading the article, you might think that you can never fall for it. Or, you might blame both victims for being dumb. Guess what? I have news for you: you can fall for it, and so can I. That's why it's called Social Engineering; a way to make you do things that, on a normal day, you would never do.

To complicate matters, it seems the alleged perpetrators of this scam are teenagers. Yes, you read that right. One of them might even be a 13-year-old kid who still goes to school with a backpack!

The author advises that you do the following to keep yourself safe from this type of scam:

Understand that your email credentials are more than likely the key to unlocking your entire digital identity. Be sure to use a long, unique passphrase for your email address, and never pick a passphrase that you have ever used anywhere else (not even a variation on an old password).

Finally, it’s also a good idea to take advantage of the strongest multi-factor authentication methods offered. For Gmail/Google accounts, that includes the use of passkeys or physical security keys, which are heavily phishing resistant.

AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Tell me that you're not surprised without telling me that you're not surprised. This is one of the not-so-good sides of Artificial intelligence. Yes, it makes us productive, but in the wrong hands, it could be a weapon to cause all sorts of mischief. There are guardrails in some LLMs that prevent them from writing malware, but that does not mean that they can't rewrite existing ones.

From the article:

"Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging."

With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign.
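To get a feel for what a "natural-looking" behavior-preserving transformation is, here is a minimal sketch in Python that renames identifiers in a function without changing what it does. The names and the toy function are my own illustration, not the researchers' tooling; real LLM-driven rewrites would also restructure logic, insert junk code, and reimplement snippets.

```python
import ast

# A toy "malicious" function; only the names will change, not the behavior.
source = "def add(a, b):\n    return a + b"

# Hypothetical rename map, standing in for an LLM's more innocuous-looking names.
RENAMES = {"add": "process_items", "a": "first_value", "b": "second_value"}

class Renamer(ast.NodeTransformer):
    """Rename functions, arguments, and variable references in place."""
    def visit_FunctionDef(self, node):
        node.name = RENAMES.get(node.name, node.name)
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = RENAMES.get(node.arg, node.arg)
        return node

    def visit_Name(self, node):
        node.id = RENAMES.get(node.id, node.id)
        return node

tree = Renamer().visit(ast.parse(source))
new_source = ast.unparse(tree)  # requires Python 3.9+
print(new_source)
```

A signature-based scanner keyed on the old names no longer matches, yet the rewritten function computes exactly the same result; repeat such transformations enough times and, as the article notes, classifiers trained on the originals start to misjudge the variants.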

Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother

When employees aim to be productive in this age of AI, you have to educate and support them. Locking them out, or implementing strict policies can give rise to Shadow AI — the unsanctioned use of AI tools and technologies outside organizational control.

This article is a wonderful piece. You'll learn a lot. Read it and get accustomed to the term before it becomes mainstream.

The Intersection of AI and OSINT: Advanced Threats On The Horizon

Be wary of what you post online. Ignore the noise that says you need to share lots of personal information to be seen. If you don't heed this advice, you can give cybercriminals more than enough information to launch a spear-phishing attack against you.

From the article:

Scammers and cybercriminals constantly monitor public information to collect insight on people, businesses and systems. They research social media profiles, public records, company websites, press releases, etc., to identify vulnerabilities and potential targets.

What might seem like harmless information such as a job change, a location-tagged photograph, stories in media, online interests and affiliations can be pieced together to build a comprehensive profile of a target, enabling threat actors to launch targeted social engineering attacks.

Researchers Uncover PyPI Packages Stealing Keystrokes and Hijacking Social Accounts

In plain words, both packages are malware designed to steal lots of personal information from the infected system. Luckily, at the time of writing, they have been taken down. Still, the lesson is that apps or packages from official sources can be malware or contain malware components.

From the article (emphasis mine):

The first of the two packages, zebo, uses obfuscation techniques, such as hex-encoded strings, to conceal the URL of the command-and-control (C2) server it communicates with over HTTP requests.

It also packs in a slew of features to harvest data, including leveraging the pynput library to capture keystrokes and ImageGrab to periodically grab screenshots every hour

Cometlogger, on the other hand, is a lot more feature-packed, siphoning a wide range of information, including cookies, passwords, tokens, and account-related data from apps such as Discord, Steam, Instagram, X, TikTok, Reddit, Twitch, Spotify, and Roblox. It's also capable of harvesting system metadata, network and Wi-Fi information, a list of running processes, and clipboard content
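To see why hex-encoded strings defeat a casual scan for suspicious URLs, here is a minimal sketch of the technique zebo reportedly uses. The URL below is a made-up placeholder, not the malware's actual command-and-control address.

```python
# Hex-encoding hides a string from simple keyword searches of the source:
# nothing in this literal looks like a URL to a naive scanner.
hidden = "68747470733a2f2f6578616d706c652e636f6d"  # placeholder, not zebo's real C2

# The code decodes it back to plain text only at runtime, right before use.
url = bytes.fromhex(hidden).decode("utf-8")
print(url)  # https://example.com
```

The obfuscation is trivial to reverse once you spot it, but it's enough to slip past static string matching, which is exactly why package reviewers and security tools treat unexplained encoded blobs as a red flag.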

Credits

Cover photo by Debby Hudson on Unsplash.


That's it for this week, and I'll see you next time.
