<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kuboid Secure Layer</title>
    <description>The latest articles on DEV Community by Kuboid Secure Layer (@kuboidsecurelayer).</description>
    <link>https://dev.to/kuboidsecurelayer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3782457%2F1ba4c5a8-dbc8-4d1b-9df2-43ff9555bacf.png</url>
      <title>DEV Community: Kuboid Secure Layer</title>
      <link>https://dev.to/kuboidsecurelayer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kuboidsecurelayer"/>
    <language>en</language>
    <item>
      <title>Vulnerability Chaining: How Attackers Combine Low-Severity Bugs Into Critical Breaches</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Fri, 03 Apr 2026 04:56:14 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/vulnerability-chaining-how-attackers-combine-low-severity-bugs-into-critical-breaches-fep</link>
      <guid>https://dev.to/kuboidsecurelayer/vulnerability-chaining-how-attackers-combine-low-severity-bugs-into-critical-breaches-fep</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Your scanner flagged three findings — verbose error messages (low), no rate limiting on password reset (medium), weak reset token entropy (medium). No criticals. You deprioritised the report. Six weeks later, an attacker chained those three findings into a complete account takeover of your most privileged user. Vulnerability chaining is how real attacks work. Individual severity ratings are misleading because CVSS scores vulnerabilities in isolation. Attackers never attack in isolation.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  "No Critical Findings"
&lt;/h2&gt;

&lt;p&gt;Those three words are responsible for more breaches than any zero-day exploit.&lt;/p&gt;

&lt;p&gt;Not because the organisation didn't care — they did. Not because the scanner was broken — it wasn't. But because the report rated three separate findings as low and medium severity, the security team made a rational prioritisation decision: fix the criticals first, queue the rest for next sprint.&lt;/p&gt;

&lt;p&gt;The problem is that an attacker had already read the same three findings — by probing the application themselves — and saw something the scanner couldn't: a path. Three steps, each individually unremarkable, that together produced a complete account takeover in under 20 minutes.&lt;/p&gt;

&lt;p&gt;This is vulnerability chaining. And understanding it changes how you read every security report you'll ever receive.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Account Takeover — Step by Step
&lt;/h2&gt;

&lt;p&gt;Let's walk through the exact scenario from the opening hook, because the mechanics matter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding 1 — Verbose error messages (Low severity).&lt;/strong&gt; The password reset flow returns different error messages depending on whether an email address exists in the system. &lt;code&gt;"User not found"&lt;/code&gt; versus &lt;code&gt;"Reset email sent"&lt;/code&gt;. A scanner flags this as low severity information disclosure. The developer made a UX decision — helpful error messages. The security implication barely registers.&lt;/p&gt;

&lt;p&gt;What this gives an attacker: a way to enumerate which email addresses have accounts. They now know that &lt;code&gt;ceo@company.com&lt;/code&gt; is a valid account in the system.&lt;/p&gt;
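&lt;p&gt;The remediation for Finding 1 is to make the two branches indistinguishable. A minimal sketch, with hypothetical function and variable names, of a reset handler that returns one uniform response:&lt;/p&gt;

```javascript
// Hypothetical reset-request handler: the response body is identical
// whether or not the email maps to an account, so the endpoint
// leaks no enumeration signal.
const registered = new Set(['ceo@company.com']);

function requestPasswordReset(email) {
  if (registered.has(email)) {
    // The only difference is a side effect the caller never sees:
    // sendResetEmail(email) would be queued here.
  }
  return {
    status: 200,
    message: 'If that address has an account, a reset email has been sent.',
  };
}
```

&lt;p&gt;Both branches return byte-identical output; timing should also be equalised (queue the email asynchronously) so response latency does not become the new oracle.&lt;/p&gt;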

&lt;p&gt;&lt;strong&gt;Finding 2 — No rate limiting on password reset (Medium severity).&lt;/strong&gt; The password reset endpoint accepts unlimited requests. The scanner flags this as medium — a denial-of-service risk if abused, potentially annoying. No critical risk in isolation.&lt;/p&gt;

&lt;p&gt;What this gives an attacker: the ability to request password resets for any account as many times as they want, as fast as they want.&lt;/p&gt;
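&lt;p&gt;Finding 2 is closed with any limiter in front of the endpoint. A minimal in-memory, fixed-window sketch (a production deployment would key on both IP and account, and back this with a shared store such as Redis):&lt;/p&gt;

```javascript
// Minimal fixed-window rate limiter: at most `limit` requests
// per `windowMs` milliseconds, per key.
function createRateLimiter(limit, windowMs) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

// Allow 5 reset requests per client per 15 minutes.
const allowReset = createRateLimiter(5, 15 * 60 * 1000);
```

&lt;p&gt;&lt;code&gt;allowReset&lt;/code&gt; returns &lt;code&gt;true&lt;/code&gt; five times for the same key inside the window, then &lt;code&gt;false&lt;/code&gt; until the window rolls over.&lt;/p&gt;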

&lt;p&gt;&lt;strong&gt;Finding 3 — Weak reset token entropy (Medium severity).&lt;/strong&gt; The password reset tokens are six characters long, alphanumeric, generated with a weak random function. The scanner flags it as medium — weak cryptography, should be strengthened.&lt;/p&gt;

&lt;p&gt;What this gives an attacker: reset tokens with a search space small enough to brute-force.&lt;/p&gt;

&lt;p&gt;Now chain them.&lt;/p&gt;

&lt;p&gt;The attacker identifies &lt;code&gt;ceo@company.com&lt;/code&gt; via the verbose error message. They hammer the password reset endpoint — no rate limiting — triggering hundreds of reset emails per minute. For each reset request, they simultaneously brute-force the token space against the reset endpoint. Because tokens are short and weakly random, they find a valid token within minutes. They use it. They set a new password. They're in — as the CEO.&lt;/p&gt;

&lt;p&gt;Three medium and low findings. One complete, privileged account takeover. The scanner gave each finding a severity score between 3 and 5 out of 10. The attack path they formed together was a 10.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Automated Tools Cannot See Chains
&lt;/h2&gt;

&lt;p&gt;The reason is architectural. Automated scanners test endpoints. They test inputs. They test responses. They evaluate each finding against a scoring rubric — CVSS (Common Vulnerability Scoring System) — that was designed to rate the severity of individual vulnerabilities.&lt;/p&gt;

&lt;p&gt;CVSS has no mechanism for "this vulnerability, when combined with that vulnerability, enables this attack path." It was never designed for that. It rates issues in isolation because isolation is the only mode a signature-based scanner can operate in.&lt;/p&gt;

&lt;p&gt;A human tester, on the other hand, builds a mental model of the application. They see the verbose error message and immediately ask: &lt;em&gt;what else is there in this authentication flow that might make this useful?&lt;/em&gt; They find the rate limiting gap. They find the weak token. And the chain assembles itself — not through any sophisticated technique, but through the attacker's fundamental question: &lt;em&gt;what can I do with what I have?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the gap we described in &lt;a href="https://www.kuboid.in/blog/why-automated-vulnerability-scanners-miss-most-real-vulnerabilities" rel="noopener noreferrer"&gt;why automated scanners miss most real vulnerabilities&lt;/a&gt;. Chaining is the sharpest illustration of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three More Chains Worth Understanding
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Open Redirect + Reflected XSS → Phishing with legitimate domain trust.&lt;/strong&gt;&lt;br&gt;
A scanner finds an open redirect (low) — a URL parameter that redirects users to an arbitrary external site. It also finds a reflected XSS (medium) on the same domain. &lt;a href="https://www.kuboid.in/blog/xss-in-2026-why-cross-site-scripting-is-still-dangerous-and-still-everywhere" rel="noopener noreferrer"&gt;XSS is still widespread and dangerous&lt;/a&gt;. Together: send a victim a link on the legitimate domain that redirects to a page injecting script into their session. The trust of the real domain bypasses their suspicion. The XSS steals their session token.&lt;/p&gt;
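&lt;p&gt;The open-redirect half of that chain is cheap to close: only redirect to targets you recognise. A minimal allowlist sketch (the hostnames are hypothetical):&lt;/p&gt;

```javascript
// Only redirect to same-origin paths or explicitly allowlisted hosts.
// Hostnames here are illustrative.
const ALLOWED_HOSTS = new Set(['app.example.com', 'www.example.com']);

function safeRedirectTarget(raw, fallback = '/') {
  try {
    // Resolve relative inputs against our own origin; absolute URLs
    // keep their own host and get checked against the allowlist.
    const url = new URL(raw, 'https://app.example.com');
    if (ALLOWED_HOSTS.has(url.hostname)) return url.href;
  } catch (_) {
    // Unparseable input falls through to the fallback.
  }
  return fallback;
}
```

&lt;p&gt;Anything resolving to a host outside the allowlist, including protocol-relative tricks like &lt;code&gt;//evil.example.net&lt;/code&gt;, falls back to &lt;code&gt;/&lt;/code&gt;.&lt;/p&gt;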

&lt;p&gt;&lt;strong&gt;SSRF + Cloud Metadata Endpoint → Cloud Credential Theft.&lt;/strong&gt;&lt;br&gt;
A Server-Side Request Forgery (SSRF) vulnerability (medium) lets an attacker coerce the application into making requests to internal addresses. On its own, limited impact. But most cloud environments expose a metadata endpoint at a well-known internal IP — &lt;code&gt;169.254.169.254&lt;/code&gt; on AWS — that returns IAM credentials for the instance. SSRF to that endpoint retrieves credentials with whatever permissions the instance carries. From a medium-severity SSRF to complete cloud environment access — one hop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IDOR + Information Disclosure → Targeted Account Takeover.&lt;/strong&gt;&lt;br&gt;
An &lt;a href="https://www.kuboid.in/blog/idor-the-vulnerability-developers-keep-writing-and-how-to-stop" rel="noopener noreferrer"&gt;IDOR vulnerability&lt;/a&gt; leaks user profile data including email addresses and partial phone numbers. An information disclosure in an error message confirms which users have admin privileges. Combined: identify admin email addresses, then use them as targets for a precisely crafted spear phishing campaign. The IDOR and the information disclosure each score as medium. Their combination enables a targeted attack on your highest-privilege accounts.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Severity Miscalculation Problem
&lt;/h2&gt;

&lt;p&gt;CVSS scores are not wrong. They're just answering a different question than you think they are.&lt;/p&gt;

&lt;p&gt;A CVSS score answers: &lt;em&gt;how severe is this individual vulnerability, assuming optimal conditions for an attacker exploiting it in isolation?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What you actually need to know is: &lt;em&gt;what can an attacker accomplish in my specific environment, using any combination of the weaknesses that exist here?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Those are different questions with different answers. A pen test report that lists findings by individual CVSS score without documenting attack paths is giving you the right answer to the wrong question. It tells you which vulnerabilities are serious in the abstract. It doesn't tell you how you'd actually get breached.&lt;/p&gt;

&lt;p&gt;A good pen test report — the kind worth acting on — documents attack chains explicitly. It shows the path: finding A enables finding B, finding B in combination with finding C produces this outcome. It presents risk in the context of your application, not in the abstract of a scoring rubric.&lt;/p&gt;

&lt;p&gt;This is one of the most significant differences between a tool-generated report and a report written by a tester who has spent hours thinking adversarially about your specific system. The &lt;a href="https://www.kuboid.in/blog/broken-authentication-what-it-is-what-i-test-and-why-it-keeps-getting-exploited" rel="noopener noreferrer"&gt;broken authentication patterns&lt;/a&gt; that enable many chains are exactly the kind of thing a skilled tester looks to combine.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for How You Read Security Reports
&lt;/h2&gt;

&lt;p&gt;Next time you receive a vulnerability report — whether from an automated scanner, a bug bounty submission, or a pen test — read it differently.&lt;/p&gt;

&lt;p&gt;Don't just look at the severity column. Look at which findings touch the same surface. Which findings involve the same user flow. Which low-severity information disclosures might make a medium-severity issue exploitable. Ask the person who wrote the report: are there any combinations here that concern you?&lt;/p&gt;

&lt;p&gt;If they've thought about it, they'll have an answer. If the report was auto-generated, there won't be one.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Kuboid Secure Layer Thinks About This
&lt;/h2&gt;

&lt;p&gt;Every &lt;a href="https://www.kuboid.in/services/web-app-pentest" rel="noopener noreferrer"&gt;web application assessment&lt;/a&gt; we deliver at &lt;a href="https://www.kuboid.in/why-kuboid" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt; explicitly documents attack paths — not just individual findings. If we find three medium-severity issues that chain into a critical outcome, the report reflects that. The executive summary reflects the actual risk, not the average CVSS score.&lt;/p&gt;

&lt;p&gt;Because the board doesn't need to know that you have a 4.3 and two 5.1s. They need to know that an attacker can take over your CEO's account in 20 minutes — and here's exactly how.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;Book a free consultation&lt;/a&gt; and we'll walk you through what attack-path-aware testing looks like for your application.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Have you ever received a "no criticals" report and later found something serious hiding in the medium findings? Or spotted a chain that a scanner completely missed? Drop a comment — this pattern is far more common than the industry admits.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Why a Clean Vulnerability Scan Report Can Be Your Biggest Security Risk</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Thu, 02 Apr 2026 05:12:45 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/why-a-clean-vulnerability-scan-report-can-be-your-biggest-security-risk-2moj</link>
      <guid>https://dev.to/kuboidsecurelayer/why-a-clean-vulnerability-scan-report-can-be-your-biggest-security-risk-2moj</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; In March 2025, 270,000 Samsung Germany customer records were leaked using credentials stolen in 2021 — credentials a cybersecurity firm had flagged years earlier that Samsung never rotated. In February 2024, the largest healthcare breach in US history happened because one Citrix portal was missing MFA — a policy that UnitedHealth's own security standards required. An automated vulnerability scanner running on either system would have reported no critical findings. Those reports would have been accurate. And completely useless.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Credential That Sat There for Four Years
&lt;/h2&gt;

&lt;p&gt;On 29 March 2025, a hacker operating under the alias "GHNA" dumped 270,000 Samsung Germany customer records onto the internet — names, addresses, email addresses, transaction details, order histories, support communications.&lt;/p&gt;

&lt;p&gt;They didn't break any code. They didn't exploit a zero-day. They used a username and password that had been sitting in criminal databases since 2021, when Raccoon Infostealer malware infected a laptop belonging to an employee at Spectos GmbH — a third-party service quality monitoring firm connected to Samsung's German customer ticketing system.&lt;/p&gt;

&lt;p&gt;Hudson Rock, the security firm that analysed the breach, found that initial access was gained via login credentials stolen by an infostealer in 2021, credentials that were never rotated in the years that followed.&lt;/p&gt;

&lt;p&gt;Hudson Rock had flagged these compromised credentials years ago in their Cavalier database, which tracks over 30 million infected machines. Samsung reportedly failed to rotate or secure them, allowing the hacker to access the system years later.&lt;/p&gt;

&lt;p&gt;Run an automated vulnerability scanner against Samsung's ticketing system in 2024. It would have tested for open ports, unpatched software, weak cipher suites, missing headers. It would have found nothing critical. Because there was nothing technically broken. The credentials were valid. The system was functioning as designed. The control that was missing — credential rotation — is not something any scanner checks for.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Portal That Had No MFA
&lt;/h2&gt;

&lt;p&gt;Thirteen months before the Samsung breach, in February 2024, a single stolen credential and a missing checkbox caused the largest healthcare data breach in United States history.&lt;/p&gt;

&lt;p&gt;UnitedHealth confirmed that Change Healthcare's network was breached by the BlackCat ransomware gang, who used stolen credentials to log into the company's Citrix remote access service, which did not have multi-factor authentication enabled.&lt;/p&gt;

&lt;p&gt;The company's policy was to have MFA turned on for all external-facing systems, but for reasons that remain under investigation, a Change Healthcare Citrix portal used for desktop remote access did not have MFA turned on. "That was the server through which the cybercriminals were able to get into Change," UnitedHealth CEO Andrew Witty said.&lt;/p&gt;

&lt;p&gt;The attackers were inside the network for nine days before deploying ransomware. By then, they had exfiltrated 4TB of data. The final count of affected individuals reached 190 million — more than half the US population. The financial cost to UnitedHealth exceeded $2.8 billion by mid-2025.&lt;/p&gt;

&lt;p&gt;One Citrix portal. No MFA checkbox ticked. No vulnerability scanner would have flagged it as a critical finding, because the portal was technically functional. It was serving its purpose. The missing control was a &lt;em&gt;gap&lt;/em&gt;, not a broken component.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pattern These Breaches Share
&lt;/h2&gt;

&lt;p&gt;These are not isolated anomalies. They represent a category of security failure that is structurally invisible to automated tooling: &lt;strong&gt;the absence of a required control&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A vulnerability scanner finds things that are broken — software with known CVEs, misconfigured headers, weak cryptographic implementations. It cannot find things that are &lt;em&gt;missing&lt;/em&gt; — a rotation policy that was never enforced, an MFA configuration that was never applied, a third-party access review that was never conducted, an offboarding process that never revoked credentials.&lt;/p&gt;

&lt;p&gt;This is the critical distinction. Broken things have signatures. Missing things have no signature. You cannot scan for the absence of a process.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.kuboid.in/blog/secrets-in-code-how-one-git-commit-cost-a-startup-dollar-80000" rel="noopener noreferrer"&gt;secrets in code post&lt;/a&gt; covers a related pattern — credentials committed to repositories that live there indefinitely because no one has a process to detect or rotate them. The &lt;a href="https://www.kuboid.in/blog/iam-permissions-why-admin-access-for-everyone-is-a-disaster-waiting-to-happen" rel="noopener noreferrer"&gt;IAM permissions post&lt;/a&gt; covers what happens when access is provisioned but never reviewed. Same category: not broken, just missing the right control.&lt;/p&gt;




&lt;h2&gt;
  
  
  What "No Vulnerabilities Found" Actually Means
&lt;/h2&gt;

&lt;p&gt;Let's be precise about what an automated scanner is actually telling you when it returns a clean report.&lt;/p&gt;

&lt;p&gt;It is telling you: &lt;em&gt;of the known vulnerability signatures in our database, we did not detect any matches against the assets you provided us.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It is not telling you: your credential hygiene is sound. Your MFA coverage is complete. Your third-party access has been reviewed recently. Your access control policies match your actual configuration. Your architecture has no single points of failure.&lt;/p&gt;

&lt;p&gt;The gap between those two statements is where real breaches happen.&lt;/p&gt;

&lt;p&gt;This is the false confidence problem — and it is more dangerous than false positives. A false positive means your team wastes time investigating a non-issue. False confidence means your team doesn't investigate the actual risk at all, because the report implied it wasn't there.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Manual Assessment Finds That Scanners Cannot
&lt;/h2&gt;

&lt;p&gt;A manual security assessment — whether a penetration test, an architecture review, or a control gap analysis — is not limited to signature matching. A human reviewer asks questions that scanners were never designed to ask.&lt;/p&gt;

&lt;p&gt;Are your third-party vendor credentials subject to rotation policies — and is that policy actually enforced? Does MFA coverage match your documented standards across every external-facing system, or are there exceptions that were never followed up? When an employee leaves, is access revocation verified or assumed? If your Citrix portal was acquired through a company merger, did it go through the same security onboarding as your primary infrastructure?&lt;/p&gt;

&lt;p&gt;These are process and architecture questions. Their answers determine whether you have a &lt;a href="https://www.kuboid.in/blog/why-mfa-wont-stop-social-engineering-attacks" rel="noopener noreferrer"&gt;Samsung-style gap&lt;/a&gt; sitting dormant in your environment right now.&lt;/p&gt;

&lt;p&gt;The most valuable output of a manual engagement is sometimes not a list of vulnerabilities found — it's a list of controls that should exist but don't. The things your scanner had no way to tell you were missing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Question Worth Asking This Week
&lt;/h2&gt;

&lt;p&gt;You don't need to wait for a pen test to start. One question for your team right now:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Do we have any third-party vendor credentials with access to our systems that haven't been reviewed or rotated in the last 12 months?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If the honest answer is "we don't know" — that is your finding.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Has your team ever discovered a missing control that a scanner completely overlooked — something obvious in hindsight that no automated tool would have caught? Drop a comment. These stories matter, and they're far more instructive than any CVE database.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;At &lt;a href="https://www.kuboid.in/why-kuboid" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;, our assessments include control gap analysis alongside vulnerability testing — specifically because the Samsung and Change Healthcare categories of failure are increasingly common and completely invisible to scanning alone. If you want to know what your environment is missing, not just what's broken, &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;let's talk&lt;/a&gt;. We also offer a &lt;a href="https://www.kuboid.in/services/virtual-security-engineer" rel="noopener noreferrer"&gt;Virtual Security Engineer&lt;/a&gt; service for teams that need ongoing security oversight without a full-time hire.&lt;/p&gt;




</description>
      <category>cybersecurity</category>
      <category>founder</category>
    </item>
    <item>
      <title>The Axios Supply Chain Attack Explained: How a Compromised npm Account Put 83 Million Projects at Risk</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Wed, 01 Apr 2026 04:28:47 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/the-axios-supply-chain-attack-explained-how-a-compromised-npm-account-put-83-million-projects-at-2191</link>
      <guid>https://dev.to/kuboidsecurelayer/the-axios-supply-chain-attack-explained-how-a-compromised-npm-account-put-83-million-projects-at-2191</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; On March 31, 2026, between 00:21 and 03:29 UTC, two malicious versions of Axios — &lt;code&gt;1.14.1&lt;/code&gt; and &lt;code&gt;0.30.4&lt;/code&gt; — were published to npm via a compromised maintainer account. They silently installed a cross-platform remote access trojan (RAT) on any machine that ran &lt;code&gt;npm install&lt;/code&gt; during that window. The malware targeted macOS, Windows, and Linux, contacted a live command-and-control server, self-deleted its own traces after execution, and established persistence. Axios has 83 million weekly downloads. If your CI/CD pipeline ran without a pinned version during those three hours, check your system now.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Package Everyone Trusts
&lt;/h2&gt;

&lt;p&gt;If you've written JavaScript in the last decade — frontend or backend — you've almost certainly used Axios. It's the HTTP client. The one that just works. It sits in millions of &lt;code&gt;package.json&lt;/code&gt; files across the world as a dependency so standard it's rarely thought about.&lt;/p&gt;

&lt;p&gt;Which is exactly why it was targeted.&lt;/p&gt;

&lt;p&gt;On the night of March 30–31, 2026, an attacker who had obtained the npm credentials of Axios's primary maintainer — &lt;code&gt;@jasonsaayman&lt;/code&gt; — used them to publish two poisoned versions directly to the npm registry, bypassing the project's GitHub Actions CI/CD pipeline entirely. The malicious versions were live for just over three hours. In those three hours, any automated build system, developer machine, or container that pulled a fresh Axios install without a pinned version could have been fully compromised.&lt;/p&gt;

&lt;p&gt;The attack was not opportunistic. It was planned, staged, and executed with remarkable precision.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Was Built — 18 Hours in Advance
&lt;/h2&gt;

&lt;p&gt;The attacker's playbook reveals serious operational sophistication. They didn't rush.&lt;/p&gt;

&lt;p&gt;At 05:57 UTC on March 30, they published &lt;code&gt;plain-crypto-js@4.2.0&lt;/code&gt; to npm — a clean, harmless package with no malicious content. Just establishing a footprint. Creating a brief package history to reduce suspicion.&lt;/p&gt;

&lt;p&gt;About 18 hours later, at 23:59 UTC, they published &lt;code&gt;plain-crypto-js@4.2.1&lt;/code&gt; — the same package name, now containing the malicious payload.&lt;/p&gt;

&lt;p&gt;Then, at 00:21 UTC on March 31, they published &lt;code&gt;axios@1.14.1&lt;/code&gt; using the compromised &lt;code&gt;@jasonsaayman&lt;/code&gt; account. The only change to Axios? A single line adding &lt;code&gt;plain-crypto-js@4.2.1&lt;/code&gt; as a runtime dependency — a package that Axios's own source code never imports anywhere, added solely to trigger its &lt;code&gt;postinstall&lt;/code&gt; hook: &lt;code&gt;node setup.js&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;At 01:00 UTC, a second poisoned release — &lt;code&gt;axios@0.30.4&lt;/code&gt; — followed, hitting the legacy version branch. Both release branches compromised within 39 minutes.&lt;/p&gt;

&lt;p&gt;npm quarantined both versions at 03:29 UTC. 188 minutes total exposure.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happened When You Ran &lt;code&gt;npm install&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;setup.js&lt;/code&gt; postinstall dropper used double-layer obfuscation — reversed Base64 encoding combined with an XOR cipher keyed to &lt;code&gt;OrDeR_7077&lt;/code&gt; — to evade static analysis tools. Once decoded, it detected the host operating system and reached out to &lt;code&gt;sfrclak[.]com:8000&lt;/code&gt; (IP: &lt;code&gt;142.11.206.73&lt;/code&gt;) to download a platform-appropriate second-stage payload.&lt;/p&gt;
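&lt;p&gt;The two layers are simple to reproduce in the abstract. A harmless sketch of the same pattern (XOR with a repeating key, Base64, string reversal; the key is the one named in the published analysis, while the payload and exact layer order here are illustrative):&lt;/p&gt;

```javascript
// Generic demo of the dropper's obfuscation layers. The payload
// below is a harmless stand-in, not the actual malware.
const KEY = 'OrDeR_7077'; // key reported in the published analysis

function xorWithKey(buf, key) {
  const out = Buffer.alloc(buf.length);
  for (let i = 0; i < buf.length; i++) {
    out[i] = buf[i] ^ key.charCodeAt(i % key.length);
  }
  return out;
}

function obfuscate(text) {
  const xored = xorWithKey(Buffer.from(text, 'utf8'), KEY);
  return xored.toString('base64').split('').reverse().join('');
}

function deobfuscate(blob) {
  const b64 = blob.split('').reverse().join('');
  return xorWithKey(Buffer.from(b64, 'base64'), KEY).toString('utf8');
}
```

&lt;p&gt;Static scanners looking for suspicious strings see only a reversed Base64 blob; the real content appears only after both layers are undone at runtime.&lt;/p&gt;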

&lt;p&gt;&lt;strong&gt;On macOS:&lt;/strong&gt; An AppleScript downloaded a C++ RAT binary to &lt;code&gt;/Library/Caches/com.apple.act.mond&lt;/code&gt; — deliberately mimicking Apple's own background daemon naming convention. Once running, it fingerprinted the system, generated a unique victim ID, and beaconed to the C2 server every 60 seconds using a fake Internet Explorer 8 User-Agent string. The attacker could send arbitrary shell commands, execute additional payloads, or enumerate the filesystem on demand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On Windows:&lt;/strong&gt; A VBScript downloader copied PowerShell to &lt;code&gt;%PROGRAMDATA%\wt.exe&lt;/code&gt; — disguising it as Windows Terminal — and executed a hidden PowerShell RAT that connected to the same C2 server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On Linux:&lt;/strong&gt; A Python RAT was downloaded to &lt;code&gt;/tmp/ld.py&lt;/code&gt; and launched as an orphaned background process via &lt;code&gt;nohup python3&lt;/code&gt;, detaching it from the terminal session that spawned it.&lt;/p&gt;

&lt;p&gt;After launching the RAT, the dropper performed forensic self-cleanup: it deleted &lt;code&gt;setup.js&lt;/code&gt;, removed the &lt;code&gt;package.json&lt;/code&gt; containing the postinstall hook, and replaced it with a clean &lt;code&gt;package.md&lt;/code&gt; file renamed to &lt;code&gt;package.json&lt;/code&gt;. If you inspected &lt;code&gt;node_modules/plain-crypto-js&lt;/code&gt; after the fact, you would find no obvious signs a postinstall script had ever run.&lt;/p&gt;

&lt;p&gt;As StepSecurity noted in their analysis: "Neither malicious version contains a single line of malicious code inside Axios itself." The attack was entirely in the injected dependency.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Your Lockfile Is Your First Line of Defence
&lt;/h2&gt;

&lt;p&gt;Here's the critical question: were you affected?&lt;/p&gt;

&lt;p&gt;The three-hour window (00:21–03:29 UTC) is the key constraint. If your &lt;code&gt;package-lock.json&lt;/code&gt; or &lt;code&gt;yarn.lock&lt;/code&gt; was committed before the malicious versions were published and your install ran &lt;code&gt;npm ci&lt;/code&gt; — which strictly uses the lockfile — you were not affected. The lockfile would have pinned you to a prior, clean Axios version.&lt;/p&gt;

&lt;p&gt;Risk was highest for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD pipelines&lt;/strong&gt; running on a schedule or on commit that use &lt;code&gt;npm install&lt;/code&gt; without enforcing the lockfile, especially those running during early UTC hours&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developers&lt;/strong&gt; who ran &lt;code&gt;npm install&lt;/code&gt; or &lt;code&gt;npm update&lt;/code&gt; during the window on a machine that resolved the new versions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Projects depending on&lt;/strong&gt; &lt;code&gt;@qqbrowser/openclaw-qbot@0.0.130&lt;/code&gt; or &lt;code&gt;@shadanai/openclaw&lt;/code&gt; versions &lt;code&gt;2026.3.31-1&lt;/code&gt; and &lt;code&gt;2026.3.31-2&lt;/code&gt; — two additional packages that shipped the same malicious &lt;code&gt;plain-crypto-js&lt;/code&gt; dependency, with no dependency on the time window&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the same attack pattern we wrote about with &lt;a href="https://www.kuboid.in/blog/litellm-supply-chain-attack-march-2026-explained" rel="noopener noreferrer"&gt;LiteLLM just a week ago&lt;/a&gt; — and before that, &lt;a href="https://www.kuboid.in/blog/xz-utils-backdoor-supply-chain-attack-linux" rel="noopener noreferrer"&gt;XZ Utils&lt;/a&gt;. The attack surface is different; the playbook is identical: compromise a trusted publishing credential, inject a malicious dependency, rely on the ecosystem's implicit trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  Check If You Were Affected — Right Now
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Check your lockfile for the affected versions&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# npm&lt;/span&gt;
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'"axios"'&lt;/span&gt; package-lock.json | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'1\.14\.1|0\.30\.4'&lt;/span&gt;

&lt;span class="c"&gt;# yarn&lt;/span&gt;
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s1"&gt;'axios@'&lt;/span&gt; yarn.lock | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'1\.14\.1|0\.30\.4'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Check for the malicious dependency&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;ls &lt;/span&gt;plain-crypto-js
&lt;span class="c"&gt;# or&lt;/span&gt;
find node_modules &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"plain-crypto-js"&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Check for RAT artifacts on potentially affected machines&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Indicator of Compromise&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;macOS&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/Library/Caches/com.apple.act.mond&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Windows&lt;/td&gt;
&lt;td&gt;&lt;code&gt;%PROGRAMDATA%\wt.exe&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linux&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/tmp/ld.py&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network&lt;/td&gt;
&lt;td&gt;Outbound traffic to &lt;code&gt;sfrclak[.]com&lt;/code&gt; or &lt;code&gt;142.11.206.73:8000&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you find any of these, stop. The RAT was active and beaconing. Do not attempt to clean the system in place.&lt;/p&gt;




&lt;h2&gt;
  
  
  What to Do If You Were Affected
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Isolate the machine immediately.&lt;/strong&gt; If a RAT was running, the attacker had arbitrary code execution. Do not continue using the system for anything sensitive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rotate every credential on that machine.&lt;/strong&gt; API keys, npm tokens, GitHub tokens, SSH keys, AWS credentials, cloud service accounts, database passwords, &lt;code&gt;.env&lt;/code&gt; file contents. Everything. Rotate from a clean device, not the compromised one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit your CI/CD pipeline logs&lt;/strong&gt; for the March 31 UTC window. Determine exactly which builds installed the affected versions and what those build environments had access to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rebuild compromised environments from a known-clean snapshot.&lt;/strong&gt; Do not attempt to remediate in place. The dropper self-deleted, but the RAT and its persistence mechanism may have installed additional payloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Downgrade Axios&lt;/strong&gt; to version &lt;code&gt;1.14.0&lt;/code&gt; or &lt;code&gt;0.30.3&lt;/code&gt; and pin it explicitly in your &lt;code&gt;package.json&lt;/code&gt;. Block egress traffic to &lt;code&gt;sfrclak[.]com&lt;/code&gt; at your network perimeter.&lt;/p&gt;
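&lt;p&gt;As a sketch, the pin can be applied with npm's &lt;code&gt;--save-exact&lt;/code&gt; flag, which writes an exact version into &lt;code&gt;package.json&lt;/code&gt; rather than a caret range:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pin the last clean release exactly: no caret range, no silent upgrades
npm install axios@1.14.0 --save-exact

# package.json should now contain:
#   "axios": "1.14.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
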




&lt;h2&gt;
  
  
  The Structural Change That Would Have Stopped This
&lt;/h2&gt;

&lt;p&gt;Two controls that were widely recommended but not universally adopted would have prevented compromise even during the three-hour window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;npm ci&lt;/code&gt; in CI/CD, never &lt;code&gt;npm install&lt;/code&gt;.&lt;/strong&gt; &lt;code&gt;npm ci&lt;/code&gt; enforces strict lockfile compliance. It installs exactly the versions in your lockfile and fails if &lt;code&gt;package.json&lt;/code&gt; and the lockfile are out of sync. An attacker publishing a new version to the registry cannot affect a build whose versions come exclusively from the lockfile; the registry is only contacted to download the artifacts already pinned there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;--ignore-scripts&lt;/code&gt; for automated installs.&lt;/strong&gt; Running &lt;code&gt;npm ci --ignore-scripts&lt;/code&gt; prevents postinstall hooks from executing entirely. The entire Axios attack was delivered through a postinstall hook. This flag would have blocked it completely — though be aware it can break packages that legitimately require native compilation during install.&lt;/p&gt;
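&lt;p&gt;In a CI job the two flags combine into a single install step. A minimal sketch (the &lt;code&gt;sharp&lt;/code&gt; rebuild line is a hypothetical example of a package that genuinely needs its native build step):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install exactly what the lockfile pins, and run no lifecycle scripts
npm ci --ignore-scripts

# If a specific dependency legitimately needs its install-time build step,
# run it explicitly and deliberately afterwards:
npm rebuild sharp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
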

&lt;p&gt;These aren't exotic security measures. They're npm flags. The fact that so many pipelines don't use them by default is part of why supply chain attacks through postinstall scripts remain so effective.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pattern We're Now Watching Every Week
&lt;/h2&gt;

&lt;p&gt;The Axios attack came one week after the &lt;a href="https://www.kuboid.in/blog/litellm-supply-chain-attack-march-2026-explained" rel="noopener noreferrer"&gt;LiteLLM compromise&lt;/a&gt; — which itself came from credentials stolen during the Trivy breach. These attacks are cascading. Each compromised tool provides credentials or access that enables the next attack. The &lt;a href="https://www.kuboid.in/blog/owasp-top-10-2025-everything-that-changed-and-what-it-means" rel="noopener noreferrer"&gt;software supply chain&lt;/a&gt; is now OWASP's #3 risk category specifically because this pattern is accelerating, not slowing down.&lt;/p&gt;

&lt;p&gt;If your organisation uses open source at any scale — and virtually every organisation does — your dependency tree is an attack surface that nobody on your team has fully reviewed. The question isn't whether one of those packages will be compromised. It's whether you'll know about it within three hours or three months.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.kuboid.in" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;, our &lt;a href="https://www.kuboid.in/services/web-app-pentest" rel="noopener noreferrer"&gt;security assessments&lt;/a&gt; include supply chain exposure analysis — dependency tree auditing, CI/CD pipeline security, and secrets management review. If you want to understand what your dependency graph actually looks like from an attacker's perspective, &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;let's talk&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Are you running Axios in production? Did you check your lockfile after this news broke? Drop a comment — and if you found the affected versions in your environment, what did your response look like? The more we share, the better prepared the community gets.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>axios</category>
      <category>cybersecurity</category>
      <category>npm</category>
      <category>founder</category>
    </item>
    <item>
      <title>Pen Testing Tools Explained: Nessus, Burp Suite, Nmap, Metasploit — What They Do and What They Miss</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Tue, 31 Mar 2026 03:50:47 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/pen-testing-tools-explained-nessus-burp-suite-nmap-metasploit-what-they-do-and-what-they-miss-aid</link>
      <guid>https://dev.to/kuboidsecurelayer/pen-testing-tools-explained-nessus-burp-suite-nmap-metasploit-what-they-do-and-what-they-miss-aid</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Nessus, Burp Suite, Nmap, Metasploit, ZAP — these are the tools in every pen tester's arsenal. You've probably heard of most of them. Your DevOps team may already run some of them. But here's what most vendors won't tell you: every single one of these tools has a hard boundary where it stops working — and a human takes over. Understanding that boundary is the difference between a security programme that checks boxes and one that actually finds what an attacker would find.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Tools Don't Hack. Testers Do.
&lt;/h2&gt;

&lt;p&gt;There's a narrative in the security industry — reinforced by vendors, marketing decks, and compliance frameworks — that the right tool equals the right result. Run Nessus. Get a report. Fix the findings. Done.&lt;/p&gt;

&lt;p&gt;I've been doing this long enough to know that story is comfortable but incomplete. Tools are how a pen tester starts. They are never how a pen tester finishes. Here's an honest breakdown of what each major tool actually does — and where each one hands off to human judgment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Nmap — The First Thing We Run
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Nmap is a network scanner. It discovers what's alive on a network, which ports are open on each host, what services are running on those ports, and — in many cases — what software versions are running. It's the reconnaissance layer. Before a tester touches anything else, they need a map.&lt;/p&gt;

&lt;p&gt;A typical Nmap scan tells us: there's a web server on port 443, an SSH service on port 22, a database port that really shouldn't be internet-facing, and a forgotten dev server running an outdated version of nginx.&lt;/p&gt;
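&lt;p&gt;That kind of map typically comes from a version-detection scan. A representative invocation, with a placeholder hostname:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# -sV probes open ports for service and version banners; -p- covers all 65535 ports
nmap -sV -p- target.example.com

# Quicker triage pass: default scripts plus version detection on the top 1000 ports
nmap -sC -sV target.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
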

&lt;p&gt;&lt;strong&gt;Where it stops:&lt;/strong&gt; Nmap tells you &lt;em&gt;what exists&lt;/em&gt;. It has no opinion on whether any of it is exploitable, misconfigured, or logically flawed. An open port is just an open port until a human decides what to do with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Nessus / Qualys / OpenVAS — CVE Matching at Scale
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What they do:&lt;/strong&gt; These are vulnerability scanners. They take the inventory Nmap builds and match it against massive databases of known CVEs (Common Vulnerabilities and Exposures). They identify unpatched software, weak cipher suites, default credentials still in place, deprecated protocols, and configuration issues measured against hardening baselines such as the CIS Benchmarks.&lt;/p&gt;

&lt;p&gt;This is genuinely valuable. If you're running Apache 2.4.49 and it's October 2021, Nessus will tell you that it's vulnerable to CVE-2021-41773, a path traversal and RCE flaw that was actively weaponised within days of disclosure. That's not a trivial finding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where they stop:&lt;/strong&gt; Nessus knows nothing about your application. It doesn't know that the &lt;code&gt;/admin&lt;/code&gt; panel requires authentication — or that the authentication can be bypassed with a specific header. It doesn't know that your payment flow has a race condition. It matches signatures. Anything that requires understanding context, intent, or business logic is outside its scope.&lt;/p&gt;

&lt;p&gt;As we wrote &lt;a href="https://www.kuboid.in/blog/why-automated-vulnerability-scanners-miss-most-real-vulnerabilities" rel="noopener noreferrer"&gt;in our post on scanner limitations&lt;/a&gt;, these tools find the known and miss the novel. Attackers aren't limited to the known.&lt;/p&gt;




&lt;h2&gt;
  
  
  OWASP ZAP / Nikto — Web Application Baseline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What they do:&lt;/strong&gt; Where Nessus scans infrastructure, ZAP and Nikto scan web applications specifically. They crawl your app, submit inputs with payloads designed to trigger common vulnerabilities — reflected XSS, basic SQL injection, open redirects, missing security headers — and report what fires.&lt;/p&gt;

&lt;p&gt;ZAP in particular has an active scanner mode that is genuinely useful for catching low-hanging fruit quickly. If your application is missing a &lt;code&gt;Content-Security-Policy&lt;/code&gt; header or has a reflected XSS in a search field, ZAP will find it.&lt;/p&gt;
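&lt;p&gt;That class of finding is also trivial to confirm by hand. A sketch with &lt;code&gt;curl&lt;/code&gt;, using a placeholder hostname:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Fetch only the response headers and look for a Content-Security-Policy.
# Empty output means the header is missing.
curl -sI https://app.example.com/ | grep -i '^content-security-policy'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
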

&lt;p&gt;&lt;strong&gt;Where they stop:&lt;/strong&gt; They crawl what they can see. Anything that requires authentication state, multi-step flows, or understanding how parts of the application interact with each other is largely invisible to an automated crawler. They also generate meaningful false positive rates — which means someone still has to review and validate every finding manually anyway.&lt;/p&gt;




&lt;h2&gt;
  
  
  Burp Suite — Where Things Get Interesting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Burp Suite is the tool I spend the most time in on any engagement, and it's worth explaining why — because it operates differently from everything above.&lt;/p&gt;

&lt;p&gt;Burp sits as a proxy between the tester's browser and the target application. Every request and response passes through it. That means the tester sees exactly what data is being sent, exactly what the server responds with, and can intercept, modify, and replay any request in real time.&lt;/p&gt;

&lt;p&gt;In scanner mode, Burp can run automated checks similar to ZAP. But that's not where its value lies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The manual mode is where it becomes a different category of tool entirely.&lt;/strong&gt; A tester who is actively working through Burp — replaying authentication requests with modified parameters, changing user IDs in API calls, testing how the application responds to unexpected inputs in multi-step flows — is doing something fundamentally different from running a scanner. They are thinking adversarially, in real time, based on what the application reveals about itself as they probe it.&lt;/p&gt;

&lt;p&gt;This is the gap between a tool that finds and a tester that tests. Burp is the interface through which that testing happens.&lt;/p&gt;




&lt;h2&gt;
  
  
  Metasploit — Exploit Validation, Not Discovery
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Metasploit is a framework for running known exploits against known vulnerabilities. If Nessus tells you that a host is running a service vulnerable to CVE-2024-XXXX, Metasploit likely has a module that can confirm whether that vulnerability is actually exploitable in your specific environment.&lt;/p&gt;

&lt;p&gt;It's also an excellent framework for post-exploitation — simulating what an attacker does after they gain initial access. Can they move laterally? Escalate privileges? Reach your database server? Metasploit is how testers answer those questions with controlled, validated techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it stops:&lt;/strong&gt; Metasploit does not discover vulnerabilities. It exploits the ones that are already known and catalogued. The most significant vulnerabilities in most modern applications — business logic flaws, broken access control, chained exploits — have no Metasploit module because they're unique to your codebase. You can't automate exploiting a vulnerability that's specific to the way your application handles a password reset.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Human Judgment Layer
&lt;/h2&gt;

&lt;p&gt;Here's how these tools actually fit into a real engagement.&lt;/p&gt;

&lt;p&gt;We start with Nmap — building the map. Then automated scanning with Nessus or equivalents — clearing the known CVE landscape quickly. Then Burp and ZAP for web application baseline coverage. All of this happens in the first few hours.&lt;/p&gt;

&lt;p&gt;Then the tools go to the background and the actual work begins.&lt;/p&gt;

&lt;p&gt;The manual phase is where a tester asks: what does this application actually do? What happens if I call this API endpoint as User A but with User B's resource ID? What happens if I complete step 1 of this checkout flow and then skip directly to step 3? What happens if I submit a negative quantity? What happens if I call the password reset endpoint 200 times in 60 seconds?&lt;/p&gt;
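&lt;p&gt;Two of those questions can be sketched as raw requests. The endpoints, IDs, and token below are illustrative, not from any real engagement:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# IDOR probe: authenticate as User A, request a resource ID that belongs to User B.
# A 200 here, rather than a 403 or 404, is a finding.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $USER_A_TOKEN" \
  https://app.example.com/api/orders/1337

# Rate-limit probe: 200 password-reset calls in quick succession,
# then tally the status codes. No 429s in the output is a finding.
for i in $(seq 1 200); do
  curl -s -o /dev/null -w '%{http_code}\n' \
    -X POST https://app.example.com/api/password-reset \
    -d 'email=victim@example.com'
done | sort | uniq -c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
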

&lt;p&gt;None of those questions have tool answers. They have human answers — arrived at through curiosity, pattern recognition, and a working understanding of how developers make mistakes under deadline pressure.&lt;/p&gt;

&lt;p&gt;If you want to understand what a full manual engagement covers, &lt;a href="https://www.kuboid.in/blog/the-complete-web-app-pen-test-checklist-what-i-test-on-every-engagement" rel="noopener noreferrer"&gt;this post walks through our complete checklist&lt;/a&gt;. And if you've never commissioned a pen test before and aren't sure what to expect, &lt;a href="https://www.kuboid.in/blog/what-is-a-penetration-test-everything-you-need-to-know-before-booking-one" rel="noopener noreferrer"&gt;start here&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Conclusion
&lt;/h2&gt;

&lt;p&gt;Every tool in this list is genuinely useful. We use all of them. But they are starting points — ways of quickly eliminating the obvious so we can focus on the interesting.&lt;/p&gt;

&lt;p&gt;The interesting is where your actual risk lives. It's in the places an automated tool can't reach because reaching there requires understanding your application, your architecture, and your users well enough to ask the questions an attacker would ask.&lt;/p&gt;

&lt;p&gt;Tools find what they were built to find. Attackers find everything else.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Are you running any of these tools in your own pipeline? Curious whether your team treats them as a starting point or a final answer — drop a comment.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;At &lt;a href="https://www.kuboid.in/why-kuboid" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;, manual assessment is the core of every engagement — tools included, human judgment first. &lt;a href="https://www.kuboid.in/services/web-app-pentest" rel="noopener noreferrer"&gt;See what a full assessment covers&lt;/a&gt; or &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;book a free consultation&lt;/a&gt; to talk through what your application needs.&lt;/p&gt;




</description>
      <category>cybersecurity</category>
      <category>burpsuite</category>
      <category>nmap</category>
      <category>metasploit</category>
    </item>
    <item>
      <title>Why Automated Vulnerability Scanners Miss Most Real Security Vulnerabilities</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Mon, 30 Mar 2026 04:48:33 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/why-automated-vulnerability-scanners-miss-most-real-security-vulnerabilities-2p96</link>
      <guid>https://dev.to/kuboidsecurelayer/why-automated-vulnerability-scanners-miss-most-real-security-vulnerabilities-2p96</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; An IEEE study confirmed what every experienced penetration tester already knows — manual testing is significantly more effective than automated scanning in terms of accuracy. Automated scanners are excellent at finding known CVEs, missing patches, and basic misconfigurations. They are structurally blind to business logic flaws, chained vulnerabilities, access control issues, and anything that requires understanding &lt;em&gt;what your application is supposed to do&lt;/em&gt;. A clean scanner report is not a clean bill of health. It's a partial picture — and attackers know which part is missing.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  "We Run Scans Every Week. We're Covered."
&lt;/h2&gt;

&lt;p&gt;I've heard a version of this in almost every initial conversation with a new client. They have Nessus running on a schedule. They've got OWASP ZAP integrated into their CI/CD pipeline. Their last scan came back with nothing critical.&lt;/p&gt;

&lt;p&gt;Then we run a manual assessment.&lt;/p&gt;

&lt;p&gt;The findings list is never empty.&lt;/p&gt;

&lt;p&gt;Not because the scanner is bad software — it isn't. But because there's a fundamental category of vulnerability that no automated tool, no matter how well tuned, can reliably detect. And those are precisely the vulnerabilities that attackers target.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Scanners Are Actually Good At
&lt;/h2&gt;

&lt;p&gt;Let's be fair first, because automated scanning genuinely earns its place in a security programme.&lt;/p&gt;

&lt;p&gt;Scanners are exceptional at breadth. They can assess hundreds of assets in hours, checking against databases of tens of thousands of known CVE signatures. They reliably catch unpatched software, default credentials left in place, expired TLS certificates, open ports that shouldn't be open, and common misconfigurations. The &lt;a href="https://www.verizon.com/business/resources/reports/dbir/" rel="noopener noreferrer"&gt;2025 Verizon DBIR&lt;/a&gt; found that exploitation of known vulnerabilities accounted for 20% of breaches — and scanners are exactly the right tool to eliminate that category of exposure.&lt;/p&gt;

&lt;p&gt;They're also valuable for regression testing. Once you've fixed a vulnerability, a scanner can verify the fix was applied correctly — in minutes, without scheduling a human.&lt;/p&gt;

&lt;p&gt;For routine hygiene across a large infrastructure, automated scanning is not optional. But hygiene is not the same as security assurance.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Limitation: Signatures vs. Understanding
&lt;/h2&gt;

&lt;p&gt;Here's the fundamental problem.&lt;/p&gt;

&lt;p&gt;Automated scanners work by matching what they observe against a database of known patterns. If the pattern is in the database, they find it. If it isn't — or if the vulnerability requires &lt;em&gt;understanding context&lt;/em&gt; rather than matching a signature — they can't find it.&lt;/p&gt;

&lt;p&gt;Consider the IDOR vulnerability that &lt;a href="https://www.kuboid.in/blog/optus-data-breach-10-million-records-stolen-idor-vulnerability" rel="noopener noreferrer"&gt;exposed nearly 10 million Optus customers in 2022&lt;/a&gt;. An API endpoint returned any customer's data when you changed a single integer in the request. No scanner catches that, because the endpoint was functioning &lt;em&gt;exactly as programmed&lt;/em&gt;. The data came back with a &lt;code&gt;200 OK&lt;/code&gt;. There was nothing anomalous to flag. The flaw was in the logic — in the missing ownership check — not in the syntax.&lt;/p&gt;
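&lt;p&gt;The attack is embarrassingly simple to express: a loop over an integer. The URL below is illustrative, not the actual Optus endpoint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Sequential-ID enumeration: change one integer, receive a different customer's record
for id in $(seq 1000000 1000100); do
  curl -s -o "record-$id.json" "https://api.example.com/customers/$id"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
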

&lt;p&gt;That distinction is everything.&lt;/p&gt;




&lt;h2&gt;
  
  
  Five Vulnerability Classes Scanners Structurally Cannot Find
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Business Logic Flaws.&lt;/strong&gt; Your checkout flow lets a user apply a discount code, then manipulate the cart after the discount is applied to add more items at the discounted price. Your password reset flow has a race condition. Your subscription tier can be bypassed by calling API endpoints in the wrong order. None of these are in a CVE database. They require understanding what your application &lt;em&gt;should&lt;/em&gt; do — and then testing what it &lt;em&gt;actually does&lt;/em&gt; when someone deliberately misuses it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Broken Access Control and IDOR.&lt;/strong&gt; As &lt;a href="https://www.kuboid.in/blog/idor-the-vulnerability-developers-keep-writing-and-how-to-stop" rel="noopener noreferrer"&gt;we've covered in depth&lt;/a&gt;, access control failures are the number one vulnerability class in &lt;a href="https://www.kuboid.in/blog/owasp-top-10-2025-everything-that-changed-and-what-it-means" rel="noopener noreferrer"&gt;OWASP 2025&lt;/a&gt;. A scanner sees an authenticated endpoint and confirms it returns data. It doesn't log in as User A and attempt to access User B's records. A human tester does exactly that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Chained Vulnerabilities.&lt;/strong&gt; Low-severity findings — an information disclosure in an error message, a misconfigured CORS header, a slightly overprivileged token — may individually look negligible. Chained together in sequence, they can lead to full account takeover or data exfiltration. Scanners rate findings in isolation. Attackers don't attack in isolation. According to &lt;a href="https://www.getastra.com/blog/security-audit/penetration-testing-trends/" rel="noopener noreferrer"&gt;Astra's 2025 penetration testing trends report&lt;/a&gt;, manual testing uncovered nearly 2,000% more vulnerabilities than automated tooling, particularly in APIs, cloud configurations, and chained exploits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Authentication and Session Logic Flaws.&lt;/strong&gt; Does your "remember me" token ever expire? Can a password reset token be used more than once? Can an attacker brute-force your OTP with no rate limiting? These questions require active, intentional probing — not passive observation. A scanner looks at the login page. It doesn't sit there systematically trying to subvert the authentication flow the way a motivated attacker would.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Second-Order and Stored Injection.&lt;/strong&gt; Classic SQL injection scanners test inputs and look for immediate responses. Second-order injection is when malicious input is stored and only executed later — in a different context, triggered by a different user action. Scanners almost universally miss this because the payload and the trigger are separated in time and context.&lt;/p&gt;




&lt;h2&gt;
  
  
  The False Confidence Problem
&lt;/h2&gt;

&lt;p&gt;OWASP research indicates that automated tools have false positive rates between 15% and 30% for common vulnerability types. That means a significant portion of what your scanner flags isn't real — which trains teams to dismiss findings. Meanwhile, the real vulnerabilities that scanners miss entirely generate no alert at all.&lt;/p&gt;

&lt;p&gt;The result is a security programme that's simultaneously overwhelmed with noise and blind to signal.&lt;/p&gt;

&lt;p&gt;The most dangerous outcome isn't a failed scan. It's a clean scan. Because a clean scan feels like permission to ship, permission to tell the board "we're secure," permission to defer the manual assessment to next quarter. That deferred assessment is the gap that breaches happen in.&lt;/p&gt;

&lt;p&gt;Manual pentests alone prevented $21.8M in targeted risk in 2024 — value that automated tooling, for all its volume, couldn't replicate. The precision of human testing in high-risk areas remains irreplaceable.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Baseline Stack That Actually Works
&lt;/h2&gt;

&lt;p&gt;Automated and manual testing are not competitors. They're complements that cover different things.&lt;/p&gt;

&lt;p&gt;The baseline we recommend to every client at &lt;a href="https://www.kuboid.in" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Continuous automated scanning&lt;/strong&gt; for known CVEs, dependency vulnerabilities, and misconfigurations — integrated into your pipeline so issues are caught before they ship.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual penetration testing at meaningful intervals&lt;/strong&gt; — at minimum annually, and after any significant architectural change, new feature launch, or infrastructure migration. Not as a compliance checkbox, but as a genuine adversarial simulation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authenticated API testing specifically&lt;/strong&gt; — your APIs are almost certainly your highest-risk surface and the one least likely to be covered by a generic scanner. We covered &lt;a href="https://www.kuboid.in/blog/api-security-the-blind-spot-of-every-early-stage-startup" rel="noopener noreferrer"&gt;why API security is the blind spot of most startups here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to understand exactly what a manual engagement covers — and what your scanner is currently leaving untested — &lt;a href="https://www.kuboid.in/blog/the-complete-web-app-pen-test-checklist-what-i-test-on-every-engagement" rel="noopener noreferrer"&gt;our web app pen test checklist&lt;/a&gt; walks through it in detail. And if you want to understand what the full engagement process looks like before committing, &lt;a href="https://www.kuboid.in/blog/what-is-a-penetration-test-everything-you-need-to-know-before-booking-one" rel="noopener noreferrer"&gt;this post covers everything&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Last Thing
&lt;/h2&gt;

&lt;p&gt;The gap between what automated scanners find and what manual testing finds isn't a gap in tooling. It's a gap in understanding. Scanners look at your application from the outside and compare what they see against what they already know. A good penetration tester looks at your application from the perspective of someone who wants to break it — and asks questions the scanner was never programmed to ask.&lt;/p&gt;

&lt;p&gt;Your scanner is running. That's good. The question is: when did someone last actually &lt;em&gt;think&lt;/em&gt; about how to break your application?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If you've ever been surprised by a finding your scanner missed — or if you're running scans and assuming that's enough — drop a comment. You're not alone, and this conversation matters.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;At &lt;a href="https://www.kuboid.in/why-kuboid" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;, our manual web application assessments are specifically designed to find what your automated tooling leaves behind. &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;Book a free consultation&lt;/a&gt; and we'll tell you exactly what a manual assessment of your application would cover.&lt;/p&gt;




</description>
      <category>cybersecurity</category>
      <category>automation</category>
      <category>vulnerabilities</category>
      <category>scanner</category>
    </item>
    <item>
      <title>Web Application Penetration Testing: A Complete Guide for Developers and Founders</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Sun, 29 Mar 2026 03:11:06 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/web-application-penetration-testing-a-complete-guide-for-developers-and-founders-3amp</link>
      <guid>https://dev.to/kuboidsecurelayer/web-application-penetration-testing-a-complete-guide-for-developers-and-founders-3amp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Web application penetration testing is the practice of attacking your own application — in a controlled, structured way — before a real attacker does. It covers everything from authentication and access control to APIs, session handling, business logic, and third-party integrations. This guide explains every phase, every decision, and what you actually get at the end. Written from the perspective of someone who spent years building software before spending years breaking it.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why a Developer Explains This Differently
&lt;/h2&gt;

&lt;p&gt;I didn't start in security. I started writing code — building features, hitting deadlines, shipping products. I know what it feels like to be told "we need a pen test" without any context for what that actually means or whether it's worth the time and cost.&lt;/p&gt;

&lt;p&gt;That background shapes how I approach both testing and explaining it. I'm not interested in making this sound more complex than it is. A web app pen test is a structured attempt to find what's broken before someone else does. That's it. Everything else is methodology.&lt;/p&gt;

&lt;p&gt;If you want the foundational version — what pen testing is at its most basic — &lt;a href="https://www.kuboid.in/blog/what-is-a-penetration-test-everything-you-need-to-know-before-booking-one" rel="noopener noreferrer"&gt;we covered that here&lt;/a&gt;. This guide goes deeper, specifically for web applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  What It Is — And What It Isn't
&lt;/h2&gt;

&lt;p&gt;A web application penetration test is a time-boxed, authorised security assessment where a tester attempts to find and exploit vulnerabilities in your application using the same techniques a real attacker would use.&lt;/p&gt;

&lt;p&gt;It is not a vulnerability scan. A scanner runs automated checks against known signatures — it's fast, broad, and misses the majority of logic-level vulnerabilities. A pen test is slower, manual, and finds the things a scanner cannot: access control failures, business logic flaws, authentication bypasses, and the kind of chained vulnerabilities where no single issue is critical but three of them together open a serious door.&lt;/p&gt;

&lt;p&gt;It is not a one-time audit that means you're secure forever. Applications change. New features introduce new attack surfaces. A pen test gives you a point-in-time assessment — valuable when it's recent, decreasing in relevance as your codebase evolves.&lt;/p&gt;

&lt;p&gt;And it is not something to be afraid of. The goal isn't to humiliate your engineering team. It's to surface what your team is too close to see, in a consequence-free environment, before the consequences are real.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 5 Phases of a Web App Pen Test
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 — Reconnaissance.&lt;/strong&gt; Before touching the application, the tester maps the environment. What subdomains exist? What technologies are in use? What does the certificate transparency log reveal? What APIs are documented — or undocumented? This is open-source intelligence work that mirrors exactly what an attacker does before they probe anything. We walked through what this looks like in the opening minutes &lt;a href="https://www.kuboid.in/blog/web-app-pen-test-first-10-minutes-checklist" rel="noopener noreferrer"&gt;in this post&lt;/a&gt;.&lt;/p&gt;
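&lt;p&gt;One concrete recon step: certificate transparency logs often surface subdomains nobody remembers deploying. A sketch against the public crt.sh search, with a placeholder domain:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pull every certificate logged for the domain, then extract the unique hostnames
curl -s 'https://crt.sh/?q=%25.example.com&amp;amp;output=json' \
  | grep -o '"common_name":"[^"]*"' \
  | cut -d'"' -f4 | sort -u
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
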

&lt;p&gt;&lt;strong&gt;Phase 2 — Mapping.&lt;/strong&gt; The tester walks through the entire application as a user — every feature, every input, every state change. They're building a mental model of what the application does, what data it handles, and where the interesting trust boundaries are. What happens when a free-tier user tries to access a paid feature? What happens when a read-only user submits a POST request to an endpoint designed for admins?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3 — Vulnerability Identification.&lt;/strong&gt; With a full picture of the application, the tester begins probing systematically — working through the &lt;a href="https://www.kuboid.in/blog/owasp-top-10-explained-without-the-jargon" rel="noopener noreferrer"&gt;OWASP Top 10&lt;/a&gt; as a framework but not limited to it. Access controls. Authentication flows. Input validation. Session handling. Cryptographic implementation. API endpoints. Third-party integrations. Each area gets deliberate, manual attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4 — Exploitation.&lt;/strong&gt; Where possible and in scope, the tester demonstrates that a vulnerability is genuinely exploitable — not just theoretically present. This is what separates a pen test finding from a scanner alert. "This endpoint has no rate limiting" becomes "I extracted 50,000 user records using this endpoint at the rate of 200 requests per second with no authentication."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 5 — Reporting.&lt;/strong&gt; Everything is documented: what was found, how it was found, how severe it is, and exactly how to fix it. More on this below.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Gets Tested
&lt;/h2&gt;

&lt;p&gt;The scope of a web app pen test covers more ground than most people expect. A thorough engagement tests:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication and session management&lt;/strong&gt; — login flows, password reset logic, session token entropy, logout behaviour, remember-me functionality, and MFA implementation. &lt;a href="https://www.kuboid.in/blog/broken-authentication-what-it-is-what-i-test-and-why-it-keeps-getting-exploited" rel="noopener noreferrer"&gt;Broken authentication&lt;/a&gt; is #7 on OWASP 2025 but consistently one of the most damaging vulnerability classes when exploited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorisation and access control&lt;/strong&gt; — can User A access User B's data? Can a standard user perform admin actions? Can a free-tier customer access premium features? This is OWASP #1 for the fifth consecutive year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Input handling and injection&lt;/strong&gt; — SQL injection, command injection, XSS, template injection, XML injection. Every place your application accepts input is a potential injection point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;APIs&lt;/strong&gt; — REST, GraphQL, internal microservice endpoints. &lt;a href="https://www.kuboid.in/blog/api-security-the-blind-spot-of-every-early-stage-startup" rel="noopener noreferrer"&gt;APIs are consistently the most underprepared part of modern applications&lt;/a&gt;. Undocumented endpoints, missing authentication on internal routes, and excessive data exposure are endemic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business logic&lt;/strong&gt; — the flaws that no tool catches. Can I buy a product for a negative price? Can I skip a step in a multi-stage verification flow? Can I apply a discount code more than once? Can I manipulate a quantity field to cause an integer overflow?&lt;/p&gt;
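&lt;p&gt;These flaws all come down to the same root cause: the server trusting values the client controls. A minimal sketch of the kind of server-side validation whose absence enables the abuses above (the field names and the order cap are hypothetical, not from any specific framework):&lt;/p&gt;

```python
# Hypothetical checkout validation: the server-side checks whose absence
# enables negative totals, overflow quantities, and reused discount codes.
MAX_QUANTITY = 10_000  # assumed per-order cap

def validate_order(quantity, unit_price_cents, discount_codes):
    """Return a list of violations; an empty list means the order passes."""
    problems = []
    if quantity <= 0:
        problems.append("quantity must be positive")          # blocks negative totals
    if quantity > MAX_QUANTITY:
        problems.append("quantity exceeds per-order limit")   # blocks overflow abuse
    if unit_price_cents <= 0:
        problems.append("price must come from the catalogue, not the client")
    if len(discount_codes) != len(set(discount_codes)):
        problems.append("discount code applied more than once")
    return problems
```

&lt;p&gt;The design point is that every value is re-derived or re-checked on the server, because anything the client sends can be tampered with.&lt;/p&gt;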

&lt;p&gt;&lt;strong&gt;Third-party integrations&lt;/strong&gt; — OAuth implementations, payment gateway flows, webhook handling, and SSO configurations. Vulnerabilities in how you &lt;em&gt;integrate&lt;/em&gt; with trusted services are often more exploitable than vulnerabilities in the services themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure exposure&lt;/strong&gt; — security headers, TLS configuration, exposed debug endpoints, and information disclosure through error messages.&lt;/p&gt;




&lt;h2&gt;
  
  
  Black Box, Grey Box, White Box — Which Is Right for You?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Black box&lt;/strong&gt; means the tester has no prior knowledge of the application — no source code, no credentials, no architecture documentation. This mirrors the perspective of an external attacker. It's valuable for testing your external attack surface but typically finds fewer issues per hour of testing because the tester spends significant time in reconnaissance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grey box&lt;/strong&gt; means the tester has some knowledge — typically a set of test credentials and basic documentation about the application structure. This is the most common engagement type because it combines realistic attacker simulation with enough context to test deeply within the time budget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;White box&lt;/strong&gt; means full access — source code, architecture diagrams, environment documentation. This finds the most vulnerabilities per hour and is particularly valuable for code-level issues that would never be visible from outside. It's closer to a secure code review than a traditional pen test.&lt;/p&gt;

&lt;p&gt;For most startups and growing companies, grey box is the right starting point. It gives you the best signal-to-noise ratio within a practical time and cost budget.&lt;/p&gt;




&lt;h2&gt;
  
  
  What a Real Report Looks Like
&lt;/h2&gt;

&lt;p&gt;A good pen test report has two sections serving two different audiences.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;executive summary&lt;/strong&gt; — one to two pages — tells leadership what was found, how severe it is, and what the business risk is. It doesn't use CVE numbers or CVSS scores. It says things like: "An attacker with no authentication could access the personal data of any customer in the system by modifying a URL parameter." Actionable. Clear. No jargon.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;technical findings&lt;/strong&gt; section gives your engineering team everything they need to reproduce and fix every issue. Each finding includes: a description of the vulnerability, the exact steps to reproduce it, evidence (screenshots, request/response logs), a severity rating, and specific remediation guidance. Not "fix your access controls" — "add an ownership check on line 142 of &lt;code&gt;InvoiceController.js&lt;/code&gt; that verifies &lt;code&gt;invoice.owner_id === session.user_id&lt;/code&gt; before returning the response."&lt;/p&gt;
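&lt;p&gt;The same ownership-check pattern, sketched in Python rather than the JavaScript of the example above (&lt;code&gt;Invoice&lt;/code&gt; and &lt;code&gt;Session&lt;/code&gt; are hypothetical stand-ins for your own models):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    id: int
    owner_id: int

@dataclass
class Session:
    user_id: int

def get_invoice(invoice, session):
    # The fix: verify ownership before returning the record,
    # not merely that the caller is authenticated.
    if invoice.owner_id != session.user_id:
        raise PermissionError("not the invoice owner")
    return invoice
```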

&lt;p&gt;Be wary of any report that doesn't include specific remediation guidance. A list of problems without solutions is not a deliverable.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happens After the Report
&lt;/h2&gt;

&lt;p&gt;The report is the beginning, not the end.&lt;/p&gt;

&lt;p&gt;Your engineering team reviews the findings, prioritises by severity, and begins remediation. Critical and high findings — things that are actively exploitable with significant impact — should be addressed before anything else. Medium and low findings get scheduled into normal sprint work.&lt;/p&gt;

&lt;p&gt;Once remediation is complete, the tester should offer a re-test: returning to verify that the fixes actually work. A surprising number of security fixes are incomplete — the specific exploit is blocked but the underlying vulnerability remains accessible through a different path. Re-testing closes that gap.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kuboid.in/blog/secure-coding-habits-that-take-5-minutes-and-prevent-80percent-of-web-vulnerabilities" rel="noopener noreferrer"&gt;The secure coding habits that prevent most of these issues from being introduced in the first place&lt;/a&gt; are worth implementing alongside remediation — so the next engagement starts from a stronger baseline.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Prepare — And When to Book
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Preparation.&lt;/strong&gt; Provide your tester with test accounts at each privilege level in your application. Document any areas that are explicitly out of scope (payment processors, third-party services you don't control). Brief the tester on your application's core functionality — not because they need hand-holding, but because 30 minutes of context at the start saves hours of mapping time that can be spent on deeper testing instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to book.&lt;/strong&gt; Certain trigger events should prompt a pen test: before a major product launch, before enterprise sales conversations where customers will ask for a report, after a significant architectural change, when you process sensitive personal data or financial information for the first time, and on a recurring annual basis for applications in production. If any of these apply to you right now, the timing is right.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Choose a Pen Tester
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Green flags:&lt;/strong&gt; They ask detailed questions about your application before scoping. They provide a methodology document. They offer grey box as a default, not black box. They include re-testing in the engagement. Their report template shows actual technical findings, not just scanner output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Red flags:&lt;/strong&gt; They can quote a price without understanding your application's scope. Their sample report is mostly automated scanner output. They don't offer re-testing. They can't explain what business logic testing means.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Kuboid Secure Layer Approaches This
&lt;/h2&gt;

&lt;p&gt;Our &lt;a href="https://www.kuboid.in/services/web-app-pentest" rel="noopener noreferrer"&gt;web application penetration tests&lt;/a&gt; are grey box by default, fully mapped to OWASP Top 10:2025, and include re-testing as standard. We write reports that your engineering team can act on immediately and that your leadership team can understand without a security background.&lt;/p&gt;

&lt;p&gt;If you're not sure whether your application needs a pen test right now, &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;book a free consultation&lt;/a&gt; — we'll give you an honest answer, not a sales pitch.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Are you a founder or developer who's been through a pen test? What surprised you most about the findings? Drop a comment — the answers are usually more instructive than the blog post.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>webdev</category>
      <category>cybersecurity</category>
      <category>pentest</category>
    </item>
    <item>
      <title>Web App Pen Test: What I Check in the First 10 Minutes of Every Engagement</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Sat, 28 Mar 2026 06:25:01 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/web-app-pen-test-what-i-check-in-the-first-10-minutes-of-every-engagement-12m8</link>
      <guid>https://dev.to/kuboidsecurelayer/web-app-pen-test-what-i-check-in-the-first-10-minutes-of-every-engagement-12m8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Most people imagine pen testing as a montage of terminals, complex exploits, and hours of deep technical work. The reality is that the first 10 minutes are almost always the most revealing. I run the same opening checklist on every web application I assess — and in those 10 minutes, I almost always find 2 or 3 things that a real attacker would exploit before they even get to the sophisticated stuff. Here's exactly what that checklist looks like, and how you can run it on your own application today.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why the First 10 Minutes Tell You So Much
&lt;/h2&gt;

&lt;p&gt;There's a principle in security that's uncomfortable but consistently true: the most dangerous vulnerabilities in your application are usually the obvious ones. Not because your team is careless — but because obvious things are easy to miss when you're deep in feature development, operating under deadline pressure, and focused on what your application &lt;em&gt;does&lt;/em&gt; rather than what it &lt;em&gt;shouldn't allow&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;An attacker approaching your application cold has no context, no assumptions, and no attachment. They look at the surface before they try to break through it. They check what you've accidentally left visible before they try to find what's deliberately hidden.&lt;/p&gt;

&lt;p&gt;That's exactly how I start every assessment. No tools running yet. No automated scans. Just a browser and a clear mental checklist.&lt;/p&gt;

&lt;p&gt;Here's what's on it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 9-Point Opening Checklist
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. HTTP to HTTPS Enforcement and Cookie Security
&lt;/h3&gt;

&lt;p&gt;First thing I do: type &lt;code&gt;http://&lt;/code&gt; (not https) in front of the domain. Does the application redirect? Does it redirect with a 301 (permanent) or a 302 (temporary)? Is HTTP Strict Transport Security (HSTS) set in the response headers?&lt;/p&gt;

&lt;p&gt;Then I log in and open DevTools. I look at every cookie the application sets. Three questions: Is the &lt;code&gt;Secure&lt;/code&gt; flag set (cookie only transmitted over HTTPS)? Is &lt;code&gt;HttpOnly&lt;/code&gt; set (JavaScript can't read it)? Is &lt;code&gt;SameSite&lt;/code&gt; configured to &lt;code&gt;Strict&lt;/code&gt; or &lt;code&gt;Lax&lt;/code&gt;?&lt;/p&gt;

&lt;p&gt;A session cookie missing any of these flags is a vulnerability I'll document in every report. It's also one of the simplest fixes in existence.&lt;/p&gt;
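&lt;p&gt;The three-flag check is easy to script once you have the raw &lt;code&gt;Set-Cookie&lt;/code&gt; header value from DevTools or your HTTP client. A minimal sketch:&lt;/p&gt;

```python
# Sketch: report which of the three security attributes a single
# Set-Cookie header value is missing.
def audit_cookie(set_cookie):
    # attributes follow the name=value pair, separated by semicolons
    attrs = {part.strip().split("=")[0].lower() for part in set_cookie.split(";")[1:]}
    missing = []
    if "secure" not in attrs:
        missing.append("Secure")
    if "httponly" not in attrs:
        missing.append("HttpOnly")
    if "samesite" not in attrs:
        missing.append("SameSite")
    return missing
```

&lt;p&gt;For example, &lt;code&gt;audit_cookie("session=abc; Path=/; HttpOnly")&lt;/code&gt; reports &lt;code&gt;Secure&lt;/code&gt; and &lt;code&gt;SameSite&lt;/code&gt; as missing.&lt;/p&gt;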

&lt;h3&gt;
  
  
  2. Security Response Headers
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Ctrl+Shift+I&lt;/code&gt;. Network tab. I load the application and look at the response headers on the main document. Six headers tell me an enormous amount in under 60 seconds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Content-Security-Policy&lt;/code&gt; — absent, or permissive (e.g. &lt;code&gt;default-src *&lt;/code&gt;), means XSS mitigations are wide open&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;X-Frame-Options&lt;/code&gt; or &lt;code&gt;frame-ancestors&lt;/code&gt; CSP — absent means clickjacking is possible&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;X-Content-Type-Options: nosniff&lt;/code&gt; — absent means MIME-type sniffing attacks are viable&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Referrer-Policy&lt;/code&gt; — absent means sensitive URLs in the referer header leak to third parties&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Permissions-Policy&lt;/code&gt; — reveals what browser APIs the application uses&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Server&lt;/code&gt; and &lt;code&gt;X-Powered-By&lt;/code&gt; — if these are present, they're telling me your web server version and framework. That's free reconnaissance I didn't have to work for.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Missing security headers are a quick win for attackers and a quick fix for developers. They're also almost always present in the findings of every assessment I run.&lt;/p&gt;
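&lt;p&gt;The same 60-second review works as a script. Feed it the response headers from any HTTP client, e.g. &lt;code&gt;dict(urllib.request.urlopen(url).headers)&lt;/code&gt;:&lt;/p&gt;

```python
# Sketch: flag the security headers discussed above that are absent,
# plus the fingerprinting headers whose *presence* is the finding.
EXPECTED = [
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Permissions-Policy",
]
LEAKY = ["Server", "X-Powered-By"]

def review_headers(headers):
    present = {k.lower() for k in headers}
    return {
        "missing": [h for h in EXPECTED if h.lower() not in present],
        "leaking": [h for h in LEAKY if h.lower() in present],
    }
```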

&lt;h3&gt;
  
  
  3. robots.txt and sitemap.xml
&lt;/h3&gt;

&lt;p&gt;Every pen tester checks these. Attackers do too — it takes three seconds.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/robots.txt&lt;/code&gt; was designed to tell search engines which paths not to index. It's essentially a publicly available map of paths you consider sensitive. I've found admin panels, internal API endpoints, staging directories, and backup locations all listed in &lt;code&gt;robots.txt&lt;/code&gt; files on production applications.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/sitemap.xml&lt;/code&gt; gives me a complete list of every URL the application wants indexed. It tells me the full scope of the application before I've done any discovery work myself.&lt;/p&gt;

&lt;p&gt;Neither of these is a vulnerability by itself. But both reliably point me toward the most interesting parts of the application within the first two minutes.&lt;/p&gt;
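&lt;p&gt;Reading the map takes one fetch and a few lines. A sketch that extracts the &lt;code&gt;Disallow&lt;/code&gt; paths from a &lt;code&gt;robots.txt&lt;/code&gt; body:&lt;/p&gt;

```python
# Sketch: list the paths a robots.txt marks as off-limits to crawlers;
# these are often the first places worth a manual look.
def disallowed_paths(robots_txt):
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths
```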

&lt;h3&gt;
  
  
  4. IDOR Check on Every Visible ID Parameter
&lt;/h3&gt;

&lt;p&gt;The moment I see a URL like &lt;code&gt;/account/profile?id=1042&lt;/code&gt; or &lt;code&gt;/invoice/download?ref=8834&lt;/code&gt;, I open a second browser, log in as a different user, and try to access those same URLs.&lt;/p&gt;

&lt;p&gt;If I get the first user's data in the second user's session — that's an IDOR. Full stop. This is Broken Access Control, #1 on OWASP 2025 for the fifth consecutive year, and &lt;a href="https://www.kuboid.in/blog/optus-data-breach-10-million-records-stolen-idor-vulnerability" rel="noopener noreferrer"&gt;it's how 10 million Optus customer records were stolen&lt;/a&gt; by incrementing a single integer.&lt;/p&gt;

&lt;p&gt;I also check whether IDs are sequential integers. If they are, even a fully authenticated endpoint is at higher risk — because enumeration doesn't require guessing. We covered this pattern in depth in &lt;a href="https://www.kuboid.in/blog/idor-the-vulnerability-developers-keep-writing-and-how-to-stop" rel="noopener noreferrer"&gt;this post on IDOR vulnerabilities&lt;/a&gt;.&lt;/p&gt;
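&lt;p&gt;The two-browser check reduces to a few lines once you wrap your HTTP client. Here &lt;code&gt;fetch&lt;/code&gt; is any callable (an assumption of this sketch, not a real library function) that performs a GET with &lt;em&gt;user B's&lt;/em&gt; session attached and returns a status code and body:&lt;/p&gt;

```python
# Sketch of the cross-account IDOR probe. A correct implementation
# returns 403 or 404 for another user's resource; 200 with content
# is the finding.
def idor_suspected(fetch, url_owned_by_user_a):
    status, body = fetch(url_owned_by_user_a)
    return status == 200 and bool(body)
```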

&lt;h3&gt;
  
  
  5. JavaScript File Review
&lt;/h3&gt;

&lt;p&gt;Modern web applications ship enormous JavaScript bundles to the browser. The developer's intent is to send the frontend code. What often comes along for the ride: internal API endpoint paths, environment variable names, hardcoded API keys, commented-out debug code, and internal service URLs that were never meant to be public.&lt;/p&gt;

&lt;p&gt;I open the browser's Sources tab, look through the loaded JS files, and run a quick search for strings like &lt;code&gt;api_key&lt;/code&gt;, &lt;code&gt;secret&lt;/code&gt;, &lt;code&gt;token&lt;/code&gt;, &lt;code&gt;internal&lt;/code&gt;, &lt;code&gt;admin&lt;/code&gt;, and &lt;code&gt;TODO&lt;/code&gt;. You would be surprised how often this surfaces something useful in under five minutes. We've written about what happens when &lt;a href="https://www.kuboid.in/blog/secrets-in-code-how-one-git-commit-cost-a-startup-dollar-80000" rel="noopener noreferrer"&gt;secrets make it into code&lt;/a&gt; — the same pattern applies to client-side JavaScript.&lt;/p&gt;
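&lt;p&gt;That quick search automates cleanly. A sketch that runs the same string hunt over a downloaded bundle:&lt;/p&gt;

```python
# Sketch: flag lines in a JS bundle containing the search terms listed above.
import re

PATTERNS = re.compile(r"(api_key|secret|token|internal|admin|TODO)", re.IGNORECASE)

def scan_bundle(js_source):
    """Return (line_number, line) pairs worth a manual look."""
    hits = []
    for n, line in enumerate(js_source.splitlines(), start=1):
        if PATTERNS.search(line):
            hits.append((n, line.strip()))
    return hits
```

&lt;p&gt;Expect false positives; the point is to shrink megabytes of bundle down to a screenful of candidates.&lt;/p&gt;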

&lt;h3&gt;
  
  
  6. Password Reset Flow
&lt;/h3&gt;

&lt;p&gt;Authentication flows are where I spend serious time later in an assessment, but the password reset flow gets a quick check early because it fails so consistently. I specifically look for: Does the reset token expire? Can I reuse a token after it's been used once? Is there a rate limit on reset requests, or can I flood the endpoint? Is the token short enough to brute-force?&lt;/p&gt;

&lt;p&gt;The weakest reset flows I've seen use six-digit numeric tokens with no expiry and no rate limiting. That's 1,000,000 possible combinations and unlimited attempts — a brute-force that takes minutes. We covered why &lt;a href="https://www.kuboid.in/blog/broken-authentication-what-it-is-what-i-test-and-why-it-keeps-getting-exploited" rel="noopener noreferrer"&gt;broken authentication&lt;/a&gt; shows up this consistently and what a secure implementation looks like.&lt;/p&gt;
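&lt;p&gt;The arithmetic behind "a brute-force that takes minutes", assuming (for illustration only) an attacker sustaining 1,000 requests per second against an unthrottled endpoint:&lt;/p&gt;

```python
# Six-digit numeric token versus a 128-bit random token, at an assumed
# 1,000 requests/second with no rate limiting.
RATE = 1_000  # requests per second; an assumption for illustration

def seconds_to_exhaust(keyspace, rate=RATE):
    return keyspace / rate

six_digit = seconds_to_exhaust(10 ** 6)  # 1,000 seconds, under 17 minutes
strong = seconds_to_exhaust(2 ** 128)    # astronomically long
```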

&lt;h3&gt;
  
  
  7. Error Messages and Information Disclosure
&lt;/h3&gt;

&lt;p&gt;I start poking at inputs with values they weren't designed to handle. A single quote in a search field. A letter in a numeric ID field. A negative number in a quantity field. An oversized string in a text input.&lt;/p&gt;

&lt;p&gt;What I'm looking for is what the application says when it breaks. Does it return a generic "something went wrong" message, or does it return a stack trace showing me your framework version, your file paths, your database schema, and your internal IP addresses?&lt;/p&gt;

&lt;p&gt;Verbose error messages are free reconnaissance for an attacker. They're also a &lt;a href="https://www.kuboid.in/blog/security-misconfiguration-2025-owasp-number-2-web-app-risk" rel="noopener noreferrer"&gt;misconfiguration finding&lt;/a&gt; that belongs in every report because it directly accelerates every other attack.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Subdomain Enumeration
&lt;/h3&gt;

&lt;p&gt;Still in that first 10 minutes, I'll run a quick passive subdomain check. Tools like SecurityTrails, crt.sh (certificate transparency logs), and DNSdumpster surface subdomains without sending a single packet to the target. I'm looking for: staging environments, old API versions, admin panels, internal tools, and forgotten development servers.&lt;/p&gt;

&lt;p&gt;The Internet Archive breach we covered &lt;a href="https://www.kuboid.in/blog/security-misconfiguration-2025-owasp-number-2-web-app-risk" rel="noopener noreferrer"&gt;here&lt;/a&gt; started with a forgotten development subdomain. This is not an edge case — it's one of the most reliable findings on every assessment I run.&lt;/p&gt;
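&lt;p&gt;crt.sh also serves its results as JSON, which makes the passive check scriptable. A sketch that flattens crt.sh's response shape (a list of objects whose &lt;code&gt;name_value&lt;/code&gt; field may hold several newline-separated hostnames) into a deduplicated subdomain list:&lt;/p&gt;

```python
# Sketch: collapse crt.sh JSON entries into a sorted, deduplicated
# list of hostnames for manual review.
def subdomains_from_crtsh(entries):
    names = set()
    for entry in entries:
        # name_value may hold several newline-separated hostnames
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")  # drop wildcard prefixes
            if name:
                names.add(name.lower())
    return sorted(names)
```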

&lt;h3&gt;
  
  
  9. Unauthenticated LLM and AI Endpoints (The 2026 Addition)
&lt;/h3&gt;

&lt;p&gt;This one didn't exist on my checklist two years ago. Now it's standard.&lt;/p&gt;

&lt;p&gt;If I can tell from the application's functionality or JavaScript that it's using an LLM backend — a chat feature, an AI assistant, a document summarisation tool — I immediately look for the API endpoint that talks to it. I check whether it's authenticated. I check whether I can call it directly without a user session. I check whether it has rate limiting. I check whether it's proxied through the application's own backend or hitting OpenAI/Anthropic directly with a hardcoded key in the client-side JavaScript.&lt;/p&gt;

&lt;p&gt;Unauthenticated LLM endpoints are how &lt;a href="https://www.kuboid.in/blog/llmjacking-how-attackers-steal-your-openai-api-key-and-run-up-dollar100k-bills" rel="noopener noreferrer"&gt;LLMjacking attacks&lt;/a&gt; happen. And the &lt;a href="https://www.kuboid.in/blog/api-security-the-blind-spot-of-every-early-stage-startup" rel="noopener noreferrer"&gt;API security blind spots&lt;/a&gt; that affect standard endpoints are even more prevalent in AI feature implementations because they're often built quickly by teams without a security background.&lt;/p&gt;




&lt;h2&gt;
  
  
  What These Findings Tell Me About the Rest of the Engagement
&lt;/h2&gt;

&lt;p&gt;Here's the part that matters most for engineering leaders: when I find issues in these first 10 minutes, it's not because I've found the edge cases. It's because I've found the surface layer. These are the things a competent attacker finds in their first pass before they've even started trying.&lt;/p&gt;

&lt;p&gt;If an application fails on three or four of these checks, it tells me the rest of the assessment is going to be thorough. It suggests that security wasn't a structured part of the build process — that it was assumed to be handled rather than explicitly designed in.&lt;/p&gt;

&lt;p&gt;If an application passes most of these cleanly, I know I'm working with a team that has thought about security at the implementation level. The assessment gets more interesting from there — we start finding the architectural and logic issues that take more effort — but the low-hanging fruit is gone.&lt;/p&gt;

&lt;p&gt;The full scope of what we test beyond this is covered in &lt;a href="https://www.kuboid.in/blog/the-complete-web-app-pen-test-checklist-what-i-test-on-every-engagement" rel="noopener noreferrer"&gt;our complete web app pen test checklist&lt;/a&gt;. The first 10 minutes are just the opening conversation.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Run This on Your Own App Right Now
&lt;/h2&gt;

&lt;p&gt;You don't need any specialist tools for most of this checklist. A browser with DevTools open, a second test account, and two free services (&lt;code&gt;crt.sh&lt;/code&gt; and &lt;code&gt;dnsdumpster.com&lt;/code&gt;) are enough to cover roughly half of it in under 30 minutes.&lt;/p&gt;

&lt;p&gt;Open your application. Work through each of the nine points above. Write down what you find. If anything flags — a missing security header, an IDOR that works across accounts, a subdomain you'd forgotten about — that's something worth addressing before you let a real attacker find it.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Kuboid Secure Layer Can Help
&lt;/h2&gt;

&lt;p&gt;The first 10 minutes are a free check you can run yourself. What comes after requires a structured methodology, adversarial thinking, and experience with what the findings in the surface layer usually lead to underneath.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.kuboid.in/services/web-app-pentest" rel="noopener noreferrer"&gt;web application penetration tests&lt;/a&gt; start with this checklist and go significantly further — covering authentication logic, business logic flaws, server-side vulnerabilities, and the full OWASP Top 10:2025 framework. If you'd like to see exactly what we look for across a full engagement, &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;book a free consultation&lt;/a&gt; and we'll walk you through our process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If you've ever run a quick security check on your own application — even informally — what did you find? I'm genuinely curious. Drop a comment below. The most common answer is "more than I expected."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>cybersecurity</category>
      <category>webdev</category>
      <category>pentest</category>
      <category>ai</category>
    </item>
    <item>
      <title>The LiteLLM Supply Chain Attack Explained: What Happened, Who's Affected, and What to Do Now</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Fri, 27 Mar 2026 04:26:43 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/the-litellm-supply-chain-attack-explained-what-happened-whos-affected-and-what-to-do-now-ap3</link>
      <guid>https://dev.to/kuboidsecurelayer/the-litellm-supply-chain-attack-explained-what-happened-whos-affected-and-what-to-do-now-ap3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; On the morning of March 24, 2026, two malicious versions of LiteLLM — one of the most widely used Python packages for building AI applications — were published to PyPI. The packages were live for roughly two and a half hours. During that window, anyone who ran &lt;code&gt;pip install litellm&lt;/code&gt; without a pinned version may have installed a credential stealer that targeted AWS keys, Kubernetes tokens, SSH keys, cloud credentials, CI/CD secrets, and database passwords. If you use LiteLLM, check your version right now. If you installed between 10:39 UTC and 13:25 UTC on March 24, treat your environment as compromised.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  "Just a Library Update" — Until It Wasn't
&lt;/h2&gt;

&lt;p&gt;Picture a typical Monday morning. A developer on your team runs a routine &lt;code&gt;pip install --upgrade litellm&lt;/code&gt; in a CI/CD pipeline. The upgrade goes smoothly. No errors. The pipeline passes. Everything looks fine.&lt;/p&gt;

&lt;p&gt;What they didn't know: for a 166-minute window earlier that day, two versions of LiteLLM on PyPI contained a hidden, multi-stage credential stealer. The moment those packages executed, they began silently harvesting every secret they could find on the host — and exfiltrating them, encrypted, to an attacker-controlled server.&lt;/p&gt;

&lt;p&gt;This is a supply chain attack. Not your code. Not your team. A package you trusted, from a project you rely on, turned into a weapon.&lt;/p&gt;

&lt;p&gt;And this one hit one of the most sensitive packages imaginable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why LiteLLM Was the Perfect Target
&lt;/h2&gt;

&lt;p&gt;If you build AI-powered applications, there's a good chance LiteLLM is somewhere in your stack. It's an open-source Python library that acts as a universal interface for over 100 large language model providers — OpenAI, Anthropic, Google Gemini, and more — translating API calls into a standard format. It's the plumbing of the modern AI application layer.&lt;/p&gt;

&lt;p&gt;That position in the stack is exactly what made it valuable to attackers. According to Wiz's research, LiteLLM is present in &lt;strong&gt;36% of cloud environments&lt;/strong&gt;. It processes API keys, environment variables, and credentials as part of its normal function. Compromise the library, and you're inside the environment of every developer and company that uses it — with direct access to the most sensitive configuration data they have.&lt;/p&gt;

&lt;p&gt;The package receives approximately &lt;strong&gt;3 million downloads per day&lt;/strong&gt;. Even two and a half hours of exposure represents an enormous potential blast radius.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Happened
&lt;/h2&gt;

&lt;p&gt;The threat group behind this attack — identified by Wiz and Sonatype as &lt;strong&gt;TeamPCP&lt;/strong&gt;, suspected to have links to the LAPSUS$ group — had already compromised Aqua Security's Trivy security scanner the day before. During that compromise, they obtained an API token belonging to a LiteLLM maintainer's PyPI account.&lt;/p&gt;

&lt;p&gt;They used that token to bypass LiteLLM's official CI/CD pipeline entirely and publish two malicious packages directly to PyPI:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.82.7&lt;/strong&gt; — published at approximately 10:39 UTC, the start of the exposure window. The malicious payload was embedded in &lt;code&gt;proxy_server.py&lt;/code&gt;. It executed whenever &lt;code&gt;litellm --proxy&lt;/code&gt; was run or when &lt;code&gt;litellm.proxy.proxy_server&lt;/code&gt; was imported.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.82.8&lt;/strong&gt; — a more dangerous escalation published shortly after. In addition to the &lt;code&gt;proxy_server.py&lt;/code&gt; payload, it introduced a file called &lt;code&gt;litellm_init.pth&lt;/code&gt; — exploiting Python's &lt;code&gt;.pth&lt;/code&gt; mechanism, which allows arbitrary code to execute during interpreter startup. This meant the malware ran &lt;strong&gt;whenever Python was invoked on the system&lt;/strong&gt; — regardless of whether LiteLLM was explicitly imported. This made it significantly harder to detect and dramatically more persistent.&lt;/p&gt;
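&lt;p&gt;The &lt;code&gt;.pth&lt;/code&gt; mechanism is worth seeing once, because it surprises most Python developers: any line in a &lt;code&gt;.pth&lt;/code&gt; file that starts with &lt;code&gt;import&lt;/code&gt; is executed when the &lt;code&gt;site&lt;/code&gt; module processes the containing directory, which happens for &lt;code&gt;site-packages&lt;/code&gt; at every interpreter startup. A benign demonstration using &lt;code&gt;site.addsitedir&lt;/code&gt;, which applies the same processing on demand:&lt;/p&gt;

```python
# Benign demo of the .pth execution mechanism the malware abused:
# code in a .pth file runs without the package ever being imported.
import os
import site
import sys
import tempfile

demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo.pth"), "w") as f:
    f.write("import sys; sys.pth_demo_ran = True\n")

site.addsitedir(demo_dir)  # same processing site-packages gets at startup
print(getattr(sys, "pth_demo_ran", False))  # True: the .pth line executed
```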

&lt;p&gt;PyPI quarantined both packages at approximately &lt;strong&gt;13:25 UTC&lt;/strong&gt;, closing the 166-minute exposure window. The official LiteLLM security update was published the same day. &lt;a href="https://docs.litellm.ai/blog/security-update-march-2026" rel="noopener noreferrer"&gt;LiteLLM's own incident report&lt;/a&gt; confirmed the compromised versions, the attack vector, and the immediate steps taken.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Malware Did Once Installed
&lt;/h2&gt;

&lt;p&gt;The payload operated in three stages, each more aggressive than the last.&lt;/p&gt;

&lt;p&gt;The first stage launched data collection and began exfiltrating immediately. The second stage performed deep reconnaissance across the host — enumerating system details and searching for: environment variables and API keys, SSH keys and configurations, cloud provider credentials (AWS, GCP, Azure), Kubernetes configuration files and service account tokens, CI/CD pipeline secrets, Terraform and Helm configurations, Docker configs, database credentials, and cryptocurrency wallet data.&lt;/p&gt;

&lt;p&gt;In some cases, the malware actively used discovered credentials — querying AWS APIs and accessing Kubernetes secrets — rather than simply collecting them.&lt;/p&gt;

&lt;p&gt;All harvested data was encrypted using AES-256-CBC with a randomly generated session key; that key was in turn encrypted with a hard-coded RSA public key embedded in the malware, and the entire package was exfiltrated to attacker-controlled domains: &lt;code&gt;checkmarx[.]zone&lt;/code&gt; (version 1.82.7) and &lt;code&gt;models[.]litellm[.]cloud&lt;/code&gt; (version 1.82.8). Neither domain is affiliated with LiteLLM.&lt;/p&gt;

&lt;p&gt;The third stage dropped a persistent Python script (&lt;code&gt;sysmon.py&lt;/code&gt;) configured to run as a system service, polling the attacker's server every 50 minutes for new payloads — meaning even after the initial infection is cleaned up, compromised systems may continue to receive attacker instructions until the persistence mechanism is removed.&lt;/p&gt;

&lt;p&gt;This is not a script-kiddie attack. The sophistication of the encryption, persistence mechanism, and evasion techniques (including serving benign content to sandbox analysis systems) points to a well-resourced, organised threat group.&lt;/p&gt;




&lt;h2&gt;
  
  
  Are You Affected?
&lt;/h2&gt;

&lt;p&gt;You may be affected if &lt;strong&gt;any&lt;/strong&gt; of the following are true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You ran &lt;code&gt;pip install litellm&lt;/code&gt; or &lt;code&gt;pip install --upgrade litellm&lt;/code&gt; on &lt;strong&gt;March 24, 2026 between 10:39 UTC and 13:25 UTC&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You built a Docker image during that window that included &lt;code&gt;pip install litellm&lt;/code&gt; without a pinned version&lt;/li&gt;
&lt;li&gt;You use an AI agent framework, MCP server, or LLM orchestration tool that depends on LiteLLM as a transitive dependency&lt;/li&gt;
&lt;li&gt;Your CI/CD pipeline pulls dependencies without version pinning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You are &lt;strong&gt;not&lt;/strong&gt; affected if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are running the official &lt;strong&gt;LiteLLM Proxy Docker image&lt;/strong&gt; (&lt;code&gt;ghcr.io/berriai/litellm&lt;/code&gt;) — it pins dependencies and did not pull the compromised PyPI versions&lt;/li&gt;
&lt;li&gt;You are on &lt;strong&gt;v1.82.6 or earlier&lt;/strong&gt; and did not upgrade during the window&lt;/li&gt;
&lt;li&gt;You installed LiteLLM from the GitHub source repository, which was not compromised&lt;/li&gt;
&lt;li&gt;You use LiteLLM Cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To check your version: &lt;code&gt;pip show litellm&lt;/code&gt;&lt;/p&gt;
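
&lt;p&gt;Or, as a quick script using the standard library's &lt;code&gt;importlib.metadata&lt;/code&gt; (the version set below matches the two quarantined releases named above):&lt;/p&gt;

```python
from importlib import metadata

# The two releases PyPI quarantined.
COMPROMISED = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """True if the given version is one of the known-bad releases."""
    return version in COMPROMISED

try:
    installed = metadata.version("litellm")
    status = "COMPROMISED" if is_compromised(installed) else "not a known-bad release"
    print(f"litellm {installed}: {status}")
except metadata.PackageNotFoundError:
    print("litellm is not installed in this environment")
```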




&lt;h2&gt;
  
  
  What to Do Right Now
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;If you installed v1.82.7 or v1.82.8&lt;/strong&gt;, act immediately:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Check for the persistence file.&lt;/strong&gt; Run this on any potentially affected host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;find /usr/lib/python3/ &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"litellm_init.pth"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it's present, remove it and treat the host as fully compromised.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Rotate every credential on affected systems.&lt;/strong&gt; Assume everything has been exfiltrated: AWS access keys, cloud service tokens, database passwords, SSH keys, Kubernetes tokens, CI/CD secrets, &lt;code&gt;.env&lt;/code&gt; file contents. Rotate all of them immediately from a clean, unaffected machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Check outbound traffic logs&lt;/strong&gt; for connections to &lt;code&gt;models.litellm.cloud&lt;/code&gt; or &lt;code&gt;checkmarx.zone&lt;/code&gt;. Either domain in your logs is a confirmed indicator of compromise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Audit your entire dependency tree.&lt;/strong&gt; Don't just check direct LiteLLM installations — check every package in your environment that might pull LiteLLM as a transitive dependency. AI agent frameworks and orchestration tools are the most likely indirect vectors.&lt;/p&gt;
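
&lt;p&gt;One rough way to surface &lt;em&gt;direct&lt;/em&gt; dependents in the current environment with only the standard library is sketched below; the requirement-string parsing is deliberately simplistic, and a tool like &lt;code&gt;pipdeptree --reverse --packages litellm&lt;/code&gt; will give a fuller transitive picture:&lt;/p&gt;

```python
from importlib import metadata
import re

def packages_requiring(target, distributions=None):
    """Names of installed distributions declaring `target` as a direct requirement."""
    target = target.lower()
    dists = metadata.distributions() if distributions is None else distributions
    dependents = []
    for dist in dists:
        for req in (dist.requires or []):
            # A requirement string looks like "litellm>=1.60 ; extra == 'proxy'";
            # split off the bare project name before comparing.
            name = re.split(r"[\s;<>=!~\[]", req, maxsplit=1)[0]
            if name.lower() == target:
                dependents.append(dist.metadata["Name"])
                break
    return sorted(set(dependents))

if __name__ == "__main__":
    hits = packages_requiring("litellm")
    print(hits if hits else "no installed distribution declares a direct litellm dependency")
```

&lt;p&gt;Repeat the check against each hit to walk further up the tree, and remember that environments baked into Docker images need auditing too.&lt;/p&gt;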

&lt;p&gt;&lt;strong&gt;5. Pin your version&lt;/strong&gt; to v1.82.6 or wait for a verified clean release from the LiteLLM team, who have &lt;a href="https://docs.litellm.ai/blog/security-update-march-2026" rel="noopener noreferrer"&gt;paused new releases&lt;/a&gt; pending a full supply chain review with Google's Mandiant team.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This attack is the third in a series by TeamPCP in the space of two days. They first compromised Aqua Security's Trivy scanner, then Checkmarx's KICS GitHub Action, then used credentials obtained from those attacks to hit LiteLLM. This is a coordinated, escalating campaign targeting the security and AI tooling ecosystem specifically.&lt;/p&gt;

&lt;p&gt;We've written before about how &lt;a href="https://www.kuboid.in/blog/owasp-top-10-2025-everything-that-changed-and-what-it-means" rel="noopener noreferrer"&gt;software supply chain attacks are now OWASP #3&lt;/a&gt; — and about &lt;a href="https://www.kuboid.in/blog/xz-utils-backdoor-supply-chain-attack-linux" rel="noopener noreferrer"&gt;how the XZ Utils backdoor nearly compromised every Linux server on the internet&lt;/a&gt;. LiteLLM is the same category of attack, executed faster, against a more targeted segment of the industry.&lt;/p&gt;

&lt;p&gt;The lesson isn't to stop using open source. It's to stop treating your dependencies as someone else's responsibility. Every package in your application is code you are accountable for — whether you wrote it or not. &lt;a href="https://www.kuboid.in/blog/secrets-in-code-how-one-git-commit-cost-a-startup-dollar-80000" rel="noopener noreferrer"&gt;Secrets committed to code&lt;/a&gt; or picked up from a compromised package are equally dangerous once they're in an attacker's hands.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Kuboid Secure Layer Can Help
&lt;/h2&gt;

&lt;p&gt;If your team uses LiteLLM or any AI framework with LLM dependencies, now is the right time to audit your dependency tree, your secrets management practices, and your CI/CD pipeline security.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.kuboid.in/services/cloud-database-pentest" rel="noopener noreferrer"&gt;cloud and application security reviews&lt;/a&gt; specifically cover supply chain exposure — what's in your dependency graph, whether your build pipeline is hardened against token theft, and whether your secrets are being managed in a way that limits blast radius when an upstream package is compromised.&lt;/p&gt;

&lt;p&gt;If you think you may have been affected and need guidance on response, or if you want to proactively assess your exposure before the next attack, &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;reach out to us&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Are you running LiteLLM in production? Have you checked your version yet? Drop a comment — and if you found the malicious package in your environment, please share what you saw. The more the community shares, the faster we all respond.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>litellm</category>
      <category>cybersecurity</category>
      <category>ceo</category>
      <category>founder</category>
    </item>
    <item>
      <title>OWASP Top 10 2025: Everything That Changed and What It Means for Developers</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Mon, 23 Mar 2026 06:03:18 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/owasp-top-10-2025-everything-that-changed-and-what-it-means-for-developers-224b</link>
      <guid>https://dev.to/kuboidsecurelayer/owasp-top-10-2025-everything-that-changed-and-what-it-means-for-developers-224b</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; OWASP released its 2025 Top 10 in November — the first update in four years. They analysed 2.8 million applications, 175,000 CVE records, and surveyed practitioners worldwide. The result: two brand-new categories (Supply Chain Failures and Mishandling of Exceptional Conditions), Security Misconfiguration jumping from #5 to #2, and a fundamental philosophical shift — from cataloguing symptoms to identifying root causes. If your team is still working off the 2021 list, your threat model is already outdated.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The List That Shapes How the World Builds Software
&lt;/h2&gt;

&lt;p&gt;Four years ago, a security team somewhere pinned the OWASP Top 10:2021 to their Confluence wall and said "right, let's test against this." That list has since shaped pen test scopes, compliance frameworks, developer training curricula, and security tooling roadmaps across thousands of organisations.&lt;/p&gt;

&lt;p&gt;Then, in November 2025 at the Global AppSec Conference, OWASP quietly dropped the 2025 edition — and a few things shifted significantly.&lt;/p&gt;

&lt;p&gt;If you're not familiar with what OWASP is, &lt;a href="https://www.kuboid.in/blog/owasp-top-10-explained-without-the-jargon" rel="noopener noreferrer"&gt;we covered that in detail here&lt;/a&gt;. Short version: it's the most widely referenced standard for web application security risks in the world. When it changes, the industry pays attention.&lt;/p&gt;

&lt;p&gt;So let's walk through exactly what changed, why, and what it means for your application in 2026.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Methodology Shift Nobody Talked About
&lt;/h2&gt;

&lt;p&gt;Before we get to the rankings, the most important change in 2025 isn't a new category. It's a philosophical one.&lt;/p&gt;

&lt;p&gt;OWASP deliberately moved from categorising &lt;em&gt;symptoms&lt;/em&gt; to categorising &lt;em&gt;root causes&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The 2021 list included things like "Sensitive Data Exposure" — which describes what happens when something goes wrong. The 2025 list reframes these as "Cryptographic Failures" — which describes &lt;em&gt;why&lt;/em&gt; it goes wrong. That distinction matters enormously for developers. Fixing a symptom is whack-a-mole. Fixing a root cause is engineering.&lt;/p&gt;

&lt;p&gt;The team analysed 589 Common Weakness Enumerations (CWEs) — substantially more than the 2021 edition — selected eight categories from data, and reserved two slots for community-voted risks that data alone doesn't yet capture. The result is a list that reflects both what's happening now and what practitioners on the ground are worried is coming.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's New: The Two Brand-New Categories
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A03: Software Supply Chain Failures
&lt;/h3&gt;

&lt;p&gt;This is the one that should concern every CTO. It expands the 2021 category "Vulnerable and Outdated Components" to encompass the entire software supply chain — dependencies, build systems, and distribution infrastructure. Despite having the fewest occurrences in testing data, it carries the highest average exploit and impact scores from CVEs.&lt;/p&gt;

&lt;p&gt;The reason: supply chain attacks don't look like bugs. They look like legitimate updates from trusted sources. The SolarWinds attack. The XZ Utils backdoor. The compromised &lt;code&gt;event-stream&lt;/code&gt; npm package. These weren't vulnerabilities in your code — they were trust failures in your pipeline.&lt;/p&gt;

&lt;p&gt;50% of survey respondents ranked Supply Chain Failures as their top concern — the highest consensus across any single category. The community sees where the next wave of attacks is coming from. This category is their answer.&lt;/p&gt;

&lt;h3&gt;
  
  
  A10: Mishandling of Exceptional Conditions
&lt;/h3&gt;

&lt;p&gt;The second new entry is subtler but just as important. It focuses on how systems behave under abnormal conditions: exceptions, unexpected inputs, and fail-open logic. Attackers increasingly exploit edge cases and exception paths — code paths that were never covered well in design or testing.&lt;/p&gt;

&lt;p&gt;The classic example: an error in an authentication flow that, instead of denying access, throws an unhandled exception and defaults to &lt;em&gt;granting&lt;/em&gt; access. The code was never written to be malicious. It just wasn't written to fail safely.&lt;/p&gt;
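
&lt;p&gt;The pattern is easy to sketch. In this illustrative fragment (all names invented), the safe version treats any exception inside the authorisation check as a denial:&lt;/p&gt;

```python
ACL = {("alice", "report.pdf")}  # toy access-control list

def check_acl(user, resource):
    if user is None:
        # The exceptional condition: a request arrived with no session user.
        raise ValueError("no user in session")
    return (user, resource) in ACL

def is_authorized(user, resource):
    """Fail closed: any error inside the check denies access."""
    try:
        return check_acl(user, resource)
    except Exception:
        # A real system would also log and alert here; the default is denial.
        return False
```

&lt;p&gt;The fail-open defect is the same code with &lt;code&gt;return True&lt;/code&gt; in the &lt;code&gt;except&lt;/code&gt; branch, or with no exception handling at all and a framework default that lets the request through.&lt;/p&gt;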




&lt;h2&gt;
  
  
  The Reshuffling: What Went Up, What Got Absorbed
&lt;/h2&gt;

&lt;p&gt;The full 2025 list looks like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;2025 Category&lt;/th&gt;
&lt;th&gt;2021 Position&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A01&lt;/td&gt;
&lt;td&gt;Broken Access Control&lt;/td&gt;
&lt;td&gt;#1 (unchanged)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A02&lt;/td&gt;
&lt;td&gt;Security Misconfiguration&lt;/td&gt;
&lt;td&gt;#5 → &lt;strong&gt;jumped to #2&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A03&lt;/td&gt;
&lt;td&gt;Software Supply Chain Failures&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;New&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A04&lt;/td&gt;
&lt;td&gt;Cryptographic Failures&lt;/td&gt;
&lt;td&gt;#2 → #4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A05&lt;/td&gt;
&lt;td&gt;Injection&lt;/td&gt;
&lt;td&gt;#3 → #5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A06&lt;/td&gt;
&lt;td&gt;Insecure Design&lt;/td&gt;
&lt;td&gt;#4 → #6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A07&lt;/td&gt;
&lt;td&gt;Authentication Failures&lt;/td&gt;
&lt;td&gt;#7 (unchanged)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A08&lt;/td&gt;
&lt;td&gt;Data Integrity Failures&lt;/td&gt;
&lt;td&gt;#8 (unchanged)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A09&lt;/td&gt;
&lt;td&gt;Security Logging &amp;amp; Alerting Failures&lt;/td&gt;
&lt;td&gt;#9 (unchanged)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A10&lt;/td&gt;
&lt;td&gt;Mishandling of Exceptional Conditions&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;New&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notable: SSRF (Server-Side Request Forgery), which was A10 in 2021, has been consolidated into A01: Broken Access Control — reflecting the significant overlap between SSRF exploits and access control failures.&lt;/p&gt;

&lt;p&gt;The biggest jump is Security Misconfiguration, from #5 to #2. It now affects virtually every tested application, with more than 719,000 mapped CWE occurrences in the dataset. As software becomes more configurable — containers, cloud, IaC, feature flags — misconfiguration has quietly become the most pervasive risk in modern stacks.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Top 5 in Plain English
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A01 — Broken Access Control:&lt;/strong&gt; Still number one, still the most common finding in every pen test we run. This is when a user can do something they shouldn't — view another user's data, access an admin page, modify a record they don't own. The IDOR vulnerabilities we wrote about &lt;a href="https://www.kuboid.in/blog/idor-the-vulnerability-developers-keep-writing-and-how-to-stop" rel="noopener noreferrer"&gt;in this post&lt;/a&gt; live here. So does SSRF now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A02 — Security Misconfiguration:&lt;/strong&gt; Your S3 bucket is public. Your debug endpoint is live in production. Your container is running as root. None of these required a single line of bad code — just a bad configuration decision, often made once and never revisited. &lt;a href="https://www.kuboid.in/blog/cloud-security-checklist-10-things-to-fix-in-your-aws-account-today" rel="noopener noreferrer"&gt;We covered this in detail for cloud environments here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A03 — Software Supply Chain Failures:&lt;/strong&gt; The npm package you installed in 2023 just received a malicious update. Your build pipeline pulls from an unverified registry. A transitive dependency — three layers deep, one you've never heard of — has a critical CVE. You shipped it last Tuesday.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A04 — Cryptographic Failures:&lt;/strong&gt; You're storing passwords with MD5. Your TLS certificate is using a deprecated cipher suite. You're transmitting sensitive data over HTTP in an internal service because "it's internal." These are the failures that turn a minor breach into a catastrophic one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A05 — Injection:&lt;/strong&gt; SQL injection. Command injection. XSS. These dropped from #3 to #5 not because they're less dangerous, but because tooling has gotten better at catching them. They're still everywhere — &lt;a href="https://www.kuboid.in/blog/api-security-the-blind-spot-of-every-early-stage-startup" rel="noopener noreferrer"&gt;especially in APIs&lt;/a&gt; — but the industry has made real progress on the basics.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Takeaway for Your Team
&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth: if your last security review, pen test, or threat model was based on the 2021 list, two entirely new risk categories weren't on the table. Your supply chain wasn't assessed. Your exception handling wasn't challenged. Your misconfiguration risk — now the second-highest category in the world — may have been treated as a footnote.&lt;/p&gt;

&lt;p&gt;That's not a criticism. It's just what happens when the threat landscape moves and the assessment framework hasn't caught up yet.&lt;/p&gt;

&lt;p&gt;The 2025 update is telling you exactly where attackers are looking right now. The question is whether you look there first.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Have you started updating your security practices to OWASP 2025?&lt;/strong&gt; Drop a comment — we're curious how many teams are still working off the 2021 list heading into mid-2026.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How Kuboid Secure Layer Can Help
&lt;/h2&gt;

&lt;p&gt;Our &lt;a href="https://www.kuboid.in/services/web-app-pentest" rel="noopener noreferrer"&gt;web application penetration tests&lt;/a&gt; are fully mapped to OWASP Top 10:2025 — including the two new categories. We test your supply chain exposure, your exception handling edge cases, your access control logic, and everything in between.&lt;/p&gt;

&lt;p&gt;If you haven't had your application reviewed since 2021, now is the right time. &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;Book a free consultation here&lt;/a&gt; and we'll tell you honestly where you stand.&lt;/p&gt;




</description>
      <category>cybersecurity</category>
      <category>owasp</category>
    </item>
    <item>
      <title>Why Multi-Factor Authentication Won't Stop Social Engineering Attacks</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Thu, 19 Mar 2026 04:27:10 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/why-multi-factor-authentication-wont-stop-social-engineering-attacks-oj8</link>
      <guid>https://dev.to/kuboidsecurelayer/why-multi-factor-authentication-wont-stop-social-engineering-attacks-oj8</guid>
      <description>&lt;p&gt;IT support received a call from someone claiming to be the company's CFO. Travelling. Locked out. Urgent MFA reset needed before a board meeting.&lt;/p&gt;

&lt;p&gt;The caller knew the CFO's name, the internal system names, the right acronyms. They sounded exactly like someone who belonged. The helpdesk agent — trained to be helpful, under pressure to resolve tickets fast — reset the MFA.&lt;/p&gt;

&lt;p&gt;The caller was not the CFO.&lt;/p&gt;

&lt;p&gt;Forty minutes later, the attacker had domain administrator access. This is the exact pattern Unit 42 documented in their 2025 Global Incident Response Report when tracing Muddled Libra's intrusions across dozens of enterprises. And the chilling detail: &lt;strong&gt;every single target had MFA enabled&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmskjk7np2xnelicx8qwd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmskjk7np2xnelicx8qwd.webp" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  MFA Is Still Essential. Let's Be Clear About That.
&lt;/h2&gt;

&lt;p&gt;Before anything else — this post is not an argument against MFA. Enable it everywhere. On every account. Right now, if you haven't.&lt;/p&gt;

&lt;p&gt;MFA stops an enormous category of attacks. Credential stuffing, brute force, password spraying, leaked password reuse — if your employees' passwords are sitting in a breach database somewhere (and statistically, they are), MFA is the control that keeps those stolen credentials useless.&lt;/p&gt;

&lt;p&gt;79% of business email compromise victims investigated in 2024–2025 had MFA enabled — which tells you two things simultaneously. First, attackers are now specifically targeting organisations that have MFA. Second, having it wasn't enough on its own.&lt;/p&gt;

&lt;p&gt;MFA protects credentials. It does not protect people.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Two Ways Attackers Bypass MFA Through Social Engineering
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Attack 1: The Help Desk Reset
&lt;/h3&gt;

&lt;p&gt;This is the Muddled Libra / Scattered Spider playbook, documented in breach after breach since 2022.&lt;/p&gt;

&lt;p&gt;The attacker doesn't try to beat your MFA. They phone your IT helpdesk and ask for it to be reset.&lt;/p&gt;

&lt;p&gt;They've researched beforehand. LinkedIn gave them employee names, titles, and reporting structures. The company website gave them system names and office locations. Enough detail to build a completely convincing pretext. When the helpdesk agent asks a verification question — "What's your employee ID?" or "Who do you report to?" — the attacker has the answer ready.&lt;/p&gt;

&lt;p&gt;The agent resets the MFA. The attacker logs in. And because the login now shows valid credentials plus freshly issued MFA, every security system logs it as normal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Attack 2: MFA Fatigue (Prompt Bombing)
&lt;/h3&gt;

&lt;p&gt;This one requires the attacker to already have your password — usually from a phishing attack or a credential dump. They then attempt login repeatedly, triggering MFA push notifications to your phone. Dozens of them. Sometimes hundreds.&lt;/p&gt;

&lt;p&gt;The goal: wear you down until you tap "Approve" just to make it stop.&lt;/p&gt;

&lt;p&gt;Prompt bombing represented 14% of all social engineering incidents in 2024, and succeeded in more than 20% of social attacks against public sector organisations in 2025.&lt;/p&gt;

&lt;p&gt;This is how the 2022 Uber breach worked. An attacker purchased a contractor's credentials, then bombed them with MFA requests for hours — eventually calling them on WhatsApp, claiming to be Uber IT, and asking them to approve one request as part of a "security verification." They did. The attacker was in.&lt;/p&gt;

&lt;p&gt;In 2025, this technique has been weaponised by ransomware groups including Akira, which targeted SonicWall VPN environments using a combination of stolen credentials and MFA push spam.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If your phone suddenly started receiving 30 MFA requests at midnight, what would you do? What would your team do? It's worth thinking about before an attacker tests the answer.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Stops These Attacks
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. A Strict, Written Help Desk Identity Verification Policy
&lt;/h3&gt;

&lt;p&gt;This is the highest-impact, lowest-cost control available to most organisations — and it's almost never implemented properly.&lt;/p&gt;

&lt;p&gt;Every MFA reset, password change, or access modification must require identity verification through a channel that cannot be spoofed by a caller. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Video call with face verification&lt;/strong&gt; using a known internal contact, not the person requesting the reset&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manager callback&lt;/strong&gt; via a number stored in your directory — not one provided by the caller&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardware token or ID card verification&lt;/strong&gt; in person for high-privilege accounts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The policy must also state: &lt;strong&gt;no exceptions for urgency&lt;/strong&gt;. Urgency is the social engineer's most reliable tool. The moment your helpdesk feels empowered to say "I can't reset this without proper verification, regardless of how important you say this is" — you've removed the most exploited entry point in your entire security stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Replace Push Notifications With Phishing-Resistant MFA
&lt;/h3&gt;

&lt;p&gt;Standard push-based MFA — the kind where you get a notification and tap "Yes" — is vulnerable to both fatigue attacks and adversary-in-the-middle phishing proxies (tools like Evilginx that intercept MFA codes in real time).&lt;/p&gt;

&lt;p&gt;Upgrade to one of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FIDO2 / Hardware Security Keys&lt;/strong&gt; (YubiKey, Google Titan) — cryptographically bound to the specific website, impossible to phish. The gold standard. 87% of US and UK enterprises have deployed or are actively rolling out passkeys, per a 2025 FIDO Alliance study.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Number matching&lt;/strong&gt; — the user must type a code shown on the login screen into their authenticator app. Stops prompt bombing because each approval is tied to a specific session the user initiated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Passkeys&lt;/strong&gt; — increasingly supported across Microsoft, Google, Apple, and major SaaS platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If budget is the constraint: number matching is built into Microsoft Authenticator at no extra cost. (Google Authenticator generates TOTP codes and has no push prompts, so fatigue attacks don't apply to it in the same way.) Enable number matching today.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Conditional Access and Anomaly Detection
&lt;/h3&gt;

&lt;p&gt;Even with strong MFA, set controls that flag or block logins that look wrong:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Login from a new device or country triggers additional verification&lt;/li&gt;
&lt;li&gt;Login at unusual hours requires manager approval&lt;/li&gt;
&lt;li&gt;Rapid privilege escalation after login generates an immediate security alert&lt;/li&gt;
&lt;/ul&gt;
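
&lt;p&gt;Conceptually, these rules are just a policy function over the login context. A toy sketch (the attribute names, thresholds, and data are illustrative, not from any vendor's API):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    country: str
    device_id: str
    hour: int  # 0-23, user's local time

# Devices/locations previously seen for each user (toy data).
KNOWN = {("dana", "GB", "laptop-01")}
BUSINESS_HOURS = range(7, 20)

def required_checks(login: Login) -> list:
    """Return the extra verifications this login should trigger."""
    checks = []
    if (login.user, login.country, login.device_id) not in KNOWN:
        checks.append("step-up verification")  # new device or country
    if login.hour not in BUSINESS_HOURS:
        checks.append("manager approval")      # unusual hours
    return checks

print(required_checks(Login("dana", "GB", "laptop-01", 10)))  # familiar login
print(required_checks(Login("dana", "RU", "laptop-01", 2)))   # both rules fire
```

&lt;p&gt;The third rule — alerting on rapid privilege escalation — belongs in post-login monitoring rather than the login gate itself.&lt;/p&gt;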

&lt;p&gt;The Muddled Libra playbook moves fast — from helpdesk call to domain admin in 40 minutes — specifically because most environments don't alert on rapid privilege changes after a legitimate login. That gap is closable.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Just-in-Time (JIT) Access for Privileged Accounts
&lt;/h3&gt;

&lt;p&gt;Admin accounts should not exist as persistent, always-available logins. Just-in-time access means elevated permissions are granted on demand, for a defined time window, with an approval workflow. An attacker who bypasses MFA and gets into a standard account gets significantly less value if admin access requires a separate, time-limited grant that leaves an auditable trail.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Policy Your IT Team Needs Before Anything Else
&lt;/h2&gt;

&lt;p&gt;Before you buy new tools or upgrade MFA methods, write this policy and train your helpdesk on it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"No MFA reset, password change, or access modification will be processed based solely on a phone call, regardless of who the caller claims to be or how urgent the request appears. All such requests require [specific second-channel verification]. This applies without exception to all accounts including executives."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Print it. Put it on every helpdesk agent's wall. Test it with a simulated vishing call.&lt;/p&gt;

&lt;p&gt;Because here's the truth — the attacker who called and pretended to be your CFO isn't testing your technology. They're testing your people. And right now, without that policy in writing, your people have no script to follow when the pressure hits.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Assessment
&lt;/h2&gt;

&lt;p&gt;MFA is not the finish line. It's a layer — an important, essential, non-negotiable layer — in a stack that must also include process, training, and human verification protocols.&lt;/p&gt;

&lt;p&gt;The organisations we work with at &lt;a href="https://www.kuboid.in" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt; that get this right aren't necessarily the ones with the most sophisticated tools. They're the ones where the helpdesk agent feels completely confident saying "I can't do that without proper verification" — and where that response is backed by a policy that the CEO signed off on.&lt;/p&gt;

&lt;p&gt;If you'd like to understand where your current identity verification protocols stand — or run a simulated vishing test against your helpdesk — &lt;a href="https://www.kuboid.in/services/human-risk-assessment" rel="noopener noreferrer"&gt;our Human Risk Assessment service&lt;/a&gt; is built for exactly this. We test it the way a real attacker would.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does your IT support team have a documented, tested protocol for MFA resets? If not — or if you're not sure — that's the gap most worth closing this week. Drop a comment or &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;reach out directly&lt;/a&gt;. You'd be surprised how many teams discover it's missing only after we ask.&lt;/strong&gt;&lt;/p&gt;




</description>
      <category>cybersecurity</category>
      <category>founder</category>
      <category>socialengineering</category>
      <category>ceo</category>
    </item>
    <item>
      <title>The Coinbase Breach Explained: Insider Social Engineering Attack</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Wed, 18 Mar 2026 06:24:58 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/the-coinbase-breach-explained-insider-social-engineering-attack-l6a</link>
      <guid>https://dev.to/kuboidsecurelayer/the-coinbase-breach-explained-insider-social-engineering-attack-l6a</guid>
      <description>&lt;p&gt;Coinbase is not a company that cuts corners on security. They have a dedicated security team, world-class infrastructure, and the kind of compliance apparatus that comes with being publicly listed on NASDAQ and joining the S&amp;amp;P 500.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa982gu88n83zboq4xnqk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa982gu88n83zboq4xnqk.webp" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;None of it mattered.&lt;/p&gt;

&lt;p&gt;Because in late 2024, an attacker didn't try to break in. They simply walked up to the side door — the offshore customer support team — and offered someone cash to open it.&lt;/p&gt;

&lt;p&gt;This is the Coinbase breach. And it's one of the most important security case studies any business leader could study in 2025, because the lesson has nothing to do with code.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Happened — The Full Timeline
&lt;/h2&gt;

&lt;p&gt;The breach started on &lt;strong&gt;December 26, 2024&lt;/strong&gt;. Attackers — working through methods still under investigation — identified and contacted customer support agents working for Coinbase's outsourced support operations, primarily based in India. They offered cash to those agents in exchange for accessing and exporting customer data from Coinbase's internal support tools.&lt;/p&gt;

&lt;p&gt;Those agents had legitimate credentials. They weren't hacking anything. They were doing exactly what their access permissions allowed — looking up customer records — and quietly copying that data out.&lt;/p&gt;

&lt;p&gt;Coinbase only discovered the insider wrongdoing on May 11, 2025 — nearly &lt;strong&gt;five months&lt;/strong&gt; after the data had begun flowing out. That same day, the attackers sent an extortion email demanding &lt;strong&gt;$20 million in Bitcoin&lt;/strong&gt; to stay quiet about the stolen data.&lt;/p&gt;

&lt;p&gt;Coinbase CEO Brian Armstrong publicly refused to pay, stating "We will not fund criminal activity." Instead, Coinbase announced a matching $20 million reward for information leading to the attackers' arrest.&lt;/p&gt;

&lt;p&gt;In December 2025, Indian authorities arrested a former Coinbase customer support agent in Hyderabad in connection with the breach.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Technique: This Is Still Social Engineering
&lt;/h2&gt;

&lt;p&gt;It's tempting to file this under "insider threat" and move on. But that misses the real lesson.&lt;/p&gt;

&lt;p&gt;The support agents didn't wake up one day and decide to steal data. They were recruited, persuaded, and incentivised — by someone who understood their situation, identified their vulnerability, and constructed an offer they found difficult to refuse.&lt;/p&gt;

&lt;p&gt;Cybercriminals contacted customer support agents working for an external vendor and successfully bribed at least one agent to hand over their credentials or otherwise facilitate access to Coinbase's internal support tools.&lt;/p&gt;

&lt;p&gt;That's social engineering. The target just happened to be inside the organisation rather than outside it. The psychological levers — financial pressure, plausible deniability, perceived low risk — were pulled with precision.&lt;/p&gt;

&lt;p&gt;This is a pattern with a name now. Threat groups like Muddled Libra (which we covered in &lt;a href="https://www.kuboid.in/blog" rel="noopener noreferrer"&gt;Tuesday's post&lt;/a&gt;) have made BPO targeting — going after Business Process Outsourcing firms that handle support operations for large companies — a core part of their playbook. Common tactics include bribing insiders with legitimate access, social engineering support staff to grant unauthorised access, and compromising BPO employee accounts to reach internal systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Was Actually Exposed
&lt;/h2&gt;

&lt;p&gt;The stolen data covered a small subset of customers — fewer than 1% of Coinbase's monthly transacting users. No passwords, private keys, or funds were directly exposed, and Coinbase Prime accounts were untouched.&lt;/p&gt;

&lt;p&gt;According to the breach notification Coinbase filed with Maine's attorney general, 69,461 people were affected. The exposed data included names, addresses, partial Social Security numbers, account balances, and transaction history. That is enough to build highly convincing impersonation attacks.&lt;/p&gt;

&lt;p&gt;And that's exactly what happened next. Attackers used the stolen data to contact Coinbase customers while pretending to be Coinbase support — warning them of suspicious activity on their accounts and instructing them to move their funds to "safe" wallets. Wallets controlled by the attackers.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Actual Cost: Up to $400 Million
&lt;/h2&gt;

&lt;p&gt;The $20 million ransom demand was almost comically small relative to the actual cost of saying no.&lt;/p&gt;

&lt;p&gt;Coinbase estimated their financial exposure from customer reimbursement and remediation at between $180 million and $400 million. That figure covers reimbursing customers who were deceived into transferring crypto, legal costs, regulatory response, the DOJ investigation, and remediation work across their support operations.&lt;/p&gt;

&lt;p&gt;Plus the reputational hit of becoming a breach headline in the same week they joined the S&amp;amp;P 500.&lt;/p&gt;

&lt;p&gt;The attackers spent almost nothing. The company they targeted will spend hundreds of millions.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for Your Business
&lt;/h2&gt;

&lt;p&gt;You are almost certainly not Coinbase. But you very likely share one critical vulnerability with them: &lt;strong&gt;third-party and contractor access to your customer or operational data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Ask yourself honestly: who has access to your customer records right now? Not just your employees — your CRM vendor's support team, your outsourced helpdesk, your software implementation partners, the offshore agency managing your email campaigns. How is that access monitored? What does it look like when someone starts pulling more records than their role requires?&lt;/p&gt;

&lt;p&gt;Most companies don't have a clear answer. That gap is exactly what attackers are looking for.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Practical Steps to Reduce This Risk
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Apply least privilege — and mean it.&lt;/strong&gt; Every person who touches your systems — employee, contractor, or vendor — should have access to only what their specific role requires, for only as long as that role exists. A support agent who handles billing queries does not need access to identity documents. A vendor who handles email does not need access to your full customer database.&lt;/p&gt;
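&lt;p&gt;As a rough sketch of what "mean it" looks like in practice, a deny-by-default permission map is the simplest starting point. The role names and data fields below are invented for illustration, not drawn from any real system:&lt;/p&gt;

```python
# Hypothetical sketch of role-scoped access, not any vendor's actual model.
# Each role maps to the minimum set of data fields it genuinely needs.
ROLE_PERMISSIONS = {
    "billing_support": {"invoices", "payment_status"},
    "email_vendor":    {"email_address", "campaign_optin"},
    "kyc_reviewer":    {"identity_documents"},
}

def can_access(role: str, field: str) -> bool:
    """Deny by default: a role sees only what it is explicitly granted."""
    return field in ROLE_PERMISSIONS.get(role, set())

# A billing agent can read payment status...
assert can_access("billing_support", "payment_status")
# ...but not identity documents, and an unknown role sees nothing.
assert not can_access("billing_support", "identity_documents")
assert not can_access("contractor", "email_address")
```

&lt;p&gt;The key design choice is that the absence of a grant means denial: a new vendor role starts with no visibility until someone deliberately adds a field.&lt;/p&gt;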

&lt;p&gt;&lt;strong&gt;2. Log and monitor access behaviour, not just access events.&lt;/strong&gt; Knowing who logged in is table stakes. Knowing that a support agent viewed 400 customer records in a two-hour window — when their average is 30 per day — is the signal that catches insider threats before five months pass. Anomaly detection on access logs is no longer an enterprise-only capability.&lt;/p&gt;
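&lt;p&gt;To make the baseline idea concrete, here is a minimal Python sketch, not a production detector; the agent names, counts, and z-score threshold are invented assumptions. Each agent is compared against their own history, so an agent who normally views around 30 records a day gets flagged on a 400-record day:&lt;/p&gt;

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts: dict[str, list[int]], today: dict[str, int],
                   z_threshold: float = 3.0) -> list[str]:
    """Flag agents whose record-view count today sits far above their
    own historical baseline (daily_counts: agent -> past views per day)."""
    flagged = []
    for agent, history in daily_counts.items():
        mu, sigma = mean(history), stdev(history)
        # Guard against a flat history (sigma == 0): any rise is then suspicious.
        z = (today.get(agent, 0) - mu) / sigma if sigma else float("inf")
        if today.get(agent, 0) > mu and z > z_threshold:
            flagged.append(agent)
    return flagged

history = {"agent_a": [28, 30, 31, 29, 32], "agent_b": [25, 27, 26, 28, 24]}
print(flag_anomalies(history, {"agent_a": 400, "agent_b": 26}))  # ['agent_a']
```

&lt;p&gt;In practice you would feed this from your support tool's access logs and alert on flagged agents the same day, not five months later.&lt;/p&gt;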

&lt;p&gt;&lt;strong&gt;3. Vet contractors and vendors the same way you vet employees.&lt;/strong&gt; Background checks, clear contractual obligations around data handling, and defined offboarding procedures are non-negotiable when a contractor has access to customer PII. The Coinbase breach happened through a vendor relationship. The outsourced employees already had legitimate access as part of their job functions, allowing attackers to bypass traditional cybersecurity defences without touching Coinbase's core systems. Legitimate access used maliciously is invisible to most security tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Question Worth Asking Right Now
&lt;/h2&gt;

&lt;p&gt;The Coinbase breach didn't start with a sophisticated exploit. It started with someone at a support desk deciding that a cash offer was worth the risk. Five months passed before anyone noticed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does your company know who has access to your customer data right now — and would you know within days, not months, if that access was being abused?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If that question makes you uncomfortable, it should. That discomfort is useful.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.kuboid.in" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;, our &lt;a href="https://www.kuboid.in/services/cloud-database-pentest" rel="noopener noreferrer"&gt;Cloud &amp;amp; Infrastructure Security Reviews&lt;/a&gt; specifically examine access control architecture, third-party access patterns, and the logging infrastructure that determines whether you'd detect an insider event. We also work with organisations to build the vendor security frameworks that catch these risks at the contracting stage, not after a breach.&lt;/p&gt;

&lt;p&gt;If you've ever worked with a BPO, an outsourced support team, or a third-party vendor with access to customer data — &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;we should talk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you audited your third-party access controls recently? Or does your vendor list have doors you've forgotten about? Drop a comment — this is a conversation more businesses need to be having openly.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cryptocurrency</category>
      <category>cybersecurity</category>
      <category>ceo</category>
      <category>hacking</category>
    </item>
    <item>
      <title>What Is Social Engineering? Complete Guide With 2026 Examples</title>
      <dc:creator>Kuboid Secure Layer</dc:creator>
      <pubDate>Mon, 16 Mar 2026 02:32:14 +0000</pubDate>
      <link>https://dev.to/kuboidsecurelayer/what-is-social-engineering-complete-guide-with-2026-examples-54fn</link>
      <guid>https://dev.to/kuboidsecurelayer/what-is-social-engineering-complete-guide-with-2026-examples-54fn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfhr9mmysbl60pmqmk1o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfhr9mmysbl60pmqmk1o.jpeg" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 2024, employees at a multinational firm in Hong Kong attended what looked like a completely normal video call. Their CFO was on the call. So were several senior colleagues. Instructions were given. Questions were asked and answered. People complied.&lt;/p&gt;

&lt;p&gt;By the end of that call, &lt;strong&gt;$25.6 million&lt;/strong&gt; had been authorised for transfer.&lt;/p&gt;

&lt;p&gt;Every person on that call — except the real employees — was an AI-generated deepfake.&lt;/p&gt;

&lt;p&gt;No malware. No hacking. No breach of any firewall. Just people, trusting what they saw with their own eyes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is social engineering in 2026. And the numbers say it's coming for your team next.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  So What Is Social Engineering, Really?
&lt;/h2&gt;

&lt;p&gt;Social engineering is the use of psychological manipulation — not technical exploits — to trick people into handing over access, credentials, money, or sensitive data.&lt;/p&gt;

&lt;p&gt;The attacker's weapon isn't code. It's trust.&lt;/p&gt;

&lt;p&gt;They study how humans behave: our tendency to comply with authority, our fear of consequences, our desire to be helpful. Then they design situations that exploit exactly those instincts. By the time the target realises something is wrong, the damage is already done.&lt;/p&gt;

&lt;p&gt;What makes this so dangerous is its simplicity. You don't need to be a sophisticated hacker to pull off a social engineering attack. You need to be a good liar with a believable story — and in 2026, AI can help build that story in seconds.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why It Works: The 6 Human Triggers Attackers Exploit
&lt;/h2&gt;

&lt;p&gt;Robert Cialdini's research on influence identified six principles of persuasion that marketers use to drive decisions. Attackers have been quietly using the same playbook for decades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authority&lt;/strong&gt; — People comply with figures of power without questioning. An email "from the CEO" or a call "from IT" triggers automatic deference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Urgency&lt;/strong&gt; — When there's a deadline, people stop thinking carefully. "Your account will be locked in 30 minutes" is a pressure tactic, not a policy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Social proof&lt;/strong&gt; — "Your colleague already approved this" lowers resistance instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Liking&lt;/strong&gt; — Attackers who seem friendly, relatable, or familiar are trusted more readily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reciprocity&lt;/strong&gt; — When an attacker has done something helpful first, targets feel obligated to cooperate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scarcity&lt;/strong&gt; — "This is your only chance to verify" forces a snap decision.&lt;/p&gt;

&lt;p&gt;Each of these triggers bypasses rational evaluation. That's exactly why they work — even on smart, experienced professionals.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 7 Main Techniques (A Quick Map)
&lt;/h2&gt;

&lt;p&gt;Social engineering isn't one attack. It's a category of attacks. Here's the landscape:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Phishing&lt;/strong&gt; — Deceptive emails designed to steal credentials or deliver malware. Still the dominant vector, accounting for 57% of all social engineering incidents per the Verizon 2025 DBIR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spear Phishing&lt;/strong&gt; — Targeted phishing that uses your name, your company, your colleagues — crafted specifically for you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vishing&lt;/strong&gt; — Voice phishing over phone calls. Vishing attacks skyrocketed 442% between H1 and H2 of 2024, per CrowdStrike.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smishing&lt;/strong&gt; — The same attack delivered via SMS. Fake toll notices, delivery alerts, bank warnings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pretexting&lt;/strong&gt; — A fabricated scenario to extract information. "I'm from the auditors and need your login to verify the account."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Baiting&lt;/strong&gt; — Exploiting curiosity. A USB drive labelled "Salary List Q4" left in a car park. Someone always plugs it in.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailgating&lt;/strong&gt; — Physical. Following an authorised person through a secure door by looking like you belong.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Scale of the Problem in 2025–2026
&lt;/h2&gt;

&lt;p&gt;These numbers are not hypothetical. They come from the &lt;a href="https://www.verizon.com/business/resources/reports/dbir/" rel="noopener noreferrer"&gt;Verizon 2025 Data Breach Investigations Report&lt;/a&gt; and &lt;a href="https://www.ibm.com/reports/data-breach" rel="noopener noreferrer"&gt;IBM's Cost of a Data Breach Report 2025&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The human element was involved in &lt;strong&gt;60% of all breaches&lt;/strong&gt; in 2025&lt;/li&gt;
&lt;li&gt;Social engineering appeared in the &lt;strong&gt;top three breach patterns across 13 of 16 industries&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The average cost of a phishing-initiated breach: &lt;strong&gt;$4.91 million&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;BEC attacks alone caused &lt;strong&gt;$2.77 billion&lt;/strong&gt; in reported losses in 2024 (FBI IC3)&lt;/li&gt;
&lt;li&gt;The median time for a user to click a phishing link: &lt;strong&gt;21 seconds&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here's what keeps me up at night as someone who works in this field every day: &lt;strong&gt;93% of employees don't receive regular security awareness training&lt;/strong&gt;, according to Gitnux's 2025 analysis. That's not a technology gap. That's an enormous open door.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you ever asked yourself how your team would respond to a well-crafted phishing email or a convincing phone call? If you don't know the answer, that's worth sitting with for a moment.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Gets Targeted?
&lt;/h2&gt;

&lt;p&gt;The honest answer: everyone. But some profiles are hit harder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small and mid-sized businesses&lt;/strong&gt; are targeted nearly 4x more often than large enterprises, according to Verizon's 2025 DBIR. Why? Fewer security controls, less training, and the assumption that "we're too small to be a target" — which is itself a vulnerability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finance and HR teams&lt;/strong&gt; are primary targets because they hold the keys to money movement and sensitive employee data. In Q1 2025, 60.7% of failed phishing simulations involved emails impersonating internal teams — with HR being the single most imitated department at 49.7%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;C-suite executives&lt;/strong&gt; are targeted through whaling — spear phishing specifically crafted for leadership, often using publicly available information from LinkedIn, press releases, and earnings calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New employees&lt;/strong&gt; are particularly vulnerable. They haven't learned internal communication norms, they're eager to be helpful, and they're unlikely to question authority figures.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're a CEO or manager reading this — when did your team last receive any training on recognising these attacks? Drop a comment below. I'm genuinely curious where most businesses actually sit on this.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Technology Alone Cannot Stop This
&lt;/h2&gt;

&lt;p&gt;This is the part most vendors won't tell you directly.&lt;/p&gt;

&lt;p&gt;Your firewall doesn't see a social engineering attack. Your antivirus has nothing to scan. Your email filter can't detect a perfectly written email from a convincing lookalike domain. When an employee willingly hands over their credentials — or approves a wire transfer because they believe the request is legitimate — every piece of security technology you've deployed watches it happen without raising an alert.&lt;/p&gt;

&lt;p&gt;The 2025 DBIR is unambiguous: &lt;strong&gt;60% of breaches involve a human action&lt;/strong&gt;. Technology secures systems. It cannot secure judgment under pressure.&lt;/p&gt;

&lt;p&gt;This is why we focus so much of our work at &lt;a href="https://www.kuboid.in" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt; on the human layer — the part of your security posture that no amount of software spend can fully replace.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Defence: What Actually Works
&lt;/h2&gt;

&lt;p&gt;The full defence playbook deserves its own post — and we'll cover it in depth later this week. But here's the short version:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training that sticks&lt;/strong&gt; — Not a once-a-year slideshow. Scenario-based, regular, relevant training that reflects the actual attacks your industry faces today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simulated attacks&lt;/strong&gt; — The only way to know how your team actually responds is to test them safely. Simulated phishing and vishing campaigns reveal your real vulnerabilities before a real attacker does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verification procedures&lt;/strong&gt; — Any request involving money, credentials, or sensitive data must require a second-channel check. Always. No exceptions for urgency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A no-blame reporting culture&lt;/strong&gt; — If employees fear punishment for clicking, they'll hide it. Hidden incidents cost organisations days of additional exposure.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.kuboid.in/services/human-risk-assessment" rel="noopener noreferrer"&gt;Kuboid Secure Layer&lt;/a&gt;, our Human Risk Assessment service is built specifically around this. We simulate the attacks. We identify the gaps. We help build the culture that closes them — without embarrassing your team in the process.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Last Thought
&lt;/h2&gt;

&lt;p&gt;The Hong Kong deepfake case wasn't a failure of technology. Every security system in that company was probably working exactly as designed. It was a failure of process: there was no verification procedure that would have caught a transfer of that size before it went through.&lt;/p&gt;

&lt;p&gt;Social engineering attacks succeed not because your people are foolish, but because attackers are patient, well-researched, and increasingly well-equipped. The $25.6 million that left that company was authorised by people who genuinely believed they were doing their jobs.&lt;/p&gt;

&lt;p&gt;The question isn't whether your team could be manipulated. The question is whether your processes would catch it before the money moves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to find out how your team performs against a real simulation? &lt;a href="https://www.kuboid.in/contact" rel="noopener noreferrer"&gt;Let's talk.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>hacking</category>
      <category>phishing</category>
      <category>ceo</category>
      <category>founder</category>
    </item>
  </channel>
</rss>
