The Call That Changed Everything
It started with an anomaly that almost nobody noticed.
A regional accounting firm — mid-sized, 60+ employees, handling tax filings and financial audits for hundreds of business clients — reached out after their cyber insurance provider flagged unusual outbound traffic during a routine policy renewal review. The firm's internal IT contact assumed it was a misconfigured backup job. It wasn't.
What we uncovered over the next several weeks was eleven months of silent, systematic data exfiltration. Client financial records, PII, tax identification numbers, business banking details. All of it leaving the network through a vector nobody had thought to question: a remote-access tool installed by a third-party payroll software vendor.
This is the story of how it happened, how we found it, and what it took to contain the damage without destroying the trust that firm had spent decades building.
The Attack Vector Nobody Audited
The vendor in question was a legitimate payroll software company the firm had used for years. During a routine product update roughly eleven months prior, the vendor's support team had remotely installed a lightweight remote desktop agent — standard practice for troubleshooting. The installation was authorized. The agent itself was legitimate.
The problem: the vendor had been compromised. Attackers had obtained credentials to the vendor's internal support infrastructure and were using it as a launchpad into every client environment where their remote-access agent was deployed.
This is a classic supply chain intrusion, and it's devastatingly effective because:
- The remote-access tool is already trusted by endpoint security
- Traffic originates from a known, whitelisted IP range
- The activity looks like normal vendor support behavior
- There's no malware on the victim's systems to detect
The attackers weren't noisy. They exfiltrated data in small, irregular bursts — typically during business hours, mimicking legitimate support sessions. Over eleven months, they systematically harvested the firm's file shares.
How We Uncovered It
The insurance provider's anomaly flag gave us a starting point, but it was vague: elevated outbound data volume to a third-party IP. Our investigation began with log analysis.
# Reconstructing outbound session data from firewall logs.
# Field positions depend on the firewall's log format; here $6-$8 carry
# the connection tuple and $12 the byte count, so sort -k4 ranks the
# reconstructed sessions by volume transferred.
grep "PERMIT" firewall.log | awk '{print $6, $7, $8, $12}' \
  | sort -k4 -rn \
  | head -50
Sorting by data volume revealed dozens of sessions to a single IP range — all attributed in the logs to the payroll vendor's support domain. At first glance, nothing alarming. But cross-referencing timestamps with the vendor's actual support ticket history showed a critical gap: most of these sessions had no corresponding support ticket.
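That cross-reference is scriptable. Here's a minimal sketch of the idea, assuming the session log and the ticket history have been exported to CSV; the file names and column layout are illustrative, not the vendor's actual format.

# Flag remote-access sessions with no support ticket open at the time.
# Hypothetical exports: sessions.csv holds "start_timestamp,session_id",
# tickets.csv holds "opened_timestamp,closed_timestamp". ISO-8601
# timestamps sort lexicographically, so plain string comparison works.
while IFS=, read -r start id; do
  if ! awk -F, -v s="$start" '$1 <= s && s <= $2 { found = 1 } END { exit !found }' tickets.csv; then
    echo "NO TICKET: session $id at $start"
  fi
done < sessions.csv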
We then pulled endpoint telemetry from the remote-access agent's own activity logs:
[2023-03-14 10:42:17] Session initiated — user: vendor_support_01
[2023-03-14 10:42:19] File browse: \\FILESERVER01\ClientRecords\2022\
[2023-03-14 10:43:04] File copy: 47 files transferred
[2023-03-14 10:43:31] Session closed
No support ticket. No email from the client requesting help. A session opened, files copied, session closed. Repeated hundreds of times across eleven months.
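Surfacing that pattern across eleven months of logs took nothing exotic. A rough awk sketch against the agent log format shown above; agent.log is a hypothetical export name.

# Summarize every session in the agent log: start time, user, files copied.
awk '
  /Session initiated/ { ts = $1 " " $2; gsub(/\[|\]/, "", ts); user = $NF }
  /File copy/         { copied += $(NF-2) }
  /Session closed/    { print ts, user, copied, "files copied"; copied = 0 }
' agent.log

Pipe that output through a ticket filter like the sketch above and the orphaned sessions fall out immediately.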
The attacker was methodical. They never triggered volume thresholds that would have alerted on a single session. They rotated through directories slowly, never grabbing everything at once.
Containment: What You Do and Don't Do
Here's where firms make their worst mistakes. The instinct is to immediately kill everything: terminate vendor access, wipe affected systems, rotate all credentials simultaneously. That instinct can destroy forensic evidence, and it can tip off the attackers, giving them time to cover their tracks before you've mapped the full scope.
Our containment sequence:
- Isolate, don't disconnect. We moved the compromised file server to a quarantine VLAN before terminating vendor access. This preserved live artifacts.
- Snapshot before remediation. Full disk images before any credential rotation or system changes (see the imaging sketch after this list).
- Notify vendor security team in parallel. They confirmed their support infrastructure had been compromised. Eleven other clients were affected.
- Map the blast radius. Using the session logs, we reconstructed exactly which directories, which files, which client records were accessed. This took four days.
- Staged credential rotation. Rotating everything at once in an active environment causes outages. We sequenced rotations across 72 hours.
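For the snapshot step, a forensically defensible image means reading the raw device and hashing the result so integrity can be proven later. A minimal sketch with standard tools; the device and output paths are placeholders.

# Image the compromised server's disk before any remediation touches it.
# /dev/sdb and the evidence paths are hypothetical placeholders.
dd if=/dev/sdb of=/mnt/evidence/fileserver01.img bs=4M conv=noerror,sync status=progress
# Hash the image so its integrity can be verified later.
sha256sum /mnt/evidence/fileserver01.img > /mnt/evidence/fileserver01.img.sha256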
The Client Relationship Problem Is Harder Than the Technical Problem
This is what the incident response playbooks don't tell you: notifying clients of a breach involving their financial data, when you're a firm whose entire value proposition is trust, is existentially dangerous.
The firm had approximately 340 business clients whose data may have been accessed. Legal counsel advised notification within the window mandated by their state's breach-notification law. We advised being specific about what was and wasn't accessed, rather than issuing a vague blanket notification.
The session logs saved them. Because we could reconstruct precisely which client folders were accessed in which sessions, the firm was able to notify clients with specific detail:
"Your 2021 and 2022 tax filing records were accessed in three sessions between April and June 2023. The following file types were involved..."
Specificity is uncomfortable. It's also the only thing that preserves credibility. Clients who received vague "we may have had an incident" letters were more likely to churn. Clients who received detailed, honest disclosures with a clear remediation timeline largely stayed.
What Should Have Prevented This
The hard truth: this breach was preventable with controls the firm hadn't prioritized.
- Vendor access reviews. A quarterly audit of all active third-party remote-access agents would have caught an agent with no recent legitimate sessions.
- Session logging with anomaly baselining. If you authorize vendor remote access, log every session and baseline what "normal" looks like.
- Just-in-time access. Rather than a persistent always-on agent, require vendors to request access for a defined window. The agent activates, the session occurs, the access closes.
- Outbound DLP rules. Even basic data-volume thresholds per destination IP would have flagged this within weeks, not months (see the sketch below).
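That last control is the cheapest to prototype. Here's a minimal sketch that sums outbound bytes per destination IP across the whole log window, so low-and-slow transfers accumulate instead of hiding under per-session thresholds. The field positions match the earlier firewall-log query, and the 5 GB threshold is an arbitrary illustration; adjust both for your environment.

# Sum outbound bytes per destination IP across the entire log window,
# then flag any destination whose cumulative total exceeds a threshold.
# Assumes destination IP in $8 and byte count in $12, as in the earlier
# query; the 5e9 (5 GB) threshold is an illustrative placeholder.
awk '/PERMIT/ { bytes[$8] += $12 }
     END { for (ip in bytes) if (bytes[ip] > 5e9) print ip, bytes[ip], "bytes" }' firewall.log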
If you're evaluating your own third-party access posture and aren't sure where to start, the advisory team at www.asktechgurus.com has published vendor risk frameworks specifically designed for professional services firms managing sensitive client data.
The Lesson That Doesn't Age
Supply chain attacks succeed not because defenders are incompetent, but because they extend trust to familiar names and stop asking questions. The remote-access agent was legitimate. The vendor was legitimate. The traffic looked legitimate.
The attackers exploited the gap between "authorized" and "audited."
Every tool you've permitted into your environment is a door. The question isn't whether you trust whoever gave you the key — it's whether you're watching who walks through.
Eleven months is a long time for a door to be open.