
Joe Gellatly

Posted on • Originally published at medcurity.com

The 2026 HIPAA Security Rule Checklist for Engineers at Small Healthcare Orgs

If you build or run the tech stack for a clinic, FQHC, community health center, critical access hospital, ASC, or any small/mid-size healthcare organization, the 2026 HIPAA Security Rule amendments are the first meaningful update in two decades. Most of the public commentary has been about "encryption is now required" — true, but not the whole story. This is the engineer's version.

The one-paragraph summary

The 2026 amendments promote most previously-"addressable" Security Rule specifications to required. The practical effect: you need encryption everywhere ePHI lives or moves, MFA on every system that touches ePHI, a twice-yearly vulnerability-scanning cadence plus annual penetration testing, a 72-hour breach-reporting pipeline to OCR for any breach affecting 500+ individuals, and a written, current asset inventory that ties every system back to your risk analysis. None of these are revolutionary on their own — but getting all seven right, documented, and defensible is a real engineering effort.

The seven pillars

1. Encryption — everywhere

What's required: ePHI encrypted at rest and in transit, using NIST-recognized cryptographic standards (FIPS 140-3 modules where feasible).

What this actually means:

  • Databases: TDE on SQL Server/Postgres/MySQL, or equivalent.
  • Object storage: SSE-KMS for S3, Customer-Managed Keys for Azure Blob, CMEK for GCS.
  • Endpoints: BitLocker / FileVault / LUKS on every device with potential ePHI access.
  • Backup: encrypted at rest AND in transit; check your backup tool's actual settings.
  • Fax / scan-to-email bridges: end-to-end encryption, not just transport TLS.
  • Archived data: often the biggest miss. Tape archives and legacy backups frequently sit unencrypted.
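For cloud object storage, the "or equivalent" bar is checkable programmatically. As a sketch: boto3's `s3.get_bucket_encryption` returns a documented response shape, and a small stdlib-only helper can decide whether a bucket's default encryption is actually SSE-KMS rather than plain SSE-S3 (the helper name and the idea of sweeping buckets this way are mine, not from the rule text):

```python
# Sketch: given the response dict from boto3's s3.get_bucket_encryption
# (shape per the AWS API docs), decide whether the bucket's default
# encryption rule is SSE-KMS ("aws:kms") rather than SSE-S3 ("AES256").

def bucket_uses_kms(enc_config: dict) -> bool:
    """True if a default-encryption rule specifies SSE-KMS."""
    rules = enc_config.get("ServerSideEncryptionConfiguration", {}).get("Rules", [])
    return any(
        r.get("ApplyServerSideEncryptionByDefault", {}).get("SSEAlgorithm") == "aws:kms"
        for r in rules
    )


# An SSE-S3-only bucket parses as a miss, not a pass:
sse_s3_only = {
    "ServerSideEncryptionConfiguration": {
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    }
}
```

Run this over every bucket in the inventory and you have the "encryption status" column for pillar 6 as a byproduct.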

Engineering gotcha: "encryption in transit" means TLS 1.2+ on every path, including internal east-west traffic in your VPC. If your service mesh carries plaintext between pods, that's a finding.
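On the client side, the TLS 1.2+ floor is one line in Python's standard library — a minimal sketch of a context you'd hand to every outbound connection that can carry ePHI:

```python
import ssl


def strict_tls_context() -> ssl.SSLContext:
    """An SSLContext that refuses anything below TLS 1.2."""
    # create_default_context() already enables certificate and
    # hostname verification; we add an explicit protocol floor.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Pair it with a periodic scan of your internal endpoints: any service that only completes a handshake below the floor is the plaintext-between-pods finding above, made visible.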

2. MFA — no exceptions

What's required: MFA on any system that creates, receives, maintains, or transmits ePHI.

The breakdown by system class:

  • EHR / PM / LIS / RIS: MFA mandatory. Most modern vendors support it; the work is enforcement and enrollment tracking.
  • Remote access: VPN + MFA. No more split-tunnel exception lists.
  • Cloud admin: IAM with MFA, no console-root users without hardware MFA.
  • Email: MFA mandatory. O365/Google Workspace conditional access policies.
  • Shared workstations (nursing stations, pre-op, front desk): this is the hardest part. Most real-world implementations use proximity badges + PIN with short session timeouts. Design this before audit, not during.
  • Credentialed-but-not-employed clinicians: the same MFA standard applies, even though they're 1099 contractors rather than employees.

Engineering gotcha: service accounts that touch ePHI need documented MFA equivalents (key rotation, conditional access, secrets management). "This is a service account so MFA doesn't apply" is not a defensible answer.
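One concrete way to produce the human side of that evidence: AWS IAM's credential report is a CSV with (among others) `user`, `password_enabled`, and `mfa_active` columns. A stdlib-only sketch that flags console-enabled users without MFA, while skipping service accounts (which you then cover via the documented equivalents above):

```python
import csv
import io


def users_missing_mfa(credential_report_csv: str) -> list[str]:
    """From an AWS IAM credential report (CSV text), list users who can
    log in to the console but have no MFA device enrolled.

    Rows with password_enabled == "false" (service accounts) are skipped;
    those need key rotation / secrets-management evidence instead.
    """
    reader = csv.DictReader(io.StringIO(credential_report_csv))
    return [
        row["user"]
        for row in reader
        if row["password_enabled"] == "true" and row["mfa_active"] == "false"
    ]


# Trimmed example report (the real report has more columns;
# DictReader doesn't care):
report = (
    "user,password_enabled,mfa_active\n"
    "alice,true,true\n"
    "bob,true,false\n"
    "svc-etl,false,false\n"
)
```

Running this on a schedule and keeping the output is exactly the kind of enrollment-tracking artifact an auditor asks for.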

3. Twice-yearly vulnerability scanning

What's required: Formal vulnerability scanning at least twice a year, documented, with findings tied back to the risk analysis.

What "formal" means:

  • Scope includes every ePHI-handling system (apps, infrastructure, and the infrastructure the apps run on).
  • Authenticated scans where feasible, not just unauthenticated perimeter checks.
  • Output is a written report with findings, severity, and remediation owner.
  • Findings get closed out or accepted with documented justification.

Tooling: commercial scanners (Qualys, Tenable, Rapid7) or managed offerings from security vendors. Open-source options (OpenVAS) work if you have the ops discipline.
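Whatever scanner you pick, the "closed out or accepted with documented justification" step is yours to operationalize. A minimal sketch of a findings register with severity-based remediation SLAs (the SLA numbers are illustrative policy choices, not from the rule):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical remediation SLAs by severity, in days — tune to your policy.
SLA_DAYS = {"critical": 15, "high": 30, "medium": 90, "low": 180}


@dataclass
class Finding:
    title: str
    severity: str   # critical / high / medium / low
    owner: str      # remediation owner — the "written report" bar above
    found_on: date
    closed: bool = False


def overdue(findings: list[Finding], today: date) -> list[Finding]:
    """Open findings past their severity-based SLA."""
    return [
        f for f in findings
        if not f.closed
        and today > f.found_on + timedelta(days=SLA_DAYS[f.severity])
    ]
```

The point isn't the data structure; it's that every finding has an owner and a date, so "tied back to the risk analysis" is a query, not an archaeology project.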

4. Annual penetration testing

What's required: At least one formal penetration test per year, scoped to cover ePHI-handling systems.

Scope baseline for a small healthcare org: external perimeter, the identity perimeter (O365/Workspace), the EHR and its patient portal, any web applications you own, and the VPN/remote-access infrastructure. For larger orgs, add internal network, cloud, and application-layer testing.

Engineering gotcha: don't conflate vulnerability scanning with penetration testing. A scan enumerates known CVEs. A pen test is a human trying to break in. OCR expects both.

5. 72-hour breach reporting

What's required: For breaches affecting 500+ individuals, OCR notification within 72 hours of discovery (tighter than the pre-2026 60-day rule).

Operational implication: the 72-hour clock starts when the organization discovers the breach, not when investigation concludes. You need:

  • A monitored intake path for suspected-breach reports.
  • A triage process that moves from "suspected" to "confirmed" within 24 hours.
  • Documented legal and PR review in parallel, not sequentially.
  • A pre-drafted OCR notification template with fillable scope/affected-count fields.
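The deadline arithmetic itself is trivial, which is the point — hard-code it into the incident tooling so nobody recomputes it under pressure. A sketch (timezone-aware on purpose, since discovery and notification may happen in different offices):

```python
from datetime import datetime, timedelta, timezone

OCR_WINDOW = timedelta(hours=72)


def ocr_deadline(discovered_at: datetime) -> datetime:
    """OCR notification deadline for a 500+ breach.

    The clock runs from discovery, not from the end of the
    investigation — so compute and display this at intake.
    """
    return discovered_at + OCR_WINDOW


discovered = datetime(2026, 3, 10, 14, 30, tzinfo=timezone.utc)
# Notification is due at discovered + 72h, whether or not triage is done.
```

Surface this timestamp in the intake ticket itself, so the 24-hour triage target and the legal/PR review both run against a visible countdown.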

For breaches affecting fewer than 500 individuals, the annual HHS notification rule still applies; the 72-hour accelerated timeline is specific to the large-breach path.

6. Written asset inventory

What's required: A current, written inventory of every system that creates, receives, maintains, or transmits ePHI, tied back to the risk analysis.

What "current" actually means: updated whenever a system is added, removed, or materially changed. Point-in-time CMDB snapshots aren't enough — the inventory has to be maintained.

Minimum inventory fields:

  • System name
  • Type (EHR, PM, LIS, RIS, email, file storage, etc.)
  • Vendor
  • Owner (technical + business)
  • Data classification (does it touch ePHI?)
  • Encryption status (at rest, in transit)
  • MFA status
  • Backup / DR arrangement
  • BAA status (if vendor-hosted)
  • Last risk-analysis coverage date
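"Maintained" is easiest to enforce at intake: reject a half-filled row before it enters the inventory. A stdlib-only sketch (the field names mirror the list above; they're my column-naming choices, not mandated ones):

```python
# Minimum inventory fields, per the list above — adjust names to taste.
REQUIRED_FIELDS = {
    "system_name", "type", "vendor",
    "owner_technical", "owner_business",
    "touches_ephi", "encrypted_at_rest", "encrypted_in_transit",
    "mfa_enforced", "backup_dr", "baa_status", "last_sra_date",
}


def missing_fields(record: dict) -> set[str]:
    """Fields a proposed inventory record still needs before it's accepted."""
    return {f for f in REQUIRED_FIELDS if record.get(f) in (None, "")}
```

Wire this into whatever change process adds or modifies systems (ticket template, CI check on an inventory-as-code repo), and "current" stops depending on anyone's memory.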

7. Documented, up-to-date risk analysis

What's required: A Security Risk Analysis (SRA) that is current (annually at a minimum, plus after material changes) and covers every ePHI-handling system, site, and vendor relationship.

What it isn't: a generic checklist. OCR has repeatedly taken action against organizations whose SRA was templated, stale, or not tied to actual systems and workflows.

What it is:

  1. Scope definition (every ePHI system, every site, every BAA-covered vendor).
  2. Threat and vulnerability analysis.
  3. Likelihood and impact rating per identified risk.
  4. Current controls and residual risk.
  5. A risk management plan with owned, dated remediation steps.
  6. Evidence that the plan is actually being executed.
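Step 3 is where templated SRAs usually fall apart: every risk needs an actual rating, not a copied one. A sketch of the common likelihood x impact scoring approach (the 1-5 scale and the band thresholds are illustrative conventions, not prescribed by the rule):

```python
def risk_score(likelihood: int, impact: int) -> str:
    """Qualitative rating from 1-5 likelihood and 1-5 impact.

    Bands (illustrative): product >= 15 is high, >= 6 is medium,
    else low. Pick thresholds once, document them, apply uniformly.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

What matters to OCR is less the exact scale than that the same documented scale is applied to every identified risk, and that the high ratings visibly drive the remediation plan in steps 5 and 6.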

The 48-hour engineering readiness check

If OCR opened a compliance review tomorrow, could you produce, within 48 hours:

  • [ ] A current SRA with a risk management plan and dated remediation owners
  • [ ] An asset inventory showing every ePHI-handling system, its encryption status, and its MFA status
  • [ ] Evidence of the most recent vulnerability scan (date, tool, scope, findings, remediation)
  • [ ] Evidence of the most recent penetration test (date, scope, findings, remediation)
  • [ ] A signed BAA for every vendor in your inventory that touches PHI
  • [ ] Training records for every current employee, with attestations and dates
  • [ ] A 72-hour incident-response playbook (triage path, template OCR notification, legal review)

A "no" or "I'm not sure" on any of those is a gap worth closing before Q3 2026.

Where to go deeper


If you're the engineer on the hook for making all seven pillars real, pick the weakest one, ship documentation for it this month, and rotate through the others. Don't try to turn the whole ship at once — the SRA is the right anchor, because everything else hangs off it.


Disclosure: I'm the founder/CEO of Medcurity, which builds HIPAA compliance software for small and mid-size healthcare organizations. This post is the engineering-focused version of our written guides and isn't legal advice.
