Originally published on arkensec.com
My last SOC 2 Type II kickoff call lasted 82 minutes. The auditor asked for seven specific artifacts in the first ten minutes, and I had four of them. The other three — evidence of vulnerability scan cadence on a defined schedule, documented remediation SLAs with timestamps, and a current third-party penetration test report — cost three weeks and $14,000 to produce mid-engagement.
I've now sat through twelve of these kickoffs, on both sides of the table. The same thing breaks every time.
Evidence velocity, not documentation, is what blocks Series A SaaS from SOC 2 Type II. Closing the velocity gap saves six months and $20K–$50K.
What "evidence velocity" actually means
SOC 2 Type II readiness means your control environment can produce timestamped, auditor-legible evidence of every in-scope control operating consistently across a multi-month observation period — typically six months on a first audit, twelve on subsequent ones.
Readiness is not whether your policies exist. It's whether the artifacts those policies promise can be sampled at any random week and handed over in under five minutes.
Auditors don't review your entire observation period sequentially. They sample. They pick week 12, week 23, week 31, and ask you to produce evidence that each control fired on those specific dates. If you can produce it for week 12 but not week 23, the control fails.
That's the velocity problem. Not "do you have a scanner." But "does the scanner produce a retained, timestamped artifact on a fixed cadence, automatically, every single time."
## Type I vs. Type II: what actually changes
Type I is a snapshot. Type II is a recording. The enterprise buyer blocking your deal wants the recording.
| Dimension | Type I | Type II |
|---|---|---|
| Question answered | Are controls designed correctly today? | Did controls operate effectively over the observation window? |
| Observation period | Point-in-time | 3–12 months (6 is typical first audit) |
| Evidence required | Policies, configs, sample artifacts | Continuous artifacts across the full window |
| Auditor sampling | Once | Random weeks across the period |
| Buyer credibility | Limited | Required for most enterprise procurement |
| Typical cost | $10K–$20K | $25K–$60K (audit fee + readiness + tooling) |
| Failure mode | Missing policy | Missing artifact for sampled week |
Enterprise procurement teams stopped accepting Type I as a substitute around 2022. If you're going through the work, plan for Type II from day one.
## The three Trust Services Criteria that eat Series A teams
The AICPA publishes five Trust Services Criteria: Security (mandatory), Availability, Processing Integrity, Confidentiality, and Privacy. Almost every Series A SaaS scopes the first audit to Security only — right call. Inside Security, auditors walk roughly sixty points of focus across nine Common Criteria.
Three of those produce most of the pain:
- CC6.1 — logical and physical access controls
- CC7.1 — detection of security events
- CC7.2 — system monitoring for vulnerabilities and malicious code
Every other criterion has a policy answer. These three demand continuous evidence. That's where readiness dies.
First readiness assessments for 25-person SaaS teams typically return 40–80 gaps. Roughly three-quarters follow the same pattern: the control is written into the policy, it's occasionally enforced in practice, and nobody has collected evidence of continuous operation across the observation window.
## What auditor-acceptable evidence looks like (with examples)
A useful rule: if you can't produce the evidence in under five minutes from a cold start, it doesn't exist for audit purposes.
| Format that works | Format that fails |
|---|---|
| Immutable logs with timestamps (CloudTrail, GitHub audit log, Okta system log to a dated S3 bucket) | "We have logging on" |
| Tool-generated reports with scan-date header, retained for the full observation window | Screenshots saved to someone's laptop |
| Ticket artifacts with state transitions (Jira/Linear: opened → assigned → remediated → closed, each timestamped) | "We can regenerate the report" |
| Reviewer-signed artifacts (named reviewer clicks approval on a defined schedule) | A Slack thumbs-up emoji |
| Cron-driven scan output landing in a write-once retention bucket on a fixed cadence | "We scan when we deploy" |
Let me show you what the right side of that table looks like in practice.
### Wiring CloudTrail to S3 with immutable retention
This is the foundation for CC6.1 and CC7.1 evidence. CloudTrail logs API calls; the S3 bucket with Object Lock makes them tamper-evident.
```bash
# Create a dedicated audit evidence bucket with versioning + Object Lock
aws s3api create-bucket \
--bucket your-company-soc2-evidence \
--region us-east-1 \
--object-lock-enabled-for-bucket
# Enable versioning (required for Object Lock)
aws s3api put-bucket-versioning \
--bucket your-company-soc2-evidence \
--versioning-configuration Status=Enabled
# Set a default retention policy (COMPLIANCE mode, 365 days)
aws s3api put-object-lock-configuration \
--bucket your-company-soc2-evidence \
--object-lock-configuration '{
"ObjectLockEnabled": "Enabled",
"Rule": {
"DefaultRetention": {
"Mode": "COMPLIANCE",
"Days": 365
}
}
}'
```
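One prerequisite the bucket commands above don't cover: CloudTrail can't deliver logs until the bucket policy grants it write access, and the create-trail call will fail without it. A minimal sketch of the standard delivery policy, with the account ID as a placeholder:

```bash
# CloudTrail needs explicit permission to write to the evidence bucket.
# Replace 123456789012 with your AWS account ID before applying.
aws s3api put-bucket-policy \
  --bucket your-company-soc2-evidence \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AWSCloudTrailAclCheck",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "s3:GetBucketAcl",
        "Resource": "arn:aws:s3:::your-company-soc2-evidence"
      },
      {
        "Sid": "AWSCloudTrailWrite",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::your-company-soc2-evidence/AWSLogs/123456789012/*",
        "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
      }
    ]
  }'
```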
Then create a CloudTrail trail that writes to that bucket:
```bash
aws cloudtrail create-trail \
--name soc2-audit-trail \
--s3-bucket-name your-company-soc2-evidence \
--include-global-service-events \
--is-multi-region-trail \
--enable-log-file-validation
aws cloudtrail start-logging --name soc2-audit-trail
```
`--enable-log-file-validation` is the part most people skip. It creates SHA-256 digest files that let you prove the logs weren't modified after the fact. Auditors who know what they're looking at will ask for this.
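When the auditor samples a specific week, you can prove integrity for exactly that window with the CLI's digest validation. A quick sketch, with the trail ARN and the sampled dates as placeholders:

```bash
# Validate log file integrity for a sampled week using the digest files.
# The account ID and dates are placeholders; use the week the auditor picked.
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/soc2-audit-trail \
  --start-time 2025-03-17T00:00:00Z \
  --end-time 2025-03-23T23:59:59Z
```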
### Pulling the Okta system log to a dated evidence path
For CC6.1 access reviews, you need the IdP log retained with a timestamp path an auditor can navigate. Here's a minimal Python script that pulls the Okta system log daily and writes it to a dated S3 prefix:
```python
import boto3
import requests
import json
from datetime import datetime, timedelta, timezone

OKTA_DOMAIN = "https://your-org.okta.com"
OKTA_API_TOKEN = "your-api-token"  # use Secrets Manager in prod
S3_BUCKET = "your-company-soc2-evidence"

def pull_okta_log(date: datetime) -> list:
    since = date.replace(hour=0, minute=0, second=0, microsecond=0)
    until = since + timedelta(days=1)
    headers = {
        "Authorization": f"SSWS {OKTA_API_TOKEN}",
        "Accept": "application/json"
    }
    params = {
        "since": since.isoformat(),
        "until": until.isoformat(),
        "limit": 1000
    }
    events = []
    url = f"{OKTA_DOMAIN}/api/v1/logs"
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        events.extend(resp.json())
        # Okta paginates via Link header
        url = resp.links.get("next", {}).get("url")
        params = {}  # params only on first request
    return events

def upload_to_s3(events: list, date: datetime):
    s3 = boto3.client("s3")
    key = f"okta-logs/{date.strftime('%Y/%m/%d')}/system-log.json"
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=key,
        Body=json.dumps(events, indent=2),
        ContentType="application/json"
    )
    print(f"Uploaded {len(events)} events to s3://{S3_BUCKET}/{key}")

if __name__ == "__main__":
    yesterday = datetime.now(timezone.utc) - timedelta(days=1)
    events = pull_okta_log(yesterday)
    upload_to_s3(events, yesterday)
```
Run this as a Lambda on a daily EventBridge schedule. The S3 path structure (okta-logs/2025/01/15/system-log.json) is what makes the auditor's job easy — they can navigate directly to the date they sampled.
```bash
# EventBridge rule to trigger daily at 01:00 UTC
aws events put-rule \
--name okta-log-daily \
--schedule-expression "cron(0 1 * * ? *)" \
--state ENABLED
```
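The rule by itself doesn't invoke anything. Assuming the script above is packaged as a Lambda function (the function name and ARNs below are placeholders), two more calls wire it up:

```bash
# Point the rule at the Lambda that runs the Okta pull (placeholder name/ARNs)
aws events put-targets \
  --rule okta-log-daily \
  --targets 'Id=okta-log-pull,Arn=arn:aws:lambda:us-east-1:123456789012:function:okta-log-pull'

# Allow EventBridge to invoke that function
aws lambda add-permission \
  --function-name okta-log-pull \
  --statement-id okta-log-daily-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/okta-log-daily
```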
## CC7.2: the biggest evidence-velocity gap
CC7.2 requires you to monitor systems for vulnerabilities and respond on a defined timeline. What auditors want:
- A scanning cadence documented in policy (weekly external, monthly internal is the most defensible default)
- A documented remediation SLA with explicit timelines (30 days for critical, 60 for high, 90 for medium is a common, defensible default, consistent with the risk-based prioritization in NIST SP 800-40r4)
- Evidence that findings are triaged against the SLA and closed or risk-accepted in writing, for every scan across the entire window
The CISA Known Exploited Vulnerabilities catalog is the cleanest external benchmark for which findings warrant the tight end of the SLA. If a CVE is in KEV, treat it as critical regardless of your internal CVSS scoring.
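CISA publishes the catalog as a JSON feed, which makes the check easy to fold into triage. A minimal sketch with curl and jq; the CVE ID is only an example:

```bash
#!/bin/bash
# Check whether a CVE appears in the CISA KEV catalog.
# If it does, treat the finding as critical regardless of its CVSS score.
CVE="${1:-CVE-2023-4966}"   # example CVE ID
KEV_URL="https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

if curl -s "${KEV_URL}" | jq -e --arg cve "${CVE}" \
    '.vulnerabilities[] | select(.cveID == $cve)' > /dev/null; then
  echo "${CVE} is in KEV: treat as critical (30-day SLA)"
else
  echo "${CVE} not in KEV: use internal severity"
fi
```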
The failure mode isn't having the scanner. It's running the scanner manually, inconsistently, and never retaining the reports against a sampling schedule.
### Automating vulnerability scan evidence with a cron job and S3
If you're running Nuclei against your external perimeter, here's a minimal wrapper that produces a timestamped, retained artifact on every run:
```bash
#!/bin/bash
# scan-and-retain.sh
# Run this on a fixed cron schedule that matches your written policy exactly.
# If your policy says "weekly every Monday at 02:00 UTC", this cron runs
# weekly every Monday at 02:00 UTC. Not "roughly weekly." Exactly.
set -euo pipefail
TARGET="${1:-your-domain.com}"
BUCKET="your-company-soc2-evidence"
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
DATE_PATH=$(date -u +"%Y/%m/%d")
REPORT_FILE="/tmp/nuclei-${TIMESTAMP}.json"
echo "[+] Starting scan of ${TARGET} at ${TIMESTAMP}"
# Run Nuclei with severity filter and JSON output
nuclei \
-target "https://${TARGET}" \
-severity critical,high,medium \
-json-export "${REPORT_FILE}" \
-silent \
-tags cve,exposure,misconfiguration
# Add scan metadata envelope
METADATA=$(jq -n \
--arg target "$TARGET" \
--arg timestamp "$TIMESTAMP" \
--arg policy_cadence "weekly" \
--arg scanner "nuclei" \
'{
scan_metadata: {
target: $target,
scan_timestamp: $timestamp,
policy_cadence: $policy_cadence,
scanner: $scanner,
soc2_control: "CC7.2",
nist_reference: "SP 800-40r4"
}
}')
# Merge metadata with findings
jq -s '.[0] * {findings: .[1]}' \
<(echo "$METADATA") \
"${REPORT_FILE}" > "/tmp/final-${TIMESTAMP}.json"
# Upload to dated S3 path
aws s3 cp "/tmp/final-${TIMESTAMP}.json" \
"s3://${BUCKET}/vulnerability-scans/${DATE_PATH}/nuclei-${TIMESTAMP}.json"
echo "[+] Evidence retained at s3://${BUCKET}/vulnerability-scans/${DATE_PATH}/"
# Cleanup
rm -f "${REPORT_FILE}" "/tmp/final-${TIMESTAMP}.json"
```
The cron entry, if your policy says weekly on Mondays:
```
0 2 * * 1 /opt/scripts/scan-and-retain.sh your-domain.com >> /var/log/soc2-scans.log 2>&1
```
The metadata envelope is the part most people skip. When an auditor pulls the artifact for week 23, they need to see the scan timestamp, the target, and the control it satisfies — without you having to explain it verbally. Put it in the artifact itself.
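Since the auditor samples random weeks, it's worth running the same sampling against yourself before they do. A rough self-audit sketch (GNU date syntax; the start date is a placeholder for the first Monday of your observation period):

```bash
#!/bin/bash
# Self-audit: confirm a scan artifact landed for every Monday in the window.
BUCKET="your-company-soc2-evidence"
START="2025-01-06"   # placeholder: first Monday of the observation period
WEEKS=26

for ((i = 0; i < WEEKS; i++)); do
  day=$(date -u -d "${START} + $((i * 7)) days" +"%Y/%m/%d")
  if [ -n "$(aws s3 ls "s3://${BUCKET}/vulnerability-scans/${day}/")" ]; then
    echo "OK      ${day}"
  else
    echo "MISSING ${day}"
  fi
done
```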
### Tracking remediation SLAs in Jira with automation
The other half of CC7.2 evidence is showing that findings get triaged and closed within the documented SLA. Here's a Jira automation rule, sketched as importable JSON, that sets a due date on vulnerability tickets based on severity (treat the smart-value expression and the customfield_* keys as placeholders to adapt to your instance):
```json
{
"name": "Set CC7.2 remediation due date by severity",
"trigger": {
"component": "TRIGGER",
"type": "jira.issue.created",
"conditions": [
{
"component": "CONDITION",
"type": "jira.issue.fields.condition",
"field": "labels",
"condition": "CONTAINS",
"value": "vulnerability"
}
]
},
"actions": [
{
"component": "ACTION",
"type": "jira.issue.edit.fields",
"fields": {
"duedate": {
"expression": "{{#if issue.priority.name == 'Critical'}}{{now.plusDays(30)}}{{else if issue.priority.name == 'High'}}{{now.plusDays(60)}}{{else}}{{now.plusDays(90)}}{{/if}}"
},
"customfield_soc2_control": "CC7.2",
"customfield_nist_sla": "SP 800-40r4"
}
}
]
}
```
The state transitions (opened → assigned → remediated → closed) are what the auditor samples. Every transition is timestamped by Jira automatically. The due date field makes SLA compliance visible in a single query:
```
labels = vulnerability
AND duedate < now()
AND status != Done
ORDER BY priority DESC
```
Run this query weekly and screenshot the result to your evidence bucket. Zero results is the evidence. Non-zero results need a risk acceptance artifact.
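A screenshot works, but pulling the raw search result over the REST API is easier to retain on the same cadence as everything else. A sketch for Jira Cloud; the domain, email, and bucket path are placeholders, and the API token should come from your environment or a secrets manager:

```bash
#!/bin/bash
# Export the overdue-vulnerability query result as a dated evidence artifact.
JIRA_DOMAIN="your-org.atlassian.net"
JIRA_EMAIL="security@your-company.com"   # JIRA_API_TOKEN comes from the environment
JQL='labels = vulnerability AND duedate < now() AND status != Done ORDER BY priority DESC'
DATE_PATH=$(date -u +"%Y/%m/%d")

curl -s -u "${JIRA_EMAIL}:${JIRA_API_TOKEN}" \
  -G "https://${JIRA_DOMAIN}/rest/api/3/search" \
  --data-urlencode "jql=${JQL}" \
  --data-urlencode "fields=summary,priority,duedate,status" \
  | aws s3 cp - "s3://your-company-soc2-evidence/jira-sla-checks/${DATE_PATH}/overdue-vulns.json"
```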
## CC7.1: security event detection without a full SOC
CC7.1 requires monitoring for anomalous events with a documented response process. Most Series A teams do this by accident — CloudWatch alarms, Sentry, a dedicated Slack channel, maybe PagerDuty. The gap is usually the absence of a defined severity taxonomy and any artifact showing triage actually happened.
Vercel's April 2026 disclosure is the worst-case version of CC7.1 failing silently: roughly 22 months between OAuth compromise and detection. Same evidence-velocity lesson, different control. I wrote that one up separately.
What works at Series A scale: a lightweight log aggregator (Panther, Wazuh, or a structured SNS → Lambda → S3 pipeline), a one-page severity taxonomy, and a retained artifact (a ticket or an exported alert-channel thread) showing each event was triaged against it.
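If you're already on AWS, one low-effort way to get the detection half of that pipeline producing artifacts is routing managed findings to a topic through EventBridge. GuardDuty here is an assumption, not a requirement; swap the event pattern for whatever your detection source emits. The Lambda that writes each alert to a dated S3 prefix follows the same pattern as the Okta script above.

```bash
# Minimal detection wiring: GuardDuty findings -> EventBridge -> SNS topic.
# GuardDuty is an assumption; adjust the event pattern to your own source.
aws guardduty create-detector --enable

TOPIC_ARN=$(aws sns create-topic --name soc2-security-events \
  --query TopicArn --output text)

aws events put-rule \
  --name soc2-security-findings \
  --event-pattern '{"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}' \
  --state ENABLED

aws events put-targets \
  --rule soc2-security-findings \
  --targets "Id=security-events-topic,Arn=${TOPIC_ARN}"
# Note: the topic's resource policy must also allow events.amazonaws.com to publish.
```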