Audit logs and compliance: what gets recorded and why
Published: April 21, 2026
Category: Security · Compliance
Reading time: 15 minutes
Author: NEXUS AI Team
A production incident happens at 2:47 AM. You wake up to alerts. By the time you open your laptop, the question isn't "what failed" — your monitoring already told you that. The question is "who changed what, when, and from where?"
Without audit logs, that question takes hours. With audit logs, it takes minutes.
NEXUS AI records 38 distinct event types across 8 categories — every authentication attempt, every secret operation, every deployment action, every permission change. This post covers exactly what gets recorded, what each severity level means, how retention works, and how the audit log maps to the compliance controls your team, auditors, and regulators care about.
Why audit logs exist — and why most teams underinvest in them
Audit logs are not a debugging tool. They are an accountability system.
The difference matters. A debugging tool helps you understand why software broke. An accountability system answers a harder set of questions:
- Did an authorized person take this action?
- From a recognized location?
- At a time that makes sense?
- On the resource they were supposed to touch?
These are the questions a security incident forces you to answer — and the questions compliance auditors ask during a review. Every minute you spend reconstructing context from application logs and git history is a minute your accountability system didn't pay for itself.
NEXUS AI's audit log is designed to answer accountability questions in seconds, not hours.
The event catalog: 38 event types, 8 categories
Every event stored in the audit log has a named type. Here is the complete catalog.
Authentication events
| Event | What triggered it | Default severity |
|---|---|---|
| LOGIN_SUCCESS | Successful sign-in via password or OAuth | INFO |
| LOGIN_FAILED | Failed sign-in attempt (wrong password, invalid token) | WARNING |
| LOGOUT | Explicit sign-out | INFO |
| REGISTER | New account created | INFO |
| TOKEN_REFRESH | Session token refreshed | INFO |
| TOKEN_EXPIRED | Session token expired and was rejected | WARNING |
Authentication events are the first line of accountability. A single LOGIN_FAILED is noise. Fifteen LOGIN_FAILED events from the same IP in 90 seconds is a brute-force signal — recorded, flagged, and available for your SIEM in real time.
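What that brute-force rule looks like in practice can be sketched in a few lines. This assumes a CSV export with `timestamp,eventType,ipAddress` columns — the sample rows and the threshold of 3 are illustrative, not real data:

```shell
# Hypothetical sample rows from a CSV export: timestamp,eventType,ipAddress
events='2026-04-21T02:47:01Z,LOGIN_FAILED,203.0.113.7
2026-04-21T02:47:09Z,LOGIN_FAILED,203.0.113.7
2026-04-21T02:47:15Z,LOGIN_FAILED,203.0.113.7
2026-04-21T02:48:00Z,LOGIN_SUCCESS,198.51.100.4'

# Count LOGIN_FAILED events per source IP; flag any IP at or above the threshold
echo "$events" | awk -F, '
  $2 == "LOGIN_FAILED" { n[$3]++ }
  END { for (ip in n) if (n[ip] >= 3) print ip, n[ip] }'
```

A real SIEM rule would additionally window by time, but the shape — group failures by source IP, alert above a threshold — is the same.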
Deployment events
| Event | What triggered it | Default severity |
|---|---|---|
| DEPLOYMENT_CREATED | New deployment provisioned | INFO |
| DEPLOYMENT_UPDATED | Deployment configuration changed | INFO |
| DEPLOYMENT_STARTED | Stopped deployment brought back online | INFO |
| DEPLOYMENT_STOPPED | Running deployment halted | INFO |
| DEPLOYMENT_DELETED | Deployment permanently removed | INFO |
| DEPLOYMENT_FAILED | Build or container start failed | ERROR |
| DEPLOYMENT_SCALED | Replica count changed | INFO |
Every deploy action is tied to the actor who triggered it — a user email, a token ID, or both. When a deploy fires at 3 AM, you know whether it was a CI token, a scheduled job, or a human who shouldn't have been working at 3 AM.
Security events
| Event | What triggered it | Default severity |
|---|---|---|
| DOCKERFILE_VALIDATION_FAILED | Submitted Dockerfile failed safety checks | WARNING |
| RATE_LIMIT_EXCEEDED | API caller exceeded request rate limits | WARNING |
| UNAUTHORIZED_ACCESS | Request rejected due to insufficient permissions | WARNING |
| SUSPICIOUS_ACTIVITY | Behavioral anomaly detected by the security monitor | CRITICAL |
| CONTAINER_ESCAPE_ATTEMPT | Container process attempted to break isolation | CRITICAL |
SUSPICIOUS_ACTIVITY and CONTAINER_ESCAPE_ATTEMPT are the two events that trigger an immediate alert. They are never demoted to WARNING or lower.
Resource events
| Event | What triggered it | Default severity |
|---|---|---|
| RESOURCE_LIMIT_EXCEEDED | Deployment exceeded its CPU, memory, or storage limit | WARNING |
| HIGH_CPU_USAGE | Container CPU usage crossed threshold | WARNING |
| HIGH_MEMORY_USAGE | Container memory usage crossed threshold | WARNING |
Resource events are included in the audit log — not just the metrics system — because they tell the story of capacity-related incidents. A deployment that gets quietly OOM-killed at 4 PM on a Tuesday has a paper trail.
Administrative events
| Event | What triggered it | Default severity |
|---|---|---|
| USER_CREATED | New team member account created or invited | INFO |
| USER_DELETED | Team member removed from the organization | INFO |
| PERMISSION_CHANGED | Role or scope assignment changed | INFO |
| CONFIG_CHANGED | Organization-level configuration updated | INFO |
Administrative events answer the access-review question: not just who has access now, but who granted it, when, and to whom. Every PERMISSION_CHANGED event records the before and after state in the details JSON field.
Project events
| Event | What triggered it | Default severity |
|---|---|---|
| PROJECT_CREATED | New project created | INFO |
| PROJECT_UPDATED | Project metadata or settings changed | INFO |
| PROJECT_DELETED | Project and all its deployments deleted | INFO |
Secret and vault events
| Event | What triggered it | Default severity |
|---|---|---|
| SECRET_CREATED | New secret stored in the vault | INFO |
| SECRET_UPDATED | Secret value changed | INFO |
| SECRET_ROTATED | Secret rotated (new value, same name) | INFO |
| SECRET_DELETED | Secret removed from the vault | INFO |
| SECRET_REVEALED | Secret value decrypted for display | WARNING |
| SECRET_LISTED | Secret names listed (values not returned) | INFO |
| SECRET_RUNTIME_ACCESSED | Secret decrypted for container injection at deploy time | INFO |
SECRET_REVEALED is marked WARNING by design. The vault never returns plaintext values through normal operations — if a SECRET_REVEALED event fires, an admin explicitly requested a value display. That is worth noting.
SECRET_RUNTIME_ACCESSED records every time a secret is decrypted for injection into a running container. On a high-frequency redeploy environment, this creates a complete timeline of which secrets were active in which container instances.
Database intelligence events
| Event | What triggered it | Default severity |
|---|---|---|
| DATABASE_ACCESSED | External database connection established | INFO |
| DATABASE_MODIFIED | Schema change or DDL applied to external database | WARNING |
| DATABASE_QUERY_EXECUTED | SQL query executed against an external database source | INFO |
DATABASE_MODIFIED is promoted to WARNING because schema changes are high-impact and difficult to reverse. Every DDL statement executed through NEXUS AI's Database Intelligence layer — whether applied directly or via a proposed fix — produces a record.
The audit log record
Every event writes a single record to the audit_logs table. Here is what that record looks like in full:
```json
{
  "id": "f3c8a21b-4d9e-4a7f-b6c1-e2d8f0a3b591",
  "eventType": "DEPLOYMENT_CREATED",
  "severity": "INFO",
  "userId": "usr_01HX9...",
  "organizationId": "org_01HX9...",
  "ipAddress": "140.82.114.3",
  "userAgent": "nexusapp-cli/2.0.0 node/20.11.0",
  "resourceId": "dep_api-prod",
  "resourceType": "deployment",
  "action": "DEPLOYMENT_CREATED",
  "details": {
    "deploymentName": "api-prod",
    "image": "ghcr.io/org/api:sha-abc123",
    "region": "us-east-1",
    "provider": "AWS_APP_RUNNER",
    "tokenId": "tok_01HX9..."
  },
  "success": true,
  "errorMessage": null,
  "timestamp": "2026-04-21T09:17:05.000Z"
}
```
Every field is intentional:
| Field | Why it exists |
|---|---|
| id | Unique record identifier — stable reference for incident tickets |
| eventType | Machine-readable event name — filterable, indexable, SIEM-parseable |
| severity | INFO / WARNING / ERROR / CRITICAL — drives alerting and dashboards |
| userId | The human actor (null if action was taken by a token with no session) |
| organizationId | Tenant boundary — logs are always org-scoped; cross-tenant reads are impossible |
| ipAddress | Source IP of the request — critical for geolocation anomaly detection |
| userAgent | CLI version, browser, SDK — surfaces automation vs. human access |
| resourceId | The specific resource acted upon — deployments, secrets, users |
| resourceType | The category of resource — enables filtering by type |
| action | Human-readable description of what happened |
| details | Freeform JSON — event-specific context (image tag, region, old role, new role) |
| success | Whether the action completed successfully |
| errorMessage | If success is false, why it failed |
| timestamp | UTC timestamp of the event — stored with millisecond precision |
The details field is where event-specific context lives. A PERMISSION_CHANGED record includes the old role and new role. A SECRET_RUNTIME_ACCESSED record includes the deployment ID and the name (not value) of the secret. A LOGIN_FAILED record includes the email attempted.
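For instance, a PERMISSION_CHANGED record's details payload might look like the following. This is an illustrative sketch — the exact key names inside details vary by event type and are not guaranteed by the API:

```json
{
  "eventType": "PERMISSION_CHANGED",
  "severity": "INFO",
  "details": {
    "targetUserId": "usr_01HXA...",
    "oldRole": "DEVELOPER",
    "newRole": "ADMIN"
  }
}
```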
Severity levels
NEXUS AI uses four severity levels. They control how an event is stored, surfaced, and retained.
INFO
Normal system operation. Every successful deploy, login, and secret list operation lands here. INFO events are written to the database and available for query, but they do not trigger any alert. They form the baseline — the record of what "normal" looks like.
WARNING
Something worth noting. Failed logins, rate limit hits, secret reveal operations, and Dockerfile validation failures are WARNING events. They do not indicate a breach, but they indicate conditions that — in volume or combination — warrant investigation. Five LOGIN_FAILED events is noise. Fifty in ten minutes is a pattern your SIEM should surface.
ERROR
An action failed in a way that requires attention. DEPLOYMENT_FAILED is an ERROR. These events are logged to the application error stream in addition to the database, so they appear in your observability pipeline immediately.
CRITICAL
Immediate action required. Only two event types default to CRITICAL: SUSPICIOUS_ACTIVITY and CONTAINER_ESCAPE_ATTEMPT. CRITICAL events trigger the alerting pipeline — currently logging to the critical error stream, with email, Slack, and PagerDuty integrations on the roadmap. CRITICAL events are also exempt from the standard 90-day retention purge. They are kept indefinitely, regardless of plan.
The security score
NEXUS AI's security monitor computes a rolling 100-point security score for each organization, recalculated across a configurable window (default: 7 days).
The score starts at 100 and deducts based on event volume:
| Event category | Deduction per occurrence |
|---|---|
| Failed login (LOGIN_FAILED) | −2 points |
| Rate limit exceeded (RATE_LIMIT_EXCEEDED) | −1 point |
| Dockerfile validation failure (DOCKERFILE_VALIDATION_FAILED) | −5 points |
| Critical security event (severity: CRITICAL) | −10 points |
A score of 100 means no adverse events in the review window. A score of 60 means something is worth investigating. A score below 40 should trigger an active security review.
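To make the arithmetic concrete, here is a sketch of the deduction model. The event counts are invented for illustration, and the real score is computed server-side over the configured window:

```shell
# Illustrative event counts for a 7-day window (made-up numbers)
failed_logins=4
rate_limit_hits=10
dockerfile_failures=1
critical_events=0

# Apply the per-event deductions from the table above, flooring at 0
score=$(( 100 - 2*failed_logins - 1*rate_limit_hits - 5*dockerfile_failures - 10*critical_events ))
if (( score < 0 )); then score=0; fi
echo "security score: $score"   # 100 - 8 - 10 - 5 - 0 = 77
```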
The score is visible in the NEXUS AI dashboard under Settings → Security and is available via the API at GET /api/audit/security-summary.
Retention policy
| Plan | Retention window | CRITICAL event retention |
|---|---|---|
| Starter | 90 days | Indefinite |
| Pro | 90 days | Indefinite |
| Enterprise | Configurable (env: AUDIT_LOG_RETENTION_DAYS) | Indefinite |
| Enterprise On-Prem | You own the database | You own the database |
The retention job runs daily at 3:30 AM UTC. It purges records older than the retention window — except CRITICAL events, which are never automatically purged.
On Enterprise, set AUDIT_LOG_RETENTION_DAYS to any positive integer. Regulated industries typically set this to 365 (HIPAA minimum) or 2555 (7-year financial records requirement).
```shell
# Example: 365-day retention for HIPAA workloads
AUDIT_LOG_RETENTION_DAYS=365
```
On Enterprise On-Prem, you bring your own PostgreSQL cluster. Audit logs live in your database, under your retention and backup policies, with no data leaving your infrastructure.
Accessing your audit logs
Dashboard
The audit log viewer is at Settings → Audit Logs in the NEXUS AI dashboard. Filter by event type, severity, date range, and user. Paginated, searchable, exportable.
API
The audit log API is at /api/audit/logs:
```shell
# Get the last 50 logs
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?limit=50"

# Filter by event type
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?eventType=SECRET_UPDATED&limit=100"

# Filter by severity
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?severity=WARNING&limit=100"

# Date range (ISO 8601)
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?startDate=2026-04-01T00:00:00Z&endDate=2026-04-21T23:59:59Z"

# Security summary for the last 7 days
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/security-summary?days=7"
```
CSV export
Export up to 10,000 log records as a CSV file — ready to upload to your SIEM, compliance platform, or auditor portal:
```shell
# Export last 30 days to CSV
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/export?startDate=2026-03-21T00:00:00Z" \
  -o audit-logs-march-2026.csv
```
The CSV includes all fields: timestamp, event type, severity, user ID, IP address, action, success, error message, resource ID, and resource type. The details JSON field is serialized as a string in the CSV export.
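Once exported, the CSV is easy to slice locally before it ever reaches a SIEM. A sketch, assuming the column layout described above — the sample rows and column positions here are illustrative:

```shell
# Hypothetical excerpt of an exported CSV: timestamp,eventType,severity
csv='timestamp,eventType,severity
2026-04-21T09:17:05Z,DEPLOYMENT_CREATED,INFO
2026-04-21T10:02:11Z,LOGIN_FAILED,WARNING
2026-04-21T10:02:19Z,LOGIN_FAILED,WARNING
2026-04-21T11:45:00Z,DEPLOYMENT_FAILED,ERROR'

# Count events per severity, skipping the header row
echo "$csv" | awk -F, 'NR > 1 { n[$3]++ } END { for (s in n) print s, n[s] }' | sort
```

The same one-liner pattern works for counts per event type or per user before handing the file to an auditor.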
Available filter endpoints
| Endpoint | Purpose |
|---|---|
| GET /api/audit/logs | Query logs with filters |
| GET /api/audit/export | Download CSV (up to 10,000 records) |
| GET /api/audit/security-summary | 7-day security summary and score |
| GET /api/audit/security-metrics | Real-time threat metrics |
| GET /api/audit/event-types | List all 38 event types and 4 severity levels |
| GET /api/audit/user-activity/:userId | 30-day activity summary for a specific user |
Audit logs in a real incident
Here is how audit logs actually look during an incident response workflow.
Scenario: At 11:42 PM, a production deployment stops unexpectedly. The on-call engineer opens the audit log.
Step 1: Filter for recent deployment events on the affected resource.
```shell
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?eventType=DEPLOYMENT_STOPPED&limit=10"
```
Result:
```json
{
  "eventType": "DEPLOYMENT_STOPPED",
  "severity": "INFO",
  "userId": null,
  "details": {
    "tokenId": "tok_01HX9...",
    "deploymentName": "api-prod",
    "triggeredBy": "access_token"
  },
  "ipAddress": "198.51.100.22",
  "timestamp": "2026-04-21T23:42:17.000Z"
}
```
No userId — the stop was triggered by an Access Token, not a human session. tokenId identifies which token.
Step 2: Cross-reference the token ID against the token list.
```shell
nexus token list --json | jq '.[] | select(.id == "tok_01HX9...")'
```
Result: The token was named github-actions-prod. It has deploy:write scope. But the GitHub Actions workflow that uses it is only supposed to trigger redeployments, not stops.
Step 3: Pull the full recent activity for that token.
```shell
curl -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?limit=20" | \
  jq '.data.logs[] | select(.details.tokenId == "tok_01HX9...")'
```
Result: The token was used from IP 198.51.100.22. Your CI/CD pipeline runs from 140.82.114.0/24. This IP is outside that range.
The token was compromised. You revoke it immediately, rotate secrets, and have a complete timeline of every action it took — from the first legitimate use to the unauthorized stop. The entire investigation took 11 minutes.
Without audit logs: that same investigation would have required GitHub Actions logs, cloud provider logs, and a manual timeline reconstruction. Best case: 90 minutes.
Compliance mapping
NEXUS AI's audit log maps directly to the access-control and audit requirements in the major compliance frameworks. This is not a marketing table — these are the specific control IDs your auditor will check.
HIPAA
| HIPAA requirement | Control ID | NEXUS AI coverage |
|---|---|---|
| Access control — unique user identification | §164.312(a)(2)(i) | userId on every record; token-based access uses tokenId in details |
| Audit controls — hardware, software, procedural mechanisms | §164.312(b) | 38 event types, append-only log, 90–365 day retention |
| Automatic logoff — session inactivity termination | §164.312(a)(2)(iii) | TOKEN_EXPIRED event records session termination |
| Encryption and decryption — PHI protection | §164.312(a)(2)(iv) | SECRET_RUNTIME_ACCESSED records every secret decryption event |
| Person or entity authentication — verify identity before granting access | §164.312(d) | LOGIN_FAILED events surface failed authentication |
SOC 2 Type II
| SOC 2 criterion | Trust Service Criterion | NEXUS AI coverage |
|---|---|---|
| Logical access controls | CC6.1 | PERMISSION_CHANGED, USER_CREATED, USER_DELETED |
| System access authorization | CC6.2 | LOGIN_SUCCESS, LOGIN_FAILED, UNAUTHORIZED_ACCESS |
| User registration and de-provisioning | CC6.3 | USER_CREATED, USER_DELETED, PERMISSION_CHANGED |
| Restricting access to data in transit | CC6.7 | SECRET_RUNTIME_ACCESSED, DATABASE_ACCESSED |
| Monitoring of system components | CC7.2 | HIGH_CPU_USAGE, HIGH_MEMORY_USAGE, RESOURCE_LIMIT_EXCEEDED |
| Incident detection and reporting | CC7.3 | SUSPICIOUS_ACTIVITY, CONTAINER_ESCAPE_ATTEMPT (CRITICAL, alerted immediately) |
GDPR
| GDPR requirement | Article | NEXUS AI coverage |
|---|---|---|
| Records of processing activities | Art. 30 | Complete event log with timestamp, actor, resource, and outcome |
| Accountability — demonstrate compliance | Art. 5(2) | Append-only log with no modification capability |
| Data access requests — who accessed what | Art. 15 | user-activity/:userId endpoint for per-user activity reports |
| Breach notification — detect and respond within 72 hours | Art. 33 | CRITICAL events alerted immediately; SUSPICIOUS_ACTIVITY surfaces breach signals |
PCI DSS (v4.0)
| PCI DSS requirement | Control | NEXUS AI coverage |
|---|---|---|
| Track and monitor access to cardholder data | Req. 10.2 | DATABASE_ACCESSED, DATABASE_QUERY_EXECUTED for connected payment databases |
| Record user access to audit trails | Req. 10.2.1 | All 38 event types record userId or tokenId |
| Record privileged access | Req. 10.2.1.b | PERMISSION_CHANGED records role elevations |
| Retain audit log history for at least 12 months | Req. 10.7 | Set AUDIT_LOG_RETENTION_DAYS=365 on Enterprise |
| Review logs daily | Req. 10.6 | GET /api/audit/logs with date filter; exportable to SIEM |
Integrating with your SIEM
NEXUS AI does not require a native SIEM integration — the export API and JSON query endpoint are designed to plug into any pipeline.
Datadog:
```shell
# Pull the last hour of WARNING+ events and ship them to Datadog.
# Note: `date -u -v-1H` is BSD/macOS syntax; on GNU/Linux use:
#   date -u -d '1 hour ago' '+%Y-%m-%dT%H:%M:%SZ'
curl -s -H "Authorization: Bearer $NEXUS_API_KEY" \
  "https://nexusai.run/api/audit/logs?severity=WARNING&startDate=$(date -u -v-1H '+%Y-%m-%dT%H:%M:%SZ')" | \
  jq -c '.data.logs[]' | \
  while read -r event; do
    curl -s -X POST "https://http-intake.logs.datadoghq.com/api/v2/logs" \
      -H "DD-API-KEY: $DD_API_KEY" \
      -H "Content-Type: application/json" \
      -d "$event"
  done
```

(The `-c` flag on jq is required here: it emits one compact JSON object per line, which is what the `while read` loop consumes. Without it, jq pretty-prints across multiple lines and the loop posts fragments.)
Grafana / Loki: Ship the JSON response from /api/audit/logs via a log shipper (Promtail, Alloy) configured to poll the endpoint on a scheduled interval.
Splunk / SIEM: Use the CSV export endpoint (/api/audit/export) on a scheduled basis and ingest via Splunk's file monitor input.
On the Enterprise On-Prem plan, audit logs live in your PostgreSQL cluster. Query them directly with any BI or SIEM tool that supports PostgreSQL — no export pipeline required.
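A direct query might look like the following sketch. It assumes the fields of the record shown earlier map to snake_case columns (event_type, ip_address, and so on) — verify the column names against your actual schema before relying on this:

```sql
-- Hypothetical: failed logins per source IP over the last 7 days
SELECT ip_address, COUNT(*) AS failed_attempts
FROM audit_logs
WHERE event_type = 'LOGIN_FAILED'
  AND timestamp > NOW() - INTERVAL '7 days'
GROUP BY ip_address
ORDER BY failed_attempts DESC;
```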
What the audit log does NOT record
Knowing the boundaries of any control is as important as knowing what it covers.
Secret values are never logged. SECRET_RUNTIME_ACCESSED records that a secret named DATABASE_URL was accessed for deployment api-prod. It does not record the value of DATABASE_URL. The plaintext never touches the audit log.
Application-level data is not recorded. NEXUS AI audits actions taken on the platform — deployments, secrets, members, tokens. It does not audit what your application does with the resources it receives. If your app logs process.env.DATABASE_URL at startup, that is an application-level concern, not a platform-level audit event.
Read operations on deployments are not individually logged. Viewing a deployment's status in the dashboard does not produce an audit event. Audit events capture state changes and security-relevant reads (secrets, databases). Routine dashboard reads would generate millions of low-value INFO events per day on active organizations.
Container stdout/stderr is not the audit log. Build logs and runtime logs are separate from the audit log. They are accessible via GET /api/deployments/:id/logs and stored in the observability layer, not in audit_logs.
Checklist: audit log hygiene
- [ ] Review `GET /api/audit/security-summary` weekly — if the score drops below 80, investigate
- [ ] Set `AUDIT_LOG_RETENTION_DAYS=365` for HIPAA, financial, or PCI workloads (Enterprise plan)
- [ ] Export monthly CSVs to your compliance archive before the quarterly close
- [ ] Pull `GET /api/audit/user-activity/:userId` for every departing team member as part of offboarding
- [ ] Filter for `PERMISSION_CHANGED` events during quarterly access reviews — verify every role change was intentional
- [ ] Confirm no `SECRET_REVEALED` events in the last 30 days unless explicitly authorized
- [ ] Set up a SIEM pipeline for `severity=WARNING` events if your team size exceeds 10 engineers
- [ ] On Enterprise On-Prem: verify your PostgreSQL backup schedule includes the `audit_logs` table
- [ ] Document your retention period in your security policy before your next audit
Frequently asked questions
Can I delete an audit log entry?
No. Audit log records are append-only. There is no API endpoint to delete individual records. The retention job purges records older than the retention window — but only non-CRITICAL events, and only automatically. This immutability is intentional: an audit log you can edit is not an audit log.
Who can access audit logs?
Audit log access requires the audit.read org permission, which is granted to OWNER and ADMIN roles. Developer and lower roles cannot query the audit log. This prevents a Developer from inspecting what other users have done — audit visibility is a privilege, not a default.
Does the audit log capture API calls made by NEXUS AI MCP tools (Claude agents)?
Yes. MCP tool calls go through the same API layer as CLI and dashboard actions. They produce audit events with the token ID in details.tokenId and a userAgent that identifies the MCP client. You can filter for agent-originated actions by querying for the specific token ID issued to your MCP integration.
What happens to audit logs if I downgrade my plan?
Logs are not deleted on plan downgrade. If you downgrade from Enterprise (365-day retention) to Pro (90-day retention), the retention job will begin purging records older than 90 days on its next daily run. Export before downgrading if you need records beyond the 90-day window.
Is the audit log encrypted at rest?
Yes. The audit_logs table lives in the same PostgreSQL database as the rest of your organization's data. The database is encrypted at rest using AES-256. On Enterprise On-Prem, encryption at rest is your infrastructure team's responsibility.
Can I receive real-time alerts for specific event types?
CRITICAL events (SUSPICIOUS_ACTIVITY, CONTAINER_ESCAPE_ATTEMPT) trigger the alert pipeline immediately. For custom alerting on other event types — for example, an alert any time PERMISSION_CHANGED fires — the current path is to poll /api/audit/logs via your SIEM and configure alert rules there. Native webhook delivery for specific event types is on the product roadmap.
What's next
The audit log is available on every NEXUS AI plan, including Starter at $29/mo. The 90-day retention window, CSV export, and full API access ship on all plans. Configurable retention windows (AUDIT_LOG_RETENTION_DAYS) and indefinite CRITICAL event retention are available on Enterprise and Enterprise On-Prem.
If you are working through a HIPAA Business Associate Agreement, SOC 2 audit, or PCI self-assessment questionnaire, the compliance table in this post maps directly to the evidence your auditor needs. The CSV export at /api/audit/export is the artifact.
For regulated workloads, healthcare data, or financial applications, the Enterprise plan adds configurable retention, dedicated security review, and SAML SSO. Reach out at nexusai.run.
Related reading:
- Stop shipping secrets. Start using a vault.
- RBAC deep dive: roles, scopes, and least privilege
- MCP integration: 37 tools for Claude and AI agents
- How NEXUS AI deploys your app in under 5 minutes
What you can't see, you can't defend. What you can't prove, you can't audit.