DEV Community

Luciano Federico Pereira
Code Quality: CQ Risk‑Weighted Assessment Mode

1. The Philosophy: Reality vs. Compliance

In a regulated payment environment, Code Quality (CQ) is often a tug-of-war between two worlds: the Engineering Reality (the daily developer experience) and KPI Governance (the metrics required by regulators like PCI-DSS, PSD2, or ISO 27001).

My approach is to bridge this gap. I do not view quality as a "one-size-fits-all" metric. Instead, I use a Risk-Weighted Scoring Model.

This model acknowledges a hard truth: in payments, a security breach is an existential threat, while a latency spike is merely a degraded experience. Therefore, we do not chase "clean code" in a vacuum; we prioritize the metrics that protect the business license and the user’s funds.

The Core Thesis: "Speed is a feature; Security is a prerequisite. We can scale to fix performance, but we cannot scale to fix a data breach."

2. The Strategy: Risk-Based Weighting

We align our engineering standards with our regulatory environment. The following weighting model ensures that a system cannot achieve a "Passing" grade if it is fast but insecure.

| Category | Weight (PCI-DSS Context) | Why this weight? |
| --- | --- | --- |
| Security | 40%–45% | Highest risk. Non-negotiable for ISO 27001/PCI-DSS. Vulnerabilities here end the business. |
| Integrity | 20%–25% | Financial risk. Prevents fraud, double-spending, and data tampering. |
| Reliability | 15%–20% | Operational risk. Uptime and error handling must be deterministic. |
| Performance | 5%–10% | User experience. Important, but secondary to the safety of funds. |
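Taking the upper bound of each range, the weights sum exactly to 100%, which makes a convenient default configuration. A minimal sketch (the exact split is a policy choice, not mandated by the model):

```python
# Illustrative default weights, taken from the upper bound of each range.
# The only hard rule is that they sum to 1.0, so the weighted total
# stays on the same 0-100 scale as the individual scores.
WEIGHTS = {
    "security": 0.45,
    "integrity": 0.25,
    "reliability": 0.20,
    "performance": 0.10,
}

assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
```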

Refined Scoring Formula

To avoid arbitrary grading, each metric is normalized onto a shared 0–100 scale. This lets us compare apples to oranges—latency, error rates, security findings—without smuggling in hidden biases.

The Normalization Model

For any metric where lower is better (latency, error count, etc.):
$$
\text{Score} = \max\left(0,\ 100 \cdot \left(1 - \frac{\text{Actual} - \text{Target}}{\text{Target}}\right)\right)
$$

  • Hitting the target → 100
  • Missing the target by 10% → 90
  • Missing the target by 50% → 50
  • Catastrophic misses bottom out at 0, not negative values

This keeps the scoring intuitive and prevents a single bad metric from dominating the entire risk profile.

Example

A service has a P95 latency target of 150ms but is currently at 220ms.
$$
\text{Score} = 100 \cdot \left(1 - \frac{220 - 150}{150}\right) = 100 \cdot (1 - 0.4667) = 53.3
$$
Rounded → 53

If this metric carries a 5% weight, its contribution to the overall score is:
$$
53 \times 0.05 = 2.65
$$
This keeps the signal honest: the service *is* slow, but the risk impact is proportionate, unless high-weight categories like Security or Reliability are also degraded.
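The whole calculation fits in a few lines. A sketch in Python (function names are mine, not part of the model):

```python
def normalize(actual: float, target: float) -> float:
    """Score a lower-is-better metric on a 0-100 scale.

    Hitting the target scores 100; each 1% miss costs one point;
    catastrophic misses clamp to 0 instead of going negative.
    """
    return max(0.0, 100.0 * (1.0 - (actual - target) / target))


def weighted_contribution(actual: float, target: float, weight: float) -> float:
    """Round the normalized score for reporting, then apply the weight."""
    return round(normalize(actual, target)) * weight


# P95 latency: target 150 ms, actual 220 ms
score = normalize(220, 150)                      # ~53.3, reported as 53
impact = weighted_contribution(220, 150, 0.05)   # 2.65 points of the total
```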

This allows us to be honest about technical debt without panicking stakeholders: at a 5% weight, the latency miss costs the final score only 2.65 points, provided the Security score remains high.

3. The Balance: SLAs, Reality, and Error Budgets

An SLA is a promise to the customer; Telemetry is the engineering truth. We manage the gap between them using Error Budgets:

  1. Innovation Phase: If Reality > SLA, the team has the "budget" to ship features fast and experiment.
  2. Stabilization Phase: If telemetry shows we are drifting near the SLA Floor, the model triggers a pivot. We stop feature work and move engineering effort to debt reduction and hardening.
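The pivot trigger can be made mechanical. A sketch of the two phases (the 20% threshold and the function names are illustrative policy choices, not part of the model):

```python
def error_budget_remaining(slo: float, observed: float) -> float:
    """Fraction of the error budget still unspent in this window.

    slo: promised success ratio from the SLA, e.g. 0.999
    observed: measured success ratio from telemetry
    """
    budget = 1.0 - slo        # allowed failure ratio
    spent = 1.0 - observed    # actual failure ratio
    if budget <= 0:
        return 0.0
    return max(0.0, 1.0 - spent / budget)


def phase(slo: float, observed: float) -> str:
    # Pivot to debt reduction and hardening once less than 20% of
    # the budget remains (threshold is a team policy choice).
    if error_budget_remaining(slo, observed) > 0.2:
        return "innovation"
    return "stabilization"
```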

4. Implementation: Multi-Stack Consistency

In a microservices environment utilizing Go, FastAPI (Python), and Symfony/Laravel (PHP) on AWS, language consistency is secondary to Telemetry Consistency.

The Stack Strategy

  • Go (Microservices): Focused on high-concurrency throughput.
    • Quality Gate: govulncheck for security, strict context propagation for tracing.
  • FastAPI (Data/ML): Focused on schema integrity.
    • Quality Gate: Pydantic for strict input/output validation (Integrity Score).
  • Symfony/Laravel (BFF/Legacy): Focused on business logic.
    • Quality Gate: PHPStan (Level 8+) and structured logging for audit trails.
  • AWS Infrastructure: The unifying layer.
    • Observability: CloudWatch and X-Ray ingest normalized JSON logs and Trace IDs from all three languages, providing a single "pane of glass" for system health.
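The unifying contract is the log shape, not the language. A minimal Python sketch of what a normalized log line might look like (the field names are an assumption; any schema works as long as all three stacks emit it consistently):

```python
import json
import uuid
from datetime import datetime, timezone


def new_correlation_id() -> str:
    """Minted once at the edge, then propagated via headers/context."""
    return str(uuid.uuid4())


def log_line(correlation_id: str, service: str, level: str,
             message: str, **fields) -> str:
    """Emit one normalized JSON log line that CloudWatch can ingest
    and that can be joined on the correlation id across services."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "correlation_id": correlation_id,
        "service": service,
        "level": level,
        "message": message,
        **fields,
    }
    return json.dumps(record)


cid = new_correlation_id()
print(log_line(cid, "payment-gateway-svc", "INFO", "charge authorized",
               amount_cents=1299))
```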

5. Telemetry: Compliance-Ready Observability

We move beyond "vanity metrics" (like simple uptime) to a maturity model that satisfies PCI-DSS Requirement 10 and PSD2 Auditability.

The Telemetry Checklist

  • Traceability: Every request generates a Correlation ID at the edge, propagated through every Go routine, PHP process, and Python async task.
  • Auditability: Logs are structured (JSON), immutable, and contain User IDs/Context (without logging PII/Secrets).
  • Integrity: We monitor for log-tampering and missing telemetry signals.
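"Context without PII" is easiest to enforce mechanically, before any record is serialized. A sketch of such a redaction pass (the sensitive-field list is hypothetical; a real deployment derives it from its data classification policy):

```python
# Hypothetical field list; in practice this comes from the data
# classification policy, not from code.
SENSITIVE_KEYS = {"pan", "card_number", "cvv", "password", "secret"}


def redact(record: dict) -> dict:
    """Return a copy safe for audit logs: sensitive values are masked,
    nested dicts are handled recursively, everything else passes through."""
    clean = {}
    for key, value in record.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = redact(value)
        else:
            clean[key] = value
    return clean
```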

Sample Quality Report: payment-gateway-svc (Go)

This is an example of the model’s output for a production service.

| Category | Metric (Target vs. Actual) | Score (0–100) | Weight | Weighted Contribution |
| --- | --- | --- | --- | --- |
| Security | 0 Critical Vulns | 100 | 45% | 45.0 |
| Integrity | 100% Schema Validation | 100 | 20% | 20.0 |
| Reliability | 99.9% Uptime (Actual 99.8%) | 99 | 15% | 14.85 |
| Performance | P95: 150ms (Actual 220ms) | 53 | 5% | 2.65 |
| Auditability | 100% Trace ID Propagation | 100 | 15% | 15.0 |
| **TOTAL** | | | | **97.5 / 100 (PASS)** |

Analysis: The service passes because it is secure and auditable. The performance drift (220ms) is noted as technical debt but does not block deployment, as it sits within the Error Budget.
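The report is just a weighted sum of normalized scores; the performance value of 53 follows from applying the normalization formula to the 150 ms target and 220 ms actual. A sketch reproducing the arithmetic (the pass threshold is an illustrative policy choice):

```python
# (score 0-100, weight) per category, as scored for payment-gateway-svc
REPORT = {
    "security":     (100, 0.45),
    "integrity":    (100, 0.20),
    "reliability":  (99,  0.15),
    "performance":  (53,  0.05),
    "auditability": (100, 0.15),
}


def total_score(report: dict) -> float:
    return sum(score * weight for score, weight in report.values())


def grade(report: dict, pass_threshold: float = 80.0) -> str:
    # Threshold is a policy choice; a fast-but-insecure service still
    # fails because Security alone carries 45 points.
    return "PASS" if total_score(report) >= pass_threshold else "FAIL"
```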

6. Conclusion

This work shows that Code Quality is a measurable control surface, not an ideal. By weighting Security and Integrity above all else, we align engineering effort with real risk. Telemetry becomes the verification layer that proves our engineering state matches our business commitments.
