@renato_marinho
PII Protection:
PII redaction is handled at the data layer, not as middleware. Our DB Graph MCP server (detailed in this post) automatically anonymizes query results from production databases — emails become `***@***.com`, names become `***`, and phone numbers, addresses, and card numbers are all masked before they ever reach the LLM context. This runs at both the MCP layer and the Lambda layer (dual validation), so even if one layer is compromised, PII doesn't leak. On the observability side, Grafana's log data is also protected — structured logs are designed to exclude PII fields, and Grafana access itself is scoped via Google OAuth with domain restrictions. Servers that don't touch customer data (infrastructure, CI/CD, documentation) simply don't have access to PII-containing databases in the first place — scope separation by design.
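Roughly, the redaction step looks like this — a simplified TypeScript sketch, where the field names, regexes, and mask formats are illustrative rather than our exact rules:

```typescript
// Simplified sketch of the data-layer redaction step (field names,
// regexes, and mask formats here are illustrative assumptions).
type Row = Record<string, unknown>;

const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const CARD_RE = /\b(?:\d[ -]?){13,16}\b/g;
const PHONE_RE = /\+?\d[\d\s().-]{7,}\d/g;

// Columns treated as direct identifiers and fully masked.
const NAME_FIELDS = new Set(["name", "first_name", "last_name", "full_name"]);

function redactValue(value: string): string {
  return value
    .replace(EMAIL_RE, "***@***.com")
    .replace(CARD_RE, "****-****-****-****") // cards before phones, so
    .replace(PHONE_RE, "***-***-****");      // the phone regex can't eat them
}

// Applied to every query result before it is returned to the MCP client,
// so raw PII never enters the LLM context.
export function redactRows(rows: Row[]): Row[] {
  return rows.map((row) =>
    Object.fromEntries(
      Object.entries(row).map(([key, value]) => {
        if (NAME_FIELDS.has(key.toLowerCase())) return [key, "***"];
        if (typeof value === "string") return [key, redactValue(value)];
        return [key, value];
      })
    )
  );
}
```

The Lambda layer runs an equivalent pass on its side, which is what makes the dual validation meaningful — neither layer trusts the other to have done the masking.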
Observability & Incident Response:
Every MCP server is instrumented with OpenTelemetry, and all logs/traces/metrics are aggregated in Grafana. Grafana alerting rules are configured per server — latency spikes, error rate thresholds, and availability checks all trigger Slack notifications automatically. So if a server misbehaves at 2am, the on-call engineer gets a Slack alert immediately.
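The instrumentation itself is just a thin wrapper around each tool handler — something like this sketch using the OpenTelemetry API (span and attribute names are illustrative):

```typescript
// Simplified per-tool instrumentation sketch with the OpenTelemetry API.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("db-graph-mcp");

export async function withSpan<T>(
  toolName: string,
  fn: () => Promise<T>
): Promise<T> {
  return tracer.startActiveSpan(`tool.${toolName}`, async (span) => {
    try {
      const result = await fn();
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (err) {
      // Recorded exceptions feed the Grafana error-rate alert rules.
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```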
For investigation, we built a Grafana MCP server — meaning Claude Code itself can query logs and metrics. "Show me error logs from the DB Graph MCP in the last hour" returns structured results directly in the AI context. This closes the loop: the same AI that uses the MCP servers can also diagnose issues with them.
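The tool behind that query looks roughly like this — a sketch with the MCP TypeScript SDK, where the tool name, LogQL filter, and env vars are placeholder assumptions:

```typescript
// Sketch of the log-query tool on the Grafana MCP server (tool name,
// LogQL filter, and env vars are illustrative, not our exact setup).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "grafana-mcp", version: "1.0.0" });

server.tool(
  "query_error_logs",
  "Fetch recent error logs for a given MCP server from Loki",
  { service: z.string(), minutes: z.number().default(60) },
  async ({ service, minutes }) => {
    const query = `{service="${service}"} |= "error"`;
    const url = new URL("/loki/api/v1/query_range", process.env.LOKI_URL);
    url.searchParams.set("query", query);
    // Loki expects the start timestamp in nanoseconds.
    url.searchParams.set("start", `${(Date.now() - minutes * 60_000) * 1e6}`);

    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${process.env.GRAFANA_TOKEN}` },
    });
    const body = await res.json();
    // Structured results go straight back into the AI context.
    return {
      content: [{ type: "text" as const, text: JSON.stringify(body.data) }],
    };
  }
);
```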
Independent Deployment:
Each server is a separate Cloud Run service with its own Pulumi stack, service account, and IAM roles. Deploying, scaling, or shutting down one server has zero impact on the others. There's no shared runtime or process — they're fully isolated at the infrastructure level.
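Each stack is only a handful of resources — a service account, its IAM bindings, and the Cloud Run service. A trimmed-down Pulumi sketch (names, roles, and image are placeholders):

```typescript
// Sketch of one server's Pulumi stack: its own service account, its own
// IAM roles, its own Cloud Run service. Names and image are placeholders.
import * as gcp from "@pulumi/gcp";

const sa = new gcp.serviceaccount.Account("db-graph-mcp-sa", {
  accountId: "db-graph-mcp",
  displayName: "DB Graph MCP server",
});

// Grant only the roles this server needs — no shared service account.
new gcp.projects.IAMMember("db-graph-mcp-sql", {
  project: gcp.config.project!,
  role: "roles/cloudsql.client",
  member: sa.email.apply((email) => `serviceAccount:${email}`),
});

const service = new gcp.cloudrunv2.Service("db-graph-mcp", {
  location: "us-central1",
  template: {
    serviceAccount: sa.email,
    containers: [{ image: "gcr.io/my-project/db-graph-mcp:latest" }],
  },
});

export const url = service.uri;
```

Tearing down one stack removes exactly that server's resources and nothing else, which is what makes independent deployment and shutdown safe.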