I’ve spent the last few weeks building Inspekt, an AI-powered proxy that tells you why an API failed instead of just giving you raw logs. I hit 100 upvotes on Product Hunt today, and the "Day 1" feedback has already forced me to rethink my architecture.
The "200 OK" Problem:
Standard monitors only trigger on 4xx/5xx codes. But what about GraphQL or certain REST APIs that return a 200 OK with an errors array hidden in the payload?
Inspekt solves this by analyzing the semantic meaning of the response. It doesn't just look at the status code; it audits the entire exchange.
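To make the problem concrete: here's a hypothetical GraphQL-style response that a status-code monitor would wave through, plus the kind of naive payload check that at least catches this one shape (Inspekt's actual detection is LLM-driven, not this heuristic):

```javascript
// Hypothetical example: a GraphQL endpoint returning HTTP 200
// even though the query itself failed.
const body = {
  data: { user: null },
  errors: [{ message: "User not found", path: ["user"] }],
};

// A status-code monitor sees 200 and moves on. A payload-aware
// check has to look inside the body for a non-empty errors array.
function looksLikeHiddenFailure(status, payload) {
  return (
    status === 200 &&
    Array.isArray(payload.errors) &&
    payload.errors.length > 0
  );
}

console.log(looksLikeHiddenFailure(200, body)); // true
```

A hard-coded heuristic like this only covers one convention, which is exactly why the semantic analysis matters.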
Update 01: The "Visibility & Trust" Patch
Based on community feedback today, I just pushed a major update to the proxy logic.
- Local Privacy Scrubbing: I realized I shouldn't be sending raw Authorization or Cookie headers to an LLM.

The Fix: I implemented a local scrub() utility. Before the data leaves the server for analysis, it redacts sensitive keys. Your credentials stay on the proxy; the AI only sees [REDACTED].
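For illustration, here's a minimal sketch of the idea; the deny-list below is an assumption for the example, not the exact list the proxy ships with:

```javascript
// Header keys that must never leave the proxy (illustrative deny-list).
const SENSITIVE_KEYS = new Set([
  "authorization",
  "cookie",
  "set-cookie",
  "x-api-key",
]);

// Replace sensitive header values with a placeholder before the
// exchange is serialized and sent off for analysis.
function scrub(headers) {
  const clean = {};
  for (const [key, value] of Object.entries(headers)) {
    clean[key] = SENSITIVE_KEYS.has(key.toLowerCase())
      ? "[REDACTED]"
      : value;
  }
  return clean;
}
```

The lowercase comparison matters because header casing is not guaranteed to be consistent across clients.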
- Transparent Response Schema: I refactored the response object to be more "Axios-idiomatic" and transparent. The developer now gets the full HTTP exchange alongside the AI's diagnosis.
The New Schema:
```json
{
  "success": true,
  "data": {
    "response": {
      "status": 200,
      "headers": { "content-type": "application/json" },
      "data": { /* Raw API response from target */ }
    },
    "analysis": {
      "summary": "One sentence description of what happened",
      "status": { "code": 200, "meaning": "OK", "expected": true },
      "diagnosis": "Why the server responded this way",
      "issues": [],
      "fixes": [],
      "headers": {
        "notable": [],
        "missing": [],
        "security_flags": []
      },
      "body": {
        "explanation": "What the body contains and means",
        "anomalies": []
      },
      "performance_flags": [],
      "severity": "ok"
    }
  }
}
```
Technical Challenges:
Context Windows: Massive JSON payloads don't fit in the model's context, so I implemented a custom truncation utility that keeps the text sent for analysis under 8,000 characters.
Metadata Integrity: Ensuring the analysis object stays mapped to the correct response data even under high load.
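For the truncation side, this is roughly the shape of the approach; the constant, function name, and marker text are illustrative, not the actual Inspekt code:

```javascript
// Character budget for the serialized payload sent to the model
// (illustrative constant).
const MAX_ANALYSIS_CHARS = 8000;

// Serialize the payload; if it exceeds the budget, cut it and append
// a marker so the model knows it is seeing a partial body rather
// than hallucinating a complete one.
function truncateForAnalysis(payload, limit = MAX_ANALYSIS_CHARS) {
  const serialized = JSON.stringify(payload);
  if (serialized.length <= limit) return serialized;
  const marker = " ...[truncated for analysis]";
  return serialized.slice(0, limit - marker.length) + marker;
}
```

Naive character slicing can cut mid-token and produce invalid JSON, which is fine for an LLM reading the text but worth flagging explicitly with the marker.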
I’m still figuring out the "production-grade" side of backend architecture, so I’m looking for some critical feedback.
How are you handling sensitive data when passing request context to LLMs? Is a deny-list of header keys enough, or should I be running regex scans over the response body too?
Check the Repo: https://github.com/jamaldeen09/inspekt-api
See the PH Launch: https://www.producthunt.com/products/inspekt