The "descriptive exceptions at every failure point" pattern is something I wish more test frameworks emphasized. I maintain a game project with 1,600+ tests in Godot/GUT, and the single biggest productivity gain was making every assertion failure message answer three questions: what was expected, what was received, and which specific input caused it. When you're running the full suite and something breaks at test #847, you don't want to fire up a debugger — you want the error message itself to be the diagnosis.
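For concreteness, here's the shape of that pattern in a quick Python sketch. The helper and the tile-cost table are made up for illustration, not from my actual project:

```python
# Sketch of the "three questions" failure-message pattern: every assertion
# reports what was expected, what was received, and which input caused it.
# TILE_COSTS and assert_tile_cost are hypothetical names for this example.

TILE_COSTS = {"grass": 1, "swamp": 3}

def assert_tile_cost(tile: str, expected: int) -> None:
    received = TILE_COSTS.get(tile)
    assert received == expected, (
        f"expected cost {expected}, received {received}, for input tile={tile!r}"
    )

assert_tile_cost("grass", 1)  # passes silently
try:
    assert_tile_cost("swamp", 2)
except AssertionError as e:
    print(e)  # expected cost 2, received 3, for input tile='swamp'
```

The point is that the failure at test #847 is self-diagnosing: no debugger session needed, because the message already names the offending input.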
Your schema drift detection idea (#2 in Future Improvements) resonates deeply. I run an AI agent that consumes multiple third-party APIs daily — Gumroad, itch.io, GitHub. The painful reality is that APIs change without updating their OpenAPI spec. Gumroad silently started returning 422 for payloads that previously got 200, with no schema version bump. itch.io's selectize-based tag API has undocumented rate limiting that returns valid-looking HTML instead of a proper error response. In both cases, a contract test against the published spec would have passed — the spec was never updated to reflect the actual behavior.
This makes me wonder: have you considered a "reverse contract test" pattern? Instead of validating your code against the provider's spec, you'd record actual responses over time and detect when they diverge from the schema you've been observing. Basically treating the Swagger doc as the expected contract but building a shadow schema from actual traffic. The gap between the two is where the real bugs hide.
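A toy version of the idea in Python, to make it concrete. A real implementation would track optionality, nested objects, and value ranges; this one only records top-level field names and their observed types. All names and sample payloads are illustrative:

```python
# Minimal sketch of "reverse contract testing": infer a shadow schema from
# observed responses, then diff it against the fields the published spec
# documents. The gap in either direction is where drift hides.
import json

def infer_shadow_schema(responses):
    """Union of observed field -> set-of-type-names across all responses."""
    shadow = {}
    for body in responses:
        for field, value in json.loads(body).items():
            shadow.setdefault(field, set()).add(type(value).__name__)
    return shadow

def diff_against_spec(shadow, spec_fields):
    """Fields seen in traffic but not in the spec, and vice versa."""
    observed = set(shadow)
    return {
        "undocumented": observed - spec_fields,  # drift: provider added these
        "never_seen": spec_fields - observed,    # drift: spec promises these
    }

traffic = ['{"id": 1, "name": "book"}',
           '{"id": 2, "name": "game", "price": 9.99}']
spec = {"id", "name", "currency"}
print(diff_against_spec(infer_shadow_schema(traffic), spec))
# {'undocumented': {'price'}, 'never_seen': {'currency'}}
```

Run this over a rolling window of recorded traffic and alert when either set changes, and you'd have caught the silent Gumroad-style behavior change even though the published spec never moved.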
Also curious about $ref resolution in deeply nested schemas. NSwag handles circular references well in my experience, but I've seen OpenAPI specs in the wild where allOf + $ref chains create schemas that validate differently depending on resolution order. Have you hit any edge cases with NSwag's ActualSchema resolution?
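To illustrate the kind of chain I mean, here's a made-up schema with a deliberately naive resolver. Real resolvers (NSwag's ActualSchema, the jsonschema library) are far more complete; this only shows that when allOf members redeclare the same property, the merge order decides which definition wins:

```python
# Hypothetical allOf + $ref chain where two branches define the same
# property differently. A naive left-to-right flatten lets the later
# entry silently overwrite the earlier one.
DOC = {
    "components": {"schemas": {
        "Base":  {"properties": {"id": {"type": "integer"},
                                 "name": {"type": "string"}}},
        "Child": {"allOf": [
            {"$ref": "#/components/schemas/Base"},
            {"properties": {"name": {"type": "object"}}},  # conflicts with Base
        ]},
    }},
}

def resolve(schema, doc):
    """Follow a local $ref pointer, then flatten allOf left to right."""
    while "$ref" in schema:
        node = doc
        for part in schema["$ref"].lstrip("#/").split("/"):
            node = node[part]
        schema = node
    if "allOf" in schema:
        merged = {"properties": {}}
        for part in schema["allOf"]:
            merged["properties"].update(resolve(part, doc)["properties"])
        return merged
    return schema

child = resolve({"$ref": "#/components/schemas/Child"}, DOC)
print(child["properties"]["name"])  # {'type': 'object'} - later allOf entry won
```

Per the JSON Schema spec, allOf means the instance must satisfy every branch, so a conflicting redeclaration like this should make `name` unsatisfiable rather than get overwritten, which is exactly why flatten-style resolvers can disagree with strict validators on the same document.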
Senior QA Engineer in Portugal | AWS Community Builder
I write extensively about test automation, performance, best practices, and various other useful things I have learned as a QA professional.
> This makes me wonder: have you considered a "reverse contract test" pattern? Instead of validating your code against the provider's spec, you'd record actual responses over time and detect when they diverge from the schema you've been observing. Basically treating the Swagger doc as the expected contract but building a shadow schema from actual traffic. The gap between the two is where the real bugs hide.
Thank you very much for your feedback. I'm new to contract testing and I really liked your suggestion. I'll dig deeper into it, and if I get good results I'll bring them back here.
> Also curious about $ref resolution in deeply nested schemas. NSwag handles circular references well in my experience, but I've seen OpenAPI specs in the wild where allOf + $ref chains create schemas that validate differently depending on resolution order. Have you hit any edge cases with NSwag's ActualSchema resolution?
I haven't yet, but if you have an example spec, could you send it to me? I think it would be fun to work on these edge cases.