I built this to understand how autonomous vehicle teams validate safety-critical decisions against regulatory standards like MISRA C:2012, CERT C, and ISO 26262.
Most AV test frameworks focus on functional behavior ("does it stop?"), but compliance validation — mapping each decision to specific safety rules — seems to be done manually. This framework automates that.
Key features:
- Biased scenario generation weighted toward edge cases (e.g. fog sampled at 30% instead of the 20% a uniform distribution would give; 40% of scenarios at critical severity)
- Validates AV decisions against ISO 26262 ASIL-D timing requirements (<150ms for critical scenarios)
- Tests cybersecurity attacks (GPS spoofing, replay attacks, signal jamming) via Playwright API calls
- Generates compliance audit reports mapping each violation to specific MISRA/CERT rules
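To make the weighting concrete, here is a minimal sketch of how edge-case-biased sampling can work. The 30% fog and 40% critical-severity weights are from the description above; the category names, the `Scenario` shape, and `generate_scenarios` itself are illustrative assumptions, not the repo's actual API.

```python
# Hypothetical sketch of edge-case-biased scenario sampling.
# Weights for "fog" (0.30) and "critical" (0.40) match the post;
# everything else here is an assumed stand-in.
import random
from dataclasses import dataclass

WEATHER_WEIGHTS = {"clear": 0.20, "rain": 0.25, "night": 0.25, "fog": 0.30}
SEVERITY_WEIGHTS = {"low": 0.20, "medium": 0.40, "critical": 0.40}

@dataclass
class Scenario:
    weather: str
    severity: str

def generate_scenarios(n, seed=None):
    """Sample n scenarios, biased toward fog and critical severity."""
    rng = random.Random(seed)
    weathers = rng.choices(list(WEATHER_WEIGHTS),
                           weights=list(WEATHER_WEIGHTS.values()), k=n)
    severities = rng.choices(list(SEVERITY_WEIGHTS),
                             weights=list(SEVERITY_WEIGHTS.values()), k=n)
    return [Scenario(w, s) for w, s in zip(weathers, severities)]
```

Seeding the generator keeps runs reproducible, which matters when a sampled edge case later needs to be replayed in a compliance audit.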
Tech: Python (scenario gen), Java 17 + Playwright (API testing), JUnit 5 (compliance validation)
The mock decision engine has 3 intentional bugs — the tests catch 2 of them (replay attack detection, ASIL-D timing violations).
Built this as a learning exercise after 13 years in QA automation. Curious what folks working on real AV validation think I'm missing.
GitHub: https://github.com/codingbuddha123/av-test-framework