This is a submission for the GitHub Copilot CLI Challenge.
## What I Built
I built the **AI Hallucination Detector for Code**, a command-line tool that flags likely hallucinations in AI-generated Python code using static analysis.
The tool identifies:
- Hallucinated or fake APIs (imports of non-existent modules)
- Logical inconsistencies, such as misleading time-complexity claims
- A clear hallucination risk score that helps developers judge reliability
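The fake-API check can be sketched as follows. This is an assumed approach, not necessarily the repository's exact code: parse the source with the standard-library `ast` module and resolve each imported module with `importlib.util.find_spec`, which looks up modules without executing anything.

```python
# Sketch of hallucinated-import detection (assumed implementation).
# Imports are collected from the AST and resolved statically, so the
# analyzed code is never executed.
import ast
import importlib.util


def find_fake_imports(source: str) -> list[str]:
    """Return imported module names that cannot be resolved in this environment."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]  # resolve only the top-level package
            try:
                spec = importlib.util.find_spec(root)
            except (ImportError, ValueError):
                spec = None
            if spec is None:
                missing.append(name)
    return missing


sample = "import os\nimport totally_fake_quantum_lib\n"
print(find_fake_imports(sample))  # → ['totally_fake_quantum_lib']
```

Note that "cannot be resolved" depends on the environment the detector runs in: a real but uninstalled package is indistinguishable from a hallucinated one, so a production tool would likely cross-check against a package index as well.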
This project matters to me because AI-assisted coding is becoming mainstream, yet correctness and trust remain critical. Instead of generating more code, this project focuses on responsibly verifying the code AI produces.
## Demo
🔗 GitHub Repository:
https://github.com/anupam-hegde/AI-Hallucination-Detector-for-Code
🎥 Demo GIF / Video:
(Attached in the repository README)
Demo Flow:
- A Python file containing a fake import and nested loops is analyzed
- The CLI detects:
  - API hallucinations
  - Logic hallucinations
- A risk score is generated, with clean, readable CLI output
The demo shows the tool working end-to-end using a single CLI command.
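The nested-loop part of the demo, the logic-hallucination check, could work roughly like this. This is a sketch under my own assumptions, not the repository's actual code: an `ast.NodeVisitor` measures the maximum loop-nesting depth, which can then be compared against a claimed time complexity (for example, a claimed O(n) over a body with depth-2 loops would be suspicious).

```python
# Sketch of nested-loop depth measurement (assumed implementation).
import ast


class LoopDepthVisitor(ast.NodeVisitor):
    """Track the deepest for/while nesting seen anywhere in the tree."""

    def __init__(self):
        self.depth = 0
        self.max_depth = 0

    def _enter_loop(self, node):
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)
        self.generic_visit(node)  # descend into the loop body
        self.depth -= 1

    visit_For = _enter_loop
    visit_While = _enter_loop


def max_loop_depth(source: str) -> int:
    visitor = LoopDepthVisitor()
    visitor.visit(ast.parse(source))
    return visitor.max_depth


nested = "for i in range(n):\n    for j in range(n):\n        total += 1\n"
print(max_loop_depth(nested))  # → 2
```

A depth of 2 is only a heuristic signal for O(n²), of course; the inner loop's bounds matter, which is why the tool reports a risk score rather than a verdict.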
## My Experience with GitHub Copilot CLI
GitHub Copilot CLI played a key role as a development and reasoning assistant throughout this project.
I used Copilot CLI to:
- Understand and implement AST-based static analysis
- Design safe logic for validating Python imports without executing code
- Debug CLI architecture issues related to Typer
- Reason about algorithmic complexity and scoring strategies
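The scoring strategies mentioned above can be sketched as a weighted aggregation. The weights and the 0-100 clamp here are my assumptions for illustration, not the repository's actual values:

```python
# Hypothetical risk-scoring strategy (weights are illustrative assumptions).
def risk_score(api_hallucinations: int, logic_hallucinations: int) -> int:
    """Combine finding counts into a single 0-100 hallucination risk score."""
    WEIGHT_API = 40    # a fake import is a strong hallucination signal
    WEIGHT_LOGIC = 20  # a misleading complexity claim is a weaker signal
    raw = api_hallucinations * WEIGHT_API + logic_hallucinations * WEIGHT_LOGIC
    return min(raw, 100)  # clamp so the score stays interpretable


print(risk_score(1, 1))  # → 60
```

Keeping the score deterministic (plain arithmetic over static-analysis findings) means the same input file always yields the same score, which matches the project's emphasis on verifiability.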
Copilot helped me move faster and think more clearly, but it was never treated as a source of truth.
All suggestions were manually reviewed and validated with deterministic static analysis, which directly aligns with the goal of this project.
This experience reinforced how Copilot CLI can be a powerful productivity tool when used responsibly and thoughtfully.