
Deepak

de-BUG

I Built a Tool to Find the Exact Test Case Where Your Competitive Programming Code Fails 🚀

If you do competitive programming, you’ve probably faced this frustrating situation.

Your code works for all the test cases you tried.

But the judge still says:

"Wrong Answer"

You test again.
You try more cases.
Everything still works.

Then your friend submits almost the same logic and it gets Accepted.

So the question becomes:

Where exactly is my code failing?


The Real Problem

Most of the time the issue is not obvious.

Typical problems include:

  • Edge cases you didn’t think of
  • Constraints behaving differently at their limits (e.g. maximum input sizes)
  • Integer overflow
  • Boundary conditions
  • Incorrect assumptions in logic

You can try generating test cases manually, but:

  • It takes time
  • You might miss the failing case
  • AI sometimes hallucinates incorrect cases

So I thought:

Why not build a tool that automatically finds the failing test case?


Introducing: Debug Tool for Competitive Programming

I built a web application that helps you detect exactly where your code fails.

Instead of guessing the failing test case, the tool automatically finds it.

How it works

You simply provide:

1️⃣ Your buggy code
2️⃣ The correct code
3️⃣ The problem constraints

The application then:

  • Generates multiple valid test cases
  • Runs both programs
  • Compares outputs
  • Detects the first failing test case

It also tries to suggest a possible reason for the failure, helping you debug faster.
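The core idea is classic differential testing: feed the same randomly generated input (kept within the problem's constraints) to both solutions and stop at the first disagreement. Here's a minimal self-contained sketch of that loop in Python. The `buggy`, `correct`, and `gen_case` functions are hypothetical stand-ins, not code from the actual tool, which compiles and runs your submitted programs instead:

```python
import random

# Hypothetical problem: return the maximum pairwise sum of a list.
def buggy(nums):
    s = sorted(nums)
    return s[0] + s[1]  # bug: adds the two SMALLEST values

def correct(nums):
    s = sorted(nums)
    return s[-1] + s[-2]  # adds the two largest values

def gen_case(max_n=10, max_val=100):
    """Generate a random input that respects the problem constraints."""
    n = random.randint(2, max_n)
    return [random.randint(-max_val, max_val) for _ in range(n)]

def find_failing_case(buggy_fn, correct_fn, tries=1000, seed=0):
    """Run both solutions on random inputs; return the first mismatch."""
    random.seed(seed)
    for _ in range(tries):
        case = gen_case()
        if buggy_fn(case) != correct_fn(case):
            return case  # first input where the outputs diverge
    return None  # no counterexample found within the budget

failing = find_failing_case(buggy, correct)
print("Failing case:", failing)
```

In the real tool the two "functions" are your buggy and correct programs run as separate processes, and the generator is built from the constraints you paste in, but the compare-and-stop-at-first-mismatch loop is the same shape.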


Why This Is Better Than Asking AI

Many people paste code into AI and ask:

"Find the failing test case"

But AI can sometimes:

❌ generate invalid test cases
❌ misunderstand constraints
❌ guess incorrect edge cases

This tool instead:

✅ Compiles and runs the code
✅ Tests multiple generated test cases
✅ Detects the real failing case

So the result is based on actual execution, not guessing.


Example Situations Where This Helps

This tool is useful when:

  • Your code works locally but fails on the judge
  • Your friend's solution works but yours doesn't
  • You can't figure out the edge case
  • Hidden test cases cause Wrong Answer in contests

Built with Lovable + Vibe Coding

This project was built entirely with Lovable, using vibe coding.

It was an experiment to see how quickly a useful developer tool could be built using modern AI-assisted workflows.


Try the Tool

You can try the application here:

🔗 Live Demo:
https://preview--debugforcompetitiveprogramming.lovable.app/login

Paste your buggy code, correct code, and problem constraints, and the tool will try to detect the failing test case automatically.


Feedback Welcome

This is still an early version and I would love feedback from the competitive programming community.

Some ideas I’m considering next:

  • Supporting multiple programming languages
  • Generating stronger adversarial test cases
  • Better explanations for why the code fails

If you try the tool, let me know what you think! 🚀
