Agent hackathons now reward projects that show real tool use, not just a polished prompt.
Across the May 2026 agent events I am tracking, the repeated proof pattern is clear:
- a hosted demo or walkthrough
- a public repo or repeatable run path
- real tool/API/MCP integration
- clear permissions and unsafe-action boundaries
- evals, test logs, or before/after proof
- payment caps and receipts if the build can spend
- a truthful eligibility and rules check before submission
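The proof pattern above can be treated as a plain checklist with a score. This is a minimal sketch only: the item names come from the list above, but the weighting, the 0-100 scale, and the `submission_score` helper are my own assumptions, not the gate's actual rubric.

```python
# Hypothetical sketch: the proof-pattern checklist as data plus a score.
# The pass/fail flags below are example inputs, not real project results.
CHECKLIST = [
    ("hosted demo or walkthrough", True),
    ("public repo or repeatable run path", True),
    ("real tool/API/MCP integration", True),
    ("permissions and unsafe-action boundaries", False),
    ("evals, test logs, or before/after proof", False),
    ("payment caps and receipts (if the build can spend)", True),
    ("truthful eligibility and rules check", True),
]

def submission_score(items):
    """Return a 0-100 score and the ordered list of missing items."""
    done = sum(1 for _, ok in items if ok)
    score = round(100 * done / len(items))
    missing = [name for name, ok in items if not ok]
    return score, missing

score, missing = submission_score(CHECKLIST)
print(score)    # 71 for 5 of 7 items
print(missing)  # the two unchecked items, in fix order
```

Keeping the checklist as data rather than prose is the point: the "next fix order" falls straight out of the list of misses.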
I built a small browser-only gate for that checklist:
https://tateprograms.com/agent-submission-gate.html
It accepts a project folder, or pasted README, package, OpenAPI, server, or policy notes, and returns:
- submission score
- next fix order
- detected agent surface
- missing proof
- payment/control gaps
- copyable report
The point is not to guarantee a prize. It is to make the submission evidence harder to miss before the deadline.
The most useful checks are the boring ones:
- Can a reviewer run or inspect the build without guessing?
- Is the tool surface machine-readable?
- Is there a demo URL, screenshot, or video path?
- Are permissions, rate limits, approvals, and destructive actions explicit?
- If x402, Pay.sh, wallets, or paid APIs are involved, are caps and receipts visible?
- Did the team check the event rules truthfully before optimizing for a prize?
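Most of these boring checks reduce to scanning the submitted text for evidence. A crude sketch of that idea, assuming keyword patterns I made up (the real gate's detection logic is not public):

```python
import re

# Illustrative patterns only: each check passes if any keyword appears
# in the pasted README/policy text. These regexes are assumptions.
CHECKS = {
    "demo path": r"(demo|walkthrough|video|screenshot)",
    "run path": r"(npm (run|start)|docker|pip install|make )",
    "permissions": r"(permission|approval|rate limit|destructive)",
    "payment caps": r"(cap|limit|receipt|budget)",
}

def find_gaps(text):
    """Return the names of checks whose patterns never match the text."""
    low = text.lower()
    return [name for name, pat in CHECKS.items()
            if not re.search(pat, low)]

readme = ("Run with `npm start`. Demo video in /docs. "
          "Spending cap: $5, receipts logged.")
print(find_gaps(readme))  # ['permissions'] for this sample text
```

A keyword scan like this produces false positives, but it is enough to flag a README that never mentions approvals or destructive actions at all.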
This was built for agent hackathon projects, but the same shape applies to early product demos. The projects that look stronger are the ones with proof around the agent, not only proof that the agent exists.