Divine Victor

I Had 48 Hours for This Hackathon. Nigeria Gave Me 6.

Demo Link: https://youtube.com/shorts/c4_MsRqAooY?si=o3zpOkVzXUM6iqcC
GitHub: https://github.com/Loneewolf15/WitnessAi
Deployed URL: https://witnessai-production.up.railway.app

The Problem Has a Face

September 23rd, 2025. Gunmen on 50 motorcycles rode into villages in Patigi Local Government Area, Kwara State. They killed a pregnant woman. Kidnapped six people. Operated freely for four hours.

Four hours. Zero recorded evidence.

This is not an isolated incident. It is the default. Crimes happen in Nigerian markets, neighborhoods, and schools—and when it is over, there is almost nothing to work with. No footage. No documentation. Just people's word against each other.

Nigeria spent $470 million on a national CCTV project in 2010. Fifteen years later, coverage is thin, footage quality is often too poor to identify anyone, and CCTV recordings exist in a legal grey area that makes them difficult to use as evidence in court. We do not have police with body cameras. We never built that infrastructure.

Most security AI is built for places that already have all of that. It assumes reliable power, a central database, trained operators, and a legal framework that accepts digital evidence. It was built for that world and exported everywhere else as a solution.

I wanted to build something that starts where we actually are.

What I Built

WitnessAI is a real-time security intelligence agent that transforms any live camera feed into an always-on AI legal witness.

  • Browser camera streams live to the agent via WebSocket
  • YOLOv8 detects and tracks every person in the frame
  • Behavioral anomaly engine flags loitering, running, crowd surges, and falls
  • Gemini Realtime narrates incidents aloud the moment they happen
  • Deepgram STT lets the operator speak to the agent by voice
  • ElevenLabs TTS speaks responses back in real time
  • Every confirmed incident auto-generates a structured evidence package: timestamped JSON report plus a video clip with 30 seconds of pre-crime footage already baked in

One button. Camera starts. Everything else is automatic.
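To make the evidence-package bullet concrete, here is a minimal sketch of what the auto-generated incident report could look like. The field names and shape are my illustration of the idea, not WitnessAI's actual schema:

```python
import json
from datetime import datetime, timezone

def build_evidence_package(incident_id, anomaly_type, track_ids, clip_path):
    """Assemble a timestamped incident report (illustrative schema).

    The real WitnessAI packager may use different field names; the key
    ideas from the post are the timestamp, the tracked-person IDs, and
    the clip that already contains 30 seconds of pre-incident footage.
    """
    return {
        "incident_id": incident_id,
        "anomaly_type": anomaly_type,              # e.g. "loitering", "crowd_surge"
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "tracked_persons": track_ids,              # tracker IDs present in frame
        "evidence_clip": clip_path,                # clip with pre-incident buffer baked in
        "pre_incident_buffer_seconds": 30,
    }

report = build_evidence_package("INC-0001", "crowd_surge", [3, 7, 12], "clips/INC-0001.mp4")
print(json.dumps(report, indent=2))
```

Because the report is plain JSON with an ISO-8601 timestamp, it stays readable by humans and trivially parseable by anything downstream.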

The Timeline Nobody Planned For

Here is what I thought my weekend looked like:

  • Friday 3am: Register. Start planning.
  • Friday - Saturday: Build the full system.
  • Sunday: Polish and submit.

Here is what actually happened:

Friday 3am: Found the hackathon. Read the docs. Got excited. Brainstormed the idea with Claude AI to get my thoughts structured, built a mind map, and registered. Set an alarm for 4am and went to sleep with a plan.

Friday daytime: BEDC—Benin Electricity Distribution Company—took the light. All day. No power, no laptop, no building. I had school anyway, so I sat through lectures mentally sketching how the tracker should work — but there was nothing to come home to.

Saturday: Still no light. Airtel and MTN, apparently sensing an opportunity, also chose this exact weekend to sort out whatever they had between themselves. No power. Barely any network. I watched the hackathon clock tick on my phone.

Saturday 11pm: The light came back. I opened the laptop and started building.

What followed was not rosy. The network was still cutting in and out. The laptop kept freezing—not once, not twice, but repeatedly throughout the night every time I pushed it hard. Each freeze meant a restart, each restart meant losing whatever state I had in memory, and each restart meant waiting and hoping the laptop would come back up before the next power cut.

And then there was the YOLOv8 problem.

  • Sunday 5am: The light went again. I stopped.
  • Sunday 8am - 12pm: Church.
  • Sunday 12pm: Back home. No light. Race to the 7:15pm late submission deadline on whatever battery and mobile data remained.

Total actual building time: roughly 6 hours.

The Challenges

Python 3.12 on a Machine That Did Not Want To

The Vision Agents SDK requires Python 3.12. My machine had 3.11. The deadsnakes PPA - the standard fix - failed because broken third-party repo entries from Opera and KiCad were corrupting my apt sources. Pyenv was the fallback. It downloaded fine, then froze mid-compilation when my laptop ran out of patience.

After a restart, it compiled cleanly.

Fix: Restart. Not elegant. Completely effective.

YOLOv8 Would Not Run—So I Switched to CPU Mode

This was the wall I hit in the middle of the night. Every time I tried to run YOLOv8 with the full GPU configuration, the process would crash. No meaningful error message. Just a hard exit.

After more debugging than I want to admit, I realized my machine simply could not handle the full YOLOv8 model the way it was configured. So I switched to the CPU-optimized version—YOLOv8n (nano), explicitly forced to CPU inference. Slower per frame, yes. But it ran. Consistently. No crashes.

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # nano - CPU-optimised
results = model(frame, conf=0.5, classes=[0], device='cpu')  # class 0 = person
```

This is actually a better fit for the deployment context anyway. Most of the environments in Nigeria where this would be useful do not have GPUs. If the model only works with expensive hardware, it is not actually solving the problem.

Fix: Stop fighting the hardware. Use what runs. YOLOv8n on CPU is fast enough for 5fps and accurate enough to matter.
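On CPU you also have to hold the line at that 5fps target: if inference falls behind, you drop frames rather than queue them. A minimal sketch of that throttling (the detector call is stubbed here; in WitnessAI it would be the YOLOv8n CPU call above):

```python
import time

TARGET_FPS = 5
FRAME_INTERVAL = 1.0 / TARGET_FPS  # 0.2s between processed frames

def run_inference(frame):
    # Stand-in for: model(frame, conf=0.5, classes=[0], device='cpu')
    return {"persons": []}

def process_stream(frames):
    """Process at most TARGET_FPS frames per second; drop the rest.

    Dropping keeps latency bounded on slow hardware -- a queue would
    grow without limit the moment inference is slower than the camera.
    """
    results = []
    last = 0.0
    for frame in frames:
        now = time.monotonic()
        if now - last < FRAME_INTERVAL:
            continue  # too soon: drop this frame
        last = now
        results.append(run_inference(frame))
    return results

processed = process_stream(range(10))  # 10 frames arriving back-to-back
```

With ten frames arriving effectively at once, only the first clears the interval check; the rest are dropped, which is exactly the behavior you want on a CPU-bound box.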


The Network Kept Cutting Mid-Build

Between 11pm Saturday and 5am Sunday, Airtel and MTN were both unreliable. Installing packages mid-network-drop means corrupted downloads. Pushing to GitHub on an unstable connection means incomplete commits. Half my debugging time that night was not debugging code—it was re-running commands that had failed silently because the network disappeared underneath them.

I learned to check the connection state before every pip install. I learned to commit locally and push in bursts when the network came back.

Fix: Work in smaller units. Commit constantly. Assume the network will disappear.
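The "check the connection, retry in bursts" habit can be captured in a small helper. This is a sketch of the pattern, not code from the repo; the probe host and retry counts are arbitrary choices:

```python
import socket
import time

def network_up(host="8.8.8.8", port=53, timeout=2.0):
    """Cheap connectivity probe before attempting a network-bound step."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_with_retry(cmd_fn, attempts=5, delay=3.0, probe=network_up):
    """Retry a flaky network-bound step (a pip install, a git push...).

    Waits for the probe to pass before each attempt, so failed runs do
    not burn attempts while the network is down.
    """
    for attempt in range(1, attempts + 1):
        if not probe():
            time.sleep(delay)  # wait for the network to come back
            continue
        try:
            return cmd_fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
    raise RuntimeError("network never came back")
```

Wrapping `subprocess.run(["git", "push"], check=True)` in `cmd_fn` turns "re-run the command by hand when it silently fails" into something that happens automatically.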

I Forgot to Build the Dashboard Until Two Hours Before Submission

By the time the backend was solid—detection, tracking, anomaly engine, narrator, packager, evidence buffer, correlator—I looked at the clock and realized I had built absolutely nothing for a human to actually see.

No dashboard. No UI. Just a very functional terminal.

I am not a design person. I said this plainly to myself, then opened Gemini and described what I needed—anomaly log, live metrics, narrative feed, and evidence download panel. We worked through the layout together. I handled the WebSocket logic and the backend integration. Gemini helped with the visual structure and the CSS.
By 4pm the operator console was working.
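The core of a console like this is fanning backend events out to every connected dashboard client. Here is a framework-agnostic sketch of that broadcast layer; it works with any object exposing an async `send_text` (FastAPI's `WebSocket` does), and the event shape is illustrative, not WitnessAI's actual wire format:

```python
import asyncio
import json

class DashboardBroadcaster:
    """Push each anomaly event to every connected dashboard client."""

    def __init__(self):
        self.clients = set()

    def connect(self, ws):
        self.clients.add(ws)

    def disconnect(self, ws):
        self.clients.discard(ws)

    async def broadcast(self, event: dict):
        payload = json.dumps(event)
        dead = []
        for ws in self.clients:
            try:
                await ws.send_text(payload)
            except Exception:
                dead.append(ws)  # client disconnected mid-send
        for ws in dead:
            self.disconnect(ws)  # prune after iterating, not during
```

The anomaly engine only needs to call `await broadcaster.broadcast({...})`; the anomaly log, live metrics, and narrative feed panels are all just different renderings of that one event stream.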

The SDK Could Not Run Reliably Enough to Demo

A few hours before the deadline I accepted the truth: my laptop could not run the full Vision Agents SDK stack - Gemini Realtime, Deepgram, ElevenLabs, YOLOv8, all simultaneously - without freezing. With power cuts making every session unpredictable, I could not guarantee a stable environment for a clean recording.

So I improvised.

I modified the dashboard to capture camera frames directly from the browser via getUserMedia and stream them to the backend over WebSocket. No Stream call required. YOLOv8 processes the frames locally, the anomaly engine runs, the dashboard updates in real time. I deployed the FastAPI dashboard to Railway so judges can visit a live URL. A relay system pushes all local events to Railway via persistent WebSocket so the live dashboard reflects whatever the local agent sees.
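On the backend side of that improvisation, each WebSocket message carries one camera frame. A minimal sketch of the ingestion step, assuming the browser sends JSON like `{"frame": "<base64 JPEG>"}` (that wire format is my guess at the shape, and the detector is stubbed where YOLOv8n would run):

```python
import base64
import json

def handle_frame_message(message: str, detect=lambda jpeg: []):
    """Decode one browser frame message and run detection on it.

    `detect` stands in for YOLOv8n CPU inference; in the real pipeline
    the JPEG bytes would be decoded to an image array first.
    """
    payload = json.loads(message)
    jpeg_bytes = base64.b64decode(payload["frame"])
    detections = detect(jpeg_bytes)
    return {"bytes": len(jpeg_bytes), "detections": detections}
```

The same handler works whether the frames come from `getUserMedia` in a browser tab or from the relay that mirrors local events up to the Railway deployment.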

The full SDK integration - Gemini Realtime at 5fps, Deepgram STT, ElevenLabs TTS, and 5 function tools - is in the codebase and correctly wired to the backend. The architecture is complete. The demo shows what the system does.

What I Learned

Plan for the environment, not just the code. I planned every module. I did not plan for BEDC, for Airtel, for a laptop that freezes under load. In Nigeria those are not edge cases. They are the environment. That changes how I will approach every hard deadline going forward.

6 hours is enough if you know what you are building. The mind map I built at 3am Friday with Claude AI was not wasted time. When light finally came back Saturday night, I did not sit and think about what to build. I just built it. Having the architecture clear in my head before I had power meant every working hour counted.

CPU constraints are product constraints. Switching to YOLOv8n on CPU was not a downgrade - it was the right call for the actual deployment environment. The communities that need this most do not have GPUs. Building for the constrained version of reality is building for where the problem actually lives.

The SDK is well designed. When it ran, integration was fast. VideoProcessorPublisher, Realtime, STT, TTS, tool calling: the pieces fit together cleanly. The fps=1 mistake I made early on took exactly one number to fix. That is the right level of abstraction.

Final Thoughts

98 tests. 0 failures. A live deployed dashboard. A working anomaly engine. An evidence packager that generates court-structured reports automatically.
Built across two broken sessions, between power cuts, with a laptop that froze more times than I counted, on a network that came and went as it pleased - in a country where the infrastructure to solve this problem does not yet exist, which is exactly why someone needs to build it.

Is it finished? No. Is it real? Yes.

That is enough for now.

Published March 1st, 2026. Built for the Vision Possible: Agent Protocol hackathon by WeMakeDevs x Vision Agents.

#VisionAgents #WeMakeDevs #BuildInPublic #MadeInNigeria #AIAgents
