I'm writing this the day after a deadline that nearly ended our hackathon run.
We're Team Unfiltered Minds, competing in Guidewire DEVTrails 2026 — a 6-week
startup simulation where you build an AI-powered parametric insurance platform
for gig delivery workers in India. Think Zomato, Swiggy, Zepto riders. No health
coverage, no vehicle insurance. Just income protection when the weather turns bad
and they can't work.
The product is called GigSafe. The idea is simple: when IMD declares a red
alert in your city, you shouldn't have to file a claim. The system should already
know you're affected and pay you automatically.
Simple in theory. Absolutely not simple to build securely.
The problem nobody warns you about with parametric insurance
Parametric insurance pays out based on an objective trigger — rainfall exceeds
64.5mm/day, AQI crosses 400, temperature holds above 42°C for 3+ hours. No
adjuster. No claim form. No waiting.
The trigger fires. The payout goes out. Done.
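The trigger thresholds above are concrete enough to sketch in code. This is a minimal illustration, not the team's implementation; the `readings` structure and function name are assumptions, while the three thresholds (64.5mm/day rainfall, AQI 400, 42°C for 3+ hours) come straight from the text.

```python
# Hypothetical parametric trigger check. Thresholds are from the post;
# the input shape is an assumption for illustration.

def trigger_fired(readings: dict) -> bool:
    """Return True if any objective trigger condition is met."""
    if readings.get("rainfall_mm_24h", 0) > 64.5:   # heavy-rain threshold
        return True
    if readings.get("aqi", 0) > 400:                # severe air quality
        return True
    # Temperature must hold above 42 C for 3+ consecutive hourly readings.
    run = 0
    for temp_c in readings.get("hourly_temp_c", []):
        run = run + 1 if temp_c > 42 else 0
        if run >= 3:
            return True
    return False

print(trigger_fired({"rainfall_mm_24h": 71.2}))              # True
print(trigger_fired({"hourly_temp_c": [41, 43, 43, 42.5]}))  # True
```

No adjuster in that code path: a boolean out of `trigger_fired` is the entire claims process, which is exactly why the trigger logic has to be hardened.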
The problem? If you're building this system and your trigger is just "is the GPS
inside the red-alert zone?" — you've built a fraud machine, not an insurance
product.
We found this out the hard way.
The Market Crash
24 hours before Phase 1 closed, the hackathon organizers dropped what they called
a "Market Crash" event. A simulated attack scenario:
500 delivery workers. Telegram-coordinated. GPS spoofing apps. All of them
faking their location inside a declared severe weather zone. Liquidity pool
drained in one alert window.
They gave us 24 hours to architect a defense or take a financial penalty in the
competition's virtual economy.
No code required. Just airtight logic.
My first instinct was: GPS + IP cross-check. Done in 10 minutes, ship it.
My teammate looked at me and said, "VPN exists."
He was right. So we went deeper.
What we actually built
The core insight was this: fraud is a network phenomenon, not an individual one.
A single bad actor filing a false claim looks like noise. 500 coordinated bad
actors filing simultaneously leave structural signatures that no individual-level
anomaly detector will catch — but a graph will.
We designed a heterogeneous evidence graph with 6 node types:
- Worker Account
- Device Fingerprint
- Network Signature
- Payout Wallet
- Geo-Time Bucket
- Alert Window
And 5 edge types connecting them: uses-device, uses-network,
receives-wallet, claimed-in-bucket, co-claims-alert-window.
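To make the schema concrete, here is a minimal sketch of that heterogeneous graph in plain Python. The node and edge type names are from the list above; the storage layout and class name are assumptions for illustration, not the actual implementation.

```python
# Minimal heterogeneous evidence graph. Type names match the post;
# the dict/list storage is a simplifying assumption.

NODE_TYPES = {"worker", "device", "network", "wallet",
              "geo_time_bucket", "alert_window"}
EDGE_TYPES = {"uses-device", "uses-network", "receives-wallet",
              "claimed-in-bucket", "co-claims-alert-window"}

class EvidenceGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> node_type
        self.edges = []   # (src_id, edge_type, dst_id)

    def add_node(self, node_id, node_type):
        assert node_type in NODE_TYPES
        self.nodes[node_id] = node_type

    def add_edge(self, src, edge_type, dst):
        assert edge_type in EDGE_TYPES
        self.edges.append((src, edge_type, dst))

    def fan_in(self, node_id, edge_type):
        """Distinct sources pointing at node_id via edge_type — the core
        query behind rules like device fan-in (R1 below)."""
        return len({s for s, e, d in self.edges
                    if d == node_id and e == edge_type})

g = EvidenceGraph()
g.add_node("w1", "worker"); g.add_node("w2", "worker")
g.add_node("dev_A", "device")
g.add_edge("w1", "uses-device", "dev_A")
g.add_edge("w2", "uses-device", "dev_A")
print(g.fan_in("dev_A", "uses-device"))  # 2
```

The point of the heterogeneous design: a coordinated ring can fake GPS per worker, but it is much harder to fake having 500 distinct devices, networks, and wallets at once, and fan-in queries on those node types expose the sharing.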
Then we defined 5 deterministic ring detection rules. Not "anomaly detected."
Actual hard thresholds:
R1 — Device fan-in burst: If 12+ unique workers link to the same device
fingerprint within 90 minutes of an alert activation, and 70%+ of those are
filing their first claim in 30 days → +35 ring risk points.
R2 — Wallet convergence: If 8+ workers route payouts to one wallet within
24 hours, median account age under 21 days → +30 points, wallet cluster frozen.
R3 — Geo-time synchrony: If 10+ workers enter the same alert polygon within
5 minutes with trajectory similarity ≥ 0.85 → +25 points, full cluster goes
Amber minimum.
R4 — Impossible mobility: Same worker jumping 8km in under 4 minutes, twice
in one shift. Or 6 workers with near-identical path templates within 2-second
timestamp jitter → +20 points.
R5 — Network opportunism: 15+ claims from the same network signature group
in 30 minutes, alert-window claim rate 6x baseline → +20 points.
Ring score = sum of triggered rules. 0–39 is low risk. 40–59 is Amber.
60+ is Red hold with containment actions.
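Because the rules are deterministic, the whole scoring layer reduces to predicates over precomputed cluster features plus a sum. A sketch, assuming hypothetical feature names; the thresholds, point values, and bands are exactly the ones listed in R1–R5 above.

```python
# Deterministic ring scoring. Feature keys are hypothetical; thresholds,
# points, and bands follow rules R1-R5 as described in the post.

RULES = [
    ("R1_device_fan_in", lambda f: f.get("workers_per_device_90m", 0) >= 12
        and f.get("first_claim_ratio", 0) >= 0.70, 35),
    ("R2_wallet_convergence", lambda f: f.get("workers_per_wallet_24h", 0) >= 8
        and f.get("median_account_age_days", 99) < 21, 30),
    ("R3_geo_time_synchrony", lambda f: f.get("polygon_entries_5m", 0) >= 10
        and f.get("trajectory_similarity", 0) >= 0.85, 25),
    ("R4_impossible_mobility", lambda f: f.get("teleport_jumps_per_shift", 0) >= 2
        or f.get("identical_path_workers", 0) >= 6, 20),
    ("R5_network_opportunism", lambda f: f.get("claims_per_network_30m", 0) >= 15
        and f.get("claim_rate_vs_baseline", 0) >= 6, 20),
]

def ring_score(features: dict):
    """Sum the points of every triggered rule and map to a risk band."""
    score = sum(points for _, predicate, points in RULES if predicate(features))
    if score >= 60:
        band = "Red"     # hold with containment actions
    elif score >= 40:
        band = "Amber"
    else:
        band = "Low"
    return score, band

# A cluster tripping R1 and R2 lands straight in Red: 35 + 30 = 65.
attack = {"workers_per_device_90m": 14, "first_claim_ratio": 0.8,
          "workers_per_wallet_24h": 9, "median_account_age_days": 10}
print(ring_score(attack))  # (65, 'Red')
```

Deterministic rules were a deliberate choice over an ML anomaly score: under a 24-hour deadline, every threshold is auditable, explainable to a judge, and cannot be silently gamed by gradient-probing the model.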
The case that nearly broke the whole model
Here's the thing nobody thinks about when designing fraud detection for gig
workers: the workers most likely to be falsely flagged are the workers who
most need the payout.
A Zomato rider with a ₹6,000 phone in the middle of Cyclone Michaung. Low
battery. No GPS lock. Dropped data connection every 3 minutes. Offline gap of
10 minutes during the claim window.
In our naive model? That worker looks exactly like a GPS spoofer. Every trust
signal is weak. Device integrity score: low. Spatiotemporal plausibility: low.
Cross-signal corroboration: low.
Our system would have held his payout and asked him to submit evidence while
he was sitting in floodwater with 9% battery.
That's not anti-fraud. That's just punishing poverty.
So we built what we called the Amber-Degraded Lane. Any claim where 2 of
these 4 conditions are true:
- Battery < 12%
- GPS accuracy worse than 150m for 8+ minutes
- Packet loss > 40% for 10+ minutes
- Offline gap > 8 minutes during alert
...automatically enters a protected flow. 40% provisional payout releases in
10 minutes. A 6-hour evidence recovery window opens. If review isn't completed
within 12 hours and no hard fraud signal is confirmed, the provisional
auto-upgrades to 70%.
The worker isn't penalized for bad infrastructure. The system absorbs the
uncertainty instead of pushing it onto the person with the least capacity to
handle it.
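The 2-of-4 eligibility gate above is simple enough to show directly. Field names here are hypothetical; the four conditions and their thresholds are the ones just listed.

```python
# Amber-Degraded Lane eligibility: a claim enters the protected flow
# when at least 2 of 4 degraded-signal conditions hold during the alert.
# Signal field names are assumptions for illustration.

def degraded_lane_eligible(signal: dict) -> bool:
    conditions = [
        signal.get("battery_pct", 100) < 12,
        signal.get("gps_accuracy_m", 0) > 150
            and signal.get("bad_gps_minutes", 0) >= 8,
        signal.get("packet_loss_pct", 0) > 40
            and signal.get("lossy_minutes", 0) >= 10,
        signal.get("offline_gap_minutes", 0) > 8,
    ]
    return sum(conditions) >= 2

# The cyclone rider from above: 9% battery, 10-minute offline gap.
rider = {"battery_pct": 9, "offline_gap_minutes": 10}
print(degraded_lane_eligible(rider))  # True -> 40% provisional in 10 min
```

Note the asymmetry: this check only ever moves a claim into a more protective flow. A spoofer who fakes low battery gains a partial provisional payout at most, and still has to survive the ring rules before anything upgrades.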
What we got wrong
We got 2 stars out of 5 in Phase 1.
The judge's feedback was precise: "Exceptional depth in fraud prevention.
Completely misses core hackathon requirements. No persona definition, no premium
model, no parametric triggers, no technical implementation."
They were right. We spent so much time on the adversarial defense section that
the actual insurance product design never made it into the README. The fraud
architecture was impressive. It just had no product underneath it.
The lesson: a security system without a product to protect is just a
white paper.
Phase 2 is about building the actual product. The fraud architecture is one
section. Not the whole thing.
What's coming in Phase 2
- Worker onboarding with dynamic weekly premium calculation (base ₹49 × zone risk multiplier × history multiplier × seasonal multiplier)
- Live parametric trigger monitoring (IMD API + mock weather feeds)
- Zero-touch claim initiation — no form, no button, just automatic detection
- Worker dashboard + insurer analytics panel
- The full anti-spoofing engine running underneath all of it
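The premium formula in that first bullet is just a product of multipliers over the ₹49 base. A sketch; the multiplier values below are made-up placeholders, only the formula shape comes from the plan.

```python
# Dynamic weekly premium: base Rs. 49 x zone risk x history x seasonal.
# The multiplier values in the example are illustrative placeholders.

BASE_WEEKLY_PREMIUM = 49.0  # rupees

def weekly_premium(zone_risk: float, history: float, seasonal: float) -> int:
    """Weekly premium in rupees, rounded to the nearest rupee."""
    return round(BASE_WEEKLY_PREMIUM * zone_risk * history * seasonal)

# Hypothetical rider: high-risk zone (1.4), clean claim history (0.9),
# monsoon season (1.2) -> 49 * 1.4 * 0.9 * 1.2 = 74.088
print(weekly_premium(zone_risk=1.4, history=0.9, seasonal=1.2))  # 74
```

Keeping the premium as a transparent product of named multipliers also matters for the persona work the judges asked for: a rider can be shown exactly why their price moved week to week.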
If you're building anything in the insurance or fintech space and want to talk
through the fraud detection architecture, I'm happy to go deeper in the comments.
We're still in the competition. Phase 2 deadline is April 4th.
Clock's running.
— Team Unfiltered Minds