This is a submission for the Built with Google Gemini: Writing Challenge
This project started as a proposal. My employer loved it so much they decided to fund it entirely. I'm a fresher — and somehow I'm the project manager.
A quick note on the name: I came up with "OpenHand" and honestly loved it — but the name is already taken, so it's not finalized. We're just calling it OpenHand internally for now. The final name will be something original that isn't claimed yet. I liked it too much not to keep using it in the meantime.
I was scrolling through scholarship donation sites one evening and I kept hitting the same wall: you donate ₹2,000 to help someone attend college and then... silence. You get a thank-you email. Maybe a vague update six months later. But you never actually know if your ₹2,000 paid for a textbook, tuition, or ended up somewhere else entirely.
That bothered me deeply.
So I asked myself: what if donors could see every single rupee they donated traced back to an actual receipt? Not "funds were used responsibly" — but literally a scanned invoice from the college, a bank transfer screenshot, metadata and all?
That became OpenHand.
What I Built with Google Gemini
OpenHand is a full-stack charity platform where individuals in need (scholarships, medical emergencies, livelihood support, community projects) can post aid requests, get them verified through a two-level admin review process, and receive donations from the public — with every single disbursement of funds published publicly with attached proof documents.
The platform's core differentiator is what I call the Transparency Engine: funds are never handed directly to recipients. Instead, administrators pay institutions and vendors directly — colleges, hospitals, bookstores — then publish a disbursement report with the invoice, bank transfer reference, and payment confirmation attached. Every donor who contributed to that cause receives an in-app notification:
"Update on Rahul's Scholarship: ₹1,200 paid to ABC College (Receipt #4521) + ₹800 paid to National Bookstore for textbooks (Invoice #789)"
It goes deeper than that — each donor can also see exactly how their specific contribution was proportionally allocated across those line items. Your ₹2,000 doesn't disappear into a pool. It follows a paper trail.
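As a rough sketch of how that proportional split could work in the backend's Java, assuming each donor's share is simply their contribution divided by the total raised (class, method, and field names here are illustrative, not OpenHand's actual code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of per-donor proportional allocation across
// disbursement line items; not the actual OpenHand implementation.
public class AllocationSketch {
    /**
     * Splits each disbursement line item by the donor's share of total funding:
     * share = donorContribution / totalRaised, applied to every line item.
     */
    static Map<String, BigDecimal> allocate(BigDecimal donorContribution,
                                            BigDecimal totalRaised,
                                            Map<String, BigDecimal> lineItems) {
        BigDecimal share = donorContribution.divide(totalRaised, 10, RoundingMode.HALF_UP);
        Map<String, BigDecimal> result = new LinkedHashMap<>();
        lineItems.forEach((desc, amount) ->
            result.put(desc, amount.multiply(share).setScale(2, RoundingMode.HALF_UP)));
        return result;
    }

    public static void main(String[] args) {
        // A donor gave ₹2,000 of a ₹20,000 total: a 10% share of each line item.
        Map<String, BigDecimal> items = new LinkedHashMap<>();
        items.put("Tuition paid to ABC College (Receipt #4521)", new BigDecimal("12000"));
        items.put("Textbooks from National Bookstore (Invoice #789)", new BigDecimal("8000"));

        allocate(new BigDecimal("2000"), new BigDecimal("20000"), items)
            .forEach((desc, amt) -> System.out.println(desc + " -> ₹" + amt));
        // The ₹12,000 tuition line yields ₹1200.00; the ₹8,000 textbook line yields ₹800.00.
    }
}
```

Using `BigDecimal` with an explicit rounding mode (rather than `double`) matters here, since fractions of a rupee across many donors would otherwise drift.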
The Tech Stack
| Layer | Technology |
|---|---|
| Backend | Spring Boot 3, Java 17, Spring Security, JPA/Hibernate |
| Database | PostgreSQL 16 |
| Frontend | Vue 3 + Vite + Pinia + Vue Router |
| Auth | JWT with embedded role + userId claims |
| Infrastructure | Docker + Docker Compose (3-container setup) |
| File Storage | Multipart upload (proof documents, KYC docs, receipts) |
Key Features in the MVP
- 10-stage request lifecycle: Draft → Submitted → Level 1 Review → Level 1 Approved → Level 2 Review → Published → Funding → Fully Funded → Disbursement → Completed
- Two-level admin review: Level 1 verifies document authenticity; Level 2 does deep institutional verification and approves for public listing — with separate internal notes at each stage
- Role-based access control: four roles (regular user, L1 admin, L2 admin, super admin) — every API endpoint locked down accordingly
- Disbursement lifecycle: Create → Mark Paid → Publish (with proof attachments) — triggers real-time donor notifications at each stage
- Donor dashboard: Personalized view of all funded causes, disbursement reports, and total impact summary
- Milestone updates: Recipients post progress updates (exam results, achievements, graduation) that automatically notify all their donors
- Admin audit trail: Every admin action permanently logged — who, when, what changed, reasoning
- Platform impact stats: Live public dashboard (total funds raised, active campaigns, completed causes)
- Donor badges: Four progressive tiers — first donation, repeat donor, generous, and champion — computed automatically based on giving history
- 4 request categories: Scholarship, Medical, Livelihood, Community — each with tailored verification fields
- Full containerization: One command spins up the entire platform
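The 10-stage lifecycle above is a natural fit for an explicit state machine. A minimal sketch of what that enum could look like, covering only the happy path (the real flow also has rejection branches; names here are mine, not the project's actual code):

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the 10-stage request lifecycle as an explicit
// state machine; enum and method names are assumptions, not OpenHand's code.
public enum RequestStatus {
    DRAFT, SUBMITTED, L1_REVIEW, L1_APPROVED, L2_REVIEW,
    PUBLISHED, FUNDING, FULLY_FUNDED, DISBURSEMENT, COMPLETED;

    /** Valid next states from the current one; empty for the terminal state. */
    public Set<RequestStatus> validTransitions() {
        return switch (this) {
            case DRAFT        -> EnumSet.of(SUBMITTED);
            case SUBMITTED    -> EnumSet.of(L1_REVIEW);
            case L1_REVIEW    -> EnumSet.of(L1_APPROVED);
            case L1_APPROVED  -> EnumSet.of(L2_REVIEW);
            case L2_REVIEW    -> EnumSet.of(PUBLISHED);
            case PUBLISHED    -> EnumSet.of(FUNDING);
            case FUNDING      -> EnumSet.of(FULLY_FUNDED);
            case FULLY_FUNDED -> EnumSet.of(DISBURSEMENT);
            case DISBURSEMENT -> EnumSet.of(COMPLETED);
            case COMPLETED    -> EnumSet.noneOf(RequestStatus.class);
        };
    }

    public boolean canTransitionTo(RequestStatus next) {
        return validTransitions().contains(next);
    }
}
```

A service layer can then refuse any transition not in `validTransitions()`, which is exactly what makes invalid states unrepresentable compared with ad-hoc boolean flags.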
This went from a blank VS Code window to a working, containerized, seed-data-loaded application in a single day.
The Role Google Gemini Played
I didn't use Gemini as a code autocomplete tool. I used it as a thinking partner through four distinct architectural phases — each treated as its own focused session with full context handed in upfront.
Phase 1 — Feature Planning
Before writing a single line of code, I described the problem to Gemini: the trust gap in charity platforms, the pain point of donors never knowing where money goes, and my instinct that direct disbursement with proof was the answer.
Gemini helped me stress-test edge cases I wouldn't have caught on my own:
- What happens if a campaign doesn't hit its funding goal by deadline? → Three options: extend deadline, partial scholarship, or refund donors — admin decides per case.
- How do you prevent someone from posting multiple overlapping requests? → Cooldown period — can't post a new request until the previous one resolves.
- How does penny-level tracking work across many donors? → Proportional allocation — if you donated 10% of the total, you see 10% of each disbursement line item.
The output was a structured features.md — 258 lines of specification across 11 feature domains. That document became the single source of truth for everything that followed.
Phase 2 — Frontend Design
With the spec locked, I described the four core user journeys to Gemini: a recipient posting a request, a donor browsing and giving, an admin reviewing and disbursing, a donor tracking their impact. We mapped those journeys to Vue components, Pinia store structure, and Vue Router guard patterns.
Key decisions from this phase:
- Embed `role` in JWT claims so the frontend can make role-based decisions without an extra API round-trip
- 30-second notification polling in the navbar (instead of WebSockets, for MVP simplicity)
- A 3-step form wizard for request creation so recipients aren't hit with one overwhelming 20-field form
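To make the claim-embedding decision concrete, here is a minimal sketch of minting an HS256 JWT with both `role` and `userId` claims using only the JDK (a real build would use a library such as jjwt; the secret and claim layout here are illustrative):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical JDK-only sketch of a JWT carrying role + userId claims.
public class JwtSketch {
    public static String mint(String secret, long userId, String role) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String header = b64.encodeToString(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        // Both claims embedded so the frontend can authorize and identify
        // the caller without an extra API round-trip.
        String payload = b64.encodeToString(("{\"userId\":" + userId
                + ",\"role\":\"" + role + "\"}").getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64.encodeToString(
                mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
        return header + "." + payload + "." + signature;
    }
}
```

The frontend can Base64URL-decode the payload segment to read `role` for routing guards, while the signature still lets the backend reject tampered tokens.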
Phase 3 — Backend Design
The most technically dense session. I needed a Spring Boot architecture that could handle the state machine, the dual admin approval flow, and the disbursement-to-notification pipeline — without becoming spaghetti.
Gemini helped me reason through:
- A custom global exception handler with mapped field-level validation errors (not generic 500s) for a much better frontend experience
- Layered DTOs separating internal entity state from API response shape — particularly the donation response DTO, which masks donor identity for anonymous contributions
- The audit service design: a standalone append-only table with `entityType`/`entityId` rather than versioned rows, so audit queries stay simple forever
- Spring Security matcher ordering — getting public browse endpoints accessible without auth while all write and admin endpoints stay protected, without conflicting patterns
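A plain-Java sketch of that append-only audit design, with the `entityType`/`entityId` keying described above (in-memory here where the real service would persist via JPA; names are illustrative):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Illustrative append-only audit log: one entry per admin action, keyed by
// entityType/entityId. Field names are assumptions, not OpenHand's schema.
public class AuditLogSketch {
    public record AuditEntry(String entityType, long entityId, String adminUser,
                             String action, String reasoning, Instant at) {}

    private final List<AuditEntry> entries = new ArrayList<>();

    /** Append-only: entries are added, never updated or deleted. */
    public void record(String entityType, long entityId, String adminUser,
                       String action, String reasoning) {
        entries.add(new AuditEntry(entityType, entityId, adminUser,
                action, reasoning, Instant.now()));
    }

    /** Audit queries stay simple: filter by the entity's type and id. */
    public List<AuditEntry> historyFor(String entityType, long entityId) {
        return entries.stream()
                .filter(e -> e.entityType().equals(entityType) && e.entityId() == entityId)
                .toList();
    }
}
```

Because rows are never mutated, the table doubles as a permanent record of who approved what, when, and why, with no version-reconciliation logic.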
The result: 9 controllers, 10 services, 8 repositories — every component with a single clear responsibility.
Phase 4 — Dockerization
Gemini walked me through the `depends_on` + healthcheck pattern — making the Spring Boot container explicitly wait for `pg_isready` before attempting datasource initialization. It also noted that `ddl-auto: update` is fine for an MVP and flagged Flyway migrations as a pre-production requirement.
The final docker-compose.yml spins up three containers: PostgreSQL 16 with a persistent named volume, the Spring Boot backend built from a multi-stage Dockerfile, and an Nginx container serving the Vue production build. Zero local installations needed beyond Docker itself.
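A condensed sketch of what that three-container compose file could look like — the service names, image tags, and paths here are assumptions, not the actual file:

```yaml
# Illustrative compose sketch of the depends_on + healthcheck pattern.
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data   # persistent named volume
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  backend:
    build: ./backend                      # multi-stage Spring Boot Dockerfile
    depends_on:
      db:
        condition: service_healthy        # wait for pg_isready before starting
  frontend:
    build: ./frontend                     # Nginx serving the Vue production build
    depends_on:
      - backend
volumes:
  pgdata:
```

The key detail is the long-form `depends_on` with `condition: service_healthy`, which gates backend startup on the healthcheck rather than on the container merely being created.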
Demo
The platform is currently under active development and is not yet publicly deployed — it is being funded by my organization ahead of a worldwide launch in the coming weeks.
Below is a walkthrough of the key screens in the working MVP:
Public browse page — Request cards with live progress bars, category filters, verification badges, and funding deadlines.
Request detail page — The full story, itemized funding breakdown, disbursement timeline with proof attachment links, and inline donation form.
Admin review panel — L1 and L2 review queues with internal notes, approval/rejection actions, and funding deadline controls.
Admin disbursements — Create a disbursement, mark it paid, publish it (which fires in-app notifications to all donors instantly).
Donor dashboard — Total donated, causes funded, badge tier, and recent donation history linked back to each request.
A live link will be shared here once we launch — and when that day comes, I'll be posting about it everywhere. Right here on DEV, on LinkedIn, and anywhere else people will listen. This one means a lot.
What I Learned
Technical
State machines are the right abstraction for lifecycle modeling. Once I modelled the request lifecycle as explicit enum states, everything else became clear — what transitions are valid, what notifications to fire at each transition, what each role is allowed to trigger. Never track multi-stage status with ad-hoc boolean flags.
JWT claim design matters from day one. Embedding both role and userId directly in the token means every request handler can authorize and identify the caller without a database round-trip. Mid-build I discovered an earlier scaffolded token provider class that hadn't embedded the user ID — a decision that would have cascaded painfully through every service that needed to know who was acting.
DTOs protect you from your own database. Returning a raw entity directly from a REST endpoint is a ticking clock — eventually you add a sensitive field and forget it's exposed. The discipline of explicit DTO classes with factory methods (especially the donation response DTO handling anonymous donor masking) saved real headaches down the line.
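A tiny sketch of that discipline, assuming a factory method owns the anonymity decision (the record and field names are illustrative, not the project's actual DTO):

```java
// Hypothetical response DTO: the factory method, not the caller, decides
// whether the donor's identity is exposed.
public record DonationResponseSketch(String donorName, long amountInRupees) {
    /** Never return the raw entity; map it through an explicit factory. */
    public static DonationResponseSketch from(String donorName, long amountInRupees,
                                              boolean anonymous) {
        return new DonationResponseSketch(anonymous ? "Anonymous" : donorName,
                amountInRupees);
    }
}
```

Since every field in the record was deliberately chosen, adding a sensitive column to the entity later can never silently leak through this endpoint.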
Audit tables are cheap and invaluable. The audit log table cost me 45 minutes to build. For a platform where humans make trust-critical financial decisions, knowing that every admin action is permanently recorded — who approved, when, with what justification — is foundational, not a nice-to-have.
Personal
I'm a fresher. And the way this project got funded was not something I planned.
I built this entirely on my own time, for myself — because I genuinely wanted it to exist. Once I had a working demo, I casually showed it to my manager. He liked it enough to bring his manager in — a director. The director liked it enough to gather the senior leadership team. I demoed the working application to all of them. They approved it on the spot and decided to fund it entirely under the organization.
I'm now the project manager of something I built in a single day as a personal side project. That still doesn't feel real.
The lesson here isn't about presentation skills or pitching frameworks. It's simpler: ship something real. A working demo in front of the right people will always say more than any slide deck or spec document ever could.
I also learned to be comfortable with deliberate scope. Shipping a working, demonstrable core beats shipping a half-broken everything. Razorpay integration, email delivery, and recurring donations are all staged — the foundation is solid, and they'll come next.
Google Gemini Feedback
What Worked Brilliantly
Structured problem-solving sessions. When I came to Gemini with well-framed questions and full context ("here's the feature spec, here's the entity model, what are the failure modes in this design?"), the responses were genuinely insightful — not textbook answers, but responses that probed my specific design decisions.
The feature planning phase was exceptional. Gemini surfaced edge cases — partial funding resolution, the cooldown anti-fraud measure, the KYC verification flow — that I would have hit much later and much more expensively in the build.
Cross-layer consistency. When I described a frontend feature, Gemini would naturally flag "you'll need the backend endpoint to return X for that to work" — catching API contract mismatches before they became integration bugs.
Where I Needed More
Large interconnected code generation occasionally needed reconciliation. When generating multiple service classes together, a method signature in one class wouldn't quite align with the interface expected by another. The code was correct in isolation — the integration seams needed a human review pass.
Over-engineering instinct. When I asked for a simple file upload handler, I got back a multi-interface strategy-pattern solution. A single focused service class was the right call for an MVP. I had to explicitly say "simpler is fine — we're not designing for hot-swappable storage backends right now."
The Ugly
Mid-build, I asked Gemini to help fix a Spring Security CORS issue with preflight requests being blocked. The fix it suggested was technically correct — but it introduced a security regression by widening the allowed origins to *. It wasn't wrong to suggest it (it's a standard debugging step), but it highlighted the core lesson: you need to understand every suggestion before you apply it. The right tool still requires a thinking operator.
What's Next
OpenHand is in active development, fully funded by my organization. The next build sprint covers:
- Real Razorpay/Stripe integration — escrow capability, refund mechanisms for failed campaigns
- Email delivery — SMTP infrastructure is configured; notification templates are next
- Penny-level allocation UI — the headline feature: each donor's dashboard shows their specific rupees apportioned across each disbursement line item
- KYC verification workflow — ID gateway integration
- Milestone post form — recipient UI for posting updates (backend complete, frontend form staged)
- Impact visualization — geographic heatmap, category breakdowns, monthly trend charts
Within a few weeks, OpenHand will go live publicly. Anyone will be able to post a request — for a scholarship, medical treatment, livelihood support, or a community project — and every donor will be able to trace exactly where every rupee went, with the actual receipts to prove it.
The mission is earnest: if every charity platform worked this way, the trust deficit that holds so many people back from donating would simply stop existing.
The four-component approach — Plan → Frontend → Backend → Infrastructure — reflects a belief that AI is most valuable when you bring it a structured problem, not an open-ended one. Each Gemini session started with context. The outputs were sharp because the inputs were specific.
AI-assisted development is a collaboration. Gemini was the smartest teammate in the room on Day 1. The code running in Docker right now is the result of that collaboration.
The live link is coming soon, and when it does, I'll be shouting about it here on DEV, on LinkedIn, and everywhere else.
If you're a donor who has ever wondered where your money actually went — this platform (whatever its final name turns out to be) is being built for you.