DEV Community

Poetry Of Code

fill.ai - Turn PDFs Into Conversations

Inspiration

For over 8.7 million visually impaired adults in the U.S., something as routine as filling out a tax form, medical intake sheet, or job application can be frustrating or even impossible without help. Existing form fillers require visual interaction — clicking, dragging, typing — and none are built with true accessibility in mind. We wanted to change that by building a tool that gives these users independence, speed, and confidence.

What it does

fill.ai is a voice-powered, AI-driven form filler designed specifically for visually impaired users. Just upload any form — PDF, scan, or image — and the app:

  • Automatically detects fields using AI + OCR

  • Prompts the user to fill out each field using natural language

  • Allows users to speak their responses entirely by voice

  • Auto-fills the form in real time and generates a completed PDF

No mouse. No keyboard. No visual interface required.

How we built it

  • Frontend: React + Vite + SCSS Modules, with accessible markup and keyboard navigation support.
  • Voice Input: Web Speech API for speech-to-text conversion.
  • OCR & Field Detection: Tesseract.js + custom logic to parse text layout and detect form fields from scanned documents.
  • Form Filling Logic: JSON-based structure for field mapping, tied to voice prompts and AI suggestions.

  • PDF Handling: PDF-lib to generate and fill form data into PDF templates.
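The "custom logic to parse text layout" could look roughly like the heuristic below: scan each OCR line for a label followed by a colon or a run of underscores. This is a minimal sketch under our own assumptions, not the detection logic fill.ai actually ships.

```javascript
// Illustrative heuristic for pulling form fields out of raw OCR text.
// The real fill.ai detection logic is custom and not shown in the post.
function extractFields(ocrText) {
  const fields = [];
  for (const rawLine of ocrText.split("\n")) {
    const line = rawLine.trim();
    // Pattern 1: "Label: ______" or a bare "Label:"
    let m = line.match(/^([A-Za-z][\w /()-]*?)\s*:\s*_*\s*$/);
    // Pattern 2: "Label ________" (blank indicated by underscores)
    if (!m) m = line.match(/^([A-Za-z][\w /()-]*?)\s+_{3,}\s*$/);
    if (m) fields.push({ label: m[1], value: null });
  }
  return fields;
}

const scanned = "Full Name: ______\nDate of Birth ________\nPlease print clearly";
console.log(extractFields(scanned).map((f) => f.label));
// ["Full Name", "Date of Birth"]
```

In practice this kind of regex pass would run on Tesseract.js output and be backed by fallback rules for skewed or noisy scans, as described under Challenges.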

Challenges we ran into

  • OCR Accuracy: Scanned forms are often low-quality or skewed. We had to implement cleaning logic and fallback detection methods.

  • Voice Handling: Managing speech input in a structured and user-friendly way was tricky, especially with multiple fields and interruptions.

  • Form Complexity: Real-world forms are inconsistent — we had to account for variable layouts and missing field tags.
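One way to keep speech input structured across many fields and interruptions is a small session state machine. The sketch below is hypothetical: the `repeat` and `skip` commands are illustrative, not the app's actual voice vocabulary, and real speech-to-text results would come from the Web Speech API rather than plain strings.

```javascript
// Hypothetical state machine for structured, interruption-tolerant
// voice input. Commands and class name are illustrative.
class VoiceFormSession {
  constructor(fields) {
    this.fields = fields; // e.g. ["Name", "Email"]
    this.answers = {};
    this.index = 0;
  }

  currentPrompt() {
    if (this.index >= this.fields.length) return "All fields are complete.";
    return `Please say your ${this.fields[this.index]}.`;
  }

  // Feed one speech-to-text result; returns the next prompt to speak aloud.
  handleUtterance(text) {
    const t = text.trim().toLowerCase();
    if (this.index >= this.fields.length) return this.currentPrompt();
    if (t === "repeat") return this.currentPrompt(); // re-read the prompt
    if (t === "skip") { this.index += 1; return this.currentPrompt(); }
    this.answers[this.fields[this.index]] = text.trim();
    this.index += 1;
    return this.currentPrompt();
  }
}

const session = new VoiceFormSession(["Name", "Email"]);
session.handleUtterance("Ada Lovelace"); // recorded; moves on to Email
session.handleUtterance("repeat");       // re-reads the Email prompt
session.handleUtterance("skip");         // leaves Email blank
console.log(session.answers);            // { Name: "Ada Lovelace" }
```

Keeping all progress in one session object makes interruptions recoverable: the user can always ask where they are, and the app never loses already-captured answers.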

Accomplishments that we're proud of

  • Created a fully voice-driven form filling experience — no mouse or keyboard needed.

  • Built accessible UI components that work well with screen readers.

  • Successfully processed and completed real scanned forms using only voice input.

  • Designed the system to be useful not just for the visually impaired, but for anyone needing hands-free interaction.

  • Successfully implemented speech recognition for multiple languages, including Hindi, Spanish, Ukrainian, and Urdu

What we learned

  • Accessibility-first design isn't just a feature — it changes how you think about user flows and interface priorities.

  • Voice UI is incredibly powerful, but needs thoughtful structure and fallback handling.

  • AI can enhance accessibility when it’s used with purpose — detecting form fields from imperfect scans was a real win.

What's next for fill.ai

🔄 Improve field detection using ML-based layout analysis
🌐 Expand language support for multilingual users
📱 Build a mobile-first experience for on-the-go form filling
🧑‍🦯 Partner with accessibility orgs for real user testing and feedback
🔒 Add secure document upload and signing capabilities
