It keeps happening. I'm reading on my Kindle, come across something I don't fully understand, and face the same annoying choice: put the Kindle down and grab my phone, or wrestle with the Kindle's browser and a ChatGPT interface that was never built for e-ink screens.
After dealing with this enough times, I spent this weekend building Kindle-ChatGPT: a simple AI chat that works directly in your Kindle's browser. No app to download, no account to create. Just type your question and get an answer optimized for e-ink.
The problem with AI on Kindle
When I'm reading on my Kindle and come across something I want to understand better, I have two options:
- Put down the Kindle and grab my phone
- Try to use the Kindle's browser to access a traditional AI chat interface that's not optimized for e-ink displays
Neither option is ideal. The first interrupts my reading flow, and the second gives me an interface that's painful to use on an e-ink screen with its low refresh rate and limited interactivity.
What I built
Kindle-ChatGPT is a web app designed exclusively for Kindle e-reader browsers. Here's what makes it work well on Kindle:
- High contrast design: Black and white interface optimized for e-ink displays
- Simple interaction: Minimal UI that works with Kindle's browser limitations
- No login required: Access it instantly without creating an account
- Lightweight: Fast loading and minimal battery drain
- Streaming responses: See the AI's answer appear progressively, just like ChatGPT
The name says "ChatGPT" because that's what people search for, but it actually uses Google's Gemini API under the hood.
Technical implementation
I built Kindle-ChatGPT with these technologies:
- Next.js 15: React framework with server-side rendering
- TypeScript: Type safety for better code quality
- Tailwind CSS: Utility-first CSS for rapid UI development
- Google Gemini AI: Specifically the gemini-2.5-flash-lite-preview-09-2025 model
- Cloudflare Workers: Edge computing for fast global performance
Why Gemini instead of ChatGPT?
While the service is named "ChatGPT" for discoverability, I chose Google's Gemini API for several technical reasons:
- Better free tier: Gemini offers more generous rate limits for free usage
- Faster responses: The flash-lite model is optimized for speed
- Native streaming: Built-in SSE (Server-Sent Events) support for progressive responses
- Lower latency: Works well with Cloudflare's edge network
Key technical challenges
Building for Kindle presented unique challenges:
1. Kindle browser detection
The app only works on Kindle browsers to maintain focus on the optimized experience. I implemented device detection to ensure users get the interface designed specifically for e-ink displays.
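The detection code itself isn't shown here, but a minimal sketch using Next.js middleware and user-agent sniffing could look like this (the regex, the `/landing` route, and the middleware approach are my assumptions, not necessarily what the site actually does):

```typescript
// middleware.ts — hypothetical sketch of Kindle detection via the User-Agent header
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const userAgent = request.headers.get('user-agent') ?? '';
  // E-ink Kindle browsers typically include "Kindle" in their user-agent string
  const isKindle = /Kindle/i.test(userAgent);

  // Non-Kindle visitors get rewritten to a landing page instead of the chat UI
  if (!isKindle && request.nextUrl.pathname === '/') {
    return NextResponse.rewrite(new URL('/landing', request.url));
  }
  return NextResponse.next();
}

export const config = {
  matcher: '/',
};
```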
2. Rate limiting without authentication
Since there's no login, I implemented IP-based rate limiting using Cloudflare KV storage:
```typescript
// Rate limiting configuration
const RATE_LIMIT_PER_MINUTE = 10; // Max 10 requests per minute
const DAILY_MESSAGE_LIMIT = 100;  // Max 100 messages per day
const MAX_MESSAGE_LENGTH = 5000;  // Max 5000 characters per message
```
This prevents abuse while keeping the service free and accessible.
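The post only shows the limits themselves; a minimal sketch of the per-minute KV check might look like this (the binding name, key format, and TTL are assumptions):

```typescript
// Hypothetical sketch: per-IP rate limit check backed by Cloudflare KV.
// KVNamespace comes from @cloudflare/workers-types.
async function checkRateLimit(kv: KVNamespace, ip: string): Promise<boolean> {
  // One counter bucket per IP per minute
  const key = `rate:${ip}:${Math.floor(Date.now() / 60_000)}`;
  const current = parseInt((await kv.get(key)) ?? '0', 10);

  if (current >= RATE_LIMIT_PER_MINUTE) {
    return false; // over the limit, reject the request
  }

  // Increment the counter; the key expires on its own shortly after the minute ends
  await kv.put(key, String(current + 1), { expirationTtl: 120 });
  return true;
}
```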
3. Streaming responses for e-ink
E-ink displays have slow refresh rates, so I needed to balance streaming speed with readability. The implementation uses Server-Sent Events (SSE) to stream responses from Gemini:
```typescript
const apiUrl = `https://generativelanguage.googleapis.com/v1beta/models/${GEMINI_MODEL}:streamGenerateContent?alt=sse&key=${GEMINI_API_KEY}`;

const response = await fetch(apiUrl, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    contents,
    generationConfig: {
      temperature: 0.7,
      maxOutputTokens: 2048,
      topP: 0.95,
      topK: 40,
    },
  }),
});
```
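On the other side of that fetch, the stream has to be read event by event. A simplified sketch of pulling the text chunks out of Gemini's SSE stream (real code would buffer partial lines across reads, which this skips):

```typescript
// Hypothetical sketch: read Gemini's SSE stream and extract the text chunks.
const reader = response.body!.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Each SSE event arrives as a line prefixed with "data: " containing JSON
  for (const line of decoder.decode(value, { stream: true }).split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const event = JSON.parse(line.slice(6));
    const text = event.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
    // Forward `text` to the client so the answer renders progressively
  }
}
```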
4. Conversation history management
To provide context-aware responses, I maintain conversation history on the client side and send it with each request:
```typescript
const contents = (history || []).map((msg: { role: string; content: string }) => ({
  role: msg.role === 'assistant' ? 'model' : 'user',
  parts: [{ text: msg.content.substring(0, MAX_MESSAGE_LENGTH) }],
}));
```
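For context, the client side just has to keep the running conversation in memory and send it with every request. A rough sketch (the `/api/chat` endpoint and field names are assumptions):

```typescript
// Hypothetical sketch of the client side: keep the conversation in memory
// and send it along with each new question.
type Message = { role: 'user' | 'assistant'; content: string };
const history: Message[] = [];

async function ask(question: string): Promise<string> {
  history.push({ role: 'user', content: question });

  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: question, history }),
  });

  const answer = await res.text(); // or consume the stream chunk by chunk
  history.push({ role: 'assistant', content: answer });
  return answer;
}
```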
How to use it
Using Kindle-ChatGPT is straightforward:
- Open your Kindle's web browser (Menu → Experimental Browser or Web Browser, depending on your model)
- Navigate to kindle-chatgpt.com
- Start typing your question in the text area
- Press Enter or tap the Send button
- Watch the response stream in
Real-world use cases
Here's how I actually use it while reading:
- Quick definitions: "What does 'epistemology' mean in simple terms?"
- Context about historical events: "What was happening in Europe in 1848?"
- Concept clarification: "Explain quantum entanglement like I'm 12"
- Author background: "Who is Yuval Noah Harari and what's his background?"
- Book recommendations: "What other books are similar to Sapiens?"
- Writing help: "Help me rephrase this sentence to be clearer"
The key is that I never have to leave my Kindle. The conversation stays in context, and the high-contrast interface doesn't strain my eyes.
Why Kindle-only?
Some people asked why I restricted it to Kindle browsers. Here's my reasoning:
- Focused optimization: By targeting one device type, I can optimize the entire experience
- Clear value proposition: Kindle users know exactly what they're getting
- Prevents misuse: Limits the potential for bot traffic and abuse
- Battery efficiency: The simplified UI is designed specifically for e-ink's power characteristics
If someone tries to access the site from a regular browser, they see a landing page explaining the service and encouraging them to use it on their Kindle.
Security and privacy
Since there's no authentication, privacy was a top concern:
- No data storage: Conversations aren't saved on the server
- No tracking: No analytics or cookies beyond what's necessary for rate limiting
- IP-based rate limiting: Uses Cloudflare KV with automatic expiration
- Input validation: Strict message length limits and content validation
- Security headers: Proper CSP, X-Frame-Options, and other security headers (see the sketch after this list)
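In a Next.js app these headers would typically live in the config file. A minimal sketch with illustrative values (not necessarily the ones the site actually ships):

```typescript
// next.config.ts — illustrative security headers; the real policy may differ
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          { key: 'X-Frame-Options', value: 'DENY' },
          { key: 'X-Content-Type-Options', value: 'nosniff' },
          { key: 'Content-Security-Policy', value: "default-src 'self'" },
        ],
      },
    ];
  },
};

export default nextConfig;
```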
Deployment on Cloudflare
I deployed this on Cloudflare Workers using @opennextjs/cloudflare, which adapts Next.js for Cloudflare's edge network. This gives several advantages:
- Global edge network: Fast response times worldwide
- Free tier: 100,000 requests per day on the free plan
- KV storage: Built-in key-value storage for rate limiting
- Automatic scaling: Handles traffic spikes without configuration
The build process is simple:
```bash
npm run pages:build
npm run deploy
```
Lessons learned
Building this taught me several things about working with constrained environments:
1. Constraints can simplify decisions
The Kindle's limitations (slow refresh, limited JavaScript, basic CSS support) forced me to strip away unnecessary complexity. What I ended up with was simpler and more focused.
2. Progressive enhancement matters
The app works with minimal JavaScript. If streaming fails, it falls back to a simple request-response model. This makes it more resilient.
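The fallback logic itself isn't shown above, but the shape of it is roughly this (endpoint name and rendering details are assumptions):

```typescript
// Hypothetical sketch of the fallback: stream if the browser can read the
// response body incrementally, otherwise wait for the full answer.
async function getAnswer(question: string): Promise<string> {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: question }),
  });

  // Older Kindle browsers: no ReadableStream support, so just wait for everything
  if (!res.body || typeof res.body.getReader !== 'function') {
    return res.text();
  }

  // Otherwise read the stream chunk by chunk and render progressively
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let answer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    answer += decoder.decode(value, { stream: true });
    // update the page with `answer` here
  }
  return answer;
}
```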
3. Building for yourself speeds up iteration
I built this because I needed it. Every technical decision was tested immediately by actually using the app on my Kindle while reading.
4. Free services need clear limits
Without proper rate limiting, a free AI service would be abused instantly. The 100 messages per day limit is generous enough for legitimate use but prevents abuse.
What's next
I'm considering these additions:
- Support for multiple AI models (letting users choose between Gemini, Claude, etc.)
- Saved conversation history (optional, with user consent)
- Integration with Kindle's built-in dictionary
- Export conversations to your email
But I'm being cautious about adding features. The simplicity is part of what makes it work well on Kindle.
Try it yourself
If you have a Kindle e-reader, try it out at kindle-chatgpt.com. It's free and requires no signup.