This is a submission for the Built with Google Gemini: Writing Challenge
What I Built with Google Gemini
I built Sigilla, a privacy-first, read-it-later application specifically designed to cure "digital hoarding" for Personal Knowledge Management (PKM) users.
Most developers and tech enthusiasts suffer from the Collector's Fallacy. We save hundreds of articles in tools like Pocket or browser bookmarks, creating a graveyard of unread links just to feel productive in the moment.
Sigilla solves this by introducing forced auto-decay. If you save a link and do not read it within 7 days, it is archived. It forces a hard decision: either prioritize the information right now, or let it go. For the articles you actually read, Sigilla features built-in Spaced Repetition (SM-2) to help you retain the knowledge, and provides clean Markdown exports directly to systems like Obsidian. It also handles massive Pocket and Instapaper imports entirely client-side, ensuring no server ever touches your data.
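Both mechanics are simple enough to sketch. The snippet below is not Sigilla's actual code; it is a minimal TypeScript illustration of the 7-day decay rule and one step of the standard SM-2 update, with all names (`SavedLink`, `applyDecay`, `sm2`) invented for this example:

```typescript
// Hypothetical data model for a saved link (names are assumptions).
interface SavedLink {
  url: string;
  savedAt: number;   // epoch ms when the link was saved
  readAt?: number;   // set once the user actually reads the article
  archived: boolean;
}

const DECAY_MS = 7 * 24 * 60 * 60 * 1000; // the 7-day decision window

// Archive any link that was never read within the decay window.
function applyDecay(links: SavedLink[], now: number = Date.now()): SavedLink[] {
  return links.map((link) =>
    !link.readAt && !link.archived && now - link.savedAt > DECAY_MS
      ? { ...link, archived: true }
      : link
  );
}

// Per-article review state for spaced repetition.
interface ReviewState {
  ease: number;      // SM-2 easiness factor, starts at 2.5
  interval: number;  // days until the next review
  reps: number;      // consecutive successful reviews
}

// One SM-2 update step; quality is the 0-5 recall grade.
function sm2(state: ReviewState, quality: number): ReviewState {
  if (quality < 3) return { ...state, reps: 0, interval: 1 }; // failed recall resets
  const ease = Math.max(
    1.3,
    state.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
  );
  const interval =
    state.reps === 0 ? 1 : state.reps === 1 ? 6 : Math.round(state.interval * ease);
  return { ease, interval, reps: state.reps + 1 };
}
```

Keeping both updates as pure functions over plain objects is what makes a fully client-side app like this easy to persist to local storage and export to Markdown.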
Google Gemini played a massive role as my lead architect and co-pilot. I used Gemini to rapidly prototype the React frontend, structure the backend logic, and write the complex client-side parsing needed for the imports. It allowed me to focus entirely on the core philosophy and user experience of the app, rather than getting bogged down in boilerplate and syntax.
Demo
The web app is live and fully functional. You can try the methodology yourself here:
(Note: The companion Chrome extension is currently sitting in the Chrome Web Store review queue.)

What I Learned
The biggest lesson was a personal one: letting go of my developer ego.
I used to be highly skeptical of AI coding assistants and have been vocal about it in the past. However, building Sigilla completely shifted my perspective. I realized that we should use the AI tools available to us. It is not cheating; it is simply the next natural layer of abstraction.
Working extensively with Gemini taught me how to shift from being a traditional "coder" to becoming an "editor" and a technical director. Your job is no longer to memorize the exact syntax for an obscure array method; your job is to deeply understand the user's actual problem and guide the AI to build a robust solution for it.
Technically, I learned a great deal about client-side data processing. Forcing the app to handle heavy imports without a backend server required strict state management, and iterating on those solutions alongside an AI sped up my learning curve immensely.
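To make the client-side import concrete: Pocket's export is a flat HTML file of `<a href="…" time_added="…">` elements, so a single regex pass can extract everything in the browser with no server round-trip. This is a hedged sketch under that assumption, not Sigilla's actual parser; the names and the exact regex are illustrative:

```typescript
// Shape of a link recovered from a Pocket HTML export (names assumed).
interface ImportedLink {
  url: string;
  title: string;
  savedAt: number; // epoch seconds, from Pocket's time_added attribute
}

// Matches <a href="…" … time_added="…" …>Title</a> entries in the export.
const LINK_RE = /<a\s+href="([^"]+)"[^>]*\btime_added="(\d+)"[^>]*>([^<]*)<\/a>/g;

// Parse the raw export text entirely in the browser; no data leaves the client.
function parsePocketExport(html: string): ImportedLink[] {
  const links: ImportedLink[] = [];
  for (const m of html.matchAll(LINK_RE)) {
    links.push({ url: m[1], title: m[3], savedAt: Number(m[2]) });
  }
  return links;
}
```

For very large exports, the same function can be fed file slices from a `FileReader` so the UI thread never blocks on a multi-megabyte string, which is where the strict state management mentioned above comes in.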
Google Gemini Feedback
The Good:
The speed of iteration is unmatched. Gemini is fantastic at taking a high-level architectural concept and churning out the complex regex and component structures needed to make it work. It acts as an incredible sounding board when you are stuck on a logic problem, instantly offering two or three different approaches to evaluate.
The Friction:
The main challenge was context degradation. During long coding sessions with multiple files, the model would sometimes lose track of earlier architectural decisions or specific data schemas we had established. I had to learn to be very strict with my prompting, regularly resetting the context and reminding the model of the exact constraints we were working within. You cannot just put it on autopilot; it requires a firm, guiding hand to keep the code consistent across a growing codebase.
Top comments (11)
This is such a relatable idea. "Tab anxiety" is real.
I really like the forced 7-day auto-archive concept. It actually pushes you to decide what's important instead of just collecting links and feeling productive. That's a smart psychological twist, not just another read-it-later app.
The privacy-first and fully client-side approach is also impressive. Handling big imports without sending data to a server shows real thought behind the product.
And your reflection about moving from "coder" to more of an "editor" mindset with AI feels very honest. AI doesn't replace thinking; it makes clear thinking even more important. Great build and great insights!
Thank you for reading and for the great breakdown. You really hit the nail on the head. The entire goal was to force actual decision-making rather than just letting us collect links to feel productive. Building the heavy imports entirely client-side was a fun technical challenge, but absolutely necessary for the privacy aspect. And I completely agree regarding the mindset shift. We should use the AI available to us to handle the heavy lifting, so we can focus our clear thinking on solving the actual behavioral problems. I really appreciate your feedback!
This is such a thoughtful and honest build!
I really love the idea of forced auto-decay; that 7-day rule hits hard. It's such a simple constraint, but it directly attacks the "I'll read this later" lie we all tell ourselves. Turning a read-it-later app into a decision-making tool instead of a storage unit is a powerful shift.
Also, your reflection about moving from "coder" to "editor" really resonated with me. That mindset change is real. AI doesn't replace thinking; it amplifies it if you guide it well.
Super clean philosophy. Super real learning. Respect for building something that solves an actual behavioral problem, not just a technical one.
Thank you for reading and for the great feedback. 'A decision-making tool instead of a storage unit' is probably the best way anyone has described it so far. That is exactly the goal.
You are completely right about the mindset shift as well. When we actually use the AI available to us, it amplifies our ability to focus on real behavioral problems instead of just getting stuck typing boilerplate code. I really appreciate your perspective!
Great read! I really resonated with your point about shifting from a 'coder' to an 'editor.' Iâve also been using Gemini heavily in my workflow, especially pairing it with v0 for rapid frontend prototyping. Itâs amazing how these tools let us focus more on the core problem rather than the boilerplate. Thanks for sharing Sigilla's philosophy
Thank you for reading. Pairing Gemini with v0 for the frontend is an incredibly powerful combination. It perfectly captures that 'editor' mindset. When we use the AI available to us to clear out the boilerplate, we actually get to spend our energy solving the real behavioral problems, like the digital hoarding Sigilla targets. Glad the philosophy resonated with you!
Wow, great article! I am curious how you used Gemini. Have you started with Gemini 3, or Gemini 2.5 Flash?
Thank you! I have primarily been using the Gemini 3 models for the heavier architectural lifting and complex logic, like figuring out the client-side parsing. For quicker iterations, smaller components, and boilerplate code, the 2.5 Flash models are incredibly fast. We really should use the AI available to us right now to move fast, and mixing the models gave me the perfect balance between deep reasoning and speed.
Okay, I see. Did you run out of tokens on Gemini 3? Itâs a common issue when using that model.
Yes, context limits and usage caps can definitely be a bottleneck with the heavier models. I managed to avoid the worst of it by being very strict about how I structured my prompts and keeping my sessions focused.
Whenever a conversation got too long or the model started losing track of the architecture, I would summarize the current state of the code and start a completely fresh session. Also, deliberately offloading the smaller, repetitive tasks to the Flash models really helped save the Gemini 3 bandwidth for the actual complex problem-solving.
ok! I see