DEV Community

The End of Tab Anxiety: Building a Destructive Read-It-Later App

NorthernDev on March 02, 2026

This is a submission for the Built with Google Gemini: Writing Challenge. What I Built with Google Gemini: I built Sigilla, a privacy-first, read-it...
guestpostdiscovery

This is such a relatable idea 😅 “Tab anxiety” is real.

I really like the forced 7-day auto-archive concept. It actually pushes you to decide what’s important instead of just collecting links and feeling productive. That’s a smart psychological twist, not just another read-it-later app.

The privacy-first and fully client-side approach is also impressive. Handling big imports without sending data to a server shows real thought behind the product.

And your reflection about moving from “coder” to more of an “editor” mindset with AI feels very honest. AI doesn’t replace thinking; it makes clear thinking even more important. Great build and great insights 👏

NorthernDev

Thank you for reading and for the great breakdown. You really hit the nail on the head: the entire goal was to force actual decision-making rather than letting us collect links just to feel productive. Building the heavy imports entirely client-side was a fun technical challenge, but absolutely necessary for the privacy aspect. And I completely agree about the mindset shift. We should let AI clear out the heavy lifting so we can focus our clear thinking on solving the actual behavioral problems. I really appreciate your feedback!
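To make the client-side import idea concrete, here is a rough sketch (invented names, not the actual Sigilla source) of parsing a Netscape-style browser bookmarks export entirely in the client, so no link data ever leaves the user's machine. A real browser build would likely use DOMParser; a regex keeps this sketch dependency-free:

```typescript
// Hypothetical sketch of client-side bookmark import parsing.
// These names (SavedLink, parseBookmarksExport) are illustrative only.

interface SavedLink {
  url: string;
  title: string;
  savedAt: number; // epoch ms
}

// Extract <A HREF="...">Title</A> entries from a Netscape-style
// bookmarks export string. Runs entirely in the client.
function parseBookmarksExport(html: string, now = Date.now()): SavedLink[] {
  const links: SavedLink[] = [];
  const anchor = /<a\s[^>]*href="([^"]+)"[^>]*>([^<]*)<\/a>/gi;
  let m: RegExpExecArray | null;
  while ((m = anchor.exec(html)) !== null) {
    links.push({ url: m[1], title: m[2].trim(), savedAt: now });
  }
  return links;
}
```

For very large exports, the same function can be fed the file in chunks read via the browser's File API, which keeps memory flat without ever touching a server.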

Raj Dutta

This is such a thoughtful and honest build 👏

I really love the idea of forced auto-decay — that 7-day rule hits hard. It’s such a simple constraint, but it directly attacks the “I’ll read this later” lie we all tell ourselves. Turning a read-it-later app into a decision-making tool instead of a storage unit is a powerful shift.

Also, your reflection about moving from “coder” to “editor” really resonated with me. That mindset change is real. AI doesn’t replace thinking — it amplifies it if you guide it well.

Super clean philosophy. Super real learning. Respect for building something that solves an actual behavioral problem, not just a technical one.

NorthernDev

Thank you for reading and for the great feedback. 'A decision-making tool instead of a storage unit' is probably the best way anyone has described it so far. That is exactly the goal.
You are completely right about the mindset shift as well. When we actually use the AI available to us, it amplifies our ability to focus on real behavioral problems instead of getting stuck typing boilerplate code. I really appreciate your perspective!
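The forced-decay rule itself is tiny, which is part of the point. Here is a minimal sketch of the 7-day auto-archive (assumed names, not Sigilla's actual code): anything the user has not explicitly acted on past the window gets swept out of the inbox.

```typescript
// Illustrative sketch of the 7-day forced auto-archive rule.
// Item and sweep() are invented names for this example.

const DECAY_WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

interface Item {
  url: string;
  savedAt: number;   // epoch ms
  decided: boolean;  // user read it, pinned it, or otherwise acted on it
}

// Partition the inbox: undecided items older than the window are archived.
function sweep(items: Item[], now: number): { inbox: Item[]; archive: Item[] } {
  const inbox: Item[] = [];
  const archive: Item[] = [];
  for (const item of items) {
    const expired = !item.decided && now - item.savedAt > DECAY_WINDOW_MS;
    (expired ? archive : inbox).push(item);
  }
  return { inbox, archive };
}
```

The constraint does the psychological work; the code just enforces it on every load.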

Sam Woo

Great read! I really resonated with your point about shifting from a 'coder' to an 'editor.' I’ve also been using Gemini heavily in my workflow, especially pairing it with v0 for rapid frontend prototyping. It’s amazing how these tools let us focus more on the core problem rather than the boilerplate. Thanks for sharing Sigilla's philosophy

NorthernDev

Thank you for reading. Pairing Gemini with v0 for the frontend is an incredibly powerful combination, and it perfectly captures that 'editor' mindset. When we let AI clear out the boilerplate, we get to spend our energy on the real behavioral problems, like the digital hoarding Sigilla targets. Glad the philosophy resonated with you!

Benjamin Nguyen

Wow, great article! I'm curious how you use Gemini. Have you started using Gemini 3, or Gemini 2.5 Flash?

NorthernDev

Thank you! I have primarily been using the Gemini 3 models for the heavier architectural lifting and complex logic, like figuring out the client-side parsing. For quicker iterations, smaller components, and boilerplate code, the 2.5 Flash models are incredibly fast. Mixing the models gave me the perfect balance between deep reasoning and speed.
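In case it helps, my model mixing boils down to a trivial routing decision like the sketch below. The task taxonomy and the "gemini-3-pro" model name are my own placeholders, not an official API pattern; "gemini-2.5-flash" is the Flash model I mentioned.

```typescript
// Illustrative sketch of routing work between a heavy model and a fast one.
// Task categories and the heavy model's name are assumptions for this example.

type Task = "architecture" | "complex-logic" | "boilerplate" | "small-component";

function pickModel(task: Task): string {
  switch (task) {
    case "architecture":
    case "complex-logic":
      return "gemini-3-pro";      // placeholder name for the heavier model
    default:
      return "gemini-2.5-flash";  // fast model for repetitive work
  }
}
```

Deliberately sending the cheap, repetitive tasks to Flash is also what keeps the heavy model's usage caps from becoming a bottleneck.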

Benjamin Nguyen

Okay, I see. Did you run out of tokens on Gemini 3? It’s a common issue when using that model.

NorthernDev • Edited

Yes, context limits and usage caps can definitely be a bottleneck with the heavier models. I managed to avoid the worst of it by being very strict about how I structured my prompts and keeping my sessions focused.
Whenever a conversation got too long or the model started losing track of the architecture, I would summarize the current state of the code and start a completely fresh session. Also, deliberately offloading the smaller, repetitive tasks to the Flash models really helped save the Gemini 3 bandwidth for the actual complex problem-solving.

Benjamin Nguyen

Okay, I see. Thanks!