Andrew Chadwick
Curing Developer "Time Blindness" with Local AI: SheepCat v1.1 is live πŸ‘πŸˆ

Wrestling with massive C# codebases and complex SQL queries by day usually means my executive function is completely fried by 5:00 PM. I frequently drop into a deep flow state and emerge hours later with absolutely zero memory of what I actually worked on.

Standard time-trackers make this worse. Ticking stopwatches create background anxiety, and heavy enterprise dashboards demand too much context-switching.
That's why I've been building SheepCat: Track My Work in public: an open-source, purely local work tracker built in Python.

The philosophy is "Cognitive Ergonomics." It gives you a gentle, dismissible nudge every hour. You just dump your messy, raw thoughts into it without breaking your flow ("fighting with the DB connection again"). At the end of the day, it passes those fragmented, dyslexic brain-dumps to a local LLM (via Ollama) to synthesize a clean, professional stand-up report.
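To make the end-of-day flow concrete, here is a minimal sketch of how raw notes can be handed to a local Ollama server for summarization. The function names, prompt wording, and model choice are my own illustrative assumptions, not SheepCat's actual internals; only the `/api/generate` endpoint is Ollama's real API.

```python
import json
import urllib.request

# Hypothetical sketch: join the day's fragmented check-in notes into one
# prompt and send it to a locally running Ollama instance.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_standup_prompt(notes: list[str]) -> str:
    """Turn fragmented check-in notes into a single summarization prompt."""
    joined = "\n".join(f"- {n}" for n in notes)
    return (
        "Rewrite these raw work notes into a short, professional "
        "stand-up report:\n" + joined
    )

def summarize_locally(notes: list[str], model: str = "llama3") -> str:
    """POST the prompt to the local Ollama server; nothing leaves the machine."""
    payload = json.dumps({
        "model": model,
        "prompt": build_standup_prompt(notes),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the whole round trip is a POST to `localhost`, the raw notes never leave your machine.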
Today I pushed v01.01.00, which includes a massive quality-of-life upgrade for developers who struggle with context retention.

What's New in v1.1?

β€‹πŸ” 1. Search + AI Summarization (The "What did I do Tuesday?" Cure)
Have you ever stared blankly at your screen during a Wednesday morning stand-up trying to remember what you accomplished on Monday?
You can now search through your past messy logs in SheepCat and have the local AI instantly summarize the resulting tasks into a clean, readable brief. No more frantic scrolling through git commits or Jira histories.
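The search step can be pictured as a simple filter over stored log entries before they reach the summarizer. The entry format (a day string plus free text) and the function name here are assumptions for illustration only, not SheepCat's actual storage schema:

```python
# Hypothetical sketch of the search step: filter stored log entries by
# day and/or keyword, then hand the matches to the local summarizer.
def search_logs(entries, day=None, keyword=None):
    """Return (day, text) entries matching an optional day and keyword."""
    results = []
    for entry_day, text in entries:
        if day is not None and entry_day != day:
            continue
        if keyword is not None and keyword.lower() not in text.lower():
            continue
        results.append((entry_day, text))
    return results
```

Answering "what did I do Monday?" then becomes one filter call plus one local LLM call over the matches.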

β€‹πŸ§  2. Custom AI Prompts
Every neurodivergent brain works a little differently, so you shouldn't be locked into a default AI personality. You can now fully customize the system prompts for both your interval check-ins and your end-of-day summaries. Tailor the LLM's output to fit exactly how your specific workflow operates. (We also restored the default gentle nudge interval to a solid 60 minutes).
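As a rough picture of what "fully customizable" means here, the settings could look something like the dict below. The key names are hypothetical, invented for this sketch rather than taken from SheepCat's real config schema:

```python
# Hypothetical settings shape; keys are illustrative, not SheepCat's
# actual configuration format.
settings = {
    "nudge_interval_minutes": 60,  # restored default gentle-nudge interval
    "checkin_system_prompt": (
        "You are a gentle assistant. Acknowledge the note in one line."
    ),
    "summary_system_prompt": (
        "Summarize the day's notes as a concise stand-up report."
    ),
}
```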

100% Local. Zero Cloud Sync.
Because SheepCat uses Ollama to run the AI locally on your machine, your proprietary code snippets, internal rants, and raw notes never touch a cloud server.

If you are tired of loud, demanding tracking tools and want to try a "gentler" approach, you can grab the latest release or check out the code here:

β€‹πŸ‘‰ SheepCat on GitHub (v01.01.00)
​I'd love to hear your feedback in the comments. How do you handle tracking your work when you're deep in the zone? Does your team use anything that actually respecs cognitive load? Let me know!

(And if you want to help fuel the late-night Python debugging, I recently set up my GitHub Sponsors page!)

Top comments (2)

klement Gunndu

The local-only Ollama approach is the right call β€” dumping raw work context into a cloud API would be a nonstarter for anyone working on proprietary codebases. The custom system prompts for summaries is a nice touch for adapting to different workflows.

Andrew Chadwick

Yeah, there's so much out there that's cloud-first. The more I talk to people about it, the more I feel there's space in the market for something like this 😊 Thanks for the feedback 😊