The Problem: Context Pollution and Portability
Over the past few months, I've been evolving my AI-powered journaling system from a simple experiment into a sophisticated tool for self-reflection and personal growth. This journey has taken me from Python-based MCP servers to Rust scripts, and from basic tool integration to leveraging the full Claude Code ecosystem. In this post, I'll share the technical evolution and the lessons learned along the way.
The Solution: Three Technical Migrations
Migration 1: Python MCPs to Rust Scripts
Finding a Real Rust Project
I've been studying Rust for a few months now, but I struggle to find practical projects to solidify my learning. I created Trackwatch out of necessity, born from a concept a friend shared and my frustration with Tidal's web player limitations. However, relying heavily on AI assistance for that project left me feeling unsatisfied - I wasn't truly learning.
So I decided to rewrite my journal's markdown parsers in Rust, figuring out the indexing challenges with minimal AI help. My first professional programming work involved parsing HTML, XML, and RSS feeds to build a news database for mobile display, so this felt like coming full circle - learning to parse Markdown 15 years later. The difference is stark, though: modern tooling makes it almost trivial compared to wrestling with Nokogiri back then.
You can find the complete Rust implementation here: markdown-journal-rust. For a detailed explanation of how these scripts work together with semantic search and structured metadata analysis, check out my previous post: An Architecture for Personal AI: Combining Semantic Search and Structured Metadata Analysis.
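To give a flavor of the parsing work, here's a simplified sketch (not the repo's actual code) of the first thing the indexer has to do: separate a daily file's YAML frontmatter from its markdown body.

```rust
// Illustrative only: split a journal file into frontmatter and body.
// Frontmatter is fenced by `---` lines at the very top of the file.
fn split_frontmatter(doc: &str) -> (Option<&str>, &str) {
    if let Some(rest) = doc.strip_prefix("---\n") {
        if let Some(end) = rest.find("\n---\n") {
            // Everything between the fences is frontmatter; the rest is body.
            return (Some(&rest[..end]), &rest[end + 5..]);
        }
    }
    // No frontmatter found: treat the whole document as body.
    (None, doc)
}

fn main() {
    let doc = "---\ndate: 2025-08-20\nmood: focused\n---\nToday I worked on...";
    let (frontmatter, body) = split_frontmatter(doc);
    println!("frontmatter: {frontmatter:?}");
    println!("body: {body}");
}
```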
LanceDB and Embeddings
After researching vector databases with Rust support, I settled on LanceDB, which offered the best combination of performance and ease of use. For embeddings, I upgraded from the older all-MiniLM-L6-v2 model to the newer BGE-base-en-v1.5, which provides much richer semantic representations at 768 dimensions instead of 384. Testing showed noticeably better retrieval quality - the larger model captures more nuanced relationships between concepts.
With over 5 months of journaling, I now have 150+ daily files plus countless topic-specific documents - ideas, study logs, personal reflections. Indexing is slower without GPU acceleration (haven't prioritized this yet), but queries return in under 20ms. I evolved the chunking strategy from fixed 500-character chunks with 50-character overlap to dynamic 200-2000 character chunks with no overlap. This preserves entire paragraphs, maintaining context integrity.
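A simplified sketch of that chunking idea: split on blank lines so paragraphs stay intact, then pack paragraphs together until a chunk reaches the 200-2000 character window. (This version doesn't further split a single paragraph that exceeds the max, which a real indexer would need to handle.)

```rust
const MIN_CHUNK: usize = 200;
const MAX_CHUNK: usize = 2000;

/// Pack whole paragraphs into chunks of roughly MIN_CHUNK..=MAX_CHUNK
/// characters, with no overlap between chunks.
fn chunk_paragraphs(text: &str) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for para in text.split("\n\n").map(str::trim).filter(|p| !p.is_empty()) {
        // If adding this paragraph would overshoot the max and the current
        // chunk is already big enough, close the current chunk first.
        if !current.is_empty()
            && current.len() + para.len() > MAX_CHUNK
            && current.len() >= MIN_CHUNK
        {
            chunks.push(std::mem::take(&mut current));
        }
        if !current.is_empty() {
            current.push_str("\n\n");
        }
        current.push_str(para);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}
```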
Migration 2: Eliminating MCP Context Pollution
When I was creating the first version of the journal, everything was novel to me. The concept of MCPs was just starting to be understood by the public; Anthropic had released the protocol only a few weeks prior. It felt like I was doing something incredibly smart by giving the model a focused tool it could use, and it was indeed very interesting to use for a while. But I soon started noticing some drawbacks.
The first is that installing an MCP isn't that easy. Every time I wanted to change my editor or the AI program I was using (VS Code, Cursor, Windsurf, Trae, Rovo Dev, Gemini CLI, Roo Code, Cline, Claude Code, Cursor CLI, Opencode), I had to change something about the configuration. Either the JSON fields weren't the same, or the client couldn't take an SSE connection and had to cold-start a Python script on every call, which took a very long time.
Then there's the fact that I was writing a lot more code: not only did I need the scripts to run the queries and indexing, I also had to write and maintain the MCP wrapper around them. That felt unnecessary for a local script, and I soon realized it was.
The MCP also added a lot of pollution to the context: every call (successful or not) landed there, and the agent would sometimes fail to use the tool properly even with instructions in the prompt. So I had the brilliant idea to simply create scripts and have the model run bash commands. This lets me trim the scripts' output as much as I want and know exactly what goes into the context.
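To make that concrete, here's a hypothetical sketch of the idea (the names are illustrative, not the repo's actual code): the search script prints one terse line per hit, so every byte that reaches the agent's context is deliberate.

```rust
// What a trimmed, agent-friendly output format can look like.
struct Hit {
    path: String,
    score: f32,
    snippet: String,
}

/// Print one compact line per hit; the cap on snippet length keeps the
/// agent's context from filling up with raw document text.
fn print_for_agent(hits: &[Hit], max_snippet_chars: usize) {
    for hit in hits {
        let snippet: String = hit.snippet.chars().take(max_snippet_chars).collect();
        println!("{:.2} | {} | {}", hit.score, hit.path, snippet);
    }
}
```

The agent then just runs the binary through bash and sees exactly those lines - nothing more.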
Migration 3: Adopting Claude Code's Ecosystem
After testing every new model on day one of its release to see which best fit the journal's purpose, I settled on Claude Sonnet 4. I'm not going to go into detail on its personality, because you can change all that with a prompt, but the fact that it's great at tool calling is a plus. Another HUGE advantage is the Claude Code ecosystem, which makes maintaining parts of the journaling workflow a lot easier.
Claude Code Feature Integration
Slash Commands
The first Claude Code tool that made my life easier was the slash command. With it, you create a file in the `commands/` folder that gets executed every time you run `/nameofthefile`. So I created two very basic commands: `/start` and `/commit`.
The first asks Claude to check the time (as it doesn't have an internal clock) and then follow the agent rules: read the previous two days' files and my profile, then create today's file if it doesn't exist. The second verifies the previous commit messages, runs `git add .`, commits, and pushes (I keep my files in a private repo to avoid losing everything like in April).
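For illustration, here's a stripped-down sketch of what the `/start` command file can look like (the real one is more detailed, and the wording here is paraphrased):

```markdown
<!-- .claude/commands/start.md -->
Run `date` to get the current date and time.
Then follow the agent rules:
1. Read the daily files for the previous two days.
2. Read my profile file.
3. If today's daily file doesn't exist yet, create it from the template.
```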
Sub Agents
When this feature was released, it opened up new possibilities for handling complex queries...
journal-rag-search
I no longer need to ask my running agent to query the database itself, which doesn't always return exactly what I want, still pollutes the context, and sometimes needs multiple queries to get the whole story. For this I created the `journal-rag-search` sub agent. It's in charge of calling the Rust script as many times as it needs, building a narrative out of what it finds, and then returning to my main agent exactly what it needs to know to keep the conversation going. This gives me the best results yet: what started as a simple MCP, then evolved into the running agent calling the scripts directly, is now delegated to background agents that can run multiple queries, assemble a proper story, and return just that - perfect context for the running agent.
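Sub agents are defined as markdown files with a bit of frontmatter under `.claude/agents/`. A rough, simplified sketch of mine (the description and prompt are paraphrased):

```markdown
<!-- .claude/agents/journal-rag-search.md -->
---
name: journal-rag-search
description: Searches the journal's vector index and reports back a concise
  narrative of what it found. Use for questions about past entries.
tools: Bash, Read
---
You are a retrieval specialist. Call the Rust search script via bash, as
many times as needed, then return a short story of what you found: only
the facts the main agent needs to continue the conversation.
```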
journal-field-completer
My daily file is very complex: frontmatter fields at the top of the markdown, plus multiple fixed fields across the file that the model is supposed to fill in as the day's conversation happens. Many times the model doesn't understand it's supposed to do that, and at the end of the day a lot of these fields would still be blank even though the file contained enough data to populate everything. So I created a sub agent whose only task is to read the whole file and fill the fields with real, relevant data. Once that worked, I added a line to the `/commit` slash command: check the time, and if it's night time, run the field completer before pushing. It works every time, and my files are always complete now.
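The addition to `/commit` amounts to a single instruction along these lines (paraphrased):

```markdown
<!-- appended to .claude/commands/commit.md -->
Before committing, run `date +%H`. If it is evening, invoke the
journal-field-completer sub agent on today's file first, then proceed
with staging, committing, and pushing.
```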
weekly-retro-analyzer
This is basically a template that runs once a week and creates a retrospective of my journey over the past 7 days, which I'll discuss more below.
Status Line
This is their most recently released feature - it hasn't even been a week since it went live. You can add a simple line under the chat box with whatever data you want. It's driven by a bash command, so people usually show the model they're using, the current folder, the git branch, and so on. For me, the most important thing is knowing the current context usage. I strongly dislike the auto-compact feature because it's a waste of tokens, and I don't like being caught by surprise by the dreaded system message ("You have x% of context left before auto-compact"). The status line lets me always know roughly how large my prompts are and how far along the conversation is, so I can plan when to call `/commit` or restart the conversation.
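Wiring it up is a single entry in `settings.json` that points at any executable; Claude Code pipes session info to it as JSON on stdin, and whatever the script prints becomes the status line. A minimal sketch (the script itself is where you'd compute context usage):

```json
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline.sh"
  }
}
```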
Hooks
Hooks are a very powerful feature: you can tell Claude to do something whenever a given event fires. I'm definitely not using their full potential. My only use case so far is injecting the current time into every prompt, so the model no longer hallucinates how long things take. It loves adding timestamps to my conversations, even though I have a strict rule in my prompt to NOT do that - it just can't help itself. So to stop having the wrong time stamped onto each paragraph, I figured out how to inject the correct time every time.
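A sketch of that time-injection hook, assuming the `UserPromptSubmit` event (whatever the command prints to stdout gets added to the prompt's context; the exact schema may differ across versions):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "date '+Current time: %A, %Y-%m-%d %H:%M'"
          }
        ]
      }
    ]
  }
}
```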
The Weekly Retrospective System
This was the latest addition to the journal: the weekly retrospective.
Thoughts Were Getting Lost
As I continue to write down my thoughts every day, it's still not possible to have the model read every single file in the project before talking to me - context size limitations make that impractical. One rule in my prompt is that the model should read the past two days' files before we start our conversation, which means it doesn't know about anything that happened earlier than that. The weekly retro fixes this by creating a one-file snapshot of the previous week that can be read every day as part of the initial prompt.
Creating Accountability
The journal has evolved from a simple snapshot of my mind into something more complex, akin to a therapist operating under my own rules. That means I have to keep track of all the things I've decided to do (studying, going to the gym, maintaining boundaries with people, saving money, staying sober...). The weekly retro reads the previous week's retro file and the daily files for the past 7 days, writes a comprehensible summary, then checks whether I've done everything I said I would and what new things came up. It also makes it easier for me to keep track of my own progress. I'm gamifying my existence and becoming the main character of my life by tracking the evolution.
Verifying Results
The goal, of course, is to verify that I'm on track: that no matter what happens on a day-to-day basis, I'm still doing what I said I was going to do. We can only achieve great things with repetition; making plans is great, but they only make me anxious if I don't follow them. Now I get excited to do exactly what I said I would do, because it gives me that feeling of reward at the end of the "sprint".
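The retro file itself is plain markdown. A condensed sketch of its shape (section names are illustrative, not the exact template):

```markdown
# Weekly Retro - Week of {date}

## Summary
What actually happened this week, in a few paragraphs.

## Commitments Check
- Studying: on track? evidence pulled from the daily files
- Gym: sessions completed vs. planned
- Boundaries: situations that came up, and how they went
- Money: spending vs. savings goals
- Sobriety: status

## New Threads
Decisions, ideas, and commitments that came up this week.
```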
Real-World Application: Engineering Management
As an Engineering Manager, the weekly retro has become essential for juggling multiple responsibilities. Here's how I use it to maintain performance across different areas:
Team Leadership Tracking:
- Document key points from 1-on-1s with team members
- Track action items and follow-ups from previous weeks
- Monitor team morale and individual developer growth
- Ensure I'm providing consistent support without micromanaging
Technical Contribution Balance:
- Verify I'm still contributing code despite management duties
- Track my PR velocity to remain one of the top contributors in the company
- Document technical decisions and architectural choices
- Review code quality against my personal museum-worthy standard: "Would Sandi Metz be proud of this?"
Quality Metrics:
- Number of PRs submitted and merged
- Code review feedback received and given
- Refactoring improvements implemented
- Test coverage maintained or improved
The weekly retro aggregates all this data, showing me patterns like:
- Am I spending enough time coding vs. meetings?
- Are my 1-on-1s actually helping my team members grow?
- Is my code quality suffering when I'm context-switching too much?
- Which days/times am I most productive for deep work?
This systematic review ensures I'm not just busy, but effective in both leadership and technical contribution. The journal becomes my accountability partner, reminding me when I've neglected either my team or my craft.
Conclusion
This evolution from MCPs to Rust scripts and Claude Code's ecosystem represents more than just technical improvements - it's about finding the right balance between automation and control. The journal has transformed from a simple markdown collection into an intelligent system that actively helps me maintain accountability and track progress.
The key insight? Sometimes simpler is better. Direct bash scripts provide cleaner context than MCPs, sub-agents handle complex queries elegantly, and weekly retros create the continuity that daily snapshots miss. Most importantly, building these tools in Rust gave me the practical experience I was seeking while creating something genuinely useful.
As I continue this journey, the system keeps evolving with my needs. That's the beauty of building your own tools - they grow with you.
If you're interested in building something similar or want to explore the code, check out the markdown-journal-rust repository on GitHub.