This is the chapter where I admit something simple: clean code talks are nice, but products are built in history, not in slides.
When MindMapVault was small, I could hold most of it in my head. Then the project grew into frontend work, desktop work, encryption flows, uploads, backend routes, different database paths, deployment scripts, release notes, and a lot of "just fix this one thing" days.
That is when coding rules stopped being theory and became survival.
The rules I kept coming back to were not fancy:
- keep changes small
- do not refactor the whole house when the sink is leaking
- be extra careful in crypto, auth, and storage code
- keep frontend and backend contracts aligned
- prefer readable code over clever code
That sounds obvious. It is also the difference between a project that can still move and a project that starts breaking under its own weight.
The code has a visible history, and that is normal
You can see the project history in the codebase. I actually think that is healthy to admit.
MindMapVault started with MongoDB. Later I added Stoolap. Later I added SQL-oriented paths and those _sql.rs files. If you do not stop everything and rewrite the whole project from zero every time the architecture evolves, signs of the older path stay visible.
That is not a moral failure. That is what real software looks like.
You can often read the timeline directly from the repo:
- frontend_app/: hosted app UI, editor, crypto helpers, vault flows
- frontend_www/: marketing site, release notes, public blog
- desktop/src-tauri/: local desktop shell and native packaging
- backend/src/: auth, routes, storage, DB adapters, upload flows
- scripts/: regression runners, banner rendering, deployment helpers
Then inside the backend there is another layer of history:
- older MongoDB-oriented paths
- later SQL and Stoolap paths
- route files that had to stay practical while the storage model evolved
You can absolutely see that evolution if you read the repo for long enough. I am fine with that. Every long-running project carries some residue of its earlier decisions.
Practical first, perfect never
I like practical things. I like jobs done.
That preference is visible in the code.
Some parts are tidy and stable. Some parts are tactical. Some parts are not how I would design them in a greenfield rewrite. But a real product is not rebuilt from first principles every Tuesday morning.
There is a version of software advice that pretends all good code emerges from calm, linear planning. That is not how most product work happens.
Real code grows under pressure from:
- feature delivery
- production bugs
- changed infrastructure
- new storage backends
- packaging and deployment headaches
- the need to keep existing users working while the internals evolve
So yes, theoretically clean code and real-life code are often different things.
The goal was never to make MindMapVault look like a textbook. The goal was to keep it understandable enough, safe enough, and changeable enough while the product kept moving.
Where the rules were bent
There are places where the project is cleaner than average, and places where it absolutely is not.
The most visible compromises are usually these:
- Boundaries are not always perfect
Some concerns leak across layers because the fastest safe fix was not always the prettiest abstraction.
- React hygiene is not always textbook
There are places with deliberate lint-rule exceptions or dependency-array compromises because stable behavior in a real flow mattered more than satisfying the purest interpretation of the rule.
- Style consistency is uneven
Some modules were written during calmer phases. Others were written during "let me finally finish this *** and go to bed" phases. That difference is visible.
I would rather say that openly than write fake architecture prose around it.
Copilot changed the texture of the code too
Another honest point: code does not look like it did five years ago.
This project carries signs of Copilot use and, more broadly, LLM-assisted development. That is real now. We should stop pretending otherwise.
Sometimes that means faster scaffolding. Sometimes it means a strange but useful first draft. Sometimes it means the code gets a little more chaotic or stylistically mixed than it would with one human colleague writing every line in one voice.
But there is another side to that trade-off.
LLMs are also good at searching through that mess, finding the right file, spotting a broken path, or repairing a repeated pattern faster than a human might. In that sense, the code is not only written differently now. It is also maintained differently now.
I think we have to accept both sides:
- the Copilot touch is visible
- some generated structure is less elegant than an ideal hand-crafted version
- but the same tooling also makes large, messy codebases easier to search, patch, and recover
- and different coding styles are visible within a single project, even when it comes from one man's hand (and a robot's hand)
That is part of modern software reality now.
What actually kept the project under control
The workflow was practical, not ceremonial.
- tsc and production builds were the first fast safety net
- backend regression runs through WSL and Python scripts were the real "did I break the product" check
- dependency audits were hygiene, not proof of quality
- security-sensitive changes were validated by behavior, not by vibes
On the Python side I leaned on repeatable checks like:
- scripts/backend_regression_test.py
- scripts/attachement_regression_test.py
- scripts/shared_regression_test.py
- scripts/production_functional_test.py
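The actual scripts are not reproduced here, but the pattern behind such a runner can be sketched in a few lines. Everything below is hypothetical (the check names and structure are illustrative, not MindMapVault's real code): run every check even when an earlier one fails, treat a crashing check as a failing check, and report the full picture at the end.

```python
"""Minimal sketch of a regression-runner pattern (illustrative only,
not the actual MindMapVault scripts)."""

from dataclasses import dataclass
from typing import Callable


@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""


def run_checks(checks: dict[str, Callable[[], bool]]) -> list[CheckResult]:
    """Run every check even if an earlier one fails, so a single run
    reports the whole picture instead of stopping at the first error."""
    results = []
    for name, check in checks.items():
        try:
            results.append(CheckResult(name, check()))
        except Exception as exc:  # a crashing check is a failing check
            results.append(CheckResult(name, False, repr(exc)))
    return results


# Hypothetical stand-ins for real checks (HTTP probes, CRUD round-trips, ...)
checks = {
    "auth_roundtrip": lambda: True,
    "upload_roundtrip": lambda: True,
    "stale_route": lambda: 1 / 0,  # simulates a check that crashes
}

results = run_checks(checks)
failed = [r.name for r in results if not r.passed]
print(f"{len(results) - len(failed)}/{len(results)} checks passed")
print("failed:", failed)
```

The useful property is the one the chapter cares about: the runner never hides a failure behind an earlier one, so a 2 AM debugging session starts from a complete list.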
For burst and stress behavior I also used helpers like:
- scripts/load_test_stoolap.py
- scripts/production_burst_test.py
- scripts/crud_burst_runner.py
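The shape of a burst test is simple enough to sketch. This is a generic illustration, not the contents of those scripts: fire N operations concurrently through a thread pool and count how many survive, because a storage path that behaves under one polite request can still fall over under a short burst. The stand-in operation at the bottom is hypothetical; a real run would hit an actual endpoint or CRUD path.

```python
"""Minimal sketch of a burst-test harness (illustrative only)."""

from concurrent.futures import ThreadPoolExecutor


def _safe(operation) -> bool:
    """Run one operation; any exception counts as a failure."""
    try:
        operation()
        return True
    except Exception:
        return False


def burst(operation, n: int, workers: int = 8) -> tuple[int, int]:
    """Run `operation` n times across a thread pool; return (ok, failed)."""
    ok = failed = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for succeeded in pool.map(lambda _: _safe(operation), range(n)):
            if succeeded:
                ok += 1
            else:
                failed += 1
    return ok, failed


# Hypothetical stand-in for a real CRUD call against the backend
calls = []
ok, failed = burst(lambda: calls.append(1), n=50)
print(f"burst: {ok} ok, {failed} failed")  # burst: 50 ok, 0 failed
```

The point is not sophistication; it is repeatability. Running the same burst before and after a storage change turns "I think it still holds up" into a number.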
That does not make the project magically clean. It just means there were repeatable ways to keep reality in view.
Passing lint does not prove good architecture. A green build does not prove good UX. An audit does not prove safe design.
But together, those checks helped keep the project from drifting too far into chaos.
The maintainability standard I actually believe in
For me, maintainability is not "could this win a code-style argument on the internet?"
It is more practical:
- can I still understand this at 2 AM during a bug
- can I change one thing without breaking five others
- can I trace a storage or auth path end to end
- can I ship a fix without turning it into a rewrite
That is the bar I care about.
MindMapVault is not pristine, and I do not need to pretend it is. It is a real product with a real timeline, visible scars, and a codebase that shows both human shortcuts and AI-era development habits.
I am okay with that.
What matters is that the important parts stay understandable, the dangerous parts stay guarded, and the project keeps moving without collapsing under its own history.