"I don't have $30,000 for a GPU cluster. Does that mean I can't evolve my AI?"
That was the question that started PickyTrain.
We've been told for years that if you want to change how an LLM thinks, you need a massive dataset and a training loop that eats VRAM for breakfast. I call BS. If a model is just a giant pile of weights, why can’t we just... edit the weights?
Today, I’m open-sourcing PickyTrain: a "hex editor" for AI models that lets you perform "brain surgery" on GGUF files on your CPU, with zero training data.
🧠 The Problem: The "Black Box" of Fine-Tuning
Standard fine-tuning is a shotgun approach. You throw data at a model and hope the backpropagation hits the right neurons. It’s expensive, slow, and requires hardware most of us don't have under our desks.
🔪 The Solution: Surgical Weight Editing
PickyTrain (written in Rust 🦀) "thaws" frozen GGUF models into a new fluid format called PTXY.
No GPU? No Problem. It runs entirely on the CPU.
No Dataset? Fine. You don't need 10,000 examples. You just need to find the right "synapse" and nudge it.
Rust Performance: Built with a high-performance Rust core and Python bindings via PyO3. It’s fast, memory-safe, and won't crash your dev environment.
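The core idea, stripped down to its essence, is that a "nudge" is just arithmetic on a tensor. Here's a minimal sketch of that concept in Python (PickyTrain's actual API may look nothing like this; the function and tensor names below are purely illustrative):

```python
import numpy as np

# Illustrative "weight nudge" (NOT PickyTrain's real API): treat a layer's
# weights as a plain array and scale it by a small factor, no gradients,
# no dataset, no GPU involved.

def nudge(weights: np.ndarray, factor: float = 1.01) -> np.ndarray:
    """Return a copy of `weights` scaled by `factor` (e.g. a ~1% nudge)."""
    return weights * np.float32(factor)

# Toy tensor standing in for a real GGUF tensor.
ffn = np.ones((4, 4), dtype=np.float32)
nudged = nudge(ffn, 1.10)  # strengthen this (toy) layer by 10%
```

The point is that once the weights are "thawed" into an editable representation, an edit is O(size of one tensor), not O(training run).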
✨ What can you actually do with it?
Nudge Behavior: Want your coding agent to be 10% more concise? Find the FFN weights and give them a "nudge."
Correct Hallucinations: Surgically adjust the weights where specific facts are stored.
Safety Guardrails: Every edit is tracked in a Delta Journal. If you accidentally "lobotomize" your model, just hit Rollback. It’s like Git for your AI's brain.
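To make the rollback idea concrete, here's a minimal sketch of what a delta journal can look like (again illustrative, assuming additive deltas; PickyTrain's on-disk journal format is its own):

```python
import numpy as np

# Illustrative delta journal (not PickyTrain's actual implementation):
# record each edit's delta so any edit can be undone by subtracting
# deltas in reverse order, like popping commits off a branch.

class DeltaJournal:
    def __init__(self):
        self.entries = []  # list of (tensor_name, delta_array)

    def apply(self, tensors: dict, name: str, delta: np.ndarray):
        """Apply an additive edit and remember it."""
        tensors[name] = tensors[name] + delta
        self.entries.append((name, delta))

    def rollback(self, tensors: dict, n: int = 1):
        """Undo the last `n` edits, newest first."""
        for _ in range(n):
            name, delta = self.entries.pop()
            tensors[name] = tensors[name] - delta

tensors = {"ffn.w1": np.zeros(3, dtype=np.float32)}
journal = DeltaJournal()
journal.apply(tensors, "ffn.w1", np.full(3, 0.5, dtype=np.float32))
journal.rollback(tensors)  # back to the original weights
```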
🛠️ The Tech Stack
Language: Rust (The "Scalpel")
Bindings: Python / PyO3 (The "Interface")
UI: A slick Curses TUI for terminal-dwelling hackers.
Compatibility: Supports Q4_K, Q8_0, F16, and F32 GGUF quants.
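Quant support matters because most GGUF weights aren't stored as plain floats. Per the ggml spec, a Q8_0 block packs 32 weights as one float16 scale `d` plus 32 signed int8 quants, and each dequantized value is simply `d * q` (editing such a tensor means dequantize, nudge, requantize):

```python
import numpy as np

# How a GGUF Q8_0 block decodes (per the ggml quantization spec):
# 32 weights per block, one fp16 scale `d`, 32 int8 quants `qs`,
# dequantized value = d * q.

def dequantize_q8_0(d: np.float16, qs: np.ndarray) -> np.ndarray:
    assert qs.shape == (32,) and qs.dtype == np.int8
    return np.float32(d) * qs.astype(np.float32)

qs = np.arange(-16, 16, dtype=np.int8)          # toy quant values
weights = dequantize_q8_0(np.float16(0.5), qs)  # scale of 0.5
```

F16 and F32 tensors skip this round trip entirely, which is why they're the safest targets for repeated surgical edits.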
🚧 This is just the beginning
I developed this while working on my Sovereign AI Stack: the idea that we should all own our own "Ghost Corporations" of local AI agents. PickyTrain is the tool that lets those agents evolve without a cloud subscription.
Note: this project is still under active development, so you might hit bugs or errors. Here's what I'm thinking of adding next:
Activation Heatmaps: To see which neurons fire during a prompt.
GGUF-Bake: To export your "surgeries" back to standard formats.
LoRA Merging: To bake adapters directly into the weights on a CPU.
👉 Check out the Repo: https://github.com/Ainix-dev/PickyTrain