Watch the 2-minute demo on YouTube!
The problem I was tired of solving
Every time I wanted to fine-tune a small language model, I'd spend 2-3 hours on the same boring setup:
- Which model fits my data? Llama? Phi? Mistral?
- What hyperparameters should I use?
- How do I set up LoRA without breaking something?
- Why is my Colab crashing again?
I got tired of it. So I built TuneKit to automate the entire thing.
What TuneKit does
Three steps. That's it.
- Upload your data (JSONL file; quick example below)
- AI picks the best model (Llama 3.2, Phi-4, Mistral, Qwen, Gemma)
- Get a ready-to-run Colab notebook → Hit "Run All" → trained model in ~15 minutes
No coding. No guessing. No cloud bills.
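If you haven't worked with JSONL before, it's just one JSON object per line. TuneKit's exact schema isn't spelled out here, so treat the field names below as an illustration; the snippet only shows the general shape of a training file.

```python
# Illustrative only: writes a tiny instruction-tuning dataset as JSONL.
# The "prompt"/"completion" field names are an assumption, not TuneKit's required schema.
import json

examples = [
    {"prompt": "Summarize: TuneKit automates fine-tuning setup.",
     "completion": "It picks a model and generates a ready-to-run Colab notebook."},
    {"prompt": "Which GPU does Colab's free tier give you?",
     "completion": "A T4."},
]

with open("data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```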
The tech stack
- Unsloth: 2x faster training (this is the secret sauce)
- Google Colab's free T4 GPU: no credit card needed
- Smart model selection: analyzes your task type and data patterns
- LoRA fine-tuning: auto-optimized configs (rough sketch below)
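For the curious, the generated notebook roughly boils down to a few Unsloth calls. Here's a minimal sketch; the model name and LoRA values are illustrative stand-ins, not necessarily what TuneKit would pick for your data.

```python
# Rough sketch of the core of a generated notebook (all values are illustrative).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # example pick for a small dataset
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization keeps it within Colab's free T4 memory
)

# Attach LoRA adapters so only a small fraction of the weights gets trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # LoRA rank
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
```

TuneKit's whole point is that you never write this cell yourself: the model, rank, sequence length, and the rest get filled in from an analysis of your dataset.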
Why it matters
Fine-tuning shouldn't require a PhD or a $300/month GPU bill.
The tools exist (Unsloth is incredible), but the setup is still a nightmare for most developers.
TuneKit wraps all of that complexity into a UI that just works.
Launch day
We hit #19 on Product Hunt today.
The response has been wild. Turns out a lot of people have the same fine-tuning frustration I had.
Try it:
Live: tunekit.app
GitHub: github.com/riyanshibohra/TuneKit
Product Hunt: Product Hunt
What's next
- Support for more models
- Advanced hyperparameter tuning for power users
- Direct deployment options (GGUF for Ollama, HuggingFace Hub)
Your turn
If you've ever spent hours setting up fine-tuning, give TuneKit a shot. Would love your feedback.
What do you use for fine-tuning? Any features you'd want to see?
P.S. Shoutout to the @unslothai team - their optimizations make this entire project possible.