Prompt Engineering is dead. Long live Skill Searching.
We’ve all been there: tweaking a prompt for hours ("You are a helpful assistant...", "Take a deep breath...") trying to get a model to behave. It’s fragile, unscalable, and frankly, boring.
Enter UPskill by Hugging Face (or at least, the concept behind it). The idea is simple: instead of hand-tweaking prompts, you have a "Teacher" model generate candidate prompts, validate them against test cases, and save the winner as a reusable "Skill".
I built a Local-Only Demo to prove this works without sending a single token to the cloud. Here’s how I did it using Ollama and Gemma 3 (12B).
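Before diving in, here is roughly what the local plumbing looks like. This is a minimal sketch, not the demo's actual code: it assumes Ollama is running on its default port, that the model is pulled under the `gemma3:12b` tag, and it talks to Ollama's standard `/api/chat` endpoint.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def chat(system: str, user: str, model: str = "gemma3:12b") -> str:
    """Send one system+user turn to a local Ollama model and return the reply."""
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "stream": False,  # return the whole reply as one JSON payload
    })
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```

Every call below goes through this one helper, so not a single token leaves the machine.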
The Problem: Vague Inputs
Users are lazy. They type things like:
"write something about AI"
Baseline models will give you a generic, rambling encyclopedia entry. But what if your app needs a structured JSON object instead?
The Solution: A "Prompt Optimizer" Skill
I created a meta-skill that intercepts these vague requests and transforms them into structured, high-quality prompts (or in this case, pure JSON for validation).
Step 1: The "Teacher" (Gemma 3)
I asked Gemma 3 to generate a system prompt that forces JSON compliance. Fed the vague request above, the skilled model now returns a structured spec like this:
```json
{
  "role": "Tech Journalist",
  "goal": "Explain AI basics",
  "constraints": "Max 200 words",
  "output_format": "Markdown"
}
```
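The teacher step itself is a single call. Here is a sketch, reusing the `chat()` helper from above; `TEACHER_INSTRUCTIONS` is a hypothetical meta-prompt, so the demo's real wording will differ.

```python
import json

# Hypothetical meta-prompt for the Teacher; the demo's actual wording differs.
TEACHER_INSTRUCTIONS = (
    "You are a prompt engineer. Given a vague user request, emit ONLY a JSON "
    "object with the keys: role, goal, constraints, output_format. No prose."
)

def teach(vague_request: str) -> dict:
    """Ask the Teacher model to turn a vague request into a structured spec."""
    raw = chat(TEACHER_INSTRUCTIONS, vague_request)
    return json.loads(raw)  # raises ValueError if the Teacher drifted into prose

spec = teach("write something about AI")
# e.g. {"role": "Tech Journalist", "goal": "Explain AI basics", ...}
```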
Step 2: Verification
I ran this skill against a test suite of vague inputs.
- Baseline (raw Llama/Gemma): fails 100% of the time (returns free-form text).
- Skilled model: passes 100% of the time (returns valid JSON).
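The verification harness is equally small. Another sketch under assumptions: `chat()` is the helper from earlier, `SKILL_PROMPT` stands in for the Teacher's winning prompt, the extra vague inputs are illustrative, and "pass" simply means the reply parses as JSON.

```python
import json

# Stand-in for the winning system prompt the Teacher produced; hypothetical text.
SKILL_PROMPT = (
    "Respond ONLY with a valid JSON object containing the keys "
    "role, goal, constraints, and output_format. No prose, no markdown."
)

VAGUE_INPUTS = [
    "write something about AI",
    "explain computers",
    "tell me about the internet",
]

def run_suite(system_prompt: str) -> float:
    """Return the fraction of vague inputs whose reply parses as valid JSON."""
    passed = 0
    for vague in VAGUE_INPUTS:
        try:
            json.loads(chat(system_prompt, vague))
            passed += 1
        except ValueError:
            pass  # free-form prose fails to parse, so it counts as a failure
    return passed / len(VAGUE_INPUTS)

print(f"baseline: {run_suite('You are a helpful assistant.'):.0%}")
print(f"skilled:  {run_suite(SKILL_PROMPT):.0%}")
```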
For developers, this means deterministic behavior from non-deterministic models. You can treat "Skills" like library functions: import them, trust them, and stop worrying about the underlying prompt magic.
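In practice, "importing a skill" can be as mundane as reading a saved prompt off disk. A sketch, assuming skills are stored as plain-text files in a hypothetical skills/ directory:

```python
from pathlib import Path

def load_skill(name: str) -> str:
    """Read a validated system prompt saved by an earlier optimization run."""
    return (Path("skills") / f"{name}.txt").read_text()

# Hypothetical skill name; once loaded, it is just another system prompt.
json_skill = load_skill("prompt_optimizer")
print(chat(json_skill, "write something about AI"))  # pure JSON out
```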
Try It Yourself
I’ve open-sourced the demo. No API keys needed. Just `ollama serve` and go.
