This is a submission for the Gemma 4 Challenge: Write About Gemma 4
Building with Gemma 4: What I Learned From Turning Job Posts into AI Decisions
I didn’t start this project to “explore AI”.
I started it because job posts are messy.
Some are vague. Some are misleading. Some look real but feel off once you read them twice.
So I built something simple:
A tool that tells you what a job post actually means.
The idea
Ghost Job Detector takes a job description and answers:
- Is this a real job?
- Is it a ghost job?
- Is it a scam?
- Or just a suspicious listing?
But more importantly, it explains why.
Why Gemma 4
I used Gemma 4 because I needed something that could:
- understand messy human-written job posts
- detect weak signals (not just keywords)
- reason about intent
- return structured output reliably
I ran it through OpenRouter using:
- Gemma 4 26B MoE (primary)
- Gemma 4 31B Dense (fallback)
How people can actually use Gemma 4
One thing that surprised me while working with Gemma 4 is how accessible it actually is.
There are a few practical ways to use it depending on your setup:
1. OpenRouter (fastest way to start)
This is what I used in this project.
You can access Gemma models directly through OpenRouter:
- no infrastructure setup
- no local GPU needed
- just API calls
It’s the easiest way to integrate Gemma 4 into a real application.
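For reference, a call looks roughly like this. OpenRouter exposes an OpenAI-compatible API, so the standard openai Python client works with a different base URL. The model ID below is a placeholder; check OpenRouter’s catalog for the exact Gemma identifier.

```python
# Minimal sketch: calling a Gemma model through OpenRouter.
# OpenRouter is OpenAI-compatible, so only the base_url changes.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="google/gemma-4-26b",  # placeholder ID; check OpenRouter's model list
    messages=[{"role": "user", "content": "Classify this job post: ..."}],
)
print(response.choices[0].message.content)
```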
2. Google AI Studio
Google also provides access through AI Studio.
You can:
- test prompts directly in the browser
- experiment with models
- generate API keys for integration
It’s more of a prototyping playground than a production setup.
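If you generate a key in AI Studio, the google-generativeai Python package is one way to call it from code. The model name below is a placeholder; use whichever Gemma variant AI Studio lists for your account.

```python
# Sketch: using an AI Studio API key via the google-generativeai package.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

model = genai.GenerativeModel("gemma-4-26b")  # placeholder model name
response = model.generate_content("Summarize this job post: ...")
print(response.text)
```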
3. Running locally (fully free option)
Gemma models ship with open weights, so you can also run them locally, depending on the size your hardware can handle.
Typical setup:
- Download the model from Hugging Face or Kaggle
- Run using tools like:
- Ollama
- LM Studio
- Transformers (Python)
This gives you:
- full control
- no API limits
- no cost per request
But it requires more setup and compute power.
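As one example, here is what the Transformers route looks like. The checkpoint name is a placeholder; pick a size your hardware can actually load.

```python
# Sketch: running a Gemma checkpoint locally with Hugging Face Transformers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-4-26b",  # placeholder; use a size that fits your hardware
    device_map="auto",           # places the model on a GPU if one is available
)

out = generator("Classify this job post: ...", max_new_tokens=200)
print(out[0]["generated_text"])
```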
Why this matters
What I realized is that Gemma 4 isn’t tied to a single platform.
You can:
- prototype fast with OpenRouter
- experiment with Google AI Studio
- or run fully offline locally
That flexibility is what makes it practical for real-world projects.
What actually matters in real usage
One thing I learned quickly:
The model is not the hard part.
The hard part is making it behave consistently inside a product.
That meant:
- forcing JSON structure (see the prompt sketch after this list)
- validating outputs
- handling API failures
- retry logic for rate limits
- fallback between models
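The “forcing JSON structure” part is mostly prompt work. This isn’t the app’s exact prompt, just the shape of the idea, with illustrative field names:

```python
# Sketch of the prompt side: spell out the exact JSON shape and
# forbid anything else. Field names here are illustrative.
SYSTEM_PROMPT = """You are a job-post analyst.
Respond with ONLY a JSON object, no prose, in exactly this shape:
{
  "verdict": "real" | "ghost" | "scam" | "suspicious",
  "reasoning": "<one short paragraph>",
  "hr_translation": ["<plain-language reading of each corporate phrase>"]
}"""
```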
Real example
Here’s how the system behaves:
Input:
"We are looking for a self-starter in a fast-paced environment."
Output:
- Verdict: Suspicious
- Reasoning: vague expectations + pressure signals
- HR translation: high workload, unclear structure, limited support
HR language is the real problem
A lot of job descriptions don’t lie directly.
They just hide meaning behind corporate phrases.
So I added a “translation layer”:
- “Fast-paced environment” → high pressure, overtime likely
- “Wear many hats” → multiple roles, single salary
- “Self-starter required” → little onboarding or guidance
This part turned out to be more useful than the classification itself.
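The layer itself is less clever than it sounds: it can be as simple as a few-shot block in the prompt. This isn’t the app’s exact prompt, but it’s the shape of it:

```python
# Sketch: the translation layer as a few-shot block in the prompt.
# The model generalizes to phrases that aren't in the list.
TRANSLATION_EXAMPLES = """Translate corporate phrases into plain language:
- "Fast-paced environment" -> high pressure, overtime likely
- "Wear many hats" -> multiple roles, single salary
- "Self-starter required" -> little onboarding or guidance
Translate every such phrase in the job post the same way."""
```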
The biggest engineering lesson
The most important thing I learned:
LLM behavior is mostly a product of constraints, not just model size.
Gemma 4 became much more reliable when I:
- strictly defined output schema (sketched below)
- reduced ambiguity in prompts
- enforced structure in responses
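Here’s one way to enforce that schema on the way out, sketched with pydantic. The field names are illustrative, not the app’s exact schema:

```python
# Sketch: validating model output against a strict schema with pydantic.
from typing import Literal
from pydantic import BaseModel

class JobVerdict(BaseModel):
    verdict: Literal["real", "ghost", "scam", "suspicious"]
    reasoning: str
    hr_translation: list[str]

raw_response = '{"verdict": "suspicious", "reasoning": "vague expectations", "hr_translation": ["high workload"]}'

# Raises pydantic.ValidationError if the model drifted off-schema,
# so a malformed response never reaches the UI.
verdict = JobVerdict.model_validate_json(raw_response)
```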
Reliability matters more than intelligence
Since I used free-tier API access, rate limits happened frequently.
Instead of treating that as a blocker, I added:
- retry logic (sketched below)
- fallback between Gemma models
- graceful UI handling when AI is busy
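A simplified sketch of that loop, with placeholder model IDs and a stand-in for the actual API call:

```python
# Sketch: retry with exponential backoff, then fall back to the
# secondary model. call_model() stands in for the real API client.
import time

class RateLimitError(Exception):
    """Stand-in for whatever your client raises on HTTP 429."""

def call_model(model: str, prompt: str) -> str:
    """Stand-in for the actual API call."""
    raise RateLimitError  # pretend the API is busy

MODELS = ["gemma-4-26b-moe", "gemma-4-31b-dense"]  # placeholder IDs

def classify(job_post: str, retries: int = 3) -> str:
    for model in MODELS:                  # primary first, then fallback
        for attempt in range(retries):
            try:
                return call_model(model, job_post)
            except RateLimitError:
                time.sleep(2 ** attempt)  # exponential backoff
    raise RuntimeError("all models busy")  # UI shows a calm "busy" state
```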
The result: the app never feels “broken”, even when the API is.
Why I built this
Not to replace recruiters.
Not to automate hiring.
But to help people avoid wasting time on job posts that don’t make sense.
Final thought
Building with Gemma 4 felt less like “using AI”
and more like:
designing how AI should behave inside a real product.
That shift is what made this project interesting for me.
Demo
https://ghost-job-detector-rlcx.vercel.app/