Stop shipping AI features blind. ResonanceEngine tells you exactly how well two texts resonate — whether that's a model output vs. expected answer, a chatbot reply vs. brand voice, or a generated email vs. the user's intent.
Why developers use it
LLMs are non-deterministic. You need a fast, calibrated way to measure "did the output actually match what we wanted?" — without hand-grading every response.
ResonanceEngine gives you a 0–100 score plus aligned/misaligned signals, in four modes:
- 🧠 Semantic — do the texts mean the same thing?
- 🎯 Intent — does the response fulfill the user's actual goal?
- ❤️ Emotional — does the tone match (warm, urgent, calm)?
- 🎨 Brand — does it sound like your brand voice?
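Picking a mode is just one field in the request body. A minimal sketch of a payload builder, in Python: the field names ("source", "target", "mode") come from the Quick start example below, and the helper itself is illustrative, not part of the API.

```python
# Build a request body for the evaluate endpoint.
# Field names follow the Quick start example; the validation is illustrative.
VALID_MODES = {"semantic", "intent", "emotional", "brand"}

def build_payload(source: str, target: str, mode: str = "semantic") -> dict:
    """Return a JSON-serializable body for one source/target pair."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return {"source": source, "target": target, "mode": mode}
```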
Use cases
- QA on AI responses — flag low-scoring outputs before they ship
- Prompt evaluation — A/B test prompts with objective scoring
- Content guardrails — block off-brand or off-intent generations
- Chatbot evals — measure response quality at scale
- RAG validation — does the answer actually match the retrieved context?
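The QA and guardrail use cases above boil down to thresholding the 0–100 score. A hedged sketch, assuming the response JSON carries a top-level "score" field (the exact response shape is an assumption; only the 0–100 range comes from this page):

```python
# Hypothetical guardrail: block a generation when its resonance score
# falls below a threshold. The {"score": ...} response shape is assumed.
def passes_guardrail(result: dict, threshold: int = 70) -> bool:
    """Return True when the scored output is safe to ship."""
    return result.get("score", 0) >= threshold
```

In practice you would tune the threshold per mode (brand checks often tolerate lower scores than intent checks).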
Quick start
curl -X POST 'https://resonance-engine.p.rapidapi.com/resonance-engine-evaluate' \
  -H 'x-rapidapi-key: YOUR_KEY' \
  -H 'x-rapidapi-host: resonance-engine.p.rapidapi.com' \
  -H 'Content-Type: application/json' \
  -d '{
    "source": "I need help fast, my server is down",
    "target": "Sure! Here is a 12-step guide to server architecture",
    "mode": "intent"
  }'
Response: Low intent score — the reply doesn't match the urgency.
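The same call in Python, using only the standard library. The URL and headers mirror the curl example; error handling and the parsed-response shape are left to you.

```python
import json
import urllib.request

API_URL = "https://resonance-engine.p.rapidapi.com/resonance-engine-evaluate"

def evaluate(source: str, target: str, mode: str, api_key: str) -> dict:
    """POST one source/target pair to the evaluate endpoint; return parsed JSON."""
    body = json.dumps({"source": source, "target": target, "mode": mode}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-rapidapi-key": api_key,
            "x-rapidapi-host": "resonance-engine.p.rapidapi.com",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```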
Batch mode
Need to score 20 pairs at once? Use /resonance-engine-batch — same auth, takes an array of pairs and returns per-pair scores plus an average.
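A sketch of a batch body builder. This page only says the endpoint takes an array of pairs, so the field name ("pairs"), the per-item shape, and treating 20 as a cap are all assumptions.

```python
# Assumed batch body shape for /resonance-engine-batch: a "pairs" array
# plus a shared "mode". The 20-pair cap is inferred, not documented.
def build_batch_payload(pairs: list[tuple[str, str]], mode: str = "semantic") -> dict:
    """Turn (source, target) tuples into a batch request body."""
    if not 1 <= len(pairs) <= 20:
        raise ValueError("batch sketch accepts 1-20 pairs")
    return {
        "pairs": [{"source": s, "target": t} for s, t in pairs],
        "mode": mode,
    }
```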
Pricing
- Basic — $9/month, 5,000 evaluations
- Pro — $29/month, 50,000 evaluations ⭐ recommended
- Ultra — $99/month, 500,000 evaluations