DEV Community

Bob Steger

How to Add Sentiment Analysis to Your App in 5 Minutes (Free API)

Every app that handles user-generated content eventually needs text analysis: customer reviews need sentiment scoring, blog platforms need readability checks, chat apps need toxicity filtering, and support tickets need classification. The demand for these tools is everywhere. So I built the Smart Text Toolkit, a single API with 14 text analysis endpoints. One subscription, one integration, and you've got sentiment analysis, readability scoring, summarization, keyword extraction, toxicity detection, PII redaction, and more.

In this tutorial, I'll show you how to integrate sentiment analysis into your app in under 5 minutes. Then I'll walk through the other endpoints so you can see what's available.

Output example

When you call the sentiment analysis endpoint, you get back structured data like this:

{
  "sentiment": {
    "label": "positive",
    "score": 0.92,
    "scores": {
      "positive": 0.92,
      "negative": 0.03,
      "neutral": 0.05
    }
  },
  "processing_time_ms": 85
}

Step 1: Get Your API Key

Head to RapidAPI and search for "Smart Text Toolkit."
You should be automatically subscribed to the free tier (you get 100 requests per month to test with).

Grab your API key from the dashboard. You'll need two values:

  • Your X-RapidAPI-Key
  • The host: smart-text-toolkit.p.rapidapi.com

If you don't see your key, expand the AI-Powered category in the left panel and click the Analyze Sentiment endpoint.
Then, under the App tab in the middle of the window, you'll find the API key value.
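Rather than pasting the key directly into your source code, it's safer to read it from an environment variable. Here's a minimal helper along those lines — the RAPIDAPI_KEY variable name is my own convention, not something the platform requires:

```python
import os

def rapidapi_headers(host="smart-text-toolkit.p.rapidapi.com"):
    """Build the request headers, reading the key from the environment."""
    key = os.environ.get("RAPIDAPI_KEY")
    if not key:
        raise RuntimeError("Set the RAPIDAPI_KEY environment variable first")
    return {
        "Content-Type": "application/json",
        "X-RapidAPI-Key": key,
        "X-RapidAPI-Host": host,
    }
```

This keeps the key out of version control; set it once in your shell (`export RAPIDAPI_KEY=...`) and every snippet below can use `headers=rapidapi_headers()`.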

Step 2: Call the API

Python

import requests

url = "https://smart-text-toolkit.p.rapidapi.com/api/v1/sentiment"

payload = {
    "text": "I absolutely love this product! The quality exceeded my expectations.",
    "granularity": "document"
}

headers = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY_HERE",
    "X-RapidAPI-Host": "smart-text-toolkit.p.rapidapi.com"
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()

print(f"Sentiment: {data['sentiment']['label']}")
print(f"Confidence: {data['sentiment']['score']:.0%}")

Output:

Sentiment: positive
Confidence: 92%
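One thing the snippet above skips is error handling. On the free tier you can hit the monthly quota, and rate-limited requests come back as HTTP 429. Here's a sketch of a wrapper with exponential backoff — the retry count and timeout values are arbitrary choices, not API requirements:

```python
import time
import requests

def post_with_retry(url, payload, headers, retries=3):
    """POST with basic handling for rate limits (HTTP 429) and other errors."""
    for attempt in range(retries):
        response = requests.post(url, json=payload, headers=headers, timeout=10)
        if response.status_code == 429:   # quota or burst limit hit
            time.sleep(2 ** attempt)      # back off: 1s, 2s, 4s
            continue
        response.raise_for_status()       # surface other 4xx/5xx as exceptions
        return response.json()
    raise RuntimeError("Still rate-limited after retries")
```

Drop this in wherever the examples call `requests.post` directly and a transient 429 won't crash your batch job.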

JavaScript (Node.js)

const response = await fetch(
  "https://smart-text-toolkit.p.rapidapi.com/api/v1/sentiment",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-RapidAPI-Key": "YOUR_API_KEY_HERE",
      "X-RapidAPI-Host": "smart-text-toolkit.p.rapidapi.com",
    },
    body: JSON.stringify({
      text: "I absolutely love this product! The quality exceeded my expectations.",
      granularity: "document",
    }),
  }
);

const data = await response.json();
console.log(`Sentiment: ${data.sentiment.label} (${data.sentiment.score})`);

cURL

curl -X POST "https://smart-text-toolkit.p.rapidapi.com/api/v1/sentiment" \
  -H "Content-Type: application/json" \
  -H "X-RapidAPI-Key: YOUR_API_KEY_HERE" \
  -H "X-RapidAPI-Host: smart-text-toolkit.p.rapidapi.com" \
  -d '{"text": "I absolutely love this product!", "granularity": "document"}'

Step 3: Go Deeper with Sentence-Level Analysis (2 minutes)

The real power shows up when you switch to sentence-level granularity. Use this for reviews, feedback, or support tickets where a single message contains mixed feelings.

payload = {
    "text": "The food was incredible and the atmosphere was perfect. "
            "However, the service was painfully slow and our waiter "
            "forgot our appetizers entirely.",
    "granularity": "sentence"
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()

for sentence in data["sentiment"]["sentences"]:
    emoji = "😊" if sentence["label"] == "positive" else "😠" if sentence["label"] == "negative" else "😐"
    print(f"{emoji} [{sentence['label']}] {sentence['text']}")

Output:

😊 [positive] The food was incredible and the atmosphere was perfect.
😠 [negative] However, the service was painfully slow and our waiter forgot our appetizers entirely.

Now you can pinpoint exactly what customers love and hate, not just whether a review is good or bad.
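Once you have per-sentence labels, you can also aggregate them into a mixed-sentiment profile instead of collapsing everything to one document score. A small helper that tallies the label distribution — it only assumes the "label" field shown in the response above:

```python
def summarize_sentences(sentences):
    """Turn a list of per-sentence results into label proportions.

    `sentences` is the list from data["sentiment"]["sentences"]; each
    item needs at least a "label" key, as in the output shown above.
    """
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for s in sentences:
        counts[s["label"]] += 1
    total = len(sentences) or 1  # avoid dividing by zero on empty input
    return {label: n / total for label, n in counts.items()}
```

For the restaurant review above this would report 50% positive / 50% negative — a much more honest picture than a single "mixed" label.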

Real-World Use Case: Review Analyzer

Here's a practical example — a function that analyzes a batch of product reviews and generates a summary:

import requests

API_URL = "https://smart-text-toolkit.p.rapidapi.com/api/v1"
HEADERS = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY_HERE",
    "X-RapidAPI-Host": "smart-text-toolkit.p.rapidapi.com"
}

def analyze_reviews(reviews):
    results = {"positive": 0, "negative": 0, "neutral": 0}
    insights = []

    for review in reviews:
        # Get sentiment
        resp = requests.post(
            f"{API_URL}/sentiment",
            json={"text": review, "granularity": "document"},
            headers=HEADERS
        ).json()

        label = resp["sentiment"]["label"]
        results[label] += 1

        # Extract keywords from negative reviews
        if label == "negative":
            kw_resp = requests.post(
                f"{API_URL}/keywords",
                json={"text": review, "max_keywords": 5},
                headers=HEADERS
            ).json()

            top_issues = [kw["keyword"] for kw in kw_resp["keywords"][:3]]
            insights.append({
                "review": review[:80] + ("..." if len(review) > 80 else ""),
                "issues": top_issues
            })

    return {
        "total": len(reviews),
        "breakdown": results,
        "negative_insights": insights
    }

# Example usage
reviews = [
    "Absolutely love this product! Works exactly as described.",
    "Terrible quality. Broke after two days. Waste of money.",
    "It's okay, nothing special. Does what it says.",
    "The shipping was fast but the product itself is cheaply made.",
    "Best purchase I've made all year. Highly recommend!"
]

report = analyze_reviews(reviews)
print(f"Positive: {report['breakdown']['positive']}")
print(f"Negative: {report['breakdown']['negative']}")
print(f"Neutral:  {report['breakdown']['neutral']}")

if report["negative_insights"]:
    print("\nKey issues in negative reviews:")
    for item in report["negative_insights"]:
        print(", ".join(item["issues"]))

Notice how we combined two endpoints — sentiment and keywords — to not just detect negative reviews but understand why they're negative.
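One practical note: `analyze_reviews` makes one or two HTTP calls per review, so reusing a `requests.Session` avoids re-opening a connection on every request and speeds up batches. A sketch of the same call pattern with a pooled session (the `call_endpoint` helper is my own wrapper, not part of the API):

```python
import requests

API_URL = "https://smart-text-toolkit.p.rapidapi.com/api/v1"
HEADERS = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY_HERE",
    "X-RapidAPI-Host": "smart-text-toolkit.p.rapidapi.com",
}

# A Session reuses the underlying TCP/TLS connection across requests.
session = requests.Session()
session.headers.update(HEADERS)

def call_endpoint(path, payload):
    """POST to one toolkit endpoint, reusing the pooled connection."""
    resp = session.post(f"{API_URL}{path}", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```

With this in place, the loop body in `analyze_reviews` shrinks to `call_endpoint("/sentiment", {...})` and `call_endpoint("/keywords", {...})`.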

List of all available endpoints

All of these follow the same pattern (POST with JSON, get structured data in the response):

| Endpoint | What It Does | Great For |
| --- | --- | --- |
| /sentiment | Positive/negative/neutral scoring | Review analysis, social monitoring |
| /readability | Flesch-Kincaid, Gunning Fog, 5 more | CMS tools, educational platforms |
| /summarize | Extractive and abstractive summaries | News apps, document tools |
| /keywords | Keyword extraction + SEO density | SEO tools, content platforms |
| /toxicity | Multi-label toxicity detection | Chat moderation, community safety |
| /language-detect | 50+ languages with confidence | i18n, content routing |
| /grammar-check | Spelling, grammar, style, punctuation | Writing assistants, editors |
| /entities | Named entity recognition (NER) | Data extraction, knowledge graphs |
| /pii-detect | Find and redact personal data | Compliance (GDPR/CCPA), privacy |
| /text-compare | Semantic + lexical similarity | Plagiarism check, dedup |
| /classify | Zero-shot text classification | Ticket routing, content tagging |
| /paraphrase | Rewrite with style control | Writing tools, content variation |
| /seo-analyze | Full SEO scoring + recommendations | Content marketing, blogging |
| /emotion | 7 emotion categories (anger, joy...) | UX research, feedback analysis |

I built these with performance in mind. Every response includes a processing_time_ms value so you can monitor latency.
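If you want to keep an eye on latency from the client side, you can collect that processing_time_ms field across responses. A minimal tracker — the particular stats it reports are my own choice:

```python
class LatencyTracker:
    """Collect processing_time_ms values from responses and report stats."""

    def __init__(self):
        self.samples = []

    def record(self, response_json):
        """Pull the processing_time_ms field out of one parsed response."""
        ms = response_json.get("processing_time_ms")
        if ms is not None:
            self.samples.append(ms)

    def stats(self):
        if not self.samples:
            return {}
        ordered = sorted(self.samples)
        return {
            "count": len(ordered),
            "avg_ms": sum(ordered) / len(ordered),
            "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],
        }
```

Call `tracker.record(data)` after each `response.json()` and dump `tracker.stats()` periodically to spot any slowdown.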

Another Example: PII (Personally Identifiable Information) Detection

Here's a quick look at the PII detection endpoint:

response = requests.post(
    f"{API_URL}/pii-detect",
    json={
        "text": "Contact John Smith at john.smith@acme.com "
                "or call 555-867-5309. His SSN is 123-45-6789.",
        "redact": True
    },
    headers=HEADERS
).json()

print(f"PII found: {response['total_entities']}")
print(f"Risk level: {response['risk_level']}")
print(f"\nRedacted: {response['redacted_text']}")

Output:

PII found: 4
Risk level: critical

Redacted: Contact ********** at **********************
or call ***-***-****. His SSN is ***-**-****.

One API call, and you've got GDPR-ready redaction.
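A common pattern is to substitute the redacted version before anything gets persisted. Here's a tiny guard you could wrap around the response — the field names follow the output shown above:

```python
def safe_text(pii_response, original):
    """Return the redacted text whenever any PII was detected.

    `pii_response` is the parsed /pii-detect response; falls back to
    the original text when no entities were found.
    """
    if pii_response.get("total_entities", 0) > 0:
        return pii_response["redacted_text"]
    return original
```

Run every inbound message through this before logging or storing it and raw PII never touches your database.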

Yet Another Example: Zero-Shot Classification

This one's my favorite. You provide any categories you want — no training, no ML expertise — and the API classifies text into them:

response = requests.post(
    f"{API_URL}/classify",
    json={
        "text": "I've been charged twice and I want a refund immediately.",
        "labels": ["billing", "technical_support", "returns", "general"],
    },
    headers=HEADERS
).json()

print(f"Category: {response['best_match']['label']}")
print(f"Confidence: {response['best_match']['score']:.0%}")

Output:

Category: billing
Confidence: 72%

Instant support ticket routing with zero training data.
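To turn that result into actual routing, add a confidence threshold with a fallback queue. A sketch — the 0.5 threshold and the "general" fallback are my assumptions, not API behavior:

```python
def route_ticket(classification, threshold=0.5):
    """Pick a support queue from a parsed /classify response.

    Falls back to "general" when the best match is below the
    confidence threshold, so low-confidence tickets get human triage.
    """
    best = classification["best_match"]
    if best["score"] >= threshold:
        return best["label"]
    return "general"
```

With the 72% billing result above, this routes straight to the billing queue; a murky 30% match lands in general instead of being misfiled.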

Pricing

I wanted this to be accessible, so there's a free tier to start:

  • Basic: 100 requests/month (great for testing)
  • Pro ($9.99/mo): 5,000 requests
  • Ultra ($29.99/mo): 25,000 requests
  • Mega ($79.99/mo): 100,000 requests

All 14 endpoints included at every tier. No per-endpoint pricing games.

The Tech Behind It

For those curious about what's under the hood:

  • Framework: Python + FastAPI
  • Sentiment: RoBERTa-based model fine-tuned for sentiment
  • Summarization: DistilBART (CNN variant)
  • Toxicity: Detoxify (multi-label classifier)
  • PII: Microsoft Presidio
  • Classification: BART-large-MNLI (zero-shot)
  • Grammar: LanguageTool
  • NER + Keywords: spaCy
  • Similarity: Sentence-Transformers (MiniLM)

All models run on CPU — no GPU costs to pass on to you. Models are loaded once at startup and stay in memory, so there are no cold starts.

Try It Out

  1. Search for "Smart Text Toolkit" on RapidAPI
  2. Subscribe to the free tier
  3. Test any endpoint using the built-in API playground
  4. Integrate into your app using the code examples above

I'm a solo developer building this, so I genuinely appreciate any feedback. If there's an endpoint you'd want added or something isn't working right, drop a comment below or reach out through RapidAPI.

Happy building!
