Cleava AI

I Built a Free Text Analysis API -- Readability, SEO & Sentiment in One Call

I kept running into the same problem across different projects: I needed to analyze text content programmatically -- readability scores, keyword density, sentiment -- and every solution was either expensive, slow, or required wrangling an LLM for something that should be deterministic.

So I built TextOptimizer, a lightweight REST API that gives you readability metrics, SEO analysis, and sentiment scoring in a single POST request. No AI models, no per-token billing, no waiting 3 seconds for a response. Just algorithms, math, and sub-100ms response times.

Here is what it looks like in practice.

The API in 30 Seconds

One endpoint. One request body. Three categories of analysis back.

curl -X POST https://textoptimizer-api.vercel.app/analyze \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "text": "Artificial intelligence is transforming how businesses operate. Companies that adopt AI early gain a significant competitive advantage in their markets. The technology enables faster decision-making, better customer experiences, and more efficient operations across every industry.",
    "keywords": ["ai", "business"]
  }'

And here is what comes back:

{
  "readability": {
    "flesch_kincaid_grade": 9.62,
    "flesch_reading_ease": 49.73,
    "gunning_fog": 15.11,
    "smog_index": 13.43,
    "grade_level": "High School",
    "reading_difficulty": "Fairly Difficult"
  },
  "seo": {
    "keyword_density": {
      "ai": 4.76,
      "business": 2.38
    },
    "word_count": 42,
    "character_count": 273,
    "sentence_count": 4,
    "paragraph_count": 1,
    "meta_suggestions": {
      "title_length": "50-60 characters",
      "description_length": "150-160 characters",
      "h1_count": "1 per page",
      "image_alt_texts": "Include target keywords"
    },
    "content_improvements": [
      "Consider adding more content (minimum 300 words)"
    ]
  },
  "sentiment": {
    "sentiment_score": 0.15,
    "sentiment_label": "positive",
    "tone_classification": "optimistic",
    "confidence": 0.85
  }
}

That is readability scoring (Flesch-Kincaid, Gunning Fog, SMOG), SEO metrics with keyword density and actionable improvement suggestions, and sentiment analysis with tone classification -- all from one call.

Why I Didn't Use an LLM for This

I know what you are thinking: "Why not just call GPT-4 and ask it to rate the readability?" A few reasons:

Determinism matters. If you run the same blog post through the API twice, you get the exact same scores. LLMs give you different answers every time. When you are building dashboards, tracking content quality over time, or setting automated thresholds, you need consistency.

Cost scales linearly with LLMs. Analyzing a 5,000-word article through an LLM costs real money per call. TextOptimizer uses established algorithms (Flesch-Kincaid, Gunning Fog Index, SMOG) that run in milliseconds with zero inference cost.

Speed. Sub-100ms responses mean you can run this in a pre-publish hook, a real-time editor sidebar, or a bulk pipeline without your users noticing any lag.
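To make "just algorithms, math" concrete: the Flesch-Kincaid grade level is a fixed formula over word, sentence, and syllable counts, which is why identical input always produces identical output. Here is a minimal local sketch of that formula, using a naive vowel-group syllable counter (real implementations, likely including the API's, handle silent e's and other edge cases, so exact scores may differ):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

grade = round(flesch_kincaid_grade("The cat sat on the mat."), 2)  # -1.45
```

Run the same text through it a thousand times and you get the same number a thousand times. That determinism is the whole point.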

Python Example
If you prefer working in Python, here is a clean example using requests:

import requests

API_URL = "https://textoptimizer-api.vercel.app/analyze"
API_KEY = "YOUR_API_KEY"

def analyze_content(text, keywords=None):
    response = requests.post(
        API_URL,
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,
        },
        json={
            "text": text,
            "keywords": keywords or [],
        },
    )
    response.raise_for_status()
    return response.json()

# Analyze a blog post draft
result = analyze_content(
    text="Your blog post content goes here...",
    keywords=["python", "api", "tutorial"],
)

print(f"Grade Level: {result['readability']['grade_level']}")
print(f"Reading Ease: {result['readability']['flesch_reading_ease']}")
print(f"Word Count: {result['seo']['word_count']}")
print(f"Sentiment: {result['sentiment']['sentiment_label']}")
print(f"Improvements: {result['seo']['content_improvements']}")

There is also an official Python SDK if you want something even more concise:

from textoptimizer import TextOptimizer

client = TextOptimizer("YOUR_API_KEY")
result = client.analyze("Your text here", keyword="python sdk")

Individual Endpoints
The /analyze endpoint is the all-in-one call, but you can also hit each analysis type individually if you only need one piece:

Endpoint :: What It Returns
POST /analyze :: Full analysis (readability + SEO + sentiment)
POST /readability :: Flesch-Kincaid, Flesch Reading Ease, Gunning Fog, SMOG
POST /seo :: Word count, keyword density, meta suggestions, improvements
POST /sentiment :: Sentiment score (-1 to 1), tone, confidence
GET /health :: API status check

All POST endpoints accept the same request body: {"text": "...", "keywords": [...]}. The keywords field is optional and mainly relevant for SEO analysis.
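Since every POST endpoint shares that request shape, a small helper can build the request once and work for all of them. A sketch (endpoint names from the table above; the API key header matches the curl example earlier):

```python
BASE_URL = "https://textoptimizer-api.vercel.app"
ENDPOINTS = {"analyze", "readability", "seo", "sentiment"}

def build_request(endpoint: str, text: str, keywords=None) -> dict:
    """Build the shared request shape used by every POST endpoint."""
    if endpoint not in ENDPOINTS:
        raise ValueError(f"unknown endpoint: {endpoint}")
    return {
        "url": f"{BASE_URL}/{endpoint}",
        "headers": {
            "Content-Type": "application/json",
            "X-API-Key": "YOUR_API_KEY",  # placeholder
        },
        "json": {"text": text, "keywords": keywords or []},
    }
```

Pair it with requests from the earlier example, e.g. `requests.post(**build_request("sentiment", draft)).json()`, to hit any single-purpose endpoint with a smaller response payload.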

Where This Is Actually Useful
I have been thinking about this from a "what would I plug this into" perspective, and these are the use cases that make the most sense:

CMS pre-publish checks. Hook TextOptimizer into your publishing workflow. Before an article goes live, automatically flag content that reads at a graduate level when your audience is general consumers, or content that is missing target keyword density.

Writing tools and editors. If you are building a writing app (or a Notion/Google Docs plugin), you can show a real-time readability sidebar that updates as the user types. The response times are fast enough for near-instant feedback.

SEO tooling. Bulk-analyze your site's content library. Find pages with low word counts, poor readability, or missing keyword optimization. Feed the content_improvements array directly into a task list.

Content marketing platforms. Score email copy, landing pages, and ad text before they ship. Use the sentiment endpoint to make sure your messaging lands with the right tone.

Academic tools. Check whether a research paper summary is accessible to a general audience, or whether documentation is written at an appropriate reading level.
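The CMS pre-publish idea is easy to sketch as a gate over the /analyze response. This is an illustrative helper, not part of the API; the field names mirror the sample response earlier, and the thresholds are assumptions you would tune to your audience:

```python
def prepublish_flags(result: dict,
                     max_grade: float = 10.0,
                     min_words: int = 300,
                     min_density: float = 1.0) -> list:
    """Return human-readable reasons to hold an article back, if any."""
    flags = []
    grade = result["readability"]["flesch_kincaid_grade"]
    if grade > max_grade:
        flags.append(f"reads at grade {grade:.1f} (limit {max_grade})")
    words = result["seo"]["word_count"]
    if words < min_words:
        flags.append(f"only {words} words (minimum {min_words})")
    for kw, density in result["seo"]["keyword_density"].items():
        if density < min_density:
            flags.append(f"keyword '{kw}' density {density}% below {min_density}%")
    # The API's own suggestions slot straight into a task list.
    flags.extend(result["seo"]["content_improvements"])
    return flags
```

An empty list means publish; anything else becomes a checklist for the author. Because the scores are deterministic, the same draft always produces the same verdict.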

Pricing
The free tier gives you 100 requests per month with access to all endpoints. That is enough to test the integration, build a proof of concept, or run a small personal project.

If you need more volume, paid plans start at $9.99/month for 10,000 requests and go up from there for production workloads.

Get Started
API Base URL: https://textoptimizer-api.vercel.app
Interactive Docs: https://textoptimizer-api.vercel.app/docs
RapidAPI Listing: Search "TextOptimizer" on RapidAPI to subscribe and get your API key
Python SDK: pip install textoptimizer
The API is live and ready to use. If you have questions or feature requests, drop a comment below -- I am actively developing this and would love to hear what you would build with it.
