DEV Community

Pulsebit News Sentiment API

Your Pipeline Is 22.8h Behind: Catching Human Rights Sentiment Leads with Pulsebit


We recently discovered a striking anomaly: a 24h momentum spike of -0.442 in human rights sentiment. This spike marks a sharp drop in sentiment, a shift in public perception that may have flown under your radar. Notably, English press coverage is leading the discourse, with the Italian press trailing by 22.8 hours. A delay like that can skew your read of current events and sentiment trends, particularly on critical topics like human rights.

When your pipeline doesn't accommodate multilingual inputs or recognize dominant entities, it can lead to significant oversights. In this case, your model missed this critical sentiment shift by over 22 hours. The leading language is English, but if you're only processing one language or relying on a single source, you're likely to misinterpret the landscape. This kind of gap can affect decision-making and strategy, especially in fast-moving fields where sentiment can shift rapidly.

English coverage led by 22.8 hours; Italian at T+22.8h. Confidence scores: English 0.95, French 0.95, Spanish 0.95. Source: Pulsebit /sentiment_by_lang.
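The lag itself is straightforward to compute once you have per-language timestamps. Here is a minimal sketch, assuming a hypothetical /sentiment_by_lang payload with a first-coverage timestamp and confidence per language; the field names are illustrative, not the documented Pulsebit schema:

```python
from datetime import datetime

# Illustrative /sentiment_by_lang payload; field names ("first_seen",
# "confidence") are assumptions, not the documented Pulsebit schema.
payload = {
    "en": {"first_seen": "2024-05-01T02:12:00Z", "confidence": 0.95},
    "it": {"first_seen": "2024-05-02T01:00:00Z", "confidence": 0.95},
}

def lag_hours(payload, leader="en", follower="it"):
    """Hours between the leading language's first coverage and the follower's."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    lead = datetime.strptime(payload[leader]["first_seen"], fmt)
    follow = datetime.strptime(payload[follower]["first_seen"], fmt)
    return (follow - lead).total_seconds() / 3600

print(round(lag_hours(payload), 1))  # 22.8
```

Run this on each follower language and you have the cross-language lead table from the figure above.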

To address this, we can use the API from Python to catch these sentiment shifts as they happen. First, filter by language to focus on the relevant sentiment data. Below is an example of filtering for English-language coverage:

import requests

# Query the sentiment endpoint, filtered to English-language coverage.
url = "https://api.pulsebit.io/sentiment"
params = {
    "topic": "human rights",
    "lang": "en",
}

response = requests.get(url, params=params, timeout=10)
response.raise_for_status()  # fail fast on HTTP errors
data = response.json()  # the endpoint returns JSON

Next, we need to score the narrative framing itself. We can pass the cluster reason string back through our sentiment scoring endpoint to gain insights into how the narrative is being shaped. Here's how to do that:

cluster_reason = "Clustered by shared themes: rights, north, group, forced, labour."

# POST the cluster reason back through the same sentiment endpoint to
# score how the narrative itself is being framed.
meta_sentiment_response = requests.post(url, json={"text": cluster_reason}, timeout=10)
meta_sentiment_response.raise_for_status()
meta_sentiment_data = meta_sentiment_response.json()

With these two API calls, we can effectively capture both the sentiment shift and the underlying narrative, allowing us to react in real time.

Left: Python GET /news_semantic call for 'human rights'. Right: returned JSON response structure (clusters: 3). Source: Pulsebit /news_semantic.
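To work with those clusters programmatically, a small parsing helper is enough. The JSON keys below (clusters, reason, size) and the sample cluster contents are assumptions based on the screenshot, not the documented schema:

```python
# Illustrative GET /news_semantic response; the keys ("clusters", "reason",
# "size") and the sample reasons are assumptions, not the documented schema.
sample_response = {
    "clusters": [
        {"reason": "Clustered by shared themes: rights, north, group, forced, labour.", "size": 12},
        {"reason": "Clustered by shared themes: policy, sanctions, trade.", "size": 8},
        {"reason": "Clustered by shared themes: protest, detention, courts.", "size": 5},
    ]
}

def summarize_clusters(response):
    """Return (reason, size) pairs for each semantic cluster, largest first."""
    clusters = response.get("clusters", [])
    return sorted(
        ((c["reason"], c["size"]) for c in clusters),
        key=lambda pair: pair[1],
        reverse=True,
    )

for reason, size in summarize_clusters(sample_response):
    print(size, reason)
```

Each reason string from this helper is a candidate input for the meta-sentiment call shown earlier.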

Now that we have a handle on this data, here are three specific builds you can implement based on this finding:

  1. Geo-Filtered Sentiment Analysis: Set a signal threshold of sentiment score < -0.7 to trigger alerts for human rights discussions in English-speaking countries. Use the geo filter to ensure you're only capturing relevant discussions in your target regions.

Geographic detection output for human rights. Hong Kong leads with 2 articles and sentiment -0.70. Source: Pulsebit /news_recent geographic fields.

  2. Meta-Sentiment Looping: Create a loop that scores the sentiment of articles containing the key phrases "rights," "north," and "group." Use a threshold of confidence > 0.95 to ensure that you're only acting on highly reliable sentiment scores.

  3. Forming Themes Dashboard: Develop a dashboard that displays forming themes in real time. Contrast forming words such as rights, human, and google against mainstream terms. This helps you visualize the narrative shift and adjust your strategies accordingly.
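As a starting point for build 1, the alerting rule can be sketched in a few lines. The field names ("geo", "sentiment") and the article payloads below are illustrative assumptions about the /news_recent response, not the documented schema; the -0.7 threshold comes from the finding above:

```python
ALERT_THRESHOLD = -0.7       # from the finding: alert at or below -0.7
TARGET_GEOS = {"Hong Kong"}  # geo filter for your target regions

# Illustrative /news_recent articles; field names are assumptions.
articles = [
    {"title": "Example rights story", "geo": "Hong Kong", "sentiment": -0.70},
    {"title": "Unrelated market story", "geo": "Paris", "sentiment": -0.30},
]

def geo_filtered_alerts(articles):
    """Return articles in target regions whose sentiment breaches the threshold."""
    return [
        a for a in articles
        if a["geo"] in TARGET_GEOS and a["sentiment"] <= ALERT_THRESHOLD
    ]

for hit in geo_filtered_alerts(articles):
    print(f"ALERT: {hit['title']} ({hit['geo']}, {hit['sentiment']})")
```

Wire the output into whatever notification channel you already use; the filter itself stays this small.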

By implementing these builds, you can ensure that your sentiment analysis pipeline is responsive and agile, catching shifts in public sentiment before they become widespread.

To get started with this kind of analysis, check out our documentation. With just a few copy-paste commands, you can begin running this analysis in under 10 minutes.
