Valentin Prugnaud 🦊

Originally published at whatdafox.com

Moderate posts automatically with the Perspective API

The Internet we know and love isn't always a safe place, but there are tools we can easily leverage to make it a better one.

Recently, amid the COVID-19 pandemic, my co-founders and I created a platform for Canadians to thank the workers on the front lines, risking their lives for us while we stay at home: Together North. The whole country, and many other places in the world, started clapping and cheering at a dedicated time every day to show their gratitude.
We thought it would be a good idea to have a dedicated place where people could leave their gratitude messages, share them directly with the people they are cheering for, and keep them online as a reminder.

And it was a good idea. The only problem: some people don't go on the Internet to share love and compassion, but to write mean comments and spread hate instead. Together North was a perfect target: we are a small team, and the message field is open text. There is room for abuse, and we can't monitor every message manually.

Enter the Perspective API. I decided to use this API to evaluate every incoming message and decide whether that message should be published, or flagged and evaluated by a human.

Let's dive into how it works.

Concept

The concept is simple: create a Firebase Function that evaluates any incoming message upon creation, runs it through the Perspective API, and, based on the "Toxicity" score, decides whether or not to display it.

After running a few manual tests with carefully chosen words, I decided to use a threshold of 25% toxicity. Anything above 25% would be flagged as "toxic", left unpublished, and forwarded to us in a dedicated Slack channel for human evaluation. Anything below 25% would be automatically approved and published on the platform, but also forwarded to our Slack channel for proofreading, just in case.
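To get a feel for what the API returns, here is a minimal sketch of a standalone call using the same perspective-api-client library as the function below. The sample message and the environment variable are placeholders of mine; the response path is the one the function reads:

const Perspective = require('perspective-api-client');

// Placeholder: read the API key from an environment variable
const perspective = new Perspective({ apiKey: process.env.PERSPECTIVE_API_KEY });

(async () => {
    // TOXICITY is the attribute the library requests by default
    const result = await perspective.analyze('Thank you for everything you do!');

    // The summary score is a probability between 0 and 1
    const toxicity = result.attributeScores.TOXICITY.summaryScore.value;
    console.log(`Toxicity: ${(toxicity * 100).toFixed(1)}%`);
})();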

Setup

We already have a "messages" collection on Firestore where we store all of our incoming messages. To decide whether or not to display a message, we added a "published" flag to every document: if true, the message is displayed on the platform; if not, it is hidden until further notice.
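On the platform side, the frontend then only queries messages where that flag is true. Here is a minimal sketch with the Firebase client SDK (v8 style); the collection and flag names match ours, but the helper itself is just for illustration and assumes firebase is already initialized:

async function fetchPublishedMessages() {
    // Only fetch messages that passed moderation
    const snapshot = await firebase
        .firestore()
        .collection('messages')
        .where('published', '==', true)
        .get();

    return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}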

Then, we created the Firebase Function with the code to evaluate the message and send the alerts. It is triggered every time a message is created in the Firestore collection.
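One setup detail: the function reads the Slack token and the Perspective API key from the Functions runtime configuration, so they need to be set ahead of time with the Firebase CLI (firebase functions:config:set slack.token="..." perspective.api_key="...").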

Let's take a look at the code, shall we?

Code

Here is the code we used for the Firebase Function:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const Perspective = require('perspective-api-client');
const get = require('get-value');
const { WebClient } = require('@slack/web-api');

// Initialize the Slack and Perspective clients with credentials from the Functions runtime config
const slack = new WebClient(functions.config().slack.token);
const perspective = new Perspective({ apiKey: functions.config().perspective.api_key });

admin.initializeApp();

exports.moderateMessages = functions.firestore.document('messages/{message}').onCreate(async (snapshot, context) => {
    const id = snapshot.id;
    const data = snapshot.data();

    // Evaluate toxicity of the message via the Perspective API
    const result = await perspective.analyze(data.message);

    // If the API response does not include a score, default to 1 (maximally
    // toxic) so the message is always held back for human review
    const toxicity = get(result, 'attributeScores.TOXICITY.summaryScore.value', {
        default: 1
    });

    if (toxicity * 100 < 25) {
        // Below the 25% threshold: the message is safe, approve it.
        // `<!channel>` is Slack's mrkdwn syntax for an @channel mention
        await slack.chat.postMessage({
            text: `<!channel> A safe message has been published on Together North! https://togethernorth.ca/m/${id}`,
            blocks: [{
                'type': 'section',
                'text': {
                    'text': `<!channel> A safe message has been published on Together North! https://togethernorth.ca/m/${id}`,
                    'type': 'mrkdwn'
                },
                'fields': [
                    {
                        'type': 'mrkdwn',
                        'text': `*ID*: ${id}`
                    },
                    {
                        'type': 'mrkdwn',
                        'text': ' '
                    },
                    {
                        'type': 'mrkdwn',
                        'text': `*Message*:\n${data.message}`
                    }
                ]
            }],
            channel: '#together-north',
        });
        // Mark the message as published and store the toxicity report alongside it
        return admin.firestore().collection('messages').doc(id).update({
            result,
            moderationScore: toxicity,
            published: true
        });
    } 

    // Message is not safe, send a message to Slack
    await slack.chat.postMessage({
        text: `<!channel> A suspicious message has been blocked on Together North!`,
        blocks: [{
            'type': 'section',
            'text': {
                'text': '<!channel> A suspicious message has been blocked on Together North!',
                'type': 'mrkdwn'
            },
            'fields': [
                {
                    'type': 'mrkdwn',
                    'text': `*ID*: ${id}`
                },
                {
                    'type': 'mrkdwn',
                    'text': ' '
                },
                {
                    'type': 'mrkdwn',
                    'text': `*Message*:\n${data.message}`
                }
            ]
        }],
        channel: '#together-north',
    });
    // Keep the message unpublished and store the toxicity report for review
    return admin.firestore().collection('messages').doc(id).update({
        result,
        moderationScore: toxicity,
        published: false,
    });
});

The logic is fairly simple: when a message is created, it is run through the Perspective API. If the toxicity level is below 25%, we share it in our Slack channel with its URL for easy reference and mark it as published. We also store the toxicity report from the API, out of curiosity.

However, if the message has a toxicity level higher than 25%, we send a different Slack notification to our channel, mark the document as unpublished, and store the toxicity report as well for easy review by our team.
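Once deployed with the Firebase CLI (firebase deploy --only functions), every new document in the collection runs through this check automatically, with no action needed on our end.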

If you are curious about using Firebase, the Perspective API or the Slack API, get in touch with me on Twitter or in the comments.


PS: If you live in Canada or just want to share the love, leave a message on Together North!
