AI-Powered Content Moderation in React Using Large Language Models (LLMs)

Content moderation is a critical requirement for any platform that allows user-generated content. Ensuring that text such as comments and posts adheres to community standards and legal regulations is not only challenging but also time-consuming when done manually. AI-powered solutions, particularly those based on large language models (LLMs), can help automate and scale content moderation, making it more efficient and effective.

In this article, we will explore how to use AI-based content moderation with ExoAPI's SDK. We'll focus on how LLMs enhance content moderation and guide you through integrating ExoAPI into your project, using both backend and frontend components.

Why Use AI and LLMs for Content Moderation?

Traditional content moderation often relies on simple keyword filtering or human reviewers, both of which have significant limitations. AI-driven moderation, powered by large language models such as those behind ChatGPT, brings several advantages:

  • Contextual Understanding: Unlike keyword-based filtering, LLMs can understand the context of the content. For example, they can differentiate between phrases used in harmless contexts (e.g., “kill the game”) and genuinely harmful content.
  • Scalability: AI moderation tools can analyze large amounts of data in real time, making it possible to manage high-traffic platforms without overwhelming human moderators.
  • Multi-Language Support: LLMs can moderate content across multiple languages, helping platforms support a global user base.
  • Flexibility: AI models can be fine-tuned and adapted over time, becoming more accurate as they process more data and learn new trends in harmful content.

How Large Language Models (LLMs) Power Content Moderation

LLMs excel at understanding human language nuances, including detecting sentiment, context, and potential harm in text. Instead of flagging content solely based on keywords, these models can assign safety scores to content based on its overall meaning and intent. A low safety score means the content is likely harmful or inappropriate, while a high score means it's safe.

ExoAPI uses this LLM-driven approach to evaluate content, providing a safety score for text. You can set thresholds (e.g., 0.5) to decide whether to approve or reject content.
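
As a rough illustration, here is a minimal sketch of how such a score can drive an approve/reject decision. The safetyScore field mirrors the value returned by ExoAPI later in this article; the 0.5 threshold and the assumption that scores fall between 0 and 1 are example choices you should tune for your own platform.

// Minimal sketch: turning a safety score into an approve/reject decision.
// The 0.5 threshold is an example value, not an ExoAPI recommendation.
const SAFETY_THRESHOLD = 0.5;

function isContentApproved(moderationResult) {
  // moderationResult.safetyScore is assumed to be a number between 0 and 1
  return moderationResult.safetyScore >= SAFETY_THRESHOLD;
}

console.log(isContentApproved({ safetyScore: 0.92 })); // true  -> publish
console.log(isContentApproved({ safetyScore: 0.12 })); // false -> reject or queue for review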

Integrating AI-Powered Content Moderation with ExoAPI

The integration involves two key steps:

  1. Setting up the backend to handle the content moderation request securely using ExoAPI.
  2. Building the frontend React components to call the backend and display moderation results.

Prerequisite: Sign Up for ExoAPI

Visit ExoAPI and sign up for an account to obtain your API key.

1. Backend: Setting Up a Moderation API

Since the API key should be kept secure and not exposed in client-side code, the content moderation logic should reside on the server. We'll create a backend endpoint that accepts content from the client, passes it to ExoAPI, and returns the moderation result.

Here's how to set up a simple backend using Express.js and ExoAPI for text moderation.

Backend Code (Node.js + Express)

First, install the necessary dependencies:

npm install express @flower-digital/exoapi-sdk body-parser

Next, create an index.js file for your Express server:

const express = require("express");
const { ExoAPI } = require("@flower-digital/exoapi-sdk");
const bodyParser = require("body-parser");

const app = express();
const exoapi = new ExoAPI({ apiKey: process.env.EXOAPI_KEY }); // Load API key from environment variable

app.use(bodyParser.json());

// POST endpoint for submitting a comment
app.post("/comment", async (req, res) => {
  const { text } = req.body;

  try {
    const result = await exoapi.contentModeration({ text });
    // Approve the content only if its safety score meets or exceeds the threshold (e.g., 0.5)
    const isApproved = result.safetyScore >= 0.5;
    if (isApproved) {
      // TODO: save comment in database
    }
    res.json({ approved: isApproved, safetyScore: result.safetyScore });
  } catch (err) {
    console.error("Error moderating text:", err);
    res.status(500).json({ error: "Error during moderation" });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Server running on port ${PORT}`));
  • This Express app has a single route (/comment) where the client sends text to be moderated.
  • The safetyScore from ExoAPI determines whether the content is approved. If the score is at or above the threshold (e.g., 0.5), the content is deemed safe.
  • API key security is ensured by using environment variables (process.env.EXOAPI_KEY).

Running the Server

Before running the server, ensure you have set the EXOAPI_KEY in your environment:

EXOAPI_KEY=your_exoapi_key node index.js

Your backend is now ready to handle content moderation requests from your frontend.
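
You can sanity-check the endpoint with a quick request before wiring up the frontend, for example using curl (the request and response shapes match the /comment route defined above):

curl -X POST http://localhost:3000/comment \
  -H "Content-Type: application/json" \
  -d '{"text": "This is a friendly comment"}'

The response should be a JSON object containing the approved and safetyScore fields.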

2. Frontend: React Integration

On the frontend, we'll create a simple React form that submits text to the backend endpoint for moderation.

Frontend Code (React)

import React, { useState } from "react";

function CommentForm() {
  const [comment, setComment] = useState("");
  const [feedback, setFeedback] = useState("");
  const [loading, setLoading] = useState(false);

  const handleSubmit = async (e) => {
    e.preventDefault();
    setLoading(true);

    try {
      const response = await fetch("/comment", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: comment }),
      });

      const data = await response.json();
      setLoading(false);

      if (data.approved) {
        setFeedback("Comment approved and submitted!");
        // Perform further actions (e.g., display on UI)
      } else {
        setFeedback(`Comment rejected (Safety Score: ${data.safetyScore}).`);
      }
    } catch (error) {
      console.error("Error submitting comment:", error);
      setLoading(false);
      setFeedback("An error occurred while moderating the comment.");
    }
  };

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <textarea value={comment} onChange={(e) => setComment(e.target.value)} placeholder="Enter your comment" />
        <button type="submit" disabled={loading}>
          {loading ? "Moderating..." : "Submit"}
        </button>
      </form>
      <p>{feedback}</p>
    </div>
  );
}

export default CommentForm;

In this example:

  • The React form allows users to enter text and submit it for moderation.
  • The form sends the comment to the backend /comment endpoint via a POST request.
  • The backend evaluates the content using ExoAPI, and the frontend displays feedback based on whether the comment is approved or rejected.

Running Frontend with Proxy

To ensure the frontend can talk to the backend during development, you can configure a proxy in your package.json (Create React App):

{
  "proxy": "http://localhost:3000"
}

This allows you to call the backend at /comment without needing to hardcode the backend URL in development.
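
Note that the package.json proxy field is only read by the Create React App development server. If your project uses Vite instead, a roughly equivalent development proxy can be sketched in vite.config.js (the port and plugin choice below are assumptions; adjust them to your setup):

// vite.config.js - forwards /comment to the Express server during development
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      "/comment": "http://localhost:3000",
    },
  },
});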

Multi-Framework Integration

While this article focuses on React, the same principles can be applied across other popular frameworks. Here are examples for Vue.js and Next.js.

Vue.js

In Vue, you can create a form similar to React and call the backend using Axios or the Fetch API:

<template>
  <div>
    <textarea v-model="comment"></textarea>
    <button @click="submitComment">Submit</button>
    <p>{{ feedback }}</p>
  </div>
</template>

<script>
  export default {
    data() {
      return {
        comment: "",
        feedback: "",
      };
    },
    methods: {
      async submitComment() {
        try {
          const response = await fetch("/comment", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ text: this.comment }),
          });

          const data = await response.json();
          this.feedback = data.approved ? "Comment approved and submitted!" : `Comment rejected (Safety Score: ${data.safetyScore}).`;
        } catch (error) {
          console.error("Error submitting comment:", error);
          this.feedback = "An error occurred while moderating the comment.";
        }
      },
    },
  };
</script>

Next.js

In Next.js, you can create an API route for moderation and interact with it from the frontend.

API Route (pages/api/moderate.js):

import { ExoAPI } from "@flower-digital/exoapi-sdk";

const exoapi = new ExoAPI({ apiKey: process.env.EXOAPI_KEY });

export default async (req, res) => {
  if (req.method === "POST") {
    const { text } = req.body;

    try {
      const result = await exoapi.contentModeration({ text });
      const isApproved = result.safetyScore >= 0.5;
      if (isApproved) {
        // TODO: save comment in database
      }
      res.status(200).json({ approved: isApproved, safetyScore: result.safetyScore });
    } catch (error) {
      console.error("Error moderating text:", error);
      res.status(500).json({ error: "Error during moderation" });
    }
  } else {
    res.status(405).json({ error: "Method not allowed" });
  }
};
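
On the client side, the CommentForm component from the React section works in Next.js with one change: it should post to the API route path instead of /comment. A minimal sketch of the adjusted request (the response shape matches the route above):

// Inside handleSubmit, target the Next.js API route:
const response = await fetch("/api/moderate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: comment }),
});

const data = await response.json(); // { approved, safetyScore }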

Conclusion

AI-powered content moderation, driven by large language models (LLMs), provides an efficient and scalable solution to moderate user-generated content. By leveraging ExoAPI, developers can quickly integrate advanced content moderation capabilities into their applications while keeping sensitive API keys secure by using a backend server.

This approach allows developers to:

  • Prevent harmful or inappropriate content from being published.
  • Improve user experience by ensuring a safe community environment.
  • Easily scale moderation as the platform grows.

By following the steps in this article, you can implement secure AI content moderation in your React app (or other frameworks like Vue and Next.js) using ExoAPI.
