DEV Community

Cristopher Coronado

Detecting harmful content in text and images using Azure AI Content Safety

Introduction

Social media platforms have become double-edged swords in recent years. They connect us worldwide, but they can also act as havens for harmful content. Tech companies have turned to artificial intelligence to protect online communities from this escalating problem. Azure AI Content Safety, a service designed to recognize and flag harmful content, is one such tool.

Development

We created a demo application to demonstrate the capabilities of Azure AI Content Safety. An Angular frontend serves as the user interface in this demo, while a .NET backend API handles the heavy lifting of content analysis.

For this demo we will use this GitHub repository, where you can configure settings such as the API key, connection string, etc.

Prerequisites:

  • Azure subscription
  • .NET SDK
  • Angular CLI
  • SQL Server database

First, let’s create the Azure resources. Go to the Azure portal and create the Content Safety resource.

Once the resource is created, go to Keys and Endpoint under Resource Management and copy the key and endpoint.

These credentials must be pasted into the appsettings.Development.json file in the AzureAIContentSafety.API project.

{
   "AzureAIContentSafety":{
      "Endpoint":"https://<SERVICE_NAME>.cognitiveservices.azure.com/",
      "ApiKey":"",
      "TextSeverityThreshold":{
         "Blur":3,
         "Reject":5
      },
      "ImageSeverityThreshold":{
         "Blur":2,
         "Reject":4
      }
   }
}

Now, create the Storage Account. Follow this configuration in the Basics tab.

Once the resource is created, go to Configuration under Settings and enable blob anonymous access.

Then, go to Containers under Data storage, create a container named images, and set its anonymous access level to Blob.

Now, copy the connection string for your Storage Account: go to Access keys under Security + networking and click Show next to the connection string for key1.

Paste it into the same appsettings.Development.json file.

{
   "AzureStorage":{
      "BlobCacheControl":"max-age=21600",
      "BlobContainerName":"images",
      "ConnectionString":""
   }
}

Here's how the solution works:

  • User Interaction: A user uploads an image or enters text through the Angular frontend.
  • API Request: The Angular application sends the user-submitted content to the .NET API.
  • Content Analysis: The .NET API then calls the Azure AI Content Safety service to check for potential harm. The analysis covers four main categories:
    • Hate: Content that incites hatred or discrimination against groups
    • Sexual: Content that is exploitative, explicit, or sexually suggestive
    • Violence: Content that depicts or promotes violence
    • Self-Harm: Content that encourages or glorifies self-harm
  • Flagging and Moderation: The AI flags content it deems harmful so that human moderators can review it further. This ensures that appropriate measures, such as deletion or user suspension, can be taken.

Severity levels for the harm categories are on a scale of 0 to 7, but for images the classifier only returns severities 0, 2, 4, and 6.

Also, for text, if the user specifies, the API can return severities on the trimmed scale of 0, 2, 4, and 6; each pair of adjacent levels is mapped to a single level:

  • [0, 1] -> 0
  • [2, 3] -> 2
  • [4, 5] -> 4
  • [6, 7] -> 6

You can adjust the severity thresholds based on your needs in the appsettings.Development.json file.

To test the demo, you can submit text, an image, or both.

If the image content was categorized as harmful in any category, it is blurred by default, but you can toggle the blur with the button displayed at the top right corner of each image.

Conclusion

Even though AI-powered solutions like Azure AI Content Safety are a big improvement, it's crucial to keep in mind that they are not a panacea. Human oversight remains necessary to ensure accurate and ethical content moderation. By combining human expertise with AI's capabilities, we can make the internet a safer and more welcoming place. As technology develops further, we can expect even more advanced tools to appear, helping us manage the challenges of the digital age.

Thanks for reading

Thank you very much for reading. I hope you found this article interesting and useful. If you have any questions or ideas to discuss, it will be a pleasure to collaborate and exchange knowledge.
