
Kelsey Foster

Originally published at hackernoon.com

What Is Data Bias and How to Avoid It

Data bias can have significant implications for research and practical applications. Think back to last year’s Facebook scandal, where its AI asked users if they wanted to “keep seeing videos about primates” after they watched a video featuring Black men. Viewers were understandably appalled, and the incident sparked a heated conversation about the limitations (and potential dangers) of AI-backed software.

While this may seem like an extreme example, it serves as a powerful reminder of the real-life implications of AI models trained with biased data.

What exactly is data bias? Data bias refers to data sets that are not representative of the population under study. Models trained on biased data can encode prejudice against particular subjects or subpopulations.

Unfortunately, data bias is a common problem in AI and machine learning applications, and it often creeps in unintentionally. That’s why AI researchers and data scientists must stay constantly vigilant to ensure the models they train are free, or as free as possible, of any type of bias.

This article will look at three ways to limit data bias: collecting data from a variety of sources, ensuring data is diverse, and monitoring real-world performance.

1. Collect data from a variety of sources

To avoid data bias, it’s imperative that data is collected from a wide variety of sources. Here are the most common avenues for collecting training data:

  • Paying for data sets
  • Using public data sets
  • Sourcing open source content
  • Using in-person or field-collected data sets

The best training data would be sourced from a combination of all four.
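As a rough sketch of what that combination can look like in practice, here’s a short Python example (the manifest file names are hypothetical) that merges samples from all four source types and tags each one with its origin, so you can check that no single source dominates your training mix:

```python
import json
from collections import Counter

# Hypothetical manifest files, one per data source. Each line is a JSON
# object such as {"audio": "clip_001.wav", "transcript": "..."}.
SOURCES = {
    "purchased": "purchased_manifest.jsonl",
    "public": "public_manifest.jsonl",
    "open_source": "open_source_manifest.jsonl",
    "field_collected": "field_manifest.jsonl",
}

def load_combined_dataset(sources):
    """Merge all manifests, tagging every sample with its source."""
    samples = []
    for source_name, path in sources.items():
        with open(path) as f:
            for line in f:
                sample = json.loads(line)
                sample["source"] = source_name  # keep provenance for later audits
                samples.append(sample)
    return samples

dataset = load_combined_dataset(SOURCES)

# Quick sanity check: is any single source dominating the mix?
print(Counter(sample["source"] for sample in dataset))
```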

If your model makes predictions on speech, you’ll need to make sure your overall data set covers a wide range of recording environments and background-noise conditions. This will help guarantee that your models can make just as (or nearly as) accurate predictions on noisy audio as they can on studio-quality audio.
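One common way to build in that robustness is to augment clean recordings with background noise at varying signal-to-noise ratios. Here’s a minimal sketch in Python with NumPy; the waveforms below are synthetic stand-ins for real recordings:

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise waveform into a clean waveform at a target SNR (in dB)."""
    # Tile or trim the noise to match the clean signal's length.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)

    # Scale the noise so clean_power / noise_power matches the target SNR.
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / noise_power)
    return clean + noise

# Example: simulate a noisy recording at 10 dB SNR from synthetic signals.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))  # 1 s of a 440 Hz tone
noise = rng.normal(0, 0.1, 16000)                           # white noise
noisy = mix_at_snr(clean, noise, snr_db=10.0)
```

In a real pipeline you’d mix in actual noise recordings (street, café, office) rather than white noise, and sweep a range of SNRs to cover everything from quiet rooms to loud environments.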

2. Make sure data is diverse

In addition to collecting data from myriad sources, you’ll also want to make sure that the data itself is diverse. This means that the speakers in your audio or video files should encompass a wide range of characteristics: location, dialect, gender, sex, race, nationality, and more.

Unfortunately, sourcing such diverse data may prove difficult, especially if you rely solely on open-source data. That’s why these first two recommendations go hand in hand: a variety of sources, and diverse data within each source.
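A practical way to keep yourself honest here is to store speaker metadata alongside each sample and audit the distribution before training. Here’s a minimal sketch, assuming (hypothetically) that each sample carries dialect and gender fields:

```python
from collections import Counter

# Hypothetical samples with speaker metadata attached.
dataset = [
    {"audio": "clip_001.wav", "dialect": "midwestern", "gender": "female"},
    {"audio": "clip_002.wav", "dialect": "southern", "gender": "male"},
    {"audio": "clip_003.wav", "dialect": "midwestern", "gender": "female"},
    # ... thousands more in practice
]

# Audit how each attribute is distributed across the data set.
for attribute in ("dialect", "gender"):
    counts = Counter(sample[attribute] for sample in dataset)
    total = sum(counts.values())
    print(attribute)
    for value, count in counts.most_common():
        print(f"  {value}: {count} ({count / total:.0%})")
```

If one value makes up the overwhelming majority of a category, that’s a signal to go back to step one and source more data before training.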

3. Monitor real-world performance

Now that you’ve ensured that your initial data set is diverse, you can be confident that your model will be unbiased, right? Unfortunately, while the odds of bias are much lower, there’s still no guarantee.

To make sure, it’s important to monitor your model’s real-world performance and look for any areas where bias may have crept in. Does your model predict female speech better than male speech? Midwestern speech better than Southern? If so, take the time to retrain with new data sets that weed out these problem areas.
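In practice, that means slicing your evaluation metric by subgroup rather than looking at a single overall number. Here’s a minimal sketch that computes word error rate (WER) per group with a standard edit-distance calculation; the group labels and transcripts are hypothetical:

```python
from collections import defaultdict

def edit_distance(ref: list, hyp: list) -> int:
    """Word-level Levenshtein distance between reference and hypothesis."""
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[-1]

# Hypothetical evaluation results: (group, reference transcript, model output).
results = [
    ("female", "turn on the lights", "turn on the lights"),
    ("male", "turn on the lights", "turn on the light"),
    ("midwestern", "play some music", "play some music"),
    ("southern", "play some music", "play sum music"),
]

errors, words = defaultdict(int), defaultdict(int)
for group, reference, hypothesis in results:
    ref_words = reference.split()
    errors[group] += edit_distance(ref_words, hypothesis.split())
    words[group] += len(ref_words)

for group in errors:
    print(f"{group}: WER = {errors[group] / words[group]:.1%}")
```

If one group’s WER is noticeably higher than the others’, that’s your cue to source more data for that group and retrain.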

Training, testing, and retraining will be an iterative process over the model’s lifetime. For example, leading Speech-to-Text APIs are often trained on billions of words of text to help boost their accuracy. Researchers then constantly strive to improve the model by looking for areas of deficiency, sourcing new training data, and retraining.

By focusing on these three main steps (collecting data from many sources, ensuring a diverse data set, and monitoring model performance) you can be confident that your models will perform well in real-world scenarios and avoid embarrassing missteps like Facebook’s AI disaster.
