Matt Zeiler for Clarifai

Socially Responsible Pixels? A Look Inside a Demographics Recognition Model

Here at Clarifai, our team pushes the technological boundaries of artificial intelligence every day. But while we’re exploring new technical frontiers, we’re also keenly aware that we’re rolling into new ethical terrain as well. In the case of computer vision, models are trained to judge an image based solely on the pixels. So, how do we build socially responsible visual recognition AI and prevent discriminatory, offensive, or biased results without any context other than the pixels?

"In the not-so-distant future, AI will power every facet of our lives, from self-driving cars to personal security to what type of ads we're shown. You don't want the underlying technology for things that affect your daily life to be a black box that may or may not be biased. You want to be able to influence the technology and teach it when it's wrong. That's where Clarifai comes in."

With this question in mind, we took a lot of care in creating our new Demographics recognition model which automatically recognizes the age, gender, and multicultural appearance of people’s faces. I want to take this opportunity to walk you through some of the ethical questions we tried to address as a company and how you can make our Demographics model even better!

https://developer.clarifai.com/models/demographics-image-recognition-model/c0c0ac362b03416da06ab3fa36fb58e3
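To make the discussion concrete, here is a minimal sketch of how an image could be sent to the Demographics model through Clarifai's REST API. The endpoint path, request shape, and the placeholder API key and image URL are assumptions for illustration only - the model page linked above is the authoritative reference.

```python
import requests

API_KEY = "YOUR_CLARIFAI_API_KEY"               # placeholder - use your own key
MODEL_ID = "c0c0ac362b03416da06ab3fa36fb58e3"   # Demographics model ID from the link above
IMAGE_URL = "https://example.com/face.jpg"      # hypothetical image URL

# Assumed v2-style "outputs" endpoint: POST an image URL, get predicted concepts back.
response = requests.post(
    f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs",
    headers={"Authorization": f"Key {API_KEY}"},
    json={"inputs": [{"data": {"image": {"url": IMAGE_URL}}}]},
)
response.raise_for_status()

# The real response may nest concepts under per-face regions; this sketch only
# prints any top-level concepts (age ranges, "masculine"/"feminine",
# multicultural appearance categories) with their confidence scores.
for concept in response.json()["outputs"][0]["data"].get("concepts", []):
    print(f"{concept['name']}: {concept['value']:.2f}")
```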

What is the right terminology for our Demographics model and concepts?

The first thing we wanted to get right with our Demographics model was the terminology we used for the things our model recognizes. We wanted the terms not only to accurately describe the functions in our model, but also to be inclusive and socially conscious. While understanding "Age" is pretty self-explanatory, the concepts of "Sex/Gender" and "Race/Ethnicity/Multicultural Appearance" are a bit more nuanced.

What is sex vs. gender?

"Gender is the complex interrelationship between an individual's sex (gender biology), one's internal sense of self as male, female, both or neither (gender identity), as well as one's outward presentations and behaviors (gender expression) related to that perception, including their gender role." - GenderSpectrum.org

At Clarifai, we want everyone to feel like they can come to work as their authentic selves. The desire to be respectful of the way people choose to identify themselves influenced our decision to give the "gender appearance" part of our Demographics model the categories of "masculine" or "feminine." Because our technology is evaluating the visual characteristics of the human face, "gender" felt like the appropriate terminology over "sex" (which is more about physiology). However, we recognize that the gender terms "man," "woman," "non-binary," etc. are largely an aspect of self and not something we felt our AI could appropriately label - thus, we went with "masculine" and "feminine" as descriptors.

What is race vs. ethnicity vs. multicultural appearance?

“People of the same [often] race share genetically transmitted physical characteristics. People of the same ethnicity share cultural, linguistic, religious, and often racial characteristics. Ethnicity is broader and more useful. Racial classifications have often been imposed by outsiders, and many of the traditional classifications are now regarded as questionable from a scientific standpoint. As a result, race is more vague and less intellectually sound than ethnicity.”

We’re proud to have a diverse team at Clarifai and we wanted to make sure our Demographics model represented our values. The terms "race" and "ethnicity" were more of a gray area for us than "gender/sex" when applied to our Demographics model. The term "race" is typically defined by physical characteristics, which would be an accurate descriptor for what our Demographics model recognizes, but is fraught with negative connotations. "Ethnicity" goes far beyond physical characteristics and is therefore also not a good descriptor for what our model sees. In the end, we decided to create a new term that we feel encompasses both the function of our model and our company values - "multicultural appearance." We aligned the default concepts we can recognize in "multicultural appearance" with the US census rubric.

How can our Demographics model be used in the real world?

At Clarifai, our core mission is to understand images and video to improve life. When we build new visual recognition models, it is always with the idea that our users have the same intent - to improve life. However, when we’re talking about things like age, gender, and multicultural appearance identification technology, we realize that not all the real world use cases are positive ones.

Ultimately, it’s about intent, and we have a lot of faith in humanity. We’re building an open platform where developers all over the world are creating the next generation of intelligent applications. It’s not our business to limit what developers create, and we choose to believe that most people will use our Demographics model for good - like bringing attention to female representation in tech or bringing an end to human trafficking.

How do we enable the community to make our Demographics model better?

Continuing on the previous point about our faith in humanity, we also see opening up the training of our Demographics model to the community as the only way to ensure we have the best, most accurate, and least biased model around.

We’ve always given developers the ability to send feedback to our model via our API, and now we’ve made it even easier for everyone to contribute. Our live demos on our site now allow you to teach our systems how to improve without writing a single line of code - simply test an image in our demos to see what the results are, and if they’re not what you expect, you can correct them! Public feedback is only as good as the participation, and we trust you to provide us with ethically responsible feedback. We will carefully incorporate this feedback into future versions of our models to improve their overall quality.
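As a rough illustration of the API feedback path mentioned above, the sketch below sends a corrected concept back for an image the model has already predicted on. The feedback endpoint path, payload shape, and the input ID are assumptions modeled on the v2-style API rather than a definitive reference, so check the API documentation before relying on them.

```python
import requests

API_KEY = "YOUR_CLARIFAI_API_KEY"               # placeholder - use your own key
MODEL_ID = "c0c0ac362b03416da06ab3fa36fb58e3"   # Demographics model ID
INPUT_ID = "ID_OF_THE_IMAGE_YOU_TESTED"         # hypothetical input identifier

# Assumed feedback payload: mark which concepts were actually present (value 1)
# or absent (value 0) in the image the model just predicted on.
payload = {
    "input": {
        "id": INPUT_ID,
        "data": {
            "concepts": [
                {"id": "feminine", "value": 1},   # the correct label
                {"id": "masculine", "value": 0},  # the incorrect prediction
            ]
        },
        "feedback_info": {"event_type": "annotation"},
    }
}

# Assumed feedback endpoint path - verify against the current API docs.
response = requests.post(
    f"https://api.clarifai.com/v2/models/{MODEL_ID}/feedback",
    headers={"Authorization": f"Key {API_KEY}"},
    json=payload,
)
response.raise_for_status()
print("Feedback accepted:", response.json().get("status", {}).get("description"))
```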

https://developer.clarifai.com/models/demographics-image-recognition-model/c0c0ac362b03416da06ab3fa36fb58e3?utm_campaign=devto-org-account&utm_medium=blog&utm_source=dev-to&utm_content=article-link

We truly believe that the diversity of users on our platform and the perspectives you provide are the only way to build the best AI platform on the planet. Our AI is not perfect and there will be missteps along the way, but we hope by giving you the power to teach our AI how to see the world, we can ingrain real knowledge in artificial intelligence. We welcome you to join us in our mission to understand every image and video to improve life - start here!

Top comments (1)

Walker Harrison

This is really neat stuff. I think that we often assume (or hope?) that efforts in software engineering or computer science will be sterilized -- lacking the social, or political, or even existential issues that crop up all the time otherwise. But this article shows that since AI is "made in our image," to borrow a biblical term, it will naturally come up against the same challenges when it comes to gender and race.

Somewhat of a segue, but I highly recommend this book about AI/human interactions. Examines what we can learn about ourselves in the pursuit of better, more realistic artificial intelligence.