Mage

How do we integrate responsible AI?

TL;DR

Responsible AI is the practice of using AI to empower individuals and put people’s values first. Deploying AI ethically is essential to fighting algorithmic bias.

Outline

  • Intro
  • What is Responsible AI?
  • Steps for Responsible AI
  • Responsible AI in practice
  • Conclusion

Intro

The rapid development of AI will have a big impact on nearly every aspect of private and professional life. Whether that impact is a success should be judged by how ethically and responsibly the technology is created. With laws unable to keep pace with AI development, it falls to companies and developers to build responsible products.

What is Responsible AI?

(Image source: FCW)


Data has proven to be a powerful driver of how companies make decisions. AI and data are moving businesses and society forward at rapid rates, with nearly limitless innovation.

The power of AI lies in how quickly it can be deployed across the world; however, this is also AI’s greatest threat, because algorithms can spread bias at massive scale and rapid pace. It is already happening: Goldman Sachs and Apple allegedly used an algorithm biased against women when setting credit limits, and predictive policing companies like PredPol have been accused of amplifying racial bias in law enforcement.

The legislative process hasn’t kept up with AI’s innovation and, as a result, has left companies to choose between higher profits and ethical good. Dipayan Ghosh, co-director of Digital Platforms and Democracy at Harvard, said, “we’re not talking about bad people, but [tech companies] are being presented with the opportunity to make their own rules, and nobody has repealed the laws of human nature.” Because Congress has yet to pass clear legislation regulating the technology, companies have developed their own company-specific standards of ethical practice.

Tech giants implement their own sets of rules for what it means to create AI responsibly. Responsible AI is the practice of using AI to empower individuals and put people’s values first. Microsoft’s ethical values for AI are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Other companies, including Apple, Facebook, Twitter, and PwC, have developed similar sets of principles to guide their growth and innovation responsibly. How well these guidelines are upheld will be up to the companies themselves until proper legal measures address the growing AI sector.

Steps for Responsible AI

(Image source: WBUR)


As we touched on in Why does AI responsibility matter?, algorithmic bias has become a serious problem as AI is implemented more widely. Joy Buolamwini, founder of the Algorithmic Justice League and a student at the MIT Media Lab, was introduced to algorithmic bias when a facial recognition model she created was only able to recognize white faces. When she tested other projects built on the same generic facial recognition database, the same results occurred. Buolamwini has since made it her mission to increase education and resources to create a more equitable AI space.

In a TED Talk, Buolamwini details the ways in which she believes a more responsible AI ecosystem can be created:

  1. Who codes matters: A study by Zippia found that men made up 67% of software developers, and of those, 57% were white. A lack of diversity means that only a few voices and experiences are heard when developing software that affects millions. More voices in software development mean that people’s innate blind spots to certain issues can be addressed.
  2. How we code matters: Without responsible or ethical guidelines, developers have no clear path to developing inclusive AI. In a 2021 State of AI and Machine Learning report, only 25% of companies surveyed said unbiased AI is a critical mission. Tech giants are taking steps to increase awareness and adoption of responsible AI practices, and in 2016 they formed the Partnership on AI to advance AI’s positive outcomes for people and society.
  3. Why we code matters: AI can be a powerful tool for businesses to make more informed decisions, improve technology, and increase profit. There is also great potential for AI to better society as a whole. According to Buolamwini, “we’ve used tools of computational creation to unlock immense wealth. We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought.”

Responsible AI in practice


In its current state, responsible AI is practiced to varying degrees from company to company. Company guidelines and coalitions such as the Partnership on AI are a starting point for increasing AI’s ethical deployment; however, much more work needs to be done before responsible AI is synonymous with AI in general.

Natasha Crampton, Microsoft’s Chief Responsible AI Officer, believes AI’s harms fall into three categories: quality-of-service harm, allocation harm, and representational harm. To minimize the chance of any of these harms, Crampton asks her teams to first think about whom the systems are built for, forcing them to consider what could go right and wrong in the model’s deployment. Integrating checks at the early stages of development means they happen while the model is being built, not after it is complete, as sketched below.
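As a concrete illustration of such an early-stage check (a minimal sketch, not Microsoft’s actual tooling; the column names and threshold are hypothetical), a team could gate deployment on the accuracy gap across demographic groups, which targets the quality-of-service harm Crampton describes:

```python
import pandas as pd

def quality_of_service_check(df: pd.DataFrame, group_col: str,
                             label_col: str, pred_col: str,
                             max_gap: float = 0.05) -> pd.Series:
    """Check for quality-of-service harm: does accuracy differ by group?

    df:        evaluation data containing true labels and model predictions
    group_col: column naming each row's demographic group (hypothetical)
    max_gap:   largest acceptable accuracy gap between groups (assumed)
    """
    # Per-group accuracy: fraction of rows where prediction matches label
    per_group = (df[pred_col] == df[label_col]).groupby(df[group_col]).mean()
    gap = per_group.max() - per_group.min()
    if gap > max_gap:
        raise ValueError(
            f"Accuracy gap of {gap:.1%} across '{group_col}' exceeds "
            f"the {max_gap:.1%} limit; review the model before deployment."
        )
    return per_group
```

Run against a held-out evaluation set during model building, a gate like this turns “what could go wrong in deployment” into a concrete, automated question rather than an afterthought.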

Tools that check a dataset’s inclusivity and how well it represents a population are also deployed during model creation. In a Brookings panel on responsible AI, Crampton clarified that “it goes beyond data, sometimes the discussion heavily focuses on data… when we’re building models we make lots of choices… we need to prioritize fairness at every point.” While fairness can often be difficult to contextualize as a metric, thinking about whom you are building a model for can sharpen focus on the model’s greater social impact.
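A representation check of this kind can be as simple as comparing each group’s share of the training data to its share of the population the model will serve. Below is a minimal sketch under that assumption; the reference shares and column name are hypothetical placeholders, not figures from any real deployment:

```python
import pandas as pd

# Hypothetical reference shares for the population the model will serve
POPULATION_SHARES = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

def representation_report(df: pd.DataFrame, group_col: str,
                          tolerance: float = 0.10) -> pd.DataFrame:
    """Compare each group's share of the dataset to its population share."""
    dataset_shares = df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({
        "dataset_share": dataset_shares,
        "population_share": pd.Series(POPULATION_SHARES),
    }).fillna(0.0)
    # Flag groups whose dataset share falls well short of their population share
    report["underrepresented"] = (
        report["population_share"] - report["dataset_share"] > tolerance
    )
    return report
```

As the quote above stresses, though, passing a data-level check like this is only one of many points where fairness has to be prioritized; it says nothing about the choices made later in modeling.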

Conclusion

AI is developing at a rapid rate, and it’s imperative that ethical practices be integrated into new technology. Data and technology aren’t inherent solutions to every problem; left unchecked, they create their own set of issues. Companies must be made aware of AI’s ethical problems to minimize errors and ensure that AI can be used to build a more equitable world.
