What kinds of changes in technology planning and testing should AI leaders be considering?
Design for people. Automated, intelligent systems should be part of a process, not the whole process. There should never be a point where someone using your technology shrugs and says, "Well, the system says…" Always leave room for human intervention, feedback, and correction, especially in automated systems whose decisions affect people.
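One way this principle shows up in practice is routing low-confidence automated decisions to a person rather than letting the system decide alone. The sketch below is illustrative, not a prescribed design; the confidence threshold, the `ReviewQueue` class, and the case names are all assumptions for the example.

```python
# Minimal human-in-the-loop sketch: automate only high-confidence cases
# and defer everything else to a human reviewer.
# The threshold and queue structure here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per application


@dataclass
class ReviewQueue:
    """Holds cases awaiting human intervention, feedback, or correction."""
    pending: List[Tuple[str, float]] = field(default_factory=list)

    def add(self, case_id: str, score: float) -> None:
        self.pending.append((case_id, score))


def decide(case_id: str, score: float, queue: ReviewQueue) -> str:
    """Return an automated decision only when confidence is high;
    otherwise hand the case to a person."""
    if score >= CONFIDENCE_THRESHOLD:
        return "auto-approve"
    queue.add(case_id, score)  # a human can intervene, correct, or override
    return "needs-human-review"


queue = ReviewQueue()
print(decide("case-001", 0.95, queue))  # auto-approve
print(decide("case-002", 0.60, queue))  # needs-human-review
```

The key design choice is that the fallback path is a person, not a default answer, so no user ever hits a dead end of "the system says."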
Know now that your data sets are flawed. The original assumption, that data collected at scale is representative enough to apply to all users, is wrong. Our technology is not evenly distributed, so our data points do not accurately reflect the population of the world. This produces biased outcomes, and those outcomes must be tested for. Joy Buolamwini's Gender Shades project, which tested for the impact of bias in facial recognition software, is the perfect example. To build more inclusive products for everyone, we must have better data sets that represent everyone.
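The core of that kind of testing is disaggregation: report a model's accuracy per subgroup instead of one aggregate number, so a gap between groups becomes visible. This is a minimal sketch of that idea, not the Gender Shades methodology itself; the record format and the sample data are invented for illustration.

```python
# Illustrative bias check: compute accuracy per subgroup rather than
# one overall score. Records and group labels are hypothetical.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def accuracy_by_group(records: Iterable[Tuple[str, int, int]]) -> Dict[str, float]:
    """records: (group, predicted_label, actual_label) triples.
    Returns per-group accuracy."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}


# Hypothetical evaluation results for a classifier
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]
scores = accuracy_by_group(records)
# A large gap between groups is the signal that the aggregate number hides.
```

An aggregate accuracy over these records would look acceptable while one group fares far worse, which is exactly the pattern disaggregated testing is meant to surface.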
Truly distribute your teams. The developers we have now will not, on their own, be able to build the technology that will serve the next billion users. Hire people around the world, and design your teams so they can work together effectively.