Building a better society with better AI

Artificial intelligence (AI) has the potential to drive innovations that improve nearly every aspect of society, from healthcare and legacy engineering systems to the arts and entertainment. Hollywood studios, for example, use AI to detect and measure bias in scripts, giving producers and writers the tools they need to create more inclusive and equitable media. But AI is only as smart as the data it is trained on, and that data often reflects real-life biases. To avoid perpetuating stereotypes, technologists are working to address equity and inclusion both in real life and in their innovations.

Innate bias in humans

As technologists look to use AI to find human-centric solutions to optimize industry practices and everyday lives alike, it’s critical to be mindful of the ways that our innate biases can have unintended consequences.

“As humans, we are highly biased,” says Beena Ammanath, global head of the Deloitte AI Institute and tech and AI ethics lead at Deloitte. “And as these biases get baked into the systems, there is a very high likelihood of sections of society being left behind–underrepresented minorities, people who don’t have access to certain tools–and it can drive more inequity in the world.”

Projects that begin with good intentions — to create equal outcomes or mitigate past inequities — can still end up biased if systems are trained with biased data or researchers aren’t accounting for how their own perspectives affect lines of research.

Until now, AI bias has largely been dealt with reactively, with biased algorithms and underrepresented populations in training data discovered only after the fact. Companies now need to learn how to be proactive and address these issues before they become problems.

Algorithmic bias in AI

In AI, bias appears in the form of algorithmic bias. Kirk Bresniker, chief architect at Hewlett Packard Labs, explains that algorithmic bias stems from a set of challenges in building an AI model. “We can have a challenge either because our algorithm isn’t capable of handling different inputs or because we don’t have enough data to train our model. In either case, we have insufficient data.”

Algorithmic bias can also be introduced through inaccurate processing, data modification, or the injection of false signals. Whether intentional or not, the bias can lead to unfair outcomes, privileging one group or excluding another.

Ammanath describes an algorithm designed to recognize different types and styles of shoes, such as flip-flops, sandals, and formal shoes. When it was first released, however, it could not recognize women’s shoes with heels. The development team was a group of recent college graduates, all male, who never thought to train the algorithm on women’s heels.

“This is a trivial example, but you realize that the data set was limited,” says Ammanath. Now imagine a similar algorithm, trained on historical data, that is used to diagnose a disease. What if it hasn’t been trained on certain body types, certain genders, or certain races? The impact would be enormous.

Critically, she says, “If you don’t have that diversity at the table, you are going to miss certain scenarios.”

Better AI means self-regulation and ethics guidelines

Simply obtaining more (and more diverse) datasets is a formidable challenge, especially as data has become more centralized. Data sharing raises many concerns, not least privacy and security.

“Right now, individuals have far less power over the vast companies that are collecting their data,” says Nathan Schneider, assistant professor of media studies at the University of Colorado Boulder.

New laws and regulations will likely dictate how and when data can be shared, but innovation doesn’t wait for legislators. Companies developing AI have a responsibility to act as good data stewards, protecting individual privacy while working to reduce algorithmic bias. And because the technology is evolving so rapidly, regulations cannot be relied on to cover every possible scenario, says Deloitte’s Ammanath. “We are entering an era where you have to balance between adhering to existing regulations and being self-regulating.”

This kind of self-regulation means raising standards across the entire supply chain of technologies involved in building AI solutions, including the data, the training, and the infrastructure. Companies must also give employees across departments a way to voice concerns about bias. Biases are unlikely to be eliminated entirely, but companies should regularly review the effectiveness of their AI solutions.

Because AI is highly contextual, self-regulation will look different at every company. HPE, for example, established its own ethical AI guidelines. A diverse group of people from across the company spent nearly a year developing its AI principles, which were then vetted with a broad range of employees to make sure they could be followed and that they fit the corporate culture.

“We wanted to increase the general understanding of these issues and then gather best practices,” says HPE’s Bresniker. “This is everyone’s job: to become literate in the area.” AI has reached a point of maturity, moving beyond research into practical applications that create value across industries. As it becomes more widespread, organizations have an ethical responsibility to provide robust, inclusive, and accessible solutions. That responsibility is pushing organizations to examine the data they feed into a process, sometimes for the very first time. “We want people to establish that provenance and that measurable confidence in what’s going in,” Bresniker says. “They have that ability to stop perpetuating systemic inequalities and create equitable outcomes for a better future.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by the editorial staff of MIT Technology Review.
