A quick guide to the most important AI law you’ve never heard of
It’s a wild west out there for artificial intelligence. AI applications are increasingly used to make decisions about human lives with little oversight or accountability, and that can have serious consequences. AI’s propensity for error and overreach disproportionately harms women, people of color, and marginalized groups.
The European Union believes it has a solution: the AI Act, the mother of all AI laws. It is the first law that aims to regulate the entire sector and curb these harms. If the EU succeeds, it could set a new global standard for AI oversight.
But the world of EU legislation can be complicated and opaque. Here’s a quick guide to everything you need to know about the EU’s AI Act, which members of the European Parliament and EU countries are currently amending.
What’s the big deal?
The AI Act is hugely ambitious. It would require extra checks on the AI uses most likely to cause harm, such as systems that grade exams, recruit employees, or help judges make decisions about law and justice. The first draft of the bill also bans uses of AI deemed unacceptable, such as scoring people on the basis of their perceived trustworthiness.
The bill would also restrict law enforcement agencies’ use of facial recognition in public places. A vocal group of powerful players, including members of the European Parliament and countries such as Germany, wants a full ban or moratorium on its use in public places by both law enforcement agencies and private companies, arguing that the technology enables mass surveillance.
If the EU pulls this off, it will impose one of the strongest curbs anywhere on the technology. Some US cities and states, such as San Francisco and Virginia, have introduced restrictions on facial recognition, but the EU’s ban would apply to 27 countries and a population of over 447 million people.
How will it affect citizens?
In theory, the law should protect humans from the worst side effects of AI by ensuring that applications face at least some level of scrutiny and accountability. People can trust that they will be protected from the most harmful forms of AI, says Brando Benifei, an Italian member of the European Parliament and a key figure in amending the bill.
The bill requires people to be notified when they encounter deepfakes, biometric recognition systems, or AI applications that claim to be able to read their emotions. Lawmakers are also debating whether the law should offer a mechanism for people harmed by AI systems to file complaints and seek redress.
The European Parliament, one of the EU institutions working to amend the bill, is also pushing for a ban on predictive policing systems, which use AI to analyze large data sets in order to preemptively deploy police to crime-prone areas or to try to predict a person’s likelihood of committing a crime. These systems are highly controversial; critics say they are often racist and lack transparency.
What about outside the EU?
The GDPR, the EU’s data protection regulation, is the bloc’s most famous tech export, and it has been copied everywhere from California to India.
The EU’s approach to AI, which targets the riskiest uses, is one that most developed countries can agree on. Other countries could follow the European example and adopt a consistent way of regulating the technology. “US companies, in complying with the EU AI Act, will also raise their standards for American consumers with regard to transparency and accountability,” says Marc Rotenberg, director of the Center for AI and Digital Policy, a nonprofit that tracks AI policy.

The bill is also being watched closely by the Biden administration. The US is home to some of the world’s biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. Influential US government figures such as National Security Advisor Jake Sullivan, Commerce Secretary Gina Raimondo, and Lynne Parker, who leads the White House’s AI work, have welcomed the European effort to regulate AI.
“This is a stark contrast to how the US viewed the development of the GDPR, which at the time people in the US believed would end the internet, eclipse the sun, and end life on the planet,” says Rotenberg.
Despite some inevitable caution, the US has good reasons to support the legislation. It is extremely concerned about China’s growing tech influence. The official American position is that maintaining Western dominance in tech depends on whether “democratic values prevail,” and it wants to keep the EU, a “like-minded ally,” close.
What are the biggest challenges?
Some of the bill’s requirements are technically impossible to comply with today. The first draft requires that data sets be free of errors and that humans be able to “fully understand” how AI systems work. But verifying that the data sets used to train AI systems are error-free is extremely difficult, and today’s neural networks are so complex that even their creators don’t fully understand how they arrive at their conclusions.
Tech companies are also uncomfortable with the requirement to give regulators and auditors access to their source code and algorithms in order to enforce the law.
“The current drafting is creating a lot of discomfort because people feel that they actually can’t comply with the regulations as currently drafted,” says Miriam Vogel, who is the president and CEO of EqualAI, a nonprofit working on reducing unconscious bias in AI systems. She is also the chair of the newly formed National AI Advisory Committee that advises the White House on AI policy.
There is also a huge debate over whether the AI Act should ban facial recognition outright. It’s a contentious issue, because EU countries don’t like it when Brussels dictates how they should handle national security and law enforcement matters. Some countries, such as France, want an exception allowing facial recognition to be used to protect national security. Meanwhile the new German government, a prominent voice in EU decision-making, supports a complete ban on facial recognition in public places.

Another big fight will be over which types of AI count as “high risk.” The AI Act lists a wide range of systems, from lie detection tests to systems used to allocate welfare benefits. There are two opposing camps: one fears that the bill’s broad scope will slow down innovation, while the other believes it does not go far enough to protect people from serious harm.

Will this hinder innovation?

Silicon Valley lobbyists often criticize the regulation, arguing that it will add red tape for AI companies. The EU counters that the AI Act will apply only to the riskiest uses of AI, which the European Commission, the EU’s executive arm, estimates would cover just 5 to 15% of all AI applications.
The EU says it wants to reassure tech companies by offering a stable, clear, and legally sound set of rules that will let them develop the vast majority of AI applications without worrying about regulation.
Organizations that fail to comply face fines of up to €30 million ($31 million) or, for companies, up to 6% of total worldwide annual revenue. Europe has shown it isn’t afraid to impose hefty fines on tech companies: Amazon was fined €746 million ($775 million) in 2021 for breaching the GDPR, and Google was fined €4.3 billion ($4.5 billion) in 2018 for breaching the bloc’s antitrust laws.
When will it come into effect?
It will be at least another year before a final text is set in stone, and a couple more years before businesses have to comply. Hammering out the details of such a complex bill, with so many contentious components, could take even longer: the GDPR took over four years to negotiate and six years to enter into force. In the world of EU lawmaking, anything is possible.