It will soon be easy for self-driving cars to hide in plain sight. We shouldn’t let them.

Last month, a video went viral. It showed a San Francisco police officer stopping a car at night because its headlights were off. This was not your average car. As the officer approaches the vehicle, someone off camera shouts, “Ain’t nobody in it!” The car, a robotaxi operated by Cruise, a subsidiary of General Motors, is completely empty. The robotaxi then drives off, crossing an intersection and pulling over just as the officer turns back to his colleague. The video ends with the two police officers peering around the car, trying to figure out what to do.

The confusion is certainly amusing: an everyday encounter with something that would have seemed magical a decade ago. But as these vehicles become more common, the question of who is driving them will become more serious.

It will soon become easy for self-driving cars to hide in plain sight. The rooftop lidar sensors that currently mark many of them out are likely to become smaller. Mercedes vehicles equipped with the partially automated Drive Pilot system are already indistinguishable from ordinary human-operated vehicles.

Is this a good thing? As part of our Driverless Futures Project at University College London, my colleagues and I conducted the largest and most thorough survey yet of citizens’ attitudes toward self-driving cars and the rules of the road. One of the questions we decided to ask, after conducting more than 50 in-depth interviews with experts, was whether autonomous cars should be labeled. The consensus from our sample of 4,800 UK citizens is clear: 87% agreed with the statement “It must be clear to other road users if a vehicle is driving itself” (just 4% disagreed, with the rest unsure).

We sent the same survey to a smaller sample of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question is not straightforward, and there are valid arguments on both sides.

One could argue that humans should be able to tell when a robot is interacting with them. That was the argument put forth in 2017 in a report commissioned by the UK’s Engineering and Physical Sciences Research Council, which stated that robots “are manufactured artifacts” and “should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.” If self-driving cars are genuinely being tested on public roads, then other road users are effectively subjects of that experiment and should give something like informed consent. Another argument for labeling, this one practical, is that it is safer to give a wide berth to a vehicle that may not behave the way a well-trained human driver would.

There are also arguments against labeling. A label could be seen as an abdication of responsibility by innovators, implying that it is up to everyone else to recognize and accommodate self-driving vehicles. And it could be argued that a new label, without a clear shared understanding of the technology’s limits, would only add confusion to roads already full of distractions.

From a scientific perspective, labels also affect the data that trials collect. Something like that seemed to be on the mind of a Volvo executive who told a reporter in 2016 that, “just to be on the safe side,” the company would use unmarked cars for its proposed self-driving trial on UK roads. He said he was pretty sure people would challenge marked cars, braking harshly in front of a self-driving vehicle or otherwise putting themselves in harm’s way.

On balance, the arguments for labeling, at least in the short term, are more convincing. This debate is about more than self-driving cars; it cuts to the heart of how novel technologies should be regulated. The developers of emerging technologies often portray them as disruptive and world-changing at first, but dismiss them as merely incremental and unproblematic once regulators come knocking. Yet new technologies don’t just fit into the existing world; they change it. We must be open about their potential benefits and about how they might affect our lives.

To better understand and manage autonomous cars’ deployment, we must dispel the myth that they will behave just like humans. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, but more efficiently: “Humans have data coming in through the sensors–the cameras on our face and the microphones on the sides of our heads–and the data comes in, we process the data with our monkey brains and then we take actions and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.”

That is not how humans drive, nor how self-driving cars work. Human drivers communicate with one another on the road. We know that other drivers are not passive objects to be avoided but active agents with whom we must interact, and who we hope share our understanding of the rules. Self-driving cars, on the other hand, navigate the road in a completely different way, relying on a combination of high-definition digital maps, GPS, and sensors such as lidar. Birds and planes both fly, but it would be a mistake to treat a plane like a bird.

An engineer might argue that all that matters is what a vehicle does in traffic. Others will want to know who, or what, is in charge of it. This matters most at pedestrian crossings, which rely on two-way communication. A pedestrian might make eye contact with a driver to make sure they have been seen; a driver might wave a pedestrian across to reassure them. If there is no driver, these interactions will have to change. Clear signals, such as traffic lights, can reduce the uncertainty, but a self-driving vehicle may still need to know how long it should wait for a pedestrian before proceeding, and the new rules will apply to pedestrians too.

Until now, it has been left to each self-driving car company to decide how, or whether, its vehicles are marked. That lack of standardization will cause confusion and undermine public trust. When we cross a street or negotiate a narrow road with another driver, we need to know what we are dealing with; these interactions work because of a shared understanding of expectations and mutual responsibilities. Acknowledging that we are dealing with something new would be a good first step. The technology is still in its infancy and many challenges lie ahead, but transparency and clarity are essential.

Jack Stilgoe is a professor of science and technology policy at University College London.
