We are being sold a fantasy about autonomous vehicles

Academic paper finds that messy humans will always get in the way of Silicon Valley’s driverless utopian vision

The image we have been sold of the self-driving car that requires no human interaction is highly misleading.

The way we think about autonomous cars is all wrong. Or at least, it has been up to now. Partly, that’s because – thanks to blizzards of marketing and often ill-informed commentary, not to mention some deeply troubling YouTube videos – we have come to assume that autonomous cars are already among us. Because we are already being sold systems labelled ‘Autopilot’ and ‘Full Self-Driving’, we assume that cars which can independently see and perceive the world around them, and drive themselves, are here right now.

That’s not only wrong, it’s dangerous – and it springs from a fundamental misunderstanding of how autonomous cars will eventually work. A new academic paper, published in Social Studies of Science, reminds us that autonomous cars are human creations in a human world, and will always rely on being connected to the world around them and to the people who created them.

The paper’s authors, Chris Tennant and Jack Stilgoe (respectively a research fellow and an associate professor of science and technology, both at University College London), interviewed more than 50 people working with autonomous vehicle technology. The interviewees ranged from those on the front lines of research and development to academic researchers, policymakers and other stakeholders. All of the interviews were anonymised, so that the interviewees could speak freely and without concern for their employment or status. The results make for fascinating reading, and indicate that common perceptions of autonomous vehicles, and of what they are capable of, are generally wrong.

Google's self-driving car.

“The narrative of autonomy sees autonomous vehicles (AVs) as detached,” say Tennant and Stilgoe. By detached, they mean truly autonomous – able to make entirely independent decisions and take entirely independent actions. But that’s not how autonomous vehicles work. They’re designed by humans, remember. So they will be reliant on learning the same lessons as humans, and picking up the same visual cues and danger signals as humans. Tennant and Stilgoe refer to these as “attachments” and quote the French philosopher and sociologist Bruno Latour who said: “The modernist spirit of innovation promises emancipation from attachments, while in fact creating more attachments.”


The paper defines attachments as “relationships with people, objects, institutions and infrastructures that give a thing or person definition”. So, as far as an autonomous car goes, an attachment is basically anything it has to deal with on a given day, or on a given journey. The relationship between a user or owner and the car is an attachment – each instruction to go somewhere counts, by this definition. Each traffic regulation that must be followed is also an attachment, as is every reaction to the movements of pedestrians, other cars, buses, cyclists, the weather, and even passing dogs.

Volkswagen’s self-driving concept car Sedric. Photograph: Reuters

We have become, say Tennant and Stilgoe, enamoured of the idea that AVs can just trundle along and make all their own decisions, using the brute force of vast processing power and data to work out where to go next and how to get there. However, they assert, autonomy is actually a densely layered concept, combining the autonomy of the vehicle, the autonomy of the people using and surrounding it, and the autonomy of the technology itself and of those who develop it. It is the interleaving of those different levels that makes the creation of truly autonomous vehicles so difficult – perhaps impossible in the way we have so far imagined them.

The first, biggest, and potentially most damaging clash of attachments could be between us, the users, and the people and companies who are developing autonomous cars. As Tennant and Stilgoe put it: “[There is an] implication that innovators should be unconstrained by regulators or societal concerns. This progressive ideal is bolstered by confidence that technology is inherently emancipatory. Even a cursory test of these narratives of autonomy shows that one’s view of autonomy and attachments depends on one’s standpoint. To an AV developer, a driver asserting her right to drive could seem reactionary, clinging to the outdated attachments of car culture, while a sceptical motorist might fear becoming dependent on tech companies if she wants to get from A to B.”

A driverless Porsche controlled by Huawei’s Mate 10 Pro handset, which transforms a regular car into a self-driving vehicle.

Clashing autonomies

In other words, the Silicon Valley tech mafia might decide, arbitrarily, that autonomous cars should work and function in a given way, but that might not be in a way that suits you or me. Our autonomies – our personal decisions – are clashing, if you like.

Tennant and Stilgoe also point out that purveyors of autonomous tech generally promise to liberate us from the drudgery of driving and of dealing with traffic, while overlooking the fact that car makers a century ago promised drivers a similar liberation from the constraints of early 20th-century travel. “When developers promise to learn from, replicate and then surpass human driving, they overlook the attachments that define the agency of a human driver. Given the constraints and conditions of human agency revealed by automobility, we should ask whether automated systems are likely to enable new freedoms or become similarly stuck in traffic.”

The paper also points out that we are encouraged by developers and creators to see autonomous vehicles as somehow inherently better and cleaner than current forms of transport. Take human control out of the equation, replace it with the technical perfection of robotics, and the implication is that everything will automatically improve. However, as Stilgoe and Tennant point out, autonomy isn’t the removal of human beings; it simply moves them around within the transport supply chain. An autonomous car might be able to control its own steering and propulsion, but a human will still have to tell it where to go, while other humans will still have to design, engineer, build, and maintain all of the software, hardware, and infrastructure that allows that vehicle to work.

This is an issue that underpins the current battle between Tesla (famous for over-promising and under-delivering when it comes to autonomous driving) and US safety regulators. As Tara Andringa, executive director of Partners for Automated Vehicle Education (PAVE), puts it: “Fully autonomous vehicles, classified as SAE level 4 or higher, hold the potential to massively reduce the crashes that claim the lives of tens of thousands each year – but only if consumers are willing to trust the technology with their lives. That challenge is steep, but at its core, the message is simple: no car available for sale to the general public today is truly autonomous or self-driving. This simple message must be at the heart of all communications about driving automation technology, from driver education and journalism to our casual conversations.”

The Lexus GS self-driving test car merges onto a motorway.

That trust in the technology is difficult to build, and easily broken. Worse still, that trust is so often abused and sacrificed in pursuit of profit. As one of Tennant and Stilgoe’s interviewees put it: “We have people in the industry who want to pump up the value of their companies, both big, well-established companies and start-ups... one company goes out and makes an aggressive claim, and all the competitors have to make sure they’re not perceived to be left behind, so they’ve got to match it.” Or, in other words, an awful lot of the hype surrounding self-driving cars is arguably just that – hype, and nothing more.

The biggest problem is not getting vehicles to sense and detect what’s happening around them, nor getting them to make decisions based on that input, but coping with the huge, impossibly complex number of variables involved, especially as AVs start to interact with fallible, confusing humans. As one interviewee put it: “The sort of authoritarian version is you outlaw jaywalking in order to reduce the problem... I mean, it might work in China. It’s probably not going to work in the United States. And it’s also a reminder, if you think about it, of how dumb the machines are. It’s like we can only get the machines to work if we do some pretty drastic things to people.”

‘Crazy people’

Indeed, the relative “intelligence” of artificially intelligent machines is up for debate. One expert on the subject – Dr Steve Chien of the Jet Propulsion Laboratory, whose day job is creating artificial intelligence systems for deep-space probes – told The Irish Times: “I would actually say that making a self-driving car and navigating around the streets of, say, Dublin, is quite a bit more challenging than an autonomous spacecraft. They are challenging in different ways.

“So the spacecraft is more challenging because the environment is somewhat unknown. If we send a spacecraft to Europa, we’re going there because we know very little about it. But we’re fairly certain that it won’t have to deal with the variables of a bunch of crazy people driving about on the street, dealing with rain. For instance, where I’m from in Los Angeles, when it rains it’s crazy because people aren’t used to the rain. When I first moved there, and I’m from the Midwest, where it rains all the time, I said, ‘These Californians are crazy; they don’t know how to drive in the rain.’ And now I’m one of them…”

What’s important, assert Tennant and Stilgoe, is to look behind the hype and the Silicon Valley economics (where, one might say, no potential damage to society is a barrier to innovation and profit) and instead start to look at the potential of autonomous vehicles in a more realistic and grounded sense – to see how AVs can work with their attachments, such as road infrastructure, legislation and other road users, rather than trying to ignore or diminish such things. “Scrutinising the attachments of ‘autonomous’ vehicles and their developers’ own understandings of these attachments is a powerful way to challenge a technologically determinist view that takes the problem – human error – and the solution – artificial intelligence – for granted. Acknowledging the attachments would lay the foundations for a more inclusive constitution of autonomous vehicles, one that makes the introduction of the technology a means to societal goals: safety, sustainability, accessible mobility, rather than an end in itself.”

Neil Briscoe, a contributor to The Irish Times, specialises in motoring