Why autonomous cars won’t be autonomous
12 March 2018
An educated public understands that autonomous vehicles are amazing, but that they still can’t fully take control of a passenger vehicle without a human behind the wheel.
It’s only a matter of time before today’s ‘safety drivers’ – the humans who sit idle behind the wheel in case the artificial intelligence fails – get out of the car and leave the driving to the machines, right?
Whoa, slow your roll there, self-driving car narrative.
California just approved licenses for self-driving cars that have no human driver behind the wheel – or no human in the vehicle at all (after dropping off a passenger, or for deliveries) – with one caveat: the self-driving car companies must monitor the vehicles and be able to take over driving remotely.
This rule matches other parts of the US where no-driver autonomous vehicles have been allowed. There’s no human in the car, or at least the driver’s seat, but remote monitoring and remote control make that possible, safe and legal.
That means there will be dozens of NASA-like control rooms filled with operators and screens and traffic reports, where maybe a few dozen people monitor a few hundred vehicles, taking control when the cars malfunction, freeze up or confront complex driving scenarios. A more likely alternative is that remote monitoring will be done in call centre-like cubicle farms.
So far, Nissan, Waymo, Zoox, Phantom Auto and Starsky Robotics have admitted to operating such control rooms, and every other manufacturer working with the technology has plans to use them.
AI: It’s made out of people
To compensate for the inadequacy of AI, companies often resort to behind-the-scenes human intervention.
The idea is that human supervisors make sure AI functions well, as well as taking a teaching role. When AI fails, human intervention is a guide for tweaks in the software. The explicit goal of this heuristic process is that eventually the AI will be able to function without supervision.
Remember Facebook M? That was a virtual assistant that lived on Facebook Messenger. The clear idea behind this project was to provide a virtual assistant that could interact with users as a human assistant might. The ‘product’ was machine automation. But the reality was a phalanx of human helpers behind the scenes supervising and intervening. These people also existed to train the AI on its failures, so that it could eventually operate independently. Facebook imagined a gradual phaseout of human involvement until a point of total automation.
That point never arrived.
The problem with this approach seems to be that once humans are inserted into the process, the expected self-obsolescence never happens.
In the case of Facebook M, the company expected to evolve beyond human help, but instead had to cancel the whole Facebook M project.
Facebook is quiet about the initiative, but it probably figured out what I’m telling you here and now: AI that requires human help now will probably require human help indefinitely.
Many other AI companies and services function like this, where the value proposition is AI but the reality is AI plus human helpers behind the scenes.
In the world of AI-based services, vast armies of humans toil away to compensate for the inability of today’s AI to function as we want it to.
Who’s doing this work? Well, you, for starters.
Google has for nine years used its reCAPTCHA system to authenticate users, who are asked to prove they’re human.
That proof involves a mixture of actual and fake tests, where humans recognize things that computers cannot.
At first, Google used reCAPTCHA to help computers perform optical character-recognition (OCR) on books and back issues of The New York Times.
Later, it helped Google’s AI to read street addresses in Google Street View.
Four years ago, Google turned reCAPTCHA into a system for training AI.
Most of this training is for recognising objects in photographs – the kinds of objects that might be useful for self-driving cars or Street View. In one common scenario, a photograph that includes street signs is divided into squares, and users are asked to prove they’re human by clicking on every square that contains a street sign. What’s really happening is that Google’s AI is being trained to know exactly which parts of the visual clutter are street signs (which must be read and taken into account while driving) and which parts are just visual noise that a navigation system can ignore.
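To make the mechanism concrete, here is a minimal sketch of how those grid clicks could become training labels. Everything here is an assumption for illustration – the 3x3 grid size, the majority-vote rule and the function names are hypothetical, not Google’s actual pipeline.

```python
# Hypothetical sketch: turning users' grid clicks into per-square labels.
# Assumption: each challenge image is split into a 3x3 grid (squares 0..8),
# and each user clicks the squares they believe contain street signs.
# A majority vote across users yields a label the vision model can train on.
from collections import Counter

GRID_SQUARES = 9  # 3x3 grid, indexed 0..8


def aggregate_clicks(user_clicks):
    """user_clicks: list of sets, one per user, of clicked square indices.

    Returns one label per square: True if more than half the users
    marked that square as containing a street sign."""
    votes = Counter()
    for clicks in user_clicks:
        votes.update(clicks)
    threshold = len(user_clicks) / 2
    return [votes[i] > threshold for i in range(GRID_SQUARES)]


# Three users answer the same challenge; squares 0 and 4 win the vote.
labels = aggregate_clicks([{0, 1, 4}, {0, 4}, {0, 4, 8}])
print(labels)  # [True, False, False, False, True, False, False, False, False]
```

The point of the aggregation step is that no single human answer is trusted: agreement between strangers is what makes the label usable as training data.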
But you’re an amateur (and unwitting) AI helper. Professional AI trainers and helpers all over the world spend their workdays identifying and labeling virtual objects or real-world objects in photographs. They test and analyse and recommend changes in algorithms.
The law of unintended consequences is a towering factor in the development of any complex AI system. Here’s an oversimplified example.
Let’s say you programmed an AI robot to make a hamburger, wrap it in paper, place it in a bag, then give it to the customer, the latter task defined as putting the bag within two feet of the customer in a way that enables the customer to grasp it.
Let’s say then that in one scenario, the customer is on the other side of the room and the AI robot launches the bag at high speed at the customer. You’ll note that the AI would have performed exactly as programmed, but differently than a human would have performed. Additional rules are required to make it behave in a civilised way.
This is a vivid and absurd example, but the training of AI seems to involve an endless series of these types of course corrections because, by human definition, AI isn’t actually intelligent. Humans are required to program a common-sense response into every conceivable event.
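The hamburger example above can be sketched in a few lines of code. This is purely illustrative – the function, the two-foot rule and the speed limit are assumptions made up for the example – but it shows how the naive specification is satisfied by throwing the bag, and how a "common-sense" constraint has to be bolted on by hand.

```python
# Hypothetical sketch of the robot's hand-off rule from the example above.
# The naive spec ("bag ends up within two feet of the customer") is met
# equally well by handing the bag over or by launching it across the room,
# so a human has to add an explicit gentleness constraint.

MAX_HANDOFF_SPEED = 0.5  # metres/second; an assumed "civilised" limit


def handoff_ok(distance_ft, speed_ms):
    # Naive spec: success is only about the bag's final position.
    within_reach = distance_ft <= 2.0
    # Patched spec: the bag must also arrive gently, not as a projectile.
    gentle = speed_ms <= MAX_HANDOFF_SPEED
    return within_reach and gentle


print(handoff_ok(1.5, 0.2))  # walk over and hand it across: True
print(handoff_ok(1.5, 9.0))  # launch it across the room: False
```

Every such patch is one of the "course corrections" described above: the rule set grows because the machine has no common sense of its own.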
Why AI will continue to need human help
It’s possible that self-driving car companies will never move beyond the remote control-room scenario. People will supervise and remote-control autonomous cars for the foreseeable future.
One reason is to protect the cars from vandalism, which could become a real problem: reports of people attacking or deliberately crashing into self-driving cars are on the rise.
It’s also possible that passengers will be able to press a button and talk to someone in the control room in the event of an emergency.
It will be decades before we can trust AI to handle every conceivable event when human lives are at stake.
One phenomenon that feeds the illusion of AI supercompetence is called the ‘Eliza effect’, which emerged from an MIT study in 1966. Test subjects using the Eliza chatbot reported that they perceived empathy on the part of the computer.
Nowadays, the Eliza effect makes people feel that AI is generally competent, when in fact it’s only narrowly so. When we hear about a supercomputer winning at chess, we think, “If they can beat smart humans at chess, they must be smarter than smart humans”. This is the illusion. In fact, chess-playing robots are ‘smarter’ than people at one thing, whereas people are smarter than that chess-optimised computer at a million things.
What people consistently do is underestimate human intelligence, which will remain vastly superior to computers at human-centric tasks for the remainder of our lifetimes, at least.
The evolution of self-driving cars is a perfect illustration of how the belief that the machines will function on their own in complex ways is mistaken.
AI needs humans to back it up when it’s not intelligent enough to do its job.
IDG News Service