The dangers of self-driving cars

The very word automobile means self-moving. Over more than a century of history, automobiles have evolved, increasingly taking over the driving process from humans. Early cars had to be started with a crank; features such as automatic transmission, cruise control, and automatic braking came much later.

And now we are witnessing the emergence of fully driverless, automated cars. From the hardware point of view, there is nothing particularly complex about such vehicles. The wheels, engine, steering wheel, brakes, and various servos are just about the same as those in ordinary cars.

Embedded cameras watching over the road and the other cars are not new, either. Road-sign-detection systems and satellite navigation devices with detailed maps are available in even midrange car models. Ordinary cars can also go as far as detecting road imperfections: Based on this data, for example, certain models of Mercedes cars adapt their suspension, making the car glide smoothly over the road.

There is just one technical obstacle remaining to our fully driverless future: A software layer to manage all of the systems well enough to replace a human driver. But here many challenges arise, and not all of them are technological.

The world has already heard about the first robocar fatality: A Tesla with a flaw in its detection system failed to notice a truck crossing the road just in front of it. The person in the driver’s seat was unaware of the situation — he may have been watching a movie at the time.

Who is to blame for the accident? The truck driver? He should have yielded to the Tesla, which had right of way. The Tesla’s owner? He was not driving. The automaker? The company was not driving that unfortunate vehicle, either.

Or take another situation: Imagine an accident is inevitable, and the on-board computer knows it — say, a toddler suddenly runs onto the road. Following road safety rules, the driver should brake hard without changing direction. That means an autopilot that blindly follows the rules would hit the child, whereas a living person would break the rules and steer away to, say, hit a pole. That's the better choice; the driver is likely to be fine thanks to the airbags.

Here’s another: a moose running onto the road. Going by the book would mean a collision with the moose, seriously injuring everyone in the car (and wrecking the car itself). Normally, however, a driver would probably veer away from the moose and return to the lane after avoiding it. This maneuver actually has a name: the moose test. If all goes well, the driver simply continues driving; at worst, the car might run off the road or skid, but likely without grave consequences.

Solutions

How should we attack this problem? After all, drivers need to make dozens of decisions each time they get behind the wheel. We could make the onboard computer smarter: teach it to distinguish various objects, weigh different variables, and thus handle some unconventional road situations. If it sees a moose-like object running into its trajectory, the car could maneuver immediately. If it sees a toddler running in front of the car, it could instantly scan to ensure there are no pedestrians around and find something relatively safe to aim at, such as a pole.
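To make that idea concrete, here is a minimal, purely illustrative sketch (in Python) of what such rule-based decision logic might look like. The object kinds, maneuver names, and the choose_maneuver helper are assumptions made up for this example; they are not taken from any real autopilot software.

```python
# A deliberately simplified, hypothetical sketch of rule-based maneuver
# selection. Object kinds, maneuver names, and risk assumptions are
# illustrative only, not any real autopilot's logic or API.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str          # e.g. "pedestrian", "moose", "pole", "oncoming_car"
    distance_m: float  # distance from the car, in meters

def choose_maneuver(ahead: Obstacle, surroundings: list) -> str:
    """Pick the least harmful reaction to an unavoidable obstacle ahead."""
    if ahead.kind == "moose":
        # Swerving around a large animal and returning to the lane
        # (the "moose test") usually beats braking in a straight line.
        return "swerve_and_return_to_lane"
    if ahead.kind == "pedestrian":
        # Prefer steering toward an inanimate object (such as a pole)
        # if no other pedestrians are nearby.
        roadside_clear = all(o.kind != "pedestrian" for o in surroundings)
        if roadside_clear:
            return "steer_toward_inanimate_object"
        return "emergency_brake"
    # Default: follow the rulebook and brake hard without changing direction.
    return "emergency_brake"

# Example: a moose 40 meters ahead, with only a pole on the roadside.
print(choose_maneuver(Obstacle("moose", 40.0), [Obstacle("pole", 35.0)]))
# -> swerve_and_return_to_lane
```

A real system would, of course, weigh continuous risk estimates, sensor uncertainty, and vehicle physics rather than a handful of hard-coded rules; the point of the sketch is only to show how quickly such rules multiply.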

That seems like a good approach: With enough algorithms, truly driverless cars can become a reality. However, it is not as simple as that.

Take a look at the results of a survey by Cognitive Technologies, a company that designs automated in-car systems. What’s interesting about this report is the large number of respondents: 80,000 people from 47 regions around Russia — ordinary folks just like you and me.

The results showed that in the case of a pedestrian running in front of the car with another car approaching in the oncoming lane, only 59% of respondents said they would drive off the road to avoid the collision. Surprisingly, 38% said they would hit the pedestrian. About 3% said they would steer into the other car, presumably hoping it would run off the road to avoid a collision.

If several pedestrians run in front of a moving car, 71% of respondents said they would run off-road, and 26% said they would hit the group of pedestrians.

Curiously, in the event of a dog running onto the road, 55% of respondents said they would run it over. The alternative, braking hard at the risk of being rear-ended by the car behind, was chosen by only 40% of respondents.

Another survey, by American researchers, returned even weirder results. According to the report, people think a self-driving car should prioritize pedestrians’ lives over the passenger’s life. The more people there were in the pedestrian group, the more respondents considered this decision fair. For example, if the pedestrian group consisted of 10 people, 76% of respondents said the robocar should save them, even at the cost of killing a passenger.

However, this noble stance crumbled as soon as the question got closer to home. When asked whether they would buy a driverless car that would kill them or their family members to save random pedestrians, only 19% stuck with the greater-good approach.

The conclusion is clear: People’s beliefs about robocar decision-making depend on whether the people in the hypothetical situation are strangers or their own loved ones.

What should a carmaker do? Not all people would support the algorithms manufacturers might deploy in a driverless car, which potentially means a surge in lawsuits following road accidents. But if owners were to be entrusted with programming their car, they could do something wrong or open the system to hacker attacks.

With that in mind, I think self-driving cars of the future (and I believe they are bound to emerge anyway) might make the concept of car ownership unviable. Instead, driverless cars would belong to transport companies, so anyone could order a lift via a mobile app, à la Uber.

That kind of approach would significantly decrease the number of cars required to transport the same number of people who own cars today. It would alleviate many problems, such as traffic jams, insufficient parking spots, and freaky taxi drivers with less-than-perfect driving and communication skills.

Pedestrians could be equipped with beacons (like the clothes with reflective inserts that pedestrians in some European countries are legally required to wear after dark), thus eliminating the scenario in which a car has to choose between hitting a pedestrian and killing a passenger. That’s one problem down, but still many to go.
