A Google Self-Driving Car Got Into a Crash With a Bus (And That's Okay)

A Google car on the streets. AP Photo/Tony Avelar

Well, it finally happened. One of Google’s autonomous vehicles might have caused a minor fender-bender. (Luckily, it appears nobody was hurt.) And guess what—it’s probably going to happen again. And that’s fine.

In a California DMV report discovered by Mark Harris, one of Google’s autonomous Lexus cars was stopped at a Mountain View intersection on February 14 when it needed to maneuver around sandbags placed in the right-hand lane. The car assumed—as a human driver might have—that as it nudged back into traffic, a slowing city bus was allowing the car to merge. It wasn’t, and the car struck the bus.

According to the DMV report, Google’s car “sustained body damage to the left front fender, the left front wheel and one of its driver’s-side sensors.” No injuries were reported.

This is exactly the kind of situation Google is using to teach its cars to think more like humans. Google's full explanation of the incident will appear in its monthly report, due out tomorrow, but here's what the company said about the crash in a copy given to The Verge this morning:

Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.

This is a classic example of the negotiation that’s a normal part of driving – we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.

This morning, at a transportation event sponsored by the Los Angeles Times, Google's self-driving project lead Chris Urmson spoke about safety as the primary motivator for the self-driving car project, with the goal of reducing the estimated 40,000 roadway deaths that occur on US streets every year, 94 percent of which are caused by human error.

Urmson also fielded several questions from audience members who were clearly worried about robot drivers swarming the streets (including one person who asked about the theoretical "how many kids would you kill" ethical dilemma known as the Trolley Problem). The chances of that particular situation arising are exceedingly rare, Urmson said, which "reduces the chances of that ethical dilemma to almost nothing." Still, the car's software does have a hierarchy. "We try hardest to avoid unprotected road users," he said, like pedestrians and bicyclists, followed by moving objects on the road, followed by static objects.
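That hierarchy amounts to a priority ordering inside the car's planning software. Here's a minimal sketch of how such an ordering could be expressed as a weighted cost over candidate paths; everything in it (the class names, the weights, the `trajectory_cost` function) is a hypothetical illustration, not Google's actual code:

```python
# Hypothetical sketch of a priority-ordered avoidance cost, illustrating
# the hierarchy Urmson described: unprotected road users first, then
# moving objects, then static objects. None of these names or weights
# come from Google's planner.

from dataclasses import dataclass
from enum import IntEnum


class ObstacleClass(IntEnum):
    STATIC_OBJECT = 1      # parked cars, sandbags, barriers
    MOVING_OBJECT = 2      # other vehicles in motion
    UNPROTECTED_USER = 3   # pedestrians, bicyclists


# Assumed weights: the higher the weight, the harder the planner
# works to keep the car away from that kind of obstacle.
AVOIDANCE_WEIGHT = {
    ObstacleClass.STATIC_OBJECT: 1.0,
    ObstacleClass.MOVING_OBJECT: 10.0,
    ObstacleClass.UNPROTECTED_USER: 100.0,
}


@dataclass
class Obstacle:
    kind: ObstacleClass
    collision_risk: float  # estimated probability of contact, 0.0 to 1.0


def trajectory_cost(obstacles: list[Obstacle]) -> float:
    """Score one candidate path; the planner would pick the lowest-cost path."""
    return sum(AVOIDANCE_WEIGHT[o.kind] * o.collision_risk for o in obstacles)


# Example: a path that grazes a sandbag scores far lower than a path
# that carries even a small chance of contact with a cyclist.
path_a = [Obstacle(ObstacleClass.STATIC_OBJECT, 0.5)]
path_b = [Obstacle(ObstacleClass.UNPROTECTED_USER, 0.05)]
print(trajectory_cost(path_a))  # 0.5
print(trajectory_cost(path_b))  # 5.0 -> avoided much more strongly
```

Under a scheme like this, swerving toward sandbags is always cheaper than swerving toward a bus, and both are cheaper than swerving toward a pedestrian, which is the ordering Urmson described.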

One might argue that the only way autonomous vehicles can truly be safe is if every single car (and city bus) on the road is self-driving. But Urmson said that even a single self-driving car improves conditions for everyone: "Having one of them on the road makes that person safer and makes everyone around them safer."

[The Verge and @meharris]


Contact the author at alissa@gizmodo.com or follow her at @awalkerinLA
