Google has been proud of its self-driving cars – but is that pride justified?
Google’s self-driving cars have been hitting the streets for over a year now, and judging by the reports, they’re not perfect.
It was not long ago that Sebastian Thrun built the first prototype of a car not driven by a human. What began as an idea from an unbelievable future has turned into a real self-driving car that can be seen on the streets today. Sena Zorlu, a co-founder of a company based in Sunnyvale, California, photographed one of these cars while waiting at a red light in Mountain View and posted the picture on Instagram right away. People didn’t believe this was possible; even those who expected autonomous cars to reach the streets someday couldn’t believe one was right there, in front of their eyes.
The original idea was to build a car that would be easy to drive, cope with everyday traffic, help people with low vision or other health problems, and, thanks to its electric engine, be safe for the environment. The prototype shown in May 2014 promised success, but in February 2016 one of Google’s self-driving cars was involved in a collision with a bus.
That accident turned out to be only the first of many, and it opened up questions about the real-world usefulness and safety of self-driving cars. As in that first accident, Google’s self-driving cars have been involved in many crashes that were caused by humans, so the real question is who is actually at fault when one of these cars crashes.
Numbers don’t lie – but they can create drama
According to data from the Google research team, Google’s self-driving cars have been involved in 341 incidents in which they either crashed or needed human intervention to avoid a possible crash. That number looks high, but spread over 424,000 miles of driving, it is far less alarming than it sounds.
Compared to a Nissan car that needed human help 106 times in only 1,485 miles of driving, Google’s self-driving car looks much safer and more reliable. So the first impression – that a car without a human is not safe at all, and that Google’s self-driving car in particular is unreliable and dangerous – is wrong.
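To see why the comparison favors Google, it helps to put both figures on the same scale. A minimal sketch, using only the numbers quoted above (the helper name `interventions_per_1000_miles` is ours, for illustration):

```python
# Rough comparison of the intervention rates quoted in the article.
# Figures from the text: Google, 341 incidents over 424,000 miles;
# the Nissan car, 106 interventions over 1,485 miles.

def interventions_per_1000_miles(incidents: int, miles: float) -> float:
    """Normalize an incident count to a rate per 1,000 miles driven."""
    return incidents / miles * 1000

google = interventions_per_1000_miles(341, 424_000)
nissan = interventions_per_1000_miles(106, 1_485)

print(f"Google: {google:.2f} interventions per 1,000 miles")  # ≈ 0.80
print(f"Nissan: {nissan:.2f} interventions per 1,000 miles")  # ≈ 71.38
print(f"Ratio:  {nissan / google:.0f}x")                      # ≈ 89x
```

On a per-mile basis, the Nissan needed help roughly ninety times as often, which is why the raw count of 341 is misleading on its own.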
Does the Google self-driving car need human help?
It happened in Mountain View, California: a self-driving car stopped to let a pedestrian cross the street. That is normal behavior, and every human driver should do the same. Still, the self-driving car was hit from behind by another car, one driven by a human. Obviously, the self-driving car is obliged to follow the law without exception, but the human driver didn’t expect the car in front of him to stop so suddenly; he perhaps expected the driver ahead to warn him that a pedestrian was crossing.
The conclusion is that a self-driving car shouldn’t break the law, but engineers can tune its braking behavior so that the car stops more gently.
Is a human really needed to avoid accidents?
Engineers at the company claim that every crash involving a self-driving car was caused by a human. In fact, many of the crashes attributed to self-driving cars happened while a human was directing the car. So the question is how to avoid these human mistakes. The problem is the many situations where humans react quickly, avoiding danger or anticipating what another driver will do through eye contact or simple instinct. A robot can’t do this, no matter how well it is built.
The company is aware of the machine’s limitations in those circumstances and is preparing a new generation of robot cars with much better performance for the future.