General Question

flo's avatar

Is it impossible that a driverless car would need to be recalled?

Asked by flo (7366 points) 1 month ago

What if it stops when it should keep going, or keeps going when it should stop, or other crazy things? How can it be designed in such a way that the driver can’t do anything to override it?


10 Answers

Dan_Lyons's avatar

This is a huge conundrum. What if they recall the vehicle and it simply drives away?

CWOTUS's avatar

Engineers generally design things such as conveyances – planes, ships, locomotives and cars, to name a few – so that they can always be overridden by human operators when the need arises.

That’s why subway and railway engines have “deadman switches”: the operating engineer has to keep a grip on the control, and the moment it is released the switch applies the brakes, shuts down the engine, stops the train, and signals the control yard that it has done so. It’s why planes that fly 90% of the time on autopilot can be taken over at any time by the pilot. And it’s why (so far, anyway) “driverless” cars include window glass, steering wheels, brake pedals and accelerators – a truly “self-driving and only self-driving” car would have no need of any of those things.
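The deadman-switch behavior described above can be sketched as a tiny control loop. Everything here – the class name, the method, the action strings – is hypothetical, purely to illustrate the release-triggers-safety-actions idea:

```python
class DeadmanSwitch:
    """Illustrative sketch (hypothetical names): while the operator's grip
    is held, nothing happens; the moment it is released, the switch applies
    the brakes, shuts down the engine, and signals the control yard."""

    def __init__(self):
        self.events = []  # safety actions taken so far

    def tick(self, grip_held: bool) -> None:
        # Called once per control cycle with the current grip state.
        if not grip_held:
            self.events.extend([
                "apply_brakes",
                "shut_down_engine",
                "signal_control_yard",
            ])

switch = DeadmanSwitch()
switch.tick(grip_held=True)   # engineer holding on: no action taken
switch.tick(grip_held=False)  # grip released: full safety sequence fires
```

The key design point is that the *absence* of operator input is what triggers the safe state, so a slumped or incapacitated engineer fails safe by default.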

Which is not to say that a self-driving car must be made without those things; but if it is properly engineered, even that car will have an override built in so that an operator can disconnect a failed or failing controller, engage a redundant system (another engineered feature) that will take over for emergency stops and/or controlled parking, or at the very least simply shut down the vehicle wherever it is and let other drivers (or driverless cars) maneuver around it as they would any other roadway obstruction or hazard. I would imagine that the fail-safe process on a driverless car would be: pull over and come to a complete stop, turn on the blinkers, and send out an emergency signal to some agency that probably hasn’t even been invented yet.
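That failure cascade – disconnect the bad controller, fall back to the redundant one, otherwise pull over, stop, and signal – could be sketched as a simple priority ladder. The function and action names below are hypothetical, just to make the ordering concrete:

```python
def fail_safe_actions(primary_ok: bool, backup_ok: bool) -> list:
    """Hypothetical fail-safe cascade: prefer the working primary
    controller, fall back to the redundant system for a controlled
    stop or park, and as a last resort pull over, stop, flash the
    blinkers, and call for help."""
    if primary_ok:
        return ["continue_driving"]
    if backup_ok:
        return ["disconnect_primary", "engage_backup", "controlled_parking"]
    return ["pull_over", "complete_stop", "hazard_blinkers",
            "send_emergency_signal"]

# Worst case: both controllers gone, so the car parks itself and calls out.
print(fail_safe_actions(primary_ok=False, backup_ok=False))
```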

On the other hand, even the best engineered systems can fail, the redundant systems can fail, the backup system can fail, the override can fail, and the human operator may be incompetent or incapable of taking control. Accidents will happen. And when they do, engineers go back to work to figure out how to address those faults – to make the systems hardened, more robust, able to deal with additional and less-predictable inputs, and continually safer. After all, we’re not simply driving “horseless carriages” any more, either, are we?

johnpowell's avatar

Software problems. It is like Flash and will be updated every single fucking day.

But I think Google is doing the self-driving car thing to make the Maps Street View thing cheaper.

And really, Google is an advertising company. If they want you to focus on something that isn’t driving, it’s so that you see their ads instead.

Google is evil.

kritiper's avatar

No, it is not impossible. Not where humans are concerned!

jerv's avatar

While technically possible, it’ll never happen, mostly for the reasons @CWOTUS pointed out. Much of industrial safety is about cutoffs and overrides, and the same attitude extends to non-industrial machinery like cars. A car without any form of override would never be allowed by the NHTSA.

CWOTUS's avatar

To put my lengthy response in simpler terms: engineering is all about “designing for failure” – figuring out what has failed, what we know will fail, and what can reasonably be expected to fail, and asking “how do we prevent any or all of those failures from becoming a catastrophe?”

In other words, “What do we need to have in place when _____ fails?” And keep answering that question. Then build a thing that satisfies all of those questions as well as possible (while still maintaining commercial viability), put it into service… and eventually learn of a new failure mode. “Well, we never predicted that, did we?” and start over.

I recommend To Engineer Is Human: The Role of Failure in Successful Design, by Henry Petroski – a highly readable introduction to these concepts for non-engineers.

jerv's avatar

@CWOTUS It should go without saying that one of the first things designed in is usually an emergency shutdown for when things fail.

flo's avatar

Here is an article that I found after posting the OP.
http://www.businessinsider.com/flaw-in-googles-self-driving-car-prototype-the-panic-button-2014-5
The panic button is a flaw?

jerv's avatar

Well, to those who don’t understand engineering and expect everything to work absolutely perfectly and reliably at all times, yes; the Panic Button proves that it isn’t 1000% perfect. To those who do know engineering, it’s expected, since we know technology is not perfect.

flo's avatar

Thanks all for your answers.
