> On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.
> CEO Elon Musk has often claimed that accidents cannot be the fault of the company, as data it extracted invariably showed Autopilot was not active in the moment of the collision.
@rysiek I think, more interesting than the implications for the autopilot crash statistics themselves, this is an excellent case against self-driving cars. Drivers must be liable for their faults in accidents, but when companies that operate self-driving cars refuse and avoid liability, they can't be allowed to continue operating.
@LovesTha @rysiek The current issue isn't whether the process could be safer. Rather, it's about how Tesla has been trying to hide that autopilot is involved through a completely pointless distinction. Autopilot being off at the instant of collision doesn't change the fact that it was involved in the crash.
Lots of people have died in car crashes due to manufacturer problems, but hiding these problems is way more unethical than just failing to make the cars safe.
@rune @LovesTha @rysiek If you have tried the autopilot on a Tesla.. you'd know that you are required to hold on to the steering wheel when it is active. You can not let go of the steering wheel. Also, even if it is off during the crash you as a driver would have been able to react before that second, and didn't. It is also plausible the crash would still have happened even with the autopilot off.
@shellkr @LovesTha @rysiek But how will we ever know if the autopilot or the driver is at fault when Tesla programs their autopilot to disable an instant before the crash and then lies about autopilot involvement to the authorities investigating crashes?
I don't care who's fault it is, this is shady as fuck.
@rune @LovesTha @rysiek Yes, there are probably improvements that need to be made. I wouldn't say they necessarily lied, as it was off. That doesn't mean the autopilot was fault-free. It is very likely the crash could have been avoided with all the sensors in the car, and probably will be in the future. I think if we want to reduce car accidents we need to remove the human part of it. That doesn't mean accidents won't happen anymore... just that they will be reduced.
@rysiek Oh wow... If I were really conspiratorially inclined, I'd wonder whether, for plausible deniability, it was designed to automatically shut off when detecting an unavoidable crash, or if the software is just garbage, with dangerous accidents to follow.
@rysiek somewhere out there is a programmer who took that order and made it happen.
This industry badly needs ethical standards
@cassidymcgurk I mean... I can see how "disengage in case data cannot be trusted" is a reasonable fail-safe. And so in these cases it might be that this is what happened, for whatever reason.
In the end, it is reasonable to hold the driver accountable. It's their car, it's their responsibility on the road.
It's still a horribly bad look for Tesla, even if the autopilot disengage thing was not implemented *on purpose* to avoid legal responsibility.
@rysiek @cassidymcgurk even with that, the vehicle should always be able to stop safely when the guidance/control/safety system is disabled. On handover to another system (e.g. a human), it should have made it possible for the following system to stop safely, for example by having verified enough distance ahead. That holds even when new data is obviously wrong, since the old data already needs to allow for a safe stop.
or at least that's what I’d expect from knowing other (non-road) safety systems …
heck, thinking about it, I’d even expect a speed-limiting system to cut traction on disengagement until the controls are set to 0
i.e. movement to brake meaning accelerate in a nearby control, brake releasing on its own when losing contact with its control, likelihood of accelerating by falling on the control, likelihood of something blocking the brake controls, lack of emergency brake, common maneuvers in theory requiring a second person due to lack of sight, …
That said, Tesla is particularly bad even at learning from car industry's past mistakes. They just waltz into the space and decide to ignore decades of accrued knowledge.
Re: self-driving cars, accidents
@szakib @trisschen @rysiek @cassidymcgurk given the stringent standards for certifying safety-critical systems (even in the automotive domain, where cost savings are otherwise foremost), this is not surprising: it’s highly unlikely one could devise a way to demonstrate the safety of an autopilot-like system with current system architectures
which, in a sane world, would mean deploying no such systems in production at all
I would somewhat disagree with this: "it is reasonable to hold the driver accountable."
You want to blame someone for something they could not have known.
the systems should be designed to fail safely.
i might not know precisely what that would mean, but i would guess it would be something like...
put on the 4-ways and
bring the vehicle to a stop.
or something like that.
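That "fail safely" sequence (warn the driver, hazards on, then a controlled stop if nobody takes over) can be sketched in a few lines. This is purely illustrative; `vehicle` and all of its methods are hypothetical, not any real Tesla or AV API:

```python
# Minimal sketch of a fail-safe disengagement policy. All names are
# hypothetical; `vehicle` is any object exposing the methods used below.

HANDOVER_GRACE_S = 10.0  # assumed minimum warning time before handover


def on_sensor_fault(vehicle, now):
    """If the system can no longer trust its data, it should not simply
    drop control an instant before impact: warn the driver, turn on the
    hazard lights, and absent a timely takeover, bring the car to a stop."""
    vehicle.alert_driver("autopilot disengaging: take control")
    vehicle.hazard_lights_on()          # "put on the 4-ways"
    deadline = now + HANDOVER_GRACE_S
    while vehicle.clock() < deadline:
        if vehicle.driver_has_control():
            return "handed_over"        # human took over in time
    # No takeover: controlled stop within the already-verified safe
    # distance ahead, rather than aborting control outright.
    vehicle.brake_to_stop()
    return "stopped"
```

The key design point, echoing the posts above, is that disengagement is never a terminal state by itself: the function only returns once control has either been handed over or the vehicle is stationary.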
@logan @rysiek @cassidymcgurk IIRC a few years ago a few of the major German car manufacturers announced that they would assume full responsibility for driving errors of their (prospective) autonomously self-driving cars. In my eyes, that is what is necessary to build confidence, even if it takes much longer to get those cars on the road. >>
@rysiek @cassidymcgurk In my book (and hopefully that of any reasonable regulator too, so excepting a bunch of US agencies), the failure mode for something called “autopilot” can never be to disengage right before an impending impact.
The more I learn about this stuff, the less I believe we’ll ever see fully self-driving (GoA4) cars that can be used on all types of roads.
Re: self-driving cars, accidents
@rysiek @cassidymcgurk what is weird is this: how is disengaging not an admission that decisions made by the self-driving component up to that point have led to an unsafe situation? we’d expect a human driver to drive defensively and not intentionally get into a dangerous situation
(test cases, and even whole methods to generate such test cases, have been proposed in academic literature as well as by some industry players, e.g. to make sure the system can reason about potential objects obscured from its vision. so there’s at least an expectation for self-driving components to be programmed this way. nevertheless, tesla – and i presume other implementers – chose the easy way out and blame humans for the shortcomings of their systems)
@rysiek i understood they lied about this by omission; that was what I was referring to. although BMW execs seem to get away with it, as far as i recall.
No-one becomes a billionaire without committing crimes they should be in prison for, though.
@rysiek seems to me that there's an overwhelming need (and public interest in the legal sense) to be able to inspect models, training data, and control software for autonomous vehicles running on public roads. what blind spots does the model have? are builds reproducible? verifiable? how secure is the supply chain (terrifying)?
@rysiek and it's not even so much about Elon and Tesla (though they are hardly unbiased actors here), as it's about individual and public safety.
this feels to me like an area where the public need outstrips any IP concerns (if you believe in that sort of thing).
what happens when the next autonomous car comes out? if it's a Volvo, do you trust it? how about a Seat? Volkswagen (don't forget dieselgate)? Uber? Huawei? Google?
the public shouldn't have to make guesses.
@rysiek right before crashes autopilot rolls itself up in a ball and starts rocking back and forth mumbling incoherently about free speech ...