
fortune.com/2022/06/10/elon-mu

> On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.

> CEO Elon Musk has often claimed that accidents cannot be the fault of the company, as data it extracted invariably showed Autopilot was not active in the moment of the collision.

Sigh. 🤦‍♀️

Of course the Apartheid Richie Rich goes with:

WeLl AcKsHuLlY tHe AuToPiLoT wAs TeChNiCaLlY oFf!!1!

@rysiek I think that, more interesting than the implications for the autopilot crash statistics themselves, this is an excellent case against self-driving cars. Drivers must be liable for their faults in accidents, but when companies that operate self-driving cars refuse and avoid liability, they can't be allowed to continue operating.

@rune @rysiek (disclosure: the company I work for makes this kind of solution) Tesla's defence would be fine if it was using robust methods to ensure drivers were ready to take back control at less than a second's notice.

@LovesTha @rysiek But it doesn't matter if they take back control at a second's notice. After that second the collision happens.

There's no way to stop it when the autopilot disables.

@rune @rysiek it also ensures they are paying enough attention to avoid the situation earlier than autopilot bails (or it was a situation a human wouldn't have avoided either)

@LovesTha @rysiek The current issue isn't whether the process could be safer. Rather, it's about how Tesla has been trying to hide that autopilot was involved, through a completely pointless distinction. Autopilot being off during the instant of collision doesn't change that it was involved in the crash.

Lots of people have died in car crashes due to manufacturer problems, but hiding these problems is way more unethical than just failing to make the cars safe.

@rune @LovesTha @rysiek Yea, this nearly smacks of Tesla engineers deliberately messing with statistics and culpability.

if (about to crash into old lady || about to drive off cliff) {
    autopilot.disable() // driver's fault
}

@rune @LovesTha @rysiek If you have tried the autopilot on a Tesla... you'd know that you are required to hold on to the steering wheel when it is active. You cannot let go of the steering wheel. Also, even if it is off during the crash, you as the driver would have been able to react before that second, and didn't. It is also plausible the crash would still have happened even with the autopilot off.

@shellkr @LovesTha @rysiek But how will we ever know if the autopilot or the driver is at fault when Tesla programs their autopilot to disable an instant before the crash and then lies about autopilot involvement to the authorities investigating crashes?

I don't care whose fault it is, this is shady as fuck.

@rune @LovesTha @rysiek Yes, there are probably improvements that need to be made. I wouldn't say they necessarily lied, as it was off. That doesn't mean the autopilot was fault free. It is very likely the crash could have been avoided with all the sensors in the car, and probably will be in the future. I think if we want to reduce car accidents we need to remove the human part of it. That doesn't mean accidents will not happen anymore... just that they will be reduced.

@shellkr @rune @rysiek holding the steering wheel has a very low correlation with paying attention to the road. You can do it while asleep, watching a movie, etc.

Watching where the driver's eyes are looking is much more indicative.
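
A rough sketch of what a gaze-based attentiveness check could look like (a Python illustration; the names and the two-second threshold are invented here, not anyone's actual implementation):

from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float  # seconds since engagement
    on_road: bool     # True if the gaze estimate falls on the road ahead

def driver_attentive(samples: list[GazeSample], max_eyes_off_road_s: float = 2.0) -> bool:
    # Inattentive if the driver's eyes have been continuously off the road
    # for longer than the allowed grace period, ending at the latest sample.
    off_road_since = None
    for s in samples:  # samples assumed ordered by timestamp
        if s.on_road:
            off_road_since = None
        elif off_road_since is None:
            off_road_since = s.timestamp
    if off_road_since is None:
        return True
    return samples[-1].timestamp - off_road_since < max_eyes_off_road_s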

@LovesTha @rune @rysiek It is not enough to just hold it. You also have to steer or it will disengage. Well, not really steer, but keep a kind of resistance... hard to explain. So no, you can't sleep. Not yet anyway.
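
That "keep a kind of resistance" check is essentially a torque threshold on the steering column. A minimal sketch of the idea (the names and thresholds are assumptions for illustration, not Tesla's real parameters):

def hands_on_wheel(torque_samples_nm: list[float],
                   torque_threshold_nm: float = 0.3,
                   min_fraction_above: float = 0.1) -> bool:
    # Treat the driver as hands-on if enough samples in the recent window
    # show measurable driver-applied torque on the steering wheel.
    if not torque_samples_nm:
        return False
    above = sum(1 for t in torque_samples_nm if abs(t) >= torque_threshold_nm)
    return above / len(torque_samples_nm) >= min_fraction_above

Anything that applies steady torque, like a weight hung on the rim, reportedly satisfies this kind of check, which is part of why it is such a weak proxy for attention.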

@shellkr @rune @rysiek I know I have bias here but that still sounds like a really bad proxy for paying attention.

@LovesTha @rune @rysiek If you get a chance to try a Tesla, you should. Then you would know it is pretty effective. Also, you have to stop and restart the car to reactivate it if it gets disengaged.

I guess putting up a camera to track eye movements wouldn't hurt.

@shellkr @rune @rysiek Don't they already have an in cabin camera? They just have to do the processing.

@shellkr @rune @rysiek A quick search shows discussions of why the model 3 has an in cabin camera. (didn't read anything more than search results)

@rune @rysiek but also, if the autopilot is smart enough to turn off in dangerous situations, it should be smart enough to hit the brakes or some other failsafe instead of just letting the accident happen
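
As a toy illustration of that point (all the interfaces here are hypothetical), the failure path could command braking and a loud takeover alert rather than silently dropping control:

def on_imminent_collision(autopilot, brakes, alerts) -> None:
    # Fail toward braking and warning, not toward coasting into the obstacle.
    alerts.takeover_warning("TAKE OVER NOW")
    brakes.apply_emergency()
    autopilot.disengage(reason="imminent collision")  # recorded as involved, not hidden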

@rysiek Oh wow... If I was really conspiratorially inclined, I'd wonder whether for plausible deniability it was designed to automatically shut off when detecting an unavoidable crash, or if the software is just garbage with dangerous accidents to follow.

@rysiek somewhere out there is a programmer who took that order and made it happen.

This industry badly needs ethical standards

@cassidymcgurk I mean... I can see how "disengage in case data cannot be trusted" is a reasonable fail-safe. And so in these cases that might have been exactly what happened, for whatever reason.

In the end, it is reasonable to hold the driver accountable. It's their car, it's their responsibility on the road.

It's still a horribly bad look for Tesla, even if the autopilot disengage thing was not implemented *on purpose* to avoid legal responsibility.

@rysiek @cassidymcgurk even with that, the vehicle should always stop safely when the guidance/control/safety system disables itself (or, on handover to another system such as a human, it should already have made it possible for that system to stop safely, e.g. by knowing enough distance ahead to be safe). That holds even when the new data is obviously wrong, because the old data already needs to allow for a safe stop.

or at least that's what I’d expect from knowing other (non-road) safety systems …
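
A back-of-the-envelope version of that "old data already needs to allow for a safe stop" rule, with illustrative figures that aren't taken from any real vehicle:

def stopping_distance_m(speed_mps: float,
                        reaction_time_s: float = 1.0,
                        decel_mps2: float = 5.0) -> float:
    # reaction distance + braking distance
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

def can_stay_engaged(known_clear_distance_m: float, speed_mps: float) -> bool:
    # stay engaged only while the sensor-confirmed clear road ahead exceeds
    # the distance needed to come to a stop from the current speed
    return known_clear_distance_m > stopping_distance_m(speed_mps)

At 30 m/s (~108 km/h) that works out to 30 + 90 = 120 m of known-clear road; with less than that, the system should already be slowing down or handing over with margin.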

@trisschen @rysiek @cassidymcgurk I think one of the tricks here is that the Autopilot is not classified as a safety(-critical) system, since it is "only driver-assistance"

@szakib @rysiek @cassidymcgurk I would still expect such deactivation-safety from such a system, including more basic lane assistants, distance assistants, or stop-and-go assistants

heck, thinking about it, I'd even expect from a speed-limiting system that disengaging means no traction until the controls are set back to 0

@szakib @rysiek @cassidymcgurk I guess that's another symptom of motorcars being inherently unsafe by design

i.e. movement to brake meaning accelerate in a nearby control, brake releasing on its own when losing contact with its control, likelihood of accelerating by falling on the control, likelihood of something blocking the brake controls, lack of emergency brake, common maneuvers in theory requiring a second person due to lack of sight, …

@trisschen @szakib @cassidymcgurk yeah, cars are not ready for regular people and everyday use, the technology needs to be developed way further. 😉

That said, Tesla is particularly bad even at learning from the car industry's past mistakes. They just waltz into the space and decide to ignore decades of accrued knowledge.

Re: self-driving cars, accidents 

@szakib @trisschen @rysiek @cassidymcgurk given the stringent standards for certifying safety-critical systems (even in the automotive domain, where cost savings are otherwise foremost), this is not surprising: it’s highly unlikely one could devise a way to demonstrate the safety of an autopilot-like system with current system architectures

which, in a sane world, would mean deploying no such systems in production at all

@rysiek@mastodon.technology @cassidymcgurk@mastodon.ie

I would somewhat disagree with this: "it is reasonable to hold the driver accountable."

you want to blame someone for something they could not have known about.

the systems should be designed to fail safely.
I might not know precisely what that would mean, but I would guess it would be something like...

put on the four-way flashers and
bring the vehicle to a stop.

or something like that.
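
That guess matches what is often called a minimal-risk manoeuvre; a very rough sketch with hypothetical vehicle interfaces:

def minimal_risk_stop(vehicle, comfortable_decel_mps2: float = 2.0) -> None:
    vehicle.hazard_lights.on()                     # "put on the four-ways"
    vehicle.alert_driver("System fault - stopping")
    while vehicle.speed_mps > 0.0:                 # decelerate gently to a halt
        vehicle.request_deceleration(comfortable_decel_mps2)
    vehicle.hold_at_standstill()                   # stay stopped until taken over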

@rysiek@mastodon.technology @cassidymcgurk@mastodon.ie also just in general, robots make terrible people.

people have the ability to infer other people's behaviours, and we still make mistakes from time to time.
the AI doesn't have such an ability.

@logan @rysiek @cassidymcgurk IIRC a few years ago a few of the major German car manufacturers announced that they would assume full responsibility for driving errors of their (prospective) autonomously self-driving cars. In my eyes, that is what is necessary to build confidence, even if it takes much longer to get those cars on the road. >>

@logan @rysiek @cassidymcgurk Tesla's "move fast and break things" approach is reckless and irresponsible. It takes a weird legal environment like the USA's to have them *not* fined into oblivion for this.

@rysiek @cassidymcgurk In my book (and hopefully that of any reasonable regulator too, so excepting a bunch of US agencies), the failure mode for something called “autopilot” can never be to disengage right before an impending impact.

The more I learn about this stuff, the less I believe we’ll ever see fully self-driving (GoA4) cars that can be used on all types of roads.

Re: self-driving cars, accidents 

@rysiek @cassidymcgurk what is weird is: how come disengaging is not an admission that decisions made by the self-driving component up to that point have led to an unsafe situation? like we'd expect a human driver to drive defensively and not intentionally get into a dangerous situation

(test cases and even whole methods to generate such test cases have been proposed in academic literature as well as by some industry players, e.g., to make sure the system can reason about potential objects obscured from its vision. so there's at least an expectation for self-driving components to be programmed this way. nevertheless, tesla – and i presume other implementers – chose the easy way out and blame humans for the shortcomings of their systems)
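
As an illustration of the kind of generated test case meant there (the scenario fields below are invented for this sketch): a parked truck hides a crossing pedestrian, and the expected behaviour is to slow down near the occlusion instead of assuming the hidden space is empty.

from dataclasses import dataclass

@dataclass
class OcclusionScenario:
    ego_speed_mps: float
    occluder: str
    hidden_actor: str
    expected_behaviour: str

test_case = OcclusionScenario(
    ego_speed_mps=13.9,  # roughly 50 km/h
    occluder="parked truck 40 m ahead at the right lane edge",
    hidden_actor="pedestrian stepping out from behind the truck",
    expected_behaviour="slow down so the car can stop within its visible free space",
)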

@rysiek I understood they lied about this by omission; that was what I was referring to, although BMW execs seem to get away with it as far as I recall.

No one becomes a billionaire without committing crimes they should be in prison for, though.

@rysiek seems to me that there's an overwhelming need (and public interest in the legal sense) to be able to inspect models, training data, and control software for autonomous vehicles running on public roads. what blind spots does the model have? are builds reproducible? verifiable? how secure is the supply chain (terrifying)?

@rysiek and it's not even so much about Elon and Tesla (though they are hardly unbiased actors here), as it's about individual and public safety.

this feels to me like an area where the public need outstrips any IP concerns (if you believe in that sort of thing).

what happens when the next autonomous car comes out? if it's a Volvo, do you trust it? how about a Seat? Volkswagen (don't forget dieselgate)? Uber? Huawei? Google?

the public shouldn't have to make guesses.

@rysiek Like the autopilot shouts "Jesus, take the wheel!" but there's only the driver.

@rysiek right before crashes autopilot rolls itself up in a ball and starts rocking back and forth mumbling incoherently about free speech ...
