100% agree with you here. Actively training the users would be a fantastic idea.

If Volvo’s system only offers auditory reminders about the lane system, but they advertise it as “active lane assist,” then it’s not good enough. If it’s marketed as a simple reminder, without implying active management of lane-keeping by the car itself, then I see no problem. It’s all in how it’s presented to the consumer.

Tesla should be held liable for overselling the capabilities of their system, though. Sure, the legalese in the Terms and Conditions spells out the instructions, but nobody reads those. Tesla needs to understand that, and stop overselling the capabilities of Autopilot, especially in cases like Musk’s wife using it.

“Better” still does not mean “good enough.”

“Autopilot was extensively tested by Tesla before release”

Ok, so if the sensors can register the object, why can’t Tesla calibrate them to detect distance and know the difference between “that overhead object is far enough away, we can pass under it” and “that object will remove the top of the car”? It’s not an unreasonable request that the automaker should not release their system until it can make that distinction reliably.
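To make the ask concrete, here’s a toy sketch of the kind of clearance check I mean. Every number and name here is made up for illustration; this is not Tesla’s actual sensor stack, just the shape of the decision:

```python
# Toy illustration of a pass-under vs. collision decision.
# All heights, margins, and names are hypothetical.

VEHICLE_HEIGHT_M = 1.45   # approximate roof height of the car
SAFETY_MARGIN_M = 0.30    # extra clearance before trusting "pass under"

def can_pass_under(object_height_m: float) -> bool:
    """True only if the overhead object clears the roof plus a margin."""
    return object_height_m >= VEHICLE_HEIGHT_M + SAFETY_MARGIN_M

print(can_pass_under(5.0))   # overhead sign gantry -> True, keep going
print(can_pass_under(1.3))   # low trailer          -> False, brake
```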

People shouldn’t trust it, but some still do. Even saying that it’s in beta does not do enough to explain what the system can and cannot do, and the (lucky) lack of crashes before these recent ones has lulled people into a false sense of security. Tesla hasn’t done anything to disabuse people of that false security.

“Tesla is failing at foreseeable events” seems to be a running theme with them, sadly. They had to release a sunshade for the Model X because nobody thought “maybe a huge glass windshield-sunroof will get annoying,” they had to upgrade the shielding along the bottom because of battery punctures, and they failed to realize how drivers would actually use Autopilot.

Here’s the thing: You shouldn’t develop a system and say “It’s good, people are just not using it right.” Good design is design that takes into account how people are actually going to use it in the real world, and plans for that. Tesla does not. Theirs is a bad design.

Oh, I remember. I used that link just yesterday. What I think we need is a combination of systems; the most important would be making sure that the driver is looking ahead. The “hands on the wheel” check would be a secondary, less important confirmation.
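Roughly what I have in mind for combining the two signals, as a sketch (the function and the warning tiers are hypothetical, just to show gaze as the primary gate and hands-on-wheel as the secondary one):

```python
# Hypothetical sketch: gaze-on-road is the primary attention check,
# hands-on-wheel only a secondary confirmation. Names are made up.

def warning_level(gaze_on_road: bool, hands_on_wheel: bool) -> str:
    if gaze_on_road and hands_on_wheel:
        return "none"                      # both checks pass
    if gaze_on_road:
        return "gentle hands-on reminder"  # only the secondary check failed
    if hands_on_wheel:
        return "eyes-on-road warning"      # primary check failed: escalate
    return "audible alarm, begin safe disengagement"
```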

Yeah, but there should be tighter limits on what they can say, the kind of claims that sound impressive without quite crossing into false advertising.

I don’t think most people read theirs either. Because that’s the case, I think it’s irresponsible of Tesla to use a marketing term that implies more automation than is occurring.

“Tesla: Turning every red light into a red light district.” =P

And that should have led (and did?) to better informational campaigns about the abilities and limitations of cruise control. Now Tesla needs to either improve the feature, or market it in such a way as to make very clear what the limitations are.

You would think they’d treat them cautiously, and I sure hope they would, but the data shows they don’t. They’re stupid for that, and it’s unfortunate, but I think Tesla needs to understand that and design systems that take it into account. I don’t think they’ve done a good enough job of that. Calling a system “Autopilot” only makes that worse.

“And even in aircraft, autopilot doesn’t mean the pilot(s) can go off and do other things.”

Well that explains it all. These crashes are because the Autopilot started deflating!

Drivers are inappropriately using the feature, yes. Tesla should understand that it will happen, though, and take one of two approaches: either improve the feature, or market it in such a way as to make its limitations very clear.

Same here. How often am I really looking at the seatbelt-release button anyway? What sort of field of vision does someone need to have for it to be a glaring annoyance? Safety > aesthetics.

Clearly it does not make people want to keep an eye on the road. We can spend all day talking about how people should act and how they should interpret things, but in the end the most important thing is how people DO act. Calling it “beta Autopilot” obviously isn’t making people act appropriately.