Of course, there might be savings elsewhere. As we use fewer fossil fuels to power our homes, we can spend less on environmental cleanup from those fuels. Not that it would drastically alter the formula in the short term, but over time I think we could see solar setups reach break-even sooner.
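
To put rough numbers on it, here's a back-of-the-envelope sketch; every figure below is a hypothetical placeholder, not real data:

    # Hedged sketch: folding avoided cleanup costs into the break-even formula.
    system_cost = 15_000.0       # upfront cost of a solar setup ($), hypothetical
    bill_savings = 1_200.0       # direct savings on the power bill ($/yr), hypothetical
    cleanup_savings = 150.0      # assumed avoided environmental-cleanup cost ($/yr)

    direct = system_cost / bill_savings                                  # 12.5 years
    with_externalities = system_cost / (bill_savings + cleanup_savings)  # ~11.1 years
    print(f"{direct:.1f} yr vs. {with_externalities:.1f} yr")

A small effect in any one year, but it pulls break-even forward, which is the point.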

Agreed.

Vaccines are demonstrably better than no vaccines. By a huge margin. We simply don’t have enough evidence that autonomous cars are, right now.

I think the issue with just doing hands-on-the-wheel is that people will rest their hands on the wheel, then do other things (reading a book, turning around, eating, napping, etc.).

I understand your argument that it’s a dangerous line of thinking, I really do, because I understand that sometimes we need to let people make their own mistakes. But when it comes to saving lives, I think we really should be trying to do more, and not pushing out software that has these glaring flaws (specifically, the inability to see cross-traffic like a perpendicular trailer).

Tesla needs to understand what users are going to infer, then design their marketing to account for that.

Trying to improve technology to make it safer is ridiculous? Demanding simple improvements that don’t allow people to think they can offload important work to incomplete software is ridiculous?

Then the roller coaster company should install a dead-man’s switch that requires constant input from the operator. When it comes to life-or-death situations, we can’t let “the person should have acted a certain way” be our solution. We need failsafes.
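
For what it’s worth, the mechanism is simple. A minimal sketch (the names, the 5-second window, and the callbacks are all illustrative assumptions, not any real ride’s control system):

    import time

    TIMEOUT_S = 5.0  # how long the operator may go without touching the control

    def dead_mans_switch(operator_active, emergency_stop):
        """Fail safe: if fresh operator input stops arriving, stop the ride."""
        last_input = time.monotonic()
        while True:
            if operator_active():            # e.g. a lever that must be held
                last_input = time.monotonic()
            if time.monotonic() - last_input > TIMEOUT_S:
                emergency_stop()             # no input -> halt, don't coast on
                return
            time.sleep(0.1)

The point of the pattern: the safe state is the default, and continued operation is what has to be earned.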

Ratios would be helpful here, wouldn’t they? I have no idea what they are, but let’s consider:

He did not see the semi because he was doing something else, which he thought he could do thanks to Autopilot. If Tesla were more direct about the system’s limitations, people would pay more attention to the road.

Tesla can use betas to improve their software, but the software needs to meet a minimum standard of safety first. I argue that it does not. You don’t teach a child to swim by throwing them in the deep end; you use the shallow end...then deeper...then the deep end. Incremental learning. Right now they’re letting users engage the full system from day one, deep end first.

I made a mistake in saying 100%, and I would edit it if I could. 99.99% would be more accurate.

1. When it comes to things like phones, if someone doesn’t read the manual, they might break their phone. That’s annoying, but not dangerous. When someone is driving a fast-moving, heavy vehicle and hasn’t read the instructions carefully, they could kill other people. In cases like that, “read the instructions” isn’t good enough.

Google Maps does.

YES. Eye-tracking is exactly what I’ve been arguing for on this website for a while now. If my $700 phone can do face-tracking and know when I’m looking at it, then a $60,000+ car should be able to as well. Force the user to scan the horizon every so often, and we’d see obstacles like cross-traffic far before impact.
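
A rough sketch of what that gaze check could look like, with escalation before any hard cutoff (the camera call and the thresholds are assumptions for illustration, not Tesla’s actual behavior):

    import time

    MAX_LOOK_AWAY_S = 3.0  # assumed budget before the car objects

    def attention_watchdog(gaze_on_road, warn, disengage):
        """Escalate: chime first, then hand control back if the driver won't look up."""
        last_on_road = time.monotonic()
        while True:
            if gaze_on_road():               # hypothetical face-tracking camera check
                last_on_road = time.monotonic()
            away = time.monotonic() - last_on_road
            if away > 2 * MAX_LOOK_AWAY_S:
                disengage()                  # slow down and return control safely
                return
            if away > MAX_LOOK_AWAY_S:
                warn()                       # audible/visual nag before disengaging
            time.sleep(0.1)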

100 people represent a tiny, tiny fraction of the total hours and miles driven each year. Absolutely tiny.
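
Back-of-the-envelope, assuming roughly 3 trillion U.S. vehicle-miles per year and about 13,500 miles per driver (both rough figures):

    total_miles = 3.0e12           # approx. annual U.S. vehicle-miles (rough)
    miles_per_driver = 13_500      # approx. average annual miles per driver (rough)
    fraction = (100 * miles_per_driver) / total_miles
    print(f"{fraction:.1e}")       # ~4.5e-07, i.e. about 0.00005% of all miles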

1. If Tesla’s system doesn’t have safeguards in place to force people to pay attention more, like requiring hands on the wheel more often, then they need to do more. Just saying “It’s in the guidelines” isn’t good enough when you’re testing “beta” software that puts the lives of other drivers/pedestrians at risk.

How often does that happen, though? Very rarely, I’m betting, given how often those circumstances occur. Most people are paying more attention, and see those types of obstacles before hitting them. The Tesla Autopilot system has a fatal design flaw in that it cannot see obstacles like a perpendicular trailer. That flaw needs fixing before the software is pushed to more drivers.

1. Not clearly enough, apparently.

Very good points, particularly the second one. I think we can assume that a car 100% driven by the human behind the wheel would brake for a tractor trailer and not drive under it, like Autopilot did.