What I don't get is this part,
"Story’s death — one of 40,901 US traffic fatalities that year — was the first known pedestrian fatality linked to Tesla’s driving system"
Regardless of whether it's FSD or the Autopilot system that's under investigation, if this is in fact the first recorded death directly linked to the car driving itself, and Tesla has had cars on the road for what, 12 years now, with varying degrees of self-driving and steering assistance, then it would be somewhat prudent to compare against, say, all the accidents made by Tesla drivers not using any version of FSD/Autopilot. If it then shows that running with it on is 10%, 50% or 90% "better" at avoiding accidents, then it is still a net win. But I don't think there are statistics to say that this first death attributable to the self-driving itself makes it worse than people (with all our flaws) driving manually.
There seems to be some kind of weird double standard where we let people get drivers licenses and run around causing X% accidents per year, then automakers add more or less helpful steering aids and get this figure down to X/2 or X/5 or X/10, and we somehow scream for regulations to keep computers from helping us drive?
If there existed a button to reduce cancer rates by half, or to a fifth or a tenth of what they are today, I can't see many people trying to legislate that we could never push said button until it removes ALL cancer from every patient ever. I get that self-driving is far from perfect, but if it helps, then it helps. What more is there to say?
Each and every death is tragic, and I'm not trying to be all utilitarian about it, but it seems (to me, with limited facts) that these tools overall make driving slightly safer than not using them. I can only guess that if even more cars used automatic distance-keeping to the car in front and features like that, the numbers would go down even more, since people tend to be somewhat chaotic while driving, which I gather makes it hard for the computer to account for them.
And lastly, a pet peeve about "recalled 2M cars". Yes, that's the term, but it makes it sound like millions of cars had to go to the dealership or a service center, whereas in reality someone codes up a fix, pushes a button, and a week later all the cars are running that fix. But that doesn't make for dramatic headlines if you have an agenda.
Yes, I have an M3; no, I don't like EM heiling on TV, but that was hard to predict 4 years ago.
> There seems to be some kind of weird double standard where we let people get drivers licenses and run around causing X% accidents per year
I'm all for safe self-driving, but the double standard here is what's allowing Tesla to continue unpunished.
The autopilot/FSD algorithm/AI/implementation is relied upon as if it was a driver... But when a driver kills someone, the driver is (or should be) prosecuted, and shouldn't be back on the road before proving that they can drive a vehicle safely.
For humans you can blame "errors of judgement"... A person might not always reliably behave exactly in the same way when in the same situation (in both good and bad ways, e.g. distraction)
But FSD instead means that a Tesla car using the same FSD version, in the same road and sky conditions, will always behave in the same way...
If a human caused an accident on that Arizona road in the same way, you could take their driving license away and know that other road users no longer risk being maimed by them.
The double standard here instead allows thousands of other Teslas, with the same FSD version, to drive on the same road... knowing full well that it's only a matter of time before the same conditions happen again, and a different Tesla replicates the crash, endangering other road users.
>The autopilot/FSD algorithm/AI/implementation is relied upon as if it was a driver...
Doesn't FSD still come with the disclaimer that the human driver should still be monitoring it?
>For humans you can blame "errors of judgement"... A person might not always reliably behave exactly in the same way when in the same situation (in both good and bad ways, e.g. distraction)
>But FSD instead means that a Tesla car using the same FSD version, in the same road and sky conditions, will always behave in the same way...
Depending on how much you believe in free will, you could argue that humans will also "behave in the same way" given similar circumstances, for instance being sleep deprived because of the DST switchover. Moreover, I don't see any reason why a human getting distracted by a phone or whatever should be meaningfully different from a Tesla on FSD getting confused by a particular way the road/sky conditions lined up, especially if the latter occurs randomly.
The thing is that FSD and Autopilot are human-controlled, just as cruise control is. Under current laws and disclaimers, FSD very clearly puts the responsibility in the hands of the driver, just as one would expect with cruise control. See: https://www.tesla.com/ownersmanual/2020_2024_modely/en_us/GU.... Until it's truly autonomous, the FSD driver is just as liable in that AZ scenario.
There are several safety issues in my opinion.
1. Teslas are foreign to drive. Driving mine off the lot was terrifying. You've got a super-sensitive rocket with unusual controls. The UX is actually pretty nice for most things, but it takes getting used to, and getting used to it with no instruction is dangerous. They just give you the keys and that's it. They hold free classes, but beyond that you're on your own.
2. Related to 1, engaging and disengaging FSD/Autopilot is not obvious. On my '23 with stalks, you press down past Drive to enable FSD or Autopilot (which of the two can only be changed while parked). It's not super obvious which one you have on, which can be dangerous. To disengage, you either grab the wheel hard, press the brake, or tap up on the stalk. Grabbing the wheel makes you swerve in some situations / on older versions; I never do that now, but I had to learn the hard way. Pressing the brake at high speed causes a significant slowdown due to regen braking, so it's not ideal on highways. That leaves tapping the stalk up, which can easily put you in reverse at low speed, something I've done accidentally in highway traffic as a new owner.
3. FSD and Autopilot are so good that it's much easier to trust them than traditional Level 2 systems like my Subaru's. That leads to complacency, and that's pretty dangerous.
That said, while all these deaths are horrible, we don't see the positives in the data. I believe these systems have absolutely saved more lives than the associated fatalities have cost. Humans are reckless on the road. Every person driving while looking at their phone is worse than FSD, which prevents you from looking away. So it's not enough to just look at crashes; you have to consider the accidents prevented too. Remember the lady who killed a kid on a bike in Colorado while looking at her phone? FSD would have been a godsend there, and we would have been none the wiser.
https://www.reddit.com/r/TeslaFSD/comments/1l02x27/fsd_saves...
I feel like a lot of these systems, while reducing some types of incidents, also cause others.
I know the lane-keeping assist function in my 2024 car (that turns itself back on every time I start the car) has a healthy appetite for trying to veer the car into oncoming traffic whenever I’m on a road with no lane markings. This is quite common on suburban roads. So far I’ve been lucky, but if it happens at just the wrong moment I can absolutely see it causing a head-on collision that wouldn’t have happened otherwise.
For me this is the crux of the issue - these systems greatly improve some situations, but it’s not acceptable to do so at the expense of worsening others.
When I had a robot vacuum cleaner, it would sometimes leave a big ball of dust in the middle of a carpet or something. If I vacuum manually, I have a harder time working under the sofas and tables, so one could argue that we both did 95% of a full sweep of the floor, just not leaving the same 5%. I think this is somewhat the same.
The camera/radar/sensor setup is going to "see" and react to different things than a human driver does, and while I would never leave a ball of dust in the middle of the room, in plain sight, the robot cleans the floor under the sofa no less thoroughly than the other parts of the floor. Perhaps this carries over to self-driving: for now it will make certain mistakes that humans would never make, but as others have mentioned here, it also won't tire, won't look at a cell phone, won't get distracted.
So I wonder if we are treating these 5% cases differently just because we associate "cleaning" with "not leaving dust balls clearly visible" and driving with "not crashing into pedestrians in situations x, y, z", while somewhat accepting that people in situations r, s, t do kill others in traffic to a certain extent.
Every mutually exclusive choice is a tradeoff, by definition. Perhaps the better phrase is "driving better than human drivers as a collective", meaning a lower overall collision/injury rate.
It might not be strictly more capable than a single ideal human who is always paying attention, but that is neither here nor there when comparing whether software-assisted driving is better than non-software-assisted driving.
The easiest way to analyze this is to see if auto insurance companies offer a discount for using the software assist, as they are most directly impacted by population wide changes in the metric we are interested in (although I don’t think Tesla shares sufficient data about when and if FSD was used for auto insurers to be able to discern this).
It's not a mutually exclusive choice. That's the point.
Let me counter your collective auto collision/injury rate. Let's suppose the only casualties in a year are school children killed while exiting a school bus. There's no rhyme or reason to why sometimes the driver assist mows down children. BUT, collectively, there are far, far fewer deaths. Say 1,000 school children are killed per year, but those are the only deaths. That's far less than the 40,000 Americans killed per year in auto crashes. So that's good, right? No. Of course not.
We want these systems to not make the same mistakes humans make and to not make mistakes humans wouldn't make. Do that, and your fatalities will decrease in an acceptable manner.
>That's far less than the 40,000 Americans killed per year in auto crashes. So that's good, right? No. Of course not.
>We want these systems to not make the same mistakes humans make and to not make mistakes humans wouldn't make. Do that, and your fatalities will decrease in an acceptable manner.
I don't think anyone is going to claim 1,000 kids getting mowed down by automated cars is "good", but it's far preferable to 40k people getting mowed down the normal way. They are, however, willing to accept the deaths of 1k kids if it means saving the lives of 40k people.
> They are, however, willing to accept the deaths of 1k kids if it means saving the lives of 40k people.
I bet they're not. People instinctively reject utilitarian trade-offs, especially when it involves children. The idea of sacrificing a few for the many might make sense on paper, but in practice, it’s emotionally and politically untenable.
Note that 0-14 year olds make up 18% of the US population, so assuming the 40k deaths in the counterfactual are evenly distributed, that would imply about 7.2k kids dying as well, still worse than the 1k.
In the United States, approximately 1,100 children under the age of 13 die each year in motor vehicle crashes. Not that it matters, because you're still making a utilitarian argument, and the majority of people, including Americans, reject that kind of reasoning.
More importantly, utilitarian arguments rarely persuade lawmakers. Policy decisions are driven by public sentiment, political incentives, and moral framing - not abstract cost-benefit calculations.
There is no utility in spending time discussing the relative cost of 1,000 children's lives versus 40,000 Americans (presuming the 40,000 Americans include fewer than 1,000 children).
Although, note that the US government has long provided better medical care to old people (via Medicare’s higher reimbursement to healthcare providers) than to [poor] children (because Medicaid pays less).
In the 1990s, it was funny seeing my 80+ year old immigrant grandparents get tons of healthcare while my dad would tell me to play carefully because we couldn’t afford the doctor if I broke an arm or leg, or we couldn’t afford a dentist and braces (small business owner so Medicaid disqualified due to assets, yet insufficient cash flow to pay doctors).
> There's no rhyme or reason to why sometimes the driver assist mows down children.
If you are claiming a software engineer is throwing in a random kill/maim function in the driver software, then that would be worse as it could be implemented at scale (rather than individual drivers choosing to kill/maim).
Otherwise, I would classify injury caused by driver assist mechanisms as technical issues due to hardware/software, directly comparable to injury caused by human drivers due to say choosing to look at their phone or drive drunk. Or being 95 and lacking physical/cognitive capacity.
> And lastly, a pet peeve about "recalled 2M cars". Yes, that's the term, but it makes it sound like millions of cars had to go to the dealership or a service center, whereas in reality someone codes up a fix, pushes a button, and a week later all the cars are running that fix. But that doesn't make for dramatic headlines if you have an agenda.
That's not provocative. The vast majority of recalls in the modern age are software updates. Ford just recalled 1,075,299 vehicles last week due to backup camera failures, and the fix is a technician plugging a cable into the vehicle and running a software update. The only difference is that Tesla does it over the air.
It was reported as "Ford Recalls More Than 1 Million Vehicles with Backup Camera Issue".
The problem here is not self-driving; the problem is that for some reason Tesla doesn't want to use lidar/radar for self-driving and instead relies on cameras alone. This seems like complete arrogance from Tesla and nothing more. The article talks about this.
Wasn't it the radar/lidar that caused a lot of those crazy braking events, when you drove toward a bridge or something else that reflected the signal as if there were a stationary vehicle ahead?
"Complete arrogance" would mean they refuse to use lidar/radar out of pure ego, and that there would be no downsides to using it.
Which is not true, as it would increase the cost of producing and repairing the vehicle. One of Tesla's major achievements is hitting the $30k to $50k price point with an 80% solution for self-driving (regardless of their marketing it as 100%).
I suppose it's a "major achievement" when we make something cheap that also makes roads more dangerous for humans?
I guess that's true if we view the world through a financial lens. Otherwise, no, it's not impressive IMO. Especially with their very, very unethical advertising of it, calling it "Full Self-Driving" and "Autopilot". Full self-driving is something like Waymo.
How does it make roads more dangerous for humans? It requires the driver to be paying attention.
If drivers are driving distracted, that’s not a problem with any car, it’s a problem with humans wanting to look at their phone. At least the Tesla is monitoring people’s eyes to try to prevent them from distracted driving.
“Tesla CEO Elon Musk said last week that ‘Full Self Driving’ should be able to run without human supervision by the end of this year.” Sure elon, sure…
In this case the driver admitted to being on their phone while driving on the highway.
This software is fine in the hands of a responsible driver who really gets its limitations. Your average consumer does not. That is the problem.
What does Musk have anything to do with this topic? He says nonsense all the time.
> This software is fine in the hands of a responsible driver who really gets its limitations. Your average consumer does not. That is the problem.
The software is fine in any non distracted driver’s hands. The distracted driver part of the equation has nothing to do with Tesla, that was going to happen anyway.
At least a Tesla watches people’s eyes to prevent distracted driving. How is that worse than other cars that do not watch people’s eyes?
I've discussed this topic quite a lot over the past years, taking both sides at different points. Usually the conclusion is that with a human driver you (usually) have a responsible and liable person. It's not that clear for FSD, and that's what scares people.
It's very clear for FSD - it's a level 2 driver assist system and by definition that means the driver is legally responsible for everything that happens. That's the single biggest reason I would never use FSD. I'm not going to put myself in a position of being held legally liable for the actions of somebody else's software.
> It’s not that clear for FSD, and that’s what scares people.
It is crystal clear, the liability and responsibility is on the human behind the wheel. The 'person driving' is some engineer thousands of miles away with no incentive not to crash the vehicle, and that is what scares me.
The human driver is responsible and liable; that is why the interior camera monitors the person's eyes to remind them to watch the road, and I think it disables the driver-assist software after 3 strikes.
> WARNING
Full Self-Driving (Supervised) is a hands-on feature. Keep your hands on the steering yoke (or steering wheel) at all times, be mindful of road conditions and surrounding traffic, and always be prepared to take immediate action. Failure to follow these instructions could cause damage, serious injury or death. It is your responsibility to familiarize yourself with the limitations of Full Self-Driving (Supervised) and the situations in which it may not work as expected.
> then it would be somewhat prudent to compare against, say, all the accidents made by Tesla drivers not using any version of FSD/Autopilot.
One issue with this is that FSD/Autopilot is designed to disable itself in non-optimal road conditions. I know Musk claims FSD/AP is the only system without "restrictions", but you can check any of the Tesla and self-driving subreddits and find people complaining that they weren't allowed to use the driving systems in rain or extreme sunshine, and even in some nighttime conditions. So you end up comparing the accident rate of FSD/AP under safe conditions against vehicles driven through dangerous conditions that FSD/AP isn't allowed to operate in.
But the accident rate is just one thing to be concerned about. If you picked 100 licensed human drivers and had them drive through the accident scene described in the article, I would happily bet $1000 that every driver, or at least 99 out of 100, would handle it without crashing.
If 100 Tesla vehicles using the same version of FSD and the same camera hardware drove through the scene described in the article, what would the accident rate be? The concern is that the deadly Tesla seems to have been acting as expected. Even worse, this sounds like a baseline scenario for any driver-assisted vehicle to handle: if the cameras are receiving an extreme amount of light/glare, why is FSD/AP not designed to drive cautiously (and then disable itself)?
> There seems to be some kind of weird double standard where we let people get drivers licenses and run around causing X% accidents per year...
That's exactly the problem Tesla (and others) have. Most polls still say large portions of the public don't feel safe with self-driving cars on the road (for example: https://today.yougov.com/technology/articles/51199-americans...). People understand DUI, fatigue, driving too fast, etc.
But a lot of automated driving deaths seem to be avoidable by a human with a bit of common sense.
Yeah wtf? I skimmed the first three entries and so far I can't tell how FSD was involved, or even why Tesla (the company) was at fault. I guess technically those were deaths, and they were caused by a Tesla, so "tesladeaths.com" isn't inaccurate, but you'd be laughed at if you started a site called "toyotadeaths.com" listing all the fatalities Toyota cars were involved in.
It's not even that; it includes entries like one where a Toyota Tacoma pickup was driving in the wrong direction on the highway, ran into a Tesla, and someone died.
It's not even just deaths that happened in a Tesla for whatever reason, because it also includes deaths in other vehicles; it's just any death where a Tesla was in the vicinity or something.
Reminds me of the very popular "study" that did the rounds all over the media, and on here and Reddit, where they counted suicides in a Cybertruck against it to make it look artificially bad compared to the Ford Pinto.
These false claims go unchallenged on Reddit because moderators will permanently ban anyone posting inconvenient facts, and BlueSky is probably worse.
And given all the massive number of misinformed people, these propaganda tactics actually work pretty well.
When a car with a driver assist system is the at fault vehicle in a fatal accident, how is it decided whether or not to blame the driver assist system?
Is it only blamed if it was in control at the instant the accident occurred?
Or does it also get the blame if it was in control shortly before the accident and made poor decisions that set up the accident but then realized it could not handle the situation, disengaged, and told the human to take over?
> It's not acceptable to have FSD crashes same as it's not acceptable for people to crash.
Not sure what you mean by that? We could prevent almost every death, for instance by limiting speeds to <10; that would remove SO many accidents and incidents. It would also make travel take a really long time, as if it were 200 years ago and you rode your horse across the country. But it would prevent almost all serious accidents.
So the statement "it's not acceptable for people to crash" is not true in a general sense. We are already balancing a certain amount of risk against the convenience of travelling at speed to get places fast, to get goods transported in a timely manner, and so on. Some countries, like Germany, have roads with unrestricted speeds and at times get really horrible crashes involving many cars. Others set maximum limits to strike some kind of balance between accident rate/impact and the convenience or speed people would like to have, but from what I can tell, no country has set its limits so low that it actually fulfills what you wrote, that it is not acceptable for people to crash.
It is solvable today; it's just not popular to make that actual decision. But it could be done, and people would stop dying on the roads.
And I was meaning to circle back again to the comparison of incidents and crashes with or without computer assistance but got derailed. =)
If we then figure that we are in a situation where those who decide are balancing around X% fatalities per year with people driving the way they currently do, why would it be controversial to allow FSD or other similar techniques that achieve the same or a lower rate of crashes?
I'm not buying into Tesla's own statistics on crashes per km/mile, because lying with statistics is kind of easy, but let's pretend one or all car manufacturers manage to get cars down to a 0.4% crash rate per year where people are at 0.5%. Why would anyone then want to prevent people from using FSD? Why would we apply a different set of standards to one "type" of driver and not the other? And if we were to apply different standards, which might be reasonable, why would we not choose the "safer" one?
I'm not claiming we are necessarily at a place where computer-assisted driving is demonstrably safer, but by the time we can actually prove it is even just a tenth of a percent better, I think we should have an answer to this question.
I agree with you that comparing the rates of accidents is what's important.
It's especially important to think about what's in the denominator of these rate measurements. I think the right denominator is "per km driven". (Accidents per hour of driving is OK but penalises faster cars -- people usually need to move a fixed distance, from A to B, and a faster car travels that fixed distance in less time.) The numerator (fatal accident count) for Tesla FSD may indeed be just 1 (I didn't see this claimed in the article, but it seems the data is somewhat public, so presumably if there were more fatalities, they would have reported them), but the Tesla FSD denominator might be quite low too, which might make its rate comparable to or worse than the human one.
Of course, we'd need a much greater sample size for any confidence, so it would make more sense to compare non-fatal accidents, which are more common.
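To make the denominator problem concrete, here's a rough back-of-the-envelope sketch in Python. All the mileage figures below are made-up placeholders (as far as I know, the real FSD fleet mileage isn't public); the only point is how wide the uncertainty is when the numerator is a single fatality:

```python
# Placeholder numbers purely for illustration, not real Tesla/NHTSA data.
human_fatalities = 40_901     # US traffic deaths cited in the article
human_km = 5.2e12             # rough guess at annual US vehicle-km travelled (assumption)

fsd_fatalities = 1            # the single pedestrian death discussed here
fsd_km = 2.0e9                # assumed cumulative FSD km (unknown in reality)

def per_billion_km(events: float, km: float) -> float:
    """Convert a raw count into a rate per billion km driven."""
    return events / km * 1e9

# For an observed Poisson count of 1, the exact 95% confidence interval on the
# underlying mean is roughly [0.025, 5.57] events (from the chi-squared relation);
# hardcoded here to keep the sketch dependency-free.
fsd_lo, fsd_hi = 0.025, 5.57

print(f"human baseline : {per_billion_km(human_fatalities, human_km):.2f} deaths per billion km")
print(f"FSD point est. : {per_billion_km(fsd_fatalities, fsd_km):.2f} deaths per billion km")
print(f"FSD 95% range  : {per_billion_km(fsd_lo, fsd_km):.3f} to {per_billion_km(fsd_hi, fsd_km):.2f}")
```

With one observed event, the plausible per-km rate spans a couple of orders of magnitude, and the point estimate moves linearly with whatever denominator you assume, which is exactly why non-fatal accidents (or at least published FSD mileage) would make for a much more meaningful comparison.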
Also note that over-the-air updates mean a Tesla may behave differently every time you get inside one. It could get better, or worse, or just different. If the human behind the wheel has to compensate, they'll have to be extra vigilant despite the uneventful stretches lulling them into a false sense of security.
So do you run the numbers per FSD version? Do you insist they must be certified before they make changes?
People are unpredictable too, yet we have a mental model of their/our behavior and our own personal limits. When I get in the car I have confidence in how I'll perform, and without FSD, will remember to constantly maintain control. And a 'flaw' in one person's wetware is limited to them. No one can push out a fleet update causing humans to start targeting pedestrians instead of avoiding them.
And the bloomberg.com page hangs and never fully loads the video. I tried two different browsers on two different devices, but it doesn't work on either of them.
(I've been able to see only the sun blinding the image, and then another car being circled.)
Using FSD otherwise places your life and the lives of others in danger.
FSD, as currently implemented, is just good enough to lull drivers into complacency --- and make more money for Tesla.
There have been 51 deaths linked to Tesla's driving systems:
https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...
You're correct, they can still ground a fleet, but there doesn't also need to be a prosecution.
> it's not acceptable to do so at the expense of worsening others
Can you elaborate, please? E.g. do you mean that such tradeoffs are not acceptable even if they result in a net benefit?
> Policy decisions are driven by public sentiment, political incentives, and moral framing - not abstract cost-benefit calculations.
Comparing mortality/morbidity data is the opposite of abstract. It's about as well-defined as you can get in a discussion about safety.
> It was reported as "Ford Recalls More Than 1 Million Vehicles with Backup Camera Issue".
No agenda. Factual, plain-language writing.
https://komonews.com/news/local/fatal-tesla-crash-killed-mot...
https://www.tesla.com/ownersmanual/models/en_us/GUID-E5FF5E8...
I think this is the first pedestrian death - there are a couple more in cars with FSD running, and lots more where Autopilot was engaged:
https://www.tesladeaths.com/
Blaming all of those, or even most of those, on Autopilot seems extremely disingenuous; it appears to be a misleading propaganda website.
https://archive.is/GhYIs
Here, the sun just made a Tesla kill someone; it could have been you or your family.
Everyone else predictably slowed down.
Who is responsible and should be sent to jail for murder?
The FSD software developers? The driver, since he was doing fuck all?
> So the statement "it's not acceptable for people to crash" is not true in a general sense.
It is true.
Imagine your son/daughter/wife/... were killed by the Tesla in the video.
Would you still be so nonchalant about this?
How would you feel then about this buggy 'FSD' just plowing into your family?
Has Waymo killed anyone so far?
Note: archive version doesn't have the harrowing live video footage
Is this footage available anywhere else?