Tell HN: Tesla rear-ended me on autopilot, no one will investigate
413 points by raylad on Nov 1, 2021 | 397 comments
Yesterday I was stopped at a red light on a highway and a Tesla Model S rear-ended me at high speed, twice! There were two impacts somehow. It was a 4-car pileup and my car was destroyed.

The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

I talked with the NTSB who said that they won't investigate because no one was killed.

I talked with the NHTSA who said that they can't take a complaint: only the driver can file a complaint and they will only investigate if they "see a trend".

This seems like a serious issue that someone should be looking at. Does anyone have any idea about how to get this actually investigated to see if it was another Autopilot failure?




I have a Model 3. The autopilot was acting weird while driving on the freeway, sometimes disengaging, but mostly not tracking the lane smoothly. For an unrelated reason, I opted to view my camera footage, and was shocked to find that my camera was completely blurry. I’m driving down the freeway at 75 mph, being guided by a camera that couldn’t pass a standard driver’s vision test.

Tesla came out to replace the camera, and it appeared that a film had been deposited on the windshield, obscuring the camera.

Living in Phoenix, my car is parked outside, facing south west. During Covid, I’d go many days at a time without driving, and in the summer, the interior temperatures can easily rise over 150f. Within the camera enclosure, there’s a reflection absorbing material, which likely creates a perfect miniature greenhouse.

I believe the glue in the camera housing melted/evaporated at these elevated temperatures and deposited on the windshield.

Concerned that others would experience this problem, I opened a case with the NHTSA. Crickets.

There could be many people driving around on autopilot, with obstructed vision, due to this same failure mode. It’s something Tesla could easily check, and something NHTSA should be investigating.

For something as safety critical as a forward facing camera, you’d expect both Tesla and NHTSA would be investigating. I have no indication that anything has happened as a result of my filing. Possibly because nobody else has reported the issue - maybe because nobody else is actively viewing their camera footage? There’s no other way for a Tesla owner to be aware of the issue. Frustrating.


Tesla and Carelessness, name a better duo.

From removing radar from their cars [1] to having their $200k super car be criminally underbraked. [2]

Tesla has a low regard for human life, and exemplifies the worst of silicon valley's progress at any cost approach. Elon is undeniably intelligent, but his ethics are dangerously sidelined by his need to create products that might not be ready yet.

[1] https://www.reuters.com/business/autos-transportation/tesla-...

[2] https://www.youtube.com/watch?v=Hn9QWjxFPKM


Radar removal improved accuracy rather than harming it, they don't actually sell a car at the price point you listed, and inference from Tesla's award-winning safety records suggests that Tesla does care about human life.

The factual content in your comment is decidedly lacking.

Careless of you, I think.


"Radar removal improved accuracy rather than harming it..."

Reference, please.

When more sensors don't perform better than fewer sensors, either your hardware or your software is flawed. Removing a sensor from design because your system doesn't work speaks poorly of your product.

Even following that simplistic logic, if autopilot isn't safe, how could they dare not remove it?


> When more sensors don't perform better than fewer sensors, either your hardware or your software is flawed.

Consider that 99 cameras plus one radar does not make a system perform better if the task is identifying whether there is the color blue in an image. If we add ten thousand radars to this situation it doesn't produce an improvement; in fact, it gets worse, because there is more power being used by the sensors and a lot more complexity.

> Removing a sensor from design because your system doesn't work speaks poorly of your product.

This is a convoluted way of saying that making improvements to things is bad. That is nonsensical.

> Even following that simplistic logic, if autopilot isn't safe, how could they dare not remove it?

They did remove it.


Just to show mathematically that they're right, consider two sensors that are both measuring the same thing. One is Norm(mean=0, std=0.00001) and the other is Norm(mean=0, std=10). The true quantity is always zero in the real world. Given this scenario and these two sensors, you might get readings of 0.00001 and 11 back from them. Let's say we average the results of these two sensors. The average is farther from the true value than the first sensor was. The less reliable sensor is adding noise to our measurement. It isn't high utility.
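A quick simulation of that setup (using the two standard deviations given above; everything else is just for illustration, not anyone's actual sensor model):

    # Sketch: a very precise sensor and a very noisy one measuring the same
    # true value (0), and what a naive 50/50 average of their readings does.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    precise = rng.normal(0, 0.00001, n)   # Norm(mean=0, std=0.00001)
    noisy = rng.normal(0, 10, n)          # Norm(mean=0, std=10)
    naive_avg = (precise + noisy) / 2     # naive 50/50 average of the two readings

    print("mean abs error, precise sensor alone:", np.mean(np.abs(precise)))    # ~0.000008
    print("mean abs error, naive 50/50 average: ", np.mean(np.abs(naive_avg)))  # ~4
    # The naive average is dominated by the noisy sensor's error, so it is far
    # worse than simply trusting the precise sensor.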


A crafty person here will draw on ideas like Kalman filters and sensor fusion to notice that as the number of measurements increases you can gain precision even in the presence of an unreliable sensor.

This is very true and it might be where the bad intuition that having multiple unreliable sensors is good is coming from. Multiple noisy readings can be averaged to get a more reliable reading if the distribution of reading errors is normally distributed. You can take ten Norm(0, 10) readings and average them, and the result will be normally distributed with a standard deviation lower than that of the distribution we sampled from. See basic statistics for the exact factor: the standard deviation of the mean of n independent readings falls as 1/sqrt(n).

There are a few reasons why this doesn't apply in this case:

1. The measurements aren't going to be of the same thing. At time step t, the state of the world is potentially different from t+1. Since we only have one radar sensor, averaging the results of two consecutive readings doesn't give us the true position. Kalman filters solve this with a dynamics function, so this problem isn't impossible to solve.

2. So let's say we solve it. Even if we did measure the same thing, we have another problem: we don't have time to take advantage of the central limit theorem. Yes, multiple noisy readings of the same thing average to the true reading, but since we have only a split second to decide whether to slam on the brakes to avoid an accident, we don't have the capacity to await convergence of the noisy sensor and noisy dynamics function to the true measurement. But let's say we did, even though we don't.

3. The radar sensor's noise isn't actually normally distributed. It's more accurate in certain low-visibility conditions because it penetrates occlusions, but less accurate in certain environments, like going under an underpass. Since the noise isn't normally distributed, you don't have the guarantees from statistics which you were hoping to rely on.
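For reference, a small sketch of the two statistical ideas in play here: the 1/sqrt(n) gain from averaging repeated readings, and the inverse-variance weighting that sensor-fusion methods (e.g., Kalman filters) effectively use. Both are shown under the idealized independent-Gaussian assumptions that, per the points above, don't hold on a real highway:

    import numpy as np

    rng = np.random.default_rng(1)
    trials = 100_000

    # 1) Averaging ten Norm(0, 10) readings of the same quantity:
    #    the standard deviation of the mean drops by a factor of sqrt(10).
    readings = rng.normal(0, 10, size=(trials, 10))
    print(readings[:, 0].std())         # ~10
    print(readings.mean(axis=1).std())  # ~10 / sqrt(10), about 3.16

    # 2) Inverse-variance weighting of a precise sensor (std=0.00001) and a
    #    noisy one (std=10): the noisy sensor gets ~zero weight, so it cannot
    #    drag the fused estimate away from the truth.
    s1, s2 = 0.00001, 10.0
    w1, w2 = 1 / s1**2, 1 / s2**2
    fused = (w1 * rng.normal(0, s1, trials) + w2 * rng.normal(0, s2, trials)) / (w1 + w2)
    print(fused.std())                  # ~0.00001, no worse than the precise sensor alone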


From the keynote of CVPR. CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses.

https://cvpr2021.thecvf.com/

https://www.youtube.com/watch?v=g6bOwQdCJrc


Thanks for the intro to that car review channel. Really good stuff.


I don't know much about this but wasn't Tesla getting safety award after safety award in testing and that translating into lower insurance rates?


My understanding is that they are safe in the event of a crash, not that they have the best crash avoidance.


That's patently a hilarious non sequitur of an answer to the OP saying HE got rear-ended by a Tesla. HE is the one with whiplash and a totaled car. Who CARES about how safe the stupid Tesla was that did it....


So, it is safer "in the event of a crash" for the people inside the car, but if crash rates are higher (statistics pending) it might be more dangerous for everybody both outside and inside of the car.


Exactly. Tesla is 100% trying to keep you safe on the inside while disregarding safety on the outside. Just look at the Cybertruck, pedestrian safety is clearly their last thought.


Doing well in crash tests means the car does well in those specific scenarios, but may not mean much about other safety issues.


The first several model years of the Model S were generally considered to be among the safest vehicles in the history of automobiles.

https://www.businessinsider.com/tesla-tesla-model-s-achieves...


I continue to believe that minor differences in car safety ratings are red herrings. (In a previous life I worked as a mechanical engineer in car manufacturing.)

It reminds me of benchmarks on MNIST datasets, to decide which model is better. It is like patting yourself on the back for funding better helmets, when your bike infrastructure is straight out of mad max. The real problem is somewhere else.

Crashes almost never occur like we see in these crash tests. The prime culprit is what you can call 'dangerous actions'. If a Tesla driver is far more likely to be distracted (FSD, Touch controls) or more likely to be out of control at speed (insane acceleration with terrible braking), then a Tesla is an unsafe car. This is irrespective of the marginally lower likelihood of death in ideally controlled circumstances. Taking standard customer behaviors into account is one of the basics of design. If you market your lvl2 system as FSD, then a few idiots will let the car drive while in the back seat. If your car can't brake effectively past 120 mph, then maybe it shouldn't accelerate to that number within the blink of an eye.

I am also ignoring the safety risk posed by these vehicles towards anyone else on the street. It has long been my complaint that the SUV & Pickup craze is actively making the roads unsafe. They have worse blind spots & can kill with the kind of efficiency that sedans can only dream of.

Tesla's safety ratings are misleading too. For one, Tesla's high safety has to do with being one of the first truly electric cars on the market. Electric cars are far less likely to roll, and they are heavy, so the crumple zones can absorb more energy due to having more mass. Electric cars are thus objectively safer in straight-line collisions. [1] But that's like saying that your military tank is incredibly safe while crushing dozens of cars in front of you. Safety should be about how safe the streets are with your car on the road, vs not being on the road. Alas, this is a problem with the car industry at large. So, Tesla isn't really to blame for this one.

[1] https://www.iihs.org/news/detail/with-more-electric-vehicles...


"But, that's like saying that your military tank is incredibly safe, but crushing dozens of cars in front of you. Safety should be about how safe the streets are with your car on the road, vs not being on the road. "

I recently witnessed an almost head-on collision between an H2 Hummer and a Chevy Malibu. The H2 Hummer lost control (I think due to speeding) and swerved into oncoming traffic. The H2 flipped on its side but was barely damaged; the Malibu was completely mangled.


An "autopilot" that requires a driver's attention is not an autopilot.

If people really understood that the car's driving is less reliable than their own, they might not turn it on at all. Imagine teaching a 16-year-old to drive while commuting, every day. NTY


> An "autopilot" that requires a driver's attention is not an autopilot.

Weird; I thought that's exactly what an autopilot was.


And they probably still are, along with the rest of the models.

Assuming you're actually the one driving them that is, and not a broken ass camera.


Where I live Tesla has quite high insurance rates. Mostly because it’s very expensive to repair but also because Tesla drivers tend to speed and cause more accidents than other drivers.


Perhaps the drivers didn't mean to speed...


Tesla doesn't make a $200K car...


They may not be in production yet, but they are asking $200k for the Roadster (and $250k for the founders), which might be the super car I presume parent is talking about. The Plaid, while not $200k, still clocks in at $150k.


>They may not be in production yet, but they are asking $200k for the Roadster (and $250k for the founders), which might be the super car I presume parent is talking about

Well, that's the problem. Claiming that the car that isn't even in production yet has an "underbraking" issue is misleading at best.

Regardless, I don't think that person was talking about the upcoming Roadster, because they added a link as a citation in "having their $200k super car be criminally underbraked. [2]", and [2] is a link to a video review of Model S Plaid by a big car youtube channel. I have no idea why they linked that specific video, because it was a glowing review of the car. The reviewers were giddy and threw tons of praise at it in pretty much every aspect.


I meant the plaid. Sorry.

You are right, Tesla makes good cars. The rub is pretty much about carelessness on the part of Tesla in safety design.

I specifically linked the video because they are even handed in their reviews and not one of those who put down cars just because they don't like the CEO.


Thanks for explaining your point, I really appreciate it.

Mostly because the original reply seemed like it was just trying to insert a video that doesn't fit the criticism drawn in the comment, but is baiting for people who will just read the comment without watching the video. But I get the point you were trying to make now.

While I disagree, I cannot in good conscience not upvote a point made well.


That's odd since Tesla seems proud of their AI expertise. Isn't something like "image is blurred" a fairly pedestrian (heh) exercise? Especially given that you would have related data, like what speed the car is going, if you're parked, etc.
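It is fairly pedestrian in the naive case. A common quick check is the variance of the Laplacian of a frame; the sketch below is a generic illustration only (the threshold is a made-up placeholder that would need per-camera calibration, and this says nothing about what Tesla's stack actually does):

    # Rough "is this frame blurry?" check using variance of the Laplacian,
    # a standard focus/blur heuristic.
    import cv2

    def frame_looks_blurry(frame_bgr, threshold=100.0):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # A sharp image has lots of high-frequency edges, so the Laplacian
        # response has high variance; a hazy or blurred image does not.
        return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

    # Hypothetical usage: sample dashcam frames in daylight and warn if the
    # score stays low over many consecutive frames.
    frame = cv2.imread("dashcam_frame.jpg")  # placeholder input
    if frame is not None and frame_looks_blurry(frame):
        print("Forward camera may be obscured, consider a service check")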


They do report if the camera can't get a good image. Glare from the sun is the most common reason. It is certainly possible that there is a window between when vision is degraded enough to be dangerous and when it alerts the driver, but a single occurrence isn't proof of a widescale problem. That is the type of thing that should be investigated in more depth, but it is hard to get anyone to care when it is only a theory that isn't even yet connected to a single accident.


This does sound like something that should be the subject of a recall investigation.


Genuine question: what temperature are cars expected to be stored at safely? I'm not talking about engine temperature, just literally the room temperature of the garage. I'm not wanting to excuse or accuse anyone here, but are cars supposed to be manufactured such that they can sit in 150F garages for days at a time without any maintenance?


Because cars sit out in the sun in hot countries all day (unavoidably), the automotive supply chain has developed quite rigorous standards for pretty much everything in a car.

Typically every electronic component must be rated from -40 to +105 C. I'd imagine there's similar requirements for all the interior parts of a car too. Cars have a lot of glass, so get very hot in direct sunlight and quickly reach ambient on cold nights.

The reason cars take so long to "catch up" to consumer electronics (eg it's only in the last year or two that large LCD displays have started appearing in cars, or that LED headlights have become commonplace) is that the car design cycle is very long - typically about 7 years. Tesla decided to do things differently, which meant they could iterate much quicker. It also meant they used components not qualified for automotive use. Mostly they got away with it, but there have been some issues (for example their large displays started failing in hot countries).


There are a lot of cars that get parked in the hot Arizona sun everyday, where temperatures outdoors can exceed 100 F and interior temperatures can exceed 150 F[1].

[1] https://www.desertsun.com/story/life/home-garden/james-corne...


I would expect SOME vehicles to do so just fine.

Really hot weather in the Continental USA can be pretty toasty, and if a garage was designed more like a crummy solar furnace, I would expect to see it hit those temperatures.

My car has survived at least 150F interior temperatures parked outside during a sunny Pacific Northwest heatwave. I'm estimating that temp because I had some plastic in the car that was rated to lose stiffness at 150F, and it got a bit floppy after the car was parked outside in sunny 105F+ weather for weeks on end, even though the plastic was in the shade.

It can be hard on lead acid batteries and shorten their life, some lithium ion battery chemistries top out at 130F ambient operating temperature, and ABS plastic that dashboards can be made of is rated to about 176F. If it's 150F ambient, the dash could be even hotter.


When the Tesla technician came out to replace the camera anti reflection shield, the plastic just crumbled in his hands.

Btw, they never actually replaced the camera, just the anti reflection shield, and cleaned the windshield and the camera lens.


> my car is parked outside ... the interior temperatures can easily rise over 150f

I live in Arizona as well and this is why you'll see everyone in parking lots parking under trees in the summer, with empty spaces in between.

Teslas have overheat protection (turns on the AC when interior temp > 105F) for a reason (one of them being that it's cheaper to manufacture cars that don't need to withstand higher temperatures).


Your garage is not 150F, and certainly not for days at a time.


It’s not parked in a garage. Outside, facing southwest, in the Phoenix summer.

Tesla’s overheat protection only works for 12 hours at a time. If I were commuting, then maybe this wouldn’t have happened, but during Covid, the car was just sitting there for days without being driven. Tesla doesn’t offer a way to extend overheat protection past 12 hours, even when plugged in.


Indeed mine certainly is not, but apparently the parent commenter's routinely reaches 150F.


I assumed they were referring to the interior of the vehicle rather than the interior of a building.


I suppose teslas could come with warnings, "this vehicle is not intended for use in Arizona".


I wonder if the US government is afraid to kill their golden goose like what happened with Boeing.

EV is the future and China is moving fast.


EV != Tesla, and Tesla != Autopilot. Also maybe hydrogen is the future?


> Also maybe hydrogen is the future

It definitely isn't if we are going to move away from fossil fuels.


You can get H+ by cracking water. You “just” need a source of cheap energy.

Storage is tough — those protons are really tiny and it’s hard to keep them from escaping


> You can get H+ by cracking water. You “just” need a source of cheap energy.

Well aware. But we don't have a source of cheap energy. And guess what, whatever source of energy we have, it is very wasteful to produce hydrogen.

First, you need electrolysis. Around 80% efficient. However, in order to get any usable amount of energy per volume out of it, you need to either compress hydrogen (using more power) or liquefy it (way more power, boiloff problems).

Now you need to transport it, physically. This uses more power, usually with some big trucks driving around, just as we have today with gasoline and diesel.

This will get stored into some gas station equivalent. All the while losing mass, as those protons are hard to store, indeed.

Now you can drive your vehicle and refill. Some inefficiencies here too but we can ignore them unless you need to travel long distances to refuel.

This hydrogen will generally be used in a fuel cell. The output of the fuel cell is... electrical energy (+water)

You could skip all that, use the electricity from the power grid to directly charge batteries. No hydrogen production plants needed, no container challenges, no diffusing through containers and embrittlement to worry about. No trucks full of hydrogen going around. Electricity is found almost everywhere, even in places where there's little gas infrastructure.
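To put rough numbers on that chain (illustrative assumptions only; the ~80% electrolysis figure is the one mentioned above, the rest are ballpark guesses, not measured values):

    # Ballpark round-trip efficiency of the hydrogen chain described above,
    # compared to charging a battery directly from the grid.
    electrolysis = 0.80      # electricity -> H2 (figure from the comment)
    compression  = 0.90      # assumed: compressing/liquefying for usable density
    distribution = 0.95      # assumed: trucking, storage, boil-off/leakage
    fuel_cell    = 0.60      # assumed: H2 -> electricity in the vehicle

    hydrogen_path = electrolysis * compression * distribution * fuel_cell
    battery_path  = 0.90 * 0.95   # assumed: rough charger + battery round-trip losses

    print(f"grid -> wheels via hydrogen: ~{hydrogen_path:.0%}")   # roughly 40%
    print(f"grid -> wheels via battery:  ~{battery_path:.0%}")    # roughly 85-90%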

Mind you, hydrogen has another disadvantage, other than thermodynamics: it ensures control is kept with the existing energy companies. We would still need to rely on their infrastructure to refill our cars (much like gas stations). They would like to keep it that way. Ultimately it doesn't really matter what fuel we are using, as long as we keep driving to gas stations.


If hydrogen fuel cells lasted longer than batteries (in lifespan, not per charge) or required fewer rare metals to create, I can see a giant win. EVs are pretty wasteful if you consider the entire car lifespan and not just what the end consumer fills up with, to the point that I feel the ecologically superior action is just to drive an old ICE.


> required fewer rare metals to create

Fuel cells require expensive and exotic catalysts, like platinum. So they don't even win there.

Hydrogen might have aviation uses but I'm just as skeptical we'll get over the safety issues.

> EVs are pretty wasteful if you consider the entire car lifespan

Only until the required recycling infrastructure is in place. We can recycle around 96% of the material used in batteries. There's much less equipment needed for an EV, compared to an ICE. There's also a push to use less of scarce minerals like cobalt. Any advances in battery tech (like solid state batteries) can be easily incorporated, as electric motors don't care where the electricity comes from.

EVs don't have spark plugs (consumable), oil pumps and filters, alternators (the drive motors work as alternators for regen braking), air filters, crankshafts, valves, pistons, exhaust (and catalytic converters with rare metals), starter motor... the list goes on. Even the brake pads last longer. It's at a minimum a battery bank, an electric motor, and some electronics to manage power. An order of magnitude fewer moving parts (that have to be manufactured, wear out, and have to be replaced).

> the ecologically superior action is just to drive an old ICE.

An old but relatively recent, fuel-injected ICE with the latest emissions standards? Probably much better than getting a new one built, yes.

A really old one with a busted engine (or one before we had catalytic converters in the exhaust)? Then I do not know. Might be better to scrap and get a somewhat less old vehicle.

Honestly, the ecological win here would be to convert the existing fleet into EVs, rather than creating entirely new cars. There are some hobbyists doing that successfully. It's not even difficult - the difficult part is adapting the parts to fit the specific vehicle. The economics may be difficult to justify, however.

At some point, when we have a sizable EV fleet, getting recycled batteries into them would be the way to go. They will probably last longer than our current ICE vehicles.


You speak as if lead-acid batteries, the most recycle-friendly battery, had anything in common with EV batteries, besides being batteries.

If a lead acid battery is bread in a bag, a Lithium battery is a tacobell combo: much more energy, much more waste material.

Reduce comes before Reuse and Recycle


I agree, H+ is a dumb approach.


And H2 isn't much better.


Hydrogen as fuel is unrelated to fossil fuels. Look it up. Edit: 95% of hydrogen is currently produced from hydrocarbons. BUT electrolysis:

https://en.m.wikipedia.org/wiki/Electrolysis_of_water

Hydrogen as a fuel has been tested in buses. Works as a proof of concept.

This was about hyundai marketing hydrogen as a fuel:

https://www.economist.com/science-and-technology/2020/07/04/...

In the end, it will depend on the energy conversion efficiency that can be achieved in the future.


Can't renewables produce hydrogen?


They can. But overwhelmingly, they do not. It's much cheaper to extract hydrogen from fossil fuels and that's not likely to change any time soon.


All major car companies manufacture EVs. Off the top of my head, I've seen BMW, Prius (Toyota), Honda electric cars outside recently.


Hydrogen production should be coupled to renewable energy production in the way they will pair bitcoin mining to any energy plant


Meanwhile, TSLA is up 52% over the past 30 days, and 200% over the last year.


Does that magically make it safe? I don’t get the connection.


My charitable read of the GP comment is that they are pointing out how ludicrous the stock increase is, but that's my reading. Side note, how much attention does your username usually garner you?


None typically. I don’t think anyone has made the connection, publicly anyways, since I’ve started using it.


Chenquieh


I think the point is that Tesla investors and customers don’t care about safety, either for the passengers or as an externality.


I think he's referring to the naive idea that stock prices in general are somehow influenced by what a company is doing and are not pure speculation.


That's horrifying. Every phone I've owned in the last 5 years or so notifies me when my camera is smudgy... it's not exactly cutting edge tech. Does this mean Tesla isn't doing this?


Tesla does do this. But there are likely some cases that are blurred enough to be a problem but not quite enough to trigger the detection.

FWIW, I've seen a Youtube video of a non-Tesla with radar cruise have a similar issue. The radar was caked with snow and it was trying to run over the car in front of it. Does it not have detection for that scenario? Of course it does, but this one somehow managed to not trigger the radar blocked warning.

These systems are not expected to be perfect at the moment.


There are five forward facing cameras in a Model 3. So you're not being told the full picture here.


https://imgur.com/a/KDkjbuF https://imgur.com/a/vEbpzPo

See for yourself.

All the forward facing cameras reside behind the rear view mirror. While the technician had removed the rear view mirror, I took a short video of the windshield blurring. Notice that through the haze it’s almost impossible to read the house numbers, about 9” tall, approximately 15’ away.

And no, the car wasn’t shipped like this. I have plenty of footage from a year earlier, where the front facing cameras are crystal clear. Something happened in the last year.


As someone keenly interested in such topics, would you permit using these pictures, and if so, how would you want to be credited? (the background is https://TorchDrift.org/ , and the image reminded me of the right hand side of our poster here https://assets.pytorch.org/pted2021/posters/I3.png)


There are three cameras in the front center of the windscreen and two more forward facing cameras on the sides of the car. Again, for a total of five forward facing cameras.

I see your picture of the on-screen rendering of an image from just one of these five cameras.

And your second picture (video) which appears to show things that have been disassembled or partly uncovered… hard to draw any conclusions from that especially about the side forward facing cameras which are situated not where your picture shows, but just behind the front doors. Also can’t draw many conclusions about the windscreen cameras, not having been there myself to see what was done in the disassembly. And it’s a picture of the interior of the car, back side of the camera assembly? Not sure what it is supposed to be adding here.


I’m going to use the terminology from this image: https://i.shgcdn.com/739286e6-3f49-463f-ae61-cbc6c99d5270/-/...

And then I’m going to reference some part numbers from the Tesla parts manual you can find online.

The forward looking side cameras are embedded in the b-pillars, and are unable to image directly in front of the car. I checked my car, and cannot see the lens of these cameras when a few degrees from straight forward. The image above shows a field of view which approximately matches what I could assess.

The only cameras which can see directly in front are the main forward camera, the narrow forward camera, and the wide forward camera. All three cameras are behind the rear view mirror, in the 1143746-00-C module.

When viewing the in-car recording, it’s not clear as to which of the three is being displayed. It’s possible the other two weren’t impaired, but I find that unlikely.

These three cameras are enclosed in a small area behind the rear view mirror, bounded by the windshield in front, and the triple cam glare shield around them. 1086333-00-E <-- Triple Cam glare shield.

The entire inside of the windshield enclosed by the glare shield was coated in a haze. When I touched the surface, it was sticky. The Tesla technician (servicing Phoenix) indicated that he had never seen this failure. I sincerely believe he hadn’t.

The video I had posted previously was showing the windshield, after the rear view mirror was removed, along with the camera module by the Tesla technician. As I panned left to right, you can see that the house numbers are obscured through the haze. If I recall correctly, the haze was roughly uniform inside that glare shield. As all three forward facing cameras are looking through that same haze, I expect all three were significantly impacted. I did not have an optical acuity test sheet to validate my subjective assessment that it was hazy, and have to go by the inability to see ~9” tall house numbers from 15’ away. I haven’t done the minute of angle calculation, but I expect that level of acuity would be sufficient for an individual to be considered legally blind.
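For what it's worth, a back-of-the-envelope version of that minute-of-angle calculation, using the ~9 inch / ~15 foot figures above and the standard convention that a Snellen 20/200 letter subtends 50 arcminutes:

    # Rough angular-size check for ~9" house numbers at ~15 feet.
    import math

    height_in = 9.0
    distance_in = 15.0 * 12          # 15 feet in inches

    angle_arcmin = math.degrees(math.atan(height_in / distance_in)) * 60
    print(f"house numbers subtend ~{angle_arcmin:.0f} arcminutes")   # ~172 arcmin

    # A Snellen 20/200 letter (the usual U.S. legal-blindness threshold)
    # subtends 50 arcminutes, so failing to read a ~172-arcminute target
    # implies acuity far worse than 20/200.
    print(f"equivalent Snellen denominator: ~20/{200 * angle_arcmin / 50:.0f}")  # ~20/690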

What’s even more frustrating is that when the technician was disassembling the rear view mirror assembly, he discovered that there were 2 missing screws. When he went to remove that triple cam glare shield, it fractured into many pieces, almost turning into dust.

How did it get that hot in there? We keep a reflective windshield screen in the windshield. The rear view mirror and cameras would have been sandwiched in the small space between the screen and the windshield. Coupled with the highly optically absorbent glare shield, and exactly aligned solar exposure, and high external temperatures… there must be some strange combination that caused this failure.

My biggest gripe is that Tesla may not be forced to recall, and when this happens again, I’m going to have to pay for the repair out of pocket. I’ve already purchased a replacement glare shield off eBay in case I have to replace it in the future. The glare shield appears to have some optical absorbent flocking, likely attached with glue. It’s this glue that I expect couldn’t withstand the heat.

The technician cleaned the windshield, the camera lenses, and reassembled (the new glare shield had to be ordered and was replaced on a subsequent service call). The problem hasn’t recurred in the following 4 months, even though the parking orientation, windshield screen use, etc hasn’t changed. Luckily Phoenix does cool down.

Btw, I think the three forward camera design behind the windshield is quite innovative. The cameras are looking out through an area that is swept by the windshield wipers, helping to maintain visibility. The cameras are separated by sufficient distance that a single bug shouldn’t obscure all three. The only common environmental failure mode appears to be this weird issue on the inside of the windshield. Hard to fault the engineers designing this. I’d be extremely proud of this design, if I designed it. I happen to have hit a corner case.

My wish is that Tesla enable some sort of software to detect this failure, and offer free repair for this failure for the life of the vehicle - regardless of how often this recurs.

In typical Tesla fashion, this could be a simple over-the-air software update, but will likely be followed by months of arguing with the service department when the fault is detected and the repair actually needs to happen.

Without this software detection, I’m afraid that this failure will go undetected by many other Tesla owners, incorrectly putting their trust into auto pilot using cameras with this failure mode.


Awesome comment, thank you!


> was shocked to find that my camera was completely blurry

They maybe don't need to figure out how it got blurry.

But their testing regime is sad if it continues to work when it can't see!


It sounds like the fix here is a built-in-test for the camera capable of detecting this sort of obscuration / degraded acuity.


There are several cameras for redundancy. The car also doesn't need crystal clear vision to drive. What may seem like a blurry image to the human eye can, with simple filtering, become much more usable. It also doesn't need to see details, but rather needs to detect if there are any objects in the way or where the lane markings are.
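As a generic illustration of that kind of "simple filtering" (an unsharp mask; this is not a claim about what Tesla's vision stack actually does):

    # Unsharp mask: recover some usable edge detail from a mildly blurred frame.
    import cv2

    def unsharp_mask(frame_bgr, sigma=3.0, amount=1.5):
        blurred = cv2.GaussianBlur(frame_bgr, (0, 0), sigma)
        # Boost the difference between the frame and its blurred copy,
        # which amplifies edges and fine detail.
        return cv2.addWeighted(frame_bgr, 1 + amount, blurred, -amount, 0)

    frame = cv2.imread("hazy_frame.jpg")  # placeholder input
    if frame is not None:
        cv2.imwrite("hazy_frame_sharpened.jpg", unsharp_mask(frame))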



I've also filed an NHTSA report, some 12 months old now, that has been greeted with ... crickets. Something very basic, that any car maker should be able to do, is write a manual that teaches people how to use the (poorly written) software.

The manual explains that you can do a 3-step invocation of a voice command: 1) push 'voice' button; 2) utter command; 3) push 'voice' button to complete. It hasn't worked that way since 2019. See, e.g. Software version: 2021.32 [1], p. 161

Good luck teaching people how to use 'Full Self Driving' with these kinds of error checks.

[1] https://www.tesla.com/sites/default/files/model_s_owners_man...


Tesla is making a virtue out of cutting costs, using cheaper parts, and running a public beta.

I guess one real test will be what the resale value of Teslas ends up being. Because they have been growing so fast, most Teslas are very new. What will they be like at 10? 15? Will they stay in shape, or will they drag the company valuation down? Also right now they have the cool car premium, but that's bound to be shared once others catch up.

Ford T was a similar bet I guess, attention to detail must have been lower with mass production, and it clearly paid off. Hard to know this time.


There are five forward facing cameras on a Model 3. Was it all five of them that were blurry? How could you tell? Or just the one selected as the frontmost one on the dash cam video?


Just the one that’s visible on the dashcam video, but in the enclosure where all the forward facing cameras reside (I’m not aware of any other place), the windshield was covered in a haze. Do you know of any way to view the other camera feeds? Are there any other forward facing cameras which aren’t behind the rear view mirror?

https://imgur.com/a/KDkjbuF https://imgur.com/a/vEbpzPo


My $300 OPPO phone alerts me when there's dust in front of the lens... Maybe Tesla could benefit from some similar technology.


FWIW Oppo sells more phones per month than Tesla's lifetime total of cars


It's also a problem that the camera could have degraded vision that's inadequate for self driving yet report no problem at all and continue driving as if everything's fine.

I'm pretty sure your situation isn't the only way the vision can be obstructed; liquids or even bird poop can also degrade vision.


This is what I would expect as well. Some kind of visual self-test, and if/when it fails, the driver gets an obvious warning and autopilot does not engage - you get some kind of clearly visible warning so that you can contact Tesla to investigate / repair the obstruction.


This does happen, but maybe not in all cases. I've had it happen where lane change is no longer supported (and you get a notification) because the sun is blinding one of the pillar cameras.


I drive a Tesla. If my car rear ends another car, that's my fault. Doesn't matter if I had Autopilot on or not.

Imagine if you were rear-ended by a Toyota and the driver said "it was on cruise control, shrug." Would you talk to NTSB or NHTSA about that? Probably not.

The only scenario I can imagine that'd warrant investigation is if the driver were completely unable to override Autopilot. Similar investigations were started by the NTSB when Toyota had issues with their cruise control systems several years ago.


> I drive a Tesla. If my car rear ends another car, that's my fault. Doesn't matter if I had Autopilot on or not.

You can have a similar argument for: if I'm drunk while driving that's my choice; if I don't cause any accidents, it should be none of anybody's business that I'm drunk.

You see, it doesn't work like that.

If Autopilot poses a significant risk to other road users, then an investigation is warranted.


> If Autopilot poses a significant risk to other road users, then an investigation is warranted.

Except there's no evidence that it does. I drive a Tesla. I've seen AP brake hard in circumstances where a distracted driver would have failed to notice. The system works. It does not work perfectly. But all it has to do to make that "if" clause false is work better than drivers do, and that's a very low bar.

Tesla themselves publish data on this, and cars get in fewer accidents with AP engaged. (So then the unskewers jump in and argue about how that's not apples to apples, and we all end up in circles of nuance and no one changes their opinion. But it doesn't matter what opinions are because facts are all that matter here, and what facts exist say AP is safer than driving.)


At the risk of beating a dead horse, the issue is that AP engaged/disengaged isn't a random test, it's a user choice. Presumably people leave AP on when the outside conditions are inherently safer per mile (including, for example, on a highway). If Tesla randomly made AP unavailable some days to some drivers and ran a controlled test, that would be compelling.


You are absolutely beating a dead horse, sorry. This argument gets made repeatedly every time that report gets mentioned and... frankly I'm sick of going through it every time. Nitpicking against a result you don't like without presenting contrary data is just useless pontification. It proves nothing, and has no value but supporting your own priors vs. inconvenient evidence. Find some data.

I mean... there are two million of these cars on the street now, and all but a tiny handful of early Roadsters and S's have full featured autopilot. If there was a significant safety signal in all that data somewhere it would be very clear against the noise background now, and it isn't.


My point wasn't that I had contradicting data. My point was that the data Tesla publishes isn't valuable. It's a different statement. The premise could be right or wrong.


> My point was that the data Tesla publishes isn't valuable.

That's not true at all, and you know it. Consider the extremes: if autopilot got in zero accidents over these however many million miles of travel, you'd have to admit it was safe to use, right? If it got in 10x as many accidents as the median vehicle, we'd all agree it was a deathtrap. There's value in those numbers. Well, the truth is somewhere in between, it gets in 4x fewer accidents (I think) than the median car. That still sounds good to me.

You're just nitpicking. You're taking a genuine truth (that the variables aren't independent because "miles travelled with AP" aren't randomly sampled) and trying to use it to throw out the whole data set. Probably because you don't like what it says.

But you can't do that. The antidote to incomplete or confounded analysis is better analysis. And no one has that. And I argue that if there was a signal there that says what you want ("Tesla AP is unsafe") that it would be visible given the number of cars on the road. And it's not.

Stop nitpicking and get data, basically. The time for nitpicking is when you don't have data.


Yes, a very strong signal (every time AP is turned on the Tesla explodes, or every time AP is turned on the Tesla drives perfectly from parking spot at origin to parking spot at destination, and there has not been a crash in 10 billion miles) can overcome the noise in Tesla's published data. However, any signal there is not that strong and is so weak it is drowned in noise.

Meanwhile, I do have this one datum about how OTA updates caused a couple of accidents (see OP's story.) You cannot just dismiss it because it doesn't agree with a larger trend, if that larger trend is undetectable in the data.


> However, any signal there is not that strong and is so weak it is drowned in noise.

It's a 9x reduction relative to median vehicle and 4x vs. non-AP Teslas! I mean, go read it: https://www.tesla.com/VehicleSafetyReport

You're just making stuff up. That's not a weak signal at all, and you know it. The argument you should be making is that it's a confounded signal. But that requires a level of sophistication that you aren't supplying (nor is anyone else). Which is exactly why I mentioned way upthread that these digressions were so tiresome.

There's literally no data in the world that will satisfy you, just admit it. Yours is an argument from priors only, to the extent that one anecdote ("A Tesla hit me!") is enough to outweigh a report like that.


> Tesla themselves publish data on this, and cars get in fewer accidents with AP engaged

This is false, or at very best misleading, considering that Tesla's Autopilot is responsible for more than 10 fatalities. Or in other words, Tesla's AP has killed more people than the entire rest of the domestic auto industry's smart cruise control features. Combined. (And Tesla's "safety" data conspicuously fails to mention any of these fatalities.)

And that doesn't even include all the times that AP ran into stationary emergency vehicles, or the many slow-moving accidents in parking lots, or the many near-misses that were avoided because the other driver was able to avoid the suicidal Tesla in time.


Come on, the "entire rest of the domestic auto industry's smart cruise control features. Combined." constitutes what, 20k vehicles or something like that? It's just in the last few months that we've seen lanekeeping features start to land in volume models, and even then they tend to be extremely heavily geofenced. It's very hard to kill someone when no one has your product and you won't let them use it.

Who's being misleading here?


The entire rest of the domestic automobile industry sold millions of cars with smart cruise control features similar to Autopilot over the past few years, so the misleading one would be...you?

Unless you're now claiming that FSD is just a version of smart cruise control?


"Similar to autopilot" meaning what? Are you trying to say radar speed modulating is the same kind of system? You've completely lost me.

But if you mean vanilla cruise control, people die using that every day.


No, using vanilla cruise control, almost nobody dies every day. It is in fact rare enough that when it happens it gets reported in the news.

The fact that Tesla fans can't grasp the concept that Tesla is (a) not perfect and (b) deliberately endangering people's lives for profit is unfortunately not shocking any more, but it doesn't mean the rest of us have to put up with that schadenfreude.


>> If Autopilot poses a significant risk to other road users, then an investigation is warranted.

> Except there's no evidence that it does.

It sounds like you two are talking past one another.

I don't believe op is presenting evidence that it does. Instead, they are saying that there is at least one antidote (TFA) and that someone (presumably a government agency) should investigate it.


First, a very valid reply to there being "at least one anecdote (<-- sp)" is that there is a bunch of actual data on exactly the subject at hand that shows the opposite of the stated hypothesis. You don't throw out or ignore existing data just because one guy on HN got rear-ended!

Second: I think you're being too charitable. I think it's abundantly clear that grandparent believes that there is a "significant risk to other road users", and that this single data point confirms it. And clearly it doesn't.


An NTSB investigation is a separate thing from an accident investigation for fault or insurance purposes. The NTSB does not investigate a drunk driver, that does not mean the drunk driver would be free of fault or charges.


A more apt comparison would be if OP was hit by a drunk driver, and there were no laws against drunk driving. It would be appropriate to ask the government to investigate whether or not drunk driving is safe, because it could happen to many more people.


Except that both Tesla and the law make it clear that in a Level 2 system the driver is responsible at all times, regardless of whether Autopilot is engaged or not.


To stretch the metaphor further, I don't think it's fair to say Tesla makes it clear. Or rather, it's not the whole story.

People would be less upset with the drunk driver and more with the beer manufacturer if they had been imbibing "alcohol-free beer" which actually was not alcohol-free.


The car tells you "please keep your hands on the wheel". It then proceeds to nag you if you don't put enough torque on the wheel, and then brings the car to a halt in the middle of the road.

Please don't propagate myths about "Tesla doesn't make it clear that you have to be in control of the vehicle."

My AP has many times disengaged because, despite paying attention to the driving, I wasn't putting enough torque on the wheel to convince AP that I actually had my hands on the wheel.


Have you used it? Because it's as clear as any system that I've ever used, with a clear statement on each activation.

Most other systems are far less clear, IMO.


And a drunk driver is still responsible at all times. That doesn't change the fact that we outlawed drunk driving because it made things worse.


> If Autopilot poses a significant risk to other road users, then an investigation is warranted.

You could fault their methods but NTSB is doing just that: waiting for a trend to emerge that would warrant such an investigation.

I'm not suggesting that Tesla's design is not the cause but if the driver were lying in order to shift blame, then NTSB would end up wasting their resources investigating it.


> If Autopilot poses a significant risk to other road users

It does not, if you use it as Tesla tells you to use it - paying attention to the road at all times and with your hands on the wheel. If you don't do that, Autopilot is probably still better than distracted driving - over the years I sometimes would catch myself drifting off in my thoughts as I was driving to the point where I wasn't even sure how I drove my car for the past few seconds. I bet this happens to others, too, and often. Between that and even semi-autonomous Autopilot, Autopilot seems like a safer choice, and it'll only get better from here on out, though I don't think we're anywhere near real "FSD" where you'd be able to read a newspaper as the car drives itself.


No, you're right, none of what you're talking about works like that.

There was a traffic accident. There are a variety of "automated" driving features in modern cars: Auto-park, cruise control, auto-pilot. Any one of these features could, under "less than ideal" circumstances, cause an accident that doesn't warrant contacting national authorities. Well before any regulatory body is going to do anything, private insurance would. They're going to investigate in an effort to determine fault and facts. Was autopilot to blame, or did the user spill a cup of coffee in the passenger's seat that caused them to take their eyes off the road, etc.

The idea that a national regulatory body is going to start looking into a single car crash seems great until you start thinking about the expense that would create for the taxpayers. Bureaucracy just isn't built to be agile, for better or worse.

Similarly, if you drive drunk and don't cause an accident, nobody will know. This doesn't make it legal, and nobody is trying to argue a point even tangentially similar to that. This is a straw man, and a rather obvious one. There is no national group that would ever investigate a drunk driving crash (assuming that was the only relevant information in the case). That's a local law enforcement issue.

TL;dr- The feds don't care about a sample size of 1 fender bender with no fatalities.


The class of problem called driving while incapacitated was studied and special regulatory attention was applied to it.

An automated driving system that can offer to function while incapacitated, without the operator even being informed that there is any problem, is a different problem from an operator who neglected to press the brake pedal. The brake pedal itself and the rest of the braking system is absolutely required to meet a whole bunch of fitness standards.


I think that outside of a courtroom, it's possible to recognize "layers" of responsibility, for lack of a better term. I can think of a couple of scenarios:

1. In an industrial plant, equipment needs to be fitted with guards and other protective measures, even if it's arguable that sticking your hand into an exposed rotating mechanism, or grabbing a live electrical circuit, is your fault. If someone's hurt, and there's an investigation, the factory will be cited for not having the proper guards. From what I've read, this is one of the things that we now take for granted, but was hard won through labor activism resulting in a dramatic reduction in workplace deaths.

2. The European approach, where they assign "fault" for a crash, but at the same time investigate how the infrastructure can be redesigned to make the crash less likely.

Yes I would hope that a regulatory body investigates the OP's crash. At the same time, I've read some comments in HN that Tesla drivers hover their foot over the accelerator in case of "ghost" braking. Naturally this is anecdotal. Still, the driver could have mistakenly stepped on the wrong pedal.


VERY generally, in the US, the jury assigns a percentage of fault to each party. E.g., Defendant 1 was 40% at fault, Defendant 2 51%, Defendant 3 5%, and the Plaintiff 4%.

Plaintiff can collect 40% of his damages from D1, etc. Some states will allow Plaintiff to collect 96% from D2, and give D2 the right to collect the other defendants' proportionate share from them.


The big black box makes a mistake and you voluntarily accept the blame? I really don't get this train of thought. I know that this is legally correct, but at some point in time along the technological advancement of self driving technology we'll have to stop blaming ourselves and legally shift the blame to whoever created the self driving tech instead.

Edit: Updated wording to make it less confusing. From system to whoever created the self driving tech.


Because as the driver you're responsible for the tool you're using to drive your self to your destination. That means you're responsible for whether or not you use Autopilot, do 200kmph, take your hands off of the steering wheel, turn your lights on at night, and everything else associated with the operation of that vehicle.

> ... at some point in time along the technological advancement of self driving technology we'll have to stop blaming ourselves.

No we don't. We built it, so we're responsible for it and the consequences associated with its operation, regardless of how well it's engineered or how smart we think we are.


Yes, and no. With other dangerous tools, society decided manufacturers have responsibilities, too.

If you buy a chainsaw, you can trust it has certain safety features. If you buy a children’s toy, you can trust it doesn’t use lead paint, etc.

Those aren’t features you, as a consumer, have to explicitly test for before using them.

Similarly, I think society will demand that cars behave as cars, and not as projectiles. If a manufacturer claims a car has auto-pilot, but that auto-pilot has a tendency to rear end other cars, the manufacturer should carry at least some blame.

I think that should be true even if they explicitly mention that problem in the instruction manual. Certainly in the EU, whatever you buy, you can assume it is “fit for its purpose”.


The problem here is Tesla does not make the claim that the car is autonomous, in fact they clearly state otherwise, nor is it legal for any driver to operate the car as if it were autonomous.

To continue your analogy: chainsaw manufacturers include clear instructions, and directions to use safety PPE. If an operator of the chainsaw fails to follow the instructions, or to wear the PPE, and chops their leg off, the manufacturer is not liable. Hell, even if they did follow the instructions and chop their leg off, it would be unlikely the manufacturer would be liable.


Let's take the analogy one step further. Imagine we have an Autopilot chainsaw; you just have to touch it every once in a while to tell it to keep sawing. Then suddenly it starts sawing in a completely unexpected way and causes an accident. Are you at fault because you have to, in theory, keep in control at all times? Even though you in practice relinquish control and humans don't have the ability to context switch without a delay? The issue would not have occurred if the chainsaw didn't start behaving in an unexpected way, but it also would not have occurred if you didn't use the Autopilot function.


>> Are you at fault because you have to, in theory, keep in control at all times?

Yes...

We already have examples of this with machinery that is automated. The humans in charge of these automated machines are responsible for monitoring them, and have things like emergency shutdowns and various other safety protocols to shut them down when (not if, when) they do unexpected things.

Machines are machines, they fuck up. The manufacturers are not held liable because they know these types of abnormal conditions exist; that is why they have the safety protocols and human minders.

An example of this is the auto manufacturer, I believe in TN or TX, that did not train the operators properly on the safe way to operate and lock out a machine; this failure to train led to human injury. The manufacturer of the robot was not liable, the company that failed to train the human staff was.


Stick to my example please, your example is not the same type of situation.


There is no need to stick to your strawmen when we have real-world examples. If you demand others only address your strawmen, then you have recognized the weakness of your argument.


I merely changed a car to a chainsaw; it's an analogous situation. There is no strawman: a strawman is a fallacy where the new argument is easier to attack. This is the same argument.

Your example, on the other hand, is not relevant to the situation at hand; it misconstrues the problem. The machine required trained operators to operate it; driving a car requires a trained operator, but that is where the similarities stop. You don't need any additional training to operate Autopilot. If someone without a license sits behind the steering wheel, they're at fault because they're not supposed to drive the car in the first place; this would be an analogous situation. Yours, as pointed out, is not.


He answered your question with a yes. If you buy and operate some chainsaw with an assistance technology and it automatically cuts down your neighbor's tree, then you are responsible. Your neighbor doesn't sue the chainsaw manufacturer, they sue you. You can sue the manufacturer all you want too, but the court won't let you tell your neighbor it's the chainsaw manufacturer's fault and you are innocent.


I know what he replied with, a yes and an irrelevant example so that leaves just the yes. What I'm after is a stronger, relevant argument.

Let us not forget my original argument, where I said that this is the current legal situation; I just think that this will change in the future if autonomous actions (driving or other behaviors) keep improving. I believe we have reached a point where it's not necessarily clear anymore who is in the wrong. Not in a legal sense, since the law tends to lag behind the facts. More in a moral sense.


https://en.wikipedia.org/wiki/Product_liability:

“Product liability is the area of law in which manufacturers, distributors, suppliers, retailers, and others who make products available to the public are held responsible for the injuries those products cause.”

A specific example is https://en.wikipedia.org/wiki/Escola_v._Coca-Cola_Bottling_C.... : somebody uses a soda bottle as intended, gets hurt, and wins a case against the manufacturer of the bottle.

If you don’t follow instructions, that still can happen, but it will be harder to argue that a problem originated with the manufacturer.


> The problem here is Tesla does not make the claim that the car is autonomous

Tesla's own marketing disagrees with this. Here's what Tesla says they mean when they say "Autopilot" and "Full Self-Driving":

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


That is a marketing video from 2016, which many, including the NHTSA, criticized Tesla over, and Tesla has since updated documentation, driver prompts, and other marketing to reflect that the system is not "autonomous". In fact, if you look at the product pages they clearly say "full self-driving capabilities in the future". This is also reflected by prompts to the driver, warnings to the driver, and the requirement that the driver provide feedback to the car that they are in control...


>That is a marketing video from 2016, which many, including the NHTSA, criticized Tesla over, and Tesla has since updated documentation

Judging from people sleeping while driving Teslas, I can speculate that some drivers interpret the contradiction between the marketing and the documentation as "the laws forced Tesla to add these warnings to the documentation; Elon said Autopilot is OK".


But the Autopilot makes you nudge the wheel every 30 seconds. If people are really sleeping in their Teslas, they have added a device to defeat this attention tracking mechanism.


This makes my point: when those sleeping drivers had to choose between

1. the marketing message ("Autopilot works, and stupid politics forces us to release it with tons of boilerplate warnings about having to pay attention")

2. the actual disclaimers the car will show you

these drivers chose to believe 1. It is obvious that the marketing is undermining the reality, so IMO Elon and his marketing should be criminally responsible for undermining their own official message.


Yeah, and honestly the chain of responsibility is driver first, then Tesla. What's important is that the victim gets their payout. It's the driver's responsibility to provide that payout / get dinged on insurance. If the driver then wants to go after Tesla for damages, that's totally fair. But the victim's compensation should come first from the driver. The driver can't say "but muh autopilot" and shed all blame.


The OP was very clearly not looking at a payout, but at preventing future incidents.


Incorrect. You are responsible for using the tool responsibly and according to any applicable directions and best practices.


> We built it, so we're responsible for it and the consequences associated with its operation, regardless of how well it's engineered or how smart we think we are.

That's my point. The company that built the self-driving technology should be held responsible. If you're not driving, you're not responsible; that's how it works when somebody else is driving your car, so why would it be any different if that somebody else is code? It seems like a good way to align the incentives to create a safe system. You could claim that at this point in time you still have to pay attention and take control when necessary; the issue I have with that argument is that we already have research showing us that the context switch results in a delay that's too long to be really useful in practice.


I didn't build it, Tesla did, and I personally think the company that signed off on "touch screen controlled window wipers" is the one with the most liability in autopilot failures.



The question is: is the driver responsible? Should it be the owner? Should it be the company that made the failing product?

If your toaster blows up and kills someone, there is a solid case against the maker of the toaster.


I agree, but if you bought a tool called "autopilot" and it piloted you into another car, there is something wrong, no? Maybe not the NHTSA, but it seems like someone should be tallying that.


Has Tesla ever said anything like: the “auto” in this case means “automobile” and not “automatic”?

But yeah, they shouldn’t call it that.


Yes, and that point is most likely SAE level 5, as shown here (https://www.nhtsa.gov/technology-innovation/automated-vehicl...). We are some way away from that - just how far depends on whose hype you listen to.

I drive a Tesla Model 3, but I have sufficient grey matter to understand that AutoPilot, in spite of its name, is SAE level 2 (see above reference). If the car is involved in an impact either something hit me, or I hit something - no blame attaches to the computer, because I’m still in charge. Given the current state of the art, I’m perfectly happy with that and bloody livid at anybody who is dumb enough to buy/lease and drive one of these amazing machines without being constantly and totally aware of their limitations.


> Yes, and that point is most likely SAE level 5

Hard disagree.

This is the same as identity theft, where it becomes your responsibility instead of a failure on the part of the bank to protect your account, and the burden gets shifted entirely onto the consumer.

Relevant Mitchell and Webb: https://www.youtube.com/watch?v=CS9ptA3Ya9E

If Tesla sells a SAE Level 2 lane following / accident avoidance feature which should be able to stop and avoid rear end collisions, yet it causes rear end collisions, they must be liable for that failure. They can't just write words and shift all liability onto the consumer for the software failing to work properly. Nice try though.

And I don't care if the consumer technically agreed to that. If someone smacks into me with Autopilot enabled, I never consented to that technology and didn't give up my rights to sue Tesla.


Someone hitting your car while using the car's features is exactly why you can sue both the driver and Tesla. You're very likely to win against the driver; the other isn't really tested in court. Your mileage may vary.

Also, how is the bank supposed to protect your account if you give your login information away to a criminal? There are a non-negligible number of cases where consumers compromise their accounts themselves. The responsibility lies on both. The burden often isn't placed on the consumer either; merchants are often the ones that pay the highest price.


I saw an old dude apparently asleep behind the wheel of a Tesla while on the freeway. I immediately feared for my family members' lives, and I had to speed in order to gain distance from this fellow.


When you turn on Autopilot it tells you to keep your hands on the wheel, and if it senses you are distracted it will turn itself off if you don't wiggle the wheel a little to show you are paying attention. So at least as far as Autopilot is concerned, I would say the driver is at fault if they run into another car.


I think there is a good argument that now is a good time for a liability shift for certain types of accidents with TACC. TACC should never be rear ending people.


It’s a driver aid, not a driver replacement. I’m responsible for where my airplane autopilot takes the airplane (in large part because I’ll be the first one to arrive at the accident scene).

Why shouldn’t the driver be responsible for accidents in the car they’re driving at the current level of driver assist technology?


Yeah, I know, but I'm starting to think that we've reached a point that this shouldn't be an excuse for straight up rear end collisions. They should be forced to make the black box available and take liability if the driver didn't take over.

There's no reason that TACC shouldn't be able to handle the basic job of not running into the car in front of it in the vast majority of circumstances.


a bad actor can easily create an accident of this description, no?


How? Hard braking shouldn't do it. Cutting in front potentially could, but that'd be a different problem.


This is so tremendously NOT a case of personal responsibility. A large portion of our economy attempts to rely on the belief of: "you said it would do x, it must do x".

You bought a car, it promised functionality, it did not deliver and endangered another human/their property.

This is the fault of the manufacturer.

Here are some examples of non-autonomous driving cases where the manufacturer made a false promise: Volvo lawsuit: https://www.motorbiscuit.com/volvo-owners-seeking-class-acti...

Toyota recall after lawsuit https://www.industryweek.com/the-economy/article/21959153/to...

Chevy lawsuit: https://topclassactions.com/lawsuit-settlements/consumer-pro...

It is my sincere hope that we can enjoy Elon's persona without just letting Tesla off the hook of being a car company. Or really, a company.


Fortunately, in the US, responsibility usually isn't an all or nothing proposition. Tesla can be responsible for a FSD bug, the driver responsible for not paying attention, the driver in the other lane for swerving, the tire shop for not putting the tires on right, etc.


It's so odd that in stories like this people don't even ask this simple question: did you try to brake?

It's usually conveniently omitted from the story.


Based on videos and observed driving behavior, it seems the marketing and documentation for Autopilot is ineffective at communicating to drivers that Autopilot is basically advanced cruise control. If this is correct, it represents a systemic issue that should be investigated by the NTSB or NHTSA.


https://tesla.com/autopilot

In the video there it says:

"THE PERSON IN THE SEAT IS ONLY THERE FOR LEGAL REASONS.

HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF."

Total lie, unless they are psychopaths and would be willing to run it with no one in the seat if not for legal reasons. In other words, unless they were willing to murder if murder were legal. The video is from 2018 or maybe even earlier, when we know they were nowhere near ready for driverless.


Did you consider that that text is about the video and not about the feature in their cars as sold to the public?


Do you think they could have filmed that video with no driver in the seat without high risk of killing someone? Only "legal reasons" kept them from doing it; they have no ethics?

On top of that, it greatly misled people about how soon it would be and how far along they were. Buyers whom they were charging like $9,000 at the time.


The moment you use Autopilot it's evident that it's basically a fancy cruise control. You're assuming some people would never intervene with Autopilot in any scenario. To think people with that little common sense exist is odd. It shows a massive level of disregard for those people's ability to perform basic functions.


I literally saw a driver operating erratically in stop and go traffic. As I attempted to pass I noticed they were reading a book and driving “hands free.” They were driving a Tesla, and presumably using Autopilot. I’m sure some people operate like they’re a test driver, but there’s also folks who don’t understand the basic functions of being a tester.


Not using brakes when car is approaching a red traffic light is failing at a basic function. Being distracted by text messages, phone, etc. is not.


That may be the legal reality we live in - it's true that if a driver in a Toyota rear-ended someone else, there would be (and should be) no NTSB/NHTSA safety investigation. The driver was simply at fault, they should be appropriately penalized and will hopefully not do it again.

The difference is that safety improvements to Autopilot can scale. Patch the software or upgrade the hardware to fix the root cause of this collision and future collisions can be avoided!


> I drive a Tesla. If my car rear ends another car, that's my fault. Doesn't matter if I had Autopilot on or not.

It is supposed to be super-self-driving... that has to count for something.


No it's not. I didn't spend $10k for full self driving. That's not the same feature as Autopilot.


> The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

So she continued to use it? That's insane.

It's worrying that there are tons of videos on YouTube reviewing Tesla's Autopilot and FSD where the cars do something incredibly scary and dangerous, and the drivers just laugh it off, go "well, it's a beta!" and continue using it instead of turning off the feature that almost got them into a car accident. I don't want to share the road with drivers who think dangerous driving is a funny game.


“It’s a beta” and the unsaid “and I don’t care about other peoples’ lives anyways”.

Still pretty frustrated nobody has regulated “self driving” cars off the market yet. If you’re testing it with a driver and a copilot that’s fine, but putting it in the hands of customers and using “it’s a beta” to hide behind should not be legal.


> “It’s a beta” and the unsaid “and I don’t care about other peoples’ lives anyways”.

Yes, it's a bit distressing having myself and the people I care about be treated as NPCs to some Tesla owners as they beta test their new toys on public roadways.


I don't disagree, but I can't help but wonder if people feel the same about Ford's pre-collision assist with automatic emergency braking or, back in the day, anti-lock brakes.

Sooner or later, these technologies are either public-ready or they aren't. If Tesla autopilot isn't public-ready, I agree it shouldn't be on the road, but I suddenly realize I have no idea by what criteria the system was cleared for public-access roads.


I would absolutely not expect a feature attached to my car to be beta grade, no. And if early ABS systems had issues, I'd expect them to be recalled and fixed at the manufacturer's cost.


The question is really: does your expectation match reality, or were you just uninformed of existing issues with ABS systems? Or more generally: have things always been the way we currently see them and were we just less informed in the past, or has there actually been a change?


I'm not of the right age to know, honestly. That being said, I have never seen a car maker ship a feature to me that is self-described as a beta. Obviously mistakes happen and plenty of features fall short, but seeing a company knowingly ship an incomplete safety system for real-world testing by its users, with a mere EULA agreement, seems like a step beyond the ABS hypothetical you raise.


I don't understand why there's so much significance attached to the label. I've seen a non-beta try to run off the road in the same scenario that the beta handles just fine. It's not an isolated example.

To me, that'd just mean one company, the one with the beta label, has higher standards.


I have long said that society has become so risk averse that if someone invented a technology like the car today, it would never be allowed in society.

I said this pre-COVID, and COVID has now cemented this idea: we will not accept any level of risk at all... None.

If it is not perfectly safe, everything should be banned, locked down, or otherwise prohibited by law.


The topic in question, Teslas (and other autonomous vehicles) on the road, would seem to suggest the opposite, no? While errors do seem to be relatively rare, they do exist, there is a large portion of people still pushing for their immediate use, and I am not aware of the (US) government stepping in and banning their usage.


Then don't drive with other drivers - NON-autopilot drivers who are eating, texting (constantly!), looking up directions, and much more (falling asleep).

https://www.youtube.com/watch?v=cs0iwz3NEC0&t=2s

The fact that you promote this absolute recklessness and demand users turn off safety features that may, net-net, save lives is depressing.

Finally, folks get very confused between things like Autopilot and other features (autosteer beta with FSD visualization, etc.). I think some Teslas now even show a message warning that cruise control will not brake in some cases? Anyone have that?

I've been following the unintended acceleration claims around Teslas as well. Most seem pretty bogus.

For what its worth here is data we currently have from Tesla:

"In the 2nd quarter, we recorded one crash for every 4.41 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.2 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles."

So you want to go to roughly a 9x increase in crashes (1 per 484,000 miles vs. 1 per 4.41 million). I realize this is not one-to-one, but even outside of this we are seeing maybe a 2x improvement (if you normalize for road conditions, etc.).
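
For what it's worth, here is the same comparison laid out numerically. This is just a quick sketch over the figures quoted above; as the replies point out, the populations and driving conditions differ, so the ratios are suggestive at best.

    # Crash rates from the quoted Tesla Q2 report and the NHTSA figure.
    miles_per_crash = {
        "Autopilot engaged":   4_410_000,
        "Tesla, no Autopilot": 1_200_000,
        "US fleet (NHTSA)":      484_000,
    }

    baseline = miles_per_crash["US fleet (NHTSA)"]
    for name, miles in miles_per_crash.items():
        print(f"{name:20s}: 1 crash per {miles:>9,} miles "
              f"({miles / baseline:.1f}x the fleet baseline)")
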

The Nissan Maxima had something like 68 deaths per million registered vehicle years. It will be interesting to see how this comes out for Teslas.


That data is not comparable: the Autopilot in question there can only be activated in simple situations, like a straight highway when you aren't changing lanes, while the human data covers all scenarios.

Tesla is not being transparent enough with their data for us to know anything.


No one has shown any credible data that the fatality rate driving a Tesla is higher than anything. They go out of their way to ignore cars that are clearly getting into lots of accidents.

My expectation would be that Subaru WRX / Infiniti Q50 / Elantra GT type drivers are crashing more than folks using Autopilot? Maybe ban them first?

Anyway, we should get some NHTSA data on this in a few years, though there is some speculation that, given their approach toward Tesla, they may delay it or stop updating vehicle-model statistics if the data is favorable toward Tesla.


Here’s some credible data that your assumptions are probably wrong:

https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...


That page is not credible at all.

IIHS reports are publicly available [1] and the numbers there don't match what's in the article at all. Large/very large luxury vehicles have an overall fatality rate of 16-21, ranging from 0 to 60 depending on model. The overall average for all vehicles is 36. The Audi A6 for example, is reported as having 0 fatalities in the article, while in the report the actual number is 16.

The other source used, tesladeaths.com, lists a ton of accidents where AP was not deemed responsible. It actually states the real number if you pay attention - 10 confirmed deaths in the lifetime of Tesla as of 2021 - yet somehow the article claims 11 deaths up to 2016.

[1] https://www.iihs.org/api/datastoredocument/status-report/pdf...


The comparison there is as bad as Elon's, just in the other direction. They dig into every missing bit of data and mistake for Tesla while using a different data set than for the other brands. I wonder if you could redo it now with official data.


That's one subset.

What people can credibly claim, because Tesla's stats are entirely loaded and biased, is that any claim AP is safer than humans is just that, a claim. Because AP will (usually) disengage in situations where it won't do well, something humans can't.

The closest you could come (maybe, who knows, because Tesla hoards the raw data like Gollum, despite their "commitment to corporate transparency") is:

"In optimal and safe driving conditions, AP may be safer than human drivers, and in less optimal conditions, uh, we don't know, because it turns off. And we won't count those miles against AP."

Tesla should actually, if they have the courage of their convictions, show how many miles are driven where AP/FSD would refuse to engage.


> Tesla's stats are entirely loaded and biased

Do you know that, or do you suspect that?

> how many miles are driven where AP/FSD would refuse to engage

I've used FSD for the vast majority of the nearly 40k miles on my 2017 Tesla model S. Highways, boulevards, side streets, even dirt roads on occasion.

It's a powerful but subtle tool. Used correctly, I have absolutely no doubt that has made my driving experience safer and less stressful.

It definitely requires an engaged, alert driver.

Where I suspect you and I agree is the impact of Musk's language and claims around the technology.

If he would fully shut the hell up about it, I think it's quite likely that there would be way less ill will toward the product.


It's known. The logical fallacy around "miles driven on AP" versus "total possible miles" is real. Tesla's statisticians cannot possibly be unaware of this if they have anything beyond a middle school education, yet they repeatedly continue to tout this stat without so much as a footnote.

I realize that may still, to some, be "suspect", not "known", so yes, if you're asking, "Is there public or leaked internal documentation saying 'we realize these stats are misleading and don't care'", then no, there's not.


The burden of evidence is on Tesla to show their car is significantly safer in an apples-to-apples comparison. They haven't done that so far.

And this is actually quite intriguing: even if Tesla was 10x safer, it would be extremely difficult for them to prove it. Because they'd have to do a multi-year feature freeze on Autopilot etc. while collecting data.

See, in total Tesla has sold around 2 million cars. Comparable cars have a fatal accident rate of around 10 per 1 million vehicle years. So the target for Tesla with the current number of cars is to have fewer than 2 fatalities per year, on average. To show with statistical significance that you have achieved that, you would need data for around 5 years without making any changes. Maybe only 3 or 4 years given that the number of Teslas keeps increasing, but still.

Really, it's only when you're doing 10 million cars per year, like Volkswagen or Toyota, that you can make statistically meaningful statements about safety while also updating your car's software frequently.
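
To make the sample-size intuition concrete, here is a rough sketch of the argument (my own illustration, assuming SciPy and an exact Poisson interval; the 2 million cars and 10-per-million-vehicle-years figures are the ones used above). With a hypothetical "10x safer" rate of about 2 fatal accidents per year, the confidence interval on the observed rate stays wide for the first few years of frozen software:

    # Exact (Garwood) Poisson confidence interval for an observed count k.
    from scipy.stats import chi2

    def poisson_ci(k, alpha=0.05):
        lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
        hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
        return lo, hi

    baseline = 20.0   # expected fatal accidents/year for comparable cars: 10 per 1M vehicle-years x 2M cars
    true_rate = 2.0   # hypothetical "10x safer" rate

    for years in (1, 3, 5):
        k = round(true_rate * years)  # roughly what you would expect to observe
        lo, hi = poisson_ci(k)
        print(f"{years} yr: ~{k} events -> 95% CI [{lo/years:.1f}, {hi/years:.1f}] per year "
              f"(baseline {baseline:.0f}/yr)")

After one year you can already show the rate is far below the baseline, but pinning it down tightly enough to claim a specific multiple like 10x takes several years of accumulated events.
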


> NON-autopilot drivers eating, texting (constantly!), seeming to look up directions and much more (falling asleep).

FSD is something people are _encouraged_ to use, not _discouraged_.

This is an entirely disingenuous take.

And Tesla's stats have been, _repeatedly_, shown to be entirely disingenuous.

Human drivers don't have a "disengage" mode, other than pulling off the road, when conditions are "suboptimal". AP disengages. FSD disengages. And suboptimal conditions are, surprise, surprise, the conditions in which more incidents occur. So the whole "AP is safer because when it's safe to use AP it's safer" circular reasoning, and every stat Tesla publishes while withholding as much information from the public as it can (while touting their "corporate transparency is a core value" spiel), should safely be thrown in the trash.


> I think some teslas now even show a message warning that cruise control will not brake in some cases?

In my experience, that warning about not braking occurs when you are in TACC mode but are pressing on the accelerator to go temporarily faster than the set speed.


Unfortunately some people may have gotten into the habit of hovering their foot over the accelerator to be ready to override phantom braking, so maybe some even rest their foot on the pedal without realizing.

Fortunately, removal of radar will reduce the frequency of phantom braking, so hopefully this habit will fade.


Well she says she did. Further confusing the matter is there is the FSD Beta, which very few people have access to for now. Then there is adaptive cruise (works really well!), then there is adaptive cruise with autosteer (not so good!). So who even knows which she was talking about.

I fault Tesla for having a mess of badly named options, and for putting beta stuff out on the roads for sure.


Yeah, the FSD beta issues last weekend (or the weekend before?) involved false forward collision warnings. I imagine she did not have the FSD beta, though.


> So she continued to use it? That's insane.

Or she just lied to deflect blame for her clearly at-fault collision. I genuinely don't understand why this story is getting so much play here. It's like people read "Tesla" and completely lose their mind.

Literally thousands of these accidents occur every day. This is as routine a collision as one can imagine, and there is basically zero evidence of a system failure here at all. It's One Unreliable Anecdote. And it's generated 300+ comments and counting? Come on HN, do better.


> "So she continued to use it? That's insane."

Well, it's proving to be a great excuse when she runs into people with her car. Sounds like a win for her.


This is how I feel too. I want FSD to succeed and I don't want it to destroy itself during the beta. But the videos I've seen are how you put it: people say "whoa, that was close", then re-engage Autopilot and have 3-4 more incidents in the same session.


And then laugh nervously, and demonstrate a complete misunderstanding of ML. "We need to keep having near misses to train the car of what a near miss might be!".

And God forbid if they post a dashcam video to YouTube. Then they'll get hit with comments like this with a sincere straight face:

> "FSD didn't try to kill you. More like you're trying to get FSD killed by publishing this video."


I think the main problem here is that the driver put too much trust in Autopilot; what she needs to do is treat it as just another driver assistance feature and stay in control of her vehicle at all times.


Was it TACC or Autopilot or FSD or driver trying to blame someone else?


> I talked with the NTSB who said that they won't investigate because no one was killed.

So they must wait until lots of people get killed by autopilot / FSD in order for your incident (and many others) to be investigated?

The scary thing is that it is beta quality software on safety critical systems and not only the drivers know it, they use it as an excuse to cover them not paying attention on the road. Was driver monitoring even switched on at the time?

As I have said before, this software without proper driver monitoring and autopilot / FSD turned on puts the driver and many others on the road at risk.

At this point, this is more like Teslas on LSD rather than FSD judging from the bugs from many YouTube videos and social media posts of all of this.

Oh dear.


> So they must wait until lots of people get killed by autopilot / FSD in order for your incident (and many others) to be investigated?

The entire NTSB (which covers trains, planes and automobiles) has 400 employees. They only get pulled into fatal accidents because they only have that much manpower.


That low level of funding for such a critical function seems a clear threat to public safety.


The NHTSA has primary responsibility for highway safety at around $1B/yr in spend and 650-ish people.


Still need transportation…


She's supposed to be able to drive herself, without autopilot.


Possible scenario A: Teslas have a fundamental flaw that causes a single driver to have several accidents. Multiplied by the number of Tesla drivers who use Autopilot, there must be hundreds of thousands of unreported and uninvestigated auto-pilot accidents. Most certainly a conspiratorial cover-up by Tesla, the NTSB, and the NHTSA.

Possible scenario B: The driver of the Tesla has misrepresented or misunderstood the facts of the situation.

Possible scenario C: The driver of the Tesla doesn't know that she should take her foot off the accelerator pedal when using Autopilot.

I suppose any of these (or several other) scenarios are at least possible. I'd probably apply Hanlon's razor here before anything else.


> Most certainly a conspiratorial cover-up by Tesla, the NTSB, and the NHTSA.

You're just adding the word "conspiracy" to imply blaming the car is insane.

You skipped "Possible scenario D: It's the car's fault, and the NTSB and NHTSA didn't respond due to a lack of interest, an incompetent person handling the complaint by the OP, the OP not being able to communicate what happened well enough, or a procedure at the agencies which requires the driver to make a complaint rather than an observer or a victim."


I would assign a large probability that Autopilot wasn't even engaged and it's the driver's fault.

Accidents with Teslas (many of those that make the press) are often caused by drivers not being accustomed to the car's acceleration.

There is a Chill mode that decreases max acceleration, but I wouldn't expect a driver who manages to get into 2 accidents with the same car to have read the manual.


Yeah, the first thing that came to mind was that one valet who claimed that it was not his fault, but the autopilot that made him accelerate heavily and crash inside a parking garage.[0]

People called out bs on it instantly, as it would make no sense for autopilot to suddenly accelerate inside a garage until the crash, but the driver defended himself to death.

Only for the formal investigation to happen and confirm that the driver did it himself by fully pressing on the acceleration pedal instead of the braking pedal without any autopilot engagement, and was just trying to shift blame. And no, it wasn't just Tesla's legal team claiming that. The guy went to a third-party service that could decode the blackbox data from the car (again, that was his own technician, not requested by Tesla or anything), and the evidence clearly showed that the driver pressed the accelerator full-on. The raw log that is relevant to the situation is included in the link as well, in case someone wants to see it with their eyes.

Almost every single article I've seen that features bizarre accidents like this ended up being bs, and I am seeing the "accidentally pushed the accelerator instead of the braking pedal" vibes here as well. Will be eagerly awaiting for the results of the investigation before I make my own judgement though. Note: I am not claiming that autopilot is infallible, and there were indeed some legitimate incidents a while ago. But this specific one in the OP? Yeah, I call bs.

It also helps that NHTSA has already investigated such claims before[1], so they looked into hundreds of cases that were claiming Autopilot suddenly accelerating and causing crashes, and they discovered that not a single one happened due to autopilot, but due to the driver misapplying acceleration and braking pedal. Direct quote from their investigation summary (emphasis mine):

>After reviewing the available data, ODI has not identified evidence that would support opening a defect investigation into SUA in the subject vehicles. In every instance in which event data was available for review by ODI, the evidence shows that SUA crashes in the complaints cited by the petitioner have been caused by pedal misapplication. There is no evidence of any fault in the accelerator pedal assemblies, motor control systems, or brake systems that has contributed to any of the cited incidents. There is no evidence of a design factor contributing to increased likelihood of pedal misapplication.

0. https://insideevs.com/news/496305/valet-crashes-tesla-data-r...

1. https://static.nhtsa.gov/odi/inv/2020/INCLA-DP20001-6158.PDF


One concern I would like investigated is that it appears Tesla's Autopilot can't detect (or can no longer detect?) stationary objects.

Until fairly recently, I believe that the RADAR, which used to be in all their cars, would have detected the stationary object (me) and applied braking force to at least reduce the impact.

Now, though, Tesla has stopped installing RADARs in their cars and apparently also disabled existing RADAR in cars that have it (because they all are using the same camera-only software).

If this has also removed the ability to detect stationary objects and the corresponding emergency braking functionality, this is a really serious problem and needs to be addressed, perhaps with a recall to require installation of RADAR in all the new cars and re-enabling of it.


> Until fairly recently, I believe that the RADAR, which used to be in all their cars, would have detected the stationary object (me) and applied braking force to at least reduce the impact.

It is the other way around. They've stopped using RADAR because RADAR has too many false reflections at speed and it was causing learning issues in the ML dataset. The vision based position detection is better (could argue about weather and visibility here, but it is irrelevant to this convo). Reliance on RADAR was causing it to hit stopped objects, because it was relying on radar data that was already too jittery and had to be ignored.

Regardless of the RADAR issue, the driver is responsible for the vehicle at all times. If someone hits you using cruise control they remain at fault.


Nearly every major manufacturer has a separate emergency stop system that continually looks for an obstacle in the vehicle's path and applies braking force, overriding the driver and any cruise control or self-driving that is in use.

These often use RADAR, have worked for years, and are routinely tested by international agencies such as the IIHS.

See, for example: https://www.youtube.com/watch?v=TJgUiZgX5rE

Teslas at least used to do this too:

https://www.iihs.org/news/detail/performance-of-pedestrian-c...

https://www.youtube.com/watch?v=aJJfn2tO5fo

If Teslas no longer have this functionality, that is a serious problem that needs to be corrected. That could mean reprogramming existing cameras or adding a sensor if the system really did rely on the now-removed RADAR.


That's not true and you are confusing different features. AEB and Autopilot/Assisted Cruise Control do not behave the same. Here's some articles I found with a google of "radar cruise control stationary object."

https://arstechnica.com/cars/2018/06/why-emergency-braking-s...

> But while there's obviously room for improvement, the reality is that the behavior of Tesla's driver assistance technology here isn't that different from that of competing systems from other carmakers. As surprising as it might seem, most of the driver-assistance systems on the roads today are simply not designed to prevent a crash in this kind of situation.

> Sam Abuelsamid, an industry analyst at Navigant and former automotive engineer, tells Ars that it's "pretty much universal" that "vehicles are programmed to ignore stationary objects at higher speeds."

Here's one with BMW owners asking about it: https://g20.bimmerpost.com/forums/showthread.php?t=1738458

Here's a review of Cadillac's SuperCruise: https://www.thedrive.com/tech/14830/i-wrote-this-in-a-superc...

> Related: All these types of semi-autonomous systems have a hard time identifying stationary cars—vehicles in stopped traffic on the highway, for example—as "targets" to avoid. Coming over a crest at 70 mph, I pick up the stopped traffic ahead of me far before the system does. I even give the system an extra second or two to identify that we're hurtling towards stopped traffic, but have to initiate braking. However, it's very likely that even at that speed, the car could have made the stop before an impact.

And the feature you're thinking of, AEB and its limitations are here: https://www.caranddriver.com/features/a24511826/safety-featu...

> If traveling above 20 mph, the Volvo will not decelerate, according to its maker.

> No car in our test could avoid a collision beyond 30 mph, and as we neared that upper limit, the Tesla and the Subaru provided no warning or braking.


The Toyota Safety Sense system will slow the car down by 25MPH regardless of its initial speed (except possibly for "very high speeds").

It makes no sense to give up entirely when an application of brakes would reduce damage.

In the case of this accident, the vehicle not only didn't apply brakes but continued to accelerate after it hit me (the double hit).

This may or may not have been operator error, but the driver says she was on autopilot, that there was an OTA update issue, and that it has happened before. This all seems to suggest an investigation is appropriate and necessary.


I don't see why this has anything to do with Toyota's system, which by the way, had a recall for TSS braking due to stationary objects.

You're mad, I get it, but you need to take this up with the driver (who claims she had a failed OTA update, which caused her to wreck, AND THEN DROVE AND WRECKED AGAIN) whose insurance will take it up with Tesla. It isn't your job to figure out why the car did it.


The point is that it not only didn't slow down but accelerated.


Okay, and?

The only point here is that you were hit by an inattentive driver who didn't maintain control of her vehicle twice in one day. She put you in danger and your life at risk, and you're blaming the car.


> If someone hits you using cruise control they remain at fault.

That'd be relevant if they didn't call it autopilot. If they called it lane assist or cruise control plus or something then I'd agree.

Tesla is at major fault for implying that the Teslas can drive themselves, even if the driver is also at fault.


Tesla could call it Egre-MuzzFlorp and it wouldn’t matter. The surveys showing people think “autopilot” means “can drive itself” were fatally flawed, in that they were done with non-Tesla owners.


And yet, people drive their Teslas as if it means "can drive itself".

Words matter. You'd need real evidence to convince me that Tesla calling the system "Autopilot" hasn't significantly contributed to crashes.


On the contrary, we need real evidence suggesting it does. Certainly the population level crash data doesn’t support it.


>The surveys showing people think “autopilot” means “can drive itself” were fatally flawed, in that they were done with non-Tesla owners.

Reality check: there are videos showing Tesla drivers sleeping in the car, or operating the car from the back seat, so we have proof that a subset of Tesla users treat Autopilot as "it drives itself and only the law forced Tesla to add those warnings". These videos are reality: no conspiracy, no subjectivity, no fake stats... some idiots are believing the marketing and some other idiots are still defending that marketing.


> some idiots are believing the marketing

That doesn't follow. Some people are definitely being idiots, but the marketing doesn't say that it's okay to do those things, so they're not "believing the marketing" by doing so.


If an at-fault Tesla hits me, my legal beef is with the driver of that Tesla. They may in turn rely on insurance or on suing Tesla to recover damages from their own misunderstanding of the car’s abilities or improper marketing, but my beef remains with the driver of the at-fault vehicle.


Radar is not good for detecting stationary objects. Of course you get a nice reflection back from a stationary car, but you get a similarly nice reflection from an overhead traffic light, or a manhole cover, or a dropped nail. Because of this, every automotive radar ever fielded gates out the stationary objects. If it didn't, you would get a crazy amount of false positives.

They can do this because the radar measures the relative speed of objects via Doppler shift, and you know the speed of your own vehicle. Anything that appears to be closing on you at exactly your own speed is most likely stationary. (Or moving perpendicular to you. The velocity difference is vectorial, but Doppler can only observe the component along the observation vector, and civilian radars have terrible angular resolution.)

In short: nobody ever used radar to stop cars from hitting stationary objects. This is not a Tesla specific thing.
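
A minimal sketch of that gating logic, purely as an illustration of the idea described above (my own toy code, not any particular vendor's implementation):

    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float           # distance to the reflector
        radial_speed_mps: float  # Doppler speed along the line of sight (negative = closing on us)

    def is_stationary(ret, ego_speed_mps, tol_mps=1.0):
        # A reflector closing on us at roughly our own speed is not moving
        # relative to the ground: an overhead sign, a manhole cover... or a
        # stopped car. Doppler alone cannot tell these apart.
        return abs(ret.radial_speed_mps + ego_speed_mps) < tol_mps

    def gate_targets(returns, ego_speed_mps):
        # The classic (and problematic) filter: drop everything that looks
        # stationary to avoid constant false alarms from roadside clutter.
        return [r for r in returns if not is_stationary(r, ego_speed_mps)]

    ego = 30.0  # our own speed in m/s, roughly 108 km/h
    returns = [
        RadarReturn(80.0, -30.0),  # stopped car (or overhead gantry) ahead
        RadarReturn(60.0, -5.0),   # slower car ahead, still moving
    ]
    print(gate_targets(returns, ego))  # only the moving car survives the gate

The stopped car and the overhead gantry produce the same Doppler signature, which is exactly why these systems throw both away and leave stationary obstacles to the camera or the driver.
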


I think it's probably more likely that the driver made a user error; tens of thousands of people use Autopilot every day in stop-and-go traffic.


Stop and go traffic is not the same.

In stop and go traffic, the car in front of you is visible to the camera at all times.

In this case, my car may not have been acquired by the camera before it was already stationary, in which case it might have been ignored, causing the accident.


We won't know until the lady provides you with evidence that her AP was actually on, but my guess is that she fucked up and she's just using AP to offset blame.


Exactly: An investigation would find out what happened.


Please post an update when the investigation concludes


The main point of this thread is that I don't know how to actually get an investigation to happen!


The insurance company handles it.


To be fair to scenario A, I've seen one other video of a driver reporting AP hitting the car in front of them during a rapid slowdown in traffic. It's hard to say how widespread this is if the NHTSA isn't willing to investigate.


Or D) Tesla stops instantly when it hits OP once by itself and again when the person behind it hits it.

(Spare me the low effort comment about how everyone should be prepared for the car in front of them to stop instantly because it hit something, statistically nobody drives like that)

Edit: I misinterpreted the OP about it being a four-car pileup.


Dude. It's not a "low effort comment" to point that out, it's literally the law. If you can't stop when the person in front of you stops, you're too close. Increase your following distance. Don't let others bad habits justify your own.


Yes, yes it is a low effort comment. And it is exactly what I was trying to preempt. It adds exactly zero to the conversation to say "but the law" or "but drivers ed" or "but some proverbial rule of thumb".

For better or worse, neither your fantasy of how people ought to act nor the letter of the law reflects how the overwhelming majority of the human population operates motor vehicles or expects others to. Is it ideal? Probably not. But it's a decent balance between being somewhat prepared for the expected traffic oddities, leaving margin for some subset of the unexpected, and efficient use of road space.

I'm sure this will be an unpopular comment, because there is no shortage of people here who think humans adhere to laws the way a network switch adheres to its configuration, but the reality is that there is not perfect alignment between how reasonable traffic participants behave and the letter of the law.


I think that some of us would argue that these are not "reasonable traffic participants". People who do not maintain sufficient stopping distance are one of the most frustrating parts of the American driving experience and are (IMO) extremely disruptive to safe travel, especially on highways.


>I think that some of us would argue that these are not "reasonable traffic participants"

You're basically arguing that almost everyone else is unreasonable. That's going to be a very uphill argument.

Also there's no reason to hide behind "I think that some of us would argue". You clearly hold this opinion. Ask yourself why you have reservations about owning it.

>People who do not maintain sufficient stopping distance

Who defines "sufficient" because the consensus based on the observed behavior of typical traffic seems to be that "sufficient" is a few seconds where possible but always less than whatever the comments section on the internet thinks it should be

>American driving experience

The American driving experience is not particularly remarkable (except maybe in its low cost) compared to other developed nations, all of which are pretty tame compared to developing nations.

I'm not asking you to like the way people drive. I'm just asking you to assess traffic incidents based on the reality of how people drive, and not the farcical assumption that most participants are following, or can be expected to follow, whatever rules are on paper.


You could have pre-empted it by not making such a claim in the first place, and I abhor your attempt to normalize this dangerous behavior. It is not a "delicate balance." Increase your follow distance. Driving closer to the car in front of you gains you nothing but sacrifices valuable reaction time in the event of an emergency.


Take your high horse and turn it into glue. I'm not attempting to normalize anything. Look outside: it's already normalized. It is the current status quo. I'm not endorsing it. I'm simply asking you not to pretend otherwise so you can act outraged at someone who failed to avoid an accident because they were driving the way typical people drive.


> If you can't stop when the person in front of you stops, you're too close

If the other driver decides to randomly brake in the middle of the road (as some Teslas have been known to do), it's not necessarily the person behind's fault.


It absolutely is. If the person behind was unable to avoid collision, then the collision occurred because they were following too closely. It doesn't matter whether it was a Tesla phantom-braking, or a human slamming on the brakes to avoid hitting a dog. It is always the driver's responsibility to maintain enough distance that they can safely respond to any action taken by the vehicle ahead.


>It absolutely is. If the person behind was unable to avoid collision, then the collision occurred because they were following too closely.

This is infantile circular logic. It makes for great internet feel-good points and little else.

What the other car was doing absolutely matters. Plenty of people have brake checked their way into an accident and wound up paying for it because there were witnesses.

>It is always the driver's responsibility to maintain enough distance that they can safely respond to any action taken by the vehicle ahead

Citation please. I'm particularly interested in one that backs up the word that you thought was important enough to italicize.

My state places no specific requirement for following distance upon drivers. The state driver's manual states a suggested minimum of two seconds.

I spot checked two other states and their drivers' manuals advise similar (one advised three seconds, one advised a variable number depending on speed), neither said anything about being able to account for anything the car in front of you does.
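
For context on what those second-based rules actually mean in distance, here is a quick back-of-the-envelope sketch (my own numbers, assuming roughly 0.7 g of hard braking on dry pavement; not taken from any of the manuals cited above):

    FT_PER_S_PER_MPH = 1.467  # 1 mph is about 1.467 ft/s

    def gap_ft(speed_mph, seconds):
        # Distance covered during an N-second following gap.
        return speed_mph * FT_PER_S_PER_MPH * seconds

    def braking_ft(speed_mph, decel_g=0.7):
        # Distance to stop at a constant deceleration, ignoring reaction time.
        v = speed_mph * FT_PER_S_PER_MPH
        return v * v / (2 * decel_g * 32.2)

    for s in (1, 2, 3):
        print(f"{s} s gap at 70 mph is about {gap_ft(70, s):.0f} ft; "
              f"stopping from 70 mph takes about {braking_ft(70):.0f} ft of braking alone")

Whether a 2-second gap is "sufficient" depends on whether the car ahead merely brakes hard or comes to a near-instant stop against an obstacle, which is what the disagreement upthread is really about.
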


In my part of the world, it's always the fault of the person behind.


This is changing with dash-cams. If the other person has a dash-cam, you are not free to do unreasonable things and then weasel out of being on the hook on the basis of the front of their car hitting the rear of yours. Of course, if there were witnesses this has always been the case.


If the dashcam shows the driver following too closely and failing to brake in time, I'm not sure a dashcam will help. But sure, you can't reverse into someone.


Really? In most states and EU countries it's not. It's assumed to be the fault of the person behind at first, but that's a heuristic, not a necessary assignment.


No other car hit the Tesla from behind. It hit me with two impacts all by itself.


You said it was a 4 car accident? She hit you from behind and you hit the car in front of you which hit the car in front of it? Did you end up hitting the car in front of you twice because she hit you twice?


I'll go against the general opinion in this thread and say that this is not something that I'd expect to be blamed on Tesla. Especially considering this:

> The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

Autopilot might be an absolute dumpster fire, but what you are describing is similar to an adaptive cruise control failure and the liability is still with the driver. She rear ended you while you were at a red light, make sure that her insurance pays and that's it. If she wishes to sue Tesla she can obviously do so.


I have adaptive cruise control, and it behaves the exact same today as it did on the day I test drove the car. It doesn't change behavior unpredictably with OTA updates!

How was the driver supposed to know that the previous issue was not some rare fluke?

Tesla is recklessly rolling out software, and amazingly has pushed off the liability to the drivers willing to try it. Sadly we are all part of the beta, like it or not, if there are Teslas driving around on public streets.

I'm mostly shocked that insurance rates have not skyrocketed for FSD beta testers.


I have a Tesla - I could be wrong, but I don't think that Autopilot has seen any updates in quite some time. The exception to this is people who have the FSD beta - this is a very small percentage of people who were able to get an impossibly high driving score. Getting into an accident would have made it impossible for this lady to get the FSD beta.


To clarify, I was responding to the OP with regard to the "investigation" part. Authorities won't launch an investigation on someone rear-ending someone else on the basis that "autopilot did it". As a wronged individual he should make sure that he is properly compensated as non-responsible for the crash. The driver herself can (and maybe should!) at least contact Tesla to inquire about the whole thing.

If Teslas with FSD truly are causing more accidents than other cars, then yes, at some point some government body will investigate, but they won't care about a single incident with only material damage.


> I have adaptive cruise control, and it behaves the exact same today as it did on the day I test drove the car. It doesn't change behavior unpredictably with OTA updates!

I do too on a non-Tesla. I've seen differing behaviors not just due to updates but due to the time of day!


Yes, mine tends to fail more during winter because the sun is much lower in the sky. It also doesn't like heavy fog (what a surprise), and once in a while it ignores uncommon vehicles such as trucks carrying weird cargo.


Responsibility and who pays are certainly important, but isn’t it equally concerning that there isn’t an investigation initiated directly to determine if there’s malfunctioning technology?


At scale not really, how many cars get rear-ended daily in the US? Insurance companies will forward the info to another government body which will launch an investigation if there is a significant deviation from the norm.


Have your insurance company harass Tesla. Chances are they’ll come back with a report saying “driver depressing accelerator 13.42° at time of incident overriding AEB and ignoring FCW for 5 seconds”. That’s usually how these kinds of stories end.


The OP's insurance company should be asking the other driver's insurance company to cover the loss. Then the other driver's insurance company should be harassing Tesla if it believes Tesla was responsible.


OP’s insurance company can harass Tesla directly if they believe Tesla holds evidence of the true cause of the crash, which they probably do.


But neither is going to go after Tesla, because insurance companies aren't about ferocious prosecution of potentially valid legal claims, or manifesting customer outrage, but getting things resolved in a final manner as quickly as possible with as little out of their pocket as possible.

To go after Tesla for defective FSD over any individual accident takes a litigant who is more concerned with making a point and/or harming Tesla than cost effectiveness.


It's probably safe to assume that if she were going to do that, she would have done it the first time she got into this type of high-speed accident.


Oh, you assume Tesla won't fight to the death to avoid releasing data recorder information. They will.

Unless it "absolves" Tesla.

Remember, this is the company that when someone died, put out actual PRESS RELEASES to say "Not the car's fault. In fact it warned him before the collision that he was inattentive."

They neglected to mention it triggered ONE steering wheel warning... FOURTEEN MINUTES before the collision.

Even your example is problematic. "FSD/AP isn't at fault/didn't cause the collision/near miss, because it wasn't engaged..."

... because the driver had to take emergency action to avert FSD/AP's behavior.

They got taken to task for that, when investigatory boards started asking "just how long before that collision was FSD/AP disengaged, and was it disengaged by hard manual braking, etc.?"


> Oh, you assume Tesla won't fight to the death to avoid releasing data recorder information. They will.

Last time this happened in a similar situation, the driver went to a third-party technician that was able to extract the log data, and it proved the same thing that Tesla was claiming. The driver was full-on pressing accelerator instead of the braking pedal.[0]

Raw log data is present in that link, so it isn't just another "he said one thing, they said another thing" situation. But the driver in question was indeed fighting to the death claiming their innocence, even though the initial accident was already raising eyebrows among most people familiar with Autopilot.

It also helps that NHTSA has already investigated such claims before[1], so they looked into hundreds of cases that were claiming Autopilot suddenly accelerating and causing crashes. NHTSA discovered that not a single one happened due to autopilot, but due to the driver misapplying acceleration and braking pedal.

0. https://insideevs.com/news/496305/valet-crashes-tesla-data-r...

1. https://static.nhtsa.gov/odi/inv/2020/INCLA-DP20001-6158.PDF


That was my thought as well. Over here, if a car rear-ends you, both drivers stop, call their respective insurance companies, wait until a representative arrives, and let them deal with the consequences.

As individuals we don't have bargaining power against stupid automotive choices. But it is in insurance companies' best interest to make sure somebody else pays.


I doubt they will do anything.

They have a process: claims adjuster looks at the police report, looks at the wreckage, proposes a payment, closes the file. Anything else would be extra work on their part and not required or probably even encouraged.


Insurance companies aren't going to bother investigating autopilot until this issue gets a lot more widespread. Rear-end accidents happen thousands of times a day.


I wonder if we'll see a difference in liability insurance rates for Tesla owners vs. the others. If they're truly road-bound, OTA-distracted, undirected electric missiles, I'm sure we'll see a difference there before we hear back from the NTSB.


Here's an idea. Maybe she said she was on Autopilot as a cop-out? She could also have had her foot on the accelerator, which disables automatic stopping (clearly shown to the driver as a warning) when Autopilot or traffic-aware cruise control (TACC) is engaged.

If this were truly a substantiated issue, hundreds of Tesla owners would have created threads on the forums below. That would indicate a trend, which would prompt the NHTSA to take action:

https://teslamotorsclub.com/tmc/forums/-/list

https://www.reddit.com/r/teslamotors/

But alas, a quick cursory search yielded nothing.

There was another media frenzy a while back falsely claiming sudden unintended acceleration (SUA) in Tesla vehicles. The result? "NHTSA determines sudden acceleration complaints in Tesla vehicles were due to driver error"

“There is no evidence of any fault in the accelerator pedal assemblies, motor control systems, or brake systems that has contributed to any of the cited incidents"

https://www.youtube.com/watch?v=TqTXhKVtQbU&t=203s

https://techcrunch.com/2021/01/08/nhtsa-tesla-sudden-uninten...


This needs more visibility


You need to be careful pointing out anything like this on HN. If you haven't noticed, it's really cool to hate anything TSLA these days.


I know anything Tesla gets some people too excited. But it should never come at the cost of being objective.


Why do you think she was telling the truth? People’s instincts in such situations are for self-preservation, and shifting blame elsewhere is common.


I have a feeling a lot of stuff is being swept under the rug at this stage. Watching how many near misses are on YouTube from just a handful of beta testers, the only accidents I've seen are rims being destroyed by curbing. From the amount of dicey things it's doing, I find it hard to believe there are no accidents, especially before last week's rollback of 10.3, where a lot of people reported very dangerous behavior as soon as it rolled out.


Wait till you see the number of car accidents from non-Autopilot vehicles! You'd want to ban cars altogether if a YouTube video were posted for every accident.


Come on, that’s blatant whataboutism. Surely we’re good enough here to realize that we can hold both Tesla and other drivers to a higher standard?

Yes, people are often bad drivers. But it doesn’t follow that we must ignore the faults of any self driving technology just because regular drivers crash too. If Tesla is pushing out updates that make their own super fans comment on the cars doing unsafe and erratic things, that’s something we should look into.


I don't think that's an example of whataboutism, given that one of the stated goals of autopilot is literally to provide a safer alternative to the status quo of humans driving manually; pointing out the riskiness of driving in general is not some unrelated thing that's being raised as a distraction from the real issue.


No, we can't. Accidents are going to happen no matter what because driving is a complex task with an infinite number of potential failure modes, and no threshold where the risk of serious injury or death can be considered zero. If something is safer than average, then it is objectively an improvement, and that's all it needs to be.


> If something is safer than average, then it is objectively an improvement, and that's all it needs to be.

GP is asserting that FSD is significantly less safe than previously known, and that only driver intervention is preventing FSD from killing more people. Personally, I've heard way too many "FSD regularly tries to drive into that bridge pillar" reports to believe an assertion that FSD is safer than human drivers without significant evidence.

Second, we’re talking about a case where even Tesla owners acknowledge that an update made their cars noticeably less safe. Even in cases where FSD is better than human drivers, which I don’t think is the case yet, we should be quite concerned about the possibility of software updates making vehicles less safe.


My comment was made in response to GP's claim that it is whataboutism to compare the safety record of FSD to human drivers. It is entirely possible that FSD actually is less safe than drivers, and if so let the evidence show it. But the fact remains that you should very much be comparing the safety of FSD to human driving.


Humans are pretty good drivers compared to FSD beta. If you think that's not true after trying it or watching some videos I would appreciate it if you tore up your license and used Uber


> Humans are pretty good drivers compared to FSD beta.

Lol! You haven't driven in Montana, have you? What I see in the latest youtube videos is an order of magnitude better than the average driver here.


Well, a driver is a human with a lot of other things on their mind: work, family, personal issues. Driving is just an add-on. Tesla Autopilot is a system designed to drive, and its only job is to drive safely and be better than any human driver. That's why most of us have different expectations.


So sad that the average non-Tesla driver isn't enthusiastic enough about the latest technology to broadcast their driving journeys on YouTube for the rest of the world to criticize, and to make fun of the shady dumpster fire of a manufacturer that made their car, whenever they get into a close call.


"The driver said she was using Autopilot"

"she had actually had the same kind of collision on Autopilot previously!"

I've got a hypothesis...and it has nothing to do with autopilot being faulty.


Likely an error in the seat to steering wheel connector - used to see it all the time when I worked on cars.


There is a lot missing from this story.

How old was the Model S?

How did nobody hit her from behind if it was a four-car pileup, as you say elsewhere? Was it all her plowing into the other cars?

What follow distance setting did she have it on?

What were the road conditions? Dark? Light? Snow? Ice? Dry? Sandy? Flat?

What was the speed limit on this highway?

How long had the light been red?

Was the light functioning properly?

Two impacts could simply be a bounce of either your car or her car. Not some mystery, right?

Where did this happen?

Are there any reports online (local law enforcement, etc.) about what happened?

What happened shortly before? (i.e. How long had you been stopped at the light?)

Were the four cars involved all cars that she hit directly, or was it a domino effect?

Where were you relative to the other two cars besides the Tesla that were affected?

None of this is to excuse the Tesla, just a lot of info missing here that would be interesting to hear.


Tesla does the bare minimum when it comes to software QA on their autopilot. Looking at the codebase can tell you a few things:

- There are few, if any, unit tests.
- No regression tests, no integration tests.
- CI is “something to aspire to have one day.”
- Thousands of open source dependencies, many of which are updated ad hoc whenever a developer needs or wants to.
- No code review process.
- Lots of commented-out code and dead code.
- No code linters or pre-commit process.
- About five different architectures, due in part to constant churn on the team.

Anyone who allows the “Tesla autopilot” would think twice if they actually knew the quality of code that they are betting their lives on. It’s a dumpster fire.
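
To illustrate what's allegedly missing: this is the kind of small regression test you'd expect around safety-critical logic. A minimal sketch only, where time_to_collision() is a hypothetical stand-in for illustration, not anything from Tesla's actual code:

    # Hypothetical pytest-style regression test; time_to_collision() is a
    # made-up illustrative helper, not Tesla code.

    def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact at a constant closing speed (inf if not closing)."""
        if closing_speed_mps <= 0:
            return float("inf")
        return gap_m / closing_speed_mps

    def test_stopped_lead_car_gives_finite_ttc():
        # 50 m gap, approaching a stopped car at 25 m/s (~56 mph): 2 s to impact
        assert time_to_collision(50.0, 25.0) == 2.0

    def test_lead_car_pulling_away_never_collides():
        assert time_to_collision(50.0, -3.0) == float("inf")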


> Anyone who allows the “Tesla autopilot” would think twice if they actually knew the quality of code that they are betting their lives on. It’s a dumpster fire.

Worse yet, other drivers, and especially vulnerable pedestrians and bicyclists, have no option to opt out of Autopilot being tested on them, with potentially fatal results.


Source?


Feel free to email me: tclaburn at theregister dot com


You can see here how "journalism" runs with stuff that's not based on any data. There are 5 million+ car accidents a year, and a probably totally bogus retelling from someone who probably isn't even sure what they were doing (i.e., did she read the manual?) will make it into a news article.


You're assuming this will instantly be spun into a story based on this one anecdote. It might be used as one anecdote of many, it may be represented as one point of view among many, it may never end up becoming a story at all.

"You can see here how" a journalist is choosing to investigate a possibly promising lead while their profession continues to be misunderstood.


We'll see how much "quality" we get out of tclaburn but I don't think it's gonna be an article full of numbers and facts, but rather outrage and feelings, judging by the "quality" that newspaper tends to produce.


Have you actually read any content at The Register? It's often tongue in cheek, but openly so, and rarely, if ever, dishonest (in my experience). Don't assume all media outlets and journalists are the same.


So what should they do? Copy OP and publish as is with no follow up, investigation or additional information? or should they do their job and investigate a claim that could be of interest to the public?

It's not like they could get information from OP, then make a FOIA request of the two agencies OP mentioned to identify similar reports, and write the facts up in an article for us to read, understand, and decide for ourselves whether it's an issue. That's good journalism: give the facts and let the public form opinions, not the current system, which gives the opinions and lets the public make up the facts. Just like this post, deciding it's a widespread issue, with the comments agreeing that it is without any evidence.


I don't think you understand what real journalists do. Bad ones might perhaps just "forward" an unchallenged claim as your uncle does on Facebook. Journalists and editors trained at journalism schools have a little more integrity and skepticism than your uncle.


Except if you pay attention to any of this, you see that they mostly forward unchallenged claims, if they fit a narrative, and they don't follow up or offer any backstory.


Ah yes the "no true journalist" argument


Wait, so your contention is that a profession is defined by its worst practitioners? OK, in that case, software developers are all script kiddies.


How else do you expect a journalist to start an investigation?


Apparently all news reports must be based solely on trends from statistically valid publications. Ignore the fact that most news is of the "man bites dog" variety and not the "dog bites man trend decreases 3% year-over-year" variety.


The Register are pretty good, to be honest. They don't run just any old crap, and are actually technical.


The problem is it's mostly coverage of the Tesla "scandal" stuff.

And there is almost never any follow-up on this stuff.

https://www.theregister.com/2021/03/26/tesla_labor_law/

------

Tesla has been ordered to correct its unlawful labor practices, and its supremo Elon Musk must delete a related tweet from three years ago.

....

The decision also directs self-styled "Technoking" Musk to delete a May 20, 2018 tweet.

> Here is the tweet BTW:

"Nothing stopping Tesla team at our car plant from voting union. Could do so tmrw if they wanted. But why pay union dues & give up stock options for nothing? Our safety record is 2X better than when plant was UAW & everybody already gets healthcare."

The ruling directs the vehicle maker to offer to rehire plaintiff and former employee Richard Ortiz and pay him lost wages, and to strike unlawful disciplinary information from the record of both Ortiz and another employee, Jose Moran.

---

The issue is the follow-up and background doesn't get covered.

In their defense, Tesla points out that Ortiz was fired for dishonesty and lying to an investigator, that being a union supporter was not related (they claim), and that the person who made the decision to fire Ortiz was ALSO a union supporter and consistently fired folks for dishonesty. I'm not saying this is accurate, but it gets ZERO coverage.

Employees had taken Pratt's Workday (HR-type system) photo from a work/business system and used it to harass Pratt (Pratt claimed). The NLRB said this shouldn't be investigated but should be "a discussion between two employees." Who knows what's true, but the constant one-sided stories are BORING. And many employees DO put things on company systems and DO expect the company to keep others from taking them and using them in Facebook and other posts without their permission.


Hope you don't employ a sensationalist style in your journalism and try to build an in-depth story based on the debate here on HN.


Detecting stationary objects is a known issue with every driver assist system. No automaker's system guarantees that it will stop for stationary objects. In fact they all explicitly state that they may not. It's a known and disclosed and accepted risk of these systems.

Ford: "may not detect stationary or slow moving vehicles" https://www.fordservicecontent.com/Ford_Content/Catalog/owne...

Tesla: "may not recognize or detect oncoming vehicles, stationary objects, and special-use lanes such as those used exclusively for bikes, carpools, emergency vehicles, etc" https://www.tesla.com/ownersmanual/model3/en_us/GUID-0535381...

GM: "may not detect and react to stopped or slow-moving vehicles ahead of you" https://my.chevrolet.com/content/dam/gmownercenter/gmna/dyna...

etc. You can find it in every brand's manual. NTSB and NHTSA knew that this risk existed when they approved these systems. There is nothing to investigate.


> It's a known and disclosed and accepted risk of these systems.

Accepted by who? As another driver getting rear-ended by these vehicles, I didn't agree to that. It's just a way for them to shift liability to the driver for what they know is going to happen.

This technology shouldn't be allowed on the road unless it can detect and avoid impact with parked cars. That should be the price of admission for the technology.


Accepted by NTSB and NHTSA.


Yes, but there is usually an additional system like Toyota's Safety Stop which WILL detect a stationary object and kick in at the last minute to reduce speed by N MPH before collision.

AFAIK Tesla used to have that via its radar, but because new production cars have no radar, they can't have it, and Tesla disabled it on the older cars too in the interest of using a single software version.

At least this is what I've gathered from reading here and elsewhere.
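
For intuition, the "reduce speed by N MPH before collision" part is just braking physics. A toy calculation, where the ~0.8 g of braking and the 30 m trigger distance are made-up illustrative numbers, not any manufacturer's actual parameters:

    # Toy math only: how much speed a last-moment braking intervention scrubs
    # off before hitting a stationary object.
    import math

    def impact_speed_mps(v0_mps, brake_decel_mps2, trigger_distance_m):
        """Speed left at impact if full braking starts trigger_distance_m out."""
        return math.sqrt(max(0.0, v0_mps ** 2 - 2.0 * brake_decel_mps2 * trigger_distance_m))

    v0 = 29.0  # ~65 mph
    v_hit = impact_speed_mps(v0, 7.8, 30.0)  # ~0.8 g of braking, triggered 30 m out
    print(f"hit at {v_hit:.0f} m/s (~43 mph) instead of {v0:.0f} m/s (~65 mph)")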


Radar is useless for detecting stationary objects because of far too many false positives. Radar is the reason why these warnings exist. It is not the solution.

Teslas continue to have forward collision warning and automatic emergency braking as standard features whether or not they have radar. But again, they do not detect 100% of all stationary objects and neither does any other automaker's system.


These systems do work and have worked for years. They are also regularly tested by IIHS.

See, for example: https://www.youtube.com/watch?v=TJgUiZgX5rE

And Teslas at least used to do this too:

https://www.youtube.com/watch?v=aJJfn2tO5fo

If they no longer stop reliably to avoid hitting stationary objects, this is a serious problem that needs to be corrected.


They absolutely work but they are not reliable. Not on any car. They are a safety system of last resort.


> Does anyone have any idea about how to get this actually investigated to see if it was another Autopilot failure?

If you want to get Autopilot in general investigated, and you think that's not happening enough based on your experience because the thresholds used by the agencies skip investigation when it is warranted, you should direct your complaints to Congress, especially your own members and those on the relevant committees.

This will probably not get your accident investigated differently, even in the case where it is part of triggering action, but if your concern is general safety, that's probably not important.

If you really want your accident investigated more thoroughly, you’ll probably have to do it yourself by gathering enough initial evidence to find an attorney willing to pursue it as a defective product liability case against Tesla. This may be complicated somewhat by dealing with the existing insurance companies involved in handling claims out of the accident, but that's probably a small part of the involved challenge. On the other hand, your personal potential financial upside is higher in this case, though the likelihood of recovery may not be good.


Correct me if this is what you meant by "talked with the NHTSA", but another place you can submit information is to the NHTSA Office of Defects Investigation (ODI): https://www.nhtsa.gov/report-a-safety-problem#index , which is the group that forces automakers to do recalls.

In my opinion (PhD student in vehicle safety) it doesn't sound severe or novel enough for the NTSB to investigate. NTSB has done good reports on a couple of similar Tesla Autopilot crashes (https://data.ntsb.gov/Docket/?NTSBNumber=HWY18FH011 https://data.ntsb.gov/Docket/?NTSBNumber=HWY16FH018).

minor note: NHTSA is spoken as "nit-sah" so "NHTSA" is better than "the NHTSA". NTSB is spoken as "N-T-S-B" and is fine.


NTSB and NHTSA can't investigate every single complaint. That's unfortunately not scalable. They would need to talk to the driver, talk to Tesla, get the data, etc. All for something that's handled through civil lawsuits. You can sue the driver because ultimately she is at fault, and if she wants to sue Tesla, that's her prerogative.

This makes sense to me.


The problem is that NTSB has a bizarro prioritization scheme. They have spent huge efforts investigating things like hot air balloon crashes, and even Harrison Ford's crash landing a vintage airplane into a golf course. The crazier the incident, the more likely they investigate -- which is completely opposite of how they should be prioritizing. The result is that the common, everyday rear-end car crashes are just written off.


It's not hard to figure out: the companies themselves and other agencies work well enough at solving the common, reproducible errors, so the NTSB focuses on the edge cases that would otherwise go unanswered.


The US has one of the worst road safety records of any modern industrialized country. The reason is that other agencies have actually not solved the "common reproducible" errors at all.


I ask this every time there's an article about Autopilot behaving badly and still no good answer... What is a (hypothetical) situation where Autopilot would definitely be responsible for an accident? Not brushed off as "the driver is responsible and should have had hands on the wheel at all times."


An accident where it failed to listen to the driver overriding it and did the bad thing anyway would be an unambiguous case.

A trend where it was frequently (over the background human accident rate) making decisions that resulted in it switching from driving normally to an accident being inevitable in less time than a human can react. Something like it frequently sharply turning the wheels at highway speed.

Short of something like that which takes agency away from the driver, the driver is in control and responsible for the vehicle, whether they are using assistive technology or not. Accidents also happen already, it doesn't make sense to insist on assistive technology being superhuman at avoiding them.


I think that's the problem: they've been allowed to legalese away all liability. There would have to be an OTA update so bug-ridden that it caused multiple accidents in short order and with major carnage. In that instance, someone might notice the "trend", or the public outcry would become loud enough.


Not until level 5 is rolled out – so potentially never.


Last week Tesla rolled out an update to all of its FSD beta users that caused rogue forward collision warnings (FCWs) leading to dangerous sudden braking. Somehow they didn't catch this internally, but most of their customers seemed to notice it the first day. It's clear that Tesla doesn't test their software enough.


You could ask the other driver to file a complaint - seems like she’s not too happy with it either. You could talk to the inspectors general responsible for NHTSA and NTSB - they can investigate why those two aren’t investigating.

Only other option would be to sue Tesla. You'll have a claim if your damages aren't paid for by the other driver/her insurance (e.g. the damages exceed the insurance limit, so you get to sue either her or Tesla). However, you'd be better off financially suing her. If she's uninsured, she could sue Tesla to cover the damages.

Perhaps some public interest law firm would want to take up the case - that could make suing practical.


> There were two impacts somehow. It was a 4-car pileup and my car was destroyed.

Likely explanation for two impacts during a rear-end pileup:

1. Tesla rear-ends your car.

2. Another car rear-ends the Tesla, propelling it forward and hitting your car a second time.


The Tesla was the rear car. Nothing else hit it. It hit me twice.


Doesn't this indicate that she was likely "panic braking" on the accelerator?

This would override AEB / TACC on any vehicle I own, radar or not.


Yeah, this seems to be the most likely scenario. It sounds like the original poster's car was in the middle and the Tesla was at the end. OP's car suddenly brakes (or hits someone), the Tesla either doesn't react fast enough or starts reacting, and then the Tesla driver hits the gas pedal (meaning to brake). Still not sure how they would have been able to hit it twice if they hadn't backed up, though.


I slowed down gradually and was already stopped at the red light with 2 cars in front of me when the Tesla plowed into my car.


Yes, I thought that could be the case too, but no way to know without seeing the logs.


Sounds to me like the Tesla tried to keep going after it hit you. Those cars really should not be allowed to use that tech, and we're well past having enough proof to admit that.


I'm not sure what you think Autopilot is. It's basically adaptive cruise control, which almost all new cars have. As shocking as this may sound, the onus is on the driver to avoid a high-speed collision when approaching a stopped car at a red light. The driver should push the brake pedal.

Just because it was a Tesla, somehow this is a problem that NTSB and NHTSA need to investigate? But if it were a Honda Civic, it would just be an accident?


Recently I was driving behind a Tesla and it suddenly started braking for no reason at all; I almost rear-ended it. I think it had something to do with the sun reflecting brightly off the road surface, because I noticed the reflection just as I drove past where the car had suddenly started braking.


My VW Passat was on ACC and LKAS just a week ago when it suddenly stopped detecting the car in front and was about to drive into it. This was also at a red light. As I was actually paying attention, I just hit the brakes manually. Technology isn't perfect. This happens to all cars.


I recently rented a late-model RAV4 and played around with the lane keeping and radar cruise control.

It kinda drove like a drunk, lead-footed teenager. It was cool conceptually, but it didn't seem to work very well, and it was a rough ride of heavy acceleration and deceleration with occasional odd steering choices. It also had these lights on the mirrors that would light up when someone was in the blind spots, which ultimately were kind of annoying/distracting.

It felt like the UI for these features, which seemed to push you towards setting adaptive cruise and lane keeping and then letting the car do the rest, wasn't really in line with the actual capabilities of the system. Although maybe I just wasn't used to it. It would certainly do scary things, like accelerating harshly toward cars ahead when the road was curved or had small hills that temporarily obscured the direct line of sight to them.


Welcome to government bureaucracy and understaffing. I had a similar issue with a rights violation last year - nobody cares or investigates, just pass the buck.

Your insurance, or theirs, might investigate (somewhat unlikely). You can try to complain to Tesla, but I think they are more concerned with sweeping negative stuff under the rug whenever possible. Your state DOT might be interested in the report. I doubt they'll really look into it unless there are a bunch of similar incidents.

"... she had actually had the same kind of collision on Autopilot previously!"

Sounds like someone is extremely ignorant or doesn't value their life. Why would she trust a safety critical system that failed and caused a crash previously?


Really sorry to hear this.

I really cannot see how this bullshit is allowed on our roads. We're all guinea pigs for this insane experiment when the truth is we know it cannot work well enough to not kill and maim people.

Our government, as you've proved, is completely asleep at the wheel here.


> Our government, as you've proved, is completely asleep at the wheel here.

Um, and not the driver?


My comment was not about the driver. It was about our Gov's response to the OP's efforts to report this, or more accurately, our Gov's complete lack of one.

I'll try real hard to be clearer for you in the future.


You might be able to seek a criminal complaint (not that I recommend it). You can call your local DA's office to see what they think. It's unlikely they'd want to take on Tesla, but a CA DA might go after the driver for criminal negligence.


Take on Tesla? The driver is responsible at all times. If AP was failing to stop, the driver should have been paying attention and taken over.


Did the local police not come out and investigate?


Yes, they were there and so was a fire engine. I don't think it's typical for police to request autopilot logs from Tesla though.


I'm not defending the police per se, but I assume the local PD probably doesn't know how to handle such an investigation yet.


Another 10 years and a few thousand injured and dead, and you may be able to collect enough statistics to interest someone in starting a lawsuit.

Look at what it took to get acknowledgement that Pintos had a problem.


An idea for a startup: a flat screen mounted on the back of your car plus a rear-looking AI. Once it recognizes, say, a Tesla behind you, it displays an image known to be reliably recognized by Tesla's vision system, so it won't rear-end you. Of course it isn't limited to Tesla. There's also a very good business case for a subscription: as Tesla and the others update their autonomous AI, users would need updates to the anti-rear-ending software.
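
For what it's worth, the moving parts are simple to sketch. Everything below is hypothetical: classify_following_vehicle() is a stub and decoy.png is a placeholder, with no real detector implied.

    # Sketch of the idea with placeholder pieces; assumes OpenCV is installed.
    import cv2

    DECOY = cv2.imread("decoy.png")  # hypothetical image the following car's vision system handles well

    def classify_following_vehicle(frame):
        """Placeholder: return e.g. "tesla" for the car behind, or None."""
        return None  # plug an actual detector in here

    cap = cv2.VideoCapture(0)  # rear-facing camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if classify_following_vehicle(frame) == "tesla" and DECOY is not None:
            cv2.imshow("rear display", DECOY)  # stand-in for driving the physical screen
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()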


She was probably lying.


Does it need to be investigated for you to get an insurance payout? If not, I don't really understand your reason for asking. Non-software-driven rear-endings don't usually lead to investigating the mental stability of the driver, and so long as there isn't an unusually high rate of accidents with this software (as in, higher-than-human rates)... This is n=1 for all we know, or am I missing any information?


On a motorcycle, if I am at a stop with no one behind me, I periodically scan my rearview mirror and consider escape routes. If someone appears to approach too fast, I start flashing my brake light until it is clear they are decelerating at a comfortable rate.

I am going to start taking a similar precaution when driving. I'll just flash my hazard lights once and hopefully that triggers something in the driver or the autopilot.


Since Teslas have been hitting stationary emergency vehicles with their lights flashing, you might put yourself in more danger doing that for a Tesla on Autopilot.


> The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

She had the same type of high-speed collision in the past and kept relying on Autopilot heavily enough to cause yet another one? I mean, is the average number of high-speed collisions for Tesla drivers close to 2? Or is it just this driver?


Reach out to Jalopnik - this is right up their alley.


This is still a local thing. The Feds don't have jurisdiction.

You need to either 1) file a police report and an insurance claim, and let things play out, or 2) hire a lawyer. In either case, the Tesla has excellent dash cam video and flight-recorder black box data which can be subpoenaed/demanded.

Beyond that, you have no other options. Certainly not the NTSB or NHTSA.


Maybe it is being investigated and you don't know. For both drivers, your responsibility begins and ends with getting a police report done and informing the insurance companies; then they'll fight it out. If they think they can go after Tesla instead of one of them paying, they will. You aren't going to hear about it regardless.


Is it OK for this disclaimer to apply to Tesla cars for 6 years after the initial SW/HW release? "NOTE: Traffic-Aware Cruise Control is a BETA feature." How about after 10 years? Is it OK to retire a feature without ever taking it out of BETA? Or is there some obligation for a SW developer to actually produce working SW?


> they will only investigate if they "see a trend".

That's your answer right there. It's awful what happened to you and your car, but if this issue affects one in XXX (some large number) customers, it doesn't warrant NHTSA's time, and I'm sure they have a large existing backlog of issues to investigate.


If this keeps happening, it seems like a logical consequence would be for Teslas to become more expensive to insure.


Nah, if indeed AP on was more dangerous than AP off then Tesla would just shut it down over the wire.


I'd agree if you say "less profitable" instead of "more dangerous."


AP is not profitable at all right now. I am not sure what you mean.


Talk to a good lawyer, see if you have a case to sue Tesla. They'll only fix this if it costs them money.


Tell your insurance company. Nobody is as motivated as them to make the other driver/car manufacturer/etc pay/hurt/apologize/etc. Then talk to reporters. Sadly in today's world, if it isn't trending on Twitter, it didn't happen. Sorry

Hope you are physically ok and unhurt.


> Nobody is as motivated as them to make the other driver/car manufacturer/etc pay/hurt/apologize/etc.

I don't know why you think that. Insurance companies working together is an iterated game with no clear end point. It makes all the insurance companies more money if they just settle all claims as quickly as possible, as long as the total money changing hands among all claims is correct to within the margin of how much they save by not investigating.


She is still the driver. Blaming things on Autopilot is not going to get anyone very far in the legal sense.


Tesla seems to want it both ways: sell their cars by marketing how magical Autopilot is, then blame drivers when they believe the marketing hype.


I only have my own experience to draw upon, so I may be an outlier, but I didn't buy a Tesla because of its Autopilot feature. If they pulled the feature today, I'd be totally fine with that.

As a user of Autopilot, it's absolutely insane to me that anyone would blame Autopilot for a wreck. It's like a "smart" cruise control, except unlike cruise control, it gives you all sorts of warnings about your role as the driver and will shut itself off if it thinks you're not paying attention. Anyone blaming Autopilot for a Tesla wreck is either trying to sensationalize, or is just completely inept or lying.


> I didn't buy a Tesla because of its Autopilot feature. If they pulled the feature today, I'd be totally fine with that.

Many many people paid $1000+ for the promise of Full Self-Driving that doesn't exist. People definitely care about the Autopilot feature a lot more than you.

> It's like a "smart" cruise control

Except they call it Autopilot! You can't call something Autopilot and then blame people for expecting the car to drive itself.

Call it lane assist or cruise control plus or something.


Yup. Tesla will happily put out press releases saying "autopilot is not at fault - the vehicle warned the driver prior to the collision to put hands on the steering wheel"...

One warning. Fourteen minutes prior to the incident. That part wasn't in the press release.

The Summon feature is the same.

Marketing copy: "Use Summon in a car park to bring your vehicle to you while dealing with a fussy child" (literal quote).

Disclaimer: "Do not use Summon while distracted."

There's apparently a giant Chinese wall between Marketing and Legal at Tesla, because it's far from the only example. Another, that's still present in some Tesla videos, and has been there for years:

> The driver is only in the seat for legal reasons. The car is driving itself.


What about some sort of civil suit for negligence? If she knew it was prone to this from her previous incident and let it happen again, could that be grounds?

I bet some sort of auto accident lawyer would salivate at the idea of suing a trillion dollar company.


The negligence would be on the part of the other driver, not Tesla.


I have questions.

1) How high a speed, exactly?

2) It rear-ended you twice? As in, it smacked into you, decided that wasn't enough, and either waited for you to pull forward or for the cars to separate, and then accelerated into you again? If it actually did this, it's downright comical. I'd wager the driver was involved in the second collision, though (which should be considered a failure of the system nonetheless, as driver+autopilot operate as a single system, and if the state of that system leaves a bewildered driver manually accelerating the car into a collision, that is still a degenerate state!).


"Tesla rear-ended another car while on autopilot" is an immediate red flag in any discussion about car accidents. It shows that the writer is about to launch on a rant with a clear agenda, and this is an argument in which facts, logic and critical thinking are not going to be welcomed.

You were upset that someone hit you; that's fair. You then swallowed the at-fault driver's story hook, line, and sinker, and now, instead of being angry at the other driver for being negligent, you're angry at the NHTSA for not Taking Me Seriously™.

11/10 to the at-fault driver for such skilled deflection: she not only avoided responsibility for her mistake but amplified your rage to the point that you absolved her, blamed the manufacturer of the car she was driving, and are now angry at a regulatory agency for not taking immediate, decisive, and visible action on a single word-of-mouth report.

TL;DR: you've been played. Double check the contact details the at-fault driver gave you.


You should delete this post, stop talking to people, and get a really good lawyer because you are gonna get a little bit of Elon’s billions!


I think the comments section here neatly summarizes how liability for any self-driving bugs will be foisted onto individual drivers.

This was once a big question mark for me - who would be liable in a self-driving crash, the driver or the manufacturer? Apparently individuals want to be liable. It's not clear whether this is the result of marketing or ideology.


Consumer protection is weak in the US. In numerous cases of faulty products and negligent manufacturers, failures of regulatory agencies to intervene eventually escalates into class actions of victims, owners, or state AGs against manufacturers -- with variable outcomes. We've seen it with opioids, cigarettes, guns, diet pills, etc..


It is not a self-driving car unless you can have a sleep in it while it takes you where you need to go in my opinion. :)

Take a real self-driving car, for example: if a Waymo taxi you are riding in rear-ends another car, do you think you will be responsible for it?


If you're in California, you could file an OL-316, since a citizen could reasonably consider this an example of a "traffic collision involving an autonomous vehicle", and Tesla is a licensed member of that program. At worst, they'll refuse the report for some reason or another.


Just imagine that you're a person who constantly lies about anything just to feel better. You have a Tesla. You made a mistake and hit this car. You get out. What do you say?

That's what I thought.

Obviously this is just one possible scenario, one of the others is OP's interpretation that the lady was telling the truth.


My interpretation is that it may or may not be what happened, and should be investigated.


It's not actually that much different from a negligent driver. The driver was negligent with the OTA update, and the driver was negligent enough to trust Autopilot in traffic.

Tesla should of course improve its autopilot technology but I don't see how they're responsible for the crash.


A common reason people (me included) fault Tesla is that their marketing is designed to instill more trust in the Autopilot feature than the user ought to have, given that feature's maturity, while cleverly dancing around the truth to avoid legal liability, which can instead be placed on "negligent drivers."


My perspective is skewed because I'm in Sweden, but in real-world situations I've seen more good come from adaptive cruise control than from Tesla Autopilot. Tesla Autopilot is mostly something I see hyped on HN or Reddit.

But in real life, adaptive cruise control is being used all the time, and no good driver that I know trusts it with their life or their insurance.


I own a Tesla, and I'm not a Tesla apologist. There's tons of stuff that is just wrong about that car (the UI for example, is a dumpster fire).

That said, I recall hearing some stuff in the early Tesla days about how the cars could drive themselves. Summoning, auto parking, and some hints at what we now know as Full Self Driving.

Aside from that, I don't recall much marketing hype around Autopilot. It has a bullet point on the Model 3 website, and some details on the naming of related features and what they do, but that's about it. None of it seems like "hype".

Here's the full description of Autopilot:

> Autopilot enables your car to steer, accelerate and brake automatically within its lane.

That's it.

In the car's user manual it makes it very clear what the feature does and what your responsibilities are as a driver. They don't make it out as some sort of magical feature.


For reference, Tesla's marketing[1] is clear about what they mean when they say "Autopilot" and "Full Self-Driving":

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


Someone needs to tell this to the idiot at my mother's office that praises autopilot because he can answer all his emails during his commutes.


Things like Elon Musk (he is the Tesla CEO) making statements about the self-driving capabilities of the vehicle come to mind: https://www.wired.com/2016/10/elon-musk-says-every-new-tesla...

You're correct about what's in the fine print. But this Musk fellow is effectively Tesla's salesman in chief, and has quite a following on the Internet.


Do you have any pictures of the accident site? Just for curiosity's sake.


If he does he shouldn't be posting them here. He should be giving them to his insurance company so they can go after the other driver's insurance company for compensation.


I'm confused. Digital images can be posted here AND sent to an insurance company. What am I missing?


You could potentially post evidence against yourself if you posted online. It's a little shady but I don't think many people are ok with shooting themselves in the foot.


If the images contain information that is relevant to an insurance claim, they probably should not be posted in a public forum that everyone on the Internet can see.


Sue Tesla. It's their responsibility for this garbage. You will get all the incriminating data during discovery.


That's not how it works. You sue the driver; if they have insurance, the insurance company steps in, and after paying you out the insurance company can go after Tesla.


> That's not how it works. You sue the driver; if they have insurance, the insurance company steps in, and after paying you out the insurance company can go after Tesla.

That's...not how it works.

Anyone injured as a result of a defective product has a claim against the manufacturer and anyone else in the chain of commerce. You don't have to sue someone else and let them sue the manufacturer.

(You can, but you don't have to, and it's a complicated tactical consideration whether that is optimal in any particular case, further complicated by the fact that you can also sue everyone who might be liable at once, instead of picking and choosing whom to sue.)


That's best if your goal is to get your car fixed and (hopefully none) medical bills paid. That's not likely to result in Tesla paying anything and certainly not in Tesla changing behavior.

If you want to go for the giant settlement from Tesla, that's a different lawsuit, much higher-risk, much more expensive and much higher reward (financially, media attention and changing behavior).


If that's not how it works, we'd still have Pintos and Corvairs on the streets. Companies can be held responsible for defective products no matter how much they want to shift the blame to someone else.


Sue a billion-dollar company? I'm not sure that's a solution for the vast majority of people. Fighting discovery is generally step one in the process of dealing with this sort of lawsuit. Something like this could (and will) take years and hundreds of thousands of dollars to deal with.

It would be likely cheaper to buy a whole new car every year for five years than to take Tesla to court.


Nit: $1 trillion as of today.


Even worse/better!


Or you may find out the driver of the Tesla was lying.


I'm happy to be driving on the road with other human drivers and have them crash into me. As long as I can prove I was not at fault, I can expect fair compensation. It seems if robots crash into me, I cannot expect fair compensation because whoever programmed the robot can't be held accountable!


If Tesla, NTSB, and NHTSA refuse to investigate, you could hire a lawyer who's willing to go after Tesla specifically. I've similarly been stonewalled by Lyft. Long story short: the driver was fiddling with the official Lyft app, which sends notifications during revenue service, when they caused a 3-car pileup.

Be careful: a lot of car crash lawyers just want an easy payday by hounding the insurance companies. Make sure your lawyer is on the same page as you.



