Tesla's “full self driving” beta is potentially dangerous (roadandtrack.com)
444 points by mdbug on March 19, 2021 | 446 comments



The video is worth a watch: https://www.youtube.com/watch?v=antLneVlxcs

I think it's fair to say that there are definitely some sketchy moments in there. The one that surprised me the most was marked "Near Collision (2)" in the video (12:25). A car really close to the Tesla doesn't seem to be picked up at all until it's too late: https://i.imgur.com/BG0gN2R.jpeg

You can see in the first frame there that on the screen there's nothing in front of the Tesla; it's just the usual white background. But right out the windshield we can see the car is already very close. In the bottom frame, which is a split second later, the car is finally detected and we can see the screen now shows it. But at that point the driver manually intervened with a sharp swerve, right as they were about to hit the car. Going frame by frame on YouTube, the green box indicating the car was only there for 4 frames before the driver disengaged.

I was aware that self driving is far from ready but I didn't know that even basic detection stuff like this is buggy. Double parked cars, people walking in the street, etc, I would expect to be problematic. But this is not really an edge case, it's just a regular car at close range with no obstructions in between...


I thought maybe this Oakland video was an outlier in how bad it was, but I've watched several videos from the related videos, and the Tesla subreddit, and the system really seems like it's in bad shape. I'm really surprised that regulators are allowing this to be tested on public roads, and that Tesla hasn't pulled it. It is bad enough that when I see Teslas on the road now, I feel unsafe, because the system is so unreliable right now.

I would encourage anyone to watch the videos and make up their own mind, because I do feel that people over-correct in their Tesla skepticism sometimes. But in this case, it seems warranted.


> It is bad enough that when I see Teslas on the road now, I feel unsafe, because the system is so unreliable right now.

At the very least you should be required to put a visual sign that your vehicle is in "FSD" mode. Sort of like how we require "oversize load" signs so that others can take extra precautions around those vehicles.


In Japan, trucks are required to have three horizontal green lights above the cab, facing forward[0].

These indicate speed. I would guess FSD should have something similar.

[0] https://www.quora.com/What-is-the-usage-of-three-green-light...


My vote is for a big old Cylon eye in the windshield.


They all just look like Knight Rider. Oh shit, was Knight Rider a Cylon?


This sounds like great crossover TV.


Or Student Driver is a bit more apt.


[flagged]


I would like to add that it is a safe bet that "Jesus's" bank account (or assets) is not at a loss.


> It is bad enough that when I see Teslas on the road now, I feel unsafe, because the system is so unreliable right now.

The vast majority of Teslas aren't using this FSD beta but the older and more limited Autopilot software.

You'll only have to start worrying about this in the next few months, or last year, or the year before, etc (depending on which Musk tweet you go by).


> It is bad enough that when I see Teslas on the road now, I feel unsafe, because the system is so unreliable right now.

Conspiracy theory: Tesla wants to show how bad the system is early on to raise public awareness and make people drive more cautiously around teslas.


And when that catches on, Tesla will start producing cars that look very similar to the ones from other manufacturers, so that people would have to drive carefully around every car as well.


They'd lose a lot of customers if their cars didn't look Musky.


> I'm really surprised that regulators are allowing this to be tested on public roads

One should add that in this case it's tested by consumers, not certified professionals.


There is no "certification" to be a babysitter for a self-driving car.



Just ride as a passenger on a road with cars parked on the side. When I did that about a year and a half ago, it missed about a third of the cars and showed a couple of them as moving in the driving lane.


It's almost like it's in beta, right?


So where can I opt out (as a pedestrian or cyclist)?


That’s an excellent point. Our government has failed to handle this situation, and these systems should not be allowed on the road. I also avoid Tesla cars like the plague and do not trust what’s going on inside.


Anything gets to be crap just by adding the label "beta". Texas electricity system: beta!!


> The video is worth a watch: https://www.youtube.com/watch?v=antLneVlxcs

For contrast, here's Tesla's promotional video[1] for this feature and its caption:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

It's been up on Tesla's site for almost 5 years now.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


> The person in the driver’s seat is only there for legal reasons.

He's the legal crumple zone.


This is probably my favorite post on this site, good one.


Did lol


The other video that this guy posted is also worth a watch:

https://news.ycombinator.com/item?id=26519495

It's failing to safely do a very basic turn across traffic on a flat road with a wide-open view of oncoming traffic, on a clear sunny day. There's not even any weird concrete barriers, shrubbery, signage, traffic light control boxes or such blocking the view of the oncoming cars.


That really isn't good. I don't like how it sits there with the wheels turned into traffic. If you were to get rear-ended you would be pushed into traffic. It is much safer to sit with the car and wheels straight and only start your turn when it's safe to proceed.


This is terrifying. I'm now convinced that the engineers at Tesla aren't just bad programmers, but bad drivers as well.


This is utterly horrifying.

Even if Tesla owners opting into this beta test are willing to put their lives on the, uh, line, I'm not sure everyone else on the road is as keen as they are to be volunteered into this test.

I'm not sure I understand how other posters are watching this video, and are concluding that the FSD system is being 'overly cautious'. To me, it looks like it's doing its best to get T-boned.


Which makes me wonder why these features are still allowed in cars in general. Having an aerospace background, I have no idea how certification of these things works for cars, but I would like to think some governing body has to sign off on this stuff. At least in Germany we have the TÜV for cars already on the road, and sometimes changing basic stuff like suspension isn't allowed. For safety reasons, which I get. On the other hand, you have half-assed self-driving features in production cars (not just Tesla).


It seems to me there are probably laws on the books here in the US that regulators could use to stop this. For some reason, they all seem to be asleep at the wheel (for lack of a better phrase). I guess they're prioritizing the company over the public.


It seems to be either overly cautious and just frozen in place, or "I'm gonna get myself t-boned", with no middle ground in between.


It's doing both. It's stopping when it should be moving, and moving when it should be stopping.


This is precisely my problem, especially with the new Model S that guesses whether you want to go forward or reverse. I'm sure at least a few people will die for the sake of this "feature", and as a pedestrian I can't simply opt out of that risk by not buying a Tesla.


Agreeing with others here, this definitely isn't good. I'm curious if the car "boxes" in the bottom right view represent all of the cars it can see, or if it can see further out. Because in this case, every time it "went for it", there were no car boxes visible, which makes me believe whatever vision system it is using isn't able to see far enough to safely drive at those speeds.


While watching the video, my thoughts as a software engineer: wow, this is impressive.

My thoughts as a driver of a car: wow, this is trash and should not be on the road.


That's a great way to put it, agreed. Purely from a software perspective, that's quite something. But as a driver on public streets? Astoundingly terrible and very dangerous.


> https://i.imgur.com/BG0gN2R.jpeg

My god, and this is on the roads?


Doing ML with stereo vision is very hard in controlled settings, and I don't even wanna think how hard it has to be in moving vehicles with lots of reflections and other IRL stuff.


I'm pretty sure Tesla only uses monoscopic vision for its self-driving features, or maybe that's just for Autopilot.


Who cares what the difficulty involved is when these are going to be on the streets today? You sound like the video's passenger.


I wonder if these cars distinguish between "I don't know what's in this space" and "I'm sure this space is empty". I hope they do... If the car doesn't recognize the other car right away it is no problem, as long as it is 100% sure it can stop in time if something appears.

I believe safe autonomous cars should behave a bit strangely by human sensibilities: they would drive a LOT more defensively in many situations, since they cannot be 100% sure that a child is not going to pop out of the bushes and onto the street. But on a freeway, maybe if they negotiated with all surrounding cars to drive as a "fleet", they would be able to drive a LOT faster than the safe speed limit for a human driver. The comfort of the passengers would become the limiting factor in many situations.
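To make the "unknown vs. known-empty" distinction concrete, here is a minimal sketch (plain Python, hypothetical names, not anything from Tesla's actual stack) of a three-state occupancy grid where the planner only treats space as drivable once it has positively been observed as free:

    from enum import Enum

    class CellState(Enum):
        UNKNOWN = 0   # no sensor evidence yet, must be treated conservatively
        FREE = 1      # positively observed as empty
        OCCUPIED = 2  # positively observed as containing an obstacle

    class OccupancyGrid:
        def __init__(self, width, height):
            # Everything starts as UNKNOWN, not FREE.
            self.cells = [[CellState.UNKNOWN] * width for _ in range(height)]

        def update(self, x, y, observed_free):
            self.cells[y][x] = CellState.FREE if observed_free else CellState.OCCUPIED

        def safe_to_enter(self, x, y):
            # The key point: UNKNOWN is *not* safe; only confirmed FREE is.
            return self.cells[y][x] is CellState.FREE

A planner built on top of something like this would slow down whenever the space it is about to enter is still UNKNOWN (say, behind bushes), which is exactly the "drive a lot more defensively" behavior described above.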


Yea, as a development build


Move fast and crash things.


Move fast and crash into others’ things.

FTFY


If I had a friend who jerked the steering wheel around like that, I would not ride with them anymore. That is practically an SNL skit about a bad driver.


It's Toonces, the driving computer.


Mr. T-Bone?


There is nothing "beta" about this, it's at a "proof of concept" level at best.


And it's questionable whether it even proves the concept.


I don't understand why anyone wants anything less than full ('level five') self-driving - that video looks more fatiguing than just driving yourself. Watching someone drive and being ready to take over is a skilled job, why would I want to learn to do it, and do it every time I want to drive anywhere, and not be paid?


I had a car with level 2 self-driving, and it was amazing for the commute, going down from SF to the valley and back. It removed a lot of cognitive load from the boring highway driving, because that part is so predictable that the car could handle it. So that's a big partial win for systems like that.


But if you have to watch and be ready to take over at a moment's notice... it doesn't really remove that cognitive load? Unless you don't watch and aren't ready of course, but that's not really the point.


Except in highway driving, the amount of weird stuff that can happen is very, very limited, compared to city driving. It's predictable, and you learn what your car can handle, and what it can't handle.

I assume from your comment that you haven't actually spent a lot of time in a car with these features, but the best way to describe it is that it's like driving in grooves. You're driving, but you don't have to actively work to keep your distance to the car in front of you, stay below the speed limit, or keep your car centered in your lane. Just like with an automatic gearbox you don't have to actively work to keep your car in the right gear, the car does it for you. Same principle here.
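For what it's worth, that "grooves" feeling comes from the fact that the underlying tasks are simple closed-loop control. A toy sketch of lane centering plus distance keeping (illustrative only, made-up gains and names, not any manufacturer's real code):

    def lane_keeping_steer(lateral_offset_m, kp=0.5):
        # Steering command proportional to how far we've drifted from lane center.
        return -kp * lateral_offset_m

    def adaptive_cruise_accel(gap_m, own_speed_mps, set_speed_mps,
                              desired_headway_s=2.0, kp_gap=0.3, kp_speed=0.2):
        # Brake if the gap to the lead car is below the desired time headway,
        # otherwise track the driver's set speed.
        desired_gap = desired_headway_s * own_speed_mps
        if gap_m < desired_gap:
            return kp_gap * (gap_m - desired_gap)  # negative value means brake
        return kp_speed * (set_speed_mps - own_speed_mps)

None of that requires understanding the scene, which is part of why it works so well on a predictable highway and falls apart in city driving.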


Maybe Tesla should follow the "mock it til you make it" pattern of a lot of these AI companies: just hire a bunch of gig workers to sit in simulators driving your car remotely using the cameras and sensors themselves. Have the model learn from them. I genuinely wonder if this would have better results.


LTE bandwidth too low, latency too high, and reliability too low.


Compared to the video of the beta, the results may be the same


The most disconcerting part of the video is watching real world objects blink in and out of the computer's "awareness".


Humans gain object permanence at around 3 months of age. This software does not yet seem to have mastered it.


Do we know what that view is showing though? If you tapped into a 3-month-old's visual system you would see things disappear too, but that's not the full story. Maybe it's the same here?


Possibly, but it seems odd to me that they would hook it up to the raw visual system rather than showing the "current state" of the car's perception.


Yea, I understand there is noise in signals so some jiggly lines are OK, but the detected objects disappearing and jumping around seems horrible.


I think this is ok. The perception system has a threshold for the classifications and all the logic comes after that. It can't make reasonable decisions that humans can understand on something that is 80% likely to be something. It's where this tech is at right now. What is not ok is to call this FSD.
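In other words, the downstream logic typically only sees detections whose confidence clears some cutoff, which is why objects blink in and out right around that boundary. A minimal sketch of that idea (hypothetical threshold and field names, not Tesla's code):

    CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff

    def filter_detections(detections, threshold=CONFIDENCE_THRESHOLD):
        # Each detection is a dict like {"label": "car", "score": 0.79, "box": (...)}.
        # A car scored 0.79 in one frame and 0.81 in the next will appear to
        # blink in and out of the visualization even though the raw signal
        # barely changed.
        return [d for d in detections if d["score"] >= threshold]

One common mitigation is to track objects across frames so a briefly low-confidence detection doesn't just vanish, which is presumably what people mean when they say the system lacks "object permanence".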


Honestly surprised people are surprised.


> people walking in the street, etc, I would expect to be problematic

As a baseline, this should not pose a problem.


Driving Instructor Simulator 2021


Thanks for the video link, very enlightening. It is disturbing that these are allowed on public streets; its driving skill is far below even the most nervous beginner's.


Seems predictably terrible.


According to what I've seen people saying online, in most of these cases Tesla does identify the vehicle, but it identifies it as going in the wrong direction, so it doesn't think it is a problem for pathfinding. Vehicles going in the opposite direction are marked in yellow on the visualization, so they are hard to see sometimes.


The frame grabs of this incident clearly show no vehicle, yellow or not. There are a couple of other yellow objects visible, so this is not a case where it would be hard to see.


In the video I actually do see the yellow outline if I really turn up the resolution. It's super faint though, and hard to tell if it is there the whole time. Easier to see in https://www.youtube.com/watch?v=uClWlVCwHsI , where at 4:00 the cars are all yellow when they shouldn't be.


True! While it's difficult to see, the yellow car is present and turns green when it is passed, so the car appears to be detected by the system.

Very hard to see, because the brightness of the yellow is pretty close to the brightness of the background, so there's almost no contrast. They should have picked some other color instead.


I don't think optimizing for Youtube is important.


But it usually marks oncoming traffic; it seems to be flat-out missing the cars in some cases.


I'm both surprised and underwhelmed by the video. This video makes me really doubt the FSD system because it looks very hesitant, but at the same time the 4 "near misses" look so much like the driver overreacting to something that was perfectly fine (especially the curb and the """head on""") that I had to check whether the youtuber had some ties to another carmaker.


December 2015: "We're going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years."

January 2016: "In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY"

June 2016: "I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year," Musk said.

March 2017: "I think that [you will be able to fall asleep in a tesla] is about two years"

March 2018: "I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person."

Nov 15, 2018: "Probably technically be able to [self deliver Teslas to customers doors] in about a year then its up to the regulators"

Feb 19 2019: "We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"

April 12th 2019: "I'd be shocked if not next year, at the latest that having the person, having human intervene will decrease safety. DECREASE! (in response to human supervision and adding driver monitoring system)"

April 22nd 2019: "We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year."

April 22nd 2019: “We will have more than one million robotaxis on the road,” Musk said. “A year from now, we’ll have over a million cars with full self-driving, software... everything."

May 9th 2019: "We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too"

Dec 1, 2020: “I am extremely confident of achieving full autonomy and releasing it to the Tesla customer base next year. But I think at least some jurisdictions are going to allow full self-driving next year.”

-

Elon's just been repeating the same promise for over half a decade now. Oldest trick in the book.

Disclaimer: I drive a Model 3


This is pretty common in ML projects, and a big reason why there aren't many major companies whose core product is based on a complex ML algorithm that isn't fully baked by the academic community first.

In theory, if the approach to self-driving that Tesla is pursuing in any given year actually worked... then the release would be about two years away. In reality it hasn't been working well enough, and every year a new plan is drawn up to reach full autonomy in 2 years.

This is also coincidentally slightly longer than the average tenure for an engineer/scientist, and as such the champions of a given strategy/approach will have departed the company before someone observes the strategy not panning out.


As an ML researcher, I endorse this message. Casual readers may want to re-read what they wrote, because it's really true.

Exploratory AI should be thought of as "potentially kills your company if it doesn't work and you gamble on it working."

The ultimate truth is that you're outsourcing your thinking to ML researchers, much of the time. And as someone on the ground floor, let me tell you that we often don't have a clue how to achieve X any more than you do. We have ideas, testable theories, plans that may or may not work, and have a higher chance of success than random people. But it's still only a chance.

I don't think a lot of companies have fully internalized that. If your company is based around the premise of exploratory AI, you are literally gambling. And it's not the type of gambling you can win in the long run with. It's closer to "put it all on black" than "let's take a principled approach to poker."


I hope as an ML researcher, you're sensitive to the ML/AI usage here.

There are lots of ML algorithms that definitely work, but none of them involve claims of intelligence. So it's not just "exploratory AI" that you're talking about. It's "any AI": anything claiming to be "artificially intelligent" and to fully replace a human in a key decision-making position (but not "any ML", since ML encompasses much more boring stuff).


The number of companies that seem to be charging forward towards AGI is small. Most of the companies are doing what you might call "realistic AI": some definable goal (like self-driving) which everyone agrees will eventually happen (and probably will) but no one is quite sure how to get there in every detail.

I try to be an optimist, mostly because of how many counterexamples you see from history. Didn't some newspaper claim that human flight "may be invented in the next several thousand years" shortly before Kitty Hawk?

But for some reason, rich people keep coming to me with what they're trying to do. My advice is the same: you're going to lose your money unless you bet on proven techniques that exist today.

Take AI dungeon as the prime example. That is what I would call a grade-A idea. It was incredibly obvious that it would become a company (now Latitude) and that it would be successful, if someone was willing to put in the effort to make it so (which Nick Walton did). Once AI dungeon worked, and it had a "minimum viable working thing," the rest was a matter of execution.

But a lot of the ideas seem to fall into the category of... Well, for example, someone came to me saying they wanted to build a "virtual universe, filled with virtual people that you can talk to."

It sounds lovely on paper. But what are you selling, really? There has to be some specific value proposition. So strike one is that it's an unproven market. You yourself want a virtual universe. But is that virtual universe going to lead to something that will solve a lot of people's problems? And we haven't even begun to discuss how you're going to get there. What do you mean exactly by "virtual person"?

It's easy to pick on some of the outliers. But unfortunately, the problem runs much deeper. There are people who genuinely believe that AGI is within reach within our lifetimes, or perhaps within one generation. But whenever I try to corner them into giving specific details on how precisely to get there, the handwaving begins.

This is now a complete tangent, but, I found myself excited and enthusiastic to pursue AGI after a long conversation with a certain someone. They had "fresh eyes" -- a new way of viewing the situation, unlike anything that people were currently trying.

Unfortunately, after throwing myself into that mindset for several weeks, I had no choice but to conclude that their chance was closer to zero than 1%. And I was really trying hard to find that 1%, with all my mental effort (such as it is).

So what choice do we have but to let people pursue impossible dreams, and return to the work that we feel we can make an impact on? Live and let live. And of course, there's the likely outcome: our predictions will be incorrect, and we'll be talking with AGI in a virtual universe sooner than we think. But I wouldn't fall in love with their dream.

(I also think you were unfairly jumped on, and that you had a fine point, for what it's worth. Thanks for the prompt.)


Take AI dungeon as the prime example. That is what I would call a grade-A idea. It was incredibly obvious that it would become a company (now Latitude) and that it would be successful...

Not to further rain on the parade (without good reason) but I should mention...

"Recently we’ve learned that the server costs incurred for both the Griffin and Dragon models are higher than our total revenue. Nobody has ever deployed AI models this powerful before, and it requires extremely expensive hardware. If we don’t make changes to cover the costs of that hardware soon, AI Dungeon will not be financially viable."

https://blog.aidungeon.io/2020/10/01/ai-cost-update/


AI Dungeon and GPT-3 are exactly the kind of superficial BS that fails over the longer run, or only serves to fool people. A more advanced Eliza, but with content lifted from real people.


Most of the companies are doing what you might call "realistic AI": some definable goal (like self-driving)

Self-driving is absolutely the key problem, I'd say. I sympathize with your optimism. I'm optimistic about what "computers can do" but I'm pessimistic about DNNs + standard control becoming able to navigate the human-machine border.

Basically, I think a lot of problems are "AGI-complete", especially problems around human-computer interaction, more problems than people like to admit. And remember, for things just in the NP-complete class, an "average" problem can be easy; it's the few examples that prove troublesome. It seems to me that AGI-complete problems are similar (and yes, I realize the term is a neologism, defined only by extension etc, but I'd still say it's valid).

I also think you were unfairly jumped on, and that you had a fine point, for what it's worth. Thanks for the prompt.

Theoretically having lots of "karma" should make me not care about it. Theoretically...


Basically, I think a lot of problems are "AGI-complete", especially problems around human-computer interaction, more problems than people like to admit.

Having recently become a parent, I have a newfound appreciation for how complex spatial navigation tasks are. Children learn to recognize faces and objects within the first 6 weeks to 4 months of their lives. They learn to navigate environments over the next 1-5 years; language is fully understood for non-gibberish use cases over the next 5-20 years. It's not a fair comparison, but it provides roughly the only yardstick we know of.


I think the comparison to NP-completeness is very apt. Will have to borrow that one.


What real world neural network algorithm is "fully baked by the academic community"? I don't think there are any.

I don't think there are companies with products based on AI where the AI has to work for the company. Google uses AI for search, but search can screw up and search returns a lot of just indexed results. There's no "real world application" where AI works reliably (i.e., gives a result that you can count on). That doesn't stop deep networks from being an improvement on applications that were previously a combination of database queries. But this same only-relative usefulness can be problematic when companies and institutions delegate AI to make decisions where it doesn't hurt them being wrong but it can mightily screw some random person (from credit to parole to whatever).


The relative improvement is both an oversell and an undersell depending on the context. For many applications the correct answer may be that a reasoned set of DB queries is about as good as it gets, owing to lack of data, no better algorithm existing, or product experience being mildly impacted by changes to the DB fetching component.

When confronted with these uncertainties, internal stakeholders will often swing from "we just need more scientists working on this problem" to "it works fine, why would we spend time on this?" attitudes. The former almost always leads to over-investment where 3 teams of people are working on what should be one individual's project. The latter can sometimes be right, but I've also seen Fortune 500 search rankings that have never been tuned, let alone leverage an ML model.


There's a lot of amazing real world technical accomplishments seen from SpaceX. They recently landed a 1st stage booster for the 9th time. There's been more than 50 successful landings now. The idea that you don't have to throw away 2/3rds of a rocket for every launch is a game changer.

I do wonder if some of these same things could, or would have been accomplished in the absence of Musk, but with the same amount of capital and under the leadership of Gwynne Shotwell. I think they would have. Musk has a big dream, and is great at hyping stuff, but it's not like he's personally engineering the Falcon 9 and its recovery system. Shotwell hired the right people to implement the grand vision.

What I'm concerned about is the people who think that Tesla can do no wrong, and it's the most amazing thing ever. When the reality between the sales/marketing pitch, as you've documented above, and what actually exists in the real world on a certain date is so divergent.


The optimization problem of hoverslamming a rocket is something that I can fairly easily wrap my head around. Even without the advances in convex optimization of solving that problem you likely could have done it with somewhat less robust approaches. I'm pretty certain that rockets could have been landed using the engineering pieces that existed in 2001 with incremental improvements and learning (maybe not as robust so your optimizer eats a few rockets in your first 100 landings).

When it comes to automated driving though you're really requiring solving entirely novel never-before-solved problems. If there is a spectrum between "found a company on doing some existing engineering twice as good" and "found a company on solving the Goldbach conjecture" then driverless cars are a bit more towards the latter than hoverslamming rockets is.
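As a rough illustration of why the landing problem is at least tractable on paper: ignoring aerodynamics and mass change, and assuming constant thrust, the ignition altitude for a "suicide burn" falls out of one kinematics equation. A toy sketch with made-up numbers, nowhere near the real guidance problem:

    G = 9.81  # m/s^2

    def ignition_altitude(velocity_mps, thrust_accel_mps2):
        # Altitude at which to light the engine so that a constant net
        # deceleration (thrust accel minus gravity) brings the downward
        # velocity to zero exactly at the ground: v^2 = 2*a*d.
        net_decel = thrust_accel_mps2 - G
        if net_decel <= 0:
            raise ValueError("engine cannot even hover; no landing possible")
        return velocity_mps ** 2 / (2 * net_decel)

    print(ignition_altitude(velocity_mps=250.0, thrust_accel_mps2=30.0))  # ~1548 m

The real problem adds mass depletion, aerodynamic forces, throttle limits and dispersions, which is where the convex-optimization formulations come in, but the core is well-understood physics. Nothing in city driving decomposes that cleanly.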


Musk has clearly been way off base with self-driving, and Tesla's manufacturing is nowhere near where he said it would be by now. However, with SpaceX it seems like he really is directly in charge of engineering. It was him pushing the booster recovery program and leading it technically. Yes, of course it took a fantastic team to get it to work, but I honestly believe there's no way they'd have got there without him first relentlessly trying out every technical workaround for every problem for years, and second being willing to throw massive resources at it. I can't see Shotwell doing any of that. She's great, but she's the one that makes it a viable company, not a red-hot innovator.


I love SpaceX, but I don't find them quite as miraculously impressive as some people. SpaceX is "just" doing what NASA did in their glory days: rapidly iterating, innovating, and then carrying those innovations all the way through to real world flight. That last part is key. In industry we call that "shipping."

NASA didn't stop innovating per se. In the 1980s and 1990s they worked out their own version of vertical take-off and landing and actually test flew it:

https://www.youtube.com/watch?v=wv9n9Casp1o

Linear aerospike engines are pretty neat too:

https://www.youtube.com/watch?v=FcW9kUUTfxY

Problem is absolutely none of this stuff ever got close to operational. It kept getting cancelled in the prototype stage.

If the DC-X had been developed into a fully reusable first stage, we'd have had something like Falcon 9 by the year 2000 and might be on Mars by now.


What you don't say is that NASA was uber expensive in their innovation, with things like not using common CPUs because they were not well tested. And nobody takes risks there anymore.

The difference between SpaceX and NASA is mainly that SpaceX cares about cost.

Or we can say that NASA has incentives to absurdly increase cost, like any bureaucracy.

There is no way the DC-X could have been developed without enormously increasing the budget (and taxes from Americans). That was the reason it was not done.



DC-X was nowhere near being a reusable first stage. The X-33 was a suborbital prototype.

And innovation when you have many, many billions is one thing; if you have to operate in a commercial market but still find a way to innovate at that level, it's quite amazing.

I have issues with how Musk communicates, but fundamentally I think Tesla is innovating as much as SpaceX. Many here, myself included, have lots of issues with how Self-Driving is developed, but fundamentally it's the right concept.

What Tesla is doing with batteries is quite amazing. I have over the last 2 years spent a lot of time understanding the battery industry, and what Tesla is doing is actually incredibly impressive. People mostly don't understand that because batteries and battery production are a far more obscure topic.

However, whatever else you can say about Elon, he is committed to the projects that he is committed to.

SpaceX will not give up on Starship, or at most will replace it with a changed design that tries to solve the same problem. Tesla will not stop trying to push down the price of batteries; they are committed to it and are willing to go all the way to vertically integrating mining if that's what it takes. Tesla will not stop developing Self-Driving; they will push forward and invest as much as they need to invest.

I do wish the communication around the Self-Driving technology would change. They should have just called it 'Co-Pilot', sold an 'Advanced Co-Pilot', and said they are working on 'Self-Driving' but that it's not a product (yet). Stop promising it will come so soon, and I'm not sure how I feel about letting beta testers put up videos.


> It was him pushing the booster recovery program and leading it technically.

In which way did he lead the program technically?


Sandy Munro was invited to a SpaceX design review session, so he has some first-hand comments about that in this video (@6:17): https://youtu.be/S1nc_chrNQk?t=377


The devil is always in the details.

Problems that are conceptually simple can still be very complex once all of the details are added. Theoretically landing a rocket is a simple matter of turning it into a giant model aircraft and doing the math on how much fuel you need to reserve. Re-lighting an engine is conceptually easy too. In practice we know this is a huge accomplishment.

Self-driving is a problem that starts out hard. Think about how you get a computer to recognize other objects, especially other cars, based on its fairly limited set of sensor inputs. Once you add in the details, like dealing with unexpected road conditions, detecting vehicles that are partially invisible to your sensors, etc., it's hard to see a future where the technology is viable, at least in the near term. All computer vision stuff currently has a sizeable false positive/negative error rate that you just have to accept. But on the road a false negative or false positive can be fatal.


> it's not like he's personally engineering the Falcon 9 and its recovery system

It does seem like he is heavily involved in the engineering, and he does say he's the chief engineer of SpaceX.

https://www.reddit.com/r/SpaceXLounge/comments/k1e0ta/eviden...


He says a lot of things... I think it's generally best to take most of what he says with a Falcon 9 fairing full of salt...


Sandy Munro (former Chief Engineer at Ford, who also worked on lots of aerospace projects) was invited to a SpaceX design review session (@6:17):

https://youtu.be/S1nc_chrNQk?t=377


That's why I linked to a list of evidence about it.


His official title at Tesla is now "Technoking", so make of that what you will.


I think "Space Karen" is more fitting.


Convincing him to insert the “echn” must have been fun.


There's a list of quotes just as wrong for SpaceX.

The issue isn't the predictions. It's the time frames he makes the predictions about. We will have everything he's said one day, but in the meantime we need to call him out for what he's done: lying, and why he's done it: to get money.


I actually don’t think he’s lying, I think he believes his own hype. That’s why he’s so incredibly motivated.


My take as well. I've known so many people like this. I've been a person like this. Someone close to me is working for a small startup run by a guy like this.

It's an unfortunate flipside to their willingness to work so hard-- they almost need to believe in the impossible to keep up the pace. Ideally there is someone between them and the public face of the company, but all too often they can't help spouting off stuff they should be keeping under wraps in front of TV cameras or in tweets.


Yes, he's a believer. This belief in discounting the reasoning gap between humans and software is why Musk is both up at night worrying about evil AI taking over the world (it may already be too late!) and also genuinely believes that autonomous driving is a "solved problem" that just needs some of the rough edges removed (just a couple of years away!).

It's like there is this enormous gulf that most others can see, but to Elon it's invisible.

So don't let anyone tell you that philosophy and theology don't matter. We are witnessing a multi-billion dollar bet being made on the basis of some extreme views about the nature of man, and because this appears to be a core belief of Elon's, I doubt he will ever update his priors to make more effective investment decisions based on feedback from real-world tests. He will always view this as a project that is almost ready, with just a few technical glitches to overcome.


If someone in a position of power looks honest, it's either because they have such a huge competitive edge that they truly don't care, or because they are incredible bullshit artists. In the case of Musk it's a mix of both, and so far he manages to turn some hype into reality. He is the epitome of plausible deniability.


Those are really the only two possibilities ever? They're just people.


If you believe that I have a bridge to sell you.

The reason why he's so motivated is because his compensation package is so ridiculous.


I think it's fair to say that an extra marginal tens of millions of dollars or so will not affect Musk's life in the slightest. I find it hard to believe that Musk is motivated by money at this point.


I was talking about the 50 billion or so from Tesla: https://observer.com/2019/09/tesla-elon-musk-compensation-pa...

If he is so uninterested by money he can of course show that by giving it to charity.


He was always like that, which is why he did two internships in one summer and dropped out of Stanford to found Zip2. He slept in the office. They had one computer, so it ran the service during the day and he used it as a dev machine through the night. Some people, that's just the way they are.


Again, if you believe that I have a bridge to sell you.

18 hour days are the "walking barefoot in the snow for three days" of our time: https://en.wikipedia.org/wiki/Road_to_Canossa

Amazing how neither sleep deprivation psychosis nor frostbite is a thing if you're rich enough and talking about how virtuous you are.


>>I do wonder if some of these same things could, or would have been accomplished in the absence of Musk, but with the same amount of capital and under the leadership of Gwynne Shotwell.

That assumes the same objectives would have been pursued without Musk, which I find unlikely, given the skepticism and criticism Musk had to endure, and dispel, in order to keep his companies on-target, without a scale-back of ambitions, all those years.


You forgot the best quote, I think, from Autonomy Day in April 2019: “We will have more than one million robotaxis on the road,” Musk said. “A year from now, we’ll have over a million cars with full self-driving, software... everything."


Ha, added, thanks!


I wonder if Musk is intentionally bullshitting or if he completely underestimates the problem space because he's confident of delivering big things (SpaceX, electric cars).

The folks at Waymo/Google, some of the smartest people in the industry, admit to how hard of a problem this is and set realistic expectations. For instance, Waymo (and every SDC company except Tesla) say Level 5 autonomy is impossible and they are strictly targeting Level 4. But here is Musk who says "Level 5 will be coming next year" every year. I feel like there is a serious lack of humility at Tesla (or maybe it's just Musk).


Recently I asked myself that same question and came up with a satisfying theory.

Part of Tesla's business model is selling $10k FSD packages based on a promise that they will eventually deliver full self-driving. If, at any point in the future, any evidence surfaces showing Musk or other senior leadership at Tesla did not have confidence in their ability to actually deliver FSD, this would amount to large-scale fraud and may result in a class-action lawsuit.

On the other hand, if in all communication, internal and external, Tesla leadership projects confidence in their ability to deliver FSD, but then "unforeseeable circumstances" prevent them from actually delivering it, then it's not a scam, but just an unlucky turn of events. At the very worst, Tesla might have to partly reimburse their customers, but most likely they can keep the money.

So it does not really matter what Musk believes deep down. It's probably easier to do his job if he consciously makes himself believe that he can deliver.


> If, at any point in the future, any evidence surfaces showing Musk or other senior leadership at Tesla did not have confidence in their ability to actually deliver FSD

Tesla recently said that "Full Self Driving" is not capable of autonomous driving. They tell the regulator one thing and tell their customers another:

https://www.thedrive.com/tech/39647/tesla-admits-current-ful...

https://www.caranddriver.com/news/a35785277/tesla-fsd-califo...


To be fair... he has convinced millions of people to put up $10k towards FSD. He’s laughing all the way to the bank. And we are....well...we’re on HackerNews complaining about him on a Friday night.


winning isn't on its own admirable, nor is complaining/criticizing on its own shameful. that's just the toxic ideology of "might makes right" that permeates our society


While I entirely agree with the sentiment of this post, I often find myself wondering - how does one effectively combat this attitude without making a full circle? How is anything to be established in society without some sort of "might", so to speak? Just a thought that resonated with me when I read your comment.


interesting line of thinking.

just to write it all out for clarity (not because I think you misunderstood): the might makes right that I was pointing at is an attitude of "winning is everything. winners are superior, no matter how arbitrary or unfair the game."

the might makes right that you're pointing at is the fact that in the struggle to define laws and norms, there are winners and losers.

I think it's just a question of values. there is no "neutrality" - there is always a fight between competing values, and everyone picks a side. the value that says "winning is everything, winners deserve everything, losers can die in a ditch" is a bad value to me.


Ironically, we're all here complaining on HN while he's probably hard at work on a Friday night.


What's your point? That conning tons of people out of $10k is praiseworthy?


I think the point is that Musk achieved what he had intended to do, something financial.


okay... so why is that inherently praiseworthy?


Who said it is praiseworthy?


It’s interesting, a couple years ago I remember there was some debate about upgrading the public transport in Seattle(?), and I saw some comments saying maybe we need to hold off in case a full autopilot car infrastructure might emerge as a cheaper alternative. I think we’re all a lot more grounded at this point...


> full autopilot car infrastructure

Not sure what this phrase means to you exactly, but it may not be the worst idea-- the only way I see a mass rollout of automated vehicles any time soon at this point is on dedicated roads/lanes with roadside sensors to assist. Anything else is gambling with human lives.


Oooh, that reminds me... it's been 3 years since my very gullible cousin called me an idiot for not believing that FSD would be here in 5 years. Two more years to wait for that 'toldyaso'. Shame that I didn't put money on it.


I was a fan of Tesla cars in 2016, didn’t know much about them or Musk but heard that they were fast, safe, looked good, and solved the problem of EV charging with the super charger network. Went to look at the stock price to see if I should buy some and almost fell out of my chair. Around the same time they were promising a solar roof that looked better than a normal roof and cost less, which sounded way too good to be true, and also the $35,000 model 3, which would be under 30k with the tax credit, also very difficult to believe. The final straw was the promises of full self driving, which clearly was a much harder problem than they were claiming and potentially impossible with their chosen tech stack. The more you look at this company the crazier the story gets, and now it’s worth more than the top few automakers in the world combined.

Ed Niedermeyer’s “Ludicrous” chronicles the story without the breathless media coverage that usually follows Musk. It’s incredible what the Musk empire has become, but there’s a lot more to it than meets the eye. https://www.amazon.com/Ludicrous-Unvarnished-Story-Tesla-Mot...


It took three years to go from "in two years" to "next year." That's three solar years per Elon year. Starting at two years in 2015, that means we'll get it for real in... 2021!


He must have switched to Martian years prematurely.


FSD is as hard to solve as general intelligence. I don't get why they bother going down this rabbit hole with sub-GI AI technology. While waiting for an AI breakthrough, we should look into instrumenting our cities. That's how planes and drones can navigate, approach and even land. Sure, cities are more complex, but instrumentation can automate the routes. And it's a great infrastructure undertaking the likes of which we haven't had in decades.


I think it’s wrong to say that FSD requires AGI since driving is a specialised skill.

I do agree though that there’s so much that can be done right now without waiting for some magical future tech. Inner cities without cars can be solved with public transportation. De-urbanisation can be achieved with a combination of remote work, cashier-less shops and a well-run postal system. We don’t need to wait to do these things, they’re achievable now.


I mean he was promising to land rockets and put men in space and launch a mass market electric vehicle for quite a bit before he did any of those.

Loads of people called Elon a charlatan, but for them criticism came at virtually no cost, unless you account for the cost of not betting on him.


I feel like developing an AGI to drive cars from monoscopic camera vision alone is a much harder task than any of those.


I don’t think it’s accurate to say Tesla is developing an AGI. I mean, the point of the project is to get a computer to drive a vehicle, not interpret French literature, which means the “AI” will be pretty “specialized,” which is kind of counter to the “general” in AGI.

So maybe they’re developing an “ASI.” But because there are already plenty of those and we call them “AI,” we might just say Elon’s trying for a harder, higher scale, and more commercialized version of a set of technologies that already exist. Kind of like all the other “impossible” things people mocked him for missing deadlines on in the past. And now, we’ve arrived at the point.


Driving may require some level of general intelligence and reasoning beyond what a traditional AI can accomplish if they truly want to handle all the edge cases implied by a cross-country summon.


They say that about every problem we can't solve.


The real world poses new problems instantaneously. Think markets.


He has also made some very wild claims about Neuralink recently; at this point I take basically everything he says with a big grain of salt.


From an available-information perspective, there have been prognostications for some time now that AGI would be available in the 2020s if Moore's law continued.

When SpaceX formed in '02 and Tesla in '03, many observers would have estimated that privately funded moon rockets, luxury electric cars, and mass-produced solar panels would have been a bigger challenge.


Keep in mind that Tesla's approach for solving FSD still uses mathematics that's fundamentally half a century old and is by definition narrow AI, not AGI.


> uses mathematics that's fundamentally half a century old

I have news for you, the most common operations in AlphaGo, GPT-3 or any state of the art AI are: multiplication, addition, max, log, exp, sin, cos and random - all functions known for centuries.

It's the architecture, data and the compute that are making the difference, not the math, and all three are recent accomplishments.


Yes, that's my point. That neural networks were conceptualized roughly half a decade ago. Obviously there have been a lot of advancements like convolution, drop out, attention, deep learning, etc. But fundamentally this is old mathematics and while it's yielding good results at solving specific problems, it's not the answer for AGI. For AGI we will need new breakthroughs.


century ago**


Exactly. Tackling AGI will require imo a significant breakthrough in the field of AI. Meanwhile, it's much more productive and practical to look into how we can instrument our cities to make self driving cars possible with today's technology.


It's the other way round, and you hint at it in your last sentence: the critics have had huge opportunity costs, while Elon had virtually no cost for his repeated overselling of Tesla's FSD capabilities. To the contrary, a good chunk of Tesla's market cap (and thus Elon's net worth) is clearly attributable to its large followership of retail investors hyped by Elon's predictions and overselling (and I'd guess that even some professional investors are falling for the "Level 5 is right around that next corner" claims). Tesla is valued as a tech company, not as an automotive company, and that is in good part due to its alleged competence in the software tech realm, of which FSD is the most coveted piece.


Soon, we'll have practical plastic recycling, practical fusion reactors and widespread secure C codebases :-)


Don't make predictions or estimates when you're happy.

Marketers are usually overly happy, especially when in the act of estimating. This makes it hard for them to resist being overly generous with their predictions.


Elon Musk's "two years from now" (or even "later this year") is everyone else's "in five to ten years." His 2 being everyone else's 5 is also appropriate given that he describes their level 2 autonomy as level 5.


Why hasn't the SEC punished this CEO for 5 years of blatantly false statements about a public company?


There's a big difference between promising tech in the future and saying that something is true now.

This kind of marketing is a standard release of information in the form of opinion about the future, which shareholders can decide for themselves whether to believe or not.

When the SEC slapped him down previously it was because he conducted a release of information about "facts" - not opinion - that were supposedly true now (that he had a private buyer at $x price). This was deemed a market manipulation because it was presented as 100% true. Whereas self-driving cars in 2 years might be true and it's up to individual shareholders to do their own analysis.


Because none of the statements are false, they're "forward-looking statements".


It’s unreasonable to hold someone’s statements of opinion and future expectations to the same standard as statements of fact. Otherwise every single CEO would be guilty — they tend to be an optimistic bunch.


The SEC’s raison d’etre is to preserve shareholder value.


The world's most successful huckster, and being repeatedly proven wrong only seems to increase his success and cult following.


Perhaps he’s an ambitious person with both great successes and great failures, and people admire that?


Still they are at the forefront in the field. IMO in the same ballpark as Waymo.

Elon operates by setting impossible goals and under-delivering on them, but still going faster than all the other players. Most FSD buyers are reasonably aware of the gamble they are taking.

If Tesla turns out not to be capable of delivering FSD, it's a straightforward class action to refund everyone. Meanwhile they are trying hard to build the damn thing, and my kudos for that.


>If Tesla turns out not to be capable of delivering FSD, it's a straightforward class action to refund everyone.

The reality distortion field in action. "Hey no big deal they're straight up lying to consumers and putting their lives in danger!"


> Still they are at the forefront in the field. IMO in the same ballpark as Waymo.

They are nowhere near Waymo, who is doing actually driverless rides for the public. Tesla's confidence level is still at "you need to keep your hands on the wheel at all times".


I have questions about Waymo's taxi service. I'm certain that they occasionally run into situations that require human intervention, or at least need to be prepared for such an eventuality (see examples below). In such scenarios, what happens? The passenger operates the car? A remote operator takes control? The car refuses to move and becomes a hazard or causes congestion?

For example, you come up to a 4-way intersection and a traffic cop there signals you to stop and says you can't go through because all the manhole covers are off, but since you want to turn left you can go through the parking lot of the corner gas station to get onto the other road.

For another example, right after the st-patrick's day parade is over or right after the college town's team has just won the NCAA championship, the street is full of people and the car has to inch forward at a rate of half of a mile per hour and the pedestrians don't get out of the way until the car is about to touch them.


In the scenarios you described, a remote operator would "help" the car. They've specifically said remote operators can't control or joystick the car, but can "answer questions" which I take it to mean plot a different course. It can also pull over at a safe spot and not get stuck in the middle of an intersection, though nobody has run into this issue to test it from the videos I've watched.

The examples you gave sound like scenarios where remote operator would instruct "don't go there, go here" and the car does it by itself. But specifically about the traffic cop, it can actually detect hand signals from them (can't find the video though).


> since you want to turn left you can go through the parking lot

In my state this is explicitly illegal. You may not use a parking lot as a thoroughfare, regardless of convenience.


Though geofenced, Waymo actually does have fully autonomous driving working already.

Waymo also has the sum total of CAPTCHA learning for them.

Tesla is trying to solve a much more difficult problem. It’s unclear if they will.


Waymo also has LiDAR which is infinitely better for bounding box detection than images.


I beg to differ. It seems pretty clear whether they will.


I agree, betting against progress after what has happened with GPT-3 and AlphaGo seems unwise.


Waymo is a joke outside a trained, geofenced area. It is fascinating to me how a company went from "gee, we can index search results and sell ads" to "we can solve one of the hardest general automation problems in the world."


It's not designed to work without the map/geofence, so why is this your critique? Do you also judge fish for their crappy tree climbing?


It’s not the same people there doing search and doing self-driving.


I think Google are basically just the funders, the actual leaders on the project seem to be engineers with strong AI backgrounds.


AI does not exist.


AI exists, but not necessarily Artificial Intelligence.


As a marketing term yeah.


They definitely are not. Anyone working in this industry overwhelmingly agrees that Tesla is far behind the well-established self-driving car companies (Waymo, Cruise, ...), mainly because they continue to rely on cameras only (and not lidars).


What are you talking about? Autonomy with vision only is way more advanced. Lidars are useless when vision is solved.


Solving vision is an incredibly difficult problem that is made harder by the lack of stereoscopic cameras. There is fundamentally no need to rely on vision alone other than bragging rights. It provides almost no practical benefit.


I think (and hope?) that the parent comment was sarcastic.


It’s a common talking point, I doubt that comment was sarcastic. Musk has said that he thinks Lidar is useless because humans operate vehicles with just vision, which of course is an unfair comparison since humans have intelligence to help them.


This has always been a ridiculous argument. Human eyes have a much higher contrast ratio than commercial cameras, and have built-in stereoscopic capabilities with some degree of rangefinding thanks to the fact that they are quickly adjustable. Normal cameras have none of that.


It was not sarcastic. I'm not really sure what you're arguing. It's obvious that vision-only would be more advanced than using lidars. Lidars are expensive and take space, so that's why they're trying to use cameras only.


And it's quite obvious that there are at least some situations in which vision will not work whatsoever - e.g. dense fog. I personally would quite like my autonomous vehicle to operate happily through dense fog. If you're happy to pull over to the side of the road and wait it out, be my guest.

Having multiple orthogonal subsets of the electromagnetic spectrum at your disposal provides redundancy and diversity - two features that simply CANNOT be accommodated with a single class of sensor, no matter how advanced it may be.


Yeah, that's true. Teslas have a front-facing long-range radar and multiple ultrasonic sensors around the car for close-range detection, like most modern cars. But they are there just to avoid hitting anything; they can't be used alone for driving autonomously.


How can they be at the forefront of this field? A driverless Waymo will come and pick you up from in front of the Costco in Scottsdale AZ, right now. This Tesla can't do anything unsupervised.

Of all the self-driving companies it is likely that Tesla is dead last, behind Waymo, Cruise, Aurora, and Zoox.


It is outright scary how many Tesla fanboys think that Tesla are at the front of the pack. How much Koolaid can you drink without even checking what the experts in those industries are saying?


Haha I know. So many people think it is fully autonomous too.


Why can't the same Waymo come and pick me up on 5th Avenue in Manhattan?


Not sure if serious, but it is against New York law. That might not be the only reason, or even the main reason, but it remains a reason.


There is some data suggesting that Waymo cars might avoid collisions better than humans https://www.theverge.com/2021/3/8/22315361/waymo-autonomous-...

Meanwhile, Tesla Autopilot accelerates toward walls or parked vehicles. Tell me again how Tesla is "at the forefront in the field".


> If Tesla turns out to not be capable of deliver FSD, it's a straightforward class action to refund everyone.

And possibly a few deaths, yeah.


This [0] is much worse than the video in the article, IMO. It repeatedly tries to pull out in front of cars going 45+ miles an hour, in some cases giving the driver very little time to stop it. For level 5 (what Elon keeps claiming they will have very soon), they need to be able to go tens of thousands of miles without a crash-avoiding human intervention, not 10 minutes.

[0] https://www.youtube.com/watch?v=uClWlVCwHsI


The last one at 8:25 is really bad. And yeah, this is a much simpler situation than the city driving, and it still tries to kill the driver repeatedly.

Also, far from the worst thing in that video, but I notice the car sits waiting to turn with its wheels turned into the oncoming lane. You're not supposed to do that, because if you get rear-ended, you're going into oncoming traffic; best to keep the wheel straight until you're ready to turn.


Wow that's pretty bad. I only watched about half the video - but it appears the Tesla is patiently waiting for the oncoming traffic to clear, same as any human driver would. And then when there is a suitable gap in oncoming traffic that any human could accelerate calmly and normally across, to finish the turn correctly, the Tesla just doesn't go. And then in the time that it took to be indecisive, more cars appear and it decides "nope, not time to go now either, keep waiting". Then at 4:05 in the video it does decide to go and almost accelerates right into the path of an oncoming car.

At 5:01 in the video it clearly has plenty of time to go and just doesn't move.


When learning to drive you have to learn how to have the judgment to commit to a decision, the car seems incapable of that and keeps reevaluating its decisions at every step. The problem is that in these cases you have to commit and accelerate to make it through.


Holy shit! This is worse than I would have expected! This is shockingly bad; real people are going to die if this is rolled out to the masses. The fact that this is a beta is terrifying.


The thing that's more disturbing is how inaccurate the car placement is for the traffic passing on the right-hand side.

Lane shifting, totally wrong placement, everything moving in an arc.

That is a system not fit for purpose.


Looking at this video, I am very skeptical Tesla can solve FSD with the sensor suite on the car. The bounding boxes are all over the place; it doesn't look like the car can detect fast-moving objects beyond 20-30 meters very well.


It's astonishing... endangering other drivers on a highway for what is basically live testing an experimental feature.


Try being the driver by looking solely at the display. It's a complete joke. If that screen represents the totality of the car's perception of the world it's simply never going to work. Look at the cars in the right lane, they're fucking all over the place, seemingly changing lanes into each other constantly. It's like it's not even calibrated or something. WTF??

Then try making the left turn decision solely based on that screen. Good luck. Things don’t appear until they’re way closer than what is plainly obvious using vision.


Just watched the first two. I would've gone on those; probably wouldn't trust the AI to accelerate fast enough though.


Yeah, you missed where it seems to nearly pull out right in front of a truck hauling a boat (~8:27).


In the second or third one there is a truck coming and the box for it doesn’t appear on the display until it’s quite near, and then it disappears and reappears several times before the truck passes...


That is super stressful to watch.


Is flying that drone like that legal? I guess maybe if there was another observer?


How can any regulator watch this without intervention?

Tesla should upload this video to the same page as the misleading video that claimed the driver "would not have to be here".

Not saying other manufacturers are worse, but this borders on fraudulent advertising.


You are assuming someone has given them the authority to intervene.


The NHTSA has authority to mandate a manufacturer recall for safety, in this case to mandate that Tesla disable this broken software.


SEC can punish stock scams.


Ah yes, the “everything is securities fraud” loophole Matt Levine always talks about.


This, specifically, appears to fall into that category.


I’m not disagreeing. Here’s an example of the blog I was referencing. It’s interesting how the SEC acts like a catch all bucket of stopping “bad” behavior under this reading of the law. https://www.bloomberg.com/opinion/articles/2019-06-26/everyt...


Thank you.


Until someone does, the people posting these videos, and anyone found to be driving with FSD enabled should be fined and have points applied to their license for reckless driving.


That sounds premature. There have been zero FSD beta crashes so far, according to Elon.

The beta is clearly designed to err on the side of caution. The majority of disengagements are due to this behavior. You could actually argue that an overly cautious beta is safer than one which is 99 percent reliable, because it keeps the driver from becoming complacent.

Until there are actual fsd beta accidents I say let them test their software, the upside of self driving for society is worth it.


The beta is way more aggressive than any other autonomous system I've seen. I'm not sure where you're getting the impression it errs on the side of caution.


Any non-AV driver seen doing even 1/4 of what FSD did in that video would have been pulled over and fined. Regardless of their personal driving history. When FSD does cause an accident, the driver who enabled it can also be charged with more severe offenses.


Do you drive in the city much? Many of the traffic rules that can be strongly enforced in rural and suburban areas cannot be followed in cities.

You frequently have to enter the other lane, cross a yellow line, or turn into a different lane because of double-parked cars or construction.


This is not behavior consistent with experienced driving, it’s the opposite. In cities you have to know when to follow the rules and when it’s safer to break them. At one point in the video the car stops behind double parked cars and doesn’t know enough to go around them. At another point it happily drives on the wrong side of the road, completely unaware of what it’s doing. All this behavior is incredibly dangerous and if I saw another driver doing that I would stay far away. With this many close calls I don’t believe that there have been no crashes or fender benders, no matter what Elon says.


If there really have been no crashes with this system on the road, it is a testament to how safe the average human driver actually is. That doesn't say good things about the future of self-driving cars.


Well everyone worried that driver assistance functionality like this would make drivers complacent.

Perhaps this does the opposite: it's so unpredictable that drivers must pay more attention than they normally would and therefore they don't crash :D


I’ve lived and driven in cities my entire life. If I was a police officer and saw any single event captured in that video I’d keep my eye on the vehicle. The vehicle in that video repeatedly followed up questionable activity with additional questionable activity. It was inexcusable reckless driving. As a human I have zero doubt that many combinations of events in that video would have resulted in my being pulled over and potentially fined. That the driver in the video wasn’t pulled over was simply luck that there were few (1 to my count) police in his proximity. But the video was publicly broadcast and the human is ultimately responsible for the activity of the vehicle they control. So that driver should be fined.


I'd appreciate it if the NHTSA and DOT would do their fucking jobs and halt operation of these semi-autonomous 5000lb+ hazardous bots on public roads.

Us non-Tesla drivers have not signed up to be part of your volunteer risk-taking beta-testing team, go do this shit on private roads.


Yeah, I'm surprised this is allowed. It's risky for the owner, who opts in but may be misled about the current accuracy of the model, but even worse, it's dangerous to all third-party drivers, cyclists and pedestrians.

It's basically driving like a drunk guy.


I've contacted my representatives about this, I believe that all of us who are concerned should do so as well.


Cruise has some long videos of self driving in San Francisco. It's far better than this.

What's striking is watching the system's screen vs the real world, and seeing Tesla not detecting important things. Tesla's big problem is simply that they are not very good at building a 3D model of the world around their vehicles. They try to do it on vision alone, and, after over five years, they still can't do that reliably.


I don't follow this too closely but are you referring to company-released videos? Seems unfair to compare a compilation of bad incidents from one of many (thousands?) of users of Tesla's system to one curated by the company releasing it. I've seen a few videos of the Tesla doing 15+ minute drives without intervention too.


Tesla has marketing videos as well (and there are great FSD Beta videos); the videos testing Tesla try to find edge cases and the hardest places to drive where it's still allowed. There are serious bugs to fix, but that doesn't mean that Cruise doesn't have them.


I would be very surprised if Cruise had a better 3d rep. It is probably far worse and that is why they are hiding it.


They have a lidar. They already have an accurate 3D rep from the sensors


They have a startup and an unfulfilled promise. Their ADAS, Super Cruise, has no 3D view. Their self-driving vehicle is no competition to Tesla.


Supercruise is not the same as Cruise self driving.


> It's far better than this.

I don't know what you're referring to. Any links?


Putting the near-misses and safety issues aside, this looks absolutely maddening and horrible to drive even in the best conditions. Just watching it sit there staring at a plainly empty and clear crosswalk with a green light, the loud blips and alarms as cars get close, etc. would make me so anxious to be behind the wheel.


I tried FSD in a Model 3 recently. It was quite bad. Biggest problem was that the car wanted to come to a complete stop at green lights, and other cars were behind me wondering WTF the problem was. And it tried to make a right turn from the middle lane without first moving into the clearly-marked, empty right turn lane first. At which point I turned the damn thing off and made no attempt to reengage it.

Not only is FSD not worth $10k; it wouldn't be worth any amount of money if Tesla paid me to use it.

FSD is going to get somebody killed. Tesla should stick to what they're good at and quit hyping something they're very bad at.


The FSD beta, while still not ready, is pretty far from Autopilot that everyone has access to right now.


For most of the video, I felt like "laughably bad" was an exaggeration. The car made poor, questionable moves, but it seemed to err on the side of being too cautious and too protective of pedestrians.

The last two minutes were really shocking. The car would have crashed several times had the driver not been extremely vigilant. Tesla should not be testing this on the general public.


It stops in the middle of the road, drives into the back of parked cars, shakes back and forward, cuts across lanes, stays out of its lane, misreads road signs, pulls in front of cars, gets honked at repeatedly, drives on the wrong side of the road, and drives so slow and unsure that other cars have no idea what to expect it to do. It isn’t only laughably bad it’s absurd that it hasn’t been banned and Tesla forced to issue refunds.


Whether it's laughable or not is a judgment call, but as a frequent pedestrian and cyclist, cars erratically swerving all over the road like that is not what I would call protective.


I agree, protecting the cyclists should have higher priority considering the damage that a crash can cause.


The worst thing about this system is it doesn't make a decision and stick with it. It just kind of meanders between lanes. Then when it realizes it is going in the wrong direction it does the wrong thing to get back on track. It should just follow a turn lane and renavigate.


Just like a normal driver. Have you been a passenger in a car with an old lady like my mum? So it's a bad driver; how much better than the average driver does it have to be?


Agreed. Interesting to reflect on what we humans do instinctively vs what the computer comes up with.


I don't get what's so difficult about self driving. The car just needs to show a path it's going to follow during the next ten seconds and stick with it. Only when the path appears wrong does the driver take control. This lets the driver know how the car sees the road.

Unless the self driving saga has a different goal: transportation as a service. That is where the money is, and that is where full auto driving is a necessity.


Their world model is shit, apparently. It should be a hierarchical Bayesian model, constantly updated and checked for likelihood distance from the priors, and when that gets too big, alert the driver and disengage.

Cars don't randomly appear and disappear in the middle of the open road. If their model cannot interpret these events as "a big fuckin' problem, I'm out of my operational envelope, I need to stop", then that model is indeed dangerous.

Sure, this is easier said than done.


The ML model may be shit and that's ok so long as it shows the path it's going to follow and sticks with it.


Sure, if it's simply a follow the car in front of me model. (Which is lightyears from FSD.) Also then it needs to be super-super finicky about what it interprets as a car/road/etc, and start disengaging the instant it detects something is not right. But obviously Tesla/Musk decided to just tone down the carefulness.


> The car just needs to show a path it's going to follow during the next ten seconds

It could even project this onto the windscreen. Some new cars do a neat trick with perspective adjustment so the projection appears 'flat' even on a tilted windscreen.


What’s so difficult about intelligent language models, you just pick the next few words the model will say.


Yea, really. What's so difficult about X usually means you have not thought about the problem enough.

What's so difficult about making an app? You mean other than having to have some understanding of the 20 years of tech it's built on top of?


Why wouldn’t everyone jump at the opportunity to pay Elon Musk money and risk their lives and time so he can get enough data to then make even more billions.


People probably want to be the first to try something that, if it worked, would change humanity as we know it. Or even just contribute to the progress in their own way. Money is hardly the only motivation for many things, nothing stopping them from owning Tesla stock if they really believed their involvement was going to make billions though.


How would a self driving car "change humanity as we know it"?

Sure it might make travel by car suck less but not in any significant way that would change humanity in any meaningful way. At least not outside of a very narrow bubble we might be a part of.


Humanity was supposed to have been changed 4 years ago.

It’s fairly obvious that the only thing people will achieve is killing themselves, their loved ones, or innocent bystanders.

And Tesla will bring us nowhere closer.

Anyways, hasn’t Tesla been far more successful as a Bitcoin marketing company in a week than as a manufacturer of cars for over a decade?

If you do want to risk your lives over advancing humanity why not risk it with Waymo which is actually making progress in this arena.


Not just the Tesla drivers but the pedestrians and the other cars that have to deal with this unsafe, dangerous behavior.

Watching the live 3D view shown in these videos - it's a very erratic representation of the world around the car and should give pause to anyone who thinks this is "FSD".


I'd happily pay some money to help test something that I've hoped and dreamed would be possible in my lifetime.

Driving sucks and is dangerous as hell. Anything that 1) makes me need to drive less often, or 2) makes driving safer is a worthwhile financial & time investment for me.


Nobody is asking you to do it so why are you so pressed?


Because Teslas pose a risk to people who did not throw money at Musk. I didn’t sign-up for the beta program, so I’m pretty pissed to be an unwilling participant.


>Because Teslas pose a risk to people who did not throw money at Musk.

Heh:

>Because horseless carriages pose a risk to people who did not throw money at Ford.


Over 1 million people are killed by 'horseless carriages' every year. That's not including the ones who will die from climate change.


Yep, driving your car is the most dangerous thing a person can do on an ordinary day, but that hasn't stopped us from making them or using them, has it?

We are not the brightest, nor safest species.


There is now over a hundred years of history of cars being driven.

We normally require a minimum age before a prospective driver may obtain a learner's permit to drive within a city, unlike this FSD, which in the linked video drove like a drunk driver.


Human driving tests are hilariously easy. I bet it would pass with flying colors.

Driving in downtown Oakland was incredibly complex, with all sorts of unusual features; driving the course the car did in the video would have stressed me out a little, and I've been driving for 24 years. Full self driving does need to handle those situations to be trustworthy, but for highway driving and sedate suburban settings it's probably just fine.


Human driving tests in the USA are hilariously easy. In Israel you must take 8 or 12 hours of theory, followed by a written exam, and then 30 hours of driving lessons. At that point, if the teacher thinks you're not going to kill anyone, they give you an internal driving test at the driving school. If you pass, they submit you for the state driving test (you can't sign up by yourself).

The driving test takes 40 minutes, through the city (including all possible variations of parking) and the highway, on an unpredictable route. I think less than half manage to pass it on the first attempt.

If you pass, you get a "new driver" license with a bunch of limitations (no passengers, supervision by an experienced driver for 1 year, etc.). If you get a violation of anything, the license is taken away and it's back to the driving test.

After a few years there is a mandatory refresher course of (advanced) theory with an exam. If you fail the exam, back to driving school you go.

Edit: add into the mix the crazy, Mediterranean style driving on the streets where the exam takes place.


> Edit: add into the mix the crazy, Mediterranean style driving on the streets where the exam takes place.

The test must not screen people well enough if there are that many "crazy, Mediterranean style drivers" on the streets who already passed.


They become crazy after the test. Environmental pressure and evolutionary selection :)

But, from my vague memories, when you sign up for the theory lessons you need to bring a note from a doctor saying that you are healthy and have no history of mental disorders (serious stuff) and such.


In the USA, yes they are.

In Europe, they are far more strict.


> Human driving tests are hilariously easy. I bet it would pass with flying colors.

Driving tests are indeed way too easy, however, the level of driving seen in these Tesla videos would get any human driver a fail on the test. Just the constant indecisive jerking of the wheel would be a fail, let alone all the near crashes.


By that logic no potential new technology can ever be dangerous. What if it turns out FSD is less safe than humans because the technology doesn’t work? Plenty of people got sold dangerous flying machines before flight was actually invented.


It's not logic, just an observation that we embrace technologies that are dangerous anyway.


But only when there is a clear benefit, for Tesla’s version of full self driving that’s far from clear.


If you drive a car, you don't get a choice. Just because you're not in the Tesla doesn't mean you aren't part of the beta.


I'm of the opinion that if the auto-pilot was a real driver, I'd like that "driver": 1) put into the passenger seat and required to simply observe, 2) sent back to school to learn the road rules, 3) re-sit the test to confirm knowledge and re-sit practical.

I also noticed a few sudden stops from the way the guys lurched forwards at stops. Getting rear-ended due to pointless sudden stops is a fail as well.

It's not ready for use. From what I've just seen in that video I'd be de-rating that "driver" from a full license back to probationary or, more realistically, right back to learner.

I'm glad there was a human at the wheel.


As a lead engineer, I would not let software like this out into the wild. The risk is so high and it's not ready. A lot of this would be caught by any engineer driving the car for... 15 minutes. You have thousands of engineers to throw at the problem - why can't you stay in a lane while turning? Why can't you read turn arrows in the road? Both of these are extremely fixable problems. Why is the car stopping in the middle of nowhere?


This is the part I can't understand. You release software to dogfood, public beta, etc when you want to get more feedback and you've exhausted the ability of the previous tier to provide it. You wouldn't release software to dogfood if the programmers on the project can readily find defects in a minute or two, you go to dogfood if the engineers use it all the time and can't find new defects. The only reason for this FSD software to be in the wild is it is part of the larger pump and dump scam.


It would take too long; on the contrary, you want to release it as early as possible. It needs to learn from real-life situations, and to get true real-life data you have to... release it into real-life situations.


>Why is the car stopping in the middle of nowhere?

garbage collection?


I think it's because they're going for a full machine learning approach: the neural network tries to learn the best solution but is unable to receive simple instructions like "go in a straight line".


They're not going for a full ML approach. That's just more Elon marketing language. Tesla reverse engineers have gotten their hands on the FSD beta and inspected the internals - they use ML for perception and scene understanding, but all of that information feeds into traditional robotics prediction, planning, and pathfinding algorithms.


> You have thousands of engineers to throw at the problem - why can't you stay in a lane while turning? Why can't you read turn arrows in the road? Both of these are extremely fixable problems. Why is the car stopping in the middle of nowhere?

Why would anyone risk blowing that whistle when Musk and Tesla have tried to get employees murdered by the police by accusing them of being mass shooters[1]?

[1] https://www.bloomberg.com/news/features/2019-03-13/when-elon...


If a company could produce a computer that could think with the complexity of a 3 year old, that would be revolutionary. To put 3 year olds behind the wheel of roughly a megajoule of kinetic energy would also be revolutionary.


Yes recreating life would be revolutionary. Worth remembering that science has never created life from non life. We’ve manipulated DNA and cloned but never created from scratch even the simplest life forms. We shouldn’t assume that recreating human brains in computers is an unavoidable eventuality, computers are different than brains.


Watching the video gives me flashbacks of trying to teach someone to drive. Actually I'd probably prefer that than trying to babysit this headache in traffic. At least then I'd get paid for the hassle.


Why do I get the feeling that teaching someone (or something) to drive is exactly the idea with the FSD beta?


Yes, like teaching a driver who can't understand complex situations, who is fearless and without conscience. Besides that it's the same.


Both fearless and lacking confidence.


That's probably part of the calculus, but mainly it's to deliver at least something on an over-promised feature.

I also wonder how good the training examples are, if they're just using average drivers without any quality control. Like when I'm teaching someone to drive, I'm following traffic laws to the tee. But when I'm doing my day to day driving, it's mostly about optimizing time. There's probably a reason why Waymo hired and trained drivers to teach their algo in a standardized way.


I haven't really been following self driving closely, but this looks way better than i was expecting. Those streets were very busy, and it mostly did ok. I thought city driving was just off the table, and all the focus was going to be on long haul highway driving.


That video shows the worst performance of the system. On less busy streets it usually does much better. Highway driving has already been solved. City streets is the missing part from FSD (which is now in beta).

Mountain road: https://www.youtube.com/watch?v=6nr9_wBPAHs

Highway: https://www.youtube.com/watch?v=XPrsRM2cxGs


Then again, Teslas repeatedly crash into stationary objects, even on the highway, the latest from this week:

https://mobile.twitter.com/MSPFirstDist/status/1372152867612...


Highway driving is not solved.


Would you pay for what you saw in the video?


I would pay for a device that would detect and alert me about an FSD-enabled car being nearby so I can keep as much distance between me and them as possible.


No, but in general I wouldn't pay for any beta test. I would drive around in it and report bugs if they paid me though. (In case anyone from Tesla is reading.)


People will die, and Tesla will bring the full might of their legal team to prove somehow that the driver/injured party is at fault because of some disclaimer. Musk will continue to be heralded as humanity's saviour. Kanye will be president.

Innovation is the new Moloch.


> All of this testing is being carried out on public roads, for the benefit of the world's most valuable automaker, at basically zero cost.

This is an extreme example of "socialize losses, privatize profits".


I've got a question.

What on earth is with "double parked cars"? Is that an American thing? I've never seen it in my life. How can it be legal to just park in the middle of the road and block traffic?


In parts of Boston you have to basically consider the rightmost lane the "temporary parking" lane. It's pointless to try to drive in it because every block will be two or three cars or vans parked making deliveries or picking people up.

The actual parking spots are full of residents' cars which haven't moved since the last street sweeping or snow ban.

To top it off, many/most streets are one way, meaning, cars will be "double-parked" in the left lane too. Fortunately our streets are 3-4 lanes wide so there's still room for actual traffic.

I used to try to bike on these streets (which are also full of potholes). I'm surprised I'm still alive.

Here's a great example on Boylston Street in 2016: https://goo.gl/maps/PQgtntVNYzgq98wcA Note the bicyclist up ahead who just had to navigate that. Spin around to see the Dunbar truck double-parked on the left side.

Same spot, 2019: https://goo.gl/maps/fb6k6i6HQUizpoKc8 This time the car's on the opposite side of the street. But spin around to see both a city bus and a huge SUV double-parked at the same spot.

(City buses often have to double-park either due to construction or just because some cars parked in the bus area.)

Of course when we are lucky to get bike lanes they are often treated as "double parking" lanes.

Tickets are handed out but they are treated as the cost of doing business. It's rare to see a delivery truck driving around without a ticket stashed under its windshield wiper.

Having driven in streets like this most of my life is the reason I feel it is the ultimate hubris for Elon or anyone to think they can solve full self driving without also solving AGI.


It's not common for it to be legal except in very specific situations. I don't know if it's American, but they do it here a lot in dense cities for deliveries. It's not true parking, like for hours; it's for running in and out.


It's called "stopping" and not "parking" when you stop (and potentially even turn off your engine and leave the vehicle unattended) to make a delivery or load/unload.


In many places it's considered parking, legally, anytime you stop and place the car in park.


I'm not sure why nobody thought of this, but I think self-driving cars should have some special lights or signs that show the world around them that they are self-driving. That would at least lessen the frustration of drivers behind when it is hesitant to turn across oncoming traffic, or acts like a drunken driver.



ML AI gives you perception only, no conceptual understanding. That conceptual awareness of the environment and causality is most definitely a requirement for safe autonomous driving.


Could you give an example of "conceptual understanding" that humans have which is necessary for safe autonomous driving? How does autonomous driving differ from, say, playing poker when it comes to implementing a machine learning solution?


Playing poker is trivial to automate, it’s essentially fully solved.

Driving is often surprisingly social. I had many instances where I observed somebody pulling out of a driveway in front of me and something about how their head was turned or whatnot meant I subconsciously “knew” in a split second that they were about to go full speed without looking, so I had to slow down. Stop sign tie-breaking gets complicated. Pedestrian/bike rider eye contact is a thing. “Do I know that they know that I know what their intentions are?” And so on.

It’s not as complex as fully solving human communication, but it seems like a huge challenge when mixing human and automated driving on the same roads.

But I also don’t know if there were similar transition concerns when horse buggies were being replaced with cars and people had wrong expectations?

Maybe we should paint these cars in bright orange and have flashing lights on all the time so that people learn to expect the different behaviour. (The huge Lidar rig on top of some self driving cars works to that purpose)


The reason we humans can understand very complex situations with very little information very quickly (such as navigating through traffic) is that we not only perceive that "stuff" is out there and moving, but we also instantly classify that stuff, and in so doing, understand what it can and can't do almost instantly. A large semi truck can only maneuver in certain ways and at certain speeds, a car moving erratically likely has a driver that is somehow impaired, a child is fully unpredictable and could jump into the street, etc. We know not just that stuff is out there, but what that stuff is, and how it can causally behave given how things have been behaving up to that point.


There's a saying in software, you throw away the first version. The main idea is that when you start on something, you don't know what you don't know. The labor of the first version allows you to learn the domain, the limits, edge cases, and what you need to do. Then the second version starts on a much better foundation.

At this point, many teams (Waymo, Tesla, Uber, etc.) have been working on this for years. And there must have been times when the engineers thought, this would work so much better if only... Maybe it's time to gather these thoughts and sort them by low effort/high reward. For example, if all cars had a cheap sticker indicating forward vs back, would that improve self-driving performance by a lot? It wouldn't be that hard to get car manufacturers to add a sticker somewhere. Or what about traffic lights? Maybe adding a cheap light or sensor to it would vastly improve self-driving performance. Or those road reflectors, can they get some minor changes to help the self-driving software?

It doesn't seem likely that level 5 or even level 4 will happen when self-driving is literally defined as replacing what a human driver would do. But what if the computer had some help from both infrastructure and other vehicles? Why not start thinking about cheap changes that can be implemented to fix the blind spots and flaws in the self-driving software? If these changes are cheap and easy to implement and would vastly improve the software performance, isn't that worth it for true self-driving to work?


When people first started talking about self driving this is what I thought it was going to be. If you want to enter the "self driving zone" your car needs to be equipped with a government approved self driving module, and everything on the roads in that zone is standardised and maintained.

It just seems like a problem that's multiple orders of magnitude easier to solve from a technical standpoint.


It's definitely easier to solve, but who's going to foot the bill for this stuff? The whole problem is that road conditions are variable. A lot of the time when a Tesla FSD system makes a bad decision, that decision would be possible for a human to make. Sure, it's not probable all the time, but it's not outside the realm of possibility.

You can always decrease the odds of something bad happening with better road design (and better visibility), better lane markings, better signs, etc. This would work for both FSD systems and humans. Yet terribly unmaintained roads are the norm for a lot of people who are just "used to it." If you create an FSD zone and outfit it with tons of sensors, why not just fix the road design problems that would benefit humans too?


The same people who foot the bill for road maintenance now?

Obviously it's not the same in every city in the world, but roads are pretty decently maintained where I live. And it's not like we're talking about high-tech stuff; like other people said, it's just painting distinctive patterns on things to give sensors more information than they otherwise would have. We already paint distinctive markings for humans; it's not much of a big step further. Probably just some QR-code-looking stuff sprinkled on signs, lane markings and the front/back of other cars, etc.

Honestly, to me the hard bit seems to be getting everyone together and deciding on the standards, especially with how technophobic some cities' and countries' governments are. Painting a few markings once every few years to keep them fresh is a piece of cake.


What happens when the help disappears, gets obstructed, etc.?


People tell me I'm crazy when I tell them I always try to avoid driving near Tesla cars on the highway, but it seems my suspicions were correct.


The number of people that I've seen off in their own worlds doing their makeup or tapping away on their phone using Tesla's Autopilot was reason enough for me to avoid those cars while driving. I've seen someone using Autopilot nearly drive another car off the road when lane switching before, and the driver was just nodding off as if nothing happened.

Tesla's FSD scares the absolute shit out of me.


How on earth will this cope with medieval European or Asian cities if it can't cope with a generic American grid city with spacious lanes?


What is fascinating to me about the "self-driving" phenomenon is how many otherwise intelligent people got taken in by the hype.


> What is fascinating to me about the "self-driving" phenomenon is how many otherwise intelligent people got taken in by the hype.

I've read that "intelligent" people are often easy marks for con artists, because they're often so overconfident about their judgements and opinions.


I think any claimed 'full self driving' car should be required to pass a driving test like regular drivers do. It's pretty clear to me that Tesla will not pass that. So question closed, it should not be on the road.


I think full self driving will be achieved when infrastructure and laws are adapted to it. We already have the tech to pull it off. A clear example of this is metro line 14 in Paris (it has no driver).

We don't need trains, just proper painting of the roads and separate autonomous vehicles from the rest.

Trying to use the current infrastructure without any adaptation seems to me a dead end.


In the shown video where the car tries to run through a gate, wouldn’t that likely not have been an issue if they used lidar? I know there’s a lot of work on machine vision and that it has lots of potential; but if lidar is even 5% safer, and you’re worried about getting into a car accident wouldn’t you buy anything else than a Tesla when it comes to autonomous driving?


I'm wondering if there's actually something wrong with the cameras. With good cameras and cutting edge AI, spatial estimation shouldn't be that hard. Traffic with unpredictable moving objects is hard, but finding drivable area in static setting and not hitting on curbs should be easy, right?


For years I've thought full autonomous driving is never going to be practical without an equivalent engineering effort on the road infrastructure side of things. Standardized markings, drive by wire, special AI compatible lanes, etc. Why are we still doing this the hard way?


> Why are we still doing this the hard way?

Scam artists who are really good at their craft milked investors of billions of dollars by selling them on self-driving, and some are desperate to see returns.


Some really impressive parts, and some fairly scary parts.

Also, man, people in oakland seem to honk pretty quickly.


And yet, given that Tesla has rolled it out, they probably started to get the largest data stream of expert corrections for a level 4 system to date. This will allow them to make faster progress than any of their competitors.


It's level 2.

And they've been getting streams of data for _years_ with Autopilot, and yet they let this shit out. Look at how incorrectly they've placed the objects, traffic, and other things. It has _no_ situational awareness, because it can't see 20% of the objects.

Considering that https://wayve.ai/ can drive around London unaided, Tesla is utterly rubbish.


I don't see this as a major advantage. OEMs usually have a lot of test vehicles going around. Nonetheless, Tesla seems to have set up the right infrastructure.


I find this fascinating; it's still in beta, but the amount of work they've done to get here is amazing!

Props to Tesla's engineering team, hopefully it'll be a bit better as years go by.


I have the impression that FSD will be acceptable only when the large majority of other cars on the street also have FSD.


Tell me again, are all the accidents that occur every minute also due to autonomous cars?


Every human with a license is a different & imperfect implementation of autonomy. The result is reflected in a figure known as "the road toll", a euphemism that allows us to think less about the huge amount of damage they cause.


This is an incredible embarrassment.


Well, it might be useful if you are a driving instructor, because it acts like a bad student.


You don't say! Electric Jesus would have exaggerated? Probably a one-off...



While I in general agree with the article, counterpoint is that there have not been any Tesla accidents recently while in autopilot mode I believe?

Edit: I guess I was wrong, the accidents are now so common we don’t care anymore. In the beginning they were hyped up a lot.


It’s because we are scared as fuck with autopilot on and are hyper attentive. My car drives like a drunk teenager and routinely tries to murder me in autopilot.


This is awful, but it did make me laugh out loud.

That said, I have a lot of faith in Papa Musk and his team. I think this nut will be cracked eventually, but it's just going to take a lot of time to do so - far more than the timeline he initially proposed, as detailed in your excellent post earlier.


Why would any of this faith be created given the continuous line of bullshit you've been fed?


Peter Thiel once said, "Never bet against Elon Musk."

Elon said he was going to build a private space launch company. He did.

Elon said he was going to build an electric car company. He did.

Elon said he was going to build a $35,000 electric car for the masses. He did.

Elon said he was going to build a worldwide satellite constellation for Internet access. He is. Rapidly.

Everything Elon has said he was going to do, he has. He hasn't always done it at the speed at which he said he would, but he does always get it done.

He (meaning his team / company) will eventually develop a fully self-driving car. No, it may not occur at the pace at which he proclaimed, but I have confidence it'll occur.


> Everything Elon has said he was going to do, he has.

True, as long as this excludes everything he still says he is going to do but hasn't yet done. But of course, he will.


He has accomplished everything he said he would do, except the ones he hasn’t yet.

So if he ever doesn’t accomplish something we’ll never know because maybe it just hasn’t happened yet.

He also said they would have a solar roof as cheap as a normal roof and that the new roadster would hover with rocket engines. Let’s see if that happens.


> He has accomplished everything he said he would do, except the ones he hasn’t yet.

That is true for absolutely everyone in the world.


That’s the point.


That's an excellent point! Too bad I initially missed that.


Name one of those projects that didn't take in huge amounts of money from the government? He did those things because they were initiatives of the US federal government (including autonomous cars, see DARPA challenge).

Thank uncle sam for putting in the effort to create the markets that Elon plays in.


And that's a bad thing because...?

The Federal Government has consistently funded some of our most important projects. The Human Genome Project, for instance.


It's not a bad thing. The bad thing is the myth of the genius of Musk.


I absolutely believe he's a genius, just not in the way of Einstein or Tesla. He's a genius at assembling people in order to execute a vision, and when I say genius, I don't mean the layperson's definition of "really smart", I really do mean that whatever qualities combine in a person to allow them to convince hundreds, or thousands, or tens of thousands of people to follow them and help them build their vision, Musk is a genius-level intellect in that capacity.

He's not unlike Steve Jobs. They both possessed enough knowledge and technical acumen to understand what they're being told about disparate subjects, but they may not have the in-depth knowledge that a Jim Keller would. And despite my enormous respect for Keller, he's not an Elon Musk or a Steve Jobs. Jim's a craftsman at heart, and that comes through in interviews with him. Musk and Jobs are true visionaries.


Your description is not of a genius but more a "commercial community organizer" who organizes people and then negotiates between a company looking for cost reductions and State&Federal Governments to subsidize.

This is not genius-level but is the blending of two skills: a charismatic Administrator, who can temporarily suspend disbelief, and a Fundraiser. Each of those takes a ton of work. I'm not minimizing Musk's diligence in any way, but there are lots of people who can do those activities at high levels. The difference is that most people can't survive tweeting SEC violations while stoned.


Most of the things he has accomplished have been accomplished before. I'm not saying it's easy and he doesn't deserve respect for it. But it's not like he's constantly achieving unprecedented things. Full self-driving cars, Hyperloop, Neuralink, people on Mars are just pipe dreams so far. And that wouldn't be a problem if he simply didn't claim for 5 years that FSD will definitely be ready next year and let customers pay for it and test it on public roads.


Ok, I'll bite.

Tesla will never achieve safe, certified level 5 autonomous self driving with the current generation of vehicles, without either a fuckton more CPU power or better sensors (as in some proper high-res depth sensors).

On this I will bet. Come back at me in 5 years. He'll either have to bite his pride and put some sort of re-branded lidar on, or some decent stereo camera rigs.


You know there are many promises he made that haven't come to fruition? Remember Hyperloop? Interplanetary travel? Neuralink?

His achievements are remarkable but that doesn't mean that he should be treated like an infallible demi-god. Yes he's done some really cool stuff but that doesn't mean he should not be the subject of scrutiny.


Neuralink's being actively developed right now.

Hyperloop was an idea and nothing more. He never said he was personally committed to developing that, hence third-parties working on it.

Interplanetary travel? Starship is actively going through testing and looks like it should be ready for moon launches around 2023, which is in two more years.

I don't consider him infallible, or a demi-god, but it's hard not to admire the sheer work ethic of this man. He's legitimately trying, and by all accounts of people who know him, he's actively working on all these projects and also making considerable strides towards these goals. These are not easily achieved tasks.


I admire the work he does. But he oversells. FSD is one example. He has been promising it for half a decade and people have paid him a substantial amount of money for a feature that he'll likely never deliver as he describes it, and certainly not on the timeline he promised. At the same time he is putting dangerously incomplete software out on public roads and marketing it as FSD when it's clearly not. It's endangering not only his own customers but also everyone around a Tesla on public roads. If that's not unethical, I don't know what is.

He promised Earth point to point travel, which would require rocket safety to improve by a factor of a million.

He has been promising ridiculous things from Neuralink but with literally nothing to show for it.

Look at this tweet: https://mobile.twitter.com/elonmusk/status/88805317515594957...

Does that really indicate no personal commitment? He touts that he's received government permission to build a Hyperloop, NY to DC in 29 mins.


To be fair though, iterative is usually the way to go.

We didn't get to the safety levels that airliners currently have by sitting down with books; we got there by having the NTSB thoroughly analyze each incident.

Humans are terrible drivers, so each day we delay the transition to self-driving also costs lives, it's just that human traffic fatalities are less visible since society generally accepts that level of death.


> Hyperloop was an idea and nothing more. He never said he was personally committed to developing that, hence third-parties working on it.

It's not even his idea, Robert Goddard came up with it decades ago.

He also threw money into it, and then backed out, saying he was too busy.

And now he's tweeting about applying for and obtaining permission to build on the East Coast.

So, there's several issues there. Not to mention the more general issue - that it offers the same benefit as Maglev with massively inflated costs and far less comfortable passenger experience.


>Remember Hyperloop? Interplanetary travel? Neuralink?

All of these are still in progress/development in various ways. I don't remember him ever saying they were complete (or even over with).


He's definitely described autonomous driving as a solved problem. Does that count as "complete?" I don't know what happened with solarcity, but I thought I heard it's done poorly too.


What do you mean by "promises"?


Because most of his promises have come true (eventually).


A large factor in that is there is a lot less vehicle use recently compared to the norm.

To the extent that people who were using the feature on their way to work are working from home, that's a lot less miles driven and less potential collisions.

To the extent that many people are working from home, that's a lot less traffic on the road to interact with, and drivers may be more attentive because of the novelty.

Statistics would be nice, of course. Tesla doesn't report on total miles driven.

Here's a recent Tesla Autopilot into parked emergency vehicle report though. https://www.cnbc.com/2021/03/17/tesla-in-autopilot-hits-poli...


This isn't Autopilot, this is the FSD Beta. We don't know if it has had any accidents because it's limited to a handful of beta testers.

Anyway, "accidents" are not a good measure of safety for FSD Beta because just in this video it would've caused about 4-5 accidents and all of them were averted only because the driver intervened.




That site presents figures for other manufacturers at the bottom, but those are driver deaths per million registered vehicle years. The comparable Tesla figure is 47 deaths (from their own site) / about 1.5 million registered vehicle years, so around 30-31 deaths per million registered vehicle years, which puts it about average.


But for comparable cars with comparable demographics, isn't it super high?


It's impossible, legally, for there to be a "tesla accident". It is the driver's responsibility. Wake me up when Tesla can be legally held responsible is when I buy one.


I’m not sure your statement that it’s impossible to hold Tesla responsible is an accurate statement of the law. There is no law that says that Tesla can’t be held responsible. Any car manufacturer can be held responsible for damages caused by a product defect.


Ok, yes, "impossible" is a poor choice of word from a legal perspective, but isn't Tesla covering themselves from legal liability for accidents stemming from drivers enabling "full self driving"? No? If not, then I'm interested, is what I'm saying... I think I am not alone in that line of thinking...


I have to object here; just yesterday one drove into a parked police car with its blue lights on.


> counterpoint is that there have not been any Tesla accidents recently while in autopilot mode I believe?

Don't forget about that Tesla on autopilot which crashed into a parked police car __two days ago__ in Michigan...


With Autopilot there have been many, but none with FSD, which is what the article is about.


That's actually going to be the key argument for Tesla (or any solution) leveling up from driver assist level 2 to full autonomy (level 3 to 5): it will have to be evidence based. The difference is that Tesla is actively accumulating a lot of evidence in the form of millions of vehicles actively using this and gathering a lot of telemetry, video footage, etc. in the process.

As soon as it's going to be legal somewhere to remove your hands from the wheel, there will be incidents for sure. And then there will be responses to those incidents. At some point, that drops to acceptable levels and more will follow. Tesla will simply be able to pull out the numbers and say: look we're doing it here and it's fine. Solving the chicken egg problem of having the data to say that is key to ever getting to that point.

Tesla's strategy is not to lock itself up in an ivory tower for a few decades and to then release the perfect system. Companies following that strategy will fail. Instead, they are releasing imperfect software now and they are improving it rapidly and incrementally. As long as they have hands on the steering wheel (level 2) it does not actually have to be perfect. It will give them a steady stream of issues to fix. Lots of real world video footage to analyze. And a track record of slowly improving safety relative to human drivers in terms of numbers of incidents per million miles. Rolling this out to millions of drivers is going to get them a lot of data to work with very quickly. Data is the key thing to have in this space. Without data you can't make progress. Tesla is setting themselves up to get a lot of that. The reason they are getting cocky is that they already have good metrics.

It would be interesting to repeat the same journey with the same car in about a year. I bet it will still have issues but probably less of them. It's going to take more than two years for sure. But ultimately leveling up to 3 and 4 is going to have to be evidence based. And Tesla is building that case by gathering data from real drivers intervening when the software messes up and fixing things as that happens. That's something you can measure. The better that metric gets, the closer they get to their goal. Who else is even close to doing that? I think some of the Chinese manufacturers have shamelessly copied Tesla's strategy. And in some cases the actual software apparently. Regardless, China is the market to watch when it comes to full autonomy. A bit of wild west but they are moving fast there. IMHO if Tesla gets there, those manufacturers won't be very far behind. Unlike some traditional manufacturers who seem to be a bit hands off (bad pun) when it comes to moving forward on this front.


I think that's the wrong question. The right question is "how much better is it than human drivers?" When you get up to 2x better, there will be _only_ 20,000 fatal Tesla crashes compared to 40,000 human ones a year in the US. It's a very low bar.

When Tesla is 2x better, we'd be reckless not to use it immediately.


Except Tesla kills 3x more of its drivers than competitors' cars. https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...

So depends on your measure. If you mean "on average" then yes. But it seems Tesla has a long way to go in other safety measures first.


> "how much better is it than human drivers?"

If you watch the videos in the thread, the answer is it is far worse than the worst human driver you've ever seen.


Humans are also laughably bad and potentially dangerous at driving.

Full self driving doesn't mean perfect driving, it will always be continuous improvement.


This is what-aboutism. Humans are not the subject of scrutiny for this video. It was the human who countermanded the dangerous decisions made by the FSD system.


No, it's accurately pointing out that perfect driving by an AI is unachievable, if we demand that, then 'self driving' will literally never be a thing. Ever.

The standard should be: "Safer than human driving" shouldn't it?

And when you realize that humans are already terrible drivers it doesn't seem so reckless to hold AI driving to a realistic, achievable standard instead of an ideology.

Another way to look at it: imagine two different human drivers. One is clearly "better" and faster than the other but wrecks occasionally, while the other is a "bad" driver but never does.

Which is a more desirable model for an AI to emulate?


Humans have a semantic model of the world. "AI," so far, does not. When human drivers see things they can usually discern what they mean, even with bad and high-latency sensory input, much faster and better than "AI." The problem to solve with self-driving is not sensor field or latency - of course a machine can do better there - rather it is integrating what is sensed into a reasonable plan of action, which "AI" demonstrably sucks at.


Has anyone given a Tesla -- or any other autonomous vehicle, for that matter -- a driver's test?


I'd love to see a video of that! The test they give humans is an absolute joke in every state I've taken it. Bumbling idiots pass it every single day.


Humans are absolutely the subject of scrutiny for this video, and anything else that will ever be related to autonomous driving, because we're the only other source of drivers for automobiles.

The only bar to clear, and I mean the only bar is, "Do autonomous vehicles kill fewer human beings than humans on the road do?"

That's it. When it decreases by even one human being below what humans cause, it's time to switch over, because like every other mechanical, electronic, and computerized tool, it will only continue to reduce the number of human deaths as we progress with its development.

I get that autonomous vehicle deaths freak people out because they do things a human would never do. But? So what? If 53,000 people die in automobile accidents where they drive into each other because they stopped paying attention to the road to reach back and slap their 10-year-old who won't leave his sister alone, it makes absolutely zero difference versus the car driving into the ocean and drowning its driver. Same result. Dead person. We only think it matters because we like to think we have control over our lives. You don't and you never will. You could have a massive aneurysm the next time you're driving and slam your vehicle right into an oak tree. The universe is a random, uncaring system.

Your life is just a series of mostly random events that you shape into a coherent story because accepting that shit just happens and you have to deal with it is, like, a real bummer, man.


I see this sentiment often. But is this comparison fair? For example, what does the distribution of risk look like in each cohort (AI vs human drivers)?

Presumably the risk of an accident is relatively evenly distributed between all AI drivers (they are using the same AI after all). But is the risk of a car accident evenly distributed between all people? Not even close. It’s perfectly possible to simultaneously reduce the overall risk for everyone while at the same time increasing the risk for a given individual by an order of magnitude.
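
To make that concrete, here is a back-of-the-envelope sketch. All numbers are invented for illustration (not real crash statistics), and it assumes every driver covers the same mileage:

    # Toy calculation: fleet-wide risk can fall while a good driver's personal risk rises 10x.
    good_share, good_rate = 0.9, 0.1   # 90% of drivers: 0.1 crashes per million miles (hypothetical)
    bad_share, bad_rate = 0.1, 10.0    # 10% of drivers: 10 crashes per million miles (hypothetical)
    ai_rate = 1.0                      # hypothetical self-driving crash rate, same for everyone

    human_fleet_avg = good_share * good_rate + bad_share * bad_rate   # 1.09
    print(human_fleet_avg, ai_rate)    # overall risk drops from 1.09 to 1.0 (~8% lower)
    print(ai_rate / good_rate)         # but a good driver's personal risk is now 10x higher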

Would you be willing to assume a greater risk of accidental death _personally_ to decrease overall risk of death? Not a question I imagine reaches broad consensus...

And what about the soft problems? Like responsibility. A self-driving car runs off the road and kills your daughter. Now what? Tesla is certainly not going to accept responsibility. So... you just “chalk it up” as bad luck? At least the current paradigm has the _ability_ to offer closure after a tragedy.

Reducing “self-driving cars” to a single metric is not only mathematically dubious, it’s ethically abhorrent and just plain stupid. I expect better.


>The only bar to clear, and I mean the only bar is, "Do autonomous vehicles kill fewer human beings than humans on the road do?"

Not even that. I would totally accept it if autonomous vehicles killed only slightly more than humans on the road do.


It’s laughable and potentially dangerous but also just available to 2k people, from whom it is revoked if the interior camera shows they aren’t paying attention. Musk is the first person to say “be careful” when using it. The article is kind of clickbaity. No, Tesla’s FSD is not ready for wide usage. That’s also hardly a secret.


What's the point of FSD if you have to pay attention?

Come on, folks... Elon has been claiming for years and years that you'll be able to read a book or take a nap while in FSD mode, and that FSD is "just a year away...". He also claimed all the hardware existed in current-model vehicles and would just be "unlocked" later. The lies pile up, yet people clamber over each other to defend Tesla and Musk.


I agree, it is just a transportation vehicle.

Musk is a deceiver; he is not accidentally optimistic with his "forecasts".

I will believe FSD or Autopilot works properly when Musk and his kids do a coast-to-coast trip in winter from NYC via Chicago, driving through every city at rush hour, blindfolded, with no driver intervention. Then it would deserve the FSD label. There should be no room for doublespeak on this matter.

People say that humans are bad drivers as well. True, if distracted, tired, DUI, or inexperienced. Still, no human with sound eyes and a driver's licence would drive like in that video. Unless on the run from the police, maybe.


Even distracted, tired, drunk drivers do much better than this.

None of us are perfect; we’ve all driven drunk. How many times have you been caught? It’s the fact that people haven’t been caught or crashed before that makes them risk it.

So really these vehicles are driving worse than humans at their worst. That’s probably why we are underestimating the problem.


> we’ve all driven drunk.

No we haven't. Some people are responsible drivers and don't drive after drinking alcohol. Others don't drink at all. I've driven while tired, but never drunk.

Apart from that, I agree with you.


True, most of us then. Definitely the majority. I’ve seen people sleeping and driving in bumper to bumper traffic without causing any problems.

So there’s something going on in the brain that we’re not getting.


Good point; too many have operated a car in some unfit state. The remedy for this could be so much simpler than waiting for Musk's fantasies to materialize: a breathalyzer integrated into the car, a quick drug test, and some device for detecting tiredness, all of which need to be "ticked" before the engine will crank up or "electro" up.

The distraction issue could be minimized as well, albeit with rather invasive methods.

Given all of the above, plus taxi apps becoming abundant and the common sense to assess when to use a taxi, the FSD software will take a long time to become better than humans.

So far, Summon does not even work, and I remember driving a Mercedes with lane assist and automatic braking when the distance to the vehicle ahead got too small, back in 2010 or so. Did Tesla even exist at that time, or were they at the stage where they took Lotus cars and replaced the drivetrain and the stickers? These features are not much better than what I saw in a 2010 Merc.


So because his predictions were wrong they should just stop and give up?

It's not FSD because it's a beta of the software that will enable that. That's the point: you're training the network.

The hardware does not seem to be the constraint here. Comma.ai does a great job on a much, much slower processor. And if Tesla's prediction is wrong, they have put their own money on the line to upgrade the hardware if need be.

I don't think they were lies, Elon usually just says what he believes, often he is right but optimistic, sometimes he is wrong. Clearly he was most wrong about AI but he is right often enough and always speaks his mind.

This is what basically everybody who interacts with Elon says: the guy is candid and says what he believes at that moment. This has been reported by people who worked with him and by journalists who have interviewed him (most recently in the book Liftoff).

Tesla has to put most of the money made from the FSD sale into a holding account, as they have not delivered the feature.

More generally, I don't think what they are doing is smart marketing or communication.


Have you ever used regular old cruise control? You still have to pay attention to what speed you should be moving at, and it makes it much easier for you to pay attention.


How else are they going to train the model?


What does training an ML model have to do with telling people their new car has all the hardware needed to unlock Level 5 autonomy, when in fact that turned out to be a lie at best and fraud at worst? People plopped down huge money expecting Musk's promises to be true...

Here's an article from 2016[1] where Musk makes exactly that claim.

From the video embedded in the article, I'd wager we're a long ways away from any Tesla having anything remotely similar to FSD... yet Musk still claims Next Year™[2]

I'm at a loss why Tesla even decided to wade into the FSD waters. What was wrong with making a really good EV that people wanted to buy? FSD seems to have become a major distraction for Tesla... now plagued with build quality issues and FSD issues.

[1] https://evannex.com/blogs/news/elon-musk-self-driving-hardwa...

[2] https://www.cnet.com/roadshow/news/elon-musk-full-self-drivi...


FSD works; it's just not better than humans yet. Needs more training.


FSD means the car has the ability to drive itself. It doesn't necessarily mean it can drive perfectly without accidents. That's why you still need to pay attention.


I have an old 2004 Toyota. If I let go of the wheel, it'll drive itself for a bit before I either crash or retake control. I never realized it had FSD. Full self driving means the car drives itself fully.


Your Toyota can at best be said to partially self-drive, albeit badly.

The "full" part is where it also includes: lane following, navigation, parking, lane changes, stop-and-go, recognizing traffic signs, etc.


I dunno. I always imagined FSD as at least level 4. Musk's comments about level 5 certainly implied FSD would be above level 2, which is how Tesla has more recently described their FSD.


Come on, of course it does. Don't serve us the old "it's not a bug, it's a feature".


Where has Tesla claimed or promised that it can drive itself perfectly without accidents?


Musk himself does.


As far as I know, he never has. Maybe he said it eventually will, but he never promised it can do it right now.


He literally said "you'll be able to sleep in your car on FSD mode".


So it's full self-driving, but it's not, like, full self-driving?

Musk is "the first to say" a lot of things.


Or it's full self-driving but it's like, in beta?

If I were participating in a beta test and my phone kept crashing, I could complain about that, but not much, right? Beta software does that.

Of course a car crashing is a much bigger deal, so, like: pay attention.

I'm pessimistic about it ever getting out of beta, personally. But it is what it says on the label. I've seen other, equally cherry-picked videos, where the FSD system has done a competent job of getting from A to B with zero driver intervention.


Realistically, it's a display of just how far away FSD actually is. The mistakes the car makes in the video are unforgivable and extremely dangerous.

> I've seen other, equally cherry-picked videos

Cherry picked? Did you watch the video? It's a complete drive through Oakland - not cherry picked snippets.


Yes, cherry picked.

I've seen video which is also unedited, and which shows Tesla FSD behaving just fine. It was promoted onto my Twitter feed by people who want to believe FSD is going well.

Whereas this one was promoted to the top of HN by people who want to sell clicks promoting the idea that FSD is going poorly.

Either one is cherry picking. What wouldn't be cherry picking is watching as much video of FSD driving as you can, collecting statistics on it, trying to make sure your samples are representative (good luck if you aren't Tesla), and writing up a report on what you find.

This is, of course, a ton of work.


Your definition of "cherry picked" seems to differ from the generally accepted definition.

The guy that made the video and posted it to youtube is a Tesla "fanboy". I doubt his goal was to paint Tesla's FSD software in a bad light.

Yet, in one video, the Tesla nearly hits pedestrians, nearly sideswipes other vehicles, drives in the wrong lane, comes to a complete stop in the middle of a busy road, and tries to drive straight through an iron fence. That doesn't look very good to me...


> Of course a car crashing is a much bigger deal, so, like: pay attention.

No, sue Musk and Tesla for fraud and false advertising. Unless they are willing to put their responsibility on the line, it will remain a piece of garbage.


Two things I think about whenever self driving comes up:

1. Success isn't perfect driving. It's when it drives statistically better than most people. We humans are often terrible drivers ourselves, so why would we hold a robot to a standard we can't even achieve?

2. Ford's customers were right all along: What we want is faster horses.


Personally, I would consider success to be when it drives statistically better than _me_, not just _most people_. It really depends on how you choose to aggregate your data: the average crash rate per mile driven for human drivers is dragged up by a minority of very bad drivers, so FSD could beat that average while still being worse than the typical driver. If that's the case, then if everyone bought a Tesla and used FSD, most drivers' personal crash risk would go up, despite FSD being "statistically better than the average human driver" by that measure.
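
A toy illustration of that skew, again with invented numbers and assuming equal mileage per driver:

    # Toy numbers only: one very bad driver pulls the mean well above the median.
    import statistics

    rates = [0.5] * 9 + [15.0]              # hypothetical crashes per million miles
    mean_rate = statistics.mean(rates)      # 1.95
    median_rate = statistics.median(rates)  # 0.5

    fsd_rate = 1.5                          # hypothetical
    print(fsd_rate < mean_rate)    # True:  "better than the average human driver"
    print(fsd_rate < median_rate)  # False: still worse than the typical driver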


The big benefit of AI would be that it wouldn't get tired, impatient or distracted, which would potentially be a big safety boon even if it were slightly less capable than a normal driver at their best.

That said, FSD is driving so badly that I would demand an actual driver who drives this way have their license revoked.


I'll consider it when Musk and Tesla accept full legal responsibility while a car is in FSD mode; anything else is unacceptable.


That's something I hadn't thought about. It makes sense. We hold drivers accountable, but who do we hold accountable if the car is partially self-driven, yet the driver is supposed to be ultimately responsible?

A lawyer would say Tesla, because $, but at least at this point I would still claim the driver is. Especially with 'beta' in the software title, where they're explicitly told to pay attention and take control.

I don't envy car manufacturers working on this. They're facing exactly the same battle that Ford did when the 'horseless carriage' was becoming popular. We knew then that humans are, honestly speaking, terrible drivers who frequently get distracted or lose control of the vehicle.

Just look at the newspaper articles from those days: https://pessimists.co/horseless-carriage-archive/

Stanford banned them!

I expect self-driving to follow exactly the same course that horseless carriages did: heavy early resistance and lots of bad incidents, then acceptance, then becoming utterly commonplace, with the same track record of decreasing accidents and increasing safety measures over time until it eventually ceases to even be a controversy.


I mean, this is what I'm getting at: if, by important measures like preventing deaths, injuries, and damage, it's statistically better than humans (and there's a whole discussion on how you collect and decide on that data), even if the behavior is weirdly unlike a human driver, is that success? Is that good enough?

I'm not asserting it even, just spitballing because this is fascinating to me.


Yeah, it's something I'd be interested in reading more about: how do we define "better" in this context? Better for whom: society as a whole? Pedestrians? Drivers of self-driving cars? Passengers of self-driving cars? Drivers of non-self-driving cars? And so on...


Precisely. The impression I got from the video was that the car was way more cautious, even more courteous, than an overly cautious human driver would be. It definitely fucked up, but if we define "better" as driving exactly like a skilled human driver, then I expect it to kill exactly as many people as we do.

The counter-intuitive advice people are given for situations like an animal suddenly leaping into the road is to just hit it, because slamming on the brakes can and has led to the car's occupants dying for the sake of a squirrel.

What do we expect a self driving car to do in that situation? Just imagine the outrage and headlines if a Tesla does what humans are told to do, and kills a dog or cat!


Horses are capable of fully autonomous driving. I've heard tales of horses safely walking home with drunk idiots on their backs.


Oh look, another anti-Tesla auto journalist who came from Jalopnik. That is a surprise...

And to clarify again, I think Full Self Driving and Autopilot are bad names for driver assist that does not fully self-drive or autopilot in the sense the average layperson would understand those terms. I don't know if Tesla will ever get to full self driving or autopilot in that respect. I also think that the driver assist as it exists today will prevent entire categories of accidents, even if it causes another class of accidents; the net difference will be that we are safer on the road because of it. While we are so quick to call out Tesla for the quality of Autopilot, we forget that at least 50% of the people on the road are distracted and have no such safety net. At least once per drive I sit behind someone who decided that the red light was the perfect time to catch up on their digital life and then misses the green. I see people weaving like drunk drivers and going 20 mph under the speed limit so they can have a buffer zone in front of them while they phone and drive. We need a solution to this, as people aren't going to stay off their phones...


What do you mean? The thing is it's trying to do full self driving. It just happens to steer into people, cars and walls while doing so!

There is a big difference between "bad marketing" and shipping wannabe self-driving code.


Did you watch the video the article is mostly based on? The car tries to drive into several objects several times: other cars, oncoming traffic, literal walls. It's dangerous that they just put this out for people to 'test' while making outrageous claims about how well their tech works.


More than ten years after adaptive cruise control (ACC) and lane keep assist were first broadly introduced, Tesla took those two technologies and rebranded them as Autopilot, conveniently removing the safety feature requiring drivers to keep their hands on the wheel.

Now they've released a "full self driving" beta that can not actually fully self drive without killing its passengers, while Waymo has been operating a publicly accessible, fully autonomous taxi service for years now. Last I checked they don't even have safety drivers anymore.

And somehow, there are still people who seem to believe that Tesla is pushing the envelope.


Road and Track lacks any sort of journalistic credibility, and yet again it shows here. For years they would always rate BMW as their absolute top pick (despite huge issues like the E46 chassis failures) while being wined, dined, and flown all over the country by BMW's press arm. I would not be surprised if they cherry-picked this video because Tesla doesn't "kiss the ring" and buy ads, wine and dine journalists, give sneak previews, etc.

I don't even own a Tesla (I do have an Audi E-tron though) or own Tesla stock directly so it's not like I'm a fanboy or anything. I'm just tired of the total failure of journalism in this country.


The youtube video embedded in the article does not appear to have been produced by Road and Track. In fact it looks like a channel of a Tesla fan, it even has referral links in it for Tesla stuff. All they did was find the video and write an article about it.

I watched the video. If I had to pick an uncharitable description for just how bad it drives, and how jerky and unpredictable it is, I would say that it drives like a drunk 17 year old.


There are tens of thousands of videos of people using the FSD beta without issue. Why report on the 1 in 10,000?


There are thousands of videos like the one they reported on. The fact that even one such video exists showcasing such laughably bad performance invalidates the premise of FSD.


I don't have a horse in this race re: Road and Track but in what sense is this video "cherry picked"? It's a genuine video of Tesla's FSD feature. You can watch it yourself and judge the quality of driving. I don't understand what this BMW business has to do with anything.


OP probably meant "cherry picked" in the sense of R&T picking a purposefully non-representative video of the "average FSD experience".

The video is genuine and real but it shows a completely different experience from pretty much almost every other drive-through FSD video I've seen (which have overall been mostly flawless, with a couple other ones sprinkled in that have also been pretty bad at times).

It's important to report on outliers and worst-case scenarios though, especially when the outcome could be death. Obviously this beta has a lot more work to be done before it can graduate into widespread use.


Second oldest trick in the book: attack the messenger


People creating disruptive technologies don't have to fear the centralized media anymore.

ARK Invest's success and openness made people realize that they can put their money in a disruptive company if they are able to handle the risk and volatility. There are amazing channels that go into the technical details (Limiting Factor is an example for Tesla's battery tech).


> Road and Track lacks any sort journalistic of credibility

Shooting-the-messenger fallacy.


it's "kiss the ring" as in bow down beneath the seat of power, whereas "kiss the king" sounds more like a vaguely homoerotic fast food promotion.


I fixed it, it was an autocorrect typo. Thanks.


Yeah yeah, sure. Try explaining that to the family.


Can you elaborate on the specific issues you see with Road & Track's coverage in this case?


What do your feelings about the magazine really have to do with these events? Just watch the video and comment upon that, if you prefer. It's the primary source.


Nothing you said in all those words addresses the content of the article.


Well, here's a different video. "Full Self Driving" looks dangerous and probably shouldn't be on the road:

https://www.youtube.com/watch?v=uClWlVCwHsI



