I don't get what's so difficult about self driving. The car just needs to show the path it's going to follow over the next ten seconds and stick with it. Only when the path looks wrong does the driver take control. This lets the driver know how the car sees the road.
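Roughly this kind of loop is what I mean. All the interfaces here (planner, display, controls, driver_input) are made up just to sketch the idea, not any real car's API:

    import time

    PREVIEW_HORIZON_S = 10.0   # how far ahead the displayed path extends
    TICK_S = 0.1               # update period for planning and display

    def assist_loop(planner, display, controls, driver_input):
        # planner/display/controls/driver_input are hypothetical interfaces,
        # only here to make the idea concrete.
        while True:
            # Plan and show the next ~10 seconds of intended trajectory.
            path = planner.plan(horizon_s=PREVIEW_HORIZON_S)
            display.show_path(path)

            # The driver vetoes the path simply by taking over: any meaningful
            # steering or brake input disengages immediately.
            if driver_input.steering_torque_nm() > 0.5 or driver_input.brake_pressed():
                controls.release_to_driver()
                display.show_message("Disengaged - driver has control")
                return

            controls.track(path)   # follow the displayed path, nothing else
            time.sleep(TICK_S)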
Unless the self-driving saga has a different goal: transportation as a service. That's where the money is, and that's where fully automated driving is a necessity.
Their world model is shit, apparently. It should be a hierarchical Bayesian model, constantly updated and checked for likelihood distance from the priors, and when that distance gets too big, alert the driver and disengage.
Cars don't randomly appear and disappear in the middle of the open road. If their model cannot interpret these events as "a big fuckin' problem, I'm out of my operational envelope, I need to stop", then that model is indeed dangerous.
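Something like this is all I'm asking for. A crude sketch, with a made-up Gaussian predictive model, made-up thresholds, and a hypothetical controller interface, not how any real stack works:

    import numpy as np

    SURPRISE_LIMIT = 12.0      # nats; would be tuned on recorded driving data
    PERSIST_FRAMES = 5         # require sustained surprise, not a single glitch

    def gaussian_surprise(observed, predicted_mean, predicted_cov):
        """Negative log-likelihood of the observed track state under the
        model's predictive distribution (higher = more surprising)."""
        diff = observed - predicted_mean
        cov_inv = np.linalg.inv(predicted_cov)
        maha = diff @ cov_inv @ diff
        _, logdet = np.linalg.slogdet(predicted_cov)
        return 0.5 * (maha + logdet + len(diff) * np.log(2 * np.pi))

    def monitor(frames, controller):
        # frames yields (observation, predicted_mean, predicted_cov) per tick;
        # controller.alert_driver()/begin_controlled_stop() are hypothetical.
        bad_streak = 0
        for obs, pred_mean, pred_cov in frames:
            s = gaussian_surprise(obs, pred_mean, pred_cov)
            bad_streak = bad_streak + 1 if s > SURPRISE_LIMIT else 0
            # A car "appearing out of nowhere" shows up as sustained high
            # surprise: the priors say that shouldn't happen on an open road.
            if bad_streak >= PERSIST_FRAMES:
                controller.alert_driver()
                controller.begin_controlled_stop()
                return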
Sure, if it's simply a follow-the-car-in-front-of-me model. (Which is light-years from FSD.) Even then it needs to be super-super finicky about what it interprets as a car/road/etc., and start disengaging the instant it detects that something is not right. But obviously Tesla/Musk decided to just tone down the carefulness.
> The car just needs to show a path it's going to follow during the next ten seconds
It could even project this onto the windscreen. Some new cars do a neat trick with perspective adjustment so the projection appears 'flat' even on a tilted windscreen.
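The trick is basically a homography: pre-warp the image so that the windscreen's tilt un-warps it from the driver's viewpoint. A rough sketch, with made-up calibration numbers (in practice these would come from a calibration step):

    import cv2
    import numpy as np

    def prewarp(hud_image, flat_corners, measured_corners):
        """flat_corners: where the image corners should appear to the driver.
        measured_corners: where those corners actually land when the unwarped
        image is projected onto the tilted windscreen."""
        # Homography that undoes the windscreen's distortion.
        H = cv2.getPerspectiveTransform(
            np.float32(measured_corners), np.float32(flat_corners))
        h, w = hud_image.shape[:2]
        return cv2.warpPerspective(hud_image, H, (w, h))

    # Illustrative calibration values only.
    flat = [(0, 0), (640, 0), (640, 360), (0, 360)]
    measured = [(40, 10), (610, 0), (640, 360), (0, 350)]
    img = np.zeros((360, 640, 3), dtype=np.uint8)
    warped = prewarp(img, flat, measured)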