Then you and other people like you can volunteer to work at a test track as obstacles. Leave the rest of us out of it. You’re entitled to your utopian fantasy of what might be, and the best way to express that is to put your life on the line instead of volunteering others to do it.
Statements like this seem to be trying to make the point that any amount of increased public risk is unacceptable unless everyone opts in to accept it.
No, statements like mine assert that sacrificing lives today for the possibility of an unproven improvement at some unspecified later date is morally reprehensible. My answer is to invite the person endorsing this to put their own life on the line instead. If we were talking about medical testing on humans instead of cars, maybe it would be easier for some to understand. “It’s so bad today, the only course of action is to act recklessly to improve things as fast as possible!” is a thoroughly rejected argument.
> Statements like this seem to be trying to make the point that any amount of increased public risk is unacceptable unless everyone opts in to accept it.
I find this argument entirely uncompelling. There are an inordinate number of risks imposed upon me while operating in society, risks I neither agreed to nor would accept given the choice, aside from in a hand-wavy “social contract” sense.
However, as a member of society I am in fact compelled to accept myriad risks on a daily basis, imposed on me for the benefit of others or for the benefit of potential scientific advancement.
This concept that a society cannot morally make risk-reward decisions in the course of technological advancement is entirely bunk, IMO. Government can absolutely make these trade-offs morally, and does so constantly.
I feel pretty good about the current legal and regulatory frameworks and the technical chops of the NTSB to monitor developments in self-driving R&D adequately.
One thing I am certain of is that we absolutely must succeed in developing, productizing, deploying, and ultimately mandating self-driving technology. Millions of lives and trillions of dollars are at stake, and yes, society as a whole will incur individual loss of life in the pursuit of this goal. An attempt to develop the technology with zero risk of collateral damage would in fact cost many more lives overall.
So far, per mile driven, the fatality rates of Uber and Tesla SDVs are far higher than human drivers in statistically safer than average vehicles.
Waymo has a better track record but essentially refuses to drive at significant speed -- I have yet to see a Waymo SDV place itself in a situation with a speed limit higher than 35 mph, and I see them a lot.
I regularly see the Waymo vehicles in 45 mph zones on my commute. I can't say I've seen them go any faster, but I spend about 95% of my time on roads marked 45 mph or less.
>One thing I am certain of is that we absolutely must succeed in developing, productizing, deploying, and ultimately mandating self-driving technology.
On a planet where the primary killer of millions of people is unclean drinking water the argument that we "absolutely must" develop self-driving cars is not very compelling. Especially since what is really at stake here is your "trillions of dollars". People are primarily developing these technologies to make money, not prevent deaths, and the idea that society as a whole should shoulder the burden of additional risk to greatly enrich a few individuals seems morally indefensible.
When capitalism encourages extraordinary levels of investment in technological advancement which will save millions of lives, I call that a win-win.
Unclean water killing millions of people a year is a compelling reason to encourage investment in more affordable solutions for cleaning water.
You know what else kills 1.25 million people per year? Human driven cars. That’s a pretty compelling reason to encourage investment in eliminating the steering wheel. The best way we accelerate the advancement of this technology is, generally, not to regulate it out of existence.
> People are primarily developing these technologies to make money, not prevent deaths, and the idea that society as a whole should shoulder the burden of additional risk to greatly enrich a few individuals seems morally indefensible.
They can’t make the money unless they can prevent the deaths. The benefits to society as a whole as the technology is perfected are worth trillions. The idea that we would want to perpetuate a system which kills 1.25 million people a year because an extraordinarily rapidly progressing technology isn’t yet perfected is what seems morally indefensible to me.
I don’t know what you’re on about society shouldering a burden to enrich a few individuals. Economically speaking we all become richer with self driving cars.
The way I see it the question is simple — increase regulation now and shift the adoption curve to the right, resulting in on the order of millions of net additional driving deaths... or eliminate roadblocks and stimulate investment in self-driving R&D and shift the adoption curve to the left, saving millions of lives.
The fact that there are those calling for regulatory clampdowns to stymie self-driving development after exactly one fatality in which self-driving technology may have played a contributing factor (and where the police on the scene ruled that the car was not at fault) looks to me like a fearful mob type of response.
I am saying we all go out into a chaotic world every day and face technology which companies have put into the world for their own profit-seeking motives, technology just as likely (if not significantly more so) to kill us as a self-driving car.
This has nothing to do with sacrifice. The government necessarily makes life-and-death trade-offs in setting regulatory hurdles. It’s entirely unconvincing to argue that beta driver-assistance and beta self-driving technology should be banned from roads because it can’t guarantee no one will die.
Because in fact automobiles are killing over a million people per year. Self-driving automobiles are responsible for exactly 1 of those 1.25 million deaths, and driver assistance technology has contributed to about a dozen more.
What is indisputably moving fast and killing people are... people driving cars. The singular solution to this ongoing bloodbath may be only a few years away. But you seem to want millions of people to continue dying because of a misappropriated Facebook slogan?
I’m not a big fan of populist outrage in the best of times, but when it seeks to perpetuate the status quo of 1.25 million annual auto fatalities I think we owe it to ourselves to elevate the discussion to the point where you are actually considering the implication of your proposed solution.
Every year you shift the self-driving adoption curve to the right, you are killing hundreds of thousands of people. But in the real world today this technology could arguably share the blame for about a dozen deaths. Should the government ban the technology from public roads and force companies like Tesla to recall their existing functionality? There’s a hysterical appeal to emotion to be made, but nothing close to a cogent logical argument.
If anything, government should be pushing self-driving requirements harder than carbon limits. Instead of municipalities claiming they are going to be banning gas engines by 2030 they should be claiming they will ban steering wheels. The steering wheel is far, far deadlier.
Nobody here is calling for a flat ban of all self-driving technology. You are tilting at strawmen.
The blowback here is in reaction to what appears to be objectively bad software. Irresponsibly bad. Maybe even negligently bad.
Look to drug trials & the FDA. We take risk to test possibly life-saving drugs. But, we don't let just any whackjob take his unproven mad science experiment to clinical trial and inject people with mantis DNA.
We’re talking about whether it’s reasonable to beta test self-driving tech on public roads, and the relative risk of slowing down the advancement of self-driving technology (this is what is meant by “shifting the adoption curve to the right” — as in, seeing widespread adoption of self-driving further into the future).
I don’t think it’s a strawman at all to observe that greater regulations and restrictions on public testing will delay widespread self-driving. Nor is it unreasonable to surmise that a multi-year shift in the adoption curve could cost millions of lives.
I mean, argue that public mistrust of the technology, due to sensationalization of the highly public (yet rare) failures, is more likely to delay rollout than stricter regulations would. That’s a reasonable rebuttal I could engage with!
"Those of you who volunteered to be injected with praying mantis DNA, I've got some good news and some bad news. Bad news is we're postponing those tests indefinitely. Good news is we've got a much better test for you: fighting an army of mantis men. Pick up a rifle and follow the yellow line. You'll know when the test starts." -Cave Johnson
The strawman is you're the only one talking about banning self-driving test mules from the road.
That’s not whataboutism — I’m saying it’s the job of elected government to make exactly these determinations even when actual lives hang in the balance, and in fact many thousands of regulations make these sorts of risk/reward trade-offs in all sorts of ways in our daily lives. Most of them are significantly riskier than allowing self-driving cars on the road, in return for significantly less potential reward.
My point is primarily that a claim like “I didn’t consent to share the road with this nascent technology” — as if that means the technology should not be allowed — is not actually how our society functions. We don’t actually get to veto technology that minutely increases our risk of injury when we venture out in the world.
I’m surprised to see your ad hominem attacks and accusation of shilling. You’re relatively new here, so please allow me to point out that that kind of response is against the HN guidelines.
Shilling is a very different thing from just being biased because of a personal stake in something. Shilling is dishonesty, bias is just human. I know you’re not shilling, I’m asking if you might be biased due to a personal stake, or ideology. Are you?
Call me paranoid, but this is the second time you’ve decided to cherrypick my comments and play the victim rather than just saying “nope, no personal financial or ideological stake.” I’ll ask again, do either of those apply to you?