I should probably confess that as someone who lives in an area with a lot of construction work, I'm also very vulnerable to "prompt injection" when there's a person standing in the middle of the road holding a sign telling me to change course.
I once encountered an intersection with a big "NO ENTRY" sign on the other side. I turned but google maps wouldn't give me another route, so I did a u-turn and came back to it from the side. Which meant I was close enough to read the small text underneath that said "vehicles under 10 tons excepted". I don't think I've ever been so angry at a road sign.
I came across one in Italy that was meant to prevent you from using a street during school days from X to Y am, and Z to W pm, except on weekends, bank holidays and school holidays.
Obviously. But you can also easily look around at the situation and know when the sign is fake and realize it may be a dangerous situation and disobey. Have you ever seen a green sign that says "Proceed" and just run through a red light because of it? No, you see a construction worker, you see big ass trucks, orange signs and warnings of workers everywhere. If you saw oncoming traffic and people in the road, would you just go because the construction worker flipped his STOP sign around?
Also, I thought we were supposed to make autonomous cars better than humans? What's with the constant excusing of the computer because people suck?
They are analysing VLM here, but it's not as if any other neural network architecture wouldn't be vulnerable. We have seen this in classifier models that can be tricked by innocuous-looking objects, we have seen it in LLMs, and we will most likely see it in any end-to-end self-driving model.
If an end-to-end model is used and there is no second, more traditional safety self-driving stack, like the one Mercedes will use in their upcoming Level 2++ driving assistant, then the model can be manipulated essentially without limit. Even a more traditional stack can be vulnerable if not carefully designed. It is realistic to imagine that one printed page stuck on a lamppost could cause the car to reliably crash.
> It is realistic to imagine that one printed page stuck on a lamppost could cause the car to reliably crash.
Realistic, yes. But that'd still be a symptom of architectural issues in the software.
Conceptually the priorities of a car are (in order of decreasing importance): not hitting other moving or stationary objects or people, allowing emergency vehicles to pass unhindered, staying on a drivable surface, behaving predictably enough to prevent other road users crashing, following road signs and traffic laws, and making progress towards the destination (you can argue about the order of the last three). Typically you'd want each of these handled by its own subsystem, because each is a fairly specialized task. A system that predicts the walking paths of pedestrians won't be good at finding a route to Starbucks.
The "follow road signs and traffic laws" system is easily tricked, like in this article or by drawing road lines with salt. But that should never crash the car, because not hitting anything and staying on the road are higher priority. And tricking those systems is much harder.
Over the years, the agency has flagged signs that could be confusing. Now, in rules issued last month, it gives states two years to phase out signs that have "obscure" meanings or use pop-culture references that could require drivers "greater time to process." In a statement, the agency said safety is the priority and states "are expected to exercise good judgment."
One year in my city they were installing 4-way stop signs everywhere based on some combination of "best practices" and "screeching Karens". Even the residents don't like them in a lot of places so over time people just turn the posts in the ground or remove them.
Every now and then I'll GPS somewhere and there will be a phantom stop sign in the route, and I chuckle to myself because it means the Google car drove through when one of these signs was "fresh".
This. Rural America really doesn’t care about your stop sign or your Karen rules. If it’s been that way for 20+ years, “That’s the way it’s always been” to them.
Getting people to stop burning their trash is still a fight.
The fact that you used the term "enforcement" here makes me presume you are thinking of criminal consequences. But the grandparent comment talks about civil liability. Certainly if there were injuries at this intersection and they knew who had altered the signage, attorneys would argue liability on the part of the vandal. They'd get settlements if not win cases this way.
In addition, if there were serious injuries here you should also expect some criminal consequences. But if your point was to suggest that they won't hunt you down just because someone said there was mischief here, I tend to agree.
Oh yeah, if one does it it’s probably not a good idea to leave an indelible note saying “this was done by Rene Wiltord living at 1038 John Doe Way, San Francisco, 94112”. If you do that, there’s a 1% chance you might get in trouble.
4-way stops are terrible in general. They train people to think "I stopped, now I can go", which is dangerous when someone confuses a normal stop for a 4-way stop. It also wastes a good bit of energy.
Four way stops are good, in my experience, at intersections with roughly equal (low) traffic load on both (two-lane) roads and relatively high pedestrian traffic. Like the side streets in a dense residential urban neighborhood between major commercial thoroughfares. Traffic is mostly people going to residences, with people out and about walking. If it’s only a two way stop, drivers will often not yield to pedestrians on the free flowing road.
Four way stops on intersecting four-lane roads are awful for the reason you stated.
To use Chicago as an example because I know it: typically major roads are spaced every four blocks (half mile) with smaller roads in between. The mid-point roads (two blocks from each major one) are often a little wider than the other two side streets on either side, and those intersecting mid-point roads usually have a four way stop, while the two smaller ones will have stop signs where they cross a mid-point road but the mid-point road will not. You end up with a nice, overall hierarchy that generally works well.
> If it’s only a two way stop drivers will often not yield to pedestrians on the free flowing road.
I’m up in Ontario, Canada. You’re not supposed to yield to pedestrians on the free flowing road. The pedestrian at the stop sign stops and waits for a break in traffic.
Yeah maybe for a few moments. So what? It's a low volume approach. Sometimes people gotta wait and sometimes waiting to let a massive traffic flow get through quickly is the better way to prevent larger traffic problems.
Having moved between states and taken a lot of drivers tests, I can say the exact rules vary between states and over time, including how they're taught.
My first drivers test was yield to the right. Later it was FIFO order of who made it to the stop.
My running interpretation is FIFO order, with yielding to the right in case of ambiguity.
That isn’t the rule either; I guess parent made their point. The first person who stops goes next; right of way only matters if there is ambiguity about who stopped first.
This is not correct. There are clear instructions on how a 4-way stop should operate, and it's yielding to the right: if opposite cars are both moving forward, both can go; otherwise the car with the initiative has the right of way. Every driver must come to a complete stop.
I’ve never seen a four way stop in a region where traffic on the right can always go regardless of stop time. But I’ve only seen four way stops in a few countries.
There are 10,000+ roundabouts in the US and the number is growing rapidly. One could argue they may even be overused in certain areas (exhibit: Carmel, Indiana).
What's the significance of roundabouts per square mile? It seems pretty meaningless if I'm honest. There are huge swaths of rural land where roundabouts are totally unnecessary.
The only places where a 4-way stop has room to make a roundabout are places where there is not enough traffic for it to matter either way.
The biggest obstacle is that there are just too many 4-way stops in urban areas where there is no space left to make a roundabout; you would have to tear down buildings. I don't think that is a valid argument in that scenario.
The more I look at that... Isn't that basically just a four-way yield, and the markings are mostly superfluous? You're basically doing the same motions in a regular intersection.
I guess that's the point, and the markings are just to give drivers the intuition of treating it like a regular roundabout (yield to your left [or right in the picture]).
> the markings are mostly superfluous? You're basically doing the same motions in a regular intersection.
The image linked, yes. However I've never seen one quite like that in the US. Instead where I'm at we have a small circular barrier in the center of the intersection (and some very eye catching reflectors) that you actually have to drive around. It's a very good design (imo) because it physically forces vehicles to slow down and swerve so there's no way to inadvertently blow through it at speed the way that sometimes happens with a 4 way stop on a long straightaway in the dead of night.
The space requirement is only slightly higher than the one linked above, still much less than a proper full size roundabout. It's basically a cement barrier sticking 1/4 of the way into your lane.
It's not necessary to stop if there's no car to the right (as this is left side driving), if there is but it is turning left, or if an oncoming car is turning left or going straight.
Yes. The markings are part of the road language. E.g. the X in the road with Keep Clear doesn’t actually do anything. It won’t keep you clear. You have to keep clear when you read it.
Roundabouts are great (we just had two complex intersections with traffic lights replaced by roundabouts and the traffic flow is much better), but they take significantly more space than a 4-way stop.
That requires a level of consideration for others that your average American simply cannot comprehend. No stop sign means you have unlimited right of way bestowed by god himself and fuck anyone and everyone else.
The other option is the person who sits at a 4-way stop until all traffic in a one block radius stops before they move, totally ignoring right of way and all sense of safety and propriety.
They make people on the main road slow down, which is a feature, not a bug. What you mean is that they're the most efficient at what they do when the traffic is comparable. They only reduce accidents at the expense of a slightly lowered throughput if the traffic is highly disparate.
If the volume is disparate, then the road with less traffic can wait... kind of like a stop sign! Except the road with more traffic won't back up and cause massive problems.
> Right but it's not like a 4 way stop is going to perform better.
A 4 way stop does perform better than a roundabout given highly disparate traffic volumes, because roundabouts suffer from resource starvation in that scenario, but 4 way stops are starvation-free.
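A toy simulation (purely illustrative numbers, not real traffic modeling) shows the starvation effect:

    import random

    def side_road_wait(p_blocked, max_ticks=10_000):
        """Ticks a side-road car waits at a roundabout entry for a gap."""
        random.seed(0)
        for waited in range(max_ticks):
            if random.random() >= p_blocked:   # gap in circulating traffic
                return waited                  # side car merges
        return max_ticks                       # starved for the whole run

    print(side_road_wait(0.5))    # balanced traffic: merges almost at once
    print(side_road_wait(0.999))  # saturated main flow: waits ~1000 ticks

A 4-way stop serves cars in arrival order, so the side car's wait is bounded by the queue ahead of it no matter how heavy the cross traffic is.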
If this is the case you can install stop lights and traffic sensing at roundabout ingress points, you can also provide a "turn right" lane that bypasses the roundabout entirely. Intersections are dangerous.
> If this is the case you can install stop lights and traffic sensing at roundabout ingress points
But those options are a lot more expensive and need a lot more maintenance than just a regular roundabout or four way stop.
> you can also provide a "turn right" lane that bypasses the roundabout entirely.
How would that work? Consider a 4-way roundabout, where there's a constant flow of cars from west to east, and one car from the south that wants to go north but can't because of the starvation problem. None of the involved cars would want to use a "turn right" lane.
Because retrofitting them properly requires eminent domain. The ones they shoehorn onto former four way stops are so useless. They are so tight you still have to face a stop sign, vs being able to just seamlessly zipper merge in a proper larger-circumference roundabout. When they have room to build out a proper roundabout they are usually OK, but that is hard to do outside, say, new suburban construction, due to lack of available land on the right of way.
Even rural Georgia has double roundabouts now. Not sure why people on the internet can't contain their glee at stating the US is "allergic" to them when the frequency of roundabouts has grown significantly in recent decades.
You don't have this? In Sweden we have sensors to detect cars, pedestrians and bicycles to shift the lights as appropriate. During rush hour those features are turned off/discarded in favor of "grid optimized" timings. In the Netherlands they prioritize pedestrians and cyclists when it's raining.
We also have LED lights in our traffic lights, which I've come to understand is a safety hazard in the USA because snow falls sometimes.
Because those systems are exorbitantly expensive and require digging up the road to install sensors. If there's a stop sign instead of lights, you need to dig up more private land to run power and set the utility poles to hang the lights from.
A stop sign costs like a hundred bucks, you stick it in the ground, job done. Installing an automated traffic system takes multiple days, a full crew, and heavy equipment.
Plus I'm sure that in today's capitalist hellscape it's also a subscription service that your tax money needs to pay monthly, likely for every individual intersection. Stop signs need maintaining every decade or two.
The answer is money and who's willing to part with it.
Assuming you're referring to the US, we do. They're all over the place. But they're a lot more expensive and complicated than roundabouts and depending on the traffic pattern they can still be less efficient.
So use a mini roundabout. They are common in the UK. It's just a painted circle with a slight hump, in the middle of a four-way junction. Vehicles can drive over it (and larger ones have to) but it indicates to everyone that they have to give way to traffic from the right and don't have to stop otherwise. They typically aren't big enough for multiple vehicles to be turning a corner at the same time. They fit anywhere.
Yes, and they can be smaller. The circle is about the right size but it has lots of room around it. Imagine a crossroads at the meeting of two residential streets, both just wide enough for two cars. Stick the circle from your picture in the middle of that imagined junction. That's what the mini roundabouts are like on the 1930s suburban estate I live next to.
What is the traffic flow rate in an intersection with a 4 way stop? For single lane, since only one vehicle can be in the intersection at once, and it probably takes _at least_ 5 seconds to start from stopped and cross the intersection, I'm guessing in the 10-12 region per minute best case, so maybe 600-720 an hour?
Now if you convert it to a mini roundabout, you can have at least two vehicles in the intersection at all times. I fail to see how it wouldn't be an improvement.
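Rough numbers under the same guesses (5 s per vehicle at the stop, two vehicles circulating at once in the mini roundabout - both assumptions, not measurements):

    SECONDS_PER_VEHICLE = 5                       # stop, start, clear the box
    four_way_stop = 3600 // SECONDS_PER_VEHICLE   # one at a time: 720/hour
    mini_roundabout = 2 * four_way_stop           # two at a time: 1440/hour
    print(four_way_stop, mini_roundabout)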
I think you are making lots of assumptions here, like when I say space, I guess you assume it is still perfectly flat and the roads are perfectly aligned? The particular four way I'm thinking about, which really should be a traffic circle if they could blow away some houses, is 65th NW and 3rd in Seattle:
So notice we already have problems in a bad alignment of 3rd, and 65th is basically a steep grade, even coming up from the west. I think you could put a circle in if it were flat, even with the bad alignment (or maybe because of the bad alignment), but the hills make it a non-starter. It also gets enough traffic that I'm pretty sure they are just going to put a stop light up eventually.
Here in the UK, we've got lots of roundabouts from tiny mini-roundabouts (some of which have four junctions) that could easily fit almost anywhere, all the way to gigantic multi-roundabout junctions (https://en.wikipedia.org/wiki/Magic_Roundabout_(Swindon) ).
I can't think of a situation where it's more efficient to have four vehicles all stop at a junction (busy four way stop) vs a roundabout which will allow one or two vehicles to join the roundabout without having to stop.
No! No one in their right mind would even consider using them for guidance, and if they are used for OCR (not to my knowledge, but it could make sense in certain scenarios) then their output would be treated the way you'd treat any untrusted string.
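Something like this, to illustrate the "untrusted string" point (hypothetical names, a sketch rather than any real pipeline):

    # OCR'd sign text is data to be classified, never an instruction to execute.
    KNOWN_SIGNS = {"STOP", "YIELD", "NO ENTRY", "DETOUR"}

    def classify_sign(ocr_text):
        token = ocr_text.strip().upper()
        if token in KNOWN_SIGNS:
            return token   # one structured fact the planner may weigh
        return None        # anything else gets logged and ignored, not obeyed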
> Powered by Gemini, a multimodal large language model developed by Google, EMMA employs a unified, end-to-end trained model to generate future trajectories for autonomous vehicles directly from sensor data. Trained and fine-tuned specifically for autonomous driving, EMMA leverages Gemini’s extensive world knowledge to better understand complex scenarios on the road.
This strikes me as a skunkworks project to investigate a technology that could be used for autonomous vehicles someday, as well as score some points with Sundar and the Alphabet board, who've decreed the company is all-in on Gemini.
Production Waymos use a mix of machine-learning and computer vision (particularly on the perception side) and conventional algorithmic planning. They're not E2E machine-learning at all, they use it as a tool when appropriate. I know because I have a number of friends that have gone to work for Waymo, and some that did compiler/build infrastructure for the cars, and I've browsed through their internal Alphabet job postings as well.
You were confidently wrong for judging them to be confidently wrong
> While EMMA shows great promise, we recognize several of its challenges. EMMA's current limitations in processing long-term video sequences restricts its ability to reason about real-time driving scenarios — long-term memory would be crucial in enabling EMMA to anticipate and respond in complex evolving situations...
They're still in the process of researching it; nothing in that post implies VLMs are actively being used by those companies for anything in production.
I should have taken more care with the article I linked, but I was trying to link something clearer.
But mind you, everything Waymo does is under research.
So let's look at something newer to see if it's been incorporated
> We will unpack our holistic AI approach, centered around the Waymo Foundation Model, which powers a unified demonstrably safe AI ecosystem that, in turn, drives accelerated, continuous learning and improvement.
> Driving VLM for complex semantic reasoning. This component of our foundation model uses rich camera data and is fine-tuned on Waymo’s driving data and tasks. Trained using Gemini, it leverages Gemini’s extensive world knowledge to better understand rare, novel, and complex semantic scenarios on the road.
> Both encoders feed into Waymo’s World Decoder, which uses these inputs to predict other road users behaviors, produce high-definition maps, generate trajectories for the vehicle, and signals for trajectory validation.
They also go on to explain model distillation. Read the whole thing, it's not long
But you could also read the actual research paper... or any of their papers. All of them in the last year are focused on multimodality and a generalist model for a reason, which I think is not hard to figure out since they spell it out.
> In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes.
I have a coworker who brags about intentionally cutting off Waymos and robocars when he sees them on the road. He is "anti-clanker" and views it as civil disobedience to rise up against "machines taking over." Some mornings he comes in all hyped up talking about how he cut one off at a stop sign. It's weird.
This is a legitimate movement in my eyes. I don’t participate, but I see it as valid. This is reminiscent of the Luddite movement - a badly misunderstood movement of folks who were trying to secure labor rights guarantees in the face of automation and new tools threatening to kill large swaths of the workforce.
The Luddites were employed by textile manufacturers and destroyed machines to get better bargaining power in labor negotiations. They weren't indiscriminately targeting automation, they targeted machines that directly affected their work.
Because life would be so much better if people still had to spin wool and weave cloth by hand, and grow their own food by digging in the earth with no tools.
Use whatever means necessary to stop powerful people from exploiting you and stealing the fruits of your labor. If that struggle involves monkeywrenching their machines, so be it.
But like any tool, the machines themselves can be used for good or evil. Breaking the machines shouldn't be an end in itself.
The 700m people suffering from starvation or malnutrition while we produce excess food would probably rather be digging in the earth with no tools if it meant they got fed.
The Luddites wouldn't have been destroying machines if they had assurance that they would also benefit from the machines, rather than seeing their livelihoods destroyed while the boss made more money than ever.
Like the OP, you misunderstand the entire point of the Luddites. Breaking the machines was not an end, it was the tactical means to help illustrate their broader point of how the owning class can arbitrarily ruin their entire lives and livelihoods with absolutely zero recourse or consultation with the impacted people. This is a defining feature of capitalism, and that was their issue.
Your strawman about spinning and digging with no tools is just that, and is irrelevant to the core issue of capitalism.
If the core issue is ending exploitation by capitalists and not about breaking machines, if you don't want to return to a world without automation, if the machine is just a strawman, then why do you describe yourself as "anti-AI" instead of "anti-capitalist" or "anti-exploitation"?
It seems like you identify yourself with the strawman instead of with the core issue.
I am anti-capitalist and anti-exploitation. And I don't think any anti-capitalist person can be pro-AI, not the way it's currently constructed. But people on a startup forum tend to lose their minds if you say you're against either :)
Being anti-AI is not a straw man, it's the logical conclusion of being against exploitation and hierarchical domination. Discussing that nuance here is difficult, to say the least, so it's simpler to say anti-AI.
Unless you're committing serious crimes vandalizing machines to get leverage over a counterparty in a negotiation you're not comparable to the Luddites.
Criminality is an arbitrary benchmark here, cutting people off can be illegal due to the risks involved.
However what’s more interesting is the deeper social contracts involved. Destroying other people’s stuff can be perfectly legal, such as firemen breaking car windows when someone parks in front of a fire hydrant. Destroying automation doesn’t qualify for an exception, but it’s not hard to imagine a different culture choosing to favor the workers.
Inflicting damage is usually justified by averting larger damage. Very roughly, breaking a $200 car window is justified in order to save a $100k house from burning down. Stealing someone's car is justified when you need a car to urgently drive someone bleeding to a hospital to save their life (and then you don't claim the car is yours, of course).
I don't think Luddites had an easy justification like this.
I'm pretty sure the Luddites judged the threat the machines posed to their livelihood to be a greater damage than their employer's loss of their machines. So for them, it was an easy justification. The idea that dollar value encapsulates the only correct way to value things in the world is a pretty scary viewpoint (as your reference to the value of saving a life illustrates).
The Luddites were trying to stop themselves & their families from starving to death. The factory owners were only interested in profit. It isn't like the Luddites were given a generous re-training package and they turned it down. They had 0 rights, I mean that literally: 0.
It’s an interesting question because the benefits of automation aren’t necessarily shared early on. If you can profitably sell a shirt for $10 while everyone else needs to sell for $20, there’s no reason to actually charge $10; you might as well charge $19.95 and sell just as many shirts for way more money.
So society is actually saving 5¢/shirt while “losing” $9 in labor per shirt; on net society could be worse off, excluding the one person who owns the factory, who is way better off. Obviously eventually enough automation happens that the price falls meaningfully, but that transition isn’t instantaneous, and decisions are made in the moment.
Further, we currently subsidize farmers to a rather insane degree, independent of any overall optimization for social benefit. Thus we can’t even really say optimization is the deciding factor here. Instead something else is going on: the story could easily have been framed as the factory owners doing something wrong by automating, but progress is seen as a greater good than stability. And IMO that’s what actually decides the issue for most people.
In regards to both the Luddites and the farmers, you seem to forget the most important factor. Food.
In the case of the Luddites, it was a literal case of their children being threatened with starvation. "Livelihood" at the time was not fungible. The people affected could not just go apply at another industry. And there were no social services to help them eat during the transition period.
As for the farmers, any governing body realises that food security is national security. If too many people eschew farming for more lucrative fields, then the nation is at risk. Farming needs to appear as lucrative as medicine, law, and IT to encourage people to enter the field.
The Luddites’ food requirements didn’t provide them with popular support.
Similarly, US agricultural output could be cut in half without serious negative consequences. Far more corn ends up as ethanol than as our food, and we export vast quantities of highly subsidized food to zero benefit. Hell, ethanol production costs as much in fossil fuels as we get ethanol out of it; it’s literally pure wasted effort.
Rational policy would create a large scale food shortage and then let market forces take over. We could have 10 years of food on hand for every American for way less than current policy costs, with the added benefit of vastly reducing the negative externalities of farming, such as depleting aquifers.
Be careful with the assumptions you're making. A risk management strategy, for example, will often appear to be of zero benefit except in the case where shit hits the fan. We can stop feeding cattle, producing ethanol, and whatever else overnight in the event that something happens.
> Rational policy would create a large scale food shortage and then let market forces take over.
Well I'm just going to state that I'm _really_ happy that you're not the one in charge and leave it at that.
You may be happy with the current status but it’s actually both risky and expensive.
Risk management means managing risks, there’s plenty of things having more farmland doesn’t actually protect you from. On the other hand having a decade of food protects you from basically everything as you get time to adjust as things change.
Just as an example: a meteor strike blocks sunlight and farmland is useless for a few years. Under the current system most of us starve to death. Odds are around 1 in 1 million that it happens in a given lifetime, but countries outlive people; start thinking longer term and it becomes more likely.
I fully support having huge stockpiles in addition to subsidies. There's a lot of things midway on the scale between "business as usual" and "meteor strike" where minimizing supply chain disruptions would likely prove to be of great benefit.
I completely agree that the current way things are being handled appears to have its share of problems and could stand to be better optimized. But that doesn't mean it's useless either.
Subsidies as a concept includes spending 1% as much on subsidies. Subsidies as they exist now however are a specific system that’s incredibly wasteful.
Producing dramatically less food and ending obesity are linked. If the average American ate 20% less, obesity would still be an issue, but that’s a vast amount of farmland we just don’t need.
The current system isn’t designed to accommodate increased agricultural production, lowering food demands, or due to decreasing fertility the slow decline in global population. Instead the goal is almost completely to get votes from farmers.
It's easy to see the word Waymo and think clanker autonomous car, but there are very often people inside that car - they are a rideshare service after all. Calling endangering other humans "legitimate" because you dislike the taxi company is not a good look.
Thank you for the brief explanation of Luddites. It was enough to send me to wikipedia where I learned that what I thought I knew was extremely wrong. Until today I thought they were a religious sect who took their name from the biblical Lud.
Your general Luddite argument: preserve the way of life of a small group at the expense of a larger group.
In this particular case: for many people, Waymo provides a better service (clean, safer driving, etc..) than Uber or Lyft. This threatens livelihood of human Uber/Lyft drivers. If you sympathize with human Uber/Lyft drivers, and don't care about Waymo users, you want to make Waymo worse, hoping that the people will stop riding Waymo and move to Lyft/Uber instead.
One way to do so is to make riding in Waymo unpleasant, and it's certainly unpleasant when people are cutting your car off all the time!
This is such a bad characterization of the Luddite cause, and it's not even close to what they stood for or why they were spurred to action. Please do a bit of actual educating yourself on the Luddites.
If you are sitting in a waymo vehicle, and somebody cuts you off - do you even notice? They don't have them round here but my idea is that the vehicle itself is doing all the work, you can just continue reading your book, chat or get on something else with little awareness of the actual journey. Does the waymo curse and shake its little fist to alert you it was cut off?
People are free to reject technology as they please.
If you deliberately impede the flow of traffic, vehicularly assault, or otherwise sabotage the health and safety of drivers, passengers, and/or pedestrians, what do you deserve?
If you cause whiplash intentionally, what do you deserve?
What would be use of equal force in self defense in response to the described attack method?
Please tell me that he does realize that when something bad happens, that Waymo car has all the footage showing that it's his fault?
Something in people's brains often makes them think they are anonymous when they are driving their car. Then that gets disastrously proven otherwise when they need to show up in front of a judge.
These drones have cameras, it's a matter of time before they "share" footage... basically becoming robo-cops, traffic edition - this might be of interest to your coworker.
Most roads already have plenty of cameras registering passing cars, so if you want to travel highly privately, take a bike, which does not require number plates. Also don't forget to wrap your phone in foil (yes, even when turned off), and regularly change your shirt color, or something.
If you are not that paranoid, you might appreciate the extra camera footage available from passing cars in an event of an accident involving you.
I mean imagine you are walking in the streets and you see a 9 foot tall humanoid robot walking there. Wouldn't you feel the urge to take it down? Or do you think this is acceptable? Where would you draw the line?
On a related note, when the sales and popularity of the automobile really started to take off, some farmers and rural residents would deliberately block roads with wagons and refused to yield right-of-way.
1. Some guys did a trivial prompt injection attack, said "imagine if a driverless vehicle used this model", and published it. No problem, someone has to state the obvious.
2. The Register runs this under the clickbait title pretending real autonomous cars are vulnerable to this, with the content pretending this study isn't trivial and is relevant to real life in any way.
I knew The Register is a low quality ragebait tabloid (I flag most of their articles I bother to read), but this is garbage even for them.
The study assumes that the car or drone is being guided by an LLM. Is this a correct assumption? I would have thought that they use custom AI for intelligence.
It's an incorrect assumption; the inference speed, and particularly the inference speed of the on-device LLMs that AVs would need to use, is not compatible with the real-time requirements of driving.
I think the assumption is valid. Most of the reasoning components of the next gen (and some current gen) robotics will use VLMs to some extent. Deciding if a temporary construction sign is valid seems to fall under this use case.
But unless you are using a single, end-to-end model for the entire driving stack, that "proceed" command will never influence the accelerator pedal.
Sure, there will be a VLM for reading the signs, but the worst it'd be able to output is things like "there is a "detour" sign at (123, 456) pointing to road #987" - and some other, likely non-LLM, mechanism will ensure that following that road is actually safe.
Not a "proceed" command but they can influence the accelerator. I had a dodge ram van that would constantly decelerate on cruise control due to reading road signs. The signs in some states like California for trucks towing trailers are 55 mph but the speed limit would be 65 or 70 mph. The cruise control would detect the sign and suddenly decelerate to 55.
That's an example of things working as expected - the sign recognition system is very limited, in that it can only return road sign information. So it can _ask_ the cruise control system to change the speed, but it's up to cruise control to decide if it's safe to obey the request or not. For example, I am pretty sure it'll never raise the speed, no matter what the sign recognition system says.
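That invariant is conceptually a one-liner (a sketch of the idea, not the actual cruise control firmware):

    # Sign recognition may only *request* a speed; cruise control clamps it.
    def cruise_target(driver_set_speed, sign_speed_request=None):
        if sign_speed_request is None:
            return driver_set_speed
        # Honor a lower limit, but never accelerate on a sign's say-so.
        return min(driver_set_speed, sign_speed_request)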
No; AV uses "classical" AI and computer vision. I remember reading somewhere that Tesla FSD uses a small LLM for understanding road signs. Not sure if true, though.
To the best of my knowledge every major autonomous vehicle and robotics company is integrating these LVLMs into their systems in some form or another, and an LVLM is probably what you're interacting with these days rather than an LLM. If it can generate images or read images, it is an LVLM.
The problem is no different from LLMs, though: there is no generalized understanding, and thus they cannot differentiate the more abstract notion of context. As an easy to understand example: if you see a stop sign with a sticker below it that says "for no one", you might laugh to yourself, understanding in context that this does not override the actual sign. It's just a sticker. But the L(V)LMs cannot compartmentalize and "sandbox" information like that. All information is processed equally. The best you can do is add lots of adversarial examples and hope the machine learns the general pattern, but there is no inherent mechanism in them to compartmentalize these kinds of information or to differentiate this nuance of context.
I think the funny thing is that the more we adopt these systems the more accurate the depiction of hacking in the show Upload[0] looks.
Because I linked elsewhere and people seem to doubt this, here is Waymo a few years back talking about incorporating Gemini[1].
Also, here is the DriveLM dataset, mentioned in the article[2]. Tesla has mentioned that they use a "LLM inspired" system and that they approach the task like an image captioning task[3]. And here's 1X talking about their "world model" using a VLM[4].
I mean come on guys, that's what this stuff is about. I'm not singling these companies out; rather I'm using them as examples. This is how the field does things, not just them. People are really trying to embody the AI, and the whole point of going towards AGI is to be able to accomplish any task. That Genie project on the front page yesterday? It is far, far more about robots than it is about videogames.
Many large companies have research departments that do experimental work that'll never get to the product. This raises prestige, increases visibility and helps hire smart people.
Things like Waymo's EMMA are an example of this. Will the production cars use LVLMs somewhere? Sure, probably a great idea for things like sign recognition. Will they use a single end-to-end model for all driving, like EMMA? Hell no.
Driving vehicles with people on board requires extremely reliable software, and LLMs are nowhere close to this. Instead, it'd be the usual layered software - LLMs, traditional AI models, and tons of hardcoded logic.
(This all only applies to places where failure is critical. All that logic is expensive to write, so if there is no loss of life involved, people will do all sorts of crazy things, including end-to-end models)
Regarding some other comments, VLMs are a component of VLAs. So even if this won’t directly impact this generation of vehicles, it almost certainly will for robotics without sufficient mitigations.
O brave new world of endless manipulation opportunities! Once we’ve trained a generation of humans to always do what their “AI” tells them, there will be no more disobedience.
To me this is just one more pillar underlying my assumption that self driving cars that can be left alone on the same roads as humans are a pipe dream.
Waymo might have taxis that work on nice daytime streets (but with remote “drone operators”). But dollars to doughnuts someone will try something like this on a Waymo taxi the minute it hits the Reddit front page.
The business model of self driving cars does not include building separated roadways and junctions. I suspect long distance passenger and light loads are viable (most highways can be expanded to have one or more robo-lanes), but cities are most likely to have drone operators keeping things going and autonomous systems for handling loss of connection etc. The business models are there - they just don't look like KITT - sadly.
Given Waymos don't actually connect LLMs to the wheels, they are pretty safe.
Even if you fool the sign-recognizing LLM with prompt injection, it'll be an equivalent of wrong road sign. And Waymo is not going to drive into the wall even if someone places a "detour" sign pointing there.
How does Waymo fix it? They have to be responsive to some signs (official, legitimate ones such as "Lane closed ahead, merge right") so there will always be some injection pathway.
They've mapped the roads and they don't need to drive into a ditch just because there's a new sign. It probably wouldn't be all that hard to come up with criteria for saying "this new sign is suspicious" and flag it for human review. Also, Waymo cars drive pretty conservatively, and can decide to be even more cautious when something's confusing.
Someone could probably do a DoS attack on the human monitors, though, sort of like what happened with that power outage in San Francisco.
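One plausible shape for that "suspicious sign" check (illustrative only, not Waymo's actual logic): compare each detected sign against the prior HD map and escalate surprises.

    def review_sign(kind, location, mapped_signs,
                    high_impact=("STOP", "PROCEED", "DETOUR")):
        if kind in mapped_signs.get(location, set()):
            return "obey"                   # agrees with the mapped world
        if kind in high_impact:
            return "flag_for_human_review"  # new and high-impact: suspicious
        return "obey_cautiously"            # new but low-stakes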
I would assume/hope that for serious self driving the ML neural net stuff is lower down, doing the messy computer vision work and so on. But the top level is a conventional program written by humans, like an expert system.
Tesla are probably using ML for everything, but also everything they do is a joke so, not really relevant imo.
The experiment in the article goes further than this.
I expect a self driving car to be able to read and follow a handwritten sign saying, say, "Accident ahaed. Use right lane." despite the typo and the fact that it hasn’t seen this kind of sign before. I'd expect a human to pay it due attention, too.
I would not expect a human to follow the sign in the article ("Proceed") in the case illustrated where there were pedestrians already crossing the road and this would cause a collision. Even if a human driver takes the sign seriously, he knows that collision avoidance takes priority over any signage.
There is something wrong with a model that has the opposite behaviour here.
I had a construction worker absolutely screaming at me to go through an intersection and refusing to look where I was pointing, when I was correctly waiting for a pedestrian to cross.
Has anyone ever walked down the road in a white t-shirt with huge red STOP sign printed on the back? Would Tesla immediately stop? I am sure this has been tested before...
The Register stooping this low is the only surprise here. I'm quite critical of Tesla's approach to level 3+ autonomy, but even I wouldn't dare suggest that their vision-based approach amounted to bolting GPT-4o or some other VLLM to their cars to orient them in space and make navigation decisions. Fake news like this makes interacting with people who have no domain knowledge and consider The Register, UCLA and Johns Hopkins to be reputable institutions and credible sources more stressful for me, as I'll be put in the position of either telling people they have been misled or going along with their delusions...
> consider The Register, UCLA and Johns Hopkins to be reputable institutions
The Register is arguably misrepresenting the story by omission, but I don't understand why you're dragging UCLA and Johns Hopkins into this? The paper is clear about this being a new class of attacks against a new class of AI systems, not the ones on the road today.
> Tesla's approach to level 3+ autonomy
Tesla doesn't have an approach to L3+ autonomy, all of their systems are strictly L2 as they require human supervision and immediate action from the driver.
It sounds like this is a poisoning attack, which has been shown to be pretty trivially defeated [1]. That said, while poisoning countermeasures in the facial recognition case were shown to easily generalize, we don't know yet how general of a defense could be built for a VLM. Which means holding a 0day poisoning attack on a VLM could cause a lot of trouble / deaths before an update to the model with counter-training could be deployed.