> Requiring technical evidence such as screencasts showing reproducibility, integration or unit tests demonstrating the fault, or complete reproduction steps with logs and source code makes it much harder to submit slop.
If this isn't already a requirement, I'm not sure I understand what even non-AI-generated reports look like. Isn't the bare minimum of CVE reporting a minimally reproducible example? Even if you just find some function that, say, doesn't do bounds-checking on an array, you can trivially write a unit test that breaks it.
As someone who has worked on the receiving end of security reports: often not. They can be surprisingly poorly written.
You sort of want to reject them all, but occasionally a gem gets submitted, which makes you reluctant.
For example, years ago I was responsible for triaging bug bounty reports at a SaaS company I worked at. One of the most interesting reports came from someone who had found a way to bypass our OAuth flow using a Safari bug that let them bypass most OAuth forms. The report was barely understandable, written in broken English. My impression was that they had tried to send it to Apple, but Apple ignored them. We ended up rewriting the report and submitting it to Apple on their behalf (we made sure the reporter got all the credit).
If we had ignored poorly written reports, we would have missed that. Is it worth it, though? I don't know.
It was a long time ago so I might be misremembering, but I think the idea was that Safari would leak the target of redirects cross-domain, which allowed the attacker to capture some of the OAuth tokens.
So Safari was not following the web browser specs, in a way that compromised OAuth in a common mode of implementation.
In the AI age I'd prefer poorly written reports in broken English, just as long as that doesn't become a known bypass, with AIs then instructed to sound broken.
The problem is that a lot of CVEs don't represent "real" vulnerabilities, but merely theoretical ones that could hypothetically be combined into a real exploit.
Regex exploitation is the forever example to bring up here; it's generally the main reason that "autofail CI the moment an auditing command fails" doesn't work on certain codebases. It's trivial to craft a string that wastes significant resources when matched against a vulnerable pattern, so the moment you have a function that accepts a user-supplied regex pattern, that's suddenly an exploit... which gets a CVE. A lot of projects then have CVEs filed against them because internal functions take regex patterns as arguments, even in code the user is flat-out never going to be able to interact with (i.e., several dozen layers deep in framework soup there's a regex call somewhere, in a way the user can't reach unless a developer several layers up starts deliberately breaking the framework they're using in really weird ways).
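To make the blowup concrete, here's a small standalone sketch (the illustration and names are mine, not tied to any particular project): it counts how many candidate splits a backtracking regex engine would have to enumerate before it can reject a string like "aaa...ab" against the classic pathological pattern `(a+)+`, namely every way of dividing a run of n 'a's into non-empty groups.

```rust
// Sketch: the pattern `(a+)+` forces a backtracking engine to try
// every way of splitting a run of n 'a's into non-empty groups before
// it can reject "aaa...ab". There are 2^(n-1) such splits, hence the
// exponential blowup. Memoized here so the *count* itself is cheap.
fn candidate_splits(n: usize, memo: &mut Vec<Option<u64>>) -> u64 {
    if n == 0 {
        return 1;
    }
    if let Some(v) = memo[n] {
        return v;
    }
    // Choose the length of the first group, then split the remainder.
    let v = (1..=n).map(|first| candidate_splits(n - first, memo)).sum();
    memo[n] = Some(v);
    v
}

fn main() {
    for n in [8, 16, 32] {
        let mut memo = vec![None; n + 1];
        println!("n = {n}: {} splits to reject", candidate_splits(n, &mut memo));
    }
}
```

The count doubles with every extra character, which is why a short attacker-supplied string can pin a CPU for minutes on a backtracking engine.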
The CVE system is just completely broken and barely serves as an indicator of anything. From what I can tell, the approval process favors acceptance over rejection: the people reviewing the initial CVE filing aren't the same people who actively investigate whether the CVE is bogus, and the incentive of the CVE system is literally to encourage companies to give a shit about software security (a fact that is also often exploited to create beg bounties). CVEs have been filed against software for what amounts to "a computer allows a user to do things on it" even before AI slop made everything worse; the system was questionable in quality seven years ago at the very least, and is even worse these days.
The only indicator it really gives is that a real security exploit can feel more legitimate if it gets a CVE assigned to it.
The fact that we have lambdas/serverless functions and people are still over-engineering k8s clusters for their "startup project" is genuinely hilarious. You can literally validate your idea with some janky Python code and like 20 bucks a month.
The problem is that people don't like hearing their ideas suck. I do this too, to be fair. So, yes, we spend endless hours architecting what we'd desperately hope will be the next Facebook because hearing "you are definitely not the next Facebook" sucks. But alas, that's what doing startups is: mostly building 1000 not-Facebooks.
The lesson here is that the faster you fail, the faster you can succeed.
I heard this years ago from someone: there's a material impact on a company's bottom line if those pages get updated, which is why someone fairly senior usually has to "approve" it. Obviously it's technically trivial, but if they acknowledge downtime (as in the AWS case, for example), investors will have questions, it might make quarterly reports, and it might impact the stock price.
So it's not just a "status page"; it's an indicator that could affect market sentiment, so there's a lot of pressure to leave everything "green" until there's no way to avoid it.
I feel like there should at least be some sort of disclaimer that tells me the status page can take up to xx minutes to show an outage, rather than making it seem as if it's updated instantaneously. That way I could wait those xx minutes before filing a ticket with support, instead of opening a case thinking it's a problem isolated to me rather than a major outage.
This will be litigated and I have a feeling OpenAI/Anthropic/Claude/MistralAI will win, since we've been down similar roads before[1]. With that said, AI slop will never be a replacement for human creativity, and while AI is pretty incredible technology, I'm actually way more bullish on people.
You don’t grasp how anime and IP are underwritten by the state in Japan. It will firewall this and then other creators will follow their lead. The point is to head off slop, not allow it. Litigation isn’t the end, firewalls are.
We’re developing a game AI can’t use. We’ve invented several firewalls.
I just looked it up because I was interested in their etymologies, and it seems the words actually share the same (Old English/Germanic) root: essentially a compound of "many" + "fold."
Very true. I rarely find myself "Googling" anymore. I'd rather just ask ChatGPT. Even if the enshittification (ads, etc.) will happen down the line, at least we'll have an absolutely awesome product (like Google was to Yahoo) for 5-10 years.
OpenAI is worth at least half as much as Google. I foresee Google becoming like IBM, and these new LLM companies becoming the new generation of tech companies.
I'm an early-stage CTO, expert engineer, and data professional interested in team-building, consulting and architecting data pipelines or API-heavy backends. At Edmunds.com, I worked on a fairly successful ad-tech product and my team bootstrapped a data pipeline using Spark, Databricks, and microservices built with Java, Python, and Scala.
At ATTN:, I re-built an ETL Kubernetes stack, including data loaders and extractors that handle >10,000 API payload extractions daily. I created SOPs for managing data interoperability with Facebook Marketing, Facebook Graph, Instagram Graph, Google DFP, Salesforce, etc.
More recently, I was the CTO and co-founder of a gaming startup. We raised over $6M and I was in charge of building out a team of over a dozen remote engineers and designers, with a breadth of experience ranging from Citibank, to Goldman Sachs, to Microsoft. I moved on, but retain significant equity and a board seat.
I am also a minority owner of a coffee shop in northern Spain. That I'm a top-tier developer goes without saying. I'm interested in flexing my consulting muscle and can help with best practices, architecture, and hiring.
Would love to connect even if it's just for networking!
> &mut future1 is dropped, but this is just a reference and so has no effect. Importantly, the future itself (future1) is not dropped.
There's a lot of talk about Rust's await implementation, but I don't really think that's the issue here. After all, Rust doesn't guarantee convergence. Tokio, on the other hand (being a library that handles multi-threading), should (at least when using its own constructs, e.g. the `select!` macro).
So, since the crux of the problem is the `tokio::select!` macro, it seems like a pretty clear tokio bug. Side note, I never looked at it before, but the macro[1] is absolutely hideous.
There's nothing `select!` could do here to force `future1` to drop, because it doesn't receive ownership of `future1`. If we wanted to force this, we'd have to forbid `select!` from polling futures by reference, but that's a pretty fundamental capability that we often rely on to `select!` in a loop for example. The blanket `impl<F> Future for &mut F where F: Future ...` isn't a Tokio thing either; that's in the standard library.
Surely not every use of `select!` needs this ability. If you can design a more restrictive interface that makes correctness easier to determine, then you should use that interface where you can, and reserve `select!` for only those cases where you can't.
What could `tokio::select!` do differently here to prevent bugs like this?
In the case of `select!`, it is a direct consequence of the ability to poll a `&mut` reference to a future in a `select!` arm, where the future is not dropped should another future win the "race" of the select. This is not really a choice Tokio made when designing `select!`, but is instead due to the standard library's implementations of `Future` for `&mut F where F: Future + Unpin`[1] and for `Pin<P> where P::Target: Future`[2].
Tokio's `select!` macro cannot easily stop the user from doing this, and, furthermore, the fact that you can do this is useful: there are many legitimate reasons you might want to continue polling a future after another branch of the select completes first. It's desirable to be able to express the idea that we want to continually drive one asynchronous operation to completion while periodically checking whether some other thing has happened, taking action based on that, and then continuing to drive the ongoing operation forward. That was precisely what the code in which we found the bug was doing, and it is a pretty reasonable thing to want to do; a version of the `select!` macro that disallowed it would be far less useful. The issue arises specifically when the `&mut future` has been polled to a state in which it has acquired, but not released, a shared lock or lock-like resource, and then another arm of the `select!` completes first and the body of that branch runs async code that also awaits that shared resource.
If you can think of an API change which Tokio could make that would solve this problem, I'd love to hear it. But, having spent some time trying to think of one myself, I'm not sure how it would be done without limiting the ability to express code that one might reasonably want to be able to write, and without making fundamental changes to the design of Rust async as a whole.
> It's desirable to be able to express the idea that we want to continually drive one asynchronous operation to completion while periodically checking if some other thing has happened and taking action based on that, and then continue driving forward the ongoing operation.
This idea may be desirable, but a deadlock is possible if there's a dependency between the two operations. The crux is the "and then continue," which I take to mean that the first operation is meant to pause while the second operation occurs. The use of `&mut` in the code specifically enables that, too.
If it's OK for the first operation to run concurrently with the other thing, then wrt. Tokio's APIs, have you seen LocalSet[1]? Specifically:
    let local = LocalSet::new();
    local.spawn_local(async move {
        sleep(Duration::from_millis(500)).await;
        do_async_thing("op2", lock.clone()).await;
    });
    local.run_until(&mut future1).await;
This code expresses your idea under a concurrent environment that resolves the deadlock. However, `op2` will still never acquire the lock because `op1` is first in the queue. I strongly suspect that isn't the intended behaviour; but, it's also what would have happened if the `select!` code had worked as imagined.
A meta-idea I have: look at all usages of `select!` with `&mut future`s in the code, and see if there are maybe 4 or 5 patterns that emerge. With that it might be possible to say "instead of `select!` use `poll_continuing!` or `poll_first_up!` or `poll_some_other_common_pattern!`".
It feels like a lot of the way Rust untangles these tricky problems is by identifying slightly more contextful abstractions, though at the cost of needing more scratch space in the mind for the various methods.
I can imagine an alternate universe in which you cannot do:
1. Create future A.
2. Poll future A at least once but not provably poll it to completion and also not drop it. This includes selecting it.
3. Pause yourself by awaiting anything that does not involve continuing to poll A.
I’m struggling a bit to imagine the scenario in which it makes sense to pause a coroutine that you depend on in the middle like this. But I also don’t immediately see a way to change a language like Rust to reliably prevent doing this without massively breaking changes. See my other comment :)
I'm not familiar with Tokio, but I am familiar with folly coro in C++, which is similar-ish. You cannot co_await a folly::coro::Task by reference; you must move it. It seems like that prevents this bug. So maybe `select!` is the low-level API, and a higher-level (i.e. safer) abstraction can be built on top?
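As a sketch of what that move-only discipline could look like in Rust (an entirely hypothetical API; the names `race` and `block_on` are mine, and the tiny busy-poll executor exists only so the example runs without any async runtime): a select-like helper that takes both futures by value, so whichever future loses is dropped when the call returns, releasing any lock it held.

```rust
use std::future::{poll_fn, Future};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Hypothetical owning "race": both futures are moved in. Whichever
// loses is dropped when `race` returns, so a guard held inside it is
// released -- the discipline folly::coro::Task enforces via move-only.
async fn race<A: Future, B: Future>(a: A, b: B) -> Result<A::Output, B::Output> {
    let mut a = Box::pin(a);
    let mut b = Box::pin(b);
    poll_fn(move |cx| {
        if let Poll::Ready(v) = a.as_mut().poll(cx) {
            return Poll::Ready(Ok(v));
        }
        if let Poll::Ready(v) = b.as_mut().poll(cx) {
            return Poll::Ready(Err(v));
        }
        Poll::Pending
    })
    .await
}

// Minimal busy-poll executor so the sketch is runtime-independent.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
        std::thread::yield_now();
    }
}

fn main() {
    // The ready future wins; the pending one is consumed and dropped.
    let won = block_on(race(async { 1u32 }, std::future::pending::<u32>()));
    assert_eq!(won, Ok(1));
    println!("winner: {won:?}");
}
```

The `futures` crate's `select` function works in a similar by-value spirit; the cost, as discussed upthread, is that you can no longer keep driving the losing future on a later iteration.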
Although the design of the `tokio::select!` macro creates ways to run into this behavior, I don't believe the problem is specific to `tokio`. Why wouldn't the example from the post using Streams happen with any other executor?
First of all, great write-up! Had a blast reading it :) I think there's a difference between a language giving you a footgun and a library giving you a footgun. Libraries, by definition, are supposed to be as user-friendly as possible.
For example, I can just do `loop { }` which the language is perfectly okay with letting me do anywhere in my code (and essentially hanging execution). But if I'm using a library and I'm calling `innocuous()` and there's a `loop { }` buried somewhere in there, that is (in my opinion) the library's responsibility.
N.B. I don't know enough about tokio's internals to suggest any changes and don't want to pretend like I'm an expert, but I do think this caveat should be clearly documented and a "safe" version of `select!` (which wouldn't work with references) should be provided.
I forget if this part unwinds to the exact same place, but some of this kind of design constraint in Tokio stems from much earlier language capabilities and is prohibitive to adjust without breaking the user ecosystem.
One of the key advertised selling points of some of the other runtimes was specifically the behavior of tasks on drop of their join handles, for example, for reasons closely related to this post.
> Pricing model for a terminal. What a time to be alive.
As soon as they raised 50M+ (why you'd ever need 50 million dollars to build a terminal, something essentially "solved" since the 1970s, is a pretty good question), this was bound to happen. The same nonsense will happen to Zed, etc.
To be fair, for those of us who live in a terminal, the terminal is/was not solved.
Old terminals are slow and have a bunch of weird Unicode issues.
Now, Warp is a terrible product, and I have nothing nice to say about them.
But look at modern terminals like Kitty or Ghostty. There are so many very nice improvements. Like mouse support that works well (as opposed to "kind of works, but who needs a mouse?!, won't fix"), fast keyboard response (you'd think it wouldn't be noticeable, but it's very noticeable), copy-and-paste that makes sense and isn't different from everything else on the system, etc.