Trail of Bits --- which has a reputation in the field approaching "unimpeachable" --- confirmed a series of serious vulnerabilities. Whether there are real findings involved in this report isn't in question, and hasn't been since the day of the announcement, when Dan Guido from Trail of Bits confirmed that they'd reviewed and validated the findings.
It's ironic, or maybe it isn't, that after CTS-Labs published their findings in a manner basically optimized for innuendo, Anandtech has run a story that is itself basically composed of innuendo. Given how charged people's feelings about AMD seem to be, this is probably manna from heaven for them, and since CTS-Labs isn't publishing the full technical details, it'll be raining bread for them for many days to come.
I wouldn't care, except that after the original monster thread about the CTS-Labs announcement, it's become apparent that HN commenters have a very poor understanding of how vulnerability research actually works, and Anandtech is perpetuating some of those myths, like the idea that researchers invariably (or even routinely) arrange for CVE allocation when publishing new flaws.
You're building a strawman (as you were in that announcement thread as well). The fact that they didn't allocate a CVE or agree on an embargo is really the least of the issues. It's clear that you personally don't want to do that as a security researcher. That's fine, I tend to agree with you.
But that's really not the issue. The real problem is that their reveal was clearly meant to mislead. It was a direct attack on AMD, and probably more specifically on AMD's stock. Technical details were buried in the middle of a mediocre whitepaper, and every description of the flaws is made to appear far more critical than it really is. Meanwhile, they took the time to make a fancy website and even a video, so it's clearly not for lack of time or resources. I don't have any particular problem with them "0-daying" the announcement, but I do have many problems with this pretty obvious manipulation attempt.
Do you genuinely think that CTS-lab acted in good faith here and that the way they handled the disclosure was appropriate? Would you advise other security researchers to do the same thing, to make sure to reveal the vulnerabilities in a way that will cause as much damage as possible, even if it means resorting to good old FUD?
I think attempts to profit off stock swings caused by vulnerabilities are silly and based off a misunderstanding of what really causes stock prices to move.
But even if they weren't, I wouldn't care.
There are three things you can do with a vulnerability that I have a problem with:
1. You can actively exploit the vulnerability to harm people (or knowingly share it with people who do).
2. You can sell a vulnerability without due diligence to ensure it won't be used to harm others.
3. You can lie about having found a vulnerability.
> I genuinely, sincerely don't care how CTS acted.
You've made that quite clear, and it's fine for you to feel that way. But don't go around insulting anyone who thinks that some of those less-technical issues surrounding these vulnerabilities are worth talking about.
So if you make it easier for someone else to develop an exploit, which is then deployed before mitigation can be, that's OK so long as you didn't directly profit from it? I think reasonable people can take a different view. It's all too easy to say "let the chips fall where they may" when you feel none of the consequences. It's even easier when maintaining a general atmosphere of threat is in your own business interest.
And I think it's awfully easy for people who don't do independent unpaid research, and don't contribute in any way to its being conducted, to have strong opinions about the responsibilities researchers have to them.
Let me sum this up: they're not working for you. You don't get to tell them what to do. If AMD wanted to protect their users, they'd quadruple their security verification budget. What do you think that budget is as a percentage of R&D costs at AMD? Take a guess.
That's one of the worst appeals to authority I've ever seen. What hubris. Yes, many of us are busy building the things that you merely criticize. Some of us even have experience in security, but it's not what we do now. But because we're not your tribe, you consistently act like we're not entitled to opinions. You didn't even try to engage with the actual argument here.
It is not unethical to discover things on your own and reveal some or all of it to the world. Sorry. What's your guess at AMD's security verification budget as a percentage of R&D? And what do you think it should be?
I don’t seriously have to explain the concept of two wrongs not making a right, do I? Hopefully by the same token, you’re not really confused by the notion of waiting a month before making something public, vs. being told to never release it. It’s possible to be disgusted with large companies treating security as an afterthought, and people pushing a release in 24 hours because of their disclosed financial interest in the outcome. Likewise, I can support your argument that independent researchers aren’t beholden to AMD or Intel, etc, while finding your total lack of espoused ethics disturbing.
No, see, I answered your question directly. You can't pretend my followup question was that answer and then play the "2 wrongs" card. Can you please first acknowledge that, then, if possible, actually answer the question I asked?
I didn’t ask you a question, I made a statement about you persistently attacking the strawman of “I can do what I want” when what’s been presented over and over is an ethical argument. You still seem to be (intentionally?) ignoring the difference between permissible, and ethical.
You’re free to ignore or reject the ethical dimension of your work, but there is a word for that you can’t talk your way out of: unethical.
"There is no immediate risk of exploitation of these vulnerabilities for most users. Even if the full details were published today, attackers would need to invest significant development efforts to build attack tools that utilize these vulnerabilities. This level of effort is beyond the reach of most attackers (see https://www.usenix.org/system/files/1401_08-12_mickens.pdf, Figure 1)
These types of vulnerabilities should not surprise any security researchers; similar flaws have been found in other embedded systems that have attempted to implement security features. They are the result of simple programming flaws, unclear security boundaries, and insufficient security testing."
- https://blog.trailofbits.com/2018/03/15/amd-flaws-technical-...
>There is no immediate risk of exploitation of these vulnerabilities for most users. Even if the full details were published today, attackers would need to invest significant development efforts to build attack tools that utilize these vulnerabilities. This level of effort is beyond the reach of most attackers...
ToB is right to say this, but it’s not at all uncommon for very serious security vulnerabilities to be “beyond the reach of most attackers.” Browse through Google Project Zero’s blog for examples.
> These types of vulnerabilities should not surprise any security researchers; similar flaws have been found in other embedded systems that have attempted to implement security features. They are the result of simple programming flaws, unclear security boundaries, and insufficient security testing.
Again, correct. However, that describes most serious security vulnerabilities. I’m a security researcher; at this point I’m nearly immune to astonishment about how bad simple programming errors can be. ToB is not insinuating the impact is small, they’re reminding the community that serious problems emerge from seemingly innocuous failures.
For example, I’ve actually witnessed a two-factor auth and password reset system utterly fail and compromise the login interface. A developer wrote “!= 404” instead of “== 200” in the status code handling logic. They forgot the 2FA microservice would return a “429” after five incorrect codes triggered the rate limiter. It was literally a one-line fix. Mistakes don’t get much simpler than that, unless you make a typo or an off-by-one error, but it still allowed every single user’s account to be arbitrarily compromised. These mistakes are extremely easy to make the lower down the stack you go.
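A minimal sketch of how a one-line check like that fails (the function names and status-code behavior here are invented for illustration; the real system was presumably more involved):

```python
# Hypothetical reconstruction of the bug described above. Assume the 2FA
# microservice replies 200 for a correct code, 404 for an incorrect one,
# and 429 once the rate limiter trips after five wrong attempts.

def verify_2fa_buggy(status_code: int) -> bool:
    # Bug: anything that isn't 404 is treated as success, so a 429
    # (rate-limited) response waves the attacker straight through.
    return status_code != 404

def verify_2fa_fixed(status_code: int) -> bool:
    # The one-line fix: only an explicit 200 counts as a verified code.
    return status_code == 200

# An attacker burns five guesses to trip the limiter, then "succeeds":
assert verify_2fa_buggy(429) is True   # rate-limited response accepted
assert verify_2fa_fixed(429) is False  # fixed check rejects it
```

The broader point stands on its own: a boolean written as "not the failure case" instead of "exactly the success case" silently accepts every status the author didn't think of.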
So because there is a vulnerability, confirmed by what you describe as "unimpeachable" Trail of Bits,
I cannot dislike the way CTS-Labs has handled this? And I cannot question their motives behind this, given the other facts we have, including the questionable parties involved?
May I ask, is that what you are suggesting? Given also the numerous comments you have made in this post.
It was fairly clear from the beginning that these were "pedestrian" (I can't think of a better term right now) security flaws that were dressed up to be super-ultra-scary. I would like to believe that this will cause security researchers to maybe hesitate longer before assigning catchy names to new security flaws.
These are not pedestrian vulnerabilities. Not only do they appear to destroy the security of the PSP enclave system AMD uses, they actually make systems that have the PSP less secure than systems that don't. They're essentially the worst possible vulnerabilities you can find in a security coprocessor.
If you'd found the same class of vulnerabilities in Intel SGX or the iPhone SEP, you'd have a contender for the top vulnerability discovery of the year; an almost-lock on the Pwnie.
I simply can't understand the people who are downplaying this other than by assuming that people love AMD so much that they don't want these to be severe vulnerabilities.
Do I think this will move the stock? No. I don't think an SGX break would hurt Intel much either.
Aren't these the same level as the Intel ME vulnerabilities? I'm basically assuming this is wonderful schadenfreude to see that AMD is basically no better than Intel at protecting this sort of stuff.
When I say "pedestrian" vulnerabilities, what I mean is that most users should wait for the patch to come out and apply it, but otherwise not panic. It's definitely not Heartbleed-class "you need to have patched yesterday and, if you haven't, you're already compromised."
Yes, these are comparable to ME vulnerabilities, which were plastered all across the HN front page and had people asking how they could find parts that didn't have MEs or whether they should switch to AMD.
They are not. Remote is far worse, as it usually enables compromise (via privilege escalation in most cases) and persistence without the need for a prior break-in or physical presence.
These reported flaws, while quite severe from what I've been able to make out of the less-than-good paper, do not grant a primary mode of attack and do not provide a way of getting privileges. Just a way of keeping them forever and ever and ever and ever ...
Should be fixed and done properly, just get the fucking CVEs already and publish it ... It is highly likely that it's in some way applicable to other secure elements on other CPUs as well, so a proper response is needed.
CVEs are something that people who don't do vulnerability research seem to care about a lot more than people who do. What difference do you suggest it makes whether CTS-Labs gets them allocated or AMD does?
The basis of this is mostly you just saying it emphatically, as far as I can tell. Most real-world exploits rely on a combination of vulnerabilities. What's a sensible ranking of 'remote' over 'persistent'?
just get the fucking CVEs already
What does this really have to do with anything? It's hard to imagine anyone dealing with a real deployed system saying 'Well, since there is no CVE, this does not affect us at all".
I think these "amdflaws" vulnerabilities very much deserve to be discussed, and as somebody who doesn't like these blackbox enclaves in modern architectures I'm pretty happy to be proved right once again.
That being said with their shady behaviour CTS-labs have managed the tour de force of overshadowing these vulnerabilities with their botched hit piece "reveal". Paradoxically AMD might end up receiving less backlash than they deserve for their shoddy work because the researchers tried to pull a quick scam out of it. Great job CTS-labs.
> "All exploits require the ability to run an executable as admin"
> "There is no immediate risk of exploitation of these vulnerabilities for most users."
> "These types of vulnerabilities should not surprise any security researchers; similar flaws have been found in other embedded systems that have attempted to implement security features."
To be clear, ToB is not trivializing the vulnerabilities. In particular, your second and third quote here are equally applicable to most serious vulnerabilities found in e.g. iOS.
It's also what makes the vulnerabilities not-that-bad. It's just another "peripheral not properly handling firmware updates" thing, which are all over the place.
I mean, it's cool from a security research perspective, the PSP isn't just some random keyboard/HDD, but the severity seems overstated.
I'm not here to argue that CTS-Labs or Viceroy didn't overstate the vulnerabilities. Surely, Viceroy did! They said the stock was heading for zero!
I'm saying that the inverse argument, that the vulnerabilities are "pedestrian" and not worth making noise about, is at least equally false.
Unfortunately, the day of the announcement, several people trafficked in opinions based on CTS-Labs white paper --- which any skilled reader knew immediately, just from the format, wasn't a technical explanation --- that these vulnerabilities were non-issues. If I see people repeating that notion, I'm going to point out: the equivalent vulnerability in the iPhone platform would be front-page news.
Did AMD pull off a next level PR move here? I run a Threadripper on my home box that holds keys that need to stay private. Is there anything I can do to mitigate the vulnerability, or should I start a painful key rotation process and keep them only on my MBP until a patch is out?
> I run a Threadripper on my home box that holds keys that need to stay private. Is there anything I can do to mitigate the vulnerability, or should I start a painful key rotation process and keep them only on my MBP until a patch is out?
Yep, you should consider them burned. Also make sure you stock up on tinfoil too as you’ll need to make sure the fresh replacement keys don’t get mind read from a distance.
</sarcasm>
In all seriousness, what part of this situation would lead you to think what you’re saying? There’s been good coverage of what it’d take to exploit these vulnerabilities, so I’m not sure what led you to this line of reasoning.
Are you storing those keys in the TPM or a smartcard? Is the machine connected to the Internet, or even to the local network? In other words ask yourself if the keys would be compromised by anyone getting root on the machine, and then:
* if yes, what are the ways that one could get root on that machine? Is it better or worse on the MBP?
* if not, are the keys stored in a place which is related to the secure processor?
And also: is there a reason why somebody could 1) know that you have keys on that machine 2) be interested in those keys 3) have the resources to conduct the attack?
In a nutshell, analyze your attack surface and model your threats.
Get a proper smartcard, don't let it be connected all the time (remove when not needed) and if they need to be kept private really badly, for the love of god don't connect it to the internet in any way.
I wish they'd gotten some answers for the Viceroy situation, which is the main sketchy part of this endeavor. Beyond that it seems like a set of real vulns with pretty real impact hindered for the HN crowd by an overly glitzy PR push. The ToB blog post is what should have come out concurrently to satisfy the overly cynical HN crowd - maybe good notes for CTS for next time. It's ironic that the overall reaction here was so jaded when people are up in arms about Intel ME every time it's so much as mentioned.
Intel has been cleaning up as of late, going all the way back to the earliest Core processors with patches. That's some Microsoft level of support, probably because there are old enterprise deployments, and it speaks to how Intel is different from AMD in tenor.
My problem is not the security aspect of it, it’s the economic warfare part of it. CTS was shorting the stock. They were security trolling. Everything has a bug right? Find a bug, short a stock. Profit.
Well, yes, until everything is a bit more secure because companies take it more seriously or the computer industry has learned a valuable lesson, and then everyone has benefited because of that.
Either the flaws will get priced into the stock forecasting and cost of doing business in the industry or people won't care (there are strong indications it's the latter for these flaws).
It's the exact same for people that do this to alert on regular companies. Markets function efficiently when there is lots of accurate information. These people are exposing new information (either previously unknown or hidden by people with a vested interest), which allows the markets to function more efficiently afterwards (which might mean using better info to choose who not to trust with your data, etc). What they get out of it is the ability to execute on the new information slightly before everyone else.
This of course assumes the data is real. If it's not, there is no public benefit, so it's just fraud. I was under the impression that at least some of the flaws were very trumped up, so it was a little complicated whether they did something wrong, but if most/all are real and cause real problems, then I guess I don't have much of a problem with what they've done, even if I do think they've taken a slightly less safe route for the public in order to maximize money. Freedom of speech is one of those things where I don't always like how it's used, but I recognize the good far outweighs the bad.
> until everything is a bit more secure because companies take it more seriously
That logic works for full vs. limited (or no) disclosure, but not for immediate vs. 90-day disclosure. In general, the incentive for a vendor with a reasonable embargo period is the same as without, because the effect on their reputation is the same. The only people differently affected are users, who are left more vulnerable for longer than they would have been. Even if one could argue that vendors deserve shaming for what might have been a simple mistake, users surely deserve better.
It's very easy for those who do not suffer the damage from their actions to rationalize their impulsive (or in this case venal) behavior, but that's just another kind of agency problem that people who fling economic jargon around should consider.
> The only people differently affected are users, who are left more vulnerable for longer than they would have been.
The assumption being that nobody else knows about the flaws? We can argue about how likely it is some other people knew about the flaw on a case by case basis, but a blanket statement that assumes that there was no risk while it was unreported is not accurately portraying the situation.
> Even if one could argue that vendors deserve shaming for what might have been a simple mistake, users surely deserve better.
I'm arguing that the risk that the companies exposed people to prior to the announcement (to a small degree), and the risk people may be exposed to after the announcement (to a large degree) are the incentive for someone doing this to take advantage of a short. Would I personally prefer less public risk through a coordinated exposure? Sure. Am I willing to state that it should be required? I don't think so, since that may greatly reduce the incentive of someone looking to do the investigative legwork. I think that's a net loss for the public, since the risk is still there, it's just reduced (since it's not widely known), but it will exist in that reduced state for a long period (possibly indefinitely).
When the choice is between an unknown risk (which can by high) for an indeterminate period or a semi-high risk for a short period (with the ability to mitigate risk as needed, since you know about it), I'll take the latter over the former.
> The assumption being that nobody else knows about the flaws?
Not nobody, but fewer. The information in a disclosure like this will focus a whole lot of miscreants' attention on something they might not have thought of before.
> that may greatly reduce the incentive of someone looking to do the investigative legwork
Now you're the one making assumptions. Most people don't need the incentive of an immediate crisis to do that legwork. Often, the mere fact that the bugs are real is enough, regardless of timelines. In most other cases, the fact that the clock is ticking is sufficient. If somebody has a proven track record of taking these things lightly then forcing their hand might be justified. Otherwise, you're essentially telling people you think they're lazy or negligent when you have no evidence of such. That's not a good way to start a collaborative process, in a situation where collaboration might be key to a timely fix.
Actually, that would be great; I would enjoy a meltdown of the industry thanks to those kinds of actions.
Maybe then the industry could start taking QA, testing, static analysis, and formal proofs seriously, instead of doing them only if there is any budget left, if at all.
> Everything has a bug right? Find a bug, short a stock. Profit.
There are reasonable arguments that encouraging this would reduce vulnerabilities, or at least make them more costly. I’m not sure I would bother raising those arguments as it’s likely to draw in anti-finance ideological ire, but they do exist. Activist short selling is not novel, nor is it even novel for activists to short sell on the weight of security vulnerabilities.
One problem is that the security firm is incentivized to hinder any mitigation efforts. Hypothetically, a delayed fix or even a successful attack would help their short position.
No, I do not. I assume they are NDA'd, like virtually every firm is when doing a project like this, and got permission only to reveal what they revealed in the post.
Strange thing about article, is that CTS-Labs founders admit that they used to work for intelligence.
I know several people, who worked for intelligence. They never say "I worked for intelligence" and instead, tell you what they currently do and completely bypass "intelligence".
Some things don't change, even when old mottos are updated. (Mossad's previous motto: "For by stratagems you wage war" (Proverbs 24:6), or, as translated into a book partly titled on the subject, "by way of deception, you wage war.")
Possibly also relevant, previous HN discussion on another vulnerability on AMD's PSP, reported and fixed in December 2017 (but only disclosed in January 2018):
One note here: David Kanter from RealWorldTech is the, well, real deal. Few, if any, have a better understanding of the chipmaking industry. "Semiconductor industry consultant" is a very modest description of someone who has been leading RWT for 16 years now.
Before anyone claims they know wtf they are talking about, answer the following real technical questions -- I haven't seen a single one who has (including the so-called "Trail of Bits"):
1. Can Ryzenfall and Fallout be exploited without code signed by AMD?
2. Can Masterkey be exploited even after disabling the PSP in the BIOS which AMD has allowed since January?
Regarding your first question, why would that matter if the code that is signed by AMD is out in the wild already and can be executed in a manner that makes the exploit work?
That is how I read the situation, anyway; I don't think these people who didn't contact AMD in advance are getting new code signed by AMD to run their exploit. If they have some means of getting AMD to sign anything they want, that's bigger news than any of the vulnerabilities they are currently talking about.
My read was that they're exploiting a flaw in some readily available, already signed-by-AMD code to load something new and behave badly. If that's the case, I don't see why the answer to (question 1) would be meaningful. If that's not the case, do you have a link to the explanation that claims getting new code signed by AMD is necessary?
"C: Say, for example, CTS-Labs were in charge of finding Meltdown and Spectre, you would have also followed the same path of logic?
YLZ: I think that it would have depended on the circumstances of how we found it, how exploitable it was, how reproducible it was. I am not sure it would be the case. Every situation I think is specific."
Am curious if it is legal to fund security research to find flaws in a competitors product. If so then going forward this might become a tool used by hardware / software companies to go after their competition.
I’m not sure how you mean; in general, I was tasked with finding vulnerabilities that could be used for reverse engineering purposes and which wouldn’t be disclosed.
I can't think of a reason why it wouldn't be completely legal? Unless there's some sort of ToS violation, however it'd be interesting to see how that would stand up to legal scrutiny. There are however clear ethical considerations surrounding disclosure which is really interesting in this case.
(Part of) HN got real salty after Google auto-published after the 90-day disclosure deadline expired, but before MS had a patch out. I wonder what the reaction would be if Project Zero adopted a 24-hour disclosure...
I wouldn't be surprised if it's illegal somewhere in the world, but in the US, publishing factual information you have not signed away the right to publish via an NDA or similar contract is usually legal. I don't know that motive is ever involved as part of the test of determining whether or not you are allowed to write something.
Gamers Nexus did a good video. Long story short, looks like an investment propaganda attempt to try and get a profit out of shorting AMD stock. Whether this is a cover for anything else is getting deep into conspiracy.
I have no doubt that the findings are real, but they are more along the lines of bugs rather than “halt the press” gaping security holes.
We’ve all publicly disclosed and written patches for countless security vulnerabilities in open source code that’s widely distributed with zero fanfare - not even a cve - because we realize the difference between a security bug and a world-stopping, corporation-killing security bug. And the arrogance of saying something along the lines of “if I gave them a day or a million years it wouldn’t matter because this bug is too big to fix,” is beyond comment.
At no point did I give RedHat 24 hours to patch something before coming out with a well-orchestrated PR campaign with the hype engine on overdrive. Just submit a patch, make sure they admit they’re wrong and will take the appropriate measures, and move on.
Now if they found a remote exploit that could let me run arbitrary code on any AMD processor, even in a sandboxed environment... we’d be having a different discussion altogether.
If you have admin access to these machines, you can persist into the secure coprocessor. In the presence of these vulnerabilities, it is actually worse to have a secure coprocessor than not to have one at all. They are serious vulnerabilities; about the worst you can have in an enclave scheme. If SGX had a comparable problem, it would be headline news.
Does that mean the stock should head to zero, like some crazy prop trading firm claimed they should? The fuck should I know? I am cynical about the impact vulnerabilities have on stock prices and don't think consumers generally care.
I’m with you, but again, context. A compromised machine used to be a hosed machine until you reinitialized the drive and started clean. People are just starting to wake up to the fact that there’s another computer in your computer to which the same maxims apply.
Anyone that is serious about security hopefully knows this.
(I have no skin in the AMD/Intel CPU game. Too many machines running both to bother. I can’t believe I have to say this on HN, it’s what I’d expect of [H]ard in 2004.)
The thing that gets me is that they are simultaneously making claims that some of those vulnerabilities can't just be fixed with a quick firmware update and that they would allow for a backdoor that could never be removed. If the flaw that allows the attacker to overwrite the firmware with their own version can't be fixed in software then how exactly is the malicious firmware supposed to prevent its own removal?
One of those claims has to be false. If an attacker can shut the door behind them so to speak then it has to be possible for AMD to do the same.
Persistence is a big deal, but I can't help but feel a certain level of "meh" because my gut feeling is that persistence is a lot more useful in espionage and state actors than in the kind of business environments I deal with.
Maybe this is going to be the next big thing thrown into Metasploit so it can be easily used to make persistent something that's effective on a variety of systems with a variety of environments and data while still being subtle enough to avoid having the infection vector code make it into every AV package out there, but I doubt it.
To me the use of this kind of exploit feels like a very fine needle. Very fine needles are very very useful in a lot of specific situations, but very fine needles scattered haphazardly around the environment are a very different thing.
No, they didn't confirm serious vulnerabilities. Excerpt from their blog:
>There is no immediate risk of exploitation of these vulnerabilities for most users. Even if the full details were published today, attackers would need to invest significant development efforts to build attack tools that utilize these vulnerabilities. This level of effort is beyond the reach of most attackers
None of that discounts these from being serious vulnerabilities. If you are utilizing the secure coprocessor on a machine that is tasked with keeping secrets secure, and you are worried about sophisticated attackers, then this is incredibly serious.
Just because it doesn't impact you, or the majority of users, does not mean it is not a severe issue. The majority of users are quite likely to be of low interest to attackers wanting to exploit the secure coprocessor in the first place.
> If you are utilizing the secure coprocessor on a machine that is tasked with keeping secrets secure, and you are worried about sophisticated attackers, then this is incredibly serious.
If you are worried about sophisticated attackers, and you are trusting a "secure coprocessor" with such an unspecified and undocumented interface as this, I'd argue that you have no reason to think that the tool you have chosen to defend against the threat you're worried about is adequate. Contrast a coprocessor with no documented security boundary (if I'm missing something on that front, please post a link!) against a traditional HSM. Sure, the latter have had some flaws too, historically, but the well-defined boundary is key to even being able to assess suitability. I've not seen any documentation of this coprocessor's interface/boundaries at all. Let alone any that approaches the level of detail you'd get with an HSM.
I'd agree there - but that doesn't stop AMD from advertising this as a function these CPUs provide, so if you've taken them at their word, this is a big deal.
Exactly. CTS may be sloppy or naive, but if the people they claim to have shared the vulnerabilities with have been verified, then there's no reason to think there's a Big Conspiracy of Stock Shorters here.
Not truly fake, then again, a lot of propaganda is also based on an element of truth. Is there any meaningful distinction between propaganda and fake news?
I accept with basically no questions that the announcement was coordinated to enable a stock-shorting scheme. You might be interested to know: this isn't the first time that's happened. I don't care. I don't think these stock-shorting schemes work. I agree with Matt Levine: attempts to profit from stock declines are far from the worst things people can do with vulnerability research --- for instance, they could collude with vendors to withhold disclosure for months or years, which is something that happens.
Where I have problems:
* When AMD fanatics try to spin the confusion about the story into a claim that CTS-Labs didn't find anything, which we know now to be false.
* When anybody reacts to the confusion by announcing to Hacker News that there are immutable norms of vulnerability disclosure that were broken in this case, especially when those supposed norms are false and most especially when they assert obligations researchers have to vendors.
I don’t even have a problem with shorting. It’s fair game, and a company’s bug bounty should reflect what they stand to lose by not paying top dollar to preempt exactly that. Any talk about the company being “entitled” to anything is BS. If someone had a vulnerability that would get them access to anyone’s Dropbox as their IPO rolls out... they’ve earned it.
I just think if that’s your plan, you’d best not be bluffing and had better have something really good up your sleeve. I do have a problem with overstating your case and spreading FUD to try to make a quick buck off of an at-best B+ attempt.
https://news.ycombinator.com/item?id=16595184