Note those dates are DAY-MONTH. At least they patched it within a single day.
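For anyone who trips on that: "10/01" is the classic DD/MM vs MM/DD ambiguity. A minimal Python sketch of the two readings (the format strings are my own illustration, nothing from the post):

    from datetime import datetime

    stamp = "10/01"  # the patch date from the timeline
    # Day-first reading, as the author apparently intended:
    print(datetime.strptime(stamp, "%d/%m").strftime("%d %B"))  # 10 January
    # Month-first reading, as a US reader might assume:
    print(datetime.strptime(stamp, "%m/%d").strftime("%d %B"))  # 01 October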
I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.
Reminds me of my experience with HackerOne: We had some participants who would find a small vulnerability, but then sit on it for months while they tried to find a way to turn it into a larger vulnerability to claim a higher prize.
Then when they finally gave up on further escalation and submitted it, they'd get angry when we informed them that we had already patched it (and therefore would not pay them). The incentives in infosec are weird.
Yes, I understand, but that’s my point: In my experience, the detailed write-ups that external pentesters sent us could have been replaced by a 1-2 paragraph email for our engineers to read and fix ASAP.
> When you turn actual, creative and exhausting work (vulnerability research) into some kind of high stakes gig job you deserve this problem.
You don’t make HackerOne your primary source of security testing. It’s a fun thing you do in addition to your formal security work internally.
The reason people do it is because so many people expect or even demand payment and public recognition for submitting security issues they found. Just look at how many comments in this thread are insisting that they pay the author various amounts of money. The blog post even has a line about how they have not provided recognition (despite being posted exactly on the day it was fixed, giving the company almost no time to actually do so).
HackerOne-style programs provide a way to formalize this, publicize the rules (e.g. "we pay $25K for privilege escalation"), and give recognition to the people finding the bugs.
Pentesters like it not only because they get paid, but now they can point to their record on a public website.
Furthermore, companies that don't already have very mature security programs will not benefit from bug bounties. I've run a bug bounty program before on H1, and it was a nightmare. No one reads the scope and you're inundated with reports, 99 out of 100 of which are really trashy. Managing such a program is a full-time job for one or more people, especially at a big company.
Measuring productivity in a useful way is pretty close to impossible in a vast swath of jobs, though people make a killing (and make everyone involved considerably more miserable) pretending otherwise.
The reason most people have converged on a preference for salaried work is that most jobs don't actually need consistency to be useful, but most people do need consistent pay to focus on a job.
Very much agree the incentives aren't fully aligned.
From a bug hunter's perspective, certain issues are often underpaid or marked as non-issues (and then subsequently fixed without paying out), so it's in their interest to find a chain of issues or explore further to show real impact.
Then from the programme's perspective, you have to contend with GPT-generated reports for complete non-issues, so I can also understand why they might be quick to dismiss anything without hard evidence of impact rather than a "potentially could be used to".
In cases where a small vulnerability is successfully turned into a larger vulnerability, everyone wins, right?
Considering that there is “more than one way to skin a cat”, it is not a given that vulnerabilities further along the chain will be resolved by closing the initial vector.
When a chain of vulnerabilities is reported, it might become clear that not only does the initial attack vector need to be closed, but additional work is needed in other areas, because there are other ways to reach the code further along the attack chain.
> In cases where a small vulnerability is successfully turned into a larger vulnerability, everyone wins, right?
Nope! The two vulnerabilities are usually one and the same. The person is just trying to find a clever way to access additional data to make their payout larger.
From the customer perspective, getting the initial vulnerability fixed ASAP is the best outcome.
When they start delaying things to explore creative ways to make their payout larger, everything goes unfixed longer.
> I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.
That's what you'd expect: finding != understanding, and you need some understanding before you can submit a sensible, actionable report to the vulnerable party. And then you need to write it up in a way that will be understood by the recipient. Going from initial finding to submitting a detailed report in a few days is excellent turn-around time.
Yeah... Is it ok to do a public writeup on the same date the vuln was patched without an acknowledgement from the client? I would have scheduled this blog post at least a week later.
Maybe... But bashing the client on the day they patched because they haven't communicated is somewhat shaky. Bashing them a week later is totally cool in my books.
What "client"? This looks like a researcher reporting a bug for free (or maybe through a bug bounty program). They have zero obligation and the vendor is not a "client".
Why should researchers or other vulnerability spotters care about a company's customers? The companies themselves don't care beyond what they can profit from those customers.
Yes, I know what full disclosure is. Companies don't do full disclosure about anything. Full disclosure is better than not disclosing publicly. But monetizing the vulnerability is akin to what companies do.
I find it utterly bizarre that it's totally OK and even lauded that companies are selfish profit maximizing machines that DGAF, but individuals should pamper them like babies.
Full disclosure isn't something for _companies_ to do. It's what _researchers_ do.
Full disclosure isn't compatible with the monetization incentives offered by companies. You're publishing in public and immediately.
I think you clearly do not understand what full disclosure is.
My understanding of Full Disclosure is that researchers publish the vulnerability (and potentially exploit) publicly without coordinating with the software vendor. This contrasts with Coordinated Disclosure (sometimes "Responsible disclosure" in corporate propaganda) or No Disclosure (and potentially e.g. selling the exploit).
I admittedly used "disclosure" in a somewhat different sense for companies, in that companies typically don't give out any (truthful) information they have unless required by law. And they lie when it's profitable.
The symmetric action from a researcher is to sell the exploit to the highest bidder. Of course if the researcher wants to do other disclosures, that's fine too. But what I don't like is the double standard that researchers are scolded for being "unethical" but companies, by design, not caring about ethics at all is just fine and the way it should be.
But that's exactly why, as a researcher, you should operate under Full Disclosure. It properly motivates companies to do what is right, and you avoid any questions about financial motivations, etc.
Well - only in the amateur infosec world, where you try to force someone to be your client after you've done the work, and then get butthurt when they don't become your client.
In the professional infosec world the clients choose to hire you first.
* They were just trolling Firebase accounts for anything left open (a rough sketch of that kind of probing follows this list), and the first hit was a company that works with a bunch of American fast food chains. That doesn't require OP to live in the US.
* They specified "America's fast food chains"; someone living in the US probably wouldn't qualify it with "America's".
* They used a $DAY/$MONTH date format, which is uncommon in the US.
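On the first point, "left open" usually means a Realtime Database whose rules allow public reads, which can be checked with a single unauthenticated request. A minimal sketch, assuming the standard Firebase RTDB REST endpoint; the project ID is a placeholder, not anything from the post:

    import json
    import urllib.error
    import urllib.request

    project_id = "example-project"  # hypothetical placeholder
    # The RTDB REST API serves the tree at /.json; shallow=true limits output.
    url = f"https://{project_id}.firebaseio.com/.json?shallow=true"

    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            # Any JSON payload here means the rules permit public reads.
            print("World-readable:", json.load(resp))
    except urllib.error.HTTPError as e:
        # Locked-down projects respond 401/403 ("Permission denied").
        print("Access denied:", e.code)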
That's what I was thinking too, not because it isn't already 10th January in Europe, but because I doubt you can expect a 'thank you' in <8 hours. So I assume this might have been 2023?
> 06/01 - Vulnerability Discovered
> 09/01 - Write-up completed & Emailed to them
> 10/01 - Vulnerability patched