Hi! I'm the person in charge of managing the bug bounty program, and I'd like to shed light on what happened from our end. I already apologized and explained this to @0xcrypto internally, but I believe that I should say something here to clarify what happened.
The truth here is that we were never able to fully reproduce the issue, but we struggled to close it for fear of missing something. Shortly after we got back to the reporter for the last time, saying we'd find a resolution, we became convinced that we could not reproduce it. Around that time we received a similar OAuth-related report. Unfortunately, this led to an internal mix-up, making us believe that we had addressed the issue and communicated the resolution.
Because of the way I have notifications set up, I missed the follow-ups, and the issue stayed in Triage state indefinitely without receiving updates. This is by no means an excuse about the lack of updates, about which I'm deeply sorry. I've been a bug bounty hunter for many years and understand how frustrating it is to wait for updates from companies.
Finally, I'd like to reassure y'all that the security of our customers is of the utmost importance to us, and everything we say in our security page is true.
Well, mistakes happen. One thing that is still unexplained: I contacted HackerOne many times over the span of 3 years, but they couldn't get in contact with you either.
Also, it is still unclear how you want to proceed with the report, since it is no longer reproducible. I would have discussed it further on HackerOne, but apparently I have been ghosted again after the apology message.
Hey 0xcrypto, I'm very sorry if I gave the impression that we weren't open to discussing anything further on the original issue. After my message, we only received a short comment from you. The issue will actually stay open for a short while in case you want to discuss further details. Let's continue the conversation there.
It's unclear to me (not that I necessarily need to know), but do you believe in the end that the vulnerability as described there worked, and if so, do you know why you failed to reproduce it?
The fact that we kept it in triage means that we believed there was something. Also the reporter gave a really good explanation.
By the time the report was originally sent, the feature had just been released. While we never deployed a code change to directly address it, it wouldn't be the first time we received something that I believe was genuinely a security issue but stopped being reproducible due to a seemingly unrelated change around the same time.
I'm wondering how your two quotes "security of our customers is of the utmost importance to us" and "we believed there was something" fit together given that the issue stayed open for three years?
So for three years you believed there was something, yet you didn't invest sufficient resources to reproduce and/or understand the issue, while at the same time, all these three years security was of utmost importance?
Hey, I got into more details in my internal discussion with the researcher and previous post, but around the time we determined we couldn't replicate it, we got a similar report leading me to believe this was already closed. I didn't believe there was something the whole time. It was a mix-up on my side, and I'm sorry about it.
I think I understand, I've also fallen victim to losing track of things, so I understand. If you haven't, maybe having a policy of trying to have zero security issues in the backlog would help here? That way things can't get lost, and if they're closed then at least the other party can see their issue has been closed and act accordingly (maybe try and escalate or something if they still think it's a real issue).
Wouldn't the wrong party, after getting an erroneous closure email, have immediately followed up, multiple times probably if the first one was ignored?
It's still unclear what prevented the follow-up communications from making their way to you.
Great to hear from you firsthand!
While the issue was not reproducible for you, wouldn't it have been easy to look at the source code to check whether the open redirect was possible at all?
Honestly, it just makes me happy that you're posting and that you at least take it seriously. Nobody's perfect, but in my opinion things get out of hand when people don't take responsibility when mistakes happen. Good work; I know it's not your fault. We need more people like you out here fixing things.
Great to hear, I love using Harvest. But could you please finally fix the (not so) new mobile app (iOS)? There are so many tiny issues that I stopped reporting them to your support.
The app state constantly gets out of sync with server state (some changes on the server only show up after a force reload, some changes on the client just revert after pressing save)
And the time tracking UX is so annoying (buttons that are only visible if you scroll down, start/stop/restart/delete buttons are constantly at different locations, depending on the state of the item).
The old app was not pretty, but it worked without issues.
Thank you for your feedback. While my main focus is on Data and Security, I'll ensure that your issues are heard by the team responsible for our mobile app. I'm aware that we're constantly working on improving the iOS app experience.
Well, this is disconcerting. I've used Harvest and found the support to be absolutely stellar, with prompt responses that clearly and deeply understood the nuances of how customers are using the product and detailed steps on how to creatively use existing features. Anything unimplemented yielded, "we'll put that on the backlog but no promises." Given the 30 headcount cited in engineering [1], I don't know where it goes because I didn't see other features getting cranked out either.
I started getting spammed as a "user of Harvest" which prompted me to suspect that they were selling their customer lists. They took this claim extremely seriously, connecting me with company heads immediately to issue stern denials and execute a prompt investigation. That was great.
What I think it came down to, though, was also engineering. I figured out a rather easy way to reliably infer active customers, which also, "went on the backlog" and remains unfixed months later. And it's a fix that appears to be super trivial.
They also only offer MFA if you're signing in with Google [2]. But the app itself is DAMN good at what it does.
Thank you for your kind words. I can confirm that our support team is stellar. Despite being a small team, we approach every matter very seriously and I was personally involved in the investigation you referenced. The miscommunication with the reporter on this thread was entirely my oversight (I explained it on the top response) and I'll make sure it won't happen again.
I can confirm the quality of their support. My requests were always processed by very qualified people, often within minutes. Some of my feature requests even ended up in their product. Maybe it wasn't because of me, but then they share my ideas of a good time tracker.
The headline seems pretty unfair to Microsoft here, seemingly to capitalize on the press of their recent auth disaster. The first thought that came to my mind on reading the headline was "oh great, another MS breach".
These are in fact Harvest's tokens, which only erroneously exposed access to Harvest's own app because of an injection vuln in their code, and they would be exactly as compromised behind any other IdP.
Hi, author of the blog post here. I understand your concern, and I tried keeping Microsoft's name out of the title but couldn't think of anything else, since the vulnerability only affects the OAuth implementation for the connection with Microsoft accounts. Previously the title was "Microsoft OAuth token leak via open redirect in Harvest App", but I later changed it to "Microsoft Account's OAuth tokens leaking via open redirect in Harvest App". I am still considering changing it and am open to suggestions.
If only tokens minted by MS were in scope of the vulnerability because of Harvest's outlook integration, maybe something like "Harvest OAuth CSRF Leaks Tokens of Microsoft Outlook Users" or "CSRF in Harvest's Outlook Integration Leaks User Tokens".
If you want to add any editorializing around mitigation, linking to the OAuth RFC[0], which dictates a MUST for binding the user's auth state to the request to prevent such attacks, would be instructive to readers.
RFC 6749 goes into detail on how the authorization server should prevent this type of attack:
> The authorization server MUST require public clients and SHOULD require confidential clients to register their redirection URIs. If a redirection URI is provided in the request, the authorization server MUST validate it against the registered value.
So how is this possible, when presumably the Harvest app did not register the malicious redirect_uri?
Does the Microsoft OAuth server ignore URL parameters within a redirect_uri when comparing with registered redirect URIs for the OAuth client?
They add a second redirect on top and stick it into the state parameter, presumably so they can redirect anywhere.
So the intended flow was:
1. Go to some Harvest authorize URL.
2. That redirects to the Microsoft authorize URL with redirect_uri=registered_uri and state=some_encoded_final_uri.
3. The user enters credentials.
4. Microsoft redirects to a registered URI.
5. That endpoint reads the state parameter and redirects to the URI encoded in state.
This exploit still redirects to an authorized URI, but that endpoint then reads the state parameter and happily forwards the response/token.
Three mistakes here: abusing state; not encrypting and validating state if you are going to abuse it; and enabling implicit grant (even if they needed it, they should have made a second registration with limited uses).
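If state must carry data at all, the "validate it" part of the comment above boils down to authenticating the payload. A rough sketch of that idea (the key handling is simplified, and the function names are made up for illustration):

```python
import base64
import hashlib
import hmac
import json

# Hypothetical server-side secret; in practice, load a random key from secure config.
STATE_SECRET = b"change-me-to-a-random-32-byte-key"

def sign_state(payload: dict) -> str:
    """Encode a state payload with an HMAC tag so the callback can detect tampering."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    tag = hmac.new(STATE_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag

def verify_state(state: str) -> dict:
    """Reject any state whose HMAC tag does not match; raises ValueError on tampering."""
    body, _, tag = state.rpartition(".")
    expected = hmac.new(STATE_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("state parameter was tampered with")
    return json.loads(base64.urlsafe_b64decode(body))
```

With this in place, an attacker who rewrites the subdomain inside the state blob fails the HMAC check at the callback, so the forwarding redirect never happens.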
It's kinda normal that you'd want to let a user log in and return them to the page they were at.
For example, if you're making a shopping website and a user asks to put something in their basket and you send them to log in, you'd want to return them to the item they were about to buy, not dump them back at the homepage.
What's the proper way of doing this, without "abusing state" ?
There's no reason to have a URL (or any data) encoded in the state parameter. The purpose of the parameter is to provide an opaque lookup key which you can utilize to provide correct, validated responses. This is usually done in some sort of database or Redis-like cache. My workflows have always used a random UUID for the state key and I just encode the necessary (validated) data items needed for the next step as a JSON blob. It's essentially a very short-lived web session.
If for some reason you really do need to transmit this data in-band (ultra rare use case) you should at least be using something like HMAC to verify that all carriers have transported the data unmodified. It is your responsibility to ensure the integrity of the data end-to-end.
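A minimal sketch of the opaque-key pattern described above (an in-memory dict stands in for the Redis-like cache, and the function names are made up):

```python
import secrets

# Short-lived server-side store; in production this would be Redis or a
# database entry with a TTL.
_pending_auth = {}

def begin_login(return_to: str) -> str:
    """Start an OAuth flow: mint an opaque state key and stash the context server-side."""
    state = secrets.token_urlsafe(32)
    _pending_auth[state] = {"return_to": return_to}
    return state  # this value goes into the authorize URL's state parameter

def finish_login(state: str) -> dict:
    """Callback side: the state must match a pending entry, and it is single-use."""
    ctx = _pending_auth.pop(state, None)
    if ctx is None:
        raise ValueError("unknown or replayed state - possible CSRF")
    return ctx
```

Because the state is a random lookup key, nothing an attacker puts in the URL can influence where the user ends up; at worst they present an unknown key and the flow aborts.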
Don't attach the sensitive URL parameters to the second redirect. The first redirect logs you in via cookie, and then if the second redirect is on the right origin it will have access to your cart.
Also only allow redirects to your domain or website, not literally anywhere on the internet. And the token should stay in your website’s cookies - it’s unclear why the second redirect would ever need to pass a token if it can read it from site cookies in the first place.
But in the POC link, they have state=1 as a parameter for the authorization server, there is another state parameter encoded into the value for the redirect_uri, which makes me wonder why that even matches the registered redirect_uri.
You are right that redirect_uri must match the exact registered redirect_uri.
But some providers allow query parameters. For Microsoft, it was possible in 2020 when I reported the vulnerability. In 2022, they restricted query parameter support to only applications built for Work and School accounts, and in August 2022 they added a section about this to the documentation.
The Harvest redirect_uri is registered with Microsoft. Harvest implements its own redirect after the Microsoft OAuth server redirects to them, based on the data in the state.
I agree that the fact that Harvest blindly redirects helps enable the attack, but according to the OAuth standard, a redirect_uri which does not match a registered one should not be accepted before authorization takes place.
From the POC authorization URL, the redirect_uri parameter and value are:
Then why do any extra URL parameters added to that value get accepted by the Microsoft OAuth server before authorization, when they clearly do not match the registered one?
edit: I tried authorizing with another OAuth provider, using a redirect_uri changed by appending URL parameters to the encoded value, and the OAuth server (quite rightly, I believe) rejected the authorization request.
I'm guessing, as would be typical of many companies, it ended up on a backlog as low priority, survived a few Jira reorganisations and corporate restructuring, before eventually being noticed and fixed.
They're a small company with an even smaller engineering team, I think 13 devs or something like that. I would imagine either everyone knows about it immediately or they are too overloaded with work that it gets deprioritised into oblivion after a quick first look.
It's not an excuse, it's just poor engineering culture or lack of security awareness. I work with an engineering team of 5 - security issues still get prioritised and fixed. Feature work gets deprioritised, as it should, as soon as there's a credible security concern.
All they had to do was add and validate a nonce value in the state or, at the very least as a stopgap, sanitize the subdomain value. The latter would literally be a 10-minute fix.
Harvest Security Team here. I addressed this in another comment, but basically we were never able to reproduce it and there was no explicit fix; the report stayed in Triage state when it should've been Closed, due to a human error on my side.
Can someone with OAuth expertise explain this issue in a few more details, as I've read the blog a few times, but still don't understand the actual vulnerability.
From my very limited OAuth knowledge isn't this how it works:
1. The Harvest application asks Microsoft to verify a user.
2. The user is verified by Microsoft.
3. If the user verification is successful Microsoft redirects back to the callback URL, passing back the access token inside the body of the response message.
In this case, hasn't the writer of the blog just created a hand-crafted URL so that the return goes back to example.com rather than the actual return URL?
Yes, the blog author has created such a hand-crafted URL, but note that the callback URL in the attack's URL is on a harvestapp domain, and the attacker-controlled portion is in the state, which is pretty much opaque to the OAuth server.
That url allows you to link someone to a login.microsoftonline.com link, have a login prompt show up that says "login to harvestapp", and then have the attacker be able to gain permissions related to your real harvestapp account.
Normally, this would not be possible. The attacker with example.com could register a new app that does redirect to example.com, but that would not give them an access token with permissions related to harvestapp, so it would not be useful.
The oauth app, on microsoft's end, has a whitelist of valid redirects, so an attempt to do something like "login.microsoftonline.com/authorize?client_id=$harvestAppID&redirect_uri=attacker.com" will error out on microsoft's side, since that is not a valid redirect uri to receive an access token.
The attack is only possible because there's a valid "outlook-integration.harvestapp.com" URL, which receives the access token, but then also redirects to the attacker's site and gives them the access token too.
The vulnerability came from outlook-integration.harvestapp.com. It used a JSON object as `state` containing instructions for what to do once the OAuth2 callback succeeded.
The property `subdomain` was used to redirect the browser to a subdomain of harvestapp.com, passing along the `#id-token`. The problem came from the combination of:
* the additional redirect driven by the JSON object in state
* the `subdomain` value being injected into that redirect without proper verification
* the implicit grant being supported
Together these allowed an attacker to get an access token for a user's Microsoft account.
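The missing verification step is small. A sketch of what it could look like (the function name and callback path are made up; the idea is to accept only a bare DNS label, so the value can never smuggle in `/`, `.`, or `@` to escape the intended harvestapp.com host):

```python
import re

# A single DNS label: alphanumerics and inner hyphens, max 63 chars.
_LABEL = re.compile(r"[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?", re.IGNORECASE)

def redirect_target(subdomain: str) -> str:
    """Build the post-callback redirect, rejecting anything that is not a plain label."""
    if not _LABEL.fullmatch(subdomain):
        raise ValueError("invalid subdomain in state")
    return f"https://{subdomain}.harvestapp.com/oauth/callback"
```

Anything like `attacker.com/` or an encoded slash fails the label check, so the token can only ever be forwarded to a real harvestapp.com subdomain.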
From my reading, this seems to be entirely an issue due to an improper implementation on Harvest's side, nothing to do with Microsoft's implementation of OAuth. Am I correct?
It seems pretty clear to me from reading the blog post that the issue was what I outlined (sorry for the lack of list formatting, I always forget I need an extra line after each bullet point).
All they had to do was sanitize the subdomain var to only allow values valid in the host part of a URL. But also, one of the state parameter's primary uses is exactly to prevent XSRF attacks like this: by using a random nonce value, you can validate from the redirect that your system initiated the auth request. The data in this state was not sensitive, so encryption is not really necessary.
Microsoft checks the return URL to see if it is one of the whitelisted URLs specified by Harvest. Harvest added their own redirection mechanism on top of this, presumably to support multiple instances of their software, which did not do a good job of sanitizing input values for their redirect. So no, this is not an inherent issue with OAuth, just a shoddy implementation.
Ok, I think I understand but correct me if I'm wrong. Normally that return URL would be hidden from view, as it would live in configuration detail found inside of the Microsoft system, attached to the client_id.
However, Harvest weakened this security by adding in the additional (and unsafe) return_to parameter to manage their return URL.
> The authorization server SHOULD require the client to provide the complete redirection URI (the client MAY use the "state" request parameter to achieve per-request customization). If requiring the registration of the complete redirection URI is not possible, the authorization server SHOULD require the registration of the URI scheme, authority, and path (allowing the client to dynamically vary only the query component of the redirection URI when requesting authorization).
> The authorization server MAY allow the client to register multiple redirection endpoints.
Either the redirect URL is statically configured, or it's accepted as a query param to the auth request, and subject to a strict whitelist. It's not a secret from the user, but even for a SPA it is usually transient so you don't have the user sitting at some ugly URL with "?code=abc123...". Typically you would use the state query param to retain any context needed to redirect the user to their desired destination, but that would be after the redirect endpoint uses the passed code to fetch the token and store it somewhere locally. In this case apparently the redirect endpoint allowed redirecting to entirely different applications by simply forwarding on the sensitive query params, but did not validate that those destinations were on any whitelist.
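The final validation step described above can be sketched in a few lines (the host names are placeholders; the point is that the forwarding endpoint checks the destination against an allowlist before sending the user on):

```python
from urllib.parse import urlparse

# Assumed allowlist of hosts this app is permitted to redirect back to.
ALLOWED_REDIRECT_HOSTS = {"app.example.com", "billing.example.com"}

def safe_redirect(url: str) -> str:
    """Only forward the user to an absolute https URL on an allowlisted host."""
    parts = urlparse(url)
    if parts.scheme != "https" or parts.hostname not in ALLOWED_REDIRECT_HOSTS:
        raise ValueError(f"refusing redirect to {url!r}")
    return url
```

Note that scheme-relative tricks like `//attacker.com/app.example.com/` parse with an empty scheme and are rejected, which is exactly the class of bypass the earlier comment mentions.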
Not as far as I recall (haven’t done OAuth in a hot minute) but the redirect URL is typically in the GET parameters or in the body of the request, neither of which is hidden from view.
This issue seems to be that there was a secondary redirect in the body of one of the requests (I believe the token response), that could be forged to loosely match a trusted domain but with an attacker’s domain present, eg “//attacker.com/trusted.com/“.
hi ^^, limited knowledge as well, but I'm pretty sure the issue is that Harvest allows all URLs to be used as callback URLs. You should tell Microsoft to allow only certain URLs as callbacks. E.g., when setting up the workflow, they probably used a wildcard as an allowlist of callback URLs, instead of creating an actual list of trusted callback URLs. I think that's what's happening here, could be totally wrong tho :D
I can see longer than 90 days if there's some HUGE change required, and a decent sized team is allocated to the problem for most of their work time. OR if there's a solution but it needs to progress on a specific (and relatively short) timeline for customer notification.
However that additional leeway should be afforded by the researcher and/or their lawyers / representatives. It's something a company might ask for in good faith in response to a larger than usual issue.
HackerOne is beholden to the company running the bug bounty program. The extent to which they are involved depends heavily on what services they are providing (triage, etc.). At the most basic level, they're just providing a platform for disclosure of vulnerabilities and some boilerplate legalese to prevent legal departments from suing researchers.
In the vast majority of cases, companies deny requests for public disclosure. A researcher that discloses regardless of permission violates their agreement with hackerone and the company and exposes themselves to legal liability. In this case it seems the company agreed to public disclosure, which IMO should be applauded, even if their response was very slow.
I've personally had several four figure bugs unremediated for >1year, but I never thought it was hackerone's fault.
Author of the blog post here. Yes, I agree that it wasn't Hackerone's fault and they tried their best to help.
As for the violation of the agreement with HackerOne, I read the policy many times before publishing the article and even asked HackerOne about this. The vulnerability is already fixed, and I haven't heard from Harvest since April 2022, so there's no point asking them, as it would seem like a threat rather than an actual disclosure. An excerpt from the agreement:
> Last resort: If 180 days have elapsed with the Security Team being unable or unwilling to provide a vulnerability disclosure timeline, the contents of the Report may be publicly disclosed by the Finder. We believe transparency is in the public's best interest in these extreme cases.
So, bug bounty programmes sprang up as a way to help coordinate disclosure and help researchers engage in responsible disclosure.
A key part of responsible disclosure is the disclosure part.
Often researchers would disclose unpatched issues to put weight on companies, even large companies, to actually patch issues.
One of the side-effects of programs like Hackerone is that actually doing your own responsible disclosure is now frowned upon (often to the point of legal problems).
But part of the social contract of absorbing coordinated disclosure should be an expectation that hackerone allows disclosing even unfixed issues.
Hackerone should not be "beholden" to companies. They make the rules. They could allow disclosure of issues if they wanted to make that a condition of the platform.
It's companies sitting on vulnerabilities that birthed the concept of "responsible disclosure" in the first place. If H1 etc. are allowing it, then there needs to be a renaissance of the practice outside the platforms.
So, it basically sounds like we are missing a governing body consisting of researchers, with possibly a tiered disclosure process (by severity) and the option to _maybe_ apply for an extension of the disclosure deadline. Would this ever happen?
I dealt with a HackerOne issue from the company side where the HackerOne participant was constantly violating HackerOne’s own rules: Breaking disclosure timelines, posting false social media statements about the bug, and even threatening our employees.
HackerOne didn’t care. No matter how many times we pointed out the person was violating their own rules, they claimed they couldn’t do anything.
It felt like a company that had been built up to steady state operations, then stripped down to a bare minimum operating crew where questions were answered by powerless support people.
This was a while ago. Maybe things have changed, but that was my impression at the time.
Of the three parties involved (HackerOne, the company, and the researcher finding the bug), the company has all of the leverage. If they feel like HackerOne is stepping on their toes and making decisions as to whether to "let" companies do things, those companies will just leave HackerOne and create an in-house solution.
HackerOne should require companies to put down 10-100k in an escrow account that can be used to pay out security researchers at the discretion of HackerOne. Allowing companies to decide when and if a bounty is paid out doesn't make any sense in this case.
You assume that the reputation loss of leaving HackerOne is not an issue for the company.
It seems very reasonable to me that if the decision to leave HackerOne is prompted by conflict over responsible disclosure, then it is appropriate for HackerOne to disclose that fact. Including disclosing the bugs that the company was unwilling to responsibly disclose.
This puts HackerOne in the position of actually representing the interests of the hackers. And makes participating in HackerOne to be more than a meaningless publicity gesture for the companies.
I don't understand why this issue was not communicated to Microsoft. They could've just revoked access for this oauth application until the issue was fixed.
Although there are probably thousands of similar bad implementations out there that are connected to Microsoft via oauth.
Every OAuth application needs to be registered individually, together with a client secret or certificate; in the case of Microsoft, via the Azure portal. That registration can (technically) be revoked by the OAuth provider.
I have no idea whether Microsoft would react to such a report, or what the correct channel to submit it would be. But they usually take bug reports and abuse reports seriously.
Is Harvest an important app? Implicit-flow open redirect bugs are pretty bread-and-butter, so it's odd to see one at the top of HN. Are people reading this title as if it was a major Microsoft vulnerability?
How can I solve this problem cheaply? Here are a few thoughts:
- Generate an encrypted token based on the redirect state value.
- Store the mapping of tenant_id and a unique state.
- Wait for Microsoft to support wildcard redirects.
First, just don't enable implicit grant. That makes it a lot harder to screw up.
State is for preventing CSRF, not transferring data. Don't abuse state, it's wrong.
Use your own authorize URL, add an encrypted cookie, and redirect to the real one. Even if the cookie is encrypted, only put some kind of session/cache key in it; don't actually send "info".
Read the cookie in the callback, then delete it.
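That flow might look roughly like this. This is a framework-neutral sketch: `set_cookie`/`delete_cookie` and the request/response objects stand in for whatever your web framework provides, and the IdP URL is a placeholder.

```python
import secrets

# Short-lived server-side store keyed by the cookie value; Redis in production.
_cache = {}

def authorize_redirect(response, return_to: str) -> str:
    """Our own /authorize endpoint: stash context under a random key,
    put only that key in a cookie, then send the user to the real IdP."""
    key = secrets.token_urlsafe(32)
    _cache[key] = {"return_to": return_to}
    response.set_cookie("oauth_key", key, httponly=True, secure=True, max_age=300)
    return "https://login.example-idp.com/authorize?client_id=..."  # placeholder IdP URL

def handle_callback(request, response) -> dict:
    """Callback endpoint: read the cookie, look up the context, then delete both."""
    key = request.cookies.get("oauth_key")
    ctx = _cache.pop(key, None) if key else None
    if ctx is None:
        raise ValueError("missing or expired OAuth session")
    response.delete_cookie("oauth_key")
    return ctx
```

Since the cookie holds only an opaque cache key, nothing user-controllable ever reaches the redirect logic, and the single-use `pop` doubles as replay protection.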
"...In the process of disclosing and patching this vulnerability, the Harvest team was barely responsive. The company acknowledged the vulnerability by triaging but took a very long time to fix the vulnerability. After 3 years of reporting, the company finally fixed the vulnerability silently and didn't bother to inform...no bounty or even HackerOne points were rewarded by the company..."
"...Harvest cares deeply about protecting the privacy of the data entrusted to us by our customers. This is one of the core values at the heart of our business... "
"Harvest cares deeply about profiting from the illusion of the privacy around the data we harvest from customers. Profit is one of the core values at the heart of our business"
Hey! I'm part of the Harvest Security Team. We'll be changing the way we do this, but at the time this happened I triaged the report after reading it because it really looked legit. The reality is that we were never able to reproduce it and there was no explicit fix.
The issue stayed in Triage state and I missed the reporter's updates. I talked to the author of the post and I believe we are on good terms now.
The security and privacy of our customers is extremely important to us, everything we say in our security page is true and I've been working on this for years.
Question to the mods: was there any particular reason why this post was moved to the bottom of the thread? I can see the reasoning for moving a reply from a person mentioned in the post to the top of the thread. But are HN mods now also moving posts independently of the upvotes? Are the criteria mentioned anywhere?
Good to know that an accident caused education or possible intent/ my life in a nutshell is wrapped around curly braces or 0eth limits of imagination. -on the other hand base2 mathematics is about as intriguing for the average person human experience or augmented reality. We had a great time deciding intent or purpose for research that didn’t start case files with day1 or private practice or contract but it wasn’t just this case of you look at the transfer of data from a computer view it’s taking the path of least interest for intent or excusable bipolar circuitry’s since ohms law was a Trending topic for engineering students.. if you see a pattern of the norm through greed or monetizing a new way to fool the next the button space bar was not just space exploration downpress.. we are greedy at nature but still to find hope of future generations box is one less hacker 1 case we see . The protocol drivers were never expected to array or say imprint to the spool of knowledge. Everybody sees the problem in our brains we are programmed to develop work arounds for habituation intention. Who wants to sue BG not me but for the fact that and understand how or why binary systems can evolve into tricking people and a entire generation regeneration cycle between our days and days leftist will lobby null hypothesis and preach a beta tester from Islamic nations is only capable to learn within there own framework and environment.. now evolution will change into something we call Ai when technology has solid proof of learning a new hobby without internet or WiFi so their brains will become more advanced by repetition of Or religion based keywords.. dao coining mining technology isn’t just a profit it’s a process and not our choice but we should be able to make it less likely to be confused or so easy to learn a “me” signal on our own time invest in American diesel repairs llc in physics of diesel technology.. 
this was a very important thing to live through by investing in ourselves and our communities by choosing “Integrity over compromise “ I’ve been offered a dollar worth of coin that allows the other to breach me.. and possibly affect our own people customers and there environments.. Oeth choice of truth is 110% always been teaching itself how to handle adaptation for healing a broken heart or broken soul.. in the end we will never know how I did recreate it but it wasn’t my intention.. it’s only plausible that we will now know how to handle this gift from my own experiences and loss.. talent is not all it takes it’s a way of life. Ty for examples