It's worth mentioning that many smartphone users upload content to YouTube seemingly by mistake, unaware that they are doing so (also known as fat-fingering[0]). A quick search for:
DSC001.mp4
And then filtering that by recently uploaded always yields interesting results. For those who don't know, `DSC-XXX` is a standard naming scheme for digital cameras. More on the default naming scheme in the following link[1].
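If you'd rather script that search than poke at the search box, here's a minimal sketch against the YouTube Data API v3 (assumes google-api-python-client and an API key of your own; normal quota limits apply):

```python
# Minimal sketch: search for default camera file names, newest uploads first.
# Assumes the YouTube Data API v3 via google-api-python-client and your own key.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # placeholder
youtube = build("youtube", "v3", developerKey=API_KEY)

response = youtube.search().list(
    part="snippet",
    q="DSC001.mp4",
    type="video",
    order="date",      # most recently uploaded first
    maxResults=25,
).execute()

for item in response["items"]:
    print(item["id"]["videoId"], item["snippet"]["title"])
```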
The DataGenetics article was interesting but doesn’t explain where “DSC” comes from. Wikipedia [1] explains that it’s an abbreviation for Digital Still Camera, corroborated by the Design rule for Camera File system (DCF) [2] which also specifies “DCIM” (an abbreviation for Digital Camera IMages) as the name of the DCF image root directory.
If you're interested in diving deeper into those sorts of videos, http://astronaut.io/ was shared on HN a while ago and discussed previously[1]. It definitely digs up some interesting stuff.
Hi could you please add "NSFW" to that? I am a vegan and these images are very disturbing to me. I mean, I saw someone said it was a deer so I very quickly clicked to confirm I should post this - I am mostly saying this for others and so people are aware this can be disturbing to some.
Thank you.
EDIT: Wow I kindly say something is disturbing to me and y'all respond with down votes. Thanks for the compassion.
The comment is too old, I can no longer edit it. I’m sorry that you were disturbed. I did not feel it needed a warning label when posting, as the video shows neither cruelty, nor did I consider it particularly gory. But I guess I could have erred on the side of caution and included a short description so as not to catch anyone off guard. Then again, that’s part of what fascinated me about the video and made me post it.
For late readers who are deciding on whether to watch it: the video shows somebody butchering some deer meat, and the deer carcass is shortly seen hanging in the background.
Maybe because it has nothing to do with being vegan? Some people are grossed out by carcasses while eating meat. And I used to cut animals open for work, during a time when I was strictly vegetarian.
Right there with you. Didn't sign up to watch a video of someone mutilating a corpse, and I didn't read your comment until it was too late. A content warning in general would be nice for these situations, as I'd probably click on "NSFW." May be a while before this catches on, unfortunately.
Those words may technically be accurate, but I don't think that's a very fair representation of what's going on in the video.
It may be worth considering how ordinary and unremarkable such a sight is to all the world's humans. Even in the first world, a stroll down Chinatown of any major city is bound to result in seeing some "carcasses" hanging upside down from the ceiling.
Some may object to the idea of labeling the customs of such a huge swath of the global population as something which demands a warning.
This is why I said it may take a while to catch on. Morality and norms are dynamic things. Raising animals for the sole purpose of butchering them is an incredibly inefficient way to feed people. Once we move beyond it, I have little doubt people's feelings about the practice will change quickly, as they have with other historically brutal practices.
Roughly 1% of the cattle produced in the US are grass fed (source below). The rest are fed in feed lots and receive primarily a diet of corn and soy. The rule of thumb is that for each step you go up the food chain, an order of magnitude of biomass efficiency is lost-- that is, you have to give an animal 10 calories of feed to produce 1 calorie of meat. In beef, the caloric and protein efficiency metrics are 3%, meaning that you're feeding the cow 33x as many calories and grams of protein as you're getting out of it (source below). This is incredibly inefficient and is one of many reasons that a meat-based diet is not sustainable for a growing population and particularly a growing (global) middle class.
Walking down the street in Chinatown I may not be too surprised to see a hanging animal carcass, but this is Hacker News not a street market.
> unremarkable such a sight is to all the world's humans
Vegetarianism is pretty common in the world, with for example 30% of India being vegetarian[1]. So be careful not to project your own thoughts on to "all the world's humans".
The overlap between people who have the technical knowledge to do harm and people willing to do harm is rather small, so I would advise against sharing technical knowledge like this.
It should be legal to share it of course. I would just discourage sharing such a specific, actionable piece of information with many possible misuses and few socially beneficial uses.
This hack is great because the dev didn't have to read assembly code or parse WireShark logs or deploy any kind of cracking software. They simply observed the ecosystem carefully and deduced a failure mode through some serious mental BFS.
>> This hack is great because the dev didn't have to read assembly code or parse WireShark logs or deploy any kind of cracking software.
Monitoring network traffic (HTTP requests) and logs is similar to reading any other logged data or disassembled code. Patching in a different video ID is sort of like patching ASM to implement some hack. The automation created at the end to extract and assemble the video was basically the creation of cracking software for this particular exploit.
What one person calls arcane knowledge is another's everyday tools. This is a case where I see obscure technical stuff, but web devs see regular stuff ;-)
It's also worth noting that the author doesn't go too much into detail on the how, and focuses more on the what.
> With my first account, I started using YouTube, trying every feature, pressing every button I could find, and whenever I saw an HTTP request with a video ID in it, I changed it to the target Private video
Was this done with some tooling or scripts, or purely by eyeing devtools? I could see that step being very similar to "parse WireShark logs", for example.
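Not the author, but this kind of ID swapping is easy to semi-automate with an intercepting proxy. A hypothetical mitmproxy addon sketch; the `video_id` parameter name and the host check are illustrative assumptions, not YouTube's actual request shape:

```python
# Hypothetical mitmproxy addon: rewrite any "video_id" query parameter to a
# target ID, so every feature you click gets exercised against that video.
# The parameter name and TARGET_ID are assumptions for illustration only.
from mitmproxy import http

TARGET_ID = "AAAAAAAAAAA"

class SwapVideoId:
    def request(self, flow: http.HTTPFlow) -> None:
        if "youtube.com" in flow.request.pretty_host and "video_id" in flow.request.query:
            flow.request.query["video_id"] = TARGET_ID

addons = [SwapVideoId()]
```

Run it with `mitmproxy -s swap_video_id.py` and click around as usual.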
I agree that the level of detail included makes it fairly readable without being too scary to non-experts.
Pretty much every single web vulnerability researcher uses it, to the point of absurdity. Squint hard enough and screwdrivers have a familiar shape, so of course you look for a big enough hammer.
As a web dev trying to get into reverse engineering, this was super interesting to read. With the knowledge I have, it does seem like both things require a certain amalgamation of barely-related knowledge to be able to do effectively, and I didn't realize that until just now.
Classic "confused deputy" problem. What is the current recommendation in the modern microservices world to solve it?
When a user agent (UA) makes an authenticated call to service A, which in turn makes a call to service B:
UA -[user auth]-> A -[????]-> B
how do you pass authentication information from A when making a call to B? Options I can think of:
- pass the UA token as is. This has the problem that the token becomes too powerful and can be made to call any service.
- pass own token and pass user auth info as an additional field. This doesn't solve the confused deputy problem, since A's own token can be used with any user auth, and service B can be tricked into making a request for data in B not belonging to the user
- Mint a new unique token derived from the tuple (A's own token, UA token, B service name). B then extracts user information from the token presented by A and authorizes the request. This seems to solve the confused deputy problem, because A has no access to other UA tokens, so it can't mint a new token for the wrong user. The downside is that token minting should probably be done in another service, and it requires making a call to it for almost every request between two microservices, making it a bottleneck pretty quickly.
I've never seen the last one in real life; maybe it has some critical flaws I am failing to see?
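For what it's worth, here's a rough sketch of that third option with JWTs (PyJWT; the claim names, service names, and shared signing key are made up for illustration, not any particular provider's scheme):

```python
# Sketch: a central token service mints a short-lived token that binds together
# the calling service (A), the end user (taken from the UA token), and the
# intended audience (B). Claim names and the shared signing key are assumptions.
import time
import jwt  # PyJWT

SIGNING_KEY = "token-service-secret"

def mint_delegated_token(service_a_claims: dict, ua_claims: dict, target_service: str) -> str:
    claims = {
        "iss": "token-service",
        "sub": ua_claims["sub"],          # the end user, copied from the UA token
        "azp": service_a_claims["sub"],   # the service acting on the user's behalf
        "aud": target_service,            # only B will accept this token
        "exp": int(time.time()) + 60,     # short lived
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_at_b(token: str) -> dict:
    # B checks signature, audience, and expiry before trusting "sub".
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience="service-b")
```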
Capability systems are designed specifically for this purpose. In such a system, a capability specifically for the user's right to access A and B is exposed as a handle/token, and services A and B can't access anything without first being given an exposed capability handle. Notably, capabilities can be constrained so that they're not the keys to the kingdom.
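A macaroon-style sketch of that attenuation idea, using nothing but standard-library HMAC chaining (not any particular library's API):

```python
import hashlib
import hmac

def mint(root_key: bytes, identifier: bytes) -> bytes:
    # Root capability: an HMAC over the identifier under a secret only the issuer holds.
    return hmac.new(root_key, identifier, hashlib.sha256).digest()

def attenuate(token: bytes, caveat: bytes) -> bytes:
    # Add a restriction by re-keying the HMAC with the current signature;
    # a holder can narrow a capability further but can never widen it again.
    return hmac.new(token, caveat, hashlib.sha256).digest()

# Service A is handed a capability already scoped to one video and read-only use;
# it cannot widen it back to full access because it never saw the root key.
root = mint(b"issuer-root-key", b"user:123")
scoped = attenuate(attenuate(root, b"video:abc123"), b"action:read")
print(scoped.hex())
```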
> A confused deputy is a legitimate, more privileged computer program that is tricked by another program into misusing its authority on the system.[0]
In this case, we know we don't have access to open the safe, but we were able to convince the deputy, who does have access to the safe, to give us what's inside one small piece at a time. The deputy didn't intend to empty the safe; he was only showing little bits of what's inside! That's all he's allowed to do with his access!
I see where the OP is coming from in calling it that.
There's a sense in which it is also a confused deputy problem though. You don't have authority to the object (the video) yourself, but the Ads service does, and you can convince the Ads service to use its authority to reveal information on the video that you don't otherwise have access to.
I guess. If the ad service let you create an ad consisting of another video, and you could embed a private video, that'd be pretty clear.
This reminds me more of the Facebook bug where www.facebook.com had access control but mobile.facebook.com didn't. I don't really consider every endpoint to be a "deputy".
Your third solution basically reinvents Kerberos. I don't think Kerberos envisioned services making calls to each other though. In the 1980s, I think it was assumed that the client would contact each service separately and combine the results itself.
AFAIK Keycloak by Red Hat, which is auth as a service, passes the token as is.
Not sure what you mean by "token becomes too powerful and can be made to call any service." Each sub-service can have in token what is required to access it, and that can be managed by main frontend service.
There is a limit to token size but you can easily optimize claims and stuff to not go overboard in majority of cases.
> token becomes too powerful and can be made to call any service.
If the UA token is passed as is down the chain of microservices, then every service starts to accept it. Intercepting this single token allows an attacker to crawl the whole internal system. It won't grant access to other users' data, but nevertheless it doesn't seem like a secure solution to me.
> Each sub-service can have in token what is required to access it, and that can be managed by main frontend service.
This would require the UA token to contain an audience claim for every single internal service, which is unlikely to pass security review.
> Intercepting this single token allows an attacker to crawl the whole internal system.
It can be intercepted, but not changed. It can be replayed for a while (even that only briefly, depending on the lifetime of your access token, which is usually minutes), but you can protect against that.
> This would require the UA token to contain an audience claim for every single internal service, which is unlikely to pass security review.
I have penetration tests on my main service. Sub-services are not accessible and can be secured to the desired level on the internal network. I never had security inspections on internal services (I work on highly critical gov systems). Maybe in some domains it's like you say, but I believe it's generally not a problem. Furthermore, we need to have some perspective on this - there are multiple easier ways to hack a service, and there probably exist a number of other exploits that are easier to achieve.
If the token having claims is a security issue, the entry point could swap the user's token (containing just their unique id and an expiration) with an authorized token with claims, and keep that token within the local network. Then there’s a single token broker layer and claims are secure. I’m not sure why claims would be an issue to have in the original token though, could you provide some more info on that?
Your first option has an additional threat vector - the UA token is replayable against the first service. In case of compromise, as you say, the token is too powerful and can do too much.
The second option is indeed bad.
The third option is used heavily in production for both cost savings and latency reduction.
There's a 4th option which is to go back to your auth server with the UA token and get a new one representing all the data you discuss in your tuple, but still signed and valid representing (a, user, b). This is the on behalf of flow, and is standardized in oauth2 under the name Token Exchange, roughly.
(E - actually, my 4th and your 3rd are the same. My 3rd is an improvement to not require minting a new token)
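For anyone curious, roughly what that RFC 8693 token exchange looks like on the wire (the endpoint, client credentials, and audience value below are placeholders):

```python
# Sketch of an RFC 8693 token exchange: service A trades the incoming user token
# for a new token scoped to service B. Endpoint and credentials are placeholders.
import requests

UA_TOKEN = "<access token presented by the user agent to service A>"

resp = requests.post(
    "https://auth.example.com/oauth2/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": UA_TOKEN,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": "service-b",   # downstream service the new token is scoped to
    },
    auth=("service-a-client-id", "service-a-client-secret"),  # A authenticates itself
)
resp.raise_for_status()
token_for_b = resp.json()["access_token"]
```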
Give user agent two tokens: one for A and one for B (let's call it UB). Pass UA and UB to A. A passes its own token to B plus the UB token. B uses user info from UB and roles from both UB and A's token.
UB has a list of allowed intermediates (in this case, A) so user agent doesn't send it to every service.
In my implementation there were various kinds of tokens, so UB couldn't be used by itself to invoke B directly.
For our situation all this complexity turned out to be not worth it. :-/
If you use a service-mesh (such as Istio), you can have all inter-microservice communications be over mutual TLS. Assuming you only expose an API gateway to the outside world, have the gateway handle authentication, then each service can handle any feature-level authorization with that user info.
Bonus: When using a mesh service like this, you can also ban/rate-limit/load-balance/canary calls between any two microservices if necessary.
The idea that client A has its identity authenticated by service B, and that service B checks that client A is authorized to access some endpoint, does not solve the problem of B accessing content on behalf of A that user U should not get to see.
The researcher suggests that finding private YouTube video IDs would itself be a bug...
YouTube video IDs were generated by taking an internal integer and encrypting it with a fixed key. That key was leaked in the early days of YouTube (pre-Google acquisition).
That means there are a bunch of early video IDs that are predictable. That makes this bug much worse.
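To make the "predictable" part concrete, here's a toy sketch of that kind of scheme (a stand-in Feistel permutation, not YouTube's actual cipher or key): encrypt a 64-bit upload counter under a fixed key, base64url the result, and you get 11-character IDs that look random but are trivially enumerable by anyone who knows the key.

```python
# Toy illustration only: sequential counters -> random-looking 11-char IDs,
# reversible by anyone holding KEY. The Feistel construction and key are
# stand-ins, not YouTube's real scheme.
import base64
import hashlib
import hmac

KEY = b"fixed-secret-key"  # if this leaks, IDs become enumerable

def _round(half: int, i: int) -> int:
    mac = hmac.new(KEY, bytes([i]) + half.to_bytes(4, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def video_id(counter: int) -> str:
    left, right = counter >> 32, counter & 0xFFFFFFFF
    for i in range(4):                      # tiny 4-round Feistel over 64 bits
        left, right = right, left ^ _round(right, i)
    block = ((left << 32) | right).to_bytes(8, "big")
    return base64.urlsafe_b64encode(block).rstrip(b"=").decode()

print(video_id(1), video_id(2))  # consecutive uploads, unrelated-looking IDs
```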
Also notable: as already linked, YouTube videos are made private all the time.
So this bug created a method for continued, albeit silent and low-res, access to private but known videos of note.
I think videos after being made private can be edited by the owner. So it would be possible for new “private” data from a known video id to leak this way.
i don't see how that would be a "bug" per se - the concept of "private" is set based on the account ownership, not the ID itself. you could set a public video to private, or vice versa. whatever the ID, google still has to check against your authz before showing the video. this seems better than relying on security by obscurity.
Nice work to the researcher and also gotta give YT credit for nailing down a lot of entry points in the first place and responding responsibly to this disclosure.
It seems like many people disagree. But it's true.
Microsoft used to pay 20k for exploit primitives that could potentially lead to privilege escalation. These days the bounty program seems to require a demonstration (read: a working exploit).
Zerodium offers up to 80k for a working local privilege escalation exploit. Depending on the workings, if that exploit can be used to break out of a browser sandbox you might earn a bonus.
The whitehat bounties are not market rate, if you only look at the monetary rewards.
The other thing to keep in mind here is that serious vulnerabilities can also be sold on the dark web for ~50K+ IIRC. They can also be sold to multiple different entities looking to build bot nets or whatever it is that people who are always wearing Guy Fawkes masks while sitting in dark rooms do. If you're paying just 5-20K, you're kind of counting on the people who find the exploit being nice and doing the right thing.
Maybe that's mitigated because people with the know-how to find exploits like that are usually well-educated and not desperately in need of money, but people can be greedy.
Also mitigated by the concern researchers should probably have for selling exploits to "bad guys", and the associated liabilities, criminal and otherwise
Exactly. Only one of these paths does not have a hidden/probabilistic cost. Hard to say what the market rate is for exposing yourself to criminal prosecution; it's not like there's a hedging market for such things.
Though if as a corp you cover the black market rate fully then there's really no reason for a researcher to ever sell on the black market.
What's criminal about selling to Zerodium though? I guess a case could be made that selling on the darknet is 'not in good faith'. That is an important criterion for determining whether you're aiding and abetting where I'm from.
But this problem doesn't apply to all avenues for grey market transactions.
For this kind of specialism, I'd expect a company to pay an internal employee that much ($5000) per week of work - and contractors should be charging double.
The fact that a company has undervalued this work and failed to identify it as important, and someone external has identified it, makes it worth even more.
Exactly. If you have the skill set to find exploits like the one described here, or even more complicated ones, would that payout tempt you to do this full-time? If you live in the US then I would say no. It ends up being much lower pay than you could get in other areas.
For brilliant hackers in other parts of the world, then this might be a nice job. But I don't know if these bug bounties are available everywhere.
Security researchers will disagree with you. This payout amount is considered exploitative for a bug that could cause major financial loss to clients, and reputational damage to Google.
Most companies are exploiting security researchers and pay them bounties that could be compared to the discounts found on Fiverr for different services.
Maybe the solution is to start leaking them (anonymously of course) and then when the damages add up say "I was going to disclose this but you don't pay". A few incidents and maybe they'll start paying?
Considering there is no practical attack here -- you don't know the private video's ID -- it's unlikely that that would serve as much of an incentive in this case. They'd just get the bug for free. Plus, youtube has no way of verifying you had the bug before it was leaked publicly.
One suggestion upthread was that videos that start public or unlisted can be made private without changing the ID. I haven't verified this myself, but if true it could be a way to get around someone sharing a video, then making it private to hide the contents. Obviously this is less severe than if, say, all private video IDs were well known, or a one-up integer, but it's still a leak and could be exploited in real-world scenarios.
My suggestion wasn't that someone should get credit. My suggestion was only that leaking the bugs and causing actual damage would eventually raise the price paid. As for proof of date of discovery, write a letter and sign it with bitcoin or equivalent.
That still leaves vanishingly small odds for a user to actually be exploited, in addition to the very small resolution and total lack of audio... both of which would be overcome by somebody simply downloading the video while it was public. As soon as the video was public _and_ shared/discovered by other people, it was already owned to begin with.
Well the market disagrees with security researchers.
5000 dollars is akin to a very healthy contractor rate of $200 an hour at 25 hours of work, which is a conservative estimate of how much time OP spent discovering this. That to me feels like pretty fair pay, based on things in reality, not some future value of potential cost savings that requires some pretty hand-wavy maths to quantify.
* experts are paid for applying their knowledge, not their time[1][2]
* A “fair” time based system should also pay for unsuccessful searches e.g. the previous month unsuccessfully searching for a bug in Chrome.
* if person A spends 1 hour finding bug X, and person B spends 1000 hours finding exactly the same bug X, then it is a fallacy that you could pick a fair hourly rate.
Also I’ll mention that you don’t get paid according to how much damage a bug can cause. 1: usually the damages occur to a third party (e.g. users of Microsoft Windows, not Microsoft). 2: imagine you find ten bugs that could wipe out the business Acme - you can’t get paid 10x Acme’s value (not even just 1x Acme’s profits.)
Factoring in how rare it is to find a software defect like this and how many fruitless hours of work go into finding one, the author is hardly breaking even. The only financial advantage I see is bestowed by the credibility gained from the publicity. Considering the monetary damage that a defect like this could do to YT and considering the thriving black market for zero days, $5000 seems irresponsibly low.
It's not when you consider it in context of the unpaid work one has to do to find payout bugs like this.
For context, this is approximately what Google has to pay for an entry-level engineer to work for ~40 hours.
Finding a bug of this severity level in a publicly accessible service with a bug bounty program every 40 hours of work is... a stretch of the imagination for an entry-level person.
Security researchers can easily get $500 per hour on consult.
The author should "charge" based on a percentage of the value that this bug fix gives to Google. I'd argue that for such a huge platform this bug is worth tens if not hundreds of thousands. A large-scale data leak based on this would certainly have caused way more reputational damage than that.
Why? I wish it paid a salary too but it isn't worthless. At the end of the day the businesses that best allocate their dollars for internal vs. outsourced talent are going to come out on top.
I certainly don't see room for adding regulatory requirements here. And, about public pressure, this type of work is in its infancy, relatively speaking. Going straight for the throat is too much.
"The free market will solve it" is sadly no longer a uniquely US-based view, but it is still wrong. There are so many examples of this I don't even know where to begin telling you how wrong it is.
That's too general of a statement. Everything is governed so there is a mix of free market and regulation. Having everything state-owned isn't a solution either and I know you're not suggesting that.
> Isn't the entire pen-test/security industry based on exposure (of your "brand") though? What you can bill depends directly on how well known you are & your past work, in lieu of an objective measure of how good at it a person is. Geohot could bill thousands (or tens of thousands) per hour of his time and no one would bat an eye, but if some guy named Blake tried that, he'd be laughed out of the room.
Sounds like a good argument at first but you're talking about running afoul of the law. You can't make a career of that. The better you get, the more people are after you.
I don't mean that the US would chase you in Russia but you are digging yourself a hole and limiting options as you go.
There is also a Darknet Diaries episode (can't remember which) where the guy who found a bug got into Instagram S3 buckets and source code. He felt he should have got the $1M bug bounty, but instead Facebook claimed he went further than he had permission to, and he got fuck all.
I've only just seen your comment and cba to re-read every comment, but at a quick glance, 3 of the small number of top-level comments in the thread suggest the same.
I think that it's a pretty reasonable payout - people running bug bounties are mostly interested in actual security impact and practical exploitability. And the necessity to hit Google's servers thousands of times to extract even a short video reduces that by quite a large proportion; which is definitely not a shot at the researcher, it's a great find, but exploitability is definitely lower than a more direct form of IDOR (e.g. input a private video ID and get the whole video in response).
By the same logic, blind SQLi will typically be valued 'less' (hence pay out less) than SQLi with output.
You wouldn't have to hit the server "thousands of times": do a binary search across frames to get the interesting (i.e. incriminating) bits, then flesh out the detail. Security cameras often capture single-digit frames per second, but that doesn't make the footage less valuable as evidence.
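Something like this, in sketch form (fetch_frame and looks_interesting are hypothetical placeholders for the one-image-per-request primitive from the write-up and for whatever heuristic, or human, decides a frame is worth zooming in on):

```python
# Coarse-to-fine sampling instead of pulling every frame: one request per frame
# you actually want. fetch_frame() and looks_interesting() are placeholders.

def fetch_frame(video_id: str, ms: int) -> bytes:
    return b""  # placeholder: one image request per timestamp in the real exploit

def looks_interesting(frame: bytes) -> bool:
    return False  # placeholder: human review, motion detection, OCR, ...

def coarse_to_fine(video_id: str, duration_ms: int,
                   coarse_step_ms: int = 30_000, fine_step_ms: int = 1_000) -> dict:
    # Pass 1: one frame every 30 seconds to locate interesting regions.
    coarse = {t: fetch_frame(video_id, t) for t in range(0, duration_ms, coarse_step_ms)}
    frames = dict(coarse)
    # Pass 2: refine only around the timestamps that looked interesting.
    for t, frame in coarse.items():
        if looks_interesting(frame):
            for u in range(max(0, t - coarse_step_ms), t + coarse_step_ms, fine_step_ms):
                frames.setdefault(u, fetch_frame(video_id, u))
    return frames
```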
My concern here is that, from the perspective of those looking for these, the relevant time spent is not only that spent on identifying this bug, but all the times spent looking for bugs where there were none, or where they didn't see them.
Maybe bug bounty researchers should include this in the writeup that they send with the disclosure. Label it "security testing", with some details on what endpoints/methods were checked. Suggest a market value for this work, and say that an invoice with a 50% rebate can be made out to cover it, if they wish.
Need to word it carefully to avoid it sounding like a threat or demand. Intention would be to highlight the work put in and the value for the company, and make it as easy as possible to compensate that.
They get paid so little because there isn't any other legal means to exchange a bug into money. A monopoly on the only legal way to sell goods means that you get bad prices.
But I do wonder if it would be possible to set up a legal alternative. I suspect if you did, you would find lawmakers lobbied to make it illegal, and it would already be deemed unethical by the corporate-designed ethics systems.
If the payout were too small, nobody would bother researching vulnerabilities and claiming payment while following the rules. The fact that people are doing it suggests the market is working (and perhaps the payment is even high considering how much participation is going on).
This is indeed probably one of the most common issues with all these independent services/microservices/let's build this fast/I just need that fragment of the API, so I am not gonna call the main API, let me just build a fast wrapper around it/...etc.
This is absolutely one of the biggest issues I also have seen in several companies.
this was all very easy to follow and made sense. kudos to the author.
for the experienced hackers in the room - what would your reasonable next step be if you wanted to get audio or higher resolution video?
just wondering because i often see these researchers not stop after finding the first exploit, and it's often the subsequent exploits built up from their knowledge of the system that uncover the really damning security holes
If you don't want to download every single frame you could feed these into DAIN (Depth-Aware Video Frame Interpolation) https://github.com/baowenbo/DAIN
A bit off topic, but what does YouTube gain from offering private and unlisted videos? It's convenient hosting, but it seems like they don't get nearly the same benefit as from a public video.
As I understand it, private and unlisted videos are often used as a draft mode for videos that eventually become public. Uploaders want to check everything, make sure all the transcoding is done, etc before flipping a video to public. Additionally, unlisted videos are sometimes used to run ads. This way, the ads can be played on YouTube without being listed on the advertiser's channel.
Not sure what Youtube gains except keeping people in the ecosystem, but Youtubers sometimes offer early access to their Patreon subscribers, which is done by uploading a private video and then changing the video to public after the exclusive period has finished.
I'd imagine keeping people in the ecosystem is plenty reason. Sharing sites live and die by their creators, who are a small fraction of the users. Making life marginally more convenient for them should have disproportionate returns.
I use unlisted and private videos. There's plenty of stuff I only want friends/families to see, but not the whole wide world. So, private or unlisted, then send the URL. It works great. So great, in fact, I can't think of anything better. Do you just not have this use case, or are you aware of a product that does this better?
I don't think the question is why it's useful to consumers, but rather how does it lead to revenue for YouTube? Do private videos drive substantial direct ad revenue, increase engagement, or similar?
Personally I'd bet on driving general engagement with the platform in some way, but the particular manner is not clear to me.
Well, without private, I would not have uploaded anything to YouTube. After a few personal uses, I eventually started using YouTube to upload and share content. I assume they want to capture an audience like me.
Also, the versatility of YouTube as a tool leads me to buy YouTube Premium for $10/mo.
Also, most professional content creators upload as private, then schedule the video to go public at a time that will get the most exposure from the YouTube algorithm. They also pre-prepare multiple thumbnails and swap them out for the first few hours of public exposure. It's a calculus.
Well in that case the answer is super simple: The same reason Google provides any other free service, whether it be Maps, Gmail, Photos, Search, Hangouts, Meet, Pay, whatever. The more Google services you use and the more time you spend using them, the more you can be monetized.
It's an essential feature for scheduling and organization.
Imagine your business is built on Youtube. You want to be able to test things in your videos internally, and upload them prior to a scheduled release date.
To keep people from going to other video hosts who offer those features. It's a value-add for the platform. It lets you upload videos well in advance of publishing them, or temporarily remove them. You also have to consider the fact viewer numbers are so low for private and unlisted videos they may as well be free to host, aside from the drive space they occupy.
For unlisted videos, I'd say it's a useful feature for when you want to easily share a video with some people, but not the whole world (especially if they already have a YT account). And for YT, it allows creators to continue to use YT instead of having to go looking for another solution.
There are a lot of product announcements that are handled by uploading private videos that are made public at a given time, so there'd be quite a lot of attacker interest in this exploit if it hadn't been fixed. Worth the bounty.
It isn't a very noteworthy scenario, because it requires the person who _has_ been given access to a private video/ID to share it with someone who shouldn't have access. In that already-rare scenario, the person with access can simply download or record the video anyway, thereby leaking it (with audio and high-resolution to boot). And that's with everything working as intended.
Chained with another exploit to actually be able to discover someone's private video IDs, it would have been worth a much higher bounty.
It is very common for people to discover private video IDs by combing through the HTML of announcement pages, product listings inadvertently put up early by retailers (and some retailer, somewhere, will always screw it up), etc.
I honestly can't think of a week that goes by without the discovery of a private video ID in the videogame space tbh for some unannounced title or feature.
Ummm... no. If this is like a week's worth of work, you're looking at $250k annualized income, $125k after overhead.
In practice, you're not finding a bug like this every week.
The bug bounty programs were originally intended to give a white hat market alternative to the black hat and gray hat markets. They don't do that. If I find a bug, and I want profit, I'm much better off selling to my government than to Google.
One can only imagine the number of exploits the US, China, Russia, North Korea, etc. have in their cyber-warfare vaults.
Exploits compound. Often, two minor exploits make a major exploit.
Every time someone gets a bug bounty there's someone saying it's not enough and it should have been a bajillion dollars instead. $5000 for a week's work is great and clearly it's working.
Some points to consider are that there's risk involved dealing with the black market, including getting the payout in a way that doesn't trace back to you and legal liability if you're caught, a company has no reason to pay >=$x for an exploit that will cost them $x, and beyond that I suspect a lot of people simply feel better about telling the company about an exploit than selling it to criminals who will use it for extortion and theft.
So let's say you are able to find one of these every other week. $130k pretax. But you are finding 26 different bugs that (judging by the responses in this comment section) require fairly clever thinking, and you are doing it consistently.
I don't think companies owe it to researchers to exclusively supply their income, but I think there's room for improvement on the payout when most of the point is to deter selling on the black/gray market.
I got a 5k payout from Google for a serious OAuth bypass bug. I'm not a security researcher, so I wouldn't have any idea or really any desire to sell something like this to a government.
But I'd have to agree that if I had publicly revealed the bug, Google would have lost business, or faced possible fines from governments, worth far above and beyond 5k.
Lol, so far from it. This came from a bug assigned to me in JIRA like any other support ticket on any other bullshit day. It's just that by the end of the day I realized I could access certain data from any gSuite domain. I submitted it through the bounty system as I didn't want it public in the issue tracker. I really just wanted the API fixed as we were getting shitted on by our customers...
If everything were priced at its actual value, SV would collapse like a house of cards. The whole industry is based on obtaining, for bargain basement prices, engineering or research work which could be worth billions.
That's exactly why there is a black market for 0-day exploits... because they are worth more than what is paid by the companies that own those liabilities.
I’m sort of hesitant to pursue this, but I have one video in my favorites list that’s been made private, and I’m super curious what it was, and why I liked it in the first place.
I really love the author's persistence in just changing the ID at every chance. "Hacking" often carries with it the perception that it's a highly covert and technical craft, when in reality it's oftentimes people banging on the wall in different ways until the behavior changes.
It had ... an interesting feature called Moments... To be honest I am not quite sure what advertisers use this feature for, nevertheless, it seemed interesting
For putting their ad at the most relevant point, maybe, or better still, putting their ad at the point to which their audience will skim.
A bug like this suggests that Google's services infrastructure isn't based on identity propagation, and that instead services have full access to other services, with access control and authorization checks performed in the client service.
I thought there was an unwritten rule on HN that you don't mention closed-source, subscription-based software behind an email registration without making those downsides abundantly clear? Don't enable the next SolarWinds.
> First thing I did was to upload a video to my second testing account’s YouTube channel, and set the video’s privacy to Private, so I can use that video for testing. (Remember, always only test against resources/accounts you own!)
AFAIK this is against YouTube's EULA, so owning a resource is irrelevant. He doesn't own the accounts, Google does.
And Google prohibits any attempt at tampering with their systems.
Google also has a bug bounty program which sets terms and conditions for these kinds of activity: "When investigating a vulnerability, please, only ever target your own accounts." from https://www.google.com/about/appsecurity/reward-program/.
[0] https://www.urbandictionary.com/define.php?term=fat-fingerin...
[1] https://datagenetics.com/blog/december22012/index.html