> This showed up to Internet users trying to access our customers' sites as an error page indicating a failure within Cloudflare's network.
As a visitor to random web pages, I definitely appreciated this—much better than their completely false “checking the security of your connection” message.
> The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems' permissions
Also appreciate the honesty here.
> On 18 November 2025 at 11:20 UTC (all times in this blog are UTC), Cloudflare's network began experiencing significant failures to deliver core network traffic. […]
> Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06 all systems at Cloudflare were functioning as normal.
Why did this take so long to resolve? I read through the entire article, and I understand why the outage happened, but when most of the network goes down, why wasn't the first step to revert any recent configuration changes, even ones that seem unrelated to the outage? (Or did I just misread something and this was explained somewhere?)
Of course, the correct solution is always obvious in retrospect, and it's impressive that it only took 7 minutes between the start of the outage and the incident being investigated, but taking roughly another 3 hours before core traffic was flowing again, and nearly 6 hours before everything was back to normal, isn't great.
Because we initially thought it was an attack. And then when we figured it out we didn’t have a way to insert a good file into the queue. And then we needed to reboot processes on (a lot of) machines worldwide to get them to flush their bad files.
Thanks for the explanation! This definitely reminds me of CrowdStrike outages last year:
- A product depends on frequent configuration updates to defend against attackers.
- A bad data file is pushed into production.
- The system is unable to easily/automatically recover from bad data files.
(The CrowdStrike outages were quite a bit worse though, since it took down the entire computer and remediation required manual intervention on thousands of desktops, whereas parts of Cloudflare were still usable throughout the outage and the issue was 100% resolved in a few hours)
It'd be fun to read more about how you all procedurally respond to this (but maybe this is just a fixation of mine lately). Like are you tabletopping this scenario, are teams building out runbooks for how to quickly resolve this, what's the balancing test for "this needs a functional change to how our distributed systems work" vs. "instead of layering additional complexity on, we should just have a process for quickly and maybe even speculatively restoring this part of the system to a known good state in an outage".
We incorrectly thought at the time it was attack traffic coming in via WARP into LHR. In reality the failures just started showing up there first because of how the bad file propagated and where in the world it was working hours.
Probably because it was the London team that was actively investigating the incident and initially came to the conclusion that it may be a DDoS while being unable to authenticate to their own systems.
Question from a casual bystander: why not have a virtual/staging mini node that receives these feature file changes first and catches errors, to veto the full production push?
Or maybe you do have something like this, but the specific DB permission change in this context only failed in production?
I think the reasoning here comes down to the nature of the file being pushed - from the post-mortem:
"This feature file is refreshed every few minutes and published to our entire network and allows us to react to variations in traffic flows across the Internet. It allows us to react to new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly as bad actors change their tactics quickly."
In this case, the file fails quickly. A pretest that consists of just attempting to load the file would have caught it. Minutes is more than enough time to perform such a check.
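A minimal sketch of such a pretest (hypothetical file format, field names, and limit; the real pipeline obviously has more context) could be as simple as loading the file the same way the consuming module would and refusing to publish on failure:

    import json
    import sys

    MAX_FEATURES = 200  # assumed limit, mirroring what the consuming module enforces

    def pretest(path: str) -> None:
        """Try to load the feature file roughly the way production would."""
        with open(path) as f:
            features = json.load(f)  # a malformed or truncated file fails here
        if len(features) > MAX_FEATURES:
            raise ValueError(f"{len(features)} features exceeds the {MAX_FEATURES} limit")
        names = [feature["name"] for feature in features]  # assumed schema
        if len(names) != len(set(names)):
            raise ValueError("duplicate feature names")

    if __name__ == "__main__":
        try:
            pretest(sys.argv[1])
        except Exception as exc:
            print(f"refusing to publish: {exc}")
            sys.exit(1)

A check like this takes seconds, which fits comfortably inside a "refreshed every few minutes" cadence.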
Just asking out of curiosity, but roughly how many staff would've been involved in some way in sorting out the issue? Either outside regular hours or redirected from their planned work?
Is there some way to check the sanity of the configuration change, monitor it and then revert back to an earlier working configuration if things don't work out?
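In the abstract, yes. A toy sketch of that shape of pipeline (all names here are hypothetical; Cloudflare's real deployment system is far more involved): push the change to a small canary set, watch health for a while, then either promote it everywhere or revert to the previous known-good config.

    import time

    def staged_rollout(new_config, canary_hosts, all_hosts, deploy, healthy,
                       soak_seconds=120):
        """Canary the change, then promote it or roll it back automatically."""
        # deploy() is assumed to return the config it replaced, so we can revert.
        previous = {host: deploy(host, new_config) for host in canary_hosts}
        time.sleep(soak_seconds)  # let error rates and latencies settle
        if all(healthy(host) for host in canary_hosts):
            for host in all_hosts:
                deploy(host, new_config)  # promote to the rest of the fleet
            return True
        for host, old_config in previous.items():
            deploy(host, old_config)  # automatic revert to the known-good state
        return False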
Is it though? Or is it an "oh, this is such a simple change that we really don't need to test it" attitude? I'm not saying this applies to TFA, but some people are so confident that no pressure is felt.
However, you forgot that the only lighting is the red glow of the klaxons, so you really can't differentiate the colors of the wires.
Side thought, as we're working on 100% onchain systems (for digital-asset security, so different goals):
Public chains (e.g. EVMs) can be a tamper-evident gate that only promotes a new config artifact if (a) a timelock delay has elapsed or a multi-sig review has signed off, and (b) a succinct proof shows the artifact satisfies safety invariants like ≤200 features, deduped, schema X, etc.
That could have blocked propagation of the oversized file long before it reached the edge :)
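Setting the chain itself aside, the gating logic being described is small; a rough plain-Python caricature (hypothetical thresholds and schema, standing in for whatever the contract would actually enforce):

    import time
    from dataclasses import dataclass, field

    REVIEW_DELAY_S = 3600      # (a) how long an artifact must sit before promotion
    REQUIRED_APPROVALS = 2     # (a) or: multi-sig threshold
    MAX_FEATURES = 200         # (b) size invariant from the incident

    @dataclass
    class PendingArtifact:
        features: list
        submitted_at: float
        approvals: set = field(default_factory=set)

    def satisfies_invariants(features: list) -> bool:
        """(b) size bound, deduplication, and a stand-in for 'schema X'."""
        names = [f.get("name") for f in features]
        return (len(features) <= MAX_FEATURES
                and len(names) == len(set(names))
                and all({"name", "type"} <= f.keys() for f in features))

    def may_promote(artifact: PendingArtifact, now: float = None) -> bool:
        now = time.time() if now is None else now
        delay_ok = now - artifact.submitted_at >= REVIEW_DELAY_S
        sigs_ok = len(artifact.approvals) >= REQUIRED_APPROVALS
        return (delay_ok or sigs_ok) and satisfies_invariants(artifact.features)

Whether the delay/multi-sig overhead is acceptable for a file that has to ship every few minutes is a separate question, of course.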
> much better than their completely false “checking the security of your connection” message
The exact wording (which I can easily find, because a good chunk of the internet gives it to me, because I’m on Indian broadband):
> example.com needs to review the security of your connection before proceeding.
It bothers me how this bald-faced lie of a wording has persisted.
(The “Verify you are human by completing the action below.” / “Verify you are human” checkbox is also pretty false, as ticking the box in no way verifies you are human, but that feels slightly less disingenuous.)
DNS is actually one of the easiest services to self-host, and it's fairly tolerant of downtime due to caching. If you want redundancy/geographical distribution, Hurricane Electric has a free secondary/slave DNS service [0] where they'll automatically mirror your primary/master DNS server.
The author of ConTeXt (a TeX format similar to LaTeX) has some interesting comments on AsciiMath [0] [1]. Its space handling looks especially problematic; the example given in [0]
o ox x = xo
a ax x = xa
ooxx=xo
aaxx=xa
ac+sinx+xsqrtx+sinsqrtx+sinsqrt(x)
produces the following output
o ⊗ x = x o
a a x x = x a
∞ × = x o
a a × = x a
a c + sin x + x √x + sin √x + sin √x
Its handling of commas looks even worse, but it's tricky to demonstrate that in plain text.
Shameless plug (again): This is one of the issues I addressed when I wrote the competing mathup library.
alphanumeric tokens MUST be separated from each other with a symbol or whitespace.
ooxx => ooxx
oo xx => ∞×
sqrtx => sqrtx
sqrt x => √x
sqrt(x) => √x
The way I deal with commas is that I always treat commas as an operator (or a separator in group context), unless the author configures the comma to be the decimal mark (and I only allow exactly one decimal mark per parse run).
> the only robust way to edit \ASCIIMATH\ is to use a \WYSIWYG\ editor and hope that the parser doesn't change ever.
Ouch. I wrote a couple of parsers when I was young and foolish without trying to specify the grammar, and it’s a good thing they didn’t get popular, because every bugfix changed the syntax and broke texts that had been working before.
> It has significant whitespace, that shouldn't be problematic, should it?
Significant whitespace is totally fine, but whitespace that is sometimes significant and sometimes not isn't. In the examples above, "sinsqrtx" produces the same output as "sin sqrt x", but "ooxx", "o ox x", and "o o x x" all produce completely different output.
The problem is with the stream of alphanumeric symbols. Most languages treat sqrtx as a single token, but asciimath treats it as two (sqrt and x). If you put a symbol in between them (sqrt+x), most languages treat it as three tokens (sqrt, +, x), and asciimath is no different.
asciimath’s choice here is to make whitespace (sometimes) optional between two subsequent alphanumeric tokens, and it is a rather odd choice. I‘m not sure which other language (markup or otherwise) does it this way.
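A toy tokenizer pair (not either real implementation, just an illustration of the two policies under discussion, with a tiny stand-in keyword set) makes the difference concrete:

    import re

    KEYWORDS = {"sin", "sqrt", "oo", "ox"}  # stand-in for asciimath's symbol table

    def asciimath_style(src: str) -> list[str]:
        """Greedily peel known keywords out of a run of letters, whitespace or not."""
        tokens, i = [], 0
        while i < len(src):
            if src[i].isspace():
                i += 1
                continue
            for kw in sorted(KEYWORDS, key=len, reverse=True):
                if src.startswith(kw, i):
                    tokens.append(kw)
                    i += len(kw)
                    break
            else:
                tokens.append(src[i])  # unknown letters fall through one at a time
                i += 1
        return tokens

    def mathup_style(src: str) -> list[str]:
        """Alphanumeric runs are single tokens; only symbols/whitespace separate them."""
        return re.findall(r"[A-Za-z0-9]+|\S", src)

    print(asciimath_style("sinsqrtx"))   # ['sin', 'sqrt', 'x']
    print(mathup_style("sinsqrtx"))      # ['sinsqrtx']
    print(mathup_style("sin sqrt x"))    # ['sin', 'sqrt', 'x']

The greedy version is what makes "sinsqrtx" and "sin sqrt x" render the same while "ooxx" and "o ox x" diverge.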
This already exists, and I agree that it's the best solution here, but for some reason this was rejected by the Chrome developers. I discussed this solution a little more elsewhere in the thread [0].
> This really is just a storm in a waterglass. Nothing like the hundreds or tens of thousands of flash and java applet based web pages that went defunct when we deprecated those technologies.
Sure, but Flash and Java were never standards-compliant parts of the web platform. As far as I'm aware, this is the first time that something has been removed from the web platform without any replacements—Mutation Events [0] come close, but Mutation Observers are a fairly close replacement, and it took 10 years for them to be fully deprecated and removed from browsers.
I'm strongly against the removal of XSLT support from browsers—I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website, I commented on the original GitHub thread [2], and I use XSLT for non-web purposes [3].
But I think that this website is being hyperbolic: I believe that Google's stated security/maintenance justifications are genuine (but wildly misguided), and I certainly don't believe that Google is paying Mozilla/Apple to drop XSLT support. I'm all in favour of trying to preserve XSLT support, but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
Small, sure, but not elite. xml-stylesheet is by far the easiest way to make a simple templated website full of static pages. You almost could not make it any simpler.
WebExtensions still have them? I thought the move to HTML (for better or worse) would've killed that. Even install.rdf got replaced IIRC, so there shouldn't be many traces of XML left in the new extensions system...
Can’t you just do the xslt transformation server-side? Then you can use the newest and best xslt tools, and the output will work in any browser, even browsers that never had any built-in xslt support.
> Can't you just do the xslt transformation server-side?
For my Atom feed, sure. I'm already special-casing browsers for my Atom feed [0], so it wouldn't really be too difficult to modify that to just return HTML instead. And as others mentioned, you can style RSS/Atom directly with CSS [1].
For my Stardew Valley Item Finder web app, no. I specifically designed that web app to work offline (as an installable PWA), so anything server-side won't work. I'll probably end up adding the JS/wasm polyfill [2] to that when Chrome finally removes support, but the web app previously had zero dependencies, so I'm a little bit annoyed that I'll have to add a 2MB dependency.
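For the feed case, the server-side (or build-time) version really is tiny; a sketch using Python's lxml, assuming an XSLT 1.0 stylesheet named feed.xsl next to the feed (a 3.0 processor such as Saxon would look much the same conceptually):

    from lxml import etree

    # Apply the stylesheet once at build time instead of relying on the
    # browser to honor <?xml-stylesheet?> at view time.
    transform = etree.XSLT(etree.parse("feed.xsl"))
    html_doc = transform(etree.parse("feed.xml"))

    with open("feed.html", "wb") as out:
        out.write(etree.tostring(html_doc, method="html", pretty_print=True))

That obviously doesn't help the offline-PWA case above, which is exactly where the polyfill ends up being the only option.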
That is actually Mozilla's stance in the linked issue, except it's on the client side: they would rather replace it with a non-native implementation (so there are no surprising security issues anymore) if removing it outright is impractical.
There is actually an example of such a situation. Mozilla removed the Adobe PDF plugin support a long time ago and replaced it with pdf.js. It's still a slight performance regression for very large PDFs, but it's enough for most use cases.
But the bottom line is "it's actually worth doing because people are using it". They won't actively support a feature that few people use, because they don't have the people to support it.
Huh? How would a static site generator serve both RSS and the HTML view of the RSS from the same file?
To be extra clear: I want to have <a href="feed.xml">My RSS Feed</a> link on my blog so everyone can find my feed. I also want users who don't know about RSS to see something other than a wall of plain-text XML.
You don't serve them from the same file. You serve them from separate files.
As I mention in my other comment to you, I don't know why you want an RSS file to be viewable. That's not an expected behavior. RSS is for aggregators to consume, not for viewing.
Technically, the web server can do content negotiation based on Accept headers with static files. But… In theory, you shouldn't need a direct link to the RSS feed on your web page. Most feed readers support a link-alternate in the HTML header: <link rel="alternate" type="application/rss+xml" href="/feed.xml">
Someone who wants to subscribe can just drop example.com/blog into the feed reader and it will do the right thing. The "RSS Feed" interactive link could then go to an HTML web page with instructions for subscribing and/or a preview.
I think also literally, independent of the cheeky tone.
Where it lost me was:
>RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by multiple government sites. Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
I mean yes, Google lobbies, and certainly can lobby for bad things. And though I personally didn't know much of anything about XSLT, from reading a bit about it I'm certainly ready to accept the premise that we want it. But... is Google lobbying for an XSLT law? Does "control legislation" mean deprecating a tool for publishing info on government sites?
I actually love the cheeky style overall, and would say it's a brilliant signature style to get attention, but I think implying this is tied to a campaign to control laws is rhetorical overreach even by its own intentionally cheeky standards.
I think the reason you're considering it rhetorical overreach is because you're taking it seriously. If the author doesn't actually mind the removal of XSLT support (i.e. possibly rues its removal, but understands and accepts the reasons), then it's really a perfectly fine way to just be funny.
Right, my quote and your clarification are saying the same thing (at least that's what I had in mind when I wrote that).
But that leaves us back where we started, because characterizing that as "control the laws" is an instance of the rhetorical overreach I'm talking about, strongly implying something like literal control over the policy-making process.
Laws that are designed to help you but you can't easily access, or laws that are designed to control/restrict you and that get shoved in your face: once you manage "consumption" of laws, you can push your agenda too.
I agree that you would have to believe something like that to make sense of what it's implying. But by the same token, that very contention is so implausible that that's what makes it rhetorical overreach.
It would be ridiculous to suggest that anyone's access to published legislation would be threatened by its deprecation.
This is probably the part where someone goes "aha, exactly! That's why it's okay to be deprecated!" Okay, but the point was supposed to be what would a proponent of XSLT mean by this that wouldn't count as them engaging in rhetorical overreach. Something that makes the case against themselves ain't it.
It's hard enough telling them to also get off Instagram and Whatsapp and switch to Signal to maintain privacy. I'm going to have a hard time explaining what XSLT is!
> but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
You cannot “convince decision-makers” with a webpage anyway. The goal of this one is to raise awareness on the topic, which is pretty much the only thing you can do with a mere webpage.
For some reason people seem to think raising awareness is all you need to do. That only works if people already generally agree with you on the issue. Want to save endangered animals? Raising awareness is great. However, if you're on an issue where people are generally aware but unconvinced, raising more awareness does not help. Having better arguments might.
>For some reason people seem to think raising awareness is all you need to do.
I guess I'm not seeing how that follows. It can still be complementary to the overall goal rather than a failure to understand the necessity of persuasion. I think the needed alchemy is a serving of both, and I think it actually is trying to persuade at least to some degree.
I take your point with endangered animal awareness as a case of a cause where more awareness leads to diminishing returns. But if anything that serves to emphasize how XSLT is, by contrast, not anywhere near "save the animals" level of oversaturation. Because save the animals (in some variation) is on the bumper sticker of at least one car in any grocery store parking lot, and I don't think XSLT is close to that.
I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it. Conversely, XSLT being deprecated has lower awareness initially, but when you raise it many people hearing that aren't necessarily sympathetic - I don't think most engineers think particularly fondly about XSLT, my reaction to it being deprecated is basically "good riddance, I didn't think anyone was really using it in browsers anyway".
As an open source developer, I also have a lot of sympathy for Google in this situation. Having a legacy feature hold the entire project back, despite almost nobody using it, because the tiny fraction that do are very vocal and think it's fine to be abusive to developers to get what they want (despite the fact it's free software they didn't pay a dime for), is something I think a lot of open source devs can sympathize with.
I think all that you say applies to a random open source project done by volunteer developers, but really doesn't in case of Google.
Google has used its weight to build a technically better product, won the market, and are now driving the whole web platform forward the way they like it.
This has nothing to do with the cost of maintaining the browser for them.
It seems likely to me that it is about the 'cost' - not literally monetary cost, but that one or two engineers periodically have to wrangle libxslt for Chrome, think it's a pain in the ass and not widely used, and are now responding by saying "What if I didn't have to deal with this any more?"
I'm not sure what else it would be about - I don't see why they would especially care about removing XSLT support if cost isn't a factor.
Google is still made up of people, who work a finite number of hours in a day, and who maybe have other things they want to spend their time on than maintaining legacy cruft.
There is this weird idea that wealthy people & corporations aren't like the rest of us, and no rules apply to them. And to a certain extent it's true that things are different if you have that type of wealth. But at the end of the day, everyone is still human, and the same restrictions still generally apply. At most they are just pushed a little further out.
My comment is not about that at all: it's a response to the claim that Google's SW engineering team is feeling the heat just like any other free software project, and that we should therefore be sympathetic to them.
I am sure they've got good reasons they want to do this: them having the same problems as an unstaffed open source project getting vocal user requests is not one of them.
>I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it.
You're completely right in your literal point quoted above, but note what I was emphasizing. In this example, "save the animals" was offered as an example of a problem oversaturated in awareness to a point of diminishing returns. If you don't think animal welfare illustrates that particular idea, insert whatever your preferred example is. Free tibet, stop diamond trade, don't eat too much sodium, Nico Harrison shouldn't be a GM in NBA basketball, etc.
I think everyone on all sides agrees with these messages and agrees that there's value in broadcasting them up to a point, but then it becomes not an issue of awareness but willpower of relevant actors.
You also may well be right that developers would react negatively; honestly I'm not sure. But the point here was supposed to be that this page's author wasn't making the mistake of strategic misunderstanding on the point of oversaturating an audience with a message. Though perhaps they made the mistake of thinking they would reach a sympathetic audience.
> For some reason people seem to think raising awareness is all you need to do.
I don't think many do.
It's just that raising awareness is the first step (and likely the only one you'll ever see anyway, because for most topics convincing *you* in particular isn't going to have any impact).
Sure, but translating that movement to actual policy change usually depends on how much uninvolved people are sympathetic to the protestors, which usually involves how rational the protestors are perceived as. Decision makers are affected by public sentiment, but public sentiment of the uninvolved public generally carries more weight.
That's why the other side usually tries to smear protests as being crazy mobs who would never be happy. The moment you convince uninvolved people of this, the protestors lose most of their power.
> Rational arguments come later, and mostly behind closed doors.
I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after. If you're resorting to protest you are trying to leverage public support into a more powerful position. That's about how much power you have not the soundness of your argument.
> Sure, but translating that movement to actual policy change usually depends on how much uninvolved people are sympathetic to the protestors
No, that's the exception rather than the rule. That's a convenient thing to teach the general public, and that's why people like MLK Jr. and Gandhi are celebrated, but most movements that make actual policy changes do so while disregarding bystanders entirely (or even actively hurting bystanders; that's why terrorism, very unfortunately, is effective in practice).
> which usually involves how rational the protestors are perceived as
I'm afraid most people don't really care about how rational anyone is perceived as. Trump wouldn't have been elected twice if that was the case.
> Decision makers are affected by public sentiment, but public sentiment of the uninvolved public generally carries more weight.
They only care about the sentiment of the people that can cause them nuisance. A big crowd of passively annoyed people will have much less bargaining power than a mob of angry male teenagers doxxing and mailing death threats: see the gaming industry.
> I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after.
Bold claim that contradicts the entire history of social conflicts…
My emotional response to XSLT being removed was: "finally!". You would need some good arguments to convince me that, despite my emotions applauding this decision, it is actually a bad thing.
> Last time this came up the consensus was that libxslt was barely maintained and never intended to be used in a secure context and full of bugs.
Sure, I agree with you there, but removing XSLT support entirely doesn't seem like a very good solution. The Chrome developer who proposed removing XSLT developed a browser extension that embeds libxslt [0], so my preferred solution would be to bundle that by default with the browser. This would:
1. Fix any libxslt security issues immediately, instead of leaving it enabled for 18 months until it's fully deprecated.
2. Solve any backwards compatibility concerns, since it's using the exact same library as before. This would avoid needing to get "consensus" from other browser makers, since they wouldn't be removing any features.
3. Be easy and straightforward to implement and maintain, since the extension is already written and browsers already bundle some extensions by default. Writing a replacement in Rust/another memory-safe language is certainly a good idea, but this solution requires far less effort.
This option was proposed to the Chrome developers, but was rejected for vague and uncompelling reasons [1].
> I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust.
That's already been done [2], but maintaining that and integrating it into the browsers is still lots of work, and the browser makers clearly don't have enough time/interest to bother with it.
From your [1] “rejected for vague and uncompelling reasons”:
>>> To see how difficult it would be, I wrote a WASM-based polyfill that attempts to allow existing code to continue functioning, while not using native XSLT features from the browser.
>> Could Chrome ship a package like this instead of using native XSLT code, to address some of the security concerns? (I'm thinking about how Firefox renders PDFs without native code using PDF.js.)
> This is definitely something we have been thinking about. However, our current feeling is that since the web has mostly moved on from XSLT, and there are external libraries that have kept current with XSLT 3.0, it would be better to remove 1.0 from browsers, rather than keep an old version around with even more wrappers around them.
The bit that bothers me is that Google continue to primarily say they’re removing it for security reasons, although they have literally made a browser extension which is a drop-in replacement and removes 100% of the security concerns. The people that are writing about the reasons know this (one of them is the guy that wrote it), which makes the claim a blatant lie.
I want people to call Google specifically out on this (and Apple and Mozilla if they ever express it that way, which they may have done but I don’t know): that their “security” argument is deceit, trickery, dishonest, grossly misleading, a bald-faced lie. If they said they want to remove it because barely anyone uses it and it will shrink their distribution by one megabyte, I would still disagree because I value the ability to apply XSLT on feeds and other XML documents (my Atom and RSS feed stylesheets are the most comprehensive I know of), but I would at least listen to such honest arguments. But falsely hiding behind “security”? I impugn their honour.
(If their extension is not, as their descriptions have implied, a complete, drop-in replacement with no caveats, I invite correction and may amend my expressed opinion.)
You still need to maintain that sandbox. Ultimately no one wants to spend energy maintaining software that isn't used very heavily; that's why feature deprecation happens. If someone cares enough, they should step in and offer to take over long-term maintenance and fix the problems. Ideally a group of people, and perhaps more ideally, a group with some financial backing (e.g. a company), otherwise it may be difficult to actually trust that they will live up to the commitment.
Even projects like Linux deprecate old underused features all the time. At least the Internet has real metrics about API usage which allows for making informed decisions. Folks describing how they are part of that small fraction of users doesn't really change the data. What's also interesting is that a very similar group of people seem to lament about how it's impossible to write a new browser these days because there are too many features to support.
"The sandbox" in this case is their ability to execute WASM securely. It's a necessary part of the "modern" web. If they were planning on also nuking WASM from orbit because it couldn't be made secure, this would be another topic entirely. There's nothing they're maintaining just-for-xslt-1.0-support beyond a simple build of libxslt to WASM, a copy block in their build code, and a line in a JSON list to load WASM provided built-ins (which they would want anyway for other code).
I think their logic makes sense. They're removing support because of security concerns, and they're not adding support back using an extension because approximately nobody uses this feature.
Adding the support back via an extension isn't cost free.
I suppose that’s a legitimate framing. But I will still insist that, at the very least, their framing is deliberately misleading, and that saying “you can’t have XSLT because security” is dishonest.
But when it “isn’t cost-free”… they’ve already done 99.9% of the work required (they already have the extension, and I believe they already have infrastructure to ship built-in functionality in the form of Web Extensions—definitely Firefox does that), and I seem to recall hearing of them shifting one or two things from C/C++ to WASM before already, so really it’s only a question of whether it will increase installer/installed size, which I don’t know about.
According to the extension's README there are still issues with it, so they definitely would have to do more work.
And yeah Chrome is really strict about binary size these days. Every kB has to be justified. It doesn't support brotli compression because it would have added like 16kB to the binary size.
"effecting something else" (i.e. escaping the sandbox) is the core issue. JavaScript (and WASM) engines have to be designed to defend against the user running outright malicious scripts without those scripts being able to gain access to the rest of the browser or the host system. By comparison, potentially exploitable but non-malicious, messy code is basically a non-issue. Any attacker that found a bug in a sandboxed XSLT polyfil that allowed them to escape the sandbox or do anything else malicious would be able to just ship the same code to the browser themselves to achieve the same effect.
The easier thing might have been if Chrome & co opted to include any number of polyfills in JS bundled with the browser instead of making an odd situation where things just break.
I think you can recognize that the burden of maintaining a proven security nightmare is annoying while simultaneously getting annoyed for them over-grabbing on this.
Which would be a totally sensible thing you do. Especially if jpeg was a rarely used image format with few libraries supporting it, the main one being unmaintained.
There is already a replacement in Rust, but people like you and the Google engineers have ignored that fact. "Good luck," they all say, turning their noses away from reality so they can kill it. Thanks for your support.
> LLMs are garbage and they add nothing to the browsing experience.
The builtin translation feature [0] is LLM-based [1], and that adds a ton to my browsing experience, since it's made web pages in other languages accessible to me.
[1]: According to Wikipedia, "A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks" [2]. The translation code is transformer/RNN-based and trained on raw texts [3, 4], and translation definitely qualifies as a natural language processing task, meaning that the translation feature is LLM-based.
It does, but it's L3-only (software-based), not L1 (hardware-based) [0]. Streaming providers can then decide which content they'll let you access depending on the level. Speaking from experience, some providers work perfectly (full HD content with no issues), others only give you a really low-resolution stream, and others refuse to work entirely.
> If so, why would Google allow this but not for other OSS browsers?
When EME [1] was first released, Firefox had ~10% market share, and it would look pretty bad for Google to exclude another major browser maker. Smaller browsers don't have the political clout necessary to convince Google to give them access.
> Work functions don't make sense as a token tax; there's actually the opposite of the antispam asymmetry there. Every bot request to a web page yields tokens to the AI company. Legitimate users, who far outnumber the bots, are actually paying more of a cost.
Agreed, residential proxies are far more expensive than compute, yet the bots seem to have no problem obtaining millions of residential IPs. So I'm not really sure why Anubis works—my best guess is that the bots have some sort of time limit for each page, and they haven't bothered to increase it for pages that use Anubis.
> with a content protection system built in Javascript that was deliberately expensive to reverse engineer and which could surreptitiously probe the precise browser configuration a request to create a new Youtube account was using.
> The next thing Anubis builds should be that, and when they do that, they should chuck the proof of work thing.
They did [0], but it doesn't work [1]. Of course, the Anubis implementation is much simpler than YouTube's, but (1) Anubis doesn't have dozens of employees who can test hundreds of browser/OS/version combinations to make sure that it doesn't inadvertently block human users, and (2) it's much trickier to design an open-source program that resists reverse-engineering than a closed-source program, and I wouldn't want to use Anubis if it went closed-source.
Google's content-protection system didn't simply make sure you could run client-side Javascript. It implemented an obfuscating virtual machine that, if I'm remembering right (I may be getting some of the details blurred with Blu-ray's BD+ scheme), built up a hash input of runtime artifacts. As I understand it, it was one person's work, not the work of a big team. The "source code" we're talking about here is clientside Javascript.
Either way: what Anubis does now --- just from a CS perspective, that's all --- doesn't make sense.