> I don't see the U.S rushing to adopt either renewables or nuclear. We're just increasing our fossil fuel burning (natural gas)
This is wrong. Natural gas is falling from 42% of U.S. electricity generation in '23 and '24 to 40% in '25E and '26E [1]. Renewables, meanwhile, keep marching from 23% ('24) to 24% ('25E) and 26% ('26E). (Nuclear falls from 19% ('24) to 18% ('25E and '26E).)
That's capacity, not generation. Getting through the accounting tricks that make renewables seem viable is a challenge. 1 watt of nuclear capacity is worth 1.5 watts of FF and 9 watts of renewables. That's because the amount of power from each type of plant is very different due to downtimes of generation. Nuclear runs all the time and refuels for a couple of days every 18 months (depending on the reactor). FF plants run most of the time but require 10x more maintenance downtime. Renewables only make power about 10% of the time. That's how they skew the numbers to make renewables seem viable when they produce a shockingly low amount of actual power. Oh, and if you use renewables for baseload you have to keep a spinning reserve, which means they actually increase (not decrease) the amount of CO2 emitted per watt generated.
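For what it's worth, here is the conversion being argued about: annual generation is nameplate capacity times capacity factor times hours per year. The capacity factors below are the parent comment's assumptions, which are disputed further down the thread, not settled figures.

```python
# Toy illustration of capacity vs. generation. The capacity factors below are
# the parent comment's assumed numbers (contested a few comments down), used
# only to show the mechanics of the calculation.
HOURS_PER_YEAR = 8760

assumed_capacity_factors = {
    "nuclear": 0.90,      # runs nearly all the time
    "fossil_fuel": 0.60,  # parent's ~1.5x-worse-than-nuclear claim
    "renewables": 0.10,   # parent's claim; cited figures downthread are higher
}

def annual_generation_mwh(nameplate_mw: float, capacity_factor: float) -> float:
    """Expected energy (MWh/year) from a plant of a given nameplate capacity."""
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR

for source, cf in assumed_capacity_factors.items():
    print(f"{source:12s}: 100 MW nameplate -> {annual_generation_mwh(100, cf):>9,.0f} MWh/year")
```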
Irrelevant. The question is what we're investing in. "The U.S" is "rushing to adopt...renewables."
> FF plants run most of the time
"CCGT capacity factor rose from 40% in 2008 to 57% in 2022" [1]. "In the western United States," meanwhile "the capacity value of PV plants can be in the range of 50% to 80%" [2].
> That's how they skew the numbers to make renewables seem viable when they produce a shockingly low amount of actual power
You should tell the folks overwhelmingly choosing to build and finance renewable power plants! They clearly missed this key point; they'll surely be grateful you let them know that their renewable investments don't make sense and that they should have picked nuclear for being cheaper overall.
How did we get to a place where either Cloudflare or AWS having an outage means a large part of the web going down? This centralization is very worrying.
Oddly this centralization allows a complete deferral of blame without you even doing anything: if you’re down, that’s bad. But if you’re down, Spotify is down, social media is down… then “the internet is broken” and you don’t look so bad.
It also reduces your incentive to change: if "the internet is down", people will put down their device and do something else. Even if your web site is up, they'll assume it isn't.
I’m not saying this is a good thing but I’m simply being realistic about why we ended up where we are.
As a user I do care, because I waste so much time on Cloudflare's "prove you are human" blocking page (why do I have to prove it over and over again?), and frequently run into websites blocking me entirely based on some bad IP blacklist used along with Cloudflare.
If you have a site with valuable content, the LLM crawlers hound you to no end. CF is basically a protection racket at this point for many sites. It doesn't even stop the more determined ones, but it keeps some away.
Oh, they're still botnets. We just look the other way because they're useful.
And they're pretty tame as far as computer fraud goes - if my device gets compromised I'd much rather deal with it being used for fake YouTube views than ransomware or a banking trojan.
You can make a little bit of cash on the side letting companies use your bandwidth a bit for proxying. You won’t even notice. $50/month. Times are tough!
Of course the risk here being whatever nefarious or illegal shit is flowing through your pipes, which you consented to and even received consideration for.
Unfortunately the problem isn't just "the internet sucks" it's "the internet sucks, and everyone uses it" - meaning people are not doing stuff offline, and a lot of our lives require us to be online.
Absolutely. They have dramatically worsened the world, with little to no net positive impact. Nearly every positive impact (if not all) has an associated negative that dwarfs it.
LLMs aren't going anywhere, but the world would be a better place if they hadn't been developed. Even if they had more positive impacts, those would not outweigh the massive environmental degradation they are causing or the massive disincentive they created against researching other, more useful forms of AI.
IMO LLMs have been a net negative on society, including my life. But I'm merely pointing out the stark contrast on this website, and the fact that we can choose to live differently.
I am not anti-AI, nor unhappy about how any current LLM works. I'm unhappy about how AI is used and abused to collective detriment. LLM scraper spam leading to increased centralization and wider impacting failures is just one example.
Your position is similar to saying that medical drugs have been a net negative on society, because some drugs have been used and abused to collective detriment (and other negative effects, such as doctors prescribing pills instead of suggesting lifestyle changes). Does it mean that we would be better off without any medical drugs?
My position is that the negatives outweigh the positives, and I don't appreciate your straw man response. It's clear your question is not genuine and you're here to be contrarian.
A solid secondary option is making LLM scraping for training opt-in, and/or compensating sites that were/are scraped for training data. Hell, maybe then you wouldn't knock websites over, which is what incentivizes them to use Cloudflare in the first place.
But that means LLM researchers have to respect other people's IP which hasn't been high on their todo lists as yet.
bUt ThAT dOeSn'T sCaLe - not my fuckin problem chief. If you as an LLM developer are finding your IP banned or you as a web user are sick of doing "prove you're human" challenges, it isn't the website's fault. They're trying to control costs being arbitrarily put onto them by a disinterested 3rd party who feels entitled to their content, which it costs them money to deliver. Blame the asshole scraping sites left and right.
Edit: and you wouldn't even need to go THAT far. I scrape a whole bunch of sites for some tools I built and a homemade news aggregator. My IP has never been flagged because I keep the number of requests down wherever possible, and rate-limit them so it's more in line with human like browsing. Like so much of this could be solved with basic fucking courtesy.
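A minimal sketch of that kind of "basic courtesy" scraping, using only Python's standard library; the user agent string and the delay value are placeholders to adapt:

```python
# Polite-scraper sketch: check robots.txt, identify yourself, and rate-limit
# requests to roughly human browsing speed.
import time
import urllib.robotparser
import urllib.request

USER_AGENT = "my-hobby-aggregator/1.0 (contact: me@example.com)"  # hypothetical
MIN_DELAY_SECONDS = 10  # conservative gap between requests to the same site

def fetch_politely(urls: list[str]) -> list[bytes]:
    pages = []
    robots = {}
    for url in urls:
        base = "/".join(url.split("/", 3)[:3])  # scheme://host
        if base not in robots:
            rp = urllib.robotparser.RobotFileParser(base + "/robots.txt")
            try:
                rp.read()
            except OSError:
                rp = None  # couldn't even reach the site; skip it below
            robots[base] = rp
        rp = robots[base]
        if rp is None or not rp.can_fetch(USER_AGENT, url):
            continue  # the site is unreachable or asked crawlers to stay out
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req, timeout=30) as resp:
            pages.append(resp.read())
        time.sleep(MIN_DELAY_SECONDS)  # keep the request rate human-like
    return pages
```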
Not to speak for the other poster, but... That's not a good-faith question.
Most of the problems on the internet in 2025 aren't because of one particular technology. They're because the modern web was based on gentleman's agreements and handshakes, and since those things have now gotten in the way of exponential profit increases on behalf of a few Stanford dropouts, they're being ignored writ large.
CF being down wouldn't be nearly as big of a deal if their service wasn't one of the main ways to protect against LLM crawlers that blatantly ignore robots.txt and other long-established means to control automated extraction of web content. But, well, it is one of the main ways.
Would it be one of the main ways to protect against LLM web scraping if we investigated one of the LLM startups for what is arguably a violation of the Computer Fraud and Abuse Act, arrested their C-suite, and sent each member to a medium-security federal prison (I don't know, maybe Leavenworth?) for multiple years after a fair trial?
I'm Sure there will be an investigation... By the SEC when the bubble pops and takes the S&P with it. No prison though, probably jobs at the next ponzi scheme
I just realized: why don't they have some "definitely human" third-party cookie that caches your humanness for 24h or so? I'm sure there's a reason; I've heard third-party cookies are less respected now, but can someone chime in on why this doesn't work and save a ton of compute?
Yes, there are several, and the good one (linked below) lets you use the "humanness" token across different websites without them being able to use it as a tracking signal / supercookie. It's very clever.
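For intuition, here is a toy (and deliberately insecure) illustration of the blinding idea that makes such a token unlinkable: the issuer signs a value it never sees. Real schemes use different, hardened primitives; the tiny textbook RSA numbers below are purely illustrative.

```python
# Toy illustration (NOT real cryptography) of blinded tokens: the issuer signs
# a blinded value after the human check, so the token it later verifies cannot
# be linked back to the issuance event.

# Issuer's toy RSA key (p=61, q=53): public (n, e), private d.
n, e, d = 3233, 17, 2753

# Client: pick a token value m and a blinding factor r coprime to n.
m, r = 1234, 7
blinded = (m * pow(r, e, n)) % n          # what the issuer actually sees

# Issuer: signs the blinded value once the human check passes.
blinded_sig = pow(blinded, d, n)

# Client: unblind to get a valid signature on m the issuer has never seen.
sig = (blinded_sig * pow(r, -1, n)) % n

# Any verifier holding the public key can check the token later:
assert pow(sig, e, n) == m % n
print("token verifies without being linkable to issuance:", pow(sig, e, n) == m)
```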
That's a problem caused by bots and spammers and DDoSers, that Cloudflare is trying to alleviate.
And you generally don't have to prove it over and over again unless there's a high-risk signal associated with you, like you're using a VPN or have cookies disabled, etc. Which are great for protecting your privacy, but then obviously privacy means you do have to keep demonstrating you're not a bot.
You might say the problem CloudFlare is causing is lesser than the ones it's solving, but you can't say they're not causing a new, separate problem.
That they're trying counts for brownie points, it's not an excuse to be satisfied with something that still bothers a lot of people. Do better, CloudFlare.
"We have decided to endlessly punish you for using what few tools you have to avoid being exploited online, because it makes our multi-billion dollar business easier. Sucks to be you."
> It has nothing to do with being a multi-billion dollar corp.
Cloudflare is the multi-billion dollar corporation. It has everything to do with that, because they are the primary cause, and their resources and position make them by far the best equipped to solve it.
> Criticizing when there's no other solution isn't very useful, is it?
Of course it is. Without criticism, the growing problem goes unacknowledged and allowed to persist. It should instead be continually called out until it is prioritized, and some of those billions should be spent on researching a solution. (Similarly, a company found to be dumping waste into a river should be held responsible for cleaning up the mess they created. Even if that turns out to be expensive or difficult.)
Expecting a single affected person to solve it for the big corp that caused it is unrealistic. And blaming the victims because they use VPNs or disable cookies is... unhelpful.
CloudFlare is protecting sites from DDoS attacks and out-of-control bots. They're not the ones causing them. If CloudFlare wasn't asking you to prove you're human, many times the site would be down entirely because it couldn't keep up. Or the site would simply shut down because it couldn't afford it.
And this isn't a question of spending some fraction of billions on researching a solution. There fundamentally isn't one, if you understand how the internet works. This is a problem a lot of people would like to solve better, believe me.
So, yes, criticizing Cloudflare here is as useful as criticizing it for not having faster-than-light communication. There's nothing else it can do. It's not "blaming the victims".
I'm going to assume you simply don't have the technical understanding of how the internet works. Because the position you're taking is simply absurd and nonsensical, and there's no way you would write what you're writing otherwise.
Privacy through uniformity, operational security by routine, herd immunity for privacy, traffic normalization, "anonymity set expansion", "nothing to hide" paradox, etc.
I.e., if you use Tor for "normie sites", then the fact that someone can be seen using Tor is no longer a reliable proxy for detecting them trying to see/do something confidential and it becomes harder to identify & target journalists, etc. just because they're using Tor.
Tor Browser has ~1M daily users. Tons of people use it to reach sites that may be blocked in their country, or because they want some privacy when viewing pregnancy- or health-related articles, etc.
In addition to the reasons in sibling comment, this also acts as a filter for low-quality ad-based sites; same reason I close just about any website that gives me a popup about a ToS agreement.
This is essentially the entire IT excuse for going to anything cloud. I see IT engineers all the time justifying it on the grounds that downtime stops being their problem and they're no longer to blame for it. There's zero personal responsibility in trying to preserve service, because it isn't "their problem" anymore. Anyone who thinks the cloud makes service more reliable is absolutely kidding themselves; everyone who made the decision to go that way already knows it isn't true, it just won't be their problem to fix.
If anyone in the industry actually cared about reliability and took personal stake in their system being up, everyone would be back on-prem.
Reliability is not even how the cloud got sold to the C Suite. Good God, when my last company started putting things on Azure back in 2015 stuff would break weekly, usually on Monday mornings.
No, the value proposition was always about saving money, turning CapEx into OpEx. Direct quote from my former CEO maybe 9 years ago: We are getting out of the business of buying servers.
Cloud engineering involves architecting for unexpected events: retry patterns, availability zones, multi-region fail over, that sort of thing.
Now - does it all add up to cost savings? I could not tell you. I have seen some case studies, but I also have been around long enough to take those with a big grain of salt.
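To make one of those patterns concrete, here is a minimal, generic sketch of retry with exponential backoff and jitter; the operation, attempt counts, and delays are placeholders, not anyone's production settings.

```python
# Retry with exponential backoff and full jitter: one of the "architecting for
# unexpected events" patterns mentioned above.
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a flaky operation, backing off exponentially with jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # out of retries; surface the failure
            # Sleep a random amount up to the exponential cap, so many clients
            # retrying at once don't stampede the recovering service.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

# Example (hypothetical): call_with_retries(lambda: fetch_from_replica())
```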
It's amazing how many cybersecurity incidents there are now. Bypassing IT will always backfire spectacularly; IT are the people who stop you from doing something dumb.
The opposite was/is true. If your cloud box can only be used by two people and IT doesn't even know about it, then IT can never be persuaded to hand the keys out to the rest of the company, as they were predisposed to doing.
I saw this stuff too many times, and it is precisely why the cloud exploded in use in about 2010.
One notable example was signing keys for builds for distribution actually. And IT had a habit of handing them out to absolutely everyone. Being able to audit who did the signing was done in spite of IT who could, of course, never be persuaded of the merit of any process they don’t own.
I won't discount your IT can be bad, but also if you're keeping something as core to your security as signing keys somewhere your IT can't audit, you are just as bad. And your IT won't be the ones fired when your keys leak.
That might have been true for some kind of organization, but definitely not for every kind. On the other side, there were start-ups that wanted the elasticity and no commitments. But both sides at least partially liked the "it's not on me anymore" feature.
IMHO it adds up, but only if you are big enough. Netflix level. At that level, you go and dine with Bezos and negotiate a massive discount. For anyone else, I'd genuinely love to see the numbers that prove otherwise.
> There's zero personal responsibility
Unfortunately, this seems to be the unspoken mantra of modern IT management. Nobody wants to be directly accountable for anything, yet everyone wants to have their fingerprints on everything. A paradox of collaboration without ownership.
Cloud providers have formalized these deals actually. If you promise to spend X amount over Y period, you get Z discounts.
And this is not reserved instances, this is an org level pricing deal. Some have been calling it anti-competitive and saying the regulators need to look at the practice.
> IMHO it adds up, but only if you are big enough. Netflix level. At that level, you go and dine with Bezos and negotiate a massive discount. For anyone else, I'd genuinely love to see the numbers that prove otherwise.
It adds up if you're smart about using resources efficiently, at any level, and if you engineer the system to spin up / spin down as customers dictate.
For situations where resources are allocated but are only being utilized a low percentage (even < 50% in some cases), it is not cost effective. All that compute / RAM / disk / network etc. is just sitting there wasted.
I mean in the end it's about making a trade off that makes sense for your business.
If the business can live with a couple of hours downtime per year when "cloud" is down, and they think they can ship faster / have less crew / (insert perceived benefit), then I don't know why that is a problem.
It is a trade-off between convenience and freedom. Netflix vs buying your movies. Spotify vs mp3s. Most tech products have alternatives. But you need to be flexible and adjust your expectations. Most people are not willing to do that
The issue is that real life is not adaptable. Resources and capital are slow.
That's the whole issue with monopolies for example, innit? We envision "ideal free market dynamics" yet in practice everybody just centralizes for efficiency gains.
Right, and my point is that "ideal free market dynamics" conveniently always ignore this failure state that seems to always emerge as a logical consequence of its tenets.
I don't have a better solution, but it's a clear problem. Also, for some reason, more and more people (not you) will praise and attack anyone who doesn't defend state A (ideal equilibrium). Leaving no room to point out state B as a logical consequence of A which requires intervention.
The definition of a monopoly basically resolves to "those companies that don't get pressured to meaningfully compete on price or quality", it's a tautology. If a firm has to compete, it doesn't remain a monopoly. What's the point you're making here?
There absolutely are options but we aren't using them because nobody cares enough about these downsides. bsky is up, with Mastodon you even have choice between tons of servers and setting up your own. Yet, nobody cares enough about the occasional outage to switch. It's such a minor inconvenience that it won't move the needle one bit. If people actually cared, businesses would lose customers and correct the issue.
More like it's time for the pendulum to swing back...
We had very decentralized "internet" with BBSes, AOL, Prodigy, etc.
Then we centralized on AOL (ask anyone over 40 if they remember "AOL Keyword: ACME" plastered all over roadside billboards).
Then we revolted and decentralized across MySpace, Digg, Facebook, Reddit, etc.
Then we centralized on Facebook.
We are in the midst of a second decentralization...
...from an information consumer's perspective. From an internet infrastructure perspective, the trend has been consistently toward more decentralization. Initially, even after everyone moved away from AOL as their sole information source online, they were still accessing all the other sites over their AOL dial-up connection. Eventually, competitors arrived and, since AOL no longer had a monopoly on content, they lost their grip on the infrastructure monopoly.
Later, moving up the stack, the re-centralization around Facebook (and Google) allowed those sources to centralize power in identity management. Today, though, people increasingly only authenticate to Facebook or Google in order to authenticate to some 3rd party site. Eventually, competitors for auth will arrive (or already have ahem passkeys coughcough) and, as no one goes to Facebook anymore anyway, they'll lose grip on identity management.
It's an ebb and flow, but the fundamental capability for decentralization has existed in the technology behind the internet from the beginning. Adoption and acclimatization, however, is a much slower process.
These centralized services do and did solve problems. I'm old enough to remember renting a quarter rack, racking my own server and other infrastructure, and managing all that. That option hasn't gone away, but there are layers of abstraction at work that many people probably haven't and don't want to be exposed to.
Aaand even if we ignore the "benefit" of Cloudflare and AWS outages being blamed on them, rather than you, what does uptime look like for artisanally hosted services on a quarter rack vs your average services on AWS and Cloudflare?
> Businesses and peoples’ livelihoods are online nowadays
What happened to having a business continuity plan? E.g. when your IT system is down, writing down incoming orders manually and filling them into the system when it's restored?
I have a creeping suspicion that people don't care about that, in which case they can't really expect more than to occasionally be forced into some downtime by factors outside of their control.
Either it's important enough to have contingencies in place, or it's not. Downtime will happen either way, no matter how brilliant the engineers working at these large orgs are. It's just that with so much centralization (probably too much) the blast range of any one outage will be really large.
My wife and I own a small theatre. We can process orders in-store just fine. Our customers can even avoid online processing fees if they purchase in-store. And if our POS system went down, we could absolutely fall back to pencil and paper.
Doesn't change the fact that 99% of our ticket sales happen online. People will even come in to the theatre to check us out (we're magicians and it's a small magic shop + magic-themed theatre - so people are curious and we get a lot of foot traffic) but, despite being in the store, despite being able to buy tickets right then and there and despite the fact that it would cost less to do so ... they invariably take a flyer and scan the QR code and buy online.
We might be kind of niche, since events usually sell to groups of people and it's rare that someone decides to attend an event by themselves right there on the spot. So that undoubtedly explains why people behave like this - they're texting friends and trying to see who is interested in going. But I'm still bringing us up as an example to illustrate just how "online" people are these days. Being online allows you to take a step back, read the reviews, price shop, order later and have things delivered to your house once you've decided to commit to purchasing. That's just normal these days for so many businesses and their customers.
I’m not so sure about that. The pre-internet age had a lot of forced “mental health breaks”. Phone lines went down. Mail was delayed. Trains stalled. Businesses and livelihoods continued to thrive.
The idea that we absolutely need 24/7 productivity is a new one and I’m not that convinced by it. Obviously there are some scenarios that need constant connectivity but those are more about safety (we don’t want the traffic lights to stop working everywhere) than profit.
Just want to correct the record here, as someone who worked at a local CLEC where we took availability quite seriously before the age of the self-defeatist software engineer.
Phone lines absolutely did not go down. Physical POTS lines (yes, even the cheap residential ones) were required to have around 5 9s of availability, or approximately 5 minutes of downtime per year. And that's for a physical medium affected by weather, natural disasters, accidents, and physical maintenance. If we or the LEC did not meet those targets, contracts would be breached and, worst case, the government would get involved.
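For reference, the arithmetic behind those figures, from availability target to allowed downtime per year:

```python
# Allowed downtime per year for a given number of "nines" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** -nines           # e.g. 5 nines -> 0.99999
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.3%}): ~{downtime_min:,.1f} min/year of downtime")
```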
Okay, as someone who also worked in that era I’ll be pedantic: internal phone systems went down. I experienced it multiple times so I certainly know it happened.
FWIW nothing I said was “self defeatist”, I made it clear I don’t think it’s a good thing. It’s just a simple financial reality that the additional redundancy isn’t worth the extra cost in a lot of situations.
Most businesses are totally fine if they have a few hours of downtime. More uptime is better, but treating an outage like a disaster or an e-commerce site like a power plant is more about software engineer egos than business or customer needs.
If AWS is down, most businesses on AWS are also down, and it’s mostly fine for those businesses.
I’ve worked in cloud consulting for a little over five years. I can say 95% of the time when I discuss the cost and complexity tradeoffs of their websites being down vs going multi region or god forbid “multi cloud”, they shrug and say, it will be fine if they are down for a couple of hours.
This was the same when I was doing consulting inside (ie large companies willing to pay the premium cost of AWS ProServe consultants) and outside working at 3rd party companies.
It's better to have diverse, imperfect infrastructure, than one form of infra that goes down with devastating results.
I'm being semi-flippant but people do need to cope with an internet that is less than 100% reliable. As the youth like to say, you need to touch grass
Being less flippant: an economy that is completely reliant on the internet is one vulnerable to cyberattacks, malware, catastrophic hardware loss
It also protects us from the malfeasance or incompetence of actors like Google (who are great stewards of internet infrastructure... until it's no longer in their interests)
Wealthy, investment-bloated software companies will be fine.
Smaller companies that provide real-world services or goods to make a much more meagre living, and that rely on some of the services sold to them by said software companies, will be impacted much more greatly.
Losing a day or two of sales, for someone who relies on making sales every day, can be a growing hardship.
This doesn’t just impact developers. It’s exactly this kind of myopic thinking that leads to scenarios like mass outages.
My friend, that is just not reality. And it's not just e-commerce t shirt slingers I am talking about here.
You have to realize that when software companies tell the world it should rely on their works, the world will do so. And once that occurs, the responsibility is all on the software companies to meet the expectations they built up in people!
It's mad that this industry works so hard to claim the trust of millions of people, then shirks it as soon as it's convenient.
> But if you’re down, Spotify is down, social media is down… then “the internet is broken” and you don’t look so bad.
In my direct experience, this isn't true if you're running something even vaguely mission-critical for your customers. Your customer's workers just know that they can't do their job for the day, and your customer's management just knows that the solution they shepherded through their organization is failing.
It's really quite funny: many of the ACTUALLY vital systems for running the world as we know it are running on very different software. Cloudflare appears to have a much higher % of non-vital systems running on it than, say, something like Akamai.
If Akamai went down I have a feeling you'd see a whole lot more real-life chaos.
I also find the sentiment of "well, we use a third party, so blame them" completely baffling.
if you run anything even remotely mission critical, not having a plan B which is executable and of which you are in control (and a plan C) will make you look completely incompetent.
There are very, very few events which some people who run mission critical systems accept as force majeure. Most of those are of the scale "national emergency" or worse.
>There are very, very few events which some people who run mission critical systems accept as force majeure. Most of those are of the scale "national emergency" or worse.
And why should anyone be surprised? It's been about 80 years since "The buck stops here."[0] had any real relevance. And more's the pity.
100% this. While in my professional capacity I'm all in for reliability and redundancy, as an individual, I quite like these situations when it's obvious that I won't be getting any work done and it's out of my control, so I can go run some errands to or read a book, or just finish early.
Which "user" are you referring to? Cloudflare users or end product users?
End product users have no power, they can complain to support and maybe get a free month of service, but the 0.1% of customers that do that aren't going to turn the tide and have anything change.
Engineering teams using these services also get "covered" by them - they can finger point and say "everyone else was down too."
Admittedly when I wrote that I was thinking about the recent AWS outage. Anecdotally, I asked friends and family about their experience and they assumed the internet was down. Almost everything at my work runs on Google cloud so we were still running but we observed a notable dip in traffic during the outage all the same.
> it is still bad
No doubt. But there’s a calculation to make, is it bad enough to spend the extra money on mitigations, to hire extra devops folks to manage it all… and in the majority of end user facing cases the answer is no, it isn’t.
Where I've worked, when we've been in the cloud, I've always promoted just running in one AZ; I run my own things in one Hetzner DC (hel1). I've done hybrid cloud as well, and in that case we only had one AZ for the on-premise stuff anyway (plus offsite backup).
That one time an AZ goes down and your infra successfully fails over to the other two isn't worth it for a lot of companies at my scale; ops consultants seem to be chasing high cloud spend to justify their own high cost. I also factor in that I live in Sweden, where infrastructure outages are exceptionally rare.
Ofc it depends on what kind of company you are and what you're providing.
Eh? It's because they are offering a service too good to refuse.
The internet these days is fucking dangerous and murderous as hell. We need Cloudflare just to keep services up against the deluge of AI data scrapers and other garbage.
More like "don't have choice". It's not like service provider gonna go to competition, because before you switch, it will be back.
Frankly it's a blessing, always being able to blame the cloud that management forced the company to migrate to in order to be "cheaper" (which half of the time turns out to be false anyway).
> It also reduces your incentive to change, if “the internet is down” people will put down their device and do something else. Even if your web site is up they’ll assume it isn’t.
I agree. When people talk about the enshittification of the internet, Cloudflare plays a significant role.
Many reasons, but DDoS protection has massive network effects. The more customers you have (and therefore bandwidth provisioned), the easier it is to hold up against a DDoS, as a DDoS usually targets just one customer.
So there are massive economies of scale. A small CDN with (say) 10,000 customers and 10 Mbit/s provisioned per customer can handle a 100 Gbit/s DDoS (way too simplistic, but hopefully you get the idea) - way too small.
If you have the same traffic provisioned on average per customer and have 1 million customers, you can handle a DDoS 100x the size.
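A back-of-envelope version of that scale argument, using the same deliberately simplistic assumptions as above (provisioned bandwidth scales with customer count, and an attack targets a single customer):

```python
# Aggregate absorbable attack size under the simplistic per-customer model.
PER_CUSTOMER_MBPS = 10

def aggregate_capacity_gbps(customers: int) -> float:
    return customers * PER_CUSTOMER_MBPS / 1000

small_cdn = aggregate_capacity_gbps(10_000)      # ~100 Gbit/s total headroom
big_cdn = aggregate_capacity_gbps(1_000_000)     # ~10,000 Gbit/s total headroom
print(f"small CDN can absorb ~{small_cdn:,.0f} Gbit/s, big CDN ~{big_cdn:,.0f} Gbit/s")
```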
Only way to compete with this is to massively overprovision bandwidth per customer (which is expensive, as those customers won't pay more just for you to have more redundancy because you are smaller).
In a way (like many things in infrastructure) CDNs are natural monopolies. The bigger you get -> the more bandwidth and PoP you can have -> more attractive to more customers (this repeats over and over).
It was probably very astute of Cloudflare to realise that offering such a generous free plan was a key step in this.
In a CDN, customers consume bandwidth; they do not contribute it. If Cloudflare adds 1 million free customers, they do not magically acquire 1 million extra pipes to the internet backbone. They acquire 1 million new liabilities that require more infrastructure investment.
All you are doing is echoing their pitch book. Of course they want to skim their share of the pie.
I imagine every single customer is provisioned based on some peak expected typical traffic and that's what they base their capital investment in bandwidth on.
However most customers are rarely at their peak, this gives you tremendous spare capacity to use to eat DDoS attacks, assuming that the attacks are uncorrelated. This gives you huge amounts of capacity that's frequently doing nothing. Cloudflare advertise this spare capacity as "DDoS protection."
I suppose in theory it might be possible to massively optimise utilisation of your links, but that would be at the cost of DDoS protection and might not improve your margin very meaningfully, especially if customers care a lot about being online.
> In a CDN, customers consume bandwidth; they do not contribute it
They contribute money which buys infrastructure.
> If Cloudflare adds 1 million free customers,
Is the free tier really customers? Regardless, most of them are so small that it doesn't cost Cloudflare much anyway. The infrastructure is already there. It's worth it to them for the goodwill it generates, which leads to future paying customers. It probably also gives them visibility into what is good vs bad traffic.
1 million small sites could very well cost less to cloudflare than 1 big site.
OP is saying it's cheaper overall for a 10 million customer company to add infrastructure for 1 million more than it is for a 10,000 customer company to add infrastructure for 1000 more people.
If you're looking at this as a "share of the pie", it's probably not going to make sense. The industry is not zero sum.
You aren't understanding economy of scale, and peak to average ratios.
The same reason I use cloud compute -- elastic infrastructure because I can't afford the peaks -- is the same reason large service providers "work".
It's funny how we always focus on Cloudflare, but all cloud providers have this same concentration downside. I think it's because Cloudflare loves to talk out of both sides of their mouth.
The "economies of scale" defense of Cloudflare ignores a fundamental reality: 23.8 million websites run on Cloudflare's free tier versus only 210,000 paying customers or so. Free users are not a strategic asset. They are an uncompensated cost, full stop. Cloudflare doesn't absorb this loss out of altruism; they monetize it by building AI bot-detection systems, charging for bot mitigation, and extracting threat intelligence data. Today's outage was caused by a bug in Cloudflare's service to combat bots.
That's AI bots, BTW. Bots like Playwright or Crawl4AI, which provide a useful service to individuals using agentic AI. Cloudflare is hostile to these types of users, even though they likely cost websites nothing to support well.
The "scale saves money" argument commits a critical error: it counts only the benefits of concentration while externally distributing the costs.
Yes, economies of scale exist. But Cloudflare's scale creates catastrophic systemic risk that individual companies using cloud compute never would. An estimated $5-15 billion was lost for every hour of the outage according to Tom's Guide. That cost didn't disappear. It was transferred to millions of websites, businesses, and users who had zero choice in the matter.
Again, corporations shitting on free users. It's a bad habit and a dark pattern.
Even worse, were you hoping to call an Uber this morning for your $5K vacation? Good luck.
This is worse than pure economic inefficiency. Cloudflare operates as an authorized man-in-the-middle to 20% of the internet, decrypting and inspecting traffic flows. When their systems fail, not due to attacks, but to internal bugs in their monetization systems, they don't just lose uptime.
They create a security vulnerability where encrypted connections briefly lose their encryption guarantee. They've done this before (Cloudbleed), and they'll do it again. Stop pretending to have rational arguments with irrational future outcomes.
The deeper problem: compute, storage, and networking are cheap. The "we need Cloudflare's scale for DDoS protection" argument is a circular justification for the very concentration that makes DDoS attractive in the first place. In a fragmented internet with 10 CDNs, a successful DDoS on one affects 10% of users. In a Cloudflare-dependent internet, a DDoS, or a bug, affects 50%, if Cloudflare is unable to mitigate (or DDoSs themselves).
Cloudflare has inserted themselves as an unremovable chokepoint. Their business model depends on staying that chokepoint. Their argument for why they must stay a chokepoint is self-reinforcing. And every outage proves the model is rotten.
hang on, you're reading some kind of cloudflare advocacy in my post. apologies if i implied that. i don't like to come off as a crank is all. IMO cloudflare is an evil that needs to be defeated. i'm just explaining how their business model "works" and why massive economy of scale matters, to support the GP poster.
i don't even think they are evil because of the concentration of power, that's just a problematic issue. the evil part is they convince themselves they aren't the bad guys. that they are saving us from ourselves. that the things they do are net positives, or even absolute positives. like the whole "let's defend the internet from AI crawlers" position they appointed themselves sheriff on, that i think you're referencing. it's an extremely dangerous position we've allowed them to occupy.
> they monetize it
yes, and they can't do this without the scale.
> scale saves money
any company, uber for example, can design their infra to not rely on a sole provider. but why? their customers aren't going to leave in droves when a pretty reliable provider has the occasional hiccup. so it's not worth the cost, so why shouldn't they externalize it? uber isn't in business to make the internet a better place. so yes, scale does save money. you're arguing something at a higher principle than how architectural decisions are made.
i'm not defending economy of scale as a necessary evil. i'm just backing up that it's how cloudflare is built, and that it is in fact useful to customers.
In my opinion, DDoS is possible only because there is no network protocol for a host to control traffic filtering on upstream providers (deny traffic from certain subnets or countries). If such a protocol existed, everybody would prefer to write their own systems rather than rely on a harmful monopoly.
The recent Azure DDoS used 500k botnet IPs. These will have been widely distributed across subnets and countries, so your blocking approach would not have been an effective mitigation.
Identifying and dynamically blocking the 500k offending IPs would certainly be possible technically -- 500k /32s is not a hard filtering problem -- but I seriously question the operational ability of internet providers to perform such granular blocking in real-time against dynamic targets.
I also have concerns that automated blocking protocols would be widely abused by bad actors who are able to engineer their way into the network at a carrier level (i.e. certain governments).
Is this really true? What device in the network are you loading that filter into? Is it even capable of handling the packet throughput of that many clients while also handling such a large block list?
But this is not one subnet. It is a large number of IPs distributed across a bunch of providers, and handled possibly by dozens if not hundreds of routers along the way. Each of these routers won't have trouble blocking a dozen or two IPs that would be currently involved in a DDoS attack.
But this would require a service like DNSBL / RBL which email providers use. Mutually trusting big players would exchange lists of IPs currently involved in DDoS attacks, and block them way downstream in their networks, a few hops from the originating machines. They could even notify the affected customers.
But this would require a lot of work to build, and a serious amount of care to operate correctly and efficiently. ISPs don't seem to have a monetary incentive to do that.
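Purely as a hypothetical sketch of that DNSBL/RBL-style idea, not any existing protocol: one provider publishes an advisory of attacking IPs, and a trusting peer decides what to filter. All field names are invented for illustration.

```python
# Hypothetical blocklist-exchange sketch between mutually trusting providers.
import ipaddress
import json
import time

def make_block_advisory(reporter: str, target: str, attacking_ips: list[str],
                        ttl_seconds: int = 900) -> str:
    """Build an advisory that peer providers could consume (signing omitted)."""
    advisory = {
        "reporter": reporter,                   # who observed the attack
        "target": target,                       # victim prefix or hostname
        "sources": sorted(set(attacking_ips)),  # offending IPs, deduplicated
        "expires_at": int(time.time()) + ttl_seconds,  # blocks auto-expire
    }
    return json.dumps(advisory)

def ips_to_filter(advisory_json: str, trusted_reporters: set[str]) -> set[str]:
    """Decide which source IPs a receiving provider would filter upstream."""
    advisory = json.loads(advisory_json)
    if advisory["reporter"] not in trusted_reporters:
        return set()            # only act on advisories from trusted peers
    if advisory["expires_at"] < time.time():
        return set()            # stale advisory, ignore
    valid = set()
    for ip in advisory["sources"]:
        try:
            ipaddress.ip_address(ip)   # drop malformed entries
            valid.add(ip)
        except ValueError:
            continue
    return valid
```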
It also completely overlooks the fact that some of the traffic has spoofed source IP addresses and a bad actor could use automated black holing to knock a legitimate site offline.
That already exists… it's part of Cloudflare's and other vendors' mitigation strategy. There's absolutely no chance ISPs are going to extend that functionality to random individuals on the internet.
What traffic would you request the upstream providers to block if getting hit by Aisuru? Considering the botnet consists of residential routers, those are the same networks your users will be originating from. Sure, in best case, if your site is very regional, you can just block all traffic outside your country - but most services don't have this luxury.
Blocking individual IP addresses? Sure, but consider that before your service detects enough anomalous traffic from one particular IP and is able to send the request to block upstream, your service will already be down from the aggregate traffic. Even a "slow" ddos with <10 packets per second from one source is enough to saturate your 10Gbps link if the attacker has a million machines to originate traffic from.
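Rough numbers behind that claim, with an assumed average packet size:

```python
# A million sources at 10 small packets per second each, vs a 10 Gbit/s link.
sources = 1_000_000
pps_per_source = 10
packet_bytes = 500            # assumed average packet size, illustrative

aggregate_bps = sources * pps_per_source * packet_bytes * 8
print(f"aggregate ≈ {aggregate_bps / 1e9:.0f} Gbit/s vs a 10 Gbit/s link")
```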
In many cases the infected devices are in developing countries where none of your customers is. Many sites are regional, for example, a medium business operating within one country, or even city.
And even if the attack comes from your country, it is better to block part of the customers and figure out what to do next rather than have your site down.
Could it not be argued that ISPs should be forced to block users with vulnerable devices?
They have all the data on what CPE a user has, can send a letter and email with a deadline, and cut them off after it expires and the router has not been updated/is still exposed to the wide internet.
My dad’s small town ISP called him to say his household connection recently started saturating the link 24/7 and to look into whether a device had been compromised.
(Turns out some raspi reseller shipped a product with empty uname/password)
While a cute story, how do you scale that? And what about all the users that would be incapable of troubleshooting it, like if their laptop, roku, or smart lightbulb were compromised? They just lose internet?
And what about a botnet that doesn’t saturate your connection, how does your ISP even know? They get full access to your traffic for heuristics? What if it’s just one curl request per N seconds?
> While a cute story, how do you scale that? And what about all the users that would be incapable of troubleshooting it, like if their laptop, roku, or smart lightbulb were compromised? They just lose internet?
Uh, yes. Exactly and plainly that. We also go and suspend people's driver licenses or at the very least seriously fine them if they misbehave on the road, including driving around with unsafe cars.
Access to the Internet should be a privilege, not a right. Maybe the resulting anger from widespread crackdowns would be enough of a push for legislators to demand better security from device vendors.
> And what about a botnet that doesn’t saturate your connection, how does your ISP even know?
In ye olde days providers had (to have to) abuse@ mailboxes. Credible evidence of malicious behavior reported to these did lead to customers getting told to clean up shop or else.
Xfinity did exactly this to me a few years ago. I wasn't compromised but tried running a blockchain node on my machine. The connection to the whole house was blocked off until I stopped it.
> There is no network protocol for a host to control traffic filtering on upstream providers (deny traffic from certain subnets or countries).
There is no network protocol per se, but there are commercial solutions like Fortinet that can block countries, IIRC. Note that it's only IP-range based, so it's not worth a lot.
Yeah, I went to HN after the third web page didn't work. I am not just worried about the single point of failure, I am much more worried about this centralization eventually shaping the future standards of the web and making it de facto impossible to self-host anything.
Well that and the fact that when 99% goes through a central party, then that central party will be very interesting for authoritarian governments to apply sweeping censorship rules to.
It is already nearly impossible, or very expensive, in my country to get a public IP address (even IPv6) that you could host on. The world is heavily moving towards being centrally dependent on these big cloud providers.
What part of the world has any IPv6 limitations? In the USA, an ISP will give you a /48 from their /32 if you have any colo arrangement, without even a blink. That gives you 2^16 networks with an essentially infinite number of hosts on each network. Zero additional charge.
It is not as bad as Cloudflare or AWS because certificates will not expire the instant there is an outage, but consider that:
- It serves about 2/3 of all websites
- TLS is becoming more and more critical over time. If certificates fail, the web may as well be down
- Certificate lifetimes are becoming shorter and shorter: 90 days is the norm now, Let's Encrypt is now considering 6 days, and 47 days is being planned as an industry-wide maximum
- An outage is one thing, but should a compromise happen, that would be even more catastrophic
Let's Encrypt is a good guy now, but remember that Google used to be a good guy in the 2000s too!
(Disclaimer: I am tech lead of Let's Encrypt software engineering)
I'm also concerned about LE being a single point of failure for the internet! I really wish there were other free and open CAs out there. Our goal is to encrypt the web, not to perpetuate ourselves.
That said, I'm not sure the line of reasoning here really holds up? There's a big difference between this three-hour outage and the multi-day outage that would be necessary to prevent certificate renewal, even with 6-day certs. And there's an even bigger difference between this sort of network disruption and the kind of compromise that would be necessary to take LE out permanently.
So while yes, I share your fear about the internet-wide impact of total Let's Encrypt collapse, I don't think that these situations are particularly analogous.
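A rough way to see that difference: an issuer outage only breaks things once it outlasts the renewal buffer. The sketch below assumes clients renew at roughly two-thirds of the certificate lifetime, a common default but not universal.

```python
# Hours an issuer can be down before the earliest-renewing certs start expiring,
# assuming renewal at ~2/3 of lifetime (an illustrative, not universal, default).
def outage_hours_before_breakage(lifetime_days: float, renew_fraction: float = 2 / 3) -> float:
    buffer_days = lifetime_days * (1 - renew_fraction)
    return buffer_days * 24

for lifetime in (90, 47, 6):
    print(f"{lifetime:>2}-day certs: ~{outage_hours_before_breakage(lifetime):,.0f} h of issuer downtime before expiry")
```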
Agree, I've thought about this one too. The history of SSL/TLS certs is pretty hacky anyway, in my opinion. The main problem they solve really should have been addressed at the network layer, with ubiquitous IPsec and key distribution via DNS. Most users just blindly trust whatever root CAs ship with their browser or OS, and the ecosystem has been full of implementation and operational issues.
Let’s Encrypt is great at making the existing system less painful, and there are a few alternatives like ZeroSSL, but all of this automation is basically a pile of workarounds on top of a fundamentally inappropriate design.
There's not really a way around the initial trust problem with consumer oriented certs though. Yours could reduce the number of initially trusted down to one I think but not any further.
It's a shame DANE never took off.
If we had actually gotten around to running a trusted, DNSSEC-based DNS system and allowed domains to publish their own certificates via DANE, we would be in a far more resilient setup than we are now.
But DNSSEC was hard according to some, and now we are running a massive SPOF in terms of TLS certificates.
It didn't "not take off" --- it didn't work. You couldn't run it on the actual Internet with actual users, at least not without having a fallback path that attackers could trigger that meant DANE was really just yet another CA, only this one you can't detect misbehavior or kill it when it does misbehave.
Ever since the AWS craze started a decade ago, developers have moved away from dedicated servers (which are actually cheaper, go figure), and that is causing all this mess.
It's genuinely insane that many companies design a great number of fallbacks at the software level, but almost no thought goes into the hardware/infrastructure level; common sense dictates that you should never host everything on a single provider.
I tried as hard as I could to stay self hosted (and my backend is, still), but getting constant DDoS attacks and not having the time to deal with fighting them 2-3x a month was what ultimately forced me to Cloudflare. It's still worse than before even with their layers of protection, and now I get to watch my site be down a while, with no ability to switch DNS to point back to my own proxy layer, since CF is down :/
This is wild. Was your website somehow controversial? I've been running many different websites for over 30 years now, and have never been the target of a DDoS. The closest I've seen was when one website had a blind time-based SQL injection vulnerability and the attacker was abusing it; all the SLEEP() injected into the database brought the server to a crawl. But that was just one attacker from a handful of IPs, hardly what I would call a DDoS.
I made the mistake of telling people it was hosted on a Pi cluster in a YouTube video a couple years ago, and asked nobody to try DDoSing it. I was a bit more naive thinking the YouTube viewer community was more like HN where people may joke about it, but nobody would actually do it.
I was wrong, and ever since I've dealt with a targeted attack (which was evolving as I added more CF firewall rules). At this point it's taken care of, but only because I have most things completely blocked at the CF firewall layer.
Until I changed jobs recently, I spent the past 8 years working in an area of tech that many people on places like HN and Reddit think is a horrific waste of effort (DRM and content security for a streaming company).
The idea that if companies like my former employer stopped doing DRM their audience would embrace it is pure idealism. Bitter experience shows that enough people will do bad things just for the lulz that you need to cover your ass.
My home lab will never have an open port; I'll always put things behind a CDN or zero-trust system, and even then...
FWIW, it's worthwhile just for educational reasons to look at abuseipdb.com. Quite revealing.
It wasn't really a comment on the tech of DRM but of the business threats that require its use.
That being said, streaming content security is more than just DRM and DRM is more than just copy protection. There's a whole suite of tools inside DRM systems to manage content access at different levels and rulesets that can be applied for different situations. It's still fundamentally controlling an encrypted bitstream however. But I've implemented a great deal more than just DRM in order to build a better content security platform. Transit level controls, advanced token schemes, visible/invisible watermarking, threat/intrusion detection and abuse detection, there's quite a bit that can be implemented.
Jeff, I think the reason is that the YouTube community is more mainstream. I consider you a really nice YouTuber, but even that might attract some bad-faith actors, just because of how mainstream YouTube is compared to HN, which is more niche overall.
(Also, congrats on 1 million subscribers! I know you must be tired of hearing it, but have a nice day, Jeff! Your videos are awesome!)
When I was younger and living in military dorms, I put an old throwaway laptop hosting a simple website via Apache on the internet. Every time I checked the log it'd be full of so many random, wild spurts of attacks (granted, I had basically 0 legit traffic).
I think people sometimes mistake legitimate traffic spikes for DDOS attacks. My blog has the former, but no site I have ever hosted has seen the latter.
With the state of constant attack from AI scrapers and DDOS bots, you pretty much need to have a CDN from someone now, if you have a serious business service. The poor guys with single prem boxes with static HTML can /maybe/ weather some of this storm alone but not everything.
This is the sad reality behind it. My websites would be constantly down because of AI scrapers. If anyone knows a good alternative, that doesn't cost an arm and a leg I am very open to hear!
I self-hosted on one of the company's servers back in the late 90s. Hard drive crashes (and a hack once, through an Apache bug) had our services (http, pop, smtp, nfs, smb, etc.) down for at least 2-3 days (full reinstall, reconfiguration, etc.).
Then, with regular VPSs I also had systems down for 1-2 days. Just last week the company that hosts NextCloud for us was down the whole weekend (from Friday evening) and we couldn’t get their attention until Monday.
So far these huge outages that last 2-5 hours are still lower impact for me, and require me to take less action.
I like the idea of having my own rack in a data center somewhere (or sharing the rack, whatever) but even a tiny cost is still more than free. And even then, that data center will also have outages, with none of the benefits of a Cloudflare Pages, GitHub Pages, etc.
> developers have gone away from Dedicated servers (which are actually cheaper, go figure)
It depends on how you calculate your cost. If you only include the physical infrastructure, having a dedicated server is cheaper. But with a dedicated server you lose a lot of flexibility. Need more resources? With EC2 you just scale up; with a dedicated server there is a lot more work involved.
Do you want a "production-ready" database? With AWS you can just click a few buttons and have an RDS instance ready to use. To roll out your own PG installation you need someone with a lot of knowledge (how to configure replication? backups? updates? ...).
So if you include salaries in the calculation the result changes a lot. And even if you already have some experts in your payroll by putting them to work in deploying a PG instance you won't be able to use them to build other things that may generate more value to you business than the premium you pay to AWS.
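A toy total-cost-of-ownership comparison along those lines; every number below is a made-up placeholder, and the only point is that the result flips depending on how much ops time you charge to each option.

```python
# Toy TCO comparison: infrastructure plus the people-time needed to run it.
# All figures are illustrative placeholders; plug in your own.
def yearly_cost(infra_per_month: float, ops_hours_per_month: float,
                loaded_hourly_rate: float) -> float:
    return 12 * (infra_per_month + ops_hours_per_month * loaded_hourly_rate)

dedicated = yearly_cost(infra_per_month=200,  ops_hours_per_month=20, loaded_hourly_rate=100)
managed   = yearly_cost(infra_per_month=1000, ops_hours_per_month=2,  loaded_hourly_rate=100)
print(f"dedicated server ≈ ${dedicated:,.0f}/yr, managed service ≈ ${managed:,.0f}/yr")
```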
Cloud hosters are that hardware fallback. They started out offering better redundancy and scaling than your homemade breadbox. But it seems they lost something along the way, and now we have this.
Maintenance cost is the main issue for on-prem infra. Nowadays add things like DDoS protection and/or scraping protection, which can require a dedicated team, or force your company to rely on some library or open-source project that is not guaranteed to be maintained forever (unless you give them support, which I believe in)... Yeah, I can understand why companies shift off of on-prem nowadays.
... dedis are cheaper if you are right-sized. If you are wrong-sized they just plain crash, and you may or may not be able to afford the upgrade.
I was at Softlayer before I was at AWS, and what catalyzed the move was the time I needed to add another hard drive to a system and somehow they screwed it up. I couldn't put a trouble ticket in to get it fixed because my database record in their trouble ticket system was corrupted. The next day I moved my stuff to AWS, and the day after that they had a top sales guy talk to me to try to get me to stay, but it was too late.
Technically, a multi-node cluster with failover (or full-on active-active) will have far higher uptime than a single node.
Practically, getting a multi-node cluster (for any non-trivial workload) to work right, reliably, and to fail over in every case is far more work and far more code (that can have more bugs), and even if you do everything right and test what you can, unexpected stuff can still kill it. Like recently we had an uncorrectable memory error which just happened to hit the ceph daemon in such a way that one of the OSDs misbehaved and bogged down the entire cluster...
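The "technically far higher uptime" part in numbers: independent nodes in parallel multiply their downtime probabilities, which is exactly the independence assumption that correlated failures like that memory error break.

```python
# Availability of N independent nodes in parallel (per-node figure illustrative).
def parallel_availability(node_availability: float, nodes: int) -> float:
    return 1 - (1 - node_availability) ** nodes

single = 0.999  # illustrative per-node availability
print(f"1 node: {single:.4%}, 2 nodes: {parallel_availability(single, 2):.4%}, "
      f"3 nodes: {parallel_availability(single, 3):.4%}")
```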
You jest, but this actually does exist. Multiple CDNs sell multi-CDN load balancing (divide traffic between 2+ CDNs per variously-complicated specifications, with failover) as a value add feature, and IIRC there is at least one company for which this is the marquee feature. It's also relatively doable in-house as these things go.
As someone who has worked for a CDN for over a decade, this is what most big customers do. Under normal circumstances, they send portions of traffic to different CDNs, usually based on cost (and or performance in various regions). When an issue happens, they will pull traffic from the problem CDN.
Of course, if a big incident happens for a big CDN, there might not be enough latent capacity in the other CDNs to take all the traffic. CDNs are a cutthroat business, with small margins, so there usually isn’t a TON of unused capacity laying around.
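A minimal sketch of that multi-CDN pattern: split traffic by weight (cost/performance) and pull a provider out of rotation when health checks fail. The provider names and weights below are illustrative.

```python
# Weighted multi-CDN selection with health-check-based failover.
import random

cdn_weights = {"cdn_a": 0.6, "cdn_b": 0.3, "cdn_c": 0.1}   # hypothetical providers
healthy = {"cdn_a": True, "cdn_b": True, "cdn_c": True}     # fed by health checks

def pick_cdn() -> str:
    """Weighted random choice among currently healthy CDNs."""
    candidates = {name: w for name, w in cdn_weights.items() if healthy[name]}
    if not candidates:
        raise RuntimeError("no healthy CDN available")
    names, weights = zip(*candidates.items())
    return random.choices(names, weights=weights, k=1)[0]

# If cdn_a has an incident, mark it unhealthy and the remaining providers
# absorb its share (capacity permitting, as noted above):
healthy["cdn_a"] = False
print(pick_cdn())
```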
This might sound crazy as a software engineer, but I actually like the occasional "snow day" where everything goes down. It's healthy for us to all disconnect from the internet for a bit. The centralization unintentionally helps facilitate that. At least, that's my glass half full perspective.
I can understand that sentiment. Just don't lose sight of the impact it can have on every day people. My wife and I own a small theatre and we sell tickets through Eventbrite. It's not my full time job but it is hers. Eventbrite sent out an email this morning letting us know that they are impacted by the outage. Our event page appears to be working but I do wonder if it's impacting ticket sales for this weekend's shows.
So while us in tech might like a "snow day", there are millions of small businesses and people trying to go about their day to day lives who get cut off because of someone else's fuck-ups when this happens.
Absolutely solid point; there are a couple of apps I use daily for productivity, chores, even alarm scheduling, where on the free versions the ads wouldn't load so I couldn't use them (though some have been updated already). Made me realize we're kind of like cyborgs, relying on technology integrated so deeply into our lives that all it takes is an EMP blast, like a monopolistic service going down, to bring -us- down until we take a breath and learn how to walk again. Wild time.
> This might sound crazy as a software engineer, but I actually like the occasional "snow day" where everything goes down
As a software engineer, I get it. As a CTO, I spent this morning triaging with my devops AI (actual Indian) to find some workaround (we found one) while our CEO was doing damage control with customers (non-technical field) who were angry that we were down and were losing business by the minute.
Sometimes I miss not having a direct stake in the success of the business.
I'm guessing you're employed and your salary is guaranteed regardless. Would you have the same outlook if you were the self-employed founder of an online business and every minute of outage was costing you money?
If you're an event organizer whose big event is in two days, for example, then every minute your website is down translates to people not paying to attend your paid event. Bonus points because, as event managers know, people often wait until two days before the event to finally commit. Bonus points if you knew this and therefore ran a costly email campaign just before the outage, a campaign that is now sitting at a near-0% click rate.
For businesses whose profit margins are already slim, which is most traditional businesses trading online, making less money than usual will put them into the red. And even for those still in profit, making less money than usual means having less money to cover the expenses they normally do, expenses that are predicated on making a certain amount of revenue.
You're living in a bubble. I know enough people who live paycheck to paycheck and always have exactly $0 in their pocket before the end of the month. It's pretty normal in some parts of the world, maybe even most of them.
That's a weirdly flippant response to what's a serious issue, but I'll give it the courtesy of a reply anyway - maybe not, but a business not making enough profit might go under, or they might only have to fire someone to prevent that from happening.
Any brick-and-mortar store can have an outage, sometimes for multiple days, when the streets get repaved or there's a utility outage. You are white-knighting for imaginary victims of imaginary tragedies. That's weird.
You can turn any leap of logic into chaos theory. Hell, maybe a fully occupied orphanage will burn down because of some percolation of events that wouldn't have occurred had CloudFlare kept running. Maybe the next genocidal leader was conceived at that precise moment because CloudFlare was down and his parents used the time to make whoopee?
It costs a lot of money to move, you don't know if the alternative will be any better, and if it affects a lot of companies then it's nobody's fault. "Nobody ever got fired for buying Cloudflare/AWS" as they say.
It's just that customers are more understanding when they see their Netflix not working either; otherwise they just think you're less professional. Try talking to customers after an outage and you will see.
It's not just that, it's the creation of a sort of status symbol, or at least a symbol of normality.
There was a point (maybe there still is) where not having a Netflix subscription was seen as 'strange'.
If that's the case in your social circles -- and these kinds of social things bother you -- you're not going to cancel the subscription over bad service until doing so becomes a socially accepted norm.
Except, y'know, where people's lives and livelihoods depend on access to information or on being able to do things at an exact time. AWS and Cloudflare are disqualifying themselves from hospitals and the military and whatnot.
For example, Cloudflare employees make money on promises to mitigate such attacks, but then can’t guarantee they will, and take all their customers down at once. It’s a shared pain model.
How did we get to a place where Cloudflare being down means we see an outage page, but on that page it tells us explicitly that the host we're trying to connect to is up, and it's just a Cloudflare problem.
If it can tell us that the host is up, surely it can just bypass itself to route traffic.
People use CloudFlare because it's a "free" way for most sites to not get exploited (WAF) or DDoSed (CDN/proxy) regularly. A DDoS can cost quite a bit more than a day of downtime, even just a thundering herd of legitimate users can explode an egress bill.
It sucks there's not more competition in this space but CloudFlare isn't widely used for no reason.
AWS also solves real problems people have. Maintaining infrastructure is expensive as is hardware service and maintenance. Redundancy is even harder and more expensive. You can run a fairly inexpensive and performant system on AWS for years for the cost of a single co-located server.
It's not only centralization in the sense that your website will be down if they are down; it is also a centralized MITM proxy. If you transfer sensitive data like chats over Cloudflare-"protected" endpoints, you also allow CF to transparently read and analyze it in plain text. It must be very easy for state agencies to spy on the internet nowadays; they would just ask CF to redirect traffic to them.
Because it's better to have a really convenient and cheap service that works 99% of the time than a resilient one that is more expensive or more cumbersome to use.
It's like github vs whatever else you can do with git that is truly decentralized. The centralization has such massive benefits that I'm very happy to pay the price of "when it's down I can't work".
Most developers don't care to know how the underlying infrastructure works (or why) and so they take whatever the public consensus is re: infra as a statement of fact (for the better part of the last 15 years or so that was "just use the cloud"). A shocking amount of technical decisions are socially, not technically enforced.
This topic is raised every time there is an outage at Cloudflare, and the truth of the matter is that they offer an incredible service and there simply isn't big enough competition to challenge it. By definition their services are so good BECAUSE their adoption rate is so high.
It's very frustrating of course, and it's the nature of the beast.
Compliance. If you wanna sell your SaaS to big corpo, their compliance teams will feel you know what you're doing if they read AWS or Cloudflare in your architecture, even if you do not quite know what you're doing.
Because DDoS is a fact of life (and even if you aren't targeted by DDoS, the bot traffic probing you to see if you can be made part of the botnet is enough to take down a cheap $5 VPS). So we have to ask - why? Personally, I don't accept the hand-wavy explanation that botnets are "just a bunch of hacked IoT devices". No, your smart lightbulb isn't taking down Reddit. I slightly believe the secondary explanation that it's a bunch of hacked home routers. We know that home routers are full of things like suspicious oopsie definitely-not-government backdoors.
There is this tendency to phrase questions (or statements) as "when did 'we'...".
These decisions are made individually, not centrally. There is no process in place (and most likely there never will be) that can control or dictate whether people decide one way of doing things is the best way to do it, even assuming they understand everything or know all the pitfalls.
Even if you can individually control what you do for the site you operate (or are involved in), you won't have any control over the parts of your site (or business) that rely on others who use AWS or Cloudflare.
IMO, centralization is inevitable because the fundamental forces drive things in that direction. Clouds are useful for a variety of reasons (technical, time to market, economic), so developers want to use them. But clouds are expensive to build and operate, so there are only a few organizations with the budget and competency to do it well. So, as the market matures you end up with 3 to 5 major cloud operators per region, with another handful of smaller specialists. And that’s just the way it works. Fighting against that is to completely swim upstream with every market force in opposition.
I would be less worried if Cloudflare and AWS weren't involved in many more things than simply running DNS.
AWS - someone touches DynamoDB and it kills the DNS.
Cloudflare - someone touches functionality completely unrelated to DNS hosting and proxying and, naturally, it kills the DNS.
There is this critical infrastructure that just becomes one small part of a wider product offering, worked on by many hands, and this critical infrastructure gets taken down by what is essentially a side-effect.
It's a strong argument to move to providers that just do one thing and do it well.
Re: Cloudflare it is because developers actively pushed "just use Cloudflare" again and again and again.
It has been dead to me since the SSL cache vulnerability thing and the arrogance with which senior people expected others to solve their problems.
But consider how many people still do stupid things like use the default CDN offered by some third party library, or use google fonts directly; people are lazy and don't care.
We take the idea of the internet always being on for granted. Most people don't understand the stack and assume that when sites go down it's isolated. And although I agree with you, it's just as much complacency, lack of oversight, and bureaucratic delays in enforcement as it is centralization. But I guess that's kind of the umbrella over those things... lol
Well the centralisation without rapid recovery and practices that provide substantial resiliency… that would be worrying.
But I dare say the folks at these organisations take these matters incredibly seriously and the centralisation problem is largely one of risk efficiency.
I think there is no excuse, however, not to have multi-region state and pilot-light architectures in place, just in case.
A lot (and I mean a lot) of people in IT like centralization specifically because it’s hard to blame people for doing something that everyone else is doing.
This was always the case. There was always a "us-east" in some capacity, under Equinix, etc. Except it used to be the only "zone," which is why the internet is still so brittle despite having multiple zones. People need to build out support for different zones. Old habits die hard, I guess.
> How did we get to a place where either Cloudflare or AWS having an outage means a large part of the web going down?
As always, in the name of "security". When are we going to learn that anything done, either by the government or by a corporation, in the name of security is always bad for the average person?
It's weird to think about, so bear with me. I don't mean this sardonically or misanthropically. But it's "just the internet." It's just the internet. It doesn't REALLY matter from a large enough macro view. It's JUST the internet.
It's because single points of traffic concentration are the most surveillable architecture, so FVEY et al economically reward with one hand those companies who would build the architecture they want to surveil with the other hand.
Currently at the public library and I can't use the customer inventory terminals to search for books. They're just a web browser interface to the public facing website, and it's hosted behind CF. Bananas.
Don't forget the CrowdStrike outage: one company had a bug that brought down almost everything. Who would have thought there are so many single points of failure across the entire Internet.
For most services it's safer to host from behind Cloudflare, and Cloudflare is considered more highly available than a single IaaS or PaaS, at least in my headcanon.
The same reason we have centralization across the economy. Economies of scale are how you make a big business successful, and once you are on top it's hard to dislodge you.
Agreed. More worrying is that the standard practice of separating domain and nameserver administration appears to have been lost to one-stop-shop marketing.
Short-term economic forces, probably. Centralization is often cheaper in the near term. The cost of designing in single-point failure modes gets paid later.
And all of these outages are happening not long after most of these companies dismissed a large number of experienced staff and moved jobs offshore to save on labor costs.
I think some of the issues in the last outage actually affected multiple regions. IIRC internally some critical infrastructure for AWS depends on us-east-1 or at least it failed in a way that didn't allow failover.
5 mins. of thought to figure out why these services exist?
Dialogue about mitigations/solutions? Alternative services? High availability strategies?
Nah! It's free to complain.
Me personally, I'd say those companies do a phenomenal job by being a de facto backbone of the modern web. Also Cloudflare, in particular, gives me a lot of things for free.
It's not really. People are just very bad at putting the things around them into perspective.
Your power is provided by a power utility company. They usually serve an entire state, if not more than one (there are smaller ones too). That's "centralization" in that it's one company, and if they "go down", so do a lot of businesses. But actually it's not "centralized", in that 1) there are actually many different companies across the country/world, and 2) each company "decentralizes" most of its infrastructure to prevent massive outages.
And yes, power utilities have outages. But usually they are limited in scope and short-lived. They're so limited that most people don't notice when they happen, unless it's a giant weather system. Then if it's a (rare) large enough impact, people will say "we need to reform the power grid!". But later when they've calmed down, they realize that would be difficult to do without making things worse, and this event isn't common.
Large internet service providers like AWS, Cloudflare, etc, are basically internet utilities. Yes they are large, like power utilities. Yes they have outages, like power utilities. But the fact that a lot of the country uses them, isn't any worse than a lot of the country using a particular power company. And unlike the power companies, we're not really that dependent on internet service providers. You can't really change your power company; you can change an internet service provider.
Power didn't used to be as reliable as it is. Everything we have is incredibly new and modern. And as time has passed, we have learned how to deal with failures. Safety and reliability has increased throughout critical industries as we have learned to adapt to failures. But that doesn't mean there won't be failures, or that we can avoid them all.
We also have the freedom to architect our technology to work around outages. All the outages you have heard about recently could be worked around, if the people who built on them had tried:
- CDN goes down? Most people don't absolutely need a CDN. Point your DNS at your origins until the CDN comes back. (And obviously, your DNS provider shouldn't be the same as your CDN...) A rough sketch of this follows the list.
- The control plane goes down on dynamic cloud APIs? Enable a "limp mode" that persists existing infrastructure to serve your core needs. You should be able to service most (if not all) of your business needs without constantly calling a control plane.
- An AZ or region goes down? Use your disaster recovery plan: deploy infrastructure-as-code into another region or AZ. Destroy it when the az/region comes back.
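To make the first workaround concrete, here is a rough sketch, under the assumption that your DNS lives with a different provider than your CDN: health-check the CDN-fronted hostname and, if it fails, repoint the A records at your origins with a low TTL. The hostname and IPs are placeholders from the documentation ranges, and update_dns_record is a hypothetical stand-in for whatever your DNS provider's real API or SDK call is.

    import urllib.request

    CDN_HEALTH_URL = "https://www.example.com/healthz"  # served through the CDN (placeholder)
    ORIGIN_IPS = ["203.0.113.10", "203.0.113.11"]        # documentation-range IPs, not real

    def cdn_is_up(url, timeout=3.0):
        # Anything below 500 counts as "the CDN is answering"; errors and timeouts count as down.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except Exception:
            return False

    def update_dns_record(name, ips, ttl=60):
        # Hypothetical stand-in: replace with your DNS provider's actual API or SDK call.
        print(f"would set A records for {name} -> {ips} (ttl={ttl})")

    if not cdn_is_up(CDN_HEALTH_URL):
        # Low TTLs matter: a 24-hour TTL makes this kind of failover useless in a short outage.
        update_dns_record("www.example.com", ORIGIN_IPS, ttl=60)

The catch, as the first bullet already notes, is that this only helps if your DNS isn't hosted by the CDN that just went down, and only if your origins can absorb the suddenly un-cached traffic.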
...and all of that just to avoid a few hours of downtime per year? It's likely cheaper to just take the downtime. But that doesn't stop people from piling on when things go wrong, questioning whether the existence of a utility is a good idea.
Do I need to use the other services from the Ory stack to have this be complete? I tried reading the Ory docs a couple of times when I needed an auth solution but it was indecipherable to me as someone not living in the auth world
If you are "just" doing first-party login, session, and user mgmt then Ory Kratos is all you need.
I would say in the majority of cases you would be fine with just Ory Kratos.
If you want 3rd party integrations, or become an IDP (think "login with $yourcorp"), or you migrate an existing system that relies on OAuth2 that you want to keep, or you have more complex auth flows where OAuth2 shines, then you want Ory Hydra.
If you want a "fine-grained", global, centralized authz system (complex and scalable authz as described by Google Zanzibar), then you want Ory Keto.
If you want to support SAML as well, you want Ory Polis.
If you want a "zero trust" setup, then you want Ory Oathkeeper.
That being said, in almost all cases Kratos will be fine, and you can pick and choose what you actually need.
Sure, I would say it's mostly correct.
You can also solve permissions and API gateway concerns differently - for example, many use OAuth2 claims and scopes for permissions. I personally think that isn't good practice - like "first-party auth", I think it's outside of the scope that OAuth2 was originally built for - but it works and many are used to building authz that way.
You could also use the identity metadata in Kratos for permissions - this works well for simple RBAC use cases, but if you want "large scale" and "fine-grained", something like Ory Keto is probably the more reasonable choice.
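For a feel of what the Zanzibar-style model buys you over role claims baked into a token, here is a toy, in-memory version of relation-tuple checking. This is a conceptual sketch only, not Ory Keto's actual API or data model; the namespaces, relations, and tuple format are invented for illustration.

    # Toy Zanzibar-style check: permissions are relation tuples (object, relation, subject),
    # and a relation can be satisfied indirectly (owner implies viewer, or via a subject set
    # like "group:eng#member"). Conceptual sketch only; not Ory Keto's real API.

    TUPLES = {
        ("doc:readme", "owner", "user:alice"),
        ("doc:readme", "viewer", "group:eng#member"),
        ("group:eng", "member", "user:bob"),
    }

    IMPLIED_BY = {"viewer": ["owner"]}  # "anyone who is an owner is also a viewer"

    def check(obj, relation, subject):
        # Direct tuple?
        if (obj, relation, subject) in TUPLES:
            return True
        # Satisfied through a stronger relation (owner -> viewer)?
        for stronger in IMPLIED_BY.get(relation, []):
            if check(obj, stronger, subject):
                return True
        # Satisfied through a subject set (the tuple's subject is itself "object#relation")?
        for (o, r, s) in TUPLES:
            if o == obj and r == relation and "#" in s:
                set_obj, set_rel = s.split("#", 1)
                if check(set_obj, set_rel, subject):
                    return True
        return False

    assert check("doc:readme", "viewer", "user:alice")   # via owner -> viewer
    assert check("doc:readme", "viewer", "user:bob")     # via group:eng#member
    assert not check("doc:readme", "owner", "user:bob")

The point versus OAuth2 scopes and claims: the authorization data lives in one central store and can change without reissuing tokens, and it can express object-level relationships rather than a handful of coarse roles.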
Feel free to message me on the Ory Community Slack if you want to discuss further: https://slack.ory.com/
How should we think about how much effective compute is being done with these devices compared to classical (GPU) computing? Obviously FLOPs doesn't make sense, so what does?
I've noticed that when I'm on Zepbound 2.5mg, I don't have the strong urge to play video games and can focus on my projects. Even if I do play video games while on it, I lose interest after a maximum of half an hour.
I know this is late to this topic, but I was wondering. You say you have more time for your projects, but do you ALSO lose motivation to do your projects too?
I am in a career that I really enjoy, but which requires high motivation and productivity. I don't want to lose that along with the pounds.
Not really. It lets me focus on what I want to focus on without losing motivation. Gaming is just an addictive behavior which is why it works against it, but not real work that I actually want to do.
Same here. I am thinking of getting a refurbished M1 or M2 machine to run Asahi Linux, but even then there are a few important things that seem to be unsupported (like external monitors).
I'm running a 4K monitor from the HDMI port on my M1 MBP with Asahi. I was thinking the same, so discovering that the HDMI port works was quite a nice surprise, especially since the only other ports are 3x USB-C.
I was thinking of getting a MacBook Air, and I think it doesn't have an HDMI port. But good to know that the HDMI port on the MBP works. I will take a look at them.