I love that pretty much all the JS runtimes have settled on `(Request): Response`[0], but I really wish they would standardize starting the server as well. Would make writing cross-runtime services easier.
I wonder where the pattern first came from? I think I came across it in either Express (JS) or Ring (Clojure) first, but surely it was done somewhere else before that.
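For what it's worth, the shared part really is just the handler shape; it's only the "start listening" step that differs. A rough sketch (the per-runtime serve calls in the comments are from memory, so treat them as sketches, not gospel):

```typescript
// The shape the runtimes have converged on:
// (Request) => Response | Promise<Response>
const handler = (req: Request): Response =>
  new Response(`hello from ${new URL(req.url).pathname}`);

// Starting the server is where they still diverge:
//   Deno:        Deno.serve(handler);
//   Bun:         Bun.serve({ fetch: handler });
//   Cloudflare:  export default { fetch: handler };
//   Node:        needs an adapter (e.g. a serve() shim) to bridge
//                http.IncomingMessage to the fetch-style Request.
export default { fetch: handler };
```

Because the handler itself only touches the standard `Request`/`Response` types, it's the thin serving layer above that you end up swapping per runtime.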
They're close to as cheap as they come for CDN bandwidth list prices without minimums, especially for CDNs with their level of global coverage. Competitors in this space in particular are often much more expensive, like Deno Deploy's $0.50/GB.
Cloudflare is effectively impossible to compare because it's all "free until you get an email from sales".
Supabase Edge Runtime is easy to self-host (works great as a multi-threaded JS web server). We love community contributions :) Let us know if you would like to collaborate.
I can vouch for Bunny. They are a fantastic company with fantastic prices and fantastic reliability. I have used their CDNs and all of their products for more than 4-5 years now.
Same here. Bunny CDN plus Wasabi. It’s an excellent, inexpensive duo. Bunny now has their own object storage, but it wasn’t mature enough at the time for me to build around it a few years back.
Completely agree. We migrated our entire video library from Azure to Bunny. We went from paying over $2,500 in egress every month to about $200. It’s unreal how much Bunny has saved us.
Wouldn't running on the edge of the internet mean running on the devices that I see when I look around my house? It feels like this serverless thing is rather serverful, unless I've overlooked the part where users are running a node somewhere.
I guess edge is just a buzzword, maybe it is like a metaphor; if you think of the internet as a sphere users reach to for content, something being on the edge means you don't have to reach that hard, it's right there on top. Or maybe it means close to the edge, close to end-user devices.
Serverless is definitely a misnomer, but it means that you don't 'own' the server your thing is running on, there are some restrictions and you can't run anything you could on an actual VPS or hardware box. So in a way the server is abstracted away. You just use resources, but those could be anywhere, running on any node of the edge network.
Right after CenturyLink rebranded to Lumen, but before I heard about it, I clicked a buzzword-laden link looking for people involved in "Edge Computing". I had been writing vehicle traffic controller firmware and thought "hey, I guess I'm doing edge computing--out here at the curb--maybe I should check this out."
Turns out, they meant installing modems in people's houses. Edge, it would seem, is a very versatile buzzword.
I feel like Edge is more acceptable; running at a PoP is close to the edge; running inside an ISP network is even closer; it's not really achievable, but running in ISP managed modems or cellular base stations is pretty much the limit of plausible Edge computing.
Serverless really should mean the client does the work, but it seems pretty equivalent to shared hosting. Dreamhost (and the shell account you used to get with an ISP!) was serverless before it was cool?
When I hear "Edge" I imagine that it keeps working if you remove the ISP (e.g. it'll still talk with other stuff on the LAN) but it works better when the internet is available. Like BitTorrent.
I'm aware that what they usually mean is significantly less interesting.
I think of it as the edge of the server side, ie the closest to the user where the service operator still controls the data. An edge function in a data center can hide information from unauthorized users. An edge function in a home would have a much harder time of pulling that off.
Why design your own API so that I can't try it without rewriting my entrypoints? No thanks.
Cloudflare is building an insanely good platform and I think it is one that is worth betting on into the future. I have no idea where this company came from. Maybe it's a rebrand, because they seem to have a serious customer base and perhaps a real network footprint.
Bunny has ~119 PoPs, which is significantly fewer (less than half) than Cloudflare's presence, and Cloudflare has queueing, streaming, D1 (a database), R2, and all sorts of other things. Workers' DX cannot be beaten.
Just my 2c. If the creators are here, I'd love to know why you decided to design a new API. That is so upsetting.
Bunny has been around for much longer than CloudFlare. All those third-party video streaming websites (e.g. adult content) rely on CDNs like these. Bandwidth is very cheap. CloudFlare is able to command its prices mostly because of the security features and the fact that it is a pull-based CDN. Most of the internet outside of SaaS relies on traditional CDNs like Bunny for low-cost distribution.
Did they undergo a rebrand? Did I just miss this company for many years (it's possible)? I'm happy to believe you. But when you say "traditional CDNs," I think Akamai.
> Cloudflare doesn't execute workers in all their PoPs.
Yes we do!
> I'm in central Mexico and my workers execute in DFW even though there's a Cloudflare PoP not even 30 mins away from here (QRO).
I think you will find that even if you turned off Workers, your site would still be routed to DFW. Some of our colos don't have enough capacity to serve all traffic in their local region, so we selectively serve a subset of sites from that colo and reroute others to a bigger colo further away. There are a lot of factors that go into the routing decision but generally sites on the free plan or lower plan levels are more likely to be rerouted. In any case, the routing has absolutely nothing to do with whether you are using Workers. Every single machine in our edge network runs Workers and is prepared to serve traffic for any site, should that traffic get routed there.
(Additionally, sometimes ISP network connectivity doesn't map to geography like you'd think. It's entirely possible that your ISP has better connectivity to our DFW location than the QRO location.)
I've heard this argument from you before (on Twitter iirc) but I've been using Workers for 4 years now. Never, not even once, have I seen a Worker executing in Mexico. They always execute in DFW.
The CDN does cache stuff on QRO often but Workers and KV are a completely different story.
We're not on the free plan. We pay both for Workers and the CF domain plan.
Maybe all PoPs have the technical capacity to run Workers but if for whatever reason they don't, then it's irrelevant.
> The CDN does cache stuff on QRO often but Workers and KV are a completely different story.
I don't know of any way that requests to the same hostname could go to QRO for cache but not for Workers. Once the HTTP headers are parsed, if the URL matches a Worker, that Worker will run on the same machine. This could change in the future! We could probably gain some efficiency by coalescing requests for the same Worker onto fewer machines. But at present, we don't.
I do believe you that you haven't seen your Workers run in QRO, but the explanation for that has to be something unrelated to Workers itself. I don't know enough about your configuration to know what it might be.
Back a couple of years ago your CEO and another CF employee explained that free plans get routed to other PoPs:
> Not all sites will be in all cities. Generally you’re correct that Free sites may not be in some smaller PoPs depending on capacity and what peering relationships we have.
> The higher the plan the higher the priority, so if capacity is an issue (for whatever issue, from straight up usage to DDoSes) free sites will get dropped from specific locations sooner. Usually you will still maintain the main locations.
So I ended up getting a paid plan but still the behavior hasn't changed. I've tried with different ISPs and locations and I've never seen a Worker executing in Mexico (QRO, GDL, MEX) or any of the other PoPs in the US closer than DFW (MFE, SAT, AUS, IAH).
Cloudflare DX is garbage. It has improved a bit in the last year but it's very far from being usable by your average developer. I am building a product on workers and I am questioning that decision every other day.
Are you doing it in Rust? TypeScript with Workers is a dream. Consider that, while it is not yet fully mature, you can build and launch your app once and it is global-first. It costs like $100 or less to run at significant scale. It's a dream.
Yes. There is a steep discovery curve for the wasm target. However, it makes development easier because once your code compiles, it’ll probably run fine. There are some gotchas related to the platform, but once you learn them, you’ll be fine. Still, none of this is documented and the worker crate is practically unmaintained.
Once you have the app running in the cloud, Workers are a great runtime. Super solid with great perf and uptime. But CF still needs to improve local DX, a lot.
I think it's pretty good, but yeah, not ideal. I'm also building a product on workers, and using D1, KV, R2, queues, and am pretty happy with the DX. Running remote previews is pretty neat.
Cloudflare had only 100 PoPs just a few years ago. Bunny has been around 10 years, but didn't get the cash injection from Google like Cloudflare did.
If you read the article, Bunny uses Deno, while CF uses a cut-down version of Chromium (each instance is like a browser tab; isolated). Hence the API difference.
But I do agree, CF is building out more of a suite.
workerd isn't anywhere near a "cut-down version of Chromium"; it is an incredible platform with years of engineering put into it, from some of the people behind very similar and successful products (GAE and Protocol Buffers, to name a couple). I assume you are referring to V8 here, but that also powers Deno.
> We've all been there: your app gains popularity, and suddenly, you're scrambling to add new servers.
Yeah, but the headache is usually from database, cache and other shared resource servers.
Scaling HTTP has been very easy for most applications for the last 15 years or so.
I have to confess I really don't see the appeal of edge workers in general outside of specific applications where latency is of high concern. Such applications do exist, of course, but this kind of offering is treated so generally that I feel like I'm either immune to the marketing or I'm missing something important.
> I have to confess I really don't see the appeal of edge workers in general outside of specific applications where latency is of high concern. Such applications do exist, of course, but this kind of offering is treated so generally that I feel like I'm either immune to the marketing or I'm missing something important.
Oh, there are lots of things you can do 'on edge' that can be easier/faster:
+ A/B testing
+ cookie warnings just for EU but not everyone else
+ proxy; helpful if you want to hide where your API is from or username/pass
+ route redirects
+ take off some workload from your server
+ mini applets (eg signup forms are great edge use-case)
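Most of those use cases boil down to a small fetch handler that inspects the request and short-circuits before anything reaches your origin. A hypothetical sketch of two of them (sticky A/B bucketing via a cookie, plus a route redirect) in the Workers-style module syntax; the paths and cookie name are made up:

```typescript
function handle(req: Request): Response {
  const url = new URL(req.url);

  // Route redirect: retire an old path entirely at the edge.
  if (url.pathname === "/old-docs") {
    return Response.redirect(`${url.origin}/docs`, 301);
  }

  // A/B test: assign a bucket once, then keep it sticky with a cookie.
  const cookie = req.headers.get("Cookie") ?? "";
  const match = cookie.match(/ab_bucket=(a|b)/);
  const bucket = match ? match[1] : Math.random() < 0.5 ? "a" : "b";

  const res = new Response(`variant ${bucket}`);
  if (!match) {
    // Only set the cookie on first assignment.
    res.headers.set("Set-Cookie", `ab_bucket=${bucket}; Path=/; Max-Age=86400`);
  }
  return res;
}

export default { fetch: handle };
```

None of this needs state or coordination between PoPs, which is exactly why it's a comfortable fit for edge workers.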
If your pages depend on data from APIs that are not globally distributed, having an edge runtime can be worse. Especially when dealing with non-distributed databases.
Haven't had the chance to look into this in depth yet, but is this like Cloudflare Pages or Vercel? Can you host static sites a la Next/Nuxt/SvelteKit/Solid etc.?
Vercel Edge is Cloudflare Workers. This is interesting because there are relatively few providers that are running a proper runtime for generic JS functions at the edge. Cloudflare, Deno Deploy, Fastly, Wasmer.
I was wondering how this compares to Deno Deploy. From an API point of view, it looks rather limited? They seem to have some storage offerings but it’s unclear how they connect.
In this case, instead of putting everything in a couple of giant DCs (e.g. us-east-1), you put PoPs (points of presence) as close to end customers as possible. That way, round-trip times between the PoP and the customer are as small as possible, making their experience better. "Edge" then simply refers to those PoPs collectively; edge compute is just running code on those PoPs.
Ok, got it. So this implies installing equipment in a number of PoPs, presumably based on some study of where your core customers are? And I guess this isn't for all application logic, just cache stuff and quick, easy interaction gains, while nonetheless still passing heavy lifting back to the DC?
As far as app logic, it depends on how much you can get the workers to do in their allotted time (which is short, iirc) so yeah, imo you still need heavier resources in a DC.
[0]: https://blog.val.town/blog/the-api-we-forgot-to-name/