A question for those following QUIC/HTTP3 development: what does it bring over HTTP2?
From my point of view, http3 is more of a sidegrade and has no net benefit over http2.
We can make TCP fast, very fast, and the use of UDP was not a prerequisite for any of its functionality.
The "multistream" functionality of QUIC is mainly to benefit companies with big, highly loaded CDNs, and kind of obviates the economic point of squeezing multiple virtual streams into a single TCP connection as used in http2.
By throwing away TCP, they are throwing away decades of optimisations, and hardware offloading that network hardware makers made to handle TCP well.
If the talk was really about extracting single digit improvements from it, I think it would've made more sense to finally put SCTP, and DCCP to good use.
> By throwing away TCP, they are throwing away decades of optimisations, and hardware offloading that network hardware makers made to handle TCP well.
Indeed. I work at Netflix on optimizing CPU efficiency on our Open Connect CDN nodes, largely to reduce power use and capital expenses. We use FreeBSD, nginx & TCP, and make heavy use of offloads like async sendfile(), TSO, LRO, kTLS and more recently hardware kTLS offload.
Right now, I have a single socket 32c/64t AMD Rome server delivering over 350Gb/s of real Netflix customer traffic. This traffic is all TLS encrypted, and is served across hundreds of thousands of TCP connections.
From measurements we've done, current QUIC would cost about 3x as much as TCP when using software crypto. So my back-of-the-envelope guess is that this box would do about 77Gb/s with QUIC (230Gb/s is the limit when disabling hardware TLS offload and using software crypto).
Are the benefits of QUIC really worth a 4x increase in the amount of energy required per stream?
Once QUIC has optimizations similar to TCP's in place, the story will obviously be different. But we're not there yet.
At the moment, I think QUIC has a lot more benefit for many small requests, rather than for one very large streaming request.
QUIC would have the advantage that you could maintain that one stream and multiplex both video data and control information over it without the problems you'd encounter doing so on HTTP/1.1 or HTTP/2, but that definitely isn't worth the performance loss you'd get by deploying it today.
I think it makes perfect sense for you to wait for QUIC to be more heavily optimized. Once optimized, I think QUIC has the potential to be even faster, but it isn't there yet for your use case. (For that matter, some of that optimization is work you've put into optimizing existing HTTP and TCP, so it isn't surprising that your existing optimized stack beats current QUIC.)
QUIC is already getting those optimisations as the existing large deployments are incentivised to do so, as one example Kazuho Oku from Fastly made changes to their TLS implementation that showed improvements to AEAD and header encryption[1]. I suspect we will see improvements to QUIC's performance at a pace faster than the optimisations to TLS were made to make ubiquitous use of it trivial.
> Would you say that QUIC might not be worth it for video content, as it involves transferring large files over the network?
> Whilst QUIC shines when you have a lot of small assets that you want to fetch as quickly as possible?
The way video is generally served now is actually as a large number of dynamically-selected chunks of the video and associated audio. QUIC makes perfect sense for YouTube/Netflix/Vimeo type VOD, and especially the MPEG-DASH style of streaming.
So you’re using hardware offload of the crypto now? And that wouldn’t be available if you switch to QUIC?
It seems a bad reason to knock the protocol because people haven't implemented something unrelated (hardware acceleration) for it.
I appreciate it obviously would be stupid for you to switch given those stats. But it’s not related to the protocol design or some fundamental shortcoming with QUIC.
Is it worth it now for your use-case? Maybe not. Is it useful for others? Probably. I’m also curious that Google chose to deliver QUIC for everything when they can including YouTube. I guess the cost tolerance for them is there because they believe they improve the experience enough that it’s a net revenue generator.
A lot of it depends on whether or not the content you're delivering is a static file, or if you're transcoding something. When it's a static file, your job is "easier" in that you can use sendfile, and (with hardware kTLS offload) avoid having the CPU touch any data being sent (the Netflix case). When you have a gigantic long tail, and are transcoding a lot of traffic, then optimizations like sendfile and kTLS hardware offload matter a lot less. I imagine Google falls more on one side of the spectrum, and we fall on the other.
Sendfile is great for The Netflix Workload, but for more dynamic web stuff it feels like the "other way around" i.e. netmap makes more sense.. and QUIC is kind of a natural fit for that :)
HTTP2 has multiplexing implemented on top of TCP, which is not multiplexed. This means one stream can be delayed by head-of-line blocking when another stream gets retransmits. Changing TCP to support this is complicated due to transparent proxies that would break, and difficult-to-upgrade segmenter implementations.
The benefit is not limited to CDNs, any long fat network where we have high throughput and high latency (e.g. most cellular connectivity) should see improvement. I would guess that CDNs may benefit less actually since they tend to terminate near the user and would generally have lower RTT. A typical modern web page has many resources to be fetched, and by making them independent the user experience is likely to be significantly better.
Disclaimer: work at google, not on QUIC but am familiar with the QUIC project and team.
TCP has some downsides which cannot be fixed, and QUIC solves them by bypassing TCP entirely. It just uses UDP because all other protocols are usually blocked and thus unusable.
- Transparent handover (when changing IP addresses)
- Multipath
- Multiplexing: no head-of-line blocking
- Faster handshake
- Enforced authentication and encryption (also of some metadata)
- Forward Error Correction and other improvements for wireless networks
So the improvements are mostly noticeable for mobile devices.
There are some mistakes here. MPTCP definitely exists and has been deployed, which also encompasses handover. Faster handshake is covered by TFO in principle. FEC was only a proposal and has been dropped.
Those aren't TCP downsides per se, just different design decisions based on different requirements back when TCP was standardised.
QUIC eliminates the TLS handshake time on top of TCP. You can't really improve on the TCP 3-way handshake - you need a minimum of 3 messages to ensure that both sides have agreed to a connection.
You also have to question whether FEC should be implemented in layer 4, rather than implemented over the lower layers only as when needed. It seems to be terribly inefficient to implement FEC end-to-end when the packetloss is due to the last mile (or several meters for WiFi).
To me, QUIC seems to be the wrong approach to the problem. If you want a connection-oriented protocol with the advantages above, it's much better just to push through a new version of TCP with the extensions. Sure, it's going to take longer than approving a new protocol over UDP, but the potential benefits are substantial. On the other hand, there is an argument that by pushing through QUIC, there is pressure to improve TCP in those ways. We'll have to see how this pans out.
There is no way to 'push a new version of TCP', I believe that's one of the strong motivations for these protocols. Billions of devices and the world's entire network infrastructure would need to be upgraded, you'd only start seeing decent support after a decade. So you have to work with what we have - TCP and UDP.
TCP has a variable options area of up to 320 bits, starting at the 20th octet, to handle exactly this kind of feature extension (this is exactly how Multipath TCP is implemented). There are also 3 reserved bits set aside for future use (13th octet), which could also form the basis of even larger extended option ranges for things like TLS negotiation.
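For anyone who wants to poke at this, here is a minimal Python sketch (my own illustration, not production parsing code) of how that option space is laid out: the data-offset nibble in the 13th octet says how far the header extends past the fixed 20 bytes, and everything in that extra space is kind/length/value options, which is where MPTCP, SACK, window scaling and friends live.

```python
def parse_tcp_options(tcp_header: bytes):
    """Sketch: pull the variable options out of a raw TCP header.

    The data-offset nibble in the 13th octet gives the total header length
    in 32-bit words, so anything past the fixed 20 bytes is option space,
    at most 40 bytes (320 bits).
    """
    data_offset_words = tcp_header[12] >> 4        # high nibble of the 13th octet
    header_len = data_offset_words * 4             # 20..60 bytes
    options = tcp_header[20:header_len]            # the extension space MPTCP etc. live in

    parsed = []
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:                              # End of Option List
            break
        if kind == 1:                              # No-Operation (padding)
            i += 1
            continue
        length = options[i + 1]                    # length covers kind + length + data
        parsed.append((kind, options[i + 2:i + length]))
        i += length
    return parsed
```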
You're describing how it can be upgraded in theory but the parent was explaining that it can't be done in practice.
One of the lessons of the last decade or so has been that only end-to-end encryption prevents ossification. To the extent that middleboxes can read stuff they will break forward compatibility.
For example certificate compression. How are we only sending that to standardisation now in 2020? Well it was impossible to deploy this before TLS 1.3. Why? Because older versions of TLS didn't encrypt certificates, and so middleboxes could read them, and so middleboxes would freak out if the certificate wasn't as expected. That's all it took to make a useful optimisation impossible to deploy.
> You're describing how it can be upgraded in theory but the parent was explaining that it can't be done in practice.
Adding features to TCP has been done many times successfully already. When my journey started there was no window scaling, ECN, SACK... They are all widely adopted now.
Ten years after it was standardised, field reports showed ECN in use on just 0.06% of all connections, while almost 40% showed countermeasures preventing ECN or other failures (the remainder did not attempt ECN).
So what did they do? Well they adjusted ECN so that it tolerates the broken middleboxes. That's what you have today. Your systems go "Oh, ECN is broken, oh well" and press on without it. You see this as a feature added "successfully" and I call it what it is: Failure.
Where you're talking about TCP, the only state-aware devices in the path are usually stateful firewalls close to the beginning and end of the path. I can't see this causing major issues, as firewalls don't usually act on options unless specifically configured, seeing that options don't change the fundamentals of TCP flow (i.e. flags, seqs and acks). The option bits are usually just flags for signalling between the two endpoints.
My apologies for just taking your comment as a jumping-off point, but could you or anyone else point me in the best (free) direction to properly learn about this whole stack of protocols? It's been very high on my list to finally figure out what sits below HTTP, and in a roundabout way my appetite has been whetted by reading a Claude Shannon biography and The Information (which I suppose aren't perhaps /that/ close to UDP and so on but more about information theory and error correction as a historical development), but those books made me really want to dive in and learn about the building blocks of 'the internet'. Any pointers to books, articles or courses would be very much appreciated!
TCP/IP Illustrated, Volume 1: The Protocols by W. R. Stevens is the classic textbook and a great place to start. You can then read the IETF RFCs, which are usually well written and completely readable for anybody with a computing background.
Hmm I would say TCP does have downsides. The choice between TCP and UDP is all-or-nothing. Quic does some nice modularization/layering so that you can effectively control reliability and ordering separately.
One way to think about it is that negotiating a connection and what that connection actually carries are pretty separate concerns. Sadly, even the literature and educational materials on TCP are pretty poor at explaining this. With the QUIC spec this just feels much clearer.
I would actually say Quic is worse. In addition to mandatory ordering (like TCP) it also provides you mandatory encryption.
You can't really have "flexible ordering" with default QUIC, since QUIC streams are inherently ordered streams. You could treat them as unordered on one end and deliver out-of-order data as soon as it arrives, but that won't allow the peer to send unordered data, due to an overall limited flow control window. Future extensions to QUIC (like the proposed datagram one) might change this.
Is transparent handover in use in QUIC or HTTP/3 today? Can you start a large download or SSH session on Wi-Fi, switch to mobile data, and not break the connection?
It's supported by the protocol. But I'm not sure if it's actually supported by anyone who has deployed the protocol. It requires the library to actually support connection ID migration (which not all of them do), as well as support on the infrastructure side (e.g. L4 load balancers): if those receive a packet belonging to the same connection but with a new source IP address (because the client has migrated), they still need to route it to the same server, which holds all the state about the connection. Therefore the whole thing is rather hard to implement.
I'm not sure if forward error correction works with UDP. UDP has a checksum field that's optional in IPv4, mandatory in IPv6 (why IPvAnything dictates anything in the transport layer is beyond me). Assuming you treat the checksum as absent and set it to zeros, it could still get added at any hop, and if that gets corrupted, the packet would be dropped.
If routers either ignore the checksum or never add it in, it could work.
Omitting the UDP/IPv4 checksum is permitted because the IPv4 header has its own redundant checksum, but omitting the UDP/IPv6 checksum would leave the IPv6 header with no checksum at all, which is why that's not permitted.
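For context, the field in question is the standard Internet one's-complement checksum, computed over a pseudo-header (source/destination addresses, length, next header/protocol) plus the UDP header and payload. A rough Python sketch of the core sum (my own illustration; it leaves out assembling the pseudo-header):

```python
def ones_complement_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words, as used by UDP/TCP/IP.

    A real UDP sender feeds in pseudo-header + UDP header + payload, and if
    the result is 0 it transmits 0xFFFF instead, since 0 means "no checksum"
    on UDP/IPv4.
    """
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF
```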
https://en.wikipedia.org/wiki/UDP-Lite lets you checksum the UDP/IP header plus an arbitrary (possibly zero-length) prefix of the payload, but I'm not sure if people actually use that.
I believe the "errors" in this context are generally not actual corrupted packets, which are rare, but dropped packets seen in the context of a larger data transmission.
If your ISP is blocking UDP and not just your router (which you can reconfigure if you want to use some other IP protocols) then you're arguably not on an Internet Service Provider at all, you're on some sort of WWW Service Provider.
Well, that applies to any protocol. ISPs don't block anything; your company usually does, though.
UDP is very commonly blocked in almost every corporate environment I have seen; it is a pain to get it enabled for applications like video streaming that benefit from UDP.
Well, then you don't have internet access in your company. Only some fossilized subset of internet. They have intentionally excluded themselves from any kind of progress, so arguing against any new kind of standard because some people deliberately exclude themselves from new standards is a non-argument in my opinion.
It's like arguing against a developing new TLS version because people configure their firewalls to block any flavor of TLS they don't know.
Except it is not just configuration. It is hardware too.
I am a member of an internet provider run by students. We couldn't allow UDP since we have an agreement with our uplink that we will make it impossible to host outward-facing services. To guarantee that for UDP we would need a stateful firewall, which we didn't have.
Due to agreements with our uplink, they want full access to our firewall, and since they only use Cisco, we are required to use Cisco too.
This means we had to shell out €€€€€ to get a firewall from Cisco, and we had to put aside money for years to do that. Then our (old) core router had 90%+ load during testing of the firewall, since the hardware UDP support it supposedly had was not up to the task. So we had to buy a second core router, where we luckily weren't bound to Cisco by outside obligations: another €€€€ investment.
It's not just stupid configuration; sometimes it is simply a question of means, and it takes the kind of long strategic planning we can do as a non-profit to afford such things.
> We couldn't allow UDP since we have an agreement with our uplink that we will make it impossible to host outward-facing services.
Again, that agreement already means you're not providing internet service, even if you provided outgoing UDP you still would be violating the end to end principle. The hardware is just a consequence of wanting to be more (rather: less) than a dumb pipe.
So what are we then?
I think we are a better internet provider than any commercial one could be in the dorms we supply.
Being an internet provider is being more than a dumb pipe. There are regulations too. We have to do customer service, maintain uptime, ... But according to you we are not an internet service provider.
Even today I am forced to deploy TCP fallback pretty much everywhere, even when corporations "whitelist" us, mainly because they have many, many PoPs all over the world. There is no guarantee their system-wide policies are actually applied at a particular access point/device, for any number of reasons.
It is particularly bad in the consulting industry. This is usually because their customer actually controls the network/infra for the teams working with them, which usually means a lot of fragmented IT management.
This is from my experience deploying Flash / WebRTC stack video tools in Fortune 500/2000 type orgs over the last 8-9 years. One of the main reasons we moved from Flash to WebRTC early on is the heavy friction involved in deploying RTMP/RTMPS(E). I can count on one hand how many of them had sensible network policies.
You may have had a good experience with network policies, but many, many large organizations still insist on IE11 and restricted networks the world around; that is the sad reality.
In HTTP/2, because all the streams are muxed over a single TCP connection, packet loss really screws things up - every stream grinds to a halt. It is worse than HTTP/1.x in that case. I observed it myself when Fastmail was having some networking troubles - one response not making it through caused every other request to stall indefinitely.
HTTP/3 being over UDP will hopefully fix that, while keeping the benefits of HTTP/2.
Yes, being muxed into a single TCP stream was originally marketed as an advantage: for almost all mainstream OSes and network hardware, handling fewer, faster TCP connections is easier than handling more, slower ones.
Even if a packet loss was to occur, it would still have a net benefit over pounding a CDN with countless individual requests.
The problem with multiple TCP connections is slow TCP startup. Once TCP connections are established and "warmed up", multiple TCP connections almost always work faster and more reliably.
That was exploited back in the day by download managers which opened multiple connections to the same resource and managed to significantly increase download speed.
> That was exploited back in the day by download managers which opened multiple connections to the same resource and managed to significantly increase download speed.
Even back then, that wasn't anything to do with multiple TCP connections being fundamentally faster somehow. The speed increase achieved by download managers comes down to:
1. DNS resolution round-robin-ing you to different CDN nodes, such that you're actually making requests to different hardware that each has its own NIC to separately saturate; where if each server NIC can do 1Mbps, and your link can do 2Mbps, then you need two parallel chunk downloads from separate servers to saturate your link. (This is the same fundamental reason downloading e.g. an OS installer through a torrent is usually faster than downloading it through HTTP from the OS maker's CDN.)
2. Intentional per-connection download-rate throttling applied by many servers (especially those of "download sites" like CNet, which "back in the day" were one of the main places people would experience large downloads); where making multiple concurrent HTTP range-requests to the server at once was a way to bypass the server's throttling. You don't see this advantage any more, as these days the servers of "download sites" have enough smarts to dynamically aggregate downlink quota between nodes in a table keyed by peer IP, such that N concurrent downloads from the same site are always each throttled to 1/Nth the rate.
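For anyone who never used one of those download managers, the trick boils down to parallel HTTP Range requests against the same URL. A minimal Python sketch (hypothetical URL and chunk count; assumes the server honours Range requests and reports Content-Length):

```python
import concurrent.futures
import requests

URL = "https://example.com/big-installer.iso"      # placeholder URL

def fetch_range(start: int, end: int) -> bytes:
    # Ask for one byte range; a cooperating server answers 206 Partial Content.
    resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
    resp.raise_for_status()
    return resp.content

size = int(requests.head(URL).headers["Content-Length"])
n_parts = 4
bounds = [(i * size // n_parts, (i + 1) * size // n_parts - 1) for i in range(n_parts)]

with concurrent.futures.ThreadPoolExecutor(max_workers=n_parts) as pool:
    parts = list(pool.map(lambda b: fetch_range(*b), bounds))

with open("big-installer.iso", "wb") as out:
    out.write(b"".join(parts))
```

Whether this is actually faster depends entirely on which of the two effects above is in play; against a single unthrottled server it mostly just adds overhead.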
Multiple TCP connections might increase speed even to a single source, for the following reasons:
1. A TCP connection increases speed gradually over time until it hits some bandwidth limit. So if you open 10 connections, you'll hit that limit faster.
2. A rare packet drop will cut a connection's speed, and it will take some time to restore that speed on that connection. With multiple simultaneous connections and rare drops, only one or two connections will experience that speed drop, while the other connections might even compensate with increased speed.
So if your connection is flawless, it does not make sense. Otherwise it might make sense.
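Point 2 is easy to see with a toy model (my own, nothing like a faithful TCP simulation): add one segment per RTT, halve the window on a loss, and compare how much a single loss event hurts the aggregate when the same traffic is spread over several connections.

```python
def simulate(n_conns: int, rtts: int, loss_at_rtt: int) -> list:
    """Toy AIMD model: returns the aggregate congestion window per RTT."""
    cwnd = [1] * n_conns
    totals = []
    for t in range(rtts):
        for i in range(n_conns):
            if t == loss_at_rtt and i == 0:
                cwnd[i] = max(1, cwnd[i] // 2)   # the loss hits only connection 0
            else:
                cwnd[i] += 1                     # additive increase each RTT
        totals.append(sum(cwnd))
    return totals

print("1 connection :", simulate(1, 10, loss_at_rtt=5))
print("4 connections:", simulate(4, 10, loss_at_rtt=5))
```

With four connections the loss knocks out only a quarter of the aggregate window, which is also why parallel connections are unfriendlier to everyone else sharing the link.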
While we can achieve most improvements with TCP too (assuming you're on Linux and have a recent kernel), there are a few things that QUIC offers on top of those improvements:
0-RTT session resumption; while TFO could offer this too, most people don't enable it due to broken middleboxes.
Header encryption, which hopefully prevents future ossification due to those broken middleboxes.
Multiple independent streams within a connection where each stream only suffers from its own packet losses instead of all streams being affected together. This provides a small latency benefit. TCP SACKs and fast retransmits should paper over most of this latency, but not all of it. QUIC also offers more SACK ranges than TCP SACK which might help on links with high, non-bursty packet loss.
Then there's path migration, e.g. when switching from mobile to wifi. This could also be achieved with MPTCP, but that's still making its way into the kernel.
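To make the stream-independence point concrete, here's a conceptual Python sketch (my own toy, not any real QUIC library): one reassembly buffer per stream, so a gap on stream 2 never delays data that arrived in order on stream 1, whereas over a single TCP connection the same gap would stall every stream behind it.

```python
from collections import defaultdict

class StreamReassembler:
    def __init__(self):
        self.expected = defaultdict(int)    # next deliverable offset, per stream
        self.pending = defaultdict(dict)    # out-of-order chunks, per stream

    def on_packet(self, stream_id: int, offset: int, data: bytes) -> bytes:
        """Return whatever is now deliverable in order on this stream."""
        self.pending[stream_id][offset] = data
        out = b""
        while self.expected[stream_id] in self.pending[stream_id]:
            chunk = self.pending[stream_id].pop(self.expected[stream_id])
            out += chunk
            self.expected[stream_id] += len(chunk)
        return out

r = StreamReassembler()
print(r.on_packet(1, 0, b"hello"))     # stream 1 delivers immediately
print(r.on_packet(2, 5, b"world"))     # stream 2 has a gap at offset 0: nothing yet
print(r.on_packet(2, 0, b"12345"))     # gap filled: stream 2 delivers b"12345world"
```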
I think this notion that you need to work around them and accommodate them, rather than ignore them and do things right, needs to be thrown out.
All those middleboxes are non-essential, and are not an inherent part of the original design.
I especially don't understand the Google engineers who decided on the TLS 1.3 version hack, when the very same situation has repeated itself with 1.3 and many new TLS extensions.
The logic fails, for example, with IPv6: a giant amount of IPv6 software/hardware is plainly unusable, but that is not a reason to try to hack around it by reimplementing IPv6 in v4.
TCP packets require syn/ack, which makes them work quite poorly on wireless networks where devices are receiving the majority of packets. What happens is that your TCP acks go through the roof, because those packets sent from the device may not be received.
But syn/ack is only for data you want to make sure was received. Things like video and audio don't need that to view the content.
So one big use is that it's a protocol that works much better over wireless for content viewing.
You can test this yourself with a basic server and a mobile phone over WiFi at distance. Over http/2 video will cut out and have a difficult time loading. Over quic it'll just keep going.
QUIC has ACKS also. It is a connection oriented protocol that can guarantee delivery of packets.
QUIC also has a three-way handshake (on new connections). The equivalent of TCP's SYN, SYN+ACK, ACK, but stacked with the TLS negotiation at the same time.
Also, SYN is only used during this handshake in TCP.
I have that same question: "Why and who does this benefit?" It seems like everything runs in the cloud these days and is fronted by huge CDNs. The performance on 80%+ of websites should be really fast already due to local proximity wins (speed of light/bits). It looks like there's a megacloud provider location just about everywhere these days, with more on the way.
This seems like a lot of energy spent to make already fast things just a tiny bit faster. Are there any real world estimates or projections for improvement? Like, how quickly will that Facebook page load or show those Google (ad) search results?
Things feel largely fine, yet here are more "improvements".
And once we have these gains, they will probably be lost in bloated javascript or app layer inefficiencies.
One thing I read about a while ago (can't find the link at the moment) is that QUIC doesn't play well with firewalls that do QoS (at least, by default), because a node will send as much UDP data as it likes and the system has to deal with it as it comes in; with TCP, you can adjust the window size, etc. to slow the transmission rate, and so on, but current firewalls don't have any (practical?) way to manage this.
I solved this issue at my last company by just blocking 443/UDP at the firewall so that clients (i.e. Chrome) would fall back to HTTP/2.
The problem with making changes to TCP or using SCTP is that it would take a lot of time for the implementations to be rolled out worldwide. One of the main reasons is that these protocols are implemented in kernel space, so any change would take years before it's widely adopted across the network. TCP Fast Open is an example of this. For this reason, QUIC's functions are implemented in user space over UDP (no changes made to UDP). But this has also created problems for QUIC, as it turns out that QUIC has significantly higher CPU usage than TCP.
Yes, the biggest issue for QUIC deployment is that UDP on anything other than the most well-known ports is firewalled almost universally by low-end ISPs.
In the end, a QUIC probe and a subsequent TCP fallback would be worse than well-done TCP, and no better than MPTCP/TFO/SCTP/DCCP (all of which have been supported by mainline OSes for a long time).
I believe macOS/iOS is the only major OS with MPTCP right now. Some others have patches available or are in the process of landing their own implementations but that'll likely take a bit and still doesn't include Windows. SCTP is supported by most kernels but usually can't get out of your local network before something either explicitly blocks it or can't understand it and so drops it. DCCP won't even work on your network, unless your network is only Linux machines. TFO is blocked by enough junk middleboxes that, on average, it's less efficient than just not trying to use it.
Your only real choices are TCP without any extensions created in the past 20 years or UDP. Some firewalls will block UDP but at least it'll survive the open internet so that's a problem you can get IT to fix.
I'm not sure how that really is a problem. I am writing this from a box that has IPv6 and most of the latest TCP stuff including BBR, ECN and TFO. It took a while until we got to that point, but here we are. In the past we were able to wait too, why not now?
I think the time taken by end-users/customers using your app will be of more concern here since they will take significantly more time to upgrade their devices to a new transport protocol or fixes in existing ones. QUIC will enable us to restrict such updates to the developer level.
That doesn't really explain what changed, where the urgency is coming from. Also note that some TCP improvements are sender-side only, so you only need to upgrade your server, not all client devices.
The urgency is coming from mobile networks and high reliance on low latency for user engagement.
Also, even if we assume that 50% of all internet users have access to the most modern protocols as soon as is reasonable, we still care about how quickly the other half will upgrade.
And in a sense I agree with this vision of the internet. On one hand the internet could be a super optimized distributed communication network with a lot of embedded functionality (I really like the concept of content-centric-networks/name-centric-networks) on the other hand the internet could be a dumb many-ended pipe (the IP protocol)
In terms of how I use the internet in practice I always prefer the dumb pipe model.
The upgrade problem is something that Google created with Android devices being stuck on old kernels; now they're trying to paper over that mess by moving things to userspace. That may solve this particular problem, but it creates a new maintenance nightmare, with hundreds of different applications bringing their own congestion control and their own transport implementation on top of dozens of different TLS libraries.
It seems like a little more patience and ensuring that devices remain upgradable could give us most of the advancements without the downsides.
That is a possible source of error, still there is the issue of network devices that implement ossified versions of internet standards. For example when they tried to use different kinds of compression in unencrypted HTTP it came out that a lot of middleboxes would simply "fix" the content-encoding header to be either gzip or deflate, regardless of that making it illegible.
Not an expert, but one interesting feature is the ability to keep a connection open across network changes.
In TCP and UDP, connections are identified by the local IP address/local port/remote IP address/remote port tuple. This means that if my IP address changes (e.g. because I connected to a new WiFi network), all TCP connections that were active before must be abandoned. With QUIC, supposedly, an identifier separate from IP addresses is used, so there are ways a connection could be resumed. (I'm fuzzy on the details though.)
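Roughly, the trick is that the server demultiplexes on an identifier carried inside each packet rather than on the UDP 4-tuple. A toy Python sketch (my own framing; real QUIC connection IDs are negotiated, variable-length, and migration is validated cryptographically):

```python
import socket

sessions = {}                                    # connection-id -> per-session state

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 4433))

while True:
    data, addr = sock.recvfrom(65535)
    conn_id, payload = data[:8], data[8:]        # pretend the first 8 bytes are the connection ID
    state = sessions.setdefault(conn_id, {"bytes": 0})
    state["bytes"] += len(payload)
    state["addr"] = addr                         # addr may change mid-session; conn_id does not
    sock.sendto(b"ack", addr)
```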
Hmm, will nginx enable header compression this time? Their HTTP/2 module disabled the HPACK dynamic table, as I recall. Will they again serve us a poor man's QUIC and tell us "it's all fixed in the paid version"?
IIRC wasn't this due to a security exploit (information leak)? With compression enabled you can figure out what is in the headers by observing changes in byte sizes.
You're thinking of an exploit in SPDY (the h2 predecessor), in which the headers were just run through the same gzip context. The HPACK format in h2 (and its h3 counterpart, QPACK) is meant to remove those oracles (though it is less effective bytewise than gzip).
Google ruined the simplicity and orthogonality of the internet while we all stood there watching. There is nothing of the original design grandiosity of the first batch of internet protocols here. It's just the engineering work of sacrificing every bit of elegance and modularity to seek a percentage-level (not order-of-magnitude) performance gain.
Simplicity, orthogonality, elegance, modularity, etc. are useful when you want to build lots of different things easily.
When you're building one single thing that's used by 4.6 billion people, it turns out that percentage optimizations matter!
(I work for a company that built a shitty half-baked homegrown QUIC equivalent because in rural Ethiopia, HTTPS handshakes were so slow that they literally just didn't work. Glad that Google is optimizing our percent-of-a-percent use case!)
The thing is, what Google does often doesn't work.
Unsound hacks that kind of work "acceptably" in A/B test telemetry and slowly break in real life from inherent design deficiencies are almost always worse than something that says from the start "will not work on buggy OS/hardware versions, but works really well on standards-compliant ones".
The TLS 1.3 hack a Google engineer forced through the IETF is now backfiring, for example. They did it to hack around a certain brand of middleboxes, but the hack instead broke a few other ones, and embedded HTTP servers. They may well errata it and go back to normal versioning in 1.4, despite putting it on paper in 1.3 that the hack is there permanently.
> The TLS 1.3 hack a Google engineer forced through the IETF is now backfiring, for example.
How is it "backfiring"? It seems to be working for billions of people. If you've got a non-compliant TLS implementation that broke you get to keep both halves, good luck with that.
> There is nothing of the original design grandiosity of the first batch of internet protocols here.
I'd be interested to know how you feel TCP and QUIC differ in this regard.
It seems to me that they're both reasonably similar protocols that design for slightly different things and make different tradeoffs (reliable single-stream transfer vs reliable multi-stream transfer, ip and port-based endpoints vs abstract connection ids, etc).
I guess you could see the integration of encryption as an ugly layering violation, but IIUC the primary driver for that decision is less performance and more preventing ossification, by ensuring as much protocol information as possible is inaccessible to middleboxes.
I'm less enthused about HTTP/3 - I would've liked to see a simpler, "QUIC-ified" version of HTTP/1.1 done first (ie. basically just normal HTTP except with one request per stream), but I guess it's a hard sell to roll back from HTTP/2's efficiencies, and HTTP/2 itself is unsuitable for QUIC since it duplicates much of its functionality.
I also want to see QUIC used for more protocols, especially those that (like HTTP/2) already had some level of stream multiplexing built in (AMQP comes to mind here).
I've used them and they work great. They also have plugins for popular CMS platforms (WordPress, Joomla, Drupal, Magento, OpenCart, PrestaShop, MediaWiki, etc.) that REALLY help.
Depends on what you mean with "technical". Both implement the QUIC specification for obvious reasons, but there is a fair amount of difference in how they for example handle reordering of frames, prioritization of outgoing frames and some other details which will have a yet-to-be-determined impact on performance.
Besides that
- one of the implementations has unit-tests
- one of the implementations is written in a memory-safe-by-default programming language
- one of the implementations might become available to Nginx users without having to invest extra work
But CF is using their own implementation for WARP:
https://blog.cloudflare.com/1111-warp-better-vpn/
"We built WARP around WireGuard, a modern, efficient VPN protocol that is much more efficient than legacy VPN protocols."
I'm very skeptical about QUIC/"HTTP3". TCP works extremely well already. Yes, one can do somewhat better for some particular workloads, but I have yet to see a successful implementation that actually pulls ahead of TCP.
Take the Remote Desktop Protocol used in Microsoft Windows. It can work over TCP, but recent revisions tend to automatically switch to UDP. And you know what? It is unreliable to the point that customers have to turn the UDP layer off. TCP gives slightly worse latency, but it is much more reliable and thus usable. Thankfully there is a Group Policy for that.
I'm not even talking about Google as a company who constantly tries to attack the network infrastructure with its variant of EEE (Embrace, Extend and Extinguish). What's the end game? The crippled protocols worldwide imposed by the ad casino company? No, thank you. Internet must remain free of all of that.