What is the problem with long lived certs? Is the cryptography behind PKI considered weak nowadays, that you can collect enough material on cert that you can derive privkey? Is there other fundamental cryptographic weakness?
I understand problem of still-valid certs after domain expiration, but mere rotation strikes me as similar to password rotation ... which is currently discouraged.
The main issue is lack of a working revocation mechanism.
Servers get compromised sometimes, people do stupid things with keys, etc.
We don't have a really good way of revoking keys after something bad happens. We have some bad ways, but they kind of suck.
An additional reason might be making it easier to punish a misbehaving CA (CAs are often too big to fail; you can't ban them without breaking half the internet).
No kidding. Where I used to work, certs were a nightmare.
For example, some sites were closed and we asked the site owner to revoke their cert. We got "What does that mean?"; they had lost their private key.
Also, other departments had certs that expired and they had no idea what to do. I left over a year ago, and someone who knows more about certs than I do left not long afterwards. I know many certs are due to expire soon; good luck to them.
The point of this is that I can see 45-day certs being a huge issue for that company. When I left they were looking into non-expiring certs. I have no idea what they ended up doing.
Well, with ACME the theory is that certs should renew automatically, so no one should care.
But setting it all up in an F500-sized company is a totally different discussion, and not just for TLS certs but for all the other kinds of cert-based auth that happen there.
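As a rough illustration of what "renew automatically" can look like (nothing from this thread; the domain and cache directory below are placeholders), here's a minimal Go sketch using golang.org/x/crypto/acme/autocert, which obtains and renews certs from an ACME CA on its own:

    package main

    import (
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        // autocert fetches and renews certificates from an ACME CA
        // (Let's Encrypt by default) and caches them on disk.
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"),    // placeholder domain
            Cache:      autocert.DirCache("/var/lib/acme-cache"), // placeholder path
        }

        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(), // obtains/renews certs during handshakes
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello"))
            }),
        }

        // Port 80 serves the ACME HTTP-01 challenge and redirects to HTTPS.
        go http.ListenAndServe(":80", m.HTTPHandler(nil))

        srv.ListenAndServeTLS("", "") // cert and key come from the manager
    }

Whether that counts as "set up" in an F500 environment with load balancers, HSMs and internal CAs is, as you say, a different discussion.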
Others have answered on one level. I want to answer on another.
A few older crypto mechanisms were designed around trusting one thing totally. For example, everything's secure until the end of time provided that the user keeps a private key totally private at all times, with no interruption, ever, and if the user lapses then the overall mechanism breaks really badly.
They were complicated mechanisms held together by a screw made of a metal which was assumed to be infinitely strong.
The current fashion is to trust things in a more limited way, and to design systems such that they won't blow up spectacularly if something breaks. Being able to revoke keys is part of that: it is a weakness that helps to avoid a really bad weakness.
> If the key is 'revokable' that is a weakness in it self.
Ish. The threat of that is part of a different risk model, and probably less serious.
A long-lived certificate that ends up being able to be used by a malicious party (perhaps due to a leak of the private key) could be serious for many users if the certificate can't be reliably revoked, which they currently can't be.
A mistaken or malicious revocation if someone were to manage that could be an issue to the service operator (users can't get in) but this is a safer failure than users connecting to a malicious shadow service due, for example, to an accidentally exposed private key and a DNS poisoning attack.
As others have mentioned: if revocation worked better currently, this would be less of an issue.
A mistaken revocation is orders of magnitude less harmful than a failure to revoke. If revocation worked pretty reliably, that would be an improvement over the status quo.
I'd read it primarily as censorship risk. If a CA is under a government that wants your website taken down, they can just lean on the CA to stop renewing any certificates for it, as opposed to futzing around with ISPs or DNS providers that can only have a local impact. Or alternatively, the CAs in the future might directly use their monopoly to decide who's good and evil (especially on "integrity of the network" grounds against those judged to be spammers et al.).
At least perceived censorship risk is why the archive.is guy always uses HTTP links and not HTTPS links for his site, iirc.
I'd classify that as very low risk. If a CA's business is compromised by a government, then it's pretty easy to just switch to using a different CA, preferably in a different jurisdiction to work around the censorship.
I don't really get the argument behind using HTTP links to avoid the censorship risk with HTTPS - just provide both and get the best of both worlds. Also, using HTTP is far more prone to being interfered with in transit - I recall BT (or their ISP business department) trialling that and injecting adverts into HTTP pages. I can't recall any instance of HTTPS being censored by restricting certificates.
FWIW, a slight clarification here would be that the majority of TLS certificates are issued by CAs in the US, but the majority of CAs are not headquartered in the US.
Whilst that could be a problem, there are some CAs that aren't based in the US. I consider it a low risk issue as I haven't heard of it happening. It'd be more likely for your IP to be cut off rather than attempting to deny an SSL cert.
As you say though, the risk of revocation is low, since anyone who wants to do that can just fail to renew (renewing and revoking are different operations).
I guess it decreases the lead time of censorship. That seems pretty minor though.
* Certs come with secrets. Long-lived secrets are riskier than short-lived ones because of the window of opportunity if they're compromised in an undetected way.
* Less frequent cert rotations mean that the rotation process is inherently riskier. The old adage of "request a 2-year cert, and you're scheduling an outage 2 years from now" has a lot of truth to it. More frequent rotations increase the incentive to automate, which reduces the service risk.
Your second point is crucial in my opinion. In most organisations, there'll be a rush to get some new web service configured and an overworked admin is likely to set up the initial certificate. Without a short expiry date, you can almost guarantee that the admin hasn't got around to monitoring, automating or documenting the process and might not even still be working there in two years' time.
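As a sketch of the monitoring half (the hostname and the 14-day threshold below are arbitrary placeholders, not a recommendation), a few lines of Go are enough to dial a server and warn when its leaf certificate is close to expiry:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        host := "example.com" // placeholder
        conn, err := tls.Dial("tcp", host+":443", &tls.Config{ServerName: host})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        // The leaf certificate presented by the server.
        leaf := conn.ConnectionState().PeerCertificates[0]
        left := time.Until(leaf.NotAfter)
        fmt.Printf("%s expires %s (%.0f days left)\n", host, leaf.NotAfter, left.Hours()/24)
        if left < 14*24*time.Hour {
            fmt.Println("WARNING: renew soon")
        }
    }

Run something like that from a scheduled job and the "nobody noticed until the browser error" failure mode at least becomes a warning first.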
That's a problem for the company, not the security of the Internet. Why do the PKI people take it upon themselves to increase the problems for these companies in order to force them to automate processes?
Reduced certificate lifetimes also enhance the security of the internet, as they reduce the window of opportunity for nefarious uses. It could possibly also reduce the number of support calls from someone who's taken over from a previous admin and is now faced with an undocumented manual process to replace certs in a hurry because their website has an expired cert.
Personally, I don't see the problem with short expiry dates, though less than a month would be too short in my opinion.
A window of opportunity of 45 days is not significantly different from one of 90 or 365 days. If it's the only protection you have from a leaked private key, then a whole lot of people are gonna get MITMed.
But the argument I was replying to was saying that the main advantage of short expiration times is that it encourages companies to automate the process, which reduces the chance that the certificate is accidentally allowed to expire. This is not a security issue for anyone, least of all people using browsers.
Plus, this concept that short expiration times increase security is suspect at best. If the private key leaked, 45 days is far too long, you'd need to reduce this to hours to actually help as a revocation strategy. And even then, chances are that the new key will leak as well right away, as it's most likely that the key was stolen by some undetected malware. And if the key didn't leak, a two year old cert is just as secure as a two minute old one.
> This is not a security issue for anyone, least of all people using browsers.
I disagree. The smooth running and ease of automation of TLS certs benefits the entire ecosystem, including the end-users. Remember when the only sites that had TLS certs were the ones that could afford it?
> If the private key leaked, 45 days is far too long, you'd need to reduce this to hours to actually help as a revocation strategy.
This is a good example of the Nirvana Fallacy.
> And even then, chances are that the new key will leak as well right away, as it's most likely that the key was stolen by some undetected malware.
No certificate expiry control can protect against continuous, undetectable data exfiltration. Meanwhile, a one-time access that grants me the ability to impersonate you for 2 years is a significantly worse situation than one that only grants me that ability for a few weeks.
Privilege escalation and Dev Ops rot. Long-lived certs often get compromised when privilege escalations happen and someone gets access to an account or computer that has private keys on it.
One example scenario for privilege escalation: let's say a hacker gets access to one of your employee's or vendor's machines and associated accounts using a zero-day, or phishing, or some other method that goes undetected for some time. The attacker, as part of this attack, successfully gets access to your cert's private keys through some way or another without drawing attention to themselves.
Some time later, your firm makes several security updates. When doing this, you unknowingly patch the attacker out of your network. The attacker is now in a race against time if they want to do something with the cert before it expires, and in this kind of situation, the sooner that cert expires, the better, because the attacker gets less time to do something with it. In a perfect world, the cert would expire exactly when they got patched out, but because we're not guaranteed to know if there's an attacker in the first place, "keep the expiration time as short as is reasonably possible without impacting service reliability" is what things seem to be moving towards, to limit the blast radius during access leaks.
As for DevOps rot: speed has a tendency to change requirements in favor of automation. Certificate rotations tend to be a pain point: they break management panes, take down websites, throw browser errors, don't get updated in pipelines, and generally demand that people keep track of a ton of localized knowledge and deadlines that is easy to lose or forget. Paradoxically, the longer the time between rotations, the more painful they tend to be, because once rotations are sufficiently frequent it becomes unmanageable to do them manually: demanding speed forces people to build anti-fragile rotation systems. Shortening the requirement is, in some sense, an attempt to encode "you need to automate this" into managerial culture, as a bulwark against cert swaps being anything besides automated, one-click rotations.
I'm by no means an expert, but the difference between passwords and certs is that certs can be used without any interaction with the authority.
A leaked password will reveal itself to the authority when used. You have to connect to something to use it, and when you do, you can be flagged.
A long lived certificate and key can be used with no interaction with the authority, so how do you know that it is being used maliciously? The renewal is the interaction with the authority which could pick up malicious activity, so making it more regular is beneficial.
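To make the "no interaction" point concrete, here's a hedged Go sketch (intermediates omitted for brevity): chain verification is a purely local check against a root pool plus the validity dates, so nothing phones home to the issuing CA, and revocation needs its own separate channel (CRL/OCSP) that clients must actively consult.

    package certcheck

    import (
        "crypto/x509"
        "time"
    )

    // verifyLocally checks a DER-encoded leaf certificate against the
    // system root pool. Nothing here contacts the issuing CA: the
    // validity window (NotBefore/NotAfter) is the only time-based
    // control the verifier applies on its own.
    func verifyLocally(certDER []byte) error {
        cert, err := x509.ParseCertificate(certDER)
        if err != nil {
            return err
        }
        roots, err := x509.SystemCertPool()
        if err != nil {
            return err
        }
        _, err = cert.Verify(x509.VerifyOptions{
            Roots:       roots,
            CurrentTime: time.Now(), // the expiry check happens here, locally
        })
        return err
    }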
100-year certs require a consultant when the 100 years are up and no one has any idea how the system was set up. They also require a consultant when there's a security breach and you have to figure out which of the many certs you have ever used is compromised, or else let the attackers in.
The hope is that 45-day certs don't require consultants, because if you don't set things up right you'll find out right away... and of course the risk of a leak is much lower.
It's very easy to use free certificates (e.g. LetsEncrypt) whose renewal can be automated with a free script. There are also plenty of free guides on how to use them for various web servers etc. That comes to a grand total of nothing.
If you'd rather pay someone else to do it for you, then that's your own issue and not really anything to do with the length of certificate validity.
This, combined with your other responses to this thread, makes it very apparent that you’re more interested in asserting your ignorant view rather than actually learning anything. Please don’t bother commenting.
TLS cert prices have crashed to…in most cases, $0, as average certificate validity times have shortened. If there’s any cost relationship at all, it’s the exact opposite of what you’re implying.
If re-signing adds material complexity to your architecture (including the ‘hidden complexity’ of a dev / ops person needing to remember to manually renew a cert every two years) then frankly it sounds like you’ve got larger problems on your hands.
We've banned this account for repeatedly breaking the site guidelines. Please don't create accounts to break HN's rules with. If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
It's particularly bad to do this when you're knowledgeable and (let's assume) correct on a topic, because then the bad parts of your comment (such as personal attacks) end up discrediting the truth. That doesn't help anyone. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
>but mere rotation strikes me as similar to password rotation ... which is currently discouraged.
Cargo culting strikes again. Forcing password rotation is bad because it causes people to choose passwords with a given pattern (e.g. password1, password2, etc.), which defeats any security benefit. Rotating certificates has no such issue, because the key is (presumably) randomly generated.
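To make the contrast concrete: each renewal can (and normally does) start from a brand-new key drawn from a CSPRNG, so there is nothing analogous to bumping "password1" to "password2". A minimal Go sketch of generating a fresh key and CSR per rotation (the domain name is a placeholder):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "os"
    )

    func main() {
        // A brand-new random key every rotation; nothing is carried
        // over from the previous key.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        // Certificate signing request to send to the CA for the new key.
        csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
            Subject:  pkix.Name{CommonName: "example.com"}, // placeholder
            DNSNames: []string{"example.com"},              // placeholder
        }, key)
        if err != nil {
            panic(err)
        }

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})
    }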
You can also set your password to "password" but hopefully you don't. Guidelines and technical measures can't keep everybody safe all the time, if they are determined to be unsafe, but that is never an argument against them.
Don't you have to go out of your way to do that? You can probably choose a weak RSA key as well (i.e. a 4096-bit one whose modulus isn't the product of two large random primes), but if you have to go out of your way to do that there's little anyone can do to stop you.
Does it terminate existing connections and re-handshake when renewing the cert then? Or does it potentially hold many keys in memory for existing sessions? IIRC a TLS session can potentially last for a long time and 0-RTT depends on it being the same key, right? Couldn't find any answers in the docs: https://caddyserver.com/docs/caddyfile/directives/tls#reuse_...
The renewal of the key does not affect existing connections/sessions because they (the sessions) don't use the key directly. The private key is only used at the start of the handshake to agree on a symmetric key, and the symmetric key is used from there onwards.
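I don't know how Caddy implements this internally, but the usual pattern in Go's standard library is the tls.Config.GetCertificate callback: it's consulted once per new handshake, so a renewed cert is picked up by new connections while established sessions keep using the symmetric keys they already negotiated. A rough sketch with placeholder file paths:

    package main

    import (
        "crypto/tls"
        "net/http"
        "sync"
    )

    var (
        mu   sync.RWMutex
        cert *tls.Certificate
    )

    // reload swaps in a renewed certificate; only *new* handshakes see it.
    func reload(certFile, keyFile string) error {
        c, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return err
        }
        mu.Lock()
        cert = &c
        mu.Unlock()
        return nil
    }

    func main() {
        // Placeholder paths; something (an ACME client, file watcher,
        // or timer) would call reload() again after each renewal.
        if err := reload("/etc/tls/site.crt", "/etc/tls/site.key"); err != nil {
            panic(err)
        }

        srv := &http.Server{
            Addr: ":443",
            TLSConfig: &tls.Config{
                // Called per handshake; existing connections are untouched
                // because they already hold their session keys.
                GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                    mu.RLock()
                    defer mu.RUnlock()
                    return cert, nil
                },
            },
        }
        panic(srv.ListenAndServeTLS("", ""))
    }

(Whether 0-RTT resumption tickets survive the swap depends on the server's session ticket key handling, which is separate from the certificate's private key.)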
I think it's the same mentality as "change all your passwords every 6 months": a useful principle like "change your password if you think you need to" becomes a ridiculous rule of thumb, a hammer treating everything as a nail. A reasonable rule like "update your cert at least once a year" becomes "by extension, wouldn't updating your cert once a month be better?"
> Chrome does not trust user-added Certificate Authorities for QUIC.
Interesting. In linked issue chrome team says:
> We explicitly disallow non-publicly-trusted certificates in QUIC to prevent the deployment of QUIC interception software/hardware, as that would harm the evolvability of the QUIC protocol long-term. Use-cases that rely on non-publicly-trusted certificates can use TLS+TCP instead of QUIC.
I don't follow the evolution of those protocols, but I am not sure how disallowing custom certificates has anything to do with the "evolvability" of the protocol ...
If I were to guess, it's to allow Google freedom in experimenting with changes to QUIC, since they control both the client and large server endpoints (Google Search, Youtube etc).
They can easily release a slightly tweaked QUIC version in Chrome and support it on e.g. Youtube, and then use metrics from that to inform proposed changes to the "real" standard (or just continue to run the special version for their own stuff).
If they were to allow custom certificates, enterprises using something like ZScaler's ZIA to MITM employee network traffic would risk breaking when they tweak the protocol. If the data stream is completely encrypted and opaque to middleboxes, Google can more or less do whatever they want.
That’s exactly what Google is hoping will happen. If QUIC is blocked entirely, there’s no risk that small tweaks to the quic protocol will break Google’s websites for any companies using these tools.
Well, my company is doing it already. They split VPN traffic depending on the target domain (mostly for benign reasons), and they can't do that with QUIC, so they have to block QUIC traffic.
When these kinds of VPN clients split traffic based on domains, they do it with tricks: either via DNS, by capturing traffic in the browser, or similar things.
But to do split VPN with IP addresses they need to create an IP route in the VPN client. If you just have a couple of IPs, it's fine, but if you have a couple hundred targets, you're gonna break some guy's Windows or Mac machine by pushing that huge routing table.
Also, there are targets that change IP addresses. For example, AWS Elastic Load Balancers change IP addresses sometimes (if nothing has changed in the last few years; I haven't deployed ELBs in a while...).
Middle boxes (https://en.m.wikipedia.org/wiki/Middlebox) are a well-known source of protocol stagnation. A protocol with extensibility usually needs the client and server to upgrade, but with middle boxes there are N other devices that potentially need updating as well. Where the user (client) and service provider (server) are motivated to adopt new feature sets, the owners of middle boxes might be far less so. The net effect is that protocols become hard to evolve.
TLS 1.3 breaks MITM boxes because a client can establish a session key outside the middlebox's network and continue using it afterwards inside the middlebox's network.
> I don't follow evolution of those protocols, but i am not sure how disallowing custom certificates has anything with "evolvability" of protocol ...
One of the reasons for developing HTTP 2 and 3 was because it was so difficult to make changes to HTTP 1.1 because of middleware that relied heavily on implementation details, so it was hard to tweak things without inadvertently breaking people. They're trying to avoid a similar situation with newer versions.
The entire protocol puts corporate/institutional needs first and foremost, to the detriment of human-person use cases. HTTP/3 makes all web things require CA TLS, which means that if something in the TLS breaks (as it does every couple of years with root cert expirations, version obsolescence, ACME version obsolescence, etc.) then the website is not accessible. There's no such thing as HTTP+HTTPS HTTP/3, self-signed HTTPS HTTP/3, or even, as in this case, custom-CA TLS HTTP/3. It's designed entirely around corporate/institutional needs and is a terrible protocol for human people. HTTP+HTTPS websites can last decades without admin work. HTTP/3 websites can only last a few years at most.
If it was about institutional needs, surely it would make it easier to mitm for middleboxes? The biggest opposition to QUIC came from big corporations and other institutional players
QUIC isn't just some transport protocol though: it's weird. These restrictions live in the QUIC libs, not in UDP (the protocol QUIC runs on top of).
And while you can use QUIC with custom certs in a technical sense if you do the compile flags and build your own universe, 99.9999% of the people on Earth with their standard QUIC lib implementations (most use the same two) will be unable to connect to it.
I don't know what you're talking about, but I just imported a QUIC library and used it with a self-signed certificate. No extra steps required, either on the server or the client side.
Yes, the protocol is weird, compared to TCP. It has many extra features and one restriction, which is mandatory TLS, which I wouldn't even consider skipping anyway. Still nothing to do with ads.
He was arguing that 99.99% of users are people who cannot use your stuff, because Chrome specifically doesn't allow the use of snake-oil / self-signed certs for QUIC, and TLS encryption is mandatory.
Compare that to the graceful Connection: Upgrade handshake in HTTP/1.1 and WebSockets, for example: that would have been much better, because there is no isolation based on tools and libraries, only on the trust chain of the certificates. If a new version of the protocol breaks, it automatically falls back with both parties knowing about it. If the QUIC protocol changes on the client side, good luck finding that out.
"Either way, I still fail to see how this relates to the original complaint that QUIC somehow leads to ads."
"HTTP/3 uses QUIC, a transport layer network protocol which uses user space congestion control over the User Datagram Protocol (UDP). The switch to QUIC aims to fix a major problem of HTTP/2 called "head-of-line blocking": because the parallel nature of HTTP/2's multiplexing is not visible to TCP's loss recovery mechanisms, a lost or reordered packet causes all active transactions to experience a stall regardless of whether that transaction was impacted by the lost packet." (Wikipedia)
I'm a text-only browser user; I also use TCP clients with a localhost TLS forward proxy. When I visit a website I only request resources manually and only from a single domain at a time. This works great for me. But obviously it is not compatible with ads. This is because ads require websites that cause so-called "modern" browsers to automatically request resources (ads) from multiple domains "simultaneously", i.e., "in parallel". As such, I see no ads. Not a problem, I can live without them.
However, for those who let these browsers request ads automatically, there are problems. "The web", as they experience it, gets slower. Because although the requests to multiple domains may be executed "simultaneously", and the HTTP/1.1 protocol allows for pipelining requests, these browsers cannot process the responses that arrive out of order.
HTTP/1.1 pipelining does work. Outside the browser, when requesting non-advertising resources, for example. I have used HTTP/1.1 pipelining outside the browser for over 20 years. It works beautifully for me. I am only requesting resources from a single domain and I want the responses in order. So, right there, we can see that HTTP/3 is addressing the needs of advertising companies and their partners rather than web users like me who are requesting non-advertising resources.
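For anyone curious, here's roughly what pipelining looks like outside a browser, sketched in Go (host and paths are placeholders, and plenty of servers will simply close the connection after the first response): both requests are written before any response is read, and the responses come back strictly in request order, which is exactly the in-order property (and the head-of-line blocking) being discussed.

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        conn, err := net.Dial("tcp", "example.com:80") // placeholder host
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Pipelining: send both requests before reading any response.
        fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        fmt.Fprintf(conn, "GET /robots.txt HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

        // Responses arrive in the same order the requests were sent.
        r := bufio.NewReader(conn)
        for i := 0; i < 2; i++ {
            resp, err := http.ReadResponse(r, nil)
            if err != nil {
                panic(err)
            }
            fmt.Println("response", i+1, resp.Status)
            io.Copy(io.Discard, resp.Body) // drain before reading the next response
            resp.Body.Close()
        }
    }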
As the web became infested with ads/tracking and became slower as a result, the ad companies and CDNs, e.g., Google and Akamai, sought to "make the web faster" by introducing a new HTTP protocol for the so-called "modern" browser. (This browser, as we know, is more or less under the control of an advertising company, the same one introducing the protocol.)
Now, a web user (cf. institutional player in the online advertising industry) might conclude the easiest way to "make the web faster" is to remove what is making the web slow: ads.
But the "solution" chosen by the advertising company was to keep the ads and try to make it easier for the so-called "modern" browser to process out-of-order responses from multiple domains, e.g., ad servers, faster.
Will HTTP/3 and its use of QUIC "lead to ads"? That is for the reader to decide. I think it is safe to say it will not lead away from them, and it could help cement the ad infestation of the web even more.
tl;dr HTTP/1.1 is optimal for sequential information retrieval from a single domain. HTTP/3 is optimal for resource, e.g., advertising, retrieval from multiple domains "simultaneously".
HTTP/1.1 is no different from HTTP/2 and even HTTP/3 when it comes to multiple domains. The optimization strategies are all aimed at solving simultaneous resource requests from a single host (not domain but close enough) - should be obvious when you consider that each host requires at least one separate socket connection anyway.
In practice, HTTP/1.1 with one connection per request even encouraged creating more domains/subdomains if you wanted more simultaneous requests.
The ad company wants to make ads browsing easier, therefore it created HTTP/3. I can entertain that statement.
The ad company created QUIC to make HTTP/3 possible. That's also certainly true.
What follows: the ad company created QUIC because they wanted to make ads easier.
But "QUIC exists to improve ad deliverability" is true only in the shallowest of senses, similar to "nuclear power plants exist to drop bombs on civilians" just because research on nuclear power was driven in large part by war needs. In reality, nuclear power has its own uses beyond the military.
Similarly, QUIC taken on its own merits does not have anything to do with ads. It's just another general purpose protocol.
BTW, multiple streams will not make it any faster to load ads from third parties. Head-of-line blocking only affects resources within a single TCP connection, which can only ever be to one server. That means QUIC's streams do nothing to make loading Google's third party ads easier.
> We explicitly disallow non-publicly-trusted certificates in QUIC to prevent the deployment of QUIC interception software/hardware, as that would harm the evolvability of the QUIC protocol
There is the case of Kazakhstan installing certs to MITM citizens a couple of years ago, and a bunch of cases where bad actors socially engineer people into installing certs for them.
I think because of the KZ case, browsers, and Chrome especially, went for using only their own cert store instead of the operating system's.
Browsers responded by blacklisting the Kazakh certificate the same way they blacklist the certificates that came with pre-installed spyware on laptops from shit vendors like Lenovo. You don't need to block all certificates to prevent against a well-known bad certificate.
If your company requires communications to be monitored, the typical enforcement is a custom company CA installed on company equipment. Then they intercept TLS and proxy it.
Those proxies tend to be strict in what they accept, and slow to learn new protocol extensions. If Google wants to use Chrome browsers to try out a new version of QUIC with its servers, proxies make that harder.
It can seem confusing but it all makes sense when you realise Chrome is designed to work for Google, not for you. I remember people switching their Grandmas to Chrome 15 years ago when they could've chosen Firefox. Many of us knew this would happen, but convenience and branding is everything, sadly.
> Chrome is designed to work for Google, not for you.
Maybe more accurately “chrome is designed to work for you in so far as that also works for google”. I share the long standing dismay that so many willingly surrendered their data and attention stream to an ad company.
I don't really think Firefox cares about having users. The one killer feature Chrome has is being able to access all your state by logging into your Chrome account. Firefox refuses to provide this basic service, which would allow you to seamlessly use your data on Firefox and then eventually stop using Chrome. I wish Firefox nothing but the worst.
I may be feeding the trolls, but not only is there a sync mechanism, at least with Firefox you can self-host[1] such a thing, thereby doubly ensuring the data isn't used for something you disagree with
If you're going to say that Firefox doesn't care about having users, point out its just stunningly stupid memory usage, blatantly stale developer tools (that one hurts me the worst because the Chrome dev-tooling is actually open source, so there's nothing stopping them from actually having Best In Class dev tooling other than no-fucks-given), or the har-de-har-har that comes up periodically of bugs that have been open longer than a lot of developers have been alive
Don't care about self hosting. That's not a feature to me, it's a burden. I would rather some cloud provider do that for me, thankfully Google does it for free and the convenience is much appreciated. It's the same reason i'd put my personal code in Github than some hard drive in the basement which may die anytime.
Perhaps you interpreted my comment as that one must self-host, versus what I intended which is "you can use theirs, or you can use yours, depending on your paranoia level". I thought to include that distinction because some folks believe that Chrome is merely a data exfiltration and ad delivery vector created by the biggest Ad Tech on the planet and therefore don't trust them to be good stewards of arguably the most sensitive thing a modern user creates: browser history
Of course one can just use Firefox Sync out of the box <https://support.mozilla.org/en-US/kb/sync>; even Mozilla has not yet stooped so low as to require opening a terminal just to use Firefox or its Sync component
Again, I don't care that Firefox wants me to build up my profile on one device. Then transfer that to some other device. I already have my profile. Let me login to my chrome profile and use it. I don't want to buy into your shitty ecosystem even if it gives freedom (that i dont want or care about).
I felt your reply was too harsh, but after a few moments I realized that I instinctively think the same: I treat any output from ChatGPT as garbage until checked against other sources. So effectively, it's not worth looking there in the first place.
I think that it's often easier to verify an answer than to find an answer with nothing to go on, so perhaps not entirely garbage but certainly not reliable.
In Bulgarian, латиница ("latin alphabet") transliterated to latin alphabet is just "latinitsa" or "latinica".
In Polish "cyrillic" is "cyrylica" - basically reverse.