Cost is going to be taken care of by Let's Encrypt.
I am surprised the complexity of setting up SSL isn't mentioned in the article.
First, the tools are complex to use and store keys in various incompatible formats. On Linux you pretty much have to rely on cryptic command lines; Windows is slightly simpler. And you need more tools to convert certificates between the different formats (for instance, using an IIS certificate with FileZilla).
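For instance, just moving a cert between IIS and the OpenSSL world takes something like this (a sketch; filenames are placeholders):

    # convert an IIS/Windows PKCS#12 export to the PEM files most Unix tools expect
    openssl pkcs12 -in cert.pfx -out cert.pem -clcerts -nokeys
    openssl pkcs12 -in cert.pfx -out key.pem -nocerts -nodes
    # and back again, PEM to PKCS#12, for importing into Windows
    openssl pkcs12 -export -in cert.pem -inkey key.pem -out cert.pfx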
Then you have to deal with the complexity of the algorithms used, the fact that Chrome won't accept SHA-1 anymore, and the fact that HTTP/2 is very picky about the cryptographic methods used.
What is a best practice today breaks tomorrow. For instance, the following script, written only a couple of years ago, shows how to set up Windows to get an A on SSL Labs:
https://www.hass.de/content/setup-your-iis-ssl-perfect-forwa...
But with IIS 10 / HTTP/2, this list of ciphers is incompatible with Chrome. I found a list that works here:
https://code.google.com/p/chromium/issues/detail?id=529994
but I am sure it will break older browsers.
Etc, etc, etc. This is hard work. Until setting up SSL is a simple box to tick in IIS or a single setting to change in Apache, only developers who really care about security and are really motivated will deal with this shit.
The cipher list is effectively a big issue. When using the good old Sun Fortran compiler, I was used to having a -fast flag which was smart enough to detect the hardware and be as fast as possible while respecting IEEE maths (as far as I can remember).
I would love a -secure flag to just use the most secure option of the current version of the software even at the cost of X years of backward compatibility at the tool/client/browser level.
> I would love a -secure flag to just use the most secure option of the current version of the software even at the cost of X years of backward compatibility at the tool/client/browser level.
How strong HIGH is depends on your version of OpenSSL; i.e., you probably don't want it for, say, RHEL 5's OpenSSL, which (OpenSSL has a long lifecycle, and LTS Linux distros extend it further) some places may still be using.
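You can check what HIGH actually expands to on a given box, which makes the problem obvious, since the output differs between OpenSSL builds:

    # list every cipher suite the HIGH alias matches on this particular OpenSSL build
    openssl ciphers -v 'HIGH'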
A better implementation of the same concept is Mozilla's SSL Config Generator (which is what CertSimple points customers to; we also maintain the Node port): it has Modern/Intermediate/Old profiles with a clear idea of the compatibility you're losing with each: https://mozilla.github.io/server-side-tls/ssl-config-generat...
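For nginx, the Intermediate profile comes out roughly like this (illustrative only and abbreviated; the recommended cipher list changes over time, so generate a fresh config rather than copying this):

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # abbreviated; the generator emits a much longer list
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;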
Or they should package it in versions. "TLS 1.3" would come with a standard list of ciphers and other parameters. All you would have to do is tell your system "Disable SSL 3.0", "Enable TLS 1.3", etc.
It definitely helps, but it wouldn't keep up with the latest security: you'd still be susceptible to Logjam (due to DHE-supporting ciphers in the default TLS 1.2 list).
There's the Mozilla SSL Configuration Generator which helps with the madness, but they still aren't keeping up with the latest recommendations.
I just set up a site a few days ago with Caddy (http://caddyserver.com). Getting certs for multiple domains was a matter of adding a single tls line to the config; it generates a key locally, requests the cert, and installs it transparently on the first request. Pretty smooth. I hope it will auto-renew without intervention; haven't tested that bit yet, but so far it was painless.
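The whole config is roughly this much (a sketch; the domain, path and email are placeholders, and directive syntax may vary between Caddy versions):

    example.com, www.example.com {
        root /var/www/example
        # the email is used for the Let's Encrypt account; certs are obtained automatically
        tls admin@example.com
    }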
I imagine most mainstream servers will incorporate this sort of smooth TLS flow at some point, now that services like Let's Encrypt are available to base it on. I'm considering moving web server for other sites (I usually use nginx) for this single feature.
I'll look into it, but to illustrate my point: when clicking on your link I get an invalid certificate error in Chrome. Granted, I am using an old browser (Win XP / Chrome 40), but it illustrates my point that it is hard to get SSL to work well.
Wasn't XP EOL in 2014 or something, after mainstream support ended in 2009? That's long enough ago that many websites don't test on XP any more, or even work there (like sites which require SNI, AFAIK). I'm happy to drop XP users, I'm afraid, given it has been unsupported for some time (I understand some workplaces still mandate it).
Following in the footsteps of IE 6, Windows XP refuses to disappear for sometimes unbelievable reasons.
It was revealed in a talk at "Pas Sage en Seine 2015"[1] that almost every French bank's website uses broken security because their ATMs are still running Windows XP and go through the same web servers.
Why is your employer forcing you to use a severely outdated system? Do you often come across broken or very slow websites (I imagine the hardware is from the same era as the OS)?
We as an organisation like to take our time. We wouldn't welcome a new joiner with a PC on his desk; much more fun to see him struggle for weeks with our internal IT approval processes. We happily pay millions to Microsoft for the privilege of using Windows XP / 32-bit systems a little bit longer (I now have almost more RAM in my mobile phone than in my workstation). In fact, I suspect much of the organisation's critical infrastructure is running on COBOL. We'd probably still use CRT screens if our IT department could source them. We are certainly still using BlackBerrys (I hope they will stash enough devices before RIM goes bust).
Let's Encrypt is still completely unusable. We fought useless error messages for two hours for our recent renewal, gave up, and renewed with our old CA. The old CA was a very painless process by comparison.
Still no mention of ads. Using HTTPS means you have to load ads over HTTPS, and a huge number of ad networks are not reachable over HTTPS, while another large number of assets/tags are hard-coded to fetch HTTP resources. For any ad-supported website, HTTPS will cause a significant loss of ad revenue.
Given how long it's taking the industry to transition away from Flash I'm not holding my breath about this being fixed any time soon.
Facebook (in addition to Google, as mentioned above) has already required all 3rd party ads to be served over https both on their core site and on all apps on their desktop platform.
Solution to what? Publishers still have to enable HTTPS on their sites and they'll only do that once all the resources on the page are HTTPS compatible.
While I agree with the gist of that article/presentation, it's more on the silly side compared to many other more detailed and accurate pieces about the growing weight of the modern web.
While blocking ads will help some (and certainly some ad providers have gone way too far with their bandwidth/cpu requirements), the web is just a richer media experience than before. The mainstream does not just want text and 80-90% of the page weight ends up being images, even on that article itself. There are other advancements like HTTP/2 and modern CDNs and better image formats like FLIF that will solve this problem much more effectively without going back to a textual world.
Still don't see what any of this has to do with HTTPS encryption everywhere though...
The reasons are as described - a big company with outdated or poor tech policies. Just because AWS is advanced doesn't mean every use of it is. Clearly Cloudfront supports HTTPS and has for a long time so they can enable it if they want to.
For example, one of the biggest CDNs in the world (I'm looking at you, Akamai) charges dramatically more for delivering content over HTTPS. Let's say you're delivering video content at scale; the difference between HTTP and HTTPS delivery can be many millions of dollars a year.
But why not use a different CDN, say CloudFront, which prices HTTP and HTTPS the same? Well, simple: the same problem. CloudFront is many times more expensive than other CDNs at scale.
Really, we need to apply pressure to all CDNs to equalize their HTTPS pricing (it's not just Akamai...)
This isn't helped by the subset of CDN providers who use the enterprise sales models where they try to bundle services and set pricing based on your perceived ability to pay. It's always possible to negotiate a better deal but everyone has to burn the time needed to do that independently and can't publicly share the results.
Unless you're doing a lot of large files like media or downloads, I always recommend CloudFlare. It's a great service and there are no bandwidth or request costs.
> In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
At the time that was written, only the frontends did TLS---remember, there wasn't universal strong encryption or authentication within Google's back-end until after the Snowden leaks.
Nope. The symmetric crypto is basically free. The asymmetric crypto is cheap.
IPv4 addresses cost. Most big sites can't quite go SNI-only yet. Soon!
Customer support costs. Why would TLS involve more customer support? Because it's a technology designed only to break connections. Ideally that's only the adversarial connections---but much like the TSA, a mis-designed authentication scheme can cause great stress and drama.
I do think the world will be all-TLS before too long; the parts that aren't will probably not be HTTP over TCP (e.g., content addressable networking).
No SNI on Windows XP (IE6, IE7, IE8, Safari), nor Android 2.3 Browser, nor BlackBerry. Windows XP still has 4-5% usage share on the web, which is 1 in 20 people. Lots and lots of low-end and older Android handsets were on 2.3 because 4.0 had new hardware requirements. Combined, it's a relevant number of people for large sites.
> Windows XP still has 4-5% usage share on the web, which is 1 in 20 people.
Chrome and FF together have a market share of about 70%, so roughly speaking, only 30% of those XP users can be expected to still be using IE or Safari. The percentage might be a touch higher, as users left on XP might be less likely to use a different browser, but it's certainly not like all of today's XP users are on IE + Safari.
You know how Mozilla recently had to delay the SHA1 deprecation in Firefox because it broke people behind shitty MITM "security" solutions? That crap often doesn't support any "modern" TLS features like SNI either.
Windows XP. Python through about 2.6. Java until quite recently. Old Perl code that I shudder to think about.
<shudder>
Oh, and some old Android. The Android version that started supporting SNI is apparently also the version that tightened up licensing, so a number of third-world forks still use the old version. And as long as most of the web works without SNI, they don't have a huge incentive to change that.
The work that some browser vendors (Mozilla, Google) are doing to privilege imperfect-TLS over no-TLS is crucial here. Also, yes: new Web work should all be done over HTTPS.
Some pretty modern languages/libraries don't have SNI built in. E.g., requests under Python 2.7 doesn't do it by default. As for browsers, I think the answer is that most do; it ends up being something like IE on XP and really old Android that gives you trouble.
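For requests on 2.7 specifically, the usual fix (as far as I know) is the security extras, which make urllib3 use pyOpenSSL and thereby get SNI:

    # pulls in pyOpenSSL, ndg-httpsclient and pyasn1; requests then sends SNI on Python 2.7
    pip install 'requests[security]'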
With AES-NI, a single modern CPU core can encrypt 10 gigabits/second and do about 50,000 public key operations per second. Cryptography is not that expensive.
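Easy enough to sanity-check on your own hardware (-evp is needed so the benchmark actually uses the AES-NI code path):

    # symmetric: bulk AES-GCM throughput via the EVP interface (uses AES-NI if present)
    openssl speed -evp aes-128-gcm
    # asymmetric: RSA-2048 signs and verifies per second
    openssl speed rsa2048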
It fails spectacularly as well. No matter the quality of the content of this article (and I'm one to agree with it), this sort of throws it out the window for me. If you aren't able to manage a functioning version of the product you are describing (especially in the Let's Encrypt era), you don't get to do a technical article on said product.
I'm not convinced it's even a great idea until something is done about a) the entirely broken CA trust model, or b) mainstream web browsers' entirely broken attitude to that trust model.
I am interested in (a). Recently I heard from a trusted source who works for a popular CDN that some top-level CAs, trusted in my browser, are known to be issuing certificates for domains over which they have no authority, for the purpose of man-in-the-middle analysis by certain parties. But there is a reluctance to pull these certificates because that would be like switching off portions of the web.
Is there any truth or more solid reference to this from those who are in the industry and know more about these topics? Rather alarming if so.
Yes, there's truth in it. It takes time for them to be caught and punished, and the Certificate Transparency effort will make it much more difficult for these scumbags to do so.
Just listen to Security Now [0]. It'll make your blood boil how, e.g., Symantec fucks up over and over again. This industry is full of incompetent idiots.
Thanks, interesting to hear. If I heard correctly, this is a case of incompetence, which is certainly a problem. Though the information I heard was about lesser-known CAs granting certificates to enable some kind of snooping.
It seems that a problem with the current implementation of SSL is that it conflates the requirement for encryption with that for identification.
Knowing that "Symantec thinks this is google.com" has limited benefit to me; I'm happier with the idea that "This is the same google.com you visited before".
In addition to the ad revenue issue r1ch mentioned (a friend of mine told me HTTPS ads provide less than half the revenue HTTP ads provide), there are a few other things I've encountered while working on HTTPS support:
- More ad network misbehavior: the last time I checked, a lot of HTTPS ads on AdSense would take up 100% CPU and lag the browser, because they would try to request an HTTP resource and fail in an infinite loop
- More failed connections: just two weeks ago, a Firefox update made HTTPS sites inaccessible on computers with certain HTTPS-scanning antiviruses installed: https://news.ycombinator.com/item?id=10854629
- Warnings when embedding HTTP images: users often embed HTTP images in community sites like forums, chatrooms, etc., which causes warnings in Firefox and makes Chrome remove its "secure" indicator
- Annoyance: I tried to get a Let's Encrypt certificate recently. My server runs Node, and it doesn't serve arbitrary files from a directory (it's a WebSocket server), so the webroot method wasn't possible; I actually had to stop the server and reconfigure the firewall not to redirect ports 80 and 443 to get the certificate.
`letsencrypt-auto` also gives very cryptic error messages: apparently "Correct zName not found for TLS SNI challenge. Found" means "please use `sudo -H`". And the standalone verification failed for an unknown reason, but at least I eventually got the Apache verification working. And I'm going to have to go through all this again in three months...
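For reference, the standalone dance I'm describing was roughly this (from memory, so treat the exact invocation as approximate):

    # the client spins up its own temporary webserver on 80/443 to answer the challenge,
    # so the real server has to be stopped first
    ./letsencrypt-auto certonly --standalone -d example.com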
Regarding the "Annoyance" bullet point, you might have a better time if you put an Apache or nginx in the front (on ports 80/443), have it serve the acme-challenges statically and proxy_pass everything else to Node running on a high port. TLS handling would then happen in the frontend server only.
Regarding the "please use sudo" issue, it's very much possible to run the letsencrypt client as a non-root user if you set your filesystem permissions carefully. Here's the letsencrypt configuration for a private webserver of mine: https://github.com/majewsky/system-configuration/blob/master...
Adding an entire piece of server software in front of something like Node or Phoenix, just to answer an HTTP request, is overkill. If you want to use LE, write a Node client to get the certs and do the renewals, and shove it on npm.
I'm not able to use CA-issued SSL certs on all my endpoints; I have to use self-signed certs. That's because many of my machines are Tor hidden services.
The CA rules won't allow any certs for .onion unless you buy an EV cert at an exorbitant price. And that especially means no free certs.
So I use self-signed certs because I want end-to-end crypto (otherwise the next-hop node can see the data). Of course my browsers throw a fit, but alas, I am indeed secure.
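(For anyone wanting the recipe, it's a one-liner; example.onion is a placeholder:)

    # self-signed cert valid for a year; no CA involved, so browsers will warn
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout key.pem -out cert.pem -subj '/CN=example.onion'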
... but I want a .onion cert authority from Let's Encrypt, and first-class network routing for all .onion addresses in Linux via /etc/resolv.conf
What you're saying is correct for connections from Tor to cleartext sites. Hidden services are end-to-end encrypted. I'm not sure which ciphers Tor uses; maybe modern browsers are better in that respect.
Something that no one seems to have mentioned: it doesn't work on shared hosting; it requires a static IP. For small sites, the additional cost of moving to a plan that supports SSL and has a static IP can be significant.
SNI breaks so many things. A lot of RSS readers, for example; saving to Pocket won't work. Bing (at least a few months ago) didn't know how to index sites using HTTPS+SNI; you had to e-mail them manually so they could whitelist your site, and presumably let some crawlers that know how to handle SNI index your site instead of the default ones.
There are so many services that have problems with SNI. For example, MailChimp's RSS-to-Email: they use the W3C Feed Validation Service, which is what breaks it; it won't let you past since you can't validate the feed. Reddit's thumbnail/suggest-title feature is broken too.
I can list 100 other services that break with SNI; it's a pain in the ass. And I'm not talking about users with old browsers.
I jumped on Google's HTTPS-everywhere train, but by the look of it, it was a bad decision. I've been using SNI+HTTPS for over a year now for a fairly popular static site/blog: 500,000 users per month. The ironic thing is that I have no complaints from users, but plenty of tech fails: the hip startups such as Pocket, or RSS readers. The users are better prepared for HTTPS everywhere than the tech people, who should have known better.
I don't need an IP address for every (static) site I own. I have one VPS on which I happily serve 6 (or 100) static sites, and I just want to use SNI and give HTTPS to users, but I can't.
---------
EDIT: Am I the only person who thinks support for SNI is necessary for HTTPS everywhere to succeed, or am I a lone wolf?
Interesting, I hadn't heard of that. Perhaps I should make a correction -- on the webhost my small business uses, purchasing a static IP is required for SSL support. I was under the impression this was a technical limitation, but I was apparently wrong.
Shared webhost here: It's one of those complicated part-technical, part-user education issues.
Technical: SNI breaks older browsers (IE on XP, older mobiles). It also requires a modern(ish) OS on the server if using cPanel, which pretty much every shared host in the world does.
User education: explaining exactly which clients will be broken is hard, etc.
#2 is basically gone now, because the users who would be broken are broken by the removal of SSLv3 and insecure crypto anyway; we've already lost them, so we can enable SNI without having to explain anything complicated to the user.
> I was under the impression this was a technical limitation
It was, but one that is becoming less and less relevant finally.
The problem comes from not knowing the host name until after the SSL/TLS channel has been negotiated, so the server doesn't know which certificate to hand out unless there is only one that is relevant for a given address.
SNI has been widely supported server-side and in up-to-date client applications for a long time. It is essentially a change to the protocol whereby the client tells the server which site it is looking for as part of the SSL/TLS negotiation, so there is no problem providing the right certificate during that process. It has been supported by cPanel since the beginning of 2014, and presumably by the competing shared-host control panels too, so if your host is up-to-date then SNI should be available (if your host is running a 2014-or-before build of the control panel, move host now, as they are likely vulnerable to a variety of security issues).
Unfortunately "up-to-date client applications" has been the sticking point: no version of Internet Explorer running on XP supports SNI, nor do older mobile browsers. Your site would still work on those older browsers but the user would get a security warning on each visit which is not desirable. Browsers that old are getting less common, and are getting locked out already because of protocol/cypher changes intended to mitigate recently discovered attacks against SSL/TLS, so SNI is an increasingly practical option (how practical is your decision: can you afford to potentially scare off that remaining small percentage of potential users? Personally I say "definitely yes, people that out-of-date are a nightmare to support anyway so my competitors can have them").
Of course IPv6 also fixes the issue too, but adoption of that lags behind the adoption of SNI compatible browsers by a long way.
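You can see the difference with openssl s_client; without -servername no SNI extension is sent, so a server hosting several sites on one IP can only hand back its default certificate:

    # no SNI: the server must guess, and returns the default cert for that IP
    openssl s_client -connect example.com:443
    # with SNI: the server sees the hostname during the handshake and picks the right cert
    openssl s_client -connect example.com:443 -servername example.com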
Your comment shows another problem: Myths stay around.
This problem was technically fixed 13 years ago, yet I still hear this claim on a regular basis. (Unfortunately it is still a problem if you have a large user basis with legacy clients, meaning mostly Windows XP and very old Androids. But for most people that shouldn't matter.)
I host "small sites" with a shared hosting provider. Those clients who require or desire it, don't really baulk at the SSL cert and dedicated IP cost. Typically it's around $100/year.
What is the Full SSL feature you offer? From what I understand, it means supporting weaker ciphers and older SSL/TLS versions. Is that the case?
The Full SSL feature means serving a fallback certificate for non-SNI-capable browsers.
Internet Explorer on Windows XP and the Android 2.3 browser don't support SNI-based SSL, and our Full SSL solution ensures that even these older browsers can access the site over HTTPS without any warnings...
Isn't it the contrary? If the Internet were practically unavailable in China except for domestic resources, because every single server used TLS for absolutely everything, would the Great Firewall survive?
Yes, it would, and it does. Remember that all the major web services that take up most of people's online time in the US and Europe are already banned in China. There is no Netflix, Hulu, Facebook, Google, YouTube, Twitter, Snapchat, or WhatsApp working at all, or reliably, within the firewall.
The quality of the content inside the firewall is good enough that the majority of the population doesn't care in the least about what is happening on the web outside the firewall.
The good viral videos and memes get translated and spread on local services, and plenty are already created within.
Cost to the end user: $10-200/year; even more on Heroku ($20/month).
Configuration/setup sucks: seriously, you are supposed to run a bunch of Unix commands and copy something to a server; the whole process isn't user friendly.
Limited end-user benefit: until recently, business owners and users haven't thought or cared much about information security outside of online purchases.
Until it is cheap/free and one-click easy, it won't be everywhere.
There are some issues that still prevent using their certificates in production¹. For example, this one prevents Windows XP clients (not just IE, but anything that uses the system X.509 libraries) from connecting: https://github.com/letsencrypt/letsencrypt/issues/1660
Still there's WoSign (free for everyone) and StartSSL (free for non-commercial use).
____
¹) Only where 3-5% of more obscure clients still matter.
Ads are one reason, some big networks still don't fully support HTTPS so publishers get a reduced ad pool (and potentially lower earnings as a result) if they switch over to HTTPS.
"Why isn't HTTPS everywhere yet?": Because it makes absolutely no sense for the vast majority of online content. Email in https ? Sure. Reading news sites in https ? Accessing RFC in https ? My favorite online recipe site in https ? A total waste of money and CPU.
Oh, and sure, the problem has been "solved" for the few geeks using the latest browsers that accept Let's Encrypt certificates. Sure.
> Because it makes absolutely no sense for the vast majority of online content.
So you use postcards for "the vast majority" of your snail mail (postal service), right? Because envelopes[1] make "absolutely no sense"?
Besides the already-mentioned security issue of someone modifying the content as a MITM (something ISPs are already doing[2]), this is really just another version of the "If you have nothing to hide..." fallacy. You do have things to hide, because you shouldn't let every node that handles your traffic compile a database of your browsing activities.
> Because it makes absolutely no sense for the vast majority of online content.
Hey, not so fast with conclusions, please.
Do you want an ISP filling your recipe reading with their advertisements? Nope? Or a random dude spoofing a Starbucks WiFi hotspot and serving you malware on what you think is a trusted download site? Still no?
That's what TLS is for. Not for random paranoid geeks or "just email and online payments" security.
Oh, but yes, Let's Encrypt is not yet completely production-grade and still has some compatibility issues with older systems. Currently the only non-paid options I know of are WoSign and StartSSL.
If there is a fraction of a dollar to be made per instance by replacing references to (for example) Nutella with references and referral links to some other spread, I'm sure it would happen somewhere. It might change the outcome of the recipe in terms of taste/texture, or worse, make people ill if a recipe avoiding particular allergens ends up with the wrong product being recommended.
This does not make sense, I'm not sure what you're trying to say. The cross-sign works on all major browsers. Domain validation as practised by Let's Encrypt is in accordance with CA/B Baseline Requirements, and they're not the first CA to offer domain ownership verification via HTTP.
I don't think you understand how the CA system or Let's Encrypt works.
IdenTrust signed Let's Encrypt's intermediate certificate[1] using their trusted root certificate. This intermediate certificate has the CA flag set, so other certificates signed by it are to be trusted too. Intermediate certificates are used by all CAs.
When you run the client on your server, the client submits a CSR to Let's Encrypt's CA server. Let's Encrypt will then verify domain ownership (via HTTP, TLS or DNS[2]). This is done in accordance with CA/B Baseline Requirements, and their practice doesn't differ from other CAs who also offer HTTP[3] or DNS verification. If ownership can be verified, Let's Encrypt will then sign your certificate using their trusted intermediate certificate and send it back to you. It is thus not self-signed, but signed by a trusted CA key.
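The moving parts are all bog-standard X.509, something like this (filenames illustrative):

    # the client generates a key pair and a CSR naming your domain
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout example.key -out example.csr -subj '/CN=example.com'
    # after domain validation the CA returns a signed cert; verify it chains to a trusted root
    openssl verify -untrusted intermediate.pem cert.pem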
WoSign does not issue a cert without an email verification.
You can look on their website. There's a nice big box to enter an email address to verify your domain and get the free cert.
Sorry -- "Let's Encrypt" with file verification is not DV.
It's just not DV. Simple really...
And it's gonna break the web. Like Netlify, a few posts up, using Rackspace and "Let's Encrypt". Ehh. The first few months will probably be fine, and then everything is gonna tank.
Until the smart ones realize that ACME is fine, as long as you remove the "or file verification" text.
You seem to think that a certificate authority is obligated to verify ownership of a domain by email (for DV certs). You are mistaken, and that's not the case. The mechanisms supported by Let's Encrypt to authenticate domains are just as good as email verification.
> For each Fully-Qualified Domain Name listed in a Certificate, the CA SHALL confirm that[] the Applicant[] has control over the FQDN by: [...]
> 4. Communicating with the Domain’s administrator using an email address [...] ‘admin’,
‘administrator’, ‘webmaster’, ‘hostmaster’, or ‘postmaster’ [...] followed by the Domain Name[];
> 6. Having the Applicant demonstrate practical control over the FQDN by making an agreed-upon change to information found on an online Web page identified by a uniform resource identifier containing the FQDN
> 7. Using any other method of confirmation, provided that the CA maintains documented evidence that the method of confirmation establishes that the Applicant is the Domain Name Registrant or has control over the FQDN to at least the same level of assurance as those methods previously described.
The approach used by Let's Encrypt meets this standard (of which email is just one of several options), and the methods they use are just as secure as using email. They specifically offer approach 6 above, which they call Simple HTTP, as well as other approaches that comply with 7, as outlined in the ACME spec (https://letsencrypt.github.io/acme-spec/#rfc.section.7). Involving email in the verification process makes certificate provisioning harder to automate and adds no additional security.
I regularly purchase SSL certificates, about 10 or so a year, from Gandi. Their validation process is identical to Let's Encrypt's, minus the automation. All someone needs to get a cert is control of http://yourdomain.tld/<some file> for the brief verification period. A DV cert is automatically issued without an email confirmation of any sort to the domain. I'm certain, because I've set up SSL for domains that have no MX records nor any mail servers at the apex record. http://wiki.gandi.net/en/ssl/dcv
DV certs make no promises about the identity of the site operator, just that communications are encrypted. I remember that a decade ago, buying certificates from Thawte required faxing them business records and ensuring that the D&B records matched the domain registration, while paying $450/year for a cert for a single hostname. The process was thorough, but wholly unnecessary for the vast majority of sites that weren't directly taking payment information.
Our intermediate is signed by ISRG Root X1. However, since we are a very new certificate authority, ISRG Root X1 is not yet trusted in most browsers. In order to be broadly trusted right away, our intermediate is also cross-signed by another certificate authority, IdenTrust, whose root is already trusted in all major browsers. Specifically, IdenTrust has cross-signed our intermediate using their DST Root CA X3.
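Anyone can check which chain a server actually presents (example.com is a placeholder); for a Let's Encrypt site today, the intermediate's issuer line should show IdenTrust's DST Root CA X3:

    # print the certificate chain the server sends, with subject (s:) and issuer (i:) lines
    openssl s_client -connect example.com:443 -servername example.com -showcerts < /dev/null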