Why isn't HTTPS everywhere yet? (webappsec-test.info)
118 points by yeukhon on Jan 18, 2016 | 123 comments



Cost is going to be taken care of by Let's Encrypt.

I am surprised the complexity of setting up SSL isn't mentioned in the article.

First, the tools are complex to use and rely on various incompatible formats for storing keys. On Linux you pretty much have to rely on cryptic command lines; Windows is slightly simpler. And you need yet more tools to convert certificates between the different formats (for instance, to use an IIS certificate with FileZilla).
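
That conversion alone typically means a round trip through openssl. A minimal sketch of the PFX-to-PEM case, with illustrative file names:

    # extract the private key and the certificate from an IIS-style .pfx bundle
    # into the separate PEM files FileZilla expects
    openssl pkcs12 -in site.pfx -nocerts -nodes -out site.key
    openssl pkcs12 -in site.pfx -clcerts -nokeys -out site.crt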

Then you have to deal with the complexity of the algorithms used, the fact that Chrome won't accept SHA-1 anymore, and the fact that HTTP/2 is very picky about the cipher suites used.

What is a best practice today breaks tomorrow. For instance, the following script, written only a couple of years ago, shows how to set up Windows to get an A on SSL Labs:

https://www.hass.de/content/setup-your-iis-ssl-perfect-forwa...

But with IIS 10 / HTTP/2, this list of ciphers is incompatible with Chrome. I found a list that works here:

https://code.google.com/p/chromium/issues/detail?id=529994

but I am sure it will break older browsers.

Etc, etc, etc. This is hard work. Until setting up SSL is a simple box to tick in IIS or a single setting to change in Apache, only developers who really care about security and are really motivated will deal with this shit.


The cipher list is effectively a big issue. The good old Sun Fortran compiler had a -fast flag that was smart enough to detect the hardware and be as fast as possible while respecting IEEE maths (as far as I can remember).

I would love a -secure flag to just use the most secure option of the current version of the software even at the cost of X years of backward compatibility at the tool/client/browser level.

For example for nginx instead of something like:

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
You would get:

    ssl_ciphers SECURE;


> I would love a -secure flag to just use the most secure option of the current version of the software even at the cost of X years of backward compatibility at the tool/client/browser level.

> ssl_ciphers SECURE;

That's actually pretty close to nginx's default (unpatched) config (http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_c...), except openssl calls it HIGH.

    ssl_ciphers HIGH:!aNULL:!MD5;

How high HIGH is depends on your version of OpenSSL; you probably don't want it for, say, RHEL 5's OpenSSL, which some places may still be using (OpenSSL has a long lifecycle, and LTS Linux distros extend it further).
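
You can check what HIGH actually expands to on a given OpenSSL build:

    # list the cipher suites that HIGH:!aNULL:!MD5 selects on this system
    openssl ciphers -v 'HIGH:!aNULL:!MD5'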

https://www.openssl.org/docs/manmaster/apps/ciphers.html

A better implementation of the same concept is Mozilla's SSL Config Generator (which is what CertSimple points customers to - we also maintain the node port): it has Modern/Intermediate/Old profiles with a clear idea of the compatibility you're losing with each: https://mozilla.github.io/server-side-tls/ssl-config-generat...


Or they should package it in versions. "TLS 1.3" would come with a standard list of ciphers and other parameters. All you would have to do is tell your system "Disable SSL 3.0", "Enable TLS 1.3", etc.

Right now it feels like trying to fly a B747.


That does happen! Enable TLS 1.2 only and you're in pretty good shape. TLS 1.3 will drop more bad stuff and add a bit more good stuff.
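
In nginx terms that's a single directive, as a sketch (it will lock out clients that can't speak TLS 1.2):

    ssl_protocols TLSv1.2;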


It definitely helps, but if you're trying to keep up with the latest security, you'd be susceptible to Logjam (due to DHE-supporting ciphers in the default TLS 1.2 list).

There's the Mozilla SSL Configuration Generator which helps with the madness, but they still aren't keeping up with the latest recommendations.


It can be as simple as that.

I just set up a site a few days ago with Caddy - http://caddyserver.com. Getting certs for multiple domains was a matter of adding a single line for tls to the config; it generates a key locally, requests the cert, and installs it transparently on the first request. Pretty smooth. I hope they will autorenew without intervention - haven't tested that bit yet - but so far it was painless.
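
For reference, the whole Caddyfile looked roughly like this (domain, webroot, and email are placeholders):

    example.com {
        root /var/www/example
        tls you@example.com
    }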

I imagine most mainstream servers will incorporate this sort of smooth tls flow at some point now that services like letsencrypt are available to base it on. I'm considering moving web server for other sites (usually use nginx) for this single feature.


I'll look into it, but to illustrate my point: when clicking on your link I get an invalid certificate error in Chrome. Granted, I am using an old browser (Win XP / Chrome 40), but it illustrates my point that it is hard to get SSL to work well.


Wasn't XP EOL'd in 2014 or so, after mainstream support ended in 2009? That's long enough ago that many websites don't test on XP any more, or even work there (like sites which require SNI, AFAIK). I'm happy to drop XP users, I'm afraid, given the OS has been unsupported for some time (I understand some workplaces still mandate it).


Following in the footsteps of IE 6, Windows XP refuses to disappear for sometimes unbelievable reasons.

It was revealed in a talk at "Pas Sage en Seine 2015"[1] that almost every French bank's website uses broken security because their ATMs are still running Windows XP and go through the same web servers.

[1]: http://passageenseine.org/


As you suggest, I am not using XP by choice...


Why is your employer forcing you to use a severely outdated system? Do you often come across broken or very slow websites (I imagine the hardware is from the same era as the OS)?


Hardware is high end 2009 workstations.

We as an organisation like to take our time. We wouldn't welcome a new joiner with a PC on his desk; much more fun to see him struggle for weeks with our internal IT approval processes. We happily pay millions to Microsoft for the privilege of using Windows XP / 32-bit systems a little bit longer (I now have almost more RAM in my mobile phone than in my workstation). In fact I suspect many of the critical infrastructures of the organisation are running on COBOL. We'd probably still use CRT screens if our IT department could source them. We are certainly still using BlackBerries (I hope they will stash enough devices before RIM goes bust).


Eugh... best of luck :D


Let's Encrypt is still completely unusable. We fought useless error messages for two hours for our recent renewal, gave up, and renewed with our old CA. Old CA was a very painless process by comparison.


Still no mention of ads. Using HTTPS means you have to load ads over HTTPS, and a huge number of ad networks are not reachable over HTTPS, while another large number of assets/tags are hard-coded to fetch HTTP resources. For any ad-supported website, HTTPS will cause a significant loss of ad revenue.

Given how long it's taking the industry to transition away from Flash I'm not holding my breath about this being fixed any time soon.


I'm surprised by this. Both Google and Amazon forced the adtech provider I work for to move to HTTPS and fix all the associated errors mid last year.


Agreed, Google forced us to 100% HTTPS. I can't imagine there's that much difference in ad revenue anymore.


Facebook (in addition to Google, as mentioned above) has already required all 3rd party ads to be served over https both on their core site and on all apps on their desktop platform.


This is a big reason. Unfortunately the industry is full of old and/or poor tech that is already fragile as it is.

All modern ad networks (like ours) are completely HTTPS ready but the transition will be slow due to momentum with existing vendors.


Solution: block HTTP ads.


Solution to what? Publishers still have to enable HTTPS on their sites and they'll only do that once all the resources on the page are HTTPS compatible.



While I agree with the gist of that article/presentation, it's more on the silly side compared to many other more detailed and accurate pieces about the growing weight of the modern web.

While blocking ads will help some (and certainly some ad providers have gone way too far with their bandwidth/CPU requirements), the web is just a richer media experience than before. The mainstream does not just want text, and 80-90% of the page weight ends up being images, even on that article itself. There are other advancements - like HTTP/2, modern CDNs, and better image formats like FLIF - that will solve this problem much more effectively without going back to a textual world.

Still don't see what any of this has to do with HTTPS encryption everywhere though...


Well, even Amazon (affiliate) ads serve their images over HTTP. So for whatever reason, even the owner of AWS can't serve its ads over HTTPS.

Whether you need to use their widgets in the first place is another question.


The reasons are as described - a big company with outdated or poor tech policies. Just because AWS is advanced doesn't mean every use of it is. Clearly Cloudfront supports HTTPS and has for a long time so they can enable it if they want to.


The advertisement problem is a subset (albeit a rather large one) of mixed content; the article does make some references to it.


Another big blocker is cost.

For example, one of the biggest CDNs in the world (I'm looking at you, Akamai) charges dramatically more for delivering content over HTTPS. If you're delivering video content at scale, the difference between HTTP and HTTPS delivery can be many millions of dollars a year.

But why not use a different CDN, say Cloudfront, which prices HTTP and HTTPS the same? Well, simple: the same problem. Cloudfront is many times more expensive than other CDNs at scale.

Really, we need to apply pressure to all CDNs to even out their HTTPS pricing (it's not just Akamai...)


This isn't helped by the subset of CDN providers who use enterprise sales models, where they try to bundle services and set pricing based on your perceived ability to pay. It's always possible to negotiate a better deal, but everyone has to burn the time needed to do that independently and can't publicly share the results.


BTW, Cloudfront also prices HTTPS a little bit higher than HTTP (on per request pricing, not on bandwidth).


Also, Heroku's price for serving HTTPS with your own domain is absurdly high.

It basically means there is no free version of Heroku for serious projects, which is OK. But taken in isolation, the price tag looks pretty bad...


Unless you're doing a lot of large files like media or downloads, I always recommend CloudFlare. It's a great service and there are no bandwidth or request costs.


MaxCDN serves us pretty well (a few TB/mo). It has been much much cheaper than Cloudfront ever was (mostly due to not charging by number of requests).


I'd imagine HTTPS is significantly more expensive to host than plain HTTP, due to the CPU requirements of the crypto involved.


Maybe at the CDN level, where there's lots of caching, it's different, but for regular hosting, apps are primarily IO-bound, so encryption is essentially "free". From http://www.imperialviolet.org/2010/06/25/overclocking-ssl.ht...:

> In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.


At the time that was written, only the frontends did TLS---remember, there wasn't universal strong encryption or authentication within Google's back-end until after the Snowden leaks.


Nope. The symmetric crypto is basically free. The asymmetric crypto is cheap.

IPv4 addresses cost. Most big sites can't quite go SNI-only yet. Soon!

Customer support costs. Why would TLS involve more customer support? Because it's a technology designed only to break connections. Ideally that's only the adversarial connections---but much like the TSA, a mis-designed authentication scheme can cause great stress and drama.

I do think the world will be all-TLS before too long; the parts that aren't will probably not be HTTP over TCP (e.g., content addressable networking).


> IPv4 addresses cost. Most big sites can't quite go SNI-only yet. Soon!

Why can't they go SNI? It's not like IE6 or Netscape are relevant any more; is there some other issue?


No SNI on Windows XP (IE6, IE7, IE8, Safari), nor Android 2.3 Browser, nor BlackBerry. Windows XP still has 4-5% usage share on the web, which is 1 in 20 people. Lots and lots of low-end and older Android handsets were on 2.3 because 4.0 had new hardware requirements. Combined, it's a relevant number of people for large sites.


And highly dependent on the region and demographics of the visitors, so the numbers can be way higher (or way lower if you are lucky).


> Windows XP still has 4-5% usage share on the web, which is 1 in 20 people.

Chrome and FF together have a market share of about 70%, so roughly speaking, only 30% of those XP users can be expected to still be using IE or Safari. The percentage might be a touch higher, as users left on XP might be less likely to use a different browser, but it's certainly not like all of today's XP users are on IE + Safari.


> nor BlackBerry

Maybe not for the old blackberries, but SNI works on BB10.


You know how Mozilla recently had to delay the SHA1 deprecation in Firefox because it broke people behind shitty MITM "security" solutions? That crap often doesn't support any "modern" TLS features like SNI either.

See for example here: https://palant.de/2014/05/27/proxies-breaking-up-ssl-connect...


Windows XP. Python through about 2.6. Java until quite recently. Old Perl code that I shudder to think about.

<shudder>

Oh, and some old Android. The Android version that started supporting SNI is apparently also the version that tightened up licensing, so a number of third-world forks still use the old version. And as long as most of the web works without SNI, they don't have a huge incentive to change that.

The work that some browser vendors (Mozilla, Google) are doing to privilege imperfect-TLS over no-TLS is crucial here. Also, yes: new Web work should all be done over HTTPS.


Some pretty modern languages/libraries don't have SNI built in - e.g. requests under Python 2.7 doesn't do it by default. As for browsers, I think the answer is that most do - it ends up being something like IE on XP and really old Android that gives you trouble.
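
A common workaround at the time was installing the pyOpenSSL-based extras, which patch SNI support into requests on Python 2 (a sketch, not the only option):

    pip install 'requests[security]'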


Chromecast for example does not support SNI, which is crazy.


With AES-NI, a single modern CPU core can encrypt 10 gigabits/second and do about 50,000 public key operations per second. Cryptography is not that expensive.
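
You can benchmark both on your own hardware with OpenSSL's built-in speed test:

    openssl speed -evp aes-128-gcm   # bulk symmetric encryption (uses AES-NI where available)
    openssl speed rsa2048            # RSA sign/verify operations per second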


AES isn't the only algorithm used in TLS handshakes.


That is why I also said 50,000 public key operations per second.


I like the irony of that page being served over plain HTTP :-)


Ha! And HTTPS doesn't seem to work for me at all.


https://www.ssllabs.com/ssltest/analyze.html?d=http%3A%2F%2F...

    Assessment failed: No secure protocols supported
It fails spectacularly as well. No matter the quality of the content of this article (and I'm one to agree with it), this sort of throws it out the window for me. If you aren't able to manage a functioning version of the product you are describing (especially in the Let's Encrypt era), you don't get to do a technical article on said product.


one man's irony is another man's satire...


I'm not convinced it's even a great idea until something is done about a) the entirely broken CA trust model or b) mainstream web browsers' entirely broken attitude to that trust model.


I am interested in (a). Recently I heard from a trusted source who works for a popular CDN that some top-level CAs, who are trusted in my browser, are known to be issuing certificates for domains they have no authority over, for the purpose of man-in-the-middle analysis by certain parties. But there is a reluctance to pull these certificates because that would be like switching off portions of the web.

Is there any truth or more solid reference to this from those who are in the industry and know more about these topics? Rather alarming if so.


Yes, there's truth in it. It takes time for them to be caught and punished, and the Certificate Transparency effort will make it much more difficult for these scumbags to get away with it.


Just listen to Security Now [0]. It'll make your blood boil how, e.g., Symantec fucks up over and over again. This industry is full of incompetent idiots.

[0] https://twit.tv/shows/security-now (episode 532 is particularly revealing)


Thanks, interesting to hear. If I heard correctly this is a case of incompetence -- which is certainly a problem. Though the information I heard was about lesser-known CAs granting certificates to enable some kind of snooping.

It seems that a problem with the current implementation of SSL is that it conflates the requirement for encryption with that for identification.

Knowing that "Symantec thinks this is google.com" has limited benefit to me; I'm happier with the idea that "This is the same google.com you visited before".


In addition to the ad revenue issue r1ch mentioned (a friend of mine told me HTTPS ads provided less than half the revenue HTTP ads provide), there are a few other things I've encountered while working on HTTPS support:

- More ad network misbehavior: the last time I checked, a lot of HTTPS ads on AdSense would take up 100% CPU and lag the browser, because they would try to request an HTTP resource and fail in an infinite loop

- More failed connections: just two weeks ago, a Firefox update made HTTPS sites inaccessible on computers with certain HTTPS-scanning antiviruses installed: https://news.ycombinator.com/item?id=10854629

- Browser bugs: Chrome doesn't let you HTML5-drag-and-drop data-URIs from HTTPS: https://github.com/Zarel/Pokemon-Showdown-Client/commit/53a1....

- Warnings when embedding HTTP images: users often embed HTTP images in community sites like forums, chatrooms, etc., which causes warnings in Firefox and makes Chrome remove its "secure" designators

- Annoyance: I tried to get a Let's Encrypt certificate recently. My server runs Node, and it doesn't serve arbitrary files from a directory (it's a WebSocket server), so the webroot method wasn't possible, and I actually had to stop the server and reconfigure the firewall not to redirect ports 80 and 443 to get the certificate.

`letsencrypt-auto` also gives very cryptic error messages: apparently "Correct zName not found for TLS SNI challenge. Found" means "please use `sudo -H`". And the standalone verification failed for an unknown reason, but at least I eventually got the Apache verification working. And I'm going to have to go through all this again in three months...
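
For reference, the standalone method needs ports 80/443 free, which is why the server had to be stopped. A sketch of the invocation (exact flags varied between client versions, and the domain is a placeholder):

    sudo -H letsencrypt-auto certonly --standalone -d example.com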


Full URL (the one above is truncated) for the drag and drop issue:

https://github.com/Zarel/Pokemon-Showdown-Client/commit/53a1...


Regarding the "Annoyance" bullet point, you might have a better time if you put Apache or nginx in front (on ports 80/443), have it serve the acme-challenges statically, and proxy_pass everything else to Node running on a high port. TLS handling would then happen in the frontend server only.
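
A minimal sketch of that setup for nginx (domain, challenge webroot, and the Node port are assumptions; a similar server block on 443 with ssl_certificate directives would terminate TLS):

    server {
        listen 80;
        server_name example.com;

        # serve ACME challenges statically...
        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;
        }

        # ...and hand everything else to the Node WebSocket server,
        # assumed here to listen on local port 8000
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }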

Regarding the "please use sudo" issue, it's very much possible to run the letsencrypt client as a non-root user if you set your filesystem permissions carefully. Here's the letsencrypt configuration for a private webserver of mine: https://github.com/majewsky/system-configuration/blob/master...


Adding an entire piece of server software in front of something like Node or Phoenix, just to answer an HTTP request, is overkill. If you want to use LE, write a Node client to get the certs and do the renewals, and shove it on npm.


A: It doesn't solve any problems that people think they have, and causes others.


I'm not able to use CA-issued SSL certs on all my endpoints; I have to use self-signed certs. That's because many of my machines are Tor hidden services.

The CAs won't allow any certs for .onion unless you buy, at an exorbitant price, an EV2 cert. And that especially means no free certs.

So I do use self-signed certs, because I want end-to-end crypto (without it, the next node along can see the data). Of course my browsers throw a fit, but alas, I am indeed secure.
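
For reference, generating such a self-signed cert is an openssl one-liner (file names and lifetime are illustrative):

    openssl req -x509 -newkey rsa:2048 -nodes -keyout onion.key -out onion.crt -days 365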

... but I want a .onion cert authority via Let's Encrypt, and first-class network routing for all .onion addresses in Linux via /etc/resolv.conf


I always thought .onion sites were end-to-end encrypted. "The rendezvous point simply relays (end-to-end encrypted) messages from client to service and vice versa," according to https://www.torproject.org/docs/hidden-services.html.en. There's even a ticket to add a padlock to indicate this in Tor Browser: https://trac.torproject.org/projects/tor/ticket/8686


This is correct: hidden services are fully encrypted from the user's browser to the Tor relay running the hidden service.


My understanding was that the hop prior to the final destination saw the cleartext of the destination's packets.

SSL was the way to make sure that end to end was encrypted so that nobody could read the data (mainly login/password info).

I think the only way to see is to be part of the tunnel architecture and see if my thoughts are right or wrong.


What you're saying is correct for connections from tor to cleartext sites. Hidden services are end-to-end encrypted. I'm not sure which cipher tor uses; maybe modern browsers are better in that respect.


Something that no one seems to have mentioned: it doesn't work on shared hosting; it requires a static IP. For small sites, the additional cost of moving to a plan that supports SSL and has a static IP could be significant.


It doesn't require a static IP; you can use SNI to use HTTPS on a shared IP:

https://en.wikipedia.org/wiki/Server_Name_Indication
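
With nginx, for example, name-based HTTPS hosts on one IP are just multiple server blocks; the certificate is selected by the SNI hostname (hostnames and paths below are placeholders):

    server {
        listen 443 ssl;
        server_name a.example.com;
        ssl_certificate     /etc/ssl/a.example.com.crt;
        ssl_certificate_key /etc/ssl/a.example.com.key;
    }
    server {
        listen 443 ssl;
        server_name b.example.com;
        ssl_certificate     /etc/ssl/b.example.com.crt;
        ssl_certificate_key /etc/ssl/b.example.com.key;
    }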

SNI was already mentioned an hour ago, too:

https://news.ycombinator.com/item?id=10928036


SNI breaks so many things. A lot of RSS readers; for example, saving to Pocket won't work. Bing (at least a few months ago) didn't know how to index sites using HTTPS+SNI. You had to e-mail them manually so they could whitelist your site, probably letting some crawlers that know how to handle SNI index your site instead of the default ones.

There are so many services that have problems with SNI. For example, MailChimp's RSS-to-Email: it uses the W3C Feed Validation Service, which is why it breaks - it won't let you past since you can't validate the feed. Reddit's thumbnail/suggest-title feature is broken too.

I can list 100 other services that break with SNI; it's a pain in the ass. And I'm not talking about users with old browsers.

I jumped on Google's HTTPS-everywhere train, but by the look of it, it was a bad decision. I've been using SNI+HTTPS for over a year now for a fairly popular static site/blog: 500,000 users per month. The ironic thing is that I have no complaints from users, but plenty of tech fails, from hip startups such as Pocket to RSS readers. The users are better prepared for HTTPS everywhere than the tech people, who should have known better.

I don't need an IP address for every (static) site I own. I have one VPS and I happily serve 6 (or 100) static sites and I just want to use SNI and give HTTPS to users, but I can't.

---------

EDIT: Am I the only person who thinks support for SNI is necessary for HTTPS everywhere to succeed, or am I a lone wolf?


Interesting, I hadn't heard of that. Perhaps I should make a correction -- on the webhost my small business uses, purchasing a static IP is required for SSL support. I was under the impression this was a technical limitation, but I was apparently wrong.


Shared webhost here: It's one of those complicated part-technical, part-user education issues.

Technical: SNI breaks older browsers (IE on XP, older mobiles). It also requires a modern(ish) OS on the server if using cPanel, which pretty much every shared host in the world is.

User Education: Explaining exactly which clients will be broken is hard, etc.

#2 is basically gone now, because the users who would be broken are broken by the removal of SSLv3 and insecure crypto anyway - so we've already killed them off and can do SNI without having to explain anything complicated to the user.


> I was under the impression this was a technical limitation

It was, but one that is becoming less and less relevant finally.

The problem comes from not knowing the host name until after the SSL/TLS channel has been negotiated, so the server doesn't know which certificate to hand out unless there is only one that is relevant for a given address.

SNI has been widely supported server-side and in up-to-date client applications for a long time. It is essentially a change to the protocol whereby the client tells the server which site it is looking for as part of the SSL/TLS negotiation, so there is no problem providing the right certificate during that process. It has been supported by cPanel since the beginning of 2014, and presumably by the competing shared-host control panels too, so if your host is up-to-date then SNI should be available (if your host is running a 2014-or-before build of the control panel, move host now, as they are likely vulnerable to a variety of security issues).
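
You can see the difference with openssl's s_client (hostname is a placeholder):

    # without SNI: the server must guess which certificate to send
    openssl s_client -connect example.com:443
    # with SNI: the desired hostname is part of the handshake
    openssl s_client -connect example.com:443 -servername example.com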

Unfortunately "up-to-date client applications" has been the sticking point: no version of Internet Explorer running on XP supports SNI, nor do older mobile browsers. Your site would still work on those older browsers, but the user would get a security warning on each visit, which is not desirable. Browsers that old are getting less common, and are getting locked out already because of protocol/cipher changes intended to mitigate recently discovered attacks against SSL/TLS, so SNI is an increasingly practical option (how practical is your decision: can you afford to potentially scare off that remaining small percentage of potential users? Personally I say "definitely yes, people that out-of-date are a nightmare to support anyway, so my competitors can have them").

Of course IPv6 also fixes the issue too, but adoption of that lags behind the adoption of SNI compatible browsers by a long way.


Static IP? Do you mean "one certificate per IP, unless you're okay with not supporting clients that don't use SNI"?


Your comment shows another problem: myths stick around.

This problem was technically fixed 13 years ago, yet I still hear this claim on a regular basis. (Unfortunately it is still a problem if you have a large user base with legacy clients, meaning mostly Windows XP and very old Androids. But for most people that shouldn't matter.)


I host "small sites" with a shared hosting provider. Those clients who require or desire it, don't really baulk at the SSL cert and dedicated IP cost. Typically it's around $100/year.


We just launched support for free SSL on all our plans at https://www.netlify.com, doing what we can to help get HTTPS everywhere.

It won't help with mixed content or broken ad providers, but at least it gives people a place to put a project on a custom domain for free with HTTPS :)


What is the Full SSL feature you offer? From what I understand, it means supporting weaker ciphers as well as SSL and older TLS versions. Is that the case?


The Full SSL feature means using a fallback certificate for non-SNI-capable browsers.

Internet Explorer on Windows XP and the Android 2.3 browser don't support SNI-based SSL, and our Full SSL solution ensures that even these older browsers can access the site over HTTPS without any warnings...


HTTPS (in the current form) adds a dependency on external authority. This breaks the very fundamental principle of the web.



Still an external authority.


Sure. And why would I trust anyone who says "I'm a good guy, trust me" without any confirmation?


When I lived in China I was pretty happy the HTTPS movement hadn't won yet. Sites that forced me onto a secure connection were basically useless.

The Great Firewall doesn't appreciate encrypted connections, and will usually grind to a halt while deciding whether to let you get the content.


Isn't it the contrary? If the Internet were practically unavailable in China except for domestic resources, because every single server used TLS for absolutely everything, would the Great Firewall survive?


Yes, it would, and it does. Remember that all the major web services that take up most of people's online time in the US and Europe are already banned in China. There is no Netflix, Hulu, Facebook, Google, YouTube, Twitter, Snapchat, or WhatsApp working at all, or reliably, within the firewall.

The quality of the content inside the firewall is good enough that the majority of the population doesn't care in the least about what is happening on the web outside the firewall.

The good viral videos and memes get translated and spread on local services, and plenty are already created within.


The Great Firewall only applies to traffic coming in and out of China. Domestic Internet surveillance and filtering is handled by Golden Shield.

If all Chinese domestic traffic used TLS, nothing would change. The PRC Government can still knock on the door if your company is causing trouble.


Three obvious reasons...

Cost to the end user: $10-200/year - even more on Heroku ($20/month).

Configuration/setup sucks - seriously, you are supposed to run a bunch of Unix commands, copy something to a server, and the whole process isn't user-friendly.

Limited end-user benefit - until recently, business owners and users haven't thought or cared much about information security outside of online purchases.

Until it is cheap/free and one-click easy, it won't be everywhere.


"Cost to the end user $10-200/year"

What? How have you not heard of Let's Encrypt by now?


There are some issues that still prevent using their certificates in production¹. For example, this one prevents Windows XP clients (not just IE, but anything that uses the system X.509 libraries) from connecting: https://github.com/letsencrypt/letsencrypt/issues/1660.

Still, there's WoSign (free for everyone) and StartSSL (free for non-commercial use).

____

¹) Only where 3-5% of more obscure clients still matter.


Ads are one reason: some big networks still don't fully support HTTPS, so publishers get a reduced ad pool (and potentially lower earnings as a result) if they switch over to HTTPS.


Because many websites are ad-supported, and few ad networks have HTTPS ads ready and configured properly, so the ads do not display.

Even on AdSense, inventory drops for an HTTPS-only website.


I work for a large bank, and a lot of "lesser known" SSL sites (e.g. blogs that have a cert) are just straight-up blocked by the Blue Coat proxy.


"Why isn't HTTPS everywhere yet?": Because it makes absolutely no sense for the vast majority of online content. Email in https ? Sure. Reading news sites in https ? Accessing RFC in https ? My favorite online recipe site in https ? A total waste of money and CPU.

Oh, and sure, the problem has been "solved" for the few geeks using the latest browsers that accept Let's Encrypt certificates. Sure.


> Because it makes absolutely no sense for the vast majority of online content.

So you use postcards for "the vast majority" of your snail (postal service) mail, right? Because envelopes[1] make "absolutely no sense"?

Besides the security issue that has already been mentioned - someone modifying the content as a MITM, something ISPs are already doing[2] - this is really just another version of the "if you have nothing to hide..." fallacy. You do have things to hide, because you shouldn't let every node that handles your traffic compile a database of your browsing activities.

[1] https://www.philzimmermann.com/EN/essays/WhyIWrotePGP.html

[2] e.g. "X-UIDH" and the various ISPs that inject javascript for various reasons.


Actually, yes. The vast majority of snail mail I receive is open catalogs and other junk mail. Its content is entirely public.


> Because it makes absolutely no sense for the vast majority of online content.

Hey, not so fast with conclusions, please.

Do you want an ISP filling your recipe reading with their advertisements? Nope? Or a random dude spoofing a Starbucks WiFi hotspot and serving you malware on what you think is a trusted download site? Still no?

That's what TLS is for. Not just for random paranoid geeks or "email and online payments" security.

Oh, but yes, Let's Encrypt is not yet completely production-grade and still has some compatibility issues with older systems. Currently the only non-paid options I know of are WoSign and StartSSL.


> Do you want an ISP filling your recipe reading with their advertisements.

I thought this would go in the direction where your ISP changes the ingredients :)


If there were a fraction of a dollar to be made per instance by replacing references to (for example) Nutella with references and referral links to some other spread, I'm sure it would happen somewhere. It might change the outcome of the recipe in terms of taste/texture, or worse, make people ill, if a recipe avoiding particular allergens ends up with the wrong product being recommended.


Even if you don't care about secrecy/privacy, things like this [1,2] and this [3] are reasons to do everything under HTTPS.

[1] https://news.ycombinator.com/item?id=10926696

[2] https://labs.mwrinfosecurity.com/blog/2013/09/24/webview-add...

[3] https://mitmproxy.org/


    # enable mod_headers and hide Apache version details
    a2enmod headers
    echo -e "ServerSignature Off\nServerTokens Prod" >> /etc/apache2/apache2.conf
    /etc/init.d/apache2 restart

    # generate a private key and a certificate signing request
    openssl req -new -nodes -keyout webappsec-test.info.key -out webappsec-test.info.csr -newkey rsa:2048
    cat webappsec-test.info.csr

Register here:

https://www.startssl.com/

Copy and paste csr...

    # combine the domain cert and the CA root (e.g. to build the full chain)
    cat Domain_cert.pem CA_root.pem

https://mozilla.github.io/server-side-tls/ssl-config-generat...

====>

A+ on

https://www.ssllabs.com/

Now? What's so hard about that?

I guess if you're using EC2 and Ubuntu on your server... ehhh...


Why use StartSSL, when Let's Encrypt will give you an equally good certificate (with more flexibility on SANs!) with less hassle?


Less hassle? You have to renew it every few months.


You're supposed to renew them automatically from a monthly cron job.
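
Something like this crontab entry, as a sketch (the renewal command depends on your client version):

    # 04:00 on the 1st of each month; reload nginx only if renewal succeeded
    0 4 1 * * letsencrypt renew && service nginx reload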


I don't really trust an automatic tool with my configuration files, to be honest...


The config files are only changed when it first installs the certificates. Renewals just replace the certificates in /etc/letsencrypt/.


No, it doesn't.

Lets Encrypt certs are equivalent to self-signed certs.

No hassle ==> broken shit.


No, you are incorrect. At the moment, their certs are cross-signed, and I have tested my site with multiple browsers. It works.


The cross sign is too far up the chain.

If you cross sign a self-signed cert what do you get?

A Let's Encrypt Cert.

Domain Ownership is not the same as file ownership.


This does not make sense, I'm not sure what you're trying to say. The cross-sign works on all major browsers. Domain validation as practised by Let's Encrypt is in accordance with CA/B Baseline Requirements, and they're not the first CA to offer domain ownership verification via HTTP.


Of course the cross signed cert works!

That's not the point.

The cross signature is worthless if the signed cert beneath it is self-signed.

If Identrust cross signed self signed certs, the self signed cert would be trusted too... and it would "work" in all browsers...

Coly Tarry Frap Tarts! you're daft.

Here try this:

1. Set up a domain with an MX record that points off-domain and a DMARC p=reject record...

2. Try to get a cert from any CA other than Let's Encrypt without answering any emails sent to the off-domain MX.

...

Then... install Let's Encrypt... bam-o! Cert heaven!

It's self-gawd-damn-fing-signed.


I don't think you understand how the CA system or Let's Encrypt works.

IdenTrust signed Let's Encrypt's intermediate certificate[1] using their trusted root certificate. This intermediate certificate has the CA flag set, so other certificates signed by it are to be trusted too. Intermediate certificates are used by all CAs.

When you run the client on your server, the client submits a CSR to Let's Encrypt's CA server. Let's Encrypt will then verify domain ownership (via HTTP, TLS or DNS[2]). This is done in accordance with CA/B Baseline Requirements, and their practice doesn't differ from other CAs who also offer HTTP[3] or DNS verification. If ownership can be verified, Let's Encrypt will then sign your certificate using their trusted intermediate certificate and send it back to you. It is thus not self-signed, but signed by a trusted CA key.

[1]: https://crt.sh/?caid=7395

[2]: soon!

[3]: At least one other CA (WoSign) does this, there are probably others I'm not aware of.
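
You can inspect that chain on any Let's Encrypt site yourself (hostname is a placeholder):

    # dump the presented chain; the leaf should be issued by the LE intermediate,
    # which is in turn cross-signed by IdenTrust's DST Root CA X3
    openssl s_client -connect example.com:443 -servername example.com -showcerts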


@pfg:

I know exactly how CAs work. I've built several.

Wosign does not issue a cert without an email verification.

You can look on their website. There's a nice big box to enter an email address to verify your domain and get the free cert.

Sorry -- "Let's Encrypt" with file verification is not DV.

It's just not DV. Simple really...

And it's gonna break the web. Like netlify up there a few posts back... using Rackspace and "Let's Encrypt". Ehh. The first few months will probably be fine, and then everything is gonna tank.

Until the smart realize that ACME is fine -- as long as you remove the "or file verification" text.


You seem to think that a certificate authority is obligated to verify ownership of a domain by email (for DV certs). You are mistaken, and that's not the case. The mechanisms supported by Let's Encrypt to authenticate domains are just as good as email verification.

If you've been involved in a CA, then you should be familiar with the CAB forum baseline requirements for Verification Practices. https://cabforum.org/wp-content/uploads/BRv1.2.3.pdf

> For each Fully-Qualified Domain Name listed in a Certificate, the CA SHALL confirm that[] the Applicant[] has control over the FQDN by: [...]

> 4. Communicating with the Domain’s administrator using an email address [...] ‘admin’, ‘administrator’, ‘webmaster’, ‘hostmaster’, or ‘postmaster’ [...] followed by the Domain Name[];

> 6. Having the Applicant demonstrate practical control over the FQDN by making an agreed-upon change to information found on an online Web page identified by a uniform resource identifier containing the FQDN

> 7. Using any other method of confirmation, provided that the CA maintains documented evidence that the method of confirmation establishes that the Applicant is the Domain Name Registrant or has control over the FQDN to at least the same level of assurance as those methods previously described.

The approach used by Let's Encrypt meets this standard (of which email is just one of several options), and the methods they use are just as secure as using email. They specifically offer approach 6. above, which they call Simple HTTP, as well as other approaches that comply with 7. as outlined in https://letsencrypt.github.io/acme-spec/#rfc.section.7 Involving email in the verification process makes certificate provisioning harder to automate and adds no additional security.


I regularly purchase SSL certificates - about 10 or so a year, from Gandi. Their validation process is identical to Let's Encrypt's, without the automation. All someone needs to get a cert is control of http://yourdomain.tld/<some file> for the brief verification period. A DV cert is automatically issued without an email confirmation of any sort to the domain. I'm certain because I've set up SSL for domains that have no MX records nor any mail servers at the apex record. http://wiki.gandi.net/en/ssl/dcv

DV certs make no promises about the identity of the site operator, just that communications are encrypted. I remember a decade ago, buying certificates from Thawte required faxing them business records and ensuring that the D&B records matched the domain registration - while paying $450/year for a cert for a single hostname. The process was thorough, but wholly unnecessary for the vast majority of sites that weren't directly taking payment information.


I just signed up using my personal gmail address and was offered this option to verify domain ownership: http://puu.sh/mB373/62a963d896.png

The file contains a random token. This is essentially the same solution Let's Encrypt offers (among others).


> Lets Encrypt certs are equivalent to self-signed certs.

This is entirely false.


Oh yeah! sure...

what exactly ensures that the DV cert is a DV cert?


> what exactly ensures that the DV cert is a DV cert?

The fact that IdenTrust says it's a DV cert, and browsers trust its claims.


I don't think that's correct per https://letsencrypt.org/certificates/

> Our intermediate is signed by ISRG Root X1. However, since we are a very new certificate authority, ISRG Root X1 is not yet trusted in most browsers. In order to be broadly trusted right away, our intermediate is also cross-signed by another certificate authority, IdenTrust, whose root is already trusted in all major browsers. Specifically, IdenTrust has cross-signed our intermediate using their DST Root CA X3.


You might want to share a few more details if you want those claims to be taken seriously.


Nice. Can you do the same trick for us, but for NGINX?



