I think the history and architecture lesson here is clear: conflating privacy with trust/identity was a huge mistake.
We could've had a mostly encrypted Internet a long time ago if encryption and privacy were not hitched to a commercial identity certificate with crappy maintenance tools.
I hope security architects going forward heed this lesson.
This is a very common complaint, and it's not valid. You cannot have privacy without some form of trust or identity. Network cryptography doesn't work that way. We have to assume the adversary controls the network, and so can manipulate any cryptographic handshake they see.
The "identity" in TLS exists principally in order to prevent network adversaries from substituting their own keys for those of your intended peer.
That's 100% accurate but the parent post is clearly about EV certs attempting to provide added trust/identity beyond what's needed for the privacy provided by TLS, and it's skeptical that that's happening.
A more SSH-like approach would be for browsers to identify PSL-level domains with simple site names, which can be done by hand for the Alexa top 500 if nothing else; so "https://docs.google.com/whatever" maps to a green "Google Docs > whatever" in your URL bar; click to reveal the full URL. The idea is that a local store for this mapping is distributed across all of the browsers out there, so by default you get a yellow arrow on `-> https://idontknowthisdomain.example/` and, perhaps as HTTP goes south, a red one on `-> http://thisdomainneither.example/`.
But you can right-click that yellow arrow and it asks "Do you trust the following domain?" with a "Name: ____" box to submit your own name. After that, this domain has been whitelisted for you to have the given name, just as in HTTPS. Possibly you can even have the page itself suggest the name via a meta tag. The important point is that the user has to say that they trust this domain to have that common name before they see it pop up in green.
From there, the browsers can do what they already do best: phone home. If you get millions of different users suggesting the same official name for a web site, it's more reasonable to automatically add them so that more than just the instantaneous Alexa Top 500 is covered.
You still rely on the core certs to provide privacy, but people who visit a phishing site now see by default either a yellow or red arrow where they are accustomed to seeing a green one that they either set up themselves or someone else set up for them.
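To make the idea concrete, here's a minimal sketch of what such a local name store and indicator logic might look like. This is my own toy formulation: the class name, file name, and three-state indicator are made up, and a real implementation would use the actual Public Suffix List rather than the crude two-label split below.

```python
# Hypothetical sketch of a local "friendly name" store for PSL-level domains, as
# described above: green = a name the user (or a shipped default) has trusted,
# yellow = unknown HTTPS domain, red = plain HTTP. Illustrative only.
import json
from urllib.parse import urlsplit

class SiteNameStore:
    def __init__(self, path="site_names.json"):
        self.path = path
        try:
            with open(path) as f:
                self.names = json.load(f)   # e.g. {"google.com": "Google"}
        except FileNotFoundError:
            self.names = {}

    def indicator(self, url):
        parts = urlsplit(url)
        domain = ".".join(parts.hostname.split(".")[-2:])  # crude stand-in for a PSL lookup
        if parts.scheme != "https":
            return ("red", domain)
        if domain in self.names:
            return ("green", self.names[domain])            # show the friendly name
        return ("yellow", domain)                           # unknown: ask the user before naming it

    def trust(self, domain, friendly_name):
        # Called when the user answers "Do you trust the following domain?" with a name.
        self.names[domain] = friendly_name
        with open(self.path, "w") as f:
            json.dump(self.names, f)

store = SiteNameStore()
print(store.indicator("https://docs.google.com/whatever"))  # ('yellow', 'google.com') until trusted
store.trust("google.com", "Google")
print(store.indicator("https://docs.google.com/whatever"))  # ('green', 'Google')
```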
> From there, the browsers can do what they already do best: phone home. If you get millions of different users suggesting the same official name for a web site, it's more reasonable to automatically add them so that more than just the instantaneous Alexa Top 500 is covered.
I've thought about it a bit, and I must say I'm not really sure what to make of it.
I took some of my CS courses from a man named Kevin Walsh, who I believe was a Ph.D. candidate at the time; I think he's now teaching at Holy Cross. If my memory serves, his group solved (at least for the time being) the Sybil attack problem on the distributed file-sharing networks that were active back then, but it never really grew in adoption because the stakes were so low.
Their basic idea was (again, if memory serves me right) to let people self-classify into clusters, so your esteem for someone else's ranking of websites would be based on your own ranking of those websites and how well the two match. In that way, you get a fully distributed web-of-trust system which nevertheless can't be Sybil-poisoned: to inject their own phishing sites, an attacker first needs to inject trustworthy reviews for a bunch of other unclassified sites, which violates the Sybil assumption that your reviewer identity is disposable or forgeable, and demands that you contribute more to the network than you take away.
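A toy sketch of how I understand that agreement idea (my own formulation, not the actual published scheme): weight another reviewer's verdicts by how well their past ratings agree with yours, so a freshly created Sybil identity with no shared history carries essentially no weight.

```python
# Toy illustration: trust another reviewer's ratings only in proportion to how much
# their past ratings of sites agree with your own. A brand-new Sybil identity has no
# overlap with you, so its votes on an unknown site carry (almost) no weight.

def agreement(mine: dict, theirs: dict) -> float:
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0                       # no shared history -> no influence
    matches = sum(mine[s] == theirs[s] for s in shared)
    return matches / len(shared)

def score_site(site: str, my_ratings: dict, others: list) -> float:
    num = den = 0.0
    for their_ratings in others:
        w = agreement(my_ratings, their_ratings)
        if site in their_ratings:
            num += w * their_ratings[site]   # ratings: +1 legit, -1 phishing
            den += w
    return num / den if den else 0.0

mine = {"example.com": 1, "shop.example": 1, "scam.example": -1}
others = [
    {"example.com": 1, "scam.example": -1, "newsite.example": 1},   # agrees with my history
    {"newsite.example": -1},                                        # no shared history (Sybil-like)
]
print(score_site("newsite.example", mine, others))   # dominated by the reviewer I agree with
```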
I'm not sure if this could be quasi-centralized for browser reviews of proposed short names for domains, since it assumes a big peer-to-peer decentralized system with lots of people reviewing lots of things and letting them sort themselves into their own groups; it gets dodgy if you retroactively declare "okay, so here's the main group now." There is a hint of what recentralizing it would look like, saying "this is the consensus for that domain name's short name," in how blockchain networks work: there you have a similar story of "if you want to break our consensus that your site is a phishing site, you first need to join our network and outperform the rest of it."
But yeah, I think that sorting out the data requires some genuinely complicated thinking. I'm more dreaming that sorting out the interface, and getting approximate correctness for 90% of traffic so that people get accustomed to that little green box being there, is not especially hard.
On the flip side, browsers already face this problem in another form: they have to detect phishing somehow, and there are Sybil attacks on that based on marking other sites as phishing to try to decrease people's confidence in those phishing indicators. So even though there is a technical problem here, I'm not sure I see it as novel to this particular approach.
> the parent post is clearly about EV certs attempting to provide added trust/identity beyond what's needed for the privacy provided by TLS, and it's skeptical that that's happening.
The (great-grand)parent is probably not talking about EV certs, because then they'd be stupid. They say:
> We could've had a mostly encrypted Internet a long time ago if encryption and privacy were not hitched to a commercial identity certificate with crappy maintenance tools.
...which wouldn't make sense if it didn't include the basic DV certificates, because then those DV certs would be exactly the sort of quick&easy encryption-without-identity they're looking for.
They're complaining about encryption requiring a process that used to cost enough to discourage many people, even for DV certs. The situation has only really changed in the last two years or so with Let's Encrypt, and you can quite clearly see the impact of free certificates and a slightly better process on the rate of encryption.
I'm talking about the overall post (i.e. JoshTriplett's submission) there, not the parentmost comment by payne92. I read payne92's comment as dreaming about an alternate past where something like Let's Encrypt was available from the start because that was the original model of trust in TLS, not authorities saying "we will certify that you are who you say you are" but merely authorities saying "we will certify that someone who proved their control of that domain name recently, said that this was a valid public key for it."
Yeah, I think we're basically in agreement then. Although I'd note that Let's Encrypt's model is no different than the paid DV certificates that came before–they're just giving those away for free. You were always able to get such a certificate without proof of identity, for something around $100/y.
Key continuity gets you pretty far, though. Knowing I'm talking to the same PayPal as I was yesterday is about as comforting as, if not more comforting than, having somebody else assert "this is PayPal".
But that implies that some time in the past, you had a "first contact" with the real Paypal. You then need something somewhere that helps validate that initial trust.
And let's say a computer has already made first contact with paypal.com and bankofamerica.com: what happens when the user needs to format the hard drive or gets a brand new smartphone? Do they export their previous successful handshakes to a flash card and import them into their new device to maintain key continuity? Or is it easier to use an entity (CAs or something similar) to help establish identity+trust?
You've already established an implicit trust of the DNS at that point just to make that first contact with paypal.com or bankofamerica.com. The various DNS-pinned certificate proposals aren't necessarily the right answer either, but their proponents are correct that the "first contact" with a server isn't with the server at all, but in DNS name resolution. If you trust DV certs at all, it's largely because you trust DNS, and the DNS-pinning certificate advocates are at least correct that if we're all relying on DNS as our identity+trust database, then its operators already are, in effect, our most important CAs in identity+trust discussions.
It would be easy enough to have a few "trusted" and "pinned" domains in the browser, or even certs... these pinned sites could then be used to do the cert lookup validation, which then runs against the SOA for the domain...
There are ways to do third-party validation of a DNS entry... if only to confirm that it's the correct entry, since that's really all automated DV gives you anyway. This is really only needed on first contact, too.
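For what it's worth, the existing proposal along these lines is DANE (RFC 6698), which publishes the certificate association in DNS itself as TLSA records. A rough sketch of what the lookup side looks like, assuming the third-party dnspython package; a real deployment also needs DNSSEC to make the answer trustworthy, and must match the record against the certificate actually presented in the TLS handshake.

```python
# Rough sketch of a DANE-style lookup (RFC 6698): the certificate association for a
# service lives in DNS as a TLSA record at _port._proto.domain. Assumes the third-party
# "dnspython" package; illustration only, not a full validator.
import dns.resolver

def tlsa_records(domain: str, port: int = 443, proto: str = "tcp"):
    name = f"_{port}._{proto}.{domain}"
    try:
        answer = dns.resolver.resolve(name, "TLSA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    # Each record encodes: certificate usage, selector, matching type, and the
    # certificate (or public key) digest to pin against.
    return [rdata.to_text() for rdata in answer]

for record in tlsa_records("example.com"):
    print(record)
```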
Right, but it seemed like tedunangst's continuity suggestion was an answer to tptacek's "You cannot have privacy without some form of trust or identity."
Since continuity doesn't work for first contact, it still doesn't solve the fundamental trust issue. Given the thread's context about the value or worthlessness of CAs, if you still have to use CAs for 1% of the key handshakes, "key continuity" seems like a tangent to the topic.
I think we're veering into pedantic silliness, but "the person I talked to on Monday" is an identity. (Created on the fly when I speak with them on Monday.) This isn't always a useful identity, but perhaps a relevant point in a discussion of identity vs trust.
True. I first encountered this idea many years ago, with the Kong program [1].
"Unlike most digital signature programs, this one has no concept of "true names". It makes no attempt to determine that the Bob you are talking to is the "real" Bob. It merely ensures that it is the same Bob."
> This is a very common complaint, and it's not valid. You cannot have privacy without some form of trust or identity.
There's no reason that "Only $KEY can read this" and "$KEY belongs to $REAL_WORLD_ENTITY" have to be tightly coupled.
Just because you need [the network operator to think you have] some form of the latter does not in any way mean that the former is "not valid" unless it has one particular form of it baked in.
Tight coupling is evil (ref. dependency injection), and complaining about it is not in any way not valid.
Why do we need to assume the adversary controls the network? The real attacks we've seen on networks use methods like cable splicing and passive wifi listening, i.e. passive interception.
Active attacks are also possible, but they require a greater investment by the attacker, and in some cases aren't practically feasible without being detected by the legitimate network operator.
Any given state actor will have full control over some networks. It's fair to say that any given network is fully controlled by at least one state actor, possibly more. It also stands to reason that each state actor has many more networks they can passively listen to. [EDIT: clarity]
Ultimately, if your threat model is to protect against the actions of a state actor who likely does not have active control over your network, but might be able to passively listen, then ubiquitous encryption helps a lot with defense in depth.
A more mundane case is free wifi at an airport. Someone can set up a hotspot with the same SSID and act as a MitM, but it's not undetectable. Here, encrypting application traffic is just one solution, and not necessarily the best one, but you shouldn't be relying on only one layer of protection.
I think that's only true if your trust system recognizes the attacking key as valid for the destination. DV seems to prevent that, absent typo squatting.
I'm not arguing that we have the optimal trust system now. I'm saying that the argument that we should have started with "privacy" for everyone and made trust optional is invalid. A different key infrastructure may very well have been better than the X.509 PKI! But not having any identity in the system is an unrealistic goal.
Whenever dealing with financial websites, I am always extremely suspicious of one that does not have an EV certificate, because of the added level of scrutiny applied to such certs. The number of mis-issued DV certs, plus typo domains, means that it would be relatively easy to enter banking credentials into a well-executed phishing site. It is going to be a lot harder to get an EV cert issued to paypa1.com with "PayPal, Inc. [US]" as the organization.
The only websites I regularly feel anxious about entering my credentials on are Google properties. Since email can be used to reset pretty much anything, Google is one of the most important sets of credentials to keep from falling into the wrong hands. Unfortunately they don't seem to care about EV certs.
Additionally, app-based OAuth screens with web login prompts regularly give me pause. I don’t like not being able to see the URL and certificate information.
It's funny, but this is where having a password manager really saves me... if I don't have a login for a site I use often, I'll really scrutinize it... sometimes login urls change, but not very often.
The real snag is the way that SSL/TLS is marketed. In the post there is a link to a belting tweet by one Scott Hanselman:
"HTTPS & SSL doesn't mean "trust this." It means "this is private." You may be having a private conversation with Satan."
TLS (née SSL) was conceived a long time ago (it's about a tenth of the age of my house, but in internet terms a good while back) and is now ubiquitous. The trouble is that end users et al. do not really understand what it means.
It really doesn't help that Chrome/Chromium and Firefox, at least, are making it harder and harder to actually view an SSL cert for a website. The Google mob are making it almost bloody impossible for those not familiar with dev tools - rubbish.
I think the real take-away here is that there is just no substitute for looking at the URL bar to see the domain. HTTPS means that the url bar is not lying, but you still have to look there and see if it's the correct entity.
EV certs are more of the same - you still have to look up at that URL bar and see the organization name.
All of us who know how to trust websites, do it by looking at the domain. We don't remember what sites have an EV cert. It doesn't matter, the standard procedure always works: check https, check domain, done.
No other solution works reliably at the scale of the web. Any other solution ends up adding complication. Doing this the right way really isn't hard; people just need one half-hour class in their entire life to be able to use the web reasonably safely.
Browsers should have some sort of non-binary 'trust' score that looks at things like:
* how many users have been here before
* how 'new' is the site
* how much does it look like other popular sites or sites the user has visited (perceptual hash)
* how close its name is to a popular site (Levenshtein distance)
* how did the user get here (clicked a link vs clicked a bookmark)
* does it have form fields
* does it appear to ask for credentials (OCR the rendered page for 'Login', 'Username', or 'Password')
This score should be what gets you the pretty green padlock - otherwise you should just get a dull grey "Private Connection" label.
I imagine it's possible to evade some or all of these, but it increases the difficulty and cost of shenanigans. We often get tripped up because, for security applications, we want some sort of mathematical certainty that we're protected, when 'pretty good' is still better than nothing.
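A back-of-the-envelope sketch of how a few of the signals listed above might be combined into such a score; all weights, thresholds, and inputs are invented purely for illustration.

```python
# Back-of-the-envelope sketch combining a few of the signals above into a single score.
# All weights, thresholds, and field names are made up for illustration only.

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                      # deletion
                           cur[j - 1] + 1,                   # insertion
                           prev[j - 1] + (ca != cb)))        # substitution
        prev = cur
    return prev[-1]

POPULAR = ["google.com", "paypal.com", "bankofamerica.com"]

def trust_score(domain, visits_before, days_old, via_bookmark, has_login_form):
    score = 0.0
    score += min(visits_before, 10) * 0.05          # how many times the user has been here
    score += min(days_old / 365, 1.0) * 0.2         # how "new" the site is
    score += 0.2 if via_bookmark else 0.0           # how the user got here
    nearest = min(levenshtein(domain, p) for p in POPULAR)
    if 0 < nearest <= 2:                            # suspiciously close to a popular name
        score -= 0.5
    if has_login_form and score < 0.3:              # asks for credentials while low-trust
        score -= 0.2
    return score

print(trust_score("paypa1.com", visits_before=0, days_old=3, via_bookmark=False, has_login_form=True))
print(trust_score("paypal.com", visits_before=25, days_old=5000, via_bookmark=True, has_login_form=True))
```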
I think browser vendors may be hesitant to implement anything but a clearly defined protocol, because almost by definition a probabilistic model will result in them occasionally vouching for the wrong sites. Even if they could probably exclude legal liability, it may create PR headaches.
As a small fish in the big pond that is the www, I'm also wary of any solution that may end up giving the handful of established behemoths an advantage that smaller competitors cannot, at least in theory, also get (except by growing to similar size). Right now, there's a process that establishes a "good enough" level: EV certificates are within the reach of even small websites, and with my identity being known and proven, I'd argue that they actually track pretty closely to the actual "truth" of the respective trustworthiness.
This sort of scoring is why it's nearly impossible to run your own email in 2017, though. Reputation can't be the deciding factor on whether a site can be secure on the internet as it implies centralization.
Agreed. I want encryption, not verified identity, from SSL. Self-signed certs provide encryption - browser makers didn't let that happen and created a $$$ monopoly in CAs. It wasn't until the CA list became so obviously broken ('approved' CAs issuing bogus google.com certs, for example) that Let's Encrypt became conceivable. I'll use the cert ID and my own mechanism to validate it, like a name system. A distributed name system, even.
I wish that tor- and i2p-style addresses (hashes of public keys, or the public keys themselves) were the way we did encryption and verified integrity on the internet.
EV would be a lot more interesting if browser UIs did something useful with it, but there does seem to be major chicken/egg issue. An alert box which says "Are you sure you want to send this credit card number to a site that can't prove it's controlled by a legal entity?" may be useful for phishing prevention, but no browser would ever implement it until EV is more widespread.
In the entire article, there is no mention of the (admittedly dangerous [0] and not entirely foolproof [1]) HPKP header.
HPKP (HTTP Public Key Pinning) is a header you send to browsers telling them "the only certificates / intermediate certificates you should trust for this domain are: ABC, XYZ. that's it. if a non-matching cert gets presented for this domain, go apeshit.".
Using HPKP, you can pin any level of certificate: you can pin your own leaf certificate that you purchase from whomever (or that you get from Lets Encrypt), you can pin that vendor's intermediate cert, you can pin that vendor's root cert, whatever you want. You have to specify at least two pins, and you can mix and match the level of the pins.
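For concreteness, a sketch of what assembling such a header (RFC 7469) looks like; the pin values here are placeholders, not real key hashes.

```python
# Illustrative only: assembling an HPKP header (RFC 7469). The base64 pin values below
# are placeholders; each real pin is the SHA-256 of a certificate's Subject Public Key
# Info, base64-encoded. At least two pins (one of them a backup) are required.
pins = [
    "PRIMARY_PIN_PLACEHOLDER_BASE64=",   # e.g. your CA's EV intermediate
    "BACKUP_PIN_PLACEHOLDER_BASE64=",    # a second CA, or an offline backup key
]
directives = [f'pin-sha256="{p}"' for p in pins]
directives += ["max-age=5184000", "includeSubDomains"]   # 5184000 seconds = 60 days
print("Public-Key-Pins: " + "; ".join(directives))
```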
This is a key point: a lot of (all?) CAs have different intermediate certificates for DV vs EV certificates.
If MyCompany Inc buys an EV Cert from ReputableCertVendor (and another from IrreproachableCertVendor) and issues HPKP headers pinning to the EV intermediates of those two vendors, then can't I have a reasonable expectation that those two companies will take measures to make sure they don't issue EV certificates for perceptually-similar domain names (and could I win a court case against them if they did)? Is that level of assurance not what the exorbitant fees are supposed to go towards?
[0] a colleague of mine calls it the "HPKP footgun"; if you bought a 1 year cert 10 months ago, start pinning that cert's vendor's intermediate and one other, and _then those two vendors go out of business the next day_, you are going to have a really bad day in a couple months when your existing cert expires (browsers that have visited in the last two months will only honor certs from those two CAs, but you can't get a new one and your old one is expired).
[1] HPKP is TOFU (trust on first use, meaning a user/browser has to reach _your site first_ in order to get the "right" HPKP header. If their first visit to mycompany.com is sslstripped or otherwise MITMed, that MITM can and will strip the HPKP header before proxying the response, and then that user/browser doesn't get the benefit of pinning). AFAIK you can't submit your own pins to the browser preload lists like you can with HSTS: https://security.stackexchange.com/questions/143500/are-ther...
Here, I have an idea for how to sell this to the CAs: establish a perceptual distance metric, and the larger a perceptual distance "moat" a customer wants, the more you charge them.
Then if the customer is google.com, and they've paid for a moat of width 5, and somebody tries to register a similar domain, all those with distance <= 5 will get flagged for manual review and the CA will offer Google a chance to +1 or -1 it. The person trying to register goegle.com can show trademark paperwork to try to override the -1 decision.
OK this is actually not a great idea (too easy to abuse). But what else are all those sweet, sweet EV cert fees going towards?
Actually, allow the registration of domains up to a distance of 5, including xn--* TLDs once translated... then require anyone registering closer than that to have an EV cert... Set the price at $20K-$100K/year per order of distance. For a banking establishment or mega-site, it should be reasonable.
The option for anyone too close would be to get an EV cert... also, requiring EV for any domain within a distance of 4 of the Alexa top 100, 3 of the Alexa top 1000, and 2 of the top 10000 would be a good start.
> My question is how would registrars coordinate this?
A few of them get together and make a pact about how to measure similarity, then convince browser vendors to treat their certificates specially (triple padlocks!! 3 > 1, must be safer!!)?
In theory, yes. In practice, I don't know of any CA which guarantees continuity of intermediate certificates. In fact, some explicitly say they may rotate intermediates at any time. So the footgun is much worse than you describe: even if a CA doesn't go out of business, they may discontinue the intermediate you were pinning.
See here for a real-world example of what can go wrong when you pin to intermediates: https://cabforum.org/pipermail/public/2016-November/008989.h... (in this case, things worked out OK for the customer, but only because Symantec was willing to intentionally break the rules)
> If MyCompany Inc buys an EV Cert from ReputableCertVendor (and another from IrreproachableCertVendor) and issues HPKP headers pinning to the EV intermediates of those two vendors, then can't I have a reasonable expectation that those two companies will take measures to make sure they don't issue EV certificates for perceptually-similar domain names (and could I win a court case against them if they did)?
What is your goal here? Pins are per-domain (optionally including all subdomains). They won't affect similar domain names - browsers would accept a certificate from any trusted CA.
Shit, of course you're right. The article talked explicitly about people not differentiating between the EV browser treatment and the DV browser treatment, so all this iron-cladding / gold-plating on the domain I'm trying to protect doesn't do anything to stop typosquatters.
HPKP is dangerous. With a configuration mistake or lost keys you can render the entire domain unusable. Imagine microsoft.com being lost for 10 years because someone put nonsense in HPKP. It's like a nuclear reactor, except the advantage is pretty small and the risk is huge.
While incidental to the bigger point, I'd point out that I went and actually checked 8 banks in my country (which represents pretty much the whole banking sector, afaik); every single one of them has an EV cert on its landing page. Two of them (smaller ones) did not automatically upgrade my connection to https on the landing page, which tarnishes the result a bit, but overall I'd say the situation is fairly good. Also, all the names that appeared in the address bar were sensible/expected ones, which is not always the case.
Regarding tech people knowing EV vs non-EV: there are very few companies that need to know about them at all, namely the ones that get more than 10k hits per day through a login. And if you work at, say, Apple, a different team would be taking care of certs, not everyday programmers, so we're talking about a small percentage of the tech crowd that needs to know about EV certs.
I find that EV certs are valued by web marketers and nobody else. It gets written down on a project spec because they read it's "more secure" and the IT team goes through the hoops of providing the very expensive EV cert.
A few years later (upon renewal time), it gets swapped out for a "normal" cert and nobody notices.
> I find that EV certs are valued by web marketers and nobody else
Fun fact: it was actually our marketing department that nixed our EV cert, on the grounds that having the company's legal name in the address bar would be "confusing" to customers who expected to only see the domain name.
This sort of makes sense, and is something to plan for when incorporating or licensing a new LLC. If you're going to have a website at https://widgets.com, try to get your new company name to be as close to Widgets Incorporated as possible.
Do you believe the EV process is easily subverted, and certificates issued without adequate proof of identity?
Or do you believe that doing business with someone whose identity is certain is just as risky as doing business with someone who only proved control of the domain's DNS?
> if you are in a small business in a competitive market it could help.
Sure, it "could help". But let's say there are 2 identical sites. One site upgrades from DV to EV. The other site takes the same money and invests it in a Graphics Designer to make their home page look nicer.
Which one will sell more product? I'll put my money on the nicer-looking site every time.
Depends whether the redesign was actually necessary. Websites and software UIs are often redesigned simply because the powers that be are bored with the existing design (through familiarity). The users either don't use the site enough to notice, or are annoyed with the new design because they have to re-learn how to use the software. If it is effective, an EV certificate may have a more positive impact than a redesign which just serves to provide a change of scenery for the developers/management.
> If you are a top 10 site you won't need an EV, but if you are in a small business in a competitive market it could help.
This exact point is discussed and addressed in the article:
> What we're seeing here is that EVs are most frequently used by larger sites and as size declines, so does EV adoption. Now perhaps the commercial CAs are simply seeing this as a large addressable market for their product (which would bring us back to financial motives again), but clearly their view of who actually needs the cert the most is not consistent with those who are actually buying them.
Basically you're right, but missing context. Small businesses for the most part are not technically oriented enough to appreciate, or even know about, the existence of EV certs. But from my years of working for a certificate reseller, I can guarantee you that the SMBs who adopted an EV cert saw an increase in conversion rate.
In this case the numbers do not tell the whole story.
That does not really address the point. The fact that smaller sites are not buying EV certs does not mean that it would not be beneficial for them to do so.
What's more likely? That EV certs have been available for a decade and would have benefited those sites, but despite that, small sites generally haven't obtained EV certs? Or that EV certs don't actually help such sites?
See also the mentions in the article of informal surveys of non-technical users, who show no apparent affinity for EV certs.
I have not seen any A/B studies of the efficacy of EV certificates in driving conversions/sales/etc.
> If you are a top 10 site you won't need an EV, but if you are in a small business in a competitive market it could help.
Random people don't know an obscure corporation any better than they know an obscure domain name. It's as possible for Foo LLC to be operated by "Satan" as it is for foo.com.
And the people who know what an EV cert is know that. So hardly anyone will even realize that you have one, and the few people who do still won't care.
You may be right, but I worked for a certificate reseller for years, and those we persuaded to adopt EV saw an increase in conversion rate, so there's that.
The question isn't whether you can statistically measure an effect, it's whether the effect is large enough to justify the effort and expense.
It would be interesting to see a real study that accounts for confounders like site improvements or new products rolled out at the same time as the new certificate.
If there were actually a large effect, it might suggest that browser UIs need to be adjusted, given how little real assurance of trustworthiness an EV certificate provides.
I am not aware of anyone at all who cares about EV. If you are a small business you should probably use the money that you would waste on EV elsewhere.
I personally care about them. I saw about a 30% increase in conversion rate once one was added (vs a standard Let's Encrypt DV cert). The cert paid for itself in literally less than a couple of days.
I did say a "little more detail" and your response indicates your customers are more likely to understand the green bar thing.
I'd be willing to bet that few of them will be able to usefully debate the merits of EV over DV though 8)
However, it is a start, and I'm glad it works for you. It would seem that the message is getting through a bit. I own an IT company with 20-odd employees, and I estimate that roughly 50% of those staff would be unable to tell me exactly what an EV cert is for as opposed to a DV cert, but to be fair they are all rather good at looking for the signs that a site is dodgy (or bona fide).
One problem with Let's Encrypt is that it discourages OV ("organization validated") certs in favor of DV ("domain control only validated") certs. With many of the commercial vendors, you get organizational info in a cert, indicating who you're talking to. A DV cert doesn't have that, and is only one step up from self-signed.
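For what it's worth, the organizational info is just extra subject fields in the certificate. A quick illustrative check using nothing but the standard library (prints the organizationName, which is absent for a typical DV / Let's Encrypt cert); the host used is only an example.

```python
# Illustrative: the practical difference between a DV cert and an OV/EV cert is extra
# subject fields such as organizationName. This prints whatever the peer's verified
# certificate says about the organization (None for a typical DV cert).
import socket, ssl

def organization_of(host: str, port: int = 443):
    ctx = ssl.create_default_context()          # normal CA verification
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            subject = dict(x[0] for x in tls.getpeercert()["subject"])
    return subject.get("organizationName")      # e.g. "PayPal, Inc." for OV/EV, None for DV

print(organization_of("www.paypal.com"))
```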
Do they discourage or just not support? I've never seen Let's Encrypt claim OV is pointless or anything like that.
What are the benefits of OV, though? From checking Google, they seem to display as regular DV certificates but have additional information in the certificate. I don't think anyone actually looks at the certificate, given that most users aren't even aware of how EV SSL is shown.
I know; OV never really caught on. I use them with SiteTruth to read site ownership and look up information about the company. This doesn't work for Let's Encrypt sites. I think that if you're accepting money, you should have at least an OV cert. Most shopping sites do.