
And you can update it at its own rhythm, potentially different from your upgrade path. And you can have them terminate TLS for you. Your customer might even have 3000 of those and already know how to keep them running happily. Not so bad.



> And you can have them terminate TLS for you.

Nothing says end-to-end security like terminating TLS at a network choke point so intruders can easily snoop all traffic.


Case and point "SSL added and removed here! :)"

https://blog.encrypt.me/2013/11/05/ssl-added-and-removed-her...


FYI the common idiom is "case in point," rather than "case and point."

The phrase represents this idea: I have a case to make (or an argument), and there is a single element that is conclusive enough to make the whole case in and of itself.

You might say, "I can address this entire case in a single point." This shortens to "case in point." The implication is that the single point you make is enough of an argument to prove your whole case.


What is the threat model there? What if the system can't be upgraded for reasons? What if your service/gateway is just behind the 'network choke point' (who said you had to have only one?)? Are you paying to upgrade everyone and their perfectly working mainframes or Java 8 apps to TLS 1.3? How do your intruders come in? Do they have to break the appliance? What are the chances you've tuned and configured your TLS terminator or firewall better than the network security 'experts' have?


The threat model is that any foothold an attacker gets behind your TLS terminator potentially allows them to snoop plaintext traffic - which likely includes login creds and auth tokens for all those mainframes and Java 8 apps.

(Note, I do exactly this a bit myself - terminate TLS at Elastic Load Balancers - and I feel a little dirty about it every time I'm reminded... I sometimes wonder if I spend more time ensuring VPCs are appropriately isolated, and keeping instances running untrusted or less trusted code out of VPCs with production customer data flying around unencrypted, than I would spend setting up encryption in transit everywhere. The big inertia holding that back is that we have so much legacy stuff running on things like Grails3 and Java8 that the benefits of starting to "do it right" would not be fully realised for many years while those old platforms still need to run, and the added complexity of running two differently architected platforms is a big issue... I know what we should be doing, but the path to get there and the expense of travelling down it are high. We'll get there in "drip feed" mode where new projects and major updates to existing projects will do it right, but I'll be astounded if we don't still have some old untouched Java8 or Grails3 running in production in 5 years' time...)


You don't have to forward traffic on as plain HTTP - ELBs & ALBs will happily forward traffic to HTTPS endpoints. That gives you an encrypted backend, but still allows you to manage the certificates & TLS policies in one spot. The backend servers can happily run on self-signed certs and the load balancer won't care.
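A minimal boto3 sketch of that setup, if it helps (all the IDs and ARNs below are placeholders): the public cert and TLS policy live on the ALB listener in one place, while the target group forwards over HTTPS to backends that can be running self-signed certs, since the ALB doesn't validate the backend certificate.

  import boto3

  elbv2 = boto3.client("elbv2")

  # Hypothetical IDs/ARNs - substitute your own VPC, load balancer and ACM cert.
  VPC_ID = "vpc-0123456789abcdef0"
  ALB_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-alb/abc123"
  CERT_ARN = "arn:aws:acm:REGION:ACCOUNT:certificate/abc-123"

  # Target group speaks HTTPS to the backends; the ALB does not validate
  # the backend certificate, so self-signed certs on the instances are fine.
  tg = elbv2.create_target_group(
      Name="backend-https",
      Protocol="HTTPS",
      Port=443,
      VpcId=VPC_ID,
      TargetType="instance",
      HealthCheckProtocol="HTTPS",
  )["TargetGroups"][0]

  # Public-facing TLS (certificate + policy) is managed in one spot: the listener.
  elbv2.create_listener(
      LoadBalancerArn=ALB_ARN,
      Protocol="HTTPS",
      Port=443,
      SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
      Certificates=[{"CertificateArn": CERT_ARN}],
      DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
  )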


Take a look at AWS Nitro instances; they give you "free" IPsec-type network encryption, with some caveats I forget.


The main caveat is that it only applies between Nitro instances: not to AWS services like load balancers or databases, which would be one of the areas where it'd be most useful for anyone with legacy apps.


We ditched the TLS-termination-in-nginx strategy and shifted to an NLB with TLS termination in the app. We used Let's Encrypt because we needed something dynamic so we didn't have to manage certs. Well, we received one of those CISO forms from a big enterprise, and one of the questions was: do you use LetsEncrypt? We said yes, and they responded that it wasn't a permitted TLS solution. For what it's worth, I was happy with it.
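For anyone curious what "TLS termination in the app" behind an NLB can look like, here's a minimal Python sketch, assuming the NLB just passes TCP :443 through to the instance and certbot has already written certs to its default live/ directory (the domain below is a placeholder, and a real app would also reload the cert chain on renewal):

  import ssl
  from http.server import HTTPServer, BaseHTTPRequestHandler

  # Hypothetical domain; certbot writes certs to this path by default.
  CERT_DIR = "/etc/letsencrypt/live/example.com"

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          self.send_response(200)
          self.end_headers()
          self.wfile.write(b"hello over TLS\n")

  # TLS is terminated here, in the app, not at the load balancer.
  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain(f"{CERT_DIR}/fullchain.pem", f"{CERT_DIR}/privkey.pem")

  server = HTTPServer(("0.0.0.0", 8443), Handler)
  server.socket = ctx.wrap_socket(server.socket, server_side=True)
  server.serve_forever()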



