We use Terraform a lot too - and most of the time it's great, but not infallible.

Our team managed to screw up some pretty major DNS due to a valid Terraform plan that looked OK, but in reality deleted a bunch of records and then failed (for some reason I can't remember) before it could create the new ones.

And of course, we forgot that although we had shortened the TTL on our records, the TTL on the parent records (which I think is what gets used when no records are found) was much longer, so we had a real bad afternoon. :)


> but in reality then deleted a bunch of records, before failing […] before it could create new ones.

    lifecycle {
      create_before_destroy = true
    }
may be your friend :) (not sure if applicable though)


Slight but important nit (if you don't want to run out of fuel): indicated airspeed (KIAS) is not the same as true airspeed (KTAS).

To calculate ground speed (as required for navigation and fuel planning) you need true airspeed (not indicated) as well as wind direction and speed.
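
If it helps, a back-of-the-envelope version (just a sketch; it ignores the wind correction angle, so it's only good for rough planning):

    import math

    def ground_speed(tas_kt, track_deg, wind_from_deg, wind_kt):
        # Subtract the headwind component along the track (a tailwind comes
        # out negative and so adds to ground speed). This ignores the wind
        # correction angle, so treat it as a rough planning figure only.
        headwind = wind_kt * math.cos(math.radians(wind_from_deg - track_deg))
        return tas_kt - headwind

    # e.g. 430 kt TAS tracking due east, wind 270/50 (a direct tailwind)
    print(ground_speed(430, 90, 270, 50))  # -> 480.0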

See some discussion here: https://www.quora.com/In-aviation-what-is-the-difference-bet...


Not so slight. At airliner altitudes they can differ by 50% or so.

IAS is what the pilots care about while flying because it's what matters for aerodynamics (i.e. stall speed).


False only for very small values of code.

i.e. if your code itself is split into multiple packages, they won't work (as they are imported by their full path, not relatively), and anything in your vendor dir is also ignored when used outside of a GOPATH entry.


You can certainly claim something is centralized and tamper-evident, i.e. demonstrate proof that something has not been mutated over time.

See RFC 6962 Certificate Transparency logs and their consistency proofs for a widely used example.
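
For a flavour of what such a log commits to, here's a minimal sketch of the RFC 6962 Merkle Tree Hash in Python (the real logs then layer inclusion and consistency proofs on top of this structure):

    import hashlib

    def mth(entries):
        # Merkle Tree Hash per RFC 6962 s2.1: leaves get a 0x00 prefix,
        # interior nodes a 0x01 prefix, and the split point k is the largest
        # power of two smaller than the number of entries.
        n = len(entries)
        if n == 0:
            return hashlib.sha256(b"").digest()
        if n == 1:
            return hashlib.sha256(b"\x00" + entries[0]).digest()
        k = 1
        while k * 2 < n:
            k *= 2
        return hashlib.sha256(b"\x01" + mth(entries[:k]) + mth(entries[k:])).digest()

    # Any later tree head can be proven consistent with an earlier one, which
    # is what makes the log tamper-evident rather than merely append-only.
    print(mth([b"cert-1", b"cert-2", b"cert-3"]).hex())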


Not quite as simple as a cryptographic hash alone - remember that if the set of possible inputs can be easily enumerated, then it's trivial to find the input data by brute force.

There are ways to work around this; for example, objecthash[0] describes a small modification that prepends 32 bytes of random data to the input before hashing in order to prevent this.

[0] https://github.com/benlaurie/objecthash#redactability
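
A minimal sketch of that trick (not objecthash's exact encoding, just the nonce-then-hash idea behind its redactability):

    import hashlib, os

    def commit(value: bytes):
        # Prepend a 32-byte random nonce before hashing, so the digest can't
        # be brute-forced when the set of possible inputs is small. The nonce
        # has to be stored (or revealed later) alongside the value.
        nonce = os.urandom(32)
        return nonce, hashlib.sha256(nonce + value).digest()

    def check(nonce: bytes, value: bytes, digest: bytes) -> bool:
        return hashlib.sha256(nonce + value).digest() == digest

    nonce, digest = commit(b"salary: 100000")
    print(check(nonce, b"salary: 100000", digest))  # True
    print(check(nonce, b"salary: 100001", digest))  # False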


Yes, precisely: you must include a random value that remains secret until the appropriate reveal time.


Glad to see any doc published that gets developers thinking more about security...

One "trend", or rather bad habit that I've noticed a lot in discussion with other developers recently, and this doc also falls into, is that there seems to more focus on "input sanitisation" rather than "output escaping".

Regardless of what's been done to input, if the result is that you have a string that you need to embed into another string, then you need to know how to escape that appropriately for the context in which it's being used. Whether the data is user generated, or taken from your database, always assume that it's trying to break your app, and always escape it on output.
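
For example, in Python (the exact same string would need different escaping if it were headed into a URL, a SQL statement or a shell command rather than HTML):

    import html

    untrusted = '<script>alert("pwned")</script>'

    # Escape at the point of output, for the specific context it lands in
    # (an HTML body here), no matter what "sanitisation" happened on input.
    print("<p>Hello, {}</p>".format(html.escape(untrusted)))
    # -> <p>Hello, &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;</p>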


Making a hash of the release is just a small part of it (and is the first part of what they are doing).

The trick is to be confident that you're getting the same hash as everyone else - and that's what requiring proof that it's been added to a CT log gives you some level of assurance about.


If I'm understanding correctly, the plan is to piggy-back on top of the existing Certificate Transparency [0] infrastructure by issuing a regular X509 certificate per Firefox release, but for a special domain name that includes a Merkle tree hash for the files in that release, with a known suffix (".fx-trans.net").

In that manner they can piggy-back on top of the CT ecosystem (existing logs, existing search / monitoring tools, and presumably gossip if/when that's solved).
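
Purely as a hypothetical illustration of the naming trick (I'm guessing at the label layout and hash here, not describing Mozilla's actual scheme):

    import hashlib

    # Hypothetical: hash the release's file manifest / tree head, then encode
    # it into DNS labels (a single label is capped at 63 chars, so a 64-char
    # hex SHA-256 needs splitting) under the known suffix.
    tree_head = hashlib.sha256(b"release manifest bytes").hexdigest()
    print("{}.{}.fx-trans.net".format(tree_head[:32], tree_head[32:]))

    # A normal, publicly CT-logged certificate for that name then doubles as
    # a timestamped public record of the release hash.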

This seems like a really cool hack! The state of binary software distribution is really pretty scary when you think about it - techniques like this have the potential to restore a lot of confidence.

[0] http://www.certificate-transparency.org/


> Specifically, Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates.

Interesting. I assume this either helped with the evidence for - or was developed because of - the whole Symantec CA dustup going on?


CT significantly pre-dates the recent Symantec issues, but yes, it does provide an excellent tool for producing evidence of misissuance [0] [1] - and that's the crux of it: in order for a certificate to be considered valid in a CT world, it must present proof that it has been publicly logged.

[0] https://security.googleblog.com/2015/09/improved-digital-cer... [1] http://searchsecurity.techtarget.com/news/450411573/Certific...


Correct. I believe Certificate Transparency existed prior to any of the issues with Symantec, but CT was indeed involved in exposing some of the Symantec shenanigans.

https://arstechnica.com/security/2017/01/already-on-probatio...

> Ayer discovered the unauthorized certificates by analyzing the publicly available certificate transparency log

That article also links to the primary source, https://www.mail-archive.com/dev-security-policy@lists.mozil... which in turn links to a public viewer for Certificate Transparency logs.


The initial impetus was actually a design to allow a CA to be transparent about its own operations. However, the DigiNotar incident triggered the plan to apply it to all CAs.


Knowing quite little about the technicalities behind CT, I'm interested in the scalability of this. If CT were to be piggybacked upon by a large number of open source binary software distributions, I assume this wouldn't be problematic in any way. CT is already designed - I guess - to handle theoretically all domains. Plus, Firefox is a pretty big, popular distribution to be starting with.


CT logs are designed to be able to handle queries from all web browsers on a daily / more frequent basis, and the output from queries is easily cacheable (and the logs can be mirrored in a read-only manner).

If FF is already doing any log inclusion proofs for certificates, then I think including one more (for the FF release itself) would be pretty much line noise.
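
For anyone curious what verifying an inclusion proof actually involves, here's a rough sketch of the RFC 6962-style audit path check (simplified, no error handling):

    import hashlib

    def leaf(e):
        return hashlib.sha256(b"\x00" + e).digest()

    def node(l, r):
        return hashlib.sha256(b"\x01" + l + r).digest()

    def verify_inclusion(leaf_hash, index, tree_size, proof, root):
        # Recompute the root from the leaf and its audit path.
        if index >= tree_size:
            return False
        fn, sn, r = index, tree_size - 1, leaf_hash
        for p in proof:
            if sn == 0:
                return False
            if fn % 2 == 1 or fn == sn:
                r = node(p, r)
                if fn % 2 == 0:
                    while fn % 2 == 0 and fn != 0:
                        fn, sn = fn >> 1, sn >> 1
            else:
                r = node(r, p)
            fn, sn = fn >> 1, sn >> 1
        return sn == 0 and r == root

    # Tiny 3-leaf tree: root = node(node(L0, L1), L2); prove entry 1 is in it.
    l0, l1, l2 = leaf(b"a"), leaf(b"b"), leaf(b"c")
    print(verify_inclusion(l1, 1, 3, [l0, l2], node(node(l0, l1), l2)))  # True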

I think an interesting question arises as to how well the CT logs themselves would scale to handle the same kinds of certificates for all binaries, if this ends up taking off as a good idea in general. They've had to handle quite an explosion in X509 certificates over the past year or two due to Let's Encrypt. Some of Google's logs now show more than 80,000,000 certificates [0] - IIRC two years ago it was in the low single-digit millions.

[0] https://crt.sh/monitored-logs


I actually think that building an independent system for binaries is a better plan, for various reasons.

One is that log bloat is indeed a problem, not so much for the logs, but for those that want to monitor them.

The other is that CT has made some tradeoffs to allow cert issuance to be quick. I don't believe binaries need the same tradeoffs, and, for example, instead of an SCT, they should come with an inclusion proof (something I'd like to see for certs, too, in the long run).


Yes, very cool. I believe we should do something similar for web applications. I wrote a blog post about that a while back:

http://blog.airbornos.com/post/2015/04/25/Secure-Delivery-of...


As I understand it, Chrome (unlike Firefox) does not ship its own root CA store - rather it defers to the root store of the operating system that it's running on. It does however apply some form of blacklist / additional restrictions over what the OS may allow.


If you're looking to be able to consistently hash JSON objects you might want to look at Ben Laurie's objecthash: https://github.com/benlaurie/objecthash

It describes a consistent way to hash an object without defining a new format.
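
The core idea looks roughly like this (a heavily simplified sketch; objecthash's real encoding has more types and different details, so these digests won't match its output):

    import hashlib

    def h(tag, data):
        return hashlib.sha256(tag.encode() + data).digest()

    def obj_hash(obj):
        # Hash by structure rather than by serialisation, so logically equal
        # objects (e.g. dicts with keys in a different order) hash the same.
        if isinstance(obj, dict):
            pairs = sorted(obj_hash(k) + obj_hash(v) for k, v in obj.items())
            return h("d", b"".join(pairs))
        if isinstance(obj, list):
            return h("l", b"".join(obj_hash(x) for x in obj))
        if isinstance(obj, str):
            return h("u", obj.encode("utf-8"))
        raise TypeError("sketch only handles dicts, lists and strings")

    print(obj_hash({"b": "2", "a": "1"}) == obj_hash({"a": "1", "b": "2"}))  # True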

