Just because something is closed source doesn't mean it's insecure. RdRand meets various standards for RNGs and the dieharder tests don't show anything of concern. While you can't be 100 percent sure of the reliability of RdRand because you can't audit it, I feel safe trusting it for all but the most critical of applications. Here's a blog post describing testing RdRand with dieharder: http://smackerelofopinion.blogspot.com/2012/10/intel-rdrand-...
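To make the dieharder point concrete, here is a minimal sketch of the kind of statistical check such suites automate: a chi-square test of byte frequencies against a uniform distribution. This is illustrative only; `os.urandom` stands in for RdRand output captured to a file, and the function name and sample size are my own choices, not from the linked post.

```python
import os
from collections import Counter

def chi_square_byte_test(data: bytes) -> float:
    """Chi-square statistic of byte frequencies against a uniform
    distribution. For good random data with len(data) >> 256, the
    statistic should land near the degrees of freedom (255)."""
    n = len(data)
    expected = n / 256
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

# os.urandom stands in here for a file of RdRand output.
stat = chi_square_byte_test(os.urandom(1 << 20))
```

A heavily biased stream (say, all zero bytes) produces a statistic orders of magnitude above 255, which is the sort of gross defect these tests catch. What they cannot catch is a stream that is statistically perfect but predictable to someone holding a secret, which is the crux of the rest of this thread.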


You are right that closed source doesn't mean it's insecure; on the other hand, open source could prove that it is indeed secure. With new scandals about hidden backdoors in security software coming up every week these days, I trust open source more than ever.


Ironically, it's precisely in cryptographic random number generation that we can most easily show open source cryptography failing its users: Debian broke the OpenSSL CSPRNG so badly that attackers could remotely brute-force SSH keys.
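For anyone unfamiliar with why that bug was remotely brute-forceable: the patched OpenSSL was effectively seeded only by the process ID, which on Linux was capped at 32768, so there were only about 32768 possible keys per key type and size. A hypothetical sketch of the shape of the flaw (the `broken_keygen` function and its SHA-256 body are my illustration, not OpenSSL's actual key generation):

```python
import hashlib

def broken_keygen(pid: int) -> bytes:
    """Stand-in for key generation whose only entropy is the PID,
    mirroring the structure of the 2006-2008 Debian OpenSSL flaw."""
    return hashlib.sha256(pid.to_bytes(4, "big")).digest()

# On Linux of that era, PIDs ranged over at most 1..32768, so an
# attacker can precompute every possible key and test each one.
all_possible_keys = {broken_keygen(pid) for pid in range(1, 32769)}
```

A keyspace of 32768 is trivially enumerable, which is why scanning for vulnerable SSH keys in the wild was practical.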


Whereas with closed source you would almost never know. Crypto is very hard to do properly, but at least with open source you have the possibility of independent third party analysis.


Wasn't the Debian vulnerability discovered because someone noticed that two different servers had the same key? That would have gone down exactly the same way with closed source.


Not defending the Debian change, but the OpenSSL code's structure and readability are far from great; the only packages I would put behind OpenSSL are libxml2, glib, and glibc.


Weird. I find glib quite readable: https://git.gnome.org/browse/glib/tree/


Yes. And didn't that bug stay in Debian for two years?

Open source code still needs people looking at it.


Maybe no one would have noticed if it were closed source. I bet if Microsoft released everything as open source there would be billions of bugs discovered.


The Debian RNG bug was noticed by folks who found identical certificates in the wild, not by code inspection. Similar RNG weaknesses are commonly found in closed systems as well, so it doesn't seem to be a particularly open/closed source thing.


It's true that the mere possibility of widespread code inspection doesn't mean all the code really gets widespread inspection [although I'm surprised by the number of messages I see on mailing lists like Q: "Hi, I'm a Chinese grad student and have been reading the gcc source... I don't understand how XXX can work, given that YYY... can you explain? thanks" A: "oh, hmm, actually, that seems to be a bug..."]

Still, I think a common pattern is (1) notice funny symptom, (2) go look at code, puzzle through it for a while, and then "oh!" You're now in a much stronger position to fix the problem or petition for a fix.

With closed-source code, step (2) is a lot harder unless you're in a privileged position...


Open source would not prove it is secure. At best, you could look for obvious attacks. Cryptography is hard.


No; at best you could have a large, diverse group of experts look for potential flaws and fix them. But I agree that being open source is not enough: it needs to be open source and be the one implementation that everyone contributes to.


AES-encrypting a counter with a key chosen from a table based on your CPU serial number also "meets various standards for RNGs and the dieharder tests don't show anything of concern".

And, of course, if you make it the kernel /dev/random you're making it the source of randomness recommended for long term keys and other important things... while you don't know what the users will use it for, you can safely assume it will include some of "the most critical of applications".
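The point above is easy to demonstrate: a keyed counter-mode stream is statistically indistinguishable from true randomness while being fully predictable to whoever holds the key. A minimal sketch, with SHA-256 standing in for AES (Python's standard library has no AES) and a hypothetical baked-in key:

```python
import hashlib

SECRET_KEY = b"known-only-to-the-vendor"  # hypothetical baked-in key

def keyed_counter_stream(nbytes: int, key: bytes = SECRET_KEY) -> bytes:
    """SHA-256 of key||counter, standing in for AES-CTR. The output
    passes frequency tests, yet anyone holding `key` can reproduce
    every byte of it."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

stream = keyed_counter_stream(1 << 16)
# Byte frequencies look uniform, exactly as for genuine random data.
```

No black-box statistical suite can separate this from a true hardware RNG; only auditing the construction (or knowing the key) reveals the difference.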


As you said, "you can't be 100 percent sure of the reliability of RdRand because you can't audit it." So if you are willing to disregard the importance of auditing a critical piece of code, and to take and use a nice black box given to you by that big company, it raises the question: why on earth are you using open source at all? It just makes no sense. To me, at least.


But how can you be sure it's not just a very very good self-synchronising PRNG?

f(<Some number of previous outputs>, <Secret key known to Intel and the NSA>, <Your cpu identifier>) = <next output>
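The construction above can be sketched in a few lines. Everything here is hypothetical by design, that being the point of the comment: the key and CPU identifier are invented names, and SHA-256 stands in for whatever keyed function the chip might use.

```python
import hashlib

ESCROW_KEY = b"hypothetical-vendor-key"  # assumption: known only to the vendor
CPU_ID = b"cpu-serial-0001"              # assumption: per-chip identifier

def next_output(prev: bytes) -> bytes:
    # f(<previous output>, <secret key>, <cpu id>) -> <next output>
    return hashlib.sha256(prev + ESCROW_KEY + CPU_ID).digest()

# To an outside tester this stream is indistinguishable from randomness,
# but anyone holding ESCROW_KEY and CPU_ID can reproduce every "random"
# value after observing a single output (self-synchronising).
out = next_output(b"\x00" * 32)
for _ in range(3):
    out = next_output(out)
```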


Calculate how many bits Intel could fit on a chip, then apply statistical tests until you're sure the output contains more entropy than that?


"Just because something is closed source doesn't mean it's insecure"

Yes, it does. Closed source, to the extent it impairs audits, does mean something is insecure.

It does not prove that the software is backdoored, otherwise compromised, or defective in any way; if that is what you meant, you're correct. But security means not only the absence of these conditions, but also the ability to verify that they are absent.

Security is relative: "secure" is shorthand for "high enough confidence". There can be a rational basis for high or low confidence, based on various factors, including testing and the likely motives of the parties. Closed source is itself a negative factor. Collaboration with the USG is a very negative one.


Meeting various standards for RNGs doesn't help, though, if the algorithm is properly backdoored, as Dual_EC_DRBG, for instance, could be.

Also, correct me if I am wrong, but wasn't this about using RdRand as an entropy source for the Linux /dev/random, which AFAIK is not injection-proof?

And I would consider /dev/random among the most critical of applications.


I think that transparency and opaqueness provide different kinds of security, but both are related to security. It would be wrong to call them independent of security simply because an opaque system can still be extremely secure.



