
> At every single point in time, state of the art cryptography is going to be able to produce fake random that the current state of the art cryptanalysis cannot prove as not random.

That may not be true. (As in, I'm not sure it is.)

For many widely used crypto algorithms with a nominal security level (measured in bits), there is a known theoretical attack that shaves off a few bits of work but remains infeasible.

For example, we might say a crypto primitive takes a 128-bit key and needs on the order of 2^128 attempts to brute-force the key.

The primitive remains useful even when academic papers reveal how to crack it in about 2^123 attempts.

The difference between 2^128 and 2^123 is irrelevant in practice as long as we can't approach that many calculations. But it does represent a significant bias away from "looks random if you don't know the key".
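To put rough numbers on that (the attack rate here is a hypothetical, not from any real benchmark): a 2^123 attack is only 32x faster than exhaustive 2^128 search, and even an exascale attacker would need an absurd amount of time.

```python
# Back-of-the-envelope cost of a reduced attack vs. full brute force.
full_search = 2**128      # nominal security level
reduced_attack = 2**123   # hypothetical academic attack

speedup = full_search // reduced_attack  # 2^5 = 32x faster

# Assume (generously) an attacker trying 10^18 keys per second.
rate = 10**18
seconds_per_year = 60 * 60 * 24 * 365
years = reduced_attack / (rate * seconds_per_year)

print(speedup)          # 32
print(f"{years:.1e}")   # still on the order of 10^11 years
```

So the attack is real progress for cryptanalysts while changing nothing for practical security.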

It seems plausible to me that a gap like that could show up under statistical analysis: state of the art cryptanalysis might prove the output non-random by practical means while still being unable to crack it (as in recover the inputs) by practical means.



That makes sense. I stand corrected.



