If you expect RDRAND to change its behavior to be nefarious, also expect ADD and MUL to do the same. The RDRAND conspiracy theory is bizarre because underneath lies a core belief that a malicious entity would go so far as to insert a backdoor, yet put it strictly in a simple and easily avoidable place. So they're malicious entities, but can only touch certain instructions?
Malicious entities don't play by made up rules. RDRAND being a convenient scapegoat seems like exactly the thing they'd want, too.
The concern for RDRAND was really that it might be used to exfiltrate data.
Imagine a parallel universe where everybody happily just uses RDRAND to get random numbers. For example, when you connect to an HTTPS server, as part of the TLS protocol it sends you a whole bunch of random bytes (these are crucial to making TLS work; similar approaches appear in other modern protocols). But in that universe those bytes came directly from RDRAND, after all it's random...
Except, if RDRAND is actually an exfiltration route, what you've just done is carve a big hole in your security perimeter to let RDRAND's owners export whatever they want.
XORing RDRAND into an apparently random bitstream is thus safe because the bitstream is now random if either RDRAND works as intended OR your apparently random bitstream is indeed random.
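A minimal sketch of that OR property, with `os.urandom` standing in for the honest bitstream and an all-zeros buffer standing in for a worst-case, fully predictable RDRAND (`xor_mix` is a made-up helper name):

```python
import os

def xor_mix(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte streams. The result is unpredictable
    if EITHER input is unpredictable (and independent of the other)."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

pool = os.urandom(32)   # stand-in for your apparently random bitstream
backdoored = bytes(32)  # worst case: "RDRAND" returns all zeros
key = xor_mix(pool, backdoored)
assert key == pool      # a fully predictable source adds nothing,
                        # but it also takes nothing away
```

The caveat, as the reply below points out, is that this argument assumes the XOR itself is executed honestly.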
> XORing RDRAND into an apparently random bitstream is thus safe because the bitstream is now random if either RDRAND works as intended OR your apparently random bitstream is indeed random.
That's making assumptions. For instance, it wouldn't be beyond the realms of possibility for the compromised CPU to also track which registers contain a result produced from RDRAND, and make (X XOR RDRAND) produce a predictable result. After all, RDRAND is already an undefined number, so the system can legitimately decide later what it would like it to be. Yes, it would require more silicon in the registers, re-ordering and dispatch system, and ALU, but it would be feasible.
A change that large (probably requiring new custom RAMs and modifications to fairly timing-constrained register renaming logic) doesn't seem feasible for somebody to insert below the RTL level without being noticed. It would be much easier to just make RDRAND somehow less random, while still passing whatever randomness test is used for qualification.
Wouldn't require different RAM, and wouldn't really require any different register renaming logic. It would however require an extra bit on each register to flag the value as originating from RDRAND, and a different ALU that uses that flag and changes the outcome for certain operations.
Obviously, if you store the result of RDRAND to main RAM and read it back in later, this would defeat the system as the flag wouldn't be preserved. But I'm guessing most code won't do that for performance reasons.
The simpler option of just making RDRAND predictable is less powerful, because then the operating system can compensate with randomness obtained from elsewhere. The attack above allows the CPU to actually compromise the operating system's own randomness source.
I don't see how anyone could get away with the RDRAND change either, if someone else is inspecting the silicon design.
To modify RDRAND so that it is less random in a way that's useful for an attacker, yet passes statistical randomness testing by the OS and other software, would require RDRAND to implement something cryptographic, so that only the attacker, knowing a secret, can "undo" the not-really-randomness.
A new crypto block would surely be very noticeable at the RTL level.
Another comment[1] gave a link to an existing implementation of such a backdoor using only dopant-level changes. It doesn't implement a cryptographic scheme but weakens the randomness in a way that still passes the NIST test suite.
I don't get your point. Should we assume there is no backdoor at all just because we can't predict every kind of backdoor there could be?
Being defensive against RDRAND is just one (relatively easy) way to defend oneself against a (relatively easy to implement) backdoor. Yes, this defense isn't perfect, because there can be other (harder to escape) backdoors, but that's no reason to leave the “easy” ones unaddressed…
Because obtaining a seed is the only non-deterministic part of an RNG. Once you have the seed you can trivially predict the next numbers. Since random numbers are used to generate encryption keys, being able to manipulate random numbers also allows you to defeat encryption. The way the RDRAND backdoor would work is pretty simple. When it is activated (system wide) it simply needs to return numbers based on a deterministic RNG (with a seed known to the owners of the backdoor). To an observer it would still work as intended and there is no way to prove that there is a backdoor.
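A toy sketch of such a backdoor, assuming SHA-256 in counter mode as the deterministic RNG (the function name and the seed are made up for illustration):

```python
import hashlib

def backdoored_rdrand(secret_seed: bytes, counter: int) -> bytes:
    """Hypothetical backdoor: output is SHA-256(seed || counter).
    It looks random to any observer, but is fully reproducible by
    anyone who knows secret_seed."""
    return hashlib.sha256(secret_seed + counter.to_bytes(8, "big")).digest()

# The victim draws a "random" 256-bit key...
key = backdoored_rdrand(b"owner-secret", 0)
# ...and the backdoor's owner, knowing the seed, derives the same key.
assert key == backdoored_rdrand(b"owner-secret", 0)
```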
Verifying an ADD instruction is very easy and if it were to return wrong results then it would be obvious that it is buggy. Programs would cease to work. Alternatively it would have to be incredibly smart and detect the currently executed program and exfiltrate the encryption key during the execution of the encryption function.
The first backdoor scales to millions of potential targets. The second is a carefully targeted attack against a known enemy. Targeted attacks have cheaper alternatives than silicon back doors.
There's a reasonable argument to be made that limiting the backdoor to very specific instructions that specifically target the domains you care about makes it less likely that your backdoor will be triggered by accident. Escaping detection is just as important here.
> If you expect RDRAND to change its behavior to be nefarious, also expect ADD and MUL to do the same.
There's a very important difference: ADD and MUL are deterministic, while the output of RDRAND is random. If ADD or MUL or FDIV return incorrect results, that can be detected (as the Pentium FDIV bug showed). If RDRAND returns backdoored values, it cannot be detected by only looking at its output; you have to check the circuit.
Surely you can check the output and see if it's random? Don't these attacks rely on perturbing the RNG so it's no longer a TRNG, isn't output the only way to tell??
>Surely you can check the output and see if it's random?
No you can't. That's an inherent property of randomness. If you're lucky you can win the lottery 10 times in a row. The only thing you can verify is that the random number generator is not obviously broken (see AMD's RDRAND bug) but you can't verify that it's truly random.
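To illustrate: a crude monobit frequency check (in the spirit of the NIST suite) happily passes a fully deterministic stream, here SHA-256 in counter mode standing in for any well-made backdoored generator:

```python
import hashlib

def monobit_ok(data: bytes, tolerance: float = 0.01) -> bool:
    """Crude frequency test: the fraction of 1-bits should be near 1/2.
    Passing only says 'not obviously broken', never 'truly random'."""
    ones = sum(bin(b).count("1") for b in data)
    return abs(ones / (8 * len(data)) - 0.5) < tolerance

# A fully deterministic, attacker-predictable stream passes anyway:
stream = b"".join(hashlib.sha256(i.to_bytes(8, "big")).digest()
                  for i in range(4096))
assert monobit_ok(stream)
assert not monobit_ok(bytes(1000))  # all zeros fails, as expected
```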
> isn't output the only way to tell??
Looking at the implementation is the only way to tell.
Let's say I generate a list of one million random numbers through a trustworthy source.
Now I build a random number generator that does nothing but just return numbers from the list.
There is an impractical way of verifying a random number generator that is only useful in theory. Remember how you can flip a coin 1000 times and get roughly 1/2 probability for each side? The number of samples you have to take grows with the number of possible outcomes. If your RNG returns a 64-bit number you would take 2^64*x samples, where x is a very large number (the larger the better). Even x = 1 is already impractical (almost 200 years @ 3 billion RDRAND/s), but to be really sure you would probably need x > 10000. Nobody on earth has that much time. Especially not CPU manufacturers that release new chips every year.
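The back-of-envelope arithmetic for x = 1, under the assumed rate of 3 billion RDRAND calls per second:

```python
# One expected observation per 64-bit outcome (x = 1) at an assumed
# 3 billion RDRAND calls per second already takes centuries.
SAMPLES = 2**64                     # x = 1
RATE = 3_000_000_000                # assumed calls per second
SECONDS_PER_YEAR = 365 * 24 * 3600
years = SAMPLES / (RATE * SECONDS_PER_YEAR)
print(round(years))                 # → 195 (years), and x = 1 is nowhere
                                    # near enough for statistical confidence
```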
Even if you cannot be 100% sure that something is not really random, there are plenty of statistical measurements that can be used to assess the quality of the output (in terms of entropy).
> Let's say I generate a list of one million random numbers through a trustworthy source. Now I build a random number generator that does nothing but just return numbers from the list.
In less than a second your generator would run out of numbers, which is a pretty obvious flaw to see.
The real issue is cryptographic algorithms, because they are designed to simulate randomness, and they improve adversarially as statistical methods of analysis become more powerful. At every single point in time, state of the art cryptography is going to be able to produce fake random that the current state of the art cryptanalysis cannot prove as not random.
> At every single point in time, state of the art cryptography is going to be able to produce fake random that the current state of the art cryptanalysis cannot prove as not random.
That may not be true. (As in, I'm not sure it is.)
For many useful crypto algorithms where we give a nominal security measure (in bits), there is a theoretical attack that requires several fewer bits but is still infeasible.
For example, we might say a crypto block takes a 128-bit key and needs an order of magnitude 2^128 attempts to brute force the key.
The crypto block remains useful even when academic papers reveal how it could be cracked in about 2^123 attempts.
The difference between 2^128 and 2^123 (a factor of 2^5 = 32) is irrelevant in practice as long as we can't approach that many calculations. But it does represent a significant bias away from "looks random if you don't know the key".
It seems plausible to me that a difference like that would manifest in statistical analysis that state of the art cryptanalysis could prove as not random by practical means, while still being unable to crack it (as in, obtain the inputs) by practical means.
Or even impossible: if the not-so-random value comes from an AES stream (or any encryption method actually), it's impossible to prove as long as AES is not broken (proving that an AES stream can be distinguished from random is a sufficient definition of “broken” in the crypto community).