Could this have any practical application to software testing?
If there is any analogy here, perhaps one could transmute a program into something that crashed or failed more immediately if it was buggy. Of course even if that worked, one might not be able to trace the failure back to an actual line of code; it's not clear that this "smearing" process preserves information like that.
"...imagine you wake up one morning with your
head full of a complete proof of the Riemann
hypothesis. (This is arguably the greatest
open problem in mathematics, and is a deep
statement about the distribution of the prime
numbers, the atoms of arithmetic.) Giddy with
anticipation of certain fame, you leap out of
bed and type up the proof, in its full 500-page
glory. That done, you are seized by doubt. How
wise is it to inflict the full brunt of your genius
on your colleagues? Will anyone even listen, or
be prepared to check your proof?
What you can do is re-format your write-up
into a (somewhat longer) PCP proof. Anyone
now wishing to verify your argument need only
pick a handful of words at random, and follow a
set list of instructions to conclude whether it is
correct or not. An error might have slipped in
on account of faulty working, buggy reformatting,
or outright cheating. No matter, it will be
caught with overwhelming probability. What
the reformatting step does is smear any error
all over the proof, making it easier to spot. In
much the same way, a diligent sandwich maker
will smear a smidgen of jam evenly over his
bread, rather than leaving it concentrated
in one corner, and so make the whole more
savoury.
...In its full splendour, PCP asserts that
any statement S whose validity can be ascertained
by a proof P written over n bits also
admits an alternative proof, Q. This proof Q
has two appealing features: it can be derived
from P in a number of steps proportional to
n^c, where c is some constant; and P can be verified
by examining only three bits of Q picked
at random. If S is true, a correct P will satisfy
the verifier with a probability of 99%. If it is not
true, any alleged proof P will trigger a rejection
from Q with a probability higher than 50%
(ref. 4). Not impressed with this error rate?
Then all you have to do is pick, instead of three
bits of Q, as many bits as are contained in this
line of text. The error probability will drop to
one in a billion."
Basically, given a proof P of a statement expressed in n bits, you can, in time polynomial in n, create a new PCP proof Q which (somehow) spreads P out, including any errors in P. Since the errors are spread out, you'll catch an error with high probability by checking only a very small number of randomly chosen bits of Q.
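To put numbers on the "one in a billion" claim: it's just repeated independent spot checks. Here is a minimal back-of-the-envelope sketch in Python (not an actual PCP construction; the per-check rejection probability of 1/2 is taken straight from the quoted passage):

    import math

    # Assumption (from the quote): one random 3-bit check of Q rejects a
    # bogus proof with probability at least 1/2, i.e. it misses the
    # error with probability at most 1/2.
    per_check_miss = 0.5
    target_error = 1e-9          # "one in a billion"

    # Independent checks multiply the miss probability, so we need the
    # smallest k with per_check_miss**k <= target_error.
    k = math.ceil(math.log(target_error) / math.log(per_check_miss))
    bits_queried = 3 * k

    print("independent 3-bit checks:", k)             # 30
    print("total bits of Q examined:", bits_queried)  # 90
    print(f"soundness error bound: {per_check_miss**k:.2e}")  # ~9.3e-10

Thirty checks of three bits each comes to about 90 bits of Q, which is roughly the "as many bits as are contained in this line of text" quip in the article.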
It's like MAGIC... Especially since they don't explain it at all in the article.
Hmm. According to that, the theorem applies to propositional logic. Propositional logic is a very weak system; I don't think it can state the Riemann hypothesis.
Edit: I take that back; propositional logic definitely can't state the Riemann hypothesis. Propositional logic is complete (and decidable), which by Gödel's incompleteness theorem means it can't even model the natural numbers (any consistent, effectively axiomatized theory that models them must be incomplete), much less the complex numbers.
scribd link, via slurping (www.scribd.com/slurp?url=...)