> You might say the standard natural numbers are those of the form 1 + ··· + 1, where we add 1 to itself some finite number of times. But what does 'finite number' mean here? It means a standard natural number! So this is circular.
> So, conceivably, the concept of 'standard' natural number, and the concept of 'standard' model of Peano arithmetic, are more subjective than most mathematicians think. Perhaps some of my 'standard' natural numbers are nonstandard for you!
Arithmetic is usually formalized using first-order logic (that's what Gödel did in his incompleteness paper, for example).
That means one can write formulas quantifying only over numbers (that's what's done for the axioms defining addition and multiplication), but in order to get induction, one needs infinitely many axioms, one for each predicate P, saying:
if P(0) and (∀n)(P(n) => P(n+1)), then (∀n) P(n)
Intuitively, this guarantees that all the numbers that can be reached from zero by repeatedly applying "+1" will follow the rules of arithmetic.
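If it helps to see that schema spelled out, here is one instance of it as it might look in a proof assistant (a minimal sketch in Lean 4; P is an arbitrary predicate on the natural numbers):

    -- One instance of the induction schema, for an arbitrary predicate P.
    -- The base case and the step are exactly the two hypotheses above.
    example (P : Nat → Prop)
        (base : P 0)
        (step : ∀ n, P n → P (n + 1)) :
        ∀ n, P n := by
      intro n
      induction n with
      | zero => exact base
      | succ k ih => exact step k ih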
But it does not guarantee that all numbers _can_ be reached this way. In the standard model, the one we have in mind, that is the case, but it turns out that there are structures that look like this
0, 1, 2, ...   ..., -2', -1', 0', 1', 2', ...
with numbers that come after all the ones in the standard model and cannot be reached by applying "+1" a finite number of times, in which all the axioms of arithmetic are nevertheless true, so these structures are also models of arithmetic.
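For what it's worth, here is a sketch of why such models must exist (the standard compactness argument; it isn't the only construction). Add a new constant symbol c to the language and consider the theory

    \[
      T \;=\; \mathrm{PA} \;\cup\; \{\, c > 0,\; c > 1,\; c > 2,\; \dots \,\}
    \]

Every finite subset of T is satisfiable in the standard model (interpret c as a large enough number), so by the compactness theorem T has a model. In that model the element named by c is greater than every standard number, so it cannot be reached from 0 by finitely many applications of "+1", and yet every axiom of PA is true there.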
Some formulas are going to be true in one and false in the other, but all such formulas are independent of the axioms of arithmetic. The study of formal methods in mathematics and their limitations is truly fascinating.
This all seems to be circling around the same misunderstanding on my part, as these Chaitin strings can't be reduced to a set of operations shorter than their length, and prime numbers can't be reduced to a set of factors smaller than themselves either, and to me these seem like analogies of each other. Are they artifacts of the same thing?
Am I misunderstanding Kolmogorov complexity when, intuitively, it seems like the minimum program to produce a string would be based on the theoretical minimum compression of that string? Because the program that produced it would be as long as one that had the bits per symbol of the full string, minus the bits per symbol of the compressed string, where the shortest program would be one that produced all strings with that many remaining bits per symbol (plus the length of the compressed data)?
I'm asking whether the smallest program to produce a string S is one of length L = len(C) + the length of the de/compression program, where C is the compressed data. That's the theoretical minimum program. Would a generic program that produced strings of length L with B information bits per symbol therefore be the smallest?
It's a naive question, but there has to be some relationship between the number of symbols in the string, the number of instructions to produce each symbol, the information bits per symbol, and the minimum theoretical compression.
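To make that concrete for myself, here's roughly the picture I have in mind (a toy Python sketch; zlib stands in for "the theoretical minimum compression", which it isn't, so this only gives an upper bound on the true minimum program length):

    import os
    import zlib

    # Treat "fixed decompression stub + compressed payload" as a program
    # that prints the original string. Its total length is an upper bound
    # on the length of the shortest such program, not the shortest itself.
    # PAYLOAD is just a placeholder for where the compressed bytes would go.
    STUB = b"import sys,zlib;sys.stdout.buffer.write(zlib.decompress(PAYLOAD))"

    def program_length_upper_bound(s: bytes) -> int:
        payload = zlib.compress(s, 9)
        return len(STUB) + len(payload)

    regular = b"ab" * 500        # low information per symbol: compresses a lot
    random_ = os.urandom(1000)   # ~8 bits per symbol: barely compresses at all
    print(len(regular), program_length_upper_bound(regular))
    print(len(random_), program_length_upper_bound(random_))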
It's said that decompression is not guaranteed, but once you abstract the original input string into something defined by properties, a program producing all things with those properties (e.g. strings with the same bits per symbol) must eventually yield that string. (This sounds circular.)
The gap in this reasoning is that it sounds like Kolmogorov complexity means the program reproducing the string must produce the actual string, and not just every instance of the attributes that meets the criteria to have produced that string; or that the program itself is not allowed to just be a proof that it can produce that string. Or is the minimum program to produce the string necessarily a proof that it can produce the string?
(Maybe it's that Kolmogorov complexity refers to the minimum set of instructions/operations in a given language to produce a string, and that "complexity" refers to the completeness of that instruction set vs. Turing completeness?)
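Here is the kind of "produce everything with the right attributes" program I'm imagining (a toy Python sketch; the attributes are just length and symbol counts, which is my own simplification):

    from collections import Counter
    from itertools import permutations

    def strings_with_attributes(length: int, symbol_counts: Counter) -> set:
        # Every string of the given length whose symbol frequencies match.
        # The target string is in here somewhere, but so are many others,
        # so the attributes alone don't single it out.
        pool = "".join(sym * n for sym, n in symbol_counts.items())
        assert len(pool) == length
        return {"".join(p) for p in permutations(pool)}

    target = "aab"
    candidates = strings_with_attributes(3, Counter(target))
    print(candidates)             # {'aab', 'aba', 'baa'} in some order
    print(target in candidates)   # True, but you still have to say which one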
Back to the incompleteness in the article, my question sounds like the same problem: if the necessarily minimum program is one where the program is a proof that it can produce the string (given its length and symbols), the program is complete and has undecidable inputs, so there must be a property of strings in general where the strings are themselves intrinsically decidable or undecidable programmatically (meaning they can be distinguished from all other strings), sort of like primes and composite numbers, or rational vs. irrational strings, but decidable and undecidable strings for an abstract program. Thinking that way, string compression becomes a kind of analogous factoring operation, where you are folding a string into other constituent instructions instead of multiplicative integer factors. The compression is either a lossless transformation into a program, or one that is lossy.
So: a program that produces only one string, vs. a program that verifies whether a given string is a theorem of a program that may only produce it. Are they logically the same thing? At a higher level of abstraction they sound like it.
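To spell out what I mean by the two sides (a toy Python sketch; the names and the one-string example are mine):

    def producer() -> str:
        # A program whose only job is to output exactly one string.
        return "the single string this program produces"

    def verifier(candidate: str) -> bool:
        # A checker built trivially from the producer: run it and compare.
        return candidate == producer()

    print(verifier("the single string this program produces"))  # True
    print(verifier("any other string"))                         # False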
The conflict seems to be between the separate operations of producing vs. verifying strings, as though there were some limit on the reversibility of that process (like a trapdoor) related to the incompleteness. Now we're into Chaitin's question of whether there are strings that do not have programs to generate them that are proportionally shorter than the strings themselves. But then that asks: what about strings whose programs are the same length, longer, or possibly infinite? This suggests to me there is information contained within a string about whether the program to generate it completes.
So how much of a string do you need to know to show whether a program to generate it is shorter than the string itself, and what is that property? (It sounds like bits per symbol, i.e. entropy.) It seems like there is a trade-off: knowing both the length and the symbols, you can produce it, but without complete information about it you are left satisfying other criteria about it (checksums, factors, etc.) and assuming the criteria will eventually express the string.
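By "bits per symbol/entropy" I mean something like this (a rough Python sketch of order-0 empirical entropy; it only looks at symbol frequencies, so it misses longer-range structure):

    import math
    from collections import Counter

    def bits_per_symbol(s: str) -> float:
        # Average bits per symbol if each symbol were coded purely
        # according to its frequency within s (order-0 entropy).
        counts = Counter(s)
        n = len(s)
        return sum((c / n) * math.log2(n / c) for c in counts.values())

    print(bits_per_symbol("aaaaaaaa"))   # 0.0  (totally predictable)
    print(bits_per_symbol("abababab"))   # 1.0  (frequencies alone miss the pattern)
    print(bits_per_symbol("abcdefgh"))   # 3.0  (eight equally frequent symbols)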
Maybe the minimum compression of a string yields a set of symbols that can encode arguments to instructions which, when run, produce the string; but given certain string complexities, some of those instruction sets and arguments are Turing complete while others are not, similar to being decidable or undecidable in fewer operations than some constant, just as integers can be factored or not using one operation (division) and so can be composite or prime.
That sort of suggests there is a ring of strings that can be produced by Turing-complete instruction sets, and another of strings that cannot, and that there is a way to find out whether a string is a member of one set or the other: by finding whether the string compresses to a string whose symbols produce encodings for that sufficiently complex set of instructions.
This is really running in circles, and the size of this wall of text looks a lot like regret in the making, but I can't seem to stop sounding out how limits on Kolmogorov complexity rhyme with other undecidable and hard limits.
Perhaps the most intuitive example is "the smallest unnameable number".
It's clearly impossible to name all positive integers of unbounded size, if you have some finite limit on name length, but it's also impossible to find the smallest unnameable one, because if you did, "smallest unnameable number" would be its name, yielding a paradox.
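The same idea, written out as a would-be program (a Python sketch; the oracle is hypothetical, which is exactly the point):

    # "Nameable" here means "has a description shorter than LIMIT characters",
    # a stand-in for naming; no computable oracle like this can exist.
    LIMIT = 1000

    def has_name_shorter_than_limit(n: int) -> bool:
        raise NotImplementedError  # the hypothetical oracle

    def smallest_unnameable() -> int:
        n = 0
        while has_name_shorter_than_limit(n):
            n += 1
        return n

    # The source of smallest_unnameable() is itself a description of the
    # number it would return, and it is far shorter than LIMIT characters.
    # That is the paradox, and essentially the argument that Kolmogorov
    # complexity is uncomputable.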
> So, conceivably, the concept of 'standard' natural number, and the concept of 'standard' model of Peano arithmetic, are more subjective than most mathematicians think. Perhaps some of my 'standard' natural numbers are nonstandard for you!
Huh -_-a. Does anyone have a quick rundown?