Chaitin's construction
- Correction in progress: a substantial point is still missing, and the formulae and concepts are not correct without it.
Introduction
Are there real numbers that are exactly defined but cannot be computed? This question leads us to exact real arithmetic, algorithmic information theory, and the foundations of mathematics and computer science.
See the Wikipedia article on Chaitin's construction, referring e.g. to
- Omega and why math has no TOEs (Gregory Chaitin).
PROBLEM with the approach below
The approach below fails to take into account that a halting probability is only (provably) random if the programming language used is additively optimal; that is, it must provide access to raw binary data without overhead. Thus, a binary code for combinatory logic or lambda calculus does not suffice: it is also necessary to apply the encoded term to the binary data that follows the encoded term, as is done for example in Section 4.1 of Tromp's paper.
Basing it on combinatory logic
For a more direct connection to functional programming, we can base the construction on combinatory logic (instead of a Turing machine).
Coding
See the prefix coding system described in Binary Lambda Calculus and Combinatory Logic (page 20) written by John Tromp:
$\widehat{\mathbf{K}} \equiv 00$

$\widehat{\mathbf{S}} \equiv 01$

$\widehat{(x\;y)} \equiv 1\,\widehat{x}\,\widehat{y}$

of course, $x$, $y$ are meta-variables, and also some other notations are changed slightly here.
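For concreteness, here is a minimal Haskell rendering of this coding. The algebraic datatype below is an assumption made only for illustration; the CL module used later in this article may represent terms differently.

-- a plain CL term type, assumed here only for illustration
data CL = K | S | CL :@ CL

-- the prefix code: two bits per atom, one marker bit per application
code :: CL -> String
code K        = "00"
code S        = "01"
code (x :@ y) = '1' : code x ++ code y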
Decoding
Having seen this, decoding is rather straightforward. Below is a parser for illustration; it serves only didactic purposes and will not be used in the final implementation, because a good term generator makes parsing superfluous for this task.
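A sketch of such a parser, assuming the CL module exports the term type CL besides k, s and apply:

module Decode (decode) where

import CL (CL, k, s, apply)

-- Decode a prefix of the given bit string ('0'/'1' characters):
-- return the parsed term together with the unconsumed rest of the
-- input, or Nothing for a syntactically incorrect sequence.
decode :: String -> Maybe (CL, String)
decode ('0' : '0' : rest) = Just (k, rest)
decode ('0' : '1' : rest) = Just (s, rest)
decode ('1' : rest) = do
    (x, rest')  <- decode rest
    (y, rest'') <- decode rest'
    Just (apply x y, rest'')
decode _ = Nothing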
Chaitin's construction
Now, Chaitin's construction will be here

$\sum_{p \in \mathrm{Dom}_{\mathrm{dc}},\ \mathrm{hnf}\,(\mathrm{dc}\;p)} 2^{-|p|}$

where
- $\mathrm{hnf}$ should denote a unary predicate “has normal form” (“terminates”)
- $\mathrm{dc}$ should mean an operator “decode” (a function from finite bit sequences to combinatory logic terms)
- $\{0,1\}^*$ should denote the set of all finite bit sequences
- $\mathrm{Dom}_{\mathrm{dc}}$ should denote the set of syntactically correct bit sequences (semantically, they may either terminate or diverge), i.e. the domain of the decoding function, i.e. the range of the coding function. Thus, $\mathrm{Dom}_{\mathrm{dc}} \subseteq \{0,1\}^*$
- “absolute value” $|p|$ should mean the length of the bit sequence $p$ (not combinatory logic term evaluation!)
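A quick worked instance of the definition: with the coding above, the two shortest decodable sequences are $00$ and $01$; both decode to terms that are already in normal form, so together they contribute $2^{-2} + 2^{-2} = \frac{1}{2}$ to the sum. This is where the table below starts.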
Table for small lengths
Length ($n$) | All strings ($2^n$) | Decodable strings: count, ratio, sum till now | Terminating: count, ratio, sum till now | $\Omega$ approximated till now: mantissa -- binary, length-fitting binary, decimal |
---|---|---|---|---|
0 | 1 | 0, 0, 0 | 0, 0, 0 | -, -, - |
1 | 2 | 0, 0, 0 | 0, 0, 0 | -, 0, 0 |
2 | 4 | 2, $\frac{1}{2}$, $\frac{1}{2}$ | 2, $\frac{1}{2}$, $\frac{1}{2}$ | 1, 10, 5 |
3 | 8 | 0, 0, $\frac{1}{2}$ | 0, 0, $\frac{1}{2}$ | 1, 100, 5 |
4 | 16 | 0, 0, $\frac{1}{2}$ | 0, 0, $\frac{1}{2}$ | 1, 1000, 5 |
5 | 32 | 4, $\frac{1}{8}$, $\frac{5}{8}$ | 4, $\frac{1}{8}$, $\frac{5}{8}$ | 101, 10100, 625 |
It illustrates nicely that Chaitin's construction is a normal number: its digits (in binary representation) look as if they were generated by tossing a coin.
Eliminating any concept of code by handling combinatory logic terms directly
Chaitin's construction can be grasped also as

$\sum_{t \in \mathrm{CL},\ \mathrm{hnf}\,t} 2^{-\left|\widehat{t}\right|}$

since the coding is a bijection between CL terms and decodable bit sequences. We can avoid referring to any code notion if we modularize out the function

$|\cdot| \circ \widehat{\ } : \mathrm{CL} \to \mathbb{N}$

and give it a separate name, e.g.

$\Vert\cdot\Vert : \mathrm{CL} \to \mathbb{N}, \qquad \Vert t \Vert = \left|\widehat{t}\right|$

and notice that it can be defined directly in terms of CL-terms (we need not use any decoding concept any longer):

$\Vert\mathbf{K}\Vert = 2, \qquad \Vert\mathbf{S}\Vert = 2, \qquad \Vert(x\;y)\Vert = 1 + \Vert x\Vert + \Vert y\Vert$
Thus, we transfer (lift) the notion of “length” from bit sequences to combinatory logic terms in an appropriate way. Let us call it, e.g. the “norm” of the term.
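A minimal Haskell sketch of this norm, again on an assumed plain CL datatype (standalone, repeated from the coding sketch above):

data CL = K | S | CL :@ CL

-- the norm of a term equals the length of its code:
-- two bits per atom, one bit per application
norm :: CL -> Integer
norm K        = 2
norm S        = 2
norm (x :@ y) = 1 + norm x + norm y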
Thus, Chaitin's construction is grasped also as

$\sum_{t \in \mathrm{Dom}_{\mathrm{nf}}} 2^{-\Vert t\Vert}$

where

$\mathrm{nf} : \mathrm{CL} \supset\!\to \mathrm{CL}$

is a partial function defined on CL terms; it attributes to each “terminating” term its normal form.
Thus, we have no notions of “bit sequence”, “code”, “coding”, “decoding” at all. But their ghosts still haunt us: the definition of the norm function looks rather strange without bearing in mind that it was transferred from a concept of coding.
More natural norm functions (from CL terms)
Question: if we already move away from the approaches referring to any code concept, could we define the norm in other ways? E.g. counting one for each atom and each application, so that the norm is simply the number of vertices of the term's tree:

$\Vert\mathbf{K}\Vert = 1, \qquad \Vert\mathbf{S}\Vert = 1, \qquad \Vert(x\;y)\Vert = 1 + \Vert x\Vert + \Vert y\Vert$
And is it worth doing at all? The former norm, at least, had a good theoretical foundation (based on analysis, arithmetic and probability theory). This latter one is not so much cleaner that we should prefer it despite its lack of theoretical grounds.
What I really want is to exclude conceptually the notion of coding, and with it the distinction between “syntactically incorrect” and “syntactically correct but diverging”. Then only syntactically correct things are taken into account, and the only remaining distinction, when calculating Chaitin's construction, is terminating versus non-terminating.
What I want to preserve:
- it can be interpreted as a probability
- it is a normal number, as if its digits (in binary representation) were generated by tossing a coin
thus I do not want to spoil these features.
Table for simpler CL-terms
Let us leave coding out of consideration, thus excluding the notion of “syntactically incorrect coding” even conceptually. Can we guess a good norm?
Binary tree pattern | Maximal depth, vertices, edges | Leaves, branches | So many CL-terms (how to count) | Terminating, ratio | So many till now, ratio till now |
---|---|---|---|---|---|
single leaf | 0, 1, 0 | 1, 0 | $2 = 2^1$ | 2, 1 | 2, 1 |
one branch, two leaves | 1, 3, 2 | 2, 1 | $4 = 2^2$ | 4, 1 | 6, 1 |
depth-2 comb, one orientation | 2, 5, 4 | 3, 2 | $8 = 2^3$ | 8, 1 | 14, 1 |
depth-2 comb, mirrored | 2, 5, 4 | 3, 2 | $8 = 2^3$ | 8, 1 | 22, 1 |
full binary tree of depth 2 | 2, 7, 6 | 4, 3 | $16 = 2^4$ | 16, 1 | 38, 1 |
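The counting rule behind the fourth column: a tree shape with $\ell$ leaves yields $2^{\ell}$ CL-terms, since each leaf is independently $\mathbf{K}$ or $\mathbf{S}$. All terms this small terminate: an $\mathbf{S}$-redex needs three arguments, which none of these shapes can provide, so at most a single $\mathbf{K}$-step fires before a normal form is reached.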
Implementation
To do: writing a program in Haskell -- or in combinatory logic:-) -- which could help in making conjectures on combinatory logic-based Chaitin's constructions. It would make only approximations, in a similar way to how most Mandelbrot-plotting programs work. The analogy:
- they ask for a maximum limit of iterations, so that they can make a conjecture on the convergence of a series;
- this program will ask for a maximum limit of reduction steps, so that it can make a conjecture on the termination (having a normal form) of a CL term.
The explanation: the program cannot prove non-termination of any examined CL-term, but it can make a good conjecture: if termination does not take place within the given limit of reduction steps, then the examined CL-term is regarded as non-terminating. A sketch of such a test follows.
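A hedged sketch of this step-limited termination test. Here reduceStep (performing one leftmost reduction step and returning Nothing on a normal form) is an assumed helper, not part of the modules below:

-- conjecture termination: does the term reach a normal form
-- within the given number of reduction steps?
hasNormalFormWithin :: Integer -> CL -> Bool
hasNormalFormWithin limit t
    | limit <= 0 = False                 -- give up: regarded as non-terminating
    | otherwise  = case reduceStep t of
        Nothing -> True                  -- normal form reached
        Just t' -> hasNormalFormWithin (limit - 1) t'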
Architecture
A CL term generator generates CL terms in “ascending order” (with respect to a theoretically appropriate “norm”); by computing the norm of each CL-term and conjecturing its termination, the program approximates Chaitin's construction (to a given number of digits, under the given maximal limit of reduction steps).
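A sketch of the approximating sum, combining the pieces above; direct, norm and hasNormalFormWithin are the (partly assumed) names used in this article. It relies on the generator below emitting terms in ascending size order, so the norm grows monotonically along the list:

-- partial sum of Chaitin's construction over all terms whose norm
-- does not exceed maxNorm, counting a term as terminating iff it
-- reaches a normal form within stepLimit reduction steps
approxOmega :: Integer -> Integer -> Rational
approxOmega stepLimit maxNorm =
    sum [ 1 / 2 ^ norm t
        | t <- takeWhile ((<= maxNorm) . norm) direct
        , hasNormalFormWithin stepLimit t ]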
User interface
chaitin --model-of-computation=cl --encoding=tromp --limit-of-reduction-steps=500 --digits=9 --decimal
chaitin --model-of-computation=cl --encoding=direct --limit-of-reduction-steps=500 --digits=9 --decimal
Term generator
module CLGen where

import Generator (gen0)
import CL (CL, k, s, apply)   -- the CL type itself is needed for the signature below

-- all CL terms built from S and K, in ascending order of size
direct :: [CL]
direct = gen0 apply [s, k]
See combinatory logic term modules here.
module Generator (gen0) where

import PreludeExt (cross)

-- generate all terms built from the constants in `c` with the binary
-- operation `f`, listed by increasing number of applications
gen0 :: (a -> a -> a) -> [a] -> [a]
gen0 f c = gen f c 0

gen :: (a -> a -> a) -> [a] -> Integer -> [a]
gen f c n = sizedGen f c n ++ gen f c (succ n)

-- all terms containing exactly `n` applications
sizedGen :: (a -> a -> a) -> [a] -> Integer -> [a]
sizedGen f c 0 = c
sizedGen f c n = map (uncurry f)
    $ concat [sizedGen f c i `cross` sizedGen f c (n - 1 - i) | i <- [0 .. n - 1]]
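A quick check of the generator on a toy string algebra (an illustrative session, not part of the modules):

take 6 (gen0 (\x y -> "(" ++ x ++ " " ++ y ++ ")") ["s", "k"])
-- ["s","k","(s s)","(s k)","(k s)","(k k)"]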
module PreludeExt (cross) where

-- Cartesian product of two lists
cross :: [a] -> [a] -> [(a, a)]
cross xs ys = [(x, y) | x <- xs, y <- ys]
Related concepts
To do
- Carrying out the tasks described in #Implementation
- Devising more natural norm functions (from CL-terms), see #More natural norm functions (from CL terms)