Chaitin's construction


Introduction

Are there real numbers which are defined exactly, but cannot be computed? This question leads us to exact real arithmetic, algorithmic information theory, and the foundations of mathematics and computer science.

See the Wikipedia article on Chaitin's construction, which refers to further papers and background material.

Basing it on combinatory logic

To relate it more directly to functional programming, we can base the construction on combinatory logic instead of a Turing machine.

Coding

See the prefix coding system described in Binary Lambda Calculus and Combinatory Logic (page 20) written by John Tromp:

<math>\widehat{\mathbf S} \equiv 00</math>

<math>\widehat{\mathbf K} \equiv 01</math>

<math>\widehat{\left(x\;y\right)} \equiv 1\;\widehat x\;\widehat y</math>

of course, <math>x</math>, <math>y</math> are meta-variables, and also some other notations are changed slightly compared to the paper.
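
As a small illustration, here is the coding written as a Haskell function; the CL data type is an assumption made for this sketch, not the article's CL module:

 data CL = S | K | App CL CL
 
 -- the prefix code of a term, as a string of '0'/'1' characters
 code :: CL -> String
 code S         = "00"
 code K         = "01"
 code (App x y) = "1" ++ code x ++ code y

For example, code (App K S) yields "10100".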

Decoding

Having seen this, decoding is rather straightforward. A parser can be given for illustration, but it serves only didactic purposes: it will not be used in the final implementation, because a good term generator makes parsing superfluous for this task.
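
A minimal decoder sketch along those lines, reusing the illustrative CL type from the Coding section (this is not the article's original parser, merely an indication of how one could look):

 -- decode a prefix of the bit string into a term, returning the unconsumed rest
 decode :: String -> Maybe (CL, String)
 decode ('0':'0':rest) = Just (S, rest)
 decode ('0':'1':rest) = Just (K, rest)
 decode ('1':rest)     = do
   (x, rest')  <- decode rest
   (y, rest'') <- decode rest'
   Just (App x y, rest'')
 decode _              = Nothing
 
 -- a bit string is syntactically correct iff decoding consumes it completely
 decodeClosed :: String -> Maybe CL
 decodeClosed bits = case decode bits of
   Just (t, "") -> Just t
   _            -> Nothing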

Chaitin's construction

Now, Chaitin's construction will be here:

<math>\sum_{p\in\mathrm{Dom}_{\mathrm{dc}},\;\mathrm{hnf}\left(\mathrm{dc}\;p\right)} 2^{-\left|p\right|}</math>

where

  • <math>\mathrm{hnf}</math> should denote the unary predicate “has normal form” (“terminates”)
  • <math>\mathrm{dc}</math> should mean the operator “decode” (a function from finite bit sequences to combinatory logic terms)
  • <math>2^*</math> should denote the set of all finite bit sequences
  • <math>\mathrm{Dom}_{\mathrm{dc}}</math> should denote the set of syntactically correct bit sequences (semantically, they may either terminate or diverge), i.e. the domain of the decoding function, i.e. the range of the coding function. Thus, <math>\mathrm{Dom}_{\mathrm{dc}}\subseteq 2^*</math>
  • the “absolute value” <math>\left|p\right|</math> should mean the length of the bit sequence <math>p</math> (not combinatory logic term evaluation!)
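
For example, the only two-bit members of <math>\mathrm{Dom}_{\mathrm{dc}}</math> are the codes of the two basic combinators, and both terms are already in normal form, so the strings of length 2 contribute <math>2\cdot 2^{-2}=\frac12</math> to the sum; this is the first non-zero entry in the table below.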

Table for small lengths

{|
! Length (<math>n</math>)
! All strings (<math>2^n</math>)
! Decodable strings, ratio, their sum till now
! Terminating, ratio, their sum till now
! <math>\Omega</math> approximated till now: mantissa -- binary, length-fitting binary, decimal
|-
| 0
| 1
| 0, 0, 0
| 0, 0, 0
| -, -, -
|-
| 1
| 2
| 0, 0, 0
| 0, 0, 0
| -, 0, 0
|-
| 2
| 4
| 2, <math>\frac12</math>, <math>\frac12</math>
| 2, <math>\frac12</math>, <math>\frac12</math>
| 1, 10, 5
|-
| 3
| 8
| 0, 0, <math>\frac12</math>
| 0, 0, <math>\frac12</math>
| 1, 100, 5
|-
| 4
| 16
| 0, 0, <math>\frac12</math>
| 0, 0, <math>\frac12</math>
| 1, 1000, 5
|-
| 5
| 32
| 4, <math>\frac18</math>, <math>\frac58</math>
| 4, <math>\frac18</math>, <math>\frac58</math>
| 101, 10100, 625
|}
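
As a cross-check of the last row: the decodable strings of length 5 are exactly the codes of the four single applications of two basic combinators, all of which terminate, so they contribute <math>4\cdot 2^{-5}=\frac18</math>, and the running sum becomes <math>\frac12+\frac18=\frac58</math>, i.e. 0.101 in binary and 0.625 in decimal, as in the table.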

Eliminating any concept of code by handling combinatory logic terms directly

We can avoid referring to any notion of code if we transfer (lift) the notion of “length” from bit sequences to combinatory logic terms in an appropriate way. Let us call it the “norm” of the term:

<math>\sum_{t\in\mathrm{CL},\;\mathrm{hnf}\;t} 2^{-\left\Vert t\right\Vert}</math>

where

<math>\left\Vert\mathbf K\right\Vert = 2</math>

<math>\left\Vert\mathbf S\right\Vert = 2</math>

<math>\left\Vert\left(x\;y\right)\right\Vert = 1 + \left\Vert x\right\Vert + \left\Vert y\right\Vert</math>

Thus, we have no notions of “bit sequence”, “code”, “coding”, “decoding” at all. But their ghosts still haunt us: the definition of the norm function looks rather strange without bearing in mind that it was transferred from a concept of coding.
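
Rendered as a Haskell function over the illustrative CL type from the Coding section, the norm is simply 2 bits for a basic combinator and 1 extra bit per application:

 -- the “length” lifted from codes to terms
 norm :: CL -> Integer
 norm S         = 2
 norm K         = 2
 norm (App x y) = 1 + norm x + norm y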

More natural norm functions (from CL terms)

Question: if we already move away from the approaches referring to any code concept, could we define the norm in other ways, e.g. by some more direct measure of the size of the term?

And is it worth doing at all? The former norm, at least, had a good theoretical foundation (based on analysis, arithmetic and probability theory). The latter is not so much cleaner that we should prefer it while it lacks such theoretical grounds.

What I really want is to exclude what I regard as an underestimation of this “probability of termination” number -- an underestimation that comes from taking the syntactically incorrect codes into account. Thus only termination vs. non-termination should matter when calculating this number (which can be interpreted as a probability).

Table for simpler CL-terms

Let us leave coding out of account, thus excluding the notion of “syntactically incorrect code” even conceptually. Can we guess a good norm? Note that each binary tree pattern with <math>\ell</math> leaves stands for <math>2^\ell</math> CL-terms, because each leaf can independently be <math>\mathbf K</math> or <math>\mathbf S</math>; this is how the terms in the table below are counted.

{|
! Binary tree pattern
! Maximal depth, vertices, edges
! Leafs, branches
! So many CL-terms = how to count it
! Terminating, ratio
! So many till now, ratio till now
|-
| <math>x</math>
| 0, 1, 0
| 1, 0
| 2 = <math>2^1</math>
| 2, 1
| 2, 1
|-
| <math>\left(x\;y\right)</math>
| 1, 3, 2
| 2, 1
| 4 = <math>2^2</math>
| 4, 1
| 6, 1
|-
| <math>\left(\left(x\;y\right)\;z\right)</math>
| 2, 5, 4
| 3, 2
| 8 = <math>2^3</math>
| 8, 1
| 14, 1
|-
| <math>\left(x\;\left(y\;z\right)\right)</math>
| 2, 5, 4
| 3, 2
| 8 = <math>2^3</math>
| 8, 1
| 22, 1
|-
| <math>\left(\left(x\;y\right)\;\left(z\;w\right)\right)</math>
| 2, 7, 6
| 4, 3
| 16 = <math>2^4</math>
| 16, 1
| 38, 1
|}

Implementation

To do: write a program in Haskell -- or in combinatory logic :-) -- which could help in making conjectures about combinatory-logic-based Chaitin constructions. It would compute only approximations, in a similar way to how most Mandelbrot-plotting programs work. The analogy:

  • they ask for a maximum limit of iterations, so that they can make a conjecture on convergence of a series;
  • this program will ask for a maximum limit of reduction steps, so that it can make a conjecture on termination (having a normal form) of a CL term.

Explanation: the non-termination of an examined CL-term cannot be proven by the program, but a good conjecture can be made: if termination does not take place within the given limit of reduction steps, then the examined CL-term is regarded as non-terminating.
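
A sketch of such a conjecture-maker over the illustrative CL type from the Coding section: step performs one leftmost-outermost reduction step, and terminatesWithin gives up after the given number of steps (the function names are made up for this sketch):

 -- one leftmost-outermost reduction step, if the term has a redex at all
 step :: CL -> Maybe CL
 step (App (App K x) _)         = Just x
 step (App (App (App S x) y) z) = Just (App (App x z) (App y z))
 step (App f x)                 = case step f of
   Just f' -> Just (App f' x)
   Nothing -> fmap (App f) (step x)
 step _                         = Nothing
 
 -- conjecture termination: does the term reach a normal form within the step limit?
 terminatesWithin :: Int -> CL -> Bool
 terminatesWithin limit t = case step t of
   Nothing -> True                        -- normal form reached
   Just t'
     | limit <= 0 -> False                -- budget exhausted: regarded as non-terminating
     | otherwise  -> terminatesWithin (limit - 1) t'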

Architecture

A CL term generator generates CL terms in “ascending order” (with respect to a theoretically appropriate “norm”), and by computing the norm of each CL-term, the program approximates Chaitin's construction (to a given number of digits, and according to the given maximal limit of reduction steps).
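
Putting the pieces together, a rough sketch of that architecture might look as follows; it assumes the direct generator from the CLGen module below, and the hypothetical norm and terminatesWithin sketches above, packaged into modules Norm and Reduce (those module names are inventions of this sketch):

 module Approx where
 
 import CLGen (direct)              -- the term generator defined below
 import Norm (norm)                 -- hypothetical home of the norm sketch above
 import Reduce (terminatesWithin)   -- hypothetical home of the reducer sketch above
 
 -- Approximate Omega: examine a finite prefix of the generated terms, keep those
 -- conjectured to terminate within the step limit, and sum 2 ^ (- norm t).
 approximate :: Int -> Int -> Double
 approximate stepLimit termCount =
   sum [ 2 ** negate (fromInteger (norm t))
       | t <- take termCount direct
       , terminatesWithin stepLimit t ]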

User interface

chaitin --model-of-computation=cl --encoding=tromp --limit-of-reduction-steps=500 --digits=9 --decimal
chaitin --model-of-computation=cl --encoding=direct --limit-of-reduction-steps=500 --digits=9 --decimal

Term generator

 module CLGen where
 
 import Generator (gen0)
 import CL (CL, k, s, apply)   -- the CL type and term constructors (see the modules linked below)
 
 -- all CL terms built from s and k, in ascending order of size
 direct :: [CL]
 direct = gen0 apply [s, k]

See combinatory logic term modules here.

 module Generator (gen0) where
 
 import PreludeExt (cross)
 
 -- all terms built from the given constants, in ascending order of size
 gen0 :: (a -> a -> a) -> [a] -> [a]
 gen0 f c = gen f c 0
 
 -- terms of size n, followed by all larger ones
 gen :: (a -> a -> a) -> [a] -> Integer -> [a]
 gen f c n = sizedGen f c n ++ gen f c (succ n)
 
 -- terms of exactly the given size: a term of size n > 0 is an application
 -- of a size-i term to a size-(n - 1 - i) term
 sizedGen :: (a -> a -> a) -> [a] -> Integer -> [a]
 sizedGen f c 0 = c
 sizedGen f c n = map (uncurry f)
                $ concat [sizedGen f c i `cross` sizedGen f c (n - 1 - i) | i <- [0 .. n - 1]]

 module PreludeExt (cross) where
 
 -- Cartesian product of two lists
 cross :: [a] -> [a] -> [(a, a)]
 cross xs ys = [(x, y) | x <- xs, y <- ys]
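
As a quick sanity check of the generator (assuming a Show instance for CL that prints the basic combinators as S and K and applications in parentheses), the list begins with the two constants, followed by the four single applications:

 > take 6 direct
 [S, K, (S S), (S K), (K S), (K K)]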

Related concepts

To do