Recursive function theory

Introduction

Designed languages

  • Dr Matt Fairtlough's Minimal Programming Language (MIN) is not exactly a recursive function theory language, but it too is based on natural numbers, and its equivalence in power with partial recursive functions is shown in its description.

Implementations

An implementation in Haskell, among other implementations (e.g. one written in Java), can be found in Dr Matt Fairtlough's lecture notes (see the bottom of the page).

Motivations

Well-known concepts are taken from [Mon:MathLog], but several new notations (only notations, not concepts) are introduced to reflect all the concepts described in [Mon:MathLog], and some simplifications are made (by allowing zero-arity generalizations). There are plans to achieve formalizations that may allow us in the future to incarnate the main concepts of recursive function theory in a toy programming language, so that we can play with it and some interesting concepts can be taught in an enjoyable way.

The relatedness of this page to Haskell (and to functional programming) is rather weak. It seems to me that (programming in) recursive function theory may be rather another world (e.g. currying is missing, too) -- although the lack of variables (even of formal parameters) can yield a feeling resembling pointfree style programming, or even combinatory logic.

But despite its weak (direct) relatedness to functional programming, maybe this page can be useful for someone in the future.

Another weak connection of this topic to Haskell may appear through the fact that the Haskell implementation of the mentioned toy programming language may use

  • tricks with types, type arithmetic
  • or metaprogramming concepts, at worst preprocessing steps

because type-safe implementations of $\dot K^m_n$ and $K^m_n$ do not look straightforward to me.

Notations

$\mathbb N$
the set of natural numbers, including 0
Type system
I do not use this term in a precise way here: see the following items for explanation. The headline Type system may suggest that the type system of the planned recursive function theory programming language implementation can prevent the user from applying a partial function to an out-of-domain value -- but I think that type safety in this sense cannot be achieved.
$f : A \to B$
$f$ is a total function from $A$ to $B$. In an implemented recursive function theory language, it means a function that surely terminates.
$f : A \rightharpoonup B$
$f$ is a partial function from $A$ to $B$ (it may be either a total function or a proper partial function). In an implemented recursive function theory language, maybe this information (being partial) cannot be grasped by the type system. It may mean that proper partial functions simply fail to terminate, without this possibility being reflected in the type system in any way.
$A \times B$
The Cartesian product of sets is used here to stress the fact that recursive function theory does not know the concept of currying (several further concepts are needed to achieve something similar to currying, see Kleene's $s^m_n$ theorem). So using $f : A \times B \to C$ (instead of $f : A \to B \to C$) is intended to stress the lack of currying (at this level).

Primitive recursive functions

Type system

$\mathbf 0 = \mathbb N$
$\mathbf{n+1} = \underbrace{\mathbb N \times \cdots \times \mathbb N}_{n+1} \to \mathbb N$
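
This type system is the point where the type arithmetic mentioned among the Motivations could enter a Haskell implementation. A minimal sketch, assuming GHC's DataKinds and TypeFamilies extensions (the names Arity and Ary are invented here); note that the Haskell version is curried, unlike the Cartesian products above:

    {-# LANGUAGE DataKinds, TypeFamilies #-}

    import Data.Kind (Type)
    import Numeric.Natural (Natural)

    -- Type-level arities (an invented encoding).
    data Arity = Z | S Arity

    -- Ary n plays the role of the type written n above:
    --   Ary 'Z     = N
    --   Ary ('S n) = N -> Ary n
    type family Ary (n :: Arity) :: Type where
      Ary 'Z     = Natural
      Ary ('S n) = Natural -> Ary n

    -- e.g. Ary ('S ('S 'Z)) is Natural -> Natural -> Natural

The hard part, as noted among the Motivations, would be giving $\dot K^m_n$ and $K^m_n$ a type at this level of precision; this sketch does not attempt that.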

Initial functions

Constant

$0 : \mathbf 0$
$0 = 0$

This allows us to deal with a concept of zero in recursive function theory. In the literature (in [Mon:MathLog], too) this aim is achieved in another way: a

$\mathrm{zero} : \mathbf 1$
$\mathrm{zero}\;x = 0$

is defined instead. Is this approach superfluously overcomplicated? Can we avoid it and use the simpler (though more indirect) looking

$0 : \mathbf 0$
$0 = 0$

approach?

Are these approaches equivalent? Is the latter (simpler looking) one as powerful as the former? Could we define a zero using the

$0 : \mathbf 0$
$0 = 0$

approach? Let us try:

$\mathrm{zero} = K^0_1\;0$

(see the definition of $K^m_n$ somewhat below). This looks like it works, but it raises new questions: what about generalizing operations (here: composition) to deal with zero-arity cases in an appropriate way? E.g.

$\dot K^0_n\;c$
$K^0_n\;c$

where $c : \mathbf 0$ and $n \in \mathbb N$, can be regarded as $n$-ary functions throwing all their $n$ arguments away and returning $c$.

Does it take a generalization to allow such cases, or can they be inferred? A practical approach to solving such questions: let us write a Haskell program which implements (at least partially) recursive function theory. Then we can see clearly which things have to be defined and which things are consequences. I think the $K^0_n\;c$ construct is a rather straightforward thing.

Why all this can be important: it may be exactly $K^0_n\;c$ that saves us from defining the concept of zero in recursive function theory as

$\mathrm{zero} : \mathbf 1$
$\mathrm{zero}\;x = 0$

-- it may be superfluous: if we need functions that throw away (some or all of) their arguments and return a constant, then we can combine them from $K^m_n$, $s$ and $0$, if we allow concepts like $K^0_m$.
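
The proposed experiment -- write a Haskell program and see which things have to be defined -- can be started without any type arithmetic. A minimal sketch (all names invented here) in which arity is deliberately not tracked by the types: every function acts on a list of arguments. In this representation the zero-arity case $K^0_n\;c$ needs no generalization at all:

    import Numeric.Natural (Natural)

    -- Untyped-arity sketch: an n-ary function is represented as a
    -- function on argument lists (arity is not checked by the types).
    type Fn = [Natural] -> Natural

    -- compose f [g_0, ..., g_{m-1}] plays the role of K^m_n.
    compose :: Fn -> [Fn] -> Fn
    compose f gs xs = f [ g xs | g <- gs ]

    -- The 0-ary constant 0: it ignores its (empty) argument list.
    zeroAry :: Fn
    zeroAry _ = 0

    -- zero = K^0_1 0, i.e. the m = 0 case of composition -- no
    -- separate initial function is needed.
    zero :: Fn
    zero = compose zeroAry []

    -- zero [42] == 0

Of course this gives up exactly the type safety discussed under Notations: applying zero to two arguments is not rejected.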

Successor function

$s : \mathbf 1$
$s = \lambda x .\; x + 1$

Projection functions

For all $0 \le i < m$:

$U^m_i : \mathbf m$
$U^m_i\;x_0 \dots x_i \dots x_{m-1} = x_i$
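
For completeness, the remaining initial functions in the same list-based sketch (the alias Fn is repeated so the fragment stands alone; the arity superscript $m$ of $U^m_i$ is not tracked):

    import Numeric.Natural (Natural)

    type Fn = [Natural] -> Natural   -- as in the earlier sketch

    s :: Fn                          -- successor
    s [x] = x + 1
    s _   = error "s: arity 1 expected"

    u :: Int -> Fn                   -- u i plays the role of U_i^m
    u i xs = xs !! i

    -- u 1 [10, 20, 30] == 20, i.e. U_1^3 applied to 10, 20, 30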

Operations

Composition

$\dot K^m_n : \mathbf m \times \mathbf n^m \to \mathbf n$
$\dot K^m_n\;f\;\langle g_0, \dots, g_{m-1} \rangle\;x_0 \dots x_{n-1} = f\;(g_0\;x_0 \dots x_{n-1}) \dots (g_{m-1}\;x_0 \dots x_{n-1})$

This resembles the $\Phi^m_n$ combinator of Combinatory logic (as described in [HasFeyCr:CombLog1, 171]). If we prefer to avoid the notion of the nested tuple, and use a more homogeneous style (somewhat resembling currying):

$K^m_n : \mathbf m \times \underbrace{\mathbf n \times \cdots \times \mathbf n}_{m} \to \mathbf n$

Let the underbrace not mislead us -- it does not denote any bracketing (tupling).

$K^m_n\;f\;g_0 \dots g_{m-1}\;x_0 \dots x_{n-1} = f\;(g_0\;x_0 \dots x_{n-1}) \dots (g_{m-1}\;x_0 \dots x_{n-1})$

reminding us of

$K^m_n\;f\;g_0 \dots g_{m-1}\;x_0 \dots x_{n-1} = \Phi^m_n\;f\;g_0 \dots g_{m-1}\;x_0 \dots x_{n-1}$
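
As a hedged aside, the smallest instance of this resemblance can be written down directly in Haskell: for $m = 2$, $n = 1$ the combinator $\Phi$ is just the following (the general $\Phi^m_n$ would again call for arity-indexed types):

    -- phi f g h x = f (g x) (h x), i.e. Curry & Feys' Phi for m = 2, n = 1
    phi :: (b -> c -> d) -> (a -> b) -> (a -> c) -> a -> d
    phi f g h x = f (g x) (h x)

    -- e.g. phi (+) (* 2) (+ 1) 10 == 31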

Primitive recursion

$R_m : \mathbf m \times \mathbf{m+2} \to \mathbf{m+1}$
$R_m\;f\;h = g$, where
$g\;x_0 \dots x_{m-1}\;0 = f\;x_0 \dots x_{m-1}$
$g\;x_0 \dots x_{m-1}\;(s\;y) = h\;x_0 \dots x_{m-1}\;y\;(g\;x_0 \dots x_{m-1}\;y)$

The last equation resembles the $S_n$ combinator of Combinatory logic (as described in [HasFeyCr:CombLog1, 169]):

$g\;x_0 \dots x_{m-1}\;(s\;y) = S_{m+1}\;h\;g\;x_0 \dots x_{m-1}\;y$
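
Continuing the untyped-arity sketch, $R_m$ can be implemented, and addition serves as a worked example: $\mathrm{add} = R_1\;U^1_0\;h$ with $h\;x\;y\;acc = s\;acc$, i.e. $h = K^1_3\;s\;U^3_2$ (written out directly below):

    import Numeric.Natural (Natural)

    type Fn = [Natural] -> Natural   -- as in the earlier sketches

    -- r f h plays the role of R_m f h; the recursion argument is last.
    r :: Fn -> Fn -> Fn
    r f h args = go (init args) (last args)
      where
        go xs 0 = f xs                              -- g x0 .. x(m-1) 0
        go xs y = h (xs ++ [y - 1, go xs (y - 1)])  -- g x0 .. x(m-1) (s y)

    -- Addition: add = R_1 U^1_0 (K^1_3 s U^3_2), written out directly.
    add :: Fn
    add = r (\xs -> head xs) (\args -> last args + 1)

    -- add [3, 4] == 7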

General recursive functions

Everything seen above, and the new concepts:

Type system

$\widehat{\mathbf m} = \{\, f : \mathbf{m+1} \mid f \text{ is special} \,\}$

See the definition of being special in [Mon:MathLog, 45]. This property ensures that minimalization does not lead us out of the world of total functions. Its definition is a rather straightforward formalization of this expectation:

$\mathrm{special}_m\;f \;\iff\; \forall x_0, \dots, x_{m-1} \in \mathbb N \;\; \exists y \in \mathbb N \;\; f\;x_0 \dots x_{m-1}\;y = 0$

It resembles the concept of inverse -- more exactly, its existence part.


Operations

Minimalization

$\mu_m : \widehat{\mathbf m} \to \mathbf m$
$\mu_m\;f\;x_0 \dots x_{m-1} = \min \{\, y \in \mathbb N \mid f\;x_0 \dots x_{m-1}\;y = 0 \,\}$

Minimalization does not lead us out of the world of total functions if we use it only for special functions -- the property of being special is defined exactly for this purpose [Mon:MathLog, 45]. As we can see, minimalization is a concept somehow resembling the concept of inverse.

What about the existence of the required minimum value of the set? A necessary and sufficient condition for this is that the set be non-empty. And this is equivalent to the statement

$\forall x_0, \dots, x_{m-1} \in \mathbb N \;\; \exists y \in \mathbb N \;\; f\;x_0 \dots x_{m-1}\;y = 0$
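
In the same sketch, minimalization is an unbounded search. It terminates exactly when the set above is non-empty -- which is what being special guarantees -- and diverges otherwise, illustrating why the type system cannot enforce totality here:

    import Numeric.Natural (Natural)

    type Fn = [Natural] -> Natural   -- as in the earlier sketches

    -- mu f plays the role of mu_m f: search for the least y with
    -- f x0 .. x(m-1) y = 0. It diverges if f is not special --
    -- nothing in the types rules this out.
    mu :: Fn -> Fn
    mu f xs = head [ y | y <- [0 ..], f (xs ++ [y]) == 0 ]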

Partial recursive functions

Everything seen above, but new constructs are provided, too.

Type system

$\overline{\mathbf{n+1}} = \underbrace{\mathbb N \times \cdots \times \mathbb N}_{n+1} \rightharpoonup \mathbb N$

Question: is there any sense in defining $\overline{\mathbf 0}$ in another way than simply $\overline{\mathbf 0} = \mathbf 0 = \mathbb N$? A partial constant? Is

$\overline{\mathbf 0} = \mathbb N$

or

$\overline{\mathbf 0} = \mathbb N \cup \{\bot\}$?

Operations

$\overline{\dot K}{}^m_n : \overline{\mathbf m} \times \overline{\mathbf n}^m \to \overline{\mathbf n}$
$\overline K{}^m_n : \overline{\mathbf m} \times \underbrace{\overline{\mathbf n} \times \cdots \times \overline{\mathbf n}}_{m} \to \overline{\mathbf n}$
$\overline R_m : \overline{\mathbf m} \times \overline{\mathbf{m+2}} \to \overline{\mathbf{m+1}}$
$\overline \mu_m : \overline{\mathbf{m+1}} \to \overline{\mathbf m}$

Their definitions are straightforward extensions of the corresponding total function based definitions.

Remark: these operations take partial functions as arguments, but they are total operations themselves in the sense that they always yield a result -- at worst an empty function (as an ultimate partial function).
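
One possible way to make this remark concrete in Haskell (an assumption of this sketch, not the only choice -- representing undefinedness by non-termination is the other): give partial functions an explicit Maybe codomain. The barred composition is then a total Haskell function whose results may be undefined:

    import Numeric.Natural (Natural)

    -- Explicitly partial n-ary functions (one possible representation).
    type PFn = [Natural] -> Maybe Natural

    -- Barred composition: defined iff every g_i, and then f, is defined.
    composeP :: PFn -> [PFn] -> PFn
    composeP f gs xs = mapM ($ xs) gs >>= f

    -- The everywhere-undefined ("empty") function is still a legal result:
    empty :: PFn
    empty _ = Nothing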

Bibliography

[HasFeyCr:CombLog1]
Curry, Haskell B.; Feys, Robert; Craig, William: Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958.
[Mon:MathLog]
Monk, J. Donald: Mathematical Logic. Springer-Verlag, New York, Heidelberg, Berlin, 1976.