 
== Introduction ==
 
== Introduction ==
   
[http://planetmath.org/encyclopedia/RecursiveFunction.html PlanetMath article]
+
* [http://planetmath.org/encyclopedia/RecursiveFunction.html PlanetMath article]
  +
* Dr Matt Fairtlough's [http://www.dcs.shef.ac.uk/~matt/teaching/04/com2030/lectures/tomlect13.pdf Introduction to recursive function theory] among his [http://www.dcs.shef.ac.uk/~matt/teaching/04/com2030/ lecture notes]
   

== Designed languages ==

* Dr Matt Fairtlough's [http://www.dcs.shef.ac.uk/~matt/teaching/04/com2030/lectures/tomlect16.pdf Minimal Programming Language (MIN)] is not exactly a recursive function theory language, but it, too, is based on natural numbers, and its description shows that it is equivalent in power to the partial recursive functions.
   

== Implementations ==

An implementation [http://web.archive.org/web/20060713051445/http://www.dcs.shef.ac.uk/~matt/teaching/04/com2030/lectures/Recursion.hs in Haskell] can be found, among other implementations (e.g. one written in Java), in Dr Matt Fairtlough's [http://web.archive.org/web/20060713073121/http://www.dcs.shef.ac.uk/~matt/teaching/04/com2030/ lecture notes] (see the bottom of the page).

== Motivations ==

Well-known concepts are taken from [Mon:MathLog], but several new notations (only notations, not concepts) are introduced to reflect all the concepts described in [Mon:MathLog], and some simplifications are made (by allowing zero-arity generalizations). These are plans towards formalizations that may later allow us to incarnate the main concepts of recursive function theory in a ''toy'' programming language, to play with, so that some interesting concepts can be taught in an entertaining way.

This page is only loosely related to Haskell (and to functional programming). It seems to me that programming in recursive function theory is rather another world (e.g. currying is missing, too) -- although the lack of variables (even of formal parameters) can yield a feeling resembling [[pointfree]] style programming, or even [[combinatory logic]].

But despite its weak (direct) relatedness to functional programming, maybe this page can be useful for someone in the future,
* e.g. when writing on quines (self-printing programs or self-representing formulas): David Madore's [http://www.madore.org/~david/computers/quine.html Quines (self-replicating programs)] page uses recursive function theory to explain the theoretical roots of quines (e.g. the [http://www.madore.org/~david/computers/quine.html#sec_fp fixed point theorem]),
* or when writing on other general concepts of [[computer science]].

A further, slight connection of this topic to Haskell: a Haskell implementation of the mentioned toy programming language may use
* tricks with types and [[type arithmetic]],
* or metaprogramming concepts, at worst preprocessing steps,
because type-safe implementations of <math>\underline\mathbf\dot K^m_n</math> and <math>\underline\mathbf K^m_n</math> do not look straightforward to me.

== Notations ==

;<math>\mathbb N</math>
:the set of natural numbers, including 0
;''Type system''
:I do not use this term in a precise way here; see the following items for explanation. The headline ''Type system'' may suggest that the type system of the planned recursive function theory language implementation can prevent the user from applying a partial function to an out-of-domain value -- but I think that type safety in this sense cannot be achieved.
;<math>f:A\to B</math>
:<math>f</math> is a total function from <math>A</math> to <math>B</math>. In an implemented recursive function theory language, it means a function that surely terminates.
;<math>f:A\supset\!\to B</math>
:<math>f</math> is a partial function from <math>A</math> to <math>B</math> (it may be either a total function or a proper partial function). In an implemented recursive function theory language, this information (being partial) may be impossible to capture in the type system; it may simply mean that proper partial functions fail to terminate, without this possibility being reflected in the type system in any way (see the sketch after these notations).
;<math>A\times B</math>
:the Cartesian product of sets, used here to stress that recursive function theory does not know the concept of ''currying'' (several further concepts are needed to achieve something similar to currying; see [http://en.wikipedia.org/wiki/S-m-n_theorem Kleene's <math>s^m_n</math> theorem]). So writing <math>f:A\times B\to C</math> (instead of <math>f:A\to B\to C</math>) is intended to stress the lack of currying (at this level).
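
For illustration, here is a minimal Haskell sketch (the type names are mine, not part of the theory) of the two ways an implementation might model <math>\mathbb N\supset\!\to\mathbb N</math>:

<haskell>
-- Two ways an implementation might model  f : N (partial->) N.
-- The first leaves partiality invisible to the type system: a proper
-- partial function simply fails to terminate out of its domain.
type PartialImplicit = Integer -> Integer

-- The second makes partiality explicit -- but in general it is
-- undecidable whether a partial recursive function is defined on a
-- given argument, so such a 'Maybe' cannot always be computed.
type PartialExplicit = Integer -> Maybe Integer
</haskell>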
   
 
== Primitive recursive functions ==

=== Type system ===

:<math>\left\lfloor0\right\rfloor = \mathbb N</math>
:<math>\begin{matrix}\left\lfloor n + 1\right\rfloor = \underbrace{\mathbb N\times\dots\times\mathbb N}\to\mathbb N\\\;\;\;\;\;\;\;\;n+1\end{matrix}</math>

=== Initial functions ===
   
 
==== Constant ====

:<math>\mathbf 0 : \left\lfloor0\right\rfloor</math>
:<math>\mathbf 0 = 0</math>

This allows us to deal with a concept of zero in recursive function theory. In the literature (in [Mon:MathLog], too) this aim is achieved in another way: a
:<math>\mathbf{zero} : \left\lfloor1\right\rfloor</math>
:<math>\mathbf{zero}\;x = 0</math>
is defined instead. Is this approach unnecessarily complicated? Can we avoid it and use the simpler, more indirect looking
:<math>\mathbf 0 : \left\lfloor0\right\rfloor</math>
:<math>\mathbf 0 = 0</math>
approach?

Are these approaches equivalent? Is the latter (simpler looking) one as powerful as the former one? Could we define a <math>\mathbf{zero}</math> using the <math>\mathbf 0</math> approach? Let us try:
:<math>\mathbf{zero} = \underline\mathbf K^0_1 \mathbf0</math>
(see the definition of <math>\underline\mathbf K^m_n</math> below).

This looks like it works, but it raises new questions: what about generalizing operations (here: composition) to deal with zero-arity cases in an appropriate way? E.g.
:<math>\underline\mathbf\dot K^0_n c\left\langle\right\rangle</math>
:<math>\underline\mathbf K^0_n c</math>
where <math>c : \left\lfloor0\right\rfloor</math> and <math>n \in \mathbb N</math>, can be regarded as <math>n</math>-ary functions throwing all their <math>n</math> arguments away and returning <math>c</math>.

Does it take a generalization to allow such cases, or can they be inferred? A practical approach to settling such questions: let us write a Haskell program which implements (at least partially) recursive function theory; then we can see clearly which things have to be defined and which are consequences of the definitions. I think the <math>\underline\mathbf K^0_n c</math> construct is a rather straightforward thing, as the sketch below suggests.

Why all this can be important: it may be exactly <math>\underline\mathbf K^0_n c</math> that saves us from having to define the concept of zero in recursive function theory as
:<math>\mathbf{zero} : \left\lfloor1\right\rfloor</math>
:<math>\mathbf{zero}\;x = 0</math>
-- this may be superfluous: if we need functions that throw away (some or all of) their arguments and return a constant, then we can combine them from <math>\underline\mathbf K^m_n</math>, <math>\mathbf s</math> and <math>\mathbf 0</math>, if we allow concepts like <math>\underline\mathbf K^0_m</math>.
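
As a very first step of such an experiment, here is a minimal sketch (the names and the representation are my choices, not anything fixed by the theory): arity-indexed total functions as a GADT, with <math>\underline\mathbf K^0_n</math> defined by induction on the arity.

<haskell>
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level arities.
data Nat = Z | S Nat

-- 'Fn n' models the arity type: a 0-ary function is just a number,
-- an (n+1)-ary function consumes one argument and yields an n-ary one.
data Fn (n :: Nat) where
  Val :: Integer -> Fn 'Z
  Fun :: (Integer -> Fn n) -> Fn ('S n)

-- Arity singletons, for defining functions by induction on the arity.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- The zero-arity constant 0.
zero0 :: Fn 'Z
zero0 = Val 0

-- 'k0 n c' models K^0_n c : the n-ary function throwing away all its
-- n arguments and returning the zero-arity constant c.
k0 :: SNat n -> Fn 'Z -> Fn n
k0 SZ     c = c
k0 (SS n) c = Fun (\_ -> k0 n c)

-- zero = K^0_1 0, i.e. the unary  zero x = 0  from the literature:
zeroFn :: Fn ('S 'Z)
zeroFn = k0 (SS SZ) zero0
</haskell>

Here the zero-arity case is no exception at all: <math>\underline\mathbf K^0_n</math> is simply the base case of the induction. Of course, this covers only the easy part; a fully type-safe <math>\underline\mathbf K^m_n</math> is where the type tricks (or preprocessing) mentioned in the Motivations would really be needed.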
   
 
==== Successor function ====

:<math>\mathbf s : \left\lfloor1\right\rfloor</math>
:<math>\mathbf s\;x = x + 1</math>

==== Projection functions ====

For all <math>n \in \mathbb N</math>: the <math>n</math>-ary projection functions, each returning one of its <math>n</math> arguments unchanged.

=== Operations ===

==== Composition ====

:<math>\underline\mathbf\dot K^m_n : \left\lfloor m\right\rfloor \times \left\lfloor n\right\rfloor^m \to \left\lfloor n\right\rfloor</math>

This resembles a combinator of Combinatory Logic (as described in [HasFeyCr:CombLog1, 171]). If we prefer to avoid the notion of the nested tuple and use a more homogeneous style (somewhat resembling currying):

:<math>\begin{matrix}\underline\mathbf K^m_n : \left\lfloor m\right\rfloor \times \underbrace{\left\lfloor n\right\rfloor\times\dots\times\left\lfloor n\right\rfloor} \to \left\lfloor n\right\rfloor\\\;\;\;\;\;\;\;\;\;\;m\end{matrix}</math>

Let the underbrace not mislead us -- it does not mean any bracing.
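
The homogeneous style translates almost literally into Haskell if we give up type safety and represent every function uniformly, as in this untyped sketch (names are mine), which complements the typed one above:

<haskell>
-- Every n-ary function is represented uniformly as a function on its
-- argument list, so arities are not checked by the type system.
type F = [Integer] -> Integer

-- 'kComp f [g0, ..., g(m-1)]' models  K^m_n f g0 ... g(m-1) : apply
-- each inner function to the shared argument list, then apply f.
kComp :: F -> [F] -> F
kComp f gs xs = f [ g xs | g <- gs ]

-- The zero-arity case needs no special treatment here:
-- K^0_1 0 is just the unary zero function.
zeroU :: F
zeroU = kComp (const 0) []   -- zeroU [42] == 0
</haskell>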
 
==== Primitive recursion ====

:<math>\underline\mathbf R^m : \left\lfloor m\right\rfloor \times \left\lfloor m+2\right\rfloor \to \left\lfloor m+1\right\rfloor</math>
:<math>\underline\mathbf R^m f g\;x_0 \dots x_{m-1}\;0 = f x_0 \dots x_{m-1}</math>
:<math>\underline\mathbf R^m f g\;x_0 \dots x_{m-1}\;(\mathbf s\;y) = g\;x_0 \dots x_{m-1}\;y\;\left(\underline\mathbf R^m f g\;x_0 \dots x_{m-1}\;y\right)</math>

The last equation resembles a combinator of Combinatory Logic (as described in [HasFeyCr:CombLog1, 169]).
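
A sketch of how the arity-1 instance of this operation might look in Haskell (names are mine; natural-number arguments are assumed, negative inputs would diverge):

<haskell>
-- R^1, primitive recursion at arity m = 1, following the two
-- defining equations above.
rec1 :: (Integer -> Integer)                        -- f, arity 1
     -> (Integer -> Integer -> Integer -> Integer)  -- g, arity 3
     -> (Integer -> Integer -> Integer)             -- result, arity 2
rec1 f _ x 0 = f x
rec1 f g x y = g x (y - 1) (rec1 f g x (y - 1))

-- Example: addition, with f = id and g x y r = s r :
plus :: Integer -> Integer -> Integer
plus = rec1 id (\_ _ r -> r + 1)   -- plus 2 3 == 5
</haskell>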

== General recursive functions ==

Everything seen above, and the new concepts:

=== Type system ===

See the definition of being ''special'' <nowiki>[Mon:MathLog, 45]</nowiki>. This property ensures that minimalization does not lead us out of the world of total functions. Its definition is a rather straightforward formalization of this expectation:

:<math>\mathbf{special}^m f \equiv \forall x_0, \dots, x_{m-1} \in \mathbb N\;\;\exists y \in \mathbb N\;\;f x_0 \dots x_{m-1} y = 0</math>

It resembles the concept of inverse -- more exactly, its existence part. Below, <math>\widehat m</math> denotes the set of functions <math>f : \left\lfloor m+1\right\rfloor</math> for which <math>\mathbf{special}^m f</math> holds.
   
   

=== Operations ===

==== Minimalization ====

:<math>\underline\mu^m : \widehat m \to \left\lfloor m\right\rfloor</math>
:<math>\underline\mu^m f x_0 \dots x_{m-1} = \min \left\{y\in\mathbb N\;\vert\;f x_0 \dots x_{m-1} y = 0\right\}</math>

Minimalization does not lead us out of the world of total functions if we use it only for ''special'' functions -- the property of being ''special'' is defined exactly for this purpose <nowiki>[Mon:MathLog, 45]</nowiki>. As we can see, minimalization is a concept somehow resembling that of ''inverse''.

The minimum required here must exist: a necessary and sufficient condition for that is that the set is never empty, which is equivalent to the statement
:<math>\forall x_0, \dots, x_{m-1} \in \mathbb N\;\;\exists y \in \mathbb N\;\;f x_0 \dots x_{m-1} y = 0</math>
i.e. to <math>f</math> being ''special''.
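
In a Haskell sketch (names are mine), minimalization is just unbounded search:

<haskell>
-- mu^1, minimalization at m = 1, as unbounded search. For a special
-- f it always terminates; otherwise the resulting function is
-- properly partial: it diverges where no witness y exists, exactly
-- as anticipated in the Notations section.
mu1 :: (Integer -> Integer -> Integer) -> (Integer -> Integer)
mu1 f x = head [ y | y <- [0 ..], f x y == 0 ]
</haskell>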
   
 
== Partial recursive functions ==

Everything seen above, but new constructs are provided, too.

=== Type system ===

:<math>\begin{matrix}\left\lceil n + 1\right\rceil = \underbrace{\mathbb N\times\dots\times\mathbb N}\supset\!\to\mathbb N\\\;\;\;\;\;\;\;\;n+1\end{matrix}</math>

Question: is there any sense in defining <math>\left\lceil0\right\rceil</math> in another way than simply <math>\left\lceil0\right\rceil = \left\lfloor0\right\rfloor = \mathbb N</math>? A partial constant? Is
:<math>\left\lceil0\right\rceil = \mathbb N</math>
or
:<math>\left\lceil0\right\rceil = \mathbb N\;\cup\;\left\{\bot\right\}</math>?
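
In a Haskell model (type names are mine), the question is whether a zero-arity constant is a plain value or a possibly-undefined one:

<haskell>
-- The two candidate readings of the zero-arity partial type:
type C0    = Integer        -- just a natural number
type C0Bot = Maybe Integer  -- a possibly-undefined natural number

-- Only the second reading admits an "empty" zero-arity constant:
undefined0 :: C0Bot
undefined0 = Nothing
</haskell>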
 
=== Operations ===

:<math>\overline\mathbf\dot K^m_n : \left\lceil m\right\rceil \times \left\lceil n\right\rceil^m \to \left\lceil n\right\rceil</math>
:<math>\begin{matrix}\overline\mathbf K^m_n : \left\lceil m\right\rceil \times \underbrace{\left\lceil n\right\rceil\times\dots\times\left\lceil n\right\rceil} \to \left\lceil n\right\rceil\\\;\;\;\;\;\;\;\;\;\;m\end{matrix}</math>
:<math>\overline\mathbf R^m : \left\lceil m\right\rceil \times \left\lceil m+2\right\rceil \to \left\lceil m+1\right\rceil</math>
:<math>\overline\mu^m : \left\lceil m+1\right\rceil \to \left\lceil m\right\rceil</math>

Their definitions are straightforward extensions of the corresponding definitions for total functions.

Remark: these operations take partial functions as arguments, but they are total operations themselves, in the sense that they always yield a result -- at worst the empty function (as the ultimate partial function).
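
This remark can be illustrated with a small Haskell sketch (names are mine):

<haskell>
-- Unary partial functions modeled explicitly, and composition on
-- them. The composition operator itself is total -- it always yields
-- a function -- even though that function may be empty.
type P1 = Integer -> Maybe Integer

compose1 :: P1 -> P1 -> P1
compose1 f g x = g x >>= f   -- undefined wherever g (or then f) is

-- The empty function, the "ultimate" partial function:
emptyFn :: P1
emptyFn _ = Nothing
-- compose1 f emptyFn behaves as emptyFn for every f,
-- yet compose1 itself always returns a result.
</haskell>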
   
 
== Bibliography ==

;[HasFeyCr:CombLog1]
:Curry, Haskell B.; Feys, Robert; Craig, William: ''Combinatory Logic. Volume I''. North-Holland Publishing Company, Amsterdam, 1958.
;[Mon:MathLog]
:Monk, J. Donald: ''Mathematical Logic''. Springer-Verlag, New York, Heidelberg, Berlin, 1976.
