Difference between revisions of "Combinatory logic"

From HaskellWiki
 

Revision as of 14:22, 3 March 2006

Portals and other large-scale resources

Implementing CL

  • Talks about it at haskell-cafe (http://www.mail-archive.com/haskell-cafe@haskell.org/msg12705.html) <math>\subset</math> haskell-cafe (http://comments.gmane.org/gmane.comp.lang.haskell.cafe/11408)
  • Lots of interpreters at John's Lambda Calculus and Combinatory Logic Playground (http://homepages.cwi.nl/~tromp/cl/cl.html)
  • CL++ (http://physis.fx3.hu/), a lazily evaluating combinatory logic interpreter with some computer algebra services: e.g. it can answer the query <math>\mathbf+\;\mathbf2\;\mathbf3</math> with <math>\mathbf5</math> instead of a huge amount of parentheses and <math>\mathbf K</math>, <math>\mathbf S</math> combinators. Unfortunately I have not written it directly in English, so all documentation, source code and libraries are in Hungarian. I want to rewrite it using more advanced Haskell programming concepts (e.g. monads or attribute grammars), and directly in English.
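To give a feel for what such interpreters do, here is a minimal sketch (my own illustration, not the CL++ interpreter mentioned above) of a normal-order reducer for <math>\mathbf S</math>- and <math>\mathbf K</math>-terms in Haskell:

```haskell
-- Terms of pure SK combinatory logic: S, K, and application.
data Term = S | K | App Term Term deriving (Eq, Show)

-- One reduction step, preferring the leftmost-outermost redex.
step :: Term -> Maybe Term
step (App (App K x) _)         = Just x                      -- K x y    ->  x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
                                                             -- S f g x  ->  f x (g x)
step (App f x) = case step f of
  Just f' -> Just (App f' x)                                 -- reduce the head first
  Nothing -> App f <$> step x                                -- else reduce the argument
step _ = Nothing

-- Reduce to normal form (may diverge on terms that have none).
nf :: Term -> Term
nf t = maybe t nf (step t)

-- Example: the identity combinator I = S K K.
i :: Term
i = App (App S K) K
```

For instance, `nf (App i K)` reduces `I K` back to `K` via the S- and K-rules.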

Programming in CL

I think many thoughts from John Hughes' Why Functional Programming Matters can be applied to programming in Combinatory Logic. And almost all concepts used in the Haskell world (catamorphisms etc.) help us a lot here too. Combinatory logic is a powerful and concise programming language. I wonder how functional logic programming could be done using the concepts of Illative combinatory logic, too.

Datatypes

Continuation passing for polynomial datatypes

Direct product

Let us begin with the notion of the ordered pair and denote it by <math>\mathbf{\Diamond_2}</math>. We know this construct well from defining operations for booleans

  • <math>\mathbf{true} \equiv \mathbf K</math>
  • <math>\mathbf{false} \equiv \mathbf{K_*}</math>
  • <math>\mathbf{not} \equiv \mathbf{\Diamond_2}\;\mathbf{false}\;\mathbf{true}</math>

and Church numerals. I think, in general, when defining datatypes in a continuation-passing way (e.g. Maybe or direct sum), the operations on the so-defined datatypes often turn out to be well-definable by some <math>\mathbf{\Diamond_n}</math>.

We define it with

  • <math>\mathbf{\Diamond_2} \equiv \lambda\;x\;y\;f\;.\;f\;x\;y</math>

in <math>\lambda</math>-calculus and

  • <math>\mathbf{\Diamond_2} \equiv \mathbf{C_{(1)}}\;\mathbf{C_*}</math>

in combinatory logic.

A nice generalization scheme:

  • as the <math>\langle\dots\rangle</math> construct can be generalized to any natural number <math>n</math> (the concept of <math>n</math>-tuple, see Barendregt's <math>\lambda</math> Calculus)
  • and in this generalized scheme <math>\mathbf I</math> corresponds to the 0 case, <math>\mathbf{C_*}</math> to the 1 case, and the ordered pair construct <math>\mathbf{\Diamond_2}</math> to the 2 case, as though defining
    • <math>\mathbf{\Diamond_0} \equiv \mathbf I</math>
    • <math>\mathbf{\Diamond_1} \equiv \mathbf{C_*}</math>

so we can write the definition

  • <math>\mathbf{\Diamond_2} \equiv \mathbf{C_{(1)}}\;\mathbf{C_*}</math>

or, equivalently,

  • <math>\mathbf{\Diamond_2} \equiv \mathbf C \cdot \mathbf{C_*}</math>

in a more interesting way:

  • <math>\mathbf{\Diamond_2} \equiv \mathbf C\cdot\mathbf{\Diamond_1}</math>

Is this generalizable? I do not know. I know an analogy in the case of <math>\mathbf{left}</math>, <math>\mathbf{right}</math>, <math>\mathbf{just}</math>, <math>\mathbf{nothing}</math>.
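The continuation-passing pair and boolean encodings above can be sketched directly in Haskell (a hedged illustration; the primed names are mine, not from any library):

```haskell
-- The ordered pair <x,y> is the function that feeds x and y to any
-- visitor f (Diamond_2 in the text).
pair2 :: a -> b -> (a -> b -> c) -> c
pair2 x y f = f x y

-- The booleans are the two projections: K and K* (const and flip const).
true', false' :: a -> a -> a
true'  t _ = t
false' _ f = f

-- not = Diamond_2 false true: a pair of the two booleans, so applying
-- a boolean to it selects the opposite one.
not' :: ((a -> a -> a) -> (a -> a -> a) -> c) -> c
not' = pair2 false' true'
```

Here `not' true'` selects the second component and yields `false'`, exactly as the reduction in the text suggests.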

Direct sum

The notion of ordered pair mentioned above really enables us to deal with direct products. What about its dual concept? How can we make direct sums in Combinatory Logic? And after we have implemented it, how can we see that it is really the dual concept of the direct product?

A nice argument described on David Madore's Unlambda page (http://www.madore.org/~david/programs/unlambda/) gives us a continuation-passing-style-like solution. We expect reductions like

  • <math>\mathbf{left}\;x \to \lambda\;f\;g\;.\;f\;x</math>
  • <math>\mathbf{right}\;x \to \lambda\;f\;g\;.\;g\;x</math>

so we define

  • <math>\mathbf{left} \equiv \lambda\;x\;f\;g\;.\;f\;x</math>
  • <math>\mathbf{right} \equiv \lambda\;x\;f\;g\;.\;g\;x</math>

now we translate it from <math>\lambda</math>-calculus into combinatory logic:

  • <math>\mathbf{left} \equiv \mathbf{K_{(2)}}\;\mathbf{C_*}</math>
  • <math>\mathbf{right} \equiv \mathbf{K_{(1)}}\;\mathbf{C_*}</math>

Of course, we can recognize Haskell's Either (Left, Right).
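The correspondence with Either can be sketched in Haskell (my own illustration; `left'`, `right'` and the helper `toEither` are illustrative names):

```haskell
-- Church-style direct sum: left x and right x hold x and wait for the
-- two continuations, just like the reductions in the text.
left'  :: a -> (a -> c) -> (b -> c) -> c
left'  x f _ = f x

right' :: b -> (a -> c) -> (b -> c) -> c
right' x _ g = g x

-- Feeding the constructors of Either as the two continuations recovers
-- the built-in datatype, which makes the correspondence concrete.
toEither :: ((a -> Either a b) -> (b -> Either a b) -> Either a b) -> Either a b
toEither s = s Left Right
```

So `toEither (left' x)` is `Left x` and `toEither (right' y)` is `Right y`.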

Maybe

Let us remember Haskell's maybe:

maybe :: a' -> (a -> a') -> Maybe a -> a'
maybe n j Nothing = n
maybe n j (Just x) = j x

thinking of

  • n as nothing-continuation
  • j as just-continuation

In a continuation-passing-style approach, if we want to implement something like the Maybe construct in <math>\lambda</math>-calculus, then we may expect the following reductions:

  • <math>\mathbf{nothing} \equiv \lambda\;n\;j\;.\;n</math>
  • <math>\mathbf{just}\;x\;\to\;\lambda\;n\;j\;.\;j\;x</math>

we know both of them well; one is just <math>\mathbf K</math>, and we remember the other from the direct sum:

  • <math>\mathbf{nothing} \equiv \mathbf K</math>
  • <math>\mathbf{just} \equiv \mathbf{right}</math>

thus their definitions are

  • <math>\mathbf{nothing} \equiv \mathbf K</math>
  • <math>\mathbf{just} \equiv \mathbf{K_{(1)}}\;\mathbf{C_*}</math>

where both <math>\mathbf{just}</math> and <math>\mathbf{right}</math> have a common definition.
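The same encoding can be sketched in Haskell (illustrative names of my own):

```haskell
-- Continuation-passing Maybe: nothing returns the nothing-continuation
-- (it is K), just x feeds x to the just-continuation (it is right).
nothing' :: c -> (a -> c) -> c
nothing' n _ = n

just' :: a -> c -> (a -> c) -> c
just' x _ j = j x

-- Supplying Nothing and Just as the two continuations recovers the
-- built-in Maybe, making the correspondence visible.
fromCPS :: (Maybe a -> (a -> Maybe a) -> Maybe a) -> Maybe a
fromCPS p = p Nothing Just
```

So `fromCPS (just' 3)` is `Just 3`, and `fromCPS nothing'` is `Nothing`.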

Catamorphisms for recursive datatypes

List

Let us define the concept of list by its catamorphism (see Haskell's foldr): a list (each concrete list) is a function taking two arguments

  • a two-parameter function argument (cons-continuation)
  • a zero-parameter function argument (nil-continuation)

and returns a value coming from a term consisting of applying cons-continuations and nil-continuations in the same shape as the corresponding list. E.g. having defined

  • <math>\mathbf{oneTwoThree} \equiv \mathbf{cons}\;\mathbf1\;\left( \mathbf{cons}\;\mathbf2\;\left(\mathbf{cons}\;\mathbf3\;\mathbf{nil}\right) \right)</math>

the expression

  • <math>\mathbf{oneTwoThree}\;\mathbf+\;\mathbf0</math>

reduces to

  • <math>\mathbf+\;\mathbf1\;(\mathbf+\;\mathbf2\;(\mathbf+\;\mathbf3\;\mathbf0))</math>

But how to define <math>\mathbf{cons}</math> and <math>\mathbf{nil}</math>? In <math>\lambda</math>-calculus, we should like to see the following reductions:

  • <math>\mathbf{nil}\;c\;n\;\to\;n</math>
  • <math>\mathbf{cons}\;h\;t\; \to \;\lambda\;c\;n\;.\;c\;h\;\left(t\;c\;n\right)</math>

Let us think of the variables as <math>h</math> denoting head, <math>t</math> denoting tail, <math>c</math> denoting cons-continuation, and <math>n</math> denoting nil-continuation.

Thus, we could achieve this goal with the following definitions:

  • <math>\mathbf{nil} \equiv \lambda\;c\;n\;.\;n</math>
  • <math>\mathbf{cons} \equiv \lambda\;h\;t\;c\;n\;.\;c\;h\;\left(t\;c\;n\right)</math>

Using the formulating combinators described in Haskell B. Curry's Combinatory Logic I, we can translate these definitions into combinatory logic without any pain:

  • <math>\mathbf{nil} \equiv \mathbf{K_*}</math>
  • <math>\mathbf{cons} \equiv \mathbf B \left( \mathbf{\Phi}\;\mathbf B \right) \mathbf{C_*}</math>

Of course we could use the two parameters in the opposite order, but I am not sure yet that it would provide an easier way.
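The catamorphic list encoding above can be sketched in Haskell without any extensions (illustrative names of my own):

```haskell
-- A list "is" its own foldr: a function taking a cons-continuation c
-- and a nil-continuation n.
nil' :: (a -> r -> r) -> r -> r
nil' _ n = n

cons' :: a -> ((a -> r -> r) -> r -> r) -> (a -> r -> r) -> r -> r
cons' h t c n = c h (t c n)

-- The oneTwoThree example from the text.
oneTwoThree :: (Int -> r -> r) -> r -> r
oneTwoThree = cons' 1 (cons' 2 (cons' 3 nil'))
```

Applying it as in the text, `oneTwoThree (+) 0` reduces to `1 + (2 + (3 + 0))`, i.e. `6`; applying `oneTwoThree (:) []` recovers the ordinary Haskell list `[1,2,3]`.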

A little practice: let us define concat. In Haskell, we can do that by

concat = foldr (++) []

which corresponds in combinatory logic to reducing

  • <math>\mathbf{concat}\;l \equiv l\;\mathbf{append}\;\mathbf{nil}</math>

Let us use the ordered pair (direct product) construct:

  • <math>\mathbf{concat} \equiv \mathbf{\Diamond_2}\;\mathbf{append}\;\mathbf{nil}</math>

and if I use that nasty <math>\mathbf{centred}</math> (see later)

  • <math>\mathbf{concat} \equiv \mathbf{centred}\;\mathbf{append}</math>

Monads in Combinatory Logic?

Concrete monads

Maybe as a monad

return

Implementing the return monadic method for the Maybe monad is rather straightforward, both in Haskell and CL:

instance Monad Maybe where
        return = Just
        ...

  • <math>\mathbf{maybe\!\!-\!\!return} \equiv \mathbf{just}</math>
map

Haskell:

instance Functor Maybe where
        fmap f = maybe Nothing (Just . f)

<math>\lambda</math>-calculus. Expected reductions:

  • <math>\mathbf{maybe\!\!-\!\!map}\;f\;p\;\to\;p\;\mathbf{nothing}\;\left(\mathbf{just_{(1)}}\;f\right)</math>

Definition:

  • <math>\mathbf{maybe\!\!-\!\!map}\equiv \lambda\;f\;p\;.\;p\;\mathbf{nothing}\;\left(\mathbf{just_{(1)}}\;f\right)</math>

Combinatory logic: we expect the same reduction here too

  • <math>\mathbf{maybe\!\!-\!\!map}\;f\;p\;\to\;p\;\mathbf{nothing}\;\left(\mathbf{just_{(1)}}\;f\right)</math>

let us get rid of one parameter:

  • <math>\mathbf{maybe\!\!-\!\!map}\;f\;\to\;\mathbf{\Diamond_2}\;\mathbf{nothing}\;\left(\mathbf{just_{(1)}}\;f\right)</math>

now we have the definition:

  • <math>\mathbf{maybe\!\!-\!\!map}\equiv \mathbf{\Diamond_2}\;\mathbf{nothing}\;\cdot\;\mathbf{just_{(1)}}</math>

bind

Haskell:

instance Monad Maybe where
        p >>= f = maybe Nothing f p

<math>\lambda</math>-calculus: we expect

  • <math>\mathbf{maybe\!\!-\!\!=\!\!<\!\!<}\;f\;p\;\to\;p\;\mathbf{nothing}\;f</math>

achieved by the definition

  • <math>\mathbf{maybe\!\!-\!\!=\!\!<\!\!<} \equiv \lambda\;f\;p\;.\;p\;\mathbf{nothing}\;f</math>

In combinatory logic the above expected reduction

  • <math>\mathbf{maybe\!\!-\!\!=\!\!<\!\!<}\;f\;p\;\to\;p\;\mathbf{nothing}\;f</math>

getting rid of the outermost parameter

  • <math>\mathbf{maybe\!\!-\!\!=\!\!<\!\!<}\;f\;\to\;\mathbf{\Diamond_2}\;\mathbf{nothing}\;f</math>

yields the definition

  • <math>\mathbf{maybe\!\!-\!\!=\!\!<\!\!<} \equiv \mathbf{\Diamond_2}\;\mathbf{nothing}</math>

and of course

  • <math>\mathbf{maybe\!\!-\!\!\!>\!\!>\!\!=} \equiv \mathbf C\;\mathbf{maybe\!\!-\!\!=\!\!<\!\!<}</math>

But the other way (starting with a better-chosen parameter order) is much better:

  • <math>\mathbf{maybe\!\!-\!\!\!>\!\!>\!\!=}\;p\;f\;\to\;p\;\mathbf{nothing}\;f</math>
  • <math>\mathbf{maybe\!\!-\!\!\!>\!\!>\!\!=}\;p\;\to\;p\;\mathbf{nothing}</math>

yielding the much simpler and more efficient definition:

  • <math>\mathbf{maybe\!\!-\!\!\!>\!\!>\!\!=} \equiv \mathbf{C_*}\;\mathbf{nothing}</math>

We already know that <math>\mathbf{C_*}</math> can be seen as a member of the scheme of tuples: <math>\mathbf{\Diamond_n}</math> for the <math>n=1</math> case. As the tuple construction is a usual guest at things like this (we shall meet it at lists and at other maybe-operations like <math>\mathbf{maybe\!\!-\!\!join}</math>), let us express the above definition with <math>\mathbf{C_*}</math> denoted as <math>\mathbf{\Diamond_1}</math>:

  • <math>\mathbf{maybe\!\!-\!\!\!>\!\!>\!\!=} \equiv \mathbf{\Diamond_1}\;\mathbf{nothing}</math>

hoping that this will enable us some interesting generalization in the future.
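The deforested maybe-bind of this derivation can be sketched in Haskell (my own illustration; the built-in `maybe` eliminator plays the role of the CPS value applied to its continuations):

```haskell
-- bind = "apply the Maybe value, as its own eliminator, to the
-- nothing-continuation and the given function" -- C* nothing in CL.
maybeBind :: Maybe a -> (a -> Maybe b) -> Maybe b
maybeBind p f = maybe Nothing f p
```

For example, `maybeBind (Just 2) (\x -> Just (x + 1))` gives `Just 3`, while any `Nothing` input short-circuits to `Nothing`.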

But why have we not made a braver generalization and expressed monadic bind from monadic join and map? Later, in the list monad, we shall see that it may be better to avoid this for the sake of deforestation. Here a perhaps similar problem appears: the problem of a superfluous <math>\mathbf I</math>.

join

  • <math>\mathbf{maybe\!\!-\!\!join} \equiv \mathbf{\Diamond_2}\;\mathbf{nothing}\;\mathbf I</math>

We should think of changing the architecture if we suspect that we could avoid <math>\mathbf I</math> and solve the problem with a simpler construct.


The list as a monad

Let us think of our list-operations as implementing monadic methods of the list monad. We can express this by definitions too, e.g.

we could name

  • <math>\mathbf{list\!\!-\!\!join} \equiv \mathbf{concat}</math>

Now let us see mapping a list, concatenating a list, binding a list. Mapping and binding have a common property: yielding nil for nil. I shall say these operations are centred: their definition would contain a <math>\mathbf C\;\mathbf{\Diamond_2}\;\mathbf{nil}</math> subexpression. Thus I shall give a name to this subexpression:

  • <math>\mathbf{centred} \equiv \mathbf C\;\mathbf{\Diamond_2}\;\mathbf{nil}</math>

Now let us define map and bind for lists:

  • <math>\mathbf{list\!\!-\!\!map} \equiv \mathbf{centred}_{(1)}\;\mathbf{cons}_{(1)}</math>
  • <math>\mathbf{list\!\!-\!\!=\!\!<\!\!<} \equiv \mathbf{centred}_{(1)}\;\mathbf{append}_{(1)}</math>

now we see it was worth defining a common <math>\mathbf{centred}</math>. But to tell the truth, it may be a trap. <math>\mathbf{centred}</math> breaks a symmetry: we should always define the cons part and the nil part of the foldr construct on the same level, always together. Modularization should point in this direction, not run forward into the T-street of <math>\mathbf{centred}</math>.

Another remark: of course we can get the monadic bind for lists

  • <math>\mathbf{list\!\!-\!\!\!>\!\!>\!\!=} \equiv \mathbf{C}\;\mathbf{list\!\!-\!\!=\!\!<\!\!<}</math>

But we used <math>\mathbf{append}</math> here. How do we define it? It is surprisingly simple. Let us think how we would define it in Haskell by foldr, if it were not already defined as ++ in the Prelude. In defining

(++) list1 list2

we can do it by foldr:

(++) [] list2 = list2
(++) (a : as) list2 = a : (++) as list2

thus

(++) list1 list2 = foldr (:) list2 list1

let us see how we should reduce its corresponding expression in Combinatory Logic: as a list is its own foldr, we expect

  • <math>\mathbf{append}\;l_1\;l_2\;\to\;l_1\;\mathbf{cons}\;l_2</math>

thus

  • <math>\mathbf{append} \equiv \mathbf{C_*}\;\mathbf{cons}</math>

Thus, we have defined monadic bind for lists. I shall call this the deforested bind for lists. Of course, we could define it in another way too: by concat and map, which corresponds to defining monadic bind from monadic map and monadic join. But I think that way would force my CL-interpreter to manage temporary lists, so I rather gave the deforested definition.
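The difference between the two definitions can be sketched in Haskell (my own illustration; `listBind` is an illustrative name):

```haskell
-- Deforested list bind: fold append over the mapped pieces directly,
-- never materializing the intermediate list of lists that
-- concat (map f xs) -- i.e. join . map -- would build.
listBind :: [a] -> (a -> [b]) -> [b]
listBind xs f = foldr (\x acc -> f x ++ acc) [] xs
```

Both `listBind xs f` and `concat (map f xs)` give the same result; the deforested version simply skips the temporary nested list.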

Defining the other monadic operation: return for lists is easy:

instance Monad [] where
        return = (: [])

in Haskell -- we know,

(: [])

translates to

return = flip (:) []

so

  • <math>\mathbf{list\!\!-\!\!return} \equiv \mathbf C\;\mathbf{cons}\;\mathbf{nil}</math>

How to AOP with monads in Combinatory Logic?

We have defined monadic list in CL. Of course we can make monadic Maybe, binary tree, Error monad with direct sum constructs...

But separation of concerns by monads is more than having a bunch of special monads. It requires other possibilities too: e.g. being able to use monads generically, so that they can become any concrete monad.

Of course my simple CL interpreter does not know anything about type classes or overloading. But there is a rather restricted and static possibility provided by the concept of definition itself:

  • <math>\mathbf{work} \equiv \mathbf{A\!\!-\!\!\!>\!\!>\!\!=}\;\mathbf{subwork\!\!-\!\!1}\;\mathbf{parametrized\!\!-\!\!subwork\!\!-\!\!2}</math>

and if later we want to change the binding mode named <math>\mathbf A</math>, e.g. from a failure-handling Maybe-like one to a more general indeterminism-handling list-like one, then we can do that simply by replacing the definition

  • <math>\mathbf{A\!\!-\!\!\!>\!\!>\!\!=} \equiv \mathbf{maybe\!\!-\!\!\!>\!\!>\!\!=}</math>

with the definition

  • <math>\mathbf{A\!\!-\!\!\!>\!\!>\!\!=} \equiv \mathbf{list\!\!-\!\!\!>\!\!>\!\!=}</math>

Illative Combinatory Logic

Systems of Illative Combinatory Logic complete for first-order propositional and predicate calculus (http://citeseer.ist.psu.edu/246934.html) by Henk Barendregt, Martin Bunder, Wil Dekkers.

I think combinator <math>\mathbf G</math> can be thought of as something analogous to Dependent types: it seems to me that the dependent type construct

<math>\forall x : S \Rightarrow T</math>

of Epigram corresponds to

<math>\mathbf G\;S\;(\lambda x . T)</math>

in Illative Combinatory Logic. I think, e.g., the following should correspond to each other:

  • <math>\mathbf{realNullvector} :\;\;\;\forall n: \mathbf{Nat} \Rightarrow \mathbf{RealVector}\;n</math>
  • <math>\mathbf G\;\,\mathbf{Nat}\;\,\mathbf{RealVector}\;\,\mathbf{realNullvector}</math>


My dream is making something in Illative Combinatory Logic. Maybe it could be a theoretical base for a functional logic language?