Portals and other large-scale resources
- Talks about it at the haskell-cafe mailing list
- Lots of interpreters at John's Lambda Calculus and Combinatory Logic Playground
- CL++, a lazy-evaluating combinatory logic interpreter with some computer algebra services: e.g. it can answer a query with a readable result instead of a huge tangle of parentheses and K, S combinators. Unfortunately it was not written directly in English, so all documentation, source code and libraries are in Hungarian. I want to rewrite it using more advanced Haskell programming concepts (e.g. monads or attribute grammars), and directly in English.
Programming in CL
I think many thoughts from John Hughes' Why Functional Programming Matters can be applied to programming in Combinatory Logic. And almost all concepts used in the Haskell world (catamorphisms etc.) help us a lot here too. Combinatory logic is a powerful and concise programming language. I wonder how functional logic programming could be done using the concepts of Illative Combinatory Logic, too.
Continuation passing for polynomial datatypes
Let us begin with the notion of the ordered pair, and denote it by ⟨x, y⟩. We know this construct well from defining operations for booleans and Church numerals. I think that, in general, when defining datatypes in a continuation-passing way (e.g. Maybe or direct sum), operations on the so-defined datatypes often turn out to be well-definable in terms of this pairing construct.
We define it with

pair = λx y f. f x y

in λ-calculus and

pair = B C (C I)

in combinatory logic (one possible bracket abstraction).
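The same idea can be sketched in Haskell (the encoding and the names pair, fstP, sndP below are mine, not part of the original text):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-style ordered pair: a pair *is* the function that feeds its
-- two components to a continuation. (Sketch; names are mine.)
type Pair a b = forall r. (a -> b -> r) -> r

pair :: a -> b -> Pair a b
pair x y f = f x y        -- λx y f. f x y

fstP :: Pair a b -> a
fstP p = p (\x _ -> x)    -- select the first component

sndP :: Pair a b -> b
sndP p = p (\_ y -> y)    -- select the second component
```

Here the pair never exists as data, only as behaviour: fstP (pair 1 "two") evaluates to 1.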
A nice generalization scheme:
- the construct can be generalized to any natural number n (the concept of n-tuple, see Barendregt's Lambda Calculus)
- and in this generalized scheme I corresponds to the 0 case, C I to the 1 case, and the ordered pair construct to the 2 case, as though defining

⟨x₁, …, xₙ⟩ = λf. f x₁ … xₙ

so we can write the definition of the ordered pair as the 2 instance of this more general scheme.
Is this generalizable? I do not know, although I know an analogy in a similar case.
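In Haskell the 0, 1, and 2 cases of this scheme can be written down directly (a sketch; the tupleN names are mine):

```haskell
-- Continuation-passing n-tuples for n = 0, 1, 2 (sketch; names are mine).
tuple0 :: r -> r
tuple0 = id            -- the 0 case: the combinator I

tuple1 :: a -> (a -> r) -> r
tuple1 x f = f x       -- the 1 case: the combinator C I

tuple2 :: a -> b -> (a -> b -> r) -> r
tuple2 x y f = f x y   -- the 2 case: the ordered pair
```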
The notion of ordered pair mentioned above really enables us to deal with direct products. What about its dual concept? How do we make direct sums in Combinatory Logic? And after we have implemented them, how can we see that they are really dual to the direct product?
A nice argument described on David Madore's Unlambda page gives us a continuation-passing style solution. We expect reductions like

Left x f g ⟶ f x
Right x f g ⟶ g x

so we define

Left = λx f g. f x
Right = λx f g. g x

and now we translate them from λ-calculus into combinatory logic (one possible bracket abstraction):

Left = B (B K) (C I)
Right = B K (C I)

Of course, we can recognize Haskell's Either (Left, Right) here.
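A hedged Haskell sketch of the same continuation-passing sum (the names Sum, left, right, toEither are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-encoded direct sum (sketch; names are mine): a sum value
-- applies one of two continuations to its payload.
type Sum a b = forall r. (a -> r) -> (b -> r) -> r

left :: a -> Sum a b
left x f _g = f x        -- Left x f g ⟶ f x

right :: b -> Sum a b
right x _f g = g x       -- Right x f g ⟶ g x

-- Recover Haskell's Either for inspection.
toEither :: Sum a b -> Either a b
toEither s = s Left Right
```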
Catamorphisms for recursive datatypes
Let us define the concept of list by its catamorphism (see Haskell's foldr): a list (each concrete list) is a function taking two arguments
- a two-parameter function argument (cons-continuation)
- a zero-parameter function argument (nil-continuation)
and returning a value built by applying the cons-continuation and the nil-continuation in the same shape as the corresponding list. E.g. in the case of the list [1, 2, 3], taking (+) as cons-continuation and 0 as nil-continuation yields

+ 1 (+ 2 (+ 3 0))
But how do we define cons and nil themselves? The λ-calculus definitions

nil = λc n. n
cons = λh t c n. c h (t c n)

do the job. Let us think of the variables as h denoting head, t denoting tail, c denoting cons-continuation, and n denoting nil-continuation.
Using the formulating combinators described in Haskell B. Curry's Combinatory Logic I, we can translate these definitions into combinatory logic without any pain; one possible bracket abstraction gives

nil = K I
cons = B S (B (B B) (C I))
Of course we could take the two parameters in the opposite order, but I am not yet sure that it would provide an easier way.
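This catamorphism view of lists can be sketched in Haskell (my encoding and names; CList is just foldr reified):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A list *is* its foldr: it takes a cons-continuation and a
-- nil-continuation. (Sketch; names are mine.)
newtype CList a = CList (forall r. (a -> r -> r) -> r -> r)

nil :: CList a
nil = CList (\_c n -> n)                         -- nil = λc n. n

cons :: a -> CList a -> CList a
cons h (CList t) = CList (\c n -> c h (t c n))   -- cons = λh t c n. c h (t c n)

-- Recover an ordinary list, and the sum example from the text.
toList :: CList a -> [a]
toList (CList l) = l (:) []

sumC :: CList Int -> Int
sumC (CList l) = l (+) 0       -- [1,2,3] reduces to + 1 (+ 2 (+ 3 0))
```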
A little practice: let us define concat. In Haskell, we can do that by

concat = foldr (++) []

which corresponds in combinatory logic to reducing the corresponding term. Let us use the ordered pair (direct product) construct here, together with that nasty centred (see later).
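Assuming the Church-list encoding sketched above (repeated here so the fragment stands alone; names are mine), concat really is foldr of append:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Self-contained sketch (names mine): Church lists and concat = foldr (++) [].
newtype CList a = CList (forall r. (a -> r -> r) -> r -> r)

nil :: CList a
nil = CList (\_c n -> n)

cons :: a -> CList a -> CList a
cons h (CList t) = CList (\c n -> c h (t c n))

append :: CList a -> CList a -> CList a
append (CList xs) ys = xs cons ys        -- foldr (:) list2 list1

concatC :: CList (CList a) -> CList a
concatC (CList xss) = xss append nil     -- foldr (++) []

toList :: CList a -> [a]
toList (CList l) = l (:) []
```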
Monads in Combinatory Logic?
The list as a monad
Let us think of our list operations as implementing the monadic methods of the list monad. We can express this by definitions too. Now let us see mapping a list, concatenating a list, and binding a list. Mapping and binding have a common property: both yield nil for nil. I shall say these operations are centred: their definitions contain a common subexpression. Thus I shall give the name centred to this subexpression:
Now let us define map and bind for lists; now we see that it was worth defining a common centred.
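A sketch of map and bind over Church lists (names mine); both yield nil for nil, which is the "centred" shape described above:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Sketch (names mine): map and bind for Church lists; both are
-- "centred" in that they yield nil for nil.
newtype CList a = CList (forall r. (a -> r -> r) -> r -> r)

nil :: CList a
nil = CList (\_c n -> n)

cons :: a -> CList a -> CList a
cons h (CList t) = CList (\c n -> c h (t c n))

append :: CList a -> CList a -> CList a
append (CList xs) ys = xs cons ys

mapC :: (a -> b) -> CList a -> CList b
mapC f (CList l) = l (\h t -> cons (f h) t) nil       -- nil for nil

bindC :: CList a -> (a -> CList b) -> CList b
bindC (CList l) f = l (\h t -> append (f h) t) nil    -- nil for nil

toList :: CList a -> [a]
toList (CList l) = l (:) []
```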
But to tell the truth, it may be a trap. centred breaks a symmetry: we should always define the cons part and the nil part of the foldr construct on the same level, always together. Modularization should point in this direction, and not run forward into the dead end of centred.
Another remark: of course we can get the monadic bind for lists this way too. But we used append here. How do we define it? It is surprisingly simple. Let us think how we would define it in Haskell by foldr, if (++) were not defined already in the Prelude:

(++) :: [a] -> [a] -> [a]
(++) []       list2 = list2
(++) (a : as) list2 = a : (++) as list2

or we can do it by

(++) list1 list2 = foldr (:) list2 list1

Let us see how we should reduce its corresponding expression in Combinatory Logic; one possible result is

append = C I cons
Thus, we have defined monadic bind for lists. I shall call this the deforested bind for lists. Of course, we could define it another way too: by concat and map, which corresponds to defining monadic bind from monadic map and monadic join. But I think that way would force my CL interpreter to manage temporary lists, so I rather gave the deforested definition.
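The two routes to bind, deforested versus concat-of-map, can be compared in a self-contained Haskell sketch (names mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Sketch (names mine): deforested bind vs. bind as concat . map.
newtype CList a = CList (forall r. (a -> r -> r) -> r -> r)

nil :: CList a
nil = CList (\_c n -> n)

cons :: a -> CList a -> CList a
cons h (CList t) = CList (\c n -> c h (t c n))

append :: CList a -> CList a -> CList a
append (CList xs) ys = xs cons ys

mapC :: (a -> b) -> CList a -> CList b
mapC f (CList l) = l (cons . f) nil

concatC :: CList (CList a) -> CList a
concatC (CList l) = l append nil

-- Deforested: no intermediate list of lists is built.
bindDeforested :: CList a -> (a -> CList b) -> CList b
bindDeforested (CList l) f = l (append . f) nil

-- Via join and map: builds a temporary list of lists.
bindViaJoin :: CList a -> (a -> CList b) -> CList b
bindViaJoin xs f = concatC (mapC f xs)

toList :: CList a -> [a]
toList (CList l) = l (:) []
```

Both give the same results; the deforested version simply avoids materialising the intermediate structure.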
Defining the other monadic operation, return, for lists is easy:

instance Monad [] where
  return = (: [])

in Haskell; and we know

return = flip (:) []
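For completeness, return over the Church-list encoding (a sketch; names mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Sketch (names mine): return for Church lists is a one-element list,
-- mirroring return = (: []) for ordinary lists.
newtype CList a = CList (forall r. (a -> r -> r) -> r -> r)

nil :: CList a
nil = CList (\_c n -> n)

cons :: a -> CList a -> CList a
cons h (CList t) = CList (\c n -> c h (t c n))

returnC :: a -> CList a
returnC x = cons x nil

toList :: CList a -> [a]
toList (CList l) = l (:) []
```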
How to AOP with monads in Combinatory Logic?
We have defined the monadic list in CL. Of course we can also make a monadic Maybe, binary trees, an Error monad with direct sum constructs...
But separation of concerns by monads is more than having a bunch of special monads. It requires other possibilities too: e.g. being able to use monads generically, so that the same code can be instantiated at any concrete monad.
Of course my simple CL interpreter does not know anything about type classes or overloading. But there is a rather restricted and static possibility provided by the concept of definition itself:
if later we want to change the binding mode named A, e.g. from a failure-handling Maybe-like one to a more general nondeterminism-handling list-like one, we can do that simply by replacing its definition.
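In Haskell terms, this "replace one definition" trick corresponds to writing code against an abstract Monad and picking the concrete monad later; a sketch (addDigits is a hypothetical example of mine):

```haskell
-- Sketch: code written against a general monad. Swapping the concrete
-- monad (Maybe vs. []) changes failure handling into nondeterminism
-- without touching this definition.
addDigits :: Monad m => m Int -> m Int -> m Int
addDigits mx my =
  mx >>= \x ->
  my >>= \y ->
  return (x + y)

asMaybe :: Maybe Int
asMaybe = addDigits (Just 1) (Just 2)     -- failure-handling mode

asList :: [Int]
asList = addDigits [1, 2] [10, 20]        -- nondeterminism mode
```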
Illative Combinatory Logic
Systems of Illative Combinatory Logic complete for first-order propositional and predicate calculus by Henk Barendregt, Martin Bunder, Wil Dekkers.
I think combinator G can be thought of as something analogous to dependent types: it seems to me that the dependent type construct of Epigram corresponds to G in Illative Combinatory Logic. I think that, e.g., Epigram's dependent function space and G-terms should correspond to each other.
My dream is making something in Illative Combinatory Logic. Maybe it could be a theoretical base for a functional logic language?