# Combinatory logic

# Implementing CL

# Programming in CL

I think many thoughts from John Hughes' *Why Functional Programming Matters* can be applied to programming in combinatory logic. And almost all concepts used in the Haskell world (catamorphisms etc.) help us a lot here too. Combinatory logic is a powerful and concise programming language. I wonder how functional logic programming could be done using the concepts of illative combinatory logic, too.

## Datatypes

### Continuation passing for linear datatypes

#### Direct product

The ordered pair can be defined as

⟨x, y⟩ ≡ λ f . f x y

in λ-calculus and as

pair ≡ **B** **C** (**C** **I**)

in combinatory logic (so that pair x y f reduces to f x y).

A nice generalization scheme:

- as the construct can be generalized to any natural number (the concept of n-tuple, see Barendregt's *The Lambda Calculus*)
- and in this generalized scheme **I** corresponds to the 0 case, **C** **I** to the 1 case, and the ordered pair construct to the 2 case, as though defining ⟨⟩, ⟨x⟩, ⟨x, y⟩
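The scheme above can be sketched in Haskell (the names `tuple0`, `tuple1`, `tuple2` are mine, for illustration):

```haskell
-- Continuation-passing n-tuples: an n-tuple is a function that feeds
-- its n components to a continuation f.
tuple0 :: r -> r
tuple0 f = f              -- the 0 case: this is just I

tuple1 :: a -> (a -> r) -> r
tuple1 x f = f x          -- the 1 case: this is C I

tuple2 :: a -> b -> (a -> b -> r) -> r
tuple2 x y f = f x y      -- the 2 case: the ordered pair
```

Projections then come for free: `tuple2 1 2 (\x _ -> x)` selects the first component.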

so we can write the definition

or the same

in a more interesting way:

Is this generalizable? I do not know.
I know an analogy in the case of `left`, `right`, `just`, `nothing`:

- left ≡ λ a l r . l a
- right ≡ λ b l r . r b
- just ≡ λ x j n . j x
- nothing ≡ λ j n . n

We know this construct well from defining operations for booleans and Church numerals. But in general, when defining datatypes in this continuation-passing way, operations on the so-defined datatypes often turn out to be well-definable in the same continuation-passing style.
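For instance, continuation-passing booleans and operations on them can be sketched in Haskell like this (a sketch, with deliberately monomorphic continuation types):

```haskell
-- Church booleans as continuation selectors: a boolean *chooses*
-- between its two continuations.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- Operations need no case analysis; they just rewire continuations.
not' :: (a -> a -> a) -> a -> a -> a
not' b t f = b f t

and' :: (a -> a -> a) -> (a -> a -> a) -> a -> a -> a
and' x y t f = x (y t f) f
```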

### Catamorphisms for recursive datatypes

#### List

Let us define the concept of list by its catamorphism (see Haskell's `foldr`): a list (each concrete list) is a function taking two arguments

- a two-parameter function argument (cons-continuation)
- a zero-parameter function argument (nil-continuation)

and returns a value obtained by applying the cons-continuations and nil-continuations in the same shape as the corresponding list. E.g., having defined

the expression

reduces to

```
+ 1 (+ 2 (+ 3 0))
```
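This example can be checked directly in Haskell (a sketch; `oneTwoThree` is my name for the encoded list):

```haskell
-- The list [1,2,3] as its own foldr: it applies the cons-continuation
-- c and the nil-continuation n in the same shape as the list.
oneTwoThree :: (Int -> r -> r) -> r -> r
oneTwoThree c n = c 1 (c 2 (c 3 n))

-- Folding with (+) and 0 yields + 1 (+ 2 (+ 3 0)) = 6:
total :: Int
total = oneTwoThree (+) 0
```

Passing `(:)` and `[]` instead recovers the ordinary list: `oneTwoThree (:) []` is `[1,2,3]`.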

But how to define `cons` and `nil`? In λ-calculus,

- cons ≡ λ h t c n . c h (t c n)
- nil ≡ λ c n . n

do the job. Let us think of the variables as *h* denoting head, *t* denoting tail, *c* denoting cons-continuation, and *n* denoting nil-continuation.
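These two definitions can be transcribed into Haskell to check that they behave as expected (a sketch):

```haskell
-- cons h t: a list whose fold applies c to the head h and to the
-- already-folded tail (t c n).
cons :: a -> ((a -> r -> r) -> r -> r) -> (a -> r -> r) -> r -> r
cons h t c n = c h (t c n)

-- nil: a list whose fold is just the nil-continuation.
nil :: (a -> r -> r) -> r -> r
nil _ n = n
```

`cons 1 (cons 2 nil) (:) []` rebuilds `[1,2]`, and folding the same encoded list with `(+)` and `0` gives `3`.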

Using the formulating combinators described in Curry's *Combinatory Logic I*, we can translate these definitions into combinatory logic without any pain:

Of course we could take the two parameters in the opposite order, but I am not yet sure that it would provide an easier way.

A little practice: let us define `concat`. In Haskell, we can do that by

```haskell
concat = foldr (++) []
```

which corresponds in combinatory logic to reducing

Let us use the ordered pair (direct product) construct:

and if I use that nasty `centred` (see later):

## Monads in Combinatory Logic?

### Concrete monads

#### The list as a monad

We could name

- `concat` ≡ `list-join`

Now let us see mapping a list, concatenating a list, binding a list.
Mapping and binding have a common property: yielding nil for nil.
I shall say these operations are *centred*: their definitions would contain a common subexpression. Thus I shall give a name to this subexpression:

Now let us define map and bind for lists:

Now we see it was worth defining a common `centred`.
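In Haskell, a map for such catamorphism-encoded lists shows the *centred* shape: the nil-continuation is passed through untouched, only the cons side changes (a sketch; `mapL` is my name):

```haskell
-- map over a catamorphism-encoded list, foldr-style: rebuild with a
-- cons-continuation that first applies f; pass n through unchanged.
mapL :: ((a -> r -> r) -> r -> r)   -- the encoded input list
     -> (a -> b)                    -- the function to map
     -> (b -> r -> r) -> r -> r     -- the encoded result list
mapL xs f c n = xs (\a r -> c (f a) r) n
```

`mapL (\c n -> c 1 (c 2 n)) (+10) (:) []` yields `[11,12]`.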
But to tell the truth, it may be a trap. `centred` breaks a symmetry: we should always define the cons and nil parts of the `foldr` construct on the same level, always together. Modularization should point in this direction, and not run forward into the blind alley of `centred`.

Another remark: of course we can get the monadic bind for lists

But we used `append` here. How do we define it? It is surprisingly simple. Let us think how we would define it in Haskell with `foldr`, if it were not already defined as `++` in the Prelude:
In defining

```haskell
(++) list1 list2
```

we can do it by `foldr`:

```haskell
(++) []       list2 = list2
(++) (a : as) list2 = a : (++) as list2
```

thus

```haskell
(++) list1 list2 = foldr (:) list2 list1
```

Let us see how we should reduce its corresponding expression in combinatory logic:

thus

Thus, we have defined monadic bind for lists. I shall call this the *deforested* bind for lists. Of course, we could define it another way too: by `concat` and `map`, which corresponds to defining monadic bind from monadic map and monadic join.
But I think that way would force my CL interpreter to manage temporary lists, so I gave the deforested definition instead.
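The deforested bind can be sketched in Haskell over the same encoding (`bindL` is my name): each `f a` is folded with the outer cons-continuation directly, so no intermediate list of lists is ever built:

```haskell
-- Deforested bind: thread the final cons-continuation c straight
-- through every f a, instead of building lists and concatenating.
bindL :: ((a -> r -> r) -> r -> r)          -- the encoded list
      -> (a -> (b -> r -> r) -> r -> r)     -- the Kleisli function
      -> (b -> r -> r) -> r -> r            -- the encoded result
bindL xs f c n = xs (\a r -> f a c r) n
```

For example, binding the encoded `[1,2]` with a "duplicate" function reduces to the encoded `[1,1,2,2]`.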

Defining the other monadic operation, return, for lists is easy:

```haskell
instance Monad [] where
  return = (: [])
```

in Haskell. We know `(: [])` translates to

```haskell
return = flip (:) []
```

so
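For the catamorphism encoding, return is just cons applied to nil in spirit: a one-element fold (a sketch; `returnL` is my name):

```haskell
-- return for catamorphism-encoded lists: a singleton, i.e. cons x nil.
returnL :: a -> (a -> r -> r) -> r -> r
returnL x c n = c x n
```

`returnL 7 (:) []` rebuilds `[7]`.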

### How to AOP with monads in Combinatory Logic?

We have defined the monadic list in CL. Of course we can make a monadic Maybe, binary tree, or Error monad with direct sum constructs...

But separation of concerns by monads is more than having a bunch of special monads. It requires other possibilities too: e.g. being able to use monads generically, so that the same code can work with any concrete monad.

Of course my simple CL interpreter does not know anything about type classes or overloading. But there is a rather restricted and static possibility provided by the concept of *definition* itself:

and if we later want to change the binding mode named A, e.g. from a failure-handling Maybe-like one to a more general nondeterminism-handling list-like one, then we can do that simply by replacing the definition

with the definition
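In Haskell the idea can be sketched like this (all names here are hypothetical): client code mentions only one name, `bindA`, and switching monads is a one-line change of that definition:

```haskell
-- A failure-handling, Maybe-like bind:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

-- A nondeterminism-handling, list-like bind:
bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concatMap f xs

-- The one definition client code is written against; replacing its
-- right-hand side (and its type) switches the "binding mode" A.
bindA :: [a] -> (a -> [b]) -> [b]
bindA = bindList
```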

# Illative Combinatory Logic

*Systems of Illative Combinatory Logic Complete for First-Order Propositional and Predicate Calculus* by Henk Barendregt, Martin Bunder, Wil Dekkers.

I think combinator **G** can be thought of as something analogous to *dependent types*:
it seems to me that the dependent type construct
of Epigram corresponds to **G**
in Illative Combinatory Logic. I think e.g. the following should correspond to each other:

My dream is making something in Illative Combinatory Logic. Maybe it could be a theoretical base for a functional logic language?