Combinatory logic

From HaskellWiki

Revision as of 14:28, 3 March 2006

Portals and other large-scale resources

Implementing CL

  • Talks about it at haskell-cafe
  • Lots of interpreters at John's Lambda Calculus and Combinatory Logic Playground
  • CL++, a lazy-evaluating combinatory logic interpreter with some computer algebra services: e.g. it can reply to the question + 2 3 with 5 instead of a huge amount of parentheses and K, S combinators. Unfortunately I have not written it directly in English, so all documentation, source code and libraries are in Hungarian. I want to rewrite it using more advanced Haskell programming concepts (e.g. monads or attribute grammars) and directly in English.
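
The core job of such an interpreter can be sketched in a few lines of Haskell (a toy illustration of normal-order S-K reduction, not the CL++ code; the names Term, step and normalize are mine):

```haskell
-- A toy normal-order reducer for S-K combinator terms.
data Term = S | K | App Term Term deriving (Eq, Show)

-- One step of leftmost (normal-order) reduction, if any redex exists.
step :: Term -> Maybe Term
step (App (App K x) _)         = Just x                          -- K x y  ->  x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))  -- S f g x -> f x (g x)
step (App f x) = case step f of
  Just f' -> Just (App f' x)       -- reduce the function part first
  Nothing -> App f <$> step x      -- then the argument part
step _ = Nothing

-- Reduce to normal form (may diverge on terms without one).
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- Example: the identity combinator I = S K K.
identity :: Term
identity = App (App S K) K
```

For instance, normalize applied to identity applied to K performs the S-step and then a K-step, ending at K.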

Programming in CL

I think many thoughts from John Hughes' Why Functional Programming Matters can be applied to programming in Combinatory Logic. And almost all concepts used in the Haskell world (catamorphisms etc.) help us a lot here too. Combinatory logic is a powerful and concise programming language. I wonder how functional logic programming could be done using the concepts of Illative combinatory logic, too.

Datatypes

Continuation passing for polynomial datatypes

Direct product

Let us begin with the notion of the ordered pair and denote it by <math>\mathbf2</math>. We know this construct well from defining operations for booleans

  • <math>\mathbf{true} \equiv \mathbf{K}</math>
  • <math>\mathbf{false} \equiv \mathbf{K}_{\ast}</math>
  • <math>\mathbf{not} \equiv \mathbf2\;\mathbf{false}\;\mathbf{true}</math>

and Church numerals. I think, in general, when defining datatypes in a continuation-passing way (e.g. Maybe or direct sum), operations on the datatypes so defined often turn out to be well-definable by some <math>\mathbf{n}</math>.

We define it with

  • <math>\mathbf2 \equiv \lambda\;x\;y\;f\;.\;f\;x\;y</math>

in lambda-calculus and

  • <math>\mathbf2 \equiv \mathbf{C}_{(1)}\;\mathbf{C}_{\ast}</math>

in combinatory logic.
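
These definitions can be tried out directly in Haskell (a sketch; pair plays the role of <math>\mathbf2</math>, and notB avoids clashing with the Prelude's not):

```haskell
-- The ordered pair 2 = \x y f -> f x y: a value that feeds its
-- two components to whatever continuation it is given.
pair :: a -> b -> (a -> b -> c) -> c
pair x y f = f x y

-- Booleans in the same style: true = K, false = K*.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- not = 2 false true: the boolean itself selects between the
-- two possible results.
notB :: ((a -> a -> a) -> (a -> a -> a) -> b) -> b
notB b = b false true
```

The projections are simply continuations: passing const to a pair extracts the first component, flip const the second.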

A nice generalization scheme:

  • as the construct can be generalized to any natural number n (the concept of n-tuple, see Barendregt's λ Calculus)
  • and in this generalized scheme <math>\mathbf{I}</math> corresponds to the 0 case, <math>\mathbf{C}_{\ast}</math> to the 1 case, and the ordered pair construct <math>\mathbf2</math> to the 2 case, as though defining
    • <math>\mathbf0 \equiv \mathbf{I}</math>
    • <math>\mathbf1 \equiv \mathbf{C}_{\ast}</math>

so we can write the definition

  • <math>\mathbf2 \equiv \mathbf{C}_{(1)}\;\mathbf{C}_{\ast}</math>

or, the same definition, in a more interesting way:

  • <math>\mathbf2 \equiv \mathbf{C}_{(1)}\;\mathbf1</math>

Is this generalizable? I do not know. I know of an analogy in the case of <math>\mathbf{left}</math>, <math>\mathbf{right}</math>, <math>\mathbf{just}</math>, <math>\mathbf{nothing}</math>.

Direct sum

The notion of ordered pair mentioned above really enables us to deal with direct products. What about its dual concept? How do we make direct sums in Combinatory Logic? And after we have implemented it, how can we see that it is really the dual of the direct product?

A nice argument described on David Madore's Unlambda page gives us a continuation-passing-style-like solution. We expect reductions like

  • <math>\mathbf{left}\;x \to \lambda\;f\;g\;.\;f\;x</math>
  • <math>\mathbf{right}\;x \to \lambda\;f\;g\;.\;g\;x</math>

so we define

  • <math>\mathbf{left} \equiv \lambda\;x\;f\;g\;.\;f\;x</math>
  • <math>\mathbf{right} \equiv \lambda\;x\;f\;g\;.\;g\;x</math>

now we translate them from <math>\lambda</math>-calculus into combinatory logic:

  • <math>\mathbf{left} \equiv \mathbf{K}_{(2)}\;\mathbf{C}_{\ast}</math>
  • <math>\mathbf{right} \equiv \mathbf{K}_{(1)}\;\mathbf{C}_{\ast}</math>

Of course, we can recognize Haskell's Either (Left, Right).
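
The expected reductions can be checked in Haskell (a sketch; these functions play the roles of Left and Right with either already built in):

```haskell
-- Continuation-passing sums: a left-tagged value feeds itself to
-- the first continuation, a right-tagged one to the second.
left :: a -> (a -> c) -> (b -> c) -> c
left x f _ = f x

right :: b -> (a -> c) -> (b -> c) -> c
right x _ g = g x
```

Compare either f g (Left x) = f x: a continuation-passing sum is a value with either already applied to it.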

Maybe

Let us remember Haskell's maybe:

maybe :: a' -> (a -> a') -> Maybe a -> a'
maybe n j Nothing = n
maybe n j (Just x) = j x

thinking of

  • n as nothing-continuation
  • j as just-continuation

In a continuation-passing-style approach, if we want to implement something like the Maybe construct in λ-calculus, then we may expect the following reductions:

  • <math>\mathbf{nothing} \to \lambda\;n\;j\;.\;n</math>
  • <math>\mathbf{just}\;x \to \lambda\;n\;j\;.\;j\;x</math>

we know both of them well: one is <math>\mathbf{K}</math> itself, and we remember the other from the direct sum:

  • <math>\mathbf{nothing} \equiv \mathbf{K}</math>
  • <math>\mathbf{just} \equiv \mathbf{right}</math>

thus their definition is

  • <math>\mathbf{nothing} \equiv \mathbf{K}</math>
  • <math>\mathbf{just} \equiv \mathbf{K}_{(1)}\;\mathbf{C}_{\ast}</math>

where both just and right have a common definition.
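
In Haskell the corresponding continuation-passing Maybe looks like this (a sketch; the names nothingC and justC are mine, chosen to avoid the Prelude):

```haskell
-- nothing = K: ignore the just-continuation, return the default.
nothingC :: n -> (a -> n) -> n
nothingC n _ = n

-- just x = right x: feed x to the just-continuation.
justC :: a -> n -> (a -> n) -> n
justC x _ j = j x
```

Note that nothingC n j and justC x n j mirror maybe n j Nothing and maybe n j (Just x): here the value itself does the dispatching.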

Catamorphisms for recursive datatypes

List

Let us define the concept of list by its catamorphism (see Haskell's foldr): a list (each concrete list) is a function taking two arguments

  • a two-parameter function argument (cons-continuation)
  • a zero-parameter function argument (nil-continuation)

and returning a value coming from a term consisting of applications of the cons-continuation and the nil-continuation in the same shape as the corresponding list. E.g. in case of having defined

  • <math>\mathbf{oneTwoThree} \equiv \mathbf{cons}\;\mathbf1\;\left( \mathbf{cons}\;\mathbf2\;\left(\mathbf{cons}\;\mathbf3\;\mathbf{nil}\right) \right)</math>

the expression

  • <math>\mathbf{oneTwoThree}\;\mathbf+\;\mathbf0</math>

reduces to

  • <math>\mathbf+\;\mathbf1\;\left(\mathbf+\;\mathbf2\; \left(\mathbf+\;\mathbf3\;\mathbf0\right)\right)</math>

But how to define <math>\mathbf{cons}</math> and <math>\mathbf{nil}</math>? In <math>\lambda</math>-calculus, we should like to see the following reductions:

  • <math>\mathbf{nil}\;c\;n \to n</math>
  • <math>\mathbf{cons}\;h\;t \to \lambda\;c\;n\;.\;c\;h\;\left(t\;c\;n\right)</math>

Let us think of the variables as h denoting head, t denoting tail, c denoting cons-continuation, and n denoting nil-continuation.

Thus, we could achieve this goal with the following definitions:

  • <math>\mathbf{nil} \equiv \lambda\;c\;n\;.\;n</math>
  • <math>\mathbf{cons} \equiv \lambda\;h\;t\;c\;n\;.\;c\;h\;\left(t\;c\;n\right)</math>

Using the formulating combinators described in Haskell B. Curry's Combinatory Logic I, we can translate these definitions into combinatory logic without any pain:

  • <math>\mathbf{nil} \equiv \mathbf{K}_{\ast}</math>
  • <math>\mathbf{cons} \equiv \mathbf{B}\;\left(\mathbf{\Phi}\;\mathbf{B}\right)\;\mathbf{C}_{\ast}</math>

Of course we could take the two parameters in the opposite order, but I am not yet sure that it would provide an easier way.
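
This list-as-its-own-foldr idea can be typed directly in Haskell (a sketch; nilC and consC are my names):

```haskell
-- A list is a function taking a cons-continuation and a
-- nil-continuation, in that order.
nilC :: (a -> l -> l) -> l -> l
nilC _ n = n

consC :: a -> ((a -> l -> l) -> l -> l) -> (a -> l -> l) -> l -> l
consC h t c n = c h (t c n)

-- The example list from above.
oneTwoThree :: (Int -> l -> l) -> l -> l
oneTwoThree = consC 1 (consC 2 (consC 3 nilC))
```

Now oneTwoThree (+) 0 reduces exactly as promised, to 1 + (2 + (3 + 0)), and oneTwoThree (:) [] recovers the ordinary Haskell list.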

A little practice: let us define concat. In Haskell, we can do that by

concat = foldr (++) []

which corresponds in combinatory logic to reducing

  • <math>\mathbf{concat}\;l \to l\;\mathbf{append}\;\mathbf{nil}</math>

Let us use the ordered pair (direct product) construct:

  • <math>\mathbf{concat} \equiv \mathbf2\;\mathbf{append}\;\mathbf{nil}</math>

and if I use that nasty <math>\mathbf{centred}</math> (see later)

  • <math>\mathbf{concat} \equiv \mathbf{centred}\;\mathbf{append}</math>

Monads in Combinatory Logic?

Concrete monads

Maybe as a monad

return

Implementing the return monadic method for the Maybe monad is rather straightforward, both in Haskell and CL:

instance Monad Maybe where
        return = Just
        ...
  • <math>\mathbf{maybereturn} \equiv \mathbf{just}</math>
map

Haskell:

instance Functor Maybe where
        fmap f = maybe Nothing (Just . f)

λ-calculus: Expected reductions:

  • <math>\mathbf{maybemap}\;f\;p \to p\;\mathbf{nothing}\;\left(\mathbf{just}_{(1)}\;f\right)</math>

Definition:

  • <math>\mathbf{maybemap} \equiv \lambda\;f\;p\;.\;p\;\mathbf{nothing}\;\left(\mathbf{just}_{(1)}\;f\right)</math>

Combinatory logic: we expect the same reduction here too

  • <math>\mathbf{maybemap}\;f\;p \to p\;\mathbf{nothing}\;\left(\mathbf{just}_{(1)}\;f\right)</math>

let us get rid of one parameter:

  • <math>\mathbf{maybemap}\;f \equiv \mathbf2\;\mathbf{nothing}\;\left(\mathbf{just}_{(1)}\;f\right)</math>

now we have the definition:

  • <math>\mathbf{maybemap} \equiv \mathbf{B}\;\left(\mathbf2\;\mathbf{nothing}\right)\;\mathbf{just}_{(1)}</math>
bind

Haskell:

instance Monad Maybe where
        (>>=) p f = maybe Nothing f p

λ-calculus: we expect

  • <math>\mathbf{maybe}_{=\!\!<\!\!<}\;f\;p \to p\;\mathbf{nothing}\;f</math>

achieved by the definition

  • <math>\mathbf{maybe}_{=\!\!<\!\!<} \equiv \lambda\;f\;p\;.\;p\;\mathbf{nothing}\;f</math>

In combinatory logic the above expected reduction

  • <math>\mathbf{maybe}_{=\!\!<\!\!<}\;f\;p \to p\;\mathbf{nothing}\;f</math>

getting rid of the outermost parameter

  • <math>\mathbf{maybe}_{=\!\!<\!\!<}\;f \equiv \mathbf2\;\mathbf{nothing}\;f</math>

yielding the definition

  • <math>\mathbf{maybe}_{=\!\!<\!\!<} \equiv \mathbf2\;\mathbf{nothing}</math>

and of course

  • <math>\mathbf{maybe}_{>\!\!>\!\!=} \equiv \mathbf{C}\;\mathbf{maybe}_{=\!\!<\!\!<}</math>

But the other way (starting with a better-chosen parameter order) is much better:

  • <math>\mathbf{maybe}_{>\!\!>\!\!=}\;p\;f \to p\;\mathbf{nothing}\;f</math>
  • <math>\mathbf{maybe}_{>\!\!>\!\!=}\;p \equiv p\;\mathbf{nothing}</math>

yielding the much simpler and more efficient definition:

  • <math>\mathbf{maybe}_{>\!\!>\!\!=} \equiv \mathbf{C}_{\ast}\;\mathbf{nothing}</math>

We already know that <math>\mathbf{C}_{\ast}</math> can be seen as a member of the scheme of tuples: it is <math>\mathbf{n}</math> for the n = 1 case. As the tuple construction is a usual guest at things like this (we shall meet it again with lists and other maybe-operations like <math>\mathbf{maybejoin}</math>), let us express the above definition with <math>\mathbf{C}_{\ast}</math> denoted as <math>\mathbf1</math>:

  • <math>\mathbf{maybe}_{>\!\!>\!\!=} \equiv \mathbf1\;\mathbf{nothing}</math>

hoping that this will enable some interesting generalizations in the future.

But why have we not made a braver generalization and expressed monadic bind from monadic join and map? Later, in the list monad, we shall see that it may be better to avoid this for the sake of deforestation. Here a similar problem may appear: the problem of a superfluous <math>\mathbf{I}</math>.

join
  • <math>\mathbf{maybejoin} \equiv \mathbf2\;\mathbf{nothing}\;\mathbf{I}</math>

We should think of changing the architecture if we suspect that we could avoid <math>\mathbf{I}</math> and solve the problem with a simpler construct.
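
The maybe-operations derived above can be transcribed into Haskell and checked; wrapping the continuation-passing type in a newtype sidesteps the rank-2 typing problems (a sketch; all names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A CPS Maybe: a value consuming a nothing-continuation
-- and a just-continuation.
newtype CMaybe a = CMaybe { runMaybe :: forall n. n -> (a -> n) -> n }

nothingC :: CMaybe a
nothingC = CMaybe (\n _ -> n)

justC :: a -> CMaybe a
justC x = CMaybe (\_ j -> j x)

-- maybemap f p = p nothing (just . f)
maybeMap :: (a -> b) -> CMaybe a -> CMaybe b
maybeMap f p = runMaybe p nothingC (justC . f)

-- maybe>>= p f = p nothing f
maybeBind :: CMaybe a -> (a -> CMaybe b) -> CMaybe b
maybeBind p f = runMaybe p nothingC f

-- maybejoin = 2 nothing I, i.e. p nothing id
maybeJoin :: CMaybe (CMaybe a) -> CMaybe a
maybeJoin p = runMaybe p nothingC id

-- Observe a CPS Maybe with a default value.
fromCMaybe :: n -> CMaybe n -> n
fromCMaybe d p = runMaybe p d id
```

The bodies are literal transcriptions of the combinatory definitions; only the newtype wrapping and unwrapping is extra.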


The list as a monad

Let us think of our list-operations as implementing monadic methods of the list monad. We can express this by definitions too, e.g.

we could name

  • <math>\mathbf{listjoin} \equiv \mathbf{concat}</math>

Now let us see mapping a list, concatenating a list, binding a list. Mapping and binding have a common property: yielding <math>\mathbf{nil}</math> for <math>\mathbf{nil}</math>. I shall say these operations are centred: their definition contains a <math>\mathbf{C}\;\mathbf2\;\mathbf{nil}</math> subexpression. Thus I shall give a name to this subexpression:

  • <math>\mathbf{centred} \equiv \mathbf{C}\;\mathbf2\;\mathbf{nil}</math>

Now let us define map and bind for lists:

  • <math>\mathbf{listmap} \equiv \mathbf{centred}_{(1)}\;\mathbf{cons}_{(1)}</math>
  • <math>\mathbf{list}_{=\!\!<\!\!<} \equiv \mathbf{centred}_{(1)}\;\mathbf{append}_{(1)}</math>

now we see it was worth defining a common <math>\mathbf{centred}</math>. But to tell the truth, it may be a trap. <math>\mathbf{centred}</math> breaks a symmetry: we should always define the cons and nil parts of the foldr construct on the same level, always together. Modularization should be pointed in this direction, and not run forward into the dead end of <math>\mathbf{centred}</math>.

Another remark: of course we can get the monadic bind for lists

  • <math>\mathbf{list}_{>\!\!>\!\!=} \equiv \mathbf{C}\;\mathbf{list}_{=\!\!<\!\!<}</math>

But we used append here. How do we define it? It is surprisingly simple. Let us think how we would define it in Haskell by foldr, if it were not already defined as ++ in the Prelude: in defining

(++) list1 list2

we can do it by foldr:

(++) [] list2 = list2
(++) (a : as) list2 = a : (++) as list2

thus

(++) list1 list2 = foldr (:) list2 list1

let us see how we should reduce its corresponding expression in Combinatory Logic:

  • <math>\mathbf{append}\;l\;m \to l\;\mathbf{cons}\;m</math>

thus

  • <math>\mathbf{append}\;l\;m \equiv l\;\mathbf{cons}\;m</math>
  • <math>\mathbf{append}\;l \equiv \mathbf1\;\mathbf{cons}\;l</math>
  • <math>\mathbf{append} \equiv \mathbf{C}_{\ast}\;\mathbf{cons}</math>

Thus, we have defined monadic bind for lists. I shall call this the deforested bind for lists. Of course, we could define it another way too: by concat and map, which corresponds to defining monadic bind from monadic map and monadic join. But I think this way would force my CL-interpreter to manage temporary lists, so I rather gave the deforested definition.
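
The deforested bind can be checked in Haskell too, wrapping the fold type in a newtype (a sketch; names are mine). Note how appendC l m is literally l cons m, and bindC folds only once, with append . f as the cons-continuation:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A list as its own foldr, wrapped in a newtype.
newtype CList a = CList { runList :: forall l. (a -> l -> l) -> l -> l }

nilC :: CList a
nilC = CList (\_ n -> n)

consC :: a -> CList a -> CList a
consC h t = CList (\c n -> c h (runList t c n))

-- append l m = l cons m: fold l with cons, starting from m instead of nil.
appendC :: CList a -> CList a -> CList a
appendC l m = runList l consC m

-- The deforested bind: one fold with (append . f) as cons-continuation,
-- never building the intermediate list of lists.
bindC :: CList a -> (a -> CList b) -> CList b
bindC l f = runList l (appendC . f) nilC

-- Conversions to and from ordinary lists, for testing.
fromList :: [a] -> CList a
fromList = foldr consC nilC

toList :: CList a -> [a]
toList l = runList l (:) []
```

A concat-and-map definition of bind would produce the same results, but only after materializing each intermediate list.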

Defining the other monadic operation, return for lists, is easy:

instance Monad [] where
        return = (: [])

in Haskell -- we know,

(: [])

translates to

return = flip (:) []

so

  • <math>\mathbf{listreturn} \equiv \mathbf{C}\;\mathbf{cons}\;\mathbf{nil}</math>

How to AOP with monads in Combinatory Logic?

We have defined the monadic list in CL. Of course we can make a monadic Maybe, binary trees, or an Error monad with direct sum constructs...

But separation of concerns by monads is more than having a bunch of special monads. It requires other possibilities too: e.g. being able to use monads generically, so that the same code can work with any concrete monad.

Of course my simple CL interpreter does not know anything about type classes or overloading. But there is a rather restricted and static possibility provided by the concept of definition itself:

  • <math>\mathbf{work} \equiv \mathbf{A}_{>\!\!>\!\!=}\;\mathbf{subwork}_1\;\mathbf{parametrizedsubwork}_2</math>

and if later we want to change the binding mode named A, e.g. from a failure-handling Maybe-like one to a more general indeterminism-handling list-like one, then we can do that simply by replacing the definition

  • <math>\mathbf{A}_{>\!\!>\!\!=} \equiv \mathbf{maybe}_{>\!\!>\!\!=}</math>

with definition

  • <math>\mathbf{A}_{>\!\!>\!\!=} \equiv \mathbf{list}_{>\!\!>\!\!=}</math>
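
In Haskell, this trick of leaving the binding mode open until a single definition fixes it can be sketched by passing the bind operation explicitly (a hypothetical example; workWith and the demo names are invented):

```haskell
-- A pipeline that is generic in its binding mode: the bind
-- operation itself is the first argument, playing the role of A>>=.
workWith :: (m a -> (a -> m b) -> m b) -> m a -> (a -> m b) -> m b
workWith aBind subwork1 parametrizedSubwork2 =
  subwork1 `aBind` parametrizedSubwork2

-- Fix the binding mode to the failure-handling Maybe bind...
demoMaybe :: Maybe Int
demoMaybe = workWith (>>=) (Just 2) (\x -> Just (x + 1))

-- ...or to the indeterminism-handling list bind, without touching workWith.
demoList :: [Int]
demoList = workWith (>>=) [1, 2] (\x -> [x, x + 1])
```

No Monad constraint is needed on workWith: the "dictionary" is exactly the one operation we pass in, which mirrors replacing the definition of A>>= in the CL interpreter.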

Illative Combinatory Logic

Systems of Illative Combinatory Logic complete for first-order propositional and predicate calculus by Henk Barendregt, Martin Bunder, Wil Dekkers.

I think combinator <math>\mathbf{G}</math> can be thought of as something analogous to dependent types: it seems to me that the dependent type construct <math>\left(x : S\right) \to T</math> of Epigram corresponds to <math>\mathbf{G}\;S\;\left(\lambda x . T\right)</math> in Illative Combinatory Logic. I think e.g. the following should correspond to each other:

  • <math>\mathbf{realNullvector} : \left(n : \mathrm{Nat}\right) \to \mathrm{RealVector}\;n</math>
  • <math>\mathbf{G}\;\mathrm{Nat}\;\mathrm{RealVector}\;\mathbf{realNullvector}</math>


My dream is making something in Illative Combinatory Logic. Maybe it could be a theoretical base for a functional logic language?