# User:Michiexile/MATH198/Lecture 4

### 1 Product

Recall the construction of a cartesian product of two sets: $A\times B=\{(a,b) : a\in A, b\in B\}$. We have functions $p_A:A\times B\to A$ and $p_B:A\times B\to B$ extracting the two sets from the product, and we can take any two functions $f:A\to A'$ and $g:B\to B'$ and take them together to form a function $f\times g:A\times B\to A'\times B'$.

Similarly, we can form the type of pairs of Haskell types: `type Pair s t = (s, t)`. For the pair type, we have canonical functions `fst :: (s,t) -> s` and `snd :: (s,t) -> t` extracting the components. And given two functions `f :: s -> s'` and `g :: t -> t'`, there is a function `f *** g :: (s,t) -> (s',t')`.
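The `(***)` combinator is exported from `Control.Arrow`; for plain functions it amounts to the following sketch (the name `cross` is ours, for illustration):

```haskell
-- cross applies f to the first component and g to the second,
-- mirroring the set-theoretic f x g : A x B -> A' x B'.
cross :: (s -> s') -> (t -> t') -> (s, t) -> (s', t')
cross f g (x, y) = (f x, g y)
```

so, for example, `cross show (+1) (True, 41)` yields `("True", 42)`.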

An element of the pair is completely determined by the two elements included in it. Hence, if we have a pair of generalized elements $q_1:V\to A$ and $q_2:V\to B$, we can find a unique generalized element $q:V\to A\times B$ such that composing it with the projection arrows gives us the original elements back.

This argument indicates to us a possible definition that avoids talking about elements in sets in the first place, and we are led to the

Definition A product of two objects A,B in a category C is an object $A\times B$ equipped with arrows $A \leftarrow^{p_1} A\times B\rightarrow^{p_2} B$ such that for any other object V with arrows $A \leftarrow^{q_1} V \rightarrow^{q_2} B$, there is a unique arrow $V\to A\times B$ such that the diagram

commutes. The diagram $A \leftarrow^{p_1} A\times B\rightarrow^{p_2} B$ is called a product cone if it is a diagram of a product with the projection arrows from its definition.

In the category of sets, the unique map is given by $q(v) = (q_1(v), q_2(v))$. In the Haskell category, it is given by the combinator `(&&&) :: (a -> b) -> (a -> c) -> a -> (b,c)`.
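This mediating arrow can be written down directly in Haskell; a minimal sketch (`Control.Arrow` exports it as `(&&&)`, and `pairing` here is our own name):

```haskell
-- pairing builds the unique arrow V -> (A, B) out of q1 :: v -> a
-- and q2 :: v -> b; the product cone equations then hold:
--   fst . pairing q1 q2 == q1
--   snd . pairing q1 q2 == q2
pairing :: (v -> a) -> (v -> b) -> v -> (a, b)
pairing q1 q2 v = (q1 v, q2 v)
```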

We tend to talk about the product. The justification for this lies in the first interesting

Proposition If P and P' are both products for A,B, then they are isomorphic.

Proof Consider the diagram

Both vertical arrows are given by the product property of the two product cones involved. Their compositions are endo-arrows of P,P', such that in each case, we get a diagram like

with $V = A\times B = P$ (or $P'$), and $q_1 = p_1$, $q_2 = p_2$. By the product property, there is only one endo-arrow that can make the diagram commute - but both the composition of the two arrows and the identity arrow itself make the diagram commute. Therefore, the composition has to be the identity. QED.

We can expand the binary product to higher order products easily - instead of pairs of arrows, we have families of arrows, and all the diagrams carry over to the larger case.

#### 1.1 Binary functions

Functions into a product help define the product in the first place, and serve as generalized elements of the product. Functions from a product, on the other hand, allow us to put a formalism around the idea of functions of several variables.

So a function of two variables, of types `A` and `B`, is a function `f :: (A,B) -> C`. The Haskell idiom for the same thing, `A -> B -> C`, views it as a function taking one argument and returning a function of a single variable. This viewpoint, together with the `curry`/`uncurry` procedures that mediate between the two, is tightly connected to the product, and will reemerge below, as well as when we talk about adjunctions later on.

### 2 Coproduct

The product came, in part, out of considering the pair construction. One alternative way to write the `Pair a b` type is:
data Pair a b = Pair a b

and the resulting type is isomorphic, in Hask, to the product type we discussed above.

This is one of two basic things we can do in a `data` type declaration, and corresponds to record types in Computer Science jargon.

The other thing we can do is to form a union type, by something like

data Union a b = Left a | Right b

which takes on either a value of type `a` or a value of type `b`, depending on which constructor we use.

This type guarantees the existence of two functions

Left  :: a -> Union a b
Right :: b -> Union a b

Similarly, in the category of sets we have the disjoint union $S\coprod T = S\times\{0\} \cup T\times\{1\}$, which also comes with inclusion functions $i_S: S\to S\coprod T$ and $i_T: T\to S\coprod T$.

We can use all this to mimic the product definition. The directions of the inclusions indicate that we may well want the dualization of the definition. Thus we define:

Definition A coproduct A + B of objects A,B in a category C is an object equipped with arrows $A \rightarrow^{i_1} A+B \leftarrow^{i_2} B$ such that for any other object V with arrows $A\rightarrow^{q_1} V\leftarrow^{q_2} B$, there is a unique arrow $A+B\to V$ such that the diagram

commutes. The diagram $A \rightarrow^{i_1} A+B \leftarrow^{i_2} B$ is called a coproduct cocone, and the arrows are inclusion arrows.

For sets, we need to insist on the specific construction using tagged pairs, rather than just any sets $S\times\{0\}$ and $T\times\{1\}$, for the coproduct to work out well. The issue here is that the categorical coproduct is not defined as one single construction, but rather by how it behaves with respect to the arrows involved.

With this caveat, however, the coproduct in Set really is the disjoint union sketched above.

For Hask, the coproduct is the type construction of `Union` above - more usually written `Either a b`.
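The unique arrow out of the coproduct is case analysis on the two constructors; the Prelude provides it as `either`, and a sketch of the same function (under our own name `copair`):

```haskell
-- copair builds the unique arrow Either a b -> v from q1 :: a -> v
-- and q2 :: b -> v; the coproduct cocone equations then hold:
--   copair q1 q2 . Left  == q1
--   copair q1 q2 . Right == q2
copair :: (a -> v) -> (b -> v) -> Either a b -> v
copair q1 q2 (Left x)  = q1 x
copair q1 q2 (Right y) = q2 y
```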

And following closely in the dualization of the things we did for products, there is a first

Proposition If C,C' are both coproducts for some pair A,B in a category D, then they are isomorphic.

The proof follows the exact pattern of the corresponding proposition for products.

### 3 Cartesian Closed Categories and typed lambda-calculus

A category is said to have pairwise products if for any objects A,B, there is a product object $A\times B$.

A category is said to have pairwise coproducts if for any objects A,B, there is a coproduct object A + B.

Recall when we talked about internal homs in Lecture 2. We can now define what we mean, formally, by the concept:

Definition An object $[A\to B]$ in a category is an internal hom object or an exponential object (also written $B^A$) if it comes equipped with an arrow $ev: [A\to B] \times A \to B$, called the evaluation arrow, such that for any other object $C$ and arrow $f: C\times A\to B$, there is a unique arrow $\lambda f: C\to [A\to B]$ such that the composite

$C\times A\to^{\lambda f\times 1_A} [A\to B]\times A\to^{ev} B$

is $f$.

The idea here is that given something in an exponential object, and something in the source of the arrows we imagine live inside the exponential, we can evaluate to produce something in the target. Using global elements, this reasoning comes through in a more natural manner: given $f: 1\to [A\to B]$ and $x: 1\to A$, we can produce the global element $f(x) = ev \circ (f\times x): 1\to B$ (using the isomorphism $1\cong 1\times 1$). Furthermore, the $\lambda$ construction lets us produce an element of the exponential whenever we have an arrow that behaves like one.
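In Hask, the exponential object $[A\to B]$ is the function type `a -> b`, and the evaluation arrow is just uncurried application; a sketch:

```haskell
-- ev takes a function together with an argument and applies it;
-- this is the evaluation arrow [A -> B] x A -> B.
ev :: (a -> b, a) -> b
ev (f, x) = f x

-- For f :: (c, a) -> b, the unique lambda f is curry f :: c -> a -> b,
-- and the defining equation reads: ev (curry f c, x) == f (c, x).
```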

And with this we can define

Definition A category C is a Cartesian Closed Category or a CCC if:

1. C has a terminal object 1
2. Each pair of objects $A, B\in C_0$ has a product $A\times B$ and projections $p_1:A\times B\to A$, $p_2:A\times B\to B$.
3. For every pair $A, B\in C_0$ of objects, there is an exponential object $[A\to B]$ with an evaluation map $[A\to B]\times A\to B$.

#### 3.1 Currying

Note that the exponential as described here is exactly what we need in order to discuss the Haskell concept of multi-parameter functions. If we consider the type of a binary function in Haskell:

binFunction :: a -> a -> a
This function really lives in the Haskell type `a -> (a -> a)`, and thus is an element in the repeated exponential object $[A \to [A\to A]]$. Evaluating once gives us a single-parameter function, the first parameter consumed by the first evaluation, and we can evaluate a second time, feeding in the second parameter to get the end result.

On the other hand, we can feed in both values at once, and get

binFunction' :: (a,a) -> a

which lives in the exponential object $[A\times A\to A]$.

These are genuinely different objects, but they seem to do the same thing: consume two distinct values to produce a third value. The resolution of the difference lies, again, in a recognition from Set theory: there is an isomorphism

$Hom(S, Hom(T, V)) \cong Hom(S\times T, V)$

which we can use as inspiration for an isomorphism $Hom(S,[T\to V]) \cong Hom(S\times T, V)$ valid in Cartesian Closed Categories.
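In Hask, this isomorphism is witnessed by the Prelude's `curry` and `uncurry` (the monomorphic `Int` instantiation below is ours, for illustration):

```haskell
binFunction :: Int -> Int -> Int
binFunction x y = x + y

-- uncurry turns the iterated exponential into the product form...
binFunction' :: (Int, Int) -> Int
binFunction' = uncurry binFunction

-- ...and curry turns it back: curry binFunction' == binFunction pointwise,
-- so the two directions are mutually inverse.
```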

As it turns out, this is exactly what we need for λ-calculus. Any typed λ-calculus gives rise to a CCC in a natural manner, and any CCC has an internal language which satisfies, by the axioms for the CCC, all requirements to be a typed λ-calculus.

More importantly, the advantage of stating λ-calculus in terms of a CCC instead of in terms of terms and rewriting rules is that you can escape worrying about variable clashes, alpha reductions and composability - the categorical translation ignores, at least superficially, the variables, reduces terms with morphisms that have equality built in, and provides associative composition for free.

At this point, I'd recommend reading more on Wikipedia [1] and [2], as well as in Lambek & Scott: Introduction to Higher Order Categorical Logic. The book by Lambek & Scott goes into great depth on these issues, but may be less than friendly to a novice.

### 4 Algebra of datatypes

Recall from Lecture 3 that we can consider endofunctors as container datatypes. Some of the more obvious such container datatypes include:

data One a = Empty   -- written 1 in the algebra below, since a Haskell type name cannot be a numeral
data T a = T a

These being the data type that has only one single element, and the data type that contains exactly one value.

Using these, we can generate a whole slew of further datatypes. First off, we can generate a data type with any finite number of elements by $n = 1 + 1 + \dots + 1$ ($n$ times). Remember that the coproduct construction for data types allows us to know which summand of the coproduct a given value is in, so the single elements in all the $1$s in the definition of $n$ here are all distinguishable, thus giving the final type the required number of elements. Of note among these is the data type $Bool = 2$ - the Boolean data type, characterized by having exactly two elements.
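As a Haskell sketch (the names `Two`, `First`, `Second` are ours), the isomorphism $Bool \cong 1 + 1$ can be exhibited directly:

```haskell
-- Two = 1 + 1: two distinguishable copies of the one-element type.
data Two = First | Second deriving (Eq, Show)

toBool :: Two -> Bool
toBool First  = False
toBool Second = True

fromBool :: Bool -> Two
fromBool False = First
fromBool True  = Second
```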

Furthermore, we can note that $1\times T \cong T$, with the isomorphism given by the maps

f (Empty, T x) = T x
g (T x) = (Empty, T x)

Thus we have the capacity to add and multiply types with each other. We can verify, for any types $A,B,C$, that $A\times(B+C) \cong A\times B + A\times C$.
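This distributivity law can be witnessed by a pair of mutually inverse Haskell functions (our names, for illustration):

```haskell
-- distribute: A x (B + C) -> A x B + A x C
distribute :: (a, Either b c) -> Either (a, b) (a, c)
distribute (x, Left y)  = Left (x, y)
distribute (x, Right z) = Right (x, z)

-- factor is its inverse: A x B + A x C -> A x (B + C)
factor :: Either (a, b) (a, c) -> (a, Either b c)
factor (Left (x, y))  = (x, Left y)
factor (Right (x, z)) = (x, Right z)
```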

We can thus make sense of types like $T^3 + 2T^2$ (either a triple of single values, or one out of two tagged pairs of single values).

This allows us to start working out a calculus of data types with versatile expression power. We can produce recursive data type definitions by using equations to define data types, that then allow a direct translation back into Haskell data type definitions, such as:

$List = 1 + T\times List$

$BinaryTree = T\times 1 + T\times BinaryTree\times BinaryTree$

$TernaryTree = T\times 1 + T\times TernaryTree\times TernaryTree\times TernaryTree$

$GenericTree = T\times 1 + T\times (List\circ GenericTree)$
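For instance, the List and BinaryTree equations translate directly into Haskell data declarations (the constructor names are our choice):

```haskell
-- List = 1 + T x List: either Nil, or a value together with a tail.
data List t = Nil | Cons t (List t) deriving (Eq, Show)

-- BinaryTree = T x 1 + T x BinaryTree x BinaryTree: either a leaf
-- carrying a value, or a value together with two subtrees.
data BinaryTree t = Leaf t | Node t (BinaryTree t) (BinaryTree t)
  deriving (Eq, Show)
```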

The real power of this way of rewriting types comes in the recognition that we can use algebraic methods to reason about our data types. For instance:

List = 1 + T * List
     = 1 + T * (1 + T * List)
     = 1 + T * 1 + T * T * List
     = 1 + T + T * T * List

so a list is either empty, contains one element, or contains at least two elements. Using ideas from the theory of power series, or from continued fractions, we can go further and analyze the data types using intermediate steps that seem completely bizarre, but arrive at important properties. Again, an easy example for illustration:

List = 1 + T * List               -- and thus
List - T * List = 1               -- even though (-) doesn't make sense for data types
(1 - T) * List = 1                -- still ignoring that (-)...
List = 1 / (1 - T)                -- even though (/) doesn't make sense for data types
     = 1 + T + T*T + T*T*T + ...  -- by the geometric series identity

and hence, we can conclude - using formally algebraic steps in between - that a list by the given definition consists of either an empty list, a single value, a pair of values, three values, etc.

At this point, I'd recommend anyone interested in more perspectives on this approach to data types, and things one may do with them, to read the following references:

#### 4.1 Blog posts and Wikipages

The ideas in this last section originate in a sequence of research papers from Conor McBride - however, these are research papers in logic, and thus come with all the quirks such research papers usually carry. Instead, the ideas have been described in several places by various blog authors from the Haskell community - which make for a more accessible but much less strict read.

### 5 Homework

Complete points for this homework consist of four out of six exercises. Partial credit is given.

1. What are the products in the category C(P) of a poset P? What are the coproducts?
2. Prove that any two coproducts are isomorphic.
3. Prove that any two exponentials are isomorphic.
4. Prove that currying/uncurrying are isomorphisms in a CCC. Hint: the map $f\mapsto\lambda f$ is a map $Hom(C\times A, B)\to Hom(C,[A\to B])$.
5. Write down the type declaration for at least two of the example data types from the section on the algebra of datatypes, and write a `Functor` implementation for each.
6. * Read up on Zippers and on differentiating data structures. Find the derivative of List, as defined above. Prove that $\partial List = List \times List$. Find the derivatives of BinaryTree, and of GenericTree.