User:Michiexile/MATH198/Lecture 8

We recall from the last lecture the definition of an Eilenberg-Moore algebra over a monad $T = (T, \eta, \mu)$:

Definition An algebra over a monad $T$ in a category $C$ (a $T$-algebra) is a morphism $\alpha\in C(TA, A)$, such that the diagrams below both commute - in equations, $\alpha\circ T\alpha = \alpha\circ\mu_A$ and $\alpha\circ\eta_A = 1_A$:

While a monad corresponds to the imposition of some structure on the objects in a category, an algebra over that monad corresponds to some evaluation of that structure.

Example: monoids

Let $T$ be the Kleene star monad - the one we get from the adjunction of free and forgetful functors between Monoids and Sets. Then a $T$-algebra on a set $A$ is equivalent to a monoid structure on $A$.

Indeed, if we have a monoid structure on $A$, given by $m:A^2\to A$ and $u:1\to A$, we can construct a $T$-algebra by

$\alpha([]) = u(*)$
$\alpha([a_1,a_2,\dots,a_n]) = m(a_1,\alpha([a_2,\dots,a_n]))$

This gives us, indeed, a $T$-algebra structure on $A$. Associativity and unitality follow from the corresponding properties of the monoid.

On the other hand, if we have a $T$-algebra structure on $A$, we can construct a monoid structure by setting

$u(*) = \alpha([])$
$m(a,b) = \alpha([a,b])$

It is clear that associativity of $m$ follows from the associativity of $\alpha$, and unitality of $u$ follows from the unitality of $\alpha$.
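This back-and-forth can be sketched concretely in Haskell for the list (Kleene star) monad; the function names below are invented for the illustration:

```haskell
-- From a monoid (u, m) on a, build a T-algebra [a] -> a by folding:
alphaFromMonoid :: a -> (a -> a -> a) -> ([a] -> a)
alphaFromMonoid u m = foldr m u

-- From a T-algebra alpha, recover the monoid structure:
unitFromAlgebra :: ([a] -> a) -> a
unitFromAlgebra alpha = alpha []

multFromAlgebra :: ([a] -> a) -> a -> a -> a
multFromAlgebra alpha x y = alpha [x, y]
```

For instance, `alphaFromMonoid 0 (+)` is the algebra that sums a list, and round-tripping it through the second construction recovers `0` and `(+)`.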

Example: Vector spaces

We have free and forgetful functors

$\mathrm{Set} \xrightarrow{\text{free}} k\text{-}\mathrm{Vect} \xrightarrow{\text{forgetful}} \mathrm{Set}$

forming an adjoint pair; where the free functor takes a set $S$ and returns the vector space with basis $S$; while the forgetful functor takes a vector space and returns the set of all its elements.

The composition of these yields a monad $T$ in $Set$ taking a set $S$ to the set of all formal linear combinations of elements in $S$. The monad multiplication takes formal linear combinations of formal linear combinations and multiplies them out:

$3(2v+5w)-5(3v+2w) = 6v+15w-15v-10w = -9v+5w$

A $T$-algebra is a map $\alpha: TA\to A$ that acts like a vector space in the sense that $\alpha(\sum_i\lambda_i\,\alpha(\sum_j\mu_{ij}v_j)) = \alpha(\sum_{i,j}\lambda_i\mu_{ij}v_j)$.

We can define $\lambda\cdot v = \alpha(\lambda v)$ and $v+w=\alpha(v+w)$, where the expressions inside $\alpha$ are formal linear combinations in $TA$, and the results are actual elements of $A$. The operations thus defined are associative, distributive, commutative, and everything else we could wish for in order to define a vector space - precisely because the formal operations inside $TA$ are, and $\alpha$ satisfies the algebra axioms.

The moral behind these examples is that using monads and monad algebras, we have significant power in defining and studying algebraic structures with categorical and algebraic tools. This paradigm ties in closely with the theory of operads, which has its origins in topology but has found good use within certain branches of universal algebra.

A (non-symmetric) operad is a graded set $O = \bigsqcup_i O_i$ equipped with composition operations $\circ_i: O_n\times O_m\to O_{n+m-1}$ that obey certain unitality and associativity conditions. As it turns out, non-symmetric operads correspond to monads with polynomial underlying functor - the $O_n$ are the summands of the polynomial - and from a non-symmetric operad we can construct a corresponding monad.

The designator non-symmetric appears in this text to avoid dealing with the slightly more general theory of symmetric operads - which allow us to permute the input arguments, thus bringing the symmetrizer of a symmetric monoidal category into the definition.

Algebras over endofunctors

Suppose we start out with an endofunctor that is not the underlying functor of a monad - or an endofunctor for which we don't want to settle on a monadic structure. We can still do a lot of the Eilenberg-Moore machinery on this endofunctor - but we don't get quite the power of algebraic specification that monads offer us. At the core, here, lies the lack of associativity for a generic endofunctor - and algebras over endofunctors, once defined, will be non-associative analogues of their monadic counterparts.

Definition For an endofunctor $P:C\to C$, we define a $P$-algebra to be an arrow $\alpha\in C(PA,A)$.

A homomorphism of $P$-algebras $\alpha\to\beta$ is some arrow $f:A\to B$ such that the diagram below commutes:

This homomorphism definition does not need much work to apply to the monadic case as well.
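For a Haskell `Functor p`, an algebra is any function `p a -> a`, and the homomorphism condition reads `f . alpha == beta . fmap f`. A small sketch with $P(X) = 1 + X$ modeled as `Maybe` - the two algebras and the map below are made up for illustration:

```haskell
-- Two P-algebras on Int, for P(X) = 1 + X modeled as Maybe:
alpha :: Maybe Int -> Int
alpha Nothing  = 0
alpha (Just n) = n + 1

beta :: Maybe Int -> Int
beta Nothing  = 0
beta (Just n) = n + 2

-- f = (*2) is a homomorphism alpha -> beta, since
-- f (alpha x) == beta (fmap f x) for every x.
f :: Int -> Int
f = (* 2)
```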

Example: Groups

A group is a set $G$ with operations $u: 1\to G, i: G\to G, m: G\times G\to G$, such that $u$ is a unit for $m$, $m$ is associative, and $i$ gives inverses for $m$.

Ignoring for a moment the properties, the theory of groups is captured by these three maps, or by a diagram

We can summarize the diagram as

$1+G+G\times G \xrightarrow{[u,i,m]} G$

and thus recognize that groups are some equationally defined subcategory of the category of $T$-algebras for the polynomial functor $T(X) = 1 + X + X\times X$. The subcategory is full, since if we have two algebras $\gamma: T(G)\to G$ and $\eta: T(H)\to H$, that both lie within the subcategory that fulfills all the additional axioms, then certainly any morphism $\gamma\to\eta$ will be compatible with the structure maps, and thus will be a group homomorphism.
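The polynomial functor for the group signature can be written directly in Haskell; the algebra below (the integers under addition, an example chosen here) happens to also satisfy the group axioms:

```haskell
-- T(X) = 1 + X + X*X as a Haskell data type:
data GroupSig x = Unit | Inv x | Mul x x

-- A T-algebra on Int, combining u, i and m into one map [u,i,m]:
addAlg :: GroupSig Int -> Int
addAlg Unit      = 0
addAlg (Inv x)   = negate x
addAlg (Mul x y) = x + y
```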

We shall denote the category of $P$-algebras in a category $C$ by $P-Alg(C)$, or just $P-Alg$ if the category is implicitly understood.

This category is wider than the corresponding concept for a monad. We don't require the kind of associativity we would for a monad - we just lock down the underlying structure. This distinction is best understood with an example:

The free monoid monad has monoids for its algebras. On the other hand, we can pick out the underlying functor of that monad, forgetting about the unit and multiplication. An algebra over this bare functor is a slightly more general object: we no longer require $(a\cdot b)\cdot c = a\cdot (b\cdot c)$, and thus the theory we get is that of a magma. We have concatenation, but we can't drop the brackets, and so we get something more reminiscent of a binary tree.
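A quick Haskell illustration of the distinction (the particular algebra is invented for the example): any function `[a] -> a` whatsoever is an algebra over the list functor, with no associativity imposed on the induced binary operation.

```haskell
-- An algebra over the list functor that is not a monad algebra:
alphaSub :: [Int] -> Int
alphaSub = foldr (-) 0

-- The induced binary operation m x y = x - y is not associative:
m :: Int -> Int -> Int
m x y = alphaSub [x, y]
```

Here `m (m 1 2) 3` and `m 1 (m 2 3)` differ, so this algebra cannot come from a monoid.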

Initial $P$-algebras and recursion

Consider the polynomial functor $P(X) = 1 + X$ on the category of sets. Its algebras form a category, by the definitions above - and an algebra on a given set needs to pick out one special element, and one endomorphism of the set.

What would an initial object in this category of $P$-algebras look like? It would be an object $I$ equipped with maps $o: 1\to I$ and $n: I\to I$. For any other pair of maps $a: 1\to X$, $s: X\to X$, we'd have a unique arrow $u: I\to X$ such that

commutes, or in equations such that

$u(o) = a$
$u(n(x)) = s(u(x))$

Now, unwrapping the definitions in place, we notice that we will have elements $o, n(o), n(n(o)), \dots$ in $I$, and initiality will prevent us from having any elements not in this minimally forced list.

We can rename the elements to form something more recognizable - by equating an element in $I$ with the number of applications of $n$ to $o$. This yields, for us, elements $0, 1, 2, \dots$ with one function that picks out the $0$, and another that gives us the successor.

This should be recognizable as exactly the natural numbers, with just enough structure on them to make the principle of mathematical induction work: suppose we can prove some statement $P(0)$, and we can extend a proof of $P(n)$ to $P(n+1)$. Then induction tells us that $P(n)$ holds for all $n$.

More importantly, recursive definitions of functions from the natural numbers can be performed here by choosing an appropriate algebra to map to.

This correspondence between the initial algebra of $P(X) = 1 + X$ and the natural numbers is the reason such an initial object, in a category with coproducts and a terminal object, is called a natural numbers object.
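In Haskell, the initial algebra for $P(X) = 1+X$ and its unique maps can be sketched as follows (the names are chosen for the example):

```haskell
-- The initial algebra: o and n become the two constructors.
data Nat = Zero | Succ Nat

-- The unique map to any other algebra (a, s) on x - i.e. recursion:
foldNat :: x -> (x -> x) -> Nat -> x
foldNat a _ Zero     = a
foldNat a s (Succ k) = s (foldNat a s k)

-- Example: interpreting Nat into the algebra (0, (+1)) on Int.
toInt :: Nat -> Int
toInt = foldNat 0 (+ 1)
```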

For another example, we consider the functor $P(X) = 1 + X\times X$.

Pop Quiz Can you think of a structure with this as its underlying functor?

An initial $1+X\times X$-algebra would be some diagram

$1 \xrightarrow{o} I \xleftarrow{m} I\times I$

such that for any other such diagram

$1 \xrightarrow{a} X \xleftarrow{*} X\times X$

we have a unique arrow $u:I\to X$ such that

commutes.

Unwrapping the definition, working over Sets again, we find we are forced to have some element $*$, the image of $o$. Any two elements $S,T$ in the set give rise to some $(S,T)$, which we can view as the binary tree with subtrees $S$ and $T$.

The same way that we could construct induction as an algebra map from a natural numbers object, we can use this object to construct a tree-shaped induction; and similarly, we can develop what amounts to the theory of structural induction using these more general approaches to induction.

Example of structural induction

Using the structure of $1+X\times X$-algebras we shall prove the following statement:

Proposition The number of leaves in a binary tree is one more than the number of internal nodes.

Proof We write down the actual Haskell data type for the binary tree initial algebra.

data Tree = Leaf | Node Tree Tree

nLeaves :: Tree -> Int
nLeaves Leaf = 1
nLeaves (Node s t) = nLeaves s + nLeaves t

nNodes :: Tree -> Int
nNodes Leaf = 0
nNodes (Node s t) = 1 + nNodes s + nNodes t


Now, it is clear, as a base case, that for the no-nodes tree Leaf:

nLeaves Leaf = 1 + nNodes Leaf


For the structural induction, now, we consider some binary tree, where we assume the statement to be known for each of the two subtrees. Hence, we have

tree = Node s t

nLeaves s = 1 + nNodes s
nLeaves t = 1 + nNodes t


and we may compute

nLeaves tree = nLeaves s + nLeaves t
= 1 + nNodes s + 1 + nNodes t
= 2 + nNodes s + nNodes t

nNodes tree  = 1 + nNodes s + nNodes t


Now, since the statement is proven for each of the cases in the structural description of the data, it follows from the principle of structural induction that the proof is finished.

In order to really nail down what we are doing here, we need to define what we mean by predicates in a strict manner. There is a way to do this using fibrations, but this reaches far outside the scope of this course. For the really interested reader, I'll refer to [2].

Another way to do this is to introduce a topos, and work it all out in terms of its internal logic, but again, this reaches outside the scope of this course.

Lambek's lemma

What we do when we write a recursive data type definition in Haskell is, to some extent, to define the data type as the initial algebra of the corresponding functor. This intuitive equivalence is vindicated by the following

Lemma (Lambek) If $P: C\to C$ has an initial algebra $I$, then $P(I) \cong I$.

Proof Let $a: PA\to A$ be an initial $P$-algebra. We can apply $P$ again, and get a chain

$PPA \xrightarrow{Pa} PA \xrightarrow{a} A$

We can fill out the diagram

to form the diagram

where $f$ is induced by initiality, since $Pa \colon PPA \to PA$ is also a $P$-algebra.

The diagram above commutes, and thus $a\circ f = 1_A$ and $f\circ a = 1_{PA}$. Thus $f$ is an inverse to $a$. QED.

Thus, by Lambek's lemma we know that if $P_A(X) = 1 + A\times X$, then the initial algebra for $P_A$ - should it exist - will fulfill $I \cong 1 + A\times I$, which in turn is exactly what we write when defining this in Haskell code:

data List a = Nil | Cons a (List a)
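Lambek's lemma can be seen concretely for this functor: using `Either () (a, [a])` as a Haskell rendering of $1 + A\times I$, the structure map of the initial algebra and its inverse can be sketched as (the names are invented here):

```haskell
-- The structure map of the initial algebra, 1 + A x [A] -> [A]:
inList :: Either () (a, [a]) -> [a]
inList (Left ())       = []
inList (Right (x, xs)) = x : xs

-- Its inverse, witnessing the isomorphism I ~ 1 + A x I:
outList :: [a] -> Either () (a, [a])
outList []       = Left ()
outList (x : xs) = Right (x, xs)
```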


Recursive definitions with the unique maps from the initial algebra

Consider the following $P_A(X)$-algebra structure $l: P_A(\mathbb N)\to\mathbb N$ on the natural numbers:

l(*) = 0
l(a,n) = 1 + n


We get a unique map $f$ from the initial algebra for $P_A(X)$ (lists of elements of type $A$) to $\mathbb N$ from this definition. This map will fulfill:

f(Nil) = l(*) = 0
f(Cons a xs) = l(a,f(xs)) = 1 + f(xs)


which starts taking on the shape of the usual definition of the length of a list:

length(Nil) = 0
length(Cons a xs) = 1 + length(xs)


And thus, the machinery of endofunctor algebras gives us a strong method for doing recursive definitions in a theoretically sound manner.
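The unique map from the initial algebra can itself be written once and for all in Haskell; the length function then becomes nothing but a choice of algebra. The names `cataList` and `lengthAlg` are our own for this sketch, with `Either () (a, x)` rendering $P_A(X) = 1 + A\times X$:

```haskell
-- Any P_A-algebra phi :: Either () (a, x) -> x induces a unique map
-- from lists, by structural recursion:
cataList :: (Either () (a, x) -> x) -> [a] -> x
cataList phi []       = phi (Left ())
cataList phi (x : xs) = phi (Right (x, cataList phi xs))

-- The algebra l from above, and the induced map - the length:
lengthAlg :: Either () (a, Int) -> Int
lengthAlg (Left ())      = 0
lengthAlg (Right (_, n)) = 1 + n

len :: [a] -> Int
len = cataList lengthAlg
```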

Homework

Complete credit will be given for 8 of the 13 questions.

1. Find a monad whose algebras are associative algebras: vector spaces with a binary, associative, unital operation (multiplication) defined on them. Factorize the monad into a free/forgetful adjoint pair.
2. Find an endofunctor of $Hask$ whose initial algebra describes trees that are either binary or ternary at each point, carrying values from some $A$ in the leaves.
3. Write an implementation of the monad of vector spaces in Haskell. If this is tricky, restrict the domain of the monad to, say, a 3-element set, and implement the specific example of a 3-dimensional vector space as a monad. Hint: [3] has written about this approach.
4. Find a $X\mapsto 1+A\times X$-algebra $L$ such that the unique map from the initial algebra $I$ to $L$ results in the function that will reverse a given list.
5. Find a $X\mapsto 1+A\times X$-algebra structure on the object $1+A$ that will pick out the first element of a list, if possible.
6. Find a $X\mapsto \mathbb N+X\times X$-algebra structure on the object $\mathbb N$ that will pick out the sum of the leaf values for the binary tree in the initial object.
7. Complete the proof of Lambek's lemma by proving the diagram commutes.
8. * We define a coalgebra for an endofunctor $T$ to be some arrow $\gamma: A \to TA$. If $T$ is a comonad - i.e. equipped with a counit $\epsilon: T\to 1$ and a cocomposition $\Delta: T\to T\circ T$ - then we define a coalgebra for the comonad $T$ to additionally fulfill $T\gamma\circ\gamma = \Delta_A\circ\gamma$ (coassociativity) and $\epsilon_A\circ\gamma = 1_A$ (counitality).
1. (2pt) Prove that if an endofunctor $T$ has an initial algebra, then that algebra is also a coalgebra. Does $T$ necessarily have a final coalgebra?
2. (2pt) Prove that if $U,F$ are an adjoint pair, then $FU$ forms a comonad.
3. (2pt) Describe a final coalgebra over the comonad formed from the free/forgetful adjunction between the categories of Monoids and Sets.
4. (2pt) Describe a final coalgebra over the endofunctor $P(X) = 1 + X$.
5. (2pt) Describe a final coalgebra over the endofunctor $P(X) = 1 + A\times X$.
6. (2pt) Prove that if $c: C\to PC$ is a final coalgebra for an endofunctor $P:C\to C$, then $c$ is an isomorphism.