# User:Michiexile/MATH198/Lecture 1

This page now includes additional information based on the notes taken in class. Hopefully this will make the notes reasonably complete for everybody.

## 1 Administrative information

I'm Mikael Vejdemo-Johansson. I can be reached in my office 383-BB, especially during my office hours; or by email to mik@math.stanford.edu.

I strongly encourage student interactions.

I will be out of town September 24 - 29. I will monitor the forum and email closely, and recommend electronic ways of getting in touch with me during that week. I will be back again in time for the office hours on the 30th.

## 2 Introduction

### 2.1 Why this course?

An introduction to Haskell will usually come with pointers toward Category Theory as a useful tool, though not with much more than the mention of the subject. This course is intended to fill that gap, and provide an introduction to Category Theory that ties into Haskell and functional programming as a source of examples and applications.

### 2.2 What will we cover?

The definition of categories, special objects and morphisms, functors, natural transformation, (co-)limits and special cases of these, adjunctions, freeness and presentations as categorical constructs, monads and Kleisli arrows, recursion with categorical constructs.

Maybe, just maybe, if we have enough time, we'll finish with looking at the definition of a topos, and how this encodes logic internal to a category. Applications to fuzzy sets.

### 2.3 What do we require?

Our examples will be drawn from discrete mathematics, logic, Haskell programming and linear algebra. I expect the following concepts to be at least vaguely familiar to anyone taking this course:

• Sets
• Functions
• Permutations
• Groups
• Partially ordered sets
• Vector spaces
• Linear maps
• Matrices
• Homomorphisms

### 2.4 Good references

On reserve in the mathematics/CS library are:

• Mac Lane: Categories for the working mathematician
This book is technical, written for a mathematical audience, and puts in more work than is strictly necessary in many of the definitions. When Awodey and Mac Lane deviate, we will give Awodey priority.
• Awodey: Category Theory
This book is also available as an ebook, accessible from the Stanford campus network. The coursework webpage has links to the ebook under Materials.

### 2.5 Monoids

In order to settle notation and ensure everybody's seen a definition before:

Definition A monoid is a set M equipped with an associative binary operation * (in Haskell: `mappend`) and an identity element $\emptyset$ (in Haskell: `mempty`).

A semigroup is a monoid without the requirement for an identity element.

A function $f:M\to N$ is a monoid homomorphism if the following conditions hold:

• $f(\emptyset) = \emptyset$
• $f(m * m') = f(m) * f(m')$

Examples

• Any group is a monoid. Thus, specifically, the integers with addition form a monoid.
• The natural numbers, with addition.
• Strings $L^*$ over an alphabet L form a monoid, with string concatenation as the operation and the empty string as the identity.
• Non-empty strings form a semigroup.

Awodey: p. 10.
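These definitions can be tried out directly in Haskell. A small sketch (the name `homomorphic` is ours, for illustration) checking that `length` is a monoid homomorphism from strings under concatenation to integers under addition:

```haskell
-- Strings form a monoid: mempty = "", (<>) = (++). The length function
-- is then a monoid homomorphism into the integers under addition:
-- length "" == 0 and length (s <> t) == length s + length t.
homomorphic :: String -> String -> Bool
homomorphic s t =
     length (mempty :: String) == 0
  && length (s <> t) == length s + length t
```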

### 2.6 Partially ordered set

Definition A partially ordered set, or a partial order, or a poset is a set P equipped with a binary relation $\leq$ which is:

• Reflexive: $x\leq x$ for all $x\in P$
• Anti-symmetric: $x\leq y$ and $y\leq x$ implies x = y for all $x,y\in P$.
• Transitive: $x\leq y$ and $y\leq z$ implies $x\leq z$ for all $x,y,z\in P$.

If $x\leq y$ or $y\leq x$, we call x,y comparable. Otherwise we call them incomparable. A poset where all elements are mutually comparable is called a totally ordered set or a total order.

If we drop the requirement for anti-symmetry, we get a pre-ordered set or a pre-order.

If we have several posets, we may indicate which poset a comparison takes place in by writing that poset as a subscript to the relation symbol.

A monotonic map of posets is a function $f:P\to Q$ such that $x\leq_P y$ implies $f(x)\leq_Q f(y)$.

Examples

• The reals, the natural numbers and the integers are all posets with the usual comparison relation. In each of these, all elements are comparable, so they are total orders.
• The natural numbers, excluding 0, form a poset with $a\leq b$ if $a\mid b$.
• Any family of subsets of a given set forms a poset with the order given by inclusion.

Awodey: p. 6. Preorders are defined on pp. 8-9.
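The divisibility order lends itself to a quick Haskell check (the names `divides` and `comparable` are ours, for illustration):

```haskell
-- Divisibility as a partial order on the positive integers: reflexive,
-- anti-symmetric and transitive, but not total - 2 and 3 are incomparable.
divides :: Integer -> Integer -> Bool
divides a b = b `mod` a == 0

comparable :: Integer -> Integer -> Bool
comparable a b = a `divides` b || b `divides` a
```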

## 3 Category

Awodey has a slightly different exposition. Relevant pages in Awodey for this lecture are: sections 1.1-1.4 (except Def. 1.2), 1.6-1.8.

### 3.1 Graphs

We recall the definition of a (directed) graph. A graph G is a collection of edges (arrows) and vertices (nodes). Each edge is assigned a source node and a target node.

$source \to target$

Given a graph G, we denote the collection of nodes by $G_0$ and the collection of arrows by $G_1$. These two collections are connected, and the graph given its structure, by two functions: the source function $s:G_1\to G_0$ and the target function $t:G_1\to G_0$.

We shall not, in general, require either of the collections to be a set, but will happily accept larger collections; dealing with set-theoretical paradoxes as and when we have to. A graph where both nodes and arrows are sets shall be called small. A graph where either is a class shall be called large. If both $G_0$ and $G_1$ are finite, the graph is called finite too.

The empty graph has $G_0 = G_1 = \emptyset$.

A discrete graph has $G_1=\emptyset$.

A complete graph has $G_1 = \{ (v,w) | v,w\in G_0\}$.

A simple graph has at most one arrow between each pair of nodes. Any relation on a set can be interpreted as a simple graph.

A homomorphism $f:G\to H$ of graphs is a pair of functions $f_0:G_0\to H_0$ and $f_1:G_1\to H_1$ such that sources map to sources and targets map to targets, or in other words:

• $s(f_1(e)) = f_0(s(e))$
• $t(f_1(e)) = f_0(t(e))$

By a path in a graph G from the node x to the node y of length k, we mean a sequence of edges $(f_1,f_2,\dots,f_k)$ such that:

• $s(f_1) = x$
• $t(f_k) = y$
• $s(f_i) = t(f_{i-1})$ for all other i.

Paths with start and end point identical are called closed. For any node x, there is a unique closed path () starting and ending in x of length 0.

For any edge f, there is a unique path from s(f) to t(f) of length 1: (f).

We denote by $G_k$ the set of paths in G of length k.
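As a sketch, the definition of a graph translates into Haskell along these lines (the representation and names are our own choice, not a standard library type):

```haskell
-- A graph: collections G0 of nodes and G1 of edges, together with
-- source and target functions s, t : G1 -> G0.
data Graph n e = Graph
  { nodes :: [n]
  , edges :: [e]
  , src   :: e -> n
  , tgt   :: e -> n
  }

-- A list of edges is a path precisely when consecutive edges line up:
-- the target of each edge equals the source of the next.
isPath :: Eq n => Graph n e -> [e] -> Bool
isPath g es = and (zipWith (\f f' -> tgt g f == src g f') es (drop 1 es))
```

Taking edges to be pairs with `src = fst` and `tgt = snd`, for instance, models a relation on a set as a simple graph.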

### 3.2 Categories

We now are ready to define a category. A category is a graph G equipped with an associative composition operation $\circ:G_2\to G_1$, and an identity element for composition $1_x$ for each node x of the graph.

Note that $G_2$ can be viewed as a subset of $G_1\times G_1$, the set of all pairs of arrows. It is intentional that we define the composition operator on only a subset of the set of all pairs of arrows - the composable pairs. Whenever you'd want to compose two arrows that don't line up to a path, you'll get nonsense, and so any statement about the composition operator has an implicit "whenever defined" attached to it.

The definition is not quite done yet - the composition operator and the identity arrows both have a few rules to fulfill, and before I state these rules, there is some notation we need to cover.

#### 3.2.1 Backwards!

If we have a path given by the arrows (f,g) in G2, we expect $f:A\to B$ and $g:B\to C$ to compose to something that goes $A\to C$. The origin of all these ideas lies in geometry and algebra, and so the abstract arrows in a category are supposed to behave like functions under function composition, even though we don't say it explicitly.

Now, we are used to writing function application as f(x) - and possibly, from Haskell, as `f x`. This way, the composition of two functions would read g(f(x)).

On the other hand, the way we write our paths, we'd read f then g. This juxtaposition makes one of the two ways we write things seem backwards. We can resolve it either by making our paths in the category go backwards, or by reversing how we write function application.

In the latter case, we'd write x.f, say, for the application of f to x, and then write x.f.g for the composition. It all ends up looking a lot like Reverse Polish Notation, and has its strengths, but feels unnatural to most. It does, however, have the benefit that we can write out function composition as $(f,g) \mapsto f.g$ and have everything still make sense in all notations.

In the former case, which is the most common in the field, we accept that paths as we read along the arrows and compositions look backwards, and so, if $f:A\to B$ and $g:B\to C$, we write $g\circ f:A\to C$, remembering that elements are introduced from the right, and the functions have to consume the elements in the right order.
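Haskell's `(.)` follows exactly this convention: `(g . f) x = g (f x)`, so the composite of the path f-then-g is written `g . f`. A small illustration (the concrete functions are chosen arbitrarily):

```haskell
f :: Int -> Int
f = (+ 1)        -- f : A -> B, here "add one"

g :: Int -> Int
g = (* 2)        -- g : B -> C, here "double"

-- g . f first applies f, then g: elements enter from the right,
-- so composite 3 = g (f 3) = g 4 = 8.
composite :: Int -> Int
composite = g . f
```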

The existence of the identity maps can be captured in the language of functions as well: it is the existence of a function $u:G_0\to G_1$ picking out the identity arrow of each node.

Now for the remaining rules for composition. Whenever defined, we expect associativity - so that $h\circ(g\circ f)=(h\circ g)\circ f$. Furthermore, we expect:

1. Composition respects sources and targets, so that:
• $s(g\circ f) = s(f)$
• $t(g\circ f) = t(g)$
2. $s(u(x)) = t(u(x)) = x$
3. The identity arrows are units for composition: $f\circ u(s(f)) = f$ and $u(t(f))\circ f = f$.

In a category, arrows are also called morphisms, and nodes are also called objects. This ties in with the algebraic roots of the field.

We denote by $Hom_C(A,B)$, or if C is obvious from context, just Hom(A,B), the set of all arrows from A to B. This is the hom-set or set of morphisms, and may also be denoted C(A,B).

If a category is large or small or finite as a graph, it is called a large/small/finite category.

A category whose objects form a collection of sets and whose morphisms are a selection of the functions between those sets - such that the identity morphism of each object is included, and composition in the category is just composition of functions - is called concrete. Concrete categories form a very rich source of examples, though far from all categories are concrete.
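This structure is mirrored in Haskell by the `Category` class from `Control.Category` in base, with `id` as the identity arrow and `(.)` as composition; ordinary functions `(->)` form an instance. The laws can be spot-checked for that instance:

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- Category laws for the (->) instance, stated as checkable equations.
leftId, rightId, assoc :: Int -> Bool
leftId  x = (id . succ) x == succ x
rightId x = (succ . id) x == succ x
assoc   x = ((negate . succ) . abs) x == (negate . (succ . abs)) x
```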

Again, the Wikipedia page on Category (mathematics) is a good starting point for many things we will be looking at throughout this course.

### 3.3 New Categories from old

As with most other algebraic objects, one essential part of our toolbox is to take known objects and form new examples from them. This allows us to generate a wealth of examples from the ones that shape our intuition.

Typical things to do here would be to talk about subobjects, products and coproducts, sometimes obvious variations on the structure, and what a typical object looks like. Remember from linear algebra how subspaces, cartesian products (which for finite-dimensional vector spaces cover both products and coproducts) and dual spaces show up early, as well as the theorems giving dimension as a complete descriptor of a vector space.

We'll go through the same sequence here; with some significant small variations.

A category D is a subcategory of the category C if:

• $D_0\subseteq C_0$
• $D_1\subseteq C_1$
• $D_1$ contains $1_X$ for all $X\in D_0$
• sources and targets of all the arrows in $D_1$ are in $D_0$
• the composition in D is the restriction of the composition in C.

Written this way, it does look somewhat obnoxious. It becomes easier, though, with the realization - studied more closely in homework exercise 2 - that the really important part of a category is the collection of arrows. Thus, a subcategory is a subcollection of the collection of arrows - with identities for all objects present, and with at least all objects that the existing arrows imply.

A subcategory $D\subseteq C$ is full if D(A,B) = C(A,B) for all objects A,B of D. In other words, a full subcategory is completely determined by the selection of objects in the subcategory.

A subcategory $D\subseteq C$ is wide if the collection of objects is the same in both categories. Hence, a wide subcategory picks out a subcollection of the morphisms.

The dual of a category is to a large extent inspired by vector space duals. In the dual $C^*$ of a category C, we have the same objects, and the morphisms are given by the equality $C^*(A,B) = C(B,A)$ - every morphism from C is present, but it goes in the opposite direction. Dualizing has a tendency to add the prefix co- when it happens, so for instance coproducts are the dual notion to products. We'll return to this construction many times in the course.

Given two categories C,D, we can combine them in several ways:

1. We can form the category that has as objects the disjoint union of all the objects of C and D, and that sets $Hom(A,B)=\emptyset$ whenever A,B come from different original categories. If A,B come from the same original category, we simply take over the homset from that category. This yields a categorical coproduct, and we denote the result by C + D. Composition is inherited from the original categories.
2. We can also form the category with objects $\langle A,B\rangle$ for every pair of objects $A\in C, B\in D$. A morphism in $Hom(\langle A,B\rangle,\langle A',B'\rangle)$ is simply a pair $\langle f:A\to A',g:B\to B'\rangle$. Composition is defined componentwise. This category is the categorical correspondent to the cartesian product, and we denote it by $C\times D$.

In these three constructions - the dual, the product and the coproduct - the arrows in the categories are formal constructions, not functions; even if the original category was given by functions, the result is no longer given by a function.

Given a category C and an object A of that category, we can form the slice category C / A. Objects in the slice category are arrows $B\to A$ for some object B in C, and an arrow $\phi:f\to g$ is an arrow $s(f)\to s(g)$ such that $f=g\circ\phi$. Composites of arrows are just the composites in the base category.

Notice that the same arrow φ in the base category C represents potentially many different arrows in C / A: it represents one arrow for each choice of source and target compatible with it.

There is a dual notion: the coslice category $A\backslash C$, where the objects are paired with maps $A\to B$.

Slice categories can be used, among other things, to specify the idea of parametrization. The slice category C / A gives a sense to the idea of objects from C labeled by elements of A.

We get this characterization by interpreting the arrow representing an object as representing its source and a type function. Hence, in a way, the `Typeable` type class in Haskell builds a slice category on an appropriate subcategory of the category of datatypes.

Alternatively, we can phrase the importance of the arrow in a slice category of, say, Set, by looking at preimages of the slice functions. That way, an object $f:B\to A$ gives us a family of (disjoint) subsets of B indexed by the elements of A.

Finally, any graph yields a category by filling in all the arrows that are missing: the arrows of the resulting category are the paths in the graph, with composition given by concatenation of paths. The result is called the free category generated by the graph, and is a concept we will return to in some depth. Free objects have a strict categorical definition, and they serve to give a model of thought for the things they are free objects for. Thus, categories are essentially graphs, possibly with restrictions or relations imposed; and monoids are essentially strings in some alphabet, with restrictions or relations.
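In Haskell, the free monoid on a type is the list type: any map into a monoid extends uniquely to a monoid homomorphism from lists, which is exactly what `foldMap` computes. (The names `extend` and `total` below are ours, for illustration.)

```haskell
import Data.Monoid (Sum (..))

-- [a] is the free monoid on a: mempty = [], (<>) = (++), and a function
-- a -> m into any monoid m extends uniquely to a homomorphism [a] -> m.
extend :: Monoid m => (a -> m) -> [a] -> m
extend = foldMap

-- Example: extending Sum :: Int -> Sum Int gives summation.
total :: [Int] -> Int
total = getSum . extend Sum
```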

### 3.4 Examples

• The empty category.
• No objects, no morphisms.
• The one object/one arrow category 1.
• A single object and its identity arrow.
• The categories 2 and 1 + 1.
• Two objects A,B with identity arrows; in 2 there is additionally a unique arrow $A\to B$, while in 1 + 1 there are no arrows between the two objects.
• The category Set of sets.
• Sets for objects, functions for arrows.
• The category FSet of finite sets.
• Finite sets for objects, functions for arrows.
• The category PFn of sets and partial functions.
• Sets for objects. Arrows are pairs $(S'\subseteq S,f:S'\to T)\in PFn(S,T)$.
• PFn(A,B) is a partially ordered set. $(S_f,f)\leq(S_g,g)$ precisely if $S_f\subseteq S_g$ and $f=g|_{S_f}$.
• The exposition at Wikipedia uses this construction.
• There is an alternative way to define a category of partial functions: For objects, we take sets, and for morphisms $S\to T$, we take subsets $F\subseteq S\times T$ such that each element in S occurs in at most one pair in the subset. Composition is by an interpretation of these subsets corresponding to the previous description. We'll call this category PFn'.
• Every partial order is a category. Each hom-set has at most one element.
• Objects are the elements of the poset. Arrows are unique, with $A\to B$ precisely if $A\leq B$.
• Every monoid is a category. Only one object. The elements of the monoid correspond to the endo-arrows of the one object.
• The category of Sets and injective functions.
• Objects are sets. Morphisms are injective functions between the sets.
• The category of Sets and surjective functions.
• Objects are sets. Morphisms are surjective functions between the sets.
• The category of k-vector spaces and linear maps.
• The category with objects the natural numbers and Hom(m,n) the set of $m\times n$-matrices.
• Composition is given by matrix multiplication.
• The category of Data Types with Computable Functions.
• Our ideal programming language has:
• Primitive data types.
• Constants of each primitive type.
• Operations, given as functions between types.
• Constructors, producing elements from data types, and producing derived data types and operations.
• We will assume that the language is equipped with
• A do-nothing operation for each data type. Haskell has `id`.
• A unit type 1, with the property that each type has exactly one function to this type. Haskell has `()`. We will use this to define the constants of type t as functions $1\to t$. Thus, constants end up being 0-ary functions.
• A composition constructor, taking an operator $f:A\to B$ and another operator $g:B\to C$ and producing an operator $g\circ f:A\to C$. Haskell has `(.)`.
• This allows us to model a functional programming language with a category.
• The category with objects logical propositions and arrows proofs.
• The category Rel has objects finite sets and morphisms $A\to B$ being subsets of $A\times B$. Composition is by $(a,c)\in g\circ f$ if there is some $b\in B$ such that $(a,b)\in f, (b,c)\in g$. The identity morphism is the diagonal $\{(a,a) : a\in A\}$.
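A hedged Haskell model of PFn': a partial function from s to t can be represented as a total function `s -> Maybe t`, with `Nothing` marking inputs where the function is undefined, and composition is then Kleisli composition for `Maybe`. (The type synonym and names are ours.)

```haskell
import Control.Monad ((<=<))

type Partial s t = s -> Maybe t

-- Composition of partial functions: defined only where both stages are.
composeP :: Partial b c -> Partial a b -> Partial a c
composeP g f = g <=< f

-- Example: the reciprocal is undefined at 0.
recipP :: Partial Double Double
recipP 0 = Nothing
recipP x = Just (1 / x)
```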
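Composition in Rel can likewise be sketched in Haskell with relations as lists of pairs (a finite-set approximation; the names are ours):

```haskell
import Data.List (nub)

-- (a,c) is in g . f precisely if some b links them:
-- (a,b) in f and (b,c) in g.
composeRel :: (Eq a, Eq b, Eq c) => [(b, c)] -> [(a, b)] -> [(a, c)]
composeRel g f = nub [ (a, c) | (a, b) <- f, (b', c) <- g, b == b' ]

-- The identity morphism on A is the diagonal relation.
idRel :: Eq a => [a] -> [(a, a)]
idRel as = [ (a, a) | a <- as ]
```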

### 3.5 Homework

For a passing mark, a written, acceptable solution to at least 3 of the 6 questions should be given no later than midnight before the next lecture.

For each lecture, there will be a few exercises marked with the symbol *. These will be more difficult than the other exercises given, will require significant time and independent study, and will aim to complement the course with material not covered in lectures, but nevertheless interesting for the general philosophy of the lecture course.

1. Prove the general associative law: that for any path, and any bracketing of that path, the same composition results.
2. Which of the following form categories? Give a proof or disproof for each:
• Objects are finite sets, morphisms are functions such that $|f^{-1}(b)|\leq 2$ for all morphisms f, objects B and elements b.
• Objects are finite sets, morphisms are functions such that $|f^{-1}(b)|\geq 2$ for all morphisms f, objects B and elements b.
• Objects are finite sets, morphisms are functions such that $|f^{-1}(b)|<\infty$ for all morphisms f, objects B and elements b.
Recall that $f^{-1}(b)=\{a\in A: f(a)=b\}$.
3. Suppose $u:A\to A$ in some category C. Prove:
1. If $g\circ u=g$ for all $g:A\to B$ in the category, then $u = 1_A$.
2. If $u\circ h=h$ for all $h:B\to A$ in the category, then $u = 1_A$.
3. These two results completely characterize the objects in a category by the properties of their corresponding identity arrows. Specifically, there is a way to rephrase the definition of a category such that everything is stated in terms of arrows.
4. For as many of the examples given as you can, prove that they really do form a category. Passing mark is at least 60% of the given examples.
• Which of the categories are subcategories of which other categories? Which of these are wide? Which are full?
5. For this question, all parts are required:
1. For which sets is the free monoid on that set commutative?
2. Prove that for any category C, the set Hom(A,A) is a monoid under composition for every object A.
For details on the construction of a free monoid, see the Wikipedia pages on the Free Monoid and on the Kleene star.
6. * Read up on ω-complete partial orders. Suppose S is some set and $\mathfrak P$ is the set of partial functions $S\to S$ - in other words, an element of $\mathfrak P$ is some pair $(S_0,f:S_0\to S)$ with $S_0\subseteq S$. We give this set a poset structure by $(S_0,f)\leq(S_1,g)$ precisely if $S_0\subseteq S_1$ and $f(s)=g(s)\forall s\in S_0$.
• Show that $\mathfrak P$ is a strict ω-CPO.
• An element x of S is a fixpoint of $f:S\to S$ if f(x) = x. Let $\mathfrak N$ be the ω-CPO of partially defined functions on the natural numbers. We define a function $\phi:\mathfrak N\to\mathfrak N$ by sending some $h:\mathbb N\to\mathbb N$ to a function k defined by
1. k(0) = 1
2. k(n) is defined only if h(n − 1) is defined, and then by k(n) = n * h(n − 1).
Describe $\phi(n\mapsto n^2)$ and $\phi(n\mapsto n^3)$. Show that φ is continuous. Find a fixpoint $(S_0,f)$ of φ such that any other fixpoint of the same function is larger than this one.
Find a continuous endofunction on some ω-CPO that has the fibonacci function F(0) = 0,F(1) = 1,F(n) = F(n − 1) + F(n − 2) as the least fixed point.
Implement a Haskell function that finds fixed points in an ω-CPO. Implement the two fixed points above as Haskell functions - using the ω-CPO fixed point approach in the implementation. It may well be worth looking at `Data.Map` to provide a Haskell context for a partial function for this part of the task.