# User:Michiexile/MATH198/Lecture 1


IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE FIRST LECTURE BEFORE HANDING ANYTHING IN OR TREATING THE NOTES AS READY TO READ.

## 1 Administrative notes

I'm Mikael Vejdemo-Johansson. I can be reached in my office 383-BB, especially during my office hours; or by email to mik@math.stanford.edu.

I strongly encourage student interaction.

I will be out of town September 24 - 29. I will monitor the forum and my email closely, and recommend electronic means of getting in touch with me during that week. I will be back in time for the office hours on the 30th.

## 2 Introduction

### 2.1 Why this course?

An introduction to Haskell will usually come with pointers toward Category Theory as a useful tool, though not with much more than the mention of the subject. This course is intended to fill that gap, and provide an introduction to Category Theory that ties into Haskell and functional programming as a source of examples and applications.

### 2.2 What will we cover?

The definition of categories, special objects and morphisms, functors, natural transformations, (co-)limits and special cases of these, adjunctions, freeness and presentations as categorical constructs, monads and Kleisli arrows, recursion with categorical constructs.

Maybe, just maybe, if we have enough time, we'll finish with looking at the definition of a topos, and how this encodes logic internal to a category. Applications to fuzzy sets.

### 2.3 What do we require?

Our examples will be drawn from discrete mathematics, logic, Haskell programming and linear algebra. I expect the following concepts to be at least vaguely familiar to anyone taking this course:

• Sets
• Functions
• Permutations
• Groups
• Partially ordered sets
• Vector spaces
• Linear maps
• Matrices
• Homomorphisms

## 3 Category

### 3.1 Graphs

We recall the definition of a (directed) graph. A graph G is a collection of edges (arrows) and vertices (nodes). Each edge is assigned a source node and a target node.

$source \to target$

Given a graph G, we denote the collection of nodes by G0 and the collection of arrows by G1. These two collections are connected, and the graph given its structure, by two functions: the source function $s:G_1\to G_0$ and the target function $t:G_1\to G_0$.
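This structure translates directly into Haskell. The following is an illustrative sketch (the type and field names are mine, not part of the course material): a small graph with nodes and edges named by `Int`s, together with the source and target functions.

```haskell
-- A sketch of a small directed graph: nodes and edges are named by Ints,
-- and src and tgt assign to each edge its source and target node.
data Graph = Graph
  { nodes :: [Int]        -- G0
  , edges :: [Int]        -- G1
  , src   :: Int -> Int   -- s : G1 -> G0
  , tgt   :: Int -> Int   -- t : G1 -> G0
  }

-- Example: two nodes 0 and 1, and a single edge (numbered 0) going 0 -> 1.
arrow :: Graph
arrow = Graph [0, 1] [0] (const 0) (const 1)
```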

We shall not, in general, require either of the collections to be a set, but will happily accept larger collections; dealing with set-theoretical paradoxes as and when we have to. A graph where both nodes and arrows are sets shall be called small. A graph where either is a class shall be called large.

If both G0 and G1 are finite, the graph is called finite too.

The empty graph has $G_0 = G_1 = \emptyset$.

A discrete graph has $G_1=\emptyset$.

A complete graph has $G_1 = \{ (v,w) | v,w\in G_0\}$.

A simple graph has at most one arrow between each pair of nodes. Any relation on a set can be interpreted as a simple graph.

• Show some examples.

A homomorphism $f:G\to H$ of graphs is a pair of functions $f_0:G_0\to H_0$ and $f_1:G_1\to H_1$ such that sources map to sources and targets map to targets, or in other words:

• $s(f_1(e)) = f_0(s(e))$
• $t(f_1(e)) = f_0(t(e))$

By a path in a graph G from the node x to the node y of length k, we mean a sequence of edges $(f_1,f_2,\dots,f_k)$ such that:

• $s(f_1) = x$
• $t(f_k) = y$
• $s(f_i) = t(f_{i-1})$ for all $2\leq i\leq k$.

Paths with start and end point identical are called closed. For any node x, there is a unique closed path () starting and ending in x of length 0.

For any edge f, there is a unique path from s(f) to t(f) of length 1: (f).

We denote by $G_k$ the set of paths in G of length k.
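The sets $G_k$ can be computed for a finite graph. Here is a hedged sketch in which an edge is simply a (source, target) pair, so the chaining condition $s(f_i)=t(f_{i-1})$ becomes a comparison of components:

```haskell
-- Enumerate all paths of length k in a small graph given as an edge list.
-- An edge is a (source, target) pair; a path is a list of edges in which
-- the target of each edge is the source of the next.
paths :: Int -> [(Int, Int)] -> [[(Int, Int)]]
paths 0 _  = [[]]
paths k es =
  [ e : p
  | e <- es
  , p <- paths (k - 1) es
  , null p || snd e == fst (head p)  -- t(e) must match s of the next edge
  ]
```

Note that `paths 0 es` always yields the single empty path, matching the unique closed path of length 0 at each node (here the empty path is not tied to a particular node, a simplification of this sketch).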

### 3.2 Categories

We are now ready to define a category. A category is a graph G equipped with an associative composition operation $\circ:G_2\to G_1$, and an identity element for composition $1_x$ for each node x of the graph.

Note that G2 can be viewed as a subset of $G_1\times G_1$, the set of all pairs of arrows. It is intentional that we define the composition operator on only a subset of the set of all pairs of arrows - the composable pairs. Whenever you'd want to compose two arrows that don't line up to a path, you'll get nonsense, and so any statement about the composition operator has an implicit "whenever defined" attached to it.

The definition is not quite done yet - this composition operator and the identity arrows both have a few rules to fulfill, and before I state these rules, there is some notation we need to cover.

#### 3.2.1 Backwards!

If we have a path given by the arrows (f,g) in G2, we expect $f:A\to B$ and $g:B\to C$ to compose to something that goes $A\to C$. The origin of all these ideas lies in geometry and algebra, and so the abstract arrows in a category are supposed to behave like functions under function composition, even though we don't say it explicitly.

Now, we are used to writing function application as f(x) - and possibly, from Haskell, as `f x`. This way, the composition of two functions would read g(f(x)).

On the other hand, the way we write our paths, we'd read f then g. This juxtaposition makes one of the two ways we write things seem backwards. We can resolve it either by making our paths in the category go backwards, or by reversing how we write function application.

In the latter case, we'd write x.f, say, for the application of f to x, and then write x.f.g for the composition. It all ends up looking a lot like Reverse Polish Notation, and has its strengths, but feels unnatural to most. It does, however, have the benefit that we can write out function composition as $(f,g) \mapsto f.g$ and have everything still make sense in all notations.

In the former case, which is the most common in the field, we accept that paths as we read along the arrows and compositions look backwards, and so, if $f:A\to B$ and $g:B\to C$, we write $g\circ f:A\to C$, remembering that elements are introduced from the right, and the functions have to consume the elements in the right order.
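Haskell follows exactly this convention with its composition operator `(.)`: in `g . f`, the argument enters from the right and is consumed by `f` first. A minimal illustration (the function names are mine):

```haskell
-- (g . f) x = g (f x): the argument enters from the right,
-- f consumes it first, then g.
f :: Int -> Int
f = (+ 1)      -- f : A -> B

g :: Int -> Int
g = (* 2)      -- g : B -> C

gf :: Int -> Int
gf = g . f     -- g ∘ f : A -> C
```

So `gf 3` first computes `f 3 = 4`, then `g 4 = 8`, matching the convention that $g\circ f$ means "f then g".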

The existence of the identity arrows can be captured in the language of functions as well: it is the existence of a function $u:G_0\to G_1$ picking out the identity arrow of each node.

Now for the remaining rules for composition. Whenever defined, we expect associativity - so that $h\circ(g\circ f)=(h\circ g)\circ f$. Furthermore, we expect:

1. Composition respects sources and targets, so that:
• $s(g\circ f) = s(f)$
• $t(g\circ f) = t(g)$
2. $s(u(x)) = t(u(x)) = x$

In a category, arrows are also called morphisms, and nodes are also called objects. This ties in with the algebraic roots of the field.

We denote by HomC(A,B), or if C is obvious from context, just Hom(A,B), the set of all arrows from A to B. This is the hom-set or set of morphisms, and may also be denoted C(A,B).

If a category is large or small or finite as a graph, it is called a large/small/finite category.

A category whose objects are sets and whose morphisms are a selection from all possible functions between these sets, such that the identity function of each object is included and composition in the category is composition of functions, is called concrete. Concrete categories form a very rich source of examples, though far from all categories are concrete.

### 3.3 New Categories from old

As with most other algebraic objects, one essential part of our tool box is to take known objects and form new examples from them. This allows us to generate a wealth of examples from the ones that shape our intuition.

Typical things to do here would be to talk about subobjects, products and coproducts, sometimes obvious variations on the structure, and what a typical object looks like. Remember from linear algebra how subspaces, cartesian products (which for finite-dimensional vector spaces cover both products and coproducts) and dual spaces show up early, as well as the theorems giving dimension as a complete descriptor of a vector space.

We'll go through the same sequence here, with some small but significant variations.

A category D is a subcategory of the category C if $D_0\subseteq C_0$, $D_1\subseteq C_1$ and the composition in D is the restriction of the composition in C.

A subcategory $D\subseteq C$ is full if D(A,B) = C(A,B) for all objects A,B of D. In other words, a full subcategory is completely determined by the selection of objects in the subcategory.

A subcategory $D\subseteq C$ is wide if the collection of objects is the same in both categories. Hence, a wide subcategory picks out a subcollection of the morphisms.

The dual of a category is to a large extent inspired by vector space duals. In the dual $C^*$ of a category C, we have the same objects, and the morphisms are given by the equality $C^*(A,B) = C(B,A)$ - every morphism from C is present, but it goes in the wrong direction. Dualizing has a tendency to add the prefix co- when it happens, so for instance coproducts are the dual notion to products. We'll return to this construction many times in the course.

Given two categories C,D, we can combine them in several ways:

1. We can form the category that has as objects the disjoint union of all the objects of C and D, and that sets $Hom(A,B)=\emptyset$ whenever A,B come from different original categories. If A,B come from the same original category, we simply take over the hom-set from that category. This yields a categorical coproduct, and we denote the result by C + D. Composition is inherited from the original categories.
2. We can also form the category with objects $\langle A,B\rangle$ for every pair of objects $A\in C, B\in D$. A morphism in $Hom(\langle A,B\rangle,\langle A',B'\rangle)$ is simply a pair $\langle f:A\to A',g:B\to B'\rangle$. Composition is defined componentwise. This category is the categorical correspondent to the cartesian product, and we denote it by $C\times D$.
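The componentwise action of a pair of arrows can be sketched in Haskell, taking both C and D to be the category of Haskell types and functions (the name `cross` is mine; `Data.Bifunctor`'s `bimap` plays the same role on pairs):

```haskell
-- A morphism <f, g> in C × D acts componentwise on an object <A, B>,
-- here modelled as a Haskell pair.
cross :: (a -> a') -> (b -> b') -> (a, b) -> (a', b')
cross f g (x, y) = (f x, g y)
```

Composition being componentwise means `cross f' g' . cross f g` behaves the same as `cross (f' . f) (g' . g)`.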

Given a category C and an object A of that category, we can form the slice category $C\downarrow A$. Objects in this category are pairs $(B,\pi_B:B\to A)$ of objects and morphisms from C, and a morphism $(B,\pi_B)\to(B',\pi_{B'})$ is a morphism $f:B\to B'$ of C compatible with the projections, i.e. with $\pi_{B'}\circ f=\pi_B$.

There is a dual notion: the coslice category, where the objects are paired with maps $A\to B$.

Slice categories can be used, among other things, to specify the idea of parametrization. The slice category $(C\downarrow A)$ gives a sense to the idea of objects from C labeled by elements of A. (expound on this!!)

Finally, any graph yields a category by just filling in the composite and identity arrows that are missing. The result is called the free category generated by the graph, and is a concept we will return to in some depth. Free objects have a strict categorical definition, and they serve to give a model of thought for the things they are free objects for. Thus, categories are essentially graphs, possibly with restrictions or relations imposed; and monoids are essentially strings in some alphabet, with restrictions or relations.
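The "monoids are essentially strings" point can be made concrete in Haskell: the free monoid on a set of generators is the type of lists over those generators, with concatenation as composition and the empty list as identity. A sketch (the names are mine; this is just Haskell's list monoid):

```haskell
-- The free monoid on a set of generators a: elements are strings (lists)
-- over the generators, composition is concatenation, and the empty
-- string is the identity.
type FreeMonoid a = [a]

composeFM :: FreeMonoid a -> FreeMonoid a -> FreeMonoid a
composeFM = (++)

identityFM :: FreeMonoid a
identityFM = []
```

Associativity and the identity laws hold because `(++)` is associative with `[]` as its unit; no relations beyond these are imposed, which is what "free" means here.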

### 3.4 Examples

• The empty category.
• No objects, no morphisms.
• The one object/one arrow category 1.
• A single object and its identity arrow.
• The categories 2 and 1 + 1.
• 2 has two objects, A and B, with their identity arrows and a unique arrow $A\to B$; 1 + 1 has the same two objects but only the identity arrows.
• The category Set of sets.
• Sets for objects, functions for arrows.
• The category FSet of finite sets.
• Finite sets for objects, functions for arrows.
• The category PFn of sets and partial functions.
• Sets for objects. Arrows are pairs $(S'\subseteq S,f:S'\to T)\in PFn(S,T)$.
• PFn(A,B) is a partially ordered set. $(S_f,f)\leq(S_g,g)$ precisely if $S_f\subseteq S_g$ and $f=g|_{S_f}$.
• Every partial order is a category. Each hom-set has at most one element.
• Objects are the elements of the poset. Arrows are unique, with $A\to B$ precisely if $A\leq B$.
• Every monoid is a category. Only one object.
• Kleene closure. Free monoids.
• The category of Sets and injective functions.
• The category of Sets and surjective functions.
• The category of k-vector spaces and linear maps.
• The category with objects the natural numbers and Hom(m,n) the set of $m\times n$-matrices.
• The category of Data Types with Computable Functions.
• Our ideal programming language has:
• Primitive data types.
• Constants of each primitive type.
• Operations, given as functions between types.
• Constructors, producing elements from data types, and producing derived data types and operations.
• We will assume that the language is equipped with
• A do-nothing operation for each data type. Haskell has `id`.
• A one-element type 1, with the property that each type has exactly one function to this type. Haskell has `()`. We will use this to define the constants of type t as functions $1\to t$. Thus, constants end up being 0-ary functions.
• A composition constructor, taking an operator $f:A\to B$ and another operator $g:B\to C$ and producing an operator $g\circ f:A\to C$. Haskell has `(.)`.
• This allows us to model a functional programming language with a category.
• The category with objects logical propositions and arrows proofs.
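The ingredients of the data-types example above can be written out in Haskell directly; `id` and `(.)` come from the Prelude, and the helper names below are mine:

```haskell
-- The unique arrow from any type to the unit type (), playing
-- the role of the type 1 above.
toUnit :: a -> ()
toUnit _ = ()

-- A constant of type t, represented as an arrow 1 -> t,
-- i.e. a 0-ary function.
constant :: t -> (() -> t)
constant x = \() -> x
```

With `id` as the do-nothing arrow and `(.)` as the composition constructor, these pieces model a small functional language inside a category of types and functions.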

### 3.5 Homework

For a passing mark, a written, acceptable solution to at least 2 of the 5 questions should be given no later than midnight before the next lecture.

For each lecture, there will be a few exercises marked with the symbol *. These will be more difficult than the other exercises given, will require significant time and independent study, and will aim to complement the course with material not covered in lectures, but nevertheless interesting for the general philosophy of the lecture course.

1. Prove the general associative law: that for any path, and any bracketing of that path, the same composition may be found.
2. Suppose $u:A\to A$ in some category C.
1. If $g\circ u=g$ for all $g:A\to B$ in the category, then $u = 1_A$.
2. If $u\circ h=h$ for all $h:B\to A$ in the category, then $u = 1_A$.
3. Conclude that these two properties completely characterize the identity arrows of a category.
3. For as many of the examples given as you can, prove that they really do form a category. Passing mark is at least 60% of the given examples.
• Which of the categories are subcategories of which other categories? Which of these are wide? Which are full?
4. For this question, all parts are required:
1. For which sets is the free monoid on that set commutative?
2. Prove that for any category C, the set Hom(A,A) is a monoid under composition for every object A.
5. * Read up on ω-complete partial orders. Suppose S is some set and $\mathfrak P$ is the set of partial functions $S\to S$ - in other words, an element of $\mathfrak P$ is some pair $(S_0,f:S_0\to S)$ with $S_0\subseteq S$. We give this set a poset structure by $(S_0,f)\leq(S_1,g)$ precisely if $S_0\subseteq S_1$ and $f(s)=g(s)\forall s\in S_0$.
• Show that $\mathfrak P$ is a strict ω-CPO.
• An element x of S is a fixpoint of $f:S\to S$ if f(x) = x. Let $\mathfrak N$ be the ω-CPO of partially defined functions on the natural numbers. We define a function $\phi:\mathfrak N\to\mathfrak N$ by sending some $h:\mathbb N\to\mathbb N$ to a function k defined by
1. k(0) = 1
2. k(n) is defined only if h(n − 1) is defined, and then by k(n) = n * h(n − 1).
Describe $\phi(n\mapsto n^2)$ and $\phi(n\mapsto n^3)$. Show that φ is continuous. Find a fixpoint $(S_0,f)$ of φ such that any other fixpoint of the same function is less than this one.
Find a continuous endofunction on some ω-CPO that has the fibonacci function F(0) = 0,F(1) = 1,F(n) = F(n − 1) + F(n − 2) as the least fixed point.
Implement a Haskell function that finds fixed points in an ω-CPO. Implement the two fixed points above as Haskell functions - using the ω-CPO fixed point approach in the implementation. It may well be worth looking at `Data.Map` to provide a Haskell context for a partial function for this part of the task.