Revision as of 19:42, 9 October 2009
IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE BEFORE HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.
Recall the construction of a cartesian product: for sets S, T, the set S × T = {(s,t) : s ∈ S, t ∈ T}.
The cartesian product is one of the canonical ways to combine sets with each other. This is how we build binary operations, and higher ones - as well as how we formally define functions, partial functions and relations in the first place.
This, too, is how we construct vector spaces: recall that R^n is built out of n-tuples of elements from R, with pointwise operations. This construction recurs all over the place - sets with structure almost always have the structure carry over to products by pointwise operations.
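As a small illustration of the pointwise principle (the function names here are this example's own inventions), structure on the components of a pair carries over to the pair itself:

```haskell
-- Structure carries over to the product pointwise: addition on pairs
-- of numbers is addition in each coordinate.
addPairs :: Num a => (a, a) -> (a, a) -> (a, a)
addPairs (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)

-- Scalar multiplication, likewise pointwise:
scalePair :: Num a => a -> (a, a) -> (a, a)
scalePair c (x, y) = (c * x, c * y)
```

With these, (a, a) behaves like a small vector space whenever a is a field, exactly as R^2 does over R.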
Given the cartesian product in sets, the important thing about the product is that we can extract both parts, and doing so preserves any structure present, since the structure is defined pointwise.
This is what we use to define what we want to mean by products in a categorical setting.
Definition Let C be a category. The product of two objects A,B is an object A × B equipped with maps p1 : A × B → A and p2 : A × B → B such that any other object V with maps q1 : V → A, q2 : V → B has a unique map q : V → A × B such that both maps from V factor through p1, p2: q1 = p1 ∘ q and q2 = p2 ∘ q.
In the category of sets, the unique map q from V to A × B would be given by q(v) = (q1(v), q2(v)).
The uniqueness requirement is what, in the theoretical setting, forces the product to be what we expect it to be - pairing of elements with no additional changes, preserving as much of the structure as we possibly can make it preserve.
In the Haskell category, the product is simply the Pair type:
type Product a b = (a, b)
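To make the mediating map concrete, here is a sketch in Haskell (the name `pairing` is this example's own; the standard library offers the equivalent `(&&&)` in `Control.Arrow`):

```haskell
-- The projections of the product (a, b) are fst and snd.
-- Given maps q1 :: v -> a and q2 :: v -> b, the mediating map into
-- the product sends each element of v to the pair of its images:
pairing :: (v -> a) -> (v -> b) -> v -> (a, b)
pairing q1 q2 v = (q1 v, q2 v)

-- Both maps from v then factor through the projections:
--   q1 == fst . pairing q1 q2   and   q2 == snd . pairing q1 q2
```

For instance, `pairing (+1) (*2) 5` evaluates to `(6, 10)`, and taking `fst` of the result recovers `(+1)` applied to `5`.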
Recall from the first lecture, the product construction on categories: objects are pairs of objects, morphisms are pairs of morphisms, identity morphisms are pairs of identity morphisms, and composition is componentwise.
This is, in fact, the product construction applied to Cat - or even to CAT: we get functors P1,P2 picking out the first and second components, and everything works out exactly as in the cases above.
The other thing you can do in a Haskell data type declaration looks like this:
data Coproduct a b = A a | B b
This type provides us with functions
A :: a -> Coproduct a b
B :: b -> Coproduct a b
and hence looks quite like a dual to the product construction, in that the functions the type guarantees point in the opposite direction from the product's projection arrows.
So, maybe what we want to do is to simply dualize the entire definition?
Definition Let C be a category. The coproduct of two objects A,B is an object A + B equipped with maps i1 : A → A + B and i2 : B → A + B such that any other object V with maps v1 : A → V, v2 : B → V has a unique map v : A + B → V such that v1 = v ∘ i1 and v2 = v ∘ i2.
In the Haskell case, the maps i1,i2 are the type constructors A,B. And indeed, this Coproduct, the union type construction, is the type which guarantees inclusion of source types, but with minimal additional assumptions on the type.
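The mediating map out of the coproduct can be written directly by case analysis (a sketch; `copair` is this example's name, and Haskell's built-in `Either` type with `either :: (a -> c) -> (b -> c) -> Either a b -> c` plays exactly this role):

```haskell
-- The coproduct type from the notes, with constructors as injections:
data Coproduct a b = A a | B b

-- Given maps v1 :: a -> v and v2 :: b -> v, the unique map out of
-- the coproduct is defined by cases on the two constructors:
copair :: (a -> v) -> (b -> v) -> Coproduct a b -> v
copair v1 v2 (A x) = v1 x
copair v1 v2 (B y) = v2 y

-- Then v1 == copair v1 v2 . A and v2 == copair v1 v2 . B,
-- mirroring v1 = v . i1 and v2 = v . i2 in the definition.
```

For instance, `copair negate (+10)` sends `A 3` to `-3` and `B 3` to `13`.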
In the category of sets, the coproduct construction is one where we can embed both sets into the coproduct, faithfully, and the result has no additional structure beyond that. Thus the coproduct in Set is the disjoint union of the included sets: both sets are included without identifications made, and no extra elements are introduced.
- Diagram definition
- Disjoint union in Set
- Coproduct of categories construction
- Union types
3 Algebra of datatypes
Recall from last week that we can consider endofunctors as container datatypes. A few nice endofunctors we may have around include (with some abuse of notation):
data 0 a = Boring
data 1 a = Singleton a
with the obvious Functor implementations. From these, we can start building new container types, such as:
Bool = 0 + 0
Maybe = 1 + 0
Pair = 1 * 1
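Since 0 and 1 are not legal Haskell identifiers, here is a sketch of the same constructions under assumed names (Zero, One, Sum, Prod are all this example's own):

```haskell
-- "0": one boring value, no payload.
data Zero a = Boring deriving (Eq, Show)

-- "1": exactly one payload element.
data One a = Singleton a deriving (Eq, Show)

instance Functor Zero where
  fmap _ Boring = Boring

instance Functor One where
  fmap f (Singleton x) = Singleton (f x)

-- Sum (+) and product (*) of containers, with the Functor
-- structure carrying over componentwise:
data Sum f g a = InL (f a) | InR (g a)
data Prod f g a = Prod (f a) (g a)

instance (Functor f, Functor g) => Functor (Sum f g) where
  fmap h (InL x) = InL (fmap h x)
  fmap h (InR y) = InR (fmap h y)

instance (Functor f, Functor g) => Functor (Prod f g) where
  fmap h (Prod x y) = Prod (fmap h x) (fmap h y)

-- Maybe = 1 + 0 and Pair = 1 * 1, in this encoding:
type Maybe' = Sum One Zero
type Pair'  = Prod One One
```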
and we can use this to produce recursive definitions
List = 0 + 1*List
NonEmptyList = 1 + 1*NonEmptyList
and to argue in a highly algebraic manner about data type definitions
List = 0 + 1*List = 0 + 1*(0 + 1*List) = 0 + 1*0 + 1*1*List = 0 + 1 + 1*1*List
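Written as an ordinary Haskell datatype (constructor names are this example's choices), both the equation List = 0 + 1*List and the expanded three-summand form can be read off directly:

```haskell
-- List = 0 + 1*List: a list is Nil, or an element paired with a list.
data List a = Nil | Cons a (List a) deriving (Eq, Show)

-- The expansion List = 0 + 1 + 1*1*List reads as three cases:
-- empty, a singleton, or two elements followed by a list.
describe :: List a -> String
describe Nil                 = "empty"        -- the 0 summand
describe (Cons _ Nil)        = "one element"  -- the 1 summand
describe (Cons _ (Cons _ _)) = "two or more"  -- the 1*1*List summand
```

The pattern match is exhaustive precisely because the three summands cover the expanded equation.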