HaskellWiki - User contributions [en] (Dolio), retrieved 2022-08-13T18:20:09Z, MediaWiki 1.31.7
Feed source: https://wiki.haskell.org/api.php?action=feedcontributions&user=Dolio&feedformat=atom

https://wiki.haskell.org/index.php?title=Hac_NYC/Attendees&diff=57747
Hac NYC/Attendees, 2014-04-03T19:58:19Z
<p>Dolio: /* Attendees */</p>
<hr />
<div>= Attendees =<br />
<br />
Feel free to list yourself here -- though this is ''not'' registration. To register, see [https://docs.google.com/forms/d/1taZtjgYozFNebLt1TR2VnKv-ovD2Yv5sOdSZzmi_xFo/viewform this registration form]. Note that while this page is sparse, we have upwards of 40 registered attendees at the moment.<br />
<br />
{| class="wikitable"<br />
! Nickname<br />
! Real Name<br />
! Mobile #<br />
|-<br />
|<br />
| Gershom Bazerman<br />
| <br />
|-<br />
| ozataman<br />
| Ozgun Ataman<br />
|-<br />
| mightybyte<br />
| Doug Beardsley<br />
|-<br />
| vamega<br />
| Varun Madiath<br />
|-<br />
| copumpkin<br />
| Dan Peebles<br />
|-<br />
| carter<br />
| Carter Schonwald<br />
|-<br />
| dolio<br />
| Dan Doel<br />
|-<br />
| peterood<br />
| Peter Rood<br />
|-<br />
| dbp<br />
| Daniel Patterson<br />
|-<br />
| danharaj<br />
| Daniel Haraj<br />
|-<br />
| tlevine<br />
| Thomas Levine<br />
| +1 914 574 1328<br />
|-<br />
| matteo.campanelli<br />
| Matteo Campanelli<br />
|-<br />
| achudnov<br />
| Andrey Chudnov<br />
| <br />
|-<br />
| Rickasaurus<br />
| Richard Minerich<br />
| 860-922-3456<br />
|-<br />
| chrisleague<br />
| Chris League<br />
| league [at] contrapunctus [dot] net<br />
|-<br />
| katsupnfries<br />
| Kat Chuang<br />
| <br />
|-<br />
| ekmett<br />
| Edward Kmett<br />
| 857-244-1001<br />
|-<br />
| gelisam<br />
| Samuel Gélineau<br />
|-<br />
| amindfv<br />
| Tom Murphy<br />
|-<br />
| zuserm<br />
| Mike Zuser<br />
|-<br />
| imalsogreg<br />
| Greg Hale<br />
| (908)(797)(8281)<br />
|-<br />
| acowley<br />
| Anthony Cowley<br />
|-<br />
| arbn<br />
| Austin Robinson<br />
| 9198891172<br />
|-<br />
| hchinnan<br />
| Hari Chinnan<br />
| 713 248 6084<br />
|-<br />
| ryantrinkle<br />
| Ryan Trinkle<br />
|-<br />
| martingale<br />
| Jeff Rosenbluth <br />
|-<br />
| artagnon<br />
| Ramkumar Ramachandra <br />
|-<br />
|<br />
| Scott Walck<br />
|<br />
|-<br />
| <br />
| Christopher Young<br />
|<br />
|-<br />
|<br />
| Richard Eisenberg<br />
|<br />
|-<br />
| timmy_tofu<br />
| Tim Adams<br />
|<br />
|-<br />
| <br />
| Kirill Cherkashin<br />
|<br />
|-<br />
| jberryman<br />
| Brandon Simmons<br />
| brandon.m.simmons@gmail.com<br />
|-<br />
| S11001001<br />
| Stephen Compall<br />
|<br />
|}</div>

https://wiki.haskell.org/index.php?title=Modest_GHC_Proposals&diff=56329
Modest GHC Proposals, 2013-06-25T01:53:31Z
<p>Dolio: Less verbose template haskell</p>
<hr />
<div>There are many proposals to augment GHC (and Haskell) that would be valuable yet languish because they have not been documented or collected anywhere aside from persisting in the mailing lists.<br />
<br />
Such proposals are things, typically, that would be uncontroversial and welcomed, but which no core GHC developers have free cycles to work on.<br />
<br />
Proposals are suitable for this page if they do not require deep changes to GHC (though they may still be nontrivial) and if ghc-hq is likely to merge them once there is a strong community consensus and a well-written patch on hand.<br />
<br />
Many but not all of these may be associated with feature request tickets on the ghc trac: http://hackage.haskell.org/trac/ghc/query?status=new&status=assigned&status=reopened&type=feature+request&order=priority<br />
<br />
Many tickets tracked by SPJ also fall in this category: http://hackage.haskell.org/trac/ghc/wiki/Status/SLPJ-Tickets<br />
<br />
<br />
== Expanded Deprecated Pragma ==<br />
The current DEPRECATED pragma can attach to modules or to top-level entities, including functions, classes, and types.<br />
<br />
It cannot attach to exports (i.e. if we wish to not deprecate "foo" but only its reexport from module Bar).<br />
<br />
It also cannot attach to methods within classes.<br />
<br />
There are other possible things we may wish to deprecate as well. Expanding this pragma would make certain changes to libraries more tractable and easily managed.<br />
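For reference, here is the pragma as it currently stands, attached to a top-level function (a minimal compilable sketch; <code>oldLength</code> is an illustrative name):<br />

```haskell
module Main where

-- The existing DEPRECATED pragma attaches to a top-level entity:
oldLength :: [a] -> Int
oldLength = length
{-# DEPRECATED oldLength "use Prelude.length instead" #-}

-- It can also attach to a whole module:
--   module Old {-# DEPRECATED "use New instead" #-} where ...
-- What it cannot do today is mark a single re-export, or a single
-- method inside a class, as deprecated -- which is what this
-- proposal asks for.

main :: IO ()
main = print (oldLength "abc")  -- prints 3 (with a compile-time warning)
```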
<br />
http://hackage.haskell.org/trac/ghc/ticket/4879<br />
<br />
Perhaps there is a framework to be designed for the following ad-hoc warnings as well: http://hackage.haskell.org/trac/ghc/ticket/8004<br />
<br />
== Records and Modules ==<br />
Yitzchak Gale's nested modules proposal would address one of the larger warts in the current module system and records while adding essentially no complexity to the GHC internals (i.e. no changes would likely be needed to GHC beyond the parsing phase, so an ''easy'' change to experiment with).<br />
<br />
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-January/021591.htm<br />
<br />
<br />
== Pattern Synonyms ==<br />
<br />
http://hackage.haskell.org/trac/ghc/ticket/5144<br />
<br />
== Make Template Haskell quieter ==<br />
<br />
Template Haskell spits out a lot of module loading text by default. Clearance has been given to increase the verbosity threshold for this to -v2, but someone needs to implement it. See:<br />
<br />
http://hackage.haskell.org/trac/ghc/ticket/7863<br />
<br />
[[Category:Proposals]]<br />
[[Category:GHC]]<br />
[[Category:Community]]</div>

https://wiki.haskell.org/index.php?title=Hac_Boston/Attendees&diff=42571
Hac Boston/Attendees, 2011-10-26T04:52:22Z
<p>Dolio: /* Attendees */ dolio's mobile number</p>
<hr />
<div>This is a partial list of attendees for [[Hac Boston]]. Please refer to the [[Hac Boston|main page]] for more information.<br />
<br />
= Attendees =<br />
<br />
Feel free to list yourself here -- though this is ''not'' registration. To register, see [[Hac Boston/Register|this page]].<br />
<br />
{| class="wikitable"<br />
! Nickname<br />
! Real Name<br />
! Mobile #<br />
! Arriving<br />
! Departing<br />
! Accommodation<br />
|-<br />
| edwardk<br />
| Edward Kmett<br />
| (857)244-1001<br />
| -<br />
| -<br />
| lives in the area<br />
|-<br />
| dolio<br />
| Dan Doel<br />
| (513) 503-8525<br />
| -<br />
| -<br />
| lives in the area<br />
|-<br />
| copumpkin<br />
| Dan Peebles<br />
| -<br />
| -<br />
| -<br />
| lives in the area<br />
|-<br />
| kmc<br />
| Keegan McAllister<br />
| -<br />
| -<br />
| -<br />
| lives in the area<br />
|}<br />
<br />
= Additional Comments =<br />
<br />
Please use this section to leave comments for other attendees, e.g. for organizing accommodation.</div>

https://wiki.haskell.org/index.php?title=Free_structure&diff=33950
Free structure, 2010-03-03T17:49:14Z
<p>Dolio: change 'injection' to 'embedding' in the informal description of freeness, since injectivity isn't necessarily implied.</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Theoretical foundations]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it. The later sections make use of some notions from [[category theory]], so some familiarity with its basics will be useful.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures, originates in [http://en.wikipedia.org/wiki/Abstract_algebra abstract algebra], so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
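In Haskell terms, such a signature can be written down directly as a class (a sketch mirroring, but not using, the Prelude's <code>Monoid</code>; all names here are illustrative):<br />

```haskell
-- A monoid: a carrier type m, an identity e, and a binary operation (|*|).
class MyMonoid m where
  e     :: m
  (|*|) :: m -> m -> m

-- Instances are expected (but not machine-checked) to satisfy:
--   x |*| (y |*| z) == (x |*| y) |*| z
--   e |*| x == x  and  x |*| e == x

-- Example instance: integers under addition.
newtype Add = Add Int deriving (Eq, Show)

instance MyMonoid Add where
  e = Add 0
  Add a |*| Add b = Add (a + b)
```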
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an embedding <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for structures of that type.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold except where the monoid laws force it, for instance when <math>x = y</math>, <math>x = e</math> or <math>y = e</math> (more generally, when <math>x</math> and <math>y</math> are powers of a common element). Further <math>i x \in M</math>, for all <math>x \in S</math>, and <math>e \in M</math>, and <math>\forall x, y \in M.\,\, x * y \in M</math> (and these should all be distinct, except as required by the monoid laws), but there should be no 'extra' elements of <math>M</math> in addition to those.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs only in special cases, e.g. xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
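This 'no junk, no extra equations' property is exactly what lets any function from the generators into a monoid extend uniquely to a monoid homomorphism out of the free monoid. A runnable sketch of that extension (<code>extend</code> is an illustrative name; the Prelude calls this <code>foldMap</code>):<br />

```haskell
-- The embedding of generators into the free monoid:
i :: s -> [s]
i x = [x]

-- Universal property: a function f :: s -> n into any monoid n extends
-- uniquely to a monoid homomorphism from [s].
extend :: Monoid n => (s -> n) -> [s] -> n
extend f = foldr (\x acc -> f x <> acc) mempty
```

Here <code>extend f</code> is a homomorphism (it sends <code>(++)</code> to <code>(<>)</code> and <code>[]</code> to <code>mempty</code>), and <code>extend f . i == f</code>, which is the factoring property.<br />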
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [[Category theory]] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be [http://en.wikipedia.org/wiki/Initial_and_terminal_objects initial or terminal], [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its [http://en.wikipedia.org/wiki/Free_object full categorical generality], freeness isn't necessarily categorized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids <math>(M, e, *)</math> to their underlying set <math>M</math>. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
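Concretely for monoids, the adjunction says that monoid homomorphisms <math>FS \to N</math> are in bijection with plain functions <math>S \to UN</math>. Both directions of that bijection can be sketched in Haskell (<code>phi</code> and <code>psi</code> are illustrative names):<br />

```haskell
import Data.Monoid (Sum (..))

-- One direction: restrict a homomorphism [s] -> n along the embedding i.
phi :: ([s] -> n) -> (s -> n)
phi h = h . (: [])

-- The other: extend a plain function s -> n to a monoid homomorphism.
psi :: Monoid n => (s -> n) -> ([s] -> n)
psi f = mconcat . map f
```

Note that <code>phi . psi == id</code> on functions, and <code>psi . phi == id</code> on homomorphisms (though not on arbitrary functions <code>[s] -> n</code>), which is the hom-set isomorphism of the adjunction.<br />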
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [[Category theory/Natural transformation|natural transformations]] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
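In Haskell, the unit and multiplication for a monad are just <code>return</code> and <code>join</code>, and the monoid-object laws restate the join-form monad laws (a minimal sketch):<br />

```haskell
import Control.Monad (join)

-- For any Monad m, the monoid-object structure on the endofunctor m:
eta :: Monad m => a -> m a        -- the unit, eta : I -> M  (return)
eta = return

mu :: Monad m => m (m a) -> m a   -- the multiplication, mu : M . M -> M  (join)
mu = join

-- The monoid-object laws specialize to:
--   mu . fmap eta == id        (right unit)
--   mu . eta      == id        (left unit)
--   mu . fmap mu  == mu . mu   (associativity)
```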
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Functor (Free f) where<br />
&nbsp;&nbsp;fmap g (Return a) = Return (g a)<br />
&nbsp;&nbsp;fmap g (Roll ffa) = Roll (fmap (fmap g) ffa)<br />
<br />
instance Functor f => Applicative (Free f) where -- required by modern GHC<br />
&nbsp;&nbsp;pure = Return<br />
&nbsp;&nbsp;mf <*> ma = mf >>= \g -> fmap g ma<br />
<br />
instance Functor f => Monad (Free f) where<br />
&nbsp;&nbsp;return = pure<br />
&nbsp;&nbsp;Return a >>= f = f a<br />
&nbsp;&nbsp;Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to free monoids over lists. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
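The analogy extends to the universal property as well: just as any function into a monoid extends to a homomorphism from the free monoid, any natural transformation from <code>f</code> into a monad <code>m</code> extends to a monad morphism from <code>Free f</code>. A self-contained sketch (a function of this shape, <code>foldFree</code>, exists in the <code>free</code> package; the datatype is repeated here so the sketch stands alone):<br />

```haskell
{-# LANGUAGE RankNTypes #-}

-- Repeating the datatype so this sketch is self-contained:
data Free f a = Return a | Roll (f (Free f a))

-- Any natural transformation f ~> m into a monad m extends to a monad
-- morphism Free f ~> m, by analogy with foldMap for lists.
foldFree :: Monad m => (forall x. f x -> m x) -> Free f a -> m a
foldFree _ (Return a) = return a
foldFree k (Roll ffa) = k ffa >>= foldFree k

-- Example: interpreting Free Maybe back into Maybe, using id as the
-- natural transformation.
example :: Free Maybe Int
example = Roll (Just (Roll (Just (Return 7))))
```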
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F. Suppose we took the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, with successor taken modulo that number, form an algebra for this signature.<br />
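The natural-number signature, its initial algebra, and the catamorphism can be sketched in Haskell (<code>NatF</code>, <code>Fix</code> and <code>cata</code> are illustrative names; <code>mod2</code> is the naturals-modulo-2 algebra just mentioned):<br />

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The natural-number signature F X = 1 + X:
data NatF x = Z | S x deriving Functor

-- The initial algebra's carrier, as a least fixed point:
newtype Fix f = In (f (Fix f))

-- The unique homomorphism out of the initial algebra: the catamorphism.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In x) = alg (fmap (cata alg) x)

-- Two F-algebras on Int: the naturals, and the naturals modulo 2.
toInt, mod2 :: NatF Int -> Int
toInt Z     = 0
toInt (S n) = n + 1
mod2 Z      = 0
mod2 (S n)  = (n + 1) `mod` 2

-- The numeral SSZ as an element of the initial algebra:
two :: Fix NatF
two = In (S (In (S (In Z))))
```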
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals (for any modular set M, S(h(M-1)) = S(M-1) = M, but h(S(M-1)) = h0 = 0). The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal specification, only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The wiki article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Free_structure&diff=33949Free structure2010-03-03T17:47:23Z<p>Dolio: Link to the wikipedia article on free objects in the section about adjoints and such</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Theoretical foundations]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it. The later sections make use of some notions from [[category theory]], so some familiarity with its basics will be useful.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures, originates in [http://en.wikipedia.org/wiki/Abstract_algebra abstract algebra], so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for structures of that type.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold unless <math>x = y</math>, <math>x = e</math> or <math>y = e</math>. Further <math>i x \in M</math>, for all <math>x</math>, and <math>e \in M</math>, and <math>\forall x, y \in M.\,\, x * y \in M</math> (and these should all be distinct, except as required by the monoid laws), but there should be no 'extra' elements of <math>M</math> in addition to those.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [[Category theory]] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be [http://en.wikipedia.org/wiki/Initial_and_terminal_objects initial or terminal], [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its [http://en.wikipedia.org/wiki/Free_object full categorical generality], freeness isn't necessarily categorized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids <math>(M, e, *)</math> to their underlying set <math>M</math>. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [[Category theory/Natural transformation|natural transformations]] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Functor (Free f) where<br />
    fmap g (Return a) = Return (g a)<br />
    fmap g (Roll ffa) = Roll (fmap (fmap g) ffa)<br />
<br />
instance Functor f => Applicative (Free f) where<br />
    pure = Return<br />
    gf <*> fa = gf >>= \g -> fmap g fa<br />
<br />
instance Functor f => Monad (Free f) where<br />
    return = pure<br />
    Return a >>= f = f a<br />
    Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to the list construction of free monoids. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary-length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall that functor composition is the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (just as <code>[]</code> is the 0-length string), while <code>Roll</code> extends the nesting level by one (just as <code>(:)</code> builds (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
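As a small usage sketch (the <code>Say</code> functor and its interpreter are invented here for illustration, not taken from the article), a functor of commands generates a little DSL as its free monad, and a handler folds programs down to results. The snippet restates <code>Free</code> with the instances modern GHC requires, so it is self-contained:<br />
<br />
```haskell
{-# LANGUAGE DeriveFunctor #-}

data Free f a = Return a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Return a) = Return (g a)
  fmap g (Roll ffa) = Roll (fmap (fmap g) ffa)

instance Functor f => Applicative (Free f) where
  pure = Return
  gf <*> fa = gf >>= \g -> fmap g fa

instance Functor f => Monad (Free f) where
  Return a >>= k = k a
  Roll ffa >>= k = Roll (fmap (>>= k) ffa)

-- A command functor, invented for illustration: emit a string, then continue.
data Say next = Say String next deriving Functor

say :: String -> Free Say ()
say s = Roll (Say s (Return ()))

-- Interpret a program by collecting the emitted strings,
-- peeling off one Roll layer at a time:
run :: Free Say a -> ([String], a)
run (Return a)       = ([], a)
run (Roll (Say s k)) = let (ss, a) = run k in (s : ss, a)

program :: Free Say Int
program = do
  say "hello"
  say "world"
  return 42
```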
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those with a unique arrow from (to) them to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras, which are:<br />
** An object <math>X \in Hask</math><br />
** An action <math>x : FX \to X</math><br />
* Algebra homomorphisms <math>(X, x) \to (Y, y)</math><br />
** These are given by <math>h : X \to Y</math> such that <math> y \circ Fh = h \circ x</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
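This can be sketched in Haskell (using the conventional <code>Fix</code>/<code>cata</code> names, which are not from this article): the initial algebra's carrier is the least fixed point of the functor, and the unique homomorphism into any other algebra is the fold:<br />
<br />
```haskell
newtype Fix f = In { out :: f (Fix f) }

-- cata alg is the unique algebra homomorphism from the initial
-- algebra (Fix f, In) to any other algebra (a, alg):
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- The simpler signature F X = 1 + X of the natural numbers, as in the text:
data NatF x = Z | S x

instance Functor NatF where
  fmap _ Z     = Z
  fmap g (S x) = S (g x)

type Nat = Fix NatF

zero :: Nat
zero = In Z

suc :: Nat -> Nat
suc = In . S

-- Folding into the algebra (Int, alg) evaluates a numeral:
toInt :: Nat -> Int
toInt = cata alg
  where
    alg Z     = 0
    alg (S n) = n + 1
```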
<br />
Intuitively, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F. Suppose we take the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both <code>Z = inl ()</code> and <code>S x = inr x</code> can be incorporated into Nat. However, there are potentially many algebras for this signature; for instance, the naturals modulo some finite number, with successor taken modulo that number, form an algebra for it.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the modular algebras above, with carriers 2 = {0, 1} and 3 = {0, 1, 2}. There can be no homomorphism <math>h : 2 \to 3</math>; checking every candidate:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
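Since everything involved is finite, the non-existence of a homomorphism <math>h : 2 \to 3</math> can also be checked by brute force (a verification sketch of our own, not from the article):<br />
<br />
```haskell
-- The two algebras for the signature F X = 1 + X: naturals mod 2 and mod 3,
-- with carriers {0,1} and {0,1,2} and successor taken modulo the carrier size.
sucMod :: Int -> Int -> Int
sucMod m x = (x + 1) `mod` m

-- Every function h : {0,1} -> {0,1,2}, given by its value table [h 0, h 1]:
candidates :: [[Int]]
candidates = [[h0, h1] | h0 <- [0,1,2], h1 <- [0,1,2]]

-- h is an algebra homomorphism iff it preserves Z (h 0 == 0) and
-- commutes with successor (h (S x) == S (h x)):
isHom :: [Int] -> Bool
isHom h = h !! 0 == 0
       && all (\x -> h !! sucMod 2 x == sucMod 3 (h !! x)) [0, 1]

noHomExists :: Bool
noHomExists = not (any isHom candidates)
```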
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals: a homomorphism h from the mod-M algebra to the naturals would have to satisfy h0 = 0 and h(Sx) = S(hx), forcing hx = x for all x &lt; M; but then h(S(M-1)) would have to be both S(M-1) = M and h0 = 0. The final algebra is the one-element set {0}, with S0 = 0 and Z = 0, and unique homomorphism hx = 0 from any algebra. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal specification; only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The wiki article linked above gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as a polymorphic function:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
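<br />
-- For instance (a standard example, not specific to this article), the<br />
-- function below is a natural transformation from [] to Maybe; naturality<br />
-- means it commutes with fmap:  fmap g . safeHead = safeHead . map g<br />
--<br />
--   safeHead :: forall a. [a] -> Maybe a<br />
--   safeHead []    = Nothing<br />
--   safeHead (x:_) = Just x<br />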
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Free_structure&diff=33931Free structure2010-03-03T09:23:31Z<p>Dolio: add some links</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Theoretical foundations]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it. The later sections make use of some notions from [[category theory]], so some familiarity with its basics will be useful.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures, originates in [http://en.wikipedia.org/wiki/Abstract_algebra abstract algebra], so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold unless <math>x = y</math>, <math>x = e</math> or <math>y = e</math>. Further <math>i x \in M</math>, for all <math>x</math>, and <math>e \in M</math>, and <math>\forall x, y \in M.\,\, x * y \in M</math> (and these should all be distinct, except as required by the monoid laws), but there should be no 'extra' elements of <math>M</math> in addition to those.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [[Category theory]] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be [http://en.wikipedia.org/wiki/Initial_and_terminal_objects initial or terminal], [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily categorized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids <math>(M, e, *)</math> to their underlying set <math>M</math>. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [[Category theory/Natural transformation|natural transformations]] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
return a = Return a<br />
Return a >>= f = f a<br />
Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to free monoids over lists. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F. For example, take the simpler signature <code>F X = 1 + X</code> of the natural numbers: both Z = inl () and S x = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, with successor taken modulo that number, form an algebra for the same signature.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals (for any modular set M, S(h(M-1)) = S(M-1) = M, but h(S(M-1)) = h0 = 0). The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal definition, only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The wiki article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>
Dolio
https://wiki.haskell.org/index.php?title=Free_structure&diff=33930
Free structure (2010-03-03T09:10:36Z)
<p>Dolio: add a note in the introduction that knowledge of category theory will help</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Theoretical foundations]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it. The later sections make use of some notions from [[category theory]], so some familiarity with its basics will be useful.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures, originates in abstract algebra, so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
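These descriptions transcribe almost directly into Haskell classes. The following is an informal sketch (the class and instance names are ours, not the standard library's); Haskell cannot enforce the equational laws, so the code only spot-checks them at sample points:<br />

```haskell
-- A monoid: a carrier type with an identity and a binary operation.
class MyMonoid m where
  e    :: m
  (*.) :: m -> m -> m

-- A group adds a unary inverse operation to a monoid.
class MyMonoid m => MyGroup m where
  inv :: m -> m

-- Integers under addition form a group (hence also a monoid).
newtype Sum = Sum Int deriving (Eq, Show)

instance MyMonoid Sum where
  e = Sum 0
  Sum x *. Sum y = Sum (x + y)

instance MyGroup Sum where
  inv (Sum x) = Sum (negate x)

-- Spot-check the laws at a few sample points.
main :: IO ()
main = do
  let x = Sum 2; y = Sum 3; z = Sum 5
  print (x *. (y *. z) == (x *. y) *. z)   -- associativity
  print (e *. x == x && x *. e == x)       -- identity laws
  print (x *. inv x == e)                  -- inverse law
```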
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's the simplest), the equation <math>x * y = y * x</math> should not hold unless <math>x = y</math>, <math>x = e</math> or <math>y = e</math>. Further, <math>i x \in M</math> for all <math>x</math>, <math>e \in M</math>, and <math>\forall x, y \in M.\,\, x * y \in M</math> (and these should all be distinct, except as required by the monoid laws), but there should be no 'extra' elements of <math>M</math> beyond those.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
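The freeness of lists can also be stated operationally: any function from the generating set <math>S</math> into a monoid extends uniquely to a monoid homomorphism out of <code>[S]</code>, and that extension is the standard <code>foldMap</code>. A runnable sketch:<br />

```haskell
import Data.Monoid (Sum(..))

-- Any f :: s -> n into a monoid n extends to a unique homomorphism
-- hom f :: [s] -> n with  hom f . (\x -> [x]) = f.  This is foldMap.
hom :: Monoid n => (s -> n) -> [s] -> n
hom f = foldMap f

main :: IO ()
main = do
  -- Extend (Sum . length) on single strings to a homomorphism on lists.
  let f = Sum . length
  print (getSum (hom f ["free", "monoid"]))             -- 4 + 6 = 10
  -- Homomorphism laws: [] goes to mempty, (++) goes to (<>).
  print (hom f [] == (mempty :: Sum Int))
  print (hom f (["a"] ++ ["bc"]) == hom f ["a"] <> hom f ["bc"])
```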
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [[Category theory]] gives a better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then be either initial or terminal [1], and thus freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily characterized by an underlying set, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
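In Haskell terms, the adjunction <math>F</math> ⊣ <math>U</math> says that monoid homomorphisms out of the free monoid <code>[s]</code> are in bijection with plain functions out of <code>s</code>. A sketch (the names <code>phi</code> and <code>psi</code> are ours):<br />

```haskell
-- Hom_Mon([s], n)  ~  Hom_Set(s, U n)
-- One direction: restrict a homomorphism along the injection i x = [x].
phi :: ([s] -> n) -> (s -> n)
phi h = h . (\x -> [x])

-- The other: extend a function to the unique homomorphism.
psi :: Monoid n => (s -> n) -> ([s] -> n)
psi f = mconcat . map f

main :: IO ()
main = do
  let f x = [x, x] :: String    -- a function Char -> String (a monoid)
  print (psi f "ab")            -- the extended homomorphism: "aabb"
  print (phi (psi f) 'c' == f 'c')   -- round trip recovers f
```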
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
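To check that the generalization contains the original notion, instantiate the monoidal category as Haskell types with <code>(,)</code> as the tensor and <code>()</code> as the unit object; a monoid object is then a type with <code>e :: () -> M</code> and <code>m :: (M, M) -> M</code>, i.e. a monoid in the earlier sense. A sketch (the <code>MonoidObject</code> record is ours):<br />

```haskell
-- A monoid object in (Hask, (,), ()): an object m together with
--   unit :: () -> m        (the 'element' e)
--   mult :: (m, m) -> m    (the multiplication)
data MonoidObject m = MonoidObject
  { unit :: () -> m
  , mult :: (m, m) -> m
  }

-- Lists of Int, with [] and (++), as a monoid object.
listMonoid :: MonoidObject [Int]
listMonoid = MonoidObject { unit = \() -> [], mult = uncurry (++) }

main :: IO ()
main = do
  let MonoidObject e m = listMonoid
  print (m (e (), [1, 2]))                                -- left unit
  print (m ([1], m ([2], [3])) == m (m ([1], [2]), [3]))  -- associativity
```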
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [[Category theory/Natural transformation|natural transformations]] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
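Concretely, for the list monad, <math>\eta</math> is the singleton injection and <math>\mu</math> is <code>concat</code>; the monoid-object laws then read as the usual monad laws. A sketch, writing the components of the natural transformations as polymorphic functions:<br />

```haskell
-- Components of the natural transformations for M = [].
eta :: a -> [a]        -- eta : I -> M   (singleton)
eta x = [x]

mu :: [[a]] -> [a]     -- mu : M . M -> M   (flattening)
mu = concat

main :: IO ()
main = do
  let xsss = [[[1], [2, 3]], [[4]]] :: [[[Int]]]
  -- Associativity: mu . mu = mu . fmap mu
  print (mu (mu xsss) == mu (fmap mu xsss))
  -- Unit laws: mu . eta = id = mu . fmap eta
  let xs = [1, 2, 3] :: [Int]
  print (mu (eta xs) == xs && mu (fmap eta xs) == xs)
```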
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
  return a = Return a<br />
  Return a >>= f = f a<br />
  Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to the list construction of free monoids above. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary-length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall that functor composition is the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> extends the nesting level by one (just as <code>(:)</code> creates (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
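To make the construction concrete, here is <code>Free</code> over a small instruction functor, together with an interpreter playing the role that <code>foldMap</code> plays for lists. The <code>Cmd</code> functor and <code>interp</code> function are ours for illustration, and the <code>Functor</code>/<code>Applicative</code> instances are spelled out since modern GHC requires them for a <code>Monad</code> instance:<br />

```haskell
data Free f a = Return a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Return a) = Return (g a)
  fmap g (Roll ffa) = Roll (fmap (fmap g) ffa)

instance Functor f => Applicative (Free f) where
  pure = Return
  Return g <*> x = fmap g x
  Roll ffg <*> x = Roll (fmap (<*> x) ffg)

instance Functor f => Monad (Free f) where
  Return a >>= f = f a
  Roll ffa >>= f = Roll (fmap (>>= f) ffa)

inj :: Functor f => f a -> Free f a
inj fa = Roll (fmap Return fa)

-- A tiny instruction set, injected into the free monad over it.
data Cmd k = Output String k
instance Functor Cmd where
  fmap g (Output s k) = Output s (g k)

output :: String -> Free Cmd ()
output s = inj (Output s ())

-- Any interpretation of Cmd extends uniquely to Free Cmd; here we
-- collect the outputs in a list alongside the final result.
interp :: Free Cmd a -> ([String], a)
interp (Return a)          = ([], a)
interp (Roll (Output s k)) = let (ss, a) = interp k in (s : ss, a)

main :: IO ()
main = do
  let prog = do { output "hello"; output "world"; return 42 }
  print (interp prog)   -- (["hello","world"],42)
```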
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
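The initial-algebra fold can be written once and for all in Haskell using the standard <code>Fix</code>/<code>cata</code> construction (names ours); here it is instantiated at the signature <math>F X = 1 + X</math> of the natural numbers:<br />

```haskell
-- Fix f is the least fixed point of f; cata alg is the unique algebra
-- homomorphism from it into any other F-algebra (the fold).
newtype Fix f = In (f (Fix f))

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In t) = alg (fmap (cata alg) t)

-- The natural-numbers signature  F X = 1 + X.
data NatF x = Z | S x
instance Functor NatF where
  fmap _ Z     = Z
  fmap g (S x) = S (g x)

type Nat = Fix NatF

zero :: Nat
zero = In Z

suc :: Nat -> Nat
suc n = In (S n)

-- An F-algebra on Int, and the fold it induces.
toInt :: Nat -> Int
toInt = cata alg
  where alg Z     = 0
        alg (S n) = n + 1

main :: IO ()
main = print (toInt (suc (suc (suc zero))))   -- 3
```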
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F. For example, take the simpler signature <code>F X = 1 + X</code> of the natural numbers: both Z = inl () and S x = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, with successor taken modulo that number, form an algebra for the same signature.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals (for any modular set M, S(h(M-1)) = S(M-1) = M, but h(S(M-1)) = h0 = 0). The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal definition, only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The wiki article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>
Dolio
https://wiki.haskell.org/index.php?title=Hask&diff=33928
Hask (2010-03-03T05:20:19Z)
<p>Dolio: Add a section on limits failing to match up with Haskell datatypes</p>
<hr />
<div>'''Hask''' is the name usually given to the [[Category theory|category]] having Haskell types as objects and Haskell functions between them as morphisms.<br />
<br />
A type-constructor that is an instance of the Functor class is an endofunctor on Hask.<br />
<br />
* [http://www.cs.gunma-u.ac.jp/~hamana/Papers/cpo.pdf Makoto Hamana: ''What is the category for Haskell?'']<br />
<br />
A solution approach to the issue of partiality making many of the identities required by categorical constructions not literally true in Haskell:<br />
<br />
* [http://www.cs.nott.ac.uk/~nad/publications/danielsson-popl2006-tr.pdf Nils A. Danielsson, John Hughes, Patrik Jansson, and Jeremy Gibbons. ''Fast and loose reasoning is morally correct.'']<br />
<br />
<br />
<br />
== The seq problem ==<br />
<br />
The right identity law fails in '''Hask''' if we distinguish values which can be distinguished by <hask>seq</hask>, since:<br />
<br />
<hask>id . undefined = \x -> id (undefined x) = \x -> undefined x</hask><br />
<br />
should be equal to <hask>undefined</hask>, but can be distinguished from it using <hask>seq</hask>:<br />
<br />
ghci> <hask>(undefined :: Int -> Int) `seq` ()</hask><br />
* Exception: Prelude.undefined<br />
ghci> <hask>(id . undefined :: Int -> Int) `seq` ()</hask><br />
()<br />
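The ghci session above can be reproduced in a standalone program by forcing each function with <code>seq</code> under <code>evaluate</code> and catching the resulting exception; this is a sketch for code compiled without optimization (rewrite rules could in principle change the result):<br />

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Force a function to WHNF with seq; report whether that diverges.
probe :: (Int -> Int) -> IO String
probe f = do
  r <- try (evaluate (f `seq` ())) :: IO (Either SomeException ())
  return (either (const "bottom") (const "()") r)

main :: IO ()
main = do
  probe undefined        >>= putStrLn   -- undefined is bottom: seq throws
  probe (id . undefined) >>= putStrLn   -- id . undefined is a lambda: seq succeeds
```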
<br />
== The limits problem ==<br />
<br />
Even in the absence of seq, bottoms cause datatypes to not actually be instances of the expected categorical constructions. For instance, using some intuition from the category of sets, one might expect the following:<br />
<br />
<haskell><br />
data Void -- no elements ; initial object<br />
data () = () -- terminal object<br />
<br />
data (a, b) = (a, b) -- product<br />
data Either a b = Left a | Right b -- coproduct<br />
</haskell><br />
<br />
However, Void actually does contain an element, bottom, so for each <code>x :: T</code>, <code>const x</code> is a different function <code>Void -> T</code>, meaning <code>Void</code> isn't initial (it's actually terminal).<br />
<br />
Similarly, <code>const undefined</code> and <code>const ()</code> are two distinct functions into <code>()</code>. Consider:<br />
<br />
<haskell><br />
t :: () -> Int<br />
t () = 5<br />
<br />
t . const () = \x -> 5<br />
t . const undefined = \x -> undefined<br />
</haskell><br />
<br />
So, () is not terminal.<br />
<br />
Similar issues occur with (co)products. Categorically:<br />
<br />
<haskell><br />
(\p -> (fst p, snd p)) = id<br />
<br />
(\s -> case s of Left x -> p (Left x) ; Right y -> p (Right y)) = p<br />
</haskell><br />
<br />
but in Haskell<br />
<br />
<haskell><br />
id undefined = undefined /= (undefined, undefined) = (fst undefined, snd undefined)<br />
<br />
const 5 undefined = 5<br />
/= undefined = case undefined of <br />
Left x -> const 5 (Left x)<br />
Right y -> const 5 (Right y)<br />
</haskell><br />
<br />
{{stub}}<br />
[[Category:Mathematics]]<br />
[[Category:Theoretical foundations]]</div>
Dolio
https://wiki.haskell.org/index.php?title=Free_structure&diff=33927
Free structure (2010-03-03T04:29:21Z)
<p>Dolio: tweak the categories</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Theoretical foundations]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures, originates in abstract algebra, so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's the simplest), the equation <math>x * y = y * x</math> should not hold unless <math>x = y</math>, <math>x = e</math> or <math>y = e</math>. Further, <math>i x \in M</math> for all <math>x</math>, <math>e \in M</math>, and <math>\forall x, y \in M.\,\, x * y \in M</math> (and these should all be distinct, except as required by the monoid laws), but there should be no 'extra' elements of <math>M</math> beyond those.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [http://en.wikipedia.org/wiki/Category_theory Category theory] gives a better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then be either initial or terminal [1], and thus freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily characterized by an underlying set, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [[Category theory/Natural transformation|natural transformations]] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
  return a = Return a<br />
  Return a >>= f = f a<br />
  Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to the list construction of free monoids above. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary-length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall that functor composition is the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> extends the nesting level by one (just as <code>(:)</code> creates (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
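The freeness of <code>Free f</code> can itself be rendered in Haskell: any natural transformation from <code>f</code> into a monad <code>m</code> extends through <code>inj</code> to a monad morphism. A sketch under the definitions above (the name <code>foldFree</code> echoes the convention of the <code>free</code> package, but this is an independent illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

data Free f a = Return a | Roll (f (Free f a))

inj :: Functor f => f a -> Free f a
inj fa = Roll (fmap Return fa)

-- Extend a natural transformation (forall x. f x -> m x)
-- to a monad morphism Free f ~> m.
foldFree :: Monad m => (forall x. f x -> m x) -> Free f a -> m a
foldFree _   (Return a) = return a
foldFree phi (Roll ffa) = phi ffa >>= foldFree phi

-- e.g. interpreting Free Maybe back into Maybe itself:
main :: IO ()
main = print (foldFree id (inj (Just True)))  -- prints Just True
```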
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras, which are:<br />
** An object <math>X \in Hask</math><br />
** An action <math>a : F X \to X</math><br />
* Algebra homomorphisms <math>(X, a) \to (Y, b)</math><br />
** These are given by <math>h : X \to Y</math> such that <math> b \circ F h = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
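For the simpler signature <math>F X = 1 + X</math> discussed next, the catamorphism is just the usual fold over the natural numbers. A minimal sketch (names illustrative):

```haskell
-- Nat as the initial algebra of FX = 1 + X.
data Nat = Z | S Nat

-- An F-algebra on a carrier r is a pair (z, s); foldNat is the unique
-- algebra homomorphism out of the initial algebra, i.e. the catamorphism.
foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

-- The algebra (0, (+1)) on Int recovers the usual "to integer" map:
toInt :: Nat -> Int
toInt = foldNat 0 (+ 1)

main :: IO ()
main = print (toInt (S (S (S Z))))  -- prints 3
```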
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F. Suppose we take the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras for this signature; for instance, the naturals modulo some finite number, with successor taken modulo that number, form an algebra for it.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals: for any modular set M, a homomorphism h into the naturals must act as the inclusion on {0, ..., M-1}, so S(h(M-1)) = S(M-1) = M, but h(S(M-1)) = h0 = 0. The final algebra is the one-element set {0}, with S0 = 0 and Z = 0, and the unique homomorphism into it is hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal definition, only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The wiki article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
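<br />
-- A concrete example (illustrative): safeHead is a natural<br />
-- transformation from the list functor to Maybe.<br />
safeHead :: forall a. [a] -> Maybe a<br />
safeHead []    = Nothing<br />
safeHead (x:_) = Just x<br />
-- Naturality says: fmap g . safeHead = safeHead . fmap g, for any g.<br />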
</haskell></div>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Algebra]]<br />
[[Category:Category Theory]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures, originates in abstract algebra, so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold unless <math>x = y</math>, <math>x = e</math> or <math>y = e</math>. Further <math>i x \in M</math>, for all <math>x</math>, and <math>e \in M</math>, and <math>\forall x, y \in M.\,\, x * y \in M</math> (and these should all be distinct, except as required by the monoid laws), but there should be no 'extra' elements of <math>M</math> in addition to those.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [http://en.wikipedia.org/wiki/Category_theory Category theory] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily categorized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [http://en.wikipedia.org/wiki/Natural_transformation natural transformations] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
return a = Return a<br />
Return a >>= f = f a<br />
Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to free monoids over lists. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F---suppose we took the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, and successor modulo that number are an algebra for the natural signature.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
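The exhaustive case analysis above can also be checked mechanically. The following snippet (illustrative names, not from the article) enumerates every candidate map and verifies that the homomorphism condition always fails:<br />

```haskell
-- Successor modulo m.
succMod :: Int -> Int -> Int
succMod m x = (x + 1) `mod` m

-- Is h an algebra homomorphism from the mod-m algebra to the mod-n algebra?
-- It must send zero to zero and commute with successor.
isHom :: Int -> Int -> (Int -> Int) -> Bool
isHom m n h =
  h 0 == 0 && all (\x -> h (succMod m x) == succMod n (h x)) [0 .. m - 1]

-- None of the nine functions {0,1} -> {0,1,2} is a homomorphism 2 -> 3:
noHom23 :: Bool
noHom23 =
  not (any (isHom 2 3) [ \x -> if x == 0 then a else b | a <- [0..2], b <- [0..2] ])
```

By contrast, a compatible identification does give a homomorphism: reduction modulo 3 is a homomorphism from the mod-6 algebra to the mod-3 algebra, i.e. <code>isHom 6 3 (`mod` 3)</code> evaluates to <code>True</code>.<br />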
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals (for any modular set M, S(h(M-1)) = S(M-1) = M, but h(S(M-1)) = h0 = 0). The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal specification, only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The Wikipedia article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Free_structure&diff=33922Free structure2010-03-03T03:56:14Z<p>Dolio: Minor thinko when talking about homomorphisms M -> Nat; M = 0, not SM = 0</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Algebra]]<br />
[[Category:Category Theory]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
The distinction between free structures and other, non-free structures originates in abstract algebra, so that provides a good place to start. Some common structures considered in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold except where the monoid laws force it, e.g. when <math>x = y</math>, <math>x = e</math> or <math>y = e</math>.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs only in special cases, e.g. xs == ys, xs == [] or ys == []<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [http://en.wikipedia.org/wiki/Category_theory Category theory] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily characterized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
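Concretely, the adjunction <math>F \dashv U</math> says that monoid homomorphisms out of the free monoid <code>[s]</code> correspond exactly to plain functions out of <code>s</code>. In Haskell this correspondence is essentially <code>foldMap</code>; the names below are illustrative:<br />

```haskell
import Data.Monoid (Sum (..))

-- One direction of Hom_Mon([s], m) ~ Hom_Set(s, U m): a function on the
-- generators extends (uniquely) to a monoid homomorphism on lists.
extend :: Monoid m => (s -> m) -> ([s] -> m)
extend f = mconcat . map f

-- The other direction precomposes with the injection of generators.
restrict :: ([s] -> m) -> (s -> m)
restrict h x = h [x]
```

For example, <code>extend Sum</code> is the unique monoid homomorphism from lists of integers to the additive monoid: <code>getSum (extend Sum [1,2,3])</code> gives <code>6</code>.<br />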
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [http://en.wikipedia.org/wiki/Natural_transformation natural transformations] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
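In Haskell terms, <math>\eta</math> and <math>\mu</math> are <code>return</code> and <code>join</code>, once the identity functor and functor composition are made explicit (via <code>Identity</code> and <code>Compose</code> from <code>base</code>):<br />

```haskell
import Data.Functor.Compose (Compose (..))
import Data.Functor.Identity (Identity (..))

-- eta : I -> M, i.e. return viewed as a natural transformation
-- out of the identity functor.
eta :: Monad m => Identity a -> m a
eta = return . runIdentity

-- mu : M . M -> M, i.e. join on the explicit composite functor.
mu :: Monad m => Compose m m a -> m a
mu = (>>= id) . getCompose
```

For instance, <code>mu (Compose (Just (Just 3)))</code> evaluates to <code>Just 3</code>.<br />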
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
return a = Return a<br />
Return a >>= f = f a<br />
Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to the free monoid construction on lists. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary-length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
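The free monad also has an analogue of the list fold: any interpretation of the generators <code>f</code> into a monad <code>m</code> extends to all of <code>Free f</code>. (In the <code>free</code> package this function is called <code>foldFree</code>; the definition of <code>Free</code> is repeated here so the snippet stands alone.)<br />

```haskell
{-# LANGUAGE RankNTypes #-}

data Free f a = Return a | Roll (f (Free f a))

-- Interpreting generators (a natural transformation f ~> m) extends
-- to a monad morphism Free f ~> m, just as a function on generators
-- extends to a monoid homomorphism out of a free monoid.
foldFree :: Monad m => (forall x. f x -> m x) -> Free f a -> m a
foldFree _   (Return a) = return a
foldFree phi (Roll ffa) = phi ffa >>= foldFree phi
```

For example, interpreting <code>Free Maybe</code> into <code>Maybe</code> itself with <code>phi = id</code> collapses the nesting: <code>foldFree id (Roll (Just (Return 3)))</code> gives <code>Just 3</code>.<br />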
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F---suppose we took the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, and successor modulo that number are an algebra for the natural signature.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals (for any modular set M, S(h(M-1)) = S(M-1) = M, but h(S(M-1)) = h0 = 0). The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal specification; only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
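A rough Haskell picture of <math>U : Grp \to Mon</math> (using an illustrative <code>Group</code> class, not a standard library one): forgetting the group structure amounts to using only the <code>Monoid</code> superclass.<br />

```haskell
-- An illustrative Group class extending Monoid with inverses.
class Monoid g => Group g where
  inverse :: g -> g

-- The integers under addition form a group.
newtype Z = Z Int deriving (Eq, Show)

instance Semigroup Z where
  Z a <> Z b = Z (a + b)

instance Monoid Z where
  mempty = Z 0

instance Group Z where
  inverse (Z a) = Z (negate a)

-- A function with only a Monoid constraint "sees" just the underlying
-- monoid of any group it is applied to; this is the forgetful direction.
twice :: Monoid m => m -> m
twice x = x <> x
```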
<br />
==== Natural transformations ====<br />
<br />
The wikipedia article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Free_structure&diff=33905Free structure2010-03-02T11:19:36Z<p>Dolio: Add a note about forgetful functors</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Algebra]]<br />
[[Category:Category Theory]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
Free structures originate in abstract algebra, so that provides a good place to start. Some common structures in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold unless <math>x = y</math>, <math>x = e</math> or <math>y = e</math>.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [http://en.wikipedia.org/wiki/Category_theory Category theory] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily categorized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [http://en.wikipedia.org/wiki/Natural_transformation natural transformations] [3] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
return a = Return a<br />
Return a >>= f = f a<br />
Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to free monoids over lists. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F---suppose we took the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, and successor modulo that number are an algebra for the natural signature.<br />
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals (for any modular set M, S(hM) = SM, but h(SM) = h0 = 0). The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Forgetful functors ====<br />
<br />
The term "forgetful functor" has no formal specification; only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:<br />
<br />
* <math>U : Str \to Set</math>, where <math>Str</math> is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.<br />
* <math>U : Grp \to Mon</math>, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".<br />
<br />
==== Natural transformations ====<br />
<br />
The wikipedia article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Free_structure&diff=33904Free structure2010-03-02T11:02:01Z<p>Dolio: Add some end notes about category stuff.</p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Algebra]]<br />
[[Category:Category Theory]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
Free structures originate in abstract algebra, so that provides a good place to start. Some common structures in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold in general; it holds only in special cases, such as <math>x = y</math>, <math>x = e</math> or <math>y = e</math>.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs only in special cases (e.g. xs == ys, xs == [], or ys == [])<br />
-- etc.<br />
</haskell><br />
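The pseudo-Haskell above can also be made concrete; a small runnable sketch spot-checking the monoid laws for lists on sample values:<br />
<br />
```haskell
-- The injection of generators into the free monoid of lists.
inj :: a -> [a]
inj x = [x]

-- Spot check of the unit laws for (++) with [].
unitLaws :: Eq a => [a] -> Bool
unitLaws xs = ([] ++ xs == xs) && (xs ++ [] == xs)

-- Spot check of associativity of (++).
assocLaw :: Eq a => [a] -> [a] -> [a] -> Bool
assocLaw xs ys zs = xs ++ (ys ++ zs) == (xs ++ ys) ++ zs
```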
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [http://en.wikipedia.org/wiki/Category_theory Category theory] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, [1] and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily characterized by underlying set structure, either. Instead, one looks at "forgetful" functors from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
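For monoids, the adjunction has a very concrete reading in Haskell (a sketch; the name <code>extend</code> is chosen here, and the function is essentially <code>foldMap</code>): every function from a set S into the underlying set of a monoid extends uniquely to a monoid homomorphism out of the free monoid [S].<br />
<br />
```haskell
-- Extends f :: s -> n uniquely to a monoid homomorphism [s] -> n.
-- This is the universal property of the free monoid (cf. foldMap).
extend :: Monoid n => (s -> n) -> [s] -> n
extend f = foldr (\x acc -> f x <> acc) mempty
```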
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are [http://en.wikipedia.org/wiki/Natural_transformation natural transformations] [2] between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
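In Haskell terms (a sketch; the names <code>eta</code> and <code>mu</code> are chosen to match the text), the two natural transformations are just <code>return</code> and <code>join</code>:<br />
<br />
```haskell
import Control.Monad (join)

-- The unit eta : I -> M is return (the identity functor applied to a is just a).
eta :: Monad m => a -> m a
eta = return

-- The multiplication mu : M . M -> M is join.
mu :: Monad m => m (m a) -> m a
mu = join
```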
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
return a = Return a<br />
Return a >>= f = f a<br />
Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to the free monoids given by lists above. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary-length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
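A small usage sketch may help (the fold <code>iter</code> below is an assumed helper, not part of the article's definitions): folding a free monad over <code>Maybe</code>, where the number of <code>Roll (Just ...)</code> layers behaves like a natural number wrapping a payload.<br />
<br />
```haskell
-- The article's type, repeated so this sketch is self-contained.
data Free f a = Return a | Roll (f (Free f a))

-- An assumed fold for Free: collapse a value with an algebra for f
-- and a function for the Return payloads.
iter :: Functor f => (f b -> b) -> (a -> b) -> Free f a -> b
iter _   ret (Return a) = ret a
iter alg ret (Roll ffa) = alg (fmap (iter alg ret) ffa)

-- Two layers of Just around a payload.
example :: Free Maybe String
example = Roll (Just (Roll (Just (Return "done"))))

-- Counting the Roll layers: Free Maybe behaves like naturals with a payload.
depth :: Free Maybe a -> Int
depth = iter (maybe 0 (+ 1)) (const 0)
```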
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.<br />
<br />
=== Notes ===<br />
<br />
==== Universal constructions ====<br />
<br />
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:<br />
<br />
<haskell><br />
data T = C1 A B C | C2 D E T<br />
</haskell><br />
<br />
we consider the following:<br />
<br />
* A functor <math>F : Hask \to Hask</math>, <math>F X = A \times B \times C + D \times E \times X</math><br />
* F-algebras which are:<br />
** An object <math>A \in Hask</math><br />
** An action <math>a : FA \to A</math><br />
* Algebra homomorphisms <math>(A, a) \to (B, b)</math><br />
** These are given by <math>h : A \to B</math> such that <math> b \circ Fh = h \circ a</math><br />
<br />
The datatype <code>T</code> is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the [[fold]] or 'catamorphism' for the datatype.<br />
<br />
Intuitively, though, the fact that <code>T</code> is an F-algebra means that it is in some sense closed under forming terms of shape F. Suppose, for instance, we took the simpler signature <code>FX = 1 + X</code> of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras: for instance, the naturals modulo some finite number, with successor taken modulo that number, form an algebra for this signature.<br />
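This can be sketched generically in Haskell with a fixed-point type (the names <code>Fix</code>, <code>cata</code> and <code>NatF</code> below are conventional, not from the text): the initial algebra of a functor is <code>Fix f</code>, and the unique homomorphism out of it is the catamorphism.<br />
<br />
```haskell
-- The initial algebra of a functor, and its fold (catamorphism).
newtype Fix f = Fix (f (Fix f))

cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg (Fix fa) = alg (fmap (cata alg) fa)

-- The natural-number signature F X = 1 + X from the text.
data NatF x = Z | S x

instance Functor NatF where
  fmap _ Z     = Z
  fmap g (S x) = S (g x)

-- The unique homomorphism into the algebra on Int sending Z to 0
-- and S to (+1).
toInt :: Fix NatF -> Int
toInt = cata alg
  where
    alg Z     = 0
    alg (S n) = n + 1
```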
<br />
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism <math>h : 2 \to 3</math>:<br />
<br />
* <math>h0=0 \,\, ;\, h1=0</math><br />
** <math>S(h1) = S0 = 1\,</math> but <math>h(S1) = h0 = 0 \neq 1</math><br />
* <math> h0=0 \,\,;\, h1=1</math><br />
** <math>S(h1) = S1 = 2\,</math> but <math>h(S1) = h0 = 0 \neq 2</math><br />
* <math> h0=0 \,\,;\, h1=2</math><br />
** <math>S(h0) = S0 = 1\,</math> but <math>h(S0) = h1 = 2 \neq 1</math><br />
* <math> h0 \neq 0 </math><br />
** <math> 0 = Z \neq hZ = h0</math><br />
<br />
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying ''none'' of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure ''is'' allowed to identify those, ''but no more'' than those.<br />
<br />
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals: for the modular set 2, a homomorphism h into the naturals must send 0 to 0 (since hZ = Z), so h1 = h(S0) = S(h0) = 1; but then h0 = h(S1) = S(h1) = 2, a contradiction. The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of [http://en.wikipedia.org/wiki/Initial_algebra#Final_coalgebra coalgebras].<br />
<br />
==== Natural transformations ====<br />
<br />
The wikipedia article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:<br />
<br />
<haskell><br />
trans :: forall a. F a -> G a<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Free_monad&diff=33901Free monad2010-03-02T04:27:21Z<p>Dolio: </p>
<hr />
<div>[[Category:Glossary]]<br />
<br />
A free monad generated by a functor is a special case of the more general free (algebraic) structure over some underlying structure. For an explanation of the general case, culminating with an explanation of free monads, see the article on [[free structure]]s.</div>Doliohttps://wiki.haskell.org/index.php?title=Free_structure&diff=33900Free structure2010-03-02T04:24:21Z<p>Dolio: </p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Algebra]]<br />
[[Category:Category Theory]]<br />
[[Category:Mathematics]]<br />
<br />
=== Introduction ===<br />
<br />
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.<br />
<br />
=== Algebra ===<br />
<br />
==== What sort of structures are we talking about? ====<br />
<br />
Free structures originate in abstract algebra, so that provides a good place to start. Some common structures in algebra are:<br />
<br />
* '''[[Monoid]]s'''<br />
** consisting of<br />
*** A set <math>M</math><br />
*** An identity <math>e \in M</math><br />
*** A binary operation <math>* : M \times M \to M</math><br />
** And satisfying the equations<br />
*** <math> x * (y * z) = (x * y) * z </math><br />
*** <math> e * x = x = x * e </math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Group_(mathematics) Groups]'''<br />
** consisting of<br />
*** A monoid <math>(M, e, *)</math><br />
*** An additional unary operation <math>\,^{-1} : M \to M</math><br />
** satisfying<br />
*** <math> x * x^{-1} = e = x^{-1} * x</math><br />
<br />
* '''[http://en.wikipedia.org/wiki/Ring_(mathematics) Rings]'''<br />
** consisting of<br />
*** A set <math>R</math><br />
*** A unary operation <math>- : R \to R</math><br />
*** Two binary operations <math> +, * : R \times R \to R</math><br />
*** Distinguished elements <math>0, 1 \in R</math><br />
** such that<br />
*** <math>(R, 0, +, -)</math> is a group<br />
*** <math>(R, 1, *)</math> is a monoid<br />
*** <math> x + y = y + x </math><br />
*** <math> (x + y)*z = x*z + y*z </math><br />
*** <math> x * (y + z) = x*y + x*z </math><br />
<br />
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.<br />
<br />
==== Free algebraic structures ====<br />
<br />
Now, given such a description, we can talk about the free structure over a particular set <math>S</math> (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given <math>S</math>, we want to find some set <math>M</math>, together with appropriate operations to make <math>M</math> the structure in question, along with the following two criteria:<br />
<br />
* There is an injection <math>i : S \to M</math><br />
* The structure generated is as 'simple' as possible.<br />
** <math>M</math> should contain only elements that are required to exist by <math>i</math> and the operations of the structure.<br />
** The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.<br />
<br />
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation <math>x * y = y * x</math> should not hold in general; it holds only in special cases, such as <math>x = y</math>, <math>x = e</math> or <math>y = e</math>.<br />
<br />
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):<br />
<br />
<haskell><br />
M = [S]<br />
e = []<br />
* = (++)<br />
<br />
i : S -> [S]<br />
i x = [x] -- i x = x : []<br />
<br />
[] ++ xs = xs = xs ++ []<br />
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs<br />
<br />
xs ++ ys = ys ++ xs only in special cases (e.g. xs == ys, xs == [], or ys == [])<br />
-- etc.<br />
</haskell><br />
<br />
=== The category connection ===<br />
<br />
==== Free structure functors ====<br />
<br />
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. [http://en.wikipedia.org/wiki/Category_theory Category theory] gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, and thus, freeness can be defined in terms of such universal constructions.<br />
<br />
In its full categorical generality, freeness isn't necessarily characterized by underlying set structure, either. Instead, one looks at "forgetful" functors from the category of structures to some other category. For our free monoids above, it'd be:<br />
<br />
* <math>U : Mon \to Set</math><br />
<br />
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an [http://en.wikipedia.org/wiki/Adjunction adjoint functor]:<br />
<br />
* <math>F : Set \to Mon</math>, <math> F</math> ⊣ <math>U </math><br />
<br />
<math>F</math> being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.<br />
<br />
==== Algebraic constructions in a category ====<br />
<br />
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary [http://en.wikipedia.org/wiki/Monoidal_category monoidal category]. Such categories have a tensor product <math>\otimes</math> of objects, with a unit object <math>I</math> (both of which satisfy various laws).<br />
<br />
A monoid object in a monoidal category is then:<br />
<br />
* An object <math>M</math><br />
* A unit 'element' <math>e : I \to M</math><br />
* A multiplication <math>m : M \otimes M \to M</math><br />
<br />
such that:<br />
<br />
* <math>m \circ (id_{M} \otimes e) = u_l</math><br />
* <math>m \circ (e \otimes id_M) = u_r</math><br />
* <math> m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha</math><br />
<br />
Where:<br />
<br />
* <math>u_l : M \otimes I \to M</math> and <math>u_r : I \otimes M \to M</math> are the identity isomorphisms for the monoidal category, and<br />
* <math> \alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M </math> is part of the associativity isomorphism of the category.<br />
<br />
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.<br />
<br />
==== Monads ====<br />
<br />
One example of a class of monoid objects happens to be [[monad (sans metaphors)|monads]]. Given a base category <math>C</math>, we have the monoidal category <math>C^C</math>:<br />
<br />
* Objects are endofunctors <math>F : C \to C</math><br />
* Morphisms are natural transformations between the functors<br />
* The tensor product is composition: <math>F \otimes G = F \circ G</math><br />
* The identity object is the identity functor, <math>I</math>, taking objects and morphisms to themselves<br />
<br />
If we then specialize the definition of a monoid object to this situation, we get:<br />
<br />
* An endofunctor <math>M : C \to C</math><br />
* A natural transformation <math>\eta : I \to M</math><br />
* A natural transformation <math>\mu : M \circ M \to M</math><br />
<br />
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.<br />
<br />
==== Free Monads ====<br />
<br />
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, <math>F : C \to C</math>. We then expect there to be a natural transformation <math>i : F \to M</math>, 'injecting' the functor into the monad.<br />
<br />
In Haskell, we can write the type of free monads over Haskell endofunctors as follows:<br />
<br />
<haskell><br />
data Free f a = Return a | Roll (f (Free f a))<br />
<br />
instance Functor f => Monad (Free f) where<br />
return a = Return a<br />
Return a >>= f = f a<br />
Roll ffa >>= f = Roll $ fmap (>>= f) ffa<br />
<br />
-- join (Return fa) = fa<br />
-- join (Roll ffa) = Roll (fmap join ffa)<br />
<br />
inj :: Functor f => f a -> Free f a<br />
inj fa = Roll $ fmap Return fa<br />
</haskell><br />
<br />
This should bear some resemblance to the free monoids given by lists above. <code>Return</code> is analogous to <code>[]</code>, and <code>Roll</code> is analogous to <code>(:)</code>. Lists let us create arbitrary-length strings of elements from some set, while <code>Free f</code> lets us create structures involving <code>f</code> composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). <code>Return</code> gives our type a way to handle the 0-ary composition of <code>f</code> (as <code>[]</code> is the 0-length string), while <code>Roll</code> is the way to extend the nesting level by one (just as <code>(:)</code> lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:<br />
<br />
<haskell><br />
inj_list x = (:) x []<br />
inj_free fx = Roll (fmap Return fx)<br />
</haskell><br />
<br />
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.<br />
<br />
=== Further reading ===<br />
<br />
For those looking for an introduction to the necessary category theory used above, Steve Awodey's [http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/Awodey.pdf Category Theory] is a popular, freely available reference.</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Astar/Solution_Dolio&diff=25205Haskell Quiz/Astar/Solution Dolio2008-12-24T14:32:36Z<p>Dolio: use foldl' for maximum speed</p>
<hr />
<div>[[Category:Haskell Quiz solutions|Astar]]<br />
<br />
A* requires keeping a priority queue of places to visit. This can be done with a simple sorted list, but I decided to make a PriorityQueue data type for use in the algorithm instead. The implementation uses lazy pairing heaps from Chris Okasaki's '''Purely Functional Data Structures'''.<br />
<br />
<haskell><br />
module PriorityQueue (<br />
PriorityQueue,<br />
empty,<br />
singleton,<br />
fromList,<br />
null,<br />
deleteFindMin,<br />
deleteMin,<br />
findMin,<br />
insert,<br />
union<br />
) where<br />
<br />
import Prelude hiding (null)<br />
import Data.List (foldl')<br />
<br />
data Ord k => PriorityQueue k a = Nil | Branch k a (PriorityQueue k a) (PriorityQueue k a)<br />
<br />
empty :: Ord k => PriorityQueue k a<br />
empty = Nil<br />
<br />
singleton :: Ord k => k -> a -> PriorityQueue k a<br />
singleton k a = Branch k a Nil Nil<br />
<br />
fromList :: Ord k => [(k,a)] -> PriorityQueue k a<br />
fromList = foldl' (\q (k,a) -> singleton k a `union` q) empty<br />
<br />
null :: Ord k => PriorityQueue k a -> Bool<br />
null Nil = True<br />
null _ = False<br />
<br />
deleteFindMin :: Ord k => PriorityQueue k a -> ((k,a), PriorityQueue k a)<br />
deleteFindMin Nil = error "Empty heap."<br />
deleteFindMin (Branch k a l r) = ((k,a), union l r)<br />
<br />
deleteMin :: Ord k => PriorityQueue k a -> PriorityQueue k a<br />
deleteMin h = snd (deleteFindMin h)<br />
<br />
findMin :: Ord k => PriorityQueue k a -> (k, a)<br />
findMin h = fst (deleteFindMin h)<br />
<br />
insert :: Ord k => k -> a -> PriorityQueue k a -> PriorityQueue k a<br />
insert k a h = union (singleton k a) h<br />
<br />
union :: Ord k => PriorityQueue k a -> PriorityQueue k a -> PriorityQueue k a<br />
union l Nil = l<br />
union Nil r = r<br />
union l@(Branch kl _ _ _) r@(Branch kr _ _ _)<br />
| kl <= kr = link l r<br />
| otherwise = link r l<br />
<br />
link (Branch k a Nil m) r = Branch k a r m<br />
link (Branch k a ll lr) r = Branch k a Nil (union (union r ll) lr)<br />
</haskell><br />
<br />
Not all the functions from data structures in the standard library (Data.Map, Data.Set, etc.) are provided; I only wrote those that are needed for the algorithm. However, this could be extended easily.<br />
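A quick demonstration of how the queue behaves (a condensed, hypothetical reworking of the module above, so that the sketch is self-contained): repeatedly taking the minimum yields the entries in key order.<br />
<br />
```haskell
-- Condensed sketch of the lazy pairing heap, for demonstration only.
data PQ k a = Nil | Branch k a (PQ k a) (PQ k a)

union :: Ord k => PQ k a -> PQ k a -> PQ k a
union l Nil = l
union Nil r = r
union l@(Branch kl _ _ _) r@(Branch kr _ _ _)
  | kl <= kr  = link l r
  | otherwise = link r l
  where
    link (Branch k a Nil m) t = Branch k a t m
    link (Branch k a ll lr) t = Branch k a Nil (union (union t ll) lr)
    link Nil _ = error "unreachable: link is only called on branches"

fromList :: Ord k => [(k, a)] -> PQ k a
fromList = foldr (\(k, a) q -> union (Branch k a Nil Nil) q) Nil

-- Drain the heap in priority order.
toSortedList :: Ord k => PQ k a -> [(k, a)]
toSortedList Nil = []
toSortedList (Branch k a l r) = (k, a) : toSortedList (union l r)
```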
<br />
The rest is just a general A* function, which takes a starting place, and functions for successors, testing for completion, cost of a place, and heuristic estimation from a place to the end, returning the path taken (a list from end to start). The rest of the code deals with the specifics of the ASCII map:<br />
<br />
<haskell><br />
{-# OPTIONS_GHC -fglasgow-exts #-}<br />
<br />
module Main where<br />
import Control.Monad (guard, liftM2)<br />
import Control.Monad.Instances<br />
import Data.List (findIndex)<br />
import qualified Data.Set as S<br />
import qualified Data.Map as M<br />
import qualified PriorityQueue as Q<br />
<br />
type Point = (Int, Int)<br />
type Map = [[Char]]<br />
<br />
find :: Char -> Map -> Point<br />
find c m = find' 0 m<br />
where find' _ [] = error "Can't find tile."<br />
find' y (h:t)<br />
| Just x <- findIndex (==c) h = (y, x)<br />
| otherwise = find' (y+1) t<br />
<br />
heuristic :: Point -> Point -> Int<br />
heuristic (x, y) (u, v) = abs (x - u) `max` abs (y - v)<br />
<br />
successor :: Map -> Point -> [Point]<br />
successor m (x,y) = do u <- [x + 1, x, x - 1]<br />
v <- [y + 1, y, y - 1]<br />
guard (0 <= u && u < length m)<br />
guard (0 <= v && v < length (head m))<br />
guard (u /= x || y /= v)<br />
guard (m !! u !! v /= '~')<br />
return (u, v)<br />
<br />
astar start succ end cost heur <br />
= astar' (S.singleton start) (Q.singleton (heur start) [start])<br />
where<br />
astar' seen q<br />
| Q.null q = error "No Solution."<br />
| end n = next<br />
| otherwise = astar' seen' q'<br />
where<br />
((c,next), dq) = Q.deleteFindMin q<br />
n = head next<br />
succs = filter (`S.notMember` seen) $ succ n<br />
costs = map ((+ c) . (subtract $ heur n) . liftM2 (+) cost heur) succs<br />
q' = dq `Q.union` Q.fromList (zip costs (map (:next) succs))<br />
seen' = seen `S.union` S.fromList succs<br />
<br />
path :: [[Char]] -> [Point] -> [[Char]]<br />
path m l = iterY m l 0<br />
where iterY [] _ _ = []<br />
iterY (h:t) l n = iterX h l n 0 : iterY t l (n+1)<br />
iterX [] _ _ _ = []<br />
iterX (h:t) l n m = (if (n,m) `elem` l then '#' else h) : iterX t l n (m+1)<br />
<br />
doit s = unlines . path m $ astar start succ (== end) cost h<br />
where m = lines s<br />
start = find '@' m<br />
end = find 'X' m<br />
succ = successor m<br />
h = heuristic end<br />
cost (x, y) = costsM M.! (m !! x !! y)<br />
costsM = M.fromList [('@',1),('x',1),('X',1),('.',1),('*',2),('^',3)]<br />
<br />
main = interact doit<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Astar/Solution_Dolio&diff=25204Haskell Quiz/Astar/Solution Dolio2008-12-24T14:26:59Z<p>Dolio: Fixed a problem in link that caused a massive slowdown.</p>
<hr />
<div>[[Category:Haskell Quiz solutions|Astar]]<br />
<br />
A* requires keeping a priority queue of places to visit. This can be done with a simple sorted list, but I decided to make a PriorityQueue data type for use in the algorithm instead. The implementation uses lazy pairing heaps from Chris Okasaki's '''Purely Functional Data Structures'''.<br />
<br />
<haskell><br />
module PriorityQueue (<br />
PriorityQueue,<br />
empty,<br />
singleton,<br />
fromList,<br />
null,<br />
deleteFindMin,<br />
deleteMin,<br />
findMin,<br />
insert,<br />
union<br />
) where<br />
<br />
import Prelude hiding (null)<br />
<br />
data Ord k => PriorityQueue k a = Nil | Branch k a (PriorityQueue k a) (PriorityQueue k a)<br />
<br />
empty :: Ord k => PriorityQueue k a<br />
empty = Nil<br />
<br />
singleton :: Ord k => k -> a -> PriorityQueue k a<br />
singleton k a = Branch k a Nil Nil<br />
<br />
fromList :: Ord k => [(k,a)] -> PriorityQueue k a<br />
fromList = foldr (\(k,a) q -> singleton k a `union` q) empty<br />
<br />
null :: Ord k => PriorityQueue k a -> Bool<br />
null Nil = True<br />
null _ = False<br />
<br />
deleteFindMin :: Ord k => PriorityQueue k a -> ((k,a), PriorityQueue k a)<br />
deleteFindMin Nil = error "Empty heap."<br />
deleteFindMin (Branch k a l r) = ((k,a), union l r)<br />
<br />
deleteMin :: Ord k => PriorityQueue k a -> PriorityQueue k a<br />
deleteMin h = snd (deleteFindMin h)<br />
<br />
findMin :: Ord k => PriorityQueue k a -> (k, a)<br />
findMin h = fst (deleteFindMin h)<br />
<br />
insert :: Ord k => k -> a -> PriorityQueue k a -> PriorityQueue k a<br />
insert k a h = union (singleton k a) h<br />
<br />
union :: Ord k => PriorityQueue k a -> PriorityQueue k a -> PriorityQueue k a<br />
union l Nil = l<br />
union Nil r = r<br />
union l@(Branch kl _ _ _) r@(Branch kr _ _ _)<br />
| kl <= kr = link l r<br />
| otherwise = link r l<br />
<br />
link (Branch k a Nil m) r = Branch k a r m<br />
link (Branch k a ll lr) r = Branch k a Nil (union (union r ll) lr)<br />
</haskell><br />
<br />
Not all the functions from data structures in the standard library (Data.Map, Data.Set, etc.) are provided; I only wrote those that are needed for the algorithm. However, this could be extended easily.<br />
<br />
The rest is just a general A* function, which takes a starting place, and functions for successors, testing for completion, cost of a place, and heuristic estimation from a place to the end, returning the path taken (a list from end to start). The rest of the code deals with the specifics of the ASCII map:<br />
<br />
<haskell><br />
{-# OPTIONS_GHC -fglasgow-exts #-}<br />
<br />
module Main where<br />
import Control.Monad (guard, liftM2)<br />
import Control.Monad.Instances<br />
import Data.List (findIndex)<br />
import qualified Data.Set as S<br />
import qualified Data.Map as M<br />
import qualified PriorityQueue as Q<br />
<br />
type Point = (Int, Int)<br />
type Map = [[Char]]<br />
<br />
find :: Char -> Map -> Point<br />
find c m = find' 0 m<br />
where find' _ [] = error "Can't find tile."<br />
find' y (h:t)<br />
| Just x <- findIndex (==c) h = (y, x)<br />
| otherwise = find' (y+1) t<br />
<br />
heuristic :: Point -> Point -> Int<br />
heuristic (x, y) (u, v) = abs (x - u) `max` abs (y - v)<br />
<br />
successor :: Map -> Point -> [Point]<br />
successor m (x,y) = do u <- [x + 1, x, x - 1]<br />
v <- [y + 1, y, y - 1]<br />
guard (0 <= u && u < length m)<br />
guard (0 <= v && v < length (head m))<br />
guard (u /= x || y /= v)<br />
guard (m !! u !! v /= '~')<br />
return (u, v)<br />
<br />
astar start succ end cost heur <br />
= astar' (S.singleton start) (Q.singleton (heur start) [start])<br />
where<br />
astar' seen q<br />
| Q.null q = error "No Solution."<br />
| end n = next<br />
| otherwise = astar' seen' q'<br />
where<br />
((c,next), dq) = Q.deleteFindMin q<br />
n = head next<br />
succs = filter (`S.notMember` seen) $ succ n<br />
costs = map ((+ c) . (subtract $ heur n) . liftM2 (+) cost heur) succs<br />
q' = dq `Q.union` Q.fromList (zip costs (map (:next) succs))<br />
seen' = seen `S.union` S.fromList succs<br />
<br />
path :: [[Char]] -> [Point] -> [[Char]]<br />
path m l = iterY m l 0<br />
where iterY [] _ _ = []<br />
iterY (h:t) l n = iterX h l n 0 : iterY t l (n+1)<br />
iterX [] _ _ _ = []<br />
iterX (h:t) l n m = (if (n,m) `elem` l then '#' else h) : iterX t l n (m+1)<br />
<br />
doit s = unlines . path m $ astar start succ (== end) cost h<br />
where m = lines s<br />
start = find '@' m<br />
end = find 'X' m<br />
succ = successor m<br />
h = heuristic end<br />
cost (x, y) = costsM M.! (m !! x !! y)<br />
costsM = M.fromList [('@',1),('x',1),('X',1),('.',1),('*',2),('^',3)]<br />
<br />
main = interact doit<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=The_Knights_Tour&diff=24367The Knights Tour2008-12-01T02:45:23Z<p>Dolio: showBoard tweaks</p>
<hr />
<div>[[Category:Tutorials]]<br />
<br />
<br />
[http://en.wikipedia.org/wiki/Knight's_tour The Knight's Tour] is a<br />
mathematical problem involving a knight on a chessboard. The knight is<br />
placed on the empty board and, moving according to the rules of chess,<br />
must visit each square exactly once.<br />
<br />
Here are some Haskell implementations.<br />
<br />
__TOC__<br />
<br />
== One ==<br />
<br />
<haskell><br />
--<br />
-- Quick implementation by dmwit on #haskell<br />
-- Faster, shorter, uses less memory than the Python version.<br />
--<br />
<br />
import Control.Arrow<br />
import Control.Monad<br />
import Data.List<br />
import Data.Maybe<br />
import Data.Ord<br />
import System.Environment<br />
import qualified Data.Map as M<br />
<br />
sortOn f = map snd . sortBy (comparing fst) . map (f &&& id)<br />
<br />
clip coord size = coord >= 0 && coord < size<br />
valid size solution xy@(x, y) = and [clip x size, clip y size, isNothing (M.lookup xy solution)]<br />
neighbors size solution xy = length . filter (valid size solution) $ sequence moves xy<br />
<br />
moves = do<br />
f <- [(+), subtract]<br />
g <- [(+), subtract]<br />
(x, y) <- [(1, 2), (2, 1)]<br />
[f x *** g y]<br />
<br />
solve size solution n xy = do<br />
guard (valid size solution xy)<br />
let solution' = M.insert xy n solution<br />
sortedMoves = sortOn (neighbors size solution) (sequence moves xy)<br />
if n == size * size<br />
then [solution']<br />
else sortedMoves >>= solve size solution' (n+1)<br />
<br />
printBoard size solution = board [0..size-1] where<br />
sqSize = size * size<br />
elemSize = length (show sqSize)<br />
separator = intercalate (replicate elemSize '-') (replicate (size + 1) "+")<br />
pad n s = replicate (elemSize - length s) ' ' ++ s<br />
elem xy = pad elemSize . show $ solution M.! xy<br />
line y = concat . intersperseWrap "|" $ [elem (x, y) | x <- [0..size-1]]<br />
board = unlines . intersperseWrap separator . map line<br />
intersperseWrap s ss = s : intersperse s ss ++ [s]<br />
<br />
go size = case solve size M.empty 1 (0, 0) of<br />
[] -> "No solution found"<br />
(s:_) -> printBoard size s<br />
<br />
main = do<br />
args <- getArgs<br />
name <- getProgName<br />
putStrLn $ case map reads args of<br />
[] -> go 8<br />
[[(size, "")]] -> go size<br />
_ -> "Usage: " ++ name ++ " <size>"<br />
<br />
</haskell><br />
<br />
<br />
== Using Continuations ==<br />
<br />
An efficient version (some 10x faster than the example Python solution) using continuations.<br />
<br />
This is about as direct a translation of the Python algorithm as you'll get without sticking the whole thing in IO. The Python version prints the board and exits immediately upon finding a solution, rolling its changes back whenever a branch fails. This version instead sets up an exit continuation using callCC and calls it to return the first solution found immediately. The Logic version below takes around 50% more time.<br />
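The callCC exit pattern can be seen on its own in a small sketch. The <code>findFirst</code> helper below is hypothetical (not part of the tour code): it searches a list and jumps out through the captured continuation as soon as a match is found, skipping the rest of the traversal, just as the tour jumps out with its first solution.

```haskell
import Control.Monad (when)
import Control.Monad.Cont (callCC, runCont)

-- Hypothetical helper illustrating the exit pattern: calling `exit`
-- abandons the rest of the mapM_ traversal and makes its argument
-- the result of the whole callCC block.
findFirst :: (a -> Bool) -> [a] -> Maybe a
findFirst p xs = flip runCont id . callCC $ \exit -> do
  mapM_ (\x -> when (p x) (exit (Just x))) xs
  return Nothing
```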
<br />
<haskell><br />
import Control.Monad.Cont<br />
import Control.Monad.ST<br />
<br />
import Data.Array.ST<br />
import Data.List<br />
import Data.Ord<br />
import Data.Ix<br />
<br />
import System.Environment<br />
<br />
type Square = (Int, Int)<br />
type Board s = STUArray s (Int,Int) Int<br />
type ChessM r s = ContT r (ST s)<br />
type ChessK r s = String -> ChessM r s ()<br />
<br />
successors :: Int -> Board s -> Square -> ChessM r s [Square]<br />
successors n b = sortWith (fmap length . succs) <=< succs<br />
where<br />
sortWith f l = map fst `fmap` sortBy (comparing snd)<br />
`fmap` mapM (\x -> (,) x `fmap` f x) l<br />
succs (i,j) = filterM (empty b)<br />
[ (i', j') | (dx,dy) <- [(1,2),(2,1)]<br />
, i' <- [i+dx,i-dx] , j' <- [j+dy, j-dy]<br />
, inRange ((1,1),(n,n)) (i',j') ]<br />
<br />
empty :: Board s -> Square -> ChessM r s Bool<br />
empty b s = fmap (<1) . lift $ readArray b s<br />
<br />
mark :: Square -> Int -> Board s -> ChessM r s ()<br />
mark s k b = lift $ writeArray b s k<br />
<br />
tour :: Int -> Int -> ChessK r s -> Square -> Board s -> ChessM r s ()<br />
tour n k exit s b | k > n*n = showBoard n b >>= exit<br />
| otherwise = successors n b s >>=<br />
mapM_ (\x -> do mark x k b<br />
tour n (k+1) exit x b<br />
-- failed<br />
mark x 0 b)<br />
<br />
showBoard :: Int -> Board s -> ChessM r s String<br />
showBoard n b = fmap unlines . forM [1..n] $ \i -><br />
fmap unwords . forM [1..n] $ \j -><br />
pad `fmap` lift (readArray b (i,j))<br />
where<br />
k = ceiling . logBase 10 . fromIntegral $ n*n + 1<br />
pad i = let s = show i in replicate (k-length s) ' ' ++ s<br />
<br />
main = do (n:_) <- map read `fmap` getArgs<br />
s <- stToIO . flip runContT return $<br />
(do b <- lift $ newArray ((1,1),(n,n)) 0<br />
mark (1,1) 1 b<br />
callCC $ \k -> tour n 2 k (1,1) b >> fail "No solution!")<br />
putStrLn s<br />
<br />
</haskell><br />
<br />
== LogicT monad ==<br />
<br />
A very short implementation using [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logict the LogicT monad]<br />
<br />
16 lines of code. 7 imports.<br />
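The search machinery here is `msum . map return` (turn a list of candidate moves into a nondeterministic choice) and `observe` (pull out the first answer). The same shape works in the plain list monad, which makes a convenient standalone sketch; the toy Pythagorean-triple search below is not part of the tour code.

```haskell
import Control.Monad (guard, msum)

-- The `msum . map return` idiom, specialized to the list monad
-- (for lists it amounts to the identity on the candidate list).
choose :: [a] -> [a]
choose = msum . map return

-- A toy search in the same shape as the tour: head plays the role
-- of Logic's observe, returning the first success the search finds.
firstTriple :: (Int, Int, Int)
firstTriple = head $ do
  a <- choose [1..20]
  b <- choose [a..20]
  c <- choose [b..20]
  guard (a*a + b*b == c*c)
  return (a, b, c)
```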
<br />
<haskell><br />
import Control.Monad.Logic<br />
<br />
import Data.List<br />
import Data.Maybe<br />
import Data.Ord<br />
import Data.Ix<br />
import qualified Data.Map as Map<br />
import System.Environment<br />
<br />
successors n b = sortWith (length . succs) . succs<br />
where sortWith f = map fst . sortBy (comparing snd) . map (\x -> (x, f x))<br />
succs (i,j) = [ (i', j') | (dx,dy) <- [(1,2),(2,1)]<br />
, i' <- [i+dx,i-dx] , j' <- [j+dy, j-dy]<br />
, isNothing (Map.lookup (i',j') b)<br />
, inRange ((1,1),(n,n)) (i',j') ]<br />
<br />
tour n k s b | k > n*n = return b<br />
| otherwise = do next <- msum . map return $ successors n b s<br />
tour n (k+1) next $ Map.insert next k b<br />
<br />
showBoard n b = unlines . map (\i -> unwords . map (\j -><br />
pad . fromJust $ Map.lookup (i,j) b) $ [1..n]) $ [1..n]<br />
where k = ceiling . logBase 10 . fromIntegral $ n*n + 1<br />
pad i = let s = show i in replicate (k-length s) ' ' ++ s<br />
<br />
main = do (n:_) <- map read `fmap` getArgs<br />
let b = observe . tour n 2 (1,1) $ Map.singleton (1,1) 1<br />
putStrLn $ showBoard n b<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=The_Knights_Tour&diff=24366The Knights Tour2008-12-01T02:42:18Z<p>Dolio: Logic monad tweaks</p>
<hr />
<div>[[Category:Tutorials]]<br />
<br />
<br />
[http://en.wikipedia.org/wiki/Knight's_tour The Knight's Tour] is a<br />
mathematical problem involving a knight on a chessboard. The knight is<br />
placed on the empty board and, moving according to the rules of chess,<br />
must visit each square exactly once.<br />
<br />
Here are some Haskell implementations.<br />
<br />
__TOC__<br />
<br />
== One ==<br />
<br />
<haskell><br />
--<br />
-- Quick implementation by dmwit on #haskell<br />
-- Faster, shorter, uses less memory than the Python version.<br />
--<br />
<br />
import Control.Arrow<br />
import Control.Monad<br />
import Data.List<br />
import Data.Maybe<br />
import Data.Ord<br />
import System.Environment<br />
import qualified Data.Map as M<br />
<br />
sortOn f = map snd . sortBy (comparing fst) . map (f &&& id)<br />
<br />
clip coord size = coord >= 0 && coord < size<br />
valid size solution xy@(x, y) = and [clip x size, clip y size, isNothing (M.lookup xy solution)]<br />
neighbors size solution xy = length . filter (valid size solution) $ sequence moves xy<br />
<br />
moves = do<br />
f <- [(+), subtract]<br />
g <- [(+), subtract]<br />
(x, y) <- [(1, 2), (2, 1)]<br />
[f x *** g y]<br />
<br />
solve size solution n xy = do<br />
guard (valid size solution xy)<br />
let solution' = M.insert xy n solution<br />
sortedMoves = sortOn (neighbors size solution) (sequence moves xy)<br />
if n == size * size<br />
then [solution']<br />
else sortedMoves >>= solve size solution' (n+1)<br />
<br />
printBoard size solution = board [0..size-1] where<br />
sqSize = size * size<br />
elemSize = length (show sqSize)<br />
separator = intercalate (replicate elemSize '-') (replicate (size + 1) "+")<br />
pad n s = replicate (elemSize - length s) ' ' ++ s<br />
elem xy = pad elemSize . show $ solution M.! xy<br />
line y = concat . intersperseWrap "|" $ [elem (x, y) | x <- [0..size-1]]<br />
board = unlines . intersperseWrap separator . map line<br />
intersperseWrap s ss = s : intersperse s ss ++ [s]<br />
<br />
go size = case solve size M.empty 1 (0, 0) of<br />
[] -> "No solution found"<br />
(s:_) -> printBoard size s<br />
<br />
main = do<br />
args <- getArgs<br />
name <- getProgName<br />
putStrLn $ case map reads args of<br />
[] -> go 8<br />
[[(size, "")]] -> go size<br />
_ -> "Usage: " ++ name ++ " <size>"<br />
<br />
</haskell><br />
<br />
<br />
== Using Continuations ==<br />
<br />
An efficient version (some 10x faster than the example Python solution) using continuations.<br />
<br />
This is about as direct a translation of the Python algorithm as you'll get without sticking the whole thing in IO. The Python version prints the board and exits immediately upon finding it, so it can roll back changes if that doesn't happen. Instead, this version sets up an exit continuation using callCC and calls that to immediately return the first solution found. The Logic version below takes around 50% more time.<br />
<br />
<haskell><br />
import Control.Monad.Cont<br />
import Control.Monad.ST<br />
<br />
import Data.Array.ST<br />
import Data.List<br />
import Data.Ord<br />
import Data.Ix<br />
<br />
import System.Environment<br />
<br />
type Square = (Int, Int)<br />
type Board s = STUArray s (Int,Int) Int<br />
type ChessM r s = ContT r (ST s)<br />
type ChessK r s = String -> ChessM r s ()<br />
<br />
successors :: Int -> Board s -> Square -> ChessM r s [Square]<br />
successors n b = sortWith (fmap length . succs) <=< succs<br />
where<br />
sortWith f l = map fst `fmap` sortBy (comparing snd)<br />
`fmap` mapM (\x -> (,) x `fmap` f x) l<br />
succs (i,j) = filterM (empty b)<br />
[ (i', j') | (dx,dy) <- [(1,2),(2,1)]<br />
, i' <- [i+dx,i-dx] , j' <- [j+dy, j-dy]<br />
, inRange ((1,1),(n,n)) (i',j') ]<br />
<br />
empty :: Board s -> Square -> ChessM r s Bool<br />
empty b s = fmap (<1) . lift $ readArray b s<br />
<br />
mark :: Square -> Int -> Board s -> ChessM r s ()<br />
mark s k b = lift $ writeArray b s k<br />
<br />
tour :: Int -> Int -> ChessK r s -> Square -> Board s -> ChessM r s ()<br />
tour n k exit s b | k > n*n = showBoard n b >>= exit<br />
| otherwise = successors n b s >>=<br />
mapM_ (\x -> do mark x k b<br />
tour n (k+1) exit x b<br />
-- failed<br />
mark x 0 b)<br />
<br />
showBoard :: Int -> Board s -> ChessM r s String<br />
showBoard n b = fmap unlines . forM [1..n] $ \i -><br />
fmap unwords . forM [1..n] $ \j -><br />
pad `fmap` lift (readArray b (i,j))<br />
where<br />
k = floor . log . fromIntegral $ n*n<br />
pad i = let s = show i in replicate (k-length s) ' ' ++ s<br />
<br />
main = do (n:_) <- map read `fmap` getArgs<br />
s <- stToIO . flip runContT return $<br />
(do b <- lift $ newArray ((1,1),(n,n)) 0<br />
mark (1,1) 1 b<br />
callCC $ \k -> tour n 2 k (1,1) b >> fail "No solution!")<br />
putStrLn s<br />
<br />
</haskell><br />
<br />
== LogicT monad ==<br />
<br />
A very short implementation using [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logict the LogicT monad]<br />
<br />
16 lines of code. 7 imports.<br />
<br />
<haskell><br />
import Control.Monad.Logic<br />
<br />
import Data.List<br />
import Data.Maybe<br />
import Data.Ord<br />
import Data.Ix<br />
import qualified Data.Map as Map<br />
import System.Environment<br />
<br />
successors n b = sortWith (length . succs) . succs<br />
where sortWith f = map fst . sortBy (comparing snd) . map (\x -> (x, f x))<br />
succs (i,j) = [ (i', j') | (dx,dy) <- [(1,2),(2,1)]<br />
, i' <- [i+dx,i-dx] , j' <- [j+dy, j-dy]<br />
, isNothing (Map.lookup (i',j') b)<br />
, inRange ((1,1),(n,n)) (i',j') ]<br />
<br />
tour n k s b | k > n*n = return b<br />
| otherwise = do next <- msum . map return $ successors n b s<br />
tour n (k+1) next $ Map.insert next k b<br />
<br />
showBoard n b = unlines . map (\i -> unwords . map (\j -><br />
pad . fromJust $ Map.lookup (i,j) b) $ [1..n]) $ [1..n]<br />
where k = ceiling . logBase 10 . fromIntegral $ n*n<br />
pad i = let s = show i in replicate (k-length s) ' ' ++ s<br />
<br />
main = do (n:_) <- map read `fmap` getArgs<br />
let b = observe . tour n 2 (1,1) $ Map.singleton (1,1) 1<br />
putStrLn $ showBoard n b<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=The_Knights_Tour&diff=24364The Knights Tour2008-12-01T02:18:26Z<p>Dolio: Improved ContT r (ST s) code</p>
<hr />
<div>[[Category:Tutorials]]<br />
<br />
<br />
[http://en.wikipedia.org/wiki/Knight's_tour The Knight's Tour] is a<br />
mathematical problem involving a knight on a chessboard. The knight is<br />
placed on the empty board and, moving according to the rules of chess,<br />
must visit each square exactly once.<br />
<br />
Here are some Haskell implementations.<br />
<br />
__TOC__<br />
<br />
== One ==<br />
<br />
<haskell><br />
--<br />
-- Quick implementation by dmwit on #haskell<br />
-- Faster, shorter, uses less memory than the Python version.<br />
--<br />
<br />
import Control.Arrow<br />
import Control.Monad<br />
import Data.List<br />
import Data.Maybe<br />
import Data.Ord<br />
import System.Environment<br />
import qualified Data.Map as M<br />
<br />
sortOn f = map snd . sortBy (comparing fst) . map (f &&& id)<br />
<br />
clip coord size = coord >= 0 && coord < size<br />
valid size solution xy@(x, y) = and [clip x size, clip y size, isNothing (M.lookup xy solution)]<br />
neighbors size solution xy = length . filter (valid size solution) $ sequence moves xy<br />
<br />
moves = do<br />
f <- [(+), subtract]<br />
g <- [(+), subtract]<br />
(x, y) <- [(1, 2), (2, 1)]<br />
[f x *** g y]<br />
<br />
solve size solution n xy = do<br />
guard (valid size solution xy)<br />
let solution' = M.insert xy n solution<br />
sortedMoves = sortOn (neighbors size solution) (sequence moves xy)<br />
if n == size * size<br />
then [solution']<br />
else sortedMoves >>= solve size solution' (n+1)<br />
<br />
printBoard size solution = board [0..size-1] where<br />
sqSize = size * size<br />
elemSize = length (show sqSize)<br />
separator = intercalate (replicate elemSize '-') (replicate (size + 1) "+")<br />
pad n s = replicate (elemSize - length s) ' ' ++ s<br />
elem xy = pad elemSize . show $ solution M.! xy<br />
line y = concat . intersperseWrap "|" $ [elem (x, y) | x <- [0..size-1]]<br />
board = unlines . intersperseWrap separator . map line<br />
intersperseWrap s ss = s : intersperse s ss ++ [s]<br />
<br />
go size = case solve size M.empty 1 (0, 0) of<br />
[] -> "No solution found"<br />
(s:_) -> printBoard size s<br />
<br />
main = do<br />
args <- getArgs<br />
name <- getProgName<br />
putStrLn $ case map reads args of<br />
[] -> go 8<br />
[[(size, "")]] -> go size<br />
_ -> "Usage: " ++ name ++ " <size>"<br />
<br />
</haskell><br />
<br />
<br />
== Using Continuations ==<br />
<br />
An efficient version (some 10x faster than the example Python solution) using continuations.<br />
<br />
This is about as direct a translation of the Python algorithm as you'll get without sticking the whole thing in IO. The Python version prints the board and exits immediately upon finding it, so it can roll back changes if that doesn't happen. Instead, this version sets up an exit continuation using callCC and calls that to immediately return the first solution found. The Logic version below takes around 50% more time.<br />
<br />
<haskell><br />
import Control.Monad.Cont<br />
import Control.Monad.ST<br />
<br />
import Data.Array.ST<br />
import Data.List<br />
import Data.Ord<br />
import Data.Ix<br />
<br />
import System.Environment<br />
<br />
type Square = (Int, Int)<br />
type Board s = STUArray s (Int,Int) Int<br />
type ChessM r s = ContT r (ST s)<br />
type ChessK r s = String -> ChessM r s ()<br />
<br />
successors :: Int -> Board s -> Square -> ChessM r s [Square]<br />
successors n b = sortWith (fmap length . succs) <=< succs<br />
where<br />
sortWith f l = map fst `fmap` sortBy (comparing snd)<br />
`fmap` mapM (\x -> (,) x `fmap` f x) l<br />
succs (i,j) = filterM (empty b)<br />
[ (i', j') | (dx,dy) <- [(1,2),(2,1)]<br />
, i' <- [i+dx,i-dx] , j' <- [j+dy, j-dy]<br />
, inRange ((1,1),(n,n)) (i',j') ]<br />
<br />
empty :: Board s -> Square -> ChessM r s Bool<br />
empty b s = fmap (<1) . lift $ readArray b s<br />
<br />
mark :: Square -> Int -> Board s -> ChessM r s ()<br />
mark s k b = lift $ writeArray b s k<br />
<br />
tour :: Int -> Int -> ChessK r s -> Square -> Board s -> ChessM r s ()<br />
tour n k exit s b | k > n*n = showBoard n b >>= exit<br />
| otherwise = successors n b s >>=<br />
mapM_ (\x -> do mark x k b<br />
tour n (k+1) exit x b<br />
-- failed<br />
mark x 0 b)<br />
<br />
showBoard :: Int -> Board s -> ChessM r s String<br />
showBoard n b = fmap unlines . forM [1..n] $ \i -><br />
fmap unwords . forM [1..n] $ \j -><br />
pad `fmap` lift (readArray b (i,j))<br />
where<br />
k = floor . log . fromIntegral $ n*n<br />
pad i = let s = show i in replicate (k-length s) ' ' ++ s<br />
<br />
main = do (n:_) <- map read `fmap` getArgs<br />
s <- stToIO . flip runContT return $<br />
(do b <- lift $ newArray ((1,1),(n,n)) 0<br />
mark (1,1) 1 b<br />
callCC $ \k -> tour n 2 k (1,1) b >> fail "No solution!")<br />
putStrLn s<br />
<br />
</haskell><br />
<br />
== LogicT monad ==<br />
<br />
A very short implementation using [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logict the LogicT monad]<br />
<br />
19 lines of code. 8 imports.<br />
<br />
<haskell><br />
import Control.Monad.Logic<br />
<br />
import Prelude hiding (lookup)<br />
import Data.List hiding (lookup, insert)<br />
import Data.Maybe<br />
import Data.Ord<br />
import Data.Ix<br />
import Data.Map (Map, lookup, singleton, insert)<br />
import System.Environment<br />
<br />
successors n b = sortWith (length . succs) . succs<br />
where<br />
sortWith f = map fst . sortBy (comparing snd) . map (\x -> (x, f x))<br />
succs (i,j) = [ (i', j') | (dx,dy) <- [(1,2),(2,1)]<br />
, i' <- [i+dx,i-dx] , j' <- [j+dy, j-dy]<br />
, empty (i',j') b, inRange ((1,1),(n,n)) (i',j') ]<br />
<br />
empty s = isNothing . lookup s<br />
<br />
choose = msum . map return<br />
<br />
tour n k s b | k > n*n = return b<br />
| otherwise = do next <- choose $ successors n b s<br />
tour n (k+1) next (insert next k b)<br />
<br />
showBoard n b = unlines . map unwords<br />
$ [ [ fmt . fromJust $ lookup (i,j) b | i <- [1..n] ] | j <- [1..n] ]<br />
where<br />
fmt i | i < 10 = ' ': show i<br />
| otherwise = show i<br />
<br />
main = do (n:_) <- map read `fmap` getArgs<br />
let b = observe . tour n 2 (1,1) $ singleton (1,1) 1<br />
putStrLn $ showBoard n b<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Goedel/Solution_Dolio&diff=19088Haskell Quiz/Goedel/Solution Dolio2008-02-11T22:51:27Z<p>Dolio: </p>
<hr />
<div>[[Category:Haskell Quiz solutions|Goedel]]<br />
<br />
Encoding is quite simple in Haskell: it just combines the stream of characters with the stream of primes in the right way. Extracting a message from the encoded number is somewhat more cumbersome, especially if one wants to implement the more efficient factoring discussed in the Ruby Quiz write-up. However, once those operations are defined for a single number, retrieving the whole message is just an unfoldr away.<br />
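The round trip can be sketched with a naive decoder that recovers each exponent by plain trial division; this is simpler but slower than the windowed `extract` below, and serves as an independent check (standalone sketch, reusing the encoder and prime sieve from the solution).

```haskell
import Data.Char (chr, ord)
import Data.List (unfoldr)

primes :: [Integer]
primes = 2 : sieve [3,5..]
  where sieve (x:xs) = x : sieve (filter (\n -> n `mod` x /= 0) xs)

-- Same encoder as in the solution: nth prime raised to the nth
-- character's code, all multiplied together.
encode :: String -> Integer
encode = product . zipWith (^) primes . map ord

-- Naive decoder: strip each prime's full multiplicity by repeated
-- division; the multiplicity is the character code.
decode :: Integer -> String
decode = unfoldr step . (,) primes
  where
    step (_, 1)    = Nothing
    step (p:ps, n) = let m = multiplicity p n
                     in Just (chr m, (ps, n `div` p ^ m))
    step ([], _)   = Nothing
    multiplicity p n = length (takeWhile (\k -> n `mod` p ^ k == 0) [1..])
```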
<br />
<haskell><br />
module Main (main) where<br />
<br />
import Control.Arrow<br />
import Data.Char<br />
import Data.List<br />
import System.Environment<br />
<br />
primes :: [Integer]<br />
primes = 2 : sieve [3,5..]<br />
where sieve (x:xs) = x : sieve (filter (\n -> n `mod` x /= 0) xs)<br />
<br />
goedel :: String -> Integer<br />
goedel = product . zipWith (^) primes . map ord<br />
<br />
letter :: ([Integer], Integer) -> Maybe (Char, ([Integer], Integer))<br />
letter (_, 1) = Nothing<br />
letter ([], _) = Nothing<br />
letter (p:ps,n) = Just (chr k, (ps, n'))<br />
where (n', k) = extract p n<br />
<br />
extract :: Integer -> Integer -> (Integer, Int)<br />
extract p n = foldl' extract' (n,0) eps<br />
where eps = map (id &&& (p^)) [64,32,16,8,4,2,1]<br />
extract' (n,s) (k,pp)<br />
| m /= 0 = (n,s)<br />
| otherwise = (d,s + k)<br />
where (d,m) = n `divMod` pp<br />
<br />
ungoedel :: Integer -> String<br />
ungoedel = unfoldr letter . (,) primes<br />
<br />
main = do (mode:_) <- getArgs<br />
case mode of<br />
'e':_ -> interact $ show . goedel<br />
'd':_ -> interact $ ungoedel . read<br />
_ -> putStrLn "Unrecognized mode. 'e' for encode, 'd' for decode."<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Goedel/Solution_Dolio&diff=19087Haskell Quiz/Goedel/Solution Dolio2008-02-11T22:50:40Z<p>Dolio: </p>
<hr />
<div>[[Category:Haskell Quiz Solutions|Goedel]]<br />
<br />
Encoding is quite simple in Haskell, as it's simply combining the stream of characters with a stream of primes in the right way. Extracting a message from the encoded number is somewhat more cumbersome, especially if one wants to implement the somewhat more efficient factoring discussed in the ruby quiz writeup. However, once such operations are defined for a single number, retrieving a stream is just an unfoldr away.<br />
<br />
<haskell><br />
module Main (main) where<br />
<br />
import Control.Arrow<br />
import Data.Char<br />
import Data.List<br />
import System.Environment<br />
<br />
primes :: [Integer]<br />
primes = 2 : sieve [3,5..]<br />
where sieve (x:xs) = x : sieve (filter (\n -> n `mod` x /= 0) xs)<br />
<br />
goedel :: String -> Integer<br />
goedel = product . zipWith (^) primes . map ord<br />
<br />
letter :: ([Integer], Integer) -> Maybe (Char, ([Integer], Integer))<br />
letter (_, 1) = Nothing<br />
letter ([], _) = Nothing<br />
letter (p:ps,n) = Just (chr k, (ps, n'))<br />
where (n', k) = extract p n<br />
<br />
extract :: Integer -> Integer -> (Integer, Int)<br />
extract p n = foldl' extract' (n,0) eps<br />
where eps = map (id &&& (p^)) [64,32,16,8,4,2,1]<br />
extract' (n,s) (k,pp)<br />
| m /= 0 = (n,s)<br />
| otherwise = (d,s + k)<br />
where (d,m) = n `divMod` pp<br />
<br />
ungoedel :: Integer -> String<br />
ungoedel = unfoldr letter . (,) primes<br />
<br />
main = do (mode:_) <- getArgs<br />
case mode of<br />
'e':_ -> interact $ show . goedel<br />
'd':_ -> interact $ ungoedel . read<br />
_ -> putStrLn "Unrecognized mode. 'e' for encode, 'd' for decode."<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Goedel&diff=19086Haskell Quiz/Goedel2008-02-11T22:44:31Z<p>Dolio: </p>
<hr />
<div>[[Category:Haskell Quiz]]<br />
<br />
This quiz involved a sort of encryption via Goedel numbering. A message is encoded by taking the product of all p_n^m_n, where p_n is the nth prime and m_n is the ASCII value of the nth character in the message. The task was to create a program that both encodes and decodes messages in this format.<br />
<br />
== The Problem ==<br />
<br />
* http://www.rubyquiz.com/quiz147.html<br />
<br />
== Solutions ==<br />
<br />
* [[Haskell Quiz/Goedel/Solution Dolio|Dan Doel]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz&diff=19085Haskell Quiz2008-02-11T22:39:42Z<p>Dolio: goedel</p>
<hr />
<div>A collection of solutions to the [http://www.rubyquiz.com Ruby quiz] puzzles in simple, elegant Haskell.<br />
<br />
As you solve the puzzles, please contribute your code, and create a page<br />
for the puzzle entries. When creating a new page for your source, be<br />
sure to categorise it as code, with a [ [ Category:Code ] ] tag.<br />
<br />
== The Puzzles ==<br />
<br />
1. [[Haskell Quiz/The Solitaire Cipher|The Solitaire Cipher]]<br />
<br />
2. [[Haskell Quiz/Secret Santas|Secret Santas]]<br />
<br />
5. [[Haskell Quiz/Sokoban|Sokoban]]<br />
<br />
7. [[Haskell Quiz/Countdown|Countdown]]<br />
<br />
15. [[Haskell Quiz/Animal Quiz|Animal Quiz]]<br />
<br />
19. [[Haskell Quiz/Yahtzee|Yahtzee]]<br />
<br />
20. [[Haskell Quiz/Phone Number Words|Phone Number Words]]<br />
<br />
22. [[Haskell Quiz/Roman Numerals|Roman Numerals]]<br />
<br />
25. [[Haskell Quiz/English Numerals|English Numerals]]<br />
<br />
27. [[Haskell Quiz/Knight's Travails|Knight's Travails]]<br />
<br />
31. [[Haskell Quiz/Amazing Mazes|Amazing Mazes]]<br />
<br />
33. [[Haskell Quiz/Tiling Turmoil|Tiling Turmoil]]<br />
<br />
37. [[Haskell Quiz/Inference Engine|Inference Engine]]<br />
<br />
39. [[Haskell Quiz/Sampling|Sampling]]<br />
<br />
43. [[Haskell Quiz/Sodoku Solver|Sodoku Solver]]<br />
<br />
54. [[Haskell Quiz/Index and Query|Text Index and Query]]<br />
<br />
57. [[Haskell Quiz/Weird Numbers|Weird Numbers]]<br />
<br />
60. [[Haskell Quiz/Numeric Maze|Numeric Maze]]<br />
<br />
63. [[Haskell Quiz/Grid Folding|Grid Folding]]<br />
<br />
65. [[Haskell Quiz/Splitting the Loot|Splitting the Loot]]<br />
<br />
70. [[Haskell Quiz/Constraint Processing|Constraint Processing]] <br />
<br />
76. [[Haskell Quiz/Text Munger|Text Munger]]<br />
<br />
77. [[Haskell Quiz/Cat2Rafb|cat2rafb]]<br />
<br />
84. [[Haskell Quiz/PP Pascal|PP Pascal]]<br />
<br />
88. [[Haskell Quiz/Chip Eight|Chip Eight]]<br />
<br />
92. [[Haskell Quiz/DayRange|DayRange]]<br />
<br />
93. [[Haskell Quiz/Happy Numbers|Happy Numbers]]<br />
<br />
97. [[Haskell Quiz/Posix Pangrams|Posix Pangrams]]<br />
<br />
98. [[Haskell Quiz/Astar|A*]]<br />
<br />
99. [[Haskell Quiz/Fuzzy Time|Fuzzy Time]]<br />
<br />
100. [[Haskell Quiz/Bytecode Compiler|Bytecode Compiler]]<br />
<br />
106. [[Haskell Quiz/Chess960|Chess960]]<br />
<br />
107. [[Haskell Quiz/Word Search|Word Search]]<br />
<br />
108. [[Haskell Quiz/Word Blender|Word Blender]]<br />
<br />
114. [[Haskell Quiz/Housie|Housie]]<br />
<br />
117. [[Haskell Quiz/SimFrost|SimFrost]]<br />
<br />
121. [[Haskell Quiz/Morse Code|Morse Code]]<br />
<br />
122. [[Haskell Quiz/Credit Cards|Checking Credit Cards]]<br />
<br />
128. [[Haskell Quiz/Verbal Arithmetic|Verbal Arithmetic]]<br />
<br />
131. [[Haskell Quiz/Maximum Sub-Array|Maximum Sub-Array]]<br />
<br />
138. [[Haskell Quiz/Count and Say|Count and Say]]<br />
<br />
139. [[Haskell Quiz/IP to Country|IP to Country]]<br />
<br />
141. [[Haskell Quiz/Probable Iterations|Probable Iterations]]<br />
<br />
147. [[Haskell Quiz/Goedel|Goedel]]<br />
<br />
156. [[Haskell Quiz/Internal Rate of Return|Internal Rate of Return]]<br />
<br />
==Possibly fun ones not yet done in haskell==<br />
<br />
3. Geodesic Dome Faces http://www.rubyquiz.com/quiz3.html<br />
<br />
11. Learning Tic-Tac-Toe http://www.rubyquiz.com/quiz11.html<br />
<br />
48. Math Captcha http://www.rubyquiz.com/quiz48.html<br />
<br />
49. Text Image http://www.rubyquiz.com/quiz50.html (Not sure how image loading will work)<br />
<br />
85. C-Style Ints http://www.rubyquiz.com/quiz85.html<br />
<br />
87. Negative Sleep http://www.rubyquiz.com/quiz87.html (As a Monad!!!)<br />
<br />
Many weren't included because they either produce clumsy ASCII output or require a dictionary. Perhaps a dictionary module could be created so that those problems can be attacked in a unified fashion.<br />
<br />
[[Category:Code]]<br />
[[Category:Haskell Quiz|*]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Internal_Rate_of_Return/Solution_Dolio&diff=18983Haskell Quiz/Internal Rate of Return/Solution Dolio2008-02-09T13:41:12Z<p>Dolio: use until instead of iterate</p>
<hr />
<div>[[Category:Haskell Quiz solutions|Internal Rate of Return]]<br />
<br />
My solution for this quiz uses the [http://en.wikipedia.org/wiki/Secant_method secant method], which is quite easy to implement.<br />
<br />
<haskell><br />
import Data.Function<br />
import Numeric<br />
import System.Environment<br />
<br />
secant :: (Double -> Double) -> Double -> Double<br />
secant f delta = fst $ until err update (0,1)<br />
where<br />
update (x,y) = (x - (x - y)*(f x)/(f x - f y), x)<br />
err (x,y) = abs (x - y) < delta<br />
<br />
npv :: Double -> [Double] -> Double<br />
npv i = sum . zipWith (\t c -> c / (1 + i)**t) [0..]<br />
<br />
main = do (s:t) <- getArgs<br />
let sig = read s<br />
cs = map read t<br />
putStrLn . ($"") . showFFloat (Just sig) $ secant (flip npv cs) (0.1^sig)<br />
</haskell><br />
<br />
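A computed rate can be sanity-checked by plugging it back into <code>npv</code>: the residual should be near zero. The sketch below is standalone (npv copied from the solution above, the rate and cash flows taken from the usage example below).

```haskell
-- npv reproduced from the solution above: discount each cash flow by
-- (1 + i)^t and sum.
npv :: Double -> [Double] -> Double
npv i = sum . zipWith (\t c -> c / (1 + i)**t) [0..]

-- With the reported rate, the net present value of the cash flows
-- should be approximately zero (it is only zero up to rounding,
-- since 0.1709 is the rate truncated to four digits).
residual :: Double
residual = npv 0.1709 [-100, 30, 35, 40, 45]
```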
The resulting program expects the first argument to be the number of digits to display after the decimal point; the remaining arguments are the yearly cash flows. For instance:<br />
<br />
./IRR 4 -100 30 35 40 45<br />
0.1709</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz&diff=18979Haskell Quiz2008-02-09T12:43:25Z<p>Dolio: internal rate of return</p>
<hr />
<div>A collection of solutions to the [http://www.rubyquiz.com Ruby quiz] puzzles in simple, elegant Haskell.<br />
<br />
As you solve the puzzles, please contribute your code, and create a page<br />
for the puzzle entries. When creating a new page for your source, be<br />
sure to categorise it as code, with a [ [ Category:Code ] ] tag.<br />
<br />
== The Puzzles ==<br />
<br />
1. [[Haskell Quiz/The Solitaire Cipher|The Solitaire Cipher]]<br />
<br />
2. [[Haskell Quiz/Secret Santas|Secret Santas]]<br />
<br />
5. [[Haskell Quiz/Sokoban|Sokoban]]<br />
<br />
7. [[Haskell Quiz/Countdown|Countdown]]<br />
<br />
15. [[Haskell Quiz/Animal Quiz|Animal Quiz]]<br />
<br />
19. [[Haskell Quiz/Yahtzee|Yahtzee]]<br />
<br />
20. [[Haskell Quiz/Phone Number Words|Phone Number Words]]<br />
<br />
22. [[Haskell Quiz/Roman Numerals|Roman Numerals]]<br />
<br />
25. [[Haskell Quiz/English Numerals|English Numerals]]<br />
<br />
27. [[Haskell Quiz/Knight's Travails|Knight's Travails]]<br />
<br />
31. [[Haskell Quiz/Amazing Mazes|Amazing Mazes]]<br />
<br />
33. [[Haskell Quiz/Tiling Turmoil|Tiling Turmoil]]<br />
<br />
37. [[Haskell Quiz/Inference Engine|Inference Engine]]<br />
<br />
39. [[Haskell Quiz/Sampling|Sampling]]<br />
<br />
43. [[Haskell Quiz/Sodoku Solver|Sodoku Solver]]<br />
<br />
54. [[Haskell Quiz/Index and Query|Text Index and Query]]<br />
<br />
57. [[Haskell Quiz/Weird Numbers|Weird Numbers]]<br />
<br />
60. [[Haskell Quiz/Numeric Maze|Numeric Maze]]<br />
<br />
63. [[Haskell Quiz/Grid Folding|Grid Folding]]<br />
<br />
65. [[Haskell Quiz/Splitting the Loot|Splitting the Loot]]<br />
<br />
70. [[Haskell Quiz/Constraint Processing|Constraint Processing]] <br />
<br />
76. [[Haskell Quiz/Text Munger|Text Munger]]<br />
<br />
77. [[Haskell Quiz/Cat2Rafb|cat2rafb]]<br />
<br />
84. [[Haskell Quiz/PP Pascal|PP Pascal]]<br />
<br />
88. [[Haskell Quiz/Chip Eight|Chip Eight]]<br />
<br />
92. [[Haskell Quiz/DayRange|DayRange]]<br />
<br />
93. [[Haskell Quiz/Happy Numbers|Happy Numbers]]<br />
<br />
97. [[Haskell Quiz/Posix Pangrams|Posix Pangrams]]<br />
<br />
98. [[Haskell Quiz/Astar|A*]]<br />
<br />
99. [[Haskell Quiz/Fuzzy Time|Fuzzy Time]]<br />
<br />
100. [[Haskell Quiz/Bytecode Compiler|Bytecode Compiler]]<br />
<br />
106. [[Haskell Quiz/Chess960|Chess960]]<br />
<br />
107. [[Haskell Quiz/Word Search|Word Search]]<br />
<br />
108. [[Haskell Quiz/Word Blender|Word Blender]]<br />
<br />
114. [[Haskell Quiz/Housie|Housie]]<br />
<br />
117. [[Haskell Quiz/SimFrost|SimFrost]]<br />
<br />
121. [[Haskell Quiz/Morse Code|Morse Code]]<br />
<br />
122. [[Haskell Quiz/Credit Cards|Checking Credit Cards]]<br />
<br />
128. [[Haskell Quiz/Verbal Arithmetic|Verbal Arithmetic]]<br />
<br />
131. [[Haskell Quiz/Maximum Sub-Array|Maximum Sub-Array]]<br />
<br />
138. [[Haskell Quiz/Count and Say|Count and Say]]<br />
<br />
139. [[Haskell Quiz/IP to Country|IP to Country]]<br />
<br />
141. [[Haskell Quiz/Probable Iterations|Probable Iterations]]<br />
<br />
156. [[Haskell Quiz/Internal Rate of Return|Internal Rate of Return]]<br />
<br />
==Possibly fun ones not yet done in haskell==<br />
<br />
3. Geodesic Dome Faces http://www.rubyquiz.com/quiz3.html<br />
<br />
11. Learning Tic-Tac-Toe http://www.rubyquiz.com/quiz11.html<br />
<br />
48. Math Captcha http://www.rubyquiz.com/quiz48.html<br />
<br />
49. Text Image http://www.rubyquiz.com/quiz50.html (Not sure how image loading will work)<br />
<br />
85. C-Style Ints http://www.rubyquiz.com/quiz85.html<br />
<br />
87. Negative Sleep http://www.rubyquiz.com/quiz87.html (As a Monad!!!)<br />
<br />
Many weren't included, either because of clumsy ASCII output or because they require a dictionary. Perhaps a dictionary module could be created and those problems attacked in a unified fashion.<br />
<br />
[[Category:Code]]<br />
[[Category:Haskell Quiz|*]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Internal_Rate_of_Return/Solution_Dolio&diff=18978Haskell Quiz/Internal Rate of Return/Solution Dolio2008-02-09T12:41:59Z<p>Dolio: category</p>
<hr />
<div>[[Category:Haskell Quiz solutions|Internal Rate of Return]]<br />
<br />
My solution for this quiz uses the [http://en.wikipedia.org/wiki/Secant_method secant method], which is quite easy to implement.<br />
<br />
<haskell><br />
import Data.Function<br />
import Numeric<br />
import System.Environment<br />
<br />
secant :: (Double -> Double) -> Double -> Double<br />
secant f delta = fst . head . dropWhile err . iterate update $ (0,1)<br />
  where<br />
    update (x,y) = (x - (x - y)*(f x)/(f x - f y), x)<br />
    err (x,y) = abs (x - y) > delta<br />
<br />
npv :: Double -> [Double] -> Double<br />
npv i = sum . zipWith (\t c -> c / (1 + i)**t) [0..]<br />
<br />
main = do (s:t) <- getArgs<br />
          let sig = read s<br />
              cs = map read t<br />
          putStrLn . ($"") . showFFloat (Just sig) $ secant (flip npv cs) (0.1^sig)<br />
</haskell><br />
<br />
The resulting program expects the first argument to be the number of digits to be displayed after the decimal point, while the rest are the yearly income. For instance:<br />
<br />
./IRR 4 -100 30 35 40 45<br />
0.1709</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Internal_Rate_of_Return/Solution_Dolio&diff=18977Haskell Quiz/Internal Rate of Return/Solution Dolio2008-02-09T12:40:40Z<p>Dolio: creation</p>
<hr />
<div>My solution for this quiz uses the [http://en.wikipedia.org/wiki/Secant_method secant method], which is quite easy to implement.<br />
<br />
<haskell><br />
import Data.Function<br />
import Numeric<br />
import System.Environment<br />
<br />
secant :: (Double -> Double) -> Double -> Double<br />
secant f delta = fst . head . dropWhile err . iterate update $ (0,1)<br />
  where<br />
    update (x,y) = (x - (x - y)*(f x)/(f x - f y), x)<br />
    err (x,y) = abs (x - y) > delta<br />
<br />
npv :: Double -> [Double] -> Double<br />
npv i = sum . zipWith (\t c -> c / (1 + i)**t) [0..]<br />
<br />
main = do (s:t) <- getArgs<br />
          let sig = read s<br />
              cs = map read t<br />
          putStrLn . ($"") . showFFloat (Just sig) $ secant (flip npv cs) (0.1^sig)<br />
</haskell><br />
<br />
The resulting program expects the first argument to be the number of digits to be displayed after the decimal point, while the rest are the yearly income. For instance:<br />
<br />
./IRR 4 -100 30 35 40 45<br />
0.1709</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Internal_Rate_of_Return&diff=18976Haskell Quiz/Internal Rate of Return2008-02-09T12:25:57Z<p>Dolio: creation</p>
<hr />
<div>[[Category:Haskell Quiz]]<br />
<br />
The objective of this quiz was to compute the internal rate of return of a business given a list of its yearly cash flow (note: though the quiz doesn't specifically say so, the wikipedia article states that the IRR is found by solving the given equation when the NPV is 0).<br />
<br />
==The Problem==<br />
<br />
* http://www.rubyquiz.com/quiz156.html<br />
<br />
==Solutions==<br />
<br />
* [[Haskell Quiz/Internal Rate of Return/Solution Dolio|Dan Doel]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Probable_Iterations/Solution_Dolio&diff=15910Haskell Quiz/Probable Iterations/Solution Dolio2007-09-29T02:43:03Z<p>Dolio: creation</p>
<hr />
<div>[[Category:Haskell Quiz solutions|Probable Iterations]]<br />
<br />
This quiz was pretty simple. The list monad makes generation of the test cases simple, and the writer monad is handy for capturing potential output for each line. I used a [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3.1 DList] for the writer accumulator to avoid repeated copying.<br />
<br />
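The core trick is the list monad's <hask>sequence</hask>, as used by <hask>sample</hask> in the solution below: applied to n copies of a die, it enumerates every possible n-die roll. A standalone sketch:<br />
<br />

```haskell
-- Sketch: how the list monad enumerates all possible rolls, as in the
-- solution below. 'sequence' over n copies of a die yields every
-- n-die outcome.
die :: [Int]
die = [1..6]

allRolls :: Int -> [[Int]]
allRolls n = sequence (replicate n die)

main :: IO ()
main = do
  print (length (allRolls 2))  -- 36
  print (take 3 (allRolls 2))  -- [[1,1],[1,2],[1,3]]
```
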
<haskell><br />
<br />
module Main where<br />
<br />
import Data.DList<br />
<br />
import Control.Monad.Writer.Lazy<br />
<br />
import System.Environment<br />
import System.Exit<br />
<br />
import Text.Printf<br />
<br />
die = [1..6]<br />
<br />
check :: ([Int] -> Bool) -> (Int, [Int]) -> Writer (DList String) Bool<br />
check p (line, roll) = do tell $ if b then singleton hit else singleton noHit ; return b<br />
  where<br />
    b = p roll<br />
    noHit = printf "%12d %s" line (show roll)<br />
    hit = noHit ++ " <=="<br />
<br />
sample :: Int -> Int -> (Int, (Int, DList String))<br />
sample i j = (length l, runWriter . liftM length . filterM (check p) $ zip [1..] l)<br />
  where<br />
    p l = length (filter (==5) l) >= j<br />
    l = sequence $ replicate i die<br />
<br />
chop :: [a] -> [a]<br />
chop [] = []<br />
chop (x:xs) = x : chop (drop 49999 xs)<br />
<br />
main = do (v,s,i,j) <- processArgs<br />
          let (total, (selected, out)) = sample i j<br />
          if v<br />
            then mapM_ putStrLn $ toList out<br />
            else when s . mapM_ putStrLn . chop $ toList out<br />
          putStrLn ""<br />
          putStr "Number of desirable outcomes is "<br />
          print selected<br />
          putStr "Number of possible outcomes is "<br />
          print total<br />
          putStrLn ""<br />
          putStr "Probability is "<br />
          print $ fromIntegral selected / fromIntegral total<br />
<br />
processArgs = do l <- getArgs<br />
                 case l of<br />
                   [i,j] -> return (False, False, read i, read j)<br />
                   ["-v", i, j] -> return (True, False, read i, read j)<br />
                   ["-s", i, j] -> return (False, True, read i, read j)<br />
                   _ -> do putStrLn "Unrecognized arguments."<br />
                           exitFailure<br />
<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Probable_Iterations&diff=15909Haskell Quiz/Probable Iterations2007-09-29T02:38:15Z<p>Dolio: creation</p>
<hr />
<div>[[Category:Haskell Quiz]]<br />
<br />
The object of this quiz was to write a program that verifies a particular probability calculation involving dice by generating all possible outcomes and testing how many there are and how many are true.<br />
<br />
==The Problem==<br />
<br />
* http://www.rubyquiz.com/quiz141.html<br />
<br />
==Solutions==<br />
<br />
* [[Haskell Quiz/Probable Iterations/Solution Dolio|Dan Doel]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz&diff=15908Haskell Quiz2007-09-29T02:35:34Z<p>Dolio: Probable Iterations</p>
<hr />
<div>A collection of solutions to the [http://www.rubyquiz.com Ruby quiz] puzzles in simple, elegant Haskell.<br />
<br />
As you solve the puzzles, please contribute your code, and create a page<br />
for the puzzle entries. When creating a new page for your source, be<br />
sure to categorise it as code, with a [ [ Category:Code ] ] tag.<br />
<br />
== The Puzzles ==<br />
<br />
1. [[Haskell Quiz/The Solitaire Cipher|The Solitaire Cipher]]<br />
<br />
2. [[Haskell Quiz/Secret Santas|Secret Santas]]<br />
<br />
5. [[Haskell Quiz/Sokoban|Sokoban]]<br />
<br />
7. [[Haskell Quiz/Countdown|Countdown]]<br />
<br />
15. [[Haskell Quiz/Animal Quiz|Animal Quiz]]<br />
<br />
19. [[Haskell Quiz/Yahtzee|Yahtzee]]<br />
<br />
20. [[Haskell Quiz/Phone Number Words|Phone Number Words]]<br />
<br />
22. [[Haskell Quiz/Roman Numerals|Roman Numerals]]<br />
<br />
25. [[Haskell Quiz/English Numerals|English Numerals]]<br />
<br />
27. [[Haskell Quiz/Knight's Travails|Knight's Travails]]<br />
<br />
31. [[Haskell Quiz/Amazing Mazes|Amazing Mazes]]<br />
<br />
33. [[Haskell Quiz/Tiling Turmoil|Tiling Turmoil]]<br />
<br />
39. [[Haskell Quiz/Sampling|Sampling]]<br />
<br />
43. [[Haskell Quiz/Sodoku Solver|Sodoku Solver]]<br />
<br />
54. [[Haskell Quiz/Index and Query|Text Index and Query]]<br />
<br />
57. [[Haskell Quiz/Weird Numbers|Weird Numbers]]<br />
<br />
60. [[Haskell Quiz/Numeric Maze|Numeric Maze]]<br />
<br />
63. [[Haskell Quiz/Grid Folding|Grid Folding]]<br />
<br />
65. [[Haskell Quiz/Splitting the Loot|Splitting the Loot]]<br />
<br />
70. [[Haskell Quiz/Constraint Processing|Constraint Processing]] <br />
<br />
76. [[Haskell Quiz/Text Munger|Text Munger]]<br />
<br />
77. [[Haskell Quiz/Cat2Rafb|cat2rafb]]<br />
<br />
84. [[Haskell Quiz/PP Pascal|PP Pascal]]<br />
<br />
88. [[Haskell Quiz/Chip Eight|Chip Eight]]<br />
<br />
92. [[Haskell Quiz/DayRange|DayRange]]<br />
<br />
93. [[Haskell Quiz/Happy Numbers|Happy Numbers]]<br />
<br />
97. [[Haskell Quiz/Posix Pangrams|Posix Pangrams]]<br />
<br />
98. [[Haskell Quiz/Astar|A*]]<br />
<br />
99. [[Haskell Quiz/Fuzzy Time|Fuzzy Time]]<br />
<br />
100. [[Haskell Quiz/Bytecode Compiler|Bytecode Compiler]]<br />
<br />
106. [[Haskell Quiz/Chess960|Chess960]]<br />
<br />
107. [[Haskell Quiz/Word Search|Word Search]]<br />
<br />
108. [[Haskell Quiz/Word Blender|Word Blender]]<br />
<br />
114. [[Haskell Quiz/Housie|Housie]]<br />
<br />
117. [[Haskell Quiz/SimFrost|SimFrost]]<br />
<br />
121. [[Haskell Quiz/Morse Code|Morse Code]]<br />
<br />
122. [[Haskell Quiz/Credit Cards|Checking Credit Cards]]<br />
<br />
128. [[Haskell Quiz/Verbal Arithmetic|Verbal Arithmetic]]<br />
<br />
131. [[Haskell Quiz/Maximum Sub-Array|Maximum Sub-Array]]<br />
<br />
138. [[Haskell Quiz/Count and Say|Count and Say]]<br />
<br />
139. [[Haskell Quiz/IP to Country|IP to Country]]<br />
<br />
141. [[Haskell Quiz/Probable Iterations|Probable Iterations]]<br />
<br />
==Possibly fun ones not yet done in haskell==<br />
<br />
3. Geodesic Dome Faces http://www.rubyquiz.com/quiz3.html<br />
<br />
11. Learning Tic-Tac-Toe http://www.rubyquiz.com/quiz11.html<br />
<br />
37. Inference Engine http://www.rubyquiz.com/quiz37.html<br />
<br />
48. Math Captcha http://www.rubyquiz.com/quiz48.html<br />
<br />
49. Text Image http://www.rubyquiz.com/quiz50.html (Not sure how image loading will work)<br />
<br />
85. C-Style Ints http://www.rubyquiz.com/quiz85.html<br />
<br />
87. Negative Sleep http://www.rubyquiz.com/quiz87.html (As a Monad!!!)<br />
<br />
88. Chip-8 http://www.rubyquiz.com/quiz88.html<br />
<br />
Many weren't included, either because of clumsy ASCII output or because they require a dictionary. Perhaps a dictionary module could be created and those problems attacked in a unified fashion.<br />
<br />
[[Category:Code]]<br />
[[Category:Haskell Quiz|*]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/IP_to_Country/Solution_Dolio&diff=15674Haskell Quiz/IP to Country/Solution Dolio2007-09-19T04:04:20Z<p>Dolio: creation</p>
<hr />
<div>[[Category:Haskell Quiz solutions|IP to Country]]<br />
<br />
Searching a big CSV file seemed like an ideal use of the famed ByteString library, so I hacked up a quick solution. It uses lazy chunked input for hopefully cache-efficient processing, but deals with the chunks in terms of their strict byte string implementations to also avoid as much overhead as possible (not a particularly hard scheme to set up once you've seen it).<br />
<br />
It's fast. Looking up the IP on the quiz page takes roughly 0.04 seconds (as opposed to 0.30 on the reference implementation), and about half a second on an IP that isn't in the database (forcing the entire file to be processed), which seems not too shabby. However, no specs were given for the machine the reference implementation was run on, so the comparisons above are rather worthless. :)<br />
<br />
This just processes the raw file downloaded from the website linked in the quiz, and processes it linearly. One could probably devise an optimized version of the database, or a more efficient searching scheme and gain performance, but the naive solution is still plenty fast.<br />
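As a small aside, the dotted-quad-to-integer conversion that the search relies on is just base-256 arithmetic. A standalone sketch of the same expression used by <hask>ipSearch</hask> below:<br />
<br />

```haskell
-- Sketch of the base-256 arithmetic ipSearch uses below to turn a
-- dotted quad into a single integer comparable against the database's
-- from/to range columns.
ipToInt :: (Int, Int, Int, Int) -> Int
ipToInt (a, b, c, d) = d + 256*c + 256*256*b + 256*256*256*a

main :: IO ()
main = print (ipToInt (127, 0, 0, 1))  -- 2130706433
```
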
<br />
<haskell><br />
{-# LANGUAGE PatternGuards #-}<br />
<br />
module Main(main) where<br />
<br />
import Data.Maybe<br />
import System.Environment<br />
<br />
import qualified Data.ByteString.Char8 as B<br />
import qualified Data.ByteString.Lazy.Char8 as L<br />
<br />
-- Process a file by line. For each line in the file denoted by<br />
-- the FilePath, the function is called. If the result of the<br />
-- computation is True, processing is cut off early.<br />
--<br />
-- This uses lazy chunked reading, but operates on the chunks<br />
-- one by one for (hopefully) maximum speed.<br />
processFile :: FilePath -> (B.ByteString -> IO Bool) -> IO ()<br />
processFile path op = proc . L.toChunks =<< L.readFile path<br />
  where<br />
    proc [] = return ()<br />
    proc [c] = proc' (B.lines c) >> return ()<br />
    proc (c:cc:cs) = do b <- proc' (B.lines c')<br />
                        if b then return () else proc cs'<br />
      where (c', t) = B.breakEnd (=='\n') c<br />
            cs' = B.append t cc : cs<br />
    proc' [] = return False<br />
    proc' (x:xs) = do b <- op x<br />
                      if b then return True else proc' xs<br />
<br />
-- Given an ip, represented as a 4-tuple, and a line expected to come<br />
-- from the ip database, determines whether the ip matches. If it does,<br />
-- the corresponding country is printed, and an exit is signaled.<br />
ipSearch :: (Int, Int, Int, Int) -> B.ByteString -> IO Bool<br />
ipSearch (a,b,c,d) s<br />
  | Just (from, to, country) <- parse s,<br />
    from <= ip,<br />
    ip <= to = B.putStrLn country >> return True<br />
  | otherwise = return False<br />
  where<br />
    ip = d + 256*c + 256*256*b + 256*256*256*a<br />
<br />
    parse s = case B.split ',' s of<br />
                [f,t,_,_,_,_,c] -> do (from,_) <- B.readInt (B.tail f)<br />
                                      (to, _) <- B.readInt (B.tail t)<br />
                                      return (from, to, B.tail (B.init c))<br />
                _ -> Nothing<br />
<br />
main = do (ips:_) <- getArgs<br />
          processFile "IpToCountry.csv" (ipSearch $ ipParse ips)<br />
  where<br />
    ipParse = convert . B.split '.' . B.pack<br />
    convert [a,b,c,d] = fromJust $ do (a',_) <- B.readInt a<br />
                                      (b',_) <- B.readInt b<br />
                                      (c',_) <- B.readInt c<br />
                                      (d',_) <- B.readInt d<br />
                                      return (a',b',c',d')<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/IP_to_Country&diff=15673Haskell Quiz/IP to Country2007-09-19T03:45:30Z<p>Dolio: creation</p>
<hr />
<div>[[Category:Haskell Quiz]]<br />
<br />
The object of this quiz was to use a database associating IPs with their hosting countries to look up a country given a particular IP address. The less time and memory used, the better.<br />
<br />
==The Problem==<br />
<br />
* http://www.rubyquiz.com/quiz139.html<br />
<br />
==Solutions==<br />
<br />
* [[Haskell Quiz/IP to Country/Solution Dolio|Dan Doel]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz&diff=15672Haskell Quiz2007-09-19T03:41:53Z<p>Dolio: ip to country</p>
<hr />
<div>A collection of solutions to the [http://www.rubyquiz.com Ruby quiz] puzzles in simple, elegant Haskell.<br />
<br />
As you solve the puzzles, please contribute your code, and create a page<br />
for the puzzle entries. When creating a new page for your source, be<br />
sure to categorise it as code, with a [ [ Category:Code ] ] tag.<br />
<br />
== The Puzzles ==<br />
<br />
1. [[Haskell Quiz/The Solitaire Cipher|The Solitaire Cipher]]<br />
<br />
2. [[Haskell Quiz/Secret Santas|Secret Santas]]<br />
<br />
5. [[Haskell Quiz/Sokoban|Sokoban]]<br />
<br />
7. [[Haskell Quiz/Countdown|Countdown]]<br />
<br />
15. [[Haskell Quiz/Animal Quiz|Animal Quiz]]<br />
<br />
19. [[Haskell Quiz/Yahtzee|Yahtzee]]<br />
<br />
20. [[Haskell Quiz/Phone Number Words|Phone Number Words]]<br />
<br />
22. [[Haskell Quiz/Roman Numerals|Roman Numerals]]<br />
<br />
25. [[Haskell Quiz/English Numerals|English Numerals]]<br />
<br />
27. [[Haskell Quiz/Knight's Travails|Knight's Travails]]<br />
<br />
31. [[Haskell Quiz/Amazing Mazes|Amazing Mazes]]<br />
<br />
33. [[Haskell Quiz/Tiling Turmoil|Tiling Turmoil]]<br />
<br />
39. [[Haskell Quiz/Sampling|Sampling]]<br />
<br />
43. [[Haskell Quiz/Sodoku Solver|Sodoku Solver]]<br />
<br />
54. [[Haskell Quiz/Index and Query|Text Index and Query]]<br />
<br />
57. [[Haskell Quiz/Weird Numbers|Weird Numbers]]<br />
<br />
60. [[Haskell Quiz/Numeric Maze|Numeric Maze]]<br />
<br />
63. [[Haskell Quiz/Grid Folding|Grid Folding]]<br />
<br />
65. [[Haskell Quiz/Splitting the Loot|Splitting the Loot]]<br />
<br />
70. [[Haskell Quiz/Constraint Processing|Constraint Processing]] <br />
<br />
76. [[Haskell Quiz/Text Munger|Text Munger]]<br />
<br />
77. [[Haskell Quiz/Cat2Rafb|cat2rafb]]<br />
<br />
84. [[Haskell Quiz/PP Pascal|PP Pascal]]<br />
<br />
88. [[Haskell Quiz/Chip Eight|Chip Eight]]<br />
<br />
92. [[Haskell Quiz/DayRange|DayRange]]<br />
<br />
93. [[Haskell Quiz/Happy Numbers|Happy Numbers]]<br />
<br />
97. [[Haskell Quiz/Posix Pangrams|Posix Pangrams]]<br />
<br />
98. [[Haskell Quiz/Astar|A*]]<br />
<br />
99. [[Haskell Quiz/Fuzzy Time|Fuzzy Time]]<br />
<br />
100. [[Haskell Quiz/Bytecode Compiler|Bytecode Compiler]]<br />
<br />
106. [[Haskell Quiz/Chess960|Chess960]]<br />
<br />
107. [[Haskell Quiz/Word Search|Word Search]]<br />
<br />
108. [[Haskell Quiz/Word Blender|Word Blender]]<br />
<br />
114. [[Haskell Quiz/Housie|Housie]]<br />
<br />
117. [[Haskell Quiz/SimFrost|SimFrost]]<br />
<br />
121. [[Haskell Quiz/Morse Code|Morse Code]]<br />
<br />
122. [[Haskell Quiz/Credit Cards|Checking Credit Cards]]<br />
<br />
128. [[Haskell Quiz/Verbal Arithmetic|Verbal Arithmetic]]<br />
<br />
131. [[Haskell Quiz/Maximum Sub-Array|Maximum Sub-Array]]<br />
<br />
138. [[Haskell Quiz/Count and Say|Count and Say]]<br />
<br />
139. [[Haskell Quiz/IP to Country|IP to Country]]<br />
<br />
==Possibly fun ones not yet done in haskell==<br />
<br />
3. Geodesic Dome Faces http://www.rubyquiz.com/quiz3.html<br />
<br />
11. Learning Tic-Tac-Toe http://www.rubyquiz.com/quiz11.html<br />
<br />
37. Inference Engine http://www.rubyquiz.com/quiz37.html<br />
<br />
48. Math Captcha http://www.rubyquiz.com/quiz48.html<br />
<br />
49. Text Image http://www.rubyquiz.com/quiz50.html (Not sure how image loading will work)<br />
<br />
85. C-Style Ints http://www.rubyquiz.com/quiz85.html<br />
<br />
87. Negative Sleep http://www.rubyquiz.com/quiz87.html (As a Monad!!!)<br />
<br />
88. Chip-8 http://www.rubyquiz.com/quiz88.html<br />
<br />
Many weren't included, either because of clumsy ASCII output or because they require a dictionary. Perhaps a dictionary module could be created and those problems attacked in a unified fashion.<br />
<br />
[[Category:Code]]<br />
[[Category:Haskell Quiz|*]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Count_and_Say/Solution_Dolio&diff=15595Haskell Quiz/Count and Say/Solution Dolio2007-09-14T20:45:35Z<p>Dolio: new page</p>
<hr />
<div>[[Category:Haskell Quiz solutions|Count and Say]]<br />
<br />
While reading the description for this quiz, I thought it was a perfect problem to make use of the handy clusterBy function Tom Moertel recently [http://blog.moertel.com/articles/2007/09/01/clusterby-a-handy-little-function-for-the-toolbox discussed on his blog]. So, I whipped up this solution to see how it'd work.<br />
<br />
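For readers who haven't seen it, here is a standalone sketch of what <hask>clusterBy</hask> does (the definition is reproduced from the solution below): it groups elements by a projection, returning the clusters in ascending key order with each cluster in original element order.<br />
<br />

```haskell
import qualified Data.Map as M
import Control.Arrow ((&&&))

-- clusterBy, as defined in the solution below: pair each element with
-- its key, collect per key in a Map (reversing to restore input
-- order), and return the clusters in ascending key order.
clusterBy :: Ord b => (a -> b) -> [a] -> [[a]]
clusterBy p = M.elems . M.map reverse . M.fromListWith (++) . map (p &&& return)

main :: IO ()
main = print (clusterBy (`mod` 3) [1..10])  -- [[3,6,9],[1,4,7,10],[2,5,8]]
```
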
<haskell><br />
module Main (main, say, search) where<br />
<br />
import Data.Char<br />
import Data.List<br />
import Data.Maybe<br />
import qualified Data.Map as M<br />
<br />
import Control.Arrow<br />
import Control.Monad<br />
import System.Environment<br />
<br />
clusterBy :: Ord b => (a -> b) -> [a] -> [[a]]<br />
clusterBy p = M.elems . M.map reverse . M.fromListWith (++) . map (p &&& return)<br />
<br />
cluster :: Ord a => [a] -> [[a]]<br />
cluster = clusterBy id<br />
<br />
speak :: Int -> String<br />
speak 1 = "ONE"<br />
speak 2 = "TWO"<br />
speak 3 = "THREE"<br />
speak 4 = "FOUR"<br />
speak 5 = "FIVE"<br />
speak 6 = "SIX"<br />
speak 7 = "SEVEN"<br />
speak 8 = "EIGHT"<br />
speak 9 = "NINE"<br />
speak 10 = "TEN"<br />
speak 11 = "ELEVEN"<br />
speak 12 = "TWELVE"<br />
speak 13 = "THIRTEEN"<br />
speak 15 = "FIFTEEN"<br />
speak 18 = "EIGHTEEN"<br />
speak 20 = "TWENTY"<br />
speak 30 = "THIRTY"<br />
speak 40 = "FORTY"<br />
speak 50 = "FIFTY"<br />
speak 60 = "SIXTY"<br />
speak 70 = "SEVENTY"<br />
speak 80 = "EIGHTY"<br />
speak 90 = "NINETY"<br />
speak n | n < 20 = speak (n - 10) ++ "TEEN"<br />
        | n < 100 = speak (n - m) ++ speak m<br />
        | otherwise = error "Unanticipated number."<br />
  where m = n `mod` 10<br />
<br />
say :: String -> String<br />
say = intercalate " " . map (\c -> speak (length c) ++ " " ++ take 1 c)<br />
    . cluster . filter isAlpha<br />
<br />
search :: String -> Int<br />
search = (1+) . fromJust . search' []<br />
  where search' l s = elemIndex s l `mplus` search' (s:l) (say s)<br />
<br />
main = print . search . map toUpper . head =<< getArgs<br />
</haskell></div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz/Count_and_Say&diff=15593Haskell Quiz/Count and Say2007-09-14T20:28:21Z<p>Dolio: new page</p>
<hr />
<div>[[Category:Haskell Quiz|Count and Say]]<br />
<br />
Ruby Quiz #138: This quiz involved counting the number of letters in a particular string, and generating a new string that would be the result of saying the results of the count. For instance, if we start with "HASKELL RULES", we get the sequence:<br />
<br />
HASKELL RULES<br />
ONE A TWO E ONE H ONE K THREE L ONE R TWO S ONE U<br />
ONE A EIGHT E TWO H ONE K ONE L FIVE N SEVEN O TWO R ONE S THREE T ONE U TWO W<br />
<br />
and so on. The object of the quiz is to determine, given an initial string, whether the sequence ever goes into a loop, and if so, what the cycle length is.<br />
<br />
==The Problem==<br />
<br />
* http://www.rubyquiz.com/quiz138.html<br />
<br />
==Solutions==<br />
<br />
* [[Haskell Quiz/Count and Say/Solution Dolio|Dan Doel]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Quiz&diff=15567Haskell Quiz2007-09-14T11:07:33Z<p>Dolio: count and say</p>
<hr />
<div>A collection of solutions to the [http://www.rubyquiz.com Ruby quiz] puzzles in simple, elegant Haskell.<br />
<br />
As you solve the puzzles, please contribute your code, and create a page<br />
for the puzzle entries. When creating a new page for your source, be<br />
sure to categorise it as code, with a [ [ Category:Code ] ] tag.<br />
<br />
== The Puzzles ==<br />
<br />
1. [[Haskell Quiz/The Solitaire Cipher|The Solitaire Cipher]]<br />
<br />
2. [[Haskell Quiz/Secret Santas|Secret Santas]]<br />
<br />
5. [[Haskell Quiz/Sokoban|Sokoban]]<br />
<br />
7. [[Haskell Quiz/Countdown|Countdown]]<br />
<br />
15. [[Haskell Quiz/Animal Quiz|Animal Quiz]]<br />
<br />
19. [[Haskell Quiz/Yahtzee|Yahtzee]]<br />
<br />
20. [[Haskell Quiz/Phone Number Words|Phone Number Words]]<br />
<br />
22. [[Haskell Quiz/Roman Numerals|Roman Numerals]]<br />
<br />
25. [[Haskell Quiz/English Numerals|English Numerals]]<br />
<br />
27. [[Haskell Quiz/Knight's Travails|Knight's Travails]]<br />
<br />
31. [[Haskell Quiz/Amazing Mazes|Amazing Mazes]]<br />
<br />
33. [[Haskell Quiz/Tiling Turmoil|Tiling Turmoil]]<br />
<br />
39. [[Haskell Quiz/Sampling|Sampling]]<br />
<br />
43. [[Haskell Quiz/Sodoku Solver|Sodoku Solver]]<br />
<br />
54. [[Haskell Quiz/Index and Query|Text Index and Query]]<br />
<br />
57. [[Haskell Quiz/Weird Numbers|Weird Numbers]]<br />
<br />
60. [[Haskell Quiz/Numeric Maze|Numeric Maze]]<br />
<br />
63. [[Haskell Quiz/Grid Folding|Grid Folding]]<br />
<br />
65. [[Haskell Quiz/Splitting the Loot|Splitting the Loot]]<br />
<br />
70. [[Haskell Quiz/Constraint Processing|Constraint Processing]] <br />
<br />
76. [[Haskell Quiz/Text Munger|Text Munger]]<br />
<br />
77. [[Haskell Quiz/Cat2Rafb|cat2rafb]]<br />
<br />
84. [[Haskell Quiz/PP Pascal|PP Pascal]]<br />
<br />
88. [[Haskell Quiz/Chip Eight|Chip Eight]]<br />
<br />
92. [[Haskell Quiz/DayRange|DayRange]]<br />
<br />
93. [[Haskell Quiz/Happy Numbers|Happy Numbers]]<br />
<br />
97. [[Haskell Quiz/Posix Pangrams|Posix Pangrams]]<br />
<br />
98. [[Haskell Quiz/Astar|A*]]<br />
<br />
99. [[Haskell Quiz/Fuzzy Time|Fuzzy Time]]<br />
<br />
100. [[Haskell Quiz/Bytecode Compiler|Bytecode Compiler]]<br />
<br />
106. [[Haskell Quiz/Chess960|Chess960]]<br />
<br />
107. [[Haskell Quiz/Word Search|Word Search]]<br />
<br />
108. [[Haskell Quiz/Word Blender|Word Blender]]<br />
<br />
114. [[Haskell Quiz/Housie|Housie]]<br />
<br />
117. [[Haskell Quiz/SimFrost|SimFrost]]<br />
<br />
121. [[Haskell Quiz/Morse Code|Morse Code]]<br />
<br />
122. [[Haskell Quiz/Credit Cards|Checking Credit Cards]]<br />
<br />
128. [[Haskell Quiz/Verbal Arithmetic|Verbal Arithmetic]]<br />
<br />
131. [[Haskell Quiz/Maximum Sub-Array|Maximum Sub-Array]]<br />
<br />
138. [[Haskell Quiz/Count and Say|Count and Say]]<br />
<br />
==Possibly fun ones not yet done in haskell==<br />
<br />
3. Geodesic Dome Faces http://www.rubyquiz.com/quiz3.html<br />
<br />
11. Learning Tic-Tac-Toe http://www.rubyquiz.com/quiz11.html<br />
<br />
37. Inference Engine http://www.rubyquiz.com/quiz37.html<br />
<br />
48. Math Captcha http://www.rubyquiz.com/quiz48.html<br />
<br />
49. Text Image http://www.rubyquiz.com/quiz50.html (Not sure how image loading will work)<br />
<br />
85. C-Style Ints http://www.rubyquiz.com/quiz85.html<br />
<br />
87. Negative Sleep http://www.rubyquiz.com/quiz87.html (As a Monad!!!)<br />
<br />
88. Chip-8 http://www.rubyquiz.com/quiz88.html<br />
<br />
Many weren't included, either because of clumsy ASCII output or because they require a dictionary. Perhaps a dictionary module could be created and those problems attacked in a unified fashion.<br />
<br />
[[Category:Code]]<br />
[[Category:Haskell Quiz|*]]</div>Doliohttps://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14690Library/CC-delcont2007-07-25T20:43:29Z<p>Dolio: breadth-first traversal</p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough that, if you have another implementation, they should be relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation [[type]]s from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== The basics ==<br />
<br />
=== Undelimited continuations ===<br />
<br />
If you've taken university courses in computer science, or done much investigation of [[language design]], you've probably encountered [[continuation]]s before. The author recalls first learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof, and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother when, as far as language implementation goes, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?<br />
<br />
As far as usage goes, continuations are probably most closely associated with Scheme and its call-with-current-continuation function (abbreviated, following the Haskell version, to callCC from now on), although many other languages have them (undelimited continuations for Haskell are provided by the [http://haskell.org/ghc/docs/latest/html/libraries/mtl/Control-Monad-Cont.html Cont monad and ContT transformer]). They're often regarded as difficult to understand, since their use can cause very complex control flow patterns (much like GOTO, although more sophisticated); reduced to their basics, though, they aren't that hard to grasp.<br />
<br />
A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:<br />
<br />
<haskell><br />
m >>= f >>= g >>= h<br />
</haskell><br />
<br />
Here we have an ordinary [[:Category:Monad |monadic]] pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:<br />
<br />
<haskell><br />
\x -> f x >>= g >>= h<br />
</haskell><br />
<br />
The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using [[function]]s as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.<br />
<br />
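To make the function representation concrete, here is a hand-written CPS version of a tiny pipeline. This is only an illustrative sketch; the names are made up and not part of any library:<br />
<br />

```haskell
-- Hand-written CPS: each function takes the 'rest of the program' as
-- an extra argument k, and returns a result by calling it.
square :: Int -> (Int -> r) -> r
square x k = k (x * x)

add1 :: Int -> (Int -> r) -> r
add1 x k = k (x + 1)

-- The continuation of 'square 3' is everything after it, here
-- '\s -> add1 s k'; the final continuation k receives the end result.
pipeline :: (Int -> r) -> r
pipeline k = square 3 (\s -> add1 s k)

main :: IO ()
main = print (pipeline id)  -- 10
```

Passing <hask>id</hask> as the final continuation simply extracts the result, which is essentially what running a Cont computation does.<br />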
However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:<br />
<br />
<haskell><br />
callCC (\k -> e) >>= f >>= g >>= h<br />
</haskell><br />
<br />
'k' will be set to a function that is something like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced with the value passed to 'k'. So k is not merely a normal function, but a way of feeding a value into an execution context (and this is reflected in its monadic type: a -> Cont r b).<br />
<br />
So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:<br />
<br />
<haskell><br />
prod l = callCC (\k -> loop k l)<br />
where<br />
loop _ [] = return 1<br />
loop k (0:_) = k 0<br />
loop k (x:xs) = do n <- loop k xs ; return (n*x)<br />
</haskell><br />
<br />
Under normal circumstances, the loop will simply multiply all the numbers. However, if a 0 is detected, there is no need to multiply anything, the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.<br />
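The prod function above runs unchanged in the mtl Cont monad mentioned earlier; here is a self-contained version (the only addition is running it with runCont and id as the final continuation):

```haskell
import Control.Monad.Cont (Cont, runCont, callCC)

-- Product of a list, escaping early through the captured continuation
-- as soon as a 0 is seen, so no multiplications are performed.
prod :: [Int] -> Cont r Int
prod l = callCC (\k -> loop k l)
  where
    loop _ []     = return 1
    loop k (0:_)  = k 0
    loop k (x:xs) = do n <- loop k xs; return (n * x)

main :: IO ()
main = do
  print (runCont (prod [1,2,3,4]) id)   -- no zero: the ordinary product
  print (runCont (prod [1,2,0,4]) id)   -- zero found: escape immediately
```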
<br />
=== Delimited continuations ===<br />
<br />
So, [[continuation]]s (hopefully) seem pretty clear, and at least theoretically useful. Where do delimited continuations come into the picture?<br />
<br />
The story (according to the hearsay the author has come across) goes back again to Scheme. As was mentioned earlier, callCC is often associated with it. Another thing closely associated with Scheme (and Lisp in general) is interactive environments in which code can be defined and run (much like our own [[Hugs]] and [[GHC/GHCi | GHCi]]). Naturally, it would be nice if such environments could themselves be written in Scheme.<br />
<br />
However, continuations in Scheme are not implemented as they are in Haskell. In Haskell, continuation-using code is tagged with a monadic type, and one must use runCont(T) to run such computations, and the effects can't escape it. In Scheme, continuations are native, and all code can capture them, and capturing them captures not 'the rest of the Cont(T) computation,' but 'the rest of the program.' And if the interactive loop is written in Scheme, this includes the loop itself, so programs run within the session can affect the session itself.<br />
<br />
Now, this might be a minor nit, but it is a nit nonetheless, and luckily for us, it led to the idea of delimited continuations. The idea was, of course, to tag a point at which the interactive loop invoked some sub-program, and then control flow operators such as callCC would only be able to capture a portion of the program up to the marker. To the sub-program, this is all that's of interest anyhow. Such a setup would solve the issue nicely.<br />
<br />
However, once one has the ability to create such markers, why not put them in the hands of the programmer? Then, instead of them being able to capture 'the rest of the program's execution,' they would be able to delimit, capture and manipulate arbitrary portions of their programs. And indeed, such operations can be useful.<br />
<br />
== Samples ==<br />
<br />
=== Iterators ===<br />
<br />
So, what are delimited continuations good for? Well, suppose we have a binary tree data [[type]] like so:<br />
<br />
<haskell><br />
data Tree a = Leaf | Branch a (Tree a) (Tree a)<br />
<br />
empty = Leaf<br />
singleton a = Branch a Leaf Leaf<br />
<br />
insert b Leaf = Branch b Leaf Leaf<br />
insert b (Branch a l r)<br />
| b < a = Branch a (insert b l) r<br />
| otherwise = Branch a l (insert b r)<br />
<br />
fold :: (a -> b -> b -> b) -> b -> Tree a -> b<br />
fold f z Leaf = z<br />
fold f z (Branch a l r) = f a (fold f z l) (fold f z r)<br />
<br />
for :: Monad m => Tree a -> (a -> m b) -> m ()<br />
for t f = fold (\a l r -> l >> f a >> r) (return ()) t<br />
</haskell><br />
<br />
Now, we have a [[fold]] over our data type, and as shown, we can therefore write a monadic iteration function 'for' over it (this is actually done for arbitrary data types in <hask>Data.Foldable</hask>). The fold is a fine method of traversing the data structure to operate on elements in most cases. However, what if we wanted something more like an iterator object, which somehow captured the traversal of the tree, remembering what element we're currently at, and which ones come next?<br />
<br />
Well, it turns out one can build just such an object using continuations. It is indeed possible to build it using undelimited continuations, but it's rather complex to do so (I'll not include code that does, as I don't feel like figuring out all the details). However, it turns out it's easy using delimited continuations:<br />
<br />
<haskell><br />
data Iterator m a = Done | Cur a (m (Iterator m a))<br />
<br />
begin :: MonadDelimitedCont p s m => Tree a -> m (Iterator m a)<br />
begin t = reset $ \p -><br />
for t (\a -><br />
shift p (\k -> return (Cur a (k $ return ())))) >> return Done<br />
<br />
current :: Iterator m a -> Maybe a<br />
current Done = Nothing<br />
current (Cur a _) = Just a<br />
<br />
next :: Monad m => Iterator m a -> m (Iterator m a)<br />
next Done = return Done<br />
next (Cur _ i) = i<br />
<br />
finished :: Iterator m a -> Bool<br />
finished Done = True<br />
finished _ = False<br />
</haskell><br />
<br />
So, clearly, Iterator is the type of iterators over a tree. current, next and finished are some utility functions for operating on them. The interesting work is done in the begin function.<br />
<br />
There are two delimited control operators in play here. First is reset, which is a way to place a delimiter around a computation. The term 'p' is simply a way to reference that delimiter; the library I'm working with allows for many named delimiters to exist, and for control operators to specify which delimiters they're working with (so a control operator may capture the continuation up to p, even if it runs into a delimiter q sooner, provided p /= q).<br />
<br />
The other operator is shift, which is used to capture the delimited continuation. In many ways, it's like callCC, but with an important difference: it aborts the captured continuation. When callCC is called on a function f, if f returns normally, execution will pick up from just after the callCC. However, when shift is called, the continuation between the call and the enclosing prompt is packaged up (into 'k' here), and passed to the function, and a normal return will return to the place where the delimiter was set, not where shift was called.<br />
<br />
With this in mind, we can begin to analyze the 'begin' function. First, it delimits a computation with the delimiter 'p'. Next, it begins to loop over the tree. For each element, we use 'shift' to capture "the rest of the loop", calling it 'k'. We then package that, and the current tree element, into an Iterator object, and return it. Since the shift has aborted the rest of the loop (for the time being), it returns to where 'reset' was called, and the function returns the iterator object (wrapped in a monad, of course).<br />
<br />
The main remaining piece of interest is when next goes to get the next element of the traversal. When this happens, 'k $ return ()' is executed, which invokes the captured continuation (with the value (), because the loop doesn't take the return value of the traversal function into account anyway). This, essentially, re-enters the loop. If there is a next element, then the traversal function is called with it, shift will once again capture 'the rest of the loop' (from a later point than before, though), and return an iterator object with the new current element and continuation. If there are no new elements, then control will pass out of the loop to the following computation, which is, in this case, 'return Done', so in either case, an Iterator object is the result, and the types work out.<br />
<br />
We can test our iterator like so:<br />
<br />
<haskell><br />
-- imports (assuming CC-delcont's Control.Monad.CC module)<br />
import Control.Monad.CC<br />
import Control.Monad.Trans (liftIO)<br />
import Data.Maybe (fromJust)<br />
import System.Random (randomIO)<br />
<br />
main :: IO ()<br />
main = runCCT $ do t <- randomTree 10<br />
                   i <- begin t<br />
                   doStuff i<br />
 where<br />
  doStuff i<br />
    | finished i = return ()<br />
    | otherwise  = do i' <- next i<br />
                      i'' <- next i  -- 'next' called twice on the same iterator (see note below)<br />
                      liftIO $ print (fromJust $ current i :: Int)<br />
                      doStuff i'<br />
<br />
randomTree n = rt empty n<br />
 where<br />
  rt t 0 = return t<br />
  rt t n = do r <- liftIO randomIO<br />
              rt (insert r t) (n - 1)<br />
</haskell><br />
<br />
The output of which might go something like:<br />
<br />
-1937814587<br />
-1171184756<br />
-1068642732<br />
-741588272<br />
-553872051<br />
-499564662<br />
-421862876<br />
-59900888<br />
315891595<br />
1868487875<br />
<br />
The example shows one possibly interesting property: one can re-use old iterators without affecting new ones. In this case, we call 'next' on the same iterator twice, but it doesn't advance the iterator twice. Our iterators behave like an ordinary functional data structure, even though they're built out of somewhat out-of-the-ordinary components.<br />
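Since this particular example only ever uses a single prompt, CC-delcont's named prompts aren't strictly required for it. As a point of comparison, here is a sketch of the same iterator using only shiftT and evalContT from the transformers package (the translation to shiftT is my own; evalContT plays the role of the delimiter that reset and runCC(T) play in the text):

```haskell
import Control.Monad.Trans.Cont (ContT, evalContT, shiftT)

data Tree a = Leaf | Branch a (Tree a) (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert b Leaf = Branch b Leaf Leaf
insert b (Branch a l r)
  | b < a     = Branch a (insert b l) r
  | otherwise = Branch a l (insert b r)

fold :: (a -> b -> b -> b) -> b -> Tree a -> b
fold _ z Leaf           = z
fold f z (Branch a l r) = f a (fold f z l) (fold f z r)

for :: Monad m => Tree a -> (a -> m b) -> m ()
for t f = fold (\a l r -> l >> f a >> r) (return ()) t

data Iterator m a = Done | Cur a (m (Iterator m a))

-- evalContT delimits the computation; shiftT captures 'the rest of
-- the loop' as k :: () -> m (Iterator m a) and suspends with it.
begin :: Monad m => Tree a -> m (Iterator m a)
begin t = evalContT $
    for t (\a -> shiftT (\k -> return (Cur a (k ())))) >> return Done

current :: Iterator m a -> Maybe a
current Done      = Nothing
current (Cur a _) = Just a

next :: Monad m => Iterator m a -> m (Iterator m a)
next Done      = return Done
next (Cur _ i) = i

finished :: Iterator m a -> Bool
finished Done = True
finished _    = False

main :: IO ()
main = begin (foldr insert Leaf [3,1,2 :: Int]) >>= walk
  where
    walk i
      | finished i = return ()
      | otherwise  = do maybe (return ()) print (current i)
                        next i >>= walk
```

Walking the iterator prints the elements in order (1, 2, 3 here), just as iterating with the CC-delcont version does.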
<br />
=== Breadth-first Traversal ===<br />
<br />
This example is an adaptation of an example from a set of slides by Olivier Danvy, [http://www.brics.dk/~danvy/delimited-continuations-blues.pdf Delimited-Continuations Blues]. It involves the traversal of a binary tree, so let's first define such a type:<br />
<br />
<haskell><br />
data Tree a = Node a (Tree a) (Tree a) | Leaf a<br />
<br />
t = Node 1 (Node 2 (Leaf 3)<br />
                   (Node 4 (Leaf 5)<br />
                           (Leaf 6)))<br />
           (Node 7 (Node 8 (Leaf 9)<br />
                           (Leaf 10))<br />
                   (Leaf 11))<br />
<br />
toList (Leaf i) = [i]<br />
toList (Node a t1 t2) = a : toList t1 ++ toList t2<br />
</haskell><br />
<br />
toList is a pre-order, depth-first traversal, and t is ordered so that such a traversal yields [1 .. 11]. Depth-first traversals are clearly the easiest to write in a language like Haskell, since recursive descent on the trees can be used. To perform a breadth-first traversal, one would likely keep a list of sub-trees at a given level, and pass through the list at each level, visiting roots, and producing a new list of the children one level down, which is a bit more bookkeeping. However, it turns out that delimited control allows one to write a breadth-first traversal in a recursive-descent style similar to the depth-first traversal (modulo the need for monads):<br />
<br />
<haskell><br />
visit :: MonadDelimitedCont p s m => p [a] -> Tree a -> m ()<br />
visit p = visit'<br />
where<br />
visit' (Leaf i) = control p $ \k -> (i:) `liftM` k (return ())<br />
visit' (Node i t1 t2) = control p $ \k -> do a <- k (return ())<br />
visit' t2<br />
visit' t1<br />
(i:) `liftM` return a<br />
<br />
bf :: MonadDelimitedCont p s m => Tree a -> m [a]<br />
bf t = reset $ \p -> visit p t >> return []<br />
</haskell><br />
<br />
And, a quick check shows that 'bf t' yields:<br />
<br />
[5,6,9,10,3,4,8,11,2,7,1]<br />
<br />
(Note that in this example, since elements are always pre-pended, the element visited first will be last in the list, and vice versa; so this is a pre-order breadth-first traversal).<br />
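For comparison, the conventional list-of-sub-trees bookkeeping alluded to above can be sketched like this (bfs and its helpers are names introduced here; it produces the standard order, first-visited first, whereas bf above lists the levels in the opposite order because results are pre-pended):

```haskell
data Tree a = Node a (Tree a) (Tree a) | Leaf a

-- Conventional breadth-first traversal: keep the list of sub-trees at
-- the current level, emit their roots, then descend one level.
bfs :: Tree a -> [a]
bfs t0 = go [t0]
  where
    go []    = []
    go level = map root level ++ go (concatMap children level)
    root (Leaf a)         = a
    root (Node a _ _)     = a
    children (Leaf _)     = []
    children (Node _ l r) = [l, r]

-- The same tree as above.
t :: Tree Int
t = Node 1 (Node 2 (Leaf 3)
                   (Node 4 (Leaf 5)
                           (Leaf 6)))
           (Node 7 (Node 8 (Leaf 9)
                           (Leaf 10))
                   (Leaf 11))

main :: IO ()
main = print (bfs t)
```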
<br />
So, how exactly does this work? As the slides say, the key idea is to "return before recursively traversing the subtrees." This is accomplished through the use of the delimited control operator 'control.' At the Node stage of a traversal, control is used to capture the sub-continuation that comes after said Node (which is, effectively, the traversal over the rest of the nodes at the same level). However, instead of descending depth-first style, that sub-continuation is immediately invoked, the result being called a. Only after that are the sub-trees descended into.<br />
<br />
It should be noted, also, that this particular example can be used to display a difference between 'shift' (the so-called 'static' delimited operator) and 'control' (which is one of the 'dynamic' operators). The difference between the two is that in 'shift p (\k -> e)' calls to k are delimited by the prompt p, whereas in control, they are not (in both, e is). This has important consequences. For instance, at some point in a traversal an evaluation may look something like:<br />
<br />
<haskell><br />
delimited (visit' t2 >> visit' t1)<br />
</haskell><br />
<br />
Which, using some simplified notation/traversal, expands to:<br />
<br />
<haskell><br />
delimited (control (\k -> k () >> visit' t22 >> visit' t21)<br />
>> control (\k -> k () >> visit' t12 >> visit' t11))<br />
</haskell><br />
<br />
Which, due to the effects of control, turns into:<br />
<br />
<haskell><br />
delimited ((control (\k -> k () >> visit' t12 >> visit' t11)) >> visit' t22 >> visit' t21)<br />
<br />
==><br />
<br />
delimited (visit' t22 >> visit' t21 >> visit' t12 >> visit' t11)<br />
</haskell><br />
<br />
In other words, using 'control' ends up building and executing a sequence of traversals at the same level, after the actions for the level above have been performed by the 'k ()'. The control operators of the lower level are then free to close over, and manipulate, all the visitations on their level. This is why the result is a breadth-first traversal. However, replacing control with shift, we get:<br />
<br />
<haskell><br />
delimited (visit' t2 >> visit' t1)<br />
<br />
==><br />
<br />
delimited ((shift (\k -> k () >> visit' t22 >> visit' t21))<br />
>> (shift (\k -> k () >> visit' t12 >> visit' t11)))<br />
<br />
==><br />
<br />
delimited (delimited (shift (\k -> k () >> visit' t12 >> visit' t11)) >> visit' t22 >> visit' t21)<br />
</haskell><br />
<br />
And already we can see a difference. The sub-traversal of t1 is now isolated, and control effects (via shift, at least) therein cannot affect the sub-traversal of t2. So, control effects no longer affect an entire level of the whole tree, and instead are localized to a given node and its descendants. In such a case, we end up with an ordinary depth-first traversal (although the sub-continuations allow the visitation of each node to look a bit different than toList, and since we're always pre-pending, as we get to a node, the results are reversed compared to toList).<br />
<br />
In any case, the desired result has been achieved: A slightly modified recursive descent traversal has allowed us to express breadth-first search (and depth-first search in the same style is a matter of substitution of control operators) without having to do the normal list-of-sub-trees sort of bookkeeping (although the added weight of working with delimited control may more than outweigh that).<br />
<br />
For a more in-depth discussion of the differences between shift, control and other, similar operators, see Shift to Control, cited below.<br />
<br />
=== Resumable Parsing ===<br />
<br />
Our next example concerns a Haskell version of a [http://caml.inria.fr/pub/ml-archives/caml-list/2007/07/7a34650001bf6876b71c7b1060ac501f.en.html post to the OCaml mailing list]. The translation was [http://www.mail-archive.com/haskell-cafe%40haskell.org/msg27177.html originally given] on the haskell-cafe mailing list, and complete code and some additional discussion can be found there.<br />
<br />
The problem is similar to the above iterator example. Specifically, we are in need of a parser that can take fragments of input at a time, suspending for more input after each fragment, until such time as it can be provided. However, there are already plenty of fine parsing libraries available, and ideally, we don't want to write a new library from scratch just to add this resumable feature.<br />
<br />
As it turns out, delimited continuations provide a fairly straightforward way to have our cake and eat it too in this case. First, we'll need a data type for the resumable parser.<br />
<br />
<haskell><br />
data Request m a = Done a | ReqChar (Maybe Char -> m (Request m a))<br />
</haskell><br />
<br />
Such a parser is either complete, or in a state of requesting more characters. Again, we'll have some convenience functions for working on the data type:<br />
<br />
<haskell><br />
provide :: Monad m => Char -> Request m a -> m (Request m a)<br />
provide _ d@(Done _) = return d<br />
provide c (ReqChar k) = k (Just c)<br />
<br />
provideString :: Monad m => String -> Request m a -> m (Request m a)<br />
provideString [] s = return s<br />
provideString (x:xs) s = provide x s >>= provideString xs<br />
<br />
finish :: Monad m => Request m a -> m (Request m a)<br />
finish d@(Done _) = return d<br />
finish (ReqChar k) = k Nothing<br />
</haskell><br />
<br />
So, 'provide' feeds a character into a parser, 'provideString' feeds in a string, and 'finish' informs the parser that there are no more characters to be had.<br />
<br />
Finally, we need to have some way of suspending parsing and waiting for characters. This is exactly what delimited continuations do for us. The hook we'll use to get control over the parser is through the character stream it takes as input:<br />
<br />
<haskell><br />
toList :: Monad m => m (Maybe a) -> m [a]<br />
toList gen = gen >>= maybe (return []) (\c -> liftM (c:) $ toList gen)<br />
<br />
streamInvert :: MonadDelimitedCont p s m => p (Request m a) -> m (Maybe Char)<br />
streamInvert p = shift p (\k -> return $ ReqChar (k . return))<br />
<br />
invertParse :: MonadDelimitedCont p s m => (String -> a) -> m (Request m a)<br />
invertParse parser = reset $ \p -> (Done . parser) `liftM` toList (streamInvert p)<br />
</haskell><br />
<br />
So, 'toList' simply takes a monadic action that may produce a character, and uses it to produce a list of characters (stopping when it sees a 'Nothing'). 'streamInvert' is just such a monadic, character-producing action (given a delimiter). Each time it is run, it captures a sub-continuation (here, 'the rest of the list generation'), and puts it in a Request object. We can then pass around the Request object, and feed characters in as desired (via 'provide' and 'provideString' above), gradually building the list of characters to be parsed.<br />
<br />
In the 'invertParse' method, this gradually produced list is fed through a parser (of type String -> a, so it doesn't need to know about the delimited continuation monad we're using), and the output of the parser is packaged in a finished (Done) Request object, so when we finally call 'finish', we will be able to access the results of the parser.<br />
<br />
For this example, the words function suffices as a parser:<br />
<br />
<haskell><br />
gradualParse :: [String]<br />
gradualParse = runCC $ do p1 <- invertParse words<br />
p2 <- provideString "The quick" p1<br />
p3 <- provideString " brown fox jum" p2<br />
p4 <- provideString "ps over the laz" p3<br />
p5 <- provideString "y dog" p4 >>= finish<br />
p6 <- provideString "iest dog" p4 >>= finish<br />
let (Done l1) = p5<br />
(Done l2) = p6<br />
return (l1 ++ l2)<br />
<br />
main :: IO ()<br />
main = mapM_ putStrLn gradualParse<br />
</haskell><br />
<br />
And we get output:<br />
<br />
The<br />
quick<br />
brown<br />
fox<br />
jumps<br />
over<br />
the<br />
lazy<br />
dog<br />
The<br />
quick<br />
brown<br />
fox<br />
jumps<br />
over<br />
the<br />
laziest<br />
dog<br />
<br />
So, the resumable parser works. It will pause at arbitrary places in the parse, even in the middle of tokens, and wait for more input. And one can resume a parse from any point to which a Request pointer is saved without interfering with other resumable parser objects.<br />
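Since only a single prompt p is involved here as well, the same stream inversion can be sketched with shiftT and evalContT from the transformers package instead of CC-delcont (the translation is mine; Request, provide, provideString, finish and toList are as in the text):

```haskell
import Control.Monad.Trans.Cont (ContT, evalContT, shiftT)

data Request m a = Done a | ReqChar (Maybe Char -> m (Request m a))

provide :: Monad m => Char -> Request m a -> m (Request m a)
provide _ d@(Done _)  = return d
provide c (ReqChar k) = k (Just c)

provideString :: Monad m => String -> Request m a -> m (Request m a)
provideString []     s = return s
provideString (x:xs) s = provide x s >>= provideString xs

finish :: Monad m => Request m a -> m (Request m a)
finish d@(Done _)  = return d
finish (ReqChar k) = k Nothing

toList :: Monad m => m (Maybe a) -> m [a]
toList gen = gen >>= maybe (return []) (\c -> fmap (c:) (toList gen))

-- Each demand for a character captures 'the rest of the list
-- generation' and suspends with it; evalContT acts as the delimiter.
streamInvert :: Monad m => ContT (Request m a) m (Maybe Char)
streamInvert = shiftT (\k -> return (ReqChar k))

invertParse :: Monad m => (String -> a) -> m (Request m a)
invertParse parser = evalContT (fmap (Done . parser) (toList streamInvert))

main :: IO ()
main = do r0 <- invertParse words
          r1 <- provideString "The quick bro" r0
          r2 <- provideString "wn fox" r1 >>= finish
          case r2 of
            Done ws -> mapM_ putStrLn ws
            _       -> return ()
```

As before, input can be cut at arbitrary points, even mid-token, and parsing resumes when more characters arrive.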
<br />
(A note: depending on what exactly one wants to do with such parsers, there are a few nits in the above implementation; it doesn't exactly match the semantics of the OCaml parser. For more information, see the linked mailing-list thread, which discusses the issues and their causes, and provides an alternate implementation (changing mostly the parser, not the delimited-continuation end) that matches the OCaml version much more closely.)<br />
<br />
== CC-delcont ==<br />
<br />
=== Installation ===<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>Doliohttps://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14467Library/CC-delcont2007-07-18T03:04:23Z<p>Dolio: resumable parsers</p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough that, if you have another implementation, they should be relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation [[type]]s from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== The basics ==<br />
<br />
=== Undelimited continuations ===<br />
<br />
If you've taken university courses in computer science, or done much investigation of [[language design]], you've probably encountered [[continuation]]s before. The author first recalls learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof; and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother, when, as far as language implementation concerns go, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?<br />
<br />
As far as usage goes, continuations are probably most closely associated with Scheme, and its call-with-current-continuation function (abbreviated to Haskell's version, callCC from now on), although many other languages have them (undelimited continuations for Haskell are provided by the [http://haskell.org/ghc/docs/latest/html/libraries/mtl/Control-Monad-Cont.html Cont monad and ContT transformer]). They're often regarded as being difficult to understand, as their use can cause very complex control flow patterns (much like GOTO, although more sophisticated), though reduced to their basics, they aren't that hard to understand.<br />
<br />
A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:<br />
<br />
<haskell><br />
m >>= f >>= g >>= h<br />
</haskell><br />
<br />
Here we have an ordinary [[:Category:Monad |monadic]] pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:<br />
<br />
<haskell><br />
\x -> f x >>= g >>= h<br />
</haskell><br />
<br />
The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using [[function]]s as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.<br />
<br />
However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:<br />
<br />
<haskell><br />
callCC (\k -> e) >>= f >>= g >>= h<br />
</haskell><br />
<br />
'k' will be set to a function that is something like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced with the value passed to 'k'. So k is not merely a normal function, but a way of feeding a value into into an execution context (and this is reflected in its monadic type: a -> Cont b).<br />
<br />
So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:<br />
<br />
<haskell><br />
prod l = callCC (\k -> loop k l)<br />
where<br />
loop _ [] = return 1<br />
loop k (0:_) = k 0<br />
loop k (x:xs) = do n <- loop k xs ; return (n*x)<br />
</haskell><br />
<br />
Under normal circumstances, the loop will simply multiply all the numbers. However, if a 0 is detected, there is no need to multiply anything, the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.<br />
<br />
=== Delimited continuations ===<br />
<br />
So, [[continuation]]s (hopefully) seem pretty clear, and at least theoretically useful. Where do delimited continuations come into the picture?<br />
<br />
The story (according to the hearsay the author has come across) goes back again to Scheme. As was mentioned earlier, callCC is often associated with it. Another thing closely associated with Scheme (and Lisp in general) is interactive environments in which code can be defined and run (much like our own [[Hugs]] and [[GHC/GHCi | GHCi]]). Naturally, it would be nice if such environments could themselves be written in Scheme.<br />
<br />
However, continuations in Scheme are not implemented as they are in Haskell. In Haskell, continuation using code is tagged with a monadic type, and one must use runCont(T) to run such computations, and the effects can't escape it. In Scheme, continuations are native, and all code can capture them, and capturing them captures not 'the rest of the Cont(T) computation,' but 'the rest of the program.' And if the interactive loop is written in Scheme, this includes the loop itself, so programs run within the session can affect the session itself.<br />
<br />
Now, this might be a minor nit, but it is a nit nonetheless, and luckily for us, it led to the idea of delimited continuations. The idea was, of course, to tag a point at which the interactive loop invoked some sub-program, and then control flow operators such as callCC would only be able to capture a portion of the program up to the marker. To the sub-program, this is all that's of interest anyhow. Such a setup would solve the issue nicely.<br />
<br />
However, once one has the ability to create such markers, why not put them in the hands of the programmer? Then, instead of them being able to capture 'the rest of the program's execution,' they would be able to delimit, capture and manipulate arbitrary portions of their programs. And indeed, such operations can be useful.<br />
<br />
== Samples ==<br />
<br />
=== Iterators ===<br />
<br />
So, what are delimited continuations good for? Well, suppose we have a binary tree data [[type]] like so:<br />
<br />
<haskell><br />
data Tree a = Leaf | Branch a (Tree a) (Tree a)<br />
<br />
empty = Leaf<br />
singleton a = Branch a Leaf Leaf<br />
<br />
insert b Leaf = Branch b Leaf Leaf<br />
insert b (Branch a l r)<br />
| b < a = Branch a (insert b l) r<br />
| otherwise = Branch a l (insert b r)<br />
<br />
fold :: (a -> b -> b -> b) -> b -> Tree a -> b<br />
fold f z Leaf = z<br />
fold f z (Branch a l r) = f a (fold f z l) (fold f z r)<br />
<br />
for :: Monad m => Tree a -> (a -> m b) -> m ()<br />
for t f = fold (\a l r -> l >> f a >> r) (return ()) t<br />
</haskell><br />
<br />
Now, we have a [[fold]] over our data type, and as shown, we can therefore write a monadic iteration function 'for' over it (this is actually done for arbitrary data types in <hask>Data.Foldable</hask>). The fold is a fine method of traversing the data structure to operate on elements in most cases. However, what if we wanted something more like an iterator object, which somehow captured the traversal of the tree, remembering what element we're currently at, and which come next?<br />
<br />
Well, it turns out one can build just such an object using continuations. It is indeed possible to build it using undelimited continuations, but it's rather complex to do so (I'll not include code that does, as I don't feel like figuring out all the details). However, it turns out it's easy using delimited continuations:<br />
<br />
<haskell><br />
data Iterator m a = Done | Cur a (m (Iterator m a))<br />
<br />
begin :: MonadDelimitedCont p s m => Tree a -> m (Iterator m a)<br />
begin t = reset $ \p -><br />
for t (\a -><br />
shift p (\k -> return (Cur a (k $ return ())))) >> return Done<br />
<br />
current :: Iterator m a -> Maybe a<br />
current Done = Nothing<br />
current (Cur a _) = Just a<br />
<br />
next :: Monad m => Iterator m a -> m (Iterator m a)<br />
next Done = return Done<br />
next (Cur _ i) = i<br />
<br />
finished :: Iterator m a -> Bool<br />
finished Done = True<br />
finished _ = False<br />
</haskell><br />
<br />
So, clearly, Iterator is the type of iterators over a tree. current, next and finished are some utility functions for operating on them. The interesting work is done in the begin function.<br />
<br />
There are two delimited control operators in play here. First is reset, which is a way to place a delimiter around a computation. The term 'p' is simply a way to reference that delimiter; the library I'm working with allows for many named delimiters to exist, and for control operators to specify which delimiters they're working with (so a control operator may capture the continuation up to p, even if it runs into a delimiter q sooner, provided p /= q).<br />
<br />
The other operator is shift, which is used to capture the delimited continuation. In many ways, it's like callCC, but with an important difference: it aborts the captured continuation. When callCC is called on a function f, if f returns normally, execution will pick up from just after the callCC. However, when shift is called, the continuation between the call and the enclosing prompt is packaged up (into 'k' here), and passed to the function, and a normal return will return to the place where the delimiter was set, not where shift was called.<br />
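The difference is easy to see in a tiny, self-contained example. Since CC-delcont may not be installed everywhere, this sketch uses the single-prompt shift and reset from Control.Monad.Trans.Cont in the standard transformers package rather than the named-prompt versions above; the names 'captured' and 'aborted' are mine:<br />
<br />
<haskell><br />
import Control.Monad.Trans.Cont (evalCont, reset, shift)<br />
<br />
-- The captured continuation k is 'multiply by 2, up to the enclosing reset'.<br />
captured :: Int<br />
captured = evalCont . reset $ fmap (2 *) (shift (\k -> return (1 + k 3)))<br />
-- k 3 = 6, so captured = 7<br />
<br />
-- Returning without invoking k discards the (2 *) context entirely.<br />
aborted :: Int<br />
aborted = evalCont . reset $ fmap (2 *) (shift (\_ -> return 42))<br />
-- aborted = 42, not 84<br />
</haskell><br />
<br />
Had callCC been used instead of shift, a plain return from the body would have fallen back into the (2 *) context; with shift, it returns straight to the reset.<br />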
<br />
With this in mind, we can begin to analyze the 'begin' function. First, it delimits a computation with the delimiter 'p'. Next, it begins to loop over the tree. For each element, we use 'shift' to capture "the rest of the loop", calling it 'k'. We then package that, and the current tree element, into an Iterator object, and return it. Since the shift has aborted the rest of the loop (for the time being), it returns to where 'reset' was called, and the function returns the iterator object (wrapped in a monad, of course).<br />
<br />
The main remaining piece of interest is when next goes to get the next element of the traversal. When this happens, 'k $ return ()' is executed, which invokes the captured continuation (with the value (), because the loop doesn't take the return value of the traversal function into account anyway). This, essentially, re-enters the loop. If there is a next element, then the traversal function is called with it, shift will once again capture 'the rest of the loop' (from a later point than before, though), and return an iterator object with the new current element and continuation. If there are no new elements, then control will pass out of the loop to the following computation, which is, in this case, 'return Done'. So in either case, an Iterator object is the result, and the types work out.<br />
<br />
We can test our iterator like so:<br />
<br />
<haskell><br />
main :: IO ()<br />
main = runCCT $ do t <- randomTree 10<br />
i <- begin t<br />
doStuff i<br />
where<br />
doStuff i<br />
| finished i = return ()<br />
| otherwise = do i' <- next i<br />
i'' <- next i<br />
liftIO $ print (fromJust $ current i :: Int)<br />
doStuff i'<br />
<br />
randomTree n = rt empty n<br />
where<br />
rt t 0 = return t<br />
rt t n = do r <- liftIO randomIO<br />
rt (insert r t) (n - 1)<br />
</haskell><br />
<br />
The output of which might go something like:<br />
<br />
-1937814587<br />
-1171184756<br />
-1068642732<br />
-741588272<br />
-553872051<br />
-499564662<br />
-421862876<br />
-59900888<br />
315891595<br />
1868487875<br />
<br />
The example shows one possibly interesting property: one can re-use old iterators without affecting new ones. In this case, we call 'next' on the same iterator twice, but it doesn't advance the iterator twice. Our iterators behave like an ordinary functional data structure, even though they're built out of somewhat out-of-the-ordinary components.<br />
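For comparison, the same construction works for lists with the single-prompt shiftT/resetT from transformers' Control.Monad.Trans.Cont, so it can be tried without installing CC-delcont. The names Iter, beginL, drain and demo below are mine:<br />
<br />
<haskell><br />
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)<br />
import Data.Functor.Identity (runIdentity)<br />
<br />
data Iter m a = IterDone | IterCur a (m (Iter m a))<br />
<br />
-- Each element suspends the traversal; k () resumes it from that point.<br />
beginL :: Monad m => [a] -> m (Iter m a)<br />
beginL xs = evalContT . resetT $ mapM_ yield xs >> return IterDone<br />
  where yield a = shiftT (\k -> return (IterCur a (k ())))<br />
<br />
-- Run an iterator to exhaustion, collecting the elements.<br />
drain :: Monad m => Iter m a -> m [a]<br />
drain IterDone      = return []<br />
drain (IterCur a i) = fmap (a :) (i >>= drain)<br />
<br />
demo :: [Int]<br />
demo = runIdentity (beginL [1,2,3] >>= drain)   -- [1,2,3]<br />
</haskell><br />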
<br />
=== Resumable Parsing ===<br />
<br />
Our next example concerns a Haskell version of a [http://caml.inria.fr/pub/ml-archives/caml-list/2007/07/7a34650001bf6876b71c7b1060ac501f.en.html post to the OCaml mailing list]. The translation was [http://www.mail-archive.com/haskell-cafe%40haskell.org/msg27177.html originally given] on the haskell-cafe mailing list, and complete code and some additional discussion can be found there.<br />
<br />
The problem is similar to the above iterator example. Specifically, we are in need of a parser that can take fragments of input at a time, suspending for more input after each fragment, until such time as it can be provided. However, there are already plenty of fine parsing libraries available, and ideally, we don't want to have to write a new library from scratch just to have this resumable parser feature.<br />
<br />
As it turns out, delimited continuations provide a fairly straightforward way to have our cake and eat it too in this case. First, we'll need a data type for the resumable parser.<br />
<br />
<haskell><br />
data Request m a = Done a | ReqChar (Maybe Char -> m (Request m a))<br />
</haskell><br />
<br />
Such a parser is either complete, or in a state of requesting more characters. Again, we'll have some convenience functions for working on the data type:<br />
<br />
<haskell><br />
provide :: Monad m => Char -> Request m a -> m (Request m a)<br />
provide _ d@(Done _) = return d<br />
provide c (ReqChar k) = k (Just c)<br />
<br />
provideString :: Monad m => String -> Request m a -> m (Request m a)<br />
provideString [] s = return s<br />
provideString (x:xs) s = provide x s >>= provideString xs<br />
<br />
finish :: Monad m => Request m a -> m (Request m a)<br />
finish d@(Done _) = return d<br />
finish (ReqChar k) = k Nothing<br />
</haskell><br />
<br />
So, 'provide' feeds a character into a parser, 'provideString' feeds in a string, and 'finish' informs the parser that there are no more characters to be had.<br />
<br />
Finally, we need to have some way of suspending parsing and waiting for characters. This is exactly what delimited continuations do for us. The hook we'll use to get control over the parser is through the character stream it takes as input:<br />
<br />
<haskell><br />
toList :: Monad m => m (Maybe a) -> m [a]<br />
toList gen = gen >>= maybe (return []) (\c -> liftM (c:) $ toList gen)<br />
<br />
streamInvert :: MonadDelimitedCont p s m => p (Request m a) -> m (Maybe Char)<br />
streamInvert p = shift p (\k -> return $ ReqChar (k . return))<br />
<br />
invertParse :: MonadDelimitedCont p s m => (String -> a) -> m (Request m a)<br />
invertParse parser = reset $ \p -> (Done . parser) `liftM` toList (streamInvert p)<br />
</haskell><br />
<br />
So, 'toList' simply takes a monadic action that may produce a character, and uses it to produce a list of characters (stopping when it sees a 'Nothing'). 'streamInvert' is just such a monadic, character-producing action (given a delimiter). Each time it is run, it captures a sub-continuation (here, 'the rest of the list generation'), and puts it in a Request object. We can then pass around the Request object, and feed characters in as desired (via 'provide' and 'provideString' above), gradually building the list of characters to be parsed.<br />
<br />
In the 'invertParse' function, this gradually produced list is fed through a parser (of type String -> a, so it doesn't need to know about the delimited continuation monad we're using), and the output of the parser is packaged in a finished (Done) Request object, so when we finally call 'finish', we will be able to access the results of the parser.<br />
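As with the iterator example, the same inversion can be reproduced with the single-prompt shiftT/resetT from transformers, which may be easier to experiment with than CC-delcont. The names Req, invert, feed and demo are mine, not the article's API:<br />
<br />
<haskell><br />
import Control.Monad (foldM)<br />
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)<br />
import Data.Functor.Identity (runIdentity)<br />
<br />
data Req m a = ReqDone a | NeedChar (Maybe Char -> m (Req m a))<br />
<br />
invert :: Monad m => (String -> a) -> m (Req m a)<br />
invert parser = evalContT . resetT $ fmap (ReqDone . parser) chars<br />
  where<br />
    chars  = demand >>= maybe (return []) (\c -> fmap (c:) chars)<br />
    demand = shiftT (\k -> return (NeedChar k))   -- suspend until a char arrives<br />
<br />
feed :: Monad m => Maybe Char -> Req m a -> m (Req m a)<br />
feed _  d@(ReqDone _) = return d<br />
feed mc (NeedChar k)  = k mc<br />
<br />
demo :: [String]<br />
demo = case runIdentity parsed of ReqDone ws -> ws<br />
                                  _          -> []<br />
  where parsed = do p0 <- invert words<br />
                    p1 <- foldM (flip feed) p0 (map Just "ab cd")<br />
                    feed Nothing p1<br />
-- demo = ["ab","cd"]<br />
</haskell><br />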
<br />
For this example, the words function suffices as a parser:<br />
<br />
<haskell><br />
gradualParse :: [String]<br />
gradualParse = runCC $ do p1 <- invertParse words<br />
p2 <- provideString "The quick" p1<br />
p3 <- provideString " brown fox jum" p2<br />
p4 <- provideString "ps over the laz" p3<br />
p5 <- provideString "y dog" p4 >>= finish<br />
p6 <- provideString "iest dog" p4 >>= finish<br />
let (Done l1) = p5<br />
(Done l2) = p6<br />
return (l1 ++ l2)<br />
<br />
main :: IO ()<br />
main = mapM_ putStrLn gradualParse<br />
</haskell><br />
<br />
And we get output:<br />
<br />
The<br />
quick<br />
brown<br />
fox<br />
jumps<br />
over<br />
the<br />
lazy<br />
dog<br />
The<br />
quick<br />
brown<br />
fox<br />
jumps<br />
over<br />
the<br />
laziest<br />
dog<br />
<br />
So, the resumable parser works. It will pause at arbitrary places in the parse, even in the middle of tokens, and wait for more input. And one can resume a parse from any point to which a Request pointer is saved without interfering with other resumable parser objects.<br />
<br />
(A note: depending on what exactly one wants to do with such parsers, there are a few nits in the above implementation, and it doesn't exactly match the semantics of the OCaml parser. For more information on this topic, see the linked mailing-list thread, which discusses the issues and their causes, and provides an alternate implementation (changing mostly the parser, not the delimited continuation end) that matches the OCaml version much more closely.)<br />
<br />
== CC-delcont ==<br />
<br />
=== Installation ===<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>
Dolio
https://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14441
Library/CC-delcont 2007-07-17T09:16:58Z
<p>Dolio: iterators</p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough so that if you have another implementation, they should be relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation types from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== The Basics ==<br />
<br />
=== Undelimited Continuations ===<br />
<br />
If you've taken university courses in computer science, or done much investigation of language design, you've probably encountered continuations before. The author first recalls learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof, and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother, when, as far as language implementation concerns go, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?<br />
<br />
As far as usage goes, continuations are probably most closely associated with Scheme and its call-with-current-continuation function (referred to by its Haskell name, callCC, from now on), although many other languages have them (undelimited continuations for Haskell are provided by the Cont monad and ContT transformer). They're often regarded as being difficult to understand, as their use can cause very complex control flow patterns (much like GOTO, although more sophisticated), though reduced to their basics, they aren't that hard to understand.<br />
<br />
A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:<br />
<br />
<haskell><br />
m >>= f >>= g >>= h<br />
</haskell><br />
<br />
Here we have an ordinary monadic pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:<br />
<br />
<haskell><br />
\x -> f x >>= g >>= h<br />
</haskell><br />
<br />
The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using functions as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.<br />
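To make the transform concrete, here is a tiny function written first directly and then, by hand, in continuation-passing style (the example and names are mine; 'id' plays the role of the empty continuation):<br />
<br />
<haskell><br />
-- Direct style<br />
pyth :: Int -> Int -> Int<br />
pyth x y = (x * x) + (y * y)<br />
<br />
-- After the CPS transform: every function takes an extra argument,<br />
-- the continuation, and 'returns' by calling it.<br />
squareC :: Int -> (Int -> r) -> r<br />
squareC x k = k (x * x)<br />
<br />
addC :: Int -> Int -> (Int -> r) -> r<br />
addC x y k = k (x + y)<br />
<br />
pythC :: Int -> Int -> (Int -> r) -> r<br />
pythC x y k = squareC x (\x2 -> squareC y (\y2 -> addC x2 y2 k))<br />
<br />
-- pythC 3 4 id evaluates to 25, the same as pyth 3 4<br />
</haskell><br />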
<br />
However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:<br />
<br />
<haskell><br />
callCC (\k -> e) >>= f >>= g >>= h<br />
</haskell><br />
<br />
'k' will be set to a function that is something like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced with the value passed to 'k'. So k is not merely a normal function, but a way of feeding a value into an execution context (and this is reflected in its monadic type: a -> Cont b).<br />
<br />
So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:<br />
<br />
<haskell><br />
prod l = callCC (\k -> loop k l)<br />
where<br />
loop _ [] = return 1<br />
loop k (0:_) = k 0<br />
loop k (x:xs) = do n <- loop k xs ; return (n*x)<br />
</haskell><br />
<br />
Under normal circumstances, the loop will simply multiply all the numbers. However, if a 0 is detected, there is no need to multiply anything, the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.<br />
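This example runs as-is with the standard Cont monad; all that's needed is a wrapper that enters and leaves the monad. A sketch using callCC and evalCont from transformers' Control.Monad.Trans.Cont (the mtl Control.Monad.Cont module works equally well):<br />
<br />
<haskell><br />
import Control.Monad.Trans.Cont (Cont, callCC, evalCont)<br />
<br />
-- Invoking k jumps past every pending multiplication to the callCC site.<br />
prod :: [Int] -> Int<br />
prod l = evalCont (callCC (\k -> loop k l))<br />
  where<br />
    loop _ []     = return 1<br />
    loop k (0:_)  = k 0<br />
    loop k (x:xs) = do n <- loop k xs ; return (n * x)<br />
</haskell><br />
<br />
So prod [1,2,3,4] is 24, while prod [5,0,7] is 0 without performing a single multiplication.<br />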
<br />
=== Delimited Continuations ===<br />
<br />
So, continuations (hopefully) seem pretty clear, and at least theoretically useful. Where do delimited continuations come into the picture?<br />
<br />
The story (according to the hearsay the author has come across) goes back again to Scheme. As was mentioned earlier, callCC is often associated with it. Another thing closely associated with Scheme (and Lisp in general) is interactive environments in which code can be defined and run (much like our own Hugs and GHCi). Naturally, it would be nice if such environments could themselves be written in Scheme.<br />
<br />
However, continuations in Scheme are not implemented as they are in Haskell. In Haskell, continuation-using code is tagged with a monadic type, and one must use runCont(T) to run such computations; the effects can't escape it. In Scheme, continuations are native, and all code can capture them, and capturing them captures not 'the rest of the Cont(T) computation,' but 'the rest of the program.' And if the interactive loop is written in Scheme, this includes the loop itself, so programs run within the session can affect the session itself.<br />
<br />
Now, this might be a minor nit, but it is a nit nonetheless, and luckily for us, it led to the idea of delimited continuations. The idea was, of course, to tag a point at which the interactive loop invoked some sub-program, and then control flow operators such as callCC would only be able to capture a portion of the program up to the marker. To the sub-program, this is all that's of interest anyhow. Such a setup would solve the issue nicely.<br />
<br />
However, once one has the ability to create such markers, why not put them in the hands of the programmer? Then, instead of them being able to capture 'the rest of the program's execution,' they would be able to delimit, capture and manipulate arbitrary portions of their programs. And indeed, such operations can be useful.<br />
<br />
</div>
Dolio
https://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14439
Library/CC-delcont 2007-07-17T07:09:15Z
<p>Dolio: </p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough so that if you have another implementation, they should be relatively straight forward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation types from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== The Basics ==<br />
<br />
=== Undelimited Continuations ===<br />
<br />
If you've taken university courses in computer science, or done much investigation of language design, you've probably encountered continuations before. The author first recalls learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof; and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother, when, as far as language implementation concerns go, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?<br />
<br />
As far as usage goes, continuations are probably most closely associated with Scheme, and its call-with-current-continuation function (abbreviated to Haskell's version, callCC from now on), although many other languages have them (undelimited continuations for Haskell are provided by the Cont monad and ContT transformer). They're often regarded as being difficult to understand, as their use can cause very complex control flow patterns (much like GOTO, although more sophisticated), though reduced to their basics, they aren't that hard to understand.<br />
<br />
A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:<br />
<br />
<haskell><br />
m >>= f >>= g >>= h<br />
</haskell><br />
<br />
Here we have an ordinary monadic pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:<br />
<br />
<haskell><br />
\x -> f x >>= g >>= h<br />
</haskell><br />
<br />
The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using functions as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.<br />
<br />
However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:<br />
<br />
<haskell><br />
callCC (\k -> e) >>= f >>= g >>= h<br />
</haskell><br />
<br />
'k' will be set to a function that is something like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced with the value passed to 'k'. So k is not merely a normal function, but a way of feeding a value into into an execution context (and this is reflected in its monadic type: a -> Cont b).<br />
<br />
So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:<br />
<br />
<haskell><br />
prod l = callCC (\k -> loop k l)<br />
where<br />
loop _ [] = return 1<br />
loop k (0:_) = k 0<br />
loop k (x:xs) = do n <- loop k xs ; return (n*x)<br />
</haskell><br />
<br />
Under normal circumstances, the loop will simply multiply all the numbers. However, if a 0 is detected, there is no need to multiply anything, the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.<br />
<br />
=== Delimited Continuations ===<br />
<br />
So, continuations (hopefully) seem pretty clear, and at least theoretically useful. Where do delimited continuations come into the picture.<br />
<br />
The story (according to the hearsay the author has come across) goes back again to Scheme. As was mentioned earlier, callCC is often associated with it. Another thing closely associated with Scheme (and Lisp in general) is interactive environments in which code can be defined and run (much like our own Hugs and GHCi). Naturally, it would be nice if such environments could themselves be written in Scheme.<br />
<br />
However, continuations in Scheme are not implemented as they are in Haskell. In Haskell, continuation-using code is tagged with a monadic type, one must use runCont(T) to run such computations, and the effects can't escape it. In Scheme, continuations are native: all code can capture them, and capturing one captures not 'the rest of the Cont(T) computation,' but 'the rest of the program.' And if the interactive loop is written in Scheme, this includes the loop itself, so programs run within the session can affect the session itself.<br />
<br />
Now, this might be a minor nit, but it is a nit nonetheless, and luckily for us, it led to the idea of delimited continuations. The idea was, of course, to tag a point at which the interactive loop invoked some sub-program, and then control flow operators such as callCC would only be able to capture a portion of the program up to the marker. To the sub-program, this is all that's of interest anyhow. Such a setup would solve the issue nicely.<br />
<br />
However, once one has the ability to create such markers, why not put them in the hands of the programmer? Then, instead of capturing 'the rest of the program's execution,' programmers would be able to delimit, capture and manipulate arbitrary portions of their programs. And indeed, such operations can be useful.<br />
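As a small taste of programmer-placed delimiters, here is a sketch using the resetT and shiftT operators from the transformers package's Control.Monad.Trans.Cont (not CC-delcont itself, though its operators are analogous); the example computation is invented for illustration:<br />

```haskell
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)
import Control.Monad.Trans.Class (lift)

-- resetT places a delimiter; shiftT captures only the portion of the
-- computation between itself and the nearest enclosing resetT,
-- here the context \x -> return (x + 1).
example :: IO Int
example = evalContT $ resetT $ do
  x <- shiftT $ \k -> do
    a <- lift (k 10)  -- run the delimited context with 10, yielding 11
    lift (k a)        -- feed 11 back in again, yielding 12
  return (x + 1)

main :: IO ()
main = example >>= print  -- prints 12
```

Because the captured continuation is an ordinary function here (Int -> IO Int), it can be invoked several times, something callCC's escaping continuations do not readily allow.<br />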
<br />
== CC-delcont ==<br />
<br />
== Installation ==<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More Information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>Doliohttps://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14437Library/CC-delcont2007-07-17T06:38:58Z<p>Dolio: undelimited continuations</p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough that, if you have another implementation, they should be relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation types from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== The Basics ==<br />
<br />
=== Undelimited Continuations ===<br />
<br />
If you've taken university courses in computer science, or done much investigation of language design, you've probably encountered continuations before. The author first recalls learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof; and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother, when, as far as language implementation concerns go, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?<br />
<br />
As far as usage goes, continuations are probably most closely associated with Scheme and its call-with-current-continuation function (referred to by the name of its Haskell counterpart, callCC, from now on), although many other languages have them (undelimited continuations for Haskell are provided by the Cont monad and the ContT transformer). They're often regarded as difficult to understand, as their use can cause very complex control flow patterns (much like GOTO, although more sophisticated), though, reduced to their basics, they aren't that hard to understand.<br />
<br />
A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:<br />
<br />
m >>= f >>= g >>= h<br />
<br />
Here we have an ordinary monadic pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:<br />
<br />
\x -> f x >>= g >>= h<br />
<br />
The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using functions as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.<br />
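To see the transform concretely, here is a hand-written sketch with continuations passed explicitly as plain functions; the computations m, f, g and h below are invented purely for illustration:<br />

```haskell
-- A computation in continuation-passing style takes 'the rest of
-- the program' as an extra functional argument.
type CPS r a = (a -> r) -> r

-- Illustrative computations, written directly in CPS.
m :: CPS r Int
m k = k 1

f, g, h :: Int -> CPS r Int
f x k = k (x + 1)  -- successor
g x k = k (x * 2)  -- double
h x k = k (x - 3)  -- subtract three

-- The CPS analogue of the pipeline m >>= f >>= g >>= h.
pipeline :: CPS r Int
pipeline k = m (\a -> f a (\b -> g b (\c -> h c k)))

main :: IO ()
main = pipeline print  -- ((1 + 1) * 2) - 3 = 1, so this prints 1
```

Each computation finishes by handing its result to its continuation; nesting the lambdas as above is exactly what the CPS transform mechanizes.<br />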
<br />
However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:<br />
<br />
callCC (\k -> e) >>= f >>= g >>= h<br />
<br />
'k' will be set to a function that behaves like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced by the value passed to 'k'. So 'k' is not merely a normal function, but a way of feeding a value into an execution context (and this is reflected in its monadic type: a -> Cont r b).<br />
<br />
So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:<br />
<br />
<haskell><br />
prod l = callCC (\k -> loop k l)<br />
  where<br />
  loop _ []     = return 1<br />
  loop k (0:_)  = k 0<br />
  loop k (x:xs) = do n <- loop k xs ; return (n*x)<br />
</haskell><br />
<br />
Under normal circumstances, the loop simply multiplies all the numbers. However, if a 0 is detected, there is no need to multiply anything: the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.<br />
<br />
== CC-delcont ==<br />
<br />
== Installation ==<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More Information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>Doliohttps://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14431Library/CC-delcont2007-07-17T04:30:31Z<p>Dolio: intro</p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough that, if you have another implementation, they should be relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation types from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== CC-delcont ==<br />
<br />
== Installation ==<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More Information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>Doliohttps://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=14430Library/CC-delcont2007-07-17T04:05:24Z<p>Dolio: init</p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== CC-delcont ==<br />
<br />
== Installation ==<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More Information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>Doliohttps://wiki.haskell.org/index.php?title=Library&diff=14427Library2007-07-17T03:02:47Z<p>Dolio: CC-delcont</p>
<hr />
<div>[[Alternatives and extensions for libraries]]<br />
<br />
Wiki pages with documentation for some libraries:<br />
<br />
*[[Library/AltBinary]] - binary I/O and serialization<br />
*[[Library/ArrayRef]] - arrays and references<br />
*[[Library/CC-delcont]] - delimited continuations and applications thereof<br />
*[[Library/Compression]] - interface to best available C compression libraries<br />
*[[Library/Core]] - [[GHC]]'s core library.<br />
*[[Library/IO]] - a proposal for development of a new standard low-level IO library.<br />
*[[Library/New collections]] - A modern package of collections types.<br />
*[[Library/Streams]] - fast extensible general I/O library.<br />
*[[Library/VTY]] - A very simple terminal interface library.<br />
<br />
[[Category:Libraries]]</div>Doliohttps://wiki.haskell.org/index.php?title=Haskell_Cafe_migration&diff=14293Haskell Cafe migration2007-07-14T00:46:53Z<p>Dolio: Dan Doel</p>
<hr />
<div>Often people post wonderful material to the mailing lists. This can<br />
later be hard to find. The goal of this page is to collect a list of <br />
people who are happy for their contributions to be added directly to<br />
the Haskell wiki.<br />
<br />
If you are happy for your contributions (both new and old posts) on the<br />
Haskell mailing lists to be relicensed and moved to the new wiki when<br />
appropriate, please add your name to this list, so that others may move<br />
your contributions without fear.<br />
<br />
Contributions will be licensed specifically under a<br />
[[HaskellWiki:Copyrights|Simple Permissive License]].<br />
<br />
* Derek Elkins<br />
* Don Stewart<br />
* Stefan O'Rear<br />
* Tim Chevalier (aka Kirsten)<br />
* Brandon Allbery<br />
* Dan Doel<br />
<br />
[[Category:Community]]</div>Dolio