<div>=Why Attribute Grammars Matter=<br />
:''by Wouter Swierstra for The Monad.Reader Issue Four''; 01-07-05<br />
<br />
==Introduction==<br />
Almost twenty years have passed since John Hughes's influential paper [http://www.math.chalmers.se/~rjmh/Papers/whyfp.html Why Functional Programming Matters]. Around the same time, the first work on<br />
attribute grammars and their relation to functional programming<br />
appeared. Despite the growing popularity of functional programming,<br />
attribute grammars remain far less renowned.<br />
<br />
The purpose of this article is twofold. On the one hand it illustrates how<br />
functional programming sometimes scales poorly and how<br />
attribute grammars can remedy these problems. On the other hand it aims to<br />
provide a gentle introduction to attribute grammars for seasoned functional<br />
programmers.<br />
<br />
==The problem==<br />
John Hughes argues that with the increasing complexity of modern<br />
software systems, modularity has become of paramount importance to software<br />
development. Functional languages provide new kinds of ''glue'' that create<br />
new opportunities for more modular code. In particular, Hughes stresses<br />
the importance of higher-order functions and lazy evaluation. There are<br />
plenty of examples where this works nicely - yet situations arise where<br />
the glue that functional programming provides somehow isn't quite enough.<br />
<br />
Perhaps a small example is in order. Suppose we want to write a function<br />
<tt>diff :: [Float] -> [Float]</tt> that, given a list <tt>xs</tt>, computes a new list where every element <tt>x</tt> is replaced by the difference between <tt>x</tt> and the<br />
average of <tt>xs</tt>. For instance, the average of <tt>[1.0, 2.0, 3.0]</tt> is <tt>2.0</tt>, so <tt>diff</tt> should return <tt>[-1.0, 0.0, 1.0]</tt>. Similar problems pop up in any library for performing<br />
statistical calculations.<br />
<br />
===Higher-order functions===<br />
Let's tackle the problem with some of Haskell's most powerful glue - higher-order functions. Any beginning Haskell programmer should be able to concoct the solution presented in Listing One. The average is computed using functions from the Prelude. The obvious function using this average is then mapped over the original list. So far, so good.<br />
<br />
<haskell><br />
--- Listing One ---<br />
<br />
import Data.List (genericLength)<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs = map (\x -> x - (avg xs)) xs<br />
<br />
avg :: [Float] -> Float<br />
avg xs = sum xs / genericLength xs<br />
</haskell><br />
<br />
There are, however, a few things swept under the rug in this example. First<br />
of all, this simple problem requires three traversals of the original<br />
list. Computing additional values from the original list will require even<br />
more traversals.<br />
<br />
Secondly, the solution is so concise because it depends on Prelude<br />
functions. If the values were stored in a slightly different data structure,<br />
the solution would require a lot of tedious work. We could, of course,<br />
define our own higher-order functions, such as <tt>map</tt> and <tt>fold</tt>, or even<br />
resort to generic programming. There are, however,<br />
more ways to skin this particular cat.<br />
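<br />
To see the tedium, suppose the values lived in a binary tree instead of a list. (The <tt>Tree</tt> type and the helper names below are illustrative, not from the original article.) Before the one-liner comes back, we first have to hand-roll the fold and map that the Prelude gave us for free:<br />

```haskell
data Tree = Leaf Float | Node Tree Tree
  deriving (Eq, Show)

-- A hand-rolled fold over trees: what foldr is to lists.
foldT :: (Float -> a) -> (a -> a -> a) -> Tree -> a
foldT leaf node (Leaf x)   = leaf x
foldT leaf node (Node l r) = node (foldT leaf node l) (foldT leaf node r)

mapT :: (Float -> Float) -> Tree -> Tree
mapT f (Leaf x)   = Leaf (f x)
mapT f (Node l r) = Node (mapT f l) (mapT f r)

-- The same diff, now over trees: again three traversals.
diffT :: Tree -> Tree
diffT t = mapT (\x -> x - avgT t) t
  where avgT u = foldT id (+) u / foldT (const 1.0) (+) u
```

For example, <tt>diffT (Node (Leaf 1.0) (Leaf 3.0))</tt> evaluates to <tt>Node (Leaf (-1.0)) (Leaf 1.0)</tt>, since the average is <tt>2.0</tt>.<br />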
<br />
This problem illustrates the sheer elegance of functional programming. We<br />
do pay a price for the succinctness of the solution. Multiple traversals<br />
and boilerplate code can both be quite a headache. If we want to perform<br />
complex computations over custom data structures, we may want to consider an<br />
alternative approach.<br />
<br />
Fortunately, as experienced functional programmers, we have another card up<br />
our sleeve.<br />
<br />
===Lazy evaluation===<br />
The second kind of glue that functional programming provides is lazy<br />
evaluation. In essence, lazy evaluation delays computing an expression<br />
until its value is actually needed.<br />
<br />
In particular, lazy evaluation enables the definition of ''circular programs'' that bear a dangerous resemblance to undefined values. Circular<br />
programs tuple separate computations, relying on lazy evaluation to feed<br />
the results of one computation to another.<br />
<br />
In our example, we could simply compute the length and sum of the list at<br />
the same time:<br />
<br />
<haskell><br />
average :: [Float] -> Float<br />
average xs =<br />
  let<br />
    nil          = (0.0, 0.0)<br />
    cons x (s,l) = (x + s, 1.0 + l)<br />
    -- sum and length shadow the Prelude functions here<br />
    (sum, length) = foldr cons nil xs<br />
  in<br />
    sum / length<br />
</haskell><br />
<br />
We can eliminate traversals by tupling computations! Can we compute the<br />
resulting list at the same time as computing the sum and length? Let's try:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
  let<br />
    nil             = (0.0, 0.0, [])<br />
    cons x (s,l,rs) = (x + s, 1.0 + l, (x - ....) : rs)<br />
    (sum, length, res) = foldr cons nil xs<br />
  in<br />
    res<br />
</haskell><br />
<br />
We run into trouble when we try to use the average to construct the<br />
resulting list. The problem is that we haven't computed the average, but<br />
somehow want to use it during the traversal. To solve this, we don't actually<br />
compute the resulting list, but rather compute a function taking the<br />
average to the resulting list:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
  let<br />
    nil             = (0.0, 0.0, \avg -> [])<br />
    cons x (s,l,rs) = (x + s, 1.0 + l, \avg -> (x - avg) : rs avg)<br />
    (sum, length, res) = foldr cons nil xs<br />
  in<br />
    res (sum / length)<br />
</haskell><br />
<br />
We can generalize this idea a bit further. Suppose that we want to compute<br />
other values that use the average. We could just add an <tt>avg</tt> argument to<br />
every element of the tuple that needs the average. It is a bit nicer,<br />
however, to lift the <tt>avg</tt> argument outside the tuple. Our final listing<br />
now becomes:<br />
<br />
<haskell><br />
--- Listing Two ---<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
  let<br />
    nil avg = (0.0, 0.0, [])<br />
    cons x fs avg =<br />
      let<br />
        (s, l, ds) = fs avg<br />
      in<br />
        (s + x, l + 1.0, x - avg : ds)<br />
    (sum, length, ds) = foldr cons nil xs (sum / length)<br />
  in<br />
    ds<br />
</haskell><br />
<br />
Now every element of the tuple can refer to the average, rather than just<br />
the final list.<br />
<br />
This ''credit card transformation'' eliminates multiple traversals by<br />
tupling computations. We use the average without worrying if we have<br />
actually managed to compute it. When we actually write the fold, however,<br />
we have to put our average where our mouth is. Fortunately, the <tt>sum</tt> and<br />
<tt>length</tt> don't depend on the average, so we are free to use these values to<br />
tie the recursive knot.<br />
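<br />
As a quick sanity check, here is Listing Two restated as a self-contained snippet, with <tt>sum</tt> and <tt>length</tt> renamed to <tt>total</tt> and <tt>len</tt> (my renaming, to avoid shadowing the Prelude):<br />

```haskell
diff :: [Float] -> [Float]
diff xs =
  let nil avg       = (0.0, 0.0, [])
      cons x fs avg =
        let (s, l, ds) = fs avg
        in (s + x, l + 1.0, x - avg : ds)
      -- tying the knot: the fold's own total and len feed the average
      (total, len, ds) = foldr cons nil xs (total / len)
  in ds
```

Loading this into GHCi, <tt>diff [1.0, 2.0, 3.0]</tt> evaluates to <tt>[-1.0, 0.0, 1.0]</tt>: the circular definition terminates because <tt>total</tt> and <tt>len</tt> never force the average.<br />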
<br />
The code in Listing Two only needs a single traversal and one<br />
higher-order function. It apparently solves the problems with the code in<br />
Listing One.<br />
<br />
Hold on a minute. Whatever happened to the elegance of our previous<br />
solution? Our second solution appears to have sacrificed clarity for the<br />
sake of efficiency. Who in their right minds would want to write the code<br />
in Listing Two? I wouldn't. Maybe, just maybe, we can do a bit better.<br />
<br />
==Attribute Grammars==<br />
Before even explaining what an attribute grammar is, think back to when you<br />
first learned about ''folds''. Initially, a fold seems like a silly<br />
abstraction. Why should I bother writing simple functions as folds? After<br />
all, I already know how to write the straightforward solution. It's only<br />
after a great deal of experience with functional programming that you learn<br />
to recognize folds as actually being a worthwhile abstraction. Learning<br />
about attribute grammars is similar in more ways than one.<br />
<br />
So what are attribute grammars? I'll have a bit more to say about that<br />
later. For now, let's see what the attribute grammar solution to our<br />
running example looks like.<br />
<br />
===The attribute grammar solution===<br />
I'll introduce attribute grammars using the syntax of the [http://www.cs.uu.nl/wiki/bin/view/HUT/AttributeGrammarSystem Utrecht University Attribute Grammar] system or UUAG for short. The UUAG system takes a file<br />
containing an attribute grammar definition and generates a Haskell module<br />
containing ''semantic functions'', determined by the attribute grammar. The<br />
attribute grammar determines a computation over some data structure; the<br />
semantic functions correspond to the actual Haskell functions that perform<br />
the computation.<br />
<br />
Although the UUAG system's syntax closely resembles Haskell, it is<br />
important to realize that the UUAG system is a Haskell pre-processor and<br />
not a complete Haskell compiler.<br />
<br />
So what does an attribute grammar file look like? Well, first of all we<br />
have to declare the data structure we're working with. In our example, we<br />
simply have a list of Floats.<br />
<br />
<haskell><br />
--- Listing Three ---<br />
<br />
DATA Root<br />
  | Root list : List<br />
<br />
DATA List<br />
  | Nil<br />
  | Cons head : Float  tail : List<br />
</haskell><br />
<br />
Datatypes are declared with the keyword <tt>DATA</tt>, followed by a list of<br />
constructors. Every node explicitly gives the name and type of all its<br />
children. In our example we have an empty list <tt>Nil</tt> and a list constructor<br />
<tt>Cons</tt> with two children, <tt>head</tt> and <tt>tail</tt>. For reasons that will become<br />
apparent later on, we add an additional datatype corresponding to the root<br />
of our list.<br />
<br />
So now that we've declared our datatype, let's add some ''attributes''. If we<br />
want to compute the average element, we'll need the length of the<br />
list. Listing Four introduces our first attribute corresponding to<br />
a list's length.<br />
<br />
<haskell><br />
--- Listing Four ---<br />
<br />
ATTR List [ || length : Float ]<br />
<br />
SEM List<br />
  | Nil  lhs.length = 0.0<br />
  | Cons lhs.length = 1.0 + @tail.length<br />
</haskell><br />
<br />
Let's go over the code line by line.<br />
<br />
An attribute has to be declared before it can actually be defined. An<br />
attribute is declared using the <tt>ATTR</tt> statement. This example declares a<br />
single ''synthesized'' attribute called <tt>length</tt> of type <tt>Float</tt>. A<br />
synthesized attribute is typically a value you are trying to compute bottom<br />
up. Synthesized attributes are declared to the right of the second<br />
vertical bar. We'll see other kinds of attributes shortly.<br />
<br />
Now that we've declared our first attribute, we can actually define it. A<br />
<tt>SEM</tt> statement begins by declaring for which data type attributes are<br />
being defined. In our example we want to define an attribute on a <tt>List</tt>,<br />
hence we write <tt>SEM List</tt>. We can subsequently give attribute definitions<br />
for the constructors of our <tt>List</tt> data type.<br />
<br />
Every attribute definition consists of several parts. We begin by<br />
mentioning the constructor for which we define an attribute. In our example<br />
we give two definitions, one for <tt>Nil</tt> and one for <tt>Cons</tt>.<br />
<br />
The second part of the attribute definition describes which attribute is<br />
being defined. In our example we define the attribute <tt>length</tt> for the<br />
''left-hand side'', or <tt>lhs</tt>. A lot of the terminology associated with<br />
attribute grammars comes from the world of context-free grammars. As this<br />
tutorial focuses on functional programmers, rather than formal language<br />
gurus, feel free to read <tt>lhs</tt> as "parent node". It seems a bit odd to<br />
write <tt>lhs.length</tt> explicitly, but we'll see later on why merely writing<br />
<tt>length</tt> doesn't suffice.<br />
<br />
So far, all we've said is that these two definitions define the <tt>length</tt><br />
of <tt>Nil</tt> and <tt>Cons</tt>. We still have to fill in the necessary definitions. The<br />
actual definition of the attributes takes place to the right of the equals<br />
sign. Programmers are free to write any valid Haskell expression. In fact,<br />
the UUAG system does not analyse the attribute definitions at all, but merely<br />
copies them straight into the resulting Haskell module. In our example, we<br />
want the length of the empty list to be <tt>0.0</tt>. The case for <tt>Cons</tt> is a bit<br />
trickier.<br />
<br />
In the <tt>Cons</tt> case we want to increment the length computed so far. To<br />
do so we need to be able to refer to other attributes. In particular we<br />
want to refer to the <tt>length</tt> attribute of the tail. The expression<br />
<tt>@tail.length</tt> does just that. In general, you're free to refer to any<br />
synthesized attribute ''attr'' of a child node ''c'' by writing <tt>@c.attr</tt>.<br />
<br />
The <tt>length</tt> attribute can be depicted pictorially as follows:<br />
<br />
[[Image:WAGM-Avg.png|The length attribute]]<br />
<br />
'''Exercise:''' Declare and define a synthesized attribute <tt>sum</tt> that<br />
computes the sum of a <tt>List</tt>. You can refer to a value <tt>val</tt> stored at a node as <tt>@val</tt>. For instance, write <tt>@head</tt> to refer to the float stored in a <tt>Cons</tt> node. Draw the corresponding picture if you're stuck.<br />
<br />
Now that we've defined <tt>length</tt> and <tt>sum</tt>, let's compute the average. We'll know the sum and the length of the entire list at the <tt>Root</tt> node. Using those attributes we can compute the average and ''broadcast'' it through the rest of the list. Let's start with the picture this time:<br />
<br />
[[Image:WAGM-Length.png|The average attribute]]<br />
<br />
The previous synthesized attributes, <tt>length</tt> and <tt>sum</tt>, defined bottom-up<br />
computations. We're now in the situation, however, where we want to pass<br />
information through the tree from a parent node to its child nodes using an<br />
''inherited'' attribute. Listing Five defines an inherited attribute <tt>avg</tt><br />
that corresponds to the picture we just drew.<br />
<br />
<haskell><br />
--- Listing Five ---<br />
ATTR List [ avg : Float || ]<br />
<br />
SEM Root<br />
  | Root list.avg = @list.sum / @list.length<br />
<br />
SEM List<br />
  | Cons tail.avg = @lhs.avg<br />
<br />
</haskell><br />
<br />
Inherited attributes are declared to the left of the two vertical<br />
bars. Once we've declared an inherited attribute <tt>avg</tt> on lists, we're<br />
obliged to define how every constructor passes an <tt>avg</tt> to its children of<br />
type <tt>List</tt>.<br />
<br />
In our example, there are only two constructors with children<br />
of type <tt>List</tt>, namely <tt>Root</tt> and <tt>Cons</tt>. At the <tt>Root</tt> we compute the<br />
average, using the synthesized attributes <tt>sum</tt> and <tt>length</tt>, and pass the<br />
result to the <tt>list</tt> child. At the <tt>Cons</tt> node, we merely copy down the<br />
<tt>avg</tt> we received from our parent. Analogous to synthesized attributes, we<br />
can refer to an inherited attribute <tt>attr</tt> by writing <tt>@lhs.attr</tt>.<br />
<br />
Admittedly, this inherited attribute is not terribly interesting. There are<br />
plenty of other examples, however, where an inherited attribute represents<br />
important contextual information. Think of passing around the set of<br />
assumptions when writing a type checker, for instance.<br />
<br />
'''Exercise:''' To complete the attribute grammar, define an attribute<br />
<tt>res</tt> that computes the resulting list. Should it be inherited or<br />
synthesized? You may want to draw a picture.<br />
<br />
===Running the UUAG===<br />
Now suppose you've completed the exercises and copied the examples in a<br />
single file called <tt>Diff.ag</tt>. How do we actually use the attribute grammar?<br />
This is where the UUAG compiler steps in. Running the UUAG compiler on the<br />
source attribute grammar file generates a new <tt>Diff.hs</tt> file, which we can<br />
then compile like any other Haskell file.<br />
<br />
<haskell><br />
> uuagc -a Diff.ag<br />
> ghci Diff.hs<br />
</haskell><br />
<br />
The <tt>Diff.hs</tt> file contains several ingredients.<br />
<br />
Firstly, new Haskell datatypes are generated corresponding to <tt>DATA</tt><br />
declarations in the attribute grammar. For every generated datatype a<br />
corresponding <tt>fold</tt> is generated. The attribute definitions determine the<br />
arguments passed to the folds. Browsing through the generated code can<br />
actually be quite instructive.<br />
<br />
Inherited attributes are passed to recursive calls of the fold. Synthesized<br />
attributes are tupled and returned as the result of the computation. In<br />
essence, we've reproduced our original solution in Listing Two - but now<br />
without the hassle associated with spelling out [[catamorphism]]s with a higher<br />
order domain and a compound codomain.<br />
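<br />
To give a flavour of what gets generated, here is a rough sketch for our grammar; the names and exact shapes are illustrative, not the literal uuagc output. The inherited <tt>avg</tt> becomes an argument of the semantic function, and the synthesized attributes come back tupled:<br />

```haskell
-- Semantic function type for List: inherited attribute (avg) in,
-- synthesized attributes (sum, length, res) out.
type T_List = Float -> (Float, Float, [Float])

sem_List_Nil :: T_List
sem_List_Nil _avg = (0.0, 0.0, [])

sem_List_Cons :: Float -> T_List -> T_List
sem_List_Cons head_ tail_ avg =
  let (s, l, res) = tail_ avg
  in (head_ + s, 1.0 + l, (head_ - avg) : res)

-- At the root the recursive knot is tied, just as in Listing Two.
sem_Root_Root :: T_List -> [Float]
sem_Root_Root list_ =
  let (s, l, res) = list_ (s / l)
  in res
```

Evaluating <tt>sem_Root_Root (sem_List_Cons 1.0 (sem_List_Cons 2.0 (sem_List_Cons 3.0 sem_List_Nil)))</tt> yields <tt>[-1.0, 0.0, 1.0]</tt>, recovering our hand-written solution.<br />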
<br />
The attribute grammar solution is just as efficient as our earlier solution<br />
relying on lazy evaluation, yet the code is hardly different from what we<br />
would write in a straightforward Haskell solution. It really is the best of<br />
both worlds. The two types of glue that John Hughes pinpoints in his<br />
original article just aren't always enough. I would like to think that<br />
attribute grammars are sometimes capable of providing just the right<br />
missing bit of glue.<br />
<br />
===What are attribute grammars?===<br />
So just what are attribute grammars? Well, that depends on who you ask,<br />
really. I've tried to sum up some different views below.<br />
<br />
Attribute grammars add semantics to a context free grammar. Although it is<br />
easy enough to describe a language's syntax using a context free grammar,<br />
accurately describing a language's semantics is notoriously<br />
difficult. Attribute grammars specify a language's semantics by<br />
'decorating' a context free grammar with those attributes you are interested<br />
in.<br />
<br />
Attribute grammars describe tree traversals. All imperative implementations<br />
of attribute grammar systems perform tree traversals to compute some<br />
value. Basically an attribute grammar declares ''which'' values to compute<br />
and an attribute grammar system executes these computations. Once you've<br />
made this observation, the close relation to functional programming should<br />
not come as a surprise.<br />
<br />
Attribute grammars are a formalism for writing catamorphisms in a<br />
compositional fashion. Basically, the only thing the UUAG compiler does is<br />
generate large folds that I couldn't be bothered writing myself. It takes<br />
away all the elbow grease involved with maintaining and extending such<br />
code. In a sense the compiler does absolutely nothing new; it just makes<br />
life a lot easier.<br />
<br />
Attribute grammars provide a framework for aspect oriented programming in<br />
functional languages. Lately there has been a lot of buzz about the<br />
importance of ''aspects'' and ''aspect oriented programming''. Attribute<br />
grammars provide a clear and well-established framework for splitting code<br />
into separate aspects. By spreading attribute definitions over several<br />
different files and grouping them according to aspect, attribute grammars<br />
provide a natural setting for aspect oriented programming.<br />
<br />
How do attribute grammars relate to other Haskell abstractions? I'll try to<br />
put my finger on some of the more obvious connections, but I'm pretty sure<br />
there's a great deal more that I don't cover here.<br />
<br />
==What else is out there?==<br />
Everyone loves monads. They're what makes IO possible in Haskell. There are<br />
extensive standard libraries and syntactic sugar specifically designed to<br />
make life with monads easier. There are an enormous number of Haskell<br />
libraries based on the monadic interface. They represent one of the most<br />
substantial developments of functional programming in the last decade.<br />
<br />
Yet somehow, the single most common question asked by fledgling Haskell<br />
programmers is probably ''What are monads?''. Beginners have a hard time<br />
grasping the concept of monads and yet connoisseurs recognize a monad in<br />
just about every code snippet. I think the more important question is:<br />
''What are monads good for?''<br />
<br />
Monads provide a simple yet powerful abstract notion of computation. In<br />
essence, a monad describes how to sequence computations. This is crucial in<br />
order to perform IO in a functional language; by constraining all IO<br />
actions to a single interface of sequenced computations, the programmer is<br />
prevented from creating utter chaos. The real power of monads is in the<br />
interface they provide.<br />
<br />
John Hughes identified modularity as the single biggest blessing of<br />
functional programming. The obvious question is: how modular is the monadic<br />
interface? This really depends on your definition of modularity. Let me<br />
be more specific. How can you combine two arbitrary monads? You can't. This<br />
is my greatest concern with monads. Once you choose your specific notion of<br />
computation, you have to stick to it through thick and thin.<br />
<br />
What about monad transformers? Monad transformers allow you to add a<br />
specific monad's functionality on top of any existing monad. What seems<br />
like a solution, more often than not, turns out to introduce more problems<br />
than you bargained for. Adding new functionality to a monad involves<br />
lifting all the computations from the previous monad to the new<br />
one. Although I could learn to live with this, it gets even worse: since<br />
every monad transformer really changes the underlying monad, the order in<br />
which monad transformers are applied makes a real difference. If I want<br />
to add error reporting and state to some existing monad, should I be forced<br />
to consider the order in which I add them?<br />
<br />
Monads are extremely worthwhile for the interface they provide. Monadic<br />
libraries are great, but changing and extending monadic code can be a<br />
pain. Can we do better? Well I probably wouldn't have started this monadic<br />
intermezzo if I didn't have some sort of answer.<br />
<br />
Let's start off with <tt>Reader</tt> monads, for instance. Essentially, <tt>Reader</tt><br />
monads add an argument to some computation. Wait a minute, this reminds me<br />
of inherited attributes. What about <tt>Writer</tt> monads? They correspond to<br />
synthesized attributes of course. Finally, <tt>State</tt> monads correspond to<br />
''chained'' attributes, or attributes that are both synthesized and<br />
inherited. The real edge attribute grammars hold over monad transformers is that you<br />
can define new attributes ''without'' worrying about the order in which you<br />
define them, or about adapting existing code.<br />
<br />
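To make the analogy concrete, here is the broadcast of the average written against a <tt>Reader</tt> monad: the environment plays the role of the inherited <tt>avg</tt> attribute. The minimal <tt>Reader</tt> below stands in for <tt>Control.Monad.Reader</tt> from the mtl package so that the snippet is self-contained, and <tt>resM</tt> and <tt>diffR</tt> are names of my own invention:<br />

```haskell
-- A minimal Reader monad, standing in for Control.Monad.Reader.
newtype Reader r a = Reader { runReader :: r -> a }

instance Functor (Reader r) where
  fmap f (Reader g) = Reader (f . g)

instance Applicative (Reader r) where
  pure x = Reader (const x)
  Reader f <*> Reader g = Reader (\r -> f r (g r))

instance Monad (Reader r) where
  Reader g >>= f = Reader (\r -> runReader (f (g r)) r)

ask :: Reader r r
ask = Reader id

-- The resulting list as a Reader computation:
-- the environment is the inherited average.
resM :: [Float] -> Reader Float [Float]
resM []       = pure []
resM (x : xs) = do
  avg  <- ask                 -- read the inherited attribute
  rest <- resM xs
  pure ((x - avg) : rest)

diffR :: [Float] -> [Float]
diffR xs = runReader (resM xs) (sum xs / fromIntegral (length xs))
```

Here <tt>runReader</tt> supplies the environment at the root, exactly where the attribute grammar computed <tt>list.avg</tt>.<br />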
Do other abstractions capture other notions related to attribute grammars?<br />
Of course they do! Just look at the <tt>Arrow</tt> instance for the function space. The notion of<br />
combining two distinct computations using the <tt>(&&&)</tt> operator relates to<br />
the concept of ''joining'' two attribute grammars by collecting their<br />
attribute definitions. When you look at the <tt>loop</tt> combinator, I can<br />
only be grateful that an attribute grammar system deals with attribute<br />
dependencies automatically.<br />
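<br />
For instance, tupling two bottom-up computations, much as we collected the two synthesized attributes <tt>sum</tt> and <tt>length</tt>, is exactly what <tt>(&&&)</tt> does for the function-space <tt>Arrow</tt> (a one-line restatement of our <tt>avg</tt>):<br />

```haskell
import Control.Arrow ((&&&))

-- sum &&& (fromIntegral . length) computes both "attributes"
-- in one pass over the structure of the expression.
avg :: [Float] -> Float
avg = uncurry (/) . (sum &&& fromIntegral . length)
```

So <tt>avg [1.0, 2.0, 3.0]</tt> evaluates to <tt>2.0</tt>.<br />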
<br />
There really is a lot of related work. Implicit parameters? Inherited attributes! Linear implicit parameters? Chained attributes! Concepts that are so natural in the setting of attribute grammars, yet seem contrived when added to Haskell. This strengthens my belief that functional programmers can really benefit from even the most fleeting experience with attribute grammars; although I'd like to think that if you've read this far, you're hungry for more.<br />
<br />
==Further reading==<br />
This more or less covers the tutorial section of this article. The best way<br />
to learn more about attribute grammars is by actually using them. To conclude the tutorial, I've<br />
included a small example for you to play with. I've written a parser for a very<br />
simple wiki formatting language not entirely unlike the one used to produce this<br />
document. So far the HTML generated after parsing a document is fairly poor. It's up to<br />
you to improve it!<br />
<br />
You can download the initial version here. Don't forget to install the [http://www.cs.uu.nl/wiki/bin/view/HUT/Download UUAG system]. It might<br />
be worthwhile to have a look at the [http://www.cs.uu.nl/wiki/bin/view/HUT/AttributeGrammarManual UUAG manual] as there's a lot of technical detail that I haven't mentioned.<br />
<br />
If you're particularly daring, you may want to take a look at the [https://github.com/UU-ComputerScience/uhc Essential Haskell Compiler] being developed at Utrecht. It's almost completely written using the UUAG and is designed to be suitable for education and experimentation. The compiler was presented at the Summer School for Advanced Functional Programming in Tartu, Estonia last summer. As a result, there's a lot written about it already.<br />
<br />
Dive on in!<br />
<br />
[[Category:Article]]</div>
<hr />
<div>__NOTOC__<br />
<br />
This is the current program for the Dutch HUG Day. It may still change.<br />
<br />
{| class="wikitable"<br />
! Time<br />
! Title<br />
! Speaker<br />
|-<br />
| 9:30<br />
| colspan="2" | ''Coffee and Tea''<br />
|-<br />
| 10:00<br />
| Welcome<br />
| Sean Leather, Stef Joosten<br />
|-<br />
| 10:15<br />
| [[#websockets|Supporting Different Versions of the WebSockets Protocol]]<br />
| Jasper Van der Jeugt<br />
|-<br />
| 10:45<br />
| [[#hesselink|Building Your Own Haskell Ecosystem]]<br />
| Erik Hesselink<br />
|-<br />
| 11:15<br />
| [[#pascal|Model Checking Abstract Syntax Trees]]<br />
| Pascal Hof<br />
|-<br />
| 11:45<br />
|colspan="2"| ''Lightning Talks''<br />
|-<br />
|<br />
| [[#dotfs|DotFS - or How Fred Solved His Config Clutter]]<br />
| Paul van der Walt, Sjoerd Timmer<br />
|-<br />
|<br />
| [[#gruze|Snap and Gruze]]<br />
| Kevin Jardine<br />
|-<br />
|<br />
| [[#case-study|Invitation to Participate in a Functional Programming Case Study]]<br />
| Jurriaan Hage<br />
|-<br />
| 12:15<br />
| colspan="2" | ''Lunch (provided by Ordina)''<br />
|-<br />
| 13:15<br />
| [[#practice|Haskell in Practice: How Haskell Has Been Used in a (Paid) IT Project]]<br />
| Stef Joosten, Martijn Schrage<br />
|-<br />
| 13:45<br />
| [[#fclabels|fclabels: First Class Record Labels for Haskell]]<br />
| Sebastiaan Visser<br />
|-<br />
| 14:15<br />
| [[#kinds|GHC 7.6, More Well-Typed Than Ever]]<br />
| José Pedro Magalhães<br />
|-<br />
| 14:45<br />
|colspan="2"| ''Lightning Talks''<br />
|-<br />
|<br />
| [[#holes|Holes in GHC]]<br />
| Thijs Alkemade<br />
|-<br />
|<br />
| [[#regex-applicative|Applicative Regular Expressions]]<br />
| Roman Cheplyaka<br />
|-<br />
|<br />
| How I generate my homepage<br />
| Wouter Swierstra<br />
|-<br />
| 15:15<br />
| Closing<br />
| Jurriën Stutterheim<br />
|-<br />
| 15:30<br />
|colspan="2"| ''Depart for UHac''<br />
|}<br />
<br />
== Summaries ==<br />
<br />
=== <span id="websockets"></span>Supporting Different Versions of the WebSockets Protocol ===<br />
<br />
Jasper Van der Jeugt (Ghent)<br />
<br />
The Haskell websockets library allows you to write WebSocket-enabled<br />
servers in Haskell, enabling bidirectional communication with the browser.<br />
However, browsers and their related specifications change fast, and<br />
there are different versions of the WebSockets protocol. This talk<br />
discusses a type-safe technique which disallows the programmer from<br />
using primitives not available in the chosen version, while still<br />
allowing the latest features.<br />
<br />
=== <span id="hesselink"></span>Building Your Own Haskell ecosystem ===<br />
<br />
Erik Hesselink (Silk)<br />
<br />
When you develop a lot of different Haskell packages that work together, managing all these packages and their versions can be difficult. In this talk, I'll explain how we deal with this at Silk. I will show how to use Hackage 2.0 to build your own internal package repository, how to use cabal-dev to manage installed packages, and show a tool for bumping package versions. Together, this makes working on large amounts of packages with multiple people much easier.<br />
<br />
=== <span id="pascal"></span>Model Checking Abstract Syntax Trees ===<br />
<br />
Pascal Hof (TU Dortmund)<br />
<br />
Model checking turned out to be a useful tool for the analysis of programs. Usually one transforms abstract syntax trees to control flow graphs, which offer an abstract representation of program behavior. Whenever one is not focused on program behavior but on structural properties of the program (e.g. semantic analysis in a compiler), model checking the abstract syntax tree comes in handy. My talk introduces a problem that can be solved by model checking abstract syntax trees. Additionally, different approaches to an implementation will be discussed.<br />
<br />
=== <span id="dotfs"></span>DotFS - or How Fred Solved His Config Clutter ===<br />
<br />
Paul van der Walt (UU), Sjoerd Timmer (UU)<br />
<br />
Everyone who has more than one account on Linux/Unix/OS X systems knows how hard it can be to keep track of all the different config files in your home directory. <tt>.vimrc</tt>, <tt>.muttrc</tt>, <tt>.hgrc</tt>, <tt>.screenrc</tt>, <tt>.bashrc</tt>, and <tt>.xinitrc</tt> are just a few, but we're sure you can come up with many more yourself. Imagine how wonderful your life could be if you just had an easy tool to keep track of different versions of all these files on all your machines. We argue that traditional version control systems on their own are not up to the task, and we provide an alternative.<br />
<br />
=== <span id="gruze"></span>Snap and Gruze ===<br />
<br />
Kevin Jardine<br />
<br />
Developing an astronomy application using Snap and an experimental entity-attribute-value store for Haskell.<br />
<br />
=== <span id="case-study"></span>Invitation to Participate in a Functional Programming Case Study ===<br />
<br />
Jurriaan Hage (UU)<br />
<br />
I want to invite you to participate in an experiment in Haskell.<br />
In this experiment we are going to pit HaRe (the Haskell Refactorer)<br />
against Holmes (my plagiarism detector). The goal is to find out how much<br />
time somebody needs to refactor a Haskell program into something that<br />
is not recognizable by Holmes as plagiarism. We shall be looking at<br />
two study groups: experienced programmers (we shall pretend they<br />
are paid by newbies to do their assignments for them, and to do<br />
so without starting from scratch), and the newbies themselves.<br />
This experiment is a collaboration with Simon Thompson of Kent.<br />
He will take charge of the newbies; my task is to perform the experiment<br />
with experienced Haskell programmers, which is why I am now seeking<br />
participants.<br />
<br />
=== <span id="practice"></span>Haskell in Practice: How Haskell Has Been Used in a (Paid) IT Project ===<br />
<br />
Stef Joosten (Ordina), Martijn Schrage (Oblomov Systems)<br />
<br />
This presentation shows how new thinking helps the judiciary gain control over, and reduce costs in, a landscape of many different IT systems that serve the courts of law in the Netherlands.<br />
<br />
Although Haskell plays its role outside the limelight, the results were made possible by Ampersand, a tool built in Haskell.<br />
<br />
The presentation is accompanied by a brief demonstration.<br />
<br />
=== <span id="fclabels"></span>fclabels: First Class Record Labels for Haskell ===<br />
<br />
Sebastiaan Visser (Silk)<br />
<br />
Haskell's record system for algebraic datatypes uses labels as accessors for fields within constructors. Record labels can be used for both selection and modification of individual fields within a value, but only selection can be composed in a natural way. The special syntax for updates makes composing modifications very cumbersome. The fclabels package tries to solve this problem by implementing field accessors as first-class Haskell values instead of special syntax. Labels are implemented as lenses and can easily be composed for both selection and modification. To avoid boilerplate, labels can be derived using Template Haskell. This talk will give a brief introduction to the usage of the library and will show a bit of the inner workings as a bridge to future extensions.<br />
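To give a feel for the lens idea behind the library, here is a minimal, hand-rolled sketch. It is illustrative only: the real package uses its own <tt>(:->)</tt> type, derives labels with Template Haskell, and composes them via <tt>Control.Category</tt>; the record types below are invented for the example.<br />

```haskell
-- A lens pairs a getter with a setter for one field.
data Lens a b = Lens { get :: a -> b, set :: b -> a -> a }

-- Lenses compose, so modification composes just as naturally as selection.
o :: Lens b c -> Lens a b -> Lens a c
o inner outer = Lens
  { get = get inner . get outer
  , set = \c a -> set outer (set inner c (get outer a)) a
  }

modify :: Lens a b -> (b -> b) -> a -> a
modify l f a = set l (f (get l a)) a

-- Hypothetical example records: a person with a nested address.
data Address = Address { _city :: String } deriving Show
data Person  = Person  { _addr :: Address } deriving Show

city :: Lens Address String
city = Lens _city (\c a -> a { _city = c })

addr :: Lens Person Address
addr = Lens _addr (\ad p -> p { _addr = ad })

-- A nested update, with no nested record-update syntax in sight:
promote :: Person -> Person
promote = modify (city `o` addr) (++ ", NL")
```

Here <tt>promote (Person (Address "Utrecht"))</tt> updates the city buried inside the person, which would otherwise require two layers of record-update syntax.<br />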
<br />
=== <span id="kinds"></span>GHC 7.6, More Well-Typed Than Ever ===<br />
<br />
José Pedro Magalhães (UU)<br />
<br />
With each new version, GHC brings new and exciting type-level features to the<br />
Haskell language. In this talk we look at some upcoming features for GHC 7.6:<br />
data kinds, kind polymorphism, type-level literals, and deferred type errors.<br />
We show through some example programs how to take advantage of the new features,<br />
and what possibilities they open for Haskell programmers.<br />
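As a taste of what the first two features make possible, here is the standard length-indexed vector example (a common illustration, not necessarily one from the talk): with promoted data kinds, the constructors of <tt>Nat</tt> become types that can index <tt>Vec</tt>, so taking the head of an empty vector is a compile-time error.<br />

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Nat is promoted to a kind; 'Zero and 'Succ are its types.
data Nat = Zero | Succ Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- Accepted only for vectors whose type proves they are non-empty.
safeHead :: Vec ('Succ n) a -> a
safeHead (Cons x _) = x

three :: Vec ('Succ ('Succ ('Succ 'Zero))) Int
three = Cons 1 (Cons 2 (Cons 3 Nil))

-- safeHead Nil   -- rejected by the type checker, not at runtime
```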
<br />
=== <span id="holes"></span>Holes in GHC ===<br />
<br />
Thijs Alkemade (UU)<br />
<br />
This will be a demonstration of work-in-progress on adding holes for type-based debugging with GHC. See the [http://hackage.haskell.org/trac/ghc/wiki/Holes GHC Trac page] for details.<br />
<br />
=== <span id="regex-applicative">Applicative Regular Expressions</span> ===<br />
<br />
Roman Cheplyaka<br />
<br />
In this short talk I am going to describe the<br />
[https://github.com/feuerbach/regex-applicative regex-applicative] project:<br />
* what it is about<br />
* how it compares to other parsing combinator libraries<br />
* its current state and unsolved problems<br />
<br />
I'll be glad to accept any help<br />
[http://www.haskell.org/haskellwiki/DHD_UHac/Projects#regex-applicative during UHac].</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=DHD_UHac/DHD_Program&diff=45314DHD UHac/DHD Program2012-04-19T12:01:01Z<p>WouterSwierstra: </p>
<hr />
<div>__NOTOC__<br />
<br />
This is the current program for the Dutch HUG Day. It may still change.<br />
<br />
{| class="wikitable"<br />
! Time<br />
! Title<br />
! Speaker<br />
|-<br />
| 9:30<br />
| colspan="2" | ''Coffee and Tea''<br />
|-<br />
| 10:00<br />
| Welcome<br />
| Sean Leather, Stef Joosten<br />
|-<br />
| 10:15<br />
| [[#websockets|Supporting Different Versions of the WebSockets Protocol]]<br />
| Jasper Van der Jeugt<br />
|-<br />
| 10:45<br />
| [[#hesselink|Building Your Own Haskell Ecosystem]]<br />
| Erik Hesselink<br />
|-<br />
| 11:15<br />
| [[#pascal|Model Checking Abstract Syntax Trees]]<br />
| Pascal Hof<br />
|-<br />
| 11:45<br />
|colspan="2"| ''Lightning Talks''<br />
|-<br />
|<br />
| [[#dotfs|DotFS - or How Fred Solved His Config Clutter]]<br />
| Paul van der Walt, Sjoerd Timmer<br />
|-<br />
|<br />
| [[#gruze|Snap and Gruze]]<br />
| Kevin Jardine<br />
|-<br />
|<br />
| [[#case-study|Invitation to Participate in a Functional Programming Case Study]]<br />
| Jurriaan Hage<br />
|-<br />
| 12:15<br />
| colspan="2" | ''Lunch (provided by Ordina)''<br />
|-<br />
| 13:15<br />
| [[#practice|Haskell in Practice: How Haskell Has Been Used in a (Paid) IT Project]]<br />
| Stef Joosten, Martijn Schrage<br />
|-<br />
| 13:45<br />
| [[#fclabels|fclabels: First Class Record Labels for Haskell]]<br />
| Sebastiaan Visser<br />
|-<br />
| 14:15<br />
| [[#kinds|GHC 7.6, More Well-Typed Than Ever]]<br />
| José Pedro Magalhães<br />
|-<br />
| 14:45<br />
|colspan="2"| ''Lightning Talks''<br />
|-<br />
|<br />
| [[#holes|Holes in GHC]]<br />
| Thijs Alkemade<br />
|-<br />
|<br />
| [[#regex-applicative|Applicative Regular Expressions]]<br />
| Roman Cheplyaka<br />
|-<br />
|<br />
| How I generate my homepage<br />
| Wouter Swierstra<br />
|-<br />
| 15:15<br />
| Closing<br />
| Jurriën Stutterheim<br />
|-<br />
| 15:30<br />
|colspan="2"| ''Depart for UHac''<br />
|}<br />
<br />
== Summaries ==<br />
<br />
=== <span id="websockets"></span>Supporting Different Versions of the WebSockets Protocol ===<br />
<br />
Jasper Van der Jeugt (Ghent)<br />
<br />
The Haskell websockets library allows you to write WebSocket-enabled<br />
servers in Haskell, enabling bidirectional communication with the browser.<br />
However, browsers and their related specifications change fast, and<br />
there are different versions of the WebSockets protocol. This talk<br />
discusses a type-safe technique that prevents the programmer from<br />
using primitives not available in the chosen version, while still<br />
allowing the latest features.<br />
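One way such a guarantee can be sketched is with a phantom type parameter: the connection carries its protocol version as a type tag, and each primitive is constrained by a class that only the supporting versions inhabit. The names below are hypothetical and do not match the websockets library's actual API.<br />

```haskell
{-# LANGUAGE EmptyDataDecls #-}

data Hybi00       -- an early protocol draft
data Hybi10       -- a later draft that adds, e.g., binary frames

-- Stand-in for a real socket; the version parameter is purely phantom.
newtype Connection version = Connection ()

class TextFrames v                 -- text messages exist in every version
instance TextFrames Hybi00
instance TextFrames Hybi10

class BinaryFrames v               -- binary messages only in newer drafts
instance BinaryFrames Hybi10

sendText :: TextFrames v => Connection v -> String -> IO ()
sendText _ msg = putStrLn ("text: " ++ msg)

sendBinary :: BinaryFrames v => Connection v -> [Int] -> IO ()
sendBinary _ bytes = putStrLn ("binary: " ++ show bytes)

ok :: IO ()
ok = sendText (Connection () :: Connection Hybi00) "hello"

-- Rejected by the type checker: no BinaryFrames instance for Hybi00.
-- bad = sendBinary (Connection () :: Connection Hybi00) [1, 2, 3]
```

Code written against an old version simply cannot call the newer primitives, while code pinned to a new version gets them all.<br />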
<br />
=== <span id="hesselink"></span>Building Your Own Haskell Ecosystem ===<br />
<br />
Erik Hesselink (Silk)<br />
<br />
When you develop a lot of different Haskell packages that work together, managing all these packages and their versions can be difficult. In this talk, I'll explain how we deal with this at Silk. I will show how to use Hackage 2.0 to build your own internal package repository, how to use cabal-dev to manage installed packages, and show a tool for bumping package versions. Together, this makes working on a large number of packages with multiple people much easier.<br />
<br />
=== <span id="pascal"></span>Model Checking Abstract Syntax Trees ===<br />
<br />
Pascal Hof (TU Dortmund)<br />
<br />
Model checking has turned out to be a useful tool for the analysis of programs. Usually one transforms abstract syntax trees into control flow graphs, which offer an abstract representation of program behavior. When one is interested not in program behavior but in structural properties of the program (e.g. during the semantic analysis phase of a compiler), model checking the abstract syntax tree itself comes in handy. My talk introduces a problem that can be solved by model checking abstract syntax trees. Additionally, different approaches to an implementation will be discussed.<br />
<br />
=== <span id="dotfs"></span>DotFS - or How Fred Solved His Config Clutter ===<br />
<br />
Paul van der Walt (UU), Sjoerd Timmer (UU)<br />
<br />
Everyone who has more than one account on Linux/Unix/OS X systems knows how hard it can be to keep track of all the different config files in your home directory. <tt>.vimrc</tt>, <tt>.muttrc</tt>, <tt>.hgrc</tt>, <tt>.screenrc</tt>, <tt>.bashrc</tt>, and <tt>.xinitrc</tt> are just a few, but we're sure you can come up with many more yourself. Imagine how wonderful your life could be if you just had an easy tool to keep track of different versions of all these files on all your machines. We argue that traditional version control systems on their own are not up to the task, and we provide an alternative.<br />
<br />
=== <span id="gruze"></span>Snap and Gruze ===<br />
<br />
Kevin Jardine<br />
<br />
Developing an astronomy application using Snap and an experimental entity-attribute-value store for Haskell.<br />
<br />
=== <span id="case-study"></span>Invitation to Participate in a Functional Programming Case Study ===<br />
<br />
Jurriaan Hage (UU)<br />
<br />
I want to invite you to participate in an experiment in Haskell.<br />
In this experiment we are going to pit HaRe (the Haskell Refactorer)<br />
against Holmes (my plagiarism detector). The goal is to find out how much<br />
time somebody needs to refactor a Haskell program into something that<br />
is not recognizable by Holmes as plagiarism. We shall be looking at<br />
two study groups: experienced programmers (we shall pretend they<br />
are paid by newbies to do their assignments for them, and to do<br />
so without starting from scratch), and the newbies themselves.<br />
This experiment is a collaboration with Simon Thompson of Kent.<br />
He will take charge of the newbies; my task is to perform the experiment<br />
with experienced Haskell programmers, which is why I am now seeking<br />
participants.<br />
<br />
=== <span id="practice"></span>Haskell in Practice: How Haskell Has Been Used in a (Paid) IT Project ===<br />
<br />
Stef Joosten (Ordina), Martijn Schrage (Oblomov Systems)<br />
<br />
This presentation shows how new thinking helps the judiciary gain control over, and reduce costs in, a landscape of many different IT systems that serve the courts of law in the Netherlands.<br />
<br />
Although Haskell plays its role outside the limelight, the results were made possible by Ampersand, a tool built in Haskell.<br />
<br />
The presentation is accompanied by a brief demonstration.<br />
<br />
=== <span id="fclabels"></span>fclabels: First Class Record Labels for Haskell ===<br />
<br />
Sebastiaan Visser (Silk)<br />
<br />
Haskell's record system for algebraic datatypes uses labels as accessors for fields within constructors. Record labels can be used for both selection and modification of individual fields within a value, but only selection can be composed in a natural way. The special syntax for updates makes composing modifications very cumbersome. The fclabels package tries to solve this problem by implementing field accessors as first-class Haskell values instead of special syntax. Labels are implemented as lenses and can easily be composed for both selection and modification. To avoid boilerplate, labels can be derived using Template Haskell. This talk will give a brief introduction to the usage of the library and will show a bit of the inner workings as a bridge to future extensions.<br />
<br />
=== <span id="kinds"></span>GHC 7.6, More Well-Typed Than Ever ===<br />
<br />
José Pedro Magalhães (UU)<br />
<br />
With each new version, GHC brings new and exciting type-level features to the<br />
Haskell language. In this talk we look at some upcoming features for GHC 7.6:<br />
data kinds, kind polymorphism, type-level literals, and deferred type errors.<br />
We show through some example programs how to take advantage of the new features,<br />
and what possibilities they open for Haskell programmers.<br />
<br />
=== <span id="holes"></span>Holes in GHC ===<br />
<br />
Thijs Alkemade (UU)<br />
<br />
This will be a demonstration of work-in-progress on adding holes for type-based debugging with GHC. See the [http://hackage.haskell.org/trac/ghc/wiki/Holes GHC Trac page] for details.<br />
<br />
=== <span id="regex-applicative">Applicative Regular Expressions</span> ===<br />
<br />
Roman Cheplyaka<br />
<br />
In this short talk I am going to describe the<br />
[https://github.com/feuerbach/regex-applicative regex-applicative] project:<br />
* what it is about<br />
* how it compares to other parsing combinator libraries<br />
* its current state and unsolved problems<br />
<br />
I'll be glad to accept any help<br />
[http://www.haskell.org/haskellwiki/DHD_UHac/Projects#regex-applicative during UHac].</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=DHD_UHac/DHD_Program&diff=45313DHD UHac/DHD Program2012-04-19T12:00:46Z<p>WouterSwierstra: </p>
<hr />
<div>__NOTOC__<br />
<br />
This is the current program for the Dutch HUG Day. It may still change.<br />
<br />
{| class="wikitable"<br />
! Time<br />
! Title<br />
! Speaker<br />
|-<br />
| 9:30<br />
| colspan="2" | ''Coffee and Tea''<br />
|-<br />
| 10:00<br />
| Welcome<br />
| Sean Leather, Stef Joosten<br />
|-<br />
| 10:15<br />
| [[#websockets|Supporting Different Versions of the WebSockets Protocol]]<br />
| Jasper Van der Jeugt<br />
|-<br />
| 10:45<br />
| [[#hesselink|Building Your Own Haskell Ecosystem]]<br />
| Erik Hesselink<br />
|-<br />
| 11:15<br />
| [[#pascal|Model Checking Abstract Syntax Trees]]<br />
| Pascal Hof<br />
|-<br />
| 11:45<br />
|colspan="2"| ''Lightning Talks''<br />
|-<br />
|<br />
| [[#dotfs|DotFS - or How Fred Solved His Config Clutter]]<br />
| Paul van der Walt, Sjoerd Timmer<br />
|-<br />
|<br />
| [[#gruze|Snap and Gruze]]<br />
| Kevin Jardine<br />
|-<br />
|<br />
| [[#case-study|Invitation to Participate in a Functional Programming Case Study]]<br />
| Jurriaan Hage<br />
|-<br />
| 12:15<br />
| colspan="2" | ''Lunch (provided by Ordina)''<br />
|-<br />
| 13:15<br />
| [[#practice|Haskell in Practice: How Haskell Has Been Used in a (Paid) IT Project]]<br />
| Stef Joosten, Martijn Schrage<br />
|-<br />
| 13:45<br />
| [[#fclabels|fclabels: First Class Record Labels for Haskell]]<br />
| Sebastiaan Visser<br />
|-<br />
| 14:15<br />
| [[#kinds|GHC 7.6, More Well-Typed Than Ever]]<br />
| José Pedro Magalhães<br />
|-<br />
| 14:45<br />
|colspan="2"| ''Lightning Talks''<br />
|-<br />
|<br />
| [[#holes|Holes in GHC]]<br />
| Thijs Alkemade<br />
|-<br />
|<br />
| [[#regex-applicative|Applicative Regular Expressions]]<br />
| Roman Cheplyaka<br />
|-<br />
|<br />
| How I generate my homepage<br />
| Wouter Swierstra<br />
|-<br />
| 15:15<br />
| Closing<br />
| Jurriën Stutterheim<br />
|-<br />
| 15:30<br />
|colspan="2"| ''Depart for UHac''<br />
|}<br />
<br />
== Summaries ==<br />
<br />
=== <span id="websockets"></span>Supporting Different Versions of the WebSockets Protocol ===<br />
<br />
Jasper Van der Jeugt (Ghent)<br />
<br />
The Haskell websockets library allows you to write WebSocket-enabled<br />
servers in Haskell, enabling bidirectional communication with the browser.<br />
However, browsers and their related specifications change fast, and<br />
there are different versions of the WebSockets protocol. This talk<br />
discusses a type-safe technique that prevents the programmer from<br />
using primitives not available in the chosen version, while still<br />
allowing the latest features.<br />
<br />
=== <span id="hesselink"></span>Building Your Own Haskell Ecosystem ===<br />
<br />
Erik Hesselink (Silk)<br />
<br />
When you develop a lot of different Haskell packages that work together, managing all these packages and their versions can be difficult. In this talk, I'll explain how we deal with this at Silk. I will show how to use Hackage 2.0 to build your own internal package repository, how to use cabal-dev to manage installed packages, and show a tool for bumping package versions. Together, this makes working on a large number of packages with multiple people much easier.<br />
<br />
=== <span id="pascal"></span>Model Checking Abstract Syntax Trees ===<br />
<br />
Pascal Hof (TU Dortmund)<br />
<br />
Model checking has turned out to be a useful tool for the analysis of programs. Usually one transforms abstract syntax trees into control flow graphs, which offer an abstract representation of program behavior. When one is interested not in program behavior but in structural properties of the program (e.g. during the semantic analysis phase of a compiler), model checking the abstract syntax tree itself comes in handy. My talk introduces a problem that can be solved by model checking abstract syntax trees. Additionally, different approaches to an implementation will be discussed.<br />
<br />
=== <span id="dotfs"></span>DotFS - or How Fred Solved His Config Clutter ===<br />
<br />
Paul van der Walt (UU), Sjoerd Timmer (UU)<br />
<br />
Everyone who has more than one account on Linux/Unix/OS X systems knows how hard it can be to keep track of all the different config files in your home directory. <tt>.vimrc</tt>, <tt>.muttrc</tt>, <tt>.hgrc</tt>, <tt>.screenrc</tt>, <tt>.bashrc</tt>, and <tt>.xinitrc</tt> are just a few, but we're sure you can come up with many more yourself. Imagine how wonderful your life could be if you just had an easy tool to keep track of different versions of all these files on all your machines. We argue that traditional version control systems on their own are not up to the task, and we provide an alternative.<br />
<br />
=== <span id="gruze"></span>Snap and Gruze ===<br />
<br />
Kevin Jardine<br />
<br />
Developing an astronomy application using Snap and an experimental entity-attribute-value store for Haskell.<br />
<br />
=== <span id="case-study"></span>Invitation to Participate in a Functional Programming Case Study ===<br />
<br />
Jurriaan Hage (UU)<br />
<br />
I want to invite you to participate in an experiment in Haskell.<br />
In this experiment we are going to pit HaRe (the Haskell Refactorer)<br />
against Holmes (my plagiarism detector). The goal is to find out how much<br />
time somebody needs to refactor a Haskell program into something that<br />
is not recognizable by Holmes as plagiarism. We shall be looking at<br />
two study groups: experienced programmers (we shall pretend they<br />
are paid by newbies to do their assignments for them, and to do<br />
so without starting from scratch), and the newbies themselves.<br />
This experiment is a collaboration with Simon Thompson of Kent.<br />
He will take charge of the newbies; my task is to perform the experiment<br />
with experienced Haskell programmers, which is why I am now seeking<br />
participants.<br />
<br />
=== <span id="practice"></span>Haskell in Practice: How Haskell Has Been Used in a (Paid) IT Project ===<br />
<br />
Stef Joosten (Ordina), Martijn Schrage (Oblomov Systems)<br />
<br />
This presentation shows how new thinking helps the judiciary gain control over, and reduce costs in, a landscape of many different IT systems that serve the courts of law in the Netherlands.<br />
<br />
Although Haskell plays its role outside the limelight, the results were made possible by Ampersand, a tool built in Haskell.<br />
<br />
The presentation is accompanied by a brief demonstration.<br />
<br />
=== <span id="fclabels"></span>fclabels: First Class Record Labels for Haskell ===<br />
<br />
Sebastiaan Visser (Silk)<br />
<br />
Haskell's record system for algebraic datatypes uses labels as accessors for fields within constructors. Record labels can be used for both selection and modification of individual fields within a value, but only selection can be composed in a natural way. The special syntax for updates makes composing modifications very cumbersome. The fclabels package tries to solve this problem by implementing field accessors as first-class Haskell values instead of special syntax. Labels are implemented as lenses and can easily be composed for both selection and modification. To avoid boilerplate, labels can be derived using Template Haskell. This talk will give a brief introduction to the usage of the library and will show a bit of the inner workings as a bridge to future extensions.<br />
<br />
=== <span id="kinds"></span>GHC 7.6, More Well-Typed Than Ever ===<br />
<br />
José Pedro Magalhães (UU)<br />
<br />
With each new version, GHC brings new and exciting type-level features to the<br />
Haskell language. In this talk we look at some upcoming features for GHC 7.6:<br />
data kinds, kind polymorphism, type-level literals, and deferred type errors.<br />
We show through some example programs how to take advantage of the new features,<br />
and what possibilities they open for Haskell programmers.<br />
<br />
=== <span id="holes"></span>Holes in GHC ===<br />
<br />
Thijs Alkemade (UU)<br />
<br />
This will be a demonstration of work-in-progress on adding holes for type-based debugging with GHC. See the [http://hackage.haskell.org/trac/ghc/wiki/Holes GHC Trac page] for details.<br />
<br />
=== <span id="regex-applicative">Applicative Regular Expressions</span> ===<br />
<br />
Roman Cheplyaka<br />
<br />
In this short talk I am going to describe the<br />
[https://github.com/feuerbach/regex-applicative regex-applicative] project:<br />
* what it is about<br />
* how it compares to other parsing combinator libraries<br />
* its current state and unsolved problems<br />
<br />
I'll be glad to accept any help<br />
[http://www.haskell.org/haskellwiki/DHD_UHac/Projects#regex-applicative during UHac].</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=DHD_UHac/Attendees&diff=45062DHD UHac/Attendees2012-03-30T12:26:35Z<p>WouterSwierstra: </p>
<hr />
<div>This is a list of attendees for [[DHD_UHac|DHD >>= UHac]].<br />
<br />
If you have [[DHD_UHac/Register|registered]], please consider adding yourself to the list. Your contact and travel information may help with coordination between participants.<br />
<br />
If you live around Utrecht or plan to commute from home each day, you may put "Local" for accommodation.<br />
<br />
{| class="wikitable"<br />
! IRC Nickname<br />
! Real Name (Affl)<br />
! Mobile #<br />
! Arrive<br />
! Depart<br />
! Accommodation<br />
|-<br />
| leather<br />
| Sean Leather (UU)<br />
| +31616158163<br />
|<br />
|<br />
| Local<br />
|-<br />
| norm2782<br />
| Jurriën Stutterheim (UU)<br />
| +31642392944<br />
|<br />
|<br />
| Local<br />
|-<br />
| ruud<br />
| Ruud Koot (UU)<br />
| +31623024223<br />
|<br />
|<br />
| Local<br />
|-<br />
| kosmikus<br />
| Andres Löh (Well-Typed LLP)<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
| sol<br />
| Simon Hengel<br />
| +4917661064074<br />
| Wednesday<br />
| Sunday<br />
|<br />
|-<br />
| dreixel<br />
| José Pedro Magalhães (UU)<br />
| +31650459029<br />
|<br />
|<br />
| Local<br />
|-<br />
| marczoid<br />
| Marc van Zee (UU)<br />
| +31633610518<br />
|<br />
|<br />
| Local<br />
|-<br />
| paba<br />
| Patrick Bahr (University of Copenhagen)<br />
|<br />
|<br />
|<br />
| Local<br />
|-<br />
| toothbrush<br />
| Paul van der Walt (UU)<br />
| +31614681351<br />
|<br />
|<br />
| Local<br />
|-<br />
| spockz<br />
| Alessandro Vermeulen (UU)<br />
| +31646165747<br />
|<br />
|<br />
| Local<br />
|-<br />
| wlad<br />
| Vlad Hanciuta<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
| gcollins<br />
| Gregory Collins (Google)<br />
| +41 79 441 6832<br />
|<br />
|<br />
| Karel V Hotel<br />
|-<br />
| <br />
| Pascal Hof<br />
| <br />
|<br />
|<br />
| <br />
|-<br />
| jaspervdj<br />
| Jasper Van der Jeugt<br />
| +32 476 26 48 47<br />
|<br />
|<br />
| Yet undecided<br />
|-<br />
| cameleon<br />
| Erik Hesselink<br />
| +31 6 50 994 887<br />
|<br />
|<br />
| Local<br />
|-<br />
| sjoerd_visscher<br />
| Sjoerd Visscher<br />
| +31 6 1508 4368<br />
|<br />
|<br />
| Local<br />
|-<br />
| mklinik<br />
| Markus Klinik<br />
| +4917666101511<br />
| Wednesday<br />
| Sunday<br />
|<br />
|-<br />
| <br />
| Jurriaan Hage<br />
| +31 611191976<br />
| <br />
| <br />
| Local<br />
|-<br />
| ncs<br />
| Nikos Savvidis (UU)<br />
| +31644321424<br />
|<br />
|<br />
| Local<br />
|-<br />
| <br />
| Henk-Jan van Tuyl<br />
| <br />
|<br />
|<br />
| Local (travelling from Rotterdam, DHD only)<br />
|-<br />
| sfvisser<br />
| Sebastiaan Visser<br />
| +31624828951<br />
|<br />
|<br />
| Local<br />
|-<br />
| dcoutts<br />
| Duncan Coutts (Well-Typed LLP)<br />
| <br />
|<br />
|<br />
|<br />
|-<br />
| igloo<br />
| Ian Lynagh (Well-Typed LLP)<br />
| <br />
|<br />
|<br />
|<br />
|-<br />
| cies<br />
| Cies Breijs<br />
| +31646469087<br />
|<br />
|<br />
| Local (travelling from Rotterdam)<br />
|-<br />
| <br />
| Patrick Weemeeuw<br />
| +32495590214<br />
| Friday morning<br />
| Friday evening<br />
| Traveling from Leuven (BE)<br />
|-<br />
|<br />
| Jan Bessai<br />
| <br />
| Friday<br />
| Sunday<br />
|<br />
|-<br />
|<br />
| Edsko de Vries<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
| laar<br />
| Lars Corbijn<br />
| <br />
| <br />
| <br />
| Local (travelling from Hengelo)<br />
|-<br />
| <br />
| George Fourtounis<br />
| <br />
| <br />
| <br />
| Local<br />
|-<br />
|<br />
| Victor Denisov<br />
|<br />
|<br />
|<br />
| Not Yet decided<br />
|-<br />
|<br />
| Sjoerd Timmer<br />
| +31620086456<br />
|<br />
|<br />
| Local<br />
|-<br />
| nicmo<br />
| Augusto Passalaqua (UU)<br />
| +31644079781<br />
|<br />
|<br />
| Local<br />
|-<br />
| pcapriotti<br />
| Paolo Capriotti (Well-Typed LLP)<br />
| <br />
|<br />
|<br />
|<br />
|-<br />
| <br />
| Martijn van Steenbergen<br />
| <br />
|<br />
|<br />
| Local<br />
|-<br />
| Feuerbach<br />
| Roman Cheplyaka<br />
| +380662285780<br />
| Thursday evening<br />
| Sunday evening or Monday<br />
| Hostel Utrecht<br />
|-<br />
| arthurbaars<br />
| Arthur Baars (Universidad Politecnica de Valencia)<br />
| +34 646338710 <br />
| Thursday<br />
| Monday<br />
| Local<br />
|-<br />
| stefanooldeman<br />
| Stefano Oldeman<br />
| <br />
| Friday morning<br />
| Friday eve<br />
| Local<br />
|-<br />
|<br />
| Nikolaos Bezirgiannis (UU)<br />
| +31626845888<br />
|<br />
|<br />
| Local<br />
|-<br />
|<br />
| Tom Lokhorst (Q42)<br />
| <br />
|<br />
|<br />
| Local<br />
|-<br />
|<br />
| Ruben de Gooijer (UU)<br />
| +31 615462690<br />
| <br />
| <br />
| Local<br />
|-<br />
|<br />
| Bram Schuur (UU)<br />
| +31 644553557<br />
| <br />
| <br />
| Local<br />
|-<br />
|<br />
| Stanislav Chernichkin<br />
| +7 910 484 42 08<br />
| 19.04<br />
| 23.04<br />
| B&B Utrecht City Center<br />
|-<br />
| gdijkstra<br />
| Gabe Dijkstra (UU)<br />
| <br />
| <br />
| <br />
| Local<br />
|-<br />
| <br />
| Jeroen Bransen (UU)<br />
| <br />
|<br />
|<br />
| Local<br />
|-<br />
| <br />
| Atze Dijkstra (UU)<br />
| <br />
|<br />
|<br />
| Local<br />
|-<br />
| doaitse<br />
| Doaitse Swierstra (UU)<br />
| +31 6 4613 6929<br />
|<br />
|<br />
| Local<br />
|-<br />
| <br />
| Wouter Swierstra (UU)<br />
| <br />
|<br />
|<br />
| Local</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=29303The Monad.Reader2009-07-29T08:13:24Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but more enduring than a wiki page or blog post. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
'''Please note that the Monad.Reader has moved to [http://themonadreader.wordpress.com http://themonadreader.wordpress.com]. This site will no longer be updated.'''<br />
<br />
<br />
==== Latest Issue ====<br />
[[Media:TMR-Issue13.pdf|The Monad.Reader Issue 13]] is out now. Issue 13 consists of the following four articles:<br />
<br />
;''Rapid Prototyping in TeX''<br />
:Stephen Hicks<br />
;''The Typeclassopedia''<br />
:Brent Yorgey<br />
;''<nowiki>Book Review: Real World Haskell</nowiki>''<br />
:Chris Eidhof and Eelco Lempsink<br />
;''Calculating Monads with Category Theory''<br />
:Derek Elkins<br />
<br />
Discussion of this Issue's articles may be found on a [[The_Monad.Reader/Discuss_Issue13|separate page.]]<br />
<br />
Feel free to [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue13/ browse the source files]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue13/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any general discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== [[The_Monad.Reader/Previous_issues | Previous editions]] ====<br />
<br />
All the previous editions have moved to a separate page: [[The_Monad.Reader/Previous_issues]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. The deadline for Issue 14 is '''15 May, 2009'''.<br />
<br />
Feel free to contact [http://www.cse.chalmers.se/~wouter Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=29302The Monad.Reader2009-07-29T08:12:51Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but more enduring than a wiki page or blog post. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
'''Please note that the Monad.Reader has moved to [http://themonadreader.wordpress.com]. This site will no longer be updated.'''<br />
<br />
<br />
==== Latest Issue ====<br />
[[Media:TMR-Issue13.pdf|The Monad.Reader Issue 13]] is out now. Issue 13 consists of the following four articles:<br />
<br />
;''Rapid Prototyping in TeX''<br />
:Stephen Hicks<br />
;''The Typeclassopedia''<br />
:Brent Yorgey<br />
;''<nowiki>Book Review: Real World Haskell</nowiki>''<br />
:Chris Eidhof and Eelco Lempsink<br />
;''Calculating Monads with Category Theory''<br />
:Derek Elkins<br />
<br />
Discussion of this Issue's articles may be found on a [[The_Monad.Reader/Discuss_Issue13|separate page.]]<br />
<br />
Feel free to [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue13/ browse the source files]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue13/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any general discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== [[The_Monad.Reader/Previous_issues | Previous editions]] ====<br />
<br />
All the previous editions have moved to a separate page: [[The_Monad.Reader/Previous_issues]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. The deadline for Issue 14 is '''15 May, 2009'''.<br />
<br />
Feel free to contact [http://www.cse.chalmers.se/~wouter Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=Hac5/Attendees&diff=27377Hac5/Attendees2009-04-07T07:32:58Z<p>WouterSwierstra: </p>
<hr />
<div>This is the attendee list for [[Hac5]]. Please refer to the [[Hac5|main page]] for more information.<br />
<br />
= Attendees =<br />
<br />
Once you've [[Hac5/Register|registered]], please add your name to the following table:<br />
<br />
{| class="wikitable"<br />
! Nickname<br />
! Real Name<br />
! Affiliation<br />
! Mobile #<br />
! Arriving<br />
! Departing<br />
! Accommodation<br />
|-<br />
| eelco<br />
| Eelco Lempsink<br />
| UU + Tupil<br />
| +31629486398<br />
| -<br />
| -<br />
| Lives in Utrecht.<br />
|-<br />
| kosmikus<br />
| Andres Löh<br />
| UU<br />
|<br />
| -<br />
| -<br />
| Lives close to Utrecht.<br />
|-<br />
| dreixel<br />
| José Pedro Magalhães<br />
| UU<br />
| +31 650459029<br />
| -<br />
| -<br />
| Lives in Utrecht.<br />
|-<br />
| Heffalump<br />
| Ganesh Sittampalam<br />
| Credit Suisse<br />
| +447968253467<br />
| 17th morning (overnight ferry arrives Hook of Holland at 0630)<br />
| 19th late afternoon (overnight ferry leaves Hook of Holland at 2200)<br />
| Strowis Hostel<br />
|-<br />
| kowey<br />
| Eric Kow<br />
| University of Brighton<br />
|<br />
| 17th morning (overnight ferry)<br />
| 19th late afternoon (overnight ferry)<br />
| Strowis Hostel<br />
|-<br />
|<br />
| Martijn van Steenbergen<br />
| UU<br />
|<br />
|<br />
|<br />
| Lives close to Utrecht.<br />
|-<br />
| Igloo<br />
| Ian Lynagh<br />
| Well-Typed LLP<br />
|<br />
| 17th morning (overnight ferry)<br />
| 19th late afternoon (overnight ferry)<br />
| Strowis hostel<br />
|-<br />
| thorkilnaur<br />
| Thorkil Naur<br />
| thorkilnaur.com<br />
| +45 24 82 85 98<br />
| April 17 (train 9.58, a bit late, but very convenient)<br />
| April 19 (train 19.29)<br />
| Hotel Oorsprongpark Utrecht<br />
|-<br />
| tux_rocker<br />
| Reinier Lamers<br />
| UU<br />
| <br />
| -<br />
| -<br />
| Lives in Utrecht<br />
|-<br />
| Jutaro<br />
| Jürgen Nicklisch-Franken<br />
| ICS AG<br />
|<br />
| 16th 23:00<br />
| 19th 17:00<br />
| Hotel de Admiraal<br />
|-<br />
| kolmodin<br />
| Lennart Kolmodin<br />
| <br />
| +46736223606<br />
| 16th<br />
| 20th<br />
| Don't know yet.<br />
|-<br />
| chr1s<br />
| Chris Eidhof<br />
| UU + Tupil<br />
| +31628887656<br />
| -<br />
| -<br />
| Lives in Utrecht.<br />
|-<br />
| sebas<br />
| Sebastiaan Visser<br />
| UU<br />
| +31624828951<br />
| -<br />
| -<br />
| Lives in Utrecht.<br />
|-<br />
| dcoutts<br />
| Duncan Coutts<br />
| Well-Typed LLP<br />
|<br />
| 16th<br />
| 20th<br />
| Don't know yet.<br />
|-<br />
| benmos<br />
| Ben Moseley<br />
| Barcap<br />
| +447788138855<br />
| 17th morning (overnight ferry arrives Hook of Holland at 0630)<br />
| 19th late afternoon (overnight ferry leaves Hook of Holland at 2200)<br />
| Hotel Oorsprongpark Utrecht<br />
|-<br />
| jeltsch<br />
| Wolfgang Jeltsch<br />
| BTU&nbsp;Cottbus<br />
| <br />
| at April&nbsp;17 in the morning (train arrives at 08:28)<br />
| at April&nbsp;19 in the afternoon (train departs at 16:59)<br />
| Hotel Oorsprongpark Utrecht<br />
|-<br />
| beschmi<br />
| Benedikt Schmidt<br />
| ETH Zurich<br />
| +41 797417542<br />
| Don't know yet.<br />
| Don't know yet.<br />
| Don't know yet.<br />
|-<br />
| dons<br />
| Don Stewart<br />
| [http://galois.com Galois]<br />
|<br />
| United 0908 Apr 16<br />
| Over to London Apr 20<br />
| Don't know yet.<br />
|-<br />
| blancolioni<br />
| Fraser Wilson<br />
| Anago bv<br />
| +31 6 81462922<br />
| -<br />
| -<br />
| Lives in Utrecht<br />
|-<br />
| Feuerbach<br />
| Roman Cheplyaka<br />
|<br />
| +380 66 228 57 80<br />
| 16th in the evening<br />
| 21st in the morning<br />
| Strowis or hospitality club<br />
|-<br />
| hesselink<br />
| Erik Hesselink<br />
| UU<br />
| +31 650994887<br />
| -<br />
| -<br />
| Lives in Utrecht<br />
|-<br />
| -<br />
| Marnix Klooster<br />
| Infor/private<br />
| -<br />
| 17th in the morning<br />
| 17th in the afternoon, or sometime on the 18th<br />
| Lives close to Utrecht<br />
|-<br />
|arjanb<br />
|Arjan Boeijink<br />
| -<br />
| -<br />
|Either the 17th or 18th in morning<br />
|19th in the evening<br />
|Not decided yet on traveling or finding a place to sleep.<br />
|-<br />
| Chatterbox<br />
| Peter Verswyvelen<br />
| [http://www.anygma.com/ Anygma]<br />
| <br />
| April&nbsp;17 in the afternoon<br />
| April&nbsp;19 in the afternoon<br />
| Apollo Hotel Utrecht City Centre<br />
|-<br />
| Beelsebob<br />
| Thomas Davie<br />
| [http://www.anygma.com/ Anygma]<br />
| <br />
| April&nbsp;17 in the afternoon<br />
| April&nbsp;19 in the afternoon<br />
| Apollo Hotel Utrecht City Centre<br />
|-<br />
| basvandijk<br />
| Bas van Dijk<br />
| Radboud Universiteit Nijmegen<br />
| +31614065248<br />
| 17th, morning, by car<br />
| 19th, evening<br />
| Don't know yet.<br />
|-<br />
|<br />
| Roel van Dijk<br />
| Radboud Universiteit Nijmegen<br />
| +31612856453<br />
| 17th, morning, by car<br />
| 19th, evening<br />
| Don't know yet.<br />
|-<br />
| remi<br />
| Remi Turk<br />
| UvA / UU<br />
| <br />
| -<br />
| -<br />
| Don't know yet<br />
|-<br />
| npouillard (ertai)<br />
| Nicolas Pouillard<br />
| INRIA<br />
| +33680126526<br />
| 17th, morning<br />
| 19th, afternoon<br />
| B&B Utrecht<br />
|-<br />
|nominolo<br />
|Thomas Schilling<br />
|University of Kent<br />
|<br />
|16th, probably<br />
|20th<br />
|friend's place<br />
|-<br />
| waern<br />
| David Waern<br />
| Amadeus<br />
| +33 642508769<br />
| 16th<br />
| 20th<br />
| Hotel Valk De Biltsche Hoek<br />
|-<br />
| mornfall<br />
| Petr Ročkai<br />
| Masaryk University<br />
| <br />
| 17th 9:58 by train (to Utrecht Centraal)<br />
| 19th 19:29 by train<br />
| Friend's place<br />
|-<br />
|<br />
| Jeroen Fokker<br />
| UU<br />
|<br />
| -<br />
| -<br />
| Lives in Utrecht<br />
|-<br />
| sih<br />
| Simon Hengel<br />
| <br />
| +4917661064074<br />
| -<br />
| -<br />
| Don't know yet<br />
|-<br />
| Lemmih<br />
| David Himmelstrup<br />
|<br />
|<br />
| 16th (13:55 Schiphol)<br />
| 20th (21:10 Schiphol)<br />
| Hotel Valk De Biltsche Hoek<br />
|-<br />
|<br />
| Tom Lokhorst<br />
| UU<br />
|<br />
| 16th<br />
| 18th<br />
| Lives close to Utrecht.<br />
|-<br />
| <br />
| Wouter Swierstra<br />
| Chalmers University of Technology<br />
|<br />
| 17th<br />
| and maybe longer<br />
| <br />
|-<br />
|}<br />
<br />
= Additional Comments =<br />
<br />
Please use this section to leave comments for other attendees, e.g. for organizing accommodation.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Previous_issues&diff=27283The Monad.Reader/Previous issues2009-03-31T14:27:11Z<p>WouterSwierstra: </p>
<hr />
<div>[[Media:TMR-Issue1.pdf|The Monad.Reader Issue 1]] was released on March 1, 2005.<br />
;''<nowiki>Pseudocode: Natural Style</nowiki>''<br />
:Andrew J. Bromage<br />
;''Pugs Apocryphon 1 - Overview of the Pugs project''<br />
:Autrijus Tang<br />
;''An Introduction to Gtk2Hs, a Haskell GUI Library''<br />
:Kenneth Hoste<br />
;''Implementing Web-Services with the HAIFA Framework''<br />
:Simon D. Foster<br />
;''<nowiki>Code Probe - Issue one: Haskell XML-RPC, v.2004-06-17 [1]</nowiki>''<br />
:Sven Moritz Hallberg<br />
<br />
[[The Monad.Reader/Issue2| The Monad.Reader Issue 2]] was released May 2005.<br />
;''Impure Thoughts 1 - Thtatic Compilathionth (without a lisp)''<br />
:Philippa Cowderoy<br />
;''Eternal Compatibility In Theory''<br />
:Sven Moritz Hallberg<br />
;''Fun with Linear Implicit Parameters''<br />
:Thomas Jäger<br />
;''Haskore''<br />
:Bastiaan Zapf<br />
;''Bzlib2 Binding - An Introduction to the FFI''<br />
:Peter Eriksen<br />
<br />
[[The Monad.Reader/Issue3| The Monad.Reader Issue 3]] was released June 2005.<br />
;''Notes on Learning Haskell''<br />
:Graham Klyne <br />
;''Functional Programming vs Object Oriented Programming''<br />
:Alistair Bayley <br />
;''Concurrent and Distributed Programming with Join Hs''<br />
:Einar Karttunen <br />
;''"Haskell School Of Expression"<nowiki>:</nowiki> Review of The Haskell School of Expression''<br />
:Isaac Jones <br />
;''Review of "Purely Functional Data Structures"''<br />
:Andrew Cooke <br />
<br />
[[The Monad.Reader/Issue4 | The Monad.Reader Issue 4]] was released 5 July 2005.<br />
;''Impure Thoughts 2, B&D not S&M'' (off-wiki)<br />
:Philippa Cowderoy <br />
;''Why Attribute Grammars Matter''<br />
:Wouter Swierstra <br />
;''Solving Sudoku''<br />
:Dominic Fox <br />
;''On Treaps And Randomization''<br />
:Jesper Louis Andersen <br />
<br />
[[The Monad.Reader/Issue5 | The Monad.Reader Issue 5]] was released October 2005.<br />
;''<nowiki>Haskell: A Very Different Language</nowiki>''<br />
:John Goerzen<br />
;''Generating Polyominoes''<br />
:Dominic Fox<br />
;''<nowiki>HRay:A Haskell ray tracer</nowiki>''<br />
:Kenneth Hoste<br />
;''Number-parameterized types''<br />
:Oleg Kiselyov<br />
;''A Practical Approach to Graph Manipulation''<br />
:Jean Philippe Bernardy<br />
;''Software Testing With Haskell''<br />
:Shae Erisson<br />
<br />
[[Media:TMR-Issue6.pdf|The Monad.Reader Issue 6]] was released January 31, 2007.<br />
;''Getting a Fix from the Right Fold''<br />
:Bernie Pope<br />
;''Adventures in Classical-Land''<br />
:Dan Piponi<br />
;''Assembly: Circular Programming with Recursive do''<br />
:Russell O'Connor<br />
<br />
[[Media:TMR-Issue7.pdf|The Monad.Reader Issue 7]] was released April 30, 2007.<br />
;''A Recipe for controlling Lego using Lava''<br />
:Matthew Naylor<br />
;''<nowiki>Caml Trading: Experiences in Functional Programming on Wall Street</nowiki>''<br />
:Yaron Minsky<br />
;''<nowiki>Book Review: “Programming in Haskell” by Graham Hutton</nowiki>''<br />
:Duncan Coutts<br />
;''Yhc.Core – from Haskell to Core''<br />
:Dimitry Golubovsky, Neil Mitchell, Matthew Naylor<br />
<br />
[[Media:TMR-Issue8.pdf|The Monad.Reader Issue 8]] was released on September 10, 2007.<br />
;''Generating Multiset Partitions''<br />
:Brent Yorgey<br />
;''Type-Level Instant Insanity''<br />
:Conrad Parker<br />
<br />
[[Media:TMR-Issue9.pdf|The Monad.Reader Issue 9]], the [http://hackage.haskell.org/trac/summer-of-code/wiki Summer of Code] special, was released on November 19, 2007.<br />
;''Cabal Configurations''<br />
:Thomas Schilling<br />
;''Darcs Patch Theory''<br />
:Jason Dagit<br />
;''<nowiki>TaiChi: how to check your types with serenity</nowiki>''<br />
:Mathieu Boespflug<br />
<br />
[[Media:TMR-Issue10.pdf|The Monad.Reader Issue 10]] was released on April 8, 2008.<br />
;''Step inside the <nowiki>GHCi</nowiki> debugger''<br />
:Bernie Pope<br />
;''Evaluating Haskell in Haskell''<br />
:Matthew Naylor<br />
<br />
[[Media:TMR-Issue11.pdf|The Monad.Reader Issue 11]] was released on August 25, 2008.<br />
;''David F. Place''<br />
:How to Refold a Map<br />
;''Kenneth Knowles''<br />
:First-Order Logic à la Carte<br />
;''Douglas M. Auclair'' <br />
:<nowiki>MonadPlus: What a Super Monad!</nowiki><br />
<br />
[[Media:TMR-Issue12.pdf|The Monad.Reader Issue 12]], the second Summer of Code special, was released on November 19, 2008.<br />
<br />
;''Compiler Development Made Easy''<br />
:Max Bolingbroke<br />
;''How to Build a Physics Engine''<br />
:Roman Cheplyaka<br />
;''Hoogle Overview''<br />
:Neil Mitchell<br />
<br />
[[Media:TMR-Issue13.pdf|The Monad.Reader Issue 13]] was released on March 12, 2009.<br />
<br />
;''Rapid Prototyping in TEX''<br />
:Stephen Hicks<br />
;''The Typeclassopedia''<br />
:Brent Yorgey<br />
;''<nowiki>Book Review: Real World Haskell</nowiki>''<br />
:Chris Eidhof and Eelco Lempsink<br />
;''Calculating Monads with Category Theory''<br />
:Derek Elkins</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=Twitter&diff=26581Twitter2009-02-22T21:37:51Z<p>WouterSwierstra: </p>
<hr />
<div>'''Haskell community members on Twitter'''<br />
<br />
* Bryan O’Sullivan ([http://twitter.com/bos31337 bos31337]) <br />
* Brandon Allbery ([http://twitter.com/allbery_b allbery_b]) <br />
* alpheccar ([http://twitter.com/alpheccar alpheccar])<br />
* Arnar Birgisson ([http://twitter.com/arnarbi arnarbi]) <br />
* Chris Eidhof ([http://twitter.com/chriseidhof chriseidhof]) <br />
* Conal Elliott ([http://twitter.com/conal conal]) <br />
* Conrad Parker ([http://twitter.com/conradparker conradparker]) <br />
* Don Stewart ([http://twitter.com/donsbot donsbot]) <br />
* Eelco Lempsink ([http://twitter.com/eeclo eeclo]) <br />
* Galois, Inc. ([http://twitter.com/galoisinc galoisinc]) <br />
* Jake McArthur ([http://twitter.com/geezusfreeek geezusfreeek]) <br />
* Pepe Iborra ([http://twitter.com/hate_pick_nick hate_pick_nick]) <br />
* John Goerzen ([http://twitter.com/jgoerzen jgoerzen]) <br />
* Eugene Kirpichov ([http://twitter.com/jkff jkff]) <br />
* Kazuya Sakakihara ([http://twitter.com/kazooya kazooya]) <br />
* Edward Kmett ([http://twitter.com/kmett kmett]) <br />
* Matthew Podwysocki ([http://twitter.com/mattpodwysocki mattpodwysocki]) <br />
* Mark Reid ([http://twitter.com/mdreid mdreid]) <br />
* Andy Adams-Moran ([http://twitter.com/morabbin morabbin]) <br />
* Neil Bartlett ([http://twitter.com/njbartlett njbartlett]) <br />
* Paul Brown ([http://twitter.com/paulrbrown paulrbrown]) <br />
* Shae Erisson ([http://twitter.com/shapr shapr]) <br />
* Sigbjorn Finne ([http://twitter.com/sigbjorn_finne sigbjorn_finne]) <br />
* Stefan Holdermans ([http://twitter.com/_dblhelix _dblhelix])<br />
* Dan Piponi ([http://twitter.com/sigfpe sigfpe]) <br />
* Spencer Janssen ([http://twitter.com/spencerjanssen spencerjanssen]) <br />
* Isaac Jones ([http://twitter.com/SyntaxPolice SyntaxPolice]) <br />
* Manuel Chakravarty ([http://twitter.com/TacticalGrace TacticalGrace]) <br />
* Tom Moertel ([http://twitter.com/tmoertel tmoertel]) <br />
* Thomas Sutton ([http://twitter.com/thsutton thsutton]) <br />
* Creighton Hogg ([http://twitter.com/wchogg wchogg]) <br />
* Jeff Wheeler ([http://twitter.com/jeffwheeler jeffwheeler])<br />
* Daniel Peebles ([http://twitter.com/pumpkingod pumpkingod])<br />
* Simon Marlow ([http://twitter.com/simonmar simonmar])<br />
* Andrew Wagner ([http://twitter.com/arwagner chessguy])<br />
* Magnus Therning ([http://twitter.com/magthe magthe])<br />
* Jan Xie ([http://twitter.com/flowerborn flowerborn])<br />
* Wouter Swierstra ([http://twitter.com/wouterswierstra wouterswierstra])<br />
* Tristan Allwood ([http://twitter.com/TotallyToRA TotallyToRA])<br />
<br />
'''Haskell buzz on Twitter'''<br />
<br />
* [http://twitter.com/paytonrules/statuses/946501437 Officially amazed at the Haskell chat room. I asked a simple question there, and they went nuts on it. In a good way.]<br />
* [http://twitter.com/lallysingh/statuses/945333684 Haskell has interactive plotting commands for charts/graphs/etc. That's it, I'm officially in love]<br />
* [http://twitter.com/gimboland/statuses/944893593 God, I love Haskell]<br />
* [http://twitter.com/tsmosca/statuses/943950292 Ease of Haskell vs. Java: amazing!]<br />
* [http://twitter.com/arnax/statuses/943659297 The joy of opening a mind to Haskell :-)]<br />
* [http://twitter.com/mattpodwysocki/statuses/942618649 Aw, sweet, building a MP3 decoder in Haskell. Geek explosion ensues]<br />
* [http://twitter.com/rbp/statuses/942546816 You know, haskell actually pretty much rules :)]<br />
* [http://twitter.com/pavan_mishra/statuses/941707547 Awed by Haskell]<br />
* [http://twitter.com/clehene/statuses/939600495 I can haskell from iPhone with hugs98]<br />
<br />
[[Category:Community]]</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=26537The Monad.Reader2009-02-20T08:25:52Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki-page. There have been a wide variety of articles, including: exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
[[Media:TMR-Issue12.pdf|The Monad.Reader Issue 12]] is now out.<br />
<br />
Issue 12 is a Summer of Code special and consists of the following three articles:<br />
<br />
;''Compiler Development Made Easy''<br />
:Max Bolingbroke<br />
;''How to Build a Physics Engine''<br />
:Roman Cheplyaka<br />
;''Hoogle Overview''<br />
:Neil Mitchell<br />
<br />
Discussion of this Issue's articles may be found on a [[The_Monad.Reader/Discuss_Issue12|separate page.]]<br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue12/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue12/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any general discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
All the previous editions have moved to a [[The_Monad.Reader/Previous_issues|separate page]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. The deadline for Issue 13 is '''February 13, 2009'''.<br />
<br />
Feel free to contact [http://www.cse.chalmers.se/~wouter Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue4/Why_Attribute_Grammars_Matter&diff=24155The Monad.Reader/Issue4/Why Attribute Grammars Matter2008-11-19T16:15:30Z<p>WouterSwierstra: </p>
<hr />
<div>=Why Attribute Grammars Matter=<br />
:''by Wouter Swierstra for The Monad.Reader Issue Four''; 01-07-05<br />
<br />
==Introduction==<br />
Almost twenty years have passed since John Hughes' influential paper [http://www.math.chalmers.se/~rjmh/Papers/whyfp.html Why Functional Programming Matters]. Around the same time, the first work on<br />
attribute grammars and their relation to functional programming<br />
appeared. Despite the growing popularity of functional programming,<br />
attribute grammars remain remarkably less renowned.<br />
<br />
The purpose of this article is twofold. On the one hand it illustrates how<br />
functional programming sometimes scales poorly and how<br />
attribute grammars can remedy these problems. On the other hand it aims to<br />
provide a gentle introduction to attribute grammars for seasoned functional<br />
programmers.<br />
<br />
==The problem==<br />
John Hughes argues that with the increasing complexity of modern<br />
software systems, modularity has become of paramount importance to software<br />
development. Functional languages provide new kinds of ''glue'' that create<br />
new opportunities for more modular code. In particular, Hughes stresses<br />
the importance of higher-order functions and lazy evaluation. There are<br />
plenty of examples where this works nicely - yet situations arise where<br />
the glue that functional programming provides somehow isn't quite enough.<br />
<br />
Perhaps a small example is in order. Suppose we want to write a function<br />
<tt>diff :: [Float] -> [Float]</tt> that given a list <tt>xs</tt>, calculates a new list where every element <tt>x</tt> is replaced with the difference between <tt>x</tt> and the<br />
average of <tt>xs</tt>. Similar problems pop up in any library for performing<br />
statistical calculations.<br />
<br />
===Higher-order functions===<br />
Let's tackle the problem with some of Haskell's most powerful glue - higher-order functions. Any beginning Haskell programmer should be able to concoct the solution presented in Listing One. The average is computed using standard library functions. The obvious function using this average is then mapped over the original list. So far, so good.<br />
<br />
<haskell><br />
--- Listing One ---<br />
<br />
import Data.List (genericLength)<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs = map (\x -> x - (avg xs)) xs<br />
<br />
avg :: [Float] -> Float<br />
avg xs = sum xs / genericLength xs<br />
</haskell><br />
<br />
There are, however, a few things swept under the rug in this example. First<br />
of all, this simple problem requires three traversals of the original<br />
list. Computing additional values from the original list will require even<br />
more traversals.<br />
<br />
Secondly, the solution is so concise because it depends on Prelude<br />
functions. If the values were stored in a slightly different data structure,<br />
the solution would require a lot of tedious work. We could, of course,<br />
define our own higher-order functions, such as <tt>map</tt> and <tt>fold</tt>, or even<br />
resort to generic programming. There are, however,<br />
more ways to skin this particular cat.<br />
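For instance, if our values lived in a binary tree instead of a list, we would have to write the fold ourselves. The <tt>Tree</tt> type and <tt>foldTree</tt> below are purely illustrative, not part of the article's code:<br />
<br />
<haskell><br />
data Tree = Leaf Float | Node Tree Tree<br />
<br />
-- A hand-written fold: replace each constructor by a function.<br />
foldTree :: (Float -> a) -> (a -> a -> a) -> Tree -> a<br />
foldTree leaf node (Leaf x)   = leaf x<br />
foldTree leaf node (Node l r) = node (foldTree leaf node l) (foldTree leaf node r)<br />
<br />
sumTree :: Tree -> Float<br />
sumTree = foldTree id (+)<br />
<br />
sizeTree :: Tree -> Float<br />
sizeTree = foldTree (const 1.0) (+)<br />
</haskell><br />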
<br />
This problem illustrates the sheer elegance of functional programming. We<br />
do pay a price for the succinctness of the solution. Multiple traversals<br />
and boilerplate code can both be quite a head-ache. If we want to perform<br />
complex computations over custom data structures, we may want to consider an<br />
alternative approach.<br />
<br />
Fortunately, as experienced functional programmers, we have another card up<br />
our sleeve.<br />
<br />
===Lazy evaluation===<br />
The second kind of glue that functional programming provides is lazy<br />
evaluation. In essence, lazy evaluation only evaluates expressions when<br />
they become absolutely necessary.<br />
<br />
In particular, lazy evaluation enables the definition of ''circular programs'' that bear a dangerous resemblance to undefined values. Circular<br />
programs tuple separate computations, relying on lazy evaluation to feed<br />
the results of one computation to another.<br />
<br />
In our example, we could simply compute the length and sum of the list at<br />
the same time:<br />
<br />
<haskell><br />
average :: [Float] -> Float<br />
average xs = let<br />
nil = (0.0, 0.0)<br />
cons x (s,l) = (x + s, 1.0 + l)<br />
(sum,length) = foldr cons nil xs<br />
in sum / length<br />
</haskell><br />
<br />
We can eliminate traversals by tupling computations! Can we compute the<br />
resulting list at the same time as computing the sum and length? Let's try:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, (x - ....) : rs)<br />
(sum,length,res) = foldr cons nil xs<br />
in res<br />
</haskell><br />
<br />
We run into trouble when we try to use the average to construct the<br />
resulting list. The problem is that we haven't computed the average, but<br />
somehow want to use it during the traversal. To solve this, we don't actually<br />
compute the resulting list, but rather compute a function taking the<br />
average to the resulting list:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, \avg -> [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, \avg -> (x - avg) : rs avg)<br />
(sum,length,res) = foldr cons nil xs<br />
in res (sum / length)<br />
</haskell><br />
<br />
We can generalize this idea a bit further. Suppose that we want to compute<br />
other values that use the average. We could just add an <tt>avg</tt> argument to<br />
every element of the tuple that needs the average. It is a bit nicer,<br />
however, to lift the <tt>avg</tt> argument outside the tuple. Our final listing<br />
now becomes:<br />
<br />
<haskell><br />
--- Listing Two ---<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
let<br />
nil avg = (0.0, 0.0, [])<br />
cons x fs avg = let (s,l,ds) = fs avg<br />
in (s+x,l+1.0,x-avg : ds)<br />
(sum,length,ds) = foldr cons nil xs (sum / length)<br />
in ds<br />
</haskell><br />
<br />
Now every element of the tuple can refer to the average, rather than just<br />
the final list.<br />
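As a quick sanity check, the circular definition really does terminate under lazy evaluation. The average of <tt>[1,2,3,4]</tt> is <tt>2.5</tt>, so in GHCi:<br />
<br />
<haskell><br />
*Main> diff [1.0, 2.0, 3.0, 4.0]<br />
[-1.5,-0.5,0.5,1.5]<br />
</haskell><br />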
<br />
This ''credit card transformation'' eliminates multiple traversals by<br />
tupling computations. We use the average without worrying if we have<br />
actually managed to compute it. When we actually write the fold, however,<br />
we have to put our average where our mouth is. Fortunately, the <tt>sum</tt> and<br />
<tt>length</tt> don't depend on the average, so we are free to use these values to<br />
tie the recursive knot.<br />
<br />
The code in Listing Two only needs a single traversal and one<br />
higher-order function. It apparently solves the problems with the code in<br />
Listing One.<br />
<br />
Hold on a minute. What ever happened to the elegance of our previous<br />
solution? Our second solution appears to have sacrificed clarity for the<br />
sake of efficiency. Who in their right minds would want to write the code<br />
in Listing Two? I wouldn't. Maybe, just maybe, we can do a bit better.<br />
<br />
==Attribute Grammars==<br />
Before even explaining what an attribute grammar is, think back to when you<br />
first learned about ''folds''. Initially, a fold seems like a silly<br />
abstraction. Why should I bother writing simple functions as folds? After<br />
all, I already know how to write the straightforward solution. It's only<br />
after a great deal of experience with functional programming that you learn<br />
to recognize folds as actually being a worthwhile abstraction. Learning<br />
about attribute grammars is similar in more ways than one.<br />
<br />
So what are attribute grammars? I'll have a bit more to say about that<br />
later. For now, let's see what the attribute grammar solution to our<br />
running example looks like.<br />
<br />
===The attribute grammar solution===<br />
I'll introduce attribute grammars using the syntax of the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem Utrecht University Attribute Grammar] system or UUAG for short. The UUAG system takes a file<br />
containing an attribute grammar definition and generates a Haskell module<br />
containing ''semantic functions'', determined by the attribute grammar. The<br />
attribute grammar determines a computation over some data structure; the<br />
semantic functions correspond to the actual Haskell functions that perform<br />
the computation.<br />
<br />
Although the UUAG system's syntax closely resembles Haskell, it is<br />
important to realize that the UUAG system is a Haskell pre-processor and<br />
not a complete Haskell compiler.<br />
<br />
So what does an attribute grammar file look like? Well, first of all we<br />
have to declare the data structure we're working with. In our example, we<br />
simply have a list of Floats.<br />
<br />
<haskell><br />
--- Listing Three ---<br />
<br />
DATA Root<br />
| Root list : List<br />
DATA List<br />
| Nil<br />
| Cons head : Float tail : List<br />
</haskell><br />
<br />
Datatypes are declared with the keyword <tt>DATA</tt>, followed by a list of<br />
constructors. Every node explicitly gives the name and type of all its<br />
children. In our example we have an empty list <tt>Nil</tt> and a list constructor<br />
<tt>Cons</tt> with two children, <tt>head</tt> and <tt>tail</tt>. For reasons that will become<br />
apparent later on, we add an additional datatype corresponding to the root<br />
of our list.<br />
<br />
So now that we've declared our datatype, let's add some ''attributes''. If we<br />
want to compute the average, we'll need the length of the<br />
list. Listing Four introduces our first attribute, corresponding to<br />
a list's length.<br />
<br />
<haskell><br />
--- Listing Four ---<br />
<br />
ATTR List [|| length : Float]<br />
SEM List<br />
| Nil lhs.length = 0.0<br />
| Cons lhs.length = 1.0 + @tail.length<br />
</haskell><br />
<br />
Let's go over the code line by line.<br />
<br />
An attribute has to be declared before it can actually be defined. An<br />
attribute is declared using the <tt>ATTR</tt> statement. This example declares a<br />
single ''synthesized'' attribute called <tt>length</tt> of type <tt>Float</tt>. A<br />
synthesized attribute is typically a value you are trying to compute bottom<br />
up. Synthesized attributes are declared to the right of the second<br />
vertical bar. We'll see other kinds of attributes shortly.<br />
<br />
Now that we've declared our first attribute, we can actually define it. A<br />
<tt>SEM</tt> statement begins by declaring for which data type attributes are<br />
being defined. In our example we want to define an attribute on a <tt>List</tt>,<br />
hence we write <tt>SEM List</tt>. We can subsequently give attribute definitions<br />
for the constructors of our <tt>List</tt> data type.<br />
<br />
Every attribute definition consists of several parts. We begin by<br />
mentioning the constructor for which we define an attribute. In our example<br />
we give two definitions, one for <tt>Nil</tt> and one for <tt>Cons</tt>.<br />
<br />
The second part of the attribute definition describes which attribute is<br />
being defined. In our example we define the attribute <tt>length</tt> for the<br />
''left-hand side'', or <tt>lhs</tt>. A lot of the terminology associated with<br />
attribute grammars comes from the world of context-free grammars. As this<br />
tutorial focuses on functional programmers, rather than formal language<br />
gurus, feel free to read <tt>lhs</tt> as "parent node". It seems a bit odd to<br />
write <tt>lhs.length</tt> explicitly, but we'll see later on why merely writing<br />
<tt>length</tt> doesn't suffice.<br />
<br />
So far, we've only said that the two definitions define the <tt>length</tt><br />
of <tt>Nil</tt> and <tt>Cons</tt>. We still have to fill in the necessary definition. The<br />
actual definition of the attributes takes place to the right of the equals<br />
sign. Programmers are free to write any valid Haskell expression. In fact,<br />
the UUAG system does not analyse the attribute definitions at all, but merely<br />
copies them straight into the resulting Haskell module. In our example, we<br />
want the length of the empty list to be <tt>0.0</tt>. The case for <tt>Cons</tt> is a bit<br />
trickier.<br />
<br />
In the <tt>Cons</tt> case we want to increment the length computed so far. To<br />
do so we need to be able to refer to other attributes. In particular we<br />
want to refer to the <tt>length</tt> attribute of the tail. The expression<br />
<tt>@tail.length</tt> does just that. In general, you're free to refer to any<br />
synthesized attribute ''attr'' of a child node ''c'' by writing <tt>@c.attr</tt>.<br />
<br />
The <tt>length</tt> attribute can be depicted pictorially as follows:<br />
<br />
[[Image:WAGM-Avg.png|The length attribute]]<br />
<br />
'''Exercise:''' Declare and define a synthesized attribute <tt>sum</tt> that<br />
computes the sum of a <tt>List</tt>. You can refer to a value <tt>val</tt> stored at a<br />
node as <tt>@val</tt>. For instance, write <tt>@head</tt> to refer to the float stored<br />
at a <tt>Cons</tt> node. Draw the corresponding picture if you're stuck.<br />
<br />
Now that we've defined <tt>length</tt> and <tt>sum</tt>, let's compute the average. We'll know<br />
the sum and the length of the entire list at the Root node. Using those<br />
attributes we can compute the average and ''broadcast'' the average through<br />
the rest of the list. Let's start with the picture this time:<br />
<br />
[[Image:WAGM-Length.png|The average attribute]]<br />
<br />
The previous synthesized attributes, <tt>length</tt> and <tt>sum</tt>, defined bottom-up<br />
computations. We're now in the situation, however, where we want to pass<br />
information through the tree from a parent node to its child nodes using an<br />
''inherited'' attribute. Listing Five defines an inherited attribute <tt>avg</tt><br />
that corresponds to the picture we just drew.<br />
<br />
<haskell><br />
--- Listing Five ---<br />
ATTR List [ avg : Float|| ]<br />
SEM Root<br />
| Root list.avg = @list.sum / @list.length<br />
<br />
SEM List<br />
| Cons tail.avg = @lhs.avg<br />
<br />
</haskell><br />
<br />
Inherited attributes are declared to the left of the two vertical<br />
bars. Once we've declared an inherited attribute <tt>avg</tt> on lists, we're<br />
obliged to define how every constructor passes an <tt>avg</tt> to its children of<br />
type <tt>List</tt>.<br />
<br />
In our example, there are only two constructors with children<br />
of type <tt>List</tt>, namely <tt>Root</tt> and <tt>Cons</tt>. At the <tt>Root</tt> we compute the<br />
average, using the synthesized attributes <tt>sum</tt> and <tt>length</tt>, and pass the<br />
result to the <tt>list</tt> child. At the <tt>Cons</tt> node, we merely copy down the<br />
<tt>avg</tt> we received from our parent. Analogous to synthesized attributes, we<br />
can refer to an inherited attribute <tt>attr</tt> by writing <tt>@lhs.attr</tt>.<br />
<br />
Admittedly, this inherited attribute is not terribly interesting. There are<br />
plenty of other examples, however, where an inherited attribute represents<br />
important contextual information. Think of passing around the set of<br />
assumptions when writing a type checker, for instance.<br />
<br />
'''Exercise:''' To complete the attribute grammar, define an attribute<br />
<tt>res</tt> that computes the resulting list. Should it be inherited or<br />
synthesized? You may want to draw a picture.<br />
<br />
===Running the UUAG===<br />
Now suppose you've completed the exercises and copied the examples in a<br />
single file called <tt>Diff.ag</tt>. How do we actually use the attribute grammar?<br />
This is where the UUAG compiler steps in. Running the UUAG compiler on the<br />
source attribute grammar file generates a new <tt>Diff.hs</tt> file, which we can<br />
then compile like any other Haskell file.<br />
<br />
<haskell><br />
> uuagc -a Diff.ag<br />
> ghci Diff.hs<br />
</haskell><br />
<br />
The <tt>Diff.hs</tt> file contains several ingredients.<br />
<br />
Firstly, new Haskell datatypes are generated corresponding to <tt>DATA</tt><br />
declarations in the attribute grammar. For every generated datatype a<br />
corresponding <tt>fold</tt> is generated. The attribute definitions determine the<br />
arguments passed to the folds. Browsing through the generated code can<br />
actually be quite instructive.<br />
<br />
Inherited attributes are passed to recursive calls of the fold. Synthesized<br />
attributes are tupled and returned as the result of the computation. In<br />
essence, we've reproduced our original solution in Listing Two - but now<br />
without the hassle associated with spelling out [[catamorphism]]s with a higher<br />
order domain and a compound codomain.<br />
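To give a rough idea, the semantic functions generated for our example have more or less the following shape. The names and details below are illustrative; the actual UUAG output differs:<br />
<br />
<haskell><br />
-- The semantic domain of a List: a function from the inherited<br />
-- attribute (avg) to the tuple of synthesized attributes.<br />
type T_List = Float -> (Float, Float, [Float])  -- (length, sum, res)<br />
<br />
sem_List_Nil :: T_List<br />
sem_List_Nil avg = (0.0, 0.0, [])<br />
<br />
sem_List_Cons :: Float -> T_List -> T_List<br />
sem_List_Cons head_ tail_ avg =<br />
  let (l, s, res) = tail_ avg<br />
  in (1.0 + l, head_ + s, (head_ - avg) : res)<br />
<br />
-- At the root the recursive knot is tied, just as in Listing Two.<br />
sem_Root :: T_List -> [Float]<br />
sem_Root list_ =<br />
  let (l, s, res) = list_ (s / l)<br />
  in res<br />
</haskell><br />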
<br />
The attribute grammar solution is just as efficient as our earlier solution<br />
relying on lazy evaluation, yet the code is hardly different from what we<br />
would write in a straightforward Haskell solution. It really is the best of<br />
both worlds. The two types of glue that John Hughes pinpoints in his<br />
original article just aren't enough. I would like to think that<br />
attribute grammars are sometimes capable of providing just the right<br />
missing bit of glue.<br />
<br />
===What are attribute grammars?===<br />
So just what are attribute grammars? Well, that depends on who you ask,<br />
really. I've tried to sum up some different views below.<br />
<br />
Attribute grammars add semantics to a context free grammar. Although it is<br />
easy enough to describe a language's syntax using a context free grammar,<br />
accurately describing a language's semantics is notoriously<br />
difficult. Attribute grammars specify a language's semantics by<br />
'decorating' a context free grammar with those attributes you are interested<br />
in.<br />
<br />
Attribute grammars describe tree traversals. All imperative implementations<br />
of attribute grammar systems perform tree traversals to compute some<br />
value. Basically an attribute grammar declares ''which'' values to compute<br />
and an attribute grammar system executes these computations. Once you've<br />
made this observation, the close relation to functional programming should<br />
not come as a surprise.<br />
<br />
Attribute grammars are a formalism for writing catamorphisms in a<br />
compositional fashion. Basically, the only thing the UUAG compiler does is<br />
generate large folds that I couldn't be bothered writing myself. It takes<br />
away all the elbow grease involved with maintaining and extending such<br />
code. In a sense the compiler does absolutely nothing new; it just makes<br />
life a lot easier.<br />
<br />
Attribute grammars provide a framework for aspect oriented programming in<br />
functional languages. Lately there has been a lot of buzz about the<br />
importance of ''aspects'' and ''aspect oriented programming''. Attribute<br />
grammars provide a clear and well-established framework for splitting code<br />
into separate aspects. By spreading attribute definitions over several<br />
different files and grouping them according to aspect, attribute grammars<br />
provide a natural setting for aspect oriented programming.<br />
<br />
How do attribute grammars relate to other Haskell abstractions? I'll try to<br />
put my finger on some of the more obvious connections, but I'm pretty sure<br />
there's a great deal more that I don't cover here.<br />
<br />
==What else is out there?==<br />
Everyone loves monads. They're what makes IO possible in Haskell. There are<br />
extensive standard libraries and syntactic sugar specifically designed to<br />
make life with monads easier. There are an enormous number of Haskell<br />
libraries based on the monadic interface. They represent one of the most<br />
substantial developments of functional programming in the last decade.<br />
<br />
Yet somehow, the single most common question asked by fledgling Haskell<br />
programmers is probably ''What are monads?''. Beginners have a hard time<br />
grasping the concept of monads and yet connoisseurs recognize a monad in<br />
just about every code snippet. I think the more important question is:<br />
''What are monads good for?''<br />
<br />
Monads provide a simple yet powerful abstract notion of computation. In<br />
essence, a monad describes how to sequence computations. This is crucial in<br />
order to perform IO in a functional language; by constraining all IO<br />
actions to a single interface of sequenced computations, the programmer is<br />
prevented from creating utter chaos. The real power of monads is in the<br />
interface they provide.<br />
<br />
John Hughes identified modularity as the single biggest blessing of<br />
functional programming. The obvious question is: how modular is the monadic<br />
interface? This really depends on your definition of modularity. Let me<br />
be more specific. How can you combine two arbitrary monads? You can't. This<br />
is my greatest concern with monads. Once you choose your specific notion of<br />
computation, you have to stick to it through thick and thin.<br />
<br />
What about monad transformers? Monad transformers allow you to add a<br />
specific monad's functionality on top of any existing monad. What seems<br />
like a solution, more often than not, turns out to introduce more problems<br />
than you bargained for. Adding new functionality to a monad involves<br />
lifting all the computations from the previous monad to the new<br />
one. Although I could learn to live with this, it gets even worse. As every<br />
monad transformer really changes the underlying monad, the ''order'' in<br />
which monad transformers are applied really makes a difference. If I want<br />
to add error reporting and state to some existing monad, should I be forced<br />
to consider the order in which I add them?<br />
<br />
Monads are extremely worthwhile for the interface they provide. Monadic<br />
libraries are great, but changing and extending monadic code can be a<br />
pain. Can we do better? Well I probably wouldn't have started this monadic<br />
intermezzo if I didn't have some sort of answer.<br />
<br />
Let's start off with <tt>Reader</tt> monads, for instance. Essentially, <tt>Reader</tt><br />
monads add an argument to some computation. Wait a minute, this reminds me<br />
of inherited attributes. What about <tt>Writer</tt> monads? They correspond to<br />
synthesized attributes of course. Finally, <tt>State</tt> monads correspond to<br />
''chained'' attributes, or attributes that are both synthesized and<br />
inherited. The real edge attribute grammars hold over monad transformers is that you<br />
can define new attributes ''without'' worrying about the order in which you<br />
define them or adapting existing code.<br />
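The analogy for inherited attributes can be made concrete with a small sketch. This assumes the <tt>mtl</tt> package's <tt>Control.Monad.Reader</tt>; it illustrates the correspondence, nothing more:<br />
<br />
<haskell><br />
import Control.Monad.Reader<br />
<br />
-- The inherited avg attribute becomes the Reader environment.<br />
diffR :: [Float] -> Reader Float [Float]<br />
diffR = mapM (\x -> asks (\avg -> x - avg))<br />
<br />
-- runReader (diffR xs) (average xs) recovers diff xs.<br />
</haskell><br />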
<br />
Do other abstractions capture other notions related to attribute grammars?<br />
Of course they do! Just look at the <tt>Arrow</tt> instance for the function space. The notion of<br />
combining two distinct computations using the <tt>(&&&)</tt> operator relates to<br />
the concept of ''joining'' two attribute grammars by collecting their<br />
attribute definitions. When I look at the <tt>loop</tt> combinator, I can<br />
only be grateful that an attribute grammar system deals with attribute<br />
dependencies automatically.<br />
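With the function-space instance, for example, <tt>(&&&)</tt> pairs up two computations over the same input, much like collecting two attribute definitions - although each function here still traverses the list separately:<br />
<br />
<haskell><br />
import Control.Arrow ((&&&))<br />
import Data.List (genericLength)<br />
<br />
-- Pair the sum and length "attributes", then combine them.<br />
average :: [Float] -> Float<br />
average = uncurry (/) . (sum &&& genericLength)<br />
</haskell><br />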
<br />
There really is a lot of related work. Implicit parameters? Inherited attributes! Linear implicit parameters? Chained attributes! Concepts that are so natural in the setting of attribute grammars, yet seem contrived when added to Haskell. This strengthens my belief that functional programmers can really benefit from even the most fleeting experience with attribute grammars; although I'd like to think that if you've read this far, you're hungry for more.<br />
<br />
==Further reading==<br />
This more or less covers the tutorial section of this article. The best way<br />
to learn more about attribute grammars is by actually using them. To conclude the tutorial, I've<br />
included a small example for you to play with. I've written a parser for a very<br />
simple wiki formatting language not entirely unlike the one used to produce this<br />
document. So far the HTML generated after parsing a document is fairly poor. It's up to<br />
you to improve it!<br />
<br />
You can download the initial version here. Don't forget to install the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem#Download UUAG]. It might<br />
be worthwhile to have a look at the [http://www.cs.uu.nl/~arthurb/data/AG/AGman.html UUAG manual] as there's a lot of technical detail that I haven't mentioned.<br />
<br />
If you're particularly daring, you may want to take a look at the [http://www.cs.uu.nl/wiki/Ehc/WebHome Essential Haskell Compiler] being developed at Utrecht. It's almost completely written using the UUAG and is designed to be suitable for education and experimentation. The compiler was presented at the Summer School for Advanced Functional Programming in Tartu, Estonia last summer. As a result, there's a lot written about it already.<br />
<br />
Dive on in!<br />
<br />
[[Category:Article]]</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue4/Why_Attribute_Grammars_Matter&diff=24154The Monad.Reader/Issue4/Why Attribute Grammars Matter2008-11-19T16:13:48Z<p>WouterSwierstra: </p>
<hr />
<div>=Why Attribute Grammars Matter=<br />
:''by Wouter Swierstra for The Monad.Reader Issue Four''; 01-07-05<br />
<br />
==Introduction==<br />
Almost twenty years have passed since John Hughes' influential paper [http://www.math.chalmers.se/~rjmh/Papers/whyfp.html Why Functional Programming Matters]. Around the same time, the first work on<br />
attribute grammars and their relation to functional programming<br />
appeared. Despite the growing popularity of functional programming,<br />
attribute grammars remain remarkably less renowned.<br />
<br />
The purpose of this article is twofold. On the one hand it illustrates how<br />
functional programming sometimes scales poorly and how<br />
attribute grammars can remedy these problems. On the other hand it aims to<br />
provide a gentle introduction to attribute grammars for seasoned functional<br />
programmers.<br />
<br />
==The problem==<br />
John Hughes argues that with the increasing complexity of modern<br />
software systems, modularity has become of paramount importance to software<br />
development. Functional languages provide new kinds of ''glue'' that create<br />
new opportunities for more modular code. In particular, Hughes stresses<br />
the importance of higher-order functions and lazy evaluation. There are<br />
plenty of examples where this works nicely - yet situations arise where<br />
the glue that functional programming provides somehow isn't quite enough.<br />
<br />
Perhaps a small example is in order. Suppose we want to write a function<br />
<tt>diff :: [Float] -> [Float]</tt> that given a list <tt>xs</tt>, calculates a new list where every element <tt>x</tt> is replaced with the difference between <tt>x</tt> and the<br />
average of <tt>xs</tt>. Similar problems pop up in any library for performing<br />
statistical calculations.<br />
<br />
===Higher-order functions===<br />
Let's tackle the problem with some of Haskell's most powerful glue - higher-order functions. Any beginning Haskell programmer should be able to concoct the solution presented in Listing One. The average is computed using functions from the Prelude. The obvious function using this average is then mapped over the original list. So far, so good.<br />
<br />
<haskell><br />
--- Listing One ---<br />
<br />
import Data.List (genericLength)<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs = map (\x -> x - (avg xs)) xs<br />
<br />
avg :: [Float] -> Float<br />
avg xs = sum xs / genericLength xs<br />
</haskell><br />
<br />
There are, however, a few things swept under the rug in this example. First<br />
of all, this simple problem requires three traversals of the original<br />
list. Computing additional values from the original list will require even<br />
more traversals.<br />
<br />
Secondly, the solution is so concise because it depends on Prelude<br />
functions. If the values were stored in a slightly different data structure,<br />
the solution would require a lot of tedious work. We could, of course,<br />
define our own higher-order functions, such as <tt>map</tt> and <tt>fold</tt>, or even<br />
resort to generic programming. There are, however,<br />
more ways to skin this particular cat.<br />
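<br />
To make that "tedious work" concrete, here is a sketch for a hypothetical binary tree of <tt>Float</tt>s. The <tt>Tree</tt> type and its <tt>foldTree</tt>/<tt>mapTree</tt> helpers below are invented for illustration, not taken from any library: before Listing One can even be ported, we have to hand-roll the traversal functions the Prelude gives us for free.<br />
<br />
```haskell
-- A hypothetical custom data structure: a binary tree of Floats.
data Tree = Leaf Float | Node Tree Tree

-- We have to define our own fold ...
foldTree :: (Float -> a) -> (a -> a -> a) -> Tree -> a
foldTree leaf node (Leaf x)   = leaf x
foldTree leaf node (Node l r) = node (foldTree leaf node l) (foldTree leaf node r)

-- ... and our own map ...
mapTree :: (Float -> Float) -> Tree -> Tree
mapTree f = foldTree (Leaf . f) Node

-- ... before we can port Listing One to trees.
-- Note that this still makes three traversals, just like Listing One.
diffTree :: Tree -> Tree
diffTree t = mapTree (\x -> x - avg) t
  where avg = foldTree id (+) t / foldTree (const 1.0) (+) t
```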
<br />
This problem illustrates the sheer elegance of functional programming, but<br />
we do pay a price for the succinctness of the solution. Multiple traversals<br />
and boilerplate code can both be quite a headache. If we want to perform<br />
complex computations over custom data structures, we may want to consider an<br />
alternative approach.<br />
<br />
Fortunately, as experienced functional programmers, we have another card up<br />
our sleeve.<br />
<br />
===Lazy evaluation===<br />
The second kind of glue that functional programming provides is lazy<br />
evaluation. In essence, lazy evaluation only evaluates expressions when<br />
they become absolutely necessary.<br />
<br />
In particular, lazy evaluation enables the definition of ''circular programs'' that bear a dangerous resemblance to undefined values. Circular<br />
programs tuple separate computations, relying on lazy evaluation to feed<br />
the results of one computation to another.<br />
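<br />
A classic illustration of such a circular program is the self-referential Fibonacci stream. Under strict evaluation this definition would loop forever, but under lazy evaluation each element is demanded only after the earlier ones already exist:<br />
<br />
```haskell
-- The classic circular (self-referential) definition: each element is
-- computed from two earlier elements of the very list being defined.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```
<br />
Evaluating <tt>take 8 fibs</tt> yields the first eight Fibonacci numbers.<br />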
<br />
In our example, we could simply compute the length and sum of the list at<br />
the same time:<br />
<br />
<haskell><br />
average :: [Float] -> Float<br />
average xs = let<br />
nil = (0.0, 0.0)<br />
cons x (s,l) = (x + s, 1.0 + l)<br />
(sum,length) = foldr cons nil xs<br />
in sum / length<br />
</haskell><br />
<br />
We can eliminate traversals by tupling computations! Can we compute the<br />
resulting list at the same time as computing the sum and length? Let's try:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, (x - ....) : rs)<br />
(sum,length,res) = foldr cons nil xs<br />
in res<br />
</haskell><br />
<br />
We run into trouble when we try to use the average to construct the<br />
resulting list. The problem is that we haven't computed the average yet,<br />
but somehow we want to use it during the traversal. To solve this, we don't<br />
actually compute the resulting list, but rather compute a function taking the<br />
average to the resulting list:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, \avg -> [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, \avg -> (x - avg) : rs avg)<br />
(sum,length,res) = foldr cons nil xs<br />
in res (sum / length)<br />
</haskell><br />
<br />
We can generalize this idea a bit further. Suppose that we want to compute<br />
other values that use the average. We could just add an <tt>avg</tt> argument to<br />
every element of the tuple that needs the average. It is a bit nicer,<br />
however, to lift the <tt>avg</tt> argument outside the tuple. Our final listing<br />
now becomes:<br />
<br />
<haskell><br />
--- Listing Two ---<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
let<br />
nil avg = (0.0, 0.0, [])<br />
cons x fs avg = let (s,l,ds) = fs avg<br />
in (s+x,l+1.0,x-avg : ds)<br />
(sum,length,ds) = foldr cons nil xs (sum / length)<br />
in ds<br />
</haskell><br />
<br />
Now every element of the tuple can refer to the average, rather than just<br />
the final list.<br />
<br />
This ''credit card transformation'' eliminates multiple traversals by<br />
tupling computations. We use the average without worrying if we have<br />
actually managed to compute it. When we actually write the fold, however,<br />
we have to put our average where our mouth is. Fortunately, the <tt>sum</tt> and<br />
<tt>length</tt> don't depend on the average, so we are free to use these values to<br />
tie the recursive knot.<br />
<br />
The code in Listing Two only needs a single traversal and one<br />
higher-order function. It apparently solves the problems with the code in<br />
Listing One.<br />
<br />
Hold on a minute. Whatever happened to the elegance of our previous<br />
solution? Our second solution appears to have sacrificed clarity for the<br />
sake of efficiency. Who in their right mind would want to write the code<br />
in Listing Two? I wouldn't. Maybe, just maybe, we can do a bit better.<br />
<br />
==Attribute Grammars==<br />
Before even explaining what an attribute grammar is, think back to when you<br />
first learned about ''folds''. Initially, a fold seems like a silly<br />
abstraction. Why should I bother writing simple functions as folds? After<br />
all, I already know how to write the straightforward solution. It's only<br />
after a great deal of experience with functional programming that you learn<br />
to recognize folds as actually being a worthwhile abstraction. Learning<br />
about attribute grammars is similar in more ways than one.<br />
<br />
So what are attribute grammars? I'll have a bit more to say about that<br />
later. For now, let's see what the attribute grammar solution to our<br />
running example looks like.<br />
<br />
===The attribute grammar solution===<br />
I'll introduce attribute grammars using the syntax of the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem Utrecht University Attribute Grammar] system or UUAG for short. The UUAG system takes a file<br />
containing an attribute grammar definition and generates a Haskell module<br />
containing ''semantic functions'', determined by the attribute grammar. The<br />
attribute grammar determines a computation over some data structure; the<br />
semantic functions correspond to the actual Haskell functions that perform<br />
the computation.<br />
<br />
Although the UUAG system's syntax closely resembles Haskell, it is<br />
important to realize that the UUAG system is a Haskell pre-processor and<br />
not a complete Haskell compiler.<br />
<br />
So what does an attribute grammar file look like? Well, first of all we<br />
have to declare the data structure we're working with. In our example, we<br />
simply have a list of Floats.<br />
<br />
<haskell><br />
--- Listing Three ---<br />
<br />
DATA Root<br />
| Root list : List<br />
DATA List<br />
| Nil<br />
| Cons head : Float tail : List<br />
</haskell><br />
<br />
Datatypes are declared with the keyword <tt>DATA</tt>, followed by a list of<br />
constructors. Every constructor explicitly lists the name and type of all its<br />
children. In our example we have an empty list <tt>Nil</tt> and a list constructor<br />
<tt>Cons</tt> with two children, <tt>head</tt> and <tt>tail</tt>. For reasons that will become<br />
apparent later on, we add an additional datatype corresponding to the root<br />
of our list.<br />
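<br />
For reference, the <tt>DATA</tt> declarations in Listing Three correspond to ordinary Haskell datatypes along the following lines. The UUAG system generates such datatypes for you; the exact shape of the generated code may differ (the <tt>deriving</tt> clause here is added purely for illustration):<br />
<br />
```haskell
-- Plain-Haskell equivalents of the DATA declarations in Listing Three.
data Root = Root List
  deriving Show
data List = Nil
          | Cons Float List
  deriving Show
```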
<br />
So now that we've declared our datatype, let's add some ''attributes''. If we<br />
want to compute the average element, we'll need the length of the<br />
list. Listing Four introduces our first attribute, corresponding to<br />
a list's length.<br />
<br />
<haskell><br />
--- Listing Four ---<br />
<br />
ATTR List [|| length : Float]<br />
SEM List<br />
| Nil lhs.length = 0.0<br />
| Cons lhs.length = 1.0 + @tail.length<br />
</haskell><br />
<br />
Let's go over the code line by line.<br />
<br />
An attribute has to be declared before it can actually be defined. An<br />
attribute is declared using the <tt>ATTR</tt> statement. This example declares a<br />
single ''synthesized'' attribute called <tt>length</tt> of type <tt>Float</tt>. A<br />
synthesized attribute is typically a value you are trying to compute bottom<br />
up. Synthesized attributes are declared to the right of the second<br />
vertical bar. We'll see other kinds of attributes shortly.<br />
<br />
Now that we've declared our first attribute, we can actually define it. A<br />
<tt>SEM</tt> statement begins by declaring for which data type attributes are<br />
being defined. In our example we want to define an attribute on a <tt>List</tt>,<br />
hence we write <tt>SEM List</tt>. We can subsequently give attribute definitions<br />
for the constructors of our <tt>List</tt> data type.<br />
<br />
Every attribute definition consists of several parts. We begin by<br />
mentioning the constructor for which we define an attribute. In our example<br />
we give two definitions, one for <tt>Nil</tt> and one for <tt>Cons</tt>.<br />
<br />
The second part of the attribute definition describes which attribute is<br />
being defined. In our example we define the attribute <tt>length</tt> for the<br />
''left-hand side'', or <tt>lhs</tt>. A lot of the terminology associated with<br />
attribute grammars comes from the world of context-free grammars. As this<br />
tutorial focuses on functional programmers, rather than formal language<br />
gurus, feel free to read <tt>lhs</tt> as "parent node". It seems a bit odd to<br />
write <tt>lhs.length</tt> explicitly, but we'll see later on why merely writing<br />
<tt>length</tt> doesn't suffice.<br />
<br />
So far, all we've said is that the two definitions define the <tt>length</tt><br />
of <tt>Nil</tt> and <tt>Cons</tt>. We still have to fill in the necessary definitions. The<br />
actual definition of the attributes takes place to the right of the equals<br />
sign. Programmers are free to write any valid Haskell expression. In fact,<br />
the UUAG system does not analyse the attribute definitions at all, but merely<br />
copies them straight into the resulting Haskell module. In our example, we<br />
want the length of the empty list to be <tt>0.0</tt>. The case for <tt>Cons</tt> is a bit<br />
trickier.<br />
<br />
In the <tt>Cons</tt> case we want to increment the length computed so far. To<br />
do so we need to be able to refer to other attributes. In particular we<br />
want to refer to the <tt>length</tt> attribute of the tail. The expression<br />
<tt>@tail.length</tt> does just that. In general, you're free to refer to any<br />
synthesized attribute ''attr'' of a child node ''c'' by writing <tt>@c.attr</tt>.<br />
<br />
The <tt>length</tt> attribute can be depicted pictorially as follows:<br />
<br />
[[Image:WAGM-Length.png|The length attribute]]<br />
<br />
'''Exercise:''' Declare and define a synthesized attribute <tt>sum</tt> that<br />
computes the sum of a <tt>List</tt>. You can refer to a value <tt>val</tt> stored at a<br />
node as <tt>@val</tt>. For instance, write <tt>@head</tt> to refer to the float stored<br />
at a <tt>Cons</tt> node. Draw the corresponding picture if you're stuck.<br />
<br />
Now that we've defined <tt>length</tt> and <tt>sum</tt>, let's compute the average. We'll know<br />
the sum and the length of the entire list at the Root node. Using those<br />
attributes we can compute the average and ''broadcast'' the average through<br />
the rest of the list. Let's start with the picture this time:<br />
<br />
[[Image:WAGM-Avg.png|The average attribute]]<br />
<br />
The previous synthesized attributes, <tt>length</tt> and <tt>sum</tt>, defined bottom-up<br />
computations. We're now in the situation, however, where we want to pass<br />
information through the tree from a parent node to its child nodes using an<br />
''inherited'' attribute. Listing Five defines an inherited attribute <tt>avg</tt><br />
that corresponds to the picture we just drew.<br />
<br />
<haskell><br />
--- Listing Five ---<br />
ATTR List [ avg : Float|| ]<br />
SEM Root<br />
| Root list.avg = @list.sum / @list.length<br />
<br />
SEM List<br />
| Cons tail.avg = @lhs.avg<br />
<br />
</haskell><br />
<br />
Inherited attributes are declared to the left of the two vertical<br />
bars. Once we've declared an inherited attribute <tt>avg</tt> on lists, we're<br />
obliged to define how every constructor passes an <tt>avg</tt> to its children of<br />
type <tt>List</tt>.<br />
<br />
In our example, there are only two constructors with children<br />
of type <tt>List</tt>, namely <tt>Root</tt> and <tt>Cons</tt>. At the <tt>Root</tt> we compute the<br />
average, using the synthesized attributes <tt>sum</tt> and <tt>length</tt>, and pass the<br />
result to the <tt>list</tt> child. At the <tt>Cons</tt> node, we merely copy down the<br />
<tt>avg</tt> we received from our parent. Analogous to synthesized attributes, we<br />
can refer to an inherited attribute <tt>attr</tt> by writing <tt>@lhs.attr</tt>.<br />
<br />
Admittedly, this inherited attribute is not terribly interesting. There are<br />
plenty of other examples, however, where an inherited attribute represents<br />
important contextual information. Think of passing around the set of<br />
assumptions when writing a type checker, for instance.<br />
<br />
'''Exercise:''' To complete the attribute grammar, define an attribute<br />
<tt>res</tt> that computes the resulting list. Should it be inherited or<br />
synthesized? You may want to draw a picture.<br />
<br />
===Running the UUAG===<br />
Now suppose you've completed the exercises and copied the examples in a<br />
single file called <tt>Diff.ag</tt>. How do we actually use the attribute grammar?<br />
This is where the UUAG compiler steps in. Running the UUAG compiler on the<br />
source attribute grammar file generates a new <tt>Diff.hs</tt> file, which we can<br />
then compile like any other Haskell file.<br />
<br />
<haskell><br />
> uuagc -a Diff.ag<br />
> ghci Diff.hs<br />
</haskell><br />
<br />
The <tt>Diff.hs</tt> file contains several ingredients.<br />
<br />
Firstly, new Haskell datatypes are generated corresponding to <tt>DATA</tt><br />
declarations in the attribute grammar. For every generated datatype a<br />
corresponding <tt>fold</tt> is generated. The attribute definitions determine the<br />
arguments passed to the folds. Browsing through the generated code can<br />
actually be quite instructive.<br />
<br />
Inherited attributes are passed to recursive calls of the fold. Synthesized<br />
attributes are tupled and returned as the result of the computation. In<br />
essence, we've reproduced our original solution in Listing Two - but now<br />
without the hassle associated with spelling out [[catamorphism]]s with a<br />
higher-order domain and a compound codomain.<br />
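<br />
To give a feel for the output, here is a hand-written sketch of the kind of semantic functions one might expect for our running example. The names and exact shapes below are invented for illustration and will differ from the code the UUAG actually generates:<br />
<br />
```haskell
-- A hand-written sketch of UUAG-style semantic functions (names invented).
-- Synthesized attributes (length, sum, res) become components of a result
-- tuple; the inherited attribute (avg) becomes a function argument.
type T_List = Float -> (Float, Float, [Float])   -- avg -> (length, sum, res)

sem_List_Nil :: T_List
sem_List_Nil _avg = (0.0, 0.0, [])

sem_List_Cons :: Float -> T_List -> T_List
sem_List_Cons head_ tail_ avg =
  let (l, s, r) = tail_ avg
  in (1.0 + l, head_ + s, (head_ - avg) : r)

-- At the root we tie the recursive knot: the average fed downwards is
-- computed from the length and sum coming upwards.
sem_Root :: T_List -> [Float]
sem_Root list =
  let (l, s, r) = list (s / l)
  in r
```
<br />
Note how <tt>sem_Root</tt> is essentially the knot-tying trick from Listing Two, now derived systematically from the attribute definitions.<br />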
<br />
The attribute grammar solution is just as efficient as our earlier solution<br />
relying on lazy evaluation, yet the code is hardly different from what we<br />
would write in a straightforward Haskell solution. It really is the best of<br />
both worlds. The two types of glue that John Hughes pinpoints in his<br />
original article just aren't always enough. I would like to think that<br />
attribute grammars are sometimes capable of providing just the right<br />
missing bit of glue.<br />
<br />
===What are attribute grammars?===<br />
So just what are attribute grammars? Well, that depends on who you ask,<br />
really. I've tried to sum up some different views below.<br />
<br />
Attribute grammars add semantics to a context free grammar. Although it is<br />
easy enough to describe a language's syntax using a context free grammar,<br />
accurately describing a language's semantics is notoriously<br />
difficult. Attribute grammars specify a language's semantics by<br />
'decorating' a context free grammar with those attributes you are interested<br />
in.<br />
<br />
Attribute grammars describe tree traversals. All imperative implementations<br />
of attribute grammar systems perform tree traversals to compute some<br />
value. Basically an attribute grammar declares ''which'' values to compute<br />
and an attribute grammar system executes these computations. Once you've<br />
made this observation, the close relation to functional programming should<br />
not come as a surprise.<br />
<br />
Attribute grammars are a formalism for writing catamorphisms in a<br />
compositional fashion. Basically, the only thing the UUAG compiler does is<br />
generate large folds that I couldn't be bothered writing myself. It takes<br />
away all the elbow grease involved with maintaining and extending such<br />
code. In a sense the compiler does absolutely nothing new; it just makes<br />
life a lot easier.<br />
<br />
Attribute grammars provide a framework for aspect oriented programming in<br />
functional languages. Lately there has been a lot of buzz about the<br />
importance of ''aspects'' and ''aspect oriented programming''. Attribute<br />
grammars provide a clear and well-established framework for splitting code<br />
into separate aspects. By spreading attribute definitions over several<br />
different files and grouping them according to aspect, attribute grammars<br />
provide a natural setting for aspect oriented programming.<br />
<br />
How do attribute grammars relate to other Haskell abstractions? I'll try to<br />
put my finger on some of the more obvious connections, but I'm pretty sure<br />
there's a great deal more that I don't cover here.<br />
<br />
==What else is out there?==<br />
Everyone loves monads. They're what makes IO possible in Haskell. There are<br />
extensive standard libraries and syntactic sugar specifically designed to<br />
make life with monads easier. There are an enormous number of Haskell<br />
libraries based on the monadic interface. They represent one of the most<br />
substantial developments of functional programming in the last decade.<br />
<br />
Yet somehow, the single most common question asked by fledgling Haskell<br />
programmers is probably ''What are monads?''. Beginners have a hard time<br />
grasping the concept of monads and yet connoisseurs recognize a monad in<br />
just about every code snippet. I think the more important question is:<br />
''What are monads good for?''<br />
<br />
Monads provide a simple yet powerful abstract notion of computation. In<br />
essence, a monad describes how to sequence computations. This is crucial in<br />
order to perform IO in a functional language; by constraining all IO<br />
actions to a single interface of sequenced computations, the programmer is<br />
prevented from creating utter chaos. The real power of monads is in the<br />
interface they provide.<br />
<br />
John Hughes identified modularity as the single biggest blessing of<br />
functional programming. The obvious question is: how modular is the monadic<br />
interface? This really depends on your definition of modularity. Let me<br />
be more specific. How can you combine two arbitrary monads? You can't. This<br />
is my greatest concern with monads. Once you choose your specific notion of<br />
computation, you have to stick to it through thick and thin.<br />
<br />
What about monad transformers? Monad transformers allow you to add a<br />
specific monad's functionality on top of any existing monad. What seems<br />
like a solution, more often than not, turns out to introduce more problems<br />
than you bargained for. Adding new functionality to a monad involves<br />
lifting all the computations from the previous monad to the new<br />
one. Although I could learn to live with this, it gets even worse. As every<br />
monad transformer really changes the underlying monad the ''order'' in<br />
which monad transformers are applied really makes a difference. If I want<br />
to add error reporting and state to some existing monad, should I be forced<br />
to consider the order in which I add them?<br />
<br />
Monads are extremely worthwhile for the interface they provide. Monadic<br />
libraries are great, but changing and extending monadic code can be a<br />
pain. Can we do better? Well I probably wouldn't have started this monadic<br />
intermezzo if I didn't have some sort of answer.<br />
<br />
Let's start off with <tt>Reader</tt> monads, for instance. Essentially, <tt>Reader</tt><br />
monads add an argument to some computation. Wait a minute, this reminds me<br />
of inherited attributes. What about <tt>Writer</tt> monads? They correspond to<br />
synthesized attributes of course. Finally, <tt>State</tt> monads correspond to<br />
''chained'' attributes, or attributes that are both synthesized and<br />
inherited. The real edge attribute grammars hold over monad transformers is that you<br />
can define new attributes ''without'' worrying about the order in which you<br />
define them or adapting existing code.<br />
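<br />
As a small illustration of this correspondence, here is a sketch that uses the <tt>Monad</tt> instance for functions <tt>((->) r)</tt> from base, which is <tt>Reader</tt> in all but name, to thread the average implicitly, just like an inherited attribute:<br />
<br />
```haskell
import Data.List (genericLength)

-- The function monad ((->) r) from base is Reader in all but name:
-- every computation implicitly receives an environment, here the average.
diffR :: [Float] -> [Float]
diffR xs = body (sum xs / genericLength xs)
  where
    -- mapM in the ((->) Float) monad passes the average to every
    -- element, just like an inherited avg attribute.
    body :: Float -> [Float]
    body = mapM (\x avg -> x - avg) xs
```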
<br />
Do other abstractions capture other notions related to attribute grammars?<br />
Of course they do! Just look at the <tt>Arrow</tt> instance for the function space. The notion of<br />
combining two distinct computations using the <tt>(&&&)</tt> operator relates to<br />
the concept of ''joining'' two attribute grammars by collecting their<br />
attribute definitions. When you look at the <tt>loop</tt> combinator, I can<br />
only be grateful that an attribute grammar system deals with attribute<br />
dependencies automatically.<br />
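<br />
To make the <tt>(&&&)</tt> connection concrete, here is a small sketch at the function-space <tt>Arrow</tt> instance, pairing the <tt>sum</tt> and <tt>length</tt> "attribute definitions" over the same input:<br />
<br />
```haskell
import Control.Arrow ((&&&))
import Data.List (genericLength)

-- (&&&) at the function space runs two 'attribute definitions'
-- over the same input and pairs their results, much like collecting
-- two synthesized attributes.
average :: [Float] -> Float
average = uncurry (/) . (sum &&& genericLength)
```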
<br />
There really is a lot of related work. Implicit parameters? Inherited attributes! Linear implicit parameters? Chained attributes! Concepts that are so natural in the setting of attribute grammars, yet seem contrived when added to Haskell. This strengthens my belief that functional programmers can really benefit from even the most fleeting experience with attribute grammars; although I'd like to think that if you've read this far, you're hungry for more.<br />
<br />
==Further reading==<br />
This more or less covers the tutorial section of this article. The best way<br />
to learn more about attribute grammars is by actually using them. To conclude the tutorial, I've<br />
included a small example for you to play with. I've written a parser for a very<br />
simple wiki formatting language not entirely unlike the one used to produce this<br />
document. So far the HTML generated after parsing a document is fairly poor. It's up to<br />
you to improve it!<br />
<br />
You can download the initial version here. Don't forget to install the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem#Download UUAG]. It might<br />
be worthwhile to have a look at the [http://www.cs.uu.nl/~arthurb/data/AG/AGman.html UUAG manual] as there's a lot of technical detail that I haven't mentioned.<br />
<br />
If you're particularly daring, you may want to take a look at the [http://www.cs.uu.nl/wiki/Ehc/WebHome Essential Haskell Compiler] being developed at Utrecht. It's almost completely written using the UUAG and is designed to be suitable for education and experimentation. The compiler was presented at the Summer School for Advanced Functional Programming in Tartu, Estonia last summer. As a result, there's a lot written about it already.<br />
<br />
Dive on in!<br />
<br />
[[Category:Article]]</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue4/Why_Attribute_Grammars_Matter&diff=24150The Monad.Reader/Issue4/Why Attribute Grammars Matter2008-11-19T16:06:49Z<p>WouterSwierstra: </p>
<hr />
<div>=Why Attribute Grammars Matter=<br />
:''by Wouter Swierstra for The Monad.Reader Issue Four''; 01-07-05<br />
<br />
==Introduction==<br />
Almost twenty years have passed since John Hughes' influential paper [http://www.math.chalmers.se/~rjmh/Papers/whyfp.html Why Functional Programming Matters]. Around the same time, the first work on<br />
attribute grammars and their relation to functional programming<br />
appeared. Despite the growing popularity of functional programming,<br />
attribute grammars remain remarkably less well known.<br />
<br />
The purpose of this article is twofold. On the one hand it illustrates how<br />
functional programming sometimes scales poorly and how<br />
attribute grammars can remedy these problems. On the other hand it aims to<br />
provide a gentle introduction to attribute grammars for seasoned functional<br />
programmers.<br />
<br />
==The problem==<br />
John Hughes argues that with the increasing complexity of modern<br />
software systems, modularity has become of paramount importance to software<br />
development. Functional languages provide new kinds of ''glue'' that create<br />
new opportunities for more modular code. In particular, Hughes stresses<br />
the importance of higher-order functions and lazy evaluation. There are<br />
plenty of examples where this works nicely - yet situations arise where<br />
the glue that functional programming provides somehow isn't quite enough.<br />
<br />
Perhaps a small example is in order. Suppose we want to write a function<br />
<tt>diff :: [Float] -> [Float]</tt> that given a list <tt>xs</tt>, calculates a new list where every element <tt>x</tt> is replaced with the difference between <tt>x</tt> and the<br />
average of <tt>xs</tt>. Similar problems pop up in any library for performing<br />
statistical calculations.<br />
<br />
===Higher-order functions===<br />
Let's tackle the problem with some of Haskell's most powerful glue - higher-order functions. Any beginning Haskell programmer should be able to concoct the solution presented in Listing One. The average is computed using functions from the Prelude. The obvious function using this average is then mapped over the original list. So far, so good.<br />
<br />
<haskell><br />
--- Listing One ---<br />
<br />
import Data.List (genericLength)<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs = map (\x -> x - (avg xs)) xs<br />
<br />
avg :: [Float] -> Float<br />
avg xs = sum xs / genericLength xs<br />
</haskell><br />
<br />
There are, however, a few things swept under the rug in this example. First<br />
of all, this simple problem requires three traversals of the original<br />
list. Computing additional values from the original list will require even<br />
more traversals.<br />
<br />
Secondly, the solution is so concise because it depends on Prelude<br />
functions. If the values were stored in a slightly different data structure,<br />
the solution would require a lot of tedious work. We could, of course,<br />
define our own higher-order functions, such as <tt>map</tt> and <tt>fold</tt>, or even<br />
resort to generic programming. There are, however,<br />
more ways to skin this particular cat.<br />
<br />
This problem illustrates the sheer elegance of functional programming, but<br />
we do pay a price for the succinctness of the solution. Multiple traversals<br />
and boilerplate code can both be quite a headache. If we want to perform<br />
complex computations over custom data structures, we may want to consider an<br />
alternative approach.<br />
<br />
Fortunately, as experienced functional programmers, we have another card up<br />
our sleeve.<br />
<br />
===Lazy evaluation===<br />
The second kind of glue that functional programming provides is lazy<br />
evaluation. In essence, lazy evaluation only evaluates expressions when<br />
they become absolutely necessary.<br />
<br />
In particular, lazy evaluation enables the definition of ''circular programs'' that bear a dangerous resemblance to undefined values. Circular<br />
programs tuple separate computations, relying on lazy evaluation to feed<br />
the results of one computation to another.<br />
<br />
In our example, we could simply compute the length and sum of the list at<br />
the same time:<br />
<br />
<haskell><br />
average :: [Float] -> Float<br />
average xs = let<br />
nil = (0.0, 0.0)<br />
cons x (s,l) = (x + s, 1.0 + l)<br />
(sum,length) = foldr cons nil xs<br />
in sum / length<br />
</haskell><br />
<br />
We can eliminate traversals by tupling computations! Can we compute the<br />
resulting list at the same time as computing the sum and length? Let's try:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, (x - ....) : rs)<br />
(sum,length,res) = foldr cons nil xs<br />
in res<br />
</haskell><br />
<br />
We run into trouble when we try to use the average to construct the<br />
resulting list. The problem is that we haven't computed the average yet,<br />
but somehow we want to use it during the traversal. To solve this, we don't<br />
actually compute the resulting list, but rather compute a function taking the<br />
average to the resulting list:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, \avg -> [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, \avg -> (x - avg) : rs avg)<br />
(sum,length,res) = foldr cons nil xs<br />
in res (sum / length)<br />
</haskell><br />
<br />
We can generalize this idea a bit further. Suppose that we want to compute<br />
other values that use the average. We could just add an <tt>avg</tt> argument to<br />
every element of the tuple that needs the average. It is a bit nicer,<br />
however, to lift the <tt>avg</tt> argument outside the tuple. Our final listing<br />
now becomes:<br />
<br />
<haskell><br />
--- Listing Two ---<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
let<br />
nil avg = (0.0, 0.0, [])<br />
cons x fs avg = let (s,l,ds) = fs avg<br />
in (s+x,l+1.0,x-avg : ds)<br />
(sum,length,ds) = foldr cons nil xs (sum / length)<br />
in ds<br />
</haskell><br />
<br />
Now every element of the tuple can refer to the average, rather than just<br />
the final list.<br />
<br />
This ''credit card transformation'' eliminates multiple traversals by<br />
tupling computations. We use the average without worrying if we have<br />
actually managed to compute it. When we actually write the fold, however,<br />
we have to put our average where our mouth is. Fortunately, the <tt>sum</tt> and<br />
<tt>length</tt> don't depend on the average, so we are free to use these values to<br />
tie the recursive knot.<br />
<br />
The code in Listing Two only needs a single traversal and one<br />
higher-order function. It apparently solves the problems with the code in<br />
Listing One.<br />
<br />
Hold on a minute. What ever happened to the elegance of our previous<br />
solution? Our second solution appears to have sacrificed clarity for the<br />
sake of efficiency. Who in their right minds would want to write the code<br />
in Listing Two? I wouldn't. Maybe, just maybe, we can do a bit better.<br />
<br />
==Attribute Grammars==<br />
Before even explaining what an attribute grammar is, think back to when you<br />
first learned about ''folds''. Initially, a fold seems like a silly<br />
abstraction. Why should I bother writing simple functions as folds? After<br />
all, I already know how to write the straightforward solution. It's only<br />
after a great deal of experience with functional programming that you learn<br />
to recognize folds as actually being a worthwhile abstraction. Learning<br />
about attribute grammars is similar in more ways than one.<br />
<br />
So what are attribute grammars? I'll have a bit more to say about that<br />
later. For now, let's see what the attribute grammar solution to our<br />
running example looks like.<br />
<br />
===The attribute grammar solution===<br />
I'll introduce attribute grammars using the syntax of the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem Utrecht University Attribute Grammar] system or UUAG for short. The UUAG system takes a file<br />
containing an attribute grammar definition and generates a Haskell module<br />
containing ''semantic functions'', determined by the attribute grammar. The<br />
attribute grammar determines a computation over some data structure; the<br />
semantic functions correspond to the actual Haskell functions that perform<br />
the computation.<br />
<br />
Although the UUAG system's syntax closely resembles Haskell, it is<br />
important to realize that the UUAG system is a Haskell pre-processor and<br />
not a complete Haskell compiler.<br />
<br />
So what does an attribute grammar file look like? Well, first of all we<br />
have to declare the data structure we're working with. In our example, we<br />
simply have a list of Floats.<br />
<br />
<haskell><br />
--- Listing Three ---<br />
<br />
DATA Root<br />
  | Root  list : List<br />
<br />
DATA List<br />
  | Nil<br />
  | Cons  head : Float  tail : List<br />
</haskell><br />
<br />
Datatypes are declared with the keyword <tt>DATA</tt>, followed by a list of<br />
constructors. Every node explicitly gives the name and type of all its<br />
children. In our example we have an empty list <tt>Nil</tt> and a list constructor<br />
<tt>Cons</tt> with two children, <tt>head</tt> and <tt>tail</tt>. For reasons that will become<br />
apparent later on, we add an additional datatype corresponding to the root<br />
of our list.<br />
<br />
So now that we've declared our datatype, let's add some ''attributes''. If we<br />
want to compute the average element, we'll need the length of the<br />
list. Listing Four introduces our first attribute, corresponding to<br />
a list's length.<br />
<br />
<haskell><br />
--- Listing Four ---<br />
<br />
ATTR List [|| length : Float]<br />
<br />
SEM List<br />
  | Nil   lhs.length = 0.0<br />
  | Cons  lhs.length = 1.0 + @tail.length<br />
</haskell><br />
<br />
Let's go over the code line by line.<br />
<br />
An attribute has to be declared before it can actually be defined. An<br />
attribute is declared using the <tt>ATTR</tt> statement. This example declares a<br />
single ''synthesized'' attribute called <tt>length</tt> of type <tt>Float</tt>. A<br />
synthesized attribute is typically a value you are trying to compute bottom<br />
up. Synthesized attributes are declared to the right of the second<br />
vertical bar. We'll see other kinds of attributes shortly.<br />
<br />
Now that we've declared our first attribute, we can actually define it. A<br />
<tt>SEM</tt> statement begins by declaring for which data type attributes are<br />
being defined. In our example we want to define an attribute on a <tt>List</tt>,<br />
hence we write <tt>SEM List</tt>. We can subsequently give attribute definitions<br />
for the constructors of our <tt>List</tt> data type.<br />
<br />
Every attribute definition consists of several parts. We begin by<br />
mentioning the constructor for which we define an attribute. In our example<br />
we give two definitions, one for <tt>Nil</tt> and one for <tt>Cons</tt>.<br />
<br />
The second part of the attribute definition describes which attribute is<br />
being defined. In our example we define the attribute <tt>length</tt> for the<br />
''left-hand side'', or <tt>lhs</tt>. A lot of the terminology associated with<br />
attribute grammars comes from the world of context-free grammars. As this<br />
tutorial focuses on functional programmers, rather than formal language<br />
gurus, feel free to read <tt>lhs</tt> as "parent node". It seems a bit odd to<br />
write <tt>lhs.length</tt> explicitly, but we'll see later on why merely writing<br />
<tt>length</tt> doesn't suffice.<br />
<br />
Basically, all we've said so far is that the two definitions define the <tt>length</tt><br />
of <tt>Nil</tt> and <tt>Cons</tt>. We still have to fill in the necessary definitions. The<br />
actual definition of the attributes takes place to the right of the equals<br />
sign. Programmers are free to write any valid Haskell expression. In fact,<br />
the UUAG system does not analyse the attribute definitions at all, but merely<br />
copies them straight into the resulting Haskell module. In our example, we<br />
want the length of the empty list to be <tt>0.0</tt>. The case for <tt>Cons</tt> is a bit<br />
trickier.<br />
<br />
In the <tt>Cons</tt> case we want to increment the length computed so far. To<br />
do so we need to be able to refer to other attributes. In particular we<br />
want to refer to the <tt>length</tt> attribute of the tail. The expression<br />
<tt>@tail.length</tt> does just that. In general, you're free to refer to any<br />
synthesized attribute ''attr'' of a child node ''c'' by writing <tt>@c.attr</tt>.<br />
<br />
The <tt>length</tt> attribute can be depicted as follows:<br />
<br />
attachment:length.png<br />
<br />
'''Exercise:''' Declare and define a synthesized attribute <tt>sum</tt> that<br />
computes the sum of a <tt>List</tt>. You can refer to a value <tt>val</tt> stored at a<br />
node as <tt>@val</tt>. For instance, write <tt>@head</tt> to refer to the float stored<br />
at a <tt>Cons</tt> node. Draw the corresponding picture if you're stuck.<br />
<br />
Now that we've defined <tt>length</tt> and <tt>sum</tt>, let's compute the average. We'll know<br />
the sum and the length of the entire list at the Root node. Using those<br />
attributes we can compute the average and ''broadcast'' the average through<br />
the rest of the list. Let's start with the picture this time:<br />
<br />
attachment:avg.png<br />
<br />
The previous synthesized attributes, <tt>length</tt> and <tt>sum</tt>, defined bottom-up<br />
computations. We're now in the situation, however, where we want to pass<br />
information through the tree from a parent node to its child nodes using an<br />
''inherited'' attribute. Listing Five defines an inherited attribute <tt>avg</tt><br />
that corresponds to the picture we just drew.<br />
<br />
<haskell><br />
--- Listing Five ---<br />
ATTR List [ avg : Float || ]<br />
<br />
SEM Root<br />
  | Root  list.avg = @list.sum / @list.length<br />
<br />
SEM List<br />
  | Cons  tail.avg = @lhs.avg<br />
<br />
</haskell><br />
<br />
Inherited attributes are declared to the left of the two vertical<br />
bars. Once we've declared an inherited attribute <tt>avg</tt> on lists, we're<br />
obliged to define how every constructor passes an <tt>avg</tt> to its children of<br />
type <tt>List</tt>.<br />
<br />
In our example, there are only two constructors with children<br />
of type <tt>List</tt>, namely <tt>Root</tt> and <tt>Cons</tt>. At the <tt>Root</tt> we compute the<br />
average, using the synthesized attributes <tt>sum</tt> and <tt>length</tt>, and pass the<br />
result to the <tt>list</tt> child. At the <tt>Cons</tt> node, we merely copy down the<br />
<tt>avg</tt> we received from our parent. Analogous to synthesized attributes, we<br />
can refer to an inherited attribute <tt>attr</tt> by writing <tt>@lhs.attr</tt>.<br />
<br />
Admittedly, this inherited attribute is not terribly interesting. There are<br />
plenty of other examples, however, where an inherited attribute represents<br />
important contextual information. Think of passing around the set of<br />
assumptions when writing a type checker, for instance.<br />
<br />
'''Exercise:''' To complete the attribute grammar, define an attribute<br />
<tt>res</tt> that computes the resulting list. Should it be inherited or<br />
synthesized? You may want to draw a picture.<br />
<br />
===Running the UUAG===<br />
Now suppose you've completed the exercises and copied the examples into a<br />
single file called <tt>Diff.ag</tt>. How do we actually use the attribute grammar?<br />
This is where the UUAG compiler steps in. Running the UUAG compiler on the<br />
source attribute grammar file generates a new <tt>Diff.hs</tt> file, which we can<br />
then compile like any other Haskell file.<br />
<br />
<pre><br />
> uuagc -a Diff.ag<br />
> ghci Diff.hs<br />
</pre><br />
<br />
The <tt>Diff.hs</tt> file contains several ingredients.<br />
<br />
Firstly, new Haskell datatypes are generated corresponding to <tt>DATA</tt><br />
declarations in the attribute grammar. For every generated datatype a<br />
corresponding <tt>fold</tt> is generated. The attribute definitions determine the<br />
arguments passed to the folds. Browsing through the generated code can<br />
actually be quite instructive.<br />
<br />
Inherited attributes are passed to recursive calls of the fold. Synthesized<br />
attributes are tupled and returned as the result of the computation. In<br />
essence, we've reproduced our original solution in Listing Two - but now<br />
without the hassle associated with spelling out [[catamorphism]]s with a<br />
higher-order domain and a compound codomain.<br />
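<br />
To make this concrete, here is a hand-written sketch of what such semantic<br />
functions look like for our example. The names are my own; the UUAG's actual<br />
output differs in its naming and plumbing, but the shape is the same:<br />
<br />
<haskell><br />
-- Inherited attributes become arguments; synthesized attributes are<br />
-- tupled up in the result.<br />
type SemList = Float            -- inherited: avg<br />
            -> ( Float          -- synthesized: length<br />
               , Float          -- synthesized: sum<br />
               , [Float] )      -- synthesized: res<br />
<br />
semNil :: SemList<br />
semNil _avg = (0.0, 0.0, [])<br />
<br />
semCons :: Float -> SemList -> SemList<br />
semCons x tl avg =<br />
  let (l, s, ds) = tl avg<br />
  in (l + 1.0, s + x, (x - avg) : ds)<br />
<br />
semRoot :: SemList -> [Float]<br />
semRoot list =<br />
  let (l, s, ds) = list (s / l)   -- the Root ties the recursive knot<br />
  in ds<br />
</haskell><br />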
<br />
The attribute grammar solution is just as efficient as our earlier solution<br />
relying on lazy evaluation, yet the code is hardly different from what we<br />
would write in a straightforward Haskell solution. It really is the best of<br />
both worlds. The two types of glue that John Hughes pinpoints in his<br />
original article just aren't enough. I would like to think that<br />
attribute grammars are sometimes capable of providing just the right<br />
missing bit of glue.<br />
<br />
===What are attribute grammars?===<br />
So just what are attribute grammars? Well, that depends on who you ask,<br />
really. I've tried to sum up some different views below.<br />
<br />
Attribute grammars add semantics to a context free grammar. Although it is<br />
easy enough to describe a language's syntax using a context free grammar,<br />
accurately describing a language's semantics is notoriously<br />
difficult. Attribute grammars specify a language's semantics by<br />
'decorating' a context free grammar with those attributes you are interested<br />
in.<br />
<br />
Attribute grammars describe tree traversals. All imperative implementations<br />
of attribute grammar systems perform tree traversals to compute some<br />
value. Basically an attribute grammar declares ''which'' values to compute<br />
and an attribute grammar system executes these computations. Once you've<br />
made this observation, the close relation to functional programming should<br />
not come as a surprise.<br />
<br />
Attribute grammars are a formalism for writing catamorphisms in a<br />
compositional fashion. Basically, the only thing the UUAG compiler does is<br />
generate large folds that I couldn't be bothered writing myself. It takes<br />
away all the elbow grease involved with maintaining and extending such<br />
code. In a sense the compiler does absolutely nothing new; it just makes<br />
life a lot easier.<br />
<br />
Attribute grammars provide a framework for aspect oriented programming in<br />
functional languages. Lately there has been a lot of buzz about the<br />
importance of ''aspects'' and ''aspect oriented programming''. Attribute<br />
grammars provide a clear and well-established framework for splitting code<br />
into separate aspects. By spreading attribute definitions over several<br />
different files and grouping them according to aspect, attribute grammars<br />
provide a natural setting for aspect oriented programming.<br />
<br />
How do attribute grammars relate to other Haskell abstractions? I'll try to<br />
put my finger on some of the more obvious connections, but I'm pretty sure<br />
there's a great deal more that I don't cover here.<br />
<br />
==What else is out there?==<br />
Everyone loves monads. They're what makes IO possible in Haskell. There are<br />
extensive standard libraries and syntactic sugar specifically designed to<br />
make life with monads easier. There are an enormous number of Haskell<br />
libraries based on the monadic interface. They represent one of the most<br />
substantial developments of functional programming in the last decade.<br />
<br />
Yet somehow, the single most common question asked by fledgling Haskell<br />
programmers is probably ''What are monads?''. Beginners have a hard time<br />
grasping the concept of monads and yet connoisseurs recognize a monad in<br />
just about every code snippet. I think the more important question is:<br />
''What are monads good for?''<br />
<br />
Monads provide a simple yet powerful abstract notion of computation. In<br />
essence, a monad describes how to sequence computations. This is crucial in<br />
order to perform IO in a functional language; by constraining all IO<br />
actions to a single interface of sequenced computations, the programmer is<br />
prevented from creating utter chaos. The real power of monads is in the<br />
interface they provide.<br />
<br />
John Hughes identified modularity as the single biggest blessing of<br />
functional programming. The obvious question is: how modular is the monadic<br />
interface? This really depends on your definition of modularity. Let me<br />
be more specific. How can you combine two arbitrary monads? You can't. This<br />
is my greatest concern with monads. Once you choose your specific notion of<br />
computation, you have to stick to it through thick and thin.<br />
<br />
What about monad transformers? Monad transformers allow you to add a<br />
specific monad's functionality on top of any existing monad. What seems<br />
like a solution, more often than not, turns out to introduce more problems<br />
than you bargained for. Adding new functionality to a monad involves<br />
lifting all the computations from the previous monad to the new<br />
one. Although I could learn to live with this, it gets even worse. As every<br />
monad transformer really changes the underlying monad, the ''order'' in<br />
which monad transformers are applied really makes a difference. If I want<br />
to add error reporting and state to some existing monad, should I be forced<br />
to consider the order in which I add them?<br />
<br />
Monads are extremely worthwhile for the interface they provide. Monadic<br />
libraries are great, but changing and extending monadic code can be a<br />
pain. Can we do better? Well I probably wouldn't have started this monadic<br />
intermezzo if I didn't have some sort of answer.<br />
<br />
Let's start off with <tt>Reader</tt> monads, for instance. Essentially, <tt>Reader</tt><br />
monads add an argument to some computation. Wait a minute, this reminds me<br />
of inherited attributes. What about <tt>Writer</tt> monads? They correspond to<br />
synthesized attributes of course. Finally, <tt>State</tt> monads correspond to<br />
''chained'' attributes, or attributes that are both synthesized and<br />
inherited. The real edge attribute grammars hold over monad transformers is that you<br />
can define new attributes ''without'' worrying about the order in which you<br />
define them or adapting existing code.<br />
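<br />
In types, the correspondence looks roughly like this (my own summary, not<br />
code from any of the libraries involved):<br />
<br />
<haskell><br />
-- Rough shapes only, specialised to a Float-valued attribute:<br />
type Inherited a   = Float -> a           -- a Reader-like shape<br />
type Synthesized a = (a, Float)           -- a Writer-like shape<br />
type Chained a     = Float -> (a, Float)  -- a State-like shape<br />
</haskell><br />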
<br />
Do other abstractions capture other notions related to attribute grammars?<br />
Of course they do! Just look at the <tt>Arrow</tt> instance for the function space. The notion of<br />
combining two distinct computations using the <tt>(&&&)</tt> operator relates to<br />
the concept of ''joining'' two attribute grammars by collecting their<br />
attribute definitions. When you look at the <tt>loop</tt> combinator, I can<br />
only be grateful that an attribute grammar system deals with attribute<br />
dependencies automatically.<br />
<br />
There really is a lot of related work. Implicit parameters? Inherited attributes! Linear implicit parameters? Chained attributes! Concepts that are so natural in the setting of attribute grammars, yet seem contrived when added to Haskell. This strengthens my belief that functional programmers can really benefit from even the most fleeting experience with attribute grammars; although I'd like to think that if you've read this far, you're hungry for more.<br />
<br />
==Further reading==<br />
This more or less covers the tutorial section of this article. The best way<br />
to learn more about attribute grammars is by actually using them. To conclude the tutorial, I've<br />
included a small example for you to play with. I've written a parser for a very<br />
simple wiki formatting language not entirely unlike the one used to produce this<br />
document. So far the HTML generated after parsing a document is fairly poor. It's up to<br />
you to improve it!<br />
<br />
You can download the initial version here. Don't forget to install the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem#Download UUAG]. It might<br />
be worthwhile to have a look at the [http://www.cs.uu.nl/~arthurb/data/AG/AGman.html UUAG manual] as there's a lot of technical detail that I haven't mentioned.<br />
<br />
If you're particularly daring, you may want to take a look at the [http://www.cs.uu.nl/wiki/Ehc/WebHome Essential Haskell Compiler] being developed at Utrecht. It's almost completely written using the UUAG and is designed to be suitable for education and experimentation. The compiler was presented at the Summer School for Advanced Functional Programming in Tartu, Estonia last summer. As a result, there's a lot written about it already.<br />
<br />
Dive on in!<br />
<br />
[[Category:Article]]</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=24149The Monad.Reader2008-11-19T15:10:30Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki-page. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
[[Media:TMR-Issue12.pdf|The Monad.Reader Issue 12]] is now out.<br />
<br />
Issue 12 is a Summer of Code special and consists of the following three articles:<br />
<br />
;''Compiler Development Made Easy''<br />
:Max Bolingbroke<br />
;''How to Build a Physics Engine''<br />
:Roman Cheplyaka<br />
;''Hoogle Overview''<br />
:Neil Mitchell<br />
<br />
Discussion of this Issue's articles may be found on a [[The_Monad.Reader/Discuss_Issue12|separate page.]]<br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue12/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue12/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any general discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
All the previous editions have moved to a [[The_Monad.Reader/Previous_issues|separate page]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. Expect the deadline for Issue 13 to be early 2009.<br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Previous_issues&diff=24142The Monad.Reader/Previous issues2008-11-19T14:53:46Z<p>WouterSwierstra: </p>
<hr />
<div>[[Media:TMR-Issue1.pdf|The Monad.Reader Issue 1]] was released on March 1, 2005.<br />
;''<nowiki>Pseudocode: Natural Style</nowiki>''<br />
:Andrew J. Bromage<br />
;''Pugs Apocryphon 1 - Overview of the Pugs project''<br />
:Autrijus Tang<br />
;''An Introduction to Gtk2Hs, a Haskell GUI Library''<br />
:Kenneth Hoste<br />
;''Implementing Web-Services with the HAIFA Framework''<br />
:Simon D. Foster<br />
;''<nowiki>Code Probe - Issue one: Haskell XML-RPC, v.2004-06-17 [1]</nowiki>''<br />
:Sven Moritz Hallberg<br />
<br />
[[The Monad.Reader/Issue2| The Monad.Reader Issue 2]] was released May 2005.<br />
;''Impure Thoughts 1 - Thtatic Compilathionth (without a lisp)''<br />
:Philippa Cowderoy<br />
;''Eternal Compatibility In Theory''<br />
:Sven Moritz Hallberg<br />
;''Fun with Linear Implicit Parameters''<br />
:Thomas Jäger<br />
;''Haskore''<br />
:Bastiaan Zapf<br />
;''Bzlib2 Binding - An Introduction to the FFI''<br />
:Peter Eriksen<br />
<br />
[[The Monad.Reader/Issue3| The Monad.Reader Issue 3]] was released June 2005.<br />
;''Notes on Learning Haskell''<br />
:Graham Klyne <br />
;''Functional Programming vs Object Oriented Programming''<br />
:Alistair Bayley <br />
;''Concurrent and Distributed Programming with Join Hs''<br />
:Einar Karttunen <br />
;''"Haskell School Of Expression"<nowiki>:</nowiki> Review of The Haskell School of Expression''<br />
:Isaac Jones <br />
;''Review of "Purely Functional Data Structures"''<br />
:Andrew Cooke <br />
<br />
[[The Monad.Reader/Issue4 | The Monad.Reader Issue 4]] was released 5 July 2005.<br />
;''Impure Thoughts 2, B&D not S&M'' (off-wiki)<br />
:Philippa Cowderoy <br />
;''Why Attribute Grammars Matter''<br />
:Wouter Swierstra <br />
;''Solving Sudoku''<br />
:Dominic Fox <br />
;''On Treaps And Randomization''<br />
:Jesper Louis Andersen <br />
<br />
[[The Monad.Reader/Issue5 | The Monad.Reader Issue 5]] was released October 2005.<br />
;''<nowiki>Haskell: A Very Different Language</nowiki>''<br />
:John Goerzen<br />
;''Generating Polyominoes''<br />
:Dominic Fox<br />
;''<nowiki>HRay:A Haskell ray tracer</nowiki>''<br />
:Kenneth Hoste<br />
;''Number-parameterized types''<br />
:Oleg Kiselyov<br />
;''A Practical Approach to Graph Manipulation''<br />
:Jean Philippe Bernardy<br />
;''Software Testing With Haskell''<br />
:Shae Erisson<br />
<br />
[[Media:TMR-Issue6.pdf|The Monad.Reader Issue 6]] was released January 31, 2007.<br />
;''Getting a Fix from the Right Fold''<br />
:Bernie Pope<br />
;''Adventures in Classical-Land''<br />
:Dan Piponi<br />
;''Assembly: Circular Programming with Recursive do''<br />
:Russell O'Connor<br />
<br />
[[Media:TMR-Issue7.pdf|The Monad.Reader Issue 7]] was released April 30, 2007.<br />
;''A Recipe for controlling Lego using Lava''<br />
:Matthew Naylor<br />
;''<nowiki>Caml Trading: Experiences in Functional Programming on Wall Street</nowiki>''<br />
:Yaron Minsky<br />
;''<nowiki>Book Review: “Programming in Haskell” by Graham Hutton</nowiki>''<br />
:Duncan Coutts<br />
;''Yhc.Core – from Haskell to Core''<br />
:Dimitry Golubovsky, Neil Mitchell, Matthew Naylor<br />
<br />
[[Media:TMR-Issue8.pdf|The Monad.Reader Issue 8]] was released on September 10, 2007.<br />
;''Generating Multiset Partitions''<br />
:Brent Yorgey<br />
;''Type-Level Instant Insanity''<br />
:Conrad Parker<br />
<br />
[[Media:TMR-Issue9.pdf|The Monad.Reader Issue 9]], the [http://hackage.haskell.org/trac/summer-of-code/wiki Summer of Code] special, was released on November 19, 2007.<br />
;''Cabal Configurations''<br />
:Thomas Schilling<br />
;''Darcs Patch Theory''<br />
:Jason Dagit<br />
;''<nowiki>TaiChi: how to check your types with serenity</nowiki>''<br />
:Mathieu Boespflug<br />
<br />
[[Media:TMR-Issue10.pdf|The Monad.Reader Issue 10]] was released on April 8, 2008.<br />
;''Step inside the <nowiki>GHCi</nowiki> debugger''<br />
:Bernie Pope<br />
;''Evaluating Haskell in Haskell''<br />
:Matthew Naylor<br />
<br />
[[Media:TMR-Issue11.pdf|The Monad.Reader Issue 11]] was released on August 25, 2008.<br />
;''How to Refold a Map''<br />
:David F. Place<br />
;''First-Order Logic à la Carte''<br />
:Kenneth Knowles<br />
;''<nowiki>MonadPlus: What a Super Monad!</nowiki>''<br />
:Douglas M. Auclair</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=24141The Monad.Reader2008-11-19T14:52:05Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles, including: exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
<br />
<br />
<br />
Discussion of this Issue's articles may be found on a [[The_Monad.Reader/Discuss_Issue11|separate page.]]<br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue11/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue11/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
All the previous editions have moved to a [[The_Monad.Reader/Previous_issues|separate page]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. I hope to release another Summer of Code Special in the fall of 2008. Expect the deadline for Issue 13 to be early 2009.<br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue4/Why_Attribute_Grammars_Matter&diff=23707The Monad.Reader/Issue4/Why Attribute Grammars Matter2008-10-27T09:45:33Z<p>WouterSwierstra: </p>
<hr />
<div>=Why Attribute Grammars Matter=<br />
:''by Wouter Swierstra for The Monad.Reader Issue Four''; 01-07-05<br />
<br />
==Introduction==<br />
Almost twenty years have passed since John Hughes' influential paper [http://www.math.chalmers.se/~rjmh/Papers/whyfp.html Why Functional Programming Matters]. Around the same time, the first work on<br />
attribute grammars and their relation to functional programming<br />
appeared. Despite the growing popularity of functional programming,<br />
attribute grammars remain remarkably less well known.<br />
<br />
The purpose of this article is twofold. On the one hand it illustrates how<br />
functional programming sometimes scales poorly and how<br />
attribute grammars can remedy these problems. On the other hand it aims to<br />
provide a gentle introduction to attribute grammars for seasoned functional<br />
programmers.<br />
<br />
==The problem==<br />
John Hughes argues that with the increasing complexity of modern<br />
software systems, modularity has become of paramount importance to software<br />
development. Functional languages provide new kinds of ''glue'' that create<br />
new opportunities for more modular code. In particular, Hughes stresses<br />
the importance of higher-order functions and lazy evaluation. There are<br />
plenty of examples where this works nicely - yet situations arise where<br />
the glue that functional programming provides somehow isn't quite enough.<br />
<br />
Perhaps a small example is in order. Suppose we want to write a function<br />
<tt>diff :: [Float] -> [Float]</tt> that given a list `xs`, calculates a new list where every element `x` is replaced with the difference between `x` and the<br />
average of `xs`. Similar problems pop up in any library for performing<br />
statistical calculations.<br />
<br />
===Higher-order functions===<br />
Let's tackle the problem with some of Haskell's most powerful glue - higher-order functions. Any beginning Haskell programmer should be able to concoct the solution presented in Listing One. The average is computed using functions from the Prelude. The obvious function using this average is then mapped over the original list. So far, so good.<br />
<br />
<haskell><br />
--- Listing One ---<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs = map (\x -> x - (avg xs)) xs<br />
<br />
avg :: [Float] -> Float<br />
avg xs = sum xs / genericLength xs<br />
</haskell><br />
<br />
There are, however, a few things swept under the rug in this example. First<br />
of all, this simple problem requires three traversals of the original<br />
list. Computing additional values from the original list will require even<br />
more traversals.<br />
<br />
Secondly, the solution is so concise because it depends on Prelude<br />
functions. If the values were stored in a slightly different data structure,<br />
the solution would require a lot of tedious work. We could, of course,<br />
define our own higher-order functions, such as `map` and `fold`, or even<br />
resort to generic programming. There are, however,<br />
more ways to skin this particular cat.<br />
<br />
This problem illustrates the sheer elegance of functional programming. We<br />
do pay a price for the succinctness of the solution. Multiple traversals<br />
and boilerplate code can both be quite a headache. If we want to perform<br />
complex computations over custom data structures, we may want to consider an<br />
alternative approach.<br />
<br />
Fortunately, as experienced functional programmers, we have another card up<br />
our sleeve.<br />
<br />
===Lazy evaluation===<br />
The second kind of glue that functional programming provides is ''lazy<br />
evaluation''. In essence, lazy evaluation only evaluates expressions when<br />
they become absolutely necessary.<br />
<br />
In particular, lazy evaluation enables the definition of ''circular<br />
programs'' that bear a dangerous resemblance to undefined values. Circular<br />
programs tuple separate computations, relying on lazy evaluation to feed<br />
the results of one computation to another.<br />
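A tiny stand-alone illustration of such a circular definition is the classic Fibonacci stream, where the list being defined is fed back into its own right-hand side:<br />
<br />
```haskell
-- 'fibs' refers to itself: lazy evaluation makes this well-defined,
-- because each element only needs earlier elements of the same list.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```
<br />
Evaluating `take 6 fibs` yields `[0,1,1,2,3,5]`.<br />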
<br />
In our example, we could simply compute the length and sum of the list at<br />
the same time:<br />
<br />
<haskell><br />
average :: [Float] -> Float<br />
average xs = let<br />
nil = (0.0, 0.0)<br />
cons x (s,l) = (x + s, 1.0 + l)<br />
(sum,length) = foldr cons nil xs<br />
in sum / length<br />
</haskell><br />
<br />
We can eliminate traversals by tupling computations! Can we compute the<br />
resulting list at the same time as computing the sum and length? Let's try:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, (x - ....) : rs)<br />
(sum,length,res) = foldr cons nil xs<br />
in res<br />
</haskell><br />
<br />
We run into trouble when we try to use the average to construct the<br />
resulting list. The problem is that we haven't computed the average yet, but<br />
somehow want to use it during the traversal. To solve this, we don't actually<br />
compute the resulting list, but rather compute a function taking the<br />
average to the resulting list:<br />
<br />
<haskell><br />
diff :: [Float] -> [Float]<br />
diff xs = let<br />
nil = (0.0, 0.0, \avg -> [])<br />
cons x (s,l,rs) = (x+s, 1.0+l, \avg -> (x - avg) : rs avg)<br />
(sum,length,res) = foldr cons nil xs<br />
in res (sum / length)<br />
</haskell><br />
<br />
We can generalize this idea a bit further. Suppose that we want to compute<br />
other values that use the average. We could just add an `avg` argument to<br />
every element of the tuple that needs the average. It is a bit nicer,<br />
however, to lift the `avg` argument outside the tuple. Our final listing<br />
now becomes:<br />
<br />
<haskell><br />
--- Listing Two ---<br />
<br />
diff :: [Float] -> [Float]<br />
diff xs =<br />
let<br />
nil avg = (0.0, 0.0, [])<br />
cons x fs avg = let (s,l,ds) = fs avg<br />
in (s+x,l+1.0,x-avg : ds)<br />
(sum,length,ds) = foldr cons nil xs (sum / length)<br />
in ds<br />
</haskell><br />
<br />
Now every element of the tuple can refer to the average, rather than just<br />
the final list.<br />
<br />
This ''credit card transformation'' eliminates multiple traversals by<br />
tupling computations. We use the average without worrying if we have<br />
actually managed to compute it. When we actually write the fold, however,<br />
we have to put our average where our mouth is. Fortunately, the `sum` and<br />
`length` don't depend on the average, so we are free to use these values to<br />
tie the recursive knot.<br />
<br />
The code in Listing Two only needs a single traversal and one<br />
higher-order function. It apparently solves the problems with the code in<br />
Listing One.<br />
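As a sanity check, the two listings can be compared directly. The following self-contained sketch (with the hypothetical names `diff1` and `diff2`) confirms that they compute the same result:<br />
<br />
```haskell
import Data.List (genericLength)

-- Listing One: three traversals of the list
diff1 :: [Float] -> [Float]
diff1 xs = map (\x -> x - avg xs) xs
  where avg ys = sum ys / genericLength ys

-- Listing Two: a single traversal, tying the recursive knot lazily
diff2 :: [Float] -> [Float]
diff2 xs =
  let nil _avg = (0.0, 0.0, [])
      cons x fs avg = let (s, l, ds) = fs avg
                      in (s + x, l + 1.0, x - avg : ds)
      (s, l, ds) = foldr cons nil xs (s / l)
  in ds
```
<br />
Both `diff1 [1,2,3,4]` and `diff2 [1,2,3,4]` evaluate to `[-1.5,-0.5,0.5,1.5]`.<br />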
<br />
Hold on a minute. Whatever happened to the elegance of our previous<br />
solution? Our second solution appears to have sacrificed clarity for the<br />
sake of efficiency. Who in their right minds would want to write the code<br />
in Listing Two? I wouldn't. Maybe, just maybe, we can do a bit better.<br />
<br />
==Attribute Grammars==<br />
Before even explaining what an attribute grammar is, think back to when you<br />
first learned about ''folds''. Initially, a fold seems like a silly<br />
abstraction. Why should I bother writing simple functions as folds? After<br />
all, I already know how to write the straightforward solution. It's only<br />
after a great deal of experience with functional programming that you learn<br />
to recognize folds as actually being a worthwhile abstraction. Learning<br />
about attribute grammars is similar in more ways than one.<br />
<br />
So what are attribute grammars? I'll have a bit more to say about that<br />
later. For now, let's see what the attribute grammar solution to our<br />
running example looks like.<br />
<br />
===The attribute grammar solution===<br />
I'll introduce attribute grammars using the syntax of the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem Utrecht University Attribute Grammar] system or UUAG for short. The UUAG system takes a file<br />
containing an attribute grammar definition and generates a Haskell module<br />
containing ''semantic functions'', determined by the attribute grammar. The<br />
attribute grammar determines a computation over some data structure; the<br />
semantic functions correspond to the actual Haskell functions that perform<br />
the computation.<br />
<br />
Although the UUAG system's syntax closely resembles Haskell, it is<br />
important to realize that the UUAG system is a Haskell pre-processor and<br />
not a complete Haskell compiler.<br />
<br />
So what does an attribute grammar file look like? Well, first of all we<br />
have to declare the data structure we're working with. In our example, we<br />
simply have a list of Floats.<br />
<br />
<haskell><br />
--- Listing Three ---<br />
<br />
DATA Root<br />
| Root list : List<br />
DATA List<br />
| Nil<br />
| Cons head : Float tail : List<br />
</haskell><br />
<br />
Datatypes are declared with the keyword `DATA`, followed by a list of<br />
constructors. Every node explicitly gives the name and type of all its<br />
children. In our example we have an empty list `Nil` and a list constructor<br />
`Cons` with two children, `head` and `tail`. For reasons that will become<br />
apparent later on, we add an additional datatype corresponding to the root<br />
of our list.<br />
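In plain Haskell terms, these `DATA` declarations correspond roughly to the following datatypes (a sketch; the code the UUAG system actually generates may differ in detail, and the `deriving` clauses are added here only for convenience):<br />
<br />
```haskell
-- Haskell counterparts of the Root and List declarations above
data Root = Root List
  deriving (Show)

data List = Nil | Cons Float List
  deriving (Show)
```
<br />
A list such as `[1.0, 2.0]` would then be represented as `Root (Cons 1.0 (Cons 2.0 Nil))`.<br />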
<br />
So now that we've declared our datatype, let's add some ''attributes''. If we<br />
want to compute the average element, we'll need the length of the<br />
list. Listing Four introduces our first attribute, corresponding to<br />
a list's length.<br />
<br />
<haskell><br />
--- Listing Four ---<br />
<br />
ATTR List [|| length : Float]<br />
SEM List<br />
| Nil lhs.length = 0.0<br />
| Cons lhs.length = 1.0 + @tail.length<br />
</haskell><br />
<br />
Let's go over the code line by line.<br />
<br />
An attribute has to be declared before it can actually be defined. An<br />
attribute is declared using the `ATTR` statement. This example declares a<br />
single ''synthesized'' attribute called `length` of type `Float`. A<br />
synthesized attribute is typically a value you are trying to compute bottom<br />
up. Synthesized attributes are declared to the right of the second<br />
vertical bar. We'll see other kinds of attributes shortly.<br />
<br />
Now that we've declared our first attribute, we can actually define it. A<br />
`SEM` statement begins by declaring for which data type attributes are<br />
being defined. In our example we want to define an attribute on a `List`,<br />
hence we write `SEM List`. We can subsequently give attribute definitions<br />
for the constructors of our `List` data type.<br />
<br />
Every attribute definition consists of several parts. We begin by<br />
mentioning the constructor for which we define an attribute. In our example<br />
we give two definitions, one for `Nil` and one for `Cons`.<br />
<br />
The second part of the attribute definition describes which attribute is<br />
being defined. In our example we define the attribute `length` for the<br />
''left-hand side'', or `lhs`. A lot of the terminology associated with<br />
attribute grammars comes from the world of context-free grammars. As this<br />
tutorial focuses on functional programmers, rather than formal language<br />
gurus, feel free to read `lhs` as "parent node". It seems a bit odd to<br />
write `lhs.length` explicitly, but we'll see later on why merely writing<br />
`length` doesn't suffice.<br />
<br />
So far, all we've said is that the two definitions define the `length`<br />
of `Nil` and `Cons`. We still have to fill in the necessary definition. The<br />
actual definition of the attributes takes place to the right of the equals<br />
sign. Programmers are free to write any valid Haskell expression. In fact,<br />
the UUAG system does not analyse the attribute definitions at all, but merely<br />
copies them straight into the resulting Haskell module. In our example, we<br />
want the length of the empty list to be `0.0`. The case for `Cons` is a bit<br />
trickier.<br />
<br />
In the `Cons` case we want to increment the length computed so far. To<br />
do so we need to be able to refer to other attributes. In particular we<br />
want to refer to the `length` attribute of the tail. The expression<br />
`@tail.length` does just that. In general, you're free to refer to any<br />
synthesized attribute ''attr'' of a child node ''c'' by writing `@c.attr`.<br />
<br />
The `length` attribute can be depicted as follows:<br />
<br />
attachment:length.png<br />
<br />
'''Exercise:''' Declare and define a synthesized attribute `sum` that<br />
computes the sum of a `List`. You can refer to a value `val` stored at a<br />
node as `@val`. For instance, write `@head` to refer to the float stored<br />
at a `Cons` node. Draw the corresponding picture if you're stuck.<br />
<br />
Now that we've defined `length` and `sum`, let's compute the average. We'll know<br />
the sum and the length of the entire list at the Root node. Using those<br />
attributes we can compute the average and ''broadcast'' the average through<br />
the rest of the list. Let's start with the picture this time:<br />
<br />
attachment:avg.png<br />
<br />
The previous synthesized attributes, `length` and `sum`, defined bottom-up<br />
computations. We're now in the situation, however, where we want to pass<br />
information through the tree from a parent node to its child nodes using an<br />
''inherited'' attribute. Listing Five defines an inherited attribute `avg`<br />
that corresponds to the picture we just drew.<br />
<br />
<haskell><br />
--- Listing Five ---<br />
ATTR List [ avg : Float|| ]<br />
SEM Root<br />
| Root list.avg = @list.sum / @list.length<br />
<br />
SEM List<br />
| Cons tail.avg = @lhs.avg<br />
<br />
</haskell><br />
<br />
Inherited attributes are declared to the left of the two vertical<br />
bars. Once we've declared an inherited attribute `avg` on lists, we're<br />
obliged to define how every constructor passes an `avg` to its children of<br />
type `List`.<br />
<br />
In our example, there are only two constructors with children<br />
of type `List`, namely `Root` and `Cons`. At the `Root` we compute the<br />
average, using the synthesized attributes `sum` and `length`, and pass the<br />
result to the `list` child. At the `Cons` node, we merely copy down the<br />
`avg` we received from our parent. Analogous to synthesized attributes, we<br />
can refer to an inherited attribute `attr` by writing `@lhs.attr`.<br />
<br />
Admittedly, this inherited attribute is not terribly interesting. There are<br />
plenty of other examples, however, where an inherited attribute represents<br />
important contextual information. Think of passing around the set of<br />
assumptions when writing a type checker, for instance.<br />
<br />
'''Exercise:''' To complete the attribute grammar, define an attribute<br />
`res` that computes the resulting list. Should it be inherited or<br />
synthesized? You may want to draw a picture.<br />
<br />
===Running the UUAG===<br />
Now suppose you've completed the exercises and copied the examples into a<br />
single file called `Diff.ag`. How do we actually use the attribute grammar?<br />
This is where the UUAG compiler steps in. Running the UUAG compiler on the<br />
source attribute grammar file generates a new `Diff.hs` file, which we can<br />
then compile like any other Haskell file.<br />
<br />
<pre><br />
> uuagc -a Diff.ag<br />
> ghci Diff.hs<br />
</pre><br />
<br />
The `Diff.hs` file contains several ingredients.<br />
<br />
Firstly, new Haskell datatypes are generated corresponding to `DATA`<br />
declarations in the attribute grammar. For every generated datatype a<br />
corresponding `fold` is generated. The attribute definitions determine the<br />
arguments passed to the folds. Browsing through the generated code can<br />
actually be quite instructive.<br />
<br />
Inherited attributes are passed to recursive calls of the fold. Synthesized<br />
attributes are tupled and returned as the result of the computation. In<br />
essence, we've reproduced our original solution in Listing Two - but now<br />
without the hassle associated with spelling out [[catamorphism]]s with a higher<br />
order domain and a compound codomain.<br />
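As a rough sketch of what that generated code looks like (hypothetical names, simplified from what uuagc actually emits): each semantic function maps the inherited attributes to a tuple of synthesized attributes.<br />
<br />
```haskell
-- Semantic functions for List: the inherited avg comes in as an argument,
-- the synthesized (sum, length, res) come out as a tuple.
type SemList = Float -> (Float, Float, [Float])

semNil :: SemList
semNil _avg = (0.0, 0.0, [])

semCons :: Float -> SemList -> SemList
semCons x tl avg =
  let (s, l, res) = tl avg
  in (x + s, 1.0 + l, (x - avg) : res)

-- At the root we tie the knot: the average fed down is computed
-- from the sum and length coming up.
semRoot :: SemList -> [Float]
semRoot list = let (s, l, res) = list (s / l) in res
```
<br />
For instance, `semRoot (semCons 1.0 (semCons 2.0 (semCons 3.0 semNil)))` yields `[-1.0,0.0,1.0]`.<br />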
<br />
The attribute grammar solution is just as efficient as our earlier solution<br />
relying on lazy evaluation, yet the code is hardly different from what we<br />
would write in a straightforward Haskell solution. It really is the best of<br />
both worlds. The two types of glue that John Hughes pinpoints in his<br />
original article just aren't enough. I would like to think that<br />
attribute grammars are sometimes capable of providing just the right<br />
bit of missing glue.<br />
<br />
===What are attribute grammars?===<br />
So just what are attribute grammars? Well, that depends on who you ask,<br />
really. I've tried to sum up some different views below.<br />
<br />
Attribute grammars add semantics to a context free grammar. Although it is<br />
easy enough to describe a language's syntax using a context free grammar,<br />
accurately describing a language's semantics is notoriously<br />
difficult. Attribute grammars specify a language's semantics by<br />
'decorating' a context free grammar with those attributes you are interested<br />
in.<br />
<br />
Attribute grammars describe tree traversals. All imperative implementations<br />
of attribute grammar systems perform tree traversals to compute some<br />
value. Basically an attribute grammar declares ''which'' values to compute<br />
and an attribute grammar system executes these computations. Once you've<br />
made this observation, the close relation to functional programming should<br />
not come as a surprise.<br />
<br />
Attribute grammars are a formalism for writing catamorphisms in a<br />
compositional fashion. Basically, the only thing the UUAG compiler does is<br />
generate large folds that I couldn't be bothered writing myself. It takes<br />
away all the elbow grease involved with maintaining and extending such<br />
code. In a sense the compiler does absolutely nothing new; it just makes<br />
life a lot easier.<br />
<br />
Attribute grammars provide a framework for aspect oriented programming in<br />
functional languages. Lately there has been a lot of buzz about the<br />
importance of ''aspects'' and ''aspect oriented programming''. Attribute<br />
grammars provide a clear and well-established framework for splitting code<br />
into separate aspects. By spreading attribute definitions over several<br />
different files and grouping them according to aspect, attribute grammars<br />
provide a natural setting for aspect oriented programming.<br />
<br />
How do attribute grammars relate to other Haskell abstractions? I'll try to<br />
put my finger on some of the more obvious connections, but I'm pretty sure<br />
there's a great deal more that I don't cover here.<br />
<br />
==What else is out there?==<br />
Everyone loves monads. They're what makes IO possible in Haskell. There are<br />
extensive standard libraries and syntactic sugar specifically designed to<br />
make life with monads easier. There are an enormous number of Haskell<br />
libraries based on the monadic interface. They represent one of the most<br />
substantial developments of functional programming in the last decade.<br />
<br />
Yet somehow, the single most common question asked by fledgling Haskell<br />
programmers is probably ''What are monads?''. Beginners have a hard time<br />
grasping the concept of monads and yet connoisseurs recognize a monad in<br />
just about every code snippet. I think the more important question is:<br />
''What are monads good for?''<br />
<br />
Monads provide a simple yet powerful abstract notion of computation. In<br />
essence, a monad describes how to sequence computations. This is crucial in<br />
order to perform IO in a functional language; by constraining all IO<br />
actions to a single interface of sequenced computations, the programmer is<br />
prevented from creating utter chaos. The real power of monads is in the<br />
interface they provide.<br />
<br />
John Hughes identified modularity as the single biggest blessing of<br />
functional programming. The obvious question is: how modular is the monadic<br />
interface? This really depends on your definition of modularity. Let me<br />
be more specific. How can you combine two arbitrary monads? You can't. This<br />
is my greatest concern with monads. Once you choose your specific notion of<br />
computation, you have to stick to it through thick and thin.<br />
<br />
What about monad transformers? Monad transformers allow you to add a<br />
specific monad's functionality on top of any existing monad. What seems<br />
like a solution, more often than not, turns out to introduce more problems<br />
than you bargained for. Adding new functionality to a monad involves<br />
lifting all the computations from the previous monad to the new<br />
one. Although I could learn to live with this, it gets even worse. As every<br />
monad transformer really changes the underlying monad, the ''order'' in<br />
which monad transformers are applied really makes a difference. If I want<br />
to add error reporting and state to some existing monad, should I be forced<br />
to consider the order in which I add them?<br />
<br />
Monads are extremely worthwhile for the interface they provide. Monadic<br />
libraries are great, but changing and extending monadic code can be a<br />
pain. Can we do better? Well I probably wouldn't have started this monadic<br />
intermezzo if I didn't have some sort of answer.<br />
<br />
Let's start off with `Reader` monads, for instance. Essentially, `Reader`<br />
monads add an argument to some computation. Wait a minute, this reminds me<br />
of inherited attributes. What about `Writer` monads? They correspond to<br />
synthesized attributes of course. Finally, `State` monads correspond to<br />
''chained'' attributes, or attributes that are both synthesized and<br />
inherited. The real edge attribute grammars hold over monad transformers is that you<br />
can define new attributes ''without'' worrying about the order in which you<br />
define them or adapting existing code.<br />
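To make the analogy concrete, here is our running example with the average treated as a Reader environment. This is only a sketch: it defines a minimal Reader monad inline so that it stands alone (the mtl library provides an equivalent `Control.Monad.Reader`).<br />
<br />
```haskell
-- A minimal Reader monad: a computation that may consult an environment r.
newtype Reader r a = Reader { runReader :: r -> a }

instance Functor (Reader r) where
  fmap f (Reader g) = Reader (f . g)

instance Applicative (Reader r) where
  pure x = Reader (const x)
  Reader f <*> Reader g = Reader (\r -> f r (g r))

instance Monad (Reader r) where
  Reader g >>= f = Reader (\r -> runReader (f (g r)) r)

ask :: Reader r r
ask = Reader id

-- The average is the environment: an inherited attribute in disguise.
diffR :: [Float] -> [Float]
diffR xs = runReader (mapM (\x -> fmap (x -) ask) xs) avg
  where avg = sum xs / fromIntegral (length xs)
```
<br />
Here `diffR [1,2,3]` evaluates to `[-1.0,0.0,1.0]`.<br />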
<br />
Do other abstractions capture other notions related to attribute grammars?<br />
Of course they do! Just look at the function-space Arrow instance. The notion of<br />
combining two distinct computations using the `(&&&)` operator relates to<br />
the concept of ''joining'' two attribute grammars by collecting their<br />
attribute definitions. When you look at the `loop` combinator, I can<br />
only be grateful that an attribute grammar system deals with attribute<br />
dependencies automatically.<br />
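For example, joining the `sum` and `length` computations of our earlier example with `(&&&)` on the function-space Arrow instance looks like this (a sketch; note that, unlike the attribute grammar version, it still traverses the list twice):<br />
<br />
```haskell
import Control.Arrow ((&&&))
import Data.List (genericLength)

-- (sum &&& genericLength) runs both computations on the same input and
-- tuples the results, much like collecting two synthesized attributes.
average :: [Float] -> Float
average = uncurry (/) . (sum &&& genericLength)
```
<br />
So `average [1,2,3,4]` evaluates to `2.5`.<br />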
<br />
There really is a lot of related work. Implicit parameters? Inherited attributes! Linear implicit parameters? Chained attributes! Concepts that are so natural in the setting of attribute grammars, yet seem contrived when added to Haskell. This strengthens my belief that functional programmers can really benefit from even the most fleeting experience with attribute grammars; although I'd like to think that if you've read this far, you're hungry for more.<br />
<br />
==Further reading==<br />
This more or less covers the tutorial section of this article. The best way<br />
to learn more about attribute grammars is by actually using them. To conclude the tutorial, I've<br />
included a small example for you to play with. I've written a parser for a very<br />
simple wiki formatting language not entirely unlike the one used to produce this<br />
document. So far the HTML generated after parsing a document is fairly poor. It's up to<br />
you to improve it!<br />
<br />
You can download the [attachment:WAGM.zip initial version here]. Don't forget to install the [http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem#Download UUAG]. It might<br />
be worthwhile to have a look at the [http://www.cs.uu.nl/~arthurb/data/AG/AGman.html UUAG manual] as there's a lot of technical detail that I haven't mentioned.<br />
<br />
If you're particularly daring, you may want to take a look at the [http://www.cs.uu.nl/wiki/Ehc/WebHome Essential Haskell Compiler] being developed at Utrecht. It's almost completely written using the UUAG and is designed to be suitable for education and experimentation. The compiler was presented at the Summer School for Advanced Functional Programming in Tartu, Estonia last summer. As a result, there's a lot written about it already.<br />
<br />
Dive on in!<br />
<br />
[[Category:Article]]</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=Talk:The_Monad.Reader&diff=22600Talk:The Monad.Reader2008-08-25T08:34:47Z<p>WouterSwierstra: </p>
<hr />
<div>I'd welcome any feedback and discussion about The Monad.Reader. Don't like the class file? Love the articles? Shout it out!<br />
--[[User:WouterSwierstra|WouterSwierstra]] 10:38, 31 January 2007 (UTC)<br />
<br />
== The logo == <br />
<br />
Either those damn commies or the capitalist swine (mmorrow can't get his story straight on this) seem to have rasterized the logo -- what's the point of putting TMR in a PDF if the logo is going to be made of not one but TWO raster images?<br />
<br />
--[[User:SamB|SamB]] 02:18, 7 August 2008 (UTC)<br />
<br />
I'm not sure what you mean here... I think I've consistently used the same logo from Issue 6 onwards, but maybe I'm mistaken. --[[User:WouterSwierstra|WouterSwierstra]] 08:34, 25 August 2008 (UTC)<br />
<br />
== The layout ==<br />
<br />
I think the <tt>twosided</tt> option should be dropped from the LaTeX class file; that would make it a lot more pleasant to read on a screen.<br />
<br />
--[[User:Twanvl|Twanvl]] 13:50, 31 January 2007 (UTC)<br />
<br />
Although those of us with wide screens quite like putting the pdf up in 2-page side-by-side mode, in which the twosided option looks better!<br />
<br />
--[[User:Pharm|Pharm]] 11:26, 1 February 2007 (UTC) (I have an ordinary 4:3 screen at home though. TBH the twosided thing doesn't bother me either way...)<br />
<br />
I think the twosided option is really nice if you print it. For me, this really outweighs any minor annoyances associated with reading it from a screen. --[[User:WouterSwierstra|WouterSwierstra]] 08:34, 25 August 2008 (UTC)<br />
<br />
== The format ==<br />
<br />
--[[User:Yang|Yang]] 17:04, 5 March 2007 (EST)<br />
Great publication, keep them coming! Please consider setting up an RSS/Atom/etc. feed for this.<br />
<br />
I announce all the issues on the Haskell and Haskell-cafe mailing lists - can you think of any other places that might be interested? The announcements always make it to the Haskell Weekly News that has an RSS feed... --[[User:WouterSwierstra|WouterSwierstra]] 08:34, 25 August 2008 (UTC)<br />
<br />
--[[User:raould|raould]] June 6 2007<br />
Any chance of an HTML version (no matter how lame the results of an automated PDF2HTML might be)?<br />
<br />
I'm a bit hesitant to release it in more than one format. It means quite a bit more work for me. Unless there's an overwhelming vote for HTML publishing I'd prefer to stick with pdf for now. --[[User:WouterSwierstra|WouterSwierstra]] 08:34, 25 August 2008 (UTC)<br />
<br />
== The content ==<br />
<br />
--[[User:Mnislaih|Mnislaih]] 13:50, 3 February 2007 (UTC)<br />
Wow! This issue of TMR is excellent. I am loving the tutorial style of all three articles. Right now I'm in section 2 of Russell's excellent assembler and found that the newtype definition for the AssemblyCodeMonad needs to be:<br />
<haskell><br />
newtype AssemblyCodeMonad a = <br />
AssemblyCodeMonad<br />
(RWS [(Label,Location)]<br />
[Either (Instruction Register) (Label,Location)]<br />
(Location, Integer)<br />
a)<br />
deriving (Monad, MonadReader [(Label,Location)], <br />
MonadWriter [Either (Instruction Register) (Label,Location)],<br />
MonadState (Location, Integer))<br />
</haskell><br />
<br />
'''TMR8''': Please note that there is a separate talk page for the article [[User_talk:ConradParker/InstantInsanity | Type-Level Instant Insanity]]<br />
--[[User:ConradParker|ConradParker]] 12:23, 10 September 2007 (UTC)</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=22599The Monad.Reader2008-08-25T08:20:24Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
<br />
[[Media:TMR-Issue11.pdf|The Monad.Reader Issue 11]] is now available. Issue 11 consists of the following three articles:<br />
<br />
;''David F. Place''<br />
:How to Refold a Map<br />
;''Kenneth Knowles''<br />
:First-Order Logic à la Carte<br />
;''Douglas M. Auclair'' <br />
:<nowiki>MonadPlus: What a Super Monad!</nowiki><br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue11/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue11/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
All the previous editions have moved to a [[The_Monad.Reader/Previous_issues|separate page]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. I hope to release another Summer of Code Special in the fall of 2008. Expect the deadline for Issue 13 to be early 2009.<br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=File:TMR-Issue11.pdf&diff=22598File:TMR-Issue11.pdf2008-08-25T08:18:25Z<p>WouterSwierstra: </p>
<hr />
<div></div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Previous_issues&diff=22597The Monad.Reader/Previous issues2008-08-25T07:53:00Z<p>WouterSwierstra: </p>
<hr />
<div>[[Media:TMR-Issue1.pdf|The Monad.Reader Issue 1]] was released on March 1, 2005.<br />
;''<nowiki>Pseudocode: Natural Style</nowiki>''<br />
:Andrew J. Bromage<br />
;''Pugs Apocryphon 1 - Overview of the Pugs project''<br />
:Autrijus Tang<br />
;''An Introduction to Gtk2Hs, a Haskell GUI Library''<br />
:Kenneth Hoste<br />
;''Implementing Web-Services with the HAIFA Framework''<br />
:Simon D. Foster<br />
;''<nowiki>Code Probe - Issue one: Haskell XML-RPC, v.2004-06-17 [1]</nowiki>''<br />
:Sven Moritz Hallberg<br />
<br />
[[The Monad.Reader/Issue2| The Monad.Reader Issue 2]] was released May 2005.<br />
;''Impure Thoughts 1 - Thtatic Compilathionth (without a lisp)''<br />
:Philippa Cowderoy<br />
;''Eternal Compatibility In Theory''<br />
:Sven Moritz Hallberg<br />
;''Fun with Linear Implicit Parameters''<br />
:Thomas Jäger<br />
;''Haskore''<br />
:Bastiaan Zapf<br />
;''Bzlib2 Binding - An Introduction to the FFI''<br />
:Peter Eriksen<br />
<br />
[[The Monad.Reader/Issue3| The Monad.Reader Issue 3]] was released June 2005.<br />
;''Notes on Learning Haskell''<br />
:Graham Klyne <br />
;''Functional Programming vs Object Oriented Programming''<br />
:Alistair Bayley <br />
;''Concurrent and Distributed Programming with Join Hs''<br />
:Einar Karttunen <br />
;''"Haskell School Of Expression"<nowiki>:</nowiki> Review of The Haskell School of Expression''<br />
:Isaac Jones <br />
;''Review of "Purely Functional Data Structures"''<br />
:Andrew Cooke <br />
<br />
[[The Monad.Reader/Issue4 | The Monad.Reader Issue 4]] was released 5 July 2005.<br />
;''Impure Thoughts 2, B&D not S&M'' (off-wiki)<br />
:Philippa Cowderoy <br />
;''Why Attribute Grammars Matter''<br />
:Wouter Swierstra <br />
;''Solving Sudoku''<br />
:Dominic Fox <br />
;''On Treaps And Randomization''<br />
:Jesper Louis Andersen <br />
<br />
[[The Monad.Reader/Issue5 | The Monad.Reader Issue 5]] was released October 2005.<br />
;''<nowiki>Haskell: A Very Different Language</nowiki>''<br />
:John Goerzen<br />
;''Generating Polyominoes''<br />
:Dominic Fox<br />
;''<nowiki>HRay:A Haskell ray tracer</nowiki>''<br />
:Kenneth Hoste<br />
;''Number-parameterized types''<br />
:Oleg Kiselyov<br />
;''A Practical Approach to Graph Manipulation''<br />
:Jean Philippe Bernardy<br />
;''Software Testing With Haskell''<br />
:Shae Erisson<br />
<br />
[[Media:TMR-Issue6.pdf|The Monad.Reader Issue 6]] was released January 31, 2007.<br />
;''Getting a Fix from the Right Fold''<br />
:Bernie Pope<br />
;''Adventures in Classical-Land''<br />
:Dan Piponi<br />
;''Assembly: Circular Programming with Recursive do''<br />
:Russell O'Connor<br />
<br />
[[Media:TMR-Issue7.pdf|The Monad.Reader Issue 7]] was released April 30, 2007.<br />
;''A Recipe for controlling Lego using Lava''<br />
:Matthew Naylor<br />
;''<nowiki>Caml Trading: Experiences in Functional Programming on Wall Street</nowiki>''<br />
:Yaron Minsky<br />
;''<nowiki>Book Review: “Programming in Haskell” by Graham Hutton</nowiki>''<br />
:Duncan Coutts<br />
;''Yhc.Core – from Haskell to Core''<br />
:Dimitry Golubovsky, Neil Mitchell, Matthew Naylor<br />
<br />
[[Media:TMR-Issue8.pdf|The Monad.Reader Issue 8]] was released on September 10, 2007.<br />
;''Generating Multiset Partitions''<br />
:Brent Yorgey<br />
;''Type-Level Instant Insanity''<br />
:Conrad Parker<br />
<br />
[[Media:TMR-Issue9.pdf|The Monad.Reader Issue 9]], the [http://hackage.haskell.org/trac/summer-of-code/wiki Summer of Code] special, was released on November 19, 2007.<br />
;''Cabal Configurations''<br />
:Thomas Schilling<br />
;''Darcs Patch Theory''<br />
:Jason Dagit<br />
;''<nowiki>TaiChi: how to check your types with serenity</nowiki>''<br />
:Mathieu Boespflug<br />
<br />
[[Media:TMR-Issue10.pdf|The Monad.Reader Issue 10]] was released on April 8, 2008.<br />
;''Step inside the <nowiki>GHCi</nowiki> debugger''<br />
:Bernie Pope<br />
;''Evaluating Haskell in Haskell''<br />
:Matthew Naylor</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=22596The Monad.Reader2008-08-25T07:50:35Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
<br />
[[Media:TMR-Issue11.pdf|The Monad.Reader Issue 11]] is now available. Issue 11 consists of the following three articles:<br />
<br />
;''David F. Place''<br />
:How to Refold a Map<br />
;''Kenneth Knowles''<br />
:First-Order Logic à la Carte<br />
;''Douglas M. Auclair'' <br />
:<nowiki>MonadPlus: What a Super Monad!</nowiki><br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue11/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue11/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
All the previous editions have moved to a [[The_Monad.Reader/Previous_issues|separate page]]. Some of the older, wiki-published articles need some TLC. If you read any of the articles and can spare a few minutes to help clean up some of the formatting, your help would really be appreciated.<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. <br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=22506The Monad.Reader2008-08-17T19:32:21Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
<br />
[[Media:TMR-Issue10.pdf|The Monad.Reader Issue 10]] is now available:<br />
;''Step inside the <nowiki>GHCi</nowiki> debugger''<br />
:Bernie Pope<br />
;''Evaluating Haskell in Haskell''<br />
:Matthew Naylor<br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue10/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue10/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
[[Media:TMR-Issue1.pdf|The Monad.Reader Issue 1]] was released on March 1, 2005.<br />
;''<nowiki>Pseudocode: Natural Style</nowiki>''<br />
:Andrew J. Bromage<br />
;''Pugs Apocryphon 1 - Overview of the Pugs project''<br />
:Autrijus Tang<br />
;''An Introduction to Gtk2Hs, a Haskell GUI Library''<br />
:Kenneth Hoste<br />
;''Implementing Web-Services with the HAIFA Framework''<br />
:Simon D. Foster<br />
;''<nowiki>Code Probe - Issue one: Haskell XML-RPC, v.2004-06-17 [1]</nowiki>''<br />
:Sven Moritz Hallberg<br />
<br />
[[The Monad.Reader/Issue2| The Monad.Reader Issue 2]] was released May 2005.<br />
;''Impure Thoughts 1 - Thtatic Compilathionth (without a lisp)''<br />
:by Philippa Cowderoy<br />
;''Eternal Compatibility In Theory''<br />
:by Sven Moritz Hallberg<br />
;''Fun with Linear Implicit Parameters''<br />
:by Thomas Jäger<br />
;''Haskore''<br />
:by Bastiaan Zapf<br />
;''Bzlib2 Binding - An Introduction to the FFI''<br />
:by Peter Eriksen<br />
<br />
[[The Monad.Reader/Issue3| The Monad.Reader Issue 3]] was released June 2005.<br />
;''Notes on Learning Haskell''<br />
:by Graham Klyne <br />
;''Functional Programming vs Object Oriented Programming''<br />
:by Alistair Bayley <br />
;''Concurrent and Distributed Programming with Join Hs''<br />
:by Einar Karttunen <br />
;''"Haskell School Of Expression"<nowiki>:</nowiki> Review of The Haskell School of Expression''<br />
:by Isaac Jones <br />
;''Review of "Purely Functional Data Structures"''<br />
:by Andrew Cooke <br />
<br />
[[The Monad.Reader/Issue4 | The Monad.Reader Issue 4]] was released 5 July 2005.<br />
;''Impure Thoughts 2, B&D not S&M'' (off-wiki)<br />
:by Philippa Cowderoy <br />
;''Why Attribute Grammars Matter''<br />
:by Wouter Swierstra <br />
;''Solving Sudoku''<br />
:by Dominic Fox <br />
;''On Treaps And Randomization''<br />
:by Jesper Louis Andersen <br />
<br />
[[The Monad.Reader/Issue5 | The Monad.Reader Issue 5]] was released October 2005.<br />
;''<nowiki>Haskell: A Very Different Language</nowiki>''<br />
:by John Goerzen<br />
;''Generating Polyominoes''<br />
:by Dominic Fox<br />
;''<nowiki>HRay:A Haskell ray tracer</nowiki>''<br />
:by Kenneth Hoste<br />
;''Number-parameterized types''<br />
:by Oleg Kiselyov<br />
;''A Practical Approach to Graph Manipulation''<br />
:by Jean Philippe Bernardy<br />
;''Software Testing With Haskell''<br />
:by Shae Erisson<br />
<br />
[[Media:TMR-Issue6.pdf|The Monad.Reader Issue 6]] was released January 31, 2007.<br />
;''Getting a Fix from the Right Fold''<br />
:Bernie Pope<br />
;''Adventures in Classical-Land''<br />
:Dan Piponi<br />
;''Assembly: Circular Programming with Recursive do''<br />
:Russell O'Connor<br />
<br />
[[Media:TMR-Issue7.pdf|The Monad.Reader Issue 7]] was released April 30, 2007.<br />
;''A Recipe for controlling Lego using Lava''<br />
:Matthew Naylor<br />
;''<nowiki>Caml Trading: Experiences in Functional Programming on Wall Street</nowiki>''<br />
:Yaron Minsky<br />
;''<nowiki>Book Review: “Programming in Haskell” by Graham Hutton</nowiki>''<br />
:Duncan Coutts<br />
;''Yhc.Core – from Haskell to Core''<br />
:Dimitry Golubovsky, Neil Mitchell, Matthew Naylor<br />
<br />
[[Media:TMR-Issue8.pdf|The Monad.Reader Issue 8]] was released on September 10, 2007.<br />
;''Generating Multiset Partitions''<br />
:Brent Yorgey<br />
;''Type-Level Instant Insanity''<br />
:Conrad Parker<br />
<br />
[[Media:TMR-Issue9.pdf|The Monad.Reader Issue 9]], the [http://hackage.haskell.org/trac/summer-of-code/wiki Summer of Code] special, was released on November 19, 2007.<br />
;''Cabal Configurations''<br />
:Thomas Schilling<br />
;''Darcs Patch Theory''<br />
:Jason Dagit<br />
;''<nowiki>TaiChi: how to check your types with serenity</nowiki>''<br />
:Mathieu Boespflug<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. <br />
<br />
The deadline for Issue 11 is '''August 1, 2008'''.<br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=22505The Monad.Reader2008-08-17T19:31:45Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
<br />
[[Media:TMR-Issue10.pdf|The Monad.Reader Issue 10]] is now available:<br />
;''Step inside the <nowiki>GHCi</nowiki> debugger''<br />
:Bernie Pope<br />
;''Evaluating Haskell in Haskell''<br />
:Matthew Naylor<br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue10/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue10/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
[[Media:TMR-Issue1.pdf|The Monad.Reader Issue 1]] was released on March 1, 2005.<br />
;''<nowiki>Pseudocode: Natural Style</nowiki>''<br />
:Andrew J. Bromage<br />
;''Pugs Apocryphon 1 - Overview of the Pugs project''<br />
:Autrijus Tang<br />
;''An Introduction to Gtk2Hs, a Haskell GUI Library''<br />
:Kenneth Hoste<br />
;''Implementing Web-Services with the HAIFA Framework''<br />
:Simon D. Foster<br />
;''<nowiki>Code Probe - Issue one: Haskell XML-RPC, v.2004-06-17 [1]</nowiki>''<br />
:Sven Moritz Hallberg<br />
<br />
[[The Monad.Reader/Issue2| The Monad.Reader Issue 2]] was released May 2005.<br />
;''Impure Thoughts 1 - Thtatic Compilathionth (without a lisp)''<br />
:by Philippa Cowderoy<br />
;''Eternal Compatibility In Theory''<br />
:by Sven Moritz Hallberg<br />
;''Fun with Linear Implicit Parameters''<br />
:by Thomas Jäger<br />
;''Haskore''<br />
:by Bastiaan Zapf<br />
;''Bzlib2 Binding - An Introduction to the FFI''<br />
:by Peter Eriksen<br />
<br />
[[The Monad.Reader/Issue3| The Monad.Reader Issue 3]] was released June 2005.<br />
;''Notes on Learning Haskell''<br />
:by Graham Klyne <br />
;''Functional Programming vs Object Oriented Programming''<br />
:by Alistair Bayley <br />
;''Concurrent and Distributed Programming with Join Hs''<br />
:by Einar Karttunen <br />
;''"Haskell School Of Expression"<nowiki>:</nowiki> Review of The Haskell School of Expression''<br />
:by Isaac Jones <br />
;''Review of "Purely Functional Data Structures"''<br />
:by Andrew Cooke <br />
<br />
[[The Monad.Reader/Issue4 | The Monad.Reader Issue 4]] was released 5 July 2005.<br />
;''Impure Thoughts 2, B&D not S&M'' (off-wiki)<br />
:by Philippa Cowderoy <br />
;''Why Attribute Grammars Matter''<br />
:by Wouter Swierstra <br />
;''Solving Sudoku''<br />
:by Dominic Fox <br />
;''On Treaps And Randomization''<br />
:by Jesper Louis Andersen <br />
<br />
[[The Monad.Reader/Issue5 | The Monad.Reader Issue 5]] was released October 2005.<br />
;''<nowiki>Haskell: A Very Different Language</nowiki>''<br />
:by John Goerzen<br />
;''Generating Polyominoes''<br />
:by Dominic Fox<br />
;''<nowiki>HRay:A Haskell ray tracer</nowiki>''<br />
:by Kenneth Hoste<br />
;''Number-parameterized types''<br />
:by Oleg Kiselyov<br />
;''A Practical Approach to Graph Manipulation''<br />
:by Jean Philippe Bernardy<br />
;''Software Testing With Haskell''<br />
:by Shae Erisson<br />
<br />
<br />
[[Media:TMR-Issue6.pdf|The Monad.Reader Issue 6]] was released January 31, 2007.<br />
;''Getting a Fix from the Right Fold''<br />
:Bernie Pope<br />
;''Adventures in Classical-Land''<br />
:Dan Piponi<br />
;''Assembly: Circular Programming with Recursive do''<br />
:Russell O'Connor<br />
<br />
[[Media:TMR-Issue7.pdf|The Monad.Reader Issue 7]] was released April 30, 2007.<br />
;''A Recipe for controlling Lego using Lava''<br />
:Matthew Naylor<br />
;''<nowiki>Caml Trading: Experiences in Functional Programming on Wall Street</nowiki>''<br />
:Yaron Minsky<br />
;''<nowiki>Book Review: “Programming in Haskell” by Graham Hutton</nowiki>''<br />
:Duncan Coutts<br />
;''Yhc.Core – from Haskell to Core''<br />
:Dimitry Golubovsky, Neil Mitchell, Matthew Naylor<br />
<br />
[[Media:TMR-Issue8.pdf|The Monad.Reader Issue 8]] was released on September 10, 2007.<br />
;''Generating Multiset Partitions''<br />
:Brent Yorgey<br />
;''Type-Level Instant Insanity''<br />
:Conrad Parker<br />
<br />
[[Media:TMR-Issue9.pdf|The Monad.Reader Issue 9]], the [http://hackage.haskell.org/trac/summer-of-code/wiki Summer of Code] special, was released on November 19, 2007.<br />
;''Cabal Configurations''<br />
:Thomas Schilling<br />
;''Darcs Patch Theory''<br />
:Jason Dagit<br />
;''<nowiki>TaiChi: how to check your types with serenity</nowiki>''<br />
:Mathieu Boespflug<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. <br />
<br />
The deadline for Issue 11 is '''August 1, 2008'''.<br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=21180The Monad.Reader2008-06-05T11:39:35Z<p>WouterSwierstra: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles, including exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
==== Latest Issue ====<br />
<br />
[[Media:TMR-Issue10.pdf|The Monad.Reader Issue 10]] is now available:<br />
;''Step inside the <nowiki>GHCi</nowiki> debugger''<br />
:Bernie Pope<br />
;''Evaluating Haskell in Haskell''<br />
:Matthew Naylor<br />
<br />
Feel free to browse the source files [http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue10/]. You can check out the entire repository using darcs:<br />
<br />
<tt>darcs get http://sneezy.cs.nott.ac.uk/darcs/TMR/Issue10/</tt><br />
<br />
The source code and LaTeX files have all been released under a BSD-style license.<br />
<br />
I'd welcome any discussion and feedback [[Talk:The_Monad.Reader|on the Talk page]].<br />
<br />
==== Previous editions ====<br />
<br />
[[Media:TMR-Issue1.pdf|The Monad.Reader Issue 1]] was released on March 1, 2005.<br />
;''<nowiki>Pseudocode: Natural Style</nowiki>''<br />
:Andrew J. Bromage<br />
;''Pugs Apocryphon 1 - Overview of the Pugs project''<br />
:Autrijus Tang<br />
;''An Introduction to Gtk2Hs, a Haskell GUI Library''<br />
:Kenneth Hoste<br />
;''Implementing Web-Services with the HAIFA Framework''<br />
:Simon D. Foster<br />
;''<nowiki>Code Probe - Issue one: Haskell XML-RPC, v.2004-06-17 [1]</nowiki>''<br />
:Sven Moritz Hallberg<br />
<br />
[[The Monad.Reader/Issue3| The Monad.Reader Issue 3]] was released June 2005.<br />
;''Notes on Learning Haskell''<br />
:by Graham Klyne <br />
;''Functional Programming vs Object Oriented Programming''<br />
:by Alistair Bayley <br />
;''Concurrent and Distributed Programming with Join Hs''<br />
:by Einar Karttunen <br />
;''"Haskell School Of Expression"<nowiki>:</nowiki> Review of The Haskell School of Expression''<br />
:by Isaac Jones <br />
;''Review of "Purely Functional Data Structures"''<br />
:by Andrew Cooke <br />
<br />
[[The Monad.Reader/Issue4 | The Monad.Reader Issue 4]] was released 5 July 2005.<br />
;''Impure Thoughts 2, B&D not S&M'' (off-wiki)<br />
:by Philippa Cowderoy <br />
;''Why Attribute Grammars Matter''<br />
:by Wouter Swierstra <br />
;''Solving Sudoku''<br />
:by Dominic Fox <br />
;''On Treaps And Randomization''<br />
:by Jesper Louis Andersen <br />
<br />
[[Media:TMR-Issue6.pdf|The Monad.Reader Issue 6]] was released January 31, 2007.<br />
;''Getting a Fix from the Right Fold''<br />
:Bernie Pope<br />
;''Adventures in Classical-Land''<br />
:Dan Piponi<br />
;''Assembly: Circular Programming with Recursive do''<br />
:Russell O'Connor<br />
<br />
[[Media:TMR-Issue7.pdf|The Monad.Reader Issue 7]] was released April 30, 2007.<br />
;''A Recipe for controlling Lego using Lava''<br />
:Matthew Naylor<br />
;''<nowiki>Caml Trading: Experiences in Functional Programming on Wall Street</nowiki>''<br />
:Yaron Minsky<br />
;''<nowiki>Book Review: “Programming in Haskell” by Graham Hutton</nowiki>''<br />
:Duncan Coutts<br />
;''Yhc.Core – from Haskell to Core''<br />
:Dimitry Golubovsky, Neil Mitchell, Matthew Naylor<br />
<br />
[[Media:TMR-Issue8.pdf|The Monad.Reader Issue 8]] was released on September 10, 2007.<br />
;''Generating Multiset Partitions''<br />
:Brent Yorgey<br />
;''Type-Level Instant Insanity''<br />
:Conrad Parker<br />
<br />
[[Media:TMR-Issue9.pdf|The Monad.Reader Issue 9]], the [http://hackage.haskell.org/trac/summer-of-code/wiki Summer of Code] special, was released on November 19, 2007.<br />
;''Cabal Configurations''<br />
:Thomas Schilling<br />
;''Darcs Patch Theory''<br />
:Jason Dagit<br />
;''<nowiki>TaiChi: how to check your types with serenity</nowiki>''<br />
:Mathieu Boespflug<br />
<br />
==== Contributing ====<br />
<br />
If you're interested in writing something for The Monad.Reader, please download the [[Media:TMR.zip|instructions for authors]]. You can also check out the most recent version from the darcs repository at <tt>http://sneezy.cs.nott.ac.uk/darcs/TMR/Guidelines</tt>. <br />
<br />
The deadline for Issue 11 is '''August 1, 2008'''.<br />
<br />
Feel free to contact [http://www.cs.nott.ac.uk/~wss Wouter Swierstra] with any questions.<br />
<br />
==== Merchandise ====<br />
<br />
You can buy The Monad.Reader t-shirts and merchandise from [http://www.cafepress.com/TheMonadReader Cafepress].<br />
<br />
==== The name ====<br />
The name is a pun, I'm afraid. A magazine is sometimes also referred to as a "reader". The articles are not necessarily about monads.</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue5/Software_Testing_With_Haskell&diff=20844The Monad.Reader/Issue5/Software Testing With Haskell2008-05-09T14:27:16Z<p>WouterSwierstra: </p>
<hr />
<div>'''This article needs reformatting! Please help tidy it up.'''--[[User:WouterSwierstra|WouterSwierstra]] 14:27, 9 May 2008 (UTC)<br />
<br />
= Software Testing With Haskell =<br />
''by ShaeErisson for The Monad.Reader Issue 5''<br />
''October 2, 2005''<br />
<br />
'''Abstract.'''<br />
<br />
The two most commonly used libraries for testing software with Haskell are HUnit and QuickCheck. This article will briefly describe the two different approaches and show some demonstration code.<br />
<br />
Both HUnit and QuickCheck are included with GHC 6.4.1, under the module names Test.HUnit and Test.QuickCheck.<br />
= HUnit =<br />
HUnit was written by Dean Herington and is available on sourceforge at http://hunit.sourceforge.net/ .<br />
<br />
If you've ever used the [http://en.wikipedia.org/wiki/XUnit xUnit] framework in other programming languages, HUnit will feel familiar. The [http://hunit.sourceforge.net/HUnit-1.0/Guide.html User's Guide] includes a 'getting started' section, and there are a thousand introductions to various flavors of the xUnit framework, so we'll mention HUnit only briefly.<br />
<br />
From the user's guide:<br />
<br />
"Tests are specified compositionally. [http://hunit.sourceforge.net/HUnit-1.0/Guide.html#Assertions Assertions] are combined to make a [http://hunit.sourceforge.net/HUnit-1.0/Guide.html#TestCase test case], and test cases are combined into [http://hunit.sourceforge.net/HUnit-1.0/Guide.html#Tests tests]."<br />
<br />
Here's a short demo:<br />
{{{#!syntax haskell<br />
module ProtoHunit where<br />
<br />
import Test.HUnit<br />
<br />
testList = TestList -- construct a TestList from a list of type TestCase<br />
[TestCase $ -- construct a TestCase from an assertion<br />
assertEqual "description" 2 (1 + 1) -- construct an assertion from a descriptive string, an expected result, and something to execute<br />
]<br />
<br />
t = runTestTT testList<br />
}}}<br />
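For intuition, it helps to know that HUnit's {{{Assertion}}} type is simply {{{IO ()}}}: an assertion is an IO action that raises an exception when its check fails, and does nothing otherwise. The sketch below illustrates that idea using our own names ({{{assertEqual'}}} is not HUnit's actual API) and nothing beyond the Prelude:

```haskell
import Control.Monad (unless)

-- A minimal sketch of an HUnit-style assertion. HUnit defines
-- 'type Assertion = IO ()'; assertEqual' below is our own illustrative
-- name, not the real library's function.
assertEqual' :: (Eq a, Show a) => String -> a -> a -> IO ()
assertEqual' label expected actual =
  unless (actual == expected) $
    ioError (userError (label ++ ": expected " ++ show expected
                              ++ ", got " ++ show actual))

main :: IO ()
main = do
  assertEqual' "description" 2 (1 + 1)  -- the check holds, so this is silent
  putStrLn "1 assertion passed"
```

The real library wraps such IO actions in {{{TestCase}}} and {{{TestList}}} so that {{{runTestTT}}} can count successes and failures.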
<br />
= QuickCheck =<br />
QuickCheck was written by Koen Claessen and John Hughes, and is available from Chalmers at http://www.cs.chalmers.se/~rjmh/QuickCheck/ .<br />
<br />
QuickCheck takes a dramatically different approach to software testing. The programmer specifies a property that the code should follow, and the QuickCheck library generates random values and checks to see if the property always holds. <br />
<br />
Some demonstration properties are given below.<br />
{{{#!syntax haskell<br />
module ProtoQuickCheck where<br />
import Test.QuickCheck<br />
<br />
-- this succeeds in one case of the input.<br />
prop_Fail :: Int -> Bool<br />
prop_Fail x = <br />
x == 1<br />
<br />
-- this succeeds in three cases of the input.<br />
prop_RevUnit :: [Int] -> Bool<br />
prop_RevUnit x = <br />
reverse x == x<br />
<br />
-- what's wrong with this picture?<br />
prop_RevUnitConfusion :: [Int] -> Bool<br />
prop_RevUnitConfusion x = <br />
reverse [x] == [x]<br />
<br />
-- do you see a bug?<br />
prop_RevApp :: [Int] -> [Int] -> Bool<br />
prop_RevApp xs ys = <br />
reverse (xs ++ ys) == reverse xs ++ reverse ys<br />
<br />
prop_RevRev :: [Int] -> Bool<br />
prop_RevRev xs = <br />
reverse (reverse xs) == xs<br />
<br />
(f === g) x = f x == g x<br />
<br />
prop_CompAssoc :: (Int -> Int) -> (Int -> Int) -> (Int -> Int) -> Int -> Bool<br />
prop_CompAssoc f g h = (f . (g . h)) === ((f . g) . h)<br />
<br />
prop_CompCommut :: (Int -> Int) -> (Int -> Int) -> Int -> Bool<br />
prop_CompCommut f g = (f . g) === (g . f)<br />
<br />
-- this operator ==><br />
-- means filter inputs by that condition<br />
-- below an x and y are only accepted if x is less than or equal to y<br />
prop_MaxLe :: Int -> Int -> Property<br />
prop_MaxLe x y = x <= y ==> max x y == y<br />
<br />
instance Show (a -> b) where show _ = "<<function>>"<br />
<br />
}}}<br />
To test one of these properties, load the source into ghci and run "quickCheck prop_Fail". One possible response is:<br />
{{{<br />
Falsifiable, after 0 tests:<br />
-1<br />
}}}<br />
Since the type signature of prop_Fail is {{{Int -> Bool}}}, QuickCheck generated an Int value and checked whether the property held. Since -1 is not equal to 1, the property is falsified.<br />
<br />
= Everything Else =<br />
The Haskell wiki has information on [http://www.haskell.org/hawiki/HaskellMode one button unit testing] with emacs' haskell-mode.<br />
<br />
Other libraries and applications that deal with software testing in Haskell are mentioned below, but are beyond the scope of this short introduction.<br />
* [http://www.haskell.org/hat/ Hat] The Haskell Tracer.<br />
* [http://www.cs.mu.oz.au/~bjpop/plargleflarp/ Plargleflarp] (a declarative debugger, formerly known as buddha)<br />
* [http://www.cse.ogi.edu/~hallgren/Programatica/ Programatica] is a collection of tools that includes the ability to specify inline 'certificates'. Certificates are tests: they can be static unit tests, QuickCheck properties, or automated proofs.<br />
----<br />
CategoryArticle</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue5/Practical_Graph_Handling&diff=20843The Monad.Reader/Issue5/Practical Graph Handling2008-05-09T14:26:50Z<p>WouterSwierstra: </p>
<hr />
<div>'''This article needs reformatting! Please help tidy it up.'''--[[User:WouterSwierstra|WouterSwierstra]] 14:26, 9 May 2008 (UTC)<br />
<br />
= A Practical Approach to Graph Manipulation =<br />
''by JeanPhilippeBernardy for The Monad.Reader Issue 5''<br />
[[BR]]<br />
''[[Date(2005-07-08T20:48:51Z)]]''<br />
<br />
'''Abstract.'''<br />
<br />
Tree-based data structures are easy to deal with in Haskell.<br />
However, working with graph-like structures in practice is much less obvious.<br />
In this article I present a solution that has worked for me in many cases.<br />
<br />
<br />
== Introduction ==<br />
<br />
I have always found that dealing with graphs in Haskell is a tricky subject.<br />
Even implementing a depth-first search, which is<br />
trivially achieved in an imperative language, deserves an article<br />
of its own for Haskell [[#dfs 4]].<br />
A PhD thesis has been written on the subject of graphs and functional programming [[#king-thesis 2]], and it seems that it still<br />
doesn't exhaust the<br />
design space: radically different ideas have been proposed afterwards [[#induct 3]].<br />
<br />
In this article I'll present (a simplified version of) a solution that<br />
I think deserves more coverage [[#cycle-therapy 1]]. The idea is to abstract graph<br />
manipulation by anamorphisms and catamorphisms.<br />
This approach features "separation of concerns" and "composability", hence it<br />
can be readily applied to<br />
practical problems.<br />
<br />
* Section 2 shows how anamorphisms and catamorphisms can be generalised to graphs.<br />
* Section 3 details the data structures used to represent graphs.<br />
* Section 4 discusses various problems where cata/anamorphisms can be applied.<br />
* Section 5 gives a sample implementation for the catamorphism and anamorphism.<br />
* Section 6 concludes.<br />
<br />
=== Nota ===<br />
<br />
This article has been generated from a literate haskell source. <br />
So, although the text of this wiki page will not compile, all the examples are <br />
real and run. The source can be accessed here: attachment:PracticalGraphHandling.lhs<br />
<br />
We will assume you know the hierarchical libraries. Refer to http://haskell.org/ghc/docs/latest/html/libraries/index.html in case of doubt.<br />
<br />
== Origami with Graphs ==<br />
<br />
=== Fold & Unfold (the big deal) ===<br />
<br />
Most of you probably know what a "fold" (also known as a catamorphism)<br />
is. For those who don't: intuitively, it's a higher-order operation<br />
that reduces a complex structure to a single value. It applies a<br />
function given as a parameter to each node, propagating the results<br />
up to the root. This is a highly imprecise definition; for more<br />
details please read [[#bananas-lenses 5]].<br />
<br />
For example, the fold operation on lists can be typed as follows:<br />
{{{#!syntax haskell<br />
foldr :: (a -> b -> b) -> -- ^ operation to apply<br />
b -> -- ^ initial value<br />
[a] -> -- ^ input list<br />
b -- ^ result<br />
}}}<br />
<br />
Conversely, "unfold" builds a complex structure out of a building<br />
function, applying it iteratively.<br />
<br />
{{{#!syntax haskell<br />
unfoldr :: (b -> Maybe (a, b)) -> -- ^ building function (Nothing => end of list)<br />
b -> -- ^ seed value<br />
[a] -- ^ result<br />
}}}<br />
<br />
The second argument is the initial value from which the<br />
whole resulting list will be derived by applying the first argument.<br />
In the following we'll refer to it as the "seed".<br />
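For instance, here is a small standalone illustration of the two operations (the names {{{countdown}}} and {{{total}}} are ours, not from the article):<br />

```haskell
import Data.List (unfoldr)

-- Grow a list from a seed: count down from n to 1.
countdown :: Int -> [Int]
countdown = unfoldr (\k -> if k == 0 then Nothing else Just (k, k - 1))

-- Reduce a list back to a single value.
total :: [Int] -> Int
total = foldr (+) 0
```

Here {{{countdown 5}}} yields {{{[5,4,3,2,1]}}}, and {{{total (countdown 5)}}} yields {{{15}}}.<br />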
<br />
<br />
The catamorphism/anamorphism abstractions have proven to be<br />
very useful in practice. They're ubiquitous in Haskell<br />
programming, either explicitly or implicitly (hidden in<br />
higher-level operations). In this article I'll show how<br />
those abstractions can be generalised to graph structures,<br />
and argue that they are equally useful in this case.<br />
<br />
The rest of the article assumes the reader is fairly familiar <br />
with fold and unfold. Fortunately there are many articles on the<br />
subject. For example you can refer to [[#bananas-lenses 5]] if you ever feel uncomfortable.<br />
<br />
=== Generalisation ===<br />
<br />
Let's examine how fold/unfold can be generalized for graphs.<br />
Since we are working on graphs instead of lists, we must account for<br />
<br />
1. Any number of children for a node;<br />
1. "Backwards" arcs (cycles);<br />
1. Labelled edges.<br />
<br />
The most relevant point is 2, of course.<br />
<br />
==== unfoldG ====<br />
<br />
From the above, we can deduce that the type of unfoldG will be:<br />
<br />
{{{#!syntax haskell<br />
unfoldG :: (Ord s) => (s -> (n, [(e, s)])) -> s -> (Vertex, LabGraph n e)<br />
unfoldG f r = (r', res)<br />
where ([r'], res) = unfoldGMany f [r]<br />
}}}<br />
where {{{s}}} is the seed type, {{{n}}} the type of node labels, and {{{e}}} the type of edge labels.<br />
<br />
The {{{Ord s}}} constraint reflects point 2 above. <br />
It is needed because the unfoldG function must record every <br />
seed value encountered.<br />
Whenever a seed is seen a second time, {{{unfoldG}}} will recognize <br />
it and create a "backward arc".<br />
We use {{{Ord}}} instead of {{{Eq}}} because mere equality would rule out using {{{Data.Map}}}.<br />
<br />
The attentive reader will note that we return an additional<br />
Vertex value. This is needed to identify which node the root<br />
seed corresponds to.<br />
<br />
In order to get an intuitive feeling of how {{{unfoldG}}} works,<br />
let's examine a simple example.<br />
<br />
{{{#!syntax haskell<br />
gr1 :: LabGraph Int Char<br />
(_,gr1) = unfoldG gen (0::Int) <br />
where gen x = (x,[('a',(x+1) `mod` 10), ('b', (x+2) `mod` 10)])<br />
}}}<br />
<br />
{{{gr1}}} being defined as above, its structure is:<br />
<br />
attachment:gr1.png<br />
<br />
Because we might want to build a graph from a set of seeds <br />
instead of a single one, we will also need the following function:<br />
{{{#!syntax haskell<br />
unfoldGMany :: (Ord s) => (s -> (n, [(e, s)])) -> [s] -> ([Vertex], LabGraph n e)<br />
unfoldGMany f roots = runST ( unfoldGManyST f roots ) -- detailed later<br />
}}} <br />
<br />
{{{unfoldG}}}, alone, is already a very practical tool, because it<br />
lets you reify a function ({{{a -> a}}}) as a graph. The graph can then be examined,<br />
processed, etc., whereas the function can only be evaluated.<br />
<br />
==== foldG ====<br />
<br />
On a graph, the catamorphism (fold) type will become:<br />
{{{#!syntax haskell<br />
foldG :: (Eq r) => r -> (Vertex -> [(e, r)] -> r) -> Graph e -> Vertex -> r<br />
foldG i f g v = foldGAll i f g ! v<br />
}}}<br />
<br />
As for {{{unfoldG}}}, the {{{foldG}}}<br />
function must include a special mechanism to handle cycles.<br />
The idea is to apply the operation iteratively until the result<br />
converges. The purpose of the first<br />
parameter is to "bootstrap" the process:<br />
it will be used as an initial value.<br />
<br />
Thus, {{{foldG i f g v}}} will iteratively <br />
apply {{{f}}} on nodes of graph {{{g}}}, <br />
using {{{i}}} as "bottom" value. It will return <br />
the value computed at vertex {{{v}}}. <br />
Of course, this will work only if {{{f}}} is well-behaved: <br />
it must converge at some point.<br />
I won't delve into the theoretical details<br />
here; see [[#cycle-therapy 1]] for a<br />
formal explanation.<br />
<br />
Notice that {{{foldG}}} can work on a graph without node labels. <br />
If the parameter function needs to access node labels, it can <br />
do so without {{{foldG}}} needing to know.<br />
<br />
It's also worth noticing that, in our implementation, the <br />
information will be propagated in the reverse direction of arcs. <br />
<br />
It's very common to need the result value for each vertex, <br />
hence the need for:<br />
<br />
{{{#!syntax haskell<br />
foldGAll :: (Eq r) => r -> (Vertex -> [(e, r)] -> r) -> Graph e -> Table r<br />
}}}<br />
<br />
<br />
<br />
<br />
<br />
The implementation of these functions doesn't matter much. <br />
The point of the article is not how these can be implemented, <br />
but how they can be used for daily programming tasks. <br />
For completeness though, we'll provide a<br />
sample implementation at the end of the article.<br />
<br />
== Data Structure & Accessors ==<br />
<br />
Without further ado, let's define the data structures we'll work on. <br />
<br />
<br />
{{{#!syntax haskell<br />
type Vertex = Int<br />
type Table a = Array Vertex a<br />
type Graph e = Table [(e, Vertex)]<br />
type Bounds = (Vertex, Vertex)<br />
type Edge e = (Vertex, e, Vertex)<br />
}}}<br />
A graph is a mere adjacency list table, tagged with edge labels.<br />
<br />
The above structure lacks labels for nodes.<br />
This is easily fixed by adding a labeling (or coloring) function.<br />
{{{#!syntax haskell<br />
type Labeling a = Vertex -> a<br />
data LabGraph n e = LabGraph (Graph e) (Labeling n)<br />
<br />
vertices (LabGraph gr _) = indices gr<br />
<br />
labels (LabGraph gr l) = map l (indices gr)<br />
}}}<br />
<br />
<br />
The above departs slightly from what's prescribed in [[#cycle-therapy 1]]. Instead of<br />
a ''true graph'' built by knot-tying, we chose to use an {{{Array}}}<br />
with integers as explicit vertex references.<br />
This closely follows<br />
Data.Graph in the hierarchical libraries,<br />
the only difference being that we have labelled edges.<br />
<br />
Not only is this simpler, but it has the advantage that we can reuse<br />
most of the algorithms from Data.Graph with only minor changes:<br />
<br />
{{{#!syntax haskell<br />
-- | Build a graph from a list of edges.<br />
buildG :: Bounds -> [Edge e] -> Graph e<br />
buildG bounds0 edges0 = accumArray (flip (:)) [] bounds0 [(v, (l,w)) | (v,l,w) <- edges0]<br />
<br />
-- | The graph obtained by reversing all edges.<br />
transposeG :: Graph e -> Graph e<br />
transposeG g = buildG (bounds g) (reverseE g)<br />
<br />
reverseE :: Graph e -> [Edge e]<br />
reverseE g = [ (w, l, v) | (v, l, w) <- edges g ]<br />
}}}<br />
<br />
<br />
However, as previously said, we'll try to abstract <br />
away from the details of the structure.<br />
This is not always possible, but in such cases, <br />
I believe the array representation to be<br />
a good choice, because it's easy to work with. <br />
If anything, one can readily use all the<br />
standard array functions.<br />
<br />
For example, here's the function to output a graph as a GraphViz file:<br />
{{{#!syntax haskell<br />
showGraphViz (LabGraph gr lab) = <br />
"digraph name {\n" ++<br />
"rankdir=LR;\n" ++<br />
(concatMap showNode $ indices gr) ++<br />
(concatMap showEdge $ edges gr) ++<br />
"}\n"<br />
where showEdge (from, t, to) = show from ++ " -> " ++ show to ++<br />
" [label = \"" ++ show t ++ "\"];\n"<br />
showNode v = show v ++ " [label = " ++ (show $ lab v) ++ "];\n"<br />
<br />
edges :: Graph e -> [Edge e]<br />
edges g = [ (v, l, w) | v <- indices g, (l, w) <- g!v ]<br />
}}} <br />
<br />
<br />
== Applications ==<br />
<br />
I'll now enumerate a few problems where the "origami" approach can be applied successfully.<br />
<br />
=== Closure ===<br />
<br />
A simple application (special case) of {{{unfoldG}}} is the<br />
computation of the transitive closure of a non-deterministic function.<br />
<br />
{{{#!syntax haskell<br />
closure :: Ord a => (a -> [a]) -> (a -> [a])<br />
closure f i = labels $ snd $ unfoldG f' i <br />
where f' x = (x, [((), fx) | fx <- f x])<br />
}}}<br />
<br />
In this context, "non-deterministic" means that the function yields many<br />
values, as a list. As noted before, this will work only when<br />
everything remains finite in size.<br />
<br />
<br />
For example, if we define<br />
<br />
{{{#!syntax haskell<br />
interleave (x1:x2:xs) = (x1:x2:xs) : (map (x2:) (interleave (x1:xs)))<br />
interleave xs = [xs]<br />
<br />
interleave "abcd" ==> ["abcd","bacd","bcad","bcda"]<br />
}}}<br />
<br />
a very bad way to compute the permutations of a list can be<br />
<br />
{{{#!syntax haskell<br />
permutations = closure interleave<br />
<br />
permutations "abcd" ==> ["abcd","bacd","acbd","cabd","abdc","badc", <br />
"adbc","dabc","dbac","bdac","dacb","adcb",<br />
"dcab","cdab","cadb","acdb","cdba","dcba",<br />
"cbda","bcda","bdca","dbca","bcad","cbad"]<br />
}}}<br />
<br />
But sometimes the function to 'close' is more complicated than {{{interleave}}} and<br />
then {{{closure}}} becomes really useful.<br />
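To make the idea concrete without the graph machinery, here is a standalone sketch of the same closure computation (our own toy version, using a plain worklist over {{{Data.Set}}}; the name {{{closureOf}}} is ours):<br />

```haskell
import qualified Data.Set as Set

-- Transitive closure of a non-deterministic function: keep applying f
-- to values we haven't seen yet, until no new values appear.
closureOf :: Ord a => (a -> [a]) -> a -> [a]
closureOf f start = go Set.empty [start]
  where
    go seen []     = Set.toList seen
    go seen (x:xs)
      | x `Set.member` seen = go seen xs
      | otherwise           = go (Set.insert x seen) (f x ++ xs)
```

For example, {{{closureOf (\n -> [(n+1) `mod` 4]) 0}}} yields {{{[0,1,2,3]}}}.<br />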
<br />
<br />
=== Shortest Path ===<br />
<br />
Let us now examine the toy problem of finding the distance<br />
to a given node from all the other nodes of the graph.<br />
Most readers probably know Dijkstra's algorithm for<br />
computing the solution to this problem. We will not try<br />
to reproduce it here; instead we will define the computation in terms of {{{foldG}}}.<br />
<br />
Here it goes:<br />
{{{#!syntax haskell<br />
-- | Compute the distance to v for every vertex of gr.<br />
distsTo :: Vertex -> Graph Float -> Table Float<br />
distsTo v gr = foldGAll infinite distance gr <br />
where infinite = 10000000 -- well, you get the idea<br />
distance v' neighbours <br />
| v == v' = 0<br />
| otherwise = minimum [distV+arcWeight | (distV, arcWeight) <- neighbours]<br />
}}}<br />
<br />
This is so clear that it barely needs to be explained. :)<br />
Just notice how the {{{distance}}} function assumes that the<br />
distance is already computed for all of a node's neighbours.<br />
This works because {{{foldG}}} will iterate until it finds the fixed point.<br />
<br />
On this simple graph,<br />
<br />
{{{#!syntax haskell<br />
grDist = buildG (1,5) [(1,5.0,2), (2,5.0,3), (2,7.0,4), (3,5.0,4), (4,5.0,5), (4,3.0,1)]<br />
}}}<br />
<br />
attachment:grdist.png<br />
<br />
the result of {{{#!syntax haskell<br />
dists = distsTo 5 grDist<br />
}}} is<br />
<br />
attachment:grdist2.png<br />
<br />
(labeling each node with its result, i.e. its distance to vertex {{{5}}})<br />
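The fixed-point iteration behind {{{foldGAll}}} can also be written standalone. The following sketch (our own names, using {{{Data.Map}}} instead of the article's arrays) relaxes the distance table until it stops changing:<br />

```haskell
import qualified Data.Map as Map

type G = Map.Map Int [(Double, Int)]   -- vertex -> [(edge weight, neighbour)]

-- Recompute every distance from its neighbours' current distances,
-- and iterate until the whole table is stable.
distsTo' :: Int -> G -> Map.Map Int Double
distsTo' target g = fixPt step start
  where
    inf   = 1 / 0 :: Double
    start = Map.mapWithKey (\v _ -> if v == target then 0 else inf) g
    step tbl = Map.mapWithKey recompute g
      where recompute v es
              | v == target = 0
              | otherwise   = minimum (inf : [w + tbl Map.! u | (w, u) <- es])
    fixPt f x = let x' = f x in if x == x' then x else fixPt f x'
```

On the edges of {{{grDist}}} above, {{{distsTo' 5}}} computes distance 17 for vertex 1, 12 for vertex 2, 10 for vertex 3 and 5 for vertex 4, matching the picture.<br />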
<br />
=== Finite Automaton ===<br />
<br />
Finite automata are basically graphs, so let's see how we can apply the<br />
framework to their analysis.<br />
<br />
First, let's define an automaton. For our purposes, it is a graph <br />
of states/transitions, some of the states being marked as initial or final.<br />
<br />
{{{#!syntax haskell<br />
type Automaton t = (Vertex, Graph t, Set Vertex) -- ^ Initial, transitions, finals<br />
}}}<br />
<br />
For starters, here is how the {{{showGraphViz}}} function can be applied to automaton display:<br />
<br />
{{{#!syntax haskell<br />
automatonToGraphviz (i, gr, fs) = showGraphViz (LabGraph gr lab)<br />
where lab :: Labeling String<br />
lab v = (if v == i then (">"++) else id) $ <br />
(if v `Set.member` fs then (++"|") else id) []<br />
}}}<br />
<br />
Nothing groundbreaking. We only label the nodes according to<br />
their final or initial status.<br />
<br />
{{{#!syntax haskell<br />
aut1 = (1, buildG (1,3) [(1,'a',2),(2,'a',2),(2,'b',2),(2,'c',3),(1,'a',3)], Set.fromList [3])<br />
}}}<br />
<br />
attachment:aut1.png<br />
<br />
A more interesting example is how to transform a non-deterministic <br />
automaton to an equivalent deterministic one. The underlying idea <br />
is that non-deterministic execution of the automaton is equivalent <br />
to deterministic execution on all possible transitions at once. <br />
Refer to [[#hop&ull 6]] for details. This is relatively easily done using {{{unfoldG}}}.<br />
{{{#!syntax haskell<br />
simpleGenerator f x = (x, f x)<br />
<br />
nfaToDfa :: Ord e => Automaton e -> Automaton e<br />
nfaToDfa (initial1, aut1, finals1) = (initial2, aut2, finals2)<br />
where (initial2, LabGraph aut2 mapping) = unfoldG (simpleGenerator build) seed<br />
seed = Set.singleton initial1<br />
build state = Map.toList $ Map.fromListWith Set.union $ map lift $<br />
concat $ map (aut1 !) $ Set.toList state<br />
lift (t,s) = (t, Set.singleton s)<br />
isFinal = setAny (`Set.member` finals1) . mapping<br />
finals2 = Set.fromList $ filter isFinal $ indices aut2<br />
setAny f = any f . Set.toList<br />
}}}<br />
<br />
The 'build' function is the tricky part. Yet, it's not as complicated as it seems: all it does is <br />
1. Find all reachable nodes from a set of nodes; <br />
1. Classify them by transition label<br />
1. Build target state-sets accordingly.<br />
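These three steps can be sketched standalone on a bare transition list (a toy version with our own names, not the article's {{{Automaton}}} type):<br />

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

type Trans = [(Int, Char, Int)]   -- (source, label, target) transitions

-- Steps 1 and 2: from a set of NFA states, group the reachable
-- targets by transition label.
dstep :: Trans -> Set.Set Int -> Map.Map Char (Set.Set Int)
dstep ts ss = Map.fromListWith Set.union
  [ (c, Set.singleton t) | (s, c, t) <- ts, s `Set.member` ss ]

-- Step 3, driven by a worklist: collect every reachable state-set.
dstates :: Trans -> Int -> Set.Set (Set.Set Int)
dstates ts i = go Set.empty [Set.singleton i]
  where
    go seen []     = seen
    go seen (s:ss)
      | s `Set.member` seen = go seen ss
      | otherwise           = go (Set.insert s seen) (Map.elems (dstep ts s) ++ ss)
```

On the transitions of {{{aut1}}} above, this yields four deterministic states: {1}, {2,3}, {2} and {3}.<br />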
<br />
{{{#!syntax haskell<br />
aut2 = nfaToDfa aut1<br />
}}}<br />
<br />
attachment:aut2.png<br />
<br />
Another thing we might wish to compute is the set of<br />
strings accepted by the automaton (aka the language it<br />
defines). Most of the time this will be infinite, so<br />
we will limit ourselves to strings of length at most {{{n}}}.<br />
We need finiteness because otherwise {{{foldG}}} would not find<br />
a fixed point: string sets would keep growing indefinitely.<br />
<br />
{{{#!syntax haskell<br />
accepted n (initial1, aut1, finals1) = Set.unions [resultTable ! v | v <- Set.toList finals1]<br />
-- gather what's accepted at all final states<br />
where resultTable = foldGAll Set.empty step (transposeG aut1)<br />
step v trans = Set.unions ((if v == initial1 then Set.singleton [] else Set.empty) : <br />
[Set.map ((++[t]) . take (n-1) ) s | (t,s) <- trans])<br />
}}}<br />
<br />
Notice that we need to reverse the graph arcs, otherwise the information propagates in the wrong direction.<br />
<br />
With <br />
{{{#!syntax haskell<br />
accAut1 = accepted 4 aut1<br />
accAut2 = accepted 4 aut2<br />
}}}<br />
we have <br />
{{{#!syntax haskell<br />
accAut1 == accAut2 == {"a","aaac","aabc","aac","abac","abbc","abc","ac"}<br />
}}}<br />
<br />
=== LALR Automaton ===<br />
<br />
Another area where I have applied graph (un)folding is LALR(1) parser generation. The detailed code<br />
depends on just too many things to fit in this article,<br />
so we will only sketch how the pieces fit<br />
together. Also, since a course on parsing is clearly beyond the scope of this article,<br />
please refer to the dragon book [[#dragon 7]] for details on the method.<br />
<br />
In the process of generating tables for a LALR automaton, <br />
there are three steps amenable to implementation by {{{foldG}}} and {{{unfoldG}}}.<br />
<br />
1. Construction of the closure of an LR-item kernel. This is very similar to the {{{closure}}} function described above, except that we don't discard the graph structure, which will be of use for step 3.<br />
1. LR(0) automaton generation. Then again a use for {{{unfoldG}}}.<br />
1. Propagation of the lookahead. It is a fold over the whole graph of LR-items, basically using set union as coalescing operation. It is very similar to computation of acceptable strings above.<br />
<br />
<br />
== Implementation ==<br />
<br />
=== UnfoldG ===<br />
<br />
For the sake of completeness, here's how to implement the {{{unfoldG}}} function.<br />
<br />
The algorithm is effectively a depth-first search, written in imperative style.<br />
The only difference is that the search graph is remembered and returned as the result.<br />
<br />
{{{#!syntax haskell<br />
<br />
unfoldGManyST :: (Ord a) => (a -> (c, [(b, a)]))<br />
-> [a] -> ST s ([Vertex], LabGraph c b)<br />
unfoldGManyST gen seeds =<br />
do mtab <- newSTRef (Map.empty)<br />
allNodes <- newSTRef []<br />
vertexRef <- newSTRef firstId<br />
let allocVertex = <br />
do vertex <- readSTRef vertexRef<br />
writeSTRef vertexRef (vertex + 1)<br />
return vertex<br />
let cyc src =<br />
do probe <- memTabFind mtab src<br />
case probe of<br />
Just result -> return result<br />
Nothing -> do<br />
v <- allocVertex<br />
memTabBind src v mtab <br />
let (lab, deps) = gen src<br />
ws <- mapM (cyc . snd) deps<br />
let res = (v, lab, [(fst d, w) | (d, w) <- zip deps ws])<br />
modifySTRef allNodes (res:)<br />
return v<br />
mapM_ cyc seeds<br />
list <- readSTRef allNodes<br />
seedsResult <- (return . map fromJust) =<< mapM (memTabFind mtab) seeds<br />
lastId <- readSTRef vertexRef<br />
let cycamore = array (firstId, lastId-1) [(i, k) | (i, a, k) <- list]<br />
let labels = array (firstId, lastId-1) [(i, a) | (i, a, k) <- list]<br />
return (seedsResult, LabGraph cycamore (labels !))<br />
where firstId = 0::Vertex<br />
memTabFind mt key = return . Map.lookup key =<< readSTRef mt<br />
memTabBind key val mt = modifySTRef mt (Map.insert key val)<br />
<br />
}}}<br />
<br />
Notice how, every time a seed is encountered, its corresponding vertex number is stored.<br />
Whenever the seed is encountered again, the stored vertex is simply returned.<br />
<br />
<br />
=== FoldG ===<br />
<br />
{{{#!syntax haskell<br />
foldGAllImplementation bot f gr = finalTbl<br />
where finalTbl = fixedPoint updateTbl initialTbl<br />
initialTbl = listArray bnds (replicate (rangeSize bnds) bot)<br />
<br />
fixedPoint f x = fp x<br />
where fp z = if z == z' then z else fp z'<br />
where z' = f z<br />
updateTbl tbl = listArray bnds $ map recompute $ indices gr<br />
where recompute v = f v [(b, tbl!k) | (b, k) <- gr!v]<br />
bnds = bounds gr<br />
}}}<br />
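The driving idea, iterating a function until its output stops changing, can be demonstrated on its own (a standalone toy example, not the article's code; the chain graph below is our invention):<br />

```haskell
-- Apply f repeatedly until the result no longer changes.
fixedPoint :: Eq a => (a -> a) -> a -> a
fixedPoint f x = let x' = f x in if x == x' then x else fixedPoint f x'

-- Toy use: mark nodes of the chain 0 -> 1 -> 2 -> 3 reachable from
-- node 0; each pass propagates reachability one step further.
reach :: [Bool]
reach = fixedPoint step [True, False, False, False]
  where step bs = zipWith (||) bs (True : bs)
```

After three passes the table stabilises at all-{{{True}}}, just as {{{foldGAll}}} stabilises its result table.<br />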
<br />
<br />
The proposed implementation of {{{foldG}}} is rather bold.<br />
It just applies the coalescing<br />
function repeatedly until it converges.<br />
<br />
While this is not an ideal situation, it's perfectly suited for a first-trial <br />
implementation, or when performance is not crucial.<br />
<br />
If execution time becomes critical, more specialized<br />
versions can be crafted.<br />
In the case of the shortest-path algorithm, for example,<br />
the implementation could take advantage<br />
of the nice properties of the coalescing function to use<br />
a priority queue and greedily<br />
find the fixed point. This would restore the optimal O(n * log n) complexity.<br />
<br />
<br />
== Conclusion ==<br />
<br />
The approach presented may not be excellent for controlling details of implementation<br />
and tuning run-time performance, but I think that's not the point<br />
of Haskell programming anyway.<br />
On the other hand, it is very good for quick implementation<br />
of a large range of graph algorithms. The fact that it's mostly based on a<br />
generalisation of fold and unfold should appeal to Haskell<br />
programmers.<br />
<br />
<br />
== References ==<br />
<br />
*[[Anchor(cycle-therapy)]] [1] ''Cycle Therapy: A Prescription for Fold and Unfold on Regular Trees'', F. Turbak and J.B. Wells, http://cs.wellesley.edu/~fturbak/pubs/ppdp01.pdf<br />
*[[Anchor(king-thesis)]] [2] ''Functional Programming and Graph Algorithms'', D. J. King, http://www.macs.hw.ac.uk/~gnik/publications <br />
*[[Anchor(induct)]] [3] ''Inductive Graphs and Functional Graph Algorithms'', Martin Erwig, http://web.engr.oregonstate.edu/~erwig/papers/abstracts.html<br />
*[[Anchor(dfs)]] [4] ''Structuring Depth-First Search Algorithms in Haskell'', D. J. King and John Launchbury, http://www.cse.ogi.edu/~jl/Papers/dfs.ps<br />
*[[Anchor(bananas-lenses)]] [5] ''Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire'', Erik Meijer, Maarten Fokkinga, Ross Paterson. http://citeseer.ist.psu.edu/meijer91functional.html<br />
*[[Anchor(hop&ull)]] [6] ''Introduction to Automata Theory, Languages, and Computation'', JE Hopcroft, and JD Ullman, http://www-db.stanford.edu/~ullman/ialc.html <br />
*[[Anchor(dragon)]] [7] ''Compilers: Principles, Techniques and Tools'', Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. (Addison-Wesley 1986; ISBN 0-201-10088-6)<br />
<br />
----<br />
CategoryArticle</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue5/Number_Param_Types&diff=20842The Monad.Reader/Issue5/Number Param Types2008-05-09T14:26:18Z<p>WouterSwierstra: </p>
<hr />
<div>'''This article needs reformatting! Please help tidy it up.'''--[[User:WouterSwierstra|WouterSwierstra]] 14:26, 9 May 2008 (UTC)<br />
<br />
= Number-parameterized types =<br />
<br />
''This article is also [http://pobox.com/~oleg/ftp/papers/number-parameterized-types.pdf available in PDF]. This Wiki page<br />
is not the master file: rather, it is the result of the {{{SXML->Wiki}}}<br />
conversion. Please comment in ["/Comments"]''<br />
<br />
= Abstract =<br />
This paper describes practical programming with types<br />
parameterized by numbers: e.g., an array type parameterized by the<br />
array{{{'}}}s size or a modular group type `Zn`<br />
parameterized by the modulus. An attempt to add, for example, two<br />
integers of different moduli should result in a compile-time error<br />
with a clear error message. Number-parameterized types let the<br />
programmer capture more invariants through types and eliminate some<br />
run-time checks.[[BR]]<br />
We review several encodings of the numeric<br />
parameter but concentrate on the phantom type representation of a<br />
sequence of ''decimal'' digits. The decimal encoding makes<br />
programming with number-parameterized types convenient and error<br />
messages more comprehensible. We implement arithmetic on<br />
decimal number-parameterized types, which lets us statically<br />
typecheck operations such as array concatenation.[[BR]]<br />
Overall we demonstrate a practical<br />
dependent-type-like system that is just a Haskell library. The basics<br />
of the number-parameterized types are written in Haskell98.<br />
=== Keywords: ===<br />
Haskell, number-parameterized types, type arithmetic, decimal types, type-directed programming.<br />
<br />
= Contents =<br />
<br />
[[TableOfContents(2)]]<br />
<br />
<br />
<br />
== Introduction ==<br />
[[Anchor(sec:Introduction)]]Discussions about types parameterized by values -- especially<br />
types of arrays or finite groups parameterized by their size --<br />
reoccur every couple of months on functional programming languages<br />
newsgroups and mailing lists. The often expressed wish is to guarantee<br />
that, for example, we never attempt to add two vectors of different<br />
lengths. As one poster said [#Haskell-list-quote [ Haskell-list-quote] ], <br />
"This [feature] would be helpful in the crypto library where I end up having<br />
to either define new length Words all the time or using lists and<br />
losing the capability of ensuring I am manipulating lists of the same<br />
length." Number-parameterized types, like other more expressive types,<br />
let us tell the typechecker our intentions. The typechecker may then<br />
help us write the code correctly. Many errors (which are often<br />
trivial) can be detected at compile time. Furthermore, we no longer<br />
need to litter the code with array boundary match checks. The code<br />
therefore becomes more readable, reliable, and<br />
fast. Number-parameterized types when expressed in signatures also<br />
provide a better documentation of the code and let the invariants<br />
be checked across module boundaries.<br />
<br />
In this paper, we develop realizations of number-parameterized<br />
types in Haskell that indeed have all the above advantages. The<br />
numeric parameter is specified in ''decimal'' rather than in<br />
binary, which makes types smaller and far easier to read. Type error<br />
messages also become more comprehensible. The programmer may write or<br />
the compiler can infer equality constraints (e.g., two argument<br />
vectors of a function must be of the same size), arithmetic<br />
constraints (e.g., one vector must be larger by some amount), and<br />
inequality constraints (e.g., the size of the argument vector must be<br />
at least one). The violations of the constraints are detected at<br />
compile time. We can remove run-time tag checks in functions like<br />
{{{vhead}}}, which are statically assured to receive a non-empty<br />
vector.<br />
<br />
Although we come close to the dependent-type programming, we do<br />
not extend either a compiler or the language. Our system is a regular<br />
Haskell library. In fact, the basic number-parameterized types can be<br />
implemented entirely in Haskell98. Advanced operations such as type<br />
arithmetic require commonly supported Haskell98 extensions to<br />
multi-parameter classes with functional dependencies and higher-ranked<br />
types.<br />
<br />
Our running example is arrays parameterized over their size. The<br />
parameter of the vector type is therefore a non-negative integer<br />
number. For simplicity, all the vectors in the paper are indexed from<br />
zero. In addition to vector constructors and element accessors, we<br />
define a {{{zipWith}}}-like operation to map two vectors onto<br />
the third, element by element. An attempt to map vectors of different<br />
sizes should be reported as a type error. The typechecker will also<br />
guarantee that there is no attempt to allocate a vector of a negative<br />
size. In Section [#sec:arithmetic [ sec:arithmetic] ] we introduce operations {{{vhead}}}, {{{vtail}}} and {{{vappend}}} on number-parameterized vectors.<br />
The types of these operations exhibit arithmetic and inequality<br />
constraints.<br />
<br />
The present paper describes several gradually more sophisticated<br />
number-parameterized Haskell libraries. We start by paraphrasing the<br />
approach by Chris Okasaki, who represents the size parameter of<br />
vectors in a sequence of data constructors. We then switch to the<br />
encoding of the size in a sequence of type constructors. The<br />
resulting types are phantom and impose no run-time overhead. Section<br />
[#sec:unary-type [ sec:unary-type] ] describes unary encoding of numerals in type<br />
constructors, Sections [#sec:decimal-fixed [ sec:decimal-fixed] ] and<br />
[#sec:decimal-arb [ sec:decimal-arb] ] discuss decimal encodings. Section<br />
[#sec:decimal-fixed [ sec:decimal-fixed] ] introduces a type representation for<br />
fixed-precision decimal numbers. Section [#sec:decimal-arb [ sec:decimal-arb] ]<br />
removes the limitation on the maximal size of representable numbers,<br />
at the cost of a more complex implementation and of replacing commas<br />
with unsightly dollar signs. The decimal encoding is extensible to<br />
other bases, e.g., 16 or 64. The latter can be used to develop<br />
practical realizations of number-parameterized cryptographically<br />
interesting groups.<br />
<br />
Section [#sec:arithmetic [ sec:arithmetic] ] describes the first<br />
contribution of the paper. We develop addition and subtraction of<br />
{{{`}}}{{{`}}}decimal types{{{'}}}{{{'}}}, i.e., of the type constructor applications<br />
representing non-negative integers in decimal notation. The<br />
implementation is significantly different from that for more common<br />
unary numerals. Although decimal numerals are notably difficult to<br />
add, they make number-parameterized programming practical. We can now<br />
write arithmetic equality and inequality constraints on<br />
number-parameterized types.<br />
<br />
Section [#sec:dynamic [ sec:dynamic] ] briefly describes working with<br />
number-parameterized types when the numeric parameter, and even its<br />
upper bound, are not known until run time. We show one quite simple<br />
technique, which assures a static constraint by a run-time check --<br />
witnessing. The witnessing code, which must be trustworthy, is notably<br />
compact. The section uses the method of blending static and dynamic<br />
assurances that was first described in [#stanamic-trees [ stanamic-trees] ].<br />
<br />
Section [#sec:related [ sec:related] ] compares our approach with the<br />
phantom type programming in SML by Matthias Blume, with a practical<br />
dependent-type system of Hongwei Xi, with statically-sized and generic<br />
arrays in Pascal and C, with the shape inference in array-oriented<br />
languages, and with C++ template meta-programming. Section [#sec:conclusions [ sec:conclusions] ] concludes.<br />
<br />
<br />
<br />
== Encoding the number parameter in data constructors ==<br />
[[Anchor(sec:Okasaki)]]The first approach to vectors parameterized by their size encodes<br />
the size as a series of data constructors. This approach has been used<br />
extensively by Chris Okasaki. For example, in [#Okasaki99 [ Okasaki99] ]<br />
he describes square matrices whose dimensions can be proved equal at<br />
compile time. He digresses briefly to demonstrate vectors of<br />
statically known size. A similar technique has been described by<br />
McBride [#McBride [ McBride] ]. In this section, we develop a more naive<br />
encoding of the size through data constructors, for introduction and<br />
comparison with the encoding of the size via type constructors in the<br />
following sections.<br />
<br />
Our representation of vectors of a statically checked size is<br />
reminiscent of the familiar representation of lists:<br />
<br />
<br />
{{{<br />
data List a = Nil | Cons a (List a)<br />
}}}<br />
{{{List a}}} is a recursive datatype. Lists of different sizes<br />
have the same recursive type. To make the types different (so that<br />
we can represent the size, too) we break the explicit recursion in the<br />
datatype declaration. We introduce two data constructors:<br />
<br />
<br />
{{{<br />
module UnaryDS where<br />
data VZero a = VZero deriving Show<br />
<br />
infixr 3 :+:<br />
data Vecp tail a = a :+: (tail a) deriving Show<br />
}}}<br />
The constructor {{{VZero}}} represents a vector of a zero<br />
size. A value of the type {{{Vecp tail a}}} is a non-empty vector<br />
formed from an element of the type {{{a}}} and (a smaller vector)<br />
of the type {{{tail a}}}. We place our vectors into the class<br />
{{{Show}}} for expository purposes. Thus vectors holding one<br />
element have the type {{{Vecp VZero a}}}, vectors with two<br />
elements have the type {{{Vecp (Vecp VZero) a}}}, with three elements <br />
{{{Vecp (Vecp (Vecp VZero)) a}}}, etc. We should stress the<br />
separation of the shape type of a vector, e.g., {{{Vecp (Vecp VZero)}}} for a two-element vector, from the type of vector elements. The shape<br />
type of a vector clearly encodes vector{{{'}}}s size, as repeated<br />
applications of a type constructor {{{Vecp}}} to the type<br />
constructor {{{VZero}}}, i.e., as a Peano numeral. We have indeed<br />
designed a number-parameterized ''type''.<br />
<br />
To generically manipulate the family of differently-sized vectors,<br />
we define a class of polymorphic functions:<br />
<br />
<br />
{{{<br />
class Vec t where<br />
vlength:: t a -> Int<br />
vat:: t a -> Int -> a<br />
vzipWith:: (a->b->c) -> t a -> t b -> t c<br />
}}}<br />
The method {{{vlength}}} gives us the size of a vector; the<br />
method {{{vat}}} lets us retrieve a specific element, and the method<br />
{{{vzipWith}}} produces a vector by an element-by-element<br />
combination of two other vectors. We can use {{{vzipWith}}} to<br />
add two vectors elementwise. We must emphasize the type of {{{vzipWith}}}: the two argument vectors may hold elements of different<br />
types, but the vectors must have the same shape, i.e., size.<br />
<br />
The implementation of the class {{{Vec}}} has only two<br />
instances:<br />
<br />
<br />
{{{<br />
instance Vec VZero where<br />
vlength = const 0<br />
vat = error "null array or index out of range"<br />
vzipWith f a b = VZero<br />
<br />
instance (Vec tail) => Vec (Vecp tail) where<br />
vlength (_ :+: t) = 1 + vlength t<br />
vat (a :+: _) 0 = a<br />
vat (_ :+: ta) n = vat ta (n-1)<br />
vzipWith f (a :+: ta) (b :+: tb) =<br />
(f a b) :+: (vzipWith f ta tb)<br />
}}}<br />
The second instance makes it clear that a value of a type {{{Vecp tail a}}} is a vector {{{Vec}}} if and only if<br />
{{{tail a}}} is a vector {{{Vec}}}. Our vectors,<br />
instances of the class {{{Vec}}}, are recursively defined too. Unlike<br />
lists, our vectors reveal their sizes in their types.<br />
<br />
That was the complete implementation of the number-parameterized<br />
vectors. We can now define a few sample vectors:<br />
<br />
<br />
{{{<br />
v3c = 'a' :+: 'b' :+: 'c' :+: VZero<br />
v3i = 1 :+: 2 :+: 3 :+: VZero<br />
v4i = 1 :+: 2 :+: 3 :+: 4 :+: VZero<br />
}}}<br />
and a few simple tests:<br />
<br />
<br />
{{{<br />
test1 = vlength v3c<br />
test2 = [vat v3c 0, vat v3c 1, vat v3c 2]<br />
}}}<br />
We can load the code into a Haskell system and run the<br />
tests. Incidentally, we can ask the Haskell system to tell us the<br />
inferred type of a sample vector:<br />
<br />
<br />
{{{<br />
*UnaryDS> :t v3c<br />
Vecp (Vecp (Vecp VZero)) Char<br />
}}}<br />
The inferred type indeed encodes the size of the vector as a<br />
Peano numeral. We can try more complex tests, of element-wise<br />
operations on two vectors: <br />
<br />
<br />
{{{<br />
test3 = vzipWith (\c i -> (toEnum $ fromEnum c + fromIntegral i)::Char)<br />
v3c v3i<br />
test4 = vzipWith (+) v3i v3i<br />
*UnaryDS> test3<br />
'b' :+: ('d' :+: ('f' :+: VZero))<br />
}}}<br />
In particular, {{{test3}}} demonstrates an operation on two<br />
vectors of the same shape but of different element types.<br />
<br />
An attempt to add, by mistake, two vectors of different sizes is <br />
revealing:<br />
<br />
<br />
{{{<br />
test5 = vzipWith (+) v3i v4i<br />
<br />
Couldn't match `VZero' against `Vecp VZero'<br />
Expected type: Vecp (Vecp (Vecp VZero)) a<br />
Inferred type: Vecp (Vecp (Vecp (Vecp VZero))) a1<br />
In the third argument of `vzipWith', namely `v4i'<br />
In the definition of `test5': vzipWith (+) v3i v4i<br />
}}}<br />
We get a type error, with a clear error message (the quoted message,<br />
here and elsewhere in the paper, is from GHCi; the Hugs error message<br />
is essentially the same). The typechecker, at compile time, has<br />
detected that the sizes of the vectors to add elementwise do not<br />
match. To be more precise, the sizes are off by one.<br />
<br />
For vectors described in this section, the element access<br />
operation, {{{vat}}}, takes {{{O(n)}}} time where<br />
{{{n}}} is the size of the vector. Chris Okasaki [#Okasaki99 [ Okasaki99] ] has designed more sophisticated number-parameterized<br />
vectors with element access time {{{O(log n)}}}. Although this<br />
is an improvement, the overhead of accessing an element adds up for<br />
many operations. Furthermore, the overhead of data constructors,<br />
{{{:+:}}} in our example, becomes noticeable for longer<br />
vectors. When we encode the size of a vector as a sequence of data<br />
constructors, the latter overhead cannot be eliminated.<br />
<br />
Although we have achieved the separation of the shape type of a<br />
vector from the type of its elements, we did so at the expense of a<br />
sequence of data constructors, {{{:+:}}}, at the term<br />
level. These constructors add time and space overheads, which<br />
increase with the vector size. In the following sections we<br />
show more efficient representations for number-parameterized<br />
vectors. The structure of their type will still tell us the size of<br />
the vector; however there will be no corresponding term structure,<br />
and, therefore, no space overhead of storing it nor run-time overhead<br />
of traversing it.<br />
<br />
<br />
<br />
== Encoding the number parameter in type constructors, in unary ==<br />
[[Anchor(sec:unary-type)]]To improve the efficiency of number-parameterized vectors, we<br />
choose a better run-time representation: Haskell arrays. The code in<br />
the present section is in Haskell98.<br />
<br />
<br />
{{{<br />
module UnaryT (..elided..) where<br />
import Data.Array<br />
}}}<br />
First, we need a type structure (an infinite family of types) to<br />
encode non-negative numbers. In the present section, we will use a<br />
unary encoding in the form of Peano numerals. The unary type encoding of<br />
integers belongs to programming folklore. It is also described in<br />
[#Blume01 [ Blume01] ] in the context of a foreign-function interface<br />
library of SML.<br />
<br />
<br />
{{{<br />
data Zero = Zero<br />
data Succ a = Succ a<br />
}}}<br />
That is, the term {{{Zero}}} of the type {{{Zero}}}<br />
represents the number 0. The term {{{(Succ (Succ Zero))}}} of the type<br />
{{{(Succ (Succ Zero))}}} encodes the number two. We call these<br />
numerals Peano numerals because the number {{{n}}} is<br />
represented as a repeated application of {{{n}}} type (data)<br />
constructors {{{Succ}}} to the type (term) {{{Zero}}}. We observe a one-to-one correspondence between the types of our<br />
numerals and the terms. In fact, a numeral term looks precisely the<br />
same as its type. This property is crucial as we shall see on many<br />
occasions below. It lets us {{{`}}}{{{`}}}lift{{{'}}}{{{'}}} number computations to the type<br />
level. The property also makes error messages lucid. [[FootNote(We could have declared {{{Succ}}} as {{{newtype Succ a = Succ a}}} so that {{{Succ}}} is just a tag and all non-bottom Peano numerals share the same run-time representation. As we shall see however, we hardly ever use the values of our numerals.)]] <br />
<br />
We place our Peano numerals into a class {{{Card}}}, which<br />
has a method {{{c2num}}} to convert a numeral into the<br />
corresponding number.<br />
<br />
<br />
{{{<br />
class Card c where<br />
c2num:: (Num a) => c -> a -- convert to a number<br />
<br />
cpred::(Succ c) -> c<br />
cpred = undefined<br />
<br />
instance Card Zero where <br />
c2num _ = 0<br />
instance (Card c) => Card (Succ c) where<br />
c2num x = 1 + c2num (cpred x)<br />
}}}<br />
The function {{{cpred}}} determines the predecessor for a<br />
positive Peano numeral. The definition for that function may seem<br />
puzzling: it is undefined. We observe that the callers do not need the value<br />
returned by that function: they merely need the type of that<br />
value. Indeed, let us examine the definitions of the method {{{c2num}}} in the above two instances. In the instance {{{Card Zero}}}, we are certain that the argument of {{{c2num}}} has<br />
the type {{{Zero}}}. That type, in our encoding, represents the<br />
number zero, which we return. There can be only one non-bottom value<br />
of the type {{{Zero}}}: therefore, once we know the type, we do<br />
not need to examine the value. Likewise, in the instance<br />
{{{Card (Succ c)}}}, we know that the type of the argument of {{{c2num}}} is {{{(Succ c)}}}, where {{{c}}} is itself a<br />
{{{Card}}} numeral. If we could convert a value of the type <br />
{{{c}}} to a number, we can convert the value of the type {{{(Succ c)}}} as well. By induction we determine that {{{c2num}}} never examines the value of its argument. Indeed, not only {{{c2num (Succ (Succ Zero))}}} evaluates to 2, but so does<br />
{{{c2num (undefined::(Succ (Succ Zero)))}}}.<br />
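This behavior is easy to confirm. Below is a minimal, self-contained sketch of the unary encoding above; the only deviation is that {{{cpred}}} is written as a top-level function rather than a class method, which does not affect the reasoning:<br />
<br />
<br />
{{{<br />
data Zero   = Zero<br />
data Succ a = Succ a<br />
<br />
class Card c where<br />
    c2num:: (Num a) => c -> a<br />
<br />
-- compile-time predecessor: only its type matters, never its value<br />
cpred:: (Succ c) -> c<br />
cpred = undefined<br />
<br />
instance Card Zero where<br />
    c2num _ = 0<br />
instance (Card c) => Card (Succ c) where<br />
    c2num x = 1 + c2num (cpred x)<br />
<br />
-- both evaluate to 2: c2num never inspects its argument<br />
two, two' :: Int<br />
two  = c2num (Succ (Succ Zero))<br />
two' = c2num (undefined::(Succ (Succ Zero)))<br />
}}}<br />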
<br />
The same correspondence between the types and the terms suggests<br />
that the numeral type alone is enough to describe the size of a<br />
vector. We do not need to store the value of the numeral. The shape<br />
type of our vectors could be ''phantom'' [#Blume01 [ Blume01] ].<br />
<br />
<br />
{{{<br />
newtype Vec size a = Vec (Array Int a) deriving Show<br />
}}}<br />
That is, the type variable {{{size}}} does not occur on the<br />
right-hand side of the {{{Vec}}} declaration. More importantly,<br />
at run-time our {{{Vec}}} is indistinguishable from an {{{Array}}}, thus incurring no additional overhead and providing<br />
constant-time element access. As we mentioned earlier, for simplicity,<br />
all the vectors in the paper are indexed from zero. The data<br />
constructor {{{Vec}}} is not exported from the module, so one<br />
has to use the following functions to construct vectors.<br />
<br />
<br />
{{{<br />
listVec':: (Card size) => size -> [a] -> Vec size a<br />
listVec' size elems = Vec $ listArray (0,(c2num size)-1) elems<br />
<br />
listVec:: (Card size) => size -> [a] -> Vec size a<br />
listVec size elems | not (c2num size == length elems) =<br />
error "listVec: static/dynamic sizes mismatch"<br />
listVec size elems = listVec' size elems<br />
<br />
vec:: (Card size) => size -> a -> Vec size a<br />
vec size elem = listVec' size $ repeat elem<br />
}}}<br />
The private function {{{listVec{{{'}}}}}} constructs the vector<br />
of the requested size initialized with the given values. The function<br />
makes no check that the length of the list of the initial values<br />
{{{elems}}} is equal to the length of the vector. We use this<br />
non-exported function internally, when we have proven that {{{elems}}} has the right length, or when truncating such a list is<br />
appropriate. The exported function {{{listVec}}} is a safe<br />
version of {{{listVec{{{'}}}}}}. The former assures that the<br />
constructed vector is consistently initialized. The function {{{vec}}} initializes all elements to the same value. For example, the<br />
following expression creates a boolean vector of two elements with the<br />
initial values {{{True}}} and {{{False}}}.<br />
<br />
<br />
{{{<br />
*UnaryT> listVec (Succ (Succ Zero)) [True,False]<br />
Vec (array (0,1) [(0,True),(1,False)])<br />
}}}<br />
A Haskell interpreter created the requested value, and printed it<br />
out. We can confirm that the inferred type of the vector encodes its<br />
size:<br />
<br />
<br />
{{{<br />
*UnaryT> :type listVec (Succ (Succ Zero)) [True,False]<br />
Vec (Succ (Succ Zero)) Bool<br />
}}}<br />
We can now introduce functions to operate on our vectors. The<br />
functions are similar to those in the previous section. As before,<br />
they are polymorphic in the shape of vectors (i.e., their sizes). This<br />
polymorphism is expressed differently however. In the present section<br />
we use just the parametric polymorphism rather than typeclasses.<br />
<br />
<br />
{{{<br />
vlength_t:: Vec size a -> size<br />
vlength_t _ = undefined<br />
<br />
vlength:: Vec size a -> Int<br />
vlength (Vec arr) = let (0,last) = bounds arr in last+1<br />
<br />
velems:: Vec size a -> [a]<br />
velems (Vec v) = elems v<br />
<br />
vat:: Vec size a -> Int -> a<br />
vat (Vec arr) i = arr ! i<br />
vzipWith:: Card size => <br />
(a->b->c) -> Vec size a -> Vec size b -> Vec size c<br />
vzipWith f va vb = <br />
listVec' (vlength_t va) $ zipWith f (velems va) (velems vb)<br />
}}}<br />
The functions {{{vlength{{{_}}}t}}} and {{{vlength}}} tell<br />
the size of their argument vector. The function {{{vat}}}<br />
returns the element of a vector at a given zero-based index. The function<br />
{{{velems}}}, which gives the list of vector{{{'}}}s elements, is the<br />
left inverse of {{{listVec}}}. The function<br />
{{{vzipWith}}} elementwise combines two vectors into the third<br />
one by applying a user-specified function {{{f}}} to the<br />
corresponding elements of the argument vectors. The polymorphic types<br />
of these functions indicate that the functions generically operate on<br />
number-parameterized vectors of any {{{size}}}. Furthermore,<br />
the type of {{{vzipWith}}} expresses the constraint that the<br />
two argument vectors must have the same size. The result will be a<br />
vector of the same size as that of the argument vectors. We rely on<br />
the fact that the function {{{zipWith}}}, when applied to two<br />
lists of the same size, gives the list of that size. This justifies our<br />
use of {{{listVec{{{'}}}}}}.<br />
<br />
We have introduced two functions that yield the size of their<br />
argument vector. One is the function {{{vlength{{{_}}}t}}}: it<br />
returns a value whose type represents the size of the vector. We are<br />
interested only in the type of the return value -- which we extract<br />
statically from the type of the argument vector. The function {{{vlength{{{_}}}t}}} is a ''compile-time'' function. Therefore, it is<br />
no surprise that its body is {{{undefined}}}. The type of the<br />
function ''is'' its true definition. The function {{{vlength}}} in contrast retrieves vector{{{'}}}s size from the run-time<br />
representation as an array. If we export {{{listVec}}} from the<br />
module {{{UnaryT}}} but do not export the constructor {{{Vec}}}, we can guarantee that {{{c2num . vlength{{{_}}}t}}} is<br />
equivalent to {{{vlength}}}: our number-parameterized vector<br />
type is sound.<br />
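The claimed equivalence of the two length functions can be checked in a condensed, self-contained version of the module; the names are as above, with {{{vlength}}} simplified to use {{{snd}}}:<br />
<br />
<br />
{{{<br />
import Data.Array<br />
<br />
data Zero   = Zero<br />
data Succ a = Succ a<br />
<br />
class Card c where c2num:: (Num a) => c -> a<br />
cpred:: (Succ c) -> c<br />
cpred = undefined<br />
instance Card Zero where c2num _ = 0<br />
instance (Card c) => Card (Succ c) where c2num x = 1 + c2num (cpred x)<br />
<br />
newtype Vec size a = Vec (Array Int a) deriving Show<br />
<br />
listVec:: (Card size) => size -> [a] -> Vec size a<br />
listVec size elems<br />
    | c2num size /= length elems = error "listVec: static/dynamic sizes mismatch"<br />
    | otherwise = Vec $ listArray (0, c2num size - 1) elems<br />
<br />
vlength_t:: Vec size a -> size      -- compile-time length: type only<br />
vlength_t _ = undefined<br />
<br />
vlength:: Vec size a -> Int         -- run-time length, from the array<br />
vlength (Vec arr) = snd (bounds arr) + 1<br />
<br />
-- for any vector built with listVec, the two lengths agree<br />
v2 :: Vec (Succ (Succ Zero)) Bool<br />
v2 = listVec (Succ (Succ Zero)) [True,False]<br />
agree :: Bool<br />
agree = c2num (vlength_t v2) == vlength v2   -- True<br />
}}}<br />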
<br />
From the practical point of view, passing terms such as<br />
{{{(Succ (Succ Zero))}}} to the functions {{{vec}}} or {{{listVec}}} to construct vectors is inconvenient. The previous<br />
section showed a better approach. We can implement it here too: we let<br />
the user enumerate the values, which we accumulate into a list,<br />
counting them at the same time:<br />
<br />
<br />
{{{<br />
infixl 3 &+<br />
data VC size a = VC size [a]<br />
<br />
vs:: VC Zero a; vs = VC Zero []<br />
(&+):: VC size a -> a -> VC (Succ size) a<br />
(&+) (VC size lst) x = VC (Succ size) (x:lst)<br />
vc:: (Card size) => VC size a -> Vec size a<br />
vc (VC size lst) = listVec' size (reverse lst)<br />
}}}<br />
The counting operation is effectively performed by a typechecker<br />
at compile time. Finally, the function {{{vc}}} will allocate<br />
and initialize the vector of the right size -- and of the right<br />
type. Here are a few sample vectors and operations on them:<br />
<br />
<br />
{{{<br />
v3c = vc $ vs &+ 'a' &+ 'b' &+ 'c'<br />
v3i = vc $ vs &+ 1 &+ 2 &+ 3<br />
v4i = vc $ vs &+ 1 &+ 2 &+ 3 &+ 4<br />
<br />
test1 = vlength v3c; test1' = vlength_t v3c<br />
test2 = [vat v3c 0, vat v3c 1, vat v3c 2]<br />
test3 = vzipWith (\c i -> (toEnum $ fromEnum c + fromIntegral i)::Char)<br />
v3c v3i<br />
test4 = vzipWith (+) v3i v3i<br />
}}}<br />
We can run the tests as follows:<br />
<br />
<br />
{{{<br />
*UnaryT> test3<br />
Vec (array (0,2) [(0,'b'),(1,'d'),(2,'f')])<br />
*UnaryT> :type test3<br />
Vec (Succ (Succ (Succ Zero))) Char<br />
}}}<br />
The type of the result bears the clear indication of the size of<br />
the vector. If we attempt to perform an element-wise operation on<br />
vectors of different sizes, for example:<br />
<br />
<br />
{{{<br />
test5 = vzipWith (+) v3i v4i<br />
Couldn't match `Zero' against `Succ Zero'<br />
Expected type: Vec (Succ (Succ (Succ Zero))) a<br />
Inferred type: Vec (Succ (Succ (Succ (Succ Zero)))) a1<br />
In the third argument of `vzipWith', namely `v4i'<br />
In the definition of `test5': vzipWith (+) v3i v4i<br />
}}}<br />
we get a message from the typechecker that the sizes are off<br />
by one.<br />
<br />
<br />
<br />
== Fixed-precision decimal types ==<br />
[[Anchor(sec:decimal-fixed)]]Peano numerals adequately represent the size of a vector in vector{{{'}}}s<br />
type. However, they make the notation quite verbose. We want to offer<br />
a programmer a familiar, decimal notation for the terms and the types<br />
representing non-negative numerals. This turns out to be possible even in<br />
Haskell98. In this section, we describe a fixed-precision notation,<br />
assuming that a programmer will never need a vector with more than 999<br />
elements. The limit is not a hard one and can be readily extended. The next<br />
section will eliminate the limit altogether.<br />
<br />
We again will be using Haskell arrays as the run-time<br />
representation for our vectors. In fact, the implementation of<br />
vectors is the same as that in the previous section. The only change<br />
is the use of decimal rather than unary types to describe the sizes of<br />
our vectors.<br />
<br />
<br />
{{{<br />
module FixedDecT (..export list elided..) where<br />
import Data.Array<br />
}}}<br />
Since we will be using the decimal notation, we need the terms and<br />
the types for all ten digits:<br />
<br />
<br />
{{{<br />
data D0 = D0<br />
data D1 = D1<br />
... <br />
data D9 = D9<br />
}}}<br />
For clarity and to save space, we elide repetitive code<br />
fragments. The full code is available from [#CodeForPaper [ CodeForPaper] ]. To manipulate the digits uniformly (e.g., to find out the<br />
corresponding integer), we put them into a class {{{Digit}}}. We also introduce a class for non-zero digits. The latter has no<br />
methods: we use {{{NonZeroDigit}}} as a constraint on allowable<br />
digits.<br />
<br />
<br />
{{{<br />
class Digit d where -- class of digits<br />
d2num:: (Num a) => d -> a -- convert to a number<br />
<br />
instance Digit D0 where d2num _ = 0<br />
instance Digit D1 where d2num _ = 1<br />
...<br />
instance Digit D9 where d2num _ = 9<br />
<br />
class Digit d => NonZeroDigit d<br />
instance NonZeroDigit D1<br />
instance NonZeroDigit D2<br />
...<br />
instance NonZeroDigit D9<br />
}}}<br />
We define a class of non-negative numerals. We make all<br />
single-digit numerals the members of that class:<br />
<br />
<br />
{{{<br />
class Card c where<br />
c2num:: (Num a) => c -> a -- convert to a number<br />
<br />
-- Single-digit numbers are non-negative numbers<br />
instance Card D0 where c2num _ = 0<br />
instance Card D1 where c2num _ = 1<br />
...<br />
instance Card D9 where c2num _ = 9<br />
}}}<br />
We define a two-digit number, a tuple {{{(d1,d2)}}}<br />
where {{{d1}}} is a non-zero digit, to be a member of the class {{{Card}}}. The class {{{NonZeroDigit}}} makes expressing<br />
the constraint lucid. We also introduce three-digit decimal<br />
numerals {{{(d1,d2,d3)}}}:<br />
<br />
<br />
{{{<br />
instance (NonZeroDigit d1,Digit d2) => Card (d1,d2) where<br />
c2num c = 10*(d2num $ t12 c) + (d2num $ t22 c)<br />
<br />
instance (NonZeroDigit d1,Digit d2,Digit d3) => <br />
Card (d1,d2,d3) where<br />
c2num c = 100*(d2num $ t13 c) + 10*(d2num $ t23 c)<br />
+ (d2num $ t33 c)<br />
}}}<br />
The instance constraints of the {{{Card}}} instances<br />
guarantee the uniqueness of our representation of numbers: the<br />
major decimal digit of a multi-digit number is not zero. It will be a<br />
type error to attempt to form such a number:<br />
<br />
<br />
{{{<br />
*FixedDecT> vec (D0,D1) 'a'<br />
<interactive>:1:<br />
No instance for (NonZeroDigit D0)<br />
}}}<br />
The auxiliary compile-time functions {{{t12}}}...{{{t33}}} are tuple selectors. We could have avoided them in GHC with<br />
Glasgow extensions, which support local type variables. We feel<br />
however that keeping the code Haskell98 justifies the extra hassle:<br />
<br />
<br />
{{{<br />
t12::(a,b) -> a; t12 = undefined<br />
t22::(a,b) -> b; t22 = undefined<br />
...<br />
t33::(a,b,c) -> c; t33 = undefined<br />
}}}<br />
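To see the selectors in action, here is a condensed, self-contained sketch of the two-digit case, trimmed to three digits for brevity; as with {{{cpred}}} earlier, {{{t12}}} and {{{t22}}} are consulted only for their types:<br />
<br />
<br />
{{{<br />
data D0 = D0<br />
data D1 = D1<br />
data D2 = D2<br />
<br />
class Digit d where d2num:: (Num a) => d -> a<br />
instance Digit D0 where d2num _ = 0<br />
instance Digit D1 where d2num _ = 1<br />
instance Digit D2 where d2num _ = 2<br />
<br />
class Digit d => NonZeroDigit d<br />
instance NonZeroDigit D1<br />
instance NonZeroDigit D2<br />
<br />
class Card c where c2num:: (Num a) => c -> a<br />
instance (NonZeroDigit d1,Digit d2) => Card (d1,d2) where<br />
    c2num c = 10*(d2num $ t12 c) + (d2num $ t22 c)<br />
<br />
-- compile-time tuple selectors: their bodies are never evaluated<br />
t12::(a,b) -> a; t12 = undefined<br />
t22::(a,b) -> b; t22 = undefined<br />
<br />
twelve :: Int<br />
twelve = c2num (D1,D2)   -- 12<br />
}}}<br />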
The rest of the code is as before, e.g.:<br />
<br />
<br />
{{{<br />
newtype Vec size a = Vec (Array Int a) deriving Show<br />
<br />
listVec':: Card size => size -> [a] -> Vec size a<br />
listVec' size elems = Vec $ listArray (0,(c2num size)-1) elems<br />
}}}<br />
The implementations of the polymorphic functions {{{listVec}}}, {{{vec}}}, {{{vlength{{{_}}}t}}}, {{{vlength}}}, {{{vat}}},<br />
{{{velems}}}, and {{{vzipWith}}} are precisely the same<br />
as those in Section [#sec:unary-type [ sec:unary-type] ]. We elide the code for<br />
the sake of space. We introduce a few sample vectors, using the<br />
decimal notation this time:<br />
<br />
<br />
{{{<br />
v12c = listVec (D1,D2) $ take 12 ['a'..'z']<br />
v12i = listVec (D1,D2) [1..12]<br />
v13i = listVec (D1,D3) [1..13]<br />
}}}<br />
The decimal notation is so much more convenient. We can now define long<br />
vectors without pain. As before, the type of our vectors -- the size<br />
part of the type -- looks precisely the same as the corresponding<br />
size term expression: <br />
<br />
<br />
{{{<br />
*FixedDecT> :type v12c<br />
Vec (D1, D2) Char<br />
}}}<br />
We can use the sample vectors in the tests like those of the<br />
previous section, [#CodeForPaper [ CodeForPaper] ]. If we attempt to<br />
elementwise add two vectors of different sizes, we get a type<br />
error:<br />
<br />
<br />
{{{<br />
test5 = vzipWith (+) v12i v13i<br />
<br />
Couldn't match `D2' against `D3'<br />
Expected type: Vec (D1, D2) a<br />
Inferred type: Vec (D1, D3) a1<br />
In the third argument of `vzipWith', namely `v13i'<br />
In the definition of `test5': vzipWith (+) v12i v13i<br />
}}}<br />
The error message literally says that 12 is not equal to 13: the<br />
typechecker expected a vector of size 12 but found a vector of size 13<br />
instead.<br />
<br />
<br />
<br />
== Arbitrary-precision decimal types ==<br />
[[Anchor(sec:decimal-arb)]]From the practical point of view, the fixed-precision<br />
number-parameterized vectors of the previous section are<br />
sufficient. The imposition of a limit on the width of the decimal<br />
numerals -- however easily extended -- is nevertheless intellectually<br />
unsatisfying. One may wish for an encoding of arbitrarily large decimal<br />
numbers within a framework that has been set up once and for all. Such<br />
an SML framework has been introduced in [#Blume01 [ Blume01] ], to<br />
encode the sizes of arrays in their types. It is interesting to ask<br />
if such an encoding is possible in Haskell. The present section<br />
demonstrates a representation of arbitrarily large decimal numbers in<br />
''Haskell98''. We also show that typeclasses in Haskell have<br />
made the encoding easier and precise: our decimal types are in<br />
bijection with non-negative integers. As before, we use the decimal<br />
types as phantom types describing the shape of number-parameterized<br />
vectors.<br />
<br />
We start by defining the types for the ten digits:<br />
<br />
<br />
{{{<br />
module ArbPrecDecT (..export list elided..) where<br />
import Data.Array<br />
<br />
data D0 a = D0 a<br />
data D1 a = D1 a<br />
...<br />
data D9 a = D9 a<br />
}}}<br />
Unlike the code in the previous section, {{{D0}}} through {{{D9}}} are type constructors of one argument. We<br />
use the composition of the constructors to represent sequences of<br />
digits. And so we introduce a class for arbitrary sequences of<br />
digits:<br />
<br />
<br />
{{{<br />
class Digits ds where<br />
ds2num:: (Num a) => ds -> a -> a<br />
}}}<br />
with a method to convert a sequence to the corresponding<br />
number. The method {{{ds2num}}} is designed in the<br />
accumulator-passing style: its second argument is the accumulator. We<br />
also need a type, which we call {{{Sz}}}, to represent an empty<br />
sequence of digits:<br />
<br />
<br />
{{{<br />
data Sz = Sz -- zero size (or the Nil of the sequence)<br />
instance Digits Sz where<br />
ds2num _ acc = acc<br />
}}}<br />
We now inductively define arbitrarily long sequences of digits:<br />
<br />
<br />
{{{<br />
instance (Digits ds) => Digits (D0 ds) where<br />
ds2num dds acc = ds2num (t22 dds) (10*acc)<br />
instance (Digits ds) => Digits (D1 ds) where<br />
ds2num dds acc = ds2num (t22 dds) (10*acc + 1)<br />
...<br />
instance (Digits ds) => Digits (D9 ds) where<br />
ds2num dds acc = ds2num (t22 dds) (10*acc + 9)<br />
<br />
t22::(f x) -> x; t22 = undefined<br />
}}}<br />
The type and the term {{{Sz}}} denote an empty sequence;<br />
{{{D9 Sz}}} -- that is, the application of the constructor {{{D9}}} to {{{Sz}}} -- is a sequence of one digit, digit 9. The<br />
application of the constructor {{{D1}}} to the latter sequence<br />
gives us {{{D1 (D9 Sz)}}}, a two-digit sequence of digits one<br />
and nine. Compositions of data/type constructors indeed encode<br />
sequences of digits. As before, the terms and the types look precisely<br />
the same. The compositions can of course be arbitrarily long:<br />
<br />
<br />
{{{<br />
*ArbPrecDecT> :type D1$ D2$ D3$ D4$ D5$ D6$ D7$ D8$ D9$ D0$ D9$ <br />
D8$ D7$ D6$ D5$ D4$ D3$ D2$ D1$ Sz<br />
D1 (D2 (D3 (D4 (D5 (D6 (D7 (D8 (D9 (D0 (D9 (D8 (D7 <br />
(D6 (D5 (D4 (D3 (D2 (D1 Sz))))))))))))))))))<br />
*ArbPrecDecT> ds2num (D1$ D2$ D3$ D4$ D5$ D6$ D7$ D8$ D9$ D0$ D9$ <br />
D8$ D7$ D6$ D5$ D4$ D3$ D2$ D1$ Sz) 0<br />
1234567890987654321<br />
}}}<br />
We should point out a notable advantage of Haskell typeclasses in<br />
designing sophisticated type families -- in particular, in<br />
specifying constraints. Nothing prevents a programmer from using our<br />
type constructors, e.g., {{{D1}}}, in unintended ways. For<br />
example, a programmer may form a value of the type {{{D1 Bool}}}: either by applying a data constructor {{{D1}}} to a boolean<br />
value, or by casting a polymorphic value, {{{undefined}}},<br />
into that type:<br />
<br />
<br />
{{{<br />
*ArbPrecDecT> :type D1 True<br />
D1 Bool<br />
*ArbPrecDecT> :type (undefined::D1 Bool)<br />
D1 Bool<br />
}}}<br />
However, such types do ''not'' represent decimal<br />
sequences. Indeed, an attempt to pass either of these values to<br />
{{{ds2num}}} will result in a type error:<br />
<br />
<br />
{{{<br />
*ArbPrecDecT> ds2num (undefined::D1 Bool) 0<br />
No instance for (Digits Bool)<br />
arising from use of `ds2num' at <interactive>:1<br />
In the definition of `it': ds2num (undefined :: D1 Bool) 0<br />
}}}<br />
In contrast, the approach in [#Blume01 [ Blume01] ] prevented the<br />
user from constructing (non-bottom) values of these types by a careful<br />
design and export of value constructors. That approach relied on SML{{{'}}}s<br />
module system to preclude the overt mis-use of the decimal type<br />
system. Yet the user can still form a (latent, in SML) bottom value of<br />
the {{{`}}}{{{`}}}bad{{{'}}}{{{'}}} type, e.g., by attaching an appropriate type signature to<br />
an empty list, an error function, or another suitable polymorphic value. In<br />
a non-strict language like Haskell such values would make our approach,<br />
which relies on phantom types, unsound. Fortunately, we are able to<br />
eliminate ill-formed decimal types at the type level rather than at<br />
the term level. Our class {{{Digits}}} admits those and ''only'' those types that represent sequences of digits.<br />
<br />
To guarantee the bijection between non-negative numbers and <br />
sequences of digits, we need to impose an additional restriction: the<br />
first, i.e., the major, digit of a sequence must be<br />
non-zero. Expressing such a restriction is surprisingly<br />
straightforward in Haskell, even Haskell98.<br />
<br />
<br />
{{{<br />
class (Digits c) => Card c where<br />
c2num:: (Num a) => c -> a<br />
c2num c = ds2num c 0<br />
<br />
instance Card Sz<br />
instance (Digits ds) => Card (D1 ds)<br />
instance (Digits ds) => Card (D2 ds)<br />
...<br />
instance (Digits ds) => Card (D9 ds)<br />
}}}<br />
As in the previous sections, the class {{{Card}}}<br />
represents non-negative integers. A non-negative integer is realized<br />
here as a sequence of decimal digits -- provided, as the instances<br />
specify, that the sequence starts with a digit other than zero. We can<br />
now define the type of our number-parameterized vectors:<br />
<br />
<br />
{{{<br />
newtype Vec size a = Vec (Array Int a) deriving Show<br />
}}}<br />
which looks precisely as before. The polymorphic functions {{{vec}}}, {{{listVec}}}, {{{vlength{{{_}}}t}}}, {{{vlength}}}, {{{velems}}}, {{{vat}}}, and {{{vzipWith}}} are identical to those in Section [#sec:unary-type [ sec:unary-type] ]. We can define a few sample vectors:<br />
<br />
<br />
{{{<br />
v12c = listVec (D1 $ D2 Sz) $ take 12 ['a'..'z']<br />
v12i = listVec (D1 $ D2 Sz) [1..12]<br />
v13i = listVec (D1 $ D3 Sz) [1..13]<br />
}}}<br />
We should note a slight change of notation compared to the<br />
corresponding vectors of Section [#sec:decimal-fixed [ sec:decimal-fixed] ]. The<br />
tests are not changed and continue to work as before:<br />
<br />
<br />
{{{<br />
test4 = vzipWith (+) v12i v12i<br />
<br />
*ArbPrecDecT> :type test4<br />
Vec (D1 (D2 Sz)) Int<br />
*ArbPrecDecT> test4<br />
Vec (array (0,11) [(0,2),(1,4),(2,6),...(11,24)])<br />
}}}<br />
The compiler has been able to infer the size of the result of the<br />
{{{vzipWith}}} operation. The size is lucidly spelled in<br />
decimal in the type of the vector. Again, an attempt to elementwise<br />
add vectors of different sizes leads to a type error:<br />
<br />
<br />
{{{<br />
test5 = vzipWith (+) v12i v13i<br />
Couldn't match `D2 Sz' against `D3 Sz'<br />
Expected type: Vec (D1 (D2 Sz)) a<br />
Inferred type: Vec (D1 (D3 Sz)) a1<br />
In the third argument of `vzipWith', namely `v13i'<br />
In the definition of `test5': vzipWith (+) v12i v13i<br />
}}}<br />
The typechecker complains that 2 is not equal to 3: it found the<br />
vector of size 13 whereas it expected a vector of size 12. The decimal<br />
types make the error message very clear.<br />
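The vector combinators carried over from Section [#sec:unary-type [ sec:unary-type] ] are not repeated in the paper. A plausible self-contained reconstruction -- restricted to digits {{{D1}}}..{{{D3}}} and omitting {{{vat}}} for brevity, and assuming {{{listVec}}} performs a dynamic length check -- looks as follows:<br />

```haskell
-- A sketch of the number-parameterized vector combinators; the paper's
-- originals live in an earlier section, so this is a reconstruction.
import Data.Array (Array, listArray, elems)

data Sz = Sz
newtype D1 ds = D1 ds
newtype D2 ds = D2 ds
newtype D3 ds = D3 ds

class Digits ds where ds2num :: Num a => ds -> a -> a
instance Digits Sz where ds2num _ acc = acc
instance Digits ds => Digits (D1 ds) where ds2num (D1 ds) acc = ds2num ds (10*acc + 1)
instance Digits ds => Digits (D2 ds) where ds2num (D2 ds) acc = ds2num ds (10*acc + 2)
instance Digits ds => Digits (D3 ds) where ds2num (D3 ds) acc = ds2num ds (10*acc + 3)

class Digits c => Card c where
  c2num :: Num a => c -> a
  c2num c = ds2num c 0
instance Card Sz
instance Digits ds => Card (D1 ds)
instance Digits ds => Card (D2 ds)
instance Digits ds => Card (D3 ds)

newtype Vec size a = Vec (Array Int a) deriving Show

-- A compile-time function: it only propagates the size type
vlength_t :: Vec size a -> size
vlength_t = undefined

vlength :: Card size => Vec size a -> Int
vlength = c2num . vlength_t

-- listVec 'acquires' the static size type via a dynamic length check
listVec :: Card size => size -> [a] -> Vec size a
listVec size xs
  | length xs == c2num size = Vec (listArray (0, c2num size - 1) xs)
  | otherwise               = error "listVec: static and dynamic sizes disagree"

vec :: Card size => size -> a -> Vec size a
vec size a = listVec size (replicate (c2num size) a)

velems :: Vec size a -> [a]
velems (Vec arr) = elems arr

-- the two arguments and the result all carry the SAME size type
vzipWith :: Card size => (a -> b -> c) -> Vec size a -> Vec size b -> Vec size c
vzipWith f va vb = listVec (vlength_t va) (zipWith f (velems va) (velems vb))
```

Note that {{{vlength}}} applies {{{c2num}}} to the undefined value returned by {{{vlength{{{_}}}t}}}; this is safe because {{{ds2num}}} never inspects its argument's value.<br />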
<br />
We must again point out a significant difference between our approach<br />
and that of [#Blume01 [ Blume01] ]. We were able to state that only<br />
those types of digital sequences that start with a non-zero digit<br />
correspond to a non-negative number. SML, as acknowledged in [#Blume01 [ Blume01] ], is unable to express such a restriction directly. The<br />
paper [#Blume01 [ Blume01] ], therefore, prevents the user from building<br />
invalid decimal sequences by relying on the module system: by<br />
exporting carefully-designed value constructors. The latter use an<br />
auxiliary phantom type to keep track of {{{`}}}{{{`}}}nonzeroness{{{'}}}{{{'}}} of the major<br />
digit. Our approach does not incur such a complication. Furthermore,<br />
by the very inductive construction of the classes {{{Digits}}}<br />
and {{{Card}}}, there is a one-to-one correspondence between<br />
''types'', the members of {{{Card}}}, and the integers<br />
in decimal notation. In [#Blume01 [ Blume01] ], the similar mapping<br />
holds only when the family of decimal types is restricted to the types<br />
that correspond to constructible values. A user of that system may<br />
still form bottom values of invalid decimal types, which will cause<br />
run-time errors. In our case, when the digit constructors are<br />
misapplied, the result will no longer be in the class {{{Card}}}, and so the error will be detected ''statically'' by the<br />
typechecker:<br />
<br />
<br />
{{{<br />
*ArbPrecDecT> vec (D1$ D0$ D0$ True) 0<br />
No instance for (Digits Bool)<br />
arising from use of `vec' at <interactive>:1<br />
In the definition of `it': vec (D1 $ (D0 $ (D0 $ True))) 0<br />
<br />
*ArbPrecDecT> vec (D0$ D1$ D0 Sz) 0<br />
No instance for (Card (D0 (D1 (D0 Sz))))<br />
arising from use of `vec' at <interactive>:1<br />
In the definition of `it': vec (D0 $ (D1 $ (D0 Sz))) 0<br />
}}}<br />
<br />
<br />
== Computations with decimal types ==<br />
[[Anchor(sec:arithmetic)]]The previous sections gave many examples of functions such as<br />
{{{vzipWith}}} that take two vectors ''statically''<br />
known to be of equal size. The signature of these functions states<br />
quite detailed invariants whose violations will be reported at<br />
compile-time. Furthermore, the invariants can be inferred by the<br />
compiler itself. This use of the type system is not particular to<br />
Haskell: Matthias Blume [#Blume01 [ Blume01] ] has derived a similar<br />
parameterization of arrays in SML, which can express such equality of<br />
size constraints. Matthias Blume, however, cautions one not to overstate<br />
the usefulness of the approach, because the type system can express<br />
only fairly simple constraints: {{{`}}}{{{`}}}There is still no type that, for<br />
example, would force two otherwise arbitrary arrays to differ in size<br />
by exactly one.{{{'}}}{{{'}}} That remark, however, was made in the context of SML. In<br />
Haskell with common extensions we ''can'' define vector<br />
functions whose type contains arithmetic constraints on the sizes of<br />
the argument and the result vectors. These constraints can be verified<br />
statically and sometimes even inferred by a compiler. In this section,<br />
we consider the example of vector concatenation. We shall see that the<br />
inferred type of {{{vappend}}} manifestly affirms that the size<br />
of the result is the sum of the sizes of two argument vectors. We also<br />
introduce the functions {{{vhead}}} and {{{vtail}}},<br />
whose type specifies that they can only be applied to non-empty<br />
vectors. Furthermore, the type of {{{vtail}}} says that the<br />
size of the result vector is less by one than the size of the argument<br />
vector. These examples are quite unusual and almost cross into the<br />
realm of dependent types.<br />
<br />
We must note however that the examples in this section require the<br />
Haskell98 extension to multi-parameter classes with functional<br />
dependencies. That extension is activated by flags {{{-98}}} of<br />
Hugs and {{{-fglasgow-exts -fallow-undecidable-instances}}} of<br />
GHCi.<br />
<br />
We will be using the arbitrary precision decimal types introduced<br />
in the previous section. We aim to design a {{{`}}}type addition{{{'}}} of decimal<br />
sequences. Our decimal types spell the corresponding non-negative<br />
numbers in the conventional (i.e., big-endian) decimal notation: the<br />
most-significant digit first. However, it is more convenient to add<br />
such numbers starting from the least-significant digit. Therefore, we<br />
need a way to reverse digital sequences, or, more precisely, types of the<br />
class {{{Digits}}}. We use the conventional sequence reversal<br />
algorithm written in the accumulator-passing style.<br />
<br />
<br />
{{{<br />
class DigitsInReverse' df w dr | df w -> dr<br />
<br />
instance DigitsInReverse' Sz acc acc<br />
instance (Digits (d drest), DigitsInReverse' drest (d acc) dr) <br />
=> DigitsInReverse' (d drest) acc dr<br />
}}}<br />
We introduced the class {{{DigitsInReverse{{{'}}} df w dr}}} where<br />
{{{df}}} is the source sequence, {{{dr}}} is the<br />
reversed sequence, and {{{w}}} is the accumulator. The three<br />
digit sequence types belong to {{{DigitsInReverse{{{'}}}}}} if<br />
the reverse of {{{df}}} appended to {{{w}}} gives the<br />
digit sequence {{{dr}}}. The functional dependency and the two<br />
instances spell this constraint out. We can now introduce a class that<br />
relates a sequence of digits with its reverse:<br />
<br />
<br />
{{{<br />
class DigitsInReverse df dr | df -> dr, dr -> df<br />
<br />
instance (DigitsInReverse' df Sz dr, DigitsInReverse' dr Sz df)<br />
=> DigitsInReverse df dr<br />
}}}<br />
Two sequences of digits {{{df}}} and {{{dr}}} belong<br />
to the class {{{DigitsInReverse}}} if they are the reverse of<br />
each other. The functional dependencies make the {{{`}}}{{{`}}}each other{{{'}}}{{{'}}} part<br />
clear: one sequence uniquely determines the other. The typechecker<br />
will verify that given {{{df}}}, it can find {{{dr}}} so<br />
that both {{{DigitsInReverse{{{'}}} df Sz dr}}} and {{{DigitsInReverse{{{'}}} dr Sz df}}} are satisfied. To test the reversal<br />
process, we define a function {{{digits{{{_}}}rev}}}:<br />
<br />
<br />
{{{<br />
digits_rev:: (Digits ds, Digits dsr, DigitsInReverse ds dsr)<br />
=> ds -> dsr<br />
digits_rev = undefined<br />
}}}<br />
It is again a compile-time function specified entirely by its<br />
type. Its body is therefore undefined. We can now run a few<br />
examples:<br />
<br />
<br />
{{{<br />
*ArbArithmT> :t digits_rev (D1$D2$D3 Sz)<br />
D3 (D2 (D1 Sz))<br />
*ArbArithmT> :t (\v -> digits_rev v `asTypeOf` (D1$D2$D3 Sz))<br />
D3 (D2 (D1 Sz)) -> D1 (D2 (D3 Sz))<br />
}}}<br />
Indeed, the process of reversing sequences of decimal digits works<br />
both ways. Given the type of the argument to {{{digits{{{_}}}rev}}},<br />
the compiler infers the type of the result. Conversely, given the type<br />
of the result the compiler infers the type of the argument.<br />
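For completeness, here is a self-contained sketch of the reversal machinery, restricted to digits {{{D1}}}..{{{D3}}} for brevity. Because {{{ds2num}}} below never inspects its argument's value, we can even read a number off the purely type-level result of {{{digits{{{_}}}rev}}} at run time:<br />

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, FlexibleContexts, UndecidableInstances #-}
-- A runnable sketch of type-level digit-sequence reversal,
-- restricted to three digits for brevity.

data Sz = Sz
newtype D1 ds = D1 ds
newtype D2 ds = D2 ds
newtype D3 ds = D3 ds

class Digits ds where
  ds2num :: Num a => ds -> a -> a
instance Digits Sz where ds2num _ acc = acc
instance Digits ds => Digits (D1 ds) where ds2num (D1 ds) acc = ds2num ds (10*acc + 1)
instance Digits ds => Digits (D2 ds) where ds2num (D2 ds) acc = ds2num ds (10*acc + 2)
instance Digits ds => Digits (D3 ds) where ds2num (D3 ds) acc = ds2num ds (10*acc + 3)

-- Accumulator-passing reversal at the type level, as in the text
class DigitsInReverse' df w dr | df w -> dr
instance DigitsInReverse' Sz acc acc
instance (Digits (d drest), DigitsInReverse' drest (d acc) dr)
      => DigitsInReverse' (d drest) acc dr

class DigitsInReverse df dr | df -> dr, dr -> df
instance (DigitsInReverse' df Sz dr, DigitsInReverse' dr Sz df)
      => DigitsInReverse df dr

-- A compile-time function, specified entirely by its type
digits_rev :: (Digits ds, Digits dsr, DigitsInReverse ds dsr) => ds -> dsr
digits_rev = undefined
```

For example, {{{ds2num (digits{{{_}}}rev (D1$D2$D3 Sz)) 0}}} evaluates to {{{321}}}: the typechecker computes the reversed type, and {{{ds2num}}} merely reads it off.<br />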
<br />
A sequence of digits belongs to the class {{{Card}}} only<br />
if the most-significant digit is not a zero. To convert an arbitrary<br />
sequence to {{{Card}}} we need a way to strip leading zeros:<br />
<br />
<br />
{{{<br />
class NoLeadingZeros d d0 | d -> d0<br />
instance NoLeadingZeros Sz Sz<br />
instance (NoLeadingZeros d d') => NoLeadingZeros (D0 d) d'<br />
instance NoLeadingZeros (D1 d) (D1 d)<br />
...<br />
instance NoLeadingZeros (D9 d) (D9 d)<br />
}}}<br />
We are now ready to build the addition machinery. We draw our<br />
inspiration from the computer architecture: the adder of an<br />
arithmetical-logical unit (ALU) of the CPU is constructed by chaining<br />
of so-called full-adders. A full-adder takes two summands and the<br />
carry-in and yields the result of the summation and the carry-out. In<br />
our case, the summands and the result are decimal rather than<br />
binary. Carry is still binary.<br />
<br />
<br />
{{{<br />
class FullAdder d1 d2 cin dr cout<br />
| d1 d2 cin -> cout, d1 d2 cin -> dr, <br />
d1 dr cin -> cout, d1 dr cin -> d2 <br />
where<br />
_unused:: (d1 xd1) -> (d2 xd2) -> cin -> (dr xdr)<br />
_unused = undefined<br />
}}}<br />
The class {{{FullAdder}}} establishes a relation among<br />
three digits {{{d1}}}, {{{d2}}}, and {{{dr}}} and<br />
two carry bits {{{cin}}} and {{{cout}}}: {{{d1 + d2 + cin = dr + 10*cout}}}. The digits are represented by the type<br />
constructors {{{D0}}} through {{{D9}}}. The sole purpose<br />
of the method {{{{{{_}}}unused}}} is to cue the compiler that<br />
{{{d1}}}, {{{d2}}}, and {{{dr}}} are type<br />
constructors. The functional dependencies of the class tell us that<br />
the summands and the input carry uniquely determine the result digit<br />
and the output carry. On the other hand, if we know the result digit,<br />
one of the summands, {{{d1}}}, and the input carry, we can<br />
determine the other summand. The same relation {{{FullAdder}}}<br />
can therefore be used for addition and for subtraction. In the latter<br />
case, the carry bits should be more properly called borrow bits.<br />
<br />
<br />
{{{<br />
data Carry0<br />
data Carry1<br />
<br />
instance FullAdder D0 D0 Carry0 D0 Carry0<br />
instance FullAdder D0 D0 Carry1 D1 Carry0<br />
instance FullAdder D0 D1 Carry0 D1 Carry0<br />
...<br />
instance FullAdder D9 D8 Carry1 D8 Carry1<br />
instance FullAdder D9 D9 Carry0 D8 Carry1<br />
instance FullAdder D9 D9 Carry1 D9 Carry1<br />
}}}<br />
The full code [#CodeForPaper [ CodeForPaper] ] indeed contains 200 instances of<br />
{{{FullAdder}}}. The exhaustive enumeration verifies the<br />
functional dependencies of the class. The number of instances could be<br />
significantly reduced if we availed ourselves of the overlapping<br />
instances extension. For generality, however, we tried to use as few<br />
Haskell98 extensions as possible. Although 200 instances may seem like<br />
quite a lot, we have to write them only once. We place the instances<br />
into a module and separately compile it. Furthermore, we did not write<br />
those instances by hand: we used Haskell itself:<br />
<br />
<br />
{{{<br />
make_full_adder <br />
= mapM_ putStrLn <br />
[unwords $ doit d1 d2 cin | d1<-[0..9],<br />
d2<-[0..9], cin<-[0..1]]<br />
where<br />
doit d1 d2 cin <br />
= ["instance FullAdder", tod d1, tod d2, toc cin,<br />
tod d12, toc cout]<br />
where <br />
(d12,cout) = let sum = d1 + d2 + cin<br />
in if sum >= 10 then (sum-10,1) else (sum,0)<br />
tod n | (n >= 0 && 9 >= n) = "D" ++ (show n)<br />
toc 0 = "Carry0"; toc 1 = "Carry1"<br />
}}}<br />
That function is ready for Template Haskell. For now, we used a<br />
low-tech approach: cutting and pasting its output from an Emacs buffer<br />
running GHCi into the Emacs buffer with the code.<br />
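As a quick sanity check on the generated table (our addition, not part of the paper's code), the digit/carry computation used by the generator can be verified against the defining relation {{{d1 + d2 + cin = d12 + 10*cout}}} over all 200 combinations:<br />

```haskell
-- The digit/carry computation used by the instance generator,
-- factored out as a standalone function:
fullAdd :: Int -> Int -> Int -> (Int, Int)
fullAdd d1 d2 cin =
  let s = d1 + d2 + cin
  in if s >= 10 then (s - 10, 1) else (s, 0)

-- Every one of the 200 generated instances satisfies the full-adder
-- relation, and the result digit is always a single decimal digit:
tableOK :: Bool
tableOK = and
  [ d1 + d2 + cin == d12 + 10 * cout && 0 <= d12 && d12 <= 9
  | d1 <- [0..9], d2 <- [0..9], cin <- [0..1]
  , let (d12, cout) = fullAdd d1 d2 cin ]
```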
<br />
We use {{{FullAdder}}} to build the full adder of two<br />
little-endian decimal sequences {{{ds1}}} and {{{ds2}}}.<br />
The relation {{{DigitsSum ds1 ds2 cin dsr}}} holds if {{{ds1+ds2+cin = dsr}}}. We add the digits from the least significant<br />
onwards, and we propagate the carry. If one input sequence turns out<br />
shorter than the other, we pad it with zeros. The correctness of the<br />
algorithm follows by simple induction.<br />
<br />
<br />
{{{<br />
class DigitsSum ds1 ds2 cin dsr | ds1 ds2 cin -> dsr<br />
instance DigitsSum Sz Sz Carry0 Sz<br />
instance DigitsSum Sz Sz Carry1 (D1 Sz)<br />
instance (DigitsSum (D0 Sz) (d2 d2rest) cin (d12 d12rest)) =><br />
DigitsSum Sz (d2 d2rest) cin (d12 d12rest)<br />
instance (DigitsSum (d1 d1rest) (D0 Sz) cin (d12 d12rest)) =><br />
DigitsSum (d1 d1rest) Sz cin (d12 d12rest)<br />
instance (FullAdder d1 d2 cin d12 cout, <br />
DigitsSum d1rest d2rest cout d12rest) =><br />
DigitsSum (d1 d1rest) (d2 d2rest) cin (d12 d12rest)<br />
}}}<br />
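The five instances mirror the familiar term-level algorithm. For comparison, here is a value-level model of {{{DigitsSum}}} over little-endian digit lists (our illustration, not the paper's code), with the same base cases, zero-padding of the shorter sequence, and chained full-adder step:<br />

```haskell
-- Term-level model of DigitsSum: add two little-endian digit lists
-- with an input carry, mirroring the five instances above.
digitsSum :: [Int] -> [Int] -> Int -> [Int]
digitsSum []      []      0   = []                    -- DigitsSum Sz Sz Carry0 Sz
digitsSum []      []      1   = [1]                   -- DigitsSum Sz Sz Carry1 (D1 Sz)
digitsSum []      ds2     cin = digitsSum [0] ds2 cin -- pad the left summand
digitsSum ds1     []      cin = digitsSum ds1 [0] cin -- pad the right summand
digitsSum (d1:r1) (d2:r2) cin =                       -- chained full adder
  let s = d1 + d2 + cin
      (d12, cout) = if s >= 10 then (s - 10, 1) else (s, 0)
  in d12 : digitsSum r1 r2 cout
```

For instance, {{{digitsSum [9,9] [1] 0}}} (99 + 1, least-significant digit first) yields {{{[0,0,1]}}}, i.e., 100.<br />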
We also need the inverse relation: {{{DigitsDif ds1 ds2 cin dsr}}} holds under precisely the same condition as {{{DigitsSum}}}. Now, however, the sequences {{{ds1}}}, {{{dsr}}} and<br />
the input carry {{{cin}}} determine one of the summands,<br />
{{{ds2}}}. The input carry actually means the input borrow<br />
bit. The relation {{{DigitsDif}}} is defined only if the output<br />
sequence {{{dsr}}} has at least as many digits as {{{ds1}}} -- which is the necessary condition for the result of the<br />
subtraction to be non-negative.<br />
<br />
<br />
{{{<br />
class DigitsDif ds1 ds2 cin dsr | ds1 dsr cin -> ds2<br />
instance DigitsDif Sz ds Carry0 ds<br />
instance (DigitsDif (D0 Sz) (d2 d2rest) Carry1 (d12 d12rest)) =><br />
DigitsDif Sz (d2 d2rest) Carry1 (d12 d12rest)<br />
instance (FullAdder d1 d2 cin d12 cout, <br />
DigitsDif d1rest d2rest cout d12rest) =><br />
DigitsDif (d1 d1rest) (d2 d2rest) cin (d12 d12rest)<br />
}}}<br />
The class {{{CardSum}}} with a single instance puts it all<br />
together:<br />
<br />
<br />
{{{<br />
class (Card c1, Card c2, Card c12) => <br />
CardSum c1 c2 c12 | c1 c2 -> c12, c1 c12 -> c2<br />
instance (Card c1, Card c2, Card c12,<br />
DigitsInReverse c1 c1r, <br />
DigitsInReverse c2 c2r,<br />
DigitsSum c1r c2r Carry0 c12r,<br />
DigitsDif c1r c2r' Carry0 c12r,<br />
DigitsInReverse c2r' c2', NoLeadingZeros c2' c2,<br />
DigitsInReverse c12r c12)<br />
=> CardSum c1 c2 c12<br />
}}}<br />
The class establishes the relation between three {{{Card}}}<br />
sequences {{{c1}}}, {{{c2}}}, and {{{c12}}} such<br />
that the third is the sum of the first two. The two summands determine<br />
the sum, or the sum and one summand determine the other. The class can<br />
be used for addition and subtraction of sequences. The dependencies of<br />
the sole {{{CardSum}}} instance spell out the algorithm. We<br />
reverse the summand sequences to make them little-endian, add them<br />
together with the zero carry, and reverse the result. We also make<br />
sure that the subtraction and summation are the exact inverses. The<br />
addition algorithm {{{DigitsSum}}} never produces a sequence<br />
with the major digit zero. The subtraction algorithm, however, may<br />
result in a sequence with leading zeros, which have to be stripped<br />
away, with the help of the relation {{{NoLeadingZeros}}}. We<br />
introduce a compile-time function {{{card{{{_}}}sum}}} so we can try<br />
the addition out:<br />
<br />
<br />
{{{<br />
card_sum:: CardSum c1 c2 c12 => c1 -> c2 -> c12<br />
card_sum = undefined<br />
}}}<br />
<br />
{{{<br />
*ArbArithmT> :t card_sum (D1 Sz) (D9$D9 Sz)<br />
D1 (D0 (D0 Sz))<br />
*ArbArithmT> :t \v -> card_sum (D1 Sz) v `asTypeOf` (D1$D0$D0 Sz)<br />
D9 (D9 Sz) -> D1 (D0 (D0 Sz))<br />
*ArbArithmT> :t \v -> card_sum (D9$D9 Sz) v `asTypeOf` (D1$D0$D0 Sz)<br />
D1 Sz -> D1 (D0 (D0 Sz))<br />
}}}<br />
The typechecker can indeed add and subtract with carry and<br />
borrow. Now we define the function {{{vappend}}} to<br />
concatenate two vectors.<br />
<br />
<br />
{{{<br />
vappend va vb = listVec (card_sum (vlength_t va) (vlength_t vb))<br />
$ (velems va) ++ (velems vb)<br />
}}}<br />
We could have used the function {{{listVec{{{'}}}}}}; for illustration,<br />
however, we chose to perform a run-time check and so avoid proving the theorem<br />
about the size of the result of list concatenation. We did not declare<br />
the type of {{{vappend}}}; still the compiler is able to infer it:<br />
<br />
<br />
{{{<br />
*ArbArithmT> :t vappend<br />
vappend :: (CardSum size size1 c12) =><br />
Vec size a -> Vec size1 a -> Vec c12 a<br />
}}}<br />
which literally says that the size of the result vector is the sum<br />
of the sizes of the argument vectors. The constraint is spelled out<br />
patently, as part of the type of {{{vappend}}}. The sizes may<br />
be arbitrarily large decimal numbers: for example, the following<br />
expression demonstrates the concatenation of a vector of 25 elements<br />
and a vector of size 979: <br />
<br />
<br />
{{{<br />
*ArbArithmT> :t vappend (vec (D2$D5 Sz) 0) (vec (D9$D7$D9 Sz) 0) <br />
(Num a) => Vec (D1 (D0 (D0 (D4 Sz)))) a<br />
}}}<br />
We introduce the deconstructor functions {{{vhead}}} and<br />
{{{vtail}}}. The type of the latter is exactly what was listed in<br />
[#Blume01 [ Blume01] ] as an unattainable wish.<br />
<br />
<br />
{{{<br />
vhead:: CardSum (D1 Sz) size1 size => Vec size a -> Vec (D1 Sz) a<br />
vhead va = listVec (D1 Sz) $ [head (velems va)]<br />
vtail:: CardSum (D1 Sz) size1 size => Vec size a -> Vec size1 a<br />
vtail va = result<br />
where result = listVec (vlength_t result) $ tail (velems va)<br />
}}}<br />
Although the body of {{{vtail}}} seems to refer to the<br />
result of that very function, the function is neither divergent nor<br />
recursive. Recall that {{{vlength{{{_}}}t}}} is a compile-time,<br />
{{{`}}}type{{{'}}} function. Therefore the body of {{{vtail}}} refers to<br />
the statically known type of {{{result}}} rather than to its<br />
value. The type of {{{vhead}}} is also noteworthy: it<br />
essentially specifies an ''inequality'' constraint: the input<br />
vector is non-empty. The constraint is expressed via an implicitly<br />
existentially quantified variable {{{size1}}}: the type of<br />
{{{vhead}}} says that there must exist a non-negative number<br />
{{{size1}}} such that incrementing it by one should give the<br />
size of the input vector.<br />
<br />
We can now run a few examples. We note that the compiler could<br />
correctly infer the type of the result, which includes the size of the<br />
vector after appending or truncating it.<br />
<br />
<br />
{{{<br />
*ArbArithmT> let v = vappend (vec (D9 Sz) 0) (vec (D1 Sz) 1)<br />
*ArbArithmT> :t v<br />
Vec (D1 (D0 Sz)) Integer<br />
*ArbArithmT> v<br />
Vec (array (0,9) [(0,0),(1,0),...,(8,0),(9,1)])<br />
*ArbArithmT> :type vhead v<br />
Vec (D1 Sz) Integer<br />
*ArbArithmT> :type vtail v<br />
Vec (D9 Sz) Integer<br />
*ArbArithmT> vtail v<br />
Vec (array (0,8) [(0,0),(1,0),...,(7,0),(8,1)])<br />
*ArbArithmT> :type (vappend (vhead v) (vtail v))<br />
Vec (D1 (D0 Sz)) Integer<br />
}}}<br />
The types of {{{vhead}}} and {{{vtail}}} embed a<br />
non-empty argument vector constraint. Indeed, an attempt to apply<br />
{{{vtail}}} to an empty vector results in a type error:<br />
<br />
<br />
{{{<br />
*ArbArithmT> vtail (vec Sz 0)<br />
<interactive>:1:0:<br />
No instances for (DigitsInReverse' c2' Sz c2r',<br />
DigitsInReverse' c2r' Sz c2',<br />
DigitsDif (D1 Sz) c2r' Carry0 Sz,<br />
DigitsSum (D1 Sz) c2r Carry0 Sz,<br />
DigitsInReverse' c2r Sz size1,<br />
DigitsInReverse' size1 Sz c2r)<br />
arising from use of `vtail' at <interactive>:1:0-4<br />
}}}<br />
The error message essentially says that there is no such decimal<br />
type {{{c2r}}} such that {{{DigitsSum (D1 Sz) c2r Carry0 Sz}}}<br />
holds. That is, there is no non-negative number that gives zero if<br />
added to one.<br />
<br />
We can form quite complex expressions from the functions {{{vappend}}}, {{{vhead}}}, and {{{vtail}}}, and the<br />
compiler will ''infer'' and verify the corresponding<br />
constraints on the sizes of involved vectors. For example:<br />
<br />
<br />
{{{<br />
testc1 =<br />
let va = vec (D1$D2 Sz) 0<br />
vb = vec (D5 Sz) 1<br />
vc = vec (D8 Sz) 2<br />
in vzipWith (+) va (vappend vb (vtail vc))<br />
*ArbArithmT> testc1<br />
Vec (array (0,11) [(0,1),...,(4,1),(5,2),(6,2),...,(11,2)])<br />
}}}<br />
The size of the vector {{{va}}} must be the sum of the<br />
sizes of {{{vb}}} and {{{vc}}} minus one. Furthermore,<br />
the vector {{{vc}}} must be non-empty. The compiler has<br />
inferred this non-trivial constraint and checked it. Indeed, if by<br />
mistake we write {{{vc = vec (D9 Sz) 2}}}, as we actually did when<br />
writing the example, the compiler will instantly report a type<br />
error:<br />
<br />
<br />
{{{<br />
Couldn't match `D9 Sz' against `D8 Sz'<br />
Expected type: D9 Sz<br />
Inferred type: D8 Sz<br />
When using functional dependencies to combine<br />
DigitsSum (D1 Sz) c2r Carry0 (D9 Sz),<br />
arising from use of `vtail' at ArbArithmT.hs:420:34-38<br />
DigitsSum (D1 Sz) c2r Carry0 (D8 Sz),<br />
arising from use of `vtail' at ArbArithmT.hs:411:34-38<br />
}}}<br />
The result {{{12 - 5 + 1}}} is expected to be 8 rather than 9.<br />
<br />
We can define other operations that extend or shrink our<br />
vectors. For example, Section [#sec:unary-type [ sec:unary-type] ] introduced<br />
the operator {{{&+}}} to make the entering of vectors<br />
easier. It is straightforward to implement such an operator for<br />
decimally-typed vectors.<br />
<br />
We must point out that the type system guarantees that {{{vhead}}} and {{{vtail}}} are applied to non-empty<br />
vectors. Therefore, we no longer need the corresponding run-time<br />
check. The bodies of {{{vhead}}} and {{{vtail}}} may<br />
''safely'' use unsafe versions of the library functions {{{head}}} and {{{tail}}}, and hence increase the performance<br />
of the code without compromising its safety.<br />
<br />
<br />
<br />
== Statically-sized vectors in a dynamic context ==<br />
[[Anchor(sec:dynamic)]]In the present version of the paper, we demonstrate the simplest<br />
method of handling number-parameterized vectors in the dynamic<br />
context. The method involves run-time checks. The successful result of<br />
a run-time check is marked with the appropriate static type. Further<br />
computations can therefore rely on the result of the check (e.g., that<br />
the vector in question definitely has a particular size) and avoid the<br />
need to do that test over and over again. The net advantage is the<br />
reduction in the number of run-time checks. The complete elimination<br />
of the run-time checks is quite difficult (in general, may not even be<br />
possible) and ultimately requires a dependent type system.<br />
<br />
For our presentation we use an example of dynamically-sized<br />
vectors: reversing a vector by the familiar accumulator-passing<br />
algorithm. Each iteration splits the source vector into the head and<br />
the tail, and prepends the head to the accumulator. The sizes of the<br />
vectors change in the course of the computation -- to be precise, on<br />
each iteration. We treat vectors as if they were lists. Most of the<br />
vector processing code does not have such a degree of variation in<br />
vector sizes. The code is quite simple:<br />
<br />
<br />
{{{<br />
vreverse v = listVec (vlength_t v) $ reverse $ velems v<br />
}}}<br />
whose inferred type is obviously<br />
<br />
<br />
{{{<br />
*ArbArithmT> :t vreverse<br />
vreverse :: (Card size) => Vec size a -> Vec size a<br />
}}}<br />
The use of {{{listVec}}} implies a dynamic test -- as a<br />
witness to {{{`}}}acquire{{{'}}} the static type {{{size}}}, the size type<br />
of the input vector. We do this test only once, at the conclusion of<br />
the algorithm. We can treat the result as any other number-parameterized<br />
vector, for example:<br />
<br />
<br />
{{{<br />
testv = let v = vappend (vec (D3 Sz) 1) (vec (D1 Sz) 4)<br />
vr = vreverse v<br />
in vhead (vtail (vtail vr))<br />
}}}<br />
using the versions of {{{vhead}}} and {{{vtail}}}<br />
without any further run-time size checks.<br />
<br />
<br />
<br />
== Related work ==<br />
[[Anchor(sec:related)]]This paper was inspired by Matthias Blume{{{'}}}s messages on the<br />
newsgroup comp.lang.functional in February 2002. Many ideas<br />
of this paper were first developed during the USENET discussion, and<br />
posted in a series of three messages at that time. In more detail<br />
Matthias Blume described his method in [#Blume01 [ Blume01] ],<br />
although that paper uses binary rather than decimal types of array<br />
sizes for clarity. The approaches by Matthias Blume and ours both rely on<br />
phantom types to encode additional information about a value (e.g.,<br />
the size of an array) in a manner suitable for a typechecker. The<br />
paper [#Blume01 [ Blume01] ] exhibits the most pervasive and thorough<br />
use of phantom types: to represent the size of arrays and the<br />
constness of imported C values, to encode C structure tag ''names'' and C function prototypes.<br />
<br />
However, the paper [#Blume01 [ Blume01] ] was written in the context<br />
of SML, whereas we use Haskell. The language has greatly influenced<br />
the method of specifying and enforcing complex static constraints,<br />
e.g., that digit sequences representing non-negative numbers must<br />
not have leading zeros. The SML approach in [#Blume01 [ Blume01] ]<br />
relies on the sophisticated module system of SML to restrict the<br />
availability of value constructors so that users cannot build<br />
values of outlawed types. Haskell typeclasses on the other hand can<br />
directly express the constraint, as we saw in Section<br />
[#sec:decimal-arb [ sec:decimal-arb] ]. Furthermore, Haskell typeclasses let us<br />
specify arithmetic equality and inequality constraints -- which, as<br />
admitted in [#Blume01 [ Blume01] ], seems quite unlikely to be possible<br />
in SML.<br />
<br />
Arrays of a statically known size -- whose size is a part of their<br />
type -- are a fairly popular feature in programming languages. Such<br />
arrays are present in Fortran, Pascal, C [[FootNote(C does permit truly statically-sized arrays like those in Pascal. To achieve this, we should make a C array a member of a C structure. The compiler preserves the array size information when passing such a wrapped array as an argument. It is even possible to assign such ``arrays''.)]] . Pascal has the most complete realization of statically sized<br />
arrays. A Pascal compiler can therefore typecheck array functions like<br />
our {{{vzipWith}}}. Statically sized arrays also contribute to<br />
expressiveness and efficiency: for example, in Pascal we can copy one<br />
instance of an array into another instance of the same type by a<br />
single assignment, which, for small arrays, can be fully inlined by<br />
the compiler into a sequential code with no loops or range<br />
checks. However, in a language without parametric polymorphism,<br />
statically sized arrays are a great nuisance. If the size of an array<br />
is a part of its type, we cannot write generic functions that operate<br />
on arrays of any size. We can only write functions dealing with arrays<br />
of specific, fixed sizes. The inability to build generic<br />
array-processing libraries is one of the most serious drawbacks of<br />
Pascal. Therefore, Fortran and C introduce {{{`}}}{{{`}}}generic{{{'}}}{{{'}}} arrays whose<br />
size type is not statically known. The compiler silently converts a<br />
statically-sized array into a generic one when passing arrays as<br />
arguments to functions. We can now build generic array-processing<br />
libraries. We still need to know the size of the array. In Fortran and<br />
C, the programmer must arrange for passing the size information to a<br />
function in some other way, e.g., via an additional argument, global<br />
variable, etc. It becomes then the responsibility of a programmer to<br />
make sure that the size information is correct. The large number of<br />
Internet security advisories related to buffer overflows and other<br />
array-management issues testify that programmers in general are not to<br />
be relied upon for correctly passing and using the array size<br />
information. Furthermore, the silent, irreversible conversion of<br />
statically sized arrays into generic ones negates all the benefits of<br />
the former.<br />
<br />
A different approach to array processing is a so-called<br />
shape-invariant programming, which is a key feature of array-oriented<br />
languages such as APL or SaC [#SaC [ SaC] ]. These languages let a<br />
programmer define operations that can be applied to arrays of<br />
arbitrary shape/dimensionality. The code becomes shorter and free from<br />
explicit iterations, and thus more reusable, easier to read and to<br />
write. The exact shape of an array has to be known,<br />
eventually. Determining it at run-time is highly<br />
inefficient. Therefore, high-performance array-oriented languages<br />
employ shape inference [#Scholz01 [ Scholz01] ], which tries to<br />
statically infer the dimensionalities or even exact sizes of all<br />
arrays in a program. Shape inference is, in general, undecidable,<br />
since arrays may be dynamically allocated. Therefore, one can either<br />
restrict the class of acceptable shape-invariant programs to a<br />
decidable subset, resort to a dependent-type language like Cayenne<br />
[#Cayenne [ Cayenne] ], or use {{{`}}}{{{`}}}soft typing.{{{'}}}{{{'}}} The latter approach is<br />
described in [#Scholz01 [ Scholz01] ], which introduces a non-unique type<br />
system based on a hierarchy of array types: from fully specialized<br />
ones with the statically known sizes and dimensionality, to a type of<br />
an array with the known dimensionality but not size, to a fully<br />
generic array type whose shape can only be determined at run-time. The<br />
system remains decidable because at any time the typechecker can give<br />
up and assign a value the fully generic array type. Shape<br />
inference of SaC is specific to that language, whose type system is<br />
otherwise deliberately constrained: SaC lacks parametric polymorphism<br />
and higher-order functions. Using shape inference for compilation of<br />
shape-invariant array operations into a highly efficient code is<br />
presented in [#Kreye [ Kreye] ]. That compiler tries to generate<br />
shape-specific code that is as precise as possible. When the shape inference<br />
fails to give the exact sizes or dimensionalities, the compiler emits<br />
code for a dynamic shape dispatch and generic loops. <br />
<br />
There is, however, a great difference in goals and implementation<br />
between the shape inference of SaC and our approach. The former<br />
aims at accepting more programs than can statically be inferred<br />
shape-correct. We strive to express assertions about array sizes<br />
and to enforce the programming style that assures them. We have shown<br />
the definitions of functions such as {{{vzipWith}}}, whose<br />
argument and result vectors are all of the same size. This<br />
constraint is assured at compile-time -- even if we do not statically<br />
know the exact sizes of the vectors. Because SaC lacks parametric<br />
polymorphism, it cannot express such an assertion and statically<br />
verify it. If a SaC programmer applies a function such as {{{vzipWith}}} to vectors of unequal size, the compiler will not flag<br />
that as an error but will compile generic array code instead. The<br />
error will be raised at run time during a range check.<br />
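<br />
To make the contrast concrete, here is a minimal sketch -- not taken from the paper, and using a simple Peano-style size encoding rather than the paper's decimal one -- of how a phantom size type lets the compiler reject {{{vzipWith}}} applied to vectors of unequal length:<br />
<br />
```haskell
-- Minimal phantom-type sketch (not the paper's decimal encoding):
-- the size of a vector is tracked in its type, so vzipWith only
-- accepts two vectors whose size types coincide.
data Z        -- type-level zero
data S n      -- type-level successor

newtype Vec n a = Vec [a]

vnil :: Vec Z a
vnil = Vec []

vcons :: a -> Vec n a -> Vec (S n) a
vcons x (Vec xs) = Vec (x:xs)

toList :: Vec n a -> [a]
toList (Vec xs) = xs

-- Both arguments and the result share the same size type n.
vzipWith :: (a -> b -> c) -> Vec n a -> Vec n b -> Vec n c
vzipWith f (Vec xs) (Vec ys) = Vec (zipWith f xs ys)
```
<br />
Building a two-element and a three-element vector with {{{vcons}}}/{{{vnil}}} and passing both to {{{vzipWith}}} is then rejected at compile time -- precisely the guarantee that SaC cannot express.<br />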
<br />
The approach of the present paper comes close to emulating a<br />
dependent type system, of which Cayenne [#Cayenne [ Cayenne] ] is the<br />
epitome. We were particularly influenced by a practical dependent type<br />
system of Hongwei Xi [#Xi98 [ Xi98] ] [#XiThesis [ XiThesis] ], which is<br />
a conservative extension of SML. In [#Xi98 [ Xi98] ], Hongwei Xi et<br />
al. demonstrated an application of their system to the elimination of<br />
array bound checking and list tag checking. The related work section<br />
of that paper lists a number of other dependent and pseudo-dependent<br />
type systems. Using the type system to avoid unnecessary run-time<br />
checks is a goal of the present paper too. <br />
<br />
C++ templates provide parametric polymorphism and indexing of<br />
types by true integers. A C++ programmer can therefore define<br />
functions like {{{vzipWith}}} and {{{vtail}}} with<br />
equality and even arithmetic constraints on the sizes of the argument<br />
vectors. Blitz++ [#Blitz [ Blitz] ] was the first example of using<br />
so-called template meta-programming to generate efficient and safe<br />
array code. The type system of C++ however presents innumerable<br />
hurdles to the functional style. For example, the result type of a<br />
function is not used for the overloading resolution, which significantly<br />
restricts the power of the type inference. Templates were<br />
introduced in C++ ad hoc, and therefore, are not well integrated with<br />
its type system. Violations of static constraints expressed via<br />
templates result in error messages so voluminous as to become<br />
incomprehensible.<br />
<br />
McBride [#McBride [ McBride] ] gives an extensive survey of the<br />
emulation of dependent type systems in Haskell. He also describes<br />
number-parameterized arrays that are similar to the ones discussed in<br />
Section [#sec:Okasaki [ sec:Okasaki] ]. The paper by Fridlender and Indrika<br />
[#Fridlender [ Fridlender] ] shows another example of emulating dependent<br />
types within the Hindley-Milner type system: namely, emulating<br />
variable-arity functions such as generic {{{zipWith}}}. Their<br />
technique relies on ad hoc codings for natural numbers which resemble<br />
Peano numerals. They aim at defining more functions (i.e.,<br />
multi-variate functions), whereas we are concerned with making<br />
functions more restrictive by expressing sophisticated invariants in<br />
functions{{{'}}} types. Another approach to multivariate functions --<br />
multivariate composition operator -- is discussed in [#mcomp [ mcomp] ].<br />
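<br />
For illustration, a sketch after Fridlender and Indrika's encoding (the names below are chosen here): the arity of {{{zipWith}}} is supplied as a Peano-numeral-like combinator built from {{{zeroN}}} and {{{succN}}}:<br />
<br />
```haskell
-- Arity encoded by combinators resembling Peano numerals:
-- zeroN ends the argument list, succN adds one more list argument.
zeroN :: [a] -> [a]
zeroN = id

succN :: ([b] -> c) -> [a -> b] -> ([a] -> c)
succN n fs as = n (zipWith ($) fs as)

-- zipWithN n f zips as many lists as the numeral n encodes.
zipWithN :: ([f] -> t) -> f -> t
zipWithN n f = n (repeat f)
```
<br />
For example, {{{zipWithN (succN (succN zeroN)) (+) [1,2,3] [10,20,30]}}} evaluates to {{{[11,22,33]}}}, and one further application of {{{succN}}} zips three lists.<br />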
<br />
<br />
<br />
== Conclusions ==<br />
[[Anchor(sec:conclusions)]]Throughout this paper we have demonstrated several realizations of<br />
number-parameterized types in Haskell, using arrays parameterized by<br />
their size as an example. We have concentrated on techniques that<br />
rely on phantom types to encode the size information in the type of<br />
the array value. We have built an infinite family of types so that<br />
different values of the vector size can have their own distinct<br />
type. That type is a decimal encoding of the corresponding integer<br />
(rather than the more common unary, Peano-like encoding). The<br />
examples throughout the paper illustrate that the decimal notation for<br />
the number-parameterized vectors makes our approach practical.<br />
<br />
We have used the phantom size types to express non-trivial<br />
constraints on the sizes of the argument and the result arrays in the<br />
type of functions. The constraints include size<br />
equality, e.g., the type of a function of two arguments may indicate<br />
that the arguments must be vectors of the same size. More<br />
importantly, we can specify arithmetical constraints: e.g., that<br />
the size of the vector after concatenation is the sum of the source<br />
vector sizes. Furthermore, we can write inequality constraints by<br />
means of an implicit existential quantification, e.g., the function<br />
{{{vhead}}} must be applied to a non-empty vector. The<br />
programmer should benefit from more expressive function signatures<br />
and from the ability of the compiler to statically check complex<br />
invariants in all applications of the vector-processing functions. The<br />
compiler indeed infers and checks non-trivial constraints involving<br />
addition and subtraction of sizes -- and presents<br />
readable error messages on violation of the constraints.<br />
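<br />
The paper's decimal encoding of such arithmetic constraints is not reproduced here; as a present-day sketch (assuming GHC's TypeFamilies extension, which postdates the paper, and Peano-style sizes), the concatenation constraint reads:<br />
<br />
```haskell
{-# LANGUAGE TypeFamilies #-}
-- Sketch (not the paper's decimal encoding): the size of a
-- concatenation is statically the sum of the argument sizes.
data Z
data S n

newtype Vec n a = Vec [a]

vnil :: Vec Z a
vnil = Vec []

vcons :: a -> Vec n a -> Vec (S n) a
vcons x (Vec xs) = Vec (x:xs)

toList :: Vec n a -> [a]
toList (Vec xs) = xs

-- Type-level addition of sizes.
type family Add m n
type instance Add Z n = n
type instance Add (S m) n = S (Add m n)

-- The result size is the sum of the argument sizes, by type.
vappend :: Vec m a -> Vec n a -> Vec (Add m n) a
vappend (Vec xs) (Vec ys) = Vec (xs ++ ys)
```
<br />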
<br />
<br />
<br />
= References =<br />
<br />
[[Anchor(Cayenne)]]Augustsson, L. Cayenne -- a language with dependent types. Proc. ACM SIGPLAN International Conference on Functional Programming, pp. 239--250, 1998.<br />
<br />
[[Anchor(Blume01)]]Matthias Blume: No-Longer-Foreign: Teaching an ML compiler to speak C {{{`}}}{{{`}}}natively.{{{'}}}{{{'}}} In BABEL{{{'}}}01: First workshop on multi-language infrastructure and interoperability, September 2001, Firenze, Italy. [http://people.cs.uchicago.edu/~blume/pub.html] <br />
<br />
[[Anchor(CodeForPaper)]]The complete source code for the article. August 9, 2005. [http://pobox.com/~oleg/ftp/Haskell/number-param-vector-code.tar.gz] <br />
<br />
[[Anchor(Fridlender)]]Daniel Fridlender and Mia Indrika: Do we Need Dependent Types? BRICS Report Series RS-01-10, March 2001. [http://www.brics.dk/RS/01/10/] <br />
<br />
[[Anchor(mcomp)]]Oleg Kiselyov: Polyvariadic composition. October 31, 2003. [http://pobox.com/~oleg/ftp/Haskell/types.scm{{{#}}}polyvar-comp] <br />
<br />
[[Anchor(stanamic-trees)]]Oleg Kiselyov: Polymorphic stanamically balanced AVL trees. April 26, 2003. [http://pobox.com/~oleg/ftp/Haskell/types.scm{{{#}}}stanamic-AVL] <br />
<br />
[[Anchor(Kreye)]]Dietmar Kreye: A Compilation Scheme for a Hierarchy of Array Types. Proc. 13th International Workshop on Implementation of Functional Languages (IFL{{{'}}}01).<br />
<br />
[[Anchor(McBride)]]Conor McBride: Faking it---simulating dependent types in Haskell. Journal of Functional Programming, 2002, v.12, pp. 375-392 [http://www.cs.nott.ac.uk/~ctm/faking.ps.gz] <br />
<br />
[[Anchor(Okasaki99)]]Chris Okasaki: From fast exponentiation to square matrices: An adventure in types. Proc. fourth ACM SIGPLAN International Conference on Functional Programming (ICFP {{{'}}}99), Paris, France, September 27-29, pp. 28--35, 1999 [http://www.eecs.usma.edu/Personnel/okasaki/pubs.html{{{#}}}icfp99] <br />
<br />
[[Anchor(Scholz01)]]Sven-Bodo Scholz: A Type System for Inferring Array Shapes. Proc. 13th International Workshop on Implementation of Functional Languages (IFL{{{'}}}01). [http://homepages.feis.herts.ac.uk/~comqss/research.html] <br />
<br />
[[Anchor(SaC)]]Single-Assignment C homepage. [http://www.sac-home.org/] <br />
<br />
[[Anchor(Haskell-list-quote)]]Dominic Steinitz: Re: Polymorphic Recursion / Rank-2 Confusion. Message posted on the Haskell mailing list on Sep 21 2003. [http://www.haskell.org/pipermail/haskell/2003-September/012726.html] <br />
<br />
[[Anchor(Blitz)]]Todd L. Veldhuizen: Arrays in Blitz++. Proc. 2nd International Scientific Computing in Object-Oriented Parallel Environments (ISCOPE{{{'}}}98). Santa Fe, New Mexico, 1998. [http://www.oonumerics.org/blitz/manual/blitz.html] <br />
<br />
[[Anchor(Xi98)]]Hongwei Xi, Frank Pfenning: Eliminating Array Bound Checking Through Dependent Types. Proc. ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 249--257, 1998. [http://www-2.cs.cmu.edu/~hwxi/] <br />
<br />
[[Anchor(XiThesis)]]Hongwei Xi: Dependent Types in Practical Programming. Ph.D thesis, Carnegie Mellon University, September 1998. [http://www.cs.bu.edu/~hwxi/]<br />
----<br />
CategoryCategory CategoryArticle</div>WouterSwierstrahttps://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue5/HRay:_A_Haskell_ray_tracer&diff=20841The Monad.Reader/Issue5/HRay: A Haskell ray tracer2008-05-09T14:25:43Z<p>WouterSwierstra: </p>
<hr />
<div>'''This article needs reformatting! Please help tidy it up.'''--[[User:WouterSwierstra|WouterSwierstra]] 14:25, 9 May 2008 (UTC)<br />
<br />
= HRay: A Haskell ray tracer =<br />
''by KennethHoste for The Monad.Reader IssueFive''<br />
[[BR]]<br />
''2005/06/11''<br />
<br />
http://www.elis.ugent.be/~kehoste/Haskell/hray_logo.png<br />
<br />
'''Abstract.'''<br />
As a thesis subject for Ghent University, I chose to write a ray tracer (HRay) in Haskell. The goal was to show how elegant,<br />
short and maintainable a ray tracing implementation would be in a functional language, as opposed to an imperative or procedural language. To achieve that goal, I first created a formal model for the application, using the functional and <br />
declarative formalism Funmath [[#funmath 1]]. In the next phase, the model was implemented in Haskell to create a working version of the ray tracer. Additional features were added to the model, including a command line and graphical user interface, a parser for input specifications and neat texture support using the Perlin Noise technique.<br />
<br />
This article describes the Haskell implementation of HRay. For more information about the formal model and my thesis, check<br />
the HRay website at [http://www.elis.ugent.be/~kehoste/Haskell/HRay].<br />
<br />
= Contents =<br />
<br />
[[TableOfContents(3)]]<br />
<br />
== Overview ==<br />
<br />
Before discussing the actual implementation, a brief description of the ray tracing algorithm is given. The implementation is<br />
split up into several modules, which are discussed one by one. First of all, the necessary mathematical functions to support<br />
the ray tracing algorithm are discussed. Based on that module, functions are presented which implement the ray tracer. To<br />
support some extra textures, a Perlin noise module is implemented, which will be discussed briefly. The two user interfaces are<br />
presented, one using command line arguments and one using the Haskell GUI library [http://haskell.org/gtk2hs Gtk2Hs]. Finally, some conclusions are drawn.<br />
<br />
== Ray tracing ==<br />
<br />
The powerful ray tracing algorithm is well known for the amazing results it produces [[#raytrac1 2]]. Because the rest of the article is based on the algorithm, we'll give a brief description of it. If you want more details, a good place to start is [[#raytrac2 3]].<br />
<br />
As the name states, the ray tracing algorithm is built upon the notion of rays. A ray is constructed through the point of view and some point of the view window. Given a desired resolution and collection of 3D objects, we calculate the intersection of the ray with the objects, for each pixel of the image. Again using rays, which are constructed as needed, the color for the intersection point (if any) is determined, depending on the color (or texture) of the object and the surrounding objects and light sources. Iterating over all pixels of the image results in a matrix of colors, representing the image. <br />
<br />
In pseudo-code:<br />
<br />
{{{<br />
for each pixel<br />
construct ray from view point to pixel location<br />
(*) determine closest intersection of ray with objects<br />
if no intersection<br />
pixel color = background color<br />
else<br />
for each light source<br />
construct ray from intersection point to light source location<br />
if intersection found between intersection point and light source<br />
shadow, so color = black<br />
else<br />
determine color contribution for light source, remember it<br />
if reflective surface<br />
construct reflected ray, determine contribution (*)<br />
if transparent surface<br />
construct refracted ray, determine contribution (*)<br />
determine color for pixel by summing all contributions<br />
}}}<br />
<br />
This is all quite imperative. The next step we need is to construct functions which can help with implementing the algorithm using a functional style. This is where the formal model came into play, but because we want to concentrate on the implementation of it all in Haskell, I'll just use the model informally: we'll first construct some basic math functions before moving on to the actual ray tracing engine. The needed types and datatypes are presented as needed.<br />
<br />
== Math functions ==<br />
<br />
=== Types and datatypes ===<br />
<br />
First of all, we introduce the (data)types needed to support the basic math functions. Because we are making a transition from a 2D to a 3D world, we need to be able to represent points in both worlds. Also, we need vectors for direction purposes, which are defined the same way as points in the 3D world are. This was a minor point of criticism in the formal model, but because giving objects two different types in Haskell is sufficient to distinguish between them, there is no problem here. On the contrary, this way of defining the types supports overloading of functions such as multiplying a point/vector by some factor, and calculating the sum of two points/vectors. <br />
<br />
{{{#!syntax haskell<br />
type Point2D = (Int,Int)<br />
type Point3D = (Double,Double,Double)<br />
type Vector = (Double,Double,Double)<br />
}}}<br />
<br />
Using these new types, we can define a Ray datatype as follows.<br />
<br />
{{{#!syntax haskell<br />
data Ray = Ray Point3D Vector<br />
}}}<br />
<br />
Intuitively, every ray has a point of origin and a direction. Besides rays, the supported objects are also important in this module. For that purpose, the datatype enumerating the supported objects is defined here as well. To keep the implementation simple, we only support spheres and planes. <br />
<br />
{{{#!syntax haskell<br />
data Object = Sphere Double Point3D<br />
| Plane (Double,Double,Double,Double) <br />
}}}<br />
<br />
For type-convenience and clarity, we also introduce a Resolution and a Dimension type. The use of these types will become clear later on.<br />
<br />
{{{#!syntax haskell<br />
type Resolution = (Int,Int)<br />
type Dimension = (Int,Int)<br />
}}}<br />
<br />
=== Basic math functions ===<br />
<br />
When modelling the ray tracer, the need arose for the extension of some elementary operations on points and vectors. The functions supporting these operations were defined as infix operators, so the resulting expressions wouldn't become gibberish (don't you just hate it when that happens in imperative languages). <br />
The first of these operations (sum of two points/vectors, subtraction of two points/vectors, multiplication of two points/vectors and multiplication of a point/vector by a factor) are given below. For efficiency, the types were not made polymorphic (i.e. using Num). <br />
<br />
{{{#!syntax haskell<br />
(<+>) :: (Double,Double,Double) -> (Double,Double,Double) -> (Double,Double,Double)<br />
(x1,y1,z1) <+> (x2,y2,z2) = (x1+x2, y1+y2, z1+z2)<br />
<br />
(<->) :: (Double,Double,Double) -> (Double,Double,Double) -> (Double,Double,Double)<br />
(x1,y1,z1) <-> (x2,y2,z2) = (x1-x2,y1-y2,z1-z2)<br />
<br />
(<*>) :: (Double,Double,Double) -> (Double,Double,Double) -> (Double,Double,Double)<br />
(x1,y1,z1) <*> (x2,y2,z2) = (x1*x2,y1*y2,z1*z2)<br />
<br />
(*>) :: (Double,Double,Double) -> Double -> (Double,Double,Double)<br />
(x,y,z) *> f = (x*f,y*f,z*f)<br />
}}}<br />
<br />
Also the classic max and min operators are extended, to make the definition of a clip function (which will adjust the RGB values of colors) really simple. <br />
<br />
{{{#!syntax haskell<br />
maxF :: Double -> (Double,Double,Double) -> (Double,Double,Double)<br />
maxF f (x,y,z) = (max x f, max y f, max z f)<br />
<br />
minF :: Double -> (Double,Double,Double) -> (Double,Double,Double)<br />
minF f (x,y,z) = (min x f, min y f, min z f)<br />
}}}<br />
<br />
Besides the generalised functions above, we have defined elementary vector-only functions: scalar product of two vectors, length of a vector, normalizing a vector and creating a normalized vector given two points.<br />
<br />
{{{#!syntax haskell<br />
(*.) :: Vector -> Vector -> Double<br />
(x1,y1,z1) *. (x2,y2,z2) = x1*x2 + y1*y2 + z1*z2<br />
<br />
len :: Vector -> Double<br />
len v = sqrt (v *. v)<br />
<br />
norm :: Vector -> Vector<br />
norm v<br />
| len v < 10**(-9) = (0.0,0.0,0.0)<br />
| otherwise = v *> (1/(len v))<br />
<br />
mkNormVect :: Point3D -> Point3D -> Vector<br />
mkNormVect v w = norm (w <-> v)<br />
}}}<br />
<br />
For shadow purposes, we need a function which determines the distance between two given points, and to avoid 'color spilling', where the RGB values of a color reach values outside the [0,1] interval, we define a clipping function.<br />
<br />
{{{#!syntax haskell<br />
dist :: Point3D -> Point3D -> Double<br />
dist p0 p1 = sqrt ((p1 <-> p0) *. (p1 <-> p0))<br />
<br />
clip :: (Double,Double,Double) -> (Double,Double,Double)<br />
clip = (maxF 0.0) . (minF 1.0)<br />
}}}<br />
<br />
=== Geometric and other math functions ===<br />
Several functions which deal with the geometric issues involved in playing around with rays and geometric objects, are defined to support the ray tracing calculations. An important function is a quadratic solver, which is kept simple: it just returns a list of results... This way, it is compatible with future functions, which might produce more solutions.<br />
<br />
{{{#!syntax haskell<br />
solveq :: (Double,Double,Double) ->[Double]<br />
solveq (a,b,c)<br />
| (d < 0) = []<br />
| (d > 0) = [(- b - sqrt d)/(2*a), (- b + sqrt d)/(2*a)]<br />
| otherwise = [-b/(2*a)]<br />
where<br />
d = b*b - 4*a*c<br />
}}}<br />
<br />
For ray construction, we already have the data constructor Ray which comes with the data declaration of Ray. Because we will also need to create a ray given two points, we define a function which creates a normalized vector out of these points, and uses the Ray data constructor internally.<br />
<br />
{{{#!syntax haskell <br />
mkRay :: Point3D -> Point3D -> Ray<br />
mkRay p1 p2 = Ray p1 (mkNormVect p1 p2)<br />
}}}<br />
<br />
To determine the intersection of a ray with some object, we provide the intRayWith function. It returns a collection of intersection points, which should be sorted from close to far. Since we only support spheres and planes, the code needed is fairly simple.<br />
<br />
{{{#!syntax haskell <br />
intRayWith :: Ray -> Object -> [Double]<br />
intRayWith (Ray start dir) (Sphere rad cen) = solveq (dir *. dir, 2*(dir *. d), (d *. d) - rad^2)<br />
where<br />
d = start <-> cen<br />
intRayWith (Ray start dir) (Plane (a,b,c,d)) = if (abs(part) < 10**(-9)) then [] else [- (d + ((a,b,c) *. start) ) / part]<br />
where<br />
part = (a,b,c) *. dir<br />
}}}<br />
<br />
Ray tracing accurately models light information, i.e. shadows, highlights and fading colors. The calculations which make this possible use the normals<br />
on the surfaces of objects to determine the angle at which the object is seen. For that purpose, we provide a normal function (again, fairly simple).<br />
{{{#!syntax haskell <br />
normal :: Point3D -> Object -> Vector<br />
normal p (Sphere rad cen) = norm ((p <-> cen) *> (1/rad))<br />
normal _ (Plane (a,b,c,d)) = norm (a,b,c)<br />
}}}<br />
<br />
The real power of the ray tracing algorithm is the ease with which it models real-life effects such as reflection and refraction of light rays. Given some direction, we provide<br />
functions to determine the reflected and refracted directions when the incoming ray intersects with some surface.<br />
<br />
{{{#!syntax haskell <br />
reflectDir :: Vector -> Vector -> Vector<br />
reflectDir i n = i <-> (n *> (2*(n *. i)))<br />
}}}<br />
<br />
{{{#!syntax haskell <br />
refractDir :: Vector -> Vector -> Double -> Vector<br />
refractDir i n r = if (v < 0) then (0.0, 0.0, 0.0) else norm $ (i *> r_c) <+> (n *> (r_c*(abs c) - sqrt v))<br />
where<br />
c = n *. (i *> (-1))<br />
r_c = if (c < 0) then r else 1/r -- when cosVal < 0, inside of sphere (so travelling to vacuum)<br />
v = 1 + (r_c^2) * (c^2 - 1)<br />
}}}<br />
<br />
Because the user can choose the resulting image resolution and the size of the view window, we need to map the pixel indices onto the window dynamically. This is done using the mapToWin function below.<br />
<br />
{{{#!syntax haskell <br />
mapToWin :: Resolution -> Dimension -> Point2D -> Point3D<br />
mapToWin (rx,ry) (w,h) (px,py) = (x/rxD,y/ryD,0.0)<br />
where<br />
(rxD,ryD) = (fromIntegral rx, fromIntegral ry)<br />
(pxD,pyD) = (fromIntegral px, fromIntegral py)<br />
(wD,hD) = (fromIntegral w, fromIntegral h)<br />
(x,y) = ( (pxD-rxD/2)*wD, (pyD-ryD/2)*hD )<br />
}}}<br />
<br />
== Ray tracing engine ==<br />
<br />
Using the mathematical basis of the HRayMath module, we can define the functions which will support the actual raytracing. First, we explain the needed (data)types, in the same way as we did with the math module. Next, we provide some functions which allow extraction of the needed information from an instance of the Intersection datatype. We discuss the functions which do the actual intersecting of rays with objects, determine the color of a pixel and support reflection and refraction effects. At the end of this section, we show the core functions of the raytracing engine, which are pretty short and elegant.<br />
<br />
=== Types and datatypes ===<br />
<br />
When working with (3D) graphics, the notion of color is essential. We represent a color as an RGB value (values between 0 and 1).<br />
<br />
{{{#!syntax haskell<br />
type Color = (Double,Double,Double)<br />
}}}<br />
<br />
As already mentioned, we will support special textures for all objects, besides just plain colors. For this purpose, we created a special dataype called Diff (diffuse color).<br />
<br />
{{{#!syntax haskell<br />
data Diff = Solid Color |<br />
Perlin (Point3D -> Color)<br />
}}}<br />
<br />
Each texture, whether plain or Perlin, has several parameters, to determine the reflection and refraction effects of the specific material. We support specular coefficient along with specularity, and refraction coefficient along with refraction index.<br />
<br />
{{{#!syntax haskell<br />
data Texture = Texture Diff Double Int Double Double<br />
}}}<br />
<br />
Every object in our scene has a certain type (sphere, plane) and a texture attached to it. Hence we provide a TexturedObject datatype.<br />
<br />
{{{#!syntax haskell<br />
type TexturedObject = (Object,Texture)<br />
}}}<br />
<br />
Every light source has a certain color, which we call Intensity. That way one can use red, blue, bright and dark lights. We also support two basic kinds of light source: <br />
ambient lights, which act as a correcting factor, and point lights, which have a certain position in 3D space.<br />
<br />
{{{#!syntax haskell<br />
type Intensity = (Double,Double,Double)<br />
<br />
data Light = PointLight Point3D Intensity<br />
| AmbientLight Intensity<br />
}}}<br />
<br />
The point of view (or camera) determines which view is rendered of the provided scene. The user defines the point in space from which the scene is viewed, and the<br />
view window dimensions. The view window is always located in the XY-plane at the origin of the axes.<br />
<br />
{{{#!syntax haskell<br />
data Camera = Camera Point3D Dimension<br />
}}}<br />
<br />
The whole scene is described by describing the camera, background color, list of objects and list of light sources.<br />
<br />
{{{#!syntax haskell<br />
data Scene = Scene Camera Color [TexturedObject] [Light]<br />
}}}<br />
<br />
An intersection of a ray with an object provides the necessary information for the raytracing calculations. We include the distance to the intersection, the intersecting ray and the object which is intersected.<br />
<br />
{{{#!syntax haskell<br />
data Intersection = Intersection Double Ray TexturedObject<br />
}}}<br />
<br />
The final product of the algorithm, an image representing the 3D scene, is actually a matrix of colors. That way, each pixel is mapped to an RGB color tuple.<br />
<br />
{{{#!syntax haskell<br />
type Image = Point2D -> Color<br />
}}}<br />
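<br />
Since an {{{Image}}} is just a function, producing output amounts to sampling it at every pixel of the chosen resolution. The helper below is a sketch, not part of HRay ({{{render}}} is a name chosen here); it repeats the type synonyms so the fragment stands alone:<br />
<br />
```haskell
type Point2D = (Int,Int)
type Color   = (Double,Double,Double)
type Image   = Point2D -> Color

-- Sample the image function at every pixel, row by row,
-- yielding the matrix of colors the article describes.
render :: (Int,Int) -> Image -> [[Color]]
render (rx,ry) img = [ [ img (x,y) | x <- [0..rx-1] ] | y <- [0..ry-1] ]
```
<br />
The resulting rows of colors can then be written out in any image format, e.g. PPM.<br />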
<br />
=== Intersection functions ===<br />
<br />
Because not every ray will intersect with an object, we use the Maybe monad in combination with our own Intersection datatype. In order to make working with the artificial<br />
(Maybe Intersection) type easier, we provide some functions which extract information from a possible intersection. This design choice might be an opportunity for improvement.<br />
We show the distance, texture, color, normal and intersection point functions. Notice how the color of some point in the 3D space is dependent on the location for Perlin textures.<br />
<br />
{{{#!syntax haskell<br />
intDist :: (Maybe Intersection) -> Double<br />
intDist Nothing = 0.0<br />
intDist (Just (Intersection d _ _)) = d<br />
}}}<br />
<br />
{{{#!syntax haskell<br />
intText :: (Maybe Intersection) -> Texture<br />
intText Nothing = Texture (Solid (0.0,0.0,0.0)) 0.0 0 0.0 0.0<br />
intText (Just (Intersection _ _ (_,t))) = t<br />
}}}<br />
<br />
{{{#!syntax haskell<br />
colorAt :: (Maybe Intersection) -> Color<br />
colorAt Nothing = (0.0,0.0,0.0)<br />
colorAt (Just (Intersection _ _ (_,Texture (Solid color) _ _ _ _) )) = color<br />
colorAt i@(Just (Intersection _ _ (_,Texture (Perlin f) _ _ _ _) )) = f (intPt i)<br />
}}}<br />
<br />
{{{#!syntax haskell<br />
normalAt :: (Maybe Intersection) -> Vector<br />
normalAt Nothing = (0.0,0.0,0.0)<br />
normalAt i@(Just (Intersection _ _ (o,_) )) = normal (intPt i) o<br />
}}}<br />
<br />
{{{#!syntax haskell<br />
intPt :: (Maybe Intersection) -> Point3D<br />
intPt Nothing = (0.0,0.0,0.0)<br />
intPt (Just (Intersection d (Ray start dir) _)) = start <+> (dir *> d)<br />
}}}<br />
<br />
=== Intersecting ===<br />
<br />
To determine the intersection (if any) of a ray with the collection of objects, we use the functions below. <br />
<br />
First, a simple function determines the first positive element in a list of doubles. We chose not to remove the negative values from the list of intersection<br />
distances because we might need them for other uses, when extending the features of the raytracer. <br />
<br />
{{{#!syntax haskell<br />
fstPos :: [Double] -> Double<br />
fstPos [] = 0.0<br />
fstPos (l:ls) = if l > 10**(-6) then l else fstPos ls<br />
}}}<br />
<br />
This function determines whether the intersection formed by the given ray and object (if any) is better than the known intersection. <br />
<br />
{{{#!syntax haskell<br />
closestInt :: Ray -> (Maybe Intersection) -> TexturedObject -> (Maybe Intersection)<br />
closestInt r i (o,m) = if d > 10**(-6) && ((isNothing i) || d < (intDist i)) <br />
then Just (Intersection d r (o,m))<br />
else i<br />
where<br />
d = fstPos (intRayWith r o)<br />
}}}<br />
<br />
To intersect a ray with a collection of objects, we can use the powerful foldl operator in combination with the closestInt function (which is given a basic empty intersection to start with).<br />
<br />
{{{#!syntax haskell<br />
intersect :: Ray -> [TexturedObject] -> (Maybe Intersection)<br />
intersect r o = foldl (closestInt r) Nothing o<br />
}}}<br />
<br />
=== Determining color ===<br />
<br />
The color of an intersection point is determined by several factors: the diffuse color of the object itself (and the angle of view), the specular influence of the light sources and the other objects, which might be blocking the light from some light sources. The functions diff, spec and shadePt do the necessary calculations to determine the color at a given intersection point. For more details on how this is done, we point to [[#raytrac2 3]].<br />
<br />
{{{#!syntax haskell<br />
diff :: (Maybe Intersection) -> Light -> Color<br />
diff _ (AmbientLight _) = (0.0,0.0,0.0)<br />
diff i (PointLight pos int) = (int *> ((mkNormVect (intPt i) pos) *. (normalAt i))) <*> (colorAt i)<br />
<br />
spec :: (Maybe Intersection) -> Vector -> Light -> Color<br />
spec _ _ (AmbientLight _) = (0.0,0.0,0.0)<br />
spec i d (PointLight pos int) = int *> (reflCoef * ( ((normalAt i) *. h)**(fromIntegral specCoef) ))<br />
where<br />
h = norm ((d *> (-1)) <+> (mkNormVect (intPt i) pos))<br />
(Texture _ reflCoef specCoef _ _) = intText i <br />
<br />
shadePt :: Intersection -> Vector -> [TexturedObject] -> Light -> Color <br />
shadePt i d o (AmbientLight int) = int<br />
shadePt i d o l@(PointLight pos int)<br />
| s = (0.0,0.0,0.0)<br />
| otherwise = (diff (Just i) l) <+> (spec (Just i) d l)<br />
where <br />
s = not (isNothing i_s) && (intDist i_s) <= dist (intPt (Just i)) pos <br />
i_s = intersect (mkRay (intPt (Just i)) pos) o <br />
}}}<br />
<br />
=== Recursive functions ===<br />
<br />
The reflection and refraction effects are basically contributions of the other objects in the scene to the color of an intersection point. In the raytracing algorithm, this is modelled pretty easily: just look at the other objects in the scene from the intersection, and add the color observed at that point to the color of the intersection (of course taking into account the reflection/refraction component of the texture of the object). In other words: use recursion.<br />
<br />
{{{#!syntax haskell<br />
reflectPt :: Int -> Intersection -> Vector -> [TexturedObject] -> [Light] -> Color <br />
reflectPt depth i d = colorPt depth (Ray (intPt (Just i)) (reflectDir d (normalAt (Just i)))) (0.0,0.0,0.0) <br />
<br />
refractPt :: Int -> Intersection -> Vector -> Color -> [TexturedObject] -> [Light] -> Color <br />
refractPt depth i d b = if refractedDir == (0.0,0.0,0.0) then (\x y -> (0.0,0.0,0.0)) <br />
else colorPt depth (Ray (intPt (Just i)) refractedDir) (b *> refrCoef) <br />
where <br />
refractedDir = refractDir d (normalAt (Just i)) refrIndex <br />
(Texture _ _ _ refrCoef refrIndex) = intText (Just i) <br />
}}}<br />
<br />
=== Core functions ===<br />
<br />
To determine the actual color at some pixel of the image, we shoot a ray through the pixel from some point and add all the different components (object, reflection, refraction) together. To avoid infinite recursion (rays bouncing between two points without changing direction), we use a recursion depth parameter. <br />
<br />
{{{#!syntax haskell<br />
colorPt :: Int -> Ray -> Color -> [TexturedObject] -> [Light] -> Color<br />
colorPt (-1) _ _ _ _ = (0.0, 0.0, 0.0) <br />
colorPt d r@(Ray _ dir) b o l = if (isNothing i) then b else clip $ shadeColor <+> reflectColor <+> refractColor<br />
where <br />
shadeColor = foldl (<+>) (0.0,0.0,0.0) (map (shadePt (fromJust i) dir o) l)<br />
reflectColor = if (reflCoef == 0.0) then (0.0, 0.0, 0.0) <br />
else (reflectPt (d-1) (fromJust i) dir o l) *> reflCoef <br />
refractColor = if (refrCoef == 0.0) then (0.0, 0.0, 0.0) <br />
else (refractPt (d-1) (fromJust i) dir b o l) *> refrCoef <br />
i = intersect r o <br />
(Texture _ reflCoef _ refrCoef _) = intText i <br />
}}}<br />
<br />
Translating the actual algorithm to Haskell is really simple: for every pixel, map it to the view window, create a ray and determine the color.<br />
<br />
{{{#!syntax haskell<br />
rayTracePt :: Int -> Scene -> Point3D -> Color <br />
rayTracePt d (Scene (Camera eye _) b o l) p = colorPt d (Ray p (mkNormVect eye p)) b o l <br />
<br />
rayTrace :: Int -> Resolution -> Scene -> Image <br />
rayTrace d r s@(Scene (Camera _ dim) _ _ _) = (rayTracePt d s) . (mapToWin r dim) <br />
}}}<br />
<br />
== Perlin noise textures ==<br />
<br />
To keep this article from becoming too long, we won't elaborate on the implementation of the Perlin noise textures. The code is freely available on [http://scannedinavian.org/~boegel/HRay], together with a gallery of images produced by the raytracer. If you want more information on the technique, please contact me (again, see the website).<br />
<br />
== A scene description parser ==<br />
<br />
To support the creation of scenes, we created a small, simple parser using Happy. Several example files are included with the source code on the website, which should be enough to get you started creating your own scenes. Again, because of space limitations, we won't elaborate on the parser implementation.<br />
<br />
== User interfaces ==<br />
<br />
To simplify usage of the raytracer, two simple interfaces were created. First, a very basic command line interface is provided, which gives limited feedback (render time, parse errors). Besides that, a simple GUI was created using Gtk2Hs, a Gtk binding for Haskell. This provides a bit more feedback, but requires that Gtk2Hs [http://haskell.org/gtk2hs] be installed. For more information on Gtk2Hs, one should read my article published in the first issue of The Monad.Reader [http://www.haskell.org/hawiki/TheMonadReader/IssueOne]. <br />
<br />
== Performance issues ==<br />
<br />
While implementing the raytracer, performance was not a priority. The goal was not to implement the fastest possible raytracer imaginable, but to show how elegant and clear a Haskell implementation can be. Because of this, a lot of improvements can be made to the existing code. Everyone who feels like doing so is free to use the code and make the necessary adjustments. <br />
<br />
== Conclusions ==<br />
<br />
Recently, Jean-Philippe Bernardy added metaball support to HRay (check the gallery at http://scannedinavian.org/~boegel/HRay for images). The implementation sticks to the Haskell98 standard. Several improvements are possible, including the use of type classes, which would have required extensions to Haskell98 (more specifically, existential types). Contributions, remarks and/or bug reports are welcome. Have fun rendering and hacking!<br />
<br />
== References ==<br />
<br />
[[Anchor(funmath)]]<br />
[1] R.T. Boute "Concrete Generic Functionals: Principles, Design and Applications". Generic Programming 89-119, Kluwer, 2003<br />
<br />
[[Anchor(raytrac1)]]<br />
[2] POV-Ray: Hall of Fame (http://www.povray.org/community/hof/)<br />
<br />
[[Anchor(raytrac2)]]<br />
[3] M. Slater, A. Steed and Y. Chrysanthou "Computer Graphics and Virtual Environments - From Realism to Real-Time". Addison-Wesley, 2001<br />
<br />
----<br />
CategoryArticle</div>
WouterSwierstra
https://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue5/Generating_Polyominoes&diff=20840
The Monad.Reader/Issue5/Generating Polyominoes
2008-05-09T14:25:09Z<p>WouterSwierstra: </p>
<hr />
<div>'''This article needs reformatting! Please help tidy it up.'''--[[User:WouterSwierstra|WouterSwierstra]] 14:25, 9 May 2008 (UTC)<br />
<br />
= Generating Polyominoes =<br />
== A simple algorithm for enumerating the polyominoes of a given rank, implemented in Haskell ==<br />
<br />
Readers of Arthur C. Clarke's novel ''Imperial Earth'' may remember the "Pentomino" puzzle that beguiles the book's protagonist, Duncan Mackenzie. The object of the puzzle is to arrange twelve puzzle pieces, each made of five congruent squares, in a rectangular box. For a box with the dimensions six squares by ten, there are many possible solutions; for a box with the dimensions three squares by twenty, there are only two. The young Duncan Mackenzie's initial encounter with the six-by-ten version of the puzzle teaches him a lesson about the uses of intuition in mathematical problem solving. Solving the harder three-by-twenty version requires that intuition be supplemented by technique: it is a task for a more developed formal imagination.<br />
<br />
In this article, we will take a first step towards the construction of a solver for the pentomino puzzle. We will begin with the task of ''enumerating'' the pentominoes, and other similar shapes. Duncan's puzzle has twelve pieces, and these pieces are the twelve distinct "free" pentominoes; that is, they are all the shapes that can be made by glueing five squares together orthogonally along their edges, that are different from each other even when rotated, reflected and/or translated. For example, under this definition, the following shapes are all the "same" free pentomino (the "F" pentomino):<br />
<br />
{{{<br />
** * * * ** * * * <br />
** *** ** *** ** *** ** ***<br />
* * ** * * * ** *<br />
}}}<br />
<br />
If we wanted to find out what the other eleven free pentominoes were, we would need both a way of generating new candidate pentominoes, and a way of recognising when one of these new candidates was the same - under rotation, reflection and/or translation - as a pentomino we had already found.<br />
<br />
The set of twelve pentominoes belongs to a larger set of "polyforms" known as polyominoes. The name "polyominoes" was first given to these shapes by the mathematician Solomon W. Golomb, whose book ''Polyominoes'' is a fascinating introduction to the subject of polyomino puzzles and techniques for solving them. Probably the best-known polyominoes are the five "tetrominoes" that appear in the game "Tetris":<br />
<br />
{{{<br />
**** ** *** ** ***<br />
** * ** *<br />
}}}<br />
As their name suggests, each of the tetrominoes is made of four squares orthogonally connected along their edges. Every pentomino is a tetromino with an additional square attached to one of its constituent squares along an edge not already taken by another square. In the same manner, every tetromino is a ''tromino'' - a polyomino made of three squares, or a polyomino of rank three - with an additional square attached to it. Every tromino is a domino with an extra square, and a domino is a monomino with an extra square.<br />
<br />
Given a definition of the polyominoes of rank zero as an empty set, and the polyominoes of rank one as a set containing a single monomino, we can define the polyominoes of rank ''n'' as all of the polyominoes that can be created by adding an extra square to some polyomino of rank ''n-1'', that are different from one another under translation, rotation and reflection. This definition gives us the outline of a simple algorithm for enumerating all of the polyominoes of rank ''n''.<br />
<br />
== Implementing the algorithm in Haskell ==<br />
<br />
Implementing such an algorithm in Haskell involves generating a list of candidate polyominoes of rank ''n'', based on a list of known polyominoes of rank ''n-1'', and removing from that list all polyominoes that are found to be the same as another included polyomino after they have been translated, rotated and/or reflected. The module described below does just that, providing a function, {{{rank}}}, that generates all the polyominoes of a given rank. The approach taken by this module is the least efficient of the three discussed in the [http://en.wikipedia.org/wiki/Polyomino wikipedia article on polyominoes], but it is easy to follow and provides a nice illustration of the usefulness of function composition in expressing algorithms in Haskell.<br />
<br />
We begin with the module declaration, some imports, and a couple of types:<br />
<br />
{{{<br />
<br />
> module Generator (rank) where<br />
<br />
> import List (sort)<br />
> import Data.Set (setToList, mkSet)<br />
<br />
> type Point = (Int, Int)<br />
> type Polyomino = [Point]<br />
<br />
}}}<br />
<br />
In order to compare two candidate polyominoes and determine whether they are the same, we introduce some functions that will convert any candidate polyomino into a normalised, "canonical" form. The first kind of normalisation we perform is to translate the candidate polyomino such that its bottom and left edges are aligned with the x and y axes:<br />
<br />
{{{<br />
<br />
> minima :: Polyomino -> Point<br />
> minima (p:ps) = foldr (\(x, y) (mx, my) -> (min x mx, min y my)) p ps<br />
<br />
> translateToOrigin :: Polyomino -> Polyomino<br />
> translateToOrigin p =<br />
> let (minx, miny) = minima p in<br />
> map (\(x, y) -> (x - minx, y - miny)) p<br />
<br />
}}}<br />
<br />
The second kind of normalisation we perform is to take all of the rotated and reflected forms of the translated polyomino, and sort them in order to find the "bottommost and leftmost" form.<br />
<br />
{{{<br />
<br />
> rotate90 :: Point -> Point<br />
> rotate90 (x, y) = (y, -x)<br />
<br />
> rotate180 :: Point -> Point<br />
> rotate180 (x, y) = (-x, -y)<br />
<br />
> rotate270 :: Point -> Point<br />
> rotate270 (x, y) = (-y, x)<br />
<br />
> reflect :: Point -> Point<br />
> reflect (x, y) = (-x, y)<br />
<br />
> rotationsAndReflections :: Polyomino -> [Polyomino]<br />
> rotationsAndReflections p =<br />
> [p,<br />
> map rotate90 p,<br />
> map rotate180 p,<br />
> map rotate270 p,<br />
> map reflect p,<br />
> map (rotate90 . reflect) p,<br />
> map (rotate180 . reflect) p,<br />
> map (rotate270 . reflect) p]<br />
<br />
> canonical :: Polyomino -> Polyomino<br />
> canonical = minimum . map (sort . translateToOrigin) . rotationsAndReflections<br />
<br />
}}}<br />
The function {{{canonical}}} is constructed by composing together other functions, so as to create a kind of pipeline of transformations which are applied to an initial value in right-to-left order. Thus, {{{rotationsAndReflections}}} takes a polyomino and returns a list of all the rotated and reflected forms of that polyomino. The next stage in the pipeline uses {{{map}}} to apply the composed function {{{sort . translateToOrigin}}} to each of these forms in turn, translating them so that their bottommost and leftmost square is in the position (0, 0) and sorting the list of points that makes up each polyomino so that the bottommost and leftmost square appears first in the list, and the rightmost and topmost appears last. We then use {{{minimum}}} to take the lowest-ordered polyomino in the resulting list of translated and internally-sorted polyominoes.<br />
<br />
Given a polyomino of rank ''n'', we would like to know what polyominoes of rank ''n+1'' can be generated by attaching another point to it. We therefore need to find all the unique places where another point can be attached. This definition of {{{unique}}} is efficient enough for short lists:<br />
<br />
{{{<br />
<br />
> unique :: (Eq a) => [a] -> [a]<br />
> unique [] = []<br />
> unique (x:xs) = foldr (\y ys -> if y `elem` ys then ys else y:ys) [x] xs<br />
<br />
}}}<br />
<br />
We also define an alternative {{{unique'}}} function for removing duplicates from a long list of polyominoes. This alternative function simply converts a list to a set, and then back into a list:<br />
<br />
{{{<br />
<br />
> unique' :: (Ord a) => [a] -> [a]<br />
> unique' = setToList . mkSet<br />
<br />
}}}<br />
<br />
The function {{{contiguous}}} returns all the orthogonally adjacent points of a given point.<br />
<br />
{{{<br />
<br />
> contiguous :: Point -> [Point]<br />
> contiguous (x, y) =<br />
> [(x - 1, y),<br />
> (x + 1, y),<br />
> (x, y - 1),<br />
> (x, y + 1)]<br />
<br />
}}}<br />
<br />
Given these two functions, we can find the contiguous points for each point in a polyomino. We're only interested in points that fall outside of the original polyomino, so we filter out any that are already taken.<br />
<br />
{{{<br />
<br />
> newPoints :: Polyomino -> [Point]<br />
> newPoints p =<br />
> let notInP = filter (not . flip elem p) in<br />
> unique . notInP . concatMap contiguous $ p<br />
<br />
}}}<br />
<br />
Now we can generate a list of new polyominoes using {{{newPoints}}}. We'll put them all into canonical form, and only take the unique ones.<br />
<br />
{{{<br />
<br />
> newPolys :: Polyomino -> [Polyomino]<br />
> newPolys p = unique . map (canonical . flip (:) p) $ newPoints p<br />
<br />
}}}<br />
<br />
Again, this function is composed out of smaller functions: {{{newPoints}}} feeds its results to a function that adds each new point to the initial polyomino and returns the resulting new polyomino in canonical form, and {{{unique}}} then removes all duplicates from the resulting list of new polyominoes.<br />
<br />
We now define the first two ranks of polyominoes. The zeroth rank of polyominoes is the empty list. The first rank of polyominoes is the monominoes, which is a list containing a single element:<br />
<br />
{{{<br />
<br />
> monomino = [(0, 0)]<br />
> monominoes = [monomino]<br />
<br />
> rank :: Int -> [Polyomino]<br />
> rank 0 = []<br />
> rank 1 = monominoes<br />
<br />
}}}<br />
<br />
The next rank of polyominoes can be generated from the rank before it. We find all the new polyominoes of rank ''n'' that can be generated from each<br />
polyomino of rank ''n-1'', concatenate them together into a single list, and throw out any duplicates.<br />
<br />
{{{<br />
<br />
> rank n = unique' . concatMap newPolys $ rank (n - 1)<br />
<br />
}}}<br />
<br />
That is the entire algorithm!<br />
<br />
== Conclusions ==<br />
<br />
The above program uses function composition to express an algorithm as a pipeline made up of three kinds of function: functions of type {{{a -> [a]}}} that generate new values from some source value, functions of type {{{[a] -> [a]}}} that filter a list of values to remove duplicate or unwanted values, and functions of type {{{[a] -> a}}} that extract a single result from a list of values. Many algorithms based on the blind generation of a list of new candidate results, which is then filtered to extract only the valid answers, can easily be expressed in this style. It also lends itself to recursion, as each new generation of results can be fed back into the pipeline to produce another generation.<br />
<br />
Haskell's lazy evaluation and garbage collection can help to minimize the creation and retention of large data structures during the course of such pipelined processing. However, there are limitations to how much work lazy evaluation can help us to avoid. In the function {{{canonical}}} in the program above, for example, the entire list consumed by {{{minimum}}} must be constructed before the lowest-ordered value can be found.<br />
<br />
As I mentioned earlier, this is not by any means the best algorithm for enumerating polyominoes. It is better if possible to avoid generating values that will only be thrown away, as these still have to be checked before being discarded. It is worth looking for a heuristic that can limit the number of invalid or duplicate candidate results generated, as this will often have a greater impact on performance than any optimisation that can be applied to the validation stage.<br />
<br />
=== For Further Investigation ===<br />
<br />
Can an algorithm for enumerating polyforms made of cubes be constructed as easily as one for enumerating polyforms made of squares? How about polyforms of arbitrary dimension? And what about two-dimensional polyforms made out of other shapes, such as hexagons?<br />
----<br />
CategoryArticle</div>
WouterSwierstra
https://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue5/Different_Language&diff=20839
The Monad.Reader/Issue5/Different Language
2008-05-09T14:24:34Z<p>WouterSwierstra: </p>
<hr />
<div>'''This article needs reformatting! Please help tidy it up.'''--[[User:WouterSwierstra|WouterSwierstra]] 14:24, 9 May 2008 (UTC)<br />
<br />
Haskell: A Very Different Language<br />
by John Goerzen<br />
<br />
First published in Free Software Magazine, June, 2005.<br />
<br />
Many programmers are fluent in several programming languages. Most of these languages have a lot in common: loops and variables, for instance, are fundamental features of nearly all of them.<br />
I want to show you a different way of solving problems. Haskell takes a different approach from the one you're used to -- to just about everything.<br />
<br />
= Why Haskell is interesting =<br />
There are quite a few things about Haskell that make it interesting and unique.<br />
Haskell has no loops because it doesn't need them. There is no for or while in Haskell.<br />
<br />
Haskell has no loops because it doesn't need them.<br />
<br />
Haskell has no equivalent of the variables that you're used to; it doesn't need them, either.<br />
Haskell is a functional language. In a language like Java or Python, your primary view of the world is an object. In Haskell, your primary view of the world is a function. I like to say that Haskell manipulates functions with the same ease that Perl manipulates strings. In Haskell, it's commonplace to pass around bits of code. This is a powerful concept.<br />
<br />
Haskell manipulates functions with the same ease that Perl manipulates strings.<br />
<br />
Haskell functions are also pure. Every time they're called with the same arguments, they'll return the same result. Functions in most languages can return different results each time they're called. The results may depend on things like a global counter or I/O. Haskell functions also have no side-effects. They won't stomp over a global variable.<br />
<br />
Haskell is a lazy language. It never performs a computation unless it needs to. This is not just an optimization; it is a powerful way to view the world. Constructs that would be infinite loops or consume vast amounts of memory in other languages are simple, everyday tools in Haskell.<br />
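As a small illustration -- a sketch assuming GHC or Hugs, not code from the article -- an infinite list is an ordinary value in Haskell, because only the demanded prefix is ever computed:<br />

```haskell
-- An infinite list of Fibonacci numbers. Defining it is safe,
-- because Haskell evaluates only the elements we actually demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- prints [0,1,1,2,3,5,8,13,21,34]
```

Asking for the eleventh element later would compute exactly one more addition; the rest of the list remains an unevaluated recipe.<br />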
<br />
Haskell can be either interpreted or compiled to native machine code. It also interfaces easily with C. You can call C functions from Haskell with a minimum of hassle. Usually, you'll only need 2 or 3 lines of code to accomplish the call. Haskell also has interfaces to Java, .NET, and Python.<br />
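For the curious, here is roughly what such a call looks like (a sketch written for a modern GHC, so the language pragma differs from the tooling of 2005):<br />

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- Two lines suffice to make C's cos from math.h callable from Haskell.
foreign import ccall "math.h cos" c_cos :: CDouble -> CDouble

main :: IO ()
main = print (c_cos 0)  -- prints 1.0
```
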
<br />
Haskell lets you write code in a surprisingly intuitive way. Reading Haskell code is easy, and reasoning about Haskell code is easy, too. You'll have less of a need for a debugger with Haskell.<br />
<br />
To get you started, here's an example for a simplistic grep, written in Haskell:<br />
{{{#!syntax haskell<br />
import MissingH.List<br />
<br />
main = do<br />
c <- getContents<br />
putStr (unlines (filter (\line -> contains "Haskell" line) (lines c)))<br />
}}}<br />
This will simply read data from standard input and display all lines containing the word "Haskell" on standard output. I'll go through this example with you in more detail and show you how it works.<br />
= The Haskell toolbox =<br />
To get started with Haskell, you'll need a compiler or interpreter. The most popular compiler is GHC, available from http://www.haskell.org/ghc/. Some Linux or BSD distributions have GHC packages available; look for a package named ghc or ghc6. If your operating system doesn't have packages available, sources and binaries for many systems are available from the GHC homepage.<br />
<br />
The GHC package actually includes a compiler (ghc) and an interpreter (ghci). Use whichever you like. If you prefer a smaller package that includes only an interpreter, try Hugs from http://www.haskell.org/hugs/. Many distributions also contain Hugs packages.<br />
<br />
Both GHC and Hugs come with a basic library of Haskell code called fptools. A reference is [http://www.haskell.org/ghc/docs/latest/html/libraries/index.html available] from GHC's site.<br />
<br />
The examples in this article will also use functions from MissingH, a library of useful functions written in Haskell. MissingH can be downloaded from http://quux.org/devel/missingh. Many other Haskell libraries are also available for your use. See the links at the end of this article for more information.<br />
<br />
To compile a Haskell program with ghc, you could use a command such as {{{ghc --make -o program program.hs}}}. The examples here use MissingH, so you'll need to add {{{-package MissingH}}} at the beginning of your ghc command line. You can run Haskell programs with Hugs by saying {{{runhugs program.hs}}}.<br />
<br />
= Laziness at work =<br />
The grep example at the beginning of this chapter probably doesn't make much sense yet. Here's another version of it that does exactly the same thing, but breaks down the code into more manageable pieces:<br />
{{{#!syntax haskell<br />
import MissingH.List<br />
<br />
filterfunc line = contains "Haskell" line<br />
<br />
main = do<br />
c <- getContents<br />
let inputlines = lines c<br />
let outputlines = filter filterfunc inputlines<br />
let outputstring = unlines outputlines<br />
putStr outputstring<br />
}}}<br />
Let's analyze this version. First, I import the MissingH.List module. This module has the contains function that we'll be using.<br />
<br />
Next, I create a function named filterfunc. It takes one parameter, line. It calls the contains function, passing it two arguments: the string "Haskell" and line. The contains function returns a boolean value (Bool type in Haskell). So, filterfunc takes a string and returns a Bool.<br />
<br />
Next, you see the main function. This is the entry point to the Haskell program, similar to main() in C programs. In Haskell, main takes nothing and returns an IO action. Actions will be covered in more detail later.<br />
<br />
The main function starts by calling getContents. This returns the entire contents of standard input as a string. getContents is an IO action, so we use the <- operator to cause c to represent the result of evaluating the action.<br />
<br />
Next, we set up several Haskell variables. The inputlines variable holds a list of strings. Each string represents one line from the input. The lines function takes a string, separates it by newline characters, and returns a list of the component lines.<br />
<br />
The outputlines variable also holds a list of strings. It calls filter to eliminate all lines that don't contain "Haskell". filter is a function that takes a function as an argument. In this case, we pass along our own filterfunc. filter returns only those elements from the input list for which the passed function returns a True value. This model is quite popular in Haskell, and is a very simple illustration of passing functions around.<br />
<br />
Then, the unlines function is called to combine this list of lines back into a string. Finally, this resulting string is printed.<br />
<br />
If you look at this program from a traditional perspective, you'll think that this is a poorly-written program. You might think that it starts by reading the entire file into memory -- a bad thing if your file is huge. Not so in Haskell.<br />
<br />
In Haskell, a string is a list of characters. Because Haskell is lazy, elements of a list are only evaluated when their contents are required for computation. And they can be garbage-collected whenever the compiler knows they won't be needed again. So, when you see c <- getContents, nothing actually happens right then.<br />
<br />
In fact, nothing at all happens until the very last line in the program. That line demands the content of outputstring, which in turn pulls on the chain of definitions until it reaches getContents. It's only at that point that input is read.<br />
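A one-line program makes the same point (a sketch using only standard library functions): interact consumes standard input lazily and produces output as it is demanded, so it runs in constant memory even on enormous inputs.<br />

```haskell
import Data.Char (toUpper)

-- Upper-case standard input as a stream: nothing is read until the
-- output demands it, and already-consumed input can be garbage-collected.
main :: IO ()
main = interact (map toUpper)
```
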
<br />
= Types and patterns =<br />
Haskell is a strongly-typed language like Java or C. However, you probably noticed that I supplied no typing information at all in the grep example. That is because Haskell has another unique feature: type inference. This means that Haskell can automatically determine the type of a piece of data by looking at how it is created and used in a program. Haskell can still catch type errors at compile time, but it saves you from the effort of manually declaring types all the time.<br />
<br />
Haskell can automatically determine the type of a piece of data by looking at how it is created and used.<br />
<br />
You can manually declare types for clarity or if you wish to make the type more restrictive than the inferred type. Here's an example of the grep program with explicit types given:<br />
{{{#!syntax haskell<br />
import MissingH.List<br />
<br />
filterfunc :: String -> Bool<br />
filterfunc line = contains "Haskell" line<br />
<br />
main :: IO ()<br />
main = do<br />
c <- getContents<br />
putStr $ (unlines . filter filterfunc . lines) c<br />
}}}<br />
The declaration for filterfunc says that it takes a String and returns a Bool. If it took more parameters, you could put more arrows and types in the line; the very last one is the return value.<br />
<br />
Types are closely related to patterns. Let's say that we wanted to write our own filter. Here's a way it might be done:<br />
{{{#!syntax haskell<br />
import MissingH.List<br />
<br />
filterfunc :: String -> Bool<br />
filterfunc line = contains "Haskell" line<br />
<br />
myfilter :: (a -> Bool) -> [a] -> [a]<br />
myfilter _ [] = []<br />
myfilter f (x:xs) = <br />
if f x<br />
then x : myfilter f xs<br />
else myfilter f xs<br />
<br />
main :: IO ()<br />
main = do<br />
c <- getContents<br />
putStr $ (unlines . myfilter filterfunc . lines) c<br />
}}}<br />
The {{{myfilter}}} function is the new and interesting one here. Before I discuss how it works, there are several interesting things to note about its type declaration. This function is said to be polymorphic because it works on items of many different types. In this case, it can take a list of items of any type, together with a function over those items, and return a list of items of the same type. The {{{a}}} in the type declaration represents this. The first parameter to {{{myfilter}}} is itself a function, one that takes an item and returns a Bool. The second parameter is a list of items, and the return value is a list of the same type of items.<br />
<br />
Next, I declare the function itself. The line {{{myfilter _ [] = []}}} means that if you call myfilter with an empty list, it returns an empty list. The underscore is a wildcard and means I don't care what function is supplied. In fact, _ [] is a simple instance of pattern matching in Haskell.<br />
<br />
Next, you see {{{myfilter f (x:xs)}}}. In Haskell, the colon represents the list you get by adding a single item to the beginning of the list. So, this pattern will put the first item of the list into x, and the rest of the list into xs. Note that xs may be empty if the list has only one item.<br />
<br />
Now, we call the passed function, passing in the current item. If the function returns True, we can think of the return value as being the current item plus the result of filtering the rest of the list. So, I say {{{x : myfilter f xs}}}. This becomes the return value; the function calls itself. This is recursion, and is the most common way to achieve in Haskell what would be looping in other languages.<br />
<br />
You can also define your own data types in Haskell. Here's an example:<br />
{{{#!syntax haskell<br />
data Maybe a = Nothing | Just a<br />
}}}<br />
This defines a new polymorphic type, Maybe a. You can create a value of type Maybe a in two ways. First, you could simply say Nothing. Secondly, you could say Just x, where x is some value of type a. Pattern matching works just as well with custom types as it does with built-in types.<br />
<br />
The Maybe type is, in fact, such a useful pattern in Haskell that it is defined for you in the Haskell Prelude -- the set of functions and types available to every Haskell program. Functions that may either compute a value or generate an error frequently use Nothing to indicate a problem, or Just x to indicate a successful calculation.<br />
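For instance (a sketch; safeDiv and describe are hypothetical names, not Prelude functions), a safe division function can use Maybe to report failure, and pattern matching takes the result apart:<br />

```haskell
-- safeDiv signals division by zero with Nothing instead of crashing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Pattern matching handles both constructors of Maybe.
describe :: Maybe Int -> String
describe Nothing  = "no result"
describe (Just n) = "result: " ++ show n

main :: IO ()
main = putStrLn (describe (safeDiv 10 2))  -- prints "result: 5"
```
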
<br />
= Functions =<br />
You've seen a little bit of how versatile functions are in Haskell already, when I passed a function to filter. Let's look at some other things you can do with functions.<br />
<br />
In the first grep example, you saw this: {{{\line -> contains "Haskell" line}}}. This declared a new function on the spot. The backslash begins a declaration. The function took one parameter (line), and calculates its result by applying the part on the right. Functions declared like this are often called anonymous functions because they are never bound to a name.<br />
<br />
As you've probably noticed, to call a function, you list its name and all parameters to it, separated by a space. There is a unique twist to that. The contains function is defined in MissingH with this type:<br />
{{{#!syntax haskell<br />
contains :: [a] -> [a] -> Bool<br />
}}}<br />
Since a String is a list of Chars in Haskell, this works well for filtering Strings.<br />
<br />
Let's say we call contains with only one argument. In most languages, that will generate an error. In Haskell, however, it returns a new function, with the leading arguments no longer needing to be specified. This is called partial application. So, the type of {{{contains "Haskell"}}} is {{{String -> Bool}}}. Note that the type isn't {{{[a] -> Bool}}}. Because the first argument was given as a String, we know the next argument must also be a String.<br />
<br />
So, instead of saying {{{\line -> contains "Haskell" line}}}, I could have said simply {{{contains "Haskell"}}}.<br />
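Partial application works just as well for functions you define yourself. Here is a sketch using the standard isInfixOf from Data.List, a close relative of MissingH's contains:<br />

```haskell
import Data.List (isInfixOf)

-- Supplying only the first argument yields a new, one-argument function.
hasHaskell :: String -> Bool
hasHaskell = isInfixOf "Haskell"

main :: IO ()
main = print (map hasHaskell ["I like Haskell", "I like C"])  -- [True,False]
```
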
<br />
Did you notice the last line of the last grep example looked unusual? That line was:<br />
{{{#!syntax haskell<br />
putStr $ (unlines . myfilter filterfunc . lines) c<br />
}}}<br />
The period is a function composition operator. In general terms, where f and g are functions, (f . g) x means the same as f (g x). In other words, the period is used to take the result from the function on the right, feed it as a parameter to the function on the left, and return a new function that represents this computation. The dollar sign is a bit of syntactic sugar that simply removes the need to put everything after putStr in parentheses.<br />
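You can check the equivalence yourself with a small sketch:<br />

```haskell
-- (f . g) x is the same as f (g x); ($) merely saves a pair of parentheses.
main :: IO ()
main = do
  print (length (words "glue for functional programs"))    -- 4
  print ((length . words) "glue for functional programs")  -- 4
  print (length . words $ "glue for functional programs")  -- 4
```
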
<br />
= Variables =<br />
Recall that I said that Haskell has no variables in the conventional sense. You might be wondering about the let statements in the second grep example. Haskell does have "variables", and let is one way to declare them. A Haskell variable doesn't hold a value and can't be modified. Instead, a Haskell variable tells the compiler, "if you ever need to know the value of x, here's how you calculate it." Assigning something to a variable doesn't cause it to be calculated; in fact, if the value is never needed, it will never be calculated. Thus, a variable in Haskell is just a shortcut, similar to a macro in some other languages.<br />
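A sketch of this behaviour: a let-bound variable is a recipe, not a stored value, so an unused one costs nothing.<br />

```haskell
main :: IO ()
main = do
  -- 'huge' is bound but never demanded, so it is never computed;
  -- this program finishes instantly.
  let huge = product [1 .. 1000000] :: Integer
  let small = 2 + 2 :: Int
  print small  -- prints 4
```
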
<br />
= Monads and I/O =<br />
You've seen a very small bit of the power of functions so far. Monads are used to chain computations together in a way reminiscent of the period operator, feeding the result of one to the input of the next. However, monads provide more capabilities. For instance, a monad can abort the processing of an entire chain when there is a problem anywhere along it. The Maybe monad, for instance, can receive Just 5 from one function, pass 5 to the next, receive Just 6 from it, pass 6 to the next, and continue doing that across many functions. If any function returns Nothing, the computation stops, and the result of the entire computation becomes Nothing. Otherwise, the result of the entire computation is the result of the last function in the chain.<br />
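As a sketch of that short-circuiting, here is an invented function, half, that succeeds only on even numbers, chained with the Maybe monad's >>= operator:

{{{#!syntax haskell
-- half succeeds only on even numbers; an odd number stops the chain.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

main :: IO ()
main = do
  print (Just 20 >>= half >>= half)            -- two successful steps
  print (Just 20 >>= half >>= half >>= half)   -- third step fails: 5 is odd
}}}

The first chain ends in Just 5; the second hits an odd number partway through, so the whole thing collapses to Nothing.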
<br />
I/O was historically a tricky problem for pure languages like Haskell. A function that reads data from the keyboard obviously can't be guaranteed to return the same thing each time it is invoked.<br />
<br />
In Haskell, the IO monad is used. The IO type is opaque, meaning that a Haskell program can't see "inside" it. By using constructs like <-, however, values can be read and written. Inside a do block, the <- operator extracts the value from inside a monadic action and binds it to a variable. If you were using the Maybe monad and wrote x <- Just 5, then x would be bound to 5.<br />
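A minimal sketch of <- in an IO do block; return wraps a pure value in an IO action, and <- extracts it again:

{{{#!syntax haskell
main :: IO ()
main = do
  greeting <- return "hello"       -- <- binds the value inside the IO action
  putStrLn (greeting ++ ", world")
}}}

The same <- syntax works with real I/O actions like getLine, which is why reading from the keyboard looks just like this.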
<br />
The IO monad is inescapable, however. Once you call IO functions, your return value will be in the IO monad. That is, your return type might be IO Int or IO String. This provides a neat way of segmenting impurities.<br />
<br />
Typically, Haskell programs are structured so that the outermost layers run in the IO monad, while the pure computations are kept outside of it. The main function has type IO () -- an I/O action whose result is the empty value (). So, to execute a Haskell program, the runtime system simply evaluates the I/O action that main represents, calling other functions as needed along the way.<br />
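This structure can be sketched in a few lines, using an invented pure function as the "core" and main as the thin IO layer around it:

{{{#!syntax haskell
-- summarize is pure: no IO appears in its type, so it lives outside the monad.
summarize :: [Int] -> String
summarize xs = "total = " ++ show (sum xs)

-- main is the thin IO layer that calls into the pure code.
main :: IO ()
main = putStrLn (summarize [1, 2, 3])
}}}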
<br />
= Typeclasses: OOP in reverse =<br />
Object-oriented programming (OOP) is a fixture of many languages. OOP, in general, permits you to write code that accepts an object or any child of that object. It's one way of conceptualizing the world in code.<br />
<br />
Haskell provides something similar called typeclasses. Typeclasses let your functions take data of any type, so long as a particular interface for that type exists. Instead of preventing us from accessing the internal representation of data in an object, typeclasses instead provide a way to handle many different types of data in a generic way.<br />
<br />
For instance, there is a built-in function called show. The show function can generate a string representation from many different data types. Its type is this:<br />
{{{#!syntax haskell<br />
show :: Show a => a -> String<br />
}}}<br />
You can read this as "The show function takes any value of type a, such that a is part of the typeclass Show, and returns a String." You can say show "Hi", or show 5.0, or even show True, and get a valid String.<br />
<br />
You can add your own data types to the Show typeclass very easily:<br />
{{{#!syntax haskell<br />
data MyType = Red | Blue<br />
<br />
instance Show MyType where<br />
    show Red  = "Red"<br />
    show Blue = "Blue"<br />
}}}<br />
The Show class itself could be defined like this in the Prelude:<br />
{{{#!syntax haskell<br />
class Show a where<br />
    showsPrec :: Int -> a -> ShowS<br />
    show :: a -> String<br />
    showList :: [a] -> ShowS<br />
<br />
    showsPrec _ x s = show x ++ s<br />
    show x = showsPrec 0 x ""<br />
    showList = ...<br />
}}}<br />
Here, you can see that an instance of Show could provide up to three functions. However, defaults are provided in terms of one another, so really, defining just one of them (show or showsPrec) is required.<br />
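As a runnable restatement of the instance above, here is MyType again with a main that uses it:

{{{#!syntax haskell
data MyType = Red | Blue

instance Show MyType where
  show Red  = "Red"
  show Blue = "Blue"

main :: IO ()
main = do
  putStrLn (show Red)
  print Blue            -- print x is defined as putStrLn (show x)
}}}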
<br />
Typeclasses are powerful abstractions in Haskell. The Num typeclass, for instance, is used to provide an abstraction of arithmetic operators. The type of (+), the function representing the + operator, is {{{Num a => a -> a -> a}}}. Numeric types are all instances of Num, and thus + can be used with many different types of numbers. You can invent your own numeric types and, by simply making them instances of Num, all existing numeric operators will work with them.<br />
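As a sketch of that last point, here is an invented two-element vector type made an instance of Num. Vec is purely illustrative, and this is not a complete, law-abiding numeric type -- just enough to make (+) work:

{{{#!syntax haskell
-- Vec is an invented 2-D vector type, shown only to illustrate the idea.
data Vec = Vec Int Int deriving Show

instance Num Vec where
  Vec a b + Vec c d = Vec (a + c) (b + d)
  Vec a b * Vec c d = Vec (a * c) (b * d)
  negate (Vec a b)  = Vec (negate a) (negate b)
  abs (Vec a b)     = Vec (abs a) (abs b)
  signum (Vec a b)  = Vec (signum a) (signum b)
  fromInteger n     = Vec (fromInteger n) (fromInteger n)

main :: IO ()
main = print (Vec 1 2 + Vec 3 4)
}}}

Nothing about (+) had to change; it picks up the new behavior through the instance.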
<br />
= Conclusion =<br />
Haskell is a powerful and flexible language. Its approach to solving problems is unique and refreshing. The ability to combine functions is powerful and time-saving. There is a great deal of power in Haskell that is easily tapped, but a magazine article such as this can just barely scratch the surface. I encourage you to seek out more detailed resources about Haskell.<br />
<br />
= For more information =<br />
Here are some resources for more information on Haskell.<br />
<br />
For general information, look at:<br />
*The Haskell home page, http://www.haskell.org<br />
*The Haskell wiki, http://www.haskell.org/hawiki<br />
*Tutorials and references, http://www.haskell.org/learning.html<br />
*Yet Another Haskell Tutorial, in my opinion the best Haskell tutorial available; http://www.isi.edu/%7Ehdaume/htut/<br />
Libraries and code:<br />
*Haskell at Freshmeat, http://freshmeat.net/browse/834/<br />
*Libraries in Haskell, http://haskell.org/libraries<br />
*Applications in Haskell, http://haskell.org/practice.html<br />
*Libraries wiki page, http://haskell.org/hawiki/LibrariesAndTools<br />
----<br />
CategoryArticle</div>WouterSwierstra