Programming with Arrows, also written by John Hughes. A more recent paper on arrows, and a very didactic one, introducing the arrow subclasses with detailed examples and rich explanations of the motivation behind each decision.
Programming:Haskell arrows (an article in the English Wikibooks) is not only a good introduction; it also presents a funny metaphor for arrows: the factory/conveyor belt metaphor. We know this image from monads, but here it is adapted for arrows, too.
ProdArrows -- Arrows for Fudgets, written by Magnus Carlsson, is also good general material on the arrow concept (and good for seeing how arrows can be used to implement stream processors and Fudgets).
See also Research papers/Monads and arrows.
Control.Arrow is the standard library for arrows.
Arrow transformer library (see the bottom of the page) is an extension with arrow transformers, subclasses, and useful data types (Data.Stream, Data.Sequence).
Various concepts follow here, which can be seen as concrete examples covered by the arrow concept. Not all of them provide links to Haskell-related materials: some are here only to keep the material self-contained (e.g. section #Automaton gives links only on the finite state concept itself).
An arithmetic expression grammar can be represented with arrow parsers this way:
data Expr = Plus Expr Expr
          | Minus Expr Expr
          | ...

expr :: ParseArrow () Expr
expr = proc () -> do
    t <- term -< ()
    exprTail -< t

exprTail :: ParseArrow Expr Expr
exprTail = proc e -> do
        symbol PLUS -< ()
        t <- term -< ()
        exprTail -< Plus e t
    <+> do
        symbol MINUS -< ()
        t <- term -< ()
        exprTail -< Minus e t
    <+> returnA -< e
The thing that took me a long time to understand about arrow parsers is their sort of differential approach, in contrast to the well-known parser approaches. (In some way, well-known parsers are of a differential approach too, in the sense that they manage state transitions where the states are remainder streams. But here I mean differential in another sense: arrow parsers seem differential in the way they consume and produce values, i.e. their input and output.)
The idea of borrowing this image from mathematical analysis comes from another topic: Integrals and derivatives, an article on version control systems written by Martin Pool, uses a similar image.
Arrows and Computation, written by Ross Paterson (pages 2, 6, 7), and ProdArrows -- Arrows for Fudgets, written by Magnus Carlsson (page 9), mention that computation (e.g. state) is threaded through the operands of &&&.
I mean, even the mere definition of
p &&& q = arr dup >>> first p >>> second q
shows that the order of the computations (the side effects) is important when using &&&, and this can be exemplified very well with parser arrows. See an example found in PArrows, written by Einar Karttunen (see module
-- | Match zero or more occurrences of the given parser.
many :: MD i o -> MD i [o]
many = MStar

-- | Match one or more occurrences of the given parser.
many1 :: MD i o -> MD i [o]
many1 x = (x &&& MStar x) >>> pure (\(b,bs) -> (b:bs))
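The left-to-right effect order of &&& can be made visible with a small hand-rolled arrow that threads a log. This is only a sketch; the Logged type and say are illustrative and not part of PArrows:

```haskell
import Prelude hiding (id, (.))
import Control.Category (Category(..))
import Control.Arrow (Arrow(..), (&&&))

-- A toy arrow that threads a log of messages, making effect order visible.
newtype Logged i o = Logged { runLogged :: i -> ([String], o) }

instance Category Logged where
  id = Logged (\x -> ([], x))
  Logged g . Logged f = Logged $ \x ->
    let (l1, y) = f x
        (l2, z) = g y
    in (l1 ++ l2, z)

instance Arrow Logged where
  arr f = Logged (\x -> ([], f x))
  first (Logged f) = Logged $ \(x, c) ->
    let (l, y) = f x in (l, (y, c))

-- An arrow that just records its name as a side effect.
say :: String -> Logged a a
say msg = Logged $ \x -> ([msg], x)

main :: IO ()
main = print (runLogged (say "p" &&& say "q") ())
  -- prints (["p","q"],((),())): p's effect is logged before q's,
  -- exactly as p &&& q = arr dup >>> first p >>> second q dictates.
```

Running the combined arrow shows the log ["p","q"], never ["q","p"]: the definition of &&& fixes the left operand's effects to happen first.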
The definition of the between parser combinator shows another example of why the order in which the computations (e.g. the side effects) take place matters when using &&&:
between :: MD i t -> MD t close -> MD t o -> MD i o
between open close real = open >>> (real &&& close) >>^ fst
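The order in which this shape runs its pieces can be checked with a Kleisli arrow over a minimal writer monad. This is a sketch: Log, say and between' are illustrative names, and Kleisli Log merely stands in for PArrows' MD type:

```haskell
import Control.Arrow (Kleisli(..), (&&&), (>>>), (>>^))

-- A minimal writer monad: each step appends messages to a log.
newtype Log a = Log { runLog :: ([String], a) }

instance Functor Log where
  fmap f (Log (l, a)) = Log (l, f a)

instance Applicative Log where
  pure a = Log ([], a)
  Log (l1, f) <*> Log (l2, a) = Log (l1 ++ l2, f a)

instance Monad Log where
  Log (l1, a) >>= k = let Log (l2, b) = k a in Log (l1 ++ l2, b)

-- An arrow that just records its name as a side effect.
say :: String -> Kleisli Log a a
say name = Kleisli $ \x -> Log ([name], x)

-- Same shape as between above, over Kleisli Log instead of MD.
between' :: Kleisli Log i t -> Kleisli Log t c -> Kleisli Log t o -> Kleisli Log i o
between' open close real = open >>> (real &&& close) >>^ fst

main :: IO ()
main = print (fst (runLog (runKleisli
         (between' (say "open") (say "close") (say "real")) ())))
  -- prints ["open","real","close"]: real's effects run before close's.
```

The log shows the effects running as open, then real, then close, which is exactly what a bracketing combinator needs: the inner parser must consume its input before the closing delimiter is matched.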
A more complicated example (from the same module):
-- | Match one or more occurrences of the given parser separated by the separator.
sepBy1 :: MD i o -> MD i o' -> MD i [o]
sepBy1 p s = (many (p &&& s >>^ fst) &&& p) >>^ (\(bs,b) -> bs++[b])
This makes clear that the order of the effects of the operands of the &&& operation can be important. But let us also mention a counterexample: nondeterministic function arrows, or more generally the various implementations of binary relation arrows, have no such sequencing of effects. We can see this on the mere mathematical concept of binary relations (regardless of how it is implemented): combining two relations pointwise simply yields a set of pairs, with no notion of one operand's "effects" happening before the other's.
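For instance, modelling nondeterministic functions as Kleisli arrows over the list monad and reading them as binary relations (sets of input/output pairs), combining two relations in either order yields the same set. The relations p and q below are illustrative:

```haskell
import Control.Arrow (Kleisli(..), (&&&))
import Data.List (sort)

-- Nondeterministic functions as Kleisli arrows over the list monad.
p, q :: Kleisli [] Int Int
p = Kleisli $ \x -> [x, x + 1]   -- relates x to x and to x+1
q = Kleisli $ \x -> [x * 2]      -- relates x to 2*x

main :: IO ()
main = do
  let ab = runKleisli (p &&& q) 3
      ba = map (\(b, a) -> (a, b)) (runKleisli (q &&& p) 3)
  -- As sets of pairs the two combinations coincide: there is no
  -- observable sequencing of "effects" in the relational reading.
  print (sort ab == sort ba)   -- prints True
```

The list monad does fix an enumeration order of the results, but once the results are read as a set (here: sorted), p &&& q and the swapped q &&& p are indistinguishable, unlike with stateful arrows such as parsers.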
The picture illustrating *** in the Programming:Haskell_arrows article of Wikibooks suggests exactly such a view: the order of side effects can be unimportant for some arrow instances, and the symmetry of the figure reflects this. In general, however, the figure should use a notation for threading side effects through in sequence.
The Lazy K programming language is an interesting esoteric language (from the family of pure, lazy functional languages) whose I/O concept is based on streams.
Arrows are also useful for grasping the concept of stream processors. See details in
- ProdArrows -- Arrows for Fudgets written by Magnus Carlsson, 2001.
- Generalising Monads to Arrows written by John Hughes (section 6, pages 20--24)
Functional I/O, graphical user interfaces
On the Expressiveness of Purely Functional I/O Systems written by Paul Hudak and Raman S. Sundaresh.
Ways to implement these concepts using arrows can be found in the introductory articles on arrows mentioned above.