= GHC/Type families =
''HaskellWiki, revision of 2013-10-14''
<div>Indexed type families, or '''type families''' for short, are a Haskell extension supporting ad-hoc overloading of data types. Type families are parametric types that can be assigned specialized representations based on the type parameters they are instantiated with. They are the data type analogue of [[Type class|type classes]]: families are used to define overloaded ''data'' in the same way that classes are used to define overloaded ''functions''. Type families are useful for generic programming, for creating highly parameterised library interfaces, and for creating interfaces with enhanced static information, much like dependent types.<br />
<br />
Type families come in two flavors: ''data families'' and ''type synonym families''. Data families are the indexed form of data and newtype definitions. Type synonym families are the indexed form of type synonyms. Each of these flavors can be defined in a standalone manner or ''associated'' with a type class. Standalone definitions are more general, while associated types can more clearly express how a type is used and lead to better error messages.<br />
<br />
''NB: see also Simon's [http://hackage.haskell.org/trac/ghc/blog/LetGeneralisationInGhc7 blog entry on let generalisation] for a significant change in the policy for let generalisation, driven by the type family extension. In brief: a few programs will puzzlingly fail to compile with <tt>-XTypeFamilies</tt> even though the code is legal Haskell 98.''<br />
<br />
== What are type families? ==<br />
<br />
The concept of a type family comes from type theory. An indexed type family in type theory is a partial function at the type level. Applying the function to parameters (called ''type indices'') yields a type. Type families permit a program to compute what data constructors it will operate on, rather than having them fixed statically (as with simple type systems) or treated as opaque unknowns (as with parametrically polymorphic types).<br />
<br />
Type families are to vanilla data types what type class methods are to regular functions. Vanilla polymorphic data types and functions have a single definition, which is used at all type instances. Classes and type families, on the other hand, have an interface definition and any number of instance definitions. A type family's interface definition declares its [[kind]] and its arity, or the number of type indices it takes. Instance definitions define the type family over some part of the domain.<br />
<br />
As a simple example of how type families differ from ordinary parametric data types, consider a strict list type. We can represent a list of <hask>Char</hask> in the usual way, with cons cells. We can do the same thing to represent a list of <hask>()</hask>, but since a strict <hask>()</hask> value carries no useful information, it would be more efficient to just store the length of the list. This can't be done with an ordinary parametric data type, because the data constructors used to represent the list would depend on the list's type parameter: if it's <hask>Char</hask> then the list consists of cons cells; if it's <hask>()</hask>, then the list consists of a single integer. We basically want to select between two different data types based on a type parameter. Using type families, this list type could be declared as follows:<br />
<br />
<haskell><br />
-- Declare a list-like data family<br />
data family XList a<br />
<br />
-- Declare a list-like instance for Char<br />
data instance XList Char = XCons !Char !(XList Char) | XNil<br />
<br />
-- Declare a number-like instance for ()<br />
data instance XList () = XListUnit !Int<br />
</haskell><br />
<br />
The right-hand sides of the two <code>data instance</code> declarations are exactly ordinary data definitions. In fact, a <code>data instance</code> declaration is nothing more than a shorthand for a <code>data</code> declaration followed by a <code>type instance</code> (see below) declaration. However, they define two instances of the same parametric data type, <hask>XList Char</hask> and <hask>XList ()</hask>, whereas ordinary data declarations define completely unrelated types. A recent [[Simonpj/Talk:FunWithTypeFuns|tutorial paper]] has more in-depth examples of programming with type families. <br />
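To see that the two instances really inhabit the same type constructor, we can write small helper values against them (the names <hask>xEmptyChar</hask> and <hask>xEmptyUnit</hask> are illustrative, not part of the declarations above):
<haskell>
-- Both values share the type constructor XList,
-- but are built from different data instances:
xEmptyChar :: XList Char
xEmptyChar = XNil

xEmptyUnit :: XList ()
xEmptyUnit = XListUnit 0    -- a list of () of length 0
</haskell>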
<br />
[[GADT]]s bear some similarity to type families, in the sense that they allow a parametric type's constructors to depend on the type's parameters. However, all GADT constructors must be defined in one place, whereas type families can be extended. Functional dependencies are similar to type families, and many type classes that use functional dependencies can be equivalently expressed with type families. Type families provide a more functional style of type-level programming than the relational style of functional dependencies.<br />
<br />
== What do I need to use type families? ==<br />
<br />
Type families are a GHC extension enabled with the <code>-XTypeFamilies</code> flag or the <code>{-# LANGUAGE TypeFamilies #-}</code> pragma. The first stable release of GHC that properly supports type families is 6.10.1. (The 6.8 release included an early partial implementation, but its use is deprecated.) Please [http://hackage.haskell.org/trac/ghc/query?status=new&status=assigned&status=reopened&group=priority&type=bug&order=id&desc=1 report bugs] via the GHC bug tracker, ideally accompanied by a small example program that demonstrates the problem. Use the [mailto:glasgow-haskell-users@haskell.org GHC mailing list] for questions or for a discussion of this language extension and its description on this wiki page.<br />
<br />
== An associated data type example ==<br />
<br />
As an example, consider Ralf Hinze's [http://www.cs.ox.ac.uk/ralf.hinze/publications/GGTries.ps.gz generalised tries], a form of generic finite maps. <br />
<br />
=== The class declaration ===<br />
<br />
We define a type class whose instances are the types that we can use as keys in our generic maps:<br />
<haskell><br />
class GMapKey k where<br />
  data GMap k :: * -> *<br />
  empty  :: GMap k v<br />
  lookup :: k -> GMap k v -> Maybe v<br />
  insert :: k -> v -> GMap k v -> GMap k v<br />
</haskell><br />
The interesting part is the ''associated data family'' declaration of the class. It gives a [http://www.haskell.org/ghc/docs/latest/html/users_guide/type-families.html#data-family-declarations ''kind signature''] (here <hask>* -> *</hask>) for the associated data type <hask>GMap k</hask> - analogous to how methods receive a type signature in a class declaration.<br />
<br />
It is important to notice that the first parameter of the associated type <hask>GMap</hask> coincides with the class parameter of <hask>GMapKey</hask>. This means that in every instance of the class, the instance of the associated data type must have the instance type as its first argument. In general, the type arguments of an associated type can be a subset of the class parameters (in a multi-parameter type class), and they can appear in any order, possibly different from their order in the class head. The latter can be useful if the associated data type is partially applied in some contexts.<br />
<br />
The second important point is that as <hask>GMap k</hask> has kind <hask>* -> *</hask> and <hask>k</hask> (implicitly) has kind <hask>*</hask>, the type constructor <hask>GMap</hask> (without an argument) has kind <hask>* -> * -> *</hask>. Consequently, we see that <hask>GMap</hask> is applied to two arguments in the signatures of the methods <hask>empty</hask>, <hask>lookup</hask>, and <hask>insert</hask>.<br />
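The kinds discussed above can be confirmed interactively; the following GHCi queries (shown as comments) are what one would expect from the declarations so far:
<haskell>
-- ghci> :kind GMap
-- GMap :: * -> * -> *
-- ghci> :kind GMap Int
-- GMap Int :: * -> *
</haskell>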
<br />
=== An Int instance ===<br />
<br />
To use <hask>Int</hask>s as keys into generic maps, we declare an instance that simply wraps <hask>Data.IntMap.IntMap</hask>:<br />
<haskell><br />
instance GMapKey Int where<br />
  data GMap Int v = GMapInt (Data.IntMap.IntMap v)<br />
  empty = GMapInt Data.IntMap.empty<br />
  lookup k (GMapInt m) = Data.IntMap.lookup k m<br />
  insert k v (GMapInt m) = GMapInt (Data.IntMap.insert k v m)<br />
</haskell><br />
The <hask>Int</hask> instance of the associated data type <hask>GMap</hask> needs to be given both of its parameters, but as only the first one corresponds to a class parameter, the second must be a type variable (here <hask>v</hask>). As mentioned before, any associated type parameter that corresponds to a class parameter must be identical to the class parameter in each instance. The right-hand side of the associated data declaration is like that of any other data type.<br />
<br />
NB: At the moment, GADT syntax is not allowed for associated data types (or other indexed types). This is not a fundamental limitation, but just a shortcoming of the current implementation, which we expect to lift in the future.<br />
<br />
As an exercise, implement an instance for <hask>Char</hask> that maps back to the <hask>Int</hask> instance using the conversion functions <hask>Char.ord</hask> and <hask>Char.chr</hask>.<br />
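One possible solution to this exercise (a sketch; it assumes the <hask>GMapKey</hask> class and the <hask>Int</hask> instance above, plus <hask>import qualified Data.Char as Char</hask>):
<haskell>
-- Reuse the Int instance by converting keys with Char.ord.
-- (Char.chr would be needed only for operations that recover
-- keys from the map, which this interface does not include.)
instance GMapKey Char where
  data GMap Char v = GMapChar (GMap Int v)
  empty = GMapChar empty
  lookup k (GMapChar m) = lookup (Char.ord k) m
  insert k v (GMapChar m) = GMapChar (insert (Char.ord k) v m)
</haskell>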
<br />
=== A unit instance ===<br />
<br />
Generic definitions, apart from elementary types, typically cover units, products, and sums. We start here with the unit instance for <hask>GMap</hask>:<br />
<haskell><br />
instance GMapKey () where<br />
  data GMap () v = GMapUnit (Maybe v)<br />
  empty = GMapUnit Nothing<br />
  lookup () (GMapUnit v) = v<br />
  insert () v (GMapUnit _) = GMapUnit $ Just v<br />
</haskell><br />
For unit, the map is just a <hask>Maybe</hask> value.<br />
<br />
=== Product and sum instances ===<br />
<br />
Next, let us define the instances for pairs and sums (i.e., <hask>Either</hask>):<br />
<haskell><br />
instance (GMapKey a, GMapKey b) => GMapKey (a, b) where<br />
  data GMap (a, b) v = GMapPair (GMap a (GMap b v))<br />
  empty = GMapPair empty<br />
  lookup (a, b) (GMapPair gm) = lookup a gm >>= lookup b<br />
  insert (a, b) v (GMapPair gm) = GMapPair $ case lookup a gm of<br />
    Nothing  -> insert a (insert b v empty) gm<br />
    Just gm2 -> insert a (insert b v gm2) gm<br />
<br />
instance (GMapKey a, GMapKey b) => GMapKey (Either a b) where<br />
  data GMap (Either a b) v = GMapEither (GMap a v) (GMap b v)<br />
  empty = GMapEither empty empty<br />
  lookup (Left  a) (GMapEither gm1 _gm2) = lookup a gm1<br />
  lookup (Right b) (GMapEither _gm1 gm2) = lookup b gm2<br />
  insert (Left  a) v (GMapEither gm1 gm2) = GMapEither (insert a v gm1) gm2<br />
  insert (Right b) v (GMapEither gm1 gm2) = GMapEither gm1 (insert b v gm2)<br />
</haskell><br />
If you find this code algorithmically surprising, have a look at [http://www.cs.ox.ac.uk/ralf.hinze/publications/index.html#J4 Ralf Hinze's paper]. The only novelty concerning associated types in these two instances is that the instances have a context <hask>(GMapKey a, GMapKey b)</hask>. Consequently, the right-hand sides of the associated type declarations can use <hask>GMap</hask> recursively at the key types <hask>a</hask> and <hask>b</hask> - just as the method definitions use the class methods recursively at the types for which the class is assumed in the instance context.<br />
<br />
=== Using a generic map ===<br />
<br />
Finally, some code building and querying a generic map:<br />
<haskell><br />
myGMap :: GMap (Int, Either Char ()) String<br />
myGMap = insert (5, Left 'c') "(5, Left 'c')" $<br />
         insert (4, Right ()) "(4, Right ())" $<br />
         insert (5, Right ()) "This is the one!" $<br />
         insert (5, Right ()) "This is the two!" $<br />
         insert (6, Right ()) "(6, Right ())" $<br />
         insert (5, Left 'a') "(5, Left 'a')" $<br />
         empty<br />
<br />
main = putStrLn $ maybe "Couldn't find key!" id $ lookup (5, Right ()) myGMap<br />
</haskell><br />
<br />
=== Download the code ===<br />
<br />
If you want to play with this example without copying it off the wiki, just download the [http://darcs.haskell.org/testsuite/tests/ghc-regress/indexed-types/should_run/GMapAssoc.hs source code] for <hask>GMap</hask> from GHC's test suite.<br />
<br />
== Detailed definition of data families ==<br />
<br />
Data families appear in two flavours: (1) they can be defined on the toplevel or (2) they can appear inside type classes (in which case they are known as associated types). The former is the more general variant, as it lacks the requirement for the type-indices to coincide with the class parameters. However, the latter can lead to more clearly structured code and compiler warnings if some type instances were - possibly accidentally - omitted. In the following, we always discuss the general toplevel form first and then cover the additional constraints placed on associated types.<br />
<br />
=== Family declarations ===<br />
<br />
Indexed data families are introduced by a signature, such as <br />
<haskell><br />
data family GMap k :: * -> *<br />
</haskell><br />
The <hask>family</hask> keyword distinguishes family declarations from standard data declarations. The result kind annotation is optional and, as usual, defaults to <hask>*</hask> if omitted. An example is<br />
<haskell><br />
data family Array e<br />
</haskell><br />
Named arguments can also be given explicit kind signatures if needed. Just as with [http://www.haskell.org/ghc/docs/latest/html/users_guide/gadt.html GADT declarations], named arguments are entirely optional, so we can alternatively declare <hask>Array</hask> as<br />
<haskell><br />
data family Array :: * -> *<br />
</haskell><br />
<br />
==== Associated family declarations ====<br />
<br />
When a data family is declared as part of a type class, we drop the <hask>family</hask> keyword. The <hask>GMap</hask> declaration takes the following form<br />
<haskell><br />
class GMapKey k where<br />
  data GMap k :: * -> *<br />
  ...<br />
</haskell><br />
In contrast to toplevel declarations, named arguments must be used for all type parameters that are to be used as type-indices. Moreover, the argument names must be class parameters. Each class parameter may be used at most once per associated type, but some may be omitted, and they may appear in an order other than in the class head. In other words: '''the named type parameters of the data declaration must be a permutation of a subset of the class variables'''.<br />
<br />
Examples of admissible and inadmissible declarations:<br />
<haskell><br />
class C a b c where { data T c a :: * } -- OK<br />
class C a b c where { data T a a :: * } -- Bad: repeated variable<br />
class D a where { data T a x :: * } -- Bad: x is not a class variable<br />
class D a where { data T a :: * -> * } -- OK<br />
</haskell><br />
<br />
=== Instance declarations ===<br />
<br />
Instance declarations of data and newtype families are very similar to standard data and newtype declarations. There are only two differences: the keyword <hask>data</hask> or <hask>newtype</hask> is followed by <hask>instance</hask>, and some or all of the type arguments can be non-variable types, although they may not contain forall types or type synonym families. Data families are generally allowed in type parameters, and type synonyms are allowed as long as they are fully applied and expand to a type that is itself admissible - exactly as is required for occurrences of type synonyms in class instance parameters. For example, the <hask>Either</hask> instance for <hask>GMap</hask> is<br />
<haskell><br />
data instance GMap (Either a b) v = GMapEither (GMap a v) (GMap b v)<br />
</haskell><br />
In this example, the declaration has only one data constructor; in general, there can be any number.<br />
<br />
Data and newtype instance declarations are only legitimate when an appropriate family declaration is in scope - just like class instances require the class declaration to be visible. Moreover, each instance declaration has to conform to the kind determined by its family declaration. This implies that the number of parameters of an instance declaration matches the arity determined by the kind of the family. Although all data families are declared with the <hask>data</hask> keyword, instances can be <hask>data</hask> or <hask>newtype</hask> declarations, or a mix of both.<br />
<br />
Even if type families are defined as toplevel declarations, functions that perform different computations for different family instances still need to be defined as methods of type classes. In particular, the following is not possible:<br />
<haskell><br />
data family T a<br />
data instance T Int = A<br />
data instance T Char = B<br />
nonsense :: T a -> Int<br />
nonsense A = 1 -- WRONG: These two equations together...<br />
nonsense B = 2 -- ...will produce a type error.<br />
</haskell><br />
Given the functionality provided by GADTs (Generalised Algebraic Data Types), it might seem as if a definition, such as the above, should be feasible. However, type families - in contrast to GADTs - are ''open''; i.e., new instances can always be added, possibly in other modules. Supporting pattern matching across different data instances would require a form of extensible case construct.<br />
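For comparison, the corresponding GADT version is closed, so a function ''can'' pattern match across all of its constructors (a minimal standalone sketch, independent of the family example above):
<haskell>
{-# LANGUAGE GADTs #-}

data T a where
  A :: T Int
  B :: T Char

-- Fine: T is closed, so these two equations cover all constructors.
sensible :: T a -> Int
sensible A = 1
sensible B = 2
</haskell>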
<br />
==== Associated type instances ====<br />
<br />
When an associated family instance is declared within a type class instance, we drop the <hask>instance</hask> keyword in the family instance. So, the <hask>Either</hask> instance for <hask>GMap</hask> becomes:<br />
<haskell><br />
instance (GMapKey a, GMapKey b) => GMapKey (Either a b) where<br />
  data GMap (Either a b) v = GMapEither (GMap a v) (GMap b v)<br />
  ...<br />
</haskell><br />
The most important point about associated family instances is that the type indices corresponding to class parameters must be identical to the types given in the instance head; here this is the first argument of <hask>GMap</hask>, namely <hask>Either a b</hask>, which coincides with the only class parameter. Any parameters to the family constructor that do not correspond to class parameters need to be variables in every instance; here this is the variable <hask>v</hask>.<br />
<br />
Instances for an associated family can only appear as part of instance declarations of the class in which the family was declared - just as with the equations of the methods of a class. Also in correspondence to how methods are handled, declarations of associated types can be omitted in class instances. If an associated family instance is omitted, the corresponding instance type is not inhabited; i.e., only diverging expressions, such as <hask>undefined</hask>, can assume the type.<br />
<br />
==== Scoping of class parameters ====<br />
<br />
In the case of multi-parameter type classes, the visibility of class parameters in the right-hand side of associated family instances depends ''solely'' on the parameters of the data family. As an example, consider the simple class declaration<br />
<haskell><br />
class C a b where<br />
  data T a<br />
</haskell><br />
Only one of the two class parameters is a parameter to the data family. Hence, the following instance declaration is invalid:<br />
<haskell><br />
instance C [c] d where<br />
  data T [c] = MkT (c, d) -- WRONG!! 'd' is not in scope<br />
</haskell><br />
Here, the right-hand side of the data instance mentions the type variable <hask>d</hask> that does not occur in its left-hand side. We cannot admit such data instances as they would compromise type safety.<br />
<br />
==== Type class instances of family instances ====<br />
<br />
Type class instances of instances of data families can be defined as usual, and in particular data instance declarations can have <hask>deriving</hask> clauses. For example, we can write<br />
<haskell><br />
data instance GMap () v = GMapUnit (Maybe v)<br />
  deriving Show<br />
</haskell><br />
which implicitly defines an instance of the form<br />
<haskell><br />
instance Show v => Show (GMap () v) where ...<br />
</haskell><br />
<br />
Note that class instances are always for particular ''instances'' of a data family and never for an entire family as a whole. This is for essentially the same reasons that we cannot define a toplevel function that performs pattern matching on the data constructors of ''different'' instances of a single type family. It would require a form of extensible case construct.<br />
<br />
==== Overlap ====<br />
<br />
The instance declarations of a data family used in a single program may not overlap at all, independent of whether they are associated or not. In contrast to type class instances, this is not only a matter of consistency, but one of type safety.<br />
<br />
=== Import and export ===<br />
<br />
The association of data constructors with type families is more dynamic than is the case with standard data and newtype declarations. In the standard case, the notation <hask>T(..)</hask> in an import or export list denotes the type constructor and all the data constructors introduced in its declaration. However, a family declaration never introduces any data constructors; instead, data constructors are introduced by family instances. As a result, which data constructors are associated with a type family depends on the currently visible instance declarations for that family. Consequently, an import or export item of the form <hask>T(..)</hask> denotes the family constructor and all currently visible data constructors - in the case of an export item, these may be either imported or defined in the current module. The treatment of import and export items that explicitly list data constructors, such as <hask>GMap(GMapEither)</hask>, is analogous.<br />
<br />
==== Associated families ====<br />
<br />
As expected, an import or export item of the form <hask>C(..)</hask> denotes all of the class' methods and associated types. However, when associated types are explicitly listed as subitems of a class, we need some new syntax, as uppercase identifiers as subitems are usually data constructors, not type constructors. To clarify that we denote types here, each associated type name needs to be prefixed by the keyword <hask>type</hask>. So for example, when explicitly listing the components of the <hask>GMapKey</hask> class, we write <hask>GMapKey(type GMap, empty, lookup, insert)</hask>.<br />
<br />
==== Examples ====<br />
<br />
Assuming our running <hask>GMapKey</hask> class example, let us look at some export lists and their meaning:<br />
<br />
* <hask>module GMap (GMapKey) where...</hask>: Exports just the class name.<br />
* <hask>module GMap (GMapKey(..)) where...</hask>: Exports the class, the associated type <hask>GMap</hask> and the member functions <hask>empty</hask>, <hask>lookup</hask>, and <hask>insert</hask>. None of the data constructors is exported.<br />
* <hask>module GMap (GMapKey(..), GMap(..)) where...</hask>: As before, but also exports all the data constructors <hask>GMapInt</hask>, <hask>GMapChar</hask>, <hask>GMapUnit</hask>, <hask>GMapPair</hask>, and <hask>GMapEither</hask>.<br />
* <hask>module GMap (GMapKey(empty, lookup, insert), GMap(..)) where...</hask>: As before.<br />
* <hask>module GMap (GMapKey, empty, lookup, insert, GMap(..)) where...</hask>: As before.<br />
<br />
Finally, you can write <hask>GMapKey(type GMap)</hask> to denote both the class <hask>GMapKey</hask> as well as its associated type <hask>GMap</hask>. However, you cannot write <hask>GMapKey(type GMap(..))</hask> &mdash; i.e., sub-component specifications cannot be nested. To specify <hask>GMap</hask>'s data constructors, you have to list it separately.<br />
<br />
==== Instances ====<br />
<br />
Family instances are implicitly exported, just like class instances. However, this applies only to the heads of instances, not to the data constructors an instance defines.<br />
<br />
== An associated type synonym example ==<br />
<br />
Type synonym families are an alternative to functional dependencies, which makes functional dependency examples well suited to introducing type synonym families. In fact, type families are a more functional way to express the same thing (despite the name!), as they replace the relational notation of functional dependencies with an expression-oriented notation; i.e., functions on types are actually represented by functions, not relations.<br />
<br />
=== The <hask>class</hask> declaration ===<br />
<br />
Here's an example from Mark Jones' seminal paper on functional dependencies:<br />
<haskell><br />
class Collects e ce | ce -> e where<br />
  empty  :: ce<br />
  insert :: e -> ce -> ce<br />
  member :: e -> ce -> Bool<br />
  toList :: ce -> [e]<br />
</haskell><br />
<br />
With associated type synonyms we can write this as<br />
<haskell><br />
class Collects ce where<br />
  type Elem ce<br />
  empty  :: ce<br />
  insert :: Elem ce -> ce -> ce<br />
  member :: Elem ce -> ce -> Bool<br />
  toList :: ce -> [Elem ce]<br />
</haskell><br />
Instead of the multi-parameter type class, we use a single-parameter class, and the parameter <hask>e</hask> is turned into an associated type synonym <hask>Elem ce</hask>.<br />
<br />
=== An <hask>instance</hask>===<br />
<br />
Instances change correspondingly. An instance of the two-parameter class<br />
<haskell><br />
instance Eq e => Collects e [e] where<br />
  empty = []<br />
  insert e l = (e:l)<br />
  member e [] = False<br />
  member e (x:xs)<br />
    | e == x    = True<br />
    | otherwise = member e xs<br />
  toList l = l<br />
</haskell><br />
becomes an instance of a single-parameter class, where the dependent type parameter turns into an associated type instance declaration:<br />
<haskell><br />
instance Eq e => Collects [e] where<br />
  type Elem [e] = e<br />
  empty = []<br />
  insert e l = (e:l)<br />
  member e [] = False<br />
  member e (x:xs)<br />
    | e == x    = True<br />
    | otherwise = member e xs<br />
  toList l = l<br />
</haskell><br />
<br />
=== Using generic collections ===<br />
<br />
With functional dependencies, the code would be:<br />
<haskell><br />
sumCollects :: (Collects e c1, Collects e c2) => c1 -> c2 -> c2<br />
sumCollects c1 c2 = foldr insert c2 (toList c1)<br />
</haskell><br />
<br />
In contrast, with associated type synonyms, we get:<br />
<haskell><br />
sumCollects :: (Collects c1, Collects c2, Elem c1 ~ Elem c2) => c1 -> c2 -> c2<br />
sumCollects c1 c2 = foldr insert c2 (toList c1)<br />
</haskell><br />
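Assuming the list instance of <hask>Collects</hask> from above is in scope, the associated-type version can be used just like the functional-dependency one; the equality constraint is discharged because both element types are <hask>Int</hask>:
<haskell>
-- Combines two collections; here both are lists of Int,
-- so the constraint Elem [Int] ~ Elem [Int] holds trivially.
combined :: [Int]
combined = sumCollects [1, 2, 3 :: Int] [4, 5]
</haskell>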
<br />
== Detailed definition of type synonym families ==<br />
<br />
Type families appear in two flavours: (1) they can be defined on the toplevel or (2) they can appear inside type classes (in which case they are known as associated type synonyms). The former is the more general variant, as it lacks the requirement for the type-indices to coincide with the class parameters. However, the latter can lead to more clearly structured code and compiler warnings if some type instances were - possibly accidentally - omitted. In the following, we always discuss the general toplevel form first and then cover the additional constraints placed on associated types.<br />
<br />
=== Family declarations ===<br />
<br />
Indexed type families are introduced by a signature, such as <br />
<haskell><br />
type family Elem c :: *<br />
</haskell><br />
The <hask>family</hask> keyword distinguishes family declarations from standard type synonym declarations. The result kind annotation is optional and, as usual, defaults to <hask>*</hask> if omitted. An example is<br />
<haskell><br />
type family Elem c<br />
</haskell><br />
Parameters can also be given explicit kind signatures if needed. We call the number of parameters in a type family declaration the family's arity, and all applications of a type family must be fully saturated with respect to that arity. This requirement is unlike ordinary type synonyms, and it implies that the kind of a type family is not sufficient to determine the family's arity, and hence, in general, also insufficient to determine whether a type family application is well formed. As an example, consider the following declaration:<br />
<haskell><br />
type family F a b :: * -> *   -- F's arity is 2,<br />
                              -- although its overall kind is * -> * -> * -> *<br />
</haskell><br />
Given this declaration the following are examples of well-formed and malformed types:<br />
<haskell><br />
F Char [Int] -- OK! Kind: * -> *<br />
F Char [Int] Bool -- OK! Kind: *<br />
F IO Bool -- WRONG: kind mismatch in the first argument<br />
F Bool -- WRONG: unsaturated application<br />
</haskell><br />
<br />
A top-level type family can be declared as open or closed. (Associated type<br />
families are always open.) A closed type family has all of its equations<br />
defined in one place and cannot be extended, whereas an open family can have<br />
instances spread across modules. The advantage of a closed family is that<br />
its equations are tried in order, similar to a term-level function definition:<br />
<haskell><br />
type family G a where<br />
  G Int = Bool<br />
  G a   = Char<br />
</haskell><br />
With this definition, the type <hask>G Int</hask> becomes <hask>Bool</hask><br />
and, say, <hask>G Double</hask> becomes <hask>Char</hask>. See also [http://ghc.haskell.org/trac/ghc/wiki/NewAxioms here] for more information about closed type families.<br />
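The reductions can be observed directly by giving values at these types; the following sketch repeats the definition of <hask>G</hask> so that it works as a standalone file:
<haskell>
{-# LANGUAGE TypeFamilies #-}

type family G a where
  G Int = Bool
  G a   = Char

gInt :: G Int        -- accepted: G Int reduces to Bool
gInt = True

gDouble :: G Double  -- accepted: G Double falls through to the catch-all
gDouble = 'x'
</haskell>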
<br />
==== Associated family declarations ====<br />
<br />
When a type family is declared as part of a type class, we drop the <hask>family</hask> keyword. The <hask>Elem</hask> declaration takes the following form<br />
<haskell><br />
class Collects ce where<br />
  type Elem ce :: *<br />
  ...<br />
</haskell><br />
Exactly as in the case of an associated data declaration, '''the named type parameters must be a permutation of a subset of the class parameters'''. Examples:<br />
<haskell><br />
class C a b c where { type T c a :: * } -- OK<br />
class D a where { type T a x :: * } -- No: x is not a class parameter<br />
class D a where { type T a :: * -> * } -- OK<br />
</haskell><br />
<br />
=== Type instance declarations ===<br />
<br />
Instance declarations of open type families are very similar to standard type synonym declarations. There are only two differences: the keyword <hask>type</hask> is followed by <hask>instance</hask>, and some or all of the type arguments can be non-variable types, although they may not contain forall types or type synonym families. Data families are generally allowed, and type synonyms are allowed as long as they are fully applied and expand to a type that is admissible - exactly the same requirements as for data instances. For example, the <hask>[e]</hask> instance for <hask>Elem</hask> is<br />
<haskell><br />
type instance Elem [e] = e<br />
</haskell><br />
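Since the family is open, further instances can be given in any module where <hask>Elem</hask> is in scope. For example (illustrative instances, assuming the <hask>containers</hask> and <hask>bytestring</hask> packages):
<haskell>
import qualified Data.Set as Set
import qualified Data.ByteString.Char8 as BS

-- Sets contain their parameter type; strict ByteStrings contain Chars.
type instance Elem (Set.Set e)   = e
type instance Elem BS.ByteString = Char
</haskell>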
<br />
A type family instance declaration must satisfy the following rules:<br />
* An appropriate family declaration is in scope - just like class instances require the class declaration to be visible.<br />
* The instance declaration conforms to the kind determined by its family declaration.<br />
* The number of type parameters in an instance declaration matches the number of type parameters in the family declaration.<br />
* The right-hand side of a type instance must be a monotype (i.e., it may not include foralls), and after the expansion of all saturated vanilla type synonyms, no synonyms, except family synonyms, may remain.<br />
<br />
Here are some examples of admissible and illegal type instances and closed families:<br />
<haskell><br />
type family F a :: *<br />
type instance F [Int] = Int -- OK!<br />
type instance F String = Char -- OK!<br />
type instance F (F a) = a -- WRONG: type parameter mentions a type family<br />
type instance F (forall a. (a, b)) = b -- WRONG: a forall type appears in a type parameter<br />
type instance F Float = forall a.a -- WRONG: right-hand side may not be a forall type<br />
<br />
type family F2 a where -- OK!<br />
F2 (Maybe Int) = Int<br />
F2 (Maybe Bool) = Bool<br />
F2 (Maybe a) = String<br />
<br />
type family G a b :: * -> *<br />
type instance G Int = (,) -- WRONG: must be two type parameters<br />
type instance G Int Char Float = Double -- WRONG: must be two type parameters<br />
</haskell><br />
<br />
==== Closed family simplification ====<br />
<br />
This feature is available in recent development versions of GHC 7.7 and is planned for inclusion in GHC 7.8.1.<br />
<br />
When dealing with closed families, simplifying the type is harder than just finding a left-hand side that matches and replacing that with a right-hand side. GHC will select an equation to use in a given type family application (the "target") if and only if the following two conditions hold:<br />
<br />
# There is a substitution from the variables in the equation's LHS that makes the left-hand side of the equation coincide with the target.<br />
# For each previous equation in the family: either the LHS of that equation is ''apart'' from the type family application, '''or''' the equation is ''compatible'' with the chosen equation.<br />
<br />
Now, we define ''apart'' and ''compatible'':<br />
# Two types are ''apart'' when one cannot simplify to the other, even after arbitrary type-family simplifications.<br />
# Two equations are ''compatible'' if either their LHSs are apart, or their LHSs unify and their RHSs are the same under the substitution induced by the unification.<br />
<br />
Some examples are in order:<br />
<haskell><br />
type family F a where<br />
F Int = Bool<br />
F Bool = Char<br />
F a = Bool<br />
<br />
type family And (a :: Bool) (b :: Bool) :: Bool where<br />
And False c = False<br />
And True d = d<br />
And e False = False<br />
And f True = f<br />
And g g = g<br />
</haskell><br />
<br />
In <hask>F</hask>, all pairs of equations are compatible except the second and third. The first two are compatible because their LHSs are apart. The first and third are compatible because, under the unifying substitution, their RHSs are the same. But the second and third are not compatible, because neither of these conditions holds. As a result, GHC will not use the third equation to simplify a target unless that target is apart from <hask>Bool</hask>.<br />
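<br />
For instance (a sketch, assuming <tt>-XTypeFamilies</tt> in a GHC that supports closed families), with the family <hask>F</hask> above:<br />
<haskell><br />
fChar :: F Char   -- accepted: Char is apart from both Int and Bool,<br />
fChar = True      -- so the third equation fires and F Char ~ Bool<br />
<br />
-- fPoly :: F a   -- a bare variable 'a' is not apart from Bool,<br />
-- fPoly = True   -- so F a does not simplify and this would be rejected<br />
</haskell><br />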
<br />
In <hask>And</hask>, ''every'' pair of equations is compatible, meaning GHC never has to make the extra apartness check during simplification.<br />
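<br />
A sketch of what this buys us in practice (a hypothetical example; <hask>(:~:)</hask> and <hask>Refl</hask> come from <hask>Data.Type.Equality</hask>, which ships with the same GHC versions that support closed families):<br />
<haskell><br />
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}<br />
import Data.Type.Equality ((:~:)(Refl))<br />
<br />
-- 'And True b' simplifies to 'b' even while 'b' is an unknown variable:<br />
-- the first equation's LHS is apart from the target, and the second<br />
-- equation is compatible with it, so GHC may fire the second equation.<br />
andTrue :: And True b :~: b<br />
andTrue = Refl<br />
</haskell><br />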
<br />
Why do all of this? It's a matter of type safety. Consider this example:<br />
<br />
<haskell><br />
type family J a b where<br />
J a a = Int<br />
J a b = Bool<br />
</haskell><br />
<br />
Say GHC selected the second equation just because the first doesn't apply at the moment, since the two type variables are distinct. The problem is that those variables might later be instantiated at the same type, at which point the first equation would have applied. You can exploit this sort of inconsistency to produce <hask>unsafeCoerce</hask>.<br />
<br />
It gets worse. GHC has no internal notion of inequality, so it can't use previous, failed term-level GADT pattern matches to refine its type assumptions. For example:<br />
<br />
<haskell><br />
data G :: * -> * where<br />
GInt :: G Int<br />
GBool :: G Bool<br />
<br />
type family Foo (a :: *) :: * where<br />
Foo Int = Char<br />
Foo a = Double<br />
<br />
bar :: G a -> Foo a<br />
bar GInt = 'x'<br />
bar _ = 3.14<br />
</haskell><br />
<br />
The last line will fail to typecheck, because GHC doesn't know that the type variable <hask>a</hask> cannot be <hask>Int</hask> here, even though it is obvious. The only general way to fix this would be to introduce inequality evidence into GHC; that is a significant change, and it is not yet clear that there is enough motivation for it.<br />
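<br />
A common workaround (a sketch, not the only option) is to match the remaining constructors explicitly, so that each branch refines <hask>a</hask> to a concrete type:<br />
<haskell><br />
bar' :: G a -> Foo a<br />
bar' GInt  = 'x'   -- here a ~ Int,  so Foo a reduces to Char<br />
bar' GBool = 3.14  -- here a ~ Bool; Bool is apart from Int, so Foo a ~ Double<br />
</haskell><br />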
<br />
==== Associated type instances ====<br />
<br />
When an associated family instance is declared within a type class instance, we drop the <hask>instance</hask> keyword in the family instance. So, the <hask>[e]</hask> instance for <hask>Elem</hask> becomes:<br />
<haskell><br />
instance (Eq (Elem [e])) => Collects ([e]) where<br />
type Elem [e] = e<br />
...<br />
</haskell><br />
The most important point about associated family instances is that the type indexes corresponding to class parameters must be identical to the type given in the instance head; here this is <hask>[e]</hask>, which coincides with the only class parameter.<br />
<br />
Instances for an associated family can only appear as part of instance declarations of the class in which the family was declared - just as with the equations of the methods of a class. Also in correspondence to how methods are handled, declarations of associated types can be omitted in class instances. If an associated family instance is omitted, the corresponding instance type is not inhabited; i.e., only diverging expressions, such as <hask>undefined</hask>, can assume the type.<br />
<br />
==== Overlap ====<br />
<br />
The instance declarations of an open type family used in a single program must be compatible, in the form defined above. This condition is independent of whether the type family is associated or not, and it is not only a matter of consistency, but one of type safety. <br />
<br />
Here are two examples to illustrate the condition under which overlap is permitted.<br />
<haskell><br />
type instance F (a, Int) = [a]<br />
type instance F (Int, b) = [b] -- overlap permitted<br />
<br />
type instance G (a, Int) = [a]<br />
type instance G (Char, a) = [a] -- ILLEGAL overlap, as [Char] /= [Int]<br />
</haskell><br />
<br />
==== Decidability ====<br />
<br />
In order to guarantee that type inference in the presence of type families is decidable, we need to place a number of additional restrictions on the formation of type instance declarations (c.f., Definition 5 (Relaxed Conditions) of [http://www.cse.unsw.edu.au/~chak/papers/SPCS08.html Type Checking with Open Type Functions]). Instance declarations have the general form<br />
<haskell><br />
type instance F t1 .. tn = t<br />
</haskell><br />
where we require that for every type family application <hask>(G s1 .. sm)</hask> in <hask>t</hask>, <br />
# <hask>s1 .. sm</hask> do not contain any type family constructors,<br />
# the total number of symbols (data type constructors and type variables) in <hask>s1 .. sm</hask> is strictly smaller than in <hask>t1 .. tn</hask>, and<br />
# for every type variable <hask>a</hask>, <hask>a</hask> occurs in <hask>s1 .. sm</hask> at most as often as in <hask>t1 .. tn</hask>.<br />
These restrictions are easily verified and ensure termination of type inference. However, they are not sufficient to guarantee completeness of type inference in the presence of so-called ''loopy equalities'', such as <hask>a ~ [F a]</hask>, where a recursive occurrence of a type variable is underneath a family application and a data constructor application - see the above-mentioned paper for details. <br />
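<br />
For illustration, here are instance declarations (for a unary open family <hask>F</hask>) that these restrictions reject:<br />
<haskell><br />
type instance F [a] = F (F a)    -- violates (1): the argument of the outer F<br />
                                 -- contains a type family application<br />
type instance F [a] = F (a, a)   -- violates (2) and (3): '(a, a)' has more<br />
                                 -- symbols than '[a]', and 'a' occurs twice<br />
</haskell><br />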
<br />
If the option <tt>-XUndecidableInstances</tt> is passed to the compiler, the above restrictions are not enforced and it is up to the programmer to ensure termination of the normalisation of type families during type inference.<br />
<br />
=== Equality constraints ===<br />
<br />
Type contexts can include equality constraints of the form <hask>t1 ~ t2</hask>, which assert that the types <hask>t1</hask> and <hask>t2</hask> must be the same. In the presence of type families, whether two types are equal cannot generally be decided locally. Hence, the contexts of function signatures may include equality constraints, as in the following example:<br />
<haskell><br />
sumCollects :: (Collects c1, Collects c2, Elem c1 ~ Elem c2) => c1 -> c2 -> c2<br />
</haskell><br />
where we require that the element types of <hask>c1</hask> and <hask>c2</hask> be the same. In general, the types <hask>t1</hask> and <hask>t2</hask> of an equality constraint may be arbitrary monotypes; i.e., they may not contain any quantifiers, independent of whether higher-rank types are otherwise enabled.<br />
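<br />
A possible implementation (a sketch, assuming the <hask>Collects</hask> class introduced earlier on this page provides <hask>insert</hask> and <hask>toList</hask> methods):<br />
<haskell><br />
sumCollects :: (Collects c1, Collects c2, Elem c1 ~ Elem c2) => c1 -> c2 -> c2<br />
sumCollects c1 c2 = foldr insert c2 (toList c1)<br />
-- the equality constraint is what lets elements taken out of c1<br />
-- be inserted into c2<br />
</haskell><br />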
<br />
Equality constraints can also appear in class and instance contexts. The former enable a simple translation of programs using functional dependencies into programs using family synonyms instead. The general idea is to rewrite a class declaration of the form<br />
<haskell><br />
class C a b | a -> b<br />
</haskell><br />
to<br />
<haskell><br />
class (F a ~ b) => C a b where<br />
type F a<br />
</haskell><br />
That is, we represent every functional dependency (FD) <hask>a1 .. an -> b</hask> by an FD type family <hask>F a1 .. an</hask> and a superclass context equality <hask>F a1 .. an ~ b</hask>, essentially giving a name to the functional dependency. In class instances, we define the type instances of FD families in accordance with the class head. Method signatures are not affected by that process.<br />
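<br />
For a concrete (hypothetical) example, a collection class with a functional dependency<br />
<haskell><br />
class Container c e | c -> e where<br />
  empty  :: c<br />
  insert :: e -> c -> c<br />
</haskell><br />
would be rewritten as<br />
<haskell><br />
class (CElem c ~ e) => Container c e where<br />
  type CElem c<br />
  empty  :: c<br />
  insert :: e -> c -> c<br />
</haskell><br />
where <hask>CElem</hask> is the FD family naming the dependency <hask>c -> e</hask>.<br />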
<br />
== Frequently asked questions ==<br />
<br />
=== Comparing type families and functional dependencies ===<br />
<br />
Functional dependencies cover some of the same territory as type families. How do the two compare?<br />
There are some articles about this question:<br />
<br />
* Experiences in converting functional dependencies to type families: "[[Functional dependencies vs. type families]]"<br />
* [http://hackage.haskell.org/trac/ghc/wiki/TFvsFD GHC trac] on a comparison of functional dependencies and type families<br />
<br />
=== Injectivity, type inference, and ambiguity ===<br />
<br />
A common problem is this<br />
<haskell><br />
type family F a<br />
<br />
f :: F a -> F a<br />
f = undefined<br />
<br />
g :: F Int -> F Int<br />
g x = f x<br />
</haskell><br />
The compiler complains about the definition of <tt>g</tt> saying<br />
<haskell><br />
Couldn't match expected type `F Int' against inferred type `F a1'<br />
</haskell><br />
In type-checking <tt>g</tt>'s right hand side GHC discovers (by instantiating <tt>f</tt>'s type with a fresh type variable) that it has type <tt>F a1 -> F a1</tt> for some as-yet-unknown type <tt>a1</tt>. Now it tries to make the inferred type match <tt>g</tt>'s type signature. Well, you say, just make <tt>a1</tt> equal to <tt>Int</tt> and you are done. True, but what if there were these instances<br />
<haskell><br />
type instance F Int = Bool<br />
type instance F Char = Bool<br />
</haskell><br />
Then making <tt>a1</tt> equal to <tt>Char</tt> would ''also'' make the two types equal. Because there is (potentially) more than one choice, the program is rejected.<br />
<br />
However (and confusingly) if you omit the type signature on <tt>g</tt> altogether, thus<br />
<haskell><br />
f :: F a -> F a<br />
f = undefined<br />
<br />
g x = f x<br />
</haskell><br />
GHC will happily infer the type <tt>g :: F a -> F a</tt>. But you can't ''write'' that type signature or, indeed, the more specific one above. (Arguably this behaviour, where GHC ''infers'' a type it can't ''check'', is very confusing. I suppose we could make GHC reject both programs, with and without type signatures.)<br />
<br />
'''What is the problem?''' The nub of the issue is this: knowing that <tt>F t1</tt>=<tt>F t2</tt> does ''not'' imply that <tt>t1</tt> = <tt>t2</tt>.<br />
The difficulty is that the type function <tt>F</tt> need not be ''injective''; it can map two distinct types to the same type. For an injective type constructor like <tt>Maybe</tt>, if we know that <tt>Maybe t1</tt> = <tt>Maybe t2</tt>, then we know that <tt>t1</tt> = <tt>t2</tt>. But not so for non-injective type functions.<br />
<br />
The problem starts with <tt>f</tt>. Its type is ''ambiguous''; even if I know the argument and result types for <tt>f</tt>, I cannot use that to find the type at which <tt>a</tt> should be instantiated. (So arguably, <tt>f</tt> should be rejected as having an ambiguous type, and probably will be in future.) The situation is well known in type classes: <br />
<haskell><br />
bad :: (Read a, Show a) => String -> String<br />
bad x = show (read x)<br />
</haskell><br />
At a call of <tt>bad</tt> one cannot tell at what type <tt>a</tt> should be instantiated.<br />
<br />
The only solution is to avoid ambiguous types. In the type signature of a function, <br />
* Ensure that every type variable occurs in the part after the "<tt>=></tt>"<br />
* Ensure that every type variable appears at least once outside a type function call.<br />
Alternatively, you can use data families, which create new types and are therefore injective. The following code works:<br />
<br />
<haskell>data family F a<br />
<br />
f :: F a -> F a<br />
f = undefined<br />
<br />
g :: F Int -> F Int<br />
g x = f x</haskell><br />
<br />
== References ==<br />
<br />
* [http://www.cse.unsw.edu.au/~chak/papers/CKPM05.html Associated Types with Class.] Manuel M. T. Chakravarty, Gabriele Keller, Simon Peyton Jones, and Simon Marlow. In ''Proceedings of The 32nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'05)'', pages 1-13, ACM Press, 2005.<br />
* [http://www.cse.unsw.edu.au/~chak/papers/CKP05.html Associated Type Synonyms.] Manuel M. T. Chakravarty, Gabriele Keller, and Simon Peyton Jones. In ''Proceedings of The Tenth ACM SIGPLAN International Conference on Functional Programming'', ACM Press, pages 241-253, 2005.<br />
* [http://www.cse.unsw.edu.au/~chak/papers/SCPD07.html System F with Type Equality Coercions.] Martin Sulzmann, Manuel M. T. Chakravarty, Simon Peyton Jones, and Kevin Donnelly. In ''Proceedings of The Third ACM SIGPLAN Workshop on Types in Language Design and Implementation'', ACM Press, 2007.<br />
* [http://www.cse.unsw.edu.au/~chak/papers/SPCS08.html Type Checking With Open Type Functions.] Tom Schrijvers, Simon Peyton-Jones, Manuel M. T. Chakravarty, Martin Sulzmann. In ''Proceedings of The 13th ACM SIGPLAN International Conference on Functional Programming'', ACM Press, pages 51-62, 2008.<br />
* [[Simonpj/Talk:FunWithTypeFuns | Fun with Type Functions]] Oleg Kiselyov, Simon Peyton Jones, Chung-chieh Shan (the source for this paper can be found at http://patch-tag.com/r/schoenfinkel/typefunctions/wiki ) <br />
<br />
[[Category:Type-level programming]]<br />
[[Category:Language extensions]]<br />
[[Category:GHC|Indexed types]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Automatic_Differentiation&diff=56961Automatic Differentiation2013-10-07T04:23:17Z<p>Davorak: fixed edit error</p>
<hr />
<div>'''Automatic Differentiation''' enables you to compute both the value of a function at a point and its derivative(s) at the same time.<br />
<br />
When using '''Forward Mode''' this roughly means that a numerical value is equipped with its derivative with respect to one of your inputs, and this derivative is updated accordingly on every function application.<br />
Let the number <math>x_0</math> be equipped with the derivative <math>x_1</math>: <math>\langle x_0,x_1 \rangle</math>.<br />
For example, the sine function is defined as:<br />
* <math>\sin\langle x_0,x_1 \rangle = \langle \sin x_0, x_1\cdot\cos x_0\rangle</math><br />
<br />
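A minimal forward-mode sketch of this idea in Haskell (illustrative only, not taken from any of the libraries listed below):<br />
<haskell><br />
-- a value paired with its derivative: <x0, x1><br />
data D = D Double Double deriving Show<br />
<br />
instance Num D where<br />
  D x x' + D y y'  = D (x + y) (x' + y')<br />
  D x x' * D y y'  = D (x * y) (x' * y + x * y')  -- product rule<br />
  negate (D x x')  = D (negate x) (negate x')<br />
  abs    (D x x')  = D (abs x) (x' * signum x)<br />
  signum (D x _)   = D (signum x) 0<br />
  fromInteger n    = D (fromInteger n) 0<br />
<br />
-- the rule from the formula above<br />
sinD :: D -> D<br />
sinD (D x x') = D (sin x) (x' * cos x)<br />
</haskell><br />
<br />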
Replacing this single derivative with a lazy list of them can enable you to compute an entire derivative tower at the same time.<br />
<br />
However, it becomes more difficult for vector functions, when computing the derivatives in reverse, when computing towers, and/or when trying to minimize the number of computations needed to compute all of the kth partial derivatives of an n-ary function.<br />
<br />
Forward mode is suitable when you have fewer arguments than outputs, because it requires multiple applications of the function, one for each input.<br />
<br />
Reverse mode is suitable when you have fewer results than inputs, because it requires multiple applications of the function, one for each output.<br />
<br />
Implementations:<br />
<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ad ad] (forward, forward w/ tower, reverse and other modes)<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fad fad] (forward mode tower)<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/rad rad] (reverse mode)<br />
* [[Vector-space]] (forward mode tower)<br />
* [http://comonad.com/haskell/monoids/dist/doc/html/monoids/Data-Ring-Module-AutomaticDifferentiation.html Data.Ring.Module.AutomaticDifferentiation](forward mode)<br />
<br />
== Power Series ==<br />
<br />
If you can compute all of the derivatives of a function, you can compute Taylor series from it.<br />
<br />
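A small sketch (hypothetical code, not the implementations linked below) of evaluating a Taylor series from a lazy list of derivatives at a point:<br />
<haskell><br />
-- partial sums of f (x0 + h), given [f x0, f' x0, f'' x0, ...]<br />
taylor :: [Double] -> Double -> [Double]<br />
taylor ds h = scanl1 (+) (zipWith3 (\d hn fac -> d * hn / fac) ds hs facs)<br />
  where hs   = iterate (* h) 1     -- 1, h, h^2, ...<br />
        facs = scanl (*) 1 [1..]   -- 0!, 1!, 2!, ...<br />
</haskell><br />
<br />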
Implementation with Haskell 98 type classes:<br />
http://code.haskell.org/~thielema/htam/src/PowerSeries/Taylor.hs<br />
<br />
With advanced type classes in [[Numeric Prelude]]:<br />
http://hackage.haskell.org/packages/archive/numeric-prelude/0.0.5/doc/html/MathObj-PowerSeries.html<br />
<br />
== See also ==<br />
<br />
* [[Functional differentiation]]<br />
* Chris Smith in Haskell-cafe on [http://www.haskell.org/pipermail/haskell-cafe/2007-November/035477.html Hit a wall with the type system]<br />
* Edward Kmett in StackOverflow on [http://stackoverflow.com/questions/2744973/is-there-any-working-implementation-of-reverse-mode-automatic-differentiation-for Is there any working implementation of reverse mode automatic differentiation for Haskell?]<br />
* Edward Kmett in Comonad.Reader on [http://comonad.com/reader/2010/reverse-mode-automatic-differentiation-in-haskell/ Reverse Mode Automatic Differentiation in Haskell]<br />
* Conal M. Elliott in [https://vimeo.com/album/126865/video/6622658 Beautiful Differentiation] (video) from the International Conference on Functional Programming (ICFP), Edinburgh 2009. Kindly recorded and posted by Malcolm Wallace.<br />
<br />
[[Category:Mathematics]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Automatic_Differentiation&diff=56960Automatic Differentiation2013-10-07T04:22:18Z<p>Davorak: Added video link to talk by Conal Elliott</p>
<hr />
<div>'''Automatic Differentiation''' enables you to compute both the value of a function at a point and its derivative(s) at the same time.<br />
<br />
When using '''Forward Mode''' this roughly means that a numerical value is equipped with its derivative with respect to one of your inputs, and this derivative is updated accordingly on every function application.<br />
Let the number <math>x_0</math> be equipped with the derivative <math>x_1</math>: <math>\langle x_0,x_1 \rangle</math>.<br />
For example, the sine function is defined as:<br />
* <math>\sin\langle x_0,x_1 \rangle = \langle \sin x_0, x_1\cdot\cos x_0\rangle</math><br />
<br />
Replacing this single derivative with a lazy list of them can enable you to compute an entire derivative tower at the same time.<br />
<br />
However, it becomes more difficult for vector functions, when computing the derivatives in reverse, when computing towers, and/or when trying to minimize the number of computations needed to compute all of the kth partial derivatives of an n-ary function.<br />
<br />
Forward mode is suitable when you have fewer arguments than outputs, because it requires multiple applications of the function, one for each input.<br />
<br />
Reverse mode is suitable when you have fewer results than inputs, because it requires multiple applications of the function, one for each output.<br />
<br />
Implementations:<br />
<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ad ad] (forward, forward w/ tower, reverse and other modes)<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fad fad] (forward mode tower)<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/rad rad] (reverse mode)<br />
* [[Vector-space]] (forward mode tower)<br />
* [http://comonad.com/haskell/monoids/dist/doc/html/monoids/Data-Ring-Module-AutomaticDifferentiation.html Data.Ring.Module.AutomaticDifferentiation](forward mode)<br />
<br />
== Power Series ==<br />
<br />
If you can compute all of the derivatives of a function, you can compute Taylor series from it.<br />
<br />
Implementation with Haskell 98 type classes:<br />
http://code.haskell.org/~thielema/htam/src/PowerSeries/Taylor.hs<br />
<br />
With advanced type classes in [[Numeric Prelude]]:<br />
http://hackage.haskell.org/packages/archive/numeric-prelude/0.0.5/doc/html/MathObj-PowerSeries.html<br />
<br />
== See also ==<br />
<br />
* [[Functional differentiation]]<br />
* Chris Smith in Haskell-cafe on [http://www.haskell.org/pipermail/haskell-cafe/2007-November/035477.html Hit a wall with the type system]<br />
* Edward Kmett in StackOverflow on [http://stackoverflow.com/questions/2744973/is-there-any-working-implementation-of-reverse-mode-automatic-differentiation-for Is there any working implementation of reverse mode automatic differentiation for Haskell?]<br />
* Edward Kmett in Comonad.Reader on [http://comonad.com/reader/2010/reverse-mode-automatic-differentiation-in-haskell/ Reverse Mode Automatic Differentiation in Haskell]<br />
* Conal M. Elliott in [https://vimeo.com/album/126865/video/6622658 Beautiful Differentiation] (video) from the International Conference on Functional Programming (ICFP), Edinburgh 2009. Kindly recorded and posted by Malcolm Wallace.<br />
<br />
[[Category:Mathematics]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Why_not_Pointed%3F&diff=56754Why not Pointed?2013-09-04T04:26:38Z<p>Davorak: </p>
<hr />
<div>The <code>Pointed</code> type class lives in the [http://hackage.haskell.org/package/pointed pointed library], moved from the [http://hackage.haskell.org/package/category-extras category-extras library].<br />
<br />
Edward Kmett, the author of category-extras, pointed, and many related packages, has since moved his focus to [http://hackage.haskell.org/package/semigroupoids semigroupoids] and [http://hackage.haskell.org/package/semigroups semigroups]. He finds them more interesting and useful, and considers <code>Pointed</code> to be historical now (he still provides the pointed package only because “people were whinging”).<br />
<br />
<br />
Expanded/improved information from Edward Kmett<br />
<br><br />
source:http://www.reddit.com/r/haskell/comments/1lokkj/lens_based_classy_prelude/cc1cm92<br />
<br />
<blockquote cite="http://www.reddit.com/r/haskell/comments/1lokkj/lens_based_classy_prelude/cc1cm92"><br />
To be fair, that assessment is a bit of a simplification of my position. ;)<br />
A slightly less simplified version of my position is that Pointed has no useful laws and almost all applications people point to for it are actually abuses of ad hoc relationships it happens to have for the instances it does offer.<br />
The one law Pointed offers is how it interoperates with fmap:<br />
<br><br />
fmap f . point = point . f<br />
<br><br />
but this is a free theorem of the type, so Pointed does not need Functor as a superclass, and now, unlike at the time of the original Typeclassopaedia when I first pushed Pointed on the community, it no longer has the superclass.<br />
Most usecases folks claim for it have to do with things like making sets of values with constructions like 'foldMap point' but there we're relying on an ad hoc relationship because we happen to know what this does operationally for (Set a). At first blush one might think it generalizes. After all, if we substitute [a] then we get a list of all of the elements, right?<br />
<br><br />
But when you go to drop in Maybe all of a sudden we're getting Just the result of smashing together the results as a Monoid. or Nothing. Not anything at all sensible. Without knowing the concrete instances you're getting for Monoid and Pointed you know nothing about the behavior.<br />
<br><br />
The notion of a semigroupoid on the other hand gives you an associativity condition and can be used to drive many useful operations. You can 'foldMap1' a non-empty container with a semigroup. You can 'traverse1' a non-empty container with a mere Apply (semi-applicative) instance. You can use it to do interesting zip-like things to maps.<br />
<br><br />
Including Pointed in the hierarchy comes at a cost. Including the semiapplicative (Apply)/semimonad (Bind) tiers would come at a cost.<br />
<br><br />
If you fully fleshed out the lattice of them you'd indeed get some finer grained control over what you could do with the standard classes.<br />
For instance lens would be able to give real types to affine traversals given Pointed -- one of their few legitimate uses!<br />
<br><br />
However, this comes at the price that you no longer really get to define good mutual definitions for these things. The full lattice.<br />
<br><br />
1.) Functor<br />
<br><br />
2.) Pointed<br />
<br><br />
3.) Functor + Pointed -- this is free theoremed, no class needed<br />
<br><br />
4.) Functor => Apply -- associativity law<br />
<br><br />
5.) Apply + Pointed => Applicative -- class needed, unit laws<br />
<br><br />
6.) Apply => Bind -- inhabited by Map k, IntMap, etc.<br />
<br><br />
7.) Bind + Pointed => Monad<br />
<br><br />
takes the user up to having to define 6 classes before breakfast just to get to Monad and back up to the functionality he had in 3 lines before we started tinkering with his code. Worse, some of these tiers are uninhabited by methods, they merely offer laws.<br />
<br><br />
Laws that newer users may not understand are critical to the correctness of their code, and which won't be pushed on them when they get a bag of constraints out of the typechecker. So the user can silently introduce code that relies on constraints it hasn't put properly on the type.<br />
<br><br />
There is also an understandable undercurrent in the community that having to deal with so many classes would be a bad idea.<br />
<br><br />
So this leads us to consider either a.) default superclass systems that can try to take some pain out or b.) removing layers from our über-system of classes.<br />
No extant default superclass proposal deals well with complex lattices, due to multiple candidate default definitions conflicting.<br />
<br><br />
So we try to focus on a few good abstractions, rather than capturing them all. Pointed has a bad power to weight ratio and induces people to write code they can't reason about.<br />
<br><br />
Or you can just say “people were whinging” =)<br />
</blockquote></div>Davorakhttps://wiki.haskell.org/index.php?title=Talk:Pipes&diff=55880Talk:Pipes2013-05-08T19:06:36Z<p>Davorak: </p>
<hr />
<div>I would like to request renaming this page to Pipes. According to the [http://www.haskell.org/haskellwiki/HaskellWiki:Guidelines editorial guidelines] I am bringing up the request in the Discussion section to solicit feedback before making this change.<br />
<br />
The History indicates that the original rationale was for consistency with Conduit and Iteratee pages, which are both singular, however I would prefer to name the page in the plural for several reasons:<br />
<br />
* People most often search for the plural rather than the singular form. I know this from my blog, which collects search traffic related to my pipes posts and I never see any searches for the singular 'pipe'.<br />
<br />
* The Conduit page has the same name as the corresponding library, which works well for `conduit`, but the `pipes` library name is plural and people are far more likely to search by the library name than the data type.<br />
<br />
* Pipe is not the central data type of the library. Proxy is technically the central abstraction, but again, nobody searches by the data type.<br />
<br />
* Using the data type as the page name would make sense if multiple libraries implemented the Pipe type (Actually, `conduit` does implement a variation on the Pipe type, but nobody would expect to find `conduit` here). For example, naming by the data type makes sense for the Iteratee page because there are multiple iteratee implementations, but for `pipes` there is only one authoritative implementation now that `pipes-core` has officially merged into `pipes`.<br />
<br />
*The focus of this page is a discussion of the library rather than the type. It would only be appropriate to name the page after the type if the page were a discussion of the theory behind the type itself.<br />
<br />
* Naming a page after a library's central data type causes disruption to the Haskell wiki if the library changes its underlying type scheme, which has been the case both for `conduit` and `pipes`.<br />
<br />
<br />
I was wondering why it was named Pipe; I thought it odd. It makes sense to me that it would be named Pipes. [[User:Davorak|Davorak]] 19:06, 8 May 2013 (UTC)</div>Davorakhttps://wiki.haskell.org/index.php?title=HaskellImplementorsWorkshop/2010&diff=55840HaskellImplementorsWorkshop/20102013-05-01T21:53:00Z<p>Davorak: /* Programme */</p>
<hr />
<div>= Haskell Implementors Workshop 2010 =<br />
<br />
The 2010 Haskell Implementors Workshop was held alongside [http://www.icfpconference.org/icfp2010/ ICFP 2010] in Baltimore.<br />
<br />
== Links ==<br />
<br />
* [[HaskellImplementorsWorkshop/2010/Call_for_Talks|Call for Talks]]<br />
<br />
== Dates ==<br />
<br />
* '''Friday 6 Aug''': Submissions due<br />
* '''Monday 23 Aug''': Notification<br />
* '''Friday 1 Oct''': Workshop<br />
<br />
== Organisers ==<br />
<br />
* '''Jean-Philippe Bernardy''' (Chalmers University of Technology)<br />
* '''Duncan Coutts''' - co-chair (Well-Typed LLP)<br />
* '''Iavor Diatchki''' (Galois)<br />
* '''Simon Marlow''' - co-chair (Microsoft Research)<br />
* '''Ben Lippmeier''' (University of New South Wales)<br />
* '''Neil Mitchell''' (Standard Chartered)<br />
<br />
== Programme ==<br />
<br />
8:00 8:45 Breakfast<br />
<br />
9:00 10:00 Session 1<br />
<br />
* '''''Hackage, Cabal and the Haskell Platform: The Second Year''''' (Don Stewart and Duncan Coutts)<br />
:: [http://donsbot.wordpress.com/2010/10/01/hackage-cabal-and-the-haskell-platform-the-second-year/ Slides](slides link broken), [http://www.vimeo.com/15462768 Video]<br />
<br />
* '''''Hackage 2.0: Serving Packages Better''''' (Matthew Gruen)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/gruen-hackage2.pdf Slides (PDF)](slides link broken), [http://www.vimeo.com/15464003 Video]<br />
<br />
10:00 10:30 Break<br />
<br />
10:30 12:30 Session 2<br />
<br />
* '''''Shake: A Better Make''''' (Neil Mitchell)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/ndm-shake.pdf Slides (PDF)](slides link broken), [http://www.vimeo.com/15465133 Video]<br />
<br />
* '''''Improving Cabal's Test Support''''' (Thomas Tuegel)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/tuegel-cabal-test.pdf Slides (PDF)](slides link broken), [http://www.vimeo.com/15466100 Video]<br />
<br />
* '''''Revamping Haddock Output''''' (Mark Lentczner)<br />
:: [http://mtnviewmark.wordpress.com/2010/10/01/haddock-revamp/ Slides], [http://www.vimeo.com/15466885 Video]<br />
<br />
* First '''''short-talks''''' session: 10-minute(ish) talks/demos, sign up on the day<br />
** GHC Status - Simon Peyton Jones [http://www.galois.com/~dons/talks/hiw-2010/spj-ghc7-status.pdf Slides], [http://www.vimeo.com/15467880 Video]<br />
** UHC Status - Atze Dijkstra [http://www.galois.com/~dons/talks/hiw-2010/atze-uhc-status.pdf Slides] , [http://www.vimeo.com/15467692 Video]<br />
<br />
12:30 2:00 Lunch<br />
<br />
2:00 3:00 Session 3<br />
<br />
* '''''Typed type-level functional programming in GHC''''' (Brent Yorgey)<br />
:: [http://www.cis.upenn.edu/~byorgey/talks/typetype-HIW-20101001.pdf Slides], [http://www.vimeo.com/15480577 Video]<br />
<br />
* Second '''''short-talks''''' session: 10-minute(ish) talks/demos, sign up on the day<br />
** DDC peekOn/pokeOn (Ben Lippmeier) [http://www.cse.unsw.edu.au/~benl/talks/PeekOn-HIW2010.pdf Slides] [http://www.vimeo.com/15481154 Video]<br />
** Scrap your zippers (Michael Adams) [http://vimeo.com/15481513 Video]<br />
** Parallel CASHflow (Kevin Hammond) [http://vimeo.com/15567270 Video]<br />
** Eden - a parallel Haskell (Oleg Lobachev) [http://vimeo.com/15567935 Video]<br />
** Performance visualization for multicore Haskell (Peter Wortmann) [http://vimeo.com/15568451 Video]<br />
** pandoc + lhs2TeX for literate programming (Tillmann Rendel) [http://vimeo.com/15481736 Video]<br />
<br />
3:00 3:30 Break<br />
<br />
3:30 4:30 Session 4<br />
<br />
* '''''Fibon -- a new benchmark suite for Haskell''''' (David Peixotto)<br />
:: [http://www.cs.rice.edu/~dmp4866/PDF/2010.Fibon-HIW-Talk.pdf Slides (PDF)], [http://vimeo.com/15568843 Video]<br />
<br />
* '''''Kansas Lava -- Using and Abusing GHC's Type Extensions''''' (Andrew Farmer)<br />
:: [http://www.scribd.com/doc/38559736/kansaslava-hiw10 Slides], [http://vimeo.com/15571220 Video]<br />
<br />
4:30 5:00 Break<br />
<br />
5:00 6:00 Session 5<br />
<br />
* '''''Scheduling Lazy Evaluation on Multicore''''' (Simon Marlow)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/simonmar-multicore.pdf Slides], [http://vimeo.com/15573590 Video]<br />
<br />
* '''''Beyond Haskell''''' discussion, chaired by Ben Lippmeier<br />
:: [http://www.cse.unsw.edu.au/~benl/talks/BeyondHaskell-HIW2010.pdf Slides], [http://vimeo.com/15576718 Video]<br />
<br />
<br />
[[Category:Community]]</div>Davorakhttps://wiki.haskell.org/index.php?title=HaskellImplementorsWorkshop/2010&diff=55839HaskellImplementorsWorkshop/20102013-05-01T21:51:43Z<p>Davorak: /* Programme */</p>
<hr />
<div>= Haskell Implementors Workshop 2010 =<br />
<br />
The 2010 Haskell Implementors Workshop was held alongside [http://www.icfpconference.org/icfp2010/ ICFP 2010] in Baltimore.<br />
<br />
== Links ==<br />
<br />
* [[HaskellImplementorsWorkshop/2010/Call_for_Talks|Call for Talks]]<br />
<br />
== Dates ==<br />
<br />
* '''Friday 6 Aug''': Submissions due<br />
* '''Monday 23 Aug''': Notification<br />
* '''Friday 1 Oct''': Workshop<br />
<br />
== Organisers ==<br />
<br />
* '''Jean-Philippe Bernardy''' (Chalmers University of Technology)<br />
* '''Duncan Coutts''' - co-chair (Well-Typed LLP)<br />
* '''Iavor Diatchki''' (Galois)<br />
* '''Simon Marlow''' - co-chair (Microsoft Research)<br />
* '''Ben Lippmeier''' (University of New South Wales)<br />
* '''Neil Mitchell''' (Standard Chartered)<br />
<br />
== Programme ==<br />
<br />
8:00 8:45 Breakfast<br />
<br />
9:00 10:00 Session 1<br />
<br />
* '''''Hackage, Cabal and the Haskell Platform: The Second Year''''' (Don Stewart and Duncan Coutts)<br />
:: [http://donsbot.wordpress.com/2010/10/01/hackage-cabal-and-the-haskell-platform-the-second-year/ Slides](slides link broken), [http://www.vimeo.com/15462768 Video]<br />
<br />
* '''''Hackage 2.0: Serving Packages Better''''' (Matthew Gruen)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/gruen-hackage2.pdf Slides (PDF)](slides link broken), [http://www.vimeo.com/15464003 Video]<br />
<br />
10:00 10:30 Break<br />
<br />
10:30 12:30 Session 2<br />
<br />
* '''''Shake: A Better Make''''' (Neil Mitchell)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/ndm-shake.pdf Slides (PDF)], [http://www.vimeo.com/15465133 Video]<br />
<br />
* '''''Improving Cabal's Test Support''''' (Thomas Tuegel)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/tuegel-cabal-test.pdf Slides (PDF)], [http://www.vimeo.com/15466100 Video]<br />
<br />
* '''''Revamping Haddock Output''''' (Mark Lentczner)<br />
:: [http://mtnviewmark.wordpress.com/2010/10/01/haddock-revamp/ Slides], [http://www.vimeo.com/15466885 Video]<br />
<br />
* First '''''short-talks''''' session: 10-minute(ish) talks/demos, sign up on the day<br />
** GHC Status - Simon Peyton Jones [http://www.galois.com/~dons/talks/hiw-2010/spj-ghc7-status.pdf Slides], [http://www.vimeo.com/15467880 Video]<br />
** UHC Status - Atze Dijkstra [http://www.galois.com/~dons/talks/hiw-2010/atze-uhc-status.pdf Slides] , [http://www.vimeo.com/15467692 Video]<br />
<br />
12:30 2:00 Lunch<br />
<br />
2:00 3:00 Session 3<br />
<br />
* '''''Typed type-level functional programming in GHC''''' (Brent Yorgey)<br />
:: [http://www.cis.upenn.edu/~byorgey/talks/typetype-HIW-20101001.pdf Slides], [http://www.vimeo.com/15480577 Video]<br />
<br />
* Second '''''short-talks''''' session: 10-minute(ish) talks/demos, sign up on the day<br />
** DDC peekOn/pokeOn (Ben Lippmeier) [http://www.cse.unsw.edu.au/~benl/talks/PeekOn-HIW2010.pdf Slides] [http://www.vimeo.com/15481154 Video]<br />
** Scrap your zippers (Michael Adams) [http://vimeo.com/15481513 Video]<br />
** Parallel CASHflow (Kevin Hammond) [http://vimeo.com/15567270 Video]<br />
** Eden - a parallel Haskell (Oleg Lobachev) [http://vimeo.com/15567935 Video]<br />
** Performance visualization for multicore Haskell (Peter Wortmann) [http://vimeo.com/15568451 Video]<br />
** pandoc + lhs2TeX for literate programming (Tillmann Rendel) [http://vimeo.com/15481736 Video]<br />
<br />
3:00 3:30 Break<br />
<br />
3:30 4:30 Session 4<br />
<br />
* '''''Fibon -- a new benchmark suite for Haskell''''' (David Peixotto)<br />
:: [http://www.cs.rice.edu/~dmp4866/PDF/2010.Fibon-HIW-Talk.pdf Slides (PDF)], [http://vimeo.com/15568843 Video]<br />
<br />
* '''''Kansas Lava -- Using and Abusing GHC's Type Extensions''''' (Andrew Farmer)<br />
:: [http://www.scribd.com/doc/38559736/kansaslava-hiw10 Slides], [http://vimeo.com/15571220 Video]<br />
<br />
4:30 5:00 Break<br />
<br />
5:00 6:00 Session 5<br />
<br />
* '''''Scheduling Lazy Evaluation on Multicore''''' (Simon Marlow)<br />
:: [http://www.galois.com/~dons/talks/hiw-2010/simonmar-multicore.pdf Slides], [http://vimeo.com/15573590 Video]<br />
<br />
* '''''Beyond Haskell''''' discussion, chaired by Ben Lippmeier<br />
:: [http://www.cse.unsw.edu.au/~benl/talks/BeyondHaskell-HIW2010.pdf Slides], [http://vimeo.com/15576718 Video]<br />
<br />
<br />
[[Category:Community]]</div>Davorakhttps://wiki.haskell.org/index.php?title=HaRe&diff=55801HaRe2013-04-29T18:58:52Z<p>Davorak: </p>
<hr />
<div>= The HaRe Project =<br />
<br />
This page is a little stale. The latest version of HaRe, from [https://github.com/alanz/HaRe/tree/ghc-api github], works with the GHC API, and the latest updates seem to be on the [https://plus.google.com/communities/116266567145785623821 google+ community page.] [[User:Davorak|Davorak]] 18:58, 29 April 2013 (UTC)<br />
<br />
Currently, HaRe is a full refactoring tool for automated refactoring of Haskell 98 programs. It is integrated with Emacs and Vim. Future plans are to extend support to Haskell 2010 and other language extensions.<br />
<br />
The project is led by [http://www.cs.st-andrews.ac.uk/~chrisb/ Chris Brown].<br />
<br />
* [http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html Project homepage]<br />
* [http://hackage.haskell.org/package/HaRe Stable release] (on hackage)<br />
* [http://www.cs.kent.ac.uk/projects/refactor-fp/hare/demo.html Screenshots]<br />
* [https://github.com/RefactoringTools/HaRe/wiki GHC port] (In Progress)<br />
<br />
== Roadmap (sketch) ==<br />
* API decisions (ghc-api / Programatica / haskell-src-exts)<br />
* Extending refactorings to cope with new extensions<br />
* Simpler generics for tree traversals.<br />
* Query / transform language support<br />
* Better examples, user extension documentation<br />
* Maintenance and portability long term.<br />
* Possibly use Scion, to abstract editor<- scion-{hare} ->ghc-api layer.<br />
<br />
== Infrastructure ==<br />
* HaRe is currently built upon the Programatica project for access to an AST and Token Stream and Strafunski for generic tree traversal. <br />
* HaRe uses the Programatica front-end to parse full Haskell projects into an AST with static semantics, and a token stream. Programatica's token stream contains layout and comments together with position information, which HaRe uses (in combination with the AST) to pretty-print the refactored programs. HaRe attempts to produce refactored output that looks as close as possible to the original program.<br />
* Strafunski is used for tree traversal and transformation. In particular, Strafunski has a powerful and expressible mode of traversal, allowing all nodes, some nodes or one node to be traversed/transformed.<br />
* Static semantics are vital for refactorings that require binding information of variables. Renaming, for instance, renames all occurrences of a name within its scope. This scope information is currently retrieved from the AST. In addition to this, static semantics are also vital for HaRe to identify particular expressions for refactoring and to also pretty-print the refactored output (together with the token stream).<br />
* The refactorings in HaRe are relatively low-level, working directly on the AST for transformation. An underlying [http://www.cs.kent.ac.uk/projects/refactor-fp/hare/haddock/0.5/API/RefacUtils.html API] is used to build the transformations and queries. <br />
* Some refactorings require types, which are retrieved using hint.<br />
* HaRe is integrated into Emacs and Vim: when a new refactoring is added, the build system generates the Emacs and Vim scripts automatically.<br />
<br />
== Some Problems to Address ==<br />
* Programatica is only Haskell 98. HaRe currently will not parse a Haskell program that is not 98.<br />
* Programatica is no longer maintained: we need a parser that is maintained and up-to-date with the current language standard and extensions.<br />
* Scrap-your-boilerplate may be more suitable these days for generics.<br />
* We require static semantics. Packages such as haskell-src-exts do not provide this. We also require layout and comments to be preserved.<br />
* Type information in the AST would be useful.<br />
* Moving to Scion for editor interfacing. This would eliminate the dependency for Emacs and Vim, and make HaRe more usable to people using other systems.<br />
* HaRe currently uses a two-tier state monad. The first layer being Programatica's parser monad, and then HaRe's own state monad underneath. The reason for this is to plumb the token stream (and AST) through the program implicitly. Transformations can then be performed using (typically) an update function. It may be wise to simplify this design.<br />
* The implementation of refactorings requires considerable technical knowledge and has a steep learning curve. Abstracting away from the internals of HaRe and moving towards a DSL for refactoring would make it easier to write new refactorings.</div>Davorakhttps://wiki.haskell.org/index.php?title=HaRe&diff=55800HaRe2013-04-29T18:58:29Z<p>Davorak: </p>
<hr />
<div>= The HaRe Project =<br />
<br />
This page is a little stale. The latest version of HaRe, from [https://github.com/alanz/HaRe/tree/ghc-api github], works with the GHC API, and the latest updates seem to be on the [https://plus.google.com/communities/116266567145785623821 google+ community page.]<br />
<br />
Currently, HaRe is a full refactoring tool for automated refactoring of Haskell 98 programs. It is integrated with Emacs and Vim. Future plans are to extend support to Haskell 2010 and other language extensions.<br />
<br />
The project is led by [http://www.cs.st-andrews.ac.uk/~chrisb/ Chris Brown].<br />
<br />
* [http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html Project homepage]<br />
* [http://hackage.haskell.org/package/HaRe Stable release] (on hackage)<br />
* [http://www.cs.kent.ac.uk/projects/refactor-fp/hare/demo.html Screenshots]<br />
* [https://github.com/RefactoringTools/HaRe/wiki GHC port] (In Progress)<br />
<br />
== Roadmap (sketch) ==<br />
* API decisions (ghc-api / Programatica / haskell-src-exts)<br />
* Extending refactorings to cope with new extensions<br />
* Simpler generics for tree traversals.<br />
* Query / transform language support<br />
* Better examples, user extension documentation<br />
* Maintenance and portability long term.<br />
* Possibly use Scion, to abstract editor<- scion-{hare} ->ghc-api layer.<br />
<br />
== Infrastructure ==<br />
* HaRe is currently built upon the Programatica project for access to an AST and Token Stream and Strafunski for generic tree traversal. <br />
* HaRe uses the Programatica front-end to parse full Haskell projects into an AST with static semantics, and a token stream. Programatica's token stream contains layout and comments together with position information, which HaRe uses (in combination with the AST) to pretty-print the refactored programs. HaRe attempts to produce refactored output that looks as close as possible to the original program.<br />
* Strafunski is used for tree traversal and transformation. In particular, Strafunski has a powerful and expressible mode of traversal, allowing all nodes, some nodes or one node to be traversed/transformed.<br />
* Static semantics are vital for refactorings that require binding information of variables. Renaming, for instance, renames all occurrences of a name within its scope. This scope information is currently retrieved from the AST. In addition to this, static semantics are also vital for HaRe to identify particular expressions for refactoring and to also pretty-print the refactored output (together with the token stream).<br />
* The refactorings in HaRe are relatively low-level, working directly on the AST for transformation. An underlying [http://www.cs.kent.ac.uk/projects/refactor-fp/hare/haddock/0.5/API/RefacUtils.html API] is used to build the transformations and queries. <br />
* Some refactorings require types, which are retrieved using hint.<br />
* HaRe is integrated into Emacs and Vim: when a new refactoring is added, the build system generates the Emacs and Vim scripts automatically.<br />
<br />
== Some Problems to Address ==<br />
* Programatica is only Haskell 98. HaRe currently will not parse a Haskell program that is not 98.<br />
* Programatica is no longer maintained: we need a parser that is maintained and up-to-date with the current language standard and extensions.<br />
* Scrap-your-boilerplate may be more suitable these days for generics.<br />
* We require static semantics. Packages such as haskell-src-exts do not provide this. We also require layout and comments to be preserved.<br />
* Type information in the AST would be useful.<br />
* Moving to Scion for editor interfacing. This would eliminate the dependency for Emacs and Vim, and make HaRe more usable to people using other systems.<br />
* HaRe currently uses a two-tier state monad. The first layer being Programatica's parser monad, and then HaRe's own state monad underneath. The reason for this is to plumb the token stream (and AST) through the program implicitly. Transformations can then be performed using (typically) an update function. It may be wise to simplify this design.<br />
* The implementation of refactorings requires considerable technical knowledge and has a steep learning curve. Abstracting away from the internals of HaRe and moving towards a DSL for refactoring would make it easier to write new refactorings.</div>Davorakhttps://wiki.haskell.org/index.php?title=Library/Streams&diff=55675Library/Streams2013-04-09T21:36:48Z<p>Davorak: Added link to Iteratee I/O and the io-streams hackage package</p>
<hr />
<div>[[Category:Libraries]]<br />
== Introduction ==<br />
<br />
=== Streams: the extensible I/O library ===<br />
<br />
I (Bulat Ziganshin) developed a new I/O library in 2006 that IMHO is so sharp that it can eventually replace the current I/O facilities based on using Handles. The main advantage of the new library is its strong modular design using<br />
typeclasses. The library consists of small independent modules, each<br />
implementing one type of stream (file, memory buffer, pipe) or one<br />
part of common stream functionality (buffering, Char encoding,<br />
locking). 3rd-party libs can easily add new stream types and new common<br />
functionality. Other benefits of the new library include support for<br />
streams functioning in any monad, Hugs and GHC compatibility, high<br />
speed and an easy migration path from the existing I/O library.<br />
<br />
The Streams library is heavily based on the HVIO module written by John<br />
Goerzen. I especially want to thank John for his clever design and<br />
implementation. Really, I just renamed HVIO to Stream and presented<br />
this as my own work. :) Further development direction was inspired<br />
by the "New I/O library" written by Simon Marlow.<br />
<br />
---<br />
<br />
More recent developments (as of 2013-04) have focused on [[Iteratee_I/O]]; in particular, [http://hackage.haskell.org/package/io-streams io-streams] is similar in its focus on I/O and replacing file handles.<br />
<br />
=== Simple Streams ===<br />
<br />
The key concept of the lib is the Stream class, whose interface mimics<br />
familiar interface for Handles, just with "h" replaced with "v" in<br />
function names:<br />
<br />
<haskell><br />
class (Monad m) => Stream m h where<br />
vPutStrLn :: h -> String -> m ()<br />
vGetContents :: h -> m String<br />
vIsEOF :: h -> m Bool<br />
vClose :: h -> m ()<br />
....................<br />
</haskell><br />
<br />
This means that you already know how to use any stream! The Stream interface<br />
currently has 8 implementations: a Handle itself, raw files, pipes,<br />
memory buffers and string buffers. Future plans include support for<br />
memory-mapped files, sockets, circular memory buffers for interprocess<br />
communication and UArray-based streams.<br />
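<br />
Since every implementation shares this interface, code written against the<br />
Stream class works unchanged with files, pipes and memory buffers alike. As a<br />
small sketch (the helper name "vCopyLines" is made up here, and "vGetLine" is<br />
assumed to exist by analogy with "hGetLine"):<br />
<br />
<haskell><br />
-- Copy lines from any input stream to any output stream.<br />
-- Works for anything in class Stream: files, pipes, memory buffers.<br />
vCopyLines :: (Stream m src, Stream m dst) => src -> dst -> m ()<br />
vCopyLines src dst = do<br />
    eof <- vIsEOF src<br />
    if eof<br />
      then return ()<br />
      else do vGetLine src >>= vPutStrLn dst<br />
              vCopyLines src dst<br />
</haskell><br />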
<br />
By themselves, these Stream implementations are rather simple. Basically,<br />
to implement a new Stream type, it's enough to provide vPutBuf/vGetBuf<br />
operations, or even vGetChar/vPutChar. The latter way, although<br />
inefficient, allows us to implement streams that can work in any monad.<br />
StringReader and StringBuffer streams use this to provide string-based<br />
Stream class implementations both for IO and ST monads. Yes, you can<br />
use the full power of Stream operations inside the ST monad!<br />
<br />
=== Layers of functionality ===<br />
<br />
All additional functionality is implemented via Stream Transformers,<br />
which are just parameterized Streams, whose parameters also<br />
implement the Stream interface. This allows you to apply any number of stream<br />
transformers to the raw stream and then use the result as an ordinary<br />
Stream. For example:<br />
<br />
<haskell><br />
h <- openRawFD "test" WriteMode<br />
>>= bufferBlockStream<br />
>>= withEncoding utf8<br />
>>= withLocking<br />
</haskell><br />
<br />
This code creates a new FD, which represents a raw file, and then adds<br />
to this Stream buffering, Char encoding and locking functionality. The<br />
result type of "h" is something like this:<br />
<br />
<haskell><br />
WithLocking (WithEncoding (BufferedBlockStream FD))<br />
</haskell><br />
<br />
The complete type, as well as all the intermediate types, implements the Stream<br />
interface. Each transformer intercepts operations corresponding to its<br />
nature, and passes the rest through. For example, the encoding transformer<br />
intercepts only vGetChar/vPutChar operations and translates them to<br />
the sequences of vGetByte/vPutByte calls of the lower-level stream.<br />
The locking transformer just wraps any operation in the locking wrapper.<br />
<br />
We can trace, for example, the execution of a "vPutBuf" operation on the<br />
above-constructed Stream. First, the locking transformer acquires a lock<br />
and then passes this call to the next level. Then the encoding transformer does<br />
nothing and passes this call to the next level. The buffering<br />
transformer flushes the current buffer and passes the call further.<br />
Finally, FD itself performs the operation after all these<br />
preparations, and on the return path the locking transformer releases<br />
its lock.<br />
<br />
As another example, the "vPutChar" call on this Stream is<br />
transformed (after locking) into several "vPutByte" calls by the<br />
encoding transformer, and these bytes go to the buffer in the<br />
buffering transformer, with or without a subsequent call to the FD's<br />
"vPutBuf".<br />
<br />
=== Modularity ===<br />
<br />
As you can see, stream transformers really are independent of each<br />
other. This allows you to use them on any stream and in any combination<br />
(but you should apply them in proper order - buffering, then Char<br />
encoding, then locking). As a result, you can apply to the stream<br />
only the transformers that you really need. If you don't use the<br />
stream in multiple threads, you don't need to apply the locking<br />
transformer. If you don't use any encodings other than Latin-1 -- or<br />
don't use text I/O at all -- you don't need an encoding transformer.<br />
Moreover, you may not even need to know anything about the UserData transformer<br />
until you actually need to use it :)<br />
<br />
Both streams and stream transformers can be implemented by 3rd-party<br />
libraries. Streams and transformers from arbitrary libraries will<br />
seamlessly work together as long as they properly implement the Stream<br />
interface. My future plans include implementation of an on-the-fly<br />
(de)compression transformer and I will be happy to see 3rd-party<br />
transformers that intercept vGetBuf/vPutBuf calls and use select(),<br />
kqueue() and other methods to overlap I/O operations.<br />
<br />
=== Speed ===<br />
<br />
A quick comment about speed: it's fast enough -- 10-50 MB/s (depending<br />
on the type of operation) on a 1GHz CPU. The Handle operations, for comparison,<br />
run at 1-10 MB/s on the same computer. But that doesn't mean that every<br />
operation in the new library is 10 times faster. Strict I/O (including<br />
vGetChar/vPutChar) is a LOT faster. I included a demonstration of this<br />
fascinating speed as "Examples/wc.hs". If you need a really high speed,<br />
don't forget to increase the buffer size with "vSetBuffering".<br />
<br />
On the other hand, lazy I/O (including any operations that receive or return<br />
strings) shows only a modest speedup. This is limited by Haskell/GHC itself and<br />
I can't do much to get around these limits. Instead, I plan to provide support<br />
for I/O using packed strings. This will allow writing I/O-intensive Haskell<br />
programs that are as fast as their C counterparts.<br />
<br />
Other sources of slowness include the locking transformer (if you need it,<br />
try using "lock" around speed-critical algorithms) and the complex class<br />
structure, which may be avoidable by using "forall" types (I'm not sure;<br />
Simon Marlow can enlighten this topic).<br />
<br />
The library includes benchmarking code in the file "Examples/StreamsBenchmark.hs"<br />
<br />
<br />
<br />
== Overview of Stream transformers ==<br />
<br />
=== Buffering ===<br />
<br />
There are three buffering transformers. Each buffering transformer<br />
implements support for vGetByte, vPutChar, vGetContents and other<br />
byte- and text-oriented operations for the streams, which by themselves<br />
support only vGetBuf/vPutBuf (or vReceiveBuf/vSendBuf) operations.<br />
<br />
The first transformer can be applied to any stream supporting<br />
vGetBuf/vPutBuf. This is applied by the operation "bufferBlockStream". The<br />
well-known vSetBuffering/vGetBuffering operations are intercepted by<br />
this transformer and used to control buffer size. At this moment, only<br />
BlockBuffering is implemented, while LineBuffering and NoBuffering are<br />
only in the planning stages.<br />
<br />
Two other transformers can be applied to streams that implement<br />
vReceiveBuf/vSendBuf operations -- that is, streams whose data<br />
reside in memory, including in-memory streams and memory-mapped<br />
files. In these cases, the buffering transformer doesn't need to allocate<br />
a buffer itself, it just requests from the underlying stream the address and<br />
size of the next available portion of data. Nevertheless, the final<br />
result is the same -- we get support for all byte- and text-oriented<br />
I/O operations. The "bufferMemoryStream" operation can be applied to any<br />
memory-based stream to add buffering to it. The "bufferMemoryStreamUnchecked"<br />
operation (which implements the third buffering transformer) can be used instead,<br />
if you can guarantee that I/O operations can't overflow the used buffer.<br />
<br />
=== Encoding ===<br />
<br />
The Char encoding transformer allows you to encode each Char written to the<br />
stream as a sequence of bytes, implementing UTF and other encodings.<br />
This transformer can be applied to any stream implementing<br />
vGetByte/vPutByte operations and in return it implements<br />
vGetChar/vPutChar and all other text-oriented operations. This<br />
transformer can be applied to a stream with the "withEncoding encoding"<br />
operation, where `encoding` may be `latin1`, `utf8` or any other<br />
encoding that you (or a 3rd-party lib) implement. Look at the<br />
"Data.CharEncoding" module to see how to implement new encodings.<br />
Encoding of streams created with the "withEncoding" operation can be<br />
queried with "vGetEncoding". See examples of their usage in the file<br />
"Examples/CharEncoding.hs"<br />
<br />
=== Locking ===<br />
<br />
The locking transformer ensures that the stream is properly shared by<br />
several threads. You already know enough about its basic usage --<br />
"withLocking" applies this transformer to the stream and all the<br />
required locking is performed automagically. You can also use "lock"<br />
operations to acquire the lock explicitly during multiple operations:<br />
<br />
<haskell><br />
lock h $ \h -> do<br />
savedpos <- vTell h<br />
vSeek h AbsoluteSeek 100<br />
vPutStr h ":-)"<br />
vSeek h AbsoluteSeek savedpos<br />
</haskell><br />
<br />
See the file "Examples/Locking.hs" for examples of using locking transformer.<br />
<br />
=== Attaching user data ===<br />
<br />
This transformer allows you to attach arbitrary data to any Stream. It does<br />
nothing extraordinary, except that the stream with attached data is again a<br />
proper Stream. See an example of its usage in the file "Examples/UserData.hs"<br />
<br />
== Overview of Stream [[type]]s ==<br />
<br />
=== Handle (legacy way to access files/sockets) ===<br />
<br />
"Handle" is an instance of the Stream class, with a straightforward implementation.<br />
You can use the<br />
Char encoding transformer with Handles. Although Handles implement<br />
buffering and locking by themselves, you may also be interested in<br />
applying these transformers to the Handle type. This has<br />
benefits -- "bufferBlockStream" works faster than internal Handle<br />
buffering, and the locking transformer enables the use of a "lock" operation to<br />
create a lock around a sequence of operations. Moreover, the locking<br />
transformer should be used to ensure proper multi-threading operation<br />
of Handle with added encoding or buffering facilities.<br />
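<br />
For example, an existing Handle can be wrapped in the same transformers used<br />
elsewhere on this page (a sketch; whether you need each transformer depends<br />
on your usage):<br />
<br />
<haskell><br />
h <- openFile "test" WriteMode -- plain Handle from System.IO<br />
     >>= withEncoding utf8    -- add UTF-8 text encoding<br />
     >>= withLocking          -- add a lock for multi-threaded use<br />
vPutStrLn h "hello"           -- use it through the Stream interface<br />
</haskell><br />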
<br />
=== FD (new way to access files) ===<br />
<br />
The new method of using files, independent of the existing I/O<br />
library, is implemented with the FD type. FD is just an Int representing a<br />
POSIX file descriptor and the FD type implements only basic Stream I/O<br />
operations - vGetBuf and vPutBuf. So, to create a full-featured FD-based<br />
stream, you need to apply buffering transformers. Therefore, the library<br />
defines two ways to open files with FD - openRawFD/openRawBinaryFD<br />
just create an FD, while openFD/openBinaryFD create an FD and immediately<br />
apply a buffering transformer (bufferBlockStream) to it. In most cases<br />
you will use the latter operations. Both pairs mimic the arguments and<br />
behaviour of well-known Handle operations openFile/openBinaryFile, so<br />
you already know how to use them. Other transformers may then be applied<br />
as needed. So, the above-mentioned example can be abbreviated to:<br />
<br />
<haskell><br />
h <- openFD "test" WriteMode<br />
>>= withEncoding utf8<br />
>>= withLocking<br />
</haskell><br />
<br />
Thus, to switch from the existing I/O library to using Streams, you<br />
need only to replace "h" with "v" in the names of Handle operations, and<br />
replace openFile/openBinaryFile calls with openFD/openBinaryFD while<br />
adding the "withLocking" transformer to files used in multiple threads.<br />
That's all!<br />
<br />
For example, the following code:<br />
<br />
<haskell><br />
h <- openFile "test" ReadMode<br />
text <- hGetContents h<br />
hClose h<br />
</haskell><br />
<br />
should be translated to:<br />
<br />
<haskell><br />
h <- openFD "test" ReadMode<br />
-- >>= withLocking -- needed only for multi-threaded usage<br />
text <- vGetContents h<br />
vClose h<br />
</haskell><br />
<br />
<br />
File "Examples/FD.hs" will show you the FD usage.<br />
<br />
<br />
In order to work with stdin/stdout/stderr via FDs, you should open them in the same way:<br />
<br />
<haskell><br />
stdinStream <- bufferBlockStream fdStdIn <br />
>>= withEncoding utf8 -- optional, required only for using non-Latin1 encoding<br />
>>= withLocking -- optional, required only to use this Stream in concurrent Haskell threads<br />
<br />
stdoutStream <- bufferBlockStream fdStdOut<br />
>>= withEncoding utf8 -- see above<br />
>>= withLocking -- ...<br />
<br />
stderrStream <- bufferBlockStream fdStdErr<br />
>>= withEncoding utf8 -- ...<br />
>>= withLocking -- ...<br />
</haskell><br />
<br />
Please note that Streams currently supports only block buffering; there is no line-buffering or no-buffering support.<br />
<br />
=== MemBuf (memory-resident stream) ===<br />
<br />
MemBuf is a stream type, that keeps its contents in memory buffer.<br />
There are two types of MemBufs you can create - you can either open<br />
existing memory buffer with "openMemBuf ptr size" or create new one<br />
with "createMemBuf initsize". MemBuf opened by "openMemBuf" will be<br />
never resized or moved in memory, and will not be freed by "vClose".<br />
MemBuf created by "createMemBuf" will grow as needed, can be manually<br />
resized by "vSetFileSize" operation, and is automatically freed by<br />
"vClose".<br />
<br />
Actually, raw MemBufs are created by the "createRawMemBuf" and "openRawMemBuf"<br />
operations, while createMemBuf/openMemBuf incorporate an additional<br />
"bufferMemoryStream" call (as you should remember, buffering adds vGetChar,<br />
vPutStr and other text- and byte-I/O operations on top of vReceiveBuf<br />
and vSendBuf). You can also apply Char encoding and locking<br />
transformers to these streams. The "saveToFile" and "readFromFile" operations<br />
provide an easy way to save/restore buffer contents in a file.<br />
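<br />
A minimal usage sketch (the buffer size and file name are illustrative, and<br />
the argument order of "saveToFile" is assumed here):<br />
<br />
<haskell><br />
buf <- createMemBuf 64        -- growable in-memory stream<br />
vPutStrLn buf "hello, MemBuf"<br />
saveToFile buf "membuf.dat"   -- dump the buffer contents to a file<br />
vClose buf                    -- frees the buffer (it was created, not opened)<br />
</haskell><br />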
<br />
File "Examples/MemBuf.hs" demonstrates the usage of MemBuf.<br />
<br />
=== FunctionsMemoryStream ===<br />
<br />
This Stream type allows implementation of arbitrary streams, just by<br />
providing three functions that implement vReceiveBuf, vSendBuf and cleanup<br />
operations. It seems that this Stream type is of interest only for my<br />
own program and can be scrutinized only as an example of creating 3rd-party<br />
Stream types. It is named "FunctionsMemoryStream", see the sources if you<br />
are interested.<br />
<br />
=== StringReader & StringBuffer (String-based streams) ===<br />
<br />
The four remaining Stream types were part of the HVIO module, and I copied their<br />
description from there:<br />
<br />
In addition to Handle, there are several pre-defined stream types for<br />
your use. 'StringReader' is a particularly interesting one. At<br />
creation time, you pass it a String. Its contents are read lazily<br />
whenever a read call is made. It can be used, therefore, to implement<br />
filters (simply initialize it with the result from, say, a map over<br />
hGetContents from another Stream object), codecs, and simple I/O<br />
testing. Because it is lazy, it need not hold the entire string in<br />
memory. You can create a 'StringReader' with a call to<br />
'newStringReader'.<br />
<br />
'StringBuffer' is a similar type, but with a different purpose. It<br />
provides a full interface like Handle (it supports read, write and<br />
seek operations). However, it maintains an in-memory buffer with the<br />
contents of the file, rather than an actual on-disk file. You can<br />
access the entire contents of this buffer at any time. This can be<br />
quite useful for testing I/O code, or for cases where existing APIs<br />
use I/O, but you prefer a String representation. Note however that<br />
this stream type is very inefficient. You can create a 'StringBuffer'<br />
with a call to 'newStringBuffer'.<br />
<br />
One significant improvement over the original HVIO library is that<br />
'StringReader' and 'StringBuffer' can work not only in the IO monad, but<br />
also in the ST monad.<br />
<br />
=== Pipes (passing data between Haskell threads) ===<br />
<br />
Finally, there are pipes. These pipes are analogous to the Unix pipes<br />
that are available from System.Posix, but they don't require Unix and are<br />
implemented purely in Haskell. When you create a pipe, you actually get two Stream<br />
objects: a 'PipeReader' and a 'PipeWriter'. You must use the<br />
'PipeWriter' in one thread and the 'PipeReader' in another thread.<br />
Data that's written to the 'PipeWriter' will then be available for<br />
reading with the 'PipeReader'. The pipes are implemented completely<br />
with existing Haskell threading primitives, and require no special<br />
operating system support. Unlike Unix pipes, these pipes cannot be<br />
used across a fork(). Also unlike Unix pipes, these pipes are<br />
portable and interact well with Haskell threads. A new pipe can be<br />
created with a call to 'newHVIOPipe'.<br />
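The idea of building a pipe purely from Haskell threading primitives can be sketched with the standard Control.Concurrent machinery. This is only an illustration of the concept, not the library's actual implementation &mdash; a Chan plays the role of the pipe, and the two forked threads play the roles of the PipeWriter and PipeReader ends:

```haskell
import Control.Concurrent

main :: IO ()
main = do
  ch   <- newChan        -- the "pipe": an unbounded FIFO channel
  done <- newEmptyMVar
  -- the "writer" end, running in its own thread
  _ <- forkIO $ mapM_ (writeChan ch) ["hello", "world"]
  -- the "reader" end, in another thread; Chan preserves write order
  _ <- forkIO $ do
         a <- readChan ch
         b <- readChan ch
         putMVar done (a, b)
  r <- takeMVar done
  print r  -- ("hello","world")
```

Because readChan blocks until data is available, no operating-system support is needed: the scheduling is done entirely by the Haskell runtime, which is exactly the property the pipes above rely on.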
<br />
<br />
<br />
== Additional details ==<br />
<br />
=== Support for [[GHC]], [[Hugs]] and other compilers ===<br />
<br />
The library is compatible with [[GHC]] 6.4.<br />
<br />
<br />
The library fully supports [[Hugs]] 2003-2006, but:<br />
<br />
1) support for FD and MMFile is temporarily disabled because I don't know how<br />
to build DLLs<br />
<br />
2) Hugs 2003 doesn't include support for "instance Bits Word" and vGetBuf/vPutBuf,<br />
so you need to add these implementations manually or delete the lines that use them<br />
(look for "2003" in the sources)<br />
<br />
3) WinHugs doesn't support preprocessing, so I included the MakeHugs.cmd script<br />
to preprocess source files using cpphs<br />
<br />
<br />
The main disadvantage of the library is that it supports only Hugs and GHC,<br />
because it uses extensions to the type class system (namely, multi-parameter<br />
type classes with functional dependencies, MPTC+FD). I think it can be made<br />
Haskell 98-compatible at the cost of excluding support for non-IO<br />
monads. I will try to make such a stripped-down version for other compilers<br />
if people are interested.<br />
<br />
=== Downloading and installation ===<br />
<br />
To get Streams 0.1.7, you can download one of<br />
http://files.pupeno.com/software/streams/Streams-0.1.7.tar.bz2<br />
http://files.pupeno.com/software/streams/Streams-0.1.7.tar.gz<br />
or you can get it from its repository by running:<br />
<br />
darcs get --tag=0.1.7 http://software.pupeno.com/Streams-0.1 Streams-0.1.7<br />
<br />
You can also download and keep track of the 0.1 branch, which is<br />
supposed to remain stable and only receive bug fixes, by running<br />
<br />
darcs get http://software.pupeno.com/Streams-0.1/<br />
<br />
and then run 'darcs pull' inside it to get further changes.<br />
<br />
To get the latest development version (unstable and constantly changing),<br />
run:<br />
<br />
darcs get http://software.pupeno.com/Streams/<br />
<br />
Note: at the moment, while the project is being darcsified, you are<br />
not going to find anything useful there, but we expect that to change.<br />
<br />
Preferably, send patches for the code to<br />
[mailto:Bulat.Ziganshin@gmail.com Bulat.Ziganshin@gmail.com]<br />
and patches for other parts of the library to Pupeno. Documentation may<br />
be edited right at the project homepage, which remains<br />
http://haskell.org/haskellwiki/Library/Streams<br />
<br />
Thanks to Jeremy Shaw, the library is now cabalized. To install it, run the command:<br />
<br />
make install<br />
<br />
Directory "Examples" contains examples of using the library.<br />
<br />
=== Stage of development ===<br />
<br />
The library is currently at the beta stage. It contains a number of<br />
known minor problems and an unknown number of yet-to-be-discovered bugs.<br />
It is not properly documented, doesn't include QuickCheck tests, and not<br />
all "h*" operations have their "v*" equivalents yet.<br />
If anyone wants to join this effort in order to help fix these oddities<br />
and prepare the lib for inclusion in the standard libraries suite, I would<br />
be really happy. :) I will also be happy (although much less ;) to see<br />
bug reports and suggestions about its interface and internal<br />
organization. It's just a first public version, so we still can change<br />
everything here!<br />
<br />
In particular, this wiki page is the official library documentation.<br />
Please continue to improve it and add more information about using the library.<br />
Feel free to ask me about library usage via email:<br />
[mailto:Bulat.Ziganshin@gmail.com Bulat.Ziganshin@gmail.com]<br />
<br />
=== Changelog ===<br />
<br />
User-visible improvements made in Streams library since version 0.1 (6 Feb 2006)<br />
<br />
0.1a (6 Feb 2006)<br />
- Fixed bug: System.MMFile was uncompilable on non-Windows systems<br />
<br />
0.1b (9 Feb 2006)<br />
- Fixed bug: very slow WithLocking.vGetLine<br />
- Fixed bug: System.FD was also uncompilable on non-Windows systems<br />
<br />
0.1c (12 Feb 2006)<br />
- Fixed bug: System.FD modified one more time to reach Unix compatibility<br />
<br />
0.1d (13 Feb 2006)<br />
- Fixed bug: BufferedBlockStream.vGetLine caused exception<br />
* CharEncoding transformer was made faster, but vSetEncoding is no longer supported<br />
<br />
0.1e (8 Jun 2006)<br />
- Fixed bug: "openFD name WriteMode" didn't truncate files on unixes<br />
* Full library now released under BSD3 license, thanks to John Goerzen<br />
+ Now cabalized, thanks to Jeremy Shaw<br />
<br />
0.1.6 (Oct 14 2006)<br />
* Added compatibility with just released GHC 6.6<br />
<br />
0.1.7 (Nov 24 2006)<br />
* true support for GHC 6.6<br />
* support for files larger than 4 GB on Windows (see FD5gb.hs example)<br />
* files are now opened in shared mode on all systems<br />
* haddock'ized internal docs<br />
* ready to be included in any unix packaging system</div>Davorakhttps://wiki.haskell.org/index.php?title=Iteratee_I/O&diff=55674Iteratee I/O2013-04-09T21:29:49Z<p>Davorak: Added io-streams to the list.</p>
<hr />
<div>Iteratee I/O is a way to avoid the problems that can occur with lazy I/O. Iteratees work by making the I/O actions explicit, which makes their behavior easier to reason about.<br />
<br />
== The problem with lazy I/O ==<br />
<br />
As a beginner, you probably used Haskell's lazy I/O through the <code>System.IO</code> module. However, while it is good enough for simple programs, its unpredictability makes it unsuitable for practical use.<br />
<br />
For example, a common beginner mistake is to close a file before one has finished reading it:<br />
<br />
<haskell><br />
wrong = do<br />
fileData <- withFile "test.txt" ReadMode hGetContents<br />
putStr fileData<br />
</haskell><br />
<br />
The problem is that <code>withFile</code> closes the handle before <code>fileData</code> is forced. The correct way is to pass all the code that consumes the data to <code>withFile</code>:<br />
<br />
<haskell><br />
right = withFile "test.txt" ReadMode $ \handle -> do<br />
fileData <- hGetContents handle<br />
putStr fileData<br />
</haskell><br />
<br />
Here, the data is consumed before <code>withFile</code> finishes.<br />
<br />
Although this is easily fixed, the type system does not enforce the correct solution. Even worse, if you use the former code, it won't even raise an error &ndash; it will just fail silently and return an empty string. Many years passed before a satisfactory solution to the ''streaming data problem'' was found.<br />
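The silent failure is easy to see in a short, self-contained program. This is a sketch: the file name is arbitrary, and forcing the string with <code>evaluate (length s)</code> is just one way to consume it before the handle is closed:

```haskell
import System.IO
import Control.Exception (evaluate)

main :: IO ()
main = do
  writeFile "demo.txt" "hello\n"
  -- Wrong: withFile closes the handle before the lazy string is forced,
  -- so the contents silently come back empty.
  broken <- withFile "demo.txt" ReadMode hGetContents
  -- Right: force the contents while the handle is still open.
  ok <- withFile "demo.txt" ReadMode $ \h -> do
          s <- hGetContents h
          _ <- evaluate (length s)  -- forces the whole string
          return s
  print (broken, ok)  -- ("","hello\n")
```

No exception is raised for the first read: closing a semi-closed handle simply fixes the lazy string at whatever has been read so far, which here is nothing.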
<br />
== How iteratees work ==<br />
<br />
When you "step" an iteratee, it reads a chunk of data, updates the internal state and returns a new iteratee along with the data it read. Because an iteratee is simply a function with state, many iteratees can be composed together to form a pipeline.<br />
<br />
Some implementations also provide a resource management layer that releases resources automatically when they are no longer needed. This is very useful in a server, where sockets and file handles are scarce.<br />
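The "stepping" idea can be made concrete with a toy model in plain Haskell. This is only a sketch for intuition &mdash; the type and the names (<code>Iteratee</code>, <code>step</code>, <code>run</code>, <code>sumI</code>) are invented here and do not match the actual types of any of the libraries listed below:

```haskell
-- A toy iteratee: either finished with a result, or a function
-- awaiting the next chunk (Nothing signals end of input).
data Iteratee i r = Done r | Cont (Maybe i -> Iteratee i r)

-- "Stepping": feed one chunk to an iteratee, getting a new iteratee back.
step :: Iteratee i r -> Maybe i -> Iteratee i r
step (Done r) _ = Done r
step (Cont k) c = k c

-- Drive an iteratee over a list of chunks, then signal end of input.
run :: Iteratee i r -> [i] -> Maybe r
run it cs = case foldl step it (map Just cs ++ [Nothing]) of
              Done r -> Just r
              Cont _ -> Nothing

-- An iteratee that sums the Int chunks it is fed.
sumI :: Iteratee Int Int
sumI = go 0
  where go acc = Cont $ \c -> case c of
          Nothing -> Done acc
          Just n  -> go (acc + n)

main :: IO ()
main = print (run sumI [1,2,3,4])  -- Just 10
```

Note that the consumer never performs I/O itself: the driver (here, <code>run</code>) decides where the chunks come from, which is what makes the I/O explicit and easy to reason about.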
<br />
== Implementations ==<br />
<br />
; [http://hackage.haskell.org/package/iteratee iteratee] : The original iteratee library, by Oleg Kiselyov.<br />
; [http://hackage.haskell.org/package/iterIO iterIO] : Yet another implementation.<br />
; [http://hackage.haskell.org/package/enumerator enumerator] : Used in Snap. It does not use any extensions, so it will work with most Haskell compilers.<br />
; [http://hackage.haskell.org/package/pipes pipes] : A more recent implementation, which strives to be more elegant than existing libraries.<br />
; [http://hackage.haskell.org/package/pipes-core pipes-core] : Fork of pipes that adds resource finalization, though pipes now has its own finalization as well, via [http://hackage.haskell.org/package/pipes-safe pipes-safe].<br />
; [http://hackage.haskell.org/package/conduit conduit] : Popular implementation designed with practical use in mind, created by the author of Yesod. Recently heavily influenced by pipes.<br />
; [http://hackage.haskell.org/package/liboleg liboleg] : An evolving collection of Oleg Kiselyov's Haskell modules (depends on the package unix and will therefore not compile on Windows systems).<br />
; [http://hackage.haskell.org/package/io-streams io-streams] : Focuses on streaming I/O and on having a simpler type framework than the conduit and pipes packages.<br />
<br />
== Essays by Oleg ==<br />
<br />
* Oleg's writings: [http://okmij.org/ftp/Streams.html#iteratee Incremental multi-level input processing with left-fold enumerator: predictable, high-performance, safe, and elegant]<br />
* [http://okmij.org/ftp/Haskell/Iteratee/Iteratee.hs An implementation by Oleg, iteratees on Chars and Strings]<br />
* [http://okmij.org/ftp/Haskell/Iteratee/IterateeM.hs A general library by Oleg] <br />
<br />
== Other discussions ==<br />
<br />
* [http://johnlato.blogspot.sg/2012/06/understandings-of-iteratees.html Understandings of Iteratees]<br />
* [http://themonadreader.wordpress.com/2010/05/12/issue-16/ The Monad.Reader Issue 16]; see the section "Iteratee: Teaching an Old Fold New Tricks" by John W. Lato<br />
* [http://www.yesodweb.com/book/conduit Yesod Book: Conduits]<br />
* [http://sites.google.com/site/haskell/notes/lazy-io-considered-harmful-way-to-go-left-fold-enumerator Lazy IO considered harmful; way to go, Left-fold enumerator!]<br />
* [http://www.tiresiaspress.us/haskell/iteratee/ A Darcs repository of an alternative implementation]<br />
* [http://www.scs.stanford.edu/11au-cs240h/notes/iteratee.html Stanford CS240h lecture on iteratee]<br />
<br />
== Users of Iteratee I/O ==<br />
<br />
* [http://snapframework.com Snap]: The Snap web framework<br />
* [http://hackage.haskell.org/package/yaml yaml]: Low-level binding to the libyaml C library<br />
* [http://hackage.haskell.org/package/usb-0.4 usb 0.4]: Communicate with USB devices<br />
* [http://hackage.haskell.org/package/sstable sstable]: SSTables in Haskell<br />
* [http://hackage.haskell.org/package/wai WAI]: a Web Application Interface for Haskell web frameworks (used by [http://www.yesodweb.com Yesod]).<br />
<br />
== See also ==<br />
<br />
* [[Enumerator and iteratee]]<br />
* [[Iteratee]]<br />
<br />
[[Category:Idioms]]</div>Davorakhttps://wiki.haskell.org/index.php?title=IRC_channel&diff=55582IRC channel2013-03-19T15:06:02Z<p>Davorak: Added descriptions of @tell | @ask command.</p>
<hr />
<div>Internet Relay Chat is a worldwide text chat service with many thousands<br />
of users among various irc networks.<br />
<br />
The Freenode IRC network hosts the very large #haskell channel, and we've had<br />
up to 1046<br />
concurrent users, making the channel consistently<br />
[http://irc.netsplit.de/channels/details.php?room=%23haskell&net=freenode one of the most popular]<br />
of the thousands of channels on freenode. One famous<br />
resident is [[Lambdabot]], another is [http://hpaste.org hpaste] (see<br />
the [[#Bots|Bots]] section below).<br />
<br />
The IRC channel can be an excellent place to learn more about Haskell,<br />
and to just keep in the loop on new things in the Haskell world. Many<br />
new developments in the Haskell world first appear on the IRC channel.<br />
<br />
Since 2009, the Haskell channel has grown large enough that we've split it in two parts:<br />
<br />
* #haskell, for all the usual things<br />
* #haskell-in-depth, for those seeking in-depth or more theoretical discussion<br />
<br />
As always, #haskell remains the primary place for new user questions.<br />
<br />
{| border="0" align="right"<br />
|+ '''#haskell visualized'''<br />
|-<br />
| [[Image:Haskell-current.png|thumb|The social graph, Jan 2008]]<br />
| [[Image:Irc-raw.png|thumb|Daily traffic since 2004]]<br />
|-<br />
| [[Image:Nick-activity.png|thumb|Growth]]<br />
| [[Image:Haskell-wordle-irc.png|thumb|Noun map]]<br />
|}<br />
<br />
== Getting there ==<br />
<br />
If you point your irc client to [irc://chat.freenode.net/haskell chat.freenode.net] and then join the #haskell channel, you'll be there. Alternatively, you can try http://java.freenode.net/ or http://webchat.freenode.net/, which connect inside the browser.<br />
<br />
Example, using [http://www.irssi.org/ irssi]:<br />
<br />
$ irssi -c chat.freenode.net -n myname -w mypassword<br />
/join #haskell<br />
<br />
Tip, if you're using Emacs to edit your Haskell sources then why not use it to chat about Haskell? Check out [http://www.emacswiki.org/cgi-bin/wiki/EmacsIRCClient ERC], The Emacs IRC client. Invoke it like this and follow the commands:<br />
<br />
M-x erc-select<br />
...<br />
/join #haskell<br />
<br />
[[Image:Irc--haskell-screenshot.png|frame|A screenshot of an irssi session in #haskell]]<br />
<br />
== Principles ==<br />
<br />
The #haskell channel is a very friendly, welcoming place to hang out,<br />
teach and learn. The goal of #haskell is to encourage learning and<br />
discussion of Haskell, functional programming, and programming in<br />
general. As part of this we welcome newbies, and encourage teaching of<br />
the language.<br />
<br />
Part of the #haskell success comes from the fact that the community<br />
is quite tight-knit &mdash; we know each other &mdash; it's not just a homework<br />
channel. As a result, many collaborative projects have arisen between<br />
Haskell irc channel citizens.<br />
<br />
To maintain the friendly, open culture, the following is required:<br />
<br />
* Low to zero tolerance for ridiculing questions. Insulting new users is unacceptable. New Haskell users should feel entirely comfortable asking questions.<br />
<br />
* Helpful answers should be encouraged with <code>name++</code> karma points, in public, as a reward for providing a good answer.<br />
<br />
* Avoid getting frustrated by negative comments and ambiguous questions. Approach them by asking for details (i.e. [http://en.wikipedia.org/wiki/Socratic_method Socratic questioning]), rather than challenging the competence of the writer (ad hominem). As the channel grows, we see a diverse range of people with different programming backgrounds getting accustomed to Haskell. Be patient and take satisfaction from spreading knowledge.<br />
<br />
== History ==<br />
<br />
The #haskell channel appeared in the late 90s, and really got going<br />
in early 2001, with the help of Shae Erisson (aka shapr).<br />
<br />
A fairly extensive analysis of the traffic on #haskell over the years is<br />
[http://www.cse.unsw.edu.au/~dons/irc/ kept here].<br />
<br />
== Related channels ==<br />
<br />
In addition to the main Haskell channel there are also:<br />
<br />
{| border="1" cellspacing="0" cellpadding="5" align="center"<br />
! Channel<br />
! Purpose<br />
|-<br />
| #haskell-br<br />
| Brazilian Portuguese (pt_BR) speakers<br />
|-<br />
| #haskell.cz<br />
| Czech speakers (UTF-8)<br />
|- <br />
| #haskell.de<br />
| German speakers<br />
|-<br />
| #haskell.dut<br />
| Dutch speakers<br />
|-<br />
| #haskell.es<br />
| Spanish speakers<br />
|-<br />
| #haskell.fi<br />
| Finnish speakers<br />
|-<br />
| #haskell-fr (note the hyphen!)<br />
| French speakers <br />
|-<br />
| #haskell.hr<br />
| Croatian speakers<br />
|-<br />
| #haskell.it <br />
| Italian speakers<br />
|-<br />
| #haskell.jp <br />
| Japanese speakers<br />
|-<br />
| #haskell.no <br />
| Norwegian speakers<br />
|-<br />
| #haskell.pt<br />
| Portuguese speakers<br />
|-<br />
| #haskell-pl<br />
| Polish speakers<br />
|-<br />
| #haskell.ru <br />
| Russian speakers. Seems that most of them migrated to Jabber conference (haskell@conference.jabber.ru).<br />
|-<br />
| #haskell_ru <br />
| Russian speakers again, in UTF-8. For those who prefer the good ol' IRC channel with a lambdabot.<br />
|-<br />
| #haskell.se <br />
| Swedish speakers<br />
|-<br />
| #haskell.tw<br />
| Chinese speakers (mainly in Taiwan)<br />
|-<br />
| #haskell-blah <br />
| Haskell people talking about anything except Haskell itself<br />
|-<br />
| #haskell-books <br />
| Authors organizing the collaborative writing of the [http://en.wikibooks.org/wiki/Haskell Haskell wikibook] and other books or tutorials.<br />
|-<br />
| #haskell-game<br />
| The hub for Haskell-based [[Game Development|game development]]<br />
|-<br />
| #haskell-in-depth<br />
| Slower-paced discussion of use, theory, implementation etc., with no monad tutorials!<br />
|-<br />
| #haskell-iphone<br />
| Haskell-based [[iPhone]] development<br />
|-<br />
| #haskell-overflow<br />
| Overflow conversations<br />
|-<br />
| #haskell-web<br />
| Friendly, practical discussion of Haskell web app/framework/server development<br />
|-<br />
| '''Platform-specific:'''<br />
|<br />
|-<br />
| #arch-haskell <br />
| [[Arch Linux]]/ specific Haskell conversations<br />
|-<br />
| #fedora-haskell<br />
| [[Fedora]] Haskell SIG<br />
|-<br />
| #gentoo-haskell <br />
| [[Gentoo]]/Linux specific Haskell conversations<br />
|-<br />
| '''Projects using haskell:'''<br />
|<br />
|-<br />
| #darcs <br />
| [[Darcs]] revision control system<br />
|-<br />
| #hackage<br />
| Haskell's software distribution infrastructure<br />
|-<br />
| #happs<br />
| [http://happstack.com Happstack] web framework<br />
|-<br />
| #hledger<br />
| [http://hledger.org hledger] accounting tools and library<br />
|-<br />
| #leksah<br />
| [http://leksah.org Leksah] IDE for Haskell development<br />
|-<br />
| #perl6 <br />
| [http://www.pugscode.org Perl 6] development (plenty of Haskell chat there too)<br />
|-<br />
| #snapframework<br />
| [http://snapframework.com/ Snap] web framework<br />
|-<br />
| #xmonad<br />
| [http://xmonad.org Xmonad] tiling window manager<br />
|-<br />
| #yesod<br />
| [http://yesodweb.com Yesod] web framework<br />
|}<br />
<br />
== Logs ==<br />
<br />
'''Logs''' are kept at http://tunes.org/~nef/logs/haskell/<br />
<br />
<!-- anywhere else? ircbrowse.com is a goner, apparently --><br />
<br />
== Bots ==<br />
<br />
There are various bots on the channel. Their names and usage are described here.<br />
<br />
=== lambdabot ===<br />
<br />
[[Lambdabot]] is both the name of a software package and a bot on the channel. It provides many useful services for visitors to the IRC channel. It is available as a haskell package and can be integrated into ghci. Details on the software are found on a [[Lambdabot|separate wiki page]].<br />
<br />
Here is its interface for the IRC user:<br />
<br />
lambdabot's commands are prefixed with a '@' sign.<br />
<br />
{| border="1" cellspacing="0" cellpadding="5" align="center"<br />
! Command<br />
! Usage<br />
|-<br />
| @help<br />
| displays help for other commands, though help text is not available for all commands.<br />
|-<br />
| @type EXPR or ':t' EXPR<br />
| shows the type of an expression<br />
|-<br />
| @kind TYPECONSTRUCTOR<br />
| shows the kind of a type constructor<br />
|-<br />
| @run EXPR or '>' EXPR<br />
| evaluates EXPR<br />
|-<br />
| @pl FUNCTION<br />
| shows a [[pointfree]] version of FUNCTION<br />
|-<br />
| @pointful FUNCTION or '@unpl' FUNCTION<br />
| shows a 'pointful' version of FUNCTION<br />
|-<br />
| @tell <nick> <msg> -- same as @ask<br />
| The next time <nick> speaks in the channel, they will be notified that they have a message pending and told how to retrieve it.<br />
|}<br />
<br />
=== preflex ===<br />
<br />
is the name of a lambdabot with more commands/plugins enabled. It is run by ?? To talk to preflex, write <tt>preflex: command ARGS</tt><br />
<br />
{| border="1" cellspacing="0" cellpadding="5" align="center"<br />
! Command<br />
! Usage<br />
|-<br />
| help COMMAND<br />
| displays help for the given command.<br />
|-<br />
| list<br />
| lists all plugins with their commands<br />
|-<br />
| NICK++ / NICK--<br />
| in/decrements the karma of NICK.<br />
|-<br />
| karma NICK<br />
| shows the karma of NICK<br />
|-<br />
| seen NICK<br />
| shows information about the last message of a user<br />
|-<br />
| tell NICK MSG / ask NICK MSG<br />
| sends MSG to NICK when they next become active.<br />
|-<br />
| xseen<br />
| ''see 'seen' ?? any difference ?''<br />
|-<br />
| quote NICK<br />
| prints a random quote of NICK<br />
|-<br />
| remember NAME QUOTE<br />
| associates NAME with QUOTE; it can later be retrieved with 'quote'<br />
|-<br />
| ...<br />
| ...<br />
|}<br />
<br />
=== hpaste ===<br />
The hpaste bot provides a notification interface to the [http://hpaste.org hpaste pastebin]. [[Hpaste.el|Emacs integration]] is available.<br />
<br />
''Usage?''<br />
<br />
''Not online often !? ''<br />
<br />
=== hackage ===<br />
The hackage bot provides real-time notifications of new package uploads to [http://hackage.haskell.org Hackage].<br />
<br />
== Locations ==<br />
<br />
To get an overview of where everybody on the channel might<br />
be, physically, please visit [[Haskell user locations]].<br />
<br />
<br />
[[Category:Community]]</div>Davorakhttps://wiki.haskell.org/index.php?title=State_Monad&diff=55478State Monad2013-02-25T15:12:07Z<p>Davorak: Add link to wikibook on state monad</p>
<hr />
<div>The State Monad by Example<br />
<br />
This is a short tutorial on the state monad. Emphasis is placed on intuition. The types have been simplified to protect the innocent.<br />
<br />
Another longer walkthrough of the state monad can be found in the wiki book section [https://en.wikibooks.org/wiki/Haskell/Understanding_monads/State Understanding monads/State.]<br />
<br />
=Foundations=<br />
==Primitives==<br />
<br />
<haskell><br />
runState (return 'X') 1<br />
</haskell><br />
<blockquote><br />
('X',1)<br />
</blockquote><br />
<br />
Return set the result value but left the state unchanged.<br />
Comments:<br />
return 'X' :: State Int Char<br />
runState (return 'X') :: Int -> (Char, Int)<br />
initial state = 1 :: Int<br />
final value = 'X' :: Char<br />
final state = 1 :: Int<br />
result = ('X', 1) :: (Char, Int)<br />
<br />
----<br />
<haskell><br />
runState get 1<br />
</haskell><br />
<blockquote><br />
(1,1)<br />
</blockquote><br />
<br />
Get set the result value to the state and left the state unchanged.<br />
Comments:<br />
get :: State Int Int<br />
runState get :: Int -> (Int, Int)<br />
initial state = 1 :: Int<br />
final value = 1 :: Int<br />
final state = 1 :: Int<br />
<br />
<br />
----<br />
<haskell><br />
runState (put 5) 1<br />
</haskell><br />
<blockquote><br />
((),5)<br />
</blockquote><br />
<br />
Put set the result value to () and set the state value.<br />
Comments:<br />
put 5 :: State Int ()<br />
runState (put 5) :: Int -> ((),Int)<br />
initial state = 1 :: Int<br />
final value = () :: ()<br />
final state = 5 :: Int<br />
<br />
==Combinations==<br />
<br />
Because (State s) forms a monad, values can be combined together with (>>=) or do{}.<br />
<br />
<haskell><br />
runState (do { put 5; return 'X' }) 1<br />
</haskell><br />
<blockquote><br />
('X',5)<br />
</blockquote><br />
Comments:<br />
do { put 5; return 'X' } :: State Int Char <br />
runState (do { put 5; return 'X' }) :: Int -> (Char,Int)<br />
initial state = 1 :: Int<br />
final value = 'X' :: Char<br />
final state = 5 :: Int<br />
<br />
<br />
----<br />
<haskell><br />
postincrement = do { x <- get; put (x+1); return x }<br />
runState postincrement 1<br />
</haskell><br />
<blockquote><br />
(1,2)<br />
</blockquote><br />
<br />
----<br />
<haskell><br />
predecrement = do { x <- get; put (x-1); get }<br />
runState predecrement 1<br />
</haskell><br />
<blockquote><br />
(0,0)<br />
</blockquote><br />
<br />
==Other Functions==<br />
<haskell><br />
runState (modify (+1)) 1<br />
</haskell><br />
<blockquote><br />
((),2)<br />
</blockquote><br />
<br />
----<br />
<haskell><br />
runState (gets (+1)) 1<br />
</haskell><br />
<blockquote><br />
(2,1)<br />
</blockquote><br />
<br />
----<br />
<haskell><br />
evalState (gets (+1)) 1<br />
</haskell><br />
<blockquote><br />
2<br />
</blockquote><br />
<br />
----<br />
<haskell><br />
execState (gets (+1)) 1<br />
</haskell><br />
<blockquote><br />
1<br />
</blockquote><br />
<br />
=Implementation=<br />
<br />
At its heart, a value of type (State s a) is a function from initial state s to final value a and final state s: (a,s). These are usually wrapped, but shown here unwrapped for simplicity.<br />
<br />
Return leaves the state unchanged and sets the result:<br />
<haskell><br />
-- ie: (return 5) 1 -> (5,1)<br />
return :: a -> State s a<br />
return x s = (x,s)<br />
</haskell><br />
<br />
Get leaves state unchanged and sets the result to the state:<br />
<haskell><br />
-- ie: get 1 -> (1,1)<br />
get :: State s s<br />
get s = (s,s)<br />
</haskell><br />
<br />
Put sets the result to () and sets the state:<br />
<haskell><br />
-- ie: (put 5) 1 -> ((),5)<br />
put :: s -> State s ()<br />
put x s = ((),x)<br />
</haskell><br />
<br />
----<br />
The helpers are simple variations of these primitives:<br />
<haskell><br />
modify :: (s -> s) -> State s ()<br />
modify f = do { x <- get; put (f x) }<br />
<br />
gets :: (s -> a) -> State s a<br />
gets f = do { x <- get; return (f x) }<br />
</haskell><br />
<br />
The functions evalState and execState just select one of the two values returned by runState: evalState returns the final result, while execState returns the final state:<br />
<haskell><br />
evalState :: State s a -> s -> a<br />
evalState act = fst . runState act<br />
<br />
execState :: State s a -> s -> s<br />
execState act = snd . runState act<br />
</haskell><br />
<br />
----<br />
Combining two actions is the trickiest bit in the whole scheme. To combine do { x <- act1; act2 x } we need a function that takes an initial state, runs act1 to get an intermediate result and state, feeds the intermediate result to act2, and then runs that action with the intermediate state to get a final result and a final state:<br />
<br />
<haskell><br />
(>>=) :: State s a -> (a -> State s b) -> State s b<br />
(act1 >>= fact2) s = act2 is<br />
    where (iv,is) = act1 s<br />
          act2    = fact2 iv<br />
</haskell><br />
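Putting the unwrapped pieces together gives a small runnable sketch. A type synonym and primed names are used here because the real State in Control.Monad.State wraps this function in a newtype, and the plain names would clash with the Prelude and mtl:

```haskell
-- the unwrapped representation: a function from state to (value, state)
type State' s a = s -> (a, s)

returnS :: a -> State' s a
returnS x s = (x, s)

getS :: State' s s
getS s = (s, s)

putS :: s -> State' s ()
putS x _ = ((), x)

-- run act1, feed its value to fact2, and thread the state through
bindS :: State' s a -> (a -> State' s b) -> State' s b
bindS act1 fact2 s = fact2 iv is
  where (iv, is) = act1 s

-- postincrement from earlier, written with the unwrapped primitives
postincrement :: State' Int Int
postincrement = getS `bindS` \x -> putS (x + 1) `bindS` \_ -> returnS x

main :: IO ()
main = print (postincrement 1)  -- (1,2)
```

Running a State' value is just function application, which is why runState disappears from this version entirely.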
<br />
=Complete and Concrete Example 1=<br />
A simple example that demonstrates the use of the standard Control.Monad.State monad. It's a simple string-processing algorithm.<br />
<br />
<haskell><br />
module StateGame where<br />
<br />
import Control.Monad.State<br />
<br />
-- Example use of State monad<br />
-- Processes a string over the alphabet {a,b,c}.<br />
-- The game is to produce a number from the string.<br />
-- By default the game is off; a 'c' toggles the<br />
-- game on and off. An 'a' gives +1 and a 'b' gives -1.<br />
-- E.g <br />
-- 'ab' = 0<br />
-- 'ca' = 1<br />
-- 'cabca' = 0<br />
-- State = game is on or off & current score<br />
-- = (Bool, Int)<br />
<br />
type GameValue = Int<br />
type GameState = (Bool, Int)<br />
<br />
playGame :: String -> State GameState GameValue<br />
playGame [] = do<br />
(_, score) <- get<br />
return score<br />
<br />
playGame (x:xs) = do<br />
(on, score) <- get<br />
case x of<br />
'a' | on -> put (on, score + 1)<br />
'b' | on -> put (on, score - 1)<br />
'c' -> put (not on, score)<br />
_ -> put (on, score)<br />
playGame xs<br />
<br />
startState = (False, 0)<br />
<br />
main = print $ evalState (playGame "abcaaacbbcabbab") startState<br />
</haskell><br />
<br />
=Complete and Concrete Example 2=<br />
<haskell><br />
-- a concrete and simple example of using the State monad<br />
<br />
<br />
import Control.Monad.State<br />
<br />
-- non-monadic version of a very simple state example<br />
-- the state is an integer.<br />
-- the value will always be the negative of the state<br />
<br />
type MyState = Int<br />
<br />
valFromState :: MyState -> Int<br />
valFromState s = -s<br />
nextState :: MyState->MyState<br />
nextState x = 1+x<br />
<br />
type MyStateMonad = State MyState <br />
<br />
-- This is it, the State transformation: add 1 to the state, and return -1 * the new state as the computed value.<br />
getNext :: MyStateMonad Int<br />
getNext = state (\st -> let st' = nextState st in (valFromState st', st'))<br />
<br />
<br />
<br />
-- advance the state three times. <br />
inc3::MyStateMonad Int<br />
inc3 = getNext >>= \x -><br />
getNext >>= \y -><br />
getNext >>= \z -><br />
return z <br />
<br />
-- advance the state three times with do sugar<br />
inc3Sugared::MyStateMonad Int<br />
inc3Sugared = do x <- getNext<br />
y <- getNext<br />
z <- getNext<br />
return z<br />
<br />
-- advance the state three times without inspecting computed values<br />
inc3DiscardedValues::MyStateMonad Int<br />
inc3DiscardedValues = getNext >> getNext >> getNext<br />
<br />
-- advance the state three times without inspecting computed values with do sugar<br />
inc3DiscardedValuesSugared::MyStateMonad Int<br />
inc3DiscardedValuesSugared = do <br />
getNext<br />
getNext<br />
getNext<br />
<br />
<br />
-- advance state 3 times, compute the square of the state<br />
inc3AlternateResult::MyStateMonad Int<br />
inc3AlternateResult = do getNext<br />
getNext<br />
getNext<br />
s<-get<br />
return (s*s)<br />
<br />
<br />
-- advance state 3 times, ignoring computed value, and then once more<br />
inc4::MyStateMonad Int<br />
inc4 = do <br />
inc3AlternateResult<br />
getNext<br />
<br />
main = <br />
do<br />
print (evalState inc3 0) -- -3<br />
print (evalState inc3Sugared 0) -- -3<br />
print (evalState inc3DiscardedValues 0) -- -3<br />
print (evalState inc3DiscardedValuesSugared 0) -- -3<br />
print (evalState inc3AlternateResult 0) -- 9<br />
print (evalState inc4 0) -- -4<br />
</haskell><br />
[[Category:Tutorials]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Talk:Introduction&diff=55306Talk:Introduction2013-01-24T20:14:50Z<p>Davorak: /* Proposed change to introductory paragraph */</p>
<hr />
<div>Added the quote by Graham Klyne.<br />
<br />
Over the years I've received numerous complaints about the quicksort example. But none of the complainers sent me anything better so it's still here. Anyone want to come up with a better example?<br />
<br />
--[[User:John Peterson|John Peterson]] 00:39, 26 January 2006 (UTC)<br />
<br />
:Perhaps an in-place quicksort? &mdash;[[User:Ashley Y|Ashley Y]] 04:24, 26 January 2006 (UTC)<br />
<br />
:If you want to offer "proof" that functional programming is a good idea, starting out with an invalid example pretty much wipes out your credibility. Taking the quicksort example offered, you are saying that if memory and CPU use do not matter ... but they very often do (certainly for large sort operations). Reading up on Haskell looks too much like a waste of time. Why the '''equivalent''' quicksort code in Haskell is larger than the C code requires an explanation as to why this is a good idea - and is not suitable material for the introduction. --[[User:PrestonBannister|PrestonBannister]] 01:51, 12 August 2008 (UTC)<br />
<br />
::The good idea here is that the algorithm becomes rather obvious when presented in Haskell. [[User:DonStewart|dons]] 04:45, 12 August 2008 (UTC)<br />
<br />
:::That's because the C quicksort and the Haskell "quicksort" use '''two different algorithms'''. A naive bubblesort implementation in C is rather obvious, too.<br />
<br />
:Change the C to mergesort? Because that really is what the Haskell code does. (Okay, sure, it's a funny 3-way, bottom-up, lazy mergesort. Still.) -- [[User:AaronDenney|AaronDenney]] 19:55, 10 July 2007 (UTC)<br />
<br />
::Eh? The defining property of quicksort is that the dividing phase is elaborate (filter < and >) but that the conquer phase is simple (list concatenation). For mergesort, it's the other way round. [[User:Apfelmus|apfe<b>&lambda;</b>mus]] 10:51, 11 December 2007 (UTC)<br />
<br />
What is this about Haskell being in 2nd place behind C (gcc) in the computer language shootout? The link provided shows it to be 13th (and 12th on the previous benchmark, even though it does proportionally worse). You're right about functional languages doing well though: Clean, OCaml and MLton indeed occupy positions 6,9,11.<br />
--[[User:Noegenesis|Noegenesis]] 12:01, 28 August 2007 (UTC)<br />
<br />
::It was in 2nd place when the article was written, since then benchmarks and programs have changed. [[User:DonStewart|dons]] 00:17, 12 December 2007 (UTC)<br />
<br />
::Update, mid-2008, the rankings have stabilised with Haskell in 8th, behind C, C++, D and friends. The shootout continues to be a moving target. [[User:DonStewart|dons]] 04:45, 12 August 2008 (UTC)<br />
<br />
''Most functional languages, and Haskell in particular, are strongly typed, eliminating a huge class of easy-to-make errors at compile time.'' This sentence confuses strong and static typing.<br />
[[User:MichalPalka|MichalPalka]] 00:11, 2 December 2007 (UTC)<br />
<br />
I created a login account to specifically complain about the qsort example.<br />
<br />
All it proves is that you can write your own, or use the standard c lib implementation. So, any programmer who knows c reading this sees it as a Steve Jobs style sales pitch. Developers don't like being lied to, I'd suggest you dump that sales approach for one that is honest, you've lost me.<br />
<br />
--[[User:Tsingi|Tsingi]] 11:08, 28 June 2010 (UTC)<br />
<br />
Although visually appealing, the Haskell quicksort code has the unfortunate property of doubly traversing the input list. This can be remedied with Bird's proposed <br />
<br />
<haskell><br />
qsort [] = [] <br />
qsort (x:xs) = part xs ([],[])<br />
where<br />
part [] (a,b) = qsort a ++ x : qsort b<br />
part (y:ys) (a,b) = part ys $ if y<x then (y:a,b) else (a,y:b)<br />
</haskell><br />
<br />
Or, to keep it stable,<br />
<haskell><br />
qsort (x:xs) = part xs (\(a,b)-> qsort a ++ x : qsort b)<br />
where<br />
part [] k = k ([],[])<br />
part (y:ys) k = part ys $ if y<x then (k.first(y:)) else (k.second(y:))<br />
</haskell><br />
<br />
But that's beside the point, so I removed it from the main page to here. [[User:WillNess|WillNess]] 00:56, 26 September 2010 (UTC)<br />
<br />
<br />
I have a simple complaint about the qsort: as I come from a C background, that part makes sense. Since you didn't bother to use the same variable names, or add comments, the Haskell is line noise, save for the symbols qsort and filter. I can't intuit whatever you might be expecting me to learn, so all I see is a '''long learning curve'''.<br />
<br />
Of course, everyone who knows Haskell already isn't going to offer you a new one, they aren't reading your ''beginning tutorial''. <br />
<br />
--[[User:Hacksaw|Hacksaw]] 08:56, 21 February 2011 (UTC)<br />
<br />
:some explanations for Haskell syntax added to the main page per your request. Can't use same names in C and Haskell versions, as C has an array and Haskell version works with lists. Thanks for the suggestion btw. [[User:WillNess|WillNess]] 01:32, 7 March 2011 (UTC)<br />
<br />
== Intuitively understanding the "quicksort" implementation ==<br />
<br />
:''You should be able to understand the program without any previous knowledge of either Haskell or quicksort.''<br />
I can assure you that this is not the case. Sure, it might seem that way if you already know a little bit of Haskell, but that's not the target audience of this page now, is it?<br />
<br />
Oh, and please stop comparing apples to oranges: that piece of Haskell is not a quicksort implementation, so don't compare it to a real quicksort implementation. – [[User:AdrianW|AdrianW]] 10:52, 3 August 2011 (UTC)<br />
<br />
: Made some changes which hopefully address your concerns.<br />
<br />
:: Yes, much better. Well done!<br />
<br />
: As for the second issue, could you please explain? Do you mean it's not a "real" efficient in-place algorithm? Presumably quicksort is a general algorithm which may have extremely inefficient as well as efficient implementations? Is efficiency one of its defining features? [[User:WillNess|WillNess]] 17:48, 3 August 2011 (UTC)<br />
<br />
:: Well, one might argue that (1) being in-place and (2) selecting a good pivot are crucial properties of the quicksort algorithm, and then again one might argue that divide & conquer alone is the defining concept of quicksort. In my opinion, the clever way of partitioning the list in-place is the most recognizable trait of this algorithm, but that might just be a subjective perception.<br />
:: However, what's more important is that someone with an imperative background, coming here to be seduced by Haskell's elegance, feels cheated: the C version is longer, more complicated, and not as easy to comprehend as the Haskell version, sure, but it also does the job "better", i.e. in a more efficient way (that's why I spoke of comparing apples and oranges).<br />
:: An example to illustrate my point: say someone implements a matrix multiplication using Strassen's algorithm; then someone else comes along and says "Dude, why so complicated? I can do the same in 4-5 lines, using 3 simple nested for-loops." – [[User:AdrianW|AdrianW]] 06:14, 4 August 2011 (UTC)<br />
<br />
:::Thanks. Yes I guess more clarifications could be added along those lines, about the tradeoffs of brevity vs efficiency, and Haskell's being able to concisely express a less-efficient, less-tuned-up version of an algorithm, with ease - an "executable specification" kind of thing, a first step in a progression of code development (of course next, more complicated steps can also be expressed in Haskell, it'll just not be that short). Go ahead, write some yourself, it's a wiki after all! Cheers, [[User:WillNess|WillNess]] 08:57, 5 August 2011 (UTC)<br />
<br />
== Proposed change to introductory paragraph ==<br />
<br />
This page attempts to answer the question "What is Haskell?" If you are looking for a more example-driven introduction to Haskell, refer to [http://tryhaskell.org/ try Haskell in your browser], a [[Learning_Haskell|list of learning resources]], [[Books]], or [[Tutorials]].<br />
<br />
Haskell is a computer programming language. In particular, it is a<br />
'' [[Polymorphism|polymorphically]] [[typing|statically typed]], [[Lazy evaluation|lazy]], [[functional programming|purely functional]] '' language,<br />
quite different from most other programming languages. <br />
The language is named for [[Haskell Brooks Curry]], whose work in mathematical logic serves as a foundation for<br />
functional languages. <br />
Haskell is based on the ''[[lambda calculus]]'', hence the lambda we use as a logo.<br />
[[User:Davorak|Davorak]] 20:14, 24 January 2013 (UTC)</div>Davorakhttps://wiki.haskell.org/index.php?title=Library/CC-delcont&diff=55276Library/CC-delcont2013-01-21T09:04:22Z<p>Davorak: </p>
<hr />
<div>[[Category:Libraries]]<br />
[[Category:Monad]]<br />
[[Category:Tutorials]]<br />
<br />
== Introduction ==<br />
<br />
This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough that, if you have another implementation, they are relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation [[type]]s from CC-delcont, instead using the more typical, functional operators).<br />
<br />
== The basics ==<br />
<br />
=== Undelimited continuations ===<br />
<br />
If you've taken university courses in computer science, or done much investigation of [[language design]], you've probably encountered [[continuation]]s before. The author first recalls learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof; and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother, when, as far as language implementation concerns go, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?<br />
<br />
As far as usage goes, continuations are probably most closely associated with Scheme, and its call-with-current-continuation function (abbreviated to Haskell's version, callCC from now on), although many other languages have them (undelimited continuations for Haskell are provided by the [http://haskell.org/ghc/docs/latest/html/libraries/mtl/Control-Monad-Cont.html Cont monad and ContT transformer]). They're often regarded as being difficult to understand, as their use can cause very complex control flow patterns (much like GOTO, although more sophisticated), though reduced to their basics, they aren't that hard to understand.<br />
<br />
A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:<br />
<br />
<haskell><br />
m >>= f >>= g >>= h<br />
</haskell><br />
<br />
Here we have an ordinary [[:Category:Monad |monadic]] pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:<br />
<br />
<haskell><br />
\x -> f x >>= g >>= h<br />
</haskell><br />
<br />
The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using [[function]]s as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.<br />
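As a small illustrative sketch of the CPS transform (the names here are my own, not from any library), a direct-style computation can be rewritten so that "the rest of the program" becomes an explicit function argument:<br />

```haskell
-- Direct style: the result is simply returned.
add :: Int -> Int -> Int
add x y = x + y

-- CPS style: the continuation k receives the result instead of
-- it being returned to the caller.
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)

-- A pipeline (add 1 2, then add 3 to that) with every intermediate
-- continuation made explicit:
pipeline :: (Int -> r) -> r
pipeline k = addCPS 1 2 (\a -> addCPS a 3 (\b -> k b))
```

Passing <hask>id</hask> as the final continuation recovers the ordinary result, which is essentially what <hask>runCont</hask> does.<br />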
<br />
However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:<br />
<br />
<haskell><br />
callCC (\k -> e) >>= f >>= g >>= h<br />
</haskell><br />
<br />
'k' will be set to a function that is something like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced with the value passed to 'k'. So k is not merely a normal function, but a way of feeding a value into an execution context (and this is reflected in its monadic type: a -> Cont r b).<br />
<br />
So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:<br />
<br />
<haskell><br />
prod l = callCC (\k -> loop k l)<br />
  where<br />
    loop _ [] = return 1<br />
    loop k (0:_) = k 0<br />
    loop k (x:xs) = do n <- loop k xs ; return (n*x)<br />
</haskell><br />
<br />
Under normal circumstances, the loop will simply multiply all the numbers. However, if a 0 is detected, there is no need to multiply anything, the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.<br />
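To make this runnable on its own, here is a sketch of the same example with a minimal <hask>Cont</hask> monad written out by hand (mtl's <hask>Control.Monad.Cont</hask> provides an equivalent, so this hand-rolled version is only for self-containedness):<br />

```haskell
-- A minimal continuation monad, mirroring mtl's Control.Monad.Cont.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont $ \c -> m (c . f)

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont mf <*> Cont ma = Cont $ \c -> mf (\f -> ma (c . f))

instance Monad (Cont r) where
  Cont m >>= f = Cont $ \c -> m (\a -> runCont (f a) c)

-- Call a function with the current continuation; invoking that
-- continuation abandons the local context and jumps back out.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont $ \c -> runCont (f (\a -> Cont $ \_ -> c a)) c

-- The product example from the text: a 0 short-circuits the loop.
prod :: (Eq a, Num a) => [a] -> Cont r a
prod l = callCC (\k -> loop k l)
  where
    loop _ [] = return 1
    loop k (0:_) = k 0
    loop k (x:xs) = do n <- loop k xs; return (n * x)
```

Here <hask>runCont (prod [1,2,3,4]) id</hask> evaluates to 24, while any list containing a 0 yields 0 without performing a single multiplication.<br />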
<br />
=== Delimited continuations ===<br />
<br />
So, [[continuation]]s (hopefully) seem pretty clear, and at least theoretically useful. Where do delimited continuations come into the picture?<br />
<br />
The story (according to the hearsay the author has come across) goes back again to Scheme. As was mentioned earlier, callCC is often associated with it. Another thing closely associated with Scheme (and Lisp in general) is interactive environments in which code can be defined and run (much like our own [[Hugs]] and [[GHC/GHCi | GHCi]]). Naturally, it would be nice if such environments could themselves be written in Scheme.<br />
<br />
However, continuations in Scheme are not implemented as they are in Haskell. In Haskell, continuation-using code is tagged with a monadic type, one must use runCont(T) to run such computations, and their effects cannot escape it. In Scheme, continuations are native, and all code can capture them, and capturing them captures not 'the rest of the Cont(T) computation,' but 'the rest of the program.' And if the interactive loop is written in Scheme, this includes the loop itself, so programs run within the session can affect the session itself.<br />
<br />
Now, this might be a minor nit, but it is a nit nonetheless, and luckily for us, it led to the idea of delimited continuations. The idea was, of course, to tag a point at which the interactive loop invoked some sub-program, and then control flow operators such as callCC would only be able to capture a portion of the program up to the marker. To the sub-program, this is all that's of interest anyhow. Such a setup would solve the issue nicely.<br />
<br />
However, once one has the ability to create such markers, why not put them in the hands of the programmer? Then, instead of them being able to capture 'the rest of the program's execution,' they would be able to delimit, capture and manipulate arbitrary portions of their programs. And indeed, such operations can be useful.<br />
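As a small taste of what a delimiter does, here is a sketch using <hask>resetT</hask>/<hask>shiftT</hask> from the transformers package's <hask>Control.Monad.Trans.Cont</hask> (a single-prompt cousin of the multi-prompt operators CC-delcont provides, so no named prompt argument appears):<br />

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

-- The continuation captured by shiftT stops at the enclosing resetT:
-- here k behaves like (\m -> return (10 + m)), not the whole rest of
-- the program, so the (1 +) outside the delimiter runs exactly once.
example :: IO Int
example = evalContT $ do
  n <- resetT $ do
         m <- shiftT $ \k -> lift (k 100 >>= k)   -- apply k twice
         return (10 + m)
  return (1 + n)   -- outside the delimiter
```

Applying k to 100 gives 110, applying it again gives 120, and the final addition of 1 happens once, so example returns 121.<br />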
<br />
== Samples ==<br />
<br />
=== Iterators ===<br />
<br />
So, what are delimited continuations good for? Well, suppose we have a binary tree data [[type]] like so:<br />
<br />
<haskell><br />
data Tree a = Leaf | Branch a (Tree a) (Tree a)<br />
<br />
empty = Leaf<br />
singleton a = Branch a Leaf Leaf<br />
<br />
insert b Leaf = Branch b Leaf Leaf<br />
insert b (Branch a l r)<br />
  | b < a = Branch a (insert b l) r<br />
  | otherwise = Branch a l (insert b r)<br />
<br />
fold :: (a -> b -> b -> b) -> b -> Tree a -> b<br />
fold f z Leaf = z<br />
fold f z (Branch a l r) = f a (fold f z l) (fold f z r)<br />
<br />
for :: Monad m => Tree a -> (a -> m b) -> m ()<br />
for t f = fold (\a l r -> l >> f a >> r) (return ()) t<br />
</haskell><br />
<br />
Now, we have a [[fold]] over our data type, and as shown, we can therefore write a monadic iteration function 'for' over it (this is actually done for arbitrary data types in <hask>Data.Foldable</hask>). The fold is a fine method of traversing the data structure to operate on elements in most cases. However, what if we wanted something more like an iterator object, which somehow captured the traversal of the tree, remembering which element we're currently at, and which comes next?<br />
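Before capturing the traversal with continuations, it may help to see 'for' in action on its own. This sketch repeats the definitions above and collects the visited elements with an IORef (the helper name <hask>elemsOf</hask> is mine, not from the text):<br />

```haskell
import Data.IORef (modifyIORef, newIORef, readIORef)

data Tree a = Leaf | Branch a (Tree a) (Tree a)

fold :: (a -> b -> b -> b) -> b -> Tree a -> b
fold f z Leaf = z
fold f z (Branch a l r) = f a (fold f z l) (fold f z r)

for :: Monad m => Tree a -> (a -> m b) -> m ()
for t f = fold (\a l r -> l >> f a >> r) (return ()) t

-- Run 'for' in IO, recording each visited element in order.
elemsOf :: Tree a -> IO [a]
elemsOf t = do
  r <- newIORef []
  for t (\a -> modifyIORef r (a :))
  reverse <$> readIORef r
```

Since 'for' visits the left subtree, then the node, then the right subtree, a search tree built with insert comes out in ascending order.<br />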
<br />
Well, it turns out one can build just such an object using continuations. It is indeed possible to build it using undelimited continuations, but it's rather complex to do so (I'll not include code that does, as I don't feel like figuring out all the details). However, it turns out it's easy using delimited continuations:<br />
<br />
<haskell><br />
data Iterator m a = Done | Cur a (m (Iterator m a))<br />
<br />
begin :: MonadDelimitedCont p s m => Tree a -> m (Iterator m a)<br />
begin t = reset $ \p -><br />
    for t (\a -><br />
      shift p (\k -> return (Cur a (k $ return ())))) >> return Done<br />
<br />
current :: Iterator m a -> Maybe a<br />
current Done = Nothing<br />
current (Cur a _) = Just a<br />
<br />
next :: Monad m => Iterator m a -> m (Iterator m a)<br />
next Done = return Done<br />
next (Cur _ i) = i<br />
<br />
finished :: Iterator m a -> Bool<br />
finished Done = True<br />
finished _ = False<br />
</haskell><br />
<br />
So, clearly, Iterator is the type of iterators over a tree. current, next, and finished are some utility functions for operating on them. The interesting work is done in the begin function.<br />
<br />
There are two delimited control operators in play here. First is reset, which is a way to place a delimiter around a computation. The term 'p' is simply a way to reference that delimiter; the library I'm working with allows for many named delimiters to exist, and for control operators to specify which delimiters they're working with (so a control operator may capture the continuation up to p, even if it runs into a delimiter q sooner, provided p /= q).<br />
<br />
The other operator is shift, which is used to capture the delimited continuation. In many ways, it's like callCC, but with an important difference: it aborts the captured continuation. When callCC is called on a function f, if f returns normally, execution will pick up from just after the callCC. However, when shift is called, the continuation between the call and the enclosing prompt is packaged up (into 'k' here), and passed to the function, and a normal return will return to the place where the delimiter was set, not where shift was called.<br />
<br />
With this in mind, we can begin to analyze the 'begin' function. First, it delimits a computation with the delimiter 'p'. Next, it begins to loop over the tree. For each element, we use 'shift' to capture "the rest of the loop", calling it 'k'. We then package that, and the current tree element, into an Iterator object, and return it. Since the shift has aborted the rest of the loop (for the time being), it returns to where 'reset' was called, and the function returns the iterator object (wrapped in a monad, of course).<br />
<br />
The main remaining piece of interest is when next goes to get the next element of the traversal. When this happens, 'k $ return ()' is executed, which invokes the captured continuation (with the value (), because the loop doesn't take the return value of the traversal function into account anyway). This, essentially, re-enters the loop. If there is a next element, then the traversal function is called with it, shift will once again capture 'the rest of the loop' (from a later point than before, though), and return an iterator object with the new current element and continuation. If there are no new elements, then control will pass out of the loop to the following computation, which is, in this case, 'return Done', so in either case, an Iterator object is the result, and the types work out.<br />
<br />
We can test our iterator like so:<br />
<br />
<haskell><br />
main :: IO ()<br />
main = runCCT $ do<br />
    t <- randomTree 10<br />
    i <- begin t<br />
    doStuff i<br />
  where<br />
    doStuff i<br />
      | finished i = return ()<br />
      | otherwise = do<br />
          i' <- next i<br />
          i'' <- next i -- deliberately call 'next' on the same iterator twice<br />
          liftIO $ print (fromJust $ current i :: Int)<br />
          doStuff i'<br />
<br />
    randomTree n = rt empty n<br />
      where<br />
        rt t 0 = return t<br />
        rt t n = do<br />
          r <- liftIO randomIO<br />
          rt (insert r t) (n - 1)<br />
</haskell><br />
<br />
The output of which might go something like:<br />
<br />
-1937814587<br />
-1171184756<br />
-1068642732<br />
-741588272<br />
-553872051<br />
-499564662<br />
-421862876<br />
-59900888<br />
315891595<br />
1868487875<br />
<br />
The example shows one possibly interesting property: one can re-use old iterators without affecting new ones. In this case, we call 'next' on the same iterator twice, but it doesn't advance the iterator twice. Our iterators behave like an ordinary functional data structure, even though they're built out of somewhat out-of-the-ordinary components.<br />
<br />
=== Breadth-first Traversal ===<br />
<br />
This example is an adaptation of an example from a set of slides by Olivier Danvy, [http://www.brics.dk/~danvy/delimited-continuations-blues.pdf Delimited-Continuations Blues]. It involves the traversal of a binary tree, so let's first define such a type:<br />
<br />
<haskell><br />
data Tree a = Node a (Tree a) (Tree a) | Leaf a<br />
<br />
t = Node 1 (Node 2 (Leaf 3)<br />
                   (Node 4 (Leaf 5)<br />
                           (Leaf 6)))<br />
           (Node 7 (Node 8 (Leaf 9)<br />
                           (Leaf 10))<br />
                   (Leaf 11))<br />
<br />
toList (Leaf i) = [i]<br />
toList (Node a t1 t2) = a : toList t1 ++ toList t2<br />
</haskell><br />
<br />
toList is a pre-order, depth-first traversal, and t is ordered so that such a traversal yields [1 .. 11]. Depth-first traversals are clearly the easiest to write in a language like Haskell, since recursive descent on the trees can be used. To perform a breadth-first traversal, one would likely keep a list of sub-trees at a given level, and pass through the list at each level, visiting roots, and producing a new list of the children one level down. Which is a bit more bookkeeping. However, it turns out that delimited control can allow one to write breadth-first traversal in a recursive descent style similar to the depth-first traversal (modulo the need for monads):<br />
<br />
<haskell><br />
visit :: MonadDelimitedCont p s m => p [a] -> Tree a -> m ()<br />
visit p = visit'<br />
  where<br />
    visit' (Leaf i) = control p $ \k -> (i:) `liftM` k (return ())<br />
    visit' (Node i t1 t2) = control p $ \k -> do<br />
        a <- k (return ())<br />
        visit' t2<br />
        visit' t1<br />
        (i:) `liftM` return a<br />
<br />
bf :: MonadDelimitedCont p s m => Tree a -> m [a]<br />
bf t = reset $ \p -> visit p t >> return []<br />
</haskell><br />
<br />
And, a quick check shows that 'bf t' yields:<br />
<br />
[5,6,9,10,3,4,8,11,2,7,1]<br />
<br />
(Note that in this example, since elements are always pre-pended, the element visited first will be last in the list, and vice versa; so this is a pre-order breadth-first traversal).<br />
<br />
So, how exactly does this work? As the slides say, the key idea is to "return before recursively traversing the subtrees." This is accomplished through the use of the delimited control operator 'control.' At the Node stage of a traversal, control is used to capture the sub-continuation that comes after said Node (which is, effectively, the traversal over the rest of the nodes at the same level). However, instead of descending depth-first style, that sub-continuation is immediately invoked, the result being called a. Only after that are the sub-trees descended into.<br />
<br />
It should be noted, also, that this particular example can be used to display a difference between 'shift' (the so-called 'static' delimited operator) and 'control' (which is one of the 'dynamic' operators). The difference between the two is that in 'shift p (\k -> e)' calls to k are delimited by the prompt p, whereas in control, they are not (in both, e is). This has important consequences. For instance, at some point in a traversal an evaluation may look something like:<br />
<br />
<haskell><br />
delimited (visit' t2 >> visit' t1)<br />
</haskell><br />
<br />
Which, using some simplified notation/traversal, expands to:<br />
<br />
<haskell><br />
delimited (control (\k -> k () >> visit' t22 >> visit' t21)<br />
>> control (\k -> k () >> visit' t12 >> visit' t11))<br />
</haskell><br />
<br />
Which, due to the effects of control turns into:<br />
<br />
<haskell><br />
delimited ((control (\k -> k () >> visit' t12 >> visit' t11)) >> visit' t22 >> visit' t21)<br />
<br />
==><br />
<br />
delimited (visit' t22 >> visit' t21 >> visit' t12 >> visit' t11)<br />
</haskell><br />
<br />
In other words, using 'control' ends up building and executing a sequence of traversals at the same level, after the actions for the level above (performed by the 'k ()'). The control operators of the lower level are then free to close over, and manipulate, all the visitations on their level. This is why the result is a breadth-first traversal. However, replacing control with shift, we get:<br />
<br />
<haskell><br />
delimited (visit' t2 >> visit' t1)<br />
<br />
==><br />
<br />
delimited ((shift (\k -> k () >> visit' t22 >> visit' t21))<br />
>> (shift (\k -> k () >> visit' t12 >> visit' t11)))<br />
<br />
==><br />
<br />
delimited (delimited (shift (\k -> k () >> visit' t12 >> visit' t11)) >> visit' t22 >> visit' t21)<br />
</haskell><br />
<br />
And already we can see a difference. The sub-traversal of t1 is now isolated, and control effects (via shift, at least) therein cannot affect the sub-traversal of t2. So, control effects no longer affect an entire level of the whole tree, and instead are localized to a given node and its descendants. In such a case, we end up with an ordinary depth-first traversal (although the sub-continuations allow the visitation of each node to look a bit different than toList, and since we're always pre-pending, as we get to a node, the results are reversed compared to toList).<br />
<br />
In any case, the desired result has been achieved: A slightly modified recursive descent traversal has allowed us to express breadth-first search (and depth-first search in the same style is a matter of substitution of control operators) without having to do the normal list-of-sub-trees sort of bookkeeping (although the added weight of working with delimited control may more than outweigh that).<br />
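For comparison, that conventional list-of-sub-trees bookkeeping might be sketched as follows (an illustrative sketch using the Tree type from above; the names bfList, root and children are not part of the original example):<br />

```haskell
-- A level-by-level traversal with explicit bookkeeping: visit the
-- roots of the current level, then recurse on the collected children.
data Tree a = Node a (Tree a) (Tree a) | Leaf a

bfList :: Tree a -> [a]
bfList t0 = go [t0]
  where
    go [] = []
    go ts = map root ts ++ go (concatMap children ts)

    root (Leaf i)     = i
    root (Node i _ _) = i

    children (Leaf _)     = []
    children (Node _ l r) = [l, r]
```

On the sample tree t above, this yields [1,2,7,3,4,8,11,5,6,9,10], the (unreversed) left-to-right breadth-first order.<br />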
<br />
For a more in-depth discussion of the differences between shift, control and other, similar operators, see Shift to Control, cited below.<br />
<br />
=== Resumable Parsing ===<br />
<br />
Our next example concerns a Haskell version of a [http://caml.inria.fr/pub/ml-archives/caml-list/2007/07/7a34650001bf6876b71c7b1060ac501f.en.html post to the OCaml mailing list]. The translation was [http://www.mail-archive.com/haskell-cafe%40haskell.org/msg27177.html originally given] on the haskell-cafe mailing list, and complete code and some additional discussion can be found there.<br />
<br />
The problem is similar to the above iterator example. Specifically, we are in need of a parser that can take fragments of input at a time, suspending for more input after each fragment, until such time as it can be provided. However, there are already plenty of fine parsing libraries available, and ideally, we don't want to write a new library from scratch just to have this resumable parser feature.<br />
<br />
As it turns out, delimited continuations provide a fairly straightforward way to have our cake and eat it too in this case. First, we'll need a data type for the resumable parser.<br />
<br />
<haskell><br />
-- Needs Rank2Types<br />
data Request m a = Done a | ReqChar (Maybe Char -> m (Request m a))<br />
</haskell><br />
<br />
Such a parser is either complete, or in a state of requesting more characters. Again, we'll have some convenience functions for working on the data type:<br />
<br />
<haskell><br />
provide :: Monad m => Char -> Request m a -> m (Request m a)<br />
provide _ d@(Done _) = return d<br />
provide c (ReqChar k) = k (Just c)<br />
<br />
provideString :: Monad m => String -> Request m a -> m (Request m a)<br />
provideString [] s = return s<br />
provideString (x:xs) s = provide x s >>= provideString xs<br />
<br />
finish :: Monad m => Request m a -> m (Request m a)<br />
finish d@(Done _) = return d<br />
finish (ReqChar k) = k Nothing<br />
</haskell><br />
<br />
So, 'provide' feeds a character into a parser, 'provideString' feeds in a string, and 'finish' informs the parser that there are no more characters to be had.<br />
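To see how these helpers drive a parser, here is a toy Request built by hand, without any delimited continuations yet (the collect function is illustrative and not part of the original example; the Request machinery is repeated so the sketch is self-contained):<br />

```haskell
data Request m a = Done a | ReqChar (Maybe Char -> m (Request m a))

provide :: Monad m => Char -> Request m a -> m (Request m a)
provide _ d@(Done _)  = return d
provide c (ReqChar k) = k (Just c)

provideString :: Monad m => String -> Request m a -> m (Request m a)
provideString []     s = return s
provideString (x:xs) s = provide x s >>= provideString xs

finish :: Monad m => Request m a -> m (Request m a)
finish d@(Done _)  = return d
finish (ReqChar k) = k Nothing

-- A toy "parser": accumulate every supplied character into a String,
-- completing only when 'finish' sends Nothing.
collect :: Monad m => String -> Request m String
collect acc = ReqChar step
  where
    step Nothing  = return (Done (reverse acc))
    step (Just c) = return (collect (c : acc))
```

Feeding it input with provideString "ab" (collect "") >>= finish yields Done "ab"; the delimited-continuation trick below builds such a Request automatically from an ordinary parser.<br />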
<br />
Finally, we need to have some way of suspending parsing and waiting for characters. This is exactly what delimited continuations do for us. The hook we'll use to get control over the parser is through the character stream it takes as input:<br />
<br />
<haskell><br />
toList :: Monad m => m (Maybe a) -> m [a]<br />
toList gen = gen >>= maybe (return []) (\c -> liftM (c:) $ toList gen)<br />
<br />
streamInvert :: MonadDelimitedCont p s m => p (Request m a) -> m (Maybe Char)<br />
streamInvert p = shift p (\k -> return $ ReqChar (k . return))<br />
<br />
invertParse :: MonadDelimitedCont p s m => (String -> a) -> m (Request m a)<br />
invertParse parser = reset $ \p -> (Done . parser) `liftM` toList (streamInvert p)<br />
</haskell><br />
<br />
So, 'toList' simply takes a monadic action that may produce a character, and uses it to produce a list of characters (stopping when it sees a 'Nothing'). 'streamInvert' is just such a monadic, character-producing action (given a delimiter). Each time it is run, it captures a sub-continuation (here, 'the rest of the list generation'), and puts it in a Request object. We can then pass around the Request object, and feed characters in as desired (via 'provide' and 'provideString' above), gradually building the list of characters to be parsed.<br />
<br />
In the 'invertParse' method, this gradually produced list is fed through a parser (of type String -> a, so it doesn't need to know about the delimited continuation monad we're using), and the output of the parser is packaged in a finished (Done) Request object, so when we finally call 'finish', we will be able to access the results of the parser.<br />
<br />
For this example, the words function suffices as a parser:<br />
<br />
<haskell><br />
gradualParse :: [String]<br />
gradualParse = runCC $ do p1 <- invertParse words<br />
p2 <- provideString "The quick" p1<br />
p3 <- provideString " brown fox jum" p2<br />
p4 <- provideString "ps over the laz" p3<br />
p5 <- provideString "y dog" p4 >>= finish<br />
p6 <- provideString "iest dog" p4 >>= finish<br />
let (Done l1) = p5<br />
(Done l2) = p6<br />
return (l1 ++ l2)<br />
<br />
main :: IO ()<br />
main = mapM_ putStrLn gradualParse<br />
</haskell><br />
<br />
And we get output:<br />
<br />
The<br />
quick<br />
brown<br />
fox<br />
jumps<br />
over<br />
the<br />
lazy<br />
dog<br />
The<br />
quick<br />
brown<br />
fox<br />
jumps<br />
over<br />
the<br />
laziest<br />
dog<br />
<br />
So, the resumable parser works. It will pause at arbitrary places in the parse, even in the middle of tokens, and wait for more input. And one can resume a parse from any point to which a Request pointer is saved without interfering with other resumable parser objects.<br />
<br />
(A note: depending on what exactly one wants to do with such parsers, there are a few nits in the above implementation; it doesn't exactly match the semantics of the OCaml parser. For more information on this topic, see the linked mailing-list thread, as it discusses the issues, their causes, and provides an alternate implementation (which changes mostly the parser, not the delimited continuation end) which matches the OCaml version much more closely.)<br />
<br />
== CC-delcont ==<br />
<br />
=== Installation ===<br />
<br />
Packages are available on Hackage:<br />
<br />
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont<br />
<br />
The library is cabalized, so installation should be as simple as:<br />
<br />
runhaskell Setup.lhs configure<br />
runhaskell Setup.lhs build<br />
sudo runhaskell Setup.lhs install<br />
<br />
(to install to the default directory, /usr/local/lib on Unix)<br />
<br />
== More information ==<br />
<br />
A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when investigating them:<br />
<br />
* [http://okmij.org/ftp/papers/context-OS.pdf Delimited continuations in operating systems] -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.<br />
<br />
* [http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf A Monadic Framework for Delimited Continuations] -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).<br />
<br />
* [http://okmij.org/ftp/papers/DDBinding.pdf Delimited Dynamic Binding] -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.<br />
<br />
* [http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf Shift to control] -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.<br />
<br />
* [http://okmij.org/ftp/Computation/Continuations.html Oleg Kiselyov's continuation page] -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.</div>Davorakhttps://wiki.haskell.org/index.php?title=Lifting&diff=55266Lifting2013-01-19T10:15:52Z<p>Davorak: /* Applicative lifting */</p>
<hr />
<div>Lifting is a concept which allows you to transform a function into a corresponding function within another (usually more general) setting. <br />
<br />
== Lifting in general ==<br />
<br />
We usually start with a (covariant) [[functor]]; for simplicity, we will consider the Pair functor first. Haskell doesn't allow a <hask>type Pair a = (a, a)</hask> synonym to be a functor instance, so we define our own Pair [[type]] instead.<br />
<haskell><br />
data Pair a = Pair a a deriving Show<br />
instance Functor Pair where<br />
fmap f (Pair x y) = Pair (f x) (f y)<br />
</haskell><br />
If you look at the type of <hask>fmap</hask> (<hask>Functor f => (a -> b) -> (f a -> f b)</hask>), you will notice that <hask>fmap</hask> already is a lifting operation: It transforms a function between simple types <hask>a</hask> and <hask>b</hask> into a function between pairs of these types.<br />
<haskell><br />
lift :: (a -> b) -> Pair a -> Pair b<br />
lift = fmap<br />
<br />
plus2 :: Pair Int -> Pair Int<br />
plus2 = lift (+2)<br />
-- plus2 (Pair 2 3) ---> Pair 4 5<br />
</haskell><br />
Note, however, that not all functions between <hask>Pair a</hask> and <hask>Pair b</hask> can be constructed as a lifted function (e.g. <hask>\(Pair x _) -> Pair x 0</hask> can't).<br />
<br />
A functor can only lift functions of exactly one variable, but we want to lift other functions, too:<br />
<haskell><br />
lift0 :: a -> Pair a<br />
lift0 x = Pair x x<br />
<br />
lift2 :: (a -> b -> r) -> (Pair a -> Pair b -> Pair r)<br />
lift2 f (Pair x1 x2) (Pair y1 y2) = Pair (f x1 y1) (f x2 y2)<br />
<br />
plus :: Pair Int -> Pair Int -> Pair Int<br />
plus = lift2 (+)<br />
-- plus (Pair 1 2) (Pair 3 4) ---> Pair 4 6<br />
</haskell><br />
<br />
In a similar way, we can define lifting operations for all containers that have "a fixed size", for example for the functions from <hask>Double</hask> to any value <hask>((->) Double)</hask>, which might be thought of as values that are varying over time (given as <hask>Double</hask>). The function <hask> \t -> if t < 2.0 then 0 else 2 </hask> would then represent a value which switches at time 2.0 from 0 to 2. Using lifting, such functions can be manipulated in a very high-level way. In fact, this kind of lifting operation is already defined. <hask>Control.Monad.Reader</hask> (see [[MonadReader]]) provides a <hask>Functor</hask>, <hask>Applicative</hask>, <hask>Monad</hask>, <hask>MonadFix</hask> and <hask>MonadReader</hask> instance for the type <hask>(->) r</hask>. The <hask>liftM</hask> (see below) functions of this [[monad]] are precisely the lifting operations we are searching for.<br />
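As a small illustration of that idea (a sketch, not part of the original page), lifting with <hask>liftM2</hask> over the <hask>((->) Double)</hask> instance combines two time-varying values pointwise:<br />

```haskell
import Control.Monad (liftM2)

-- A value that switches from 0 to 2 at time 2.0.
step :: Double -> Int
step t = if t < 2.0 then 0 else 2

-- liftM2 for ((->) Double) applies (+) at each point in time:
-- combined t == step t + const 1 t.
combined :: Double -> Int
combined = liftM2 (+) step (const 1)

-- combined 1.0 == 1 (0 + 1); combined 3.0 == 3 (2 + 1)
```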
<br />
If the containers don't have fixed size, it's not always clear how to make lifting operations for them. The <hask>[]</hask> - type could be lifted using the <hask>zipWith</hask>-family of functions or using <hask>liftM</hask> from the list monad, for example.<br />
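To make the difference concrete (an illustrative sketch), the two liftings for lists give quite different results:<br />

```haskell
import Control.Applicative (liftA2)

-- Pointwise lifting via zipWith: combine elements at matching positions.
pointwise :: [Int]
pointwise = zipWith (+) [1, 2, 3] [10, 20, 30]
-- [11,22,33]

-- Nondeterministic lifting via the list applicative/monad:
-- every combination of one element from each list.
nondet :: [Int]
nondet = liftA2 (+) [1, 2, 3] [10, 20, 30]
-- [11,21,31,12,22,32,13,23,33]
```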
<br />
== Applicative lifting ==<br />
<br />
This should only provide a definition of what lifting means (in the usual cases, not in the arrow case). It's not a suggestion for an implementation. I start with the (simplest?) basic operations <hask>zipL</hask>, which combines two containers into a single one, and <hask>zeroL</hask>, which gives a standard container for ().<br />
<haskell><br />
class Functor f => Liftable f where<br />
zipL :: f a -> f b -> f (a, b)<br />
zeroL :: f ()<br />
<br />
liftL :: Liftable f => (a -> b) -> (f a -> f b)<br />
liftL = fmap<br />
<br />
liftL2 :: Liftable f => (a -> b -> c) -> (f a -> f b -> f c)<br />
liftL2 f x y = fmap (uncurry f) $ zipL x y<br />
<br />
liftL3 :: Liftable f => (a -> b -> c -> d) -> (f a -> f b -> f c -> f d)<br />
liftL3 f x y z = fmap (uncurry . uncurry $ f) $ zipL (zipL x y) z<br />
<br />
liftL0 :: Liftable f => a -> f a<br />
liftL0 x = fmap (const x) zeroL <br />
<br />
appL :: Liftable f => f (a -> b) -> f a -> f b<br />
appL = liftL2 ($)<br />
</haskell><br />
<br />
We need to postulate a few laws so that the definitions make sense. (Are they complete and/or minimal?)<br />
<haskell><br />
assoc :: ((a, b), c) -> (a, (b, c))<br />
assoc ~(~(x, y), z) = (x, (y, z))<br />
<br />
{-<br />
Identity:<br />
fmap snd $ zipL zeroL x === x<br />
fmap fst $ zipL x zeroL === x<br />
<br />
Associativity:<br />
fmap assoc $ zipL (zipL x y) $ z === zipL x $ zipL y z<br />
-}<br />
</haskell><br />
<br />
Today we have the <hask>Applicative</hask> class that provides [[Applicative functor]]s. It is equivalent to the <hask>Liftable</hask> class.<br />
<haskell><br />
pure = liftL0<br />
(<*>) = appL<br />
<br />
zeroL = pure ()<br />
zipL = liftA2 (,)<br />
</haskell><br />
<br />
In principle, <hask>Applicative</hask> should be a superclass of <hask>Monad</hask>, but chronologically <hask>Functor</hask> and <hask>Monad</hask> were before <hask>Applicative</hask>.<br />
Unfortunately, inserting <hask>Applicative</hask> between <hask>Functor</hask> and <hask>Monad</hask> in the subclass hierarchy would break a lot of existing code and thus has not been done as of today (2011). This is still true as of Jan 2013.<br />
<br />
== Monad lifting ==<br />
<br />
Lifting is often used together with [[monad]]s. The members of the <hask>liftM</hask>-family take a function and perform the corresponding computation within the monad.<br />
<haskell><br />
return :: (Monad m) => a -> m a<br />
liftM :: (Monad m) => (a1 -> r) -> m a1 -> m r<br />
liftM2 :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r<br />
</haskell><br />
Consider for example the list monad ([[MonadList]]). It performs a [[Non-determinism |nondeterministic calculation]], returning all possible results. <hask>liftM2</hask> just turns a deterministic function into a nondeterministic one:<br />
<haskell><br />
plus :: [Int] -> [Int] -> [Int]<br />
plus = liftM2 (+)<br />
-- plus [1,2,3] [3,6,9] ---> [4,7,10, 5,8,11, 6,9,12]<br />
-- plus [1..] [] ---> _|_ (i.e., keeps on calculating forever)<br />
-- plus [] [1..] ---> []<br />
</haskell><br />
<br />
Every <hask>Monad</hask> can be made an instance of <hask>Liftable</hask> using the following implementations:<br />
<haskell><br />
{-# LANGUAGE FlexibleInstances #-}<br />
{-# LANGUAGE UndecidableInstances #-}<br />
import Control.Monad<br />
<br />
instance (Functor m, Monad m) => Liftable m where <br />
zipL = liftM2 (\x y -> (x,y))<br />
zeroL = return ()<br />
</haskell><br />
<br />
Lifting becomes especially interesting when there are more levels you can lift between. <hask>Control.Monad.Trans</hask> (see [[Monad transformer]]s) defines a class<br />
<haskell><br />
class MonadTrans t where<br />
lift :: Monad m => m a -> t m a -- lifts a value from the inner monad m to the transformed monad t m<br />
-- could be called lift0<br />
</haskell><br />
lift takes the side effects of a monadic computation within the inner monad <hask>m</hask> and lifts them into the transformed monad <hask>t m</hask>. We can easily define functions which lift functions between inner monads to functions between transformed monads. Then we can perform three different lifting operations:<br />
<hask>liftM</hask> can be used both to transform a pure function into a function between inner monads and to a function between transformed monads, and finally lift transforms from the inner monad to the transformed monad. Because of the purity of Haskell, we can only lift "up".<br />
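A small sketch of that last, "up" direction, using <hask>lift</hask> from the transformers package (the tick function and its names are illustrative, not from the original page):<br />

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, get, put, execStateT)

-- An IO action is lifted into the transformed monad StateT Int IO.
tick :: StateT Int IO ()
tick = do
  n <- get
  lift (putStrLn ("count = " ++ show n))  -- inner-monad (IO) action, lifted
  put (n + 1)

-- execStateT (tick >> tick) 0 prints "count = 0" and "count = 1",
-- and returns the final state 2.
```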
<br />
== Arrow lifting ==<br />
<br />
Until now, we have only considered lifting from functions to other functions. John Hughes' arrows (see [[Understanding arrows]]) are a generalization of computations that need not be functions anymore. An arrow <hask>a b c</hask> stands for a computation which transforms values of type <hask>b</hask> to values of type <hask>c</hask>. The basic primitive <hask>arr</hask>, aka <hask>pure</hask>,<br />
<haskell><br />
arr :: (Arrow a) => (b -> c) -> a b c<br />
</haskell><br />
is also a lifting operation.<br />
<br />
[[Category:Idioms]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Lifting&diff=55265Lifting2013-01-19T10:12:47Z<p>Davorak: /* Applicative lifting */</p>
<hr />
<div>Lifting is a concept which allows you to transform a function into a corresponding function within another (usually more general) setting. <br />
<br />
== Lifting in general ==<br />
<br />
We usually start with a (covariant) [[functor]]; for simplicity, we will consider the Pair functor first. Haskell doesn't allow a <hask>type Pair a = (a, a)</hask> synonym to be a functor instance, so we define our own Pair [[type]] instead.<br />
<haskell><br />
data Pair a = Pair a a deriving Show<br />
instance Functor Pair where<br />
fmap f (Pair x y) = Pair (f x) (f y)<br />
</haskell><br />
If you look at the type of <hask>fmap</hask> (<hask>Functor f => (a -> b) -> (f a -> f b)</hask>), you will notice that <hask>fmap</hask> already is a lifting operation: It transforms a function between simple types <hask>a</hask> and <hask>b</hask> into a function between pairs of these types.<br />
<haskell><br />
lift :: (a -> b) -> Pair a -> Pair b<br />
lift = fmap<br />
<br />
plus2 :: Pair Int -> Pair Int<br />
plus2 = lift (+2)<br />
-- plus2 (Pair 2 3) ---> Pair 4 5<br />
</haskell><br />
Note, however, that not all functions between <hask>Pair a</hask> and <hask>Pair b</hask> can be constructed as a lifted function (e.g. <hask>\(Pair x _) -> Pair x 0</hask> can't).<br />
<br />
A functor can only lift functions of exactly one variable, but we want to lift other functions, too:<br />
<haskell><br />
lift0 :: a -> Pair a<br />
lift0 x = Pair x x<br />
<br />
lift2 :: (a -> b -> r) -> (Pair a -> Pair b -> Pair r)<br />
lift2 f (Pair x1 x2) (Pair y1 y2) = Pair (f x1 y1) (f x2 y2)<br />
<br />
plus :: Pair Int -> Pair Int -> Pair Int<br />
plus = lift2 (+)<br />
-- plus (Pair 1 2) (Pair 3 4) ---> Pair 4 6<br />
</haskell><br />
<br />
In a similar way, we can define lifting operations for all containers that have "a fixed size", for example for the functions from <hask>Double</hask> to any value <hask>((->) Double)</hask>, which might be thought of as values that are varying over time (given as <hask>Double</hask>). The function <hask> \t -> if t < 2.0 then 0 else 2 </hask> would then represent a value which switches at time 2.0 from 0 to 2. Using lifting, such functions can be manipulated in a very high-level way. In fact, this kind of lifting operation is already defined. <hask>Control.Monad.Reader</hask> (see [[MonadReader]]) provides a <hask>Functor</hask>, <hask>Applicative</hask>, <hask>Monad</hask>, <hask>MonadFix</hask> and <hask>MonadReader</hask> instance for the type <hask>(->) r</hask>. The <hask>liftM</hask> (see below) functions of this [[monad]] are precisely the lifting operations we are searching for.<br />
<br />
If the containers don't have fixed size, it's not always clear how to make lifting operations for them. The <hask>[]</hask> - type could be lifted using the <hask>zipWith</hask>-family of functions or using <hask>liftM</hask> from the list monad, for example.<br />
<br />
== Applicative lifting ==<br />
<br />
This should only provide a definition of what lifting means (in the usual cases, not in the arrow case). It's not a suggestion for an implementation. I start with the (simplest?) basic operations <hask>zipL</hask>, which combines two containers into a single one, and <hask>zeroL</hask>, which gives a standard container for ().<br />
<haskell><br />
class Functor f => Liftable f where<br />
zipL :: f a -> f b -> f (a, b)<br />
zeroL :: f ()<br />
<br />
liftL :: Liftable f => (a -> b) -> (f a -> f b)<br />
liftL = fmap<br />
<br />
liftL2 :: Liftable f => (a -> b -> c) -> (f a -> f b -> f c)<br />
liftL2 f x y = fmap (uncurry f) $ zipL x y<br />
<br />
liftL3 :: Liftable f => (a -> b -> c -> d) -> (f a -> f b -> f c -> f d)<br />
liftL3 f x y z = fmap (uncurry . uncurry $ f) $ zipL (zipL x y) z<br />
<br />
liftL0 :: Liftable f => a -> f a<br />
liftL0 x = fmap (const x) zeroL <br />
<br />
appL :: Liftable f => f (a -> b) -> f a -> f b<br />
appL = liftL2 ($)<br />
</haskell><br />
<br />
We need to postulate a few laws so that the definitions make sense. (Are they complete and/or minimal?)<br />
<haskell><br />
assoc :: ((a, b), c) -> (a, (b, c))<br />
assoc ~(~(x, y), z) = (x, (y, z))<br />
<br />
{-<br />
Identity:<br />
fmap snd $ zipL zeroL x === x<br />
fmap fst $ zipL x zeroL === x<br />
<br />
Associativity:<br />
fmap assoc $ zipL (zipL x y) $ z === zipL x $ zipL y z<br />
-}<br />
</haskell><br />
<br />
Today we have the <hask>Applicative</hask> class that provides [[Applicative functor]]s. It is equivalent to the <hask>Liftable</hask> class.<br />
<haskell><br />
pure = liftL0<br />
(<*>) = appL<br />
<br />
zeroL = pure ()<br />
zipL = liftA2 (,)<br />
</haskell><br />
<br />
In principle, <hask>Applicative</hask> should be a superclass of <hask>Monad</hask>, but chronologically <hask>Functor</hask> and <hask>Monad</hask> were before <hask>Applicative</hask>.<br />
Unfortunately, inserting <hask>Applicative</hask> between <hask>Functor</hask> and <hask>Monad</hask> in the subclass hierarchy would break a lot of existing code and thus has not been done as of today (Jan 2013).<br />
<br />
== Monad lifting ==<br />
<br />
Lifting is often used together with [[monad]]s. The members of the <hask>liftM</hask>-family take a function and perform the corresponding computation within the monad.<br />
<haskell><br />
return :: (Monad m) => a -> m a<br />
liftM :: (Monad m) => (a1 -> r) -> m a1 -> m r<br />
liftM2 :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r<br />
</haskell><br />
Consider for example the list monad ([[MonadList]]). It performs a [[Non-determinism |nondeterministic calculation]], returning all possible results. <hask>liftM2</hask> just turns a deterministic function into a nondeterministic one:<br />
<haskell><br />
plus :: [Int] -> [Int] -> [Int]<br />
plus = liftM2 (+)<br />
-- plus [1,2,3] [3,6,9] ---> [4,7,10, 5,8,11, 6,9,12]<br />
-- plus [1..] [] ---> _|_ (i.e., keeps on calculating forever)<br />
-- plus [] [1..] ---> []<br />
</haskell><br />
<br />
Every <hask>Monad</hask> can be made an instance of <hask>Liftable</hask> using the following implementations:<br />
<haskell><br />
{-# LANGUAGE FlexibleInstances #-}<br />
{-# LANGUAGE UndecidableInstances #-}<br />
import Control.Monad<br />
<br />
instance (Functor m, Monad m) => Liftable m where <br />
zipL = liftM2 (\x y -> (x,y))<br />
zeroL = return ()<br />
</haskell><br />
<br />
Lifting becomes especially interesting when there are more levels you can lift between. <hask>Control.Monad.Trans</hask> (see [[Monad transformer]]s) defines a class<br />
<haskell><br />
class MonadTrans t where<br />
lift :: Monad m => m a -> t m a -- lifts a value from the inner monad m to the transformed monad t m<br />
-- could be called lift0<br />
</haskell><br />
lift takes the side effects of a monadic computation within the inner monad <hask>m</hask> and lifts them into the transformed monad <hask>t m</hask>. We can easily define functions which lift functions between inner monads to functions between transformed monads. Then we can perform three different lifting operations:<br />
<hask>liftM</hask> can be used both to transform a pure function into a function between inner monads and to a function between transformed monads, and finally lift transforms from the inner monad to the transformed monad. Because of the purity of Haskell, we can only lift "up".<br />
<br />
== Arrow lifting ==<br />
<br />
Until now, we have only considered lifting from functions to other functions. John Hughes' arrows (see [[Understanding arrows]]) are a generalization of computations that need not be functions anymore. An arrow <hask>a b c</hask> stands for a computation which transforms values of type <hask>b</hask> to values of type <hask>c</hask>. The basic primitive <hask>arr</hask>, aka <hask>pure</hask>,<br />
<haskell><br />
arr :: (Arrow a) => (b -> c) -> a b c<br />
</haskell><br />
is also a lifting operation.<br />
<br />
[[Category:Idioms]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Numeric_Haskell:_A_Vector_Tutorial&diff=55264Numeric Haskell: A Vector Tutorial2013-01-19T10:01:21Z<p>Davorak: /* Impure Arrays */</p>
<hr />
<div>[[Category:Tutorials]]<br />
<br />
[http://hackage.haskell.org/package/vector Vector] is a Haskell library for working with arrays. It has an emphasis on very high performance through loop fusion, whilst retaining a rich interface. The main data types are boxed and unboxed arrays, and arrays may be immutable (pure), or mutable. Arrays may hold Storable elements, suitable for passing to and from C, and you can convert between the array types. Arrays are indexed by non-negative <hask>Int</hask> values.<br />
<br />
The vector library has an API similar to the famous Haskell list library, with many of the same names.<br />
<br />
This tutorial is modelled on [http://www.scipy.org/Tentative_NumPy_Tutorial the NumPy tutorial].<br />
<br />
__TOC__<br />
<br />
= Quick Tour =<br />
<br />
Here is a quick overview to get you started.<br />
<br />
== Importing the library ==<br />
<br />
Download the vector package:<br />
<br />
$ cabal install vector<br />
<br />
and import it as, for boxed arrays:<br />
<br />
<haskell><br />
import qualified Data.Vector as V<br />
</haskell><br />
<br />
or:<br />
<br />
<haskell><br />
import qualified Data.Vector.Unboxed as V<br />
</haskell><br />
<br />
for unboxed arrays. The library needs to be imported qualified as it shares the same function names as list operations in the Prelude.<br />
<br />
== Generating Vectors ==<br />
<br />
New vectors can be generated in many ways:<br />
<br />
<haskell><br />
$ ghci<br />
GHCi, version 6.12.1: http://www.haskell.org/ghc/ :? for help<br />
Loading package ghc-prim ... linking ... done.<br />
Loading package integer-gmp ... linking ... done.<br />
Loading package base ... linking ... done.<br />
Loading package ffi-1.0 ... linking ... done.<br />
<br />
Prelude> :m + Data.Vector<br />
<br />
-- Generating a vector from a list:<br />
Prelude Data.Vector> let a = fromList [10, 20, 30, 40]<br />
<br />
Prelude Data.Vector> a<br />
fromList [10,20,30,40] :: Data.Vector.Vector<br />
<br />
-- Or filled from a sequence<br />
Prelude Data.Vector> enumFromStepN 10 10 4<br />
fromList [10,20,30,40] :: Data.Vector.Vector<br />
<br />
-- A vector created from four consecutive values<br />
Prelude Data.Vector> enumFromN 10 4<br />
fromList [10,11,12,13] :: Data.Vector.Vector<br />
</haskell><br />
<br />
You can also build vectors using operations similar to lists:<br />
<br />
<haskell><br />
-- The empty vector<br />
Prelude Data.Vector> empty<br />
fromList [] :: Data.Vector.Vector<br />
<br />
-- A vector of length one<br />
Prelude Data.Vector> singleton 2<br />
fromList [2] :: Data.Vector.Vector<br />
<br />
-- A vector of length 10, filled with the value '2'<br />
-- Note that to disambiguate names,<br />
-- and avoid a clash with the Prelude,<br />
-- we use the full path to the Vector module<br />
Prelude Data.Vector> Data.Vector.replicate 10 2<br />
fromList [2,2,2,2,2,2,2,2,2,2] :: Data.Vector.Vector<br />
</haskell><br />
<br />
In general, you may construct new vectors by applying a function to the index space:<br />
<br />
<haskell><br />
Prelude Data.Vector> generate 10 (^2)<br />
fromList [0,1,4,9,16,25,36,49,64,81] :: Data.Vector.Vector<br />
</haskell><br />
<br />
Vectors may have more than one dimension:<br />
<br />
<haskell><br />
-- Here we create a two dimensional vector, 10 columns,<br />
-- each row filled with the row index.<br />
Prelude Data.Vector> let x = generate 10 (\n -> Data.Vector.replicate 10 n)<br />
<br />
-- The type is "Vector of Vector of Ints"<br />
Prelude Data.Vector> :t x<br />
x :: Vector (Vector Int)<br />
</haskell><br />
<br />
Vectors may be grown or shrunk arbitrarily (each operation builds a new vector; the original is unchanged):<br />
<br />
<haskell><br />
Prelude Data.Vector> let y = Data.Vector.enumFromTo 0 11<br />
Prelude Data.Vector> y<br />
fromList [0,1,2,3,4,5,6,7,8,9,10,11] :: Data.Vector.Vector<br />
<br />
-- Take the first 3 elements as a new vector<br />
Prelude Data.Vector> Data.Vector.take 3 y<br />
fromList [0,1,2] :: Data.Vector.Vector<br />
<br />
-- Duplicate and join the vector<br />
Prelude Data.Vector> y Data.Vector.++ y<br />
fromList [0,1,2,3,4,5,6,7,8,9,10,11,0,1,2,3,4,5,6,7,8,9,10,11] :: Data.Vector.Vector<br />
</haskell><br />
<br />
== Modifying vectors ==<br />
<br />
Just as with lists, you can iterate (map) over arrays, reduce them (fold), filter them, or join them in various ways:<br />
<br />
<haskell><br />
-- mapping a function over the elements of a vector<br />
Prelude Data.Vector> Data.Vector.map (^2) y<br />
fromList [0,1,4,9,16,25,36,49,64,81,100,121] :: Data.Vector.Vector<br />
<br />
-- Extract only the odd elements from a vector<br />
Prelude Data.Vector> Data.Vector.filter odd y<br />
fromList [1,3,5,7,9,11] :: Data.Vector.Vector<br />
<br />
-- Reduce a vector<br />
Prelude Data.Vector> Data.Vector.foldl (+) 0 y<br />
66<br />
<br />
-- Take a scan (partial results from a reduction):<br />
Prelude Data.Vector> Data.Vector.scanl (+) 0 y<br />
fromList [0,0,1,3,6,10,15,21,28,36,45,55,66] :: Data.Vector.Vector<br />
<br />
-- Zip two arrays pairwise, into an array of pairs<br />
Prelude Data.Vector> Data.Vector.zip y y<br />
fromList [(0,0),(1,1),(2,2),(3,3),(4,4),(5,5),(6,6),(7,7),(8,8),(9,9),(10,10),(11,11)] :: Data.Vector.Vector<br />
</haskell><br />
<br />
== Indexing vectors ==<br />
<br />
And like all good arrays, you can index them in various ways:<br />
<br />
<haskell><br />
-- Take an arbitrary element<br />
Prelude Data.Vector> y ! 4<br />
4<br />
<br />
-- Take the last element<br />
Prelude Data.Vector> Data.Vector.last y<br />
11<br />
<br />
-- Take the first element<br />
Prelude Data.Vector> Data.Vector.head y<br />
0<br />
<br />
-- Take the rest<br />
Prelude Data.Vector> Data.Vector.tail y<br />
fromList [1,2,3,4,5,6,7,8,9,10,11] :: Data.Vector.Vector<br />
<br />
</haskell><br />
<br />
= The Tutorial =<br />
<br />
The vector package provides several types of array. The most general interface is via [http://hackage.haskell.org/packages/archive/vector/0.5/doc/html/Data-Vector.html Data.Vector], which provides boxed arrays that can hold any type.<br />
<br />
There are also more specialized array types:<br />
<br />
* [http://hackage.haskell.org/packages/archive/vector/0.5/doc/html/Data-Vector-Unboxed.html Unboxed] <br />
* [http://hackage.haskell.org/packages/archive/vector/0.5/doc/html/Data-Vector-Storable.html Storable] <br />
<br />
which provide unboxed arrays (i.e. no closures) and storable arrays (data that is pinned, and may be passed to and from C via a Ptr).<br />
<br />
In all cases, the operations are subject to loop fusion. That is, if you compose two functions, <br />
<br />
map f . map g<br />
<br />
the compiler will rewrite it into a single traversal:<br />
<br />
map (f . g)<br />
<br />
saving time and space.<br />
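<br />
For example, composing a filter, a map and a fold this way compiles down to one traversal of the input. A small sketch (the pipeline itself is illustrative; the functions are the standard ones from Data.Vector.Unboxed):<br />
<br />
<haskell><br />
import qualified Data.Vector.Unboxed as U<br />
<br />
-- filter, map and sum fuse into a single loop over the input,<br />
-- with no intermediate vectors allocated<br />
sumDoubledEvens :: U.Vector Int -> Int<br />
sumDoubledEvens = U.sum . U.map (*2) . U.filter even<br />
</haskell><br />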
<br />
== Simple example ==<br />
<br />
You can create the arrays in many ways, for example, from a regular Haskell list:<br />
<br />
<haskell><br />
Prelude Data.Vector> let a = fromList [2,3,4]<br />
<br />
Prelude Data.Vector> a<br />
fromList [2,3,4] :: Data.Vector.Vector<br />
<br />
Prelude Data.Vector> :t a<br />
a :: Vector Integer<br />
</haskell><br />
<br />
GHCi will print the contents of the vector as executable code.<br />
<br />
To create a multidimensional array, you can use a nested list generator to fill it:<br />
<br />
<haskell><br />
Prelude Data.Vector> let x = fromList [ fromList [1 .. x] | x <- [1..10] ]<br />
<br />
Prelude Data.Vector> :t x<br />
x :: Vector (Vector Integer)<br />
</haskell><br />
<br />
-- XXX TODO need a better printing function for multidimensional arrays.<br />
<br />
You can also just create arrays filled with zeroes:<br />
<br />
<haskell><br />
Prelude Data.Vector> Data.Vector.replicate 10 0<br />
fromList [0,0,0,0,0,0,0,0,0,0] :: Data.Vector.Vector<br />
</haskell><br />
<br />
And you can fill arrays from a sequence generator:<br />
<br />
<haskell><br />
Prelude Data.Vector> enumFromN 1 10<br />
fromList [1,2,3,4,5,6,7,8,9,10] :: Data.Vector.Vector<br />
<br />
Prelude Data.Vector> enumFromStepN 0 10 10<br />
fromList [0,10,20,30,40,50,60,70,80,90] :: Data.Vector.Vector<br />
</haskell><br />
<br />
== Array Types ==<br />
<br />
The vector package provides several array types, with an identical interface. They have different flexibility with respect to the types of values that may be stored in them, and different performance characteristics.<br />
<br />
In general:<br />
<br />
* End users should use Data.Vector.Unboxed for most cases<br />
* If you need to store more complex structures, use Data.Vector<br />
* If you need to pass to C, use Data.Vector.Storable<br />
<br />
For library writers:<br />
<br />
* Use the generic interface, to ensure your library is maximally flexible: Data.Vector.Generic<br />
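<br />
A function written against the generic interface works for boxed, unboxed and storable vectors alike. A sketch (the function and its name are illustrative):<br />
<br />
<haskell><br />
import qualified Data.Vector.Generic as G<br />
<br />
-- Euclidean norm, usable with any vector type holding Doubles<br />
norm :: G.Vector v Double => v Double -> Double<br />
norm v = sqrt (G.sum (G.map (^2) v))<br />
</haskell><br />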
<br />
<br />
=== Boxed Arrays: Data.Vector ===<br />
<br />
The most flexible type is [http://hackage.haskell.org/packages/archive/vector/0.5/doc/html/Data-Vector.html#t%3AVector Data.Vector.Vector], which provides ''boxed'' arrays: arrays of pointers to Haskell values.<br />
<br />
* Data.Vector.Vector's are fully polymorphic: they can hold any valid Haskell type<br />
<br />
These arrays are suitable for storing complex Haskell types (sum types, or algebraic data types), but a better choice for simple data types is Data.Vector.Unboxed.<br />
<br />
=== Unboxed Arrays: Data.Vector.Unboxed ===<br />
<br />
Simple, atomic types and pair types can be stored in a more efficient manner: consecutive memory slots without pointers. The [http://hackage.haskell.org/packages/archive/vector/0.5/doc/html/Data-Vector-Unboxed.html#t%3AVector Data.Vector.Unboxed.Vector] type provides unboxed arrays of types that are members of the Unbox class, including:<br />
<br />
* Bool<br />
* ()<br />
* Char<br />
* Double<br />
* Float<br />
* Int<br />
* Int8, 16, 32, 64<br />
* Word<br />
* Word8, 16, 32, 64<br />
* Complex a's, where 'a' is in Unbox<br />
* Tuple types, where the elements are unboxable<br />
<br />
Unboxed arrays should be preferred when you have unboxable elements, as they are generally more efficient.<br />
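<br />
For example, a vector of pairs of unboxed types is itself unboxable (a small sketch; the names are illustrative):<br />
<br />
<haskell><br />
import qualified Data.Vector.Unboxed as U<br />
<br />
-- (Int, Double) is in Unbox because both components are<br />
points :: U.Vector (Int, Double)<br />
points = U.zip (U.enumFromN 0 3) (U.replicate 3 1.5)<br />
</haskell><br />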
<br />
=== Storable Arrays: passing data to C ===<br />
<br />
Storable arrays ([http://hackage.haskell.org/packages/archive/vector/0.5/doc/html/Data-Vector-Storable.html#t%3AVector Data.Vector.Storable.Vector]) are vectors of any type in the Storable class.<br />
<br />
These arrays are pinned, and may be converted to and from pointers that can be passed to C functions, using a number of conversion functions:<br />
<br />
<haskell><br />
unsafeFromForeignPtr<br />
    :: Storable a<br />
    => ForeignPtr a<br />
    -> Int<br />
    -> Int<br />
    -> Vector a<br />
<br />
-- Create a vector from a ForeignPtr with an offset and a length. The data may<br />
-- not be modified through the ForeignPtr afterwards.<br />
<br />
unsafeToForeignPtr<br />
    :: Storable a<br />
    => Vector a<br />
    -> (ForeignPtr a, Int, Int)<br />
<br />
-- Yield the underlying ForeignPtr together with the offset to the data and its<br />
-- length. The data may not be modified through the ForeignPtr.<br />
<br />
unsafeWith<br />
    :: Storable a<br />
    => Vector a<br />
    -> (Ptr a -> IO b)<br />
    -> IO b<br />
<br />
-- Pass a pointer to the vector's data to the IO action. The data may not be<br />
-- modified through the Ptr.<br />
</haskell><br />
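<br />
For instance, unsafeWith can hand the vector's buffer to a foreign function. A sketch, where c_sum is a hypothetical C import (not part of any library):<br />
<br />
<haskell><br />
{-# LANGUAGE ForeignFunctionInterface #-}<br />
<br />
import qualified Data.Vector.Storable as S<br />
import Foreign.C.Types<br />
import Foreign.Ptr<br />
<br />
-- Hypothetical C side: double c_sum(const double *xs, int n);<br />
foreign import ccall unsafe "c_sum"<br />
    c_sum :: Ptr CDouble -> CInt -> IO CDouble<br />
<br />
sumViaC :: S.Vector CDouble -> IO CDouble<br />
sumViaC v = S.unsafeWith v $ \ptr -><br />
    c_sum ptr (fromIntegral (S.length v))<br />
</haskell><br />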
<br />
==== Storing your own types in Storable vectors ====<br />
<br />
You can store your own data types in Storable vectors, beyond the types with existing instances, by writing a Storable instance for the type.<br />
<br />
Here we store a 4-element Double vector as the array element.<br />
<br />
<haskell><br />
{-# LANGUAGE BangPatterns #-}<br />
<br />
import Data.Vector.Storable<br />
import qualified Data.Vector.Storable as V<br />
import Foreign<br />
import Foreign.C.Types<br />
<br />
-- Define a 4 element vector type<br />
data Vec4 = Vec4 {-# UNPACK #-} !CFloat<br />
                 {-# UNPACK #-} !CFloat<br />
                 {-# UNPACK #-} !CFloat<br />
                 {-# UNPACK #-} !CFloat<br />
</haskell><br />
<br />
Ensure we can store it in an array by giving it a Storable instance:<br />
<br />
<haskell><br />
instance Storable Vec4 where<br />
    sizeOf _ = sizeOf (undefined :: CFloat) * 4<br />
    alignment _ = alignment (undefined :: CFloat)<br />
<br />
    {-# INLINE peek #-}<br />
    peek p = do<br />
        a <- peekElemOff q 0<br />
        b <- peekElemOff q 1<br />
        c <- peekElemOff q 2<br />
        d <- peekElemOff q 3<br />
        return (Vec4 a b c d)<br />
      where<br />
        q = castPtr p<br />
<br />
    {-# INLINE poke #-}<br />
    poke p (Vec4 a b c d) = do<br />
        pokeElemOff q 0 a<br />
        pokeElemOff q 1 b<br />
        pokeElemOff q 2 c<br />
        pokeElemOff q 3 d<br />
      where<br />
        q = castPtr p<br />
<br />
And now we can write operations on the new vector, with very good performance.<br />
<br />
<haskell><br />
a = Vec4 0.2 0.1 0.6 1.0<br />
m = Vec4 0.99 0.7 0.8 0.6<br />
<br />
add :: Vec4 -> Vec4 -> Vec4<br />
{-# INLINE add #-}<br />
add (Vec4 a b c d) (Vec4 a' b' c' d') = Vec4 (a+a') (b+b') (c+c') (d+d')<br />
<br />
mult :: Vec4 -> Vec4 -> Vec4<br />
{-# INLINE mult #-}<br />
mult (Vec4 a b c d) (Vec4 a' b' c' d') = Vec4 (a*a') (b*b') (c*c') (d*d')<br />
<br />
vsum :: Vec4 -> CFloat<br />
{-# INLINE vsum #-}<br />
vsum (Vec4 a b c d) = a+b+c+d<br />
<br />
multList :: Int -> Vector Vec4 -> Vector Vec4<br />
multList !count !src<br />
| count <= 0 = src<br />
| otherwise = multList (count-1) $ V.map (\v -> add (mult v m) a) src<br />
<br />
main = do<br />
    print $ Data.Vector.Storable.sum<br />
          $ Data.Vector.Storable.map vsum<br />
          $ multList repCount<br />
          $ Data.Vector.Storable.replicate arraySize (Vec4 0 0 0 0)<br />
<br />
repCount, arraySize :: Int<br />
repCount = 10000<br />
arraySize = 20000<br />
</haskell><br />
<br />
=== Pure Arrays ===<br />
=== Impure Arrays ===<br />
<br />
Arrays can be created and operated on in a mutable fashion -- using destructive updates, as in an imperative language. Once all operations are complete, the mutable array can be "frozen" to a pure array, which changes its type.<br />
<br />
Mutable arrays plus freezing are quite useful for initializing arrays from data in the outside world.<br />
<br />
For example, to fill a generic array, we first<br />
<br />
* allocate an empty vector of size <hask>n</hask><br />
* destructively update the cells using a generator function<br />
* freeze the array and return it as a pure value.<br />
<br />
<haskell><br />
import qualified System.Random.Mersenne as R<br />
<br />
import qualified Data.Vector.Generic as G<br />
import qualified Data.Vector.Generic.Mutable as GM<br />
<br />
random :: (R.MTRandom a, G.Vector v a) => R.MTGen -> Int -> IO (v a)<br />
random g n = do<br />
    v <- GM.new n<br />
    fill v 0<br />
    G.unsafeFreeze v<br />
  where<br />
    fill v i<br />
        | i < n = do<br />
            x <- R.random g<br />
            GM.unsafeWrite v i x<br />
            fill v (i+1)<br />
        | otherwise = return ()<br />
</haskell><br />
<br />
Here we use Data.Vector.Generic.Mutable's new to allocate a fresh, uninitialized array, which we then fill destructively using unsafeWrite.<br />
<br />
By using the generic interface, we can construct boxed, storable or unboxed arrays all from the same code.<br />
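<br />
Under those imports, the same random function can then be instantiated at several vector types without changing its code (a sketch):<br />
<br />
<haskell><br />
import qualified Data.Vector as B<br />
import qualified Data.Vector.Unboxed as U<br />
<br />
demo :: R.MTGen -> IO ()<br />
demo g = do<br />
    boxed   <- random g 10 :: IO (B.Vector Double)<br />
    unboxed <- random g 10 :: IO (U.Vector Double)<br />
    print (B.sum boxed + U.sum unboxed)<br />
</haskell><br />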
<br />
=== Some examples ===<br />
<br />
The most important attributes of an array are available in O(1) time, such as its size (length):<br />
<br />
<haskell><br />
-- how big is the array?<br />
Prelude Data.Vector> let a = fromList [1,2,3,4,5,6,7,8,9,10]<br />
Prelude Data.Vector> Data.Vector.length a<br />
10<br />
<br />
-- is the array empty?<br />
Prelude Data.Vector> Data.Vector.null a<br />
False<br />
</haskell><br />
<br />
== Array Creation ==<br />
<br />
=== Enumerations ===<br />
<br />
The most common way to generate a vector is via an enumeration function:<br />
<br />
* enumFromN<br />
* enumFromStepN<br />
<br />
And the list-like:<br />
<br />
* enumFromTo<br />
* enumFromThenTo <br />
<br />
The enumFrom*N functions are guaranteed to optimize well for any type. The enumFromTo functions might fall back to generating from lists if there is no specialization for your type. They are currently specialized to most Int/Word/Double/Float generators.<br />
<br />
<haskell><br />
> enumFromN 1 10<br />
fromList [1,2,3,4,5,6,7,8,9,10]<br />
<br />
> enumFromStepN 1 3 4<br />
fromList [1,4,7,10]<br />
<br />
> Data.Vector.enumFromTo 1 10<br />
fromList [1,2,3,4,5,6,7,8,9,10]<br />
<br />
-- counting backwards<br />
> Data.Vector.enumFromThenTo 10 9 1<br />
fromList [10,9,8,7,6,5,4,3,2,1]<br />
</haskell><br />
<br />
==== A note on fusion ====<br />
<br />
As with almost all vector functions, if an enumerator is composed with a traversal or fold, the two will fuse into a single loop.<br />
<br />
For example, we can fuse generation of an array of doubles, with computing the product of the square roots. The source program consists of two loops:<br />
<br />
<haskell><br />
import qualified Data.Vector as V<br />
<br />
test :: V.Vector Int -> Double<br />
test = V.foldl (\ a b -> a * sqrt (fromIntegral b)) 1<br />
<br />
create :: Int -> V.Vector Int<br />
create n = (V.enumFromTo 1 n)<br />
<br />
main = print (test (create 1000000))<br />
</haskell><br />
<br />
And after optimization (revealed with the ghc-core tool), we have only one loop:<br />
<br />
<haskell><br />
main_$s$wfoldlM_loop :: Int# -> Double# -> Double#<br />
<br />
main_$s$wfoldlM_loop =<br />
\ (sc_sWA :: Int#) (sc1_sWB :: Double#) -><br />
case <=# sc_sWA 1000000 of _ {<br />
False -> sc1_sWB;<br />
True -><br />
main_$s$wfoldlM_loop<br />
(+# sc_sWA 1)<br />
(*##<br />
sc1_sWB (sqrtDouble# (int2Double# sc_sWA)))<br />
}<br />
</haskell><br />
<br />
This roughly doubles performance by halving the number of traversals. Fusion also means we avoid allocating any intermediate data structures.<br />
<br />
=== An example: filling a vector from a file ===<br />
<br />
We often want to populate a vector from an external data file. The easiest way to do this is with bytestring I/O and Data.Vector.unfoldr (or the equivalent functions in Data.Vector.Unboxed or Data.Vector.Storable):<br />
<br />
==== Parsing Ints ====<br />
<br />
The simplest way to parse a file of Int or Integer types is with a strict or lazy ByteString, and the readInt or readInteger functions:<br />
<br />
<haskell><br />
{-# LANGUAGE BangPatterns #-}<br />
<br />
import qualified Data.ByteString.Lazy.Char8 as L<br />
import qualified Data.Vector as U<br />
import System.Environment<br />
<br />
main = do<br />
    [f] <- getArgs<br />
    s <- L.readFile f<br />
    print . U.sum . parse $ s<br />
<br />
-- Fill a new vector from a file containing a list of numbers.<br />
parse = U.unfoldr step<br />
  where<br />
    step !s = case L.readInt s of<br />
        Nothing -> Nothing<br />
        Just (!k, !t) -> Just (k, L.tail t)<br />
</haskell><br />
<br />
Note the use of bang patterns to ensure the parsing accumulated state is produced strictly.<br />
<br />
Create a data file filled with 1 million integers:<br />
<br />
$ seq 1 1000000 > data<br />
<br />
Compile with -Odph (enables special optimizations to help fusion):<br />
<br />
$ ghc -Odph --make vector.hs<br />
<br />
Run:<br />
<br />
$ time ./vector data <br />
500000500000<br />
./vector data 0.08s user 0.01s system 98% cpu 0.088 total<br />
<br />
==== Parsing Floating Point Values ====<br />
<br />
To load a file of floating point values into a vector, you can use bytestrings and the [http://hackage.haskell.org/package/bytestring-lexing bytestring-lexing] package, which provides readDouble and readFloat functions.<br />
<br />
<haskell><br />
{-# LANGUAGE BangPatterns #-}<br />
<br />
import qualified Data.ByteString.Lazy.Char8 as L<br />
import qualified Data.ByteString.Lex.Lazy.Double as L<br />
import qualified Data.Vector as U<br />
import System.Environment<br />
<br />
main = do<br />
    [f] <- getArgs<br />
    s <- L.readFile f<br />
    print . U.sum . parse $ s<br />
<br />
-- Fill a new vector from a file containing a list of numbers.<br />
parse = U.unfoldr step<br />
  where<br />
    step !s = case L.readDouble s of<br />
        Nothing -> Nothing<br />
        Just (!k, !t) -> Just (k, L.tail t)<br />
</haskell><br />
<br />
=== Parsing Binary Data ===<br />
<br />
The best way to parse binary data is via bytestrings and the [http://hackage.haskell.org/package/binary Data.Binary] package.<br />
<br />
There are instances of Binary and Serialize available in the [http://hackage.haskell.org/package/vector-binary-instances-0.1 vector-binary-instances] package.<br />
<br />
An example: parsing a list of integers in text form, serializing them back in binary form, then loading that binary file:<br />
<br />
<haskell><br />
{-# LANGUAGE BangPatterns #-}<br />
<br />
import Data.Vector.Binary<br />
import Data.Binary<br />
import qualified Data.ByteString.Lazy.Char8 as L<br />
import qualified Data.Vector.Unboxed as V<br />
<br />
main = do<br />
    s <- L.readFile "dat"<br />
    let v = parse s :: V.Vector Int<br />
    encodeFile "dat2" v<br />
    v' <- decodeFile "dat2" :: IO (V.Vector Int)<br />
    print (v == v')<br />
<br />
-- Fill a new vector from a file containing a list of numbers.<br />
parse = V.unfoldr step<br />
  where<br />
    step !s = case L.readInt s of<br />
        Nothing -> Nothing<br />
        Just (!k, !t) -> Just (k, L.tail t)<br />
</haskell><br />
<br />
=== Random numbers ===<br />
<br />
If we can parse from a file, we can also fill a vector with random numbers. <br />
We'll use the [http://hackage.haskell.org/package/mersenne-random mersenne-random] package:<br />
<br />
$ cabal install mersenne-random<br />
<br />
We can then use the MTRandom class to generate random vectors of different types:<br />
<br />
<haskell><br />
import qualified Data.Vector.Unboxed as U<br />
import System.Random.Mersenne<br />
import Control.Monad<br />
<br />
main = do<br />
    -- create a new source of randomness<br />
    -- and an infinite list of randoms<br />
    g <- newMTGen Nothing<br />
    rs <- randoms g<br />
<br />
    -- fill a vector with the first 10 random Ints<br />
    let a = U.fromList (take 10 rs) :: U.Vector Int<br />
<br />
    -- print the sum<br />
    print (U.sum a)<br />
<br />
    -- print each element<br />
    forM_ (U.toList a) print<br />
</haskell><br />
<br />
Running it:<br />
<br />
<haskell><br />
$ runhaskell B.hs<br />
-56426044567146682<br />
-2144043065897064806<br />
-2361203915295681429<br />
1023751035988638668<br />
-5145147152582103336<br />
6545758323081548799<br />
-7630294342751488332<br />
-5861937811333342436<br />
-3198510304070719259<br />
8949914511577398116<br />
-8681457396993884283<br />
</haskell><br />
<br />
We can also just use the vector-random package to generate new vectors initialized with the mersenne twister generator:<br />
<br />
For example, to generate 10 million random Doubles and sum them:<br />
<br />
<haskell><br />
<br />
import qualified Data.Vector.Unboxed as U<br />
import System.Random.Mersenne<br />
import qualified Data.Vector.Random.Mersenne as G<br />
<br />
main = do<br />
    g <- newMTGen Nothing<br />
    a <- G.random g 10000000 :: IO (U.Vector Double) -- 10 million<br />
    print (U.sum a)<br />
</haskell><br />
<br />
=== Filling with a monadic action ===<br />
<br />
We might want to fill a vector with a monadic action, and have a pure vector at the end. The Vector API now contains a standard replicateM for this purpose, but if your monadic action is in IO, the following code is more efficient:<br />
<br />
<haskell><br />
import qualified Data.Vector.Generic as G<br />
import qualified Data.Vector.Generic.Mutable as M<br />
<br />
replicateMIO :: (G.Vector v a) => Int -> IO a -> IO (v a)<br />
replicateMIO n a = do<br />
    v <- M.new n<br />
    fill v 0<br />
    G.unsafeFreeze v<br />
  where<br />
    fill v i<br />
        | i < n = do<br />
            x <- a<br />
            M.unsafeWrite v i x<br />
            fill v (i+1)<br />
        | otherwise = return ()<br />
</haskell><br />
<br />
<br />
<br />
== Transformations on Vectors ==<br />
<br />
A primary class of operations on arrays is the zip family of functions.<br />
<br />
<haskell><br />
> let a = fromList [20,30,40,50]<br />
<br />
> let b = enumFromN 0 4<br />
<br />
> a <br />
fromList [20,30,40,50]<br />
<br />
> b<br />
fromList [0,1,2,3]<br />
<br />
> Data.Vector.zipWith (-) a b<br />
fromList [20,29,38,47]<br />
</haskell><br />
<br />
We can also, of course, apply a function to each element of a vector (map):<br />
<br />
<haskell><br />
> Data.Vector.map (^2) b<br />
fromList [0,1,4,9]<br />
<br />
> Data.Vector.map (\e -> 10 * sin (fromIntegral e)) a<br />
fromList [9.129452507276277,-9.880316240928618,7.451131604793488,-2.6237485370392877]<br />
<br />
> Data.Vector.map (< 35) a<br />
fromList [True,True,False,False]<br />
</haskell><br />
<br />
=== Folds: Sums, Products, Min, Max ===<br />
<br />
Many special purpose folds (reductions) on vectors are available:<br />
<br />
<haskell><br />
> let a = enumFromN 1 100<br />
<br />
> Data.Vector.sum a<br />
5050<br />
<br />
> Data.Vector.product a<br />
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000<br />
<br />
> Data.Vector.minimum a<br />
1<br />
<br />
> Data.Vector.maximum a<br />
100<br />
</haskell><br />
<br />
== Indexing, Slicing and Iterating ==<br />
<br />
One dimensional arrays can be indexed, sliced and iterated over pretty much like lists.<br />
<br />
Because Haskell values are by default immutable, all slice operations are zero-copying.<br />
<br />
<haskell><br />
> let a = enumFromN 0 10<br />
<br />
> a<br />
fromList [0,1,2,3,4,5,6,7,8,9]<br />
<br />
> let b = Data.Vector.map (^3) a<br />
<br />
> b ! 2<br />
8<br />
<br />
> slice 2 3 b<br />
fromList [8,27,64]<br />
</haskell><br />
<br />
slice takes 3 arguments: the initial index to slice from, the number of elements to slice, and the vector to operate on. <br />
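<br />
Its type makes this argument order explicit (as in the package's documentation):<br />
<br />
<haskell><br />
slice :: Int        -- starting index<br />
      -> Int        -- length of the slice<br />
      -> Vector a<br />
      -> Vector a<br />
</haskell><br />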
<br />
A number of special-purpose slices are also available:<br />
<br />
<haskell><br />
> Data.Vector.init b<br />
fromList [0,1,8,27,64,125,216,343,512]<br />
<br />
> Data.Vector.tail b<br />
fromList [1,8,27,64,125,216,343,512,729]<br />
<br />
> Data.Vector.take 3 b<br />
fromList [0,1,8] :: Data.Vector.Vector<br />
<br />
> Data.Vector.drop 3 b<br />
fromList [27,64,125,216,343,512,729] :: Data.Vector.Vector<br />
</haskell><br />
<br />
=== Unsafe Slices ===<br />
<br />
For performance reasons you may wish to avoid bounds checks, when you<br />
can prove that the slice or index will be in bounds. For these cases<br />
there are unsafe operations that let you skip the bounds check:<br />
<br />
<haskell><br />
> let a = fromList [1..10]<br />
<br />
> unsafeSlice 2 4 a<br />
fromList [3,4,5,6]<br />
<br />
> unsafeInit a<br />
fromList [1,2,3,4,5,6,7,8,9]<br />
<br />
> unsafeTail a<br />
fromList [2,3,4,5,6,7,8,9,10]<br />
</haskell><br />
<br />
and also unsafeTake, unsafeDrop.<br />
<br />
== Examples ==<br />
<br />
=== Sum of Squares ===<br />
<br />
Take the sum of the squares of the elements of a vector:<br />
<br />
<haskell><br />
sumsq :: U.Vector Int -> Int<br />
sumsq v = U.sum (U.map (\x -> x * x) v)<br />
</haskell><br />
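<br />
Thanks to fusion, the map and the sum compile to a single loop with no intermediate vector. For example, summing the squares of 1 to 100 (which the closed form n(n+1)(2n+1)/6 confirms):<br />
<br />
<haskell><br />
> sumsq (U.enumFromN 1 100)<br />
338350<br />
</haskell><br />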
<br />
== Stacking together different arrays ==<br />
== Splitting one array into several smaller ones ==<br />
== Copies and Views ==<br />
== No Copy at All ==<br />
== Indexing with Arrays of Indices ==<br />
== Indexing with Boolean Arrays ==<br />
== Permutations ==<br />
== Randoms ==<br />
== IO ==<br />
== References ==</div>Davorakhttps://wiki.haskell.org/index.php?title=Iteratee_I/O&diff=55263Iteratee I/O2013-01-19T00:56:14Z<p>Davorak: /* Implementations */</p>
<hr />
<div>Iteratee I/O is a way to avoid the problems that can occur with lazy I/O. Iteratees work by making the I/O actions explicit, which makes their behavior easier to reason about.<br />
<br />
== The problem with lazy I/O ==<br />
<br />
As a beginner, you probably used Haskell's lazy I/O through the <code>System.IO</code> module. However, while it is good enough for simple programs, its unpredictability makes it unsuitable for practical use.<br />
<br />
For example, a common beginner mistake is to close a file before one has finished reading it:<br />
<br />
<haskell><br />
wrong = do<br />
    fileData <- withFile "test.txt" ReadMode hGetContents<br />
    putStr fileData<br />
</haskell><br />
<br />
The problem is <code>withFile</code> closes the handle before <code>fileData</code> is forced. The correct way is to pass all the code to <code>withFile</code>:<br />
<br />
<haskell><br />
right = withFile "test.txt" ReadMode $ \handle -> do<br />
    fileData <- hGetContents handle<br />
    putStr fileData<br />
</haskell><br />
<br />
Here, the data is consumed before <code>withFile</code> finishes.<br />
<br />
Although this is easily fixed, the type system does not enforce the correct solution. Even worse, if you use the former code, it won't even raise an error &ndash; it will just fail silently and return an empty string. Many years passed before a satisfactory solution to the ''streaming data problem'' was found.<br />
<br />
== How iteratees work ==<br />
<br />
When you "step" an iteratee, it reads a chunk of data, updates the internal state and returns a new iteratee along with the data it read. Because an iteratee is simply a function with state, many iteratees can be composed together to form a pipeline.<br />
<br />
Some implementations also provide a resource management layer that releases resources automatically when they are no longer needed. This is very useful in a server, where sockets and file handles are scarce.<br />
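<br />
To make the idea concrete, here is a minimal sketch of an iteratee type; it is illustrative only and does not match the API of any particular package:<br />
<br />
<haskell><br />
-- A consumer either wants another chunk, or is done with a result.<br />
data Iteratee chunk a<br />
    = Continue (chunk -> Iteratee chunk a)<br />
    | Done a<br />
<br />
-- Step the iteratee over a list of chunks until either runs out.<br />
feed :: Iteratee chunk a -> [chunk] -> Iteratee chunk a<br />
feed i            []     = i<br />
feed (Done a)     _      = Done a<br />
feed (Continue k) (c:cs) = feed (k c) cs<br />
</haskell><br />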
<br />
== Implementations ==<br />
<br />
; [http://hackage.haskell.org/package/iteratee iteratee] : The original iteratee library, by Oleg Kiselyov.<br />
; [http://hackage.haskell.org/package/iterIO iterIO] : Yet another implementation.<br />
; [http://hackage.haskell.org/package/enumerator enumerator] : Used in Snap. It does not use any extensions, so it will work with most Haskell compilers.<br />
; [http://hackage.haskell.org/package/pipes pipes] : A more recent implementation, which strives to be more elegant than existing libraries.<br />
; [http://hackage.haskell.org/package/pipes-core pipes-core] : Fork of pipes which adds resource finalization, though pipes now has its own finalization as well via [http://hackage.haskell.org/package/pipes-safe pipes-safe].<br />
; [http://hackage.haskell.org/package/conduit conduit] : Popular implementation designed with practical use in mind, created by the author of Yesod. Recently heavily influenced by pipes.<br />
; [http://hackage.haskell.org/package/liboleg liboleg] : An evolving collection of Oleg Kiselyov's Haskell modules (depends on the package unix and will therefore not compile on Windows systems).<br />
<br />
== Essays by Oleg ==<br />
<br />
* Oleg's writings: [http://okmij.org/ftp/Streams.html#iteratee Incremental multi-level input processing with left-fold enumerator: predictable, high-performance, safe, and elegant]<br />
* [http://okmij.org/ftp/Haskell/Iteratee/Iteratee.hs An implementation by Oleg, iteratees on Chars and Strings]<br />
* [http://okmij.org/ftp/Haskell/Iteratee/IterateeM.hs A general library by Oleg] <br />
<br />
== Other discussions ==<br />
<br />
* [http://johnlato.blogspot.sg/2012/06/understandings-of-iteratees.html Understandings of Iteratees]<br />
* [http://themonadreader.wordpress.com/2010/05/12/issue-16/ The Monad.Reader Issue 16]; see the section "Iteratee: Teaching an Old Fold New Tricks" by John W. Lato<br />
* [http://www.yesodweb.com/book/conduit Yesod Book: Conduits]<br />
* [http://sites.google.com/site/haskell/notes/lazy-io-considered-harmful-way-to-go-left-fold-enumerator Lazy IO considered harmful; way to go, Left-fold enumerator!]<br />
* [http://www.tiresiaspress.us/haskell/iteratee/ A Darcs repository of an alternative implementation]<br />
* [http://www.scs.stanford.edu/11au-cs240h/notes/iteratee.html Stanford CS240h lecture on iteratee]<br />
<br />
== Users of Iteratee I/O ==<br />
<br />
* [http://snapframework.com Snap]: The Snap web framework<br />
* [http://hackage.haskell.org/package/yaml yaml]: Low-level binding to the libyaml C library<br />
* [http://hackage.haskell.org/package/usb-0.4 usb 0.4]: Communicate with USB devices<br />
* [http://hackage.haskell.org/package/sstable sstable]: SSTables in Haskell<br />
* [http://hackage.haskell.org/package/wai WAI]: a Web Application Interface for haskell web frameworks (used by [http://www.yesodweb.com Yesod]).<br />
<br />
== See also ==<br />
<br />
* [[Enumerator and iteratee]]<br />
* [[Iteratee]]<br />
<br />
[[Category:Idioms]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Vim&diff=55260Vim2013-01-14T07:35:10Z<p>Davorak: Added external list with summary and screen shots.</p>
<hr />
<div>[[Category:Development tools]] <br />
This page is intended for Haskell Vim users.<br />
<br />
= Indentation =<br />
<br />
The following setup from merijn @ #haskell ensures you use spaces not tabs for indentation for generally sane behaviour:<br />
<br />
<pre><br />
" Tab specific option<br />
set tabstop=8 "A tab is 8 spaces<br />
set expandtab "Always uses spaces instead of tabs<br />
set softtabstop=4 "Insert 4 spaces when tab is pressed<br />
set shiftwidth=4 "An indent is 4 spaces<br />
set smarttab "Indent instead of tab at start of line<br />
set shiftround "Round spaces to nearest shiftwidth multiple<br />
set nojoinspaces "Don't insert two spaces after punctuation on a join (J)<br />
</pre><br />
<br />
= Plugins =<br />
Put code in file <code>~/.vim/plugin/Haskell.vim</code>, or in multiple files in that directory. <br />
<br />
== Module Sections ==<br />
The following code prompts for a name and places a section with that name at the current position when the key sequence "--s" is typed:<br />
<pre><br />
let s:width = 80<br />
<br />
function! HaskellModuleSection(...)<br />
let name = 0 < a:0 ? a:1 : inputdialog("Section name: ")<br />
<br />
return repeat('-', s:width) . "\n"<br />
\ . "-- " . name . "\n"<br />
\ . "\n"<br />
<br />
endfunction<br />
<br />
nmap <silent> --s "=HaskellModuleSection()<CR>gp<br />
</pre><br />
Like so:<br />
<haskell><br />
<br />
--------------------------------------------------------------------------------<br />
-- my section<br />
<br />
</haskell><br />
<br />
<br />
== Module Headers ==<br />
The following code prompts for a module name, a note and a description of the module, and places a module header comment at the top when the key sequence "--h" is typed:<br />
<pre><br />
let s:width = 80<br />
<br />
<br />
function! HaskellModuleHeader(...)<br />
let name = 0 < a:0 ? a:1 : inputdialog("Module: ")<br />
let note = 1 < a:0 ? a:2 : inputdialog("Note: ")<br />
let description = 2 < a:0 ? a:3 : inputdialog("Describe this module: ")<br />
<br />
return repeat('-', s:width) . "\n" <br />
\ . "-- | \n" <br />
\ . "-- Module : " . name . "\n"<br />
\ . "-- Note : " . note . "\n"<br />
\ . "-- \n"<br />
\ . "-- " . description . "\n"<br />
\ . "-- \n"<br />
\ . repeat('-', s:width) . "\n"<br />
\ . "\n"<br />
<br />
endfunction<br />
<br />
<br />
nmap <silent> --h "=HaskellModuleHeader()<CR>:0put =<CR><br />
</pre><br />
like so:<br />
<haskell><br />
--------------------------------------------------------------------------------<br />
-- | <br />
-- Module : MyModule<br />
-- Note : This is a preview<br />
-- <br />
-- This is an empty module, to show the headercomment produced. <br />
-- <br />
--------------------------------------------------------------------------------<br />
<br />
<br />
</haskell><br />
<br />
= List of Plugins =<br />
<br />
* [https://github.com/bitc/vim-hdevtools Hdevtools]. From the GitHub page:<br />
<blockquote><br />
hdevtools is a command line program powered by the GHC API, that provides services for Haskell development. hdevtools works by running a persistent process in the background, so that your Haskell modules remain in memory, instead of having to reload everything each time you change only one file. This is just like :reload in GHCi - with hdevtools you get the speed of GHCi as well as tight integration with your editor.<br />
<br />
This is the Vim plugin that integrates Vim with hdevtools.<br />
</blockquote><br />
<br />
<br />
* [https://github.com/lukerandall/haskellmode-vim Haskellmode-vim], from the GitHub page:<br />
<blockquote><br />
The Haskell mode plugins provide advanced support for Haskell development<br />
using GHC/GHCi on Windows and Unix-like systems. The functionality is<br />
based on Haddock-generated library indices, on GHCi's interactive<br />
commands, or on simply activating (some of) Vim's built-in program editing<br />
support in Haskell-relevant fashion. These plugins live side-by-side with<br />
the pre-defined |syntax-highlighting| support for |haskell| sources, and<br />
any other Haskell-related plugins you might want to install (see<br />
|haskellmode-resources|).<br />
<br />
The Haskell mode plugins consist of three filetype plugins (haskell.vim,<br />
haskell_doc.vim, haskell_hpaste.vim), which by Vim's |filetype| detection<br />
mechanism will be auto-loaded whenever files with the extension '.hs' are<br />
opened, and one compiler plugin (ghc.vim) which you will need to load from<br />
your vimrc file (see |haskellmode-settings|).<br />
</blockquote><br />
<br />
* [https://github.com/eagletmt/ghcmod-vim Ghcmod-vim], from the GitHub page:<br />
<blockquote><br />
Displaying the type of sub-expressions (ghc-mod type)<br />
Displaying error/warning messages and their locations (ghc-mod check and ghc-mod lint)<br />
Displaying the expansion of splices (ghc-mod expand)<br />
Completions are supported by another plugin. See neco-ghc .<br />
</blockquote><br />
<br />
* [https://github.com/scrooloose/syntastic Syntastic] supports Haskell and several other languages. From the GitHub page:<br />
<blockquote><br />
Syntastic is a syntax checking plugin that runs files through external syntax checkers and displays any resulting errors to the user. This can be done on demand, or automatically as files are saved. If syntax errors are detected, the user is notified and is happy because they didn't have to compile their code or execute their script to find them.<br />
<br />
At the time of this writing, syntax checking plugins exist for applescript, c, coffee, cpp, css, cucumber, cuda, docbk, erlang, eruby, fortran, gentoo_metadata, go, haml, haskell, html, javascript, json, less, lua, matlab, perl, php, puppet, python, rst, ruby, sass/scss, sh, tcl, tex, vala, xhtml, xml, xslt, yaml, zpt<br />
</blockquote><br />
<br />
* [https://github.com/ujihisa/neco-ghc Neco-ghc], powered by ghcmod-vim, for completion of pragmas, modules, functions and more.<br />
<br />
* [http://blog-mno2.csie.org/blog/2011/11/17/vim-plugins-for-haskell-programmers/ An additional list] covering some plugins missing here, with screenshots of many of the above.</div>Davorakhttps://wiki.haskell.org/index.php?title=IDEs&diff=55259IDEs2013-01-14T07:34:39Z<p>Davorak: /* Vim */ Added external summary and list.</p>
<hr />
<div>The IDE world in Haskell is incomplete, but it is in motion. There are many choices. When choosing your IDE, consider the following.<br />
<br />
== Notable features of interest to consider ==<br />
<br />
This is a list of features that any Haskell IDE could or should have. The IDEs listed below generally support some subset of these features. Please add more to this list if you think of anything. In the future this should be expanded into separate headings with more description of how each feature would ideally work.<br />
<br />
* Syntax highlighting (e.g. for Haskell, Cabal, Literate Haskell, Core, etc.)<br />
* Macros (e.g. inserting imports/aligning/sorting imports, aligning up text, transposing/switching/moving things around)<br />
* Type information (e.g. type at point, info at point, type of expression)<br />
* Intellisense/completion (e.g. jump-to-definition, who-calls, calls-who, search by type, completion, etc.)<br />
* Project management (e.g. understanding of Cabal, configuration/building/installing, package sandboxing)<br />
* Interactive REPL (e.g. GHCi/Hugs interaction, expression evaluation and such)<br />
* Knowledge of Haskell in the GHCi/GHC side (e.g. understanding error types, the REPL, REPL objects, object inspection)<br />
* Indentation support (e.g. tab cycle, simple back-forward indentation, whole area indentation, structured editing, etc.)<br />
* Proper syntactic awareness of Haskell (e.g. with a proper parser and proper editor transpositions a la the structured editors of the 80s and Isabelle et al.)<br />
* Documentation support (e.g. ability to call up documentation of symbol or module, either in the editor, or in the browser)<br />
* Debugger support (e.g. stepping, breakpoints, etc.)<br />
* Refactoring support (e.g. symbol renaming, hlint, etc.)<br />
* Templates (e.g. snippets, zencoding type stuff, filling in all the cases of a case, etc.)<br />
<br />
== Software ==<br />
<br />
=== [http://www.haskell.org/visualhaskell Visual Haskell] ===<br />
:Visual Haskell is a complete development environment for Haskell software, based on the [http://www.microsoft.com/visualstudio/en-us Microsoft Visual Studio] platform. Visual Haskell integrates with the Visual Studio editor to provide interactive features to aid Haskell development, and it enables the construction of projects consisting of multiple Haskell modules, using the Cabal building/packaging infrastructure.<br />
<br />
=== [http://eclipsefp.github.com/ EclipseFP plugin for Eclipse IDE] ===<br />
:Eclipse is an open, extensible IDE platform for "everything and nothing in particular". It is implemented in Java and runs on several platforms. The Java IDE built on top of it has already become very popular among Java developers. The Haskell tools extend it to support editing (syntax coloring, code assist), compiling, and running Haskell programs from within the IDE. In more detail, it features:<br />
:* Syntax highlighting and errors/warning highlighting<br />
:* A module browser showing all installed packages, their modules and the contents of the modules (functions, types, etc.)<br />
:* Integration with [http://www.haskell.org/hoogle/ Hoogle]: select an identifier in your code, press F4 and see the results in hoogle<br />
:* Code navigation: from within a Haskell source file, jump to the file where a symbol is declared, or to everywhere a symbol is used (type-sensitive search, not just a text search)<br />
:* Outline view: quickly jump to definitions in your file<br />
:* Quick fixes on common errors and import management<br />
:* A cabal file editor and integration with Cabal (uses cabal configure, cabal build under the covers), and a graphical view of installed packages<br />
:* Integration with GHCi: launch GHCi inside Eclipse on any module<br />
:* Integration with the GHCi debugger: performs the GHCi debugging commands for you from the standard Eclipse debugging interface<br />
:* Integration with [http://community.haskell.org/~ndm/hlint/ HLint]: gives you HLint warning on building and allows you to quick fix them<br />
:* Integration with [https://github.com/jaspervdj/stylish-haskell Stylish-Haskell]: format your code with stylish-haskell<br />
:* Test support: shows results of test-framework based test suite in a graphical format. HTF support to come soon.<br />
<br />
=== [http://colorer.sourceforge.net/eclipsecolorer/index.html Colorer plugin for Eclipse IDE] ===<br />
:Rudimentary syntax highlighting in Eclipse can be achieved using the Colorer plugin. This is more lightweight than the EclipseFP plugin, which has much more functionality but can be messy to install and has sometimes been a bit shaky.<br />
<br />
:Eclipse Colorer is a plugin that enables syntax highlighting for a wide range of languages. It uses its own XML-based language for describing syntactic regions of languages. It does not include support for Haskell by default, but this can be added using the syntax description files attached below.<br />
<br />
:<b>Installation instructions</b><br />
:# Install the Colorer from the update site <code>http://colorer.sf.net/eclipsecolorer/</code> (for more detailed instructions see the project page).<br />
:# Download the Haskell syntax description files in [http://www.haskell.org/wikiupload/1/16/Haskell_Eclipse_Colorer.tar.gz Haskell_Eclipse_Colorer.tar.gz].<br />
:# Extract its contents (haskell.hrc and proto.hrc) into the following directory (overwriting proto.hrc): <code>eclipse_installation_dir/plugins/net.sf.colorer_0.9.9/colorer/hrc</code> (sometimes the wiki seems to create a nested tar file, so you might have to unpack twice).<br />
:# Finished. A restart of Eclipse might be required. .hs files should open with syntax highlighting.<br />
<br />
:<b>Troubleshooting</b><br />
:If .hs files open with another kind of syntax highlighting check that they are associated with the Colorer Editor (Preferences -> General -> Editors -> File Associations). Or right click on them and choose Open With -> Other -> Colorer Editor.<br />
<br />
=== [[Leksah]] ===<br />
:Leksah is an IDE for Haskell written in Haskell. Leksah is intended as a practical tool to support the Haskell development process. It is in a pre-release phase with bugs and open ends, but is actively developed and moving quickly. Hopefully, Leksah will already be interesting, useful and fun. Leksah uses GTK+ as its GUI toolkit via the gtk2hs binding. It is platform-independent and should run on any platform where GTK+, gtk2hs and GHC can be installed. It has been tested on Windows and Linux. It only supports GHC.<br />
<br />
"I found Leksah less than satisfactory on OS X Lion. I could not figure out how to reference Test.Unit in a very simple program after several hours of trying to find the right settings in Leksah, and Leksah crashes a lot. I was able to build the same program in Eclipse within a few minutes of installing Eclipse and [http://eclipsefp.github.com/ Haskell support for Eclipse]." -- Doug Ransom<br />
<br />
- is the above still relevant? I did have to read the manual, the section concerning setting up a project (including dependencies), as leksah requires you to, but did not notice any issues, described above. Stability does not seem to be an issue either. -- Vladimir Lopatin.<br />
<br />
=== [http://www.haskell.org/haskellwiki/HIDE hIDE] ===<br />
:hIDE is a GUI-based Haskell IDE written using gtk+hs. It does not include an editor but instead interfaces with NEdit, vim or GNU emacs.<br />
<br />
=== [http://www.haskell.org/haskellwiki/HIDE hIDE-2] ===<br />
:Through the dark ages many a programmer has longed for the ultimate tool. In response to this most unnerving craving, of which we ourselves have had maybe more than our fair share, the dynamic trio of #Haskellaniacs (dons, dcoutts and Lemmih) hereby announce, to the relief of the community, that a fetus has been conceived: ''hIDE - the Haskell Integrated Development Environment''. So far the unborn integrates source code recognition and a chameleon editor, presenting these in a snappy gtk2 environment. Although no seer has yet predicted the date of birth of our hIDEous creature, we hope that the mere knowledge of its existence will spread peace of mind throughout the community as oil on troubled waters. See also: [[HIDE/Screenshots of HIDE]] and [[HIDE]]<br />
<br />
=== [http://web.archive.org/web/20060213161530/http://www.students.cs.uu.nl/people/rjchaaft/JCreator/ JCreator with Haskell support] ===<br />
: <b>N.B. The link above is to the Wayback Machine (Web Archive); it seems that JCreator is no longer supported.</b><br />
:JCreator is a highly customizable Java IDE for Windows. Features include extensive project support, fully customizable toolbars (including the images of user tools) and menus, increase/decrease indent for a selected block of text (tab/shift+tab respectively). The Haskell support module adds syntax highlighting for Haskell files and WinHugs, hugs, a static checker (if you double click on the error message, JCreator will jump to the right file and line and highlight it yellow) and the Haskell 98 Report as tools. Platforms: Win95, Win98, WinNT and Win2000 (only Win95 not tested yet). Size: 6MB. JCreator is a trademark of Xinox Software; Copyright &copy; 2000 Xinox Software. The Haskell support module is made by Rijk-Jan van Haaften.<br />
<br />
=== [http://kdevelop.org/ KDevelop] ===<br />
:This IDE supports many languages. For Haskell it currently supports project management, syntax highlighting, building (with GHC) & executing within the IDE.<br />
<br />
=== [[haste]] - Haskell TurboEdit ===<br />
:haste - Haskell TurboEdit - was an IDE for the functional programming language Haskell, written in Haskell.<br />
<br />
=== [http://www.cs.kent.ac.uk/projects/vital/ Vital] ===<br />
:Vital is a visual programming environment. It is particularly intended for supporting the open-ended, incremental style of development often preferred by end users (engineers, scientists, analysts, etc.).<br />
<br />
=== [http://www.cs.kent.ac.uk/projects/pivotal/ Pivotal] ===<br />
:Pivotal 0.025 is an early prototype of a Vital-like environment for Haskell. Unlike Vital, however, Pivotal is implemented entirely in Haskell. The implementation is based on the use of the hs-plugins library to allow dynamic compilation and evaluation of Haskell expressions together with the gtk2hs library for implementing the GUI.<br />
<br />
=== [http://www.vim.org Vim] ===<br />
<br />
This may or may not be up to date. A Vim user should update it.<br />
<br />
:* [http://projects.haskell.org/haskellmode-vim/ Haskell mode for Vim by Claus Reinke] - These plugins provide Vim integration with GHC and Haddock.<br />
:* [https://github.com/scrooloose/syntastic Syntastic] -- An extremely useful Vim plugin which will interact with ghc_mod (when editing a Haskell file) every time the source file is saved to check for syntax and type errors.<br />
:* [http://www.vim.org/scripts/script.php?script_id=2356 SHIM by Lars Kotthoff] -- Superior Haskell Interaction Mode (SHIM) plugin for Vim providing full GHCi integration (requires Vim compiled with Ruby support).<br />
:* [http://www.vim.org/scripts/script.php?script_id=3200 Haskell Conceal] -- shows Unicode symbols for common Haskell operators such as ++ and other lexical notation in Vim window (source file itself remains unchanged).<br />
:* [http://urchin.earth.li/~ian/vim/ by Ian Lynagh]: distinguishes different literal Haskell styles (Vim 7.0 includes a syntax file which supersedes these plugins).<br />
:* There's a [[Literate programming/Vim|copy of lhaskell.vim]] on the Wiki.<br />
:* [https://github.com/MarcWeber/vim-addon-haskell by Marc Weber] -- Vim script-based function/module completion, cabal support, tagging by one command, context completion ( w<tab> -> where ), module outline, etc<br />
:* [http://www.vim.org/scripts/script.php?script_id=1968 Vim indenting mode for Haskell]<br />
:* [https://github.com/ujihisa/neco-ghc neco-ghc] pragma, module, function completion.<br />
:* [https://github.com/eagletmt/ghcmod-vim Ghcmod-vim]<br />
:* [https://github.com/bitc/vim-hdevtools Hdevtools] - gives type information, quicker reloading and more.<br />
:* [http://blog-mno2.csie.org/blog/2011/11/17/vim-plugins-for-haskell-programmers/ An additional list] covering some plugins missing here, with screenshots of many of the above.<br />
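Several of these plugins need a line or two in your vimrc before they do anything. A minimal sketch for activating haskellmode-vim, assuming the settings described in its own documentation (the browser path is illustrative):<br />
<pre><br />
" ~/.vimrc — minimal haskellmode-vim activation (sketch; adjust paths)<br />
au BufEnter *.hs compiler ghc                 " load the ghc compiler plugin for Haskell buffers<br />
let g:haddock_browser = "/usr/bin/firefox"    " browser used for Haddock documentation lookups<br />
</pre><br />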
<br />
=== [http://www.gnu.org/s/emacs/ Emacs] ===<br />
<br />
See [[Emacs]].<br />
<br />
=== Other IDEs ===<br />
<br />
The list below is incomplete. Please add to it with whatever you think of. This list should be expanded into sections, as above, with more details, with links to the actual documentation of the described features.<br />
<br />
* Vim — '''PROS:''' Free. Works on Windows. Works in terminal. Decent alignment support. Tag-based completion and jumps. Very good syntax highlighting, flymake (via Syntastic), Cabal integration, Hoogle. Documentation for symbol at point. '''CONS:''' Arcane, difficult for new users. Some complain of bad indentation support.<br />
* [http://www.haskell.org/haskellwiki/Haskell_mode_for_Emacs Emacs] — '''PROS:''' Free. Works on Windows. Works in terminal. Decent alignment, indentation, syntax highlighting. Limited type information (type and info of name at point). Cabal/GHC/GHCi awareness and Haskell-aware REPL. Completion and jump-to-definition (via ETAGS). Documentation for symbol at point. Hoogle. Flymake (error checking on the fly). '''CONS:''' Arcane, difficult for new users.<br />
* Sublime — '''PROS:''' Works on Windows. '''CONS:''' Poor alignment support (though [http://www.reddit.com/r/haskell/comments/ts8fi/haskell_ides_emacs_vim_and_sublime_oh_my_opinions/c4pair1 there are packages] to do indentation a little better). Proprietary.<br />
* [[Yi]] — '''PROS:''' Written in Haskell. Works in terminal. '''CONS:''' Very immature, lacking features. Problems building generally, especially on Windows.<br />
* [http://www.haskell.org/haskellwiki/Leksah Leksah] — '''PROS:''' Syntax highlighting. Understands Cabal, Module browser, dependency knowledge, documentation display inside the IDE, jump-to-definition, flymake (error checking on the fly), limited evaluation of snippets, scratch buffer. Autocompletion. Not an arcane interface a la Emacs/Vim. '''CONS:''' Doesn't have a decent REPL. Are there any other cons? — This should be moved to the section above.<br />
* [http://www.haskell.org/visualhaskell/ Visual Haskell]<br />
* [http://www.haskell.org/haskellwiki/Editors Other]<br />
* KDevelop — Decent project management.<br />
* [http://www.cs.kent.ac.uk/projects/heat/ HEAT:] An Interactive Development Environment for Learning & Teaching Haskell<br />
* [http://www.geany.org/ Geany] '''PROS:''' Free. Works on Windows. Syntax highlighting, REPL. '''CONS:''' After using it for a while, Geany freezes quite often.<br />
<br />
== See also ==<br />
<br />
* [http://fpcomplete.com/designing-the-haskell-ide/ Designing the Haskell IDE]. FP Complete is working on a Haskell IDE.<br />
* [http://blog.johantibell.com/2011/08/results-from-state-of-haskell-2011.html Results from the State of Haskell, 2011 Survey]. <br />
* [http://nickknowlson.com/blog/2011/09/12/haskell-survey-categorized-weaknesses/ Categorized Weaknesses from the State of Haskell 2011 Survey], which barely touched upon IDEs.<br />
* [[Editors]]<br />
* [[Applications and libraries/Program development#Editor support]]<br />
* [http://code.haskell.org/shim/ Shim]; the aim of the shim (Superior Haskell Interaction Mode) project is to provide better support for editing Haskell code in VIM and Emacs<br />
<br />
== Outdated ==<br />
<br />
* [http://web.archive.org/web/20110726153330/http://hoovy.org/HaskellXcodePlugin/ plugin for Xcode] (links to the web archive)<br />
* [[hIDE]], now apparently orphaned</div>Davorakhttps://wiki.haskell.org/index.php?title=IDEs&diff=55258IDEs2013-01-14T07:28:04Z<p>Davorak: /* Vim */ added more vim plugins.</p>
<hr />
<div>The IDE world in Haskell is incomplete, but it is in motion. There are many choices. When choosing your IDE, consider the following.<br />
<br />
== Notable features of interest to consider ==<br />
<br />
This is a list of features that any Haskell IDE could or should have. The IDEs listed below generally support some subset of these features. Please add more to this list if you think of anything. In the future this should be expanded into separate headings with more description of how each feature would ideally work.<br />
<br />
* Syntax highlighting (e.g. for Haskell, Cabal, Literate Haskell, Core, etc.)<br />
* Macros (e.g. inserting imports/aligning/sorting imports, aligning up text, transposing/switching/moving things around)<br />
* Type information (e.g. type at point, info at point, type of expression)<br />
* Intellisense/completion (e.g. jump-to-definition, who-calls, calls-who, search by type, completion, etc.)<br />
* Project management (e.g. understanding of Cabal, configuration/building/installing, package sandboxing)<br />
* Interactive REPL (e.g. GHCi/Hugs interaction, expression evaluation and such)<br />
* Knowledge of Haskell in the GHCi/GHC side (e.g. understanding error types, the REPL, REPL objects, object inspection)<br />
* Indentation support (e.g. tab cycle, simple back-forward indentation, whole area indentation, structured editing, etc.)<br />
* Proper syntactic awareness of Haskell (e.g. with a proper parser and proper editor transpositions a la the structured editors of the 80s and Isabelle et al.)<br />
* Documentation support (e.g. ability to call up documentation of symbol or module, either in the editor, or in the browser)<br />
* Debugger support (e.g. stepping, breakpoints, etc.)<br />
* Refactoring support (e.g. symbol renaming, hlint, etc.)<br />
* Templates (e.g. snippets, zencoding type stuff, filling in all the cases of a case, etc.)<br />
<br />
== Software ==<br />
<br />
=== [http://www.haskell.org/visualhaskell Visual Haskell] ===<br />
:Visual Haskell is a complete development environment for Haskell software, based on the [http://www.microsoft.com/visualstudio/en-us Microsoft Visual Studio] platform. Visual Haskell integrates with the Visual Studio editor to provide interactive features to aid Haskell development, and it enables the construction of projects consisting of multiple Haskell modules, using the Cabal building/packaging infrastructure.<br />
<br />
=== [http://eclipsefp.github.com/ EclipseFP plugin for Eclipse IDE] ===<br />
:Eclipse is an open, extensible IDE platform for "everything and nothing in particular". It is implemented in Java and runs on several platforms. The Java IDE built on top of it has already become very popular among Java developers. The Haskell tools extend it to support editing (syntax coloring, code assist), compiling, and running Haskell programs from within the IDE. In more detail, it features:<br />
:* Syntax highlighting and errors/warning highlighting<br />
:* A module browser showing all installed packages, their modules and the contents of the modules (functions, types, etc.)<br />
:* Integration with [http://www.haskell.org/hoogle/ Hoogle]: select an identifier in your code, press F4 and see the results in hoogle<br />
:* Code navigation: from within a Haskell source file, jump to the file where a symbol is declared, or to everywhere a symbol is used (type-sensitive search, not just a text search)<br />
:* Outline view: quickly jump to definitions in your file<br />
:* Quick fixes on common errors and import management<br />
:* A cabal file editor and integration with Cabal (uses cabal configure, cabal build under the covers), and a graphical view of installed packages<br />
:* Integration with GHCi: launch GHCi inside Eclipse on any module<br />
:* Integration with the GHCi debugger: performs the GHCi debugging commands for you from the standard Eclipse debugging interface<br />
:* Integration with [http://community.haskell.org/~ndm/hlint/ HLint]: gives you HLint warning on building and allows you to quick fix them<br />
:* Integration with [https://github.com/jaspervdj/stylish-haskell Stylish-Haskell]: format your code with stylish-haskell<br />
:* Test support: shows results of test-framework based test suite in a graphical format. HTF support to come soon.<br />
<br />
=== [http://colorer.sourceforge.net/eclipsecolorer/index.html Colorer plugin for Eclipse IDE] ===<br />
:Rudimentary syntax highlighting in Eclipse can be achieved using the Colorer plugin. This is more lightweight than the EclipseFP plugin, which has much more functionality but can be messy to install and has sometimes been a bit shaky.<br />
<br />
:Eclipse Colorer is a plugin that enables syntax highlighting for a wide range of languages. It uses its own XML-based language for describing syntactic regions of languages. It does not include support for Haskell by default, but this can be added using the syntax description files attached below.<br />
<br />
:<b>Installation instructions</b><br />
:# Install the Colorer from the update site <code>http://colorer.sf.net/eclipsecolorer/</code> (for more detailed instructions see the project page).<br />
:# Download the Haskell syntax description files in [http://www.haskell.org/wikiupload/1/16/Haskell_Eclipse_Colorer.tar.gz Haskell_Eclipse_Colorer.tar.gz].<br />
:# Extract its contents (haskell.hrc and proto.hrc) into the following directory (overwriting proto.hrc): <code>eclipse_installation_dir/plugins/net.sf.colorer_0.9.9/colorer/hrc</code> (sometimes the wiki seems to create a nested tar file, so you might have to unpack twice).<br />
:# Finished. A restart of Eclipse might be required. .hs files should open with syntax highlighting.<br />
<br />
:<b>Troubleshooting</b><br />
:If .hs files open with another kind of syntax highlighting check that they are associated with the Colorer Editor (Preferences -> General -> Editors -> File Associations). Or right click on them and choose Open With -> Other -> Colorer Editor.<br />
<br />
=== [[Leksah]] ===<br />
:Leksah is an IDE for Haskell written in Haskell. Leksah is intended as a practical tool to support the Haskell development process. It is in a pre-release phase with bugs and open ends, but is actively developed and moving quickly. Hopefully, Leksah will already be interesting, useful and fun. Leksah uses GTK+ as its GUI toolkit via the gtk2hs binding. It is platform-independent and should run on any platform where GTK+, gtk2hs and GHC can be installed. It has been tested on Windows and Linux. It only supports GHC.<br />
<br />
"I found Leksah less than satisfactory on OS X Lion. I could not figure out how to reference Test.Unit in a very simple program after several hours of trying to find the right settings in Leksah, and Leksah crashes a lot. I was able to build the same program in Eclipse within a few minutes of installing Eclipse and [http://eclipsefp.github.com/ Haskell support for Eclipse]." -- Doug Ransom<br />
<br />
- is the above still relevant? I did have to read the manual, the section concerning setting up a project (including dependencies), as leksah requires you to, but did not notice any issues, described above. Stability does not seem to be an issue either. -- Vladimir Lopatin.<br />
<br />
=== [http://www.haskell.org/haskellwiki/HIDE hIDE] ===<br />
:hIDE is a GUI-based Haskell IDE written using gtk+hs. It does not include an editor but instead interfaces with NEdit, vim or GNU emacs.<br />
<br />
=== [http://www.haskell.org/haskellwiki/HIDE hIDE-2] ===<br />
:Through the dark ages many a programmer has longed for the ultimate tool. In response to this most unnerving craving, of which we ourselves have had maybe more than our fair share, the dynamic trio of #Haskellaniacs (dons, dcoutts and Lemmih) hereby announce, to the relief of the community, that a fetus has been conceived: ''hIDE - the Haskell Integrated Development Environment''. So far the unborn integrates source code recognition and a chameleon editor, presenting these in a snappy gtk2 environment. Although no seer has yet predicted the date of birth of our hIDEous creature, we hope that the mere knowledge of its existence will spread peace of mind throughout the community as oil on troubled waters. See also: [[HIDE/Screenshots of HIDE]] and [[HIDE]]<br />
<br />
=== [http://web.archive.org/web/20060213161530/http://www.students.cs.uu.nl/people/rjchaaft/JCreator/ JCreator with Haskell support] ===<br />
: <b>N.B. The link above is to the Wayback Machine (Web Archive); it seems that JCreator is no longer supported.</b><br />
:JCreator is a highly customizable Java IDE for Windows. Features include extensive project support, fully customizable toolbars (including the images of user tools) and menus, increase/decrease indent for a selected block of text (tab/shift+tab respectively). The Haskell support module adds syntax highlighting for Haskell files and WinHugs, hugs, a static checker (if you double click on the error message, JCreator will jump to the right file and line and highlight it yellow) and the Haskell 98 Report as tools. Platforms: Win95, Win98, WinNT and Win2000 (only Win95 not tested yet). Size: 6MB. JCreator is a trademark of Xinox Software; Copyright &copy; 2000 Xinox Software. The Haskell support module is made by Rijk-Jan van Haaften.<br />
<br />
=== [http://kdevelop.org/ KDevelop] ===<br />
:This IDE supports many languages. For Haskell it currently supports project management, syntax highlighting, building (with GHC) & executing within the IDE.<br />
<br />
=== [[haste]] - Haskell TurboEdit ===<br />
:haste - Haskell TurboEdit - was an IDE for the functional programming language Haskell, written in Haskell.<br />
<br />
=== [http://www.cs.kent.ac.uk/projects/vital/ Vital] ===<br />
:Vital is a visual programming environment. It is particularly intended for supporting the open-ended, incremental style of development often preferred by end users (engineers, scientists, analysts, etc.).<br />
<br />
=== [http://www.cs.kent.ac.uk/projects/pivotal/ Pivotal] ===<br />
:Pivotal 0.025 is an early prototype of a Vital-like environment for Haskell. Unlike Vital, however, Pivotal is implemented entirely in Haskell. The implementation is based on the use of the hs-plugins library to allow dynamic compilation and evaluation of Haskell expressions together with the gtk2hs library for implementing the GUI.<br />
<br />
=== [http://www.vim.org Vim] ===<br />
<br />
This may or may not be up to date. A Vim user should update it.<br />
<br />
:* [http://projects.haskell.org/haskellmode-vim/ Haskell mode for Vim by Claus Reinke] - These plugins provide Vim integration with GHC and Haddock.<br />
:* [https://github.com/scrooloose/syntastic Syntastic] -- An extremely useful Vim plugin which will interact with ghc_mod (when editing a Haskell file) every time the source file is saved to check for syntax and type errors.<br />
:* [http://www.vim.org/scripts/script.php?script_id=2356 SHIM by Lars Kotthoff] -- Superior Haskell Interaction Mode (SHIM) plugin for Vim providing full GHCi integration (requires Vim compiled with Ruby support).<br />
:* [http://www.vim.org/scripts/script.php?script_id=3200 Haskell Conceal] -- shows Unicode symbols for common Haskell operators such as ++ and other lexical notation in Vim window (source file itself remains unchanged).<br />
:* [http://urchin.earth.li/~ian/vim/ by Ian Lynagh]: distinguishes different literal Haskell styles (Vim 7.0 includes a syntax file which supersedes these plugins).<br />
:* There's a [[Literate programming/Vim|copy of lhaskell.vim]] on the Wiki.<br />
:* [https://github.com/MarcWeber/vim-addon-haskell by Marc Weber] -- Vim script-based function/module completion, cabal support, tagging by one command, context completion ( w<tab> -> where ), module outline, etc<br />
:* [http://www.vim.org/scripts/script.php?script_id=1968 Vim indenting mode for Haskell]<br />
:* [https://github.com/ujihisa/neco-ghc neco-ghc] pragma, module, function completion.<br />
:* [https://github.com/eagletmt/ghcmod-vim Ghcmod-vim]<br />
:* [https://github.com/bitc/vim-hdevtools Hdevtools] - gives type information, quicker reloading and more.<br />
<br />
=== [http://www.gnu.org/s/emacs/ Emacs] ===<br />
<br />
See [[Emacs]].<br />
<br />
=== Other IDEs ===<br />
<br />
The list below is incomplete. Please add to it with whatever you think of. This list should be expanded into sections, as above, with more details, with links to the actual documentation of the described features.<br />
<br />
* Vim — '''PROS:''' Free. Works on Windows. Works in terminal. Decent alignment support. Tag-based completion and jumps. Very good syntax highlighting, flymake (via Syntastic), Cabal integration, Hoogle. Documentation for symbol at point. '''CONS:''' Arcane, difficult for new users. Some complain of bad indentation support.<br />
* [http://www.haskell.org/haskellwiki/Haskell_mode_for_Emacs Emacs] — '''PROS:''' Free. Works on Windows. Works in terminal. Decent alignment, indentation, syntax highlighting. Limited type information (type and info of name at point). Cabal/GHC/GHCi awareness and Haskell-aware REPL. Completion and jump-to-definition (via ETAGS). Documentation for symbol at point. Hoogle. Flymake (error checking on the fly). '''CONS:''' Arcane, difficult for new users.<br />
* Sublime — '''PROS:''' Works on Windows. '''CONS:''' Poor alignment support (though [http://www.reddit.com/r/haskell/comments/ts8fi/haskell_ides_emacs_vim_and_sublime_oh_my_opinions/c4pair1 there are packages] to do indentation a little better). Proprietary.<br />
* [[Yi]] — '''PROS:''' Written in Haskell. Works in terminal. '''CONS:''' Very immature, lacking features. Problems building generally, especially on Windows.<br />
* [http://www.haskell.org/haskellwiki/Leksah Leksah] — '''PROS:''' Syntax highlighting. Understands Cabal, Module browser, dependency knowledge, documentation display inside the IDE, jump-to-definition, flymake (error checking on the fly), limited evaluation of snippets, scratch buffer. Autocompletion. Not an arcane interface a la Emacs/Vim. '''CONS:''' Doesn't have a decent REPL. Are there any other cons? — This should be moved to the section above.<br />
* [http://www.haskell.org/visualhaskell/ Visual Haskell]<br />
* [http://www.haskell.org/haskellwiki/Editors Other]<br />
* KDevelop — Decent project management.<br />
* [http://www.cs.kent.ac.uk/projects/heat/ HEAT:] An Interactive Development Environment for Learning & Teaching Haskell<br />
* [http://www.geany.org/ Geany] '''PROS:''' Free. Works on Windows. Syntax highlighting, REPL. '''CONS:''' After using it for a while, Geany freezes quite often.<br />
<br />
== See also ==<br />
<br />
* [http://fpcomplete.com/designing-the-haskell-ide/ Designing the Haskell IDE]. FP Complete is working on a Haskell IDE.<br />
* [http://blog.johantibell.com/2011/08/results-from-state-of-haskell-2011.html Results from the State of Haskell, 2011 Survey]. <br />
* [http://nickknowlson.com/blog/2011/09/12/haskell-survey-categorized-weaknesses/ Categorized Weaknesses from the State of Haskell 2011 Survey], which barely touched upon IDEs.<br />
* [[Editors]]<br />
* [[Applications and libraries/Program development#Editor support]]<br />
* [http://code.haskell.org/shim/ Shim]; the aim of the shim (Superior Haskell Interaction Mode) project is to provide better support for editing Haskell code in VIM and Emacs<br />
<br />
== Outdated ==<br />
<br />
* [http://web.archive.org/web/20110726153330/http://hoovy.org/HaskellXcodePlugin/ plugin for Xcode] (links to the web archive)<br />
* [[hIDE]], now apparently orphaned</div>Davorakhttps://wiki.haskell.org/index.php?title=Vim&diff=55257Vim2013-01-14T07:24:11Z<p>Davorak: added another plugin</p>
<hr />
<div>[[Category:Development tools]] <br />
This page is intended for Haskell Vim users.<br />
<br />
= Indentation =<br />
<br />
The following setup from merijn @ #haskell ensures you use spaces instead of tabs for indentation, giving generally sane behaviour:<br />
<br />
<pre><br />
" Tab specific option<br />
set tabstop=8 "A tab is 8 spaces<br />
set expandtab "Always uses spaces instead of tabs<br />
set softtabstop=4 "Insert 4 spaces when tab is pressed<br />
set shiftwidth=4 "An indent is 4 spaces<br />
set smarttab "Indent instead of tab at start of line<br />
set shiftround "Round spaces to nearest shiftwidth multiple<br />
set nojoinspaces "Use one space, not two, after punctuation on line joins<br />
</pre><br />
<br />
= Plugins =<br />
Put code in file <code>~/.vim/plugin/Haskell.vim</code>, or in multiple files in that directory. <br />
<br />
== Module Sections ==<br />
The following code prompts for a name and places a section with that name at the current position when the key sequence "--s" is typed:<br />
<pre><br />
let s:width = 80<br />
<br />
function! HaskellModuleSection(...)<br />
let name = 0 < a:0 ? a:1 : inputdialog("Section name: ")<br />
<br />
return repeat('-', s:width) . "\n"<br />
\ . "-- " . name . "\n"<br />
\ . "\n"<br />
<br />
endfunction<br />
<br />
nmap <silent> --s "=HaskellModuleSection()<CR>gp<br />
</pre><br />
Like so:<br />
<haskell><br />
<br />
--------------------------------------------------------------------------------<br />
-- my section<br />
<br />
</haskell><br />
<br />
<br />
== Module Headers ==<br />
The following code prompts for a module name, a note, and a description of the module, and places a module header comment at the top when the key sequence "--h" is typed:<br />
<pre><br />
let s:width = 80<br />
<br />
<br />
function! HaskellModuleHeader(...)<br />
let name = 0 < a:0 ? a:1 : inputdialog("Module: ")<br />
let note = 1 < a:0 ? a:2 : inputdialog("Note: ")<br />
let description = 2 < a:0 ? a:3 : inputdialog("Describe this module: ")<br />
<br />
return repeat('-', s:width) . "\n" <br />
\ . "-- | \n" <br />
\ . "-- Module : " . name . "\n"<br />
\ . "-- Note : " . note . "\n"<br />
\ . "-- \n"<br />
\ . "-- " . description . "\n"<br />
\ . "-- \n"<br />
\ . repeat('-', s:width) . "\n"<br />
\ . "\n"<br />
<br />
endfunction<br />
<br />
<br />
nmap <silent> --h "=HaskellModuleHeader()<CR>:0put =<CR><br />
</pre><br />
Like so:<br />
<haskell><br />
--------------------------------------------------------------------------------<br />
-- | <br />
-- Module : MyModule<br />
-- Note : This is a preview<br />
-- <br />
-- This is an empty module, to show the headercomment produced. <br />
-- <br />
--------------------------------------------------------------------------------<br />
<br />
<br />
</haskell><br />
<br />
= List of Plugins =<br />
<br />
* [https://github.com/bitc/vim-hdevtools Hdevtools], taken from the GitHub page:<br />
<blockquote><br />
hdevtools is a command line program powered by the GHC API, that provides services for Haskell development. hdevtools works by running a persistent process in the background, so that your Haskell modules remain in memory, instead of having to reload everything each time you change only one file. This is just like :reload in GHCi - with hdevtools you get the speed of GHCi as well as tight integration with your editor.<br />
<br />
This is the Vim plugin that integrates Vim with hdevtools.<br />
</blockquote><br />
<br />
<br />
* [https://github.com/lukerandall/haskellmode-vim Haskellmode-vim], from the GitHub page:<br />
<blockquote><br />
The Haskell mode plugins provide advanced support for Haskell development<br />
using GHC/GHCi on Windows and Unix-like systems. The functionality is<br />
based on Haddock-generated library indices, on GHCi's interactive<br />
commands, or on simply activating (some of) Vim's built-in program editing<br />
support in Haskell-relevant fashion. These plugins live side-by-side with<br />
the pre-defined |syntax-highlighting| support for |haskell| sources, and<br />
any other Haskell-related plugins you might want to install (see<br />
|haskellmode-resources|).<br />
<br />
The Haskell mode plugins consist of three filetype plugins (haskell.vim,<br />
haskell_doc.vim, haskell_hpaste.vim), which by Vim's |filetype| detection<br />
mechanism will be auto-loaded whenever files with the extension '.hs' are<br />
opened, and one compiler plugin (ghc.vim) which you will need to load from<br />
your vimrc file (see |haskellmode-settings|).<br />
</blockquote><br />
<br />
* [https://github.com/eagletmt/ghcmod-vim Ghcmod-vim], from the GitHub page:<br />
<blockquote><br />
Displaying the type of sub-expressions (ghc-mod type)<br />
Displaying error/warning messages and their locations (ghc-mod check and ghc-mod lint)<br />
Displaying the expansion of splices (ghc-mod expand)<br />
Completions are supported by another plugin. See neco-ghc .<br />
</blockquote><br />
<br />
* [https://github.com/scrooloose/syntastic Syntastic] supports Haskell and several other languages. From the GitHub page:<br />
<blockquote><br />
Syntastic is a syntax checking plugin that runs files through external syntax checkers and displays any resulting errors to the user. This can be done on demand, or automatically as files are saved. If syntax errors are detected, the user is notified and is happy because they didn't have to compile their code or execute their script to find them.<br />
<br />
At the time of this writing, syntax checking plugins exist for applescript, c, coffee, cpp, css, cucumber, cuda, docbk, erlang, eruby, fortran, gentoo_metadata, go, haml, haskell, html, javascript, json, less, lua, matlab, perl, php, puppet, python, rst, ruby, sass/scss, sh, tcl, tex, vala, xhtml, xml, xslt, yaml, zpt<br />
</blockquote><br />
<br />
* [https://github.com/ujihisa/neco-ghc Neco-ghc], powered by ghcmod-vim, for completion of pragmas, modules, functions and more.</div>Davorakhttps://wiki.haskell.org/index.php?title=Vim&diff=55256Vim2013-01-14T07:21:10Z<p>Davorak: Added a list of vim plugins with description form project page.</p>
<hr />
<div>[[Category:Development tools]] <br />
This page is intended for Haskell Vim users.<br />
<br />
= Indentation =<br />
<br />
The following setup from merijn @ #haskell ensures you use spaces instead of tabs for indentation, giving generally sane behaviour:<br />
<br />
<pre><br />
" Tab specific option<br />
set tabstop=8 "A tab is 8 spaces<br />
set expandtab "Always uses spaces instead of tabs<br />
set softtabstop=4 "Insert 4 spaces when tab is pressed<br />
set shiftwidth=4 "An indent is 4 spaces<br />
set smarttab "Indent instead of tab at start of line<br />
set shiftround "Round spaces to nearest shiftwidth multiple<br />
set nojoinspaces "Use one space, not two, after punctuation on line joins<br />
</pre><br />
<br />
= Plugins =<br />
Put code in file <code>~/.vim/plugin/Haskell.vim</code>, or in multiple files in that directory. <br />
<br />
== Module Sections ==<br />
The following code prompts for a name and places a section with that name at the current position when the key sequence "--s" is typed:<br />
<pre><br />
let s:width = 80<br />
<br />
function! HaskellModuleSection(...)<br />
let name = 0 < a:0 ? a:1 : inputdialog("Section name: ")<br />
<br />
return repeat('-', s:width) . "\n"<br />
\ . "-- " . name . "\n"<br />
\ . "\n"<br />
<br />
endfunction<br />
<br />
nmap <silent> --s "=HaskellModuleSection()<CR>gp<br />
</pre><br />
Like so:<br />
<haskell><br />
<br />
--------------------------------------------------------------------------------<br />
-- my section<br />
<br />
</haskell><br />
<br />
<br />
== Module Headers ==<br />
The following code prompts for a module name, a note, and a description of the module, and places a module header comment at the top when the key sequence "--h" is typed:<br />
<pre><br />
let s:width = 80<br />
<br />
<br />
function! HaskellModuleHeader(...)<br />
let name = 0 < a:0 ? a:1 : inputdialog("Module: ")<br />
let note = 1 < a:0 ? a:2 : inputdialog("Note: ")<br />
let description = 2 < a:0 ? a:3 : inputdialog("Describe this module: ")<br />
<br />
return repeat('-', s:width) . "\n" <br />
\ . "-- | \n" <br />
\ . "-- Module : " . name . "\n"<br />
\ . "-- Note : " . note . "\n"<br />
\ . "-- \n"<br />
\ . "-- " . description . "\n"<br />
\ . "-- \n"<br />
\ . repeat('-', s:width) . "\n"<br />
\ . "\n"<br />
<br />
endfunction<br />
<br />
<br />
nmap <silent> --h "=HaskellModuleHeader()<CR>:0put =<CR><br />
</pre><br />
Like so:<br />
<haskell><br />
--------------------------------------------------------------------------------<br />
-- | <br />
-- Module : MyModule<br />
-- Note : This is a preview<br />
-- <br />
-- This is an empty module, to show the headercomment produced. <br />
-- <br />
--------------------------------------------------------------------------------<br />
<br />
<br />
</haskell><br />
<br />
= List of Plugins =<br />
<br />
* [https://github.com/bitc/vim-hdevtools Hdevtools], taken from the GitHub page:<br />
<blockquote><br />
hdevtools is a command line program powered by the GHC API, that provides services for Haskell development. hdevtools works by running a persistent process in the background, so that your Haskell modules remain in memory, instead of having to reload everything each time you change only one file. This is just like :reload in GHCi - with hdevtools you get the speed of GHCi as well as tight integration with your editor.<br />
<br />
This is the Vim plugin that integrates Vim with hdevtools.<br />
</blockquote><br />
<br />
<br />
* [https://github.com/lukerandall/haskellmode-vim Haskellmode-vim], from the GitHub page:<br />
<blockquote><br />
The Haskell mode plugins provide advanced support for Haskell development<br />
using GHC/GHCi on Windows and Unix-like systems. The functionality is<br />
based on Haddock-generated library indices, on GHCi's interactive<br />
commands, or on simply activating (some of) Vim's built-in program editing<br />
support in Haskell-relevant fashion. These plugins live side-by-side with<br />
the pre-defined |syntax-highlighting| support for |haskell| sources, and<br />
any other Haskell-related plugins you might want to install (see<br />
|haskellmode-resources|).<br />
<br />
The Haskell mode plugins consist of three filetype plugins (haskell.vim,<br />
haskell_doc.vim, haskell_hpaste.vim), which by Vim's |filetype| detection<br />
mechanism will be auto-loaded whenever files with the extension '.hs' are<br />
opened, and one compiler plugin (ghc.vim) which you will need to load from<br />
your vimrc file (see |haskellmode-settings|).<br />
</blockquote><br />
<br />
* [https://github.com/eagletmt/ghcmod-vim Ghcmod-vim], from the GitHub page:<br />
<blockquote><br />
Displaying the type of sub-expressions (ghc-mod type)<br />
Displaying error/warning messages and their locations (ghc-mod check and ghc-mod lint)<br />
Displaying the expansion of splices (ghc-mod expand)<br />
Completions are supported by another plugin. See neco-ghc .<br />
</blockquote><br />
<br />
* [https://github.com/scrooloose/syntastic Syntastic] supports Haskell and several other languages. From the GitHub page:<br />
<blockquote><br />
Syntastic is a syntax checking plugin that runs files through external syntax checkers and displays any resulting errors to the user. This can be done on demand, or automatically as files are saved. If syntax errors are detected, the user is notified and is happy because they didn't have to compile their code or execute their script to find them.<br />
<br />
At the time of this writing, syntax checking plugins exist for applescript, c, coffee, cpp, css, cucumber, cuda, docbk, erlang, eruby, fortran, gentoo_metadata, go, haml, haskell, html, javascript, json, less, lua, matlab, perl, php, puppet, python, rst, ruby, sass/scss, sh, tcl, tex, vala, xhtml, xml, xslt, yaml, zpt<br />
</blockquote></div>Davorakhttps://wiki.haskell.org/index.php?title=Parameter_order&diff=55239Parameter order2013-01-11T20:10:09Z<p>Davorak: marked down -> wiki syntax</p>
<hr />
<div>The '''parameter order''' of Haskell functions is an important design decision when programming libraries.<br />
The parameter order shall<br />
* allow [[Pointfree|piping]],<br />
* be consistent across similar functions.<br />
<br />
== Motivation ==<br />
<br />
=== Application ===<br />
<br />
Parameters in Haskell are rather reversed compared to imperative or object oriented languages.<br />
In an object oriented language, the object to work on is the very first parameter.<br />
In a function call it is often written even before the function name, say <code>file</code> in <code>file.write("bla")</code>.<br />
Strictly speaking, in Haskell it is not possible to alter objects,<br />
but there are many functions which return a somehow altered input object.<br />
This object should be the last parameter because then you can compose a sequence of operations on this object<br />
using the function composition operator <hask>.</hask>.<br />
The code<br />
<haskell><br />
sum . map f . filter p . scanl (*) 1<br />
</haskell><br />
describes a function that applies three transformations to a list and sums the result.<br />
This can be written so easily because the list is always the last parameter.<br />
<br />
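As a concrete illustration of such a pipeline (the function name <hask>process</hask> and the stage choices are hypothetical, picked just for this sketch):<br />

```haskell
-- Because the list is the last parameter of scanl, filter and map,
-- the stages compose directly with (.), with no flips or lambdas needed.
process :: [Int] -> Int
process = sum . map (* 2) . filter even . scanl (*) 1

-- process [1, 2, 3] evaluates stage by stage:
--   scanl (*) 1 [1,2,3]  ==> [1, 1, 2, 6]
--   filter even          ==> [2, 6]
--   map (* 2)            ==> [4, 12]
--   sum                  ==> 16
```
<br />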
The order of the parameters except the last one is not so critical.<br />
However, you should keep in mind that transformations on functions are also perfectly fine in Haskell.<br />
That's why function operators, such as differentiation and integration from functional analysis,<br />
should take the argument of the derived/integrated function in the last position,<br />
and the function being transformed as the parameter before the last one.<br />
<haskell><br />
integrate :: a -> (a -> a) -> (a -> a)<br />
integrate f0 f x = ...<br />
<br />
differentiate :: a -> (a -> a) -> (a -> a)<br />
differentiate h f x = ...<br />
<br />
-- continuous extension, aka function limit<br />
continuous :: (a -> a) -> (a -> a)<br />
continuous f x = ...<br />
<br />
exampleTransform = differentiate h . continuous<br />
</haskell><br />
<br />
<br />
The third thing to consider is that it is easy to fix parameters that are at the beginning.<br />
E.g.<br />
<haskell><br />
sum = foldl (+) 0<br />
product = foldl (*) 1<br />
</haskell><br />
That's why we can consider the parameter order of <hask>foldl</hask> to be a good one.<br />
We also see in this example that it is easy to generate a function<br />
with the first parameters fixed, and that functions should be designed with this in mind.<br />
<br />
Consider two parameters <hask>sampleRate :: Double</hask> and <hask>signal :: [Double]</hask>,<br />
where the sample rate functionally depends on the signal, that is every signal has a unique sampling rate.<br />
You will make <hask>sampleRate</hask> the first parameter and <hask>signal</hask> the second parameter,<br />
because there are more signals with the same sampling rate, but only one sample rate per signal.<br />
This makes it more likely that you want to fix the sampling rate than to fix the signal parameter.<br />
(You might not want to organize the sampling rate and the signal in one record,<br />
because an operation like mixing processes multiple signals, but all with the same sampling rate.)<br />
<br />
<br />
=== Implementation ===<br />
<br />
The order of parameters also matters for the implementation of a function.<br />
If you do a case analysis on a parameter, that one should be the last function parameter.<br />
Function parameters that are handled the same way in all cases should come first.<br />
If you use <hask>case</hask> instead of [[pattern matching]] on function parameters,<br />
a carefully chosen parameter order can simplify the implementation,<br />
and this order should also be preferred.<br />
Say, you want to know which of the signatures<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg :: Maybe Int -> String -> String<br />
</haskell><br />
shall be used.<br />
The implementation might be<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg msg Nothing = msg ++ "\n"<br />
formatMsg msg (Just n) = msg ++ " " ++ show n ++ "\n"<br />
</haskell><br />
If you use <hask>case</hask> instead, you can factor out common parts of the implementation.<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg msg mn =<br />
msg ++<br />
(case mn of<br />
Nothing -> ""<br />
Just n -> " " ++ show n) ++<br />
"\n"<br />
</haskell><br />
You can even omit the parameter you apply the case analysis to.<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg msg =<br />
(msg ++) . (++ "\n") . maybe "" (\n -> " " ++ show n)<br />
</haskell><br />
<br />
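As a quick sanity check, the point-free variant behaves exactly like the pattern-matching one; a self-contained restatement:<br />

```haskell
-- Point-free formatMsg: prepend the message, append the newline,
-- and render the optional number in between.
formatMsg :: String -> Maybe Int -> String
formatMsg msg = (msg ++) . (++ "\n") . maybe "" (\n -> " " ++ show n)

-- formatMsg "items" (Just 3)  ==> "items 3\n"
-- formatMsg "done" Nothing    ==> "done\n"
```
<br />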
<br />
== Bad examples ==<br />
<br />
Sometimes library writers have infix usage of functions in mind.<br />
See for instance <hask>Data.Bits</hask> and [[syntactic sugar/Cons |Cons of syntactic sugar]].<br />
Unfortunately the order of arguments to infix operators, which seems to be natural for many programmers,<br />
is reversed with respect to the one we encourage above.<br />
Maybe this only indicates that the parameter order should be reversed altogether,<br />
meaning that the name of the called function comes after the arguments ([[Reverse Polish Notation]]).<br />
<br />
The operators <hask>(-)</hask>, <hask>(/)</hask>, <hask>(^)</hask>, <hask>(^^)</hask>, <hask>(**)</hask>, <hask>div</hask>, <hask>mod</hask><br />
(used as <hask> a `div` b</hask>, <hask> a `mod` b</hask>) are adaptations of the mathematical tradition.<br />
However, when using a [[Section of an infix operator|section]], in most cases the first argument is omitted.<br />
This strongly indicates that their parameter order is unnatural in the Haskell sense.<br />
However, for the subtraction there also exists <hask>subtract</hask>, which is better for partial application.<br />
<br />
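A minimal sketch of the difference (note that the section <hask>(- 1)</hask> would even parse as negative one, which is one reason <hask>subtract</hask> exists):<br />

```haskell
-- subtract fixes the second argument of (-): subtract 1 x == x - 1.
decremented :: [Int]
decremented = map (subtract 1) [1, 2, 3]   -- ==> [0, 1, 2]

-- Partially applying (-) fixes the first argument instead: (-) 1 x == 1 - x.
flipped :: [Int]
flipped = map ((-) 1) [1, 2, 3]            -- ==> [0, -1, -2]
```
<br />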
There are more cases where there is even no simple reason,<br />
why the parameter order was chosen in an unnatural way.<br />
* <hask>Data.Map.lookup :: (Monad m, Ord k) => k -> Map k a -> m a</hask><br />
* <hask>Data.Map.findWithDefault :: Ord k => a -> k -> Map k a -> a</hask><br />
* <hask>Data.Map.lookupIndex :: (Monad m, Ord k) => k -> Map k a -> m Int</hask><br />
* <hask>Data.Map.findIndex :: Ord k => k -> Map k a -> Int</hask><br />
Since objects of type <hask>Map</hask> represent mappings,<br />
it is natural to have some function which transforms a <hask>Map</hask> object to the represented function.<br />
All of the functions above do this in some way,<br />
where <hask>Data.Map.findWithDefault</hask> is certainly closest to the ideal Map->Function transformer.<br />
See the type <haskell>flip (Data.Map.findWithDefault deflt) :: Ord k => Map k a -> (k -> a)</haskell>.<br />
Unfortunately the parameters are ordered in a way that requires a flip for revealing this connection.<br />
Maybe the library designer imitated the signature of <hask>Data.List.lookup</hask> here.<br />
<br />
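To make the connection explicit, here is a small sketch; the helper name <hask>toFunction</hask> and the example map are made up for illustration:<br />

```haskell
import qualified Data.Map as Map

-- Turn a Map into the total function it represents; the flip is only
-- needed because of findWithDefault's unnatural parameter order.
toFunction :: Ord k => a -> Map.Map k a -> (k -> a)
toFunction deflt = flip (Map.findWithDefault deflt)

ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 25)]

-- toFunction 0 ages "alice"  ==> 30
-- toFunction 0 ages "carol"  ==> 0   (falls back to the default)
```
<br />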
<br />
== Context ==<br />
<br />
Say a set of your functions works within a certain context.<br />
You have a function which run these functions within that context.<br />
<haskell><br />
startSound :: (SoundServer -> IO a) -> IO a<br />
</haskell><br />
You wonder whether to make the <hask>SoundServer</hask> context the first or the last parameter of according sound functions.<br />
Since a context is something that does not vary very frequently, it should be the first parameter.<br />
<haskell><br />
play :: SoundServer -> Sound -> IO ()<br />
</haskell><br />
This way it is easy to play a sequence of sounds, say<br />
<haskell><br />
startSound (\server -> mapM_ (play server) [soundA, soundB, soundC])<br />
</haskell><br />
On the other hand the parameter order<br />
<haskell><br />
play' :: Sound -> SoundServer -> IO ()<br />
</haskell><br />
simplifies the calls to single sound functions:<br />
<haskell><br />
startSound (play' soundA)<br />
</haskell><br />
<br />
In this case we should actually make the context the last argument, but hide it in a [[Reader monad]].<br />
<haskell><br />
type SoundAction a = ReaderT SoundServer IO a<br />
<br />
playM :: Sound -> SoundAction ()<br />
playM = ReaderT . play'<br />
<br />
startSoundM :: SoundAction a -> IO a<br />
startSoundM = startSound . runReaderT<br />
</haskell><br />
<br />
This way, both of the above examples become equally simple.<br />
<haskell><br />
startSoundM (mapM_ playM [soundA, soundB, soundC])<br />
startSoundM (playM soundA)<br />
</haskell><br />
<br />
Note:<br />
Instead of <hask>f :: a -> b -> Reader r c</hask><br />
you could also use the signature <hask>f :: Reader r (a -> b -> c)</hask><br />
which gets us back to the parameter order proposed initially.<br />
Currently this prohibits reasonable commenting with Haddock, but this should be fixed in the future.<br />
I have to think more carefully about it.<br />
<br />
<br />
== The rule of thumb ==<br />
<br />
What do we learn from all these considerations?<br />
<br />
The more important a parameter is, and the more frequently it changes,<br />
the closer it should be to the end of the parameter list.<br />
If there is some recursion involved, probably the parameter, which you recurse on,<br />
is the one which should be at the last position.<br />
If parameter <hask>b</hask> functionally depends on parameter <hask>a</hask>,<br />
then <hask>b</hask> should be before <hask>a</hask>.<br />
<br />
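A small example of the rule (the helper <hask>count</hask> is hypothetical): the list, which is recursed on and changes most often, goes last, so the configuring argument can be fixed first.<br />

```haskell
-- The element to count is fixed first; the list comes last, so
-- count partially applies and composes naturally.
count :: Eq a => a -> [a] -> Int
count x = length . filter (== x)

-- count 'a' "banana"                  ==> 3
-- map (count 'a') ["banana", "kiwi"]  ==> [3, 0]
```
<br />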
<br />
== See also ==<br />
<br />
* Parameter order of [[multi-parameter type class]]es with [[functional dependencies]] in Haskell Cafe on [http://www.haskell.org/pipermail/haskell-cafe/2009-May/061298.html Fundep curiosity]<br />
<br />
* [http://stackoverflow.com/questions/5863128/ordering-of-parameters-to-make-use-of-currying Order of parameters to make use of currying] on Stack Overflow covers several strategies.<br />
<br />
* [http://www.reddit.com/r/haskell/comments/16diti/is_flip_a_code_smell/ Reddit discussion] about <hask>flip</hask> being a code smell, which discusses parameter ordering as well.<br />
<br />
[[Category:Style]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Parameter_order&diff=55238Parameter order2013-01-11T20:08:49Z<p>Davorak: Added additional resources.</p>
<hr />
<div>The '''parameter order''' of Haskell functions is an important design decision when programming libraries.<br />
The parameter order shall<br />
* allow [[Pointfree|piping]],<br />
* be consistent across similar functions.<br />
<br />
== Motivation ==<br />
<br />
=== Application ===<br />
<br />
Parameters in Haskell are rather reversed compared to imperative or object oriented languages.<br />
In an object oriented language, the object to work on is the very first parameter.<br />
In a function call it is often written even before the function name, say <code>file</code> in <code>file.write("bla")</code>.<br />
Strictly speaking, in Haskell it is not possible to alter objects,<br />
but there are many functions which return a somehow altered input object.<br />
This object should be the last parameter because then you can compose a sequence of operations on this object<br />
using the function composition operator <hask>.</hask>.<br />
The code<br />
<haskell><br />
sum . map f . filter p . scanl (*) 1<br />
</haskell><br />
describes a function that applies three transformations to a list and sums the result.<br />
This can be written so easily because the list is always the last parameter.<br />
<br />
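As a concrete illustration of such a pipeline (the function name <hask>process</hask> and the stage choices are hypothetical, picked just for this sketch):<br />

```haskell
-- Because the list is the last parameter of scanl, filter and map,
-- the stages compose directly with (.), with no flips or lambdas needed.
process :: [Int] -> Int
process = sum . map (* 2) . filter even . scanl (*) 1

-- process [1, 2, 3] evaluates stage by stage:
--   scanl (*) 1 [1,2,3]  ==> [1, 1, 2, 6]
--   filter even          ==> [2, 6]
--   map (* 2)            ==> [4, 12]
--   sum                  ==> 16
```
<br />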
The order of the parameters except the last one is not so critical.<br />
However, you should keep in mind that transformations on functions are also perfectly fine in Haskell.<br />
That's why function operators, such as differentiation and integration from functional analysis,<br />
should take the argument of the derived/integrated function in the last position,<br />
and the function being transformed as the parameter before the last one.<br />
<haskell><br />
integrate :: a -> (a -> a) -> (a -> a)<br />
integrate f0 f x = ...<br />
<br />
differentiate :: a -> (a -> a) -> (a -> a)<br />
differentiate h f x = ...<br />
<br />
-- continuous extension, aka function limit<br />
continuous :: (a -> a) -> (a -> a)<br />
continuous f x = ...<br />
<br />
exampleTransform = differentiate h . continuous<br />
</haskell><br />
<br />
<br />
The third thing to consider is that it is easy to fix parameters that are at the beginning.<br />
E.g.<br />
<haskell><br />
sum = foldl (+) 0<br />
product = foldl (*) 1<br />
</haskell><br />
That's why we can consider the parameter order of <hask>foldl</hask> to be a good one.<br />
We also see in this example that it is easy to generate a function<br />
with the first parameters fixed, and that functions should be designed with this in mind.<br />
<br />
Consider two parameters <hask>sampleRate :: Double</hask> and <hask>signal :: [Double]</hask>,<br />
where the sample rate functionally depends on the signal, that is every signal has a unique sampling rate.<br />
You will make <hask>sampleRate</hask> the first parameter and <hask>signal</hask> the second parameter,<br />
because there are more signals with the same sampling rate, but only one sample rate per signal.<br />
This makes it more likely that you want to fix the sampling rate than to fix the signal parameter.<br />
(You might not want to organize the sampling rate and the signal in one record,<br />
because an operation like mixing processes multiple signals, but all with the same sampling rate.)<br />
<br />
<br />
=== Implementation ===<br />
<br />
The order of parameters also matters for the implementation of a function.<br />
If you do a case analysis on a parameter, that one should be the last function parameter.<br />
Function parameters that are handled the same way in all cases should come first.<br />
If you use <hask>case</hask> instead of [[pattern matching]] on function parameters,<br />
a carefully chosen parameter order can simplify the implementation,<br />
and this order should also be preferred.<br />
Say, you want to know which of the signatures<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg :: Maybe Int -> String -> String<br />
</haskell><br />
shall be used.<br />
The implementation might be<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg msg Nothing = msg ++ "\n"<br />
formatMsg msg (Just n) = msg ++ " " ++ show n ++ "\n"<br />
</haskell><br />
If you use <hask>case</hask> instead, you can factor out common parts of the implementation.<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg msg mn =<br />
msg ++<br />
(case mn of<br />
Nothing -> ""<br />
Just n -> " " ++ show n) ++<br />
"\n"<br />
</haskell><br />
You can even omit the parameter you apply the case analysis to.<br />
<haskell><br />
formatMsg :: String -> Maybe Int -> String<br />
formatMsg msg =<br />
(msg ++) . (++ "\n") . maybe "" (\n -> " " ++ show n)<br />
</haskell><br />
<br />
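As a quick sanity check, the point-free variant behaves exactly like the pattern-matching one; a self-contained restatement:<br />

```haskell
-- Point-free formatMsg: prepend the message, append the newline,
-- and render the optional number in between.
formatMsg :: String -> Maybe Int -> String
formatMsg msg = (msg ++) . (++ "\n") . maybe "" (\n -> " " ++ show n)

-- formatMsg "items" (Just 3)  ==> "items 3\n"
-- formatMsg "done" Nothing    ==> "done\n"
```
<br />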
<br />
== Bad examples ==<br />
<br />
Sometimes library writers have infix usage of functions in mind.<br />
See for instance <hask>Data.Bits</hask> and [[syntactic sugar/Cons |Cons of syntactic sugar]].<br />
Unfortunately the order of arguments to infix operators, which seems to be natural for many programmers,<br />
is reversed with respect to the one we encourage above.<br />
Maybe this only indicates that the parameter order should be reversed altogether,<br />
meaning that the name of the called function comes after the arguments ([[Reverse Polish Notation]]).<br />
<br />
The operators <hask>(-)</hask>, <hask>(/)</hask>, <hask>(^)</hask>, <hask>(^^)</hask>, <hask>(**)</hask>, <hask>div</hask>, <hask>mod</hask><br />
(used as <hask> a `div` b</hask>, <hask> a `mod` b</hask>) are adaptations of the mathematical tradition.<br />
However, when using a [[Section of an infix operator|section]], in most cases the first argument is omitted.<br />
This strongly indicates that their parameter order is unnatural in the Haskell sense.<br />
However, for the subtraction there also exists <hask>subtract</hask>, which is better for partial application.<br />
<br />
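A minimal sketch of the difference (note that the section <hask>(- 1)</hask> would even parse as negative one, which is one reason <hask>subtract</hask> exists):<br />

```haskell
-- subtract fixes the second argument of (-): subtract 1 x == x - 1.
decremented :: [Int]
decremented = map (subtract 1) [1, 2, 3]   -- ==> [0, 1, 2]

-- Partially applying (-) fixes the first argument instead: (-) 1 x == 1 - x.
flipped :: [Int]
flipped = map ((-) 1) [1, 2, 3]            -- ==> [0, -1, -2]
```
<br />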
There are more cases where there is even no simple reason,<br />
why the parameter order was chosen in an unnatural way.<br />
* <hask>Data.Map.lookup :: (Monad m, Ord k) => k -> Map k a -> m a</hask><br />
* <hask>Data.Map.findWithDefault :: Ord k => a -> k -> Map k a -> a</hask><br />
* <hask>Data.Map.lookupIndex :: (Monad m, Ord k) => k -> Map k a -> m Int</hask><br />
* <hask>Data.Map.findIndex :: Ord k => k -> Map k a -> Int</hask><br />
Since objects of type <hask>Map</hask> represent mappings,<br />
it is natural to have some function which transforms a <hask>Map</hask> object to the represented function.<br />
All of the functions above do this in some way,<br />
where <hask>Data.Map.findWithDefault</hask> is certainly closest to the ideal Map->Function transformer.<br />
See the type <haskell>flip (Data.Map.findWithDefault deflt) :: Ord k => Map k a -> (k -> a)</haskell>.<br />
Unfortunately the parameters are ordered in a way that requires a flip for revealing this connection.<br />
Maybe the library designer imitated the signature of <hask>Data.List.lookup</hask> here.<br />
<br />
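To make the connection explicit, here is a small sketch; the helper name <hask>toFunction</hask> and the example map are made up for illustration:<br />

```haskell
import qualified Data.Map as Map

-- Turn a Map into the total function it represents; the flip is only
-- needed because of findWithDefault's unnatural parameter order.
toFunction :: Ord k => a -> Map.Map k a -> (k -> a)
toFunction deflt = flip (Map.findWithDefault deflt)

ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 25)]

-- toFunction 0 ages "alice"  ==> 30
-- toFunction 0 ages "carol"  ==> 0   (falls back to the default)
```
<br />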
<br />
== Context ==<br />
<br />
Say a set of your functions works within a certain context.<br />
You have a function which run these functions within that context.<br />
<haskell><br />
startSound :: (SoundServer -> IO a) -> IO a<br />
</haskell><br />
You wonder whether to make the <hask>SoundServer</hask> context the first or the last parameter of the corresponding sound functions.<br />
Since a context is something that does not vary very frequently, it should be the first parameter.<br />
<haskell><br />
play :: SoundServer -> Sound -> IO ()<br />
</haskell><br />
This way it is easy to play a sequence of sounds, say<br />
<haskell><br />
startSound (\server -> mapM_ (play server) [soundA, soundB, soundC])<br />
</haskell><br />
On the other hand the parameter order<br />
<haskell><br />
play' :: Sound -> SoundServer -> IO ()<br />
</haskell><br />
simplifies the calls to single sound functions:<br />
<haskell><br />
startSound (play' soundA)<br />
</haskell><br />
<br />
In this case we should actually make the context the last argument, but hide it in a [[Reader monad]].<br />
<haskell><br />
type SoundAction a = ReaderT SoundServer IO a<br />
<br />
playM :: Sound -> SoundAction ()<br />
playM = ReaderT . play'<br />
<br />
startSoundM :: SoundAction a -> IO a<br />
startSoundM = startSound . runReaderT<br />
</haskell><br />
<br />
This way, both of the above examples become equally simple.<br />
<haskell><br />
startSoundM (mapM_ playM [soundA, soundB, soundC])<br />
startSoundM (playM soundA)<br />
</haskell><br />
<br />
Note:<br />
Instead of <hask>f :: a -> b -> Reader r c</hask><br />
you could also use the signature <hask>f :: Reader r (a -> b -> c)</hask><br />
which gets us back to the parameter order proposed initially.<br />
Currently this prohibits reasonable commenting with Haddock, but this should be fixed in the future.<br />
I have to think more carefully about it.<br />
<br />
<br />
== The rule of thumb ==<br />
<br />
What do we learn from all these considerations?<br />
<br />
The more important a parameter is, and the more frequently it changes,<br />
the further it should be moved towards the end of the parameter list.<br />
If there is some recursion involved, the parameter you recurse on<br />
is probably the one that should be in the last position.<br />
If parameter <hask>b</hask> functionally depends on parameter <hask>a</hask>,<br />
then <hask>b</hask> should be before <hask>a</hask>.<br />
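The rule can be sketched with two standard functions (the wrapper names are invented here): the list, being the recursed-on and frequently varying argument, comes last, so the partial applications are the useful ones:<br />

```haskell
-- The list comes last in both foldr and map, so partially applying
-- the "stable" arguments yields directly reusable list transformers.
total :: [Int] -> Int
total = foldr (+) 0

incrementAll :: [Int] -> [Int]
incrementAll = map (+ 1)
```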
<br />
<br />
== See also ==<br />
<br />
* Parameter order of [[multi-parameter type class]]es with [[functional dependencies]] in Haskell Cafe on [http://www.haskell.org/pipermail/haskell-cafe/2009-May/061298.html Fundep curiosity]<br />
<br />
* [http://stackoverflow.com/questions/5863128/ordering-of-parameters-to-make-use-of-currying Order of parameters to make use of currying] on Stack Overflow covers several strategies.<br />
<br />
* A [http://www.reddit.com/r/haskell/comments/16diti/is_flip_a_code_smell/ Reddit discussion] about whether <hask>flip</hask> is a code smell also covers parameter ordering.<br />
<br />
[[Category:Style]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Physical_units&diff=55233Physical units2013-01-10T01:45:14Z<p>Davorak: </p>
<hr />
<div>How would one go about modeling units (seconds, meters, meters per second, etc) in Haskell? I'm particularly interested in getting the typechecker to verify proper usage, and do not want to restrict it to any particular numeric representation (i.e. both integral seconds and fractional seconds). If this can in fact be done, it could also be used to model coordinate system axes in, say, Geometric Algebra.<br />
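A minimal sketch of one phantom-type encoding (all names here are invented for illustration and are not taken from the libraries mentioned below). The unit appears only in the type, so mixing units is a compile-time error, and the numeric representation stays a type parameter:<br />

```haskell
-- Phantom-type units: 'unit' occurs in the type but not the value.
newtype Quantity unit a = Quantity a deriving (Show, Eq)

data Second
data Meter

seconds :: a -> Quantity Second a
seconds = Quantity

meters :: a -> Quantity Meter a
meters = Quantity

-- Addition is only defined for matching units;
-- addQ (seconds 1) (meters 2) is rejected by the type checker.
addQ :: Num a => Quantity u a -> Quantity u a -> Quantity u a
addQ (Quantity x) (Quantity y) = Quantity (x + y)
```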
<br />
* It can. Look at [[Dimensionalized numbers]] for a toy implementation. I've thought a bit about the Geometric Algebra case, but I can't see a good way of handling it without forcing a choice of basis. I'm also not sure how it would work -- the whole point of GA is to incorporate areas, lengths, volumes, etc. into one number type. I suppose we could use this type of technique to segregate GA spaces with different dimensions or even metrics. --AaronDenney<br />
<br />
* A (non-toy) implementation of statically checked physical dimensions with a complete set of dimensions (but limited number of predefined units) is [http://code.google.com/p/dimensional/ Dimensional]. I apologize for coming up with the same name as Aaron and am on the lookout for a new one. I cannot comment on the applicability to GA. --[[User:Bjorn|Bjorn]]<br />
<br />
* NumericPrelude also contains an implementation of values equipped with physical units. However, all unit checking is done dynamically; that is, expressions like <hask>1 * meter < 2 * second</hask> are accepted by the Haskell compiler but lead to a runtime error. This design allows the programmer to process values from the IO world with units that are unknown at compile time, but it prevents catching unit errors at compile time. Units are handled as integer vectors of exponents of base units. Computation with these vectors is rather simple; however, formatting them is difficult. Formatting and parsing values with units is supported. A small library of SI units is provided.<br />
: http://code.haskell.org/numeric-prelude/src/Number/Physical.hs<br />
: http://code.haskell.org/numeric-prelude/src/Number/SI.hs<br />
<br />
:Get with <code>darcs get http://code.haskell.org/numeric-prelude/</code><br />
<br />
* poor man's type level [http://code.haskell.org/numeric-prelude/src/Number/DimensionTerm.hs physical dimensions] in [[Numeric Prelude]]<br />
<br />
* [[CalDims]]<br />
<br />
* {{HackagePackage|id=unittyped}}<br />
<br />
* {{HackagePackage|id=Measure}} - Encompasses a few units; currently fails to build on GHC >= 7.0.<br />
<br />
* {{HackagePackage|id=time-units}} - for time units only.<br />
<br />
<br />
[[Category:Mathematics]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Monomorphism_restriction&diff=55216Monomorphism restriction2013-01-04T04:24:21Z<p>Davorak: </p>
<hr />
<div>The monomorphism restriction is probably the most annoying and controversial feature of Haskell's type system. It can be turned off with the NoMonomorphismRestriction language pragma. All seem to agree that it is evil, but whether or not it is considered a necessary evil depends on who you ask.<br />
<br />
The definition of the restriction is fairly technical, but to a first approximation it means that you often cannot overload a function unless you provide an explicit type signature. In summary:<br />
<br />
<haskell><br />
-- This is allowed<br />
f1 x = show x<br />
<br />
-- This is not allowed<br />
f2 = \x -> show x<br />
<br />
-- ...but this is allowed<br />
f3 :: (Show a) => a -> String<br />
f3 = \x -> show x<br />
<br />
-- This is not allowed<br />
f4 = show<br />
<br />
-- ...but this is allowed<br />
f5 :: (Show a) => a -> String<br />
f5 = show<br />
</haskell><br />
<br />
Arguably, these should all be equivalent, but thanks to the monomorphism restriction, they are not.<br />
<br />
The difference between the first and second version is that the first version binds x via a "simple pattern binding" (see section 4.4.3.2 of the Haskell 98 report), and is therefore unrestricted, but the second version does not. The reason why one is allowed and the other is not is that it's considered clear that sharing f1 will not share any computation, and less clear that sharing f2 will have the same effect. If this seems arbitrary, that's because it is. It is difficult to design an objective rule which disallows subjective unexpected behaviour. Some people are going to fall foul of the rule even though they're doing quite reasonable things.<br />
<br />
So why is the restriction imposed? The reasoning behind it is fairly subtle, and is fully explained in the [http://haskell.org/onlinereport/ Haskell 98 report]. Basically, it solves one practical problem (without the restriction, there would be some ambiguous types) and one semantic problem (without the restriction, there would be some repeated evaluation where a programmer might expect the evaluation to be shared). Those who are for the restriction argue that these cases should be dealt with correctly. Those who are against the restriction argue that these cases are so rare that it's not worth sacrificing the type-independence of eta reduction.<br />
<br />
:An example, from [http://research.microsoft.com/~simonpj/papers/history-of-haskell/index.htm A History of Haskell]: Consider the <code>genericLength</code> function, from <code>Data.List</code><br />
<br />
:<haskell><br />
genericLength :: Num a => [b] -> a<br />
</haskell><br />
<br />
:And consider the function:<br />
<br />
<haskell><br />
f xs = (len,len)<br />
where<br />
len = genericLength xs<br />
</haskell><br />
<br />
:<code>len</code> has type <code>Num a => a</code> and, without the monomorphism restriction, it could be computed ''twice''. --[[User:ARG|ARG]]<br />
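To make the two readings concrete, here is a small sketch with the signatures written out explicitly (the function names are invented here):<br />

```haskell
import Data.List (genericLength)

-- The most general type: the two components may be *different* Num
-- types, so the length may genuinely have to be computed twice.
fGeneral :: (Num a, Num b) => [c] -> (a, b)
fGeneral xs = (genericLength xs, genericLength xs)

-- Forcing both components to one type lets 'len' be a single
-- monomorphic value, computed once and shared.
fShared :: Num a => [c] -> (a, a)
fShared xs = (len, len)
  where
    len = genericLength xs
```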
<br />
----<br />
<br />
It is not clear to me how this whole thing about being computed once or twice works. Isn't type checking/inference something that happens at compile-time and shouldn't have any effect on what happens at run-time, as long as the typecheck passes? [[User:Dainichi|Dainichi]]<br />
<br />
The trouble is that typeclasses essentially introduce additional function parameters -- specifically, the dictionary of code implementing the instances in question. In the case of typeclass polymorphic pattern bindings, you end up turning something that looked like a pattern binding -- a constant that would only ever be evaluated once, into what is really a function binding, something which will not be memoised. [[User:CaleGibbard|CaleGibbard]] 23:46, 1 February 2008 (UTC)<br />
<br />
If no signature is given for <code>f</code>, the compiler doesn't know that the two elements of the returned pair are of the same type. Its return value will be:<br />
<br />
<haskell><br />
f::(Num a, Num b) => [x] -> (a, b)<br />
</haskell><br />
<br />
This means that <i>while compiling f</i> the compiler is unable to memoise len - clearly if a /= b then different code is executed to compute the first and second appearance of len in the pair. It's possible the compiler could do something more clever <i>when f is actually applied</i> if a == b, but I'm supposing this isn't a straight-forward thing to implement in the compilers. [[User:Dozer|Dozer]] 23:54, 4 February 2008 (GMT)<br />
<br />
Thank you, the nature of the ''problem'' is getting clearer now, but I'm still confused about how the restriction of top level definitions is supposed to ''solve'' this problem. To me, the given example explains why f's type is inferred to be <haskell>Num a => [x] -> (a, a)</haskell>, not <haskell>(Num a, Num b) => [x] -> (a, b)</haskell>, but not why this means that you cannot define top-level overloading outside pattern bindings. Is there an example which makes this clearer?<br />
<br />
Maybe I need to read up on the Hindley–Milner type system, but this seems related to the existence of functions (such as genericLength and read) that are polymorphic in their return type. Would MR need to exist without these functions? <br />
<br />
I'm a bit confused about functions like this, since I somehow feel they belong to more of a system with dependent types. <br />
<br />
--[[User:Dainichi|Dainichi]] 06:53 15 Aug 2011 (UTC)<br />
<br />
<br />
<br />
----<br />
<br />
Oversimplifying the debate somewhat: Those in favour tend to be those who have written Haskell [[Implementations]] and those against tend to be those who have written complex combinator libraries (and hence have hit their collective heads against the restriction all too often). It often boils down to the fact that programmers want to avoid [http://catb.org/esr/jargon/html/L/legalese.html legalese], and language implementors want to avoid [http://catb.org/esr/jargon/html/C/cruft.html cruft].<br />
<br />
In almost all cases, you can get around the restriction by including explicit type declarations. Those who are for the restriction are usually quick to point out that including explicit type declarations is good programming practice anyway. In a few very rare cases, however, you may need to supply a type signature which is not valid Haskell. (Such type signatures require a type system extension such as [[Scoped type variables]].) Unless you're writing some weird combinator libraries, or are in the habit of not writing type declarations, you're unlikely to come across it. Even so, most Haskell [[Implementations]] provide a way to turn the restriction off.<br />
<br />
See also: [http://haskell.org/onlinereport/decls.html#sect4.5.5 Section 4.5.5, Haskell 98 report].<br />
<br />
-- [[Andrew Bromage]]<br />
<br />
Some question or suggestion: As I understand it, the problem arises from the situation that two different forms of assignment are described by the same notation. There are two forms of assignment, namely the inspection of data structures ("unpacking", "pattern binding") and the definition of functions ("function binding"). Unique examples are:<br />
<br />
<haskell><br />
let f x = y -- function definition<br />
let F x = y -- data structure decomposition<br />
</haskell><br />
<br />
In the first case we have the identifier f starting with lower case. This means this is a function binding. The second assignment starts with F, which must be a constructor. That's why this is a pattern binding. The monomorphism restriction applies only to the pattern binding. I think this was not defined in order to please compiler writers, but has shown to be useful in practice, or am I wrong? But the different handling of these binding types leads to a problem since both types have a common case.<br />
<br />
<haskell><br />
let x = y -- function or pattern binding?<br />
</haskell><br />
<br />
So, what speaks against differentiating the assignments notationally, say<br />
<br />
<haskell><br />
let f x = y -- function definition<br />
let F x <= y -- data structure decomposition<br />
</haskell><br />
<br />
and keep the monomorphism restriction as it is?<br />
<br />
-- [[Henning Thielemann]]<br />
<br />
The problem isn't just pattern bindings, it's that pattern bindings which are typeclass polymorphic are actually function bindings in disguise, since the usual implementation of typeclasses adds parameters to such definitions, to allow the definition to take the typeclass dictionaries involved. Thus, such pattern bindings have different properties with respect to sharing (they're generally less shared than you want). In especially bad cases, without the MR, it is possible to have programs which run exponentially slower without type signatures than when signatures are added. Just distinguishing pattern bindings with a new notation doesn't solve the problem, since they'll have to be converted into function bindings in that case anyway. If you intend to keep the MR, then you don't need to change anything. The issue with the MR is just the fact that it's annoying to have eta-reduction fail in the absence of explicit type signatures, and the fact that it makes otherwise perfectly valid programs fail to compile on speculation that there might be loss of sharing (when there usually isn't, or at least the impact isn't large enough to worry about).<br />
<br />
John Hughes recently advocated the removal of the MR on the Haskell Prime mailing list, and suggested replacing it with two forms of pattern binding: one for call-by-name (polymorphic, not shared), and one for call-by-need (monomorphic, guaranteed shared). This might be similar to what you're suggesting. If you look at it too closely, it seems like a good solution, but the overall impact on Haskell code seems too large to me, to resolve a distinction which it ought to be statically possible to determine.<br />
<br />
I'm of the opinion that it would be better to find a way to restore sharing lost through the typeclass transformation in some way, or else implement typeclasses in an altogether different way which doesn't run into this problem. Additional runtime machinery seems like a likely candidate for this -- the interactions with garbage collection are somewhat subtle, but I think it should be doable. It's also possible to restore the sharing via whole-program analysis, but advocates of separate compilation will probably complain, unless we were to find a mechanism to fix the problem from the object code (and potentially temporaries) at link time.<br />
<br />
:- [[Cale Gibbard]]<br />
<br />
--------------------<br />
I think it'd be useful to collect a set of examples of the Monomorphism Restriction biting people in an unexpected way. This may help to inform the debate over the MR by giving real-life examples. Add more examples here if (and only if) they constitute an unexpected MR-related incident in your life or someone else's. No invented examples! -- [[Simon Peyton Jones]]<br />
<br />
* GHC Trac bug [http://hackage.haskell.org/trac/ghc/ticket/1749 1749]<br />
* In trying to build an editor with undoable actions:<br />
<haskell><br />
class EditAction e a | e -> a where<br />
apply :: a -> e -> a<br />
<br />
data ListAction a = Append a | Remove<br />
<br />
instance EditAction (ListAction a) [a] where<br />
apply list (Append a) = a:list<br />
apply (x:xs) Remove = xs<br />
<br />
-- Apply all the EditActions to the input<br />
--edit :: EditAction e a => a -> [e] -> a -- monomorphism restriction - I have to put this in!<br />
edit = foldl apply<br />
</haskell><br />
<br />
----<br />
Back before forM was in the Control.Monad library, I once spent about 1/2 an hour trying to figure out why my action in the ST monad was having its '<hask>s</hask>' parameter squished to <hask>()</hask>. I tore the code apart for quite a while before discovering that it was that the MR was applying to my definition of <hask>forM</hask>:<br />
<br />
<haskell><br />
forM = flip mapM<br />
</haskell><br />
<br />
----<br />
I recently got tired of typing <hask>print "blah"</hask> in a ghci shell session and tried <hask>let p = print</hask>. Thanks to MR and Haskell defaulting, the type of <hask>p</hask> silently became <hask>() -> IO ()</hask>. No surprise that my new "short" version of print was only capable of printing void values -<br />
<br />
<haskell><br />
Prelude> p ()<br />
()<br />
Prelude> p "blah"<br />
<br />
<interactive>:1:2:<br />
    Couldn't match expected type `()' against inferred type `[Char]'<br />
    In the first argument of `p', namely `"blah"'<br />
    In the expression: p "blah"<br />
    In the definition of `it': it = p "blah"<br />
</haskell><br />
<br />
----<br />
<br />
<haskell><br />
import Graphics.UI.Gtk<br />
import Graphics.UI.Gtk.Glade<br />
<br />
-- xmlGetWidget' :: WidgetClass widget => (GObject -> widget) -> String -> IO widget<br />
xmlGetWidget' = xmlGetWidget undefined<br />
<br />
main :: IO ()<br />
main<br />
= do<br />
initGUI<br />
window <- xmlGetWidget' castToWindow "window1"<br />
button <- xmlGetWidget' castToButton "button1"<br />
widgetShowAll window<br />
mainGUI<br />
</haskell><br />
<br />
If I comment out <hask>main</hask>, I cannot compile this code because of the monomorphism restriction. With <hask>main</hask>, it'll infer the type:<br />
<br />
<haskell><br />
xmlGetWidget' :: (GObject -> Window) -> String -> IO Window<br />
</haskell><br />
<br />
And give me a type error in the button line. If I uncomment the type signature, it'll work.<br />
----<br />
<br />
I wasn't expecting the following to fail...<br />
<br />
<haskell><br />
import Text.Printf<br />
<br />
square :: (Num a) => a -> a <br />
square x = x * x <br />
dx = 0.0000001<br />
deriv1 :: (Fractional a) => (a -> a) -> (a -> a)<br />
deriv1 g = (\x -> ((g (x + 2) - (g x)) / dx )) <br />
main = printf "res==%g %g\n" (square 5.12::Double) ((deriv1 square) 2::Float)<br />
</haskell><br />
<br />
and for this to work.<br />
<br />
<haskell><br />
import Text.Printf<br />
<br />
square :: (Num a) => a -> a <br />
square x = x * x <br />
dx = 0.0000001<br />
deriv1 :: (Fractional a) => (a -> a) -> (a -> a)<br />
deriv1 g = (\x -> ((g (x + 2) - (g x)) / 0.0000001 )) <br />
main = printf "res==%g %g\n" (square 5.12::Double) ((deriv1 square) 2::Float)<br />
</haskell><br />
<br />
The fix was to add<br />
<br />
<haskell><br />
dx :: Fractional a => a<br />
</haskell><br />
<br />
--Harry<br />
<br />
----<br />
<br />
Along the same lines as Simon's question above, does anyone have any real examples of being bitten by the lack of MR? I know what it's for, but I can't really think of any realistic cases when it would be a problem. --pumpkin<br />
<br />
[[Category:Glossary]]<br />
[[Category:Language]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Monomorphism_restriction&diff=55215Monomorphism restriction2013-01-04T04:22:55Z<p>Davorak: </p>
<hr />
<div>The monomorphism restriction is probably the most annoying and controversial feature of Haskell's type system. It can be turned off with the NoMonomorphismRestriction [[pragma]]. All seem to agree that it is evil, but whether or not it is considered a necessary evil depends on who you ask.<br />
<br />
The definition of the restriction is fairly technical, but to a first approximation it means that you often cannot overload a function unless you provide an explicit type signature. In summary:<br />
<br />
<haskell><br />
-- This is allowed<br />
f1 x = show x<br />
<br />
-- This is not allowed<br />
f2 = \x -> show x<br />
<br />
-- ...but this is allowed<br />
f3 :: (Show a) => a -> String<br />
f3 = \x -> show x<br />
<br />
-- This is not allowed<br />
f4 = show<br />
<br />
-- ...but this is allowed<br />
f5 :: (Show a) => a -> String<br />
f5 = show<br />
</haskell><br />
<br />
Arguably, these should all be equivalent, but thanks to the monomorphism restriction, they are not.<br />
<br />
The difference between the first and second version is that the first version binds x via a "simple pattern binding" (see section 4.4.3.2 of the Haskell 98 report), and is therefore unrestricted, but the second version does not. The reason why one is allowed and the other is not is that it's considered clear that sharing f1 will not share any computation, and less clear that sharing f2 will have the same effect. If this seems arbitrary, that's because it is. It is difficult to design an objective rule which disallows subjective unexpected behaviour. Some people are going to fall foul of the rule even though they're doing quite reasonable things.<br />
<br />
So why is the restriction imposed? The reasoning behind it is fairly subtle, and is fully explained in the [http://haskell.org/onlinereport/ Haskell 98 report]. Basically, it solves one practical problem (without the restriction, there would be some ambiguous types) and one semantic problem (without the restriction, there would be some repeated evaluation where a programmer might expect the evaluation to be shared). Those who are for the restriction argue that these cases should be dealt with correctly. Those who are against the restriction argue that these cases are so rare that it's not worth sacrificing the type-independence of eta reduction.<br />
<br />
:An example, from [http://research.microsoft.com/~simonpj/papers/history-of-haskell/index.htm A History of Haskell]: Consider the <code>genericLength</code> function, from <code>Data.List</code><br />
<br />
:<haskell><br />
genericLength :: Num a => [b] -> a<br />
</haskell><br />
<br />
:And consider the function:<br />
<br />
<haskell><br />
f xs = (len,len)<br />
where<br />
len = genericLength xs<br />
</haskell><br />
<br />
:<code>len</code> has type <code>Num a => a</code> and, without the monomorphism restriction, it could be computed ''twice''. --[[User:ARG|ARG]]<br />
<br />
----<br />
<br />
It is not clear to me how this whole thing about being computed once or twice works. Isn't type checking/inference something that happens at compile-time and shouldn't have any effect on what happens at run-time, as long as the typecheck passes? [[User:Dainichi|Dainichi]]<br />
<br />
The trouble is that typeclasses essentially introduce additional function parameters -- specifically, the dictionary of code implementing the instances in question. In the case of typeclass polymorphic pattern bindings, you end up turning something that looked like a pattern binding -- a constant that would only ever be evaluated once, into what is really a function binding, something which will not be memoised. [[User:CaleGibbard|CaleGibbard]] 23:46, 1 February 2008 (UTC)<br />
<br />
If no signature is given for <code>f</code>, the compiler doesn't know that the two elements of the returned pair are of the same type. Its return value will be:<br />
<br />
<haskell><br />
f::(Num a, Num b) => [x] -> (a, b)<br />
</haskell><br />
<br />
This means that <i>while compiling f</i> the compiler is unable to memoise len - clearly if a /= b then different code is executed to compute the first and second appearance of len in the pair. It's possible the compiler could do something more clever <i>when f is actually applied</i> if a == b, but I'm supposing this isn't a straight-forward thing to implement in the compilers. [[User:Dozer|Dozer]] 23:54, 4 February 2008 (GMT)<br />
<br />
Thank you, the nature of the ''problem'' is getting clearer now, but I'm still confused about how the restriction of top level definitions is supposed to ''solve'' this problem. To me, the given example explains why f's type is inferred to be <haskell>Num a => [x] -> (a, a)</haskell>, not <haskell>(Num a, Num b) => [x] -> (a, b)</haskell>, but not why this means that you cannot define top-level overloading outside pattern bindings. Is there an example which makes this clearer?<br />
<br />
Maybe I need to read up on the Hindley–Milner type system, but this seems related to the existence of functions (such as genericLength and read) that are polymorphic in their return type. Would MR need to exist without these functions? <br />
<br />
I'm a bit confused about functions like this, since I somehow feel they belong to more of a system with dependent types. <br />
<br />
--[[User:Dainichi|Dainichi]] 06:53 15 Aug 2011 (UTC)<br />
<br />
<br />
<br />
----<br />
<br />
Oversimplifying the debate somewhat: Those in favour tend to be those who have written Haskell [[Implementations]] and those against tend to be those who have written complex combinator libraries (and hence have hit their collective heads against the restriction all too often). It often boils down to the fact that programmers want to avoid [http://catb.org/esr/jargon/html/L/legalese.html legalese], and language implementors want to avoid [http://catb.org/esr/jargon/html/C/cruft.html cruft].<br />
<br />
In almost all cases, you can get around the restriction by including explicit type declarations. Those who are for the restriction are usually quick to point out that including explicit type declarations is good programming practice anyway. In a few very rare cases, however, you may need to supply a type signature which is not valid Haskell. (Such type signatures require a type system extension such as [[Scoped type variables]].) Unless you're writing some weird combinator libraries, or are in the habit of not writing type declarations, you're unlikely to come across it. Even so, most Haskell [[Implementations]] provide a way to turn the restriction off.<br />
<br />
See also: [http://haskell.org/onlinereport/decls.html#sect4.5.5 Section 4.5.5, Haskell 98 report].<br />
<br />
-- [[Andrew Bromage]]<br />
<br />
Some question or suggestion: As I understand it, the problem arises from the situation that two different forms of assignment are described by the same notation. There are two forms of assignment, namely the inspection of data structures ("unpacking", "pattern binding") and the definition of functions ("function binding"). Unique examples are:<br />
<br />
<haskell><br />
let f x = y -- function definition<br />
let F x = y -- data structure decomposition<br />
</haskell><br />
<br />
In the first case we have the identifier f starting with lower case. This means this is a function binding. The second assignment starts with F, which must be a constructor. That's why this is a pattern binding. The monomorphism restriction applies only to the pattern binding. I think this was not defined in order to please compiler writers, but has shown to be useful in practice, or am I wrong? But the different handling of these binding types leads to a problem since both types have a common case.<br />
<br />
<haskell><br />
let x = y -- function or pattern binding?<br />
</haskell><br />
<br />
So, what speaks against differentiating the assignments notationally, say<br />
<br />
<haskell><br />
let f x = y -- function definition<br />
let F x <= y -- data structure decomposition<br />
</haskell><br />
<br />
and keep the monomorphism restriction as it is?<br />
<br />
-- [[Henning Thielemann]]<br />
<br />
The problem isn't just pattern bindings, it's that pattern bindings which are typeclass polymorphic are actually function bindings in disguise, since the usual implementation of typeclasses adds parameters to such definitions, to allow the definition to take the typeclass dictionaries involved. Thus, such pattern bindings have different properties with respect to sharing (they're generally less shared than you want). In especially bad cases, without the MR, it is possible to have programs which run exponentially slower without type signatures than when signatures are added. Just distinguishing pattern bindings with a new notation doesn't solve the problem, since they'll have to be converted into function bindings in that case anyway. If you intend to keep the MR, then you don't need to change anything. The issue with the MR is just the fact that it's annoying to have eta-reduction fail in the absence of explicit type signatures, and the fact that it makes otherwise perfectly valid programs fail to compile on speculation that there might be loss of sharing (when there usually isn't, or at least the impact isn't large enough to worry about).<br />
<br />
John Hughes recently advocated the removal of the MR on the Haskell Prime mailing list, and suggested replacing it with two forms of pattern binding: one for call-by-name (polymorphic, not shared), and one for call-by-need (monomorphic, guaranteed shared). This might be similar to what you're suggesting. If you look at it too closely, it seems like a good solution, but the overall impact on Haskell code seems too large to me, to resolve a distinction which it ought to be statically possible to determine.<br />
<br />
I'm of the opinion that it would be better to find a way to restore sharing lost through the typeclass transformation in some way, or else implement typeclasses in an altogether different way which doesn't run into this problem. Additional runtime machinery seems like a likely candidate for this -- the interactions with garbage collection are somewhat subtle, but I think it should be doable. It's also possible to restore the sharing via whole-program analysis, but advocates of separate compilation will probably complain, unless we were to find a mechanism to fix the problem from the object code (and potentially temporaries) at link time.<br />
<br />
:- [[Cale Gibbard]]<br />
<br />
--------------------<br />
I think it'd be useful to collect a set of examples of the Monomorphism Restriction biting people in an unexpected way. This may help to inform the debate over the MR by giving real-life examples. Add more examples here if (and only if) they constitute an unexpected MR-related incident in your life or someone else's. No invented examples! -- [[Simon Peyton Jones]]<br />
<br />
* GHC Trac bug [http://hackage.haskell.org/trac/ghc/ticket/1749 1749]<br />
* In trying to build an editor with undoable actions:<br />
<haskell><br />
class EditAction e a | e -> a where<br />
  apply :: a -> e -> a<br />
<br />
data ListAction a = Append a | Remove<br />
<br />
instance EditAction (ListAction a) [a] where<br />
  apply list (Append a) = a:list<br />
  apply (x:xs) Remove = xs<br />
<br />
-- Apply all the EditActions to the input<br />
--edit :: EditAction e a => a -> [e] -> a -- monomorphism restriction - I have to put this in!<br />
edit = foldl apply<br />
</haskell><br />
<br />
----<br />
Back before forM was in the Control.Monad library, I once spent about half an hour trying to figure out why my action in the ST monad was having its '<hask>s</hask>' parameter squished to <hask>()</hask>. I tore the code apart for quite a while before discovering that the MR applied to my definition of <hask>forM</hask>:<br />
<br />
<haskell><br />
forM = flip mapM<br />
</haskell><br />
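<br />
The fix here, as elsewhere on this page, is an explicit polymorphic signature; a sketch:<br />
<br />
<haskell><br />
forM :: Monad m => [a] -> (a -> m b) -> m [b]<br />
forM = flip mapM<br />
</haskell><br />
<br />
With the signature in place the <hask>s</hask> parameter of an ST action stays polymorphic.<br />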
<br />
----<br />
I recently got tired of typing <hask>print "blah"</hask> in a ghci shell session and tried <hask>let p = print</hask>. Thanks to the MR and Haskell defaulting, the type of <hask>p</hask> silently became <hask>() -> IO ()</hask>. No surprise that my new "short" version of print was only capable of printing void values:<br />
<br />
<haskell><br />
Prelude> p ()<br />
()<br />
Prelude> p "blah"<br />
<br />
<interactive>:1:2:<br />
Couldn't match expected type `()' against inferred type `[Char]'<br />
In the first argument of `p', namely `"blah"'<br />
In the expression: p "blah"<br />
In the definition of `it': it = p "blah"<br />
</haskell><br />
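<br />
Two standard workarounds, sketched here (not part of the original report): give <hask>p</hask> an explicit signature, or turn it into a function binding, to which the MR does not apply:<br />
<br />
<haskell><br />
p1 :: Show a => a -> IO ()<br />
p1 = print      -- explicit signature defeats the MR<br />
<br />
p2 x = print x  -- function binding: the MR does not apply<br />
</haskell><br />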
<br />
----<br />
<br />
<haskell><br />
import Graphics.UI.Gtk<br />
import Graphics.UI.Gtk.Glade<br />
<br />
-- xmlGetWidget' :: WidgetClass widget => (GObject -> widget) -> String -> IO widget<br />
xmlGetWidget' = xmlGetWidget undefined<br />
<br />
main :: IO ()<br />
main = do<br />
    initGUI<br />
    window <- xmlGetWidget' castToWindow "window1"<br />
    button <- xmlGetWidget' castToButton "button1"<br />
    widgetShowAll window<br />
    mainGUI<br />
</haskell><br />
<br />
If I comment out main, I cannot compile this code because of the monomorphism restriction. With main present, the compiler infers the type:<br />
<br />
<haskell><br />
xmlGetWidget' :: (GObject -> Window) -> String -> IO Window<br />
</haskell><br />
<br />
This gives me a type error on the button line. If I uncomment the type signature, it works.<br />
----<br />
<br />
I wasn't expecting the following to fail...<br />
<br />
<haskell><br />
import Text.Printf (printf)<br />
<br />
square :: (Num a) => a -> a<br />
square x = x * x<br />
dx = 0.0000001<br />
deriv1 :: (Fractional a) => (a -> a) -> (a -> a)<br />
deriv1 g = \x -> (g (x + dx) - g x) / dx<br />
main = printf "res==%g %g\n" (square 5.12 :: Double) (deriv1 square 2 :: Float)<br />
</haskell><br />
<br />
and for this to work.<br />
<br />
<haskell><br />
import Text.Printf (printf)<br />
<br />
square :: (Num a) => a -> a<br />
square x = x * x<br />
dx = 0.0000001<br />
deriv1 :: (Fractional a) => (a -> a) -> (a -> a)<br />
deriv1 g = \x -> (g (x + 0.0000001) - g x) / 0.0000001<br />
main = printf "res==%g %g\n" (square 5.12 :: Double) (deriv1 square 2 :: Float)<br />
</haskell><br />
<br />
The fix was to add<br />
<br />
<haskell><br />
dx :: Fractional a => a<br />
</haskell><br />
<br />
--Harry<br />
<br />
----<br />
<br />
Along the same lines as Simon's question above, does anyone have any real examples of being bitten by the lack of MR? I know what it's for, but I can't really think of any realistic cases when it would be a problem. --pumpkin<br />
<br />
[[Category:Glossary]]<br />
[[Category:Language]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Continuation&diff=55214Continuation2013-01-04T04:12:57Z<p>Davorak: Added section for relevant blog posts, added link to blog post.</p>
<hr />
<div>__TOC__<br />
<br />
== General or introductory materials ==<br />
<br />
=== Powerful metaphors, images ===<br />
<br />
Here is a collection of short descriptions, analogies or metaphors, that illustrate this difficult concept, or an aspect of it.<br />
<br />
==== Imperative metaphors ====<br />
<br />
* In computing, a continuation is a representation of the execution state of a program (for example, the call stack) at a certain point in time (Wikipedia's [http://en.wikipedia.org/wiki/Continuation Continuation]).<br />
* At its heart, <code>call/cc</code> is something like the <code>goto</code> instruction (or rather, like a label for a <code>goto</code> instruction); but a Grand High Exalted <code>goto</code> instruction... The point about <code>call/cc</code> is that it is not a ''static'' (lexical) <code>goto</code> instruction but a ''dynamic'' one (David Madore's [http://www.madore.org/~david/computers/callcc.html#sec_intro A page about <code>call/cc</code>])<br />
<br />
==== Functional metaphors ====<br />
<br />
* Continuations represent the future of a computation, as a function from an intermediate result to the final result ([http://www.haskell.org/haskellwiki/All_About_Monads#The_Continuation_monad] section in Jeff Newbern's All About Monads)<br />
* The idea behind CPS is to pass around as a function argument what to do next ([http://darcs.haskell.org/yaht/yaht.pdf Yet Another Haskell Tutorial] written by Hal Daume III, 4.6 Continuation Passing Style, pp 53-56). [http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics#Continuation_Passing_Style It can be read also in wikified format].<br />
* Rather than return the result of a function, pass one or more [[Higher order function | Higher Order Functions]] to determine what to do with the result. Yes, direct sum like things (or in generally, case analysis, managing cases, alternatives) can be implemented in CPS by passing ''more'' continuations.<br />
<br />
=== External links ===<br />
<br />
* [http://en.wikibooks.org/wiki/Haskell/Continuation_passing_style The appropriate section of Haskell: Functional Programming with Types].<br />
* Wikipedia's [http://en.wikipedia.org/wiki/Continuation Continuation] is a surprisingly good introductory material on this topic. See also [http://en.wikipedia.org/wiki/Continuation-passing_style Continuation-passing style].<br />
* [http://darcs.haskell.org/yaht/yaht.pdf Yet Another Haskell Tutorial] written by Hal Daume III contains a section on continuation passing style (4.6 Continuation Passing Style, pp 53-56). [http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics#Continuation_Passing_Style It can be read also in wikified format], thanks to Eric Kow.<br />
* David Madore's [http://www.madore.org/~david/computers/callcc.html A page about <code>call/cc</code>] describes the concept, and his [http://www.madore.org/~david/programs/unlambda/ The Unlambda Programming Language] page shows how he implemented this construct in an esoteric functional programming language.<br />
* [http://www.defmacro.org/ramblings/fp.html#part_9 Continuations] section of article [http://www.defmacro.org/ramblings/fp.html Functional Programming For The Rest of Us], an introductory material to functional programming.<br />
* [http://okmij.org/ftp/Computation/Continuations.html Continuations and delimited control]<br />
<br />
== Examples ==<br />
<br />
=== Citing haskellized Scheme examples from Wikipedia ===<br />
<br />
Quoting the Scheme examples (with their explanatory texts) from Wikipedia's [http://en.wikipedia.org/wiki/Continuation-passing_style#Examples Continuation-passing style] article, but with the Scheme examples translated to Haskell, and some straightforward modifications made to the explanations (e.g. replacing the word ''Scheme'' with ''Haskell'', or using the abbreviated name <hask>fac</hask> instead of <code>factorial</code>).<br />
<br />
In the Haskell programming language, the simplest of direct-style functions is the identity function: <br />
<br />
<haskell><br />
id :: a -> a<br />
id a = a<br />
</haskell><br />
<br />
which in CPS becomes:<br />
<br />
<haskell><br />
idCPS :: a -> (a -> r) -> r<br />
idCPS a ret = ret a<br />
</haskell><br />
where <hask>ret</hask> is the continuation argument (often also called <hask>k</hask>). A further comparison of direct and CPS style is below.<br />
{|<br />
!<center>Direct style</center>!!<center>Continuation passing style</center><br />
|-<br />
|<br />
<haskell><br />
mysqrt :: Floating a => a -> a<br />
mysqrt a = sqrt a<br />
print (mysqrt 4) :: IO ()<br />
</haskell><br />
||<br />
<haskell><br />
mysqrtCPS :: Floating a => a -> (a -> r) -> r<br />
mysqrtCPS a k = k (sqrt a)<br />
mysqrtCPS 4 print :: IO ()<br />
</haskell><br />
|-<br />
|<br />
<haskell><br />
mysqrt 4 + 2 :: Floating a => a<br />
</haskell><br />
||<br />
<haskell><br />
mysqrtCPS 4 (+ 2) :: Floating a => a<br />
</haskell><br />
|-<br />
|<br />
<haskell><br />
fac :: Integral a => a -> a<br />
fac 0 = 1<br />
fac n = n * fac (n - 1)<br />
fac 4 + 2 :: Integral a => a<br />
</haskell><br />
||<br />
<haskell><br />
facCPS :: Integral a => a -> (a -> r) -> r<br />
facCPS 0 k = k 1<br />
facCPS n k = facCPS (n - 1) $ \ret -> k (n * ret)<br />
facCPS 4 (+ 2) :: Integral a => a<br />
</haskell><br />
|}<br />
<br />
The translations shown above show that CPS is a global transformation; the direct-style factorial, <hask>fac</hask> takes, as might be expected, a single argument. The CPS factorial, <hask>facCPS</hask> takes two: the argument and a continuation. Any function calling a CPS-ed function must either provide a new continuation or pass its own; any calls from a CPS-ed function to a non-CPS function will use implicit continuations. Thus, to ensure the total absence of a function stack, the entire program must be in CPS.<br />
<br />
As an exception, <hask>mysqrt</hask> calls <hask>sqrt</hask> without a continuation &mdash; here <hask>sqrt</hask> is considered a primitive [http://en.wikipedia.org/wiki/Operator_%28programming%29 operator]; that is, it is assumed that <hask>sqrt</hask> will compute its result in finite time and without abusing the stack. Operations considered primitive for CPS tend to be arithmetic, constructors, accessors, or mutators; any [http://en.wikipedia.org/wiki/Big_O_notation O(1) operation] will be considered primitive.<br />
<br />
The quotation ends here.<br />
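<br />
Chaining CPS functions follows the same pattern: each function hands its result to the next continuation. A small sketch (the helper names are invented for illustration):<br />
<br />
<haskell><br />
addCPS :: Num a => a -> a -> (a -> r) -> r<br />
addCPS x y k = k (x + y)<br />
<br />
squareCPS :: Num a => a -> (a -> r) -> r<br />
squareCPS x k = k (x * x)<br />
<br />
-- square (2 + 3), written by nesting continuations<br />
test :: Int<br />
test = addCPS 2 3 $ \s -> squareCPS s id<br />
</haskell><br />
<br />
Evaluating <hask>test</hask> yields 25.<br />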
<br />
=== Intermediate structures ===<br />
<br />
The function <hask>Foreign.C.String.withCString</hask> converts a Haskell string to a C string.<br />
However, it does not provide the C string for external use; it restricts its use to a sub-procedure,<br />
because it cleans up the C string afterwards.<br />
It has the signature <hask>withCString :: String -> (CString -> IO a) -> IO a</hask>.<br />
This looks like a continuation, and the functions from the continuation monad can be used,<br />
e.g. for the allocation of a whole array of pointers:<br />
<haskell><br />
multiCont :: [(r -> a) -> a] -> ([r] -> a) -> a<br />
multiCont xs = runCont (mapM Cont xs)<br />
<br />
withCStringArray0 :: [String] -> (Ptr CString -> IO a) -> IO a<br />
withCStringArray0 strings act =<br />
   multiCont<br />
      (map withCString strings)<br />
      (\rs -> withArray0 nullPtr rs act)<br />
</haskell><br />
However, the right associativity of <hask>mapM</hask> leads to inefficiencies here.<br />
<br />
See:<br />
* Cale Gibbard in Haskell-Cafe on [http://www.haskell.org/pipermail/haskell-cafe/2008-February/038963.html A handy little consequence of the Cont monad]<br />
<br />
=== More general examples ===<br />
<br />
Maybe it is confusing that<br />
* the type of the (non-continuation) argument of the discussed functions (<hask>idCPS</hask>, <hask>mysqrtCPS</hask>, <hask>facCPS</hask>)<br />
* and the type of the argument of the continuations<br />
coincide in the above examples. This is not a necessity (it does not belong to the essence of the continuation concept), so let us construct an example that avoids this confusing coincidence:<br />
<haskell><br />
newSentence :: Char -> Bool<br />
newSentence = flip elem ".?!"<br />
<br />
newSentenceCPS :: Char -> (Bool -> r) -> r<br />
newSentenceCPS c k = k (elem c ".?!")<br />
</haskell><br />
but this is a rather uninteresting example. Let us see another one that uses at least recursion:<br />
<haskell><br />
mylength :: [a] -> Integer<br />
mylength [] = 0<br />
mylength (_ : as) = succ (mylength as)<br />
<br />
mylengthCPS :: [a] -> (Integer -> r) -> r<br />
mylengthCPS [] k = k 0<br />
mylengthCPS (_ : as) k = mylengthCPS as (k . succ)<br />
<br />
test8 :: Integer<br />
test8 = mylengthCPS [1..2006] id<br />
<br />
test9 :: IO ()<br />
test9 = mylengthCPS [1..2006] print<br />
</haskell><br />
<br />
You can download the Haskell source code (the original examples plus the new ones): [[Media:Continuation.hs|Continuation.hs]].<br />
<br />
== Continuation monad ==<br />
<br />
* Jeff Newbern's [http://www.haskell.org/haskellwiki/All_About_Monads All About Monads] contains a [http://www.haskell.org/haskellwiki/All_About_Monads#The_Continuation_monad section] on it.<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Cont.html Control.Monad.Cont] is contained by Haskell Hierarchical Libraries.<br />
<br />
== Delimited continuation ==<br />
<br />
* [[Library/CC-delcont]]<br />
* [http://okmij.org/ftp/Computation/Continuations.html#zipper Generic Zipper and its applications], writing that "[[Zipper]] can be viewed as a [[Library/CC-delcont|delimited continuation]] reified as a data structure" (links added).<br />
<br />
== Linguistics ==<br />
<br />
Chris Barker: [http://www.cs.bham.ac.uk/~hxt/cw04/barker.pdf Continuations in Natural Language]<br />
<br />
== Applications ==<br />
<br />
;[http://okmij.org/ftp/Computation/Continuations.html#zipper-fs ZipperFS] <br />
:Oleg Kiselyov's [[zipper]]-based [[Libraries and tools/Operating system|file server/OS]] where threading and exceptions are all realized via [[Library/CC-delcont|delimited continuation]]s.<br />
<br />
== Blog Posts ==<br />
<br />
* [http://www.haskellforall.com/2012/12/the-continuation-monad.html The Continuation Monad] by Gabriel Gonzalez.<br />
<br />
[[Category:Idioms]]<br />
[[Category:Glossary]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Monad&diff=55136Monad2012-12-23T21:02:45Z<p>Davorak: Control-Monad-state.html just links to Control-Monad-state-lazy.html so I linked it directly.</p>
<hr />
<div>{{Standard class|Monad|module=Control.Monad|module-doc=Control-Monad|package=base}}<br />
<br />
'''''Monads''''' in Haskell can be thought of as ''composable'' computation descriptions. The essence of a monad is thus ''separation'' of ''composition timeline'' from the composed computation's ''execution timeline'', as well as the ability of ''computation'' to implicitly carry extra data, as pertaining to the computation itself, in addition to its ''one'' (hence the name) output, that it '''''will produce''''' when run (or queried, or called upon). This lends monads to supplementing ''pure'' calculations with features like I/O, common environment or state, and to ''preprocessing'' of computations (simplification, optimization etc.). <br />
<br />
Each monad, or computation type, provides means, subject to '''''Monad Laws''''', to '''''(a)''''' ''create'' a description of computation action that will produce (a.k.a. "return") a given Haskell value, '''''(b)''''' somehow ''run'' a computation action description (possibly getting its output back into Haskell should the monad choose to allow it, if computations described by the monad are pure, or causing the prescribed side effects if it's not), and '''''(c)''''' ''combine'' (a.k.a. "bind") a computation action description with a ''reaction'' to it &ndash; a regular Haskell function of one argument (that will receive computation-produced value) returning another action description (using or dependent on that value, if need be) &ndash; thus creating a combined computation action description that will feed the original action's output through the reaction while automatically taking care of the particulars of the computational process itself. A monad might also define additional primitives to provide access to and/or enable manipulation of data it implicitly carries, specific to its nature.<br />
<br />
[[Image:Monads inter-dependencies 2.png|center]]<br />
<br />
Thus in Haskell, though it is a purely-functional language, side effects that '''''will be performed''''' by a computation can be dealt with and combined ''purely'' at the monad's composition time. Monads thus resemble programs in a particular [[DSL]]. While programs may describe impure effects and actions ''outside'' Haskell, they can still be combined and processed (''"assembled"'') purely, ''inside'' Haskell, creating a pure Haskell value - a computation action description that describes an impure calculation. That is how Monads in Haskell '''''separate''''' between the ''pure'' and the ''impure''. <br />
<br />
The computation doesn't have to be impure; it can be pure itself as well. Then monads serve to provide the benefits of separation of concerns, and automatic creation of a computational "pipeline". Because they are very useful in practice but rather mind-twisting for beginners, numerous tutorials that deal exclusively with monads have been created (see [[Monad#Monad tutorials|monad tutorials]]).<br />
<br />
== Common monads ==<br />
Most common applications of monads include:<br />
* Representing failure using <hask>Maybe</hask> monad<br />
* Nondeterminism using <hask>List</hask> monad to represent carrying multiple values<br />
* State using <hask>State</hask> monad<br />
* Read-only environment using <hask>Reader</hask> monad<br />
* I/O using <hask>IO</hask> monad<br />
<br />
== Monad class ==<br />
<br />
Monads can be viewed as a standard programming interface to various data or control structures, which is captured by the <hask>Monad</hask> class. All common monads are members of it:<br />
<br />
<haskell><br />
class Monad m where<br />
  (>>=)  :: m a -> (a -> m b) -> m b<br />
  (>>)   :: m a -> m b -> m b<br />
  return :: a -> m a<br />
  fail   :: String -> m a<br />
</haskell><br />
<br />
In addition to implementing the class functions, all instances of Monad should obey the following equations, or '''''Monad Laws''''':<br />
<br />
<haskell><br />
return a >>= k = k a<br />
m >>= return = m<br />
m >>= (\x -> k x >>= h) = (m >>= k) >>= h<br />
</haskell><br />
<br />
See [[Monad laws|this intuitive explanation]] of why they should obey the Monad laws. It basically says that a monad's reactions should be associative under Kleisli composition, defined as <code>(f >=> g) x = f x >>= g</code>, with <code>return</code> as its left and right identity element.<br />
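<br />
For reference, a sketch of the laws restated with Kleisli composition (this is the standard <hask>(>=>)</hask> from Control.Monad, defined here to keep the example self-contained):<br />
<br />
<haskell><br />
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)<br />
(f >=> g) x = f x >>= g<br />
<br />
-- The three laws then read:<br />
--   return >=> f     =  f                 (left identity)<br />
--   f >=> return     =  f                 (right identity)<br />
--   (f >=> g) >=> h  =  f >=> (g >=> h)   (associativity)<br />
</haskell><br />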
<br />
Any Monad can be made a [[Functor]] by defining <br />
<br />
<haskell><br />
fmap ab ma = ma >>= (return . ab)<br />
</haskell><br />
<br />
However, the Functor class is not a superclass of the Monad class. See [[Functor hierarchy proposal]].<br />
<br />
== Special notation ==<br />
<br />
In order to improve the look of code that uses monads, Haskell provides special [[syntactic sugar]] called <hask>do</hask>-notation. For example, the following expression:<br />
<haskell><br />
thing1 >>= (\x -> func1 x >>= (\y -> thing2 <br />
  >>= (\_ -> func2 y >>= (\z -> return z))))<br />
</haskell><br />
which can be written more clearly by breaking it into several lines and omitting parentheses:<br />
<haskell><br />
thing1 >>= \x -><br />
func1 x >>= \y -><br />
thing2 >>= \_ -><br />
func2 y >>= \z -><br />
return z<br />
</haskell><br />
can be also written using the <hask>do</hask>-notation as follows:<br />
<haskell><br />
do<br />
  x <- thing1<br />
  y <- func1 x<br />
  thing2<br />
  z <- func2 y<br />
  return z<br />
</haskell><br />
Code written using the <hask>do</hask>-notation is transformed by the compiler to ordinary expressions that use <hask>Monad</hask> class functions.<br />
<br />
When using the <hask>do</hask>-notation and a monad like <hask>State</hask> or <hask>IO</hask> programs look very much like programs written in an imperative language as each line contains a statement that can change the simulated global state of the program and optionally binds a (local) variable that can be used by the statements later in the code block.<br />
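<br />
As a small sketch of that imperative feel (using the <hask>State</hask> monad from the mtl package; the name <hask>tick</hask> is invented for illustration):<br />
<br />
<haskell><br />
import Control.Monad.State<br />
<br />
-- returns the current counter value and increments the state<br />
tick :: State Int Int<br />
tick = do<br />
    n <- get        -- read the simulated global state<br />
    put (n + 1)     -- replace it<br />
    return n<br />
</haskell><br />
<br />
Here <hask>runState tick 5</hask> evaluates to <hask>(5, 6)</hask>.<br />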
<br />
It is possible to intermix the <hask>do</hask>-notation with regular notation.<br />
<br />
More on the <hask>do</hask>-notation can be found in a section of [[Monads as computation#Do notation|Monads as computation]] and in other [[Monad#Monad tutorials|tutorials]].<br />
<br />
== Commutative monads ==<br />
'''Commutative monads''' are monads for which the order of actions makes no difference (they '''commute'''); that is, when the following code:<br />
<haskell><br />
do<br />
  a <- actA<br />
  b <- actB<br />
  m a b<br />
</haskell><br />
is the same as:<br />
<haskell><br />
do<br />
  b <- actB<br />
  a <- actA<br />
  m a b<br />
</haskell><br />
<br />
Examples of commutative monads include:<br />
* <hask>Reader</hask> monad<br />
* <hask>Maybe</hask> monad<br />
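<br />
A sketch with the <hask>Reader</hask> monad (names invented for illustration): swapping the two reads cannot change the result, since <hask>ask</hask> has no side effect:<br />
<br />
<haskell><br />
import Control.Monad.Reader<br />
<br />
pairA, pairB :: Reader Int (Int, Int)<br />
pairA = do { a <- ask; b <- asks (* 2); return (a, b) }<br />
pairB = do { b <- asks (* 2); a <- ask; return (a, b) }<br />
<br />
-- runReader pairA 10 == runReader pairB 10 == (10, 20)<br />
</haskell><br />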
<br />
== Monad tutorials ==<br />
<br />
Monads are known for being deeply confusing to lots of people, so there are plenty of tutorials specifically related to monads. Each takes a different approach to Monads, and hopefully everyone will find something useful.<br />
<br />
See the [[Monad tutorials timeline]] for a comprehensive list of monad tutorials.<br />
<br />
== Monad reference guides ==<br />
<br />
An explanation of the basic Monad functions, with examples, can be found in the reference guide [http://members.chello.nl/hjgtuyl/tourdemonad.html A tour of the Haskell Monad functions], by Henk-Jan van Tuyl.<br />
<br />
== Monad research ==<br />
<br />
A collection of [[Research_papers/Monads_and_arrows|research papers]] about monads.<br />
<br />
== Monads in other languages ==<br />
<br />
Implementations of monads in other languages.<br />
<br />
* [http://programming.reddit.com/goto?id=1761q C]<br />
* [http://www-static.cc.gatech.edu/~yannis/fc++/FC++.1.5/monad.h C++], [http://www-static.cc.gatech.edu/~yannis/fc++/New1.5/lambda.html#monad doc]<br />
* [http://cml.cs.uchicago.edu/pages/cml.html CML.event] ?<br />
* [http://www.st.cs.ru.nl/papers/2010/CleanStdEnvAPI.pdf Clean] State monad<br />
* [http://clojure.googlegroups.com/web/monads.clj Clojure]<br />
* [http://cratylus.freewebspace.com/monads-in-javascript.htm JavaScript]<br />
* [http://www.ccs.neu.edu/home/dherman/browse/code/monads/JavaMonads/ Java]<br />
* [http://permalink.gmane.org/gmane.comp.lang.concatenative/1506 Joy]<br />
* [http://research.microsoft.com/~emeijer/Papers/XLinq%20XML%20Programming%20Refactored%20(The%20Return%20Of%20The%20Monoids).htm LINQ], [http://www.idealliance.org/xmlusa/05/call/xmlpapers/63.1015/.63.html#S4. more, C#, VB] (inaccessible)<br />
* [http://sleepingsquirrel.org/monads/monads.lisp Lisp]<br />
* [http://lambda-the-ultimate.org/node/1136#comment-12448 Miranda]<br />
* OCaml:<br />
** [http://www.cas.mcmaster.ca/~carette/pa_monad/ OCaml]<br />
** [https://mailman.rice.edu/pipermail/metaocaml-users-l/2005-March/000057.html more]<br />
** [http://www.pps.jussieu.fr/~beffara/darcs/pivm/caml-vm/monad.mli also]<br />
** [http://www.cas.mcmaster.ca/~carette/metamonads/ MetaOcaml]<br />
** [http://enfranchisedmind.com/blog/posts/a-monad-tutorial-for-ocaml/ A Monad Tutorial for Ocaml]<br />
* [http://sleepingsquirrel.org/monads/monads.html Perl]<br />
* [http://programming.reddit.com/info/p66e/comments Perl6 ?]<br />
* [http://logic.csci.unt.edu/tarau/research/PapersHTML/monadic.html Prolog] <br />
* Python<br />
** [http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/439361 Python]<br />
** [http://www.etsimo.uniovi.es/python/pycon/papers/deferex/ here]<br />
** Twisted's [http://programming.reddit.com/info/p66e/comments/cp8eh Deferred monad]<br />
* Ruby:<br />
** [http://moonbase.rydia.net/mental/writings/programming/monads-in-ruby/00introduction.html Ruby]<br />
** [http://meta-meta.blogspot.com/2006/12/monads-in-ruby-part-1-identity.html another implementation]<br />
* Scala: <br />
** [http://scala.epfl.ch/examples/files/simpleInterpreter.html Scala]<br />
** [http://scala.epfl.ch/examples/files/callccInterpreter.html A continuation monad]<br />
* Scheme:<br />
** [http://okmij.org/ftp/Scheme/monad-in-Scheme.html Scheme]<br />
** [http://www.ccs.neu.edu/home/dherman/research/tutorials/monads-for-schemers.txt also]<br />
* [http://wiki.tcl.tk/13844 Tcl]<br />
* [http://okmij.org/ftp/Computation/monadic-shell.html The Unix Shell]<br />
* [http://okmij.org/ftp/Computation/monads.html More monads by Oleg]<br />
* [http://lambda-the-ultimate.org/node/2322 CLL]: a concurrent language based on a first-order intuitionistic linear logic where all right synchronous connectives are restricted to a monad. <br />
<br />
Unfinished:<br />
<br />
* [http://slate.tunes.org/repos/main/src/unfinished/monad.slate Slate]<br />
* [http://wiki.tcl.tk/14295 Parsing], [http://wiki.tcl.tk/13844 Maybe and Error] in Tcl<br />
<br />
And possibly there exist:<br />
<br />
* Standard ML (via modules?)<br />
<br />
Please add them if you know of other implementations.<br />
<br />
[http://lambda-the-ultimate.org/node/1136 Collection of links to monad implementations in various languages] on [http://lambda-the-ultimate.org/ Lambda The Ultimate].<br />
<br />
==Interesting monads==<br />
<br />
A list of monads for various evaluation strategies and games:<br />
<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Identity.html Identity monad] - the trivial monad.<br />
* [http://haskell.org/ghc/docs/latest/html/libraries/base/Data-Maybe.html Optional results from computations] - error checking without null.<br />
* [http://hackage.haskell.org/packages/archive/monad-mersenne-random/latest/doc/html/Control-Monad-Mersenne-Random.html Random values] - run code in an environment with access to a stream of random numbers.<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Reader.html Read only variables] - guarantee read-only access to values.<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Writer-Lazy.html Writable state] - i.e. log to a state buffer<br />
* [http://haskell.org/haskellwiki/New_monads/MonadSupply A supply of unique values] - useful for e.g. guids or unique variable names<br />
* [http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad-ST.html ST - memory-only, locally-encapsulated mutable variables]. Safely embed mutable state inside pure functions.<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-State-Lazy.html Global state] - a scoped, mutable state.<br />
* [http://hackage.haskell.org/packages/archive/Hedi/latest/doc/html/Undo.html Undoable state effects] - roll back state changes<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad-Instances.html#t:Monad Function application] - chains of function application.<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Error.html Functions which may error] - track location and causes of errors.<br />
* [http://hackage.haskell.org/packages/archive/stm/latest/doc/html/Control-Monad-STM.html Atomic memory transactions] - software transactional memory<br />
* [http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Cont.html Continuations] - computations which can be interrupted and resumed.<br />
* [http://haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html#t%3AIO IO] - unrestricted side effects on the world<br />
* [http://hackage.haskell.org/packages/archive/level-monad/0.4.1/doc/html/Control-Monad-Levels.html Search monad] - bfs and dfs search environments.<br />
* [http://hackage.haskell.org/packages/archive/stream-monad/latest/doc/html/Control-Monad-Stream.html non-determinism] - interleave computations with suspension.<br />
* [http://hackage.haskell.org/packages/archive/stepwise/latest/doc/html/Control-Monad-Stepwise.html stepwise computation] - encode non-deterministic choices as stepwise deterministic ones<br />
* [http://logic.csci.unt.edu/tarau/research/PapersHTML/monadic.html Backtracking computations]<br />
* [http://www.cs.cornell.edu/people/fluet/research/rgn-monad/index.html Region allocation effects]<br />
* [http://hackage.haskell.org/packages/archive/logict/0.5.0.2/doc/html/Control-Monad-Logic.html LogicT] - backtracking monad transformer with fair operations and pruning<br />
* [http://hackage.haskell.org/packages/archive/monad-task/latest/doc/html/Control-Monad-Task.html concurrent events and threads] - refactor event and callback heavy programs into straight-line code via co-routines<br />
* [http://hackage.haskell.org/package/QIO QIO] - The Quantum computing monad<br />
* [http://hackage.haskell.org/packages/archive/full-sessions/latest/doc/html/Control-Concurrent-FullSession.html Pi calculus] - a monad for Pi-calculus style concurrent programming<br />
* [http://www-fp.dcs.st-and.ac.uk/~kh/papers/pasco94/subsubsectionstar3_3_2_3.html Commutable monads for parallel programming]<br />
* [http://hackage.haskell.org/package/stream-monad Simple, Fair and Terminating Backtracking Monad]<br />
* [http://hackage.haskell.org/package/control-monad-exception Typed exceptions with call traces as a monad]<br />
* [http://hackage.haskell.org/package/control-monad-omega Breadth first list monad]<br />
* [http://hackage.haskell.org/package/control-monad-queue Continuation-based queues as monads]<br />
* [http://hackage.haskell.org/package/full-sessions Typed network protocol monad]<br />
* [http://hackage.haskell.org/package/level-monad Non-Determinism Monad for Level-Wise Search]<br />
* [http://hackage.haskell.org/package/monad-tx Transactional state monad]<br />
* [http://hackage.haskell.org/package/monadiccp A constraint programming monad]<br />
* [http://hackage.haskell.org/package/ProbabilityMonads A probability distribution monad]<br />
* [http://hackage.haskell.org/package/set-monad Sets] - Set computations<br />
* [http://hackage.haskell.org/package/http-monad/ HTTP] - http connections as a monadic environment<br />
* [http://hackage.haskell.org/package/monad-memo Memoization] - add memoization to code<br />
<br />
There are many more interesting instances of the monad abstraction out there. Please add them as you come across each species.<br />
<br />
==Fun==<br />
<br />
* If you are tired of monads, you can easily [http://saxophone.jpberlin.de/MonadTransformer?source=http%3A%2F%2Fwww%2Ehaskell%2Eorg%2Fhaskellwiki%2FCategory%3AMonad&language=English get rid of them].<br />
<br />
==See also==<br />
<br />
* [[What a Monad is not]]<br />
* [[Monads as containers]]<br />
* [[Monads as computation]]<br />
* [[Monad/ST]]<br />
* [http://www.haskellforall.com/2012/06/you-could-have-invented-free-monads.html Why free monads matter] (blog article)<br />
<br />
[[Category:Monad|*]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Structure_of_a_Haskell_project&diff=54914Structure of a Haskell project2012-12-12T05:53:16Z<p>Davorak: Additional section for blog posts and more recent discussions.</p>
<hr />
<div>[[Category:Applications]]<br />
[[Category:Tutorials]]<br />
The intention behind this page is to flesh out some semi-standard for<br />
the directory structure, and the tool-setup for medium to large-sized<br />
Haskell projects. It is intended to make it easier for newcomers to<br />
start up projects, and for everybody to navigate others' projects. Newcomers should also read [[How to write a Haskell program]] for more detailed instructions on setting up a new project.<br />
<br />
In particular, I hope some focus can be put on how to make the<br />
different tools play well together, and on giving projects a structure<br />
that allows scaling.<br />
<br />
Hopefully someone more qualified than I (the initiator of this page)<br />
will be summoned to write their advice, correct the faults, add<br />
missing bits and discuss differences of opinion.<br />
<br />
And perhaps a sample project (in the spirit of HNop, but with broader<br />
ambitions) should be made, so that it can be used as a template.<br />
<br />
== Tools ==<br />
It is recommended to make use of the following tool chain:<br />
* [[Darcs]] for revision control<br />
* [[Cabal]] for managing builds, tests and haddock'ing <br />
* [[QuickCheck]] and [[SmallCheck]] for auto-generated test-cases<br />
* [[HUnit]] for hand-coded tests<br />
* [[Haddock]] for generating API documents '''or''' [[Literate programming]] combined with LaTeX for thorough documentation of the code<br />
<br />
== Directory Structure ==<br />
<br />
For a project called app, an outline of the directory structure should<br />
look like this (inspired by looking at projects like [[GHC]], PUGS, Yi,<br />
[[Haskore]], Hmp3, Fps):<br />
<br />
<pre><br />
app/ -- Root-dir<br />
src/ -- For keeping the sourcecode<br />
Main.lhs -- The main-module<br />
App/ -- Use hierarchical modules<br />
...<br />
Win32/ -- For system dependent stuff<br />
Unix/<br />
cbits/ -- For C code to be linked to the haskell program<br />
testsuite/ -- Contains the testing stuff<br />
runtests.sh -- Will run all tests<br />
tests/ -- For unit-testing and checking<br />
App/ -- Clone the module hierarchy, so that there is one <br />
testfile per sourcefile<br />
benchmarks/ -- For testing performance<br />
doc/ -- Contains the manual, and other documentation<br />
examples/ -- Example inputs for the program<br />
dev/ -- Information for new developers about the project, <br />
and eg. related literature<br />
util/ -- Auxiliary scripts for various tasks<br />
dist/ -- Directory containing what end-users should get<br />
build/ -- Contains binary files, created by cabal<br />
doc/ -- The haddock documentation goes here, created by cabal<br />
resources/ -- Images, soundfiles and other non-source stuff<br />
used by the program<br />
_darcs/ <br />
README -- Textfile with short introduction of the project<br />
INSTALL -- Textfile describing how to build and install<br />
TODO -- Textfile describing things that ought to be done<br />
AUTHORS -- Textfile containing info on who does and has done <br />
what in this project, and their contact info<br />
LICENSE -- Textfile describing licensing terms for this project<br />
app.cabal -- Project-description-file for cabal<br />
Setup.hs -- Program for running cabal commands<br />
</pre><br />
<br />
== Technicalities ==<br />
=== The sourcefiles ===<br />
<br />
* It is recommended to write sourcefiles in plain ASCII or UTF-8 with Unix line-endings, using only spaces (and not tabs) for indentation.<br />
* The interface (everything a module exports) should be commented in English with Haddock comments.<br />
* All of the code should form a large LaTeX document going through the code and explaining it. The LaTeX markup should be kept light, so that it is still readable in an editor. The main module should include all of the files somehow.<br />
* The modules should have explicit export-lists.<br />
* Explicit type-annotations should be given for all top-level definitions.<br />
<br />
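A minimal sketch of a module in this style (the module and function names are invented for illustration): an explicit export list, a Haddock comment on each export, and a type annotation on every top-level definition.<br />

```haskell
-- Main.hs: explicit export list, Haddock comments on the exported
-- interface, and a type annotation on every top-level definition.
module Main (main, double) where

-- | 'double' multiplies its argument by two.
double :: Int -> Int
double n = 2 * n

-- | 'main' prints a small demonstration.
main :: IO ()
main = print (double 21)
```

Compiled and run, this prints 42.<br />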
== Discussions ==<br />
=== Why not use lhs2Tex ===<br />
<br />
Some short experiments showed that lhs2Tex is not too happy about<br />
haddock-comments, and since these two techniques of commenting are<br />
orthogonal something else should be chosen. Eg. [http://www.acooke.org/jara/pancito/haskell.sty latex.sty]<br />
<br />
=== How is the testing framework best made? ===<br />
<br />
Here should be a recipe for making a test-framework with both<br />
HUnit-tests and QuickCheck properties, that can all be run with a<br />
simple command, and how to make darcs use that for testing before<br />
recording.<br />
<br />
[http://www.informatik.uni-freiburg.de/~wehr/software/haskell/#HTF_-_The_Haskell_Test_Framework HTF] attempts to be such a test framework, but is currently woefully under-documented (although there is a tutorial hidden in the documentation for Test.Framework.Tutorial).<br />
<br />
Alternatively, [http://batterseapower.github.com/test-framework/ test-framework] serves a similar function to HTF.<br />
<br />
[[Development_Libraries_and_Tools#Testing_Frameworks| Additional testing frameworks.]]<br />
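As a sketch of such a recipe, the test-framework package mentioned above can combine HUnit tests and QuickCheck properties under a single <tt>main</tt> (the property and test case here are invented for illustration):<br />

```haskell
-- Tests.hs: one entry point running both QuickCheck properties
-- and HUnit test cases via test-framework.
import Test.Framework (defaultMain, testGroup)
import Test.Framework.Providers.HUnit (testCase)
import Test.Framework.Providers.QuickCheck2 (testProperty)
import Test.HUnit (assertEqual)

-- reversing twice a finite list is the same as identity
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

-- a hand-coded unit test for the empty-list case
test_empty :: IO ()
test_empty = assertEqual "reverse of empty list" ([] :: [Int]) (reverse [])

main :: IO ()
main = defaultMain
  [ testGroup "list tests"
      [ testProperty "reverse.reverse/id" prop_reverse
      , testCase "reverse []" test_empty
      ]
  ]
```

Running <tt>runhaskell Tests.hs</tt> then runs everything with one command and exits non-zero on failure, which darcs can use as a pre-record test.<br />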
<br />
===Blog Posts and other External discussions===<br />
<br />
Michael Snoyman on [http://www.yesodweb.com/blog/2012/09/project-templates project templates for Haskell].</div>Davorakhttps://wiki.haskell.org/index.php?title=Structure_of_a_Haskell_project&diff=54913Structure of a Haskell project2012-12-12T05:50:28Z<p>Davorak: Link to additional testing frameworks and development tools.</p>
<hr />
<div>[[Category:Applications]]<br />
[[Category:Tutorials]]<br />
The intention behind this page is to flesh out some semi-standard for<br />
the directory structure, and the tool-setup for medium to large-sized<br />
Haskell projects. It is intended to make it easier for newcomers to<br />
start up projects, and for everybody to navigate others' projects. Newcomers should also read [[How to write a Haskell program]] for more detailed instructions on setting up a new project.<br />
<br />
In particular, I hope some focus can be put on how to make the<br />
different tools play well together, and on giving projects a structure<br />
that allows scaling.<br />
<br />
Hopefully someone more qualified than I (the initiator of this page)<br />
will be summoned to write their advice, correct the faults, add<br />
missing bits and discuss differences of opinion.<br />
<br />
And perhaps a sample project (in the spirit of HNop, but with broader<br />
ambitions) should be made, so that it can be used as a template.<br />
<br />
== Tools ==<br />
It is recommended to make use of the following tool chain:<br />
* [[Darcs]] for revision control<br />
* [[Cabal]] for managing builds, tests and haddock'ing <br />
* [[QuickCheck]] and [[SmallCheck]] for auto-generated test-cases<br />
* [[HUnit]] for hand-coded tests<br />
* [[Haddock]] for generating API documents '''or''' [[Literate programming]] combined with LaTeX for thorough documentation of the code<br />
<br />
== Directory Structure ==<br />
<br />
For a project called app, an outline of the directory structure should<br />
look like this (inspired by looking at projects like [[GHC]], PUGS, Yi,<br />
[[Haskore]], Hmp3, Fps):<br />
<br />
<pre><br />
app/ -- Root-dir<br />
src/ -- For keeping the sourcecode<br />
Main.lhs -- The main-module<br />
App/ -- Use hierarchical modules<br />
...<br />
Win32/ -- For system dependent stuff<br />
Unix/<br />
cbits/ -- For C code to be linked to the haskell program<br />
testsuite/ -- Contains the testing stuff<br />
runtests.sh -- Will run all tests<br />
tests/ -- For unit-testing and checking<br />
App/ -- Clone the module hierarchy, so that there is one <br />
testfile per sourcefile<br />
benchmarks/ -- For testing performance<br />
doc/ -- Contains the manual, and other documentation<br />
examples/ -- Example inputs for the program<br />
dev/ -- Information for new developers about the project, <br />
and eg. related literature<br />
util/ -- Auxiliary scripts for various tasks<br />
dist/ -- Directory containing what end-users should get<br />
build/ -- Contains binary files, created by cabal<br />
doc/ -- The haddock documentation goes here, created by cabal<br />
resources/ -- Images, soundfiles and other non-source stuff<br />
used by the program<br />
_darcs/ <br />
README -- Textfile with short introduction of the project<br />
INSTALL -- Textfile describing how to build and install<br />
TODO -- Textfile describing things that ought to be done<br />
AUTHORS -- Textfile containing info on who does and has done <br />
what in this project, and their contact info<br />
LICENSE -- Textfile describing licensing terms for this project<br />
app.cabal -- Project-description-file for cabal<br />
Setup.hs -- Program for running cabal commands<br />
</pre><br />
<br />
== Technicalities ==<br />
=== The sourcefiles ===<br />
<br />
* It is recommended to write sourcefiles in plain ASCII or UTF-8 with Unix line-endings, using only spaces (and not tabs) for indentation.<br />
* The interface (everything a module exports) should be commented in English with Haddock comments.<br />
* All of the code should form a large LaTeX document going through the code and explaining it. The LaTeX markup should be kept light, so that it is still readable in an editor. The main module should include all of the files somehow.<br />
* The modules should have explicit export-lists.<br />
* Explicit type-annotations should be given for all top-level definitions.<br />
<br />
== Discussions ==<br />
=== Why not use lhs2Tex ===<br />
<br />
Some short experiments showed that lhs2Tex is not too happy about<br />
haddock-comments, and since these two techniques of commenting are<br />
orthogonal something else should be chosen. Eg. [http://www.acooke.org/jara/pancito/haskell.sty latex.sty]<br />
<br />
=== How is the testing framework best made? ===<br />
<br />
Here should be a recipe for making a test-framework with both<br />
HUnit-tests and QuickCheck properties, that can all be run with a<br />
simple command, and how to make darcs use that for testing before<br />
recording.<br />
<br />
[http://www.informatik.uni-freiburg.de/~wehr/software/haskell/#HTF_-_The_Haskell_Test_Framework HTF] attempts to be such a test framework, but is currently woefully under-documented (although there is a tutorial hidden in the documentation for Test.Framework.Tutorial).<br />
<br />
Alternatively, [http://batterseapower.github.com/test-framework/ test-framework] serves a similar function to HTF.<br />
<br />
[[Development_Libraries_and_Tools#Testing_Frameworks| Additional testing frameworks.]]</div>Davorakhttps://wiki.haskell.org/index.php?title=Development_Libraries_and_Tools&diff=54912Development Libraries and Tools2012-12-12T05:46:49Z<p>Davorak: Added some testing frameworks</p>
<hr />
<div>[[Category:Libraries]] <br />
[[Category:Tools]]<br />
<br />
This page contains a list of useful tools and libraries used frequently in Haskell development, for things such as debugging, benchmarking, static analysis and testing.<br />
<br />
== Debuggers ==<br />
<br />
* [http://www.haskell.org/haskellwiki/GHC/GHCi ghci] powerful and extensible debugger for GHC.<br />
* [http://www.haskell.org/hugs/ hugs] a Haskell interpreter that includes debugging facilities.<br />
<br />
== Static Analysis Tools ==<br />
<br />
Tools that perform some form of analysis on your code and provide useful feedback.<br />
<br />
* [http://community.haskell.org/~ndm/hlint/ HLint] - Detect common style mistakes and redundant parts of syntax, improving code quality.<br />
* [http://www.cl.cam.ac.uk/research/hvg/Isabelle/haskabelle.html Haskabelle] - Convert Haskell programs to Isabelle theories.<br />
* [http://community.haskell.org/~ndm/catch/ Catch] - Detect common sources of runtime errors (currently difficult to compile)<br />
* [http://hackage.haskell.org/package/SourceGraph-0.5.5.0 Sourcegraph] - Haskell visualizer<br />
Also, GHC itself provides a great deal of useful feedback when the "-Wall" option is used.<br />
<br />
== Testing Frameworks ==<br />
<br />
Libraries for testing Haskell.<br />
<br />
* [http://www.cs.chalmers.se/~rjmh/QuickCheck/ QuickCheck] - powerful testing framework where test cases are generated according to specific properties.<br />
* [http://hackage.haskell.org/package/HUnit-1.2.2.1 HUnit] - unit testing framework similar to JUnit.<br />
* [http://hspec.github.com/ Hspec] - a framework for BDD similar to RSpec<br />
* [http://batterseapower.github.com/test-framework/ test-framework] integrates both HUnit and QuickCheck.<br />
* [http://hackage.haskell.org/package/HTF-0.10.0.5 The Haskell Test Framework, HTF] integrates both HUnit and QuickCheck.<br />
<br />
== Dynamic Analysis Tools ==<br />
<br />
Tools that analyse your program's run-time behavior to provide useful information, such as coverage or benchmarks.<br />
<br />
* [http://community.haskell.org/~ndm/hat/ hat] - analyse each evaluation step of your Haskell program.<br />
* [http://projects.unsafeperformio.com/hpc/ hpc] - check evaluation coverage of a haskell program, useful for determining test coverage.<br />
* [http://hackage.haskell.org/package/criterion criterion] - powerful benchmarking framework.<br />
* [http://code.haskell.org/ThreadScope/ threadscope] - a new feature on the horizon which allows for benchmarking and visualization of multithreaded performance.<br />
* [http://www.haskell.org/ghc/docs/latest/html/users_guide/profiling.html ghc profiling tools] - a powerful suite of profiling tools exist within GHC itself<br />
<br />
== Ancillary Tools ==<br />
<br />
Tools that aid the development process, such as build tools, revision control systems, and similar, while not performing any interaction with the code itself.<br />
<br />
* [http://darcs.net/ darcs] revision control system <br />
* [http://www.haskell.org/haddock/ haddock] documentation system<br />
* [http://www.haskell.org/cabal/ cabal] build system<br />
* [http://community.haskell.org/~ndm/hoogle/ hoogle] type-aware API search engine</div>Davorakhttps://wiki.haskell.org/index.php?title=How_to_write_a_Haskell_program&diff=54877How to write a Haskell program2012-12-09T10:20:33Z<p>Davorak: Deleted section. Link to don's old blog was broken and article was missing on Don's new blog.</p>
<hr />
<div>A developers' guide to creating a new Haskell project or program, and working in the Haskell developer ecosystem.<br />
<br />
''Note: for learning the Haskell language itself we recommend [http://haskell.org/haskellwiki/Tutorials#Introductions_to_Haskell these resources].''<br />
<br />
== Recommended tools ==<br />
<br />
Almost all new Haskell projects use the following tools. Each is<br />
intrinsically useful, but using a set of common tools also helps<br />
everyone by increasing productivity, and you're more likely to get<br />
patches.<br />
<br />
=== Revision control ===<br />
<br />
Use [http://git-scm.com/ git] or [http://darcs.net darcs] unless you have a specific reason not to. Both are lightweight distributed revision control systems (and darcs is written in Haskell). Both have massive market share in the Haskell world. If you want to encourage contributions from other Haskell hackers then git or darcs are the best. Darcs hosting is available on [http://hub.darcs.net/ hub.darcs.net]. For git, [http://github.com/ github] is very popular.<br />
<br />
=== Build system ===<br />
<br />
[[Image:Cabal-With-Text-small.png|frame|Built with Cabal]]<br />
<br />
Use [http://haskell.org/cabal/ Cabal].<br />
You should read at least the start of section 2 of the [http://www.haskell.org/cabal/users-guide/ Cabal User's Guide].<br />
<br />
You should use [http://haskell.org/cabal/download.html cabal-install] as a front-end for installing your Cabal library. Cabal-install provides commands not only for building libraries but also for installing them from, and uploading them to, Hackage. As a bonus, for almost all programs, it's faster than using Setup.hs scripts directly, since no time is wasted compiling the scripts. (This does not apply for programs that use custom Setup.hs scripts, since those need to be compiled even when using cabal-install.)<br />
<br />
cabal-install is widely available, as part of the [http://haskell.org/platform Haskell Platform], so you can probably assume your users will have it too.<br />
<br />
=== Documentation ===<br />
<br />
For libraries, use [http://haskell.org/haddock/ Haddock]. We recommend<br />
using the version of Haddock that ships with the Haskell Platform. Haddock generates [http://hackage.haskell.org/packages/archive/base/4.3.1.0/doc/html/Prelude.html nice markup], with links to source.<br />
<br />
=== Testing ===<br />
<br />
You can use [http://hackage.haskell.org/package/QuickCheck QuickCheck] or [http://www.mail-archive.com/haskell@haskell.org/msg19215.html SmallCheck] to test pure code. To test impure code, use [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/HUnit HUnit]. See [http://hackage.haskell.org/packages/archive/hashable/1.1.2.2/hashable.cabal this Cabal file] for an example of how to include tests in your Cabal package.<br />
<br />
To get started, try [[Introduction to QuickCheck]]. For a slightly more advanced introduction, [http://blog.codersbase.com/2006/09/simple-unit-testing-in-haskell.html Simple Unit Testing in Haskell] is a blog article about creating a testing framework for QuickCheck using some Template Haskell. For HUnit, see [[HUnit 1.0 User's Guide]]<br />
<br />
=== Distribution ===<br />
<br />
The standard mechanism for distributing Haskell libraries and<br />
applications is [http://hackage.haskell.org/packages/hackage.html Hackage]. Hackage can<br />
host your cabalised tarball releases, and link to any library<br />
dependencies your code has. Users will find and install your packages via "cabal install", and your package will be integrated into Haskell search engines, like [http://www.haskell.org/hoogle/ hoogle]<br />
<br />
=== Target Environment ===<br />
<br />
If at all possible, depend on libraries that are provided by the [http://haskell.org/platform Haskell Platform], and libraries that in turn build against the Haskell Platform. This set of libraries is designed to be widely available, so your end users will be able to build your software.<br />
<br />
== Structure of a simple project ==<br />
<br />
The basic structure of a new Haskell project can be adopted from<br />
[http://semantic.org/hnop/ HNop], the minimal Haskell project. It<br />
consists of the following files, for the mythical project "haq".<br />
<br />
* Haq.hs -- the main haskell source file<br />
* haq.cabal -- the cabal build description<br />
* Setup.hs -- build script itself<br />
* _darcs -- revision control<br />
* README -- info<br />
* LICENSE -- license<br />
<br />
Of course, you can elaborate on this, with subdirectories and multiple<br />
modules. See [[Structure of a Haskell project]] for an example of a larger project's directory structure.<br />
<br />
Here is a transcript that shows how you'd create a minimal darcs and cabalised<br />
Haskell project for the cool new Haskell program "haq", build it,<br />
install it and release.<br />
<br />
''Note'': The new tool "cabal init" automates all this for you, but you should<br />
understand all the parts even so. <br />
<br />
We will now walk through the creation of the infrastructure for a simple<br />
Haskell executable. Advice for libraries follows after.<br />
<br />
=== Create a directory ===<br />
<br />
Create somewhere for the source:<br />
<br />
<code><br />
$ mkdir haq<br />
$ cd haq<br />
</code><br />
<br />
=== Write some Haskell source ===<br />
<br />
Write your program:<br />
<br />
<haskell><br />
$ cat > Haq.hs<br />
--<br />
-- Copyright (c) 2006 Don Stewart - http://www.cse.unsw.edu.au/~dons/<br />
-- GPL version 2 or later (see http://www.gnu.org/copyleft/gpl.html)<br />
--<br />
import System.Environment<br />
<br />
-- | 'main' runs the main program<br />
main :: IO ()<br />
main = getArgs >>= print . haqify . head<br />
<br />
haqify s = "Haq! " ++ s<br />
</haskell><br />
<br />
=== Stick it in version control ===<br />
<br />
Place the source under revision control (you may need to enter your e-mail address first, to identify you as maintainer of this source):<br />
<br />
<code><br />
$ darcs init<br />
$ darcs add Haq.hs <br />
$ darcs record<br />
addfile ./Haq.hs<br />
Shall I record this change? (1/?) [ynWsfqadjkc], or ? for help: y<br />
hunk ./Haq.hs 1<br />
+--<br />
+-- Copyright (c) 2006 Don Stewart - http://www.cse.unsw.edu.au/~dons/<br />
+-- GPL version 2 or later (see http://www.gnu.org/copyleft/gpl.html)<br />
+--<br />
+import System.Environment<br />
+<br />
+-- | 'main' runs the main program<br />
+main :: IO ()<br />
+main = getArgs >>= print . haqify . head<br />
+<br />
+haqify s = "Haq! " ++ s<br />
Shall I record this change? (2/?) [ynWsfqadjkc], or ? for help: y<br />
What is the patch name? Import haq source<br />
Do you want to add a long comment? [yn]n<br />
Finished recording patch 'Import haq source'<br />
</code><br />
<br />
And we can see that darcs is now running the show:<br />
<br />
<code><br />
$ ls<br />
Haq.hs _darcs<br />
</code><br />
<br />
=== Add a build system ===<br />
<br />
Create a .cabal file describing how to build your project:<br />
<br />
<code><br />
$ cat > haq.cabal<br />
Name: haq<br />
Version: 0.0<br />
Description: Super cool mega lambdas<br />
License: GPL<br />
License-file: LICENSE<br />
Author: Don Stewart<br />
Maintainer: dons@cse.unsw.edu.au<br />
Build-Type: Simple<br />
Cabal-Version: >=1.2<br />
<br />
Executable haq<br />
Main-is: Haq.hs<br />
Build-Depends: base >= 3 && < 5<br />
</code><br />
<br />
(If your package uses other packages, e.g. <tt>haskell98</tt>, you'll need to add them to the <tt>Build-Depends:</tt> field as a comma-separated list.)<br />
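For example, a <tt>Build-Depends:</tt> line for a package that also used <tt>haskell98</tt> and <tt>containers</tt> (hypothetical extra dependencies, chosen purely for illustration) would read:<br />

```
  Build-Depends:   base >= 3 && < 5, haskell98, containers
```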
Add a <tt>Setup.hs</tt> that will actually do the building:<br />
<br />
<haskell><br />
$ cat > Setup.hs<br />
import Distribution.Simple<br />
main = defaultMain<br />
</haskell><br />
Cabal allows either <tt>Setup.hs</tt> or <tt>Setup.lhs</tt>.<br />
<br />
Now would also be a good time to add a LICENSE file and a README file. Examples are in the tarball for HNop.<br />
<br />
Record your changes:<br />
<br />
<code><br />
$ darcs add haq.cabal Setup.hs LICENSE README<br />
$ darcs record --all<br />
What is the patch name? Add a build system<br />
Do you want to add a long comment? [yn]n<br />
Finished recording patch 'Add a build system'<br />
</code><br />
<br />
=== Build your project ===<br />
<br />
Now build it! There are two methods of accessing Cabal functionality: through your Setup.hs script or through cabal-install. In most cases, cabal-install is now the preferred method.<br />
<br />
Building using cabal-install:<br />
<br />
<code><br />
$ cabal install --prefix=$HOME --user<br />
</code><br />
<br />
Building using the traditional Setup.hs method:<br />
<br />
<code><br />
$ runhaskell Setup configure --prefix=$HOME --user<br />
$ runhaskell Setup build<br />
$ runhaskell Setup install<br />
</code><br />
<br />
This will install your newly minted haq program in $HOME/bin.<br />
<br />
=== Run it ===<br />
<br />
And now you can run your cool project:<br />
<code><br />
$ haq me<br />
"Haq! me"<br />
</code><br />
<br />
You can also run it in-place, even if you skip the install phase:<br />
<code><br />
$ dist/build/haq/haq you<br />
"Haq! you"<br />
</code><br />
<br />
=== Build some haddock documentation ===<br />
<br />
Generate some API documentation into dist/doc/*<br />
<br />
Using cabal install:<br />
<code><br />
$ cabal haddock<br />
</code><br />
<br />
Traditional method:<br />
<code><br />
$ runhaskell Setup haddock<br />
</code><br />
<br />
which generates files in dist/doc/ including:<br />
<br />
<code><br />
$ w3m -dump dist/doc/html/haq/Main.html<br />
haq Contents Index<br />
Main<br />
<br />
Synopsis<br />
main :: IO ()<br />
<br />
Documentation<br />
<br />
main :: IO ()<br />
main runs the main program<br />
<br />
Produced by Haddock version 0.7<br />
</code><br />
<br />
No output? Make sure you have actually installed haddock. It is a separate program, not something that comes with Cabal. Note that the stylized comment in the source gets picked up by Haddock.<br />
<br />
=== (Optional) Improve your code: HLint ===<br />
<br />
[http://hackage.haskell.org/package/hlint HLint] can be a valuable tool for improving your coding style, particularly if you're new to Haskell. Let's run it now.<br />
<br />
<code><br />
$ hlint .<br />
./Haq.hs:11:1: Warning: Eta reduce<br />
Found:<br />
haqify s = "Haq! " ++ s<br />
Why not:<br />
haqify = ("Haq! " ++)<br />
</code><br />
<br />
The existing code will work, but let's follow that suggestion. Open Haq.hs in your favourite editor and change the line:<br />
<br />
<haskell><br />
where haqify s = "Haq! " ++ s<br />
</haskell><br />
<br />
to:<br />
<br />
<haskell><br />
where haqify = ("Haq! " ++)<br />
</haskell><br />
<br />
=== Add some automated testing: QuickCheck ===<br />
<br />
==== QuickCheck v1 ====<br />
<br />
We'll use QuickCheck to specify a simple property of our Haq.hs code. Create a tests module, Tests.hs, with some QuickCheck boilerplate:<br />
<br />
<haskell><br />
$ cat > Tests.hs<br />
import Char<br />
import List<br />
import Test.QuickCheck<br />
import Text.Printf<br />
<br />
main = mapM_ (\(s,a) -> printf "%-25s: " s >> a) tests<br />
<br />
instance Arbitrary Char where<br />
arbitrary = choose ('\0', '\128')<br />
coarbitrary c = variant (ord c `rem` 4)<br />
</haskell><br />
<br />
Now let's write a simple property:<br />
<br />
<haskell><br />
$ cat >> Tests.hs <br />
-- reversing twice a finite list, is the same as identity<br />
prop_reversereverse s = (reverse . reverse) s == id s<br />
where _ = s :: [Int]<br />
<br />
-- and add this to the tests list<br />
tests = [("reverse.reverse/id", test prop_reversereverse)]<br />
</haskell><br />
<br />
We can now run this test, and have QuickCheck generate the test data:<br />
<br />
<code><br />
$ runhaskell Tests.hs<br />
reverse.reverse/id : OK, passed 100 tests.<br />
</code><br />
<br />
Let's add a test for the 'haqify' function:<br />
<br />
<haskell><br />
-- Dropping the "Haq! " string is the same as identity<br />
prop_haq s = drop (length "Haq! ") (haqify s) == id s<br />
where haqify s = "Haq! " ++ s<br />
<br />
tests = [("reverse.reverse/id", test prop_reversereverse)<br />
,("drop.haq/id", test prop_haq)]<br />
</haskell><br />
<br />
and let's test that:<br />
<br />
<code><br />
$ runhaskell Tests.hs<br />
reverse.reverse/id : OK, passed 100 tests.<br />
drop.haq/id : OK, passed 100 tests.<br />
</code><br />
<br />
Great!<br />
<br />
==== QuickCheck v2 ====<br />
<br />
If you're using version 2 of QuickCheck, the code in the previous section needs some minor modifications:<br />
<br />
<haskell><br />
$ cat > Tests.hs<br />
import Char<br />
import List<br />
import Test.QuickCheck<br />
import Text.Printf<br />
<br />
main = mapM_ (\(s,a) -> printf "%-25s: " s >> a) tests<br />
<br />
-- reversing twice a finite list, is the same as identity<br />
prop_reversereverse s = (reverse . reverse) s == id s<br />
where _ = s :: [Int]<br />
<br />
-- Dropping the "Haq! " string is the same as identity<br />
prop_haq s = drop (length "Haq! ") (haqify s) == id s<br />
where haqify s = "Haq! " ++ s<br />
<br />
tests = [("reverse.reverse/id", quickCheck prop_reversereverse)<br />
,("drop.haq/id", quickCheck prop_haq)]<br />
</haskell><br />
<br />
To run the test:<br />
<br />
<code><br />
$ runhaskell Tests.hs<br />
reverse.reverse/id : +++ OK, passed 100 tests.<br />
drop.haq/id : +++ OK, passed 100 tests.<br />
</code><br />
<br />
Success!<br />
<br />
=== Running the test suite from darcs ===<br />
<br />
We can arrange for darcs to run the test suite on every commit that is run with the flag --test:<br />
<br />
<code><br />
$ darcs setpref test "runhaskell Tests.hs"<br />
Changing value of test from '' to 'runhaskell Tests.hs'<br />
</code><br />
<br />
will run the full set of QuickChecks.<br />
If your test requires it, you may need to ensure other things are built too -- for example:<code>darcs setpref test "alex Tokens.x;happy Grammar.y;runhaskell Tests.hs"</code>.<br />
Note that with this setup a darcs patch is accepted even if a QuickCheck test fails.<br />
You have two choices to [http://www.haskell.org/pipermail/haskell-cafe/2007-October/033834.html work around] this:<br />
* Use <hask>quickCheckResult</hask> from the package QuickCheck-2 and call <hask>exitFailure</hask> if the result is not a success.<br />
* Keep the test program as it is, and implement the failure on the shell level:<br />
: <code>runhaskell Tests.hs | tee test.log && if grep Falsifiable test.log >/dev/null; then exit 1; fi</code><br />
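A sketch of the first option, assuming QuickCheck 2's <hask>quickCheckResult</hask> and <hask>isSuccess</hask> (the latter from Test.QuickCheck.Test), applied to the reverse property from earlier:<br />

```haskell
import Control.Monad (unless)
import System.Exit (exitFailure)
import Test.QuickCheck (quickCheckResult)
import Test.QuickCheck.Test (isSuccess)

-- reversing twice a finite list is the same as identity
prop_reversereverse :: [Int] -> Bool
prop_reversereverse s = (reverse . reverse) s == s

-- exit with a non-zero status if the property fails,
-- so that darcs sees the test failure
main :: IO ()
main = do
  result <- quickCheckResult prop_reversereverse
  unless (isSuccess result) exitFailure
```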
<br />
Let's commit a new patch:<br />
<br />
<code><br />
$ darcs add Tests.hs<br />
$ darcs record --all --test<br />
What is the patch name? Add testsuite<br />
Do you want to add a long comment? [yn]n<br />
Running test...<br />
reverse.reverse/id : OK, passed 100 tests.<br />
drop.haq/id : OK, passed 100 tests.<br />
Test ran successfully.<br />
Looks like a good patch.<br />
Finished recording patch 'Add testsuite'<br />
</code><br />
<br />
Excellent: now, provided the --test flag is passed, patches must pass the test suite before they can be recorded.<br />
<br />
=== Tag the stable version, create a tarball, and sell it! ===<br />
<br />
Tag the stable version:<br />
<br />
<code><br />
$ darcs tag<br />
What is the version name? 0.0<br />
Finished tagging patch 'TAG 0.0'<br />
</code><br />
<br />
==== Create a tarball ====<br />
You can do this using either Cabal or darcs, or even an explicit <tt>tar</tt> command.<br />
<br />
===== Using Cabal =====<br />
<br />
Since the code is cabalised, we can create a tarball with cabal-install<br />
directly (you can also use <tt>runhaskell Setup.hs sdist</tt>, but you need <tt>tar</tt> on your system [http://thread.gmane.org/gmane.comp.lang.haskell.cafe/60617/focus=60653]):<br />
<br />
<code><br />
$ cabal sdist<br />
Building source dist for haq-0.0...<br />
Source tarball created: dist/haq-0.0.tar.gz<br />
</code><br />
This has the advantage that Cabal will do a bit more checking, and<br />
ensure that the tarball has the structure that HackageDB expects. <br />
Note that it does require the LICENSE file to exist.<br />
It packages up the files needed to build the project; to include other files (such as <tt>Tests.hs</tt> in the above example, and our README), we need to add:<br />
<br />
<code><br />
extra-source-files: Tests.hs README<br />
</code><br />
<br />
to the .cabal file to have everything included.<br />
<br />
===== Using darcs =====<br />
<br />
Alternatively, you can use darcs:<br />
<code><br />
$ darcs dist -d haq-0.0<br />
Created dist as haq-0.0.tar.gz<br />
</code><br />
<br />
And you're all set up!<br />
<br />
==== Check that your source package is complete ====<br />
<br />
Just to make sure everything works, try building the source package in some temporary directory:<br />
<code><br />
$ tar xzf haq-0.0.tar.gz<br />
$ cd haq-0.0<br />
$ cabal configure<br />
$ cabal build<br />
</code><br />
and for packages containing libraries,<br />
<code><br />
$ cabal haddock<br />
</code><br />
<br />
==== Upload your package to Hackage ====<br />
<br />
Whichever of the above methods you've used to create your package, you can upload it to the Hackage package collection via a [http://hackage.haskell.org/packages/upload.html web interface].<br />
You may wish to use the package checking interface there first, and fix things it warns about, before uploading your package.<br />
<br />
=== Summary ===<br />
<br />
The following files were created:<br />
<br />
$ ls<br />
Haq.hs Tests.hs dist haq.cabal<br />
Setup.hs _darcs haq-0.0.tar.gz<br />
<br />
== Libraries ==<br />
<br />
The process for creating a Haskell library is almost identical. The differences<br />
are as follows, for the hypothetical "ltree" library:<br />
<br />
=== Hierarchical source ===<br />
<br />
The source should live under a directory path that fits into the<br />
existing [[Hierarchical module names|module layout guide]].<br />
So we would create the following directory structure, for the module<br />
Data.LTree:<br />
<br />
$ mkdir Data<br />
$ cat > Data/LTree.hs <br />
module Data.LTree where<br />
<br />
So our Data.LTree module lives in Data/LTree.hs<br />
<br />
=== The Cabal file ===<br />
<br />
Cabal files for libraries list the publicly visible modules, and have<br />
no executable section:<br />
<br />
$ cat > ltree.cabal <br />
Name: ltree<br />
Version: 0.1<br />
Description: Lambda tree implementation<br />
License: BSD3<br />
License-file: LICENSE<br />
Author: Don Stewart<br />
Maintainer: dons@cse.unsw.edu.au<br />
Build-Type: Simple<br />
Cabal-Version: >=1.2<br />
<br />
Library<br />
Build-Depends: base >= 3 && < 5<br />
Exposed-modules: Data.LTree<br />
ghc-options: -Wall<br />
<br />
We can thus build our library:<br />
<br />
$ cabal configure --prefix=$HOME --user<br />
$ cabal build <br />
Preprocessing library ltree-0.1...<br />
Building ltree-0.1...<br />
[1 of 1] Compiling Data.LTree ( Data/LTree.hs, dist/build/Data/LTree.o )<br />
/usr/bin/ar: creating dist/build/libHSltree-0.1.a<br />
<br />
and our library has been created as an object archive. Now install it:<br />
<br />
$ cabal install<br />
Installing: /home/dons/lib/ltree-0.1/ghc-6.6 & /home/dons/bin ltree-0.1...<br />
Registering ltree-0.1...<br />
Reading package info from ".installed-pkg-config" ... done.<br />
Saving old package config file... done.<br />
Writing new package config file... done.<br />
<br />
And we're done!<br />
To try it out, first make sure that your working directory is anything but the source directory of your library:<br />
<br />
$ cd ..<br />
<br />
And then use your new library from, for example, ghci:<br />
<br />
$ ghci -package ltree<br />
Prelude> :m + Data.LTree<br />
Prelude Data.LTree> <br />
<br />
The new library is in scope, and ready to go.<br />
<br />
=== More complex build systems ===<br />
<br />
For larger projects, you may want to store source trees in subdirectories. This can be done simply by creating a directory -- for example, "src" -- into which you will put your src tree.<br />
<br />
To have Cabal find this code, you add the following line to your Cabal<br />
file:<br />
<br />
hs-source-dirs: src<br />
<br />
You can also set up Cabal to run configure scripts, among other features. For more information consult the<br />
[http://www.haskell.org/cabal/users-guide/ Cabal user guide].<br />
<br />
== Automation ==<br />
<br />
A tool to automatically populate a new cabal project is available:<br />
<br />
cabal init<br />
<br />
Usage is:<br />
<br />
<code><br />
$ cabal init<br />
Package name [default "haq"]? <br />
Package version [default "0.1"]? <br />
Please choose a license:<br />
1) GPL<br />
2) GPL-2<br />
3) GPL-3<br />
4) LGPL<br />
5) LGPL-2.1<br />
6) LGPL-3<br />
* 7) BSD3<br />
8) BSD4<br />
9) MIT<br />
10) PublicDomain<br />
11) AllRightsReserved<br />
12) OtherLicense<br />
13) Other (specify)<br />
Your choice [default "BSD3"]? <br />
Author name? Henry Laxen<br />
Maintainer email? nadine.and.henry@pobox.com<br />
Project homepage/repo URL? http://somewhere.com/haq/<br />
Project synopsis? A wonderful little module<br />
Project category:<br />
1) Codec<br />
2) Concurrency<br />
3) Control<br />
4) Data<br />
5) Database<br />
6) Development<br />
7) Distribution<br />
8) Game<br />
9) Graphics<br />
10) Language<br />
11) Math<br />
12) Network<br />
13) Sound<br />
14) System<br />
15) Testing<br />
16) Text<br />
17) Web<br />
18) Other (specify)<br />
Your choice? 3<br />
What does the package build:<br />
1) Library<br />
2) Executable<br />
Your choice? 1<br />
Generating LICENSE...<br />
Generating Setup.hs...<br />
Generating haq.cabal...<br />
<br />
You may want to edit the .cabal file and add a Description field.<br />
</code><br />
<br />
== Licenses ==<br />
<br />
Code for the common base library package must be BSD licensed. Otherwise, it<br />
is entirely up to you as the author.<br />
Choose a licence (inspired by [http://www.dina.dk/~abraham/rants/license.html this]).<br />
Check the licences of things you use (both other Haskell packages and C<br />
libraries), since these may impose conditions you must follow.<br />
Use the same licence as related projects, where possible. The Haskell community is<br />
split into 2 camps, roughly: those who release everything under BSD, and<br />
(L)GPLers. Some Haskellers recommend avoiding LGPL, due to cross-module optimisation<br />
issues. Like many licensing questions, this advice is controversial. Several Haskell projects<br />
(wxHaskell, HaXml, etc) use the LGPL with an extra permissive clause which gets round the<br />
cross-module optimisation problem.<br />
<br />
== Releases ==<br />
<br />
It's important to release your code as stable, tagged tarballs. Don't<br />
just [http://jackunrue.blogspot.com/2006/11/don-do-releases.html rely on darcs for distribution].<br />
<br />
* '''darcs dist''' generates tarballs directly from a darcs repository<br />
<br />
For example:<br />
<br />
$ cd fps<br />
$ ls <br />
Data LICENSE README Setup.hs TODO _darcs cbits dist fps.cabal tests<br />
$ darcs dist -d fps-0.8<br />
Created dist as fps-0.8.tar.gz<br />
<br />
You can now just post your fps-0.8.tar.gz.<br />
<br />
You can also have darcs do the equivalent of 'daily snapshots' for you by using a post-hook.<br />
<br />
Put the following in <tt>_darcs/prefs/defaults</tt>:<br />
apply posthook darcs dist<br />
apply run-posthook<br />
<br />
Advice:<br />
* Tag each release using '''darcs tag'''. For example:<br />
<br />
$ darcs tag 0.8<br />
Finished tagging patch 'TAG 0.8'<br />
<br />
Then people can <tt>darcs pull --partial -t 0.8</tt>, to get just the tagged version (and not the entire history).<br />
<br />
== Hosting ==<br />
<br />
Hosting for repos is available from the Haskell community server:<br />
<br />
http://community.haskell.org/<br />
<br />
A Darcs repository can be published simply by making it available from a<br />
web page.<br />
<br />
There is also a (minimal) Github equivalent for Darcs at [http://hub.darcs.net/ hub.darcs.net].<br />
<br />
== Web page ==<br />
<br />
Create a web page documenting your project! An easy way to do this is to<br />
add a project-specific page to [[Haskell|the Haskell wiki]].<br />
<br />
== The user experience ==<br />
<br />
When developing a new Haskell library, it is important to remember how the user expects to be able to build and use a library.<br />
<br />
=== Introductory information and build guide ===<br />
<br />
A typical library user expects to:<br />
<br />
# Visit [[Haskell|Haskell.org]]<br />
# Find the library/program they are looking for:<br />
## if not found, try the mailing list;<br />
## if it is hidden, try improving the documentation on haskell.org;<br />
## if it does not exist, try contributing code and documentation.<br />
# Download<br />
# Build and install<br />
# Enjoy<br />
<br />
Each of these steps can pose potential road blocks, and code authors can<br />
do a lot to help code users avoid such blocks. Steps 1..2 may be easy enough, and many coders and users are mainly concerned with step 5. Steps 3..4 are the ones that often get in the way. In particular, the<br />
following questions should have clear answers:<br />
<br />
* Which is the latest version? <br />
* What state is it in? <br />
* What are its aims? <br />
* Where is the documentation?<br />
* Which is the right version for given OS and Haskell implementation?<br />
* How is it packaged, and what tools are needed to get and unpack it?<br />
* How is it installed, and what tools are needed to install it?<br />
* How do we handle dependencies?<br />
* How do we provide/acquire the knowledge and tool-chains needed?<br />
<br />
The best place to answer these questions is a README file,<br />
distributed with the library or application, and often accompanied with<br />
similar text on a more extensive web page.<br />
<br />
=== Tutorials ===<br />
<br />
Generated haddock documentation is usually not enough to help new<br />
programmers learn how to use a library. You must also provide accompanying examples, and even tutorials about the library.<br />
<br />
Please consider providing example code for your library or application. The code should be type-correct and well-commented.<br />
<br />
== Program structure ==<br />
<br />
Monad transformers are very useful for programming in the large,<br />
encapsulating state, and controlling side effects. To learn more about this approach, try [http://www.grabmueller.de/martin/www/pub/Transformers.en.html Monad Transformers Step by Step].<br />
<br />
== Publicity ==<br />
<br />
The best code in the world is meaningless if nobody knows about it. The<br />
process to follow once you've tagged and released your code is:<br />
<br />
=== Join the community ===<br />
<br />
If you haven't already, join the community. The best way to do this is to [http://haskell.org/haskellwiki/Mailing_lists subscribe] to at least haskell-cafe@ and haskell@ mailing lists. Joining the [[IRC_channel|#haskell IRC channel]] is also an excellent idea.<br />
<br />
=== Announce your project on haskell@ ===<br />
<br />
Most important: announce your project releases to the haskell@haskell.org mailing list. Tag your email subject line with "ANNOUNCE: ...". This ensures it will then make it into the [http://haskell.org/haskellwiki/HWN Haskell Weekly News]. To be doubly sure, you can email the release text to the [[HWN|HWN editor]].<br />
<br />
=== Add your code to the public collections ===<br />
<br />
* Add your library or application to the [[Libraries and tools]] page, under the relevant category, so people can find it.<br />
<br />
* If your release is a Cabal package, add it to the [http://hackage.haskell.org/packages/hackage.html Hackage database] (Haskell's CPAN wanna-be).<br />
<br />
=== Blog about it ===<br />
<br />
Blog about it! Blog about your new code on [http://planet.haskell.org Planet Haskell].<br />
Write about your project in your blog, then email the [http://planet.haskell.org/ Planet Haskell] maintainer (ibid on [[IRC channel|#haskell]]) the RSS feed URL for your blog.<br />
<br />
[[Category:Community]]<br />
[[Category:Tutorials]]</div>Davorakhttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=54776The Monad.Reader2012-12-02T05:45:49Z<p>Davorak: </p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but more enduring than a wiki page or blog post. There have been a wide variety of articles, including: exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
'''Please note that the Monad.Reader has moved to [http://themonadreader.wordpress.com http://themonadreader.wordpress.com]. This site will no longer be updated.'''<br />
<br />
Older issues, 1-13, can be found [[The_Monad.Reader/Previous_issues|here]].</div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/6&diff=5476699 questions/Solutions/62012-11-30T20:49:35Z<p>Davorak: Added solution using Control.Arrows fan out operator.</p>
<hr />
<div>(*) Find out whether a list is a palindrome. A palindrome can be read forward or backward; e.g. (x a m a x).<br />
<br />
<haskell><br />
isPalindrome :: (Eq a) => [a] -> Bool<br />
isPalindrome xs = xs == (reverse xs)<br />
</haskell><br />
<br />
<haskell><br />
isPalindrome' [] = True<br />
isPalindrome' [_] = True<br />
isPalindrome' xs = (head xs) == (last xs) && (isPalindrome' $ init $ tail xs) <br />
</haskell><br />
<br />
Here's one to show it done in a fold, just for the fun of it. Do note that it is less efficient than the previous two, though.<br />
<br />
<haskell><br />
isPalindrome'' :: (Eq a) => [a] -> Bool<br />
isPalindrome'' xs = foldl (\acc (a,b) -> if a == b then acc else False) True input<br />
where<br />
input = zip xs (reverse xs)<br />
</haskell><br />
<br />
Another one just for fun:<br />
<br />
<haskell><br />
isPalindrome''' :: (Eq a) => [a] -> Bool<br />
isPalindrome''' = Control.Monad.liftM2 (==) id reverse<br />
</haskell><br />
<br />
Or even:<br />
<br />
<haskell><br />
isPalindrome'''' :: (Eq a) => [a] -> Bool<br />
isPalindrome'''' = (==) Control.Applicative.<*> reverse<br />
</haskell><br />
<br />
Here's one that does half as many compares:<br />
<br />
<haskell><br />
palindrome :: (Eq a) => [a] -> Bool<br />
palindrome xs = p [] xs xs<br />
where p rev (x:xs) (_:_:ys) = p (x:rev) xs ys<br />
p rev (x:xs) [_] = rev == xs<br />
p rev xs [] = rev == xs<br />
</haskell><br />
<br />
Here's one using foldr and zipWith.<br />
<br />
<haskell><br />
palindrome :: (Eq a) => [a] -> Bool<br />
palindrome xs = foldr (&&) True $ zipWith (==) xs (reverse xs)<br />
palindrome' xs = and $ zipWith (==) xs (reverse xs) -- same, but easier<br />
</haskell><br />
<br />
<br />
<haskell><br />
isPalindrome list = take half_len list == reverse (drop (half_len + (len `mod` 2)) list)<br />
where <br />
len = length list<br />
half_len = len `div` 2<br />
<br />
isPalindrome' list = f_part == reverse s_part<br />
where <br />
len = length list<br />
half_len = len `div` 2<br />
(f_part, s_part') = splitAt half_len list<br />
s_part = drop (len `mod` 2) s_part'<br />
</haskell><br />
<br />
<br />
Using Control.Arrows (&&&) fan out operator.<br />
<br />
With monomorphism restriction:<br />
<br />
<haskell><br />
isPalindrome1 xs = (uncurry (==) . (id &&& reverse)) xs<br />
</haskell><br />
<br />
Point free with no monomorphism restriction:<br />
<br />
<haskell><br />
isPalindrome1 = (uncurry (==) . (id &&& reverse))<br />
</haskell></div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/5&diff=5476599 questions/Solutions/52012-11-30T19:54:26Z<p>Davorak: tagged the implementation found in prelude.</p>
<hr />
<div>(*) Reverse a list.<br />
<br />
<haskell><br />
reverse :: [a] -> [a]<br />
reverse = foldl (flip (:)) []<br />
</haskell><br />
<br />
The standard definition, found in the Prelude, is concise, but not very readable. Another way to define reverse is:<br />
<br />
<haskell><br />
reverse :: [a] -> [a]<br />
reverse [] = []<br />
reverse (x:xs) = reverse xs ++ [x]<br />
</haskell><br />
<br />
However, this definition is more wasteful than the one in the Prelude, as it repeatedly reconses the result as it is accumulated. The following variation avoids that, and is thus computationally closer to the Prelude version.<br />
<br />
<haskell><br />
reverse :: [a] -> [a]<br />
reverse list = reverse' list []<br />
where<br />
reverse' [] reversed = reversed<br />
reverse' (x:xs) reversed = reverse' xs (x:reversed)<br />
</haskell></div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/4&diff=5476499 questions/Solutions/42012-11-30T19:22:34Z<p>Davorak: Fix formating</p>
<hr />
<div>(*) Find the number of elements of a list.<br />
<br />
<haskell><br />
myLength :: [a] -> Int<br />
myLength [] = 0<br />
myLength (_:xs) = 1 + myLength xs<br />
<br />
myLength' :: [a] -> Int<br />
myLength' list = myLength_acc list 0 -- same, with accumulator<br />
where<br />
myLength_acc [] n = n<br />
myLength_acc (_:xs) n = myLength_acc xs (n + 1)<br />
</haskell><br />
<br />
<haskell><br />
myLength' = foldl (\n _ -> n + 1) 0<br />
myLength'' = foldr (\_ n -> n + 1) 0<br />
myLength''' = foldr (\_ -> (+1)) 0<br />
myLength'''' = foldr ((+) . (const 1)) 0<br />
myLength''''' = foldr (const (+1)) 0<br />
myLength'''''' = foldl (const . (+1)) 0<br />
</haskell><br />
<br />
<haskell><br />
myLength' xs = snd $ last $ zip xs [1..] -- Just for fun<br />
myLength'' = snd . last . (flip zip [1..]) -- Because point-free is also fun<br />
myLength''' = fst . last . zip [1..] -- same, but easier<br />
</haskell><br />
<br />
<haskell><br />
myLength = sum . map (\_->1)<br />
</haskell><br />
<br />
This is <hask>length</hask> in <hask>Prelude</hask>.<br />
<br />
-- length returns the length of a finite list as an Int. <br />
length :: [a] -> Int <br />
length [] = 0 <br />
length (_:l) = 1 + length l<br />
<br />
The Prelude for Haskell 2010 can be found [http://www.haskell.org/onlinereport/haskell2010/haskellch9.html#x16-1710009 here].</div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/4&diff=5476399 questions/Solutions/42012-11-30T19:21:50Z<p>Davorak: Grouped all fold solutions together</p>
<hr />
<div>(*) Find the number of elements of a list.<br />
<br />
<haskell><br />
myLength :: [a] -> Int<br />
myLength [] = 0<br />
myLength (_:xs) = 1 + myLength xs<br />
<br />
myLength' :: [a] -> Int<br />
myLength' list = myLength_acc list 0 -- same, with accumulator<br />
where<br />
myLength_acc [] n = n<br />
myLength_acc (_:xs) n = myLength_acc xs (n + 1)<br />
</haskell><br />
<br />
<haskell><br />
myLength' = foldl (\n _ -> n + 1) 0<br />
myLength'' = foldr (\_ n -> n + 1) 0<br />
myLength''' = foldr (\_ -> (+1)) 0<br />
myLength'''' = foldr ((+) . (const 1)) 0<br />
myLength''''' = foldr (const (+1)) 0<br />
myLength'''''' = foldl (const . (+1)) 0<br />
</haskell><br />
<br />
<haskell><br />
myLength' xs = snd $ last $ zip xs [1..] -- Just for fun<br />
myLength'' = snd . last . (flip zip [1..]) -- Because point-free is also fun<br />
myLength''' = fst . last . zip [1..] -- same, but easier<br />
</haskell><br />
<br />
<haskell><br />
myLength = sum . map (\_->1)<br />
</haskell><br />
<br />
This is <hask>length</hask> in <hask>Prelude</hask>.<br />
<br />
-- length returns the length of a finite list as an Int. <br />
length :: [a] -> Int <br />
length [] = 0 <br />
length (_:l) = 1 + length l<br />
<br />
The Prelude for Haskell 2010 can be found [http://www.haskell.org/onlinereport/haskell2010/haskellch9.html#x16-1710009 here].</div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/4&diff=5476299 questions/Solutions/42012-11-30T19:21:02Z<p>Davorak: There was a section for the length function in the prelude but it was missing so I took the function from the 2010 report and link to it.</p>
<hr />
<div>(*) Find the number of elements of a list.<br />
<br />
<haskell><br />
myLength :: [a] -> Int<br />
myLength [] = 0<br />
myLength (_:xs) = 1 + myLength xs<br />
<br />
myLength' :: [a] -> Int<br />
myLength' list = myLength_acc list 0 -- same, with accumulator<br />
where<br />
myLength_acc [] n = n<br />
myLength_acc (_:xs) n = myLength_acc xs (n + 1)<br />
</haskell><br />
<br />
<haskell><br />
myLength' = foldl (\n _ -> n + 1) 0<br />
myLength'' = foldr (\_ n -> n + 1) 0<br />
myLength''' = foldr (\_ -> (+1)) 0<br />
myLength'''' = foldr ((+) . (const 1)) 0<br />
myLength''''' = foldr (const (+1)) 0<br />
</haskell><br />
<br />
<haskell><br />
myLength' xs = snd $ last $ zip xs [1..] -- Just for fun<br />
myLength'' = snd . last . (flip zip [1..]) -- Because point-free is also fun<br />
myLength''' = fst . last . zip [1..] -- same, but easier<br />
</haskell><br />
<br />
<haskell><br />
myLength = sum . map (\_->1)<br />
</haskell><br />
<br />
This is <hask>length</hask> in <hask>Prelude</hask>.<br />
<br />
-- length returns the length of a finite list as an Int. <br />
length :: [a] -> Int <br />
length [] = 0 <br />
length (_:l) = 1 + length l<br />
<br />
The Prelude for Haskell 2010 can be found [http://www.haskell.org/onlinereport/haskell2010/haskellch9.html#x16-1710009 here].<br />
<br />
A fancier one! :-)<br />
<haskell><br />
myLength = foldl (const . (+1)) 0<br />
</haskell></div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/3&diff=5476199 questions/Solutions/32012-11-30T18:58:04Z<p>Davorak: Reformat solution to fit on page</p>
<hr />
<div>(*) Find the K'th element of a list. The first element in the list is number 1.<br />
<br />
This is (almost) the infix operator !! in Prelude, which is defined as:<br />
<br />
<haskell><br />
(!!) :: [a] -> Int -> a<br />
(x:_) !! 0 = x<br />
(_:xs) !! n = xs !! (n-1)<br />
</haskell><br />
<br />
Except this doesn't quite work, because !! is zero-indexed, and element-at should be one-indexed. So:<br />
<br />
<haskell><br />
elementAt :: [a] -> Int -> a<br />
elementAt list i = list !! (i-1)<br />
</haskell><br />
<br />
Or without using the infix operator:<br />
<br />
<haskell><br />
elementAt' :: [a] -> Int -> a<br />
elementAt' (x:_) 1 = x<br />
elementAt' [] _ = error "Index out of bounds"<br />
elementAt' (_:xs) k<br />
| k < 1 = error "Index out of bounds"<br />
| otherwise = elementAt' xs (k - 1)<br />
</haskell><br />
<br />
Alternative version:<br />
<br />
<haskell><br />
elementAt'' :: [a] -> Int -> a<br />
elementAt'' (x:_) 1 = x<br />
elementAt'' (_:xs) i = elementAt'' xs (i - 1)<br />
elementAt'' _ _ = error "Index out of bounds"<br />
</haskell><br />
'''This does not work correctly on invalid indexes and infinite lists, e.g.:'''<br />
<haskell><br />
elementAt'' [1..] 0<br />
</haskell><br />
<br />
A few more solutions using prelude functions:<br />
<br />
<haskell><br />
elementAt'' xs n | length xs < n = error "Index out of bounds"<br />
| otherwise = fst . last $ zip xs [1..n] <br />
</haskell><br />
<br />
<haskell><br />
elementAt''' xs n = head $ foldr ($) xs <br />
$ replicate (n - 1) tail<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w' xs n = last . take n $ xs -- wrong<br />
-- Main> map (elementAt_w' [1..4]) [1..10]<br />
-- [1,2,3,4,4,4,4,4,4,4]<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w'' xs n = head . reverse . take n $ xs -- wrong<br />
-- Main> map (elementAt_w'' [1..4]) [1..10]<br />
-- [1,2,3,4,4,4,4,4,4,4]<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w''' xs n = head . drop (n - 1) $ xs -- wrong<br />
-- Main> map (elementAt_w''' [1..4]) [0..10]<br />
-- [1,1,2,3,4,*** Exception: Prelude.head: empty list<br />
</haskell><br />
<br />
or <hask>elementAt_w'</hask> correctly in point-free style:<br />
<haskell><br />
elementAt_w'pf = (last .) . take . (+ 1)<br />
</haskell><br />
<br />
Pedantic note: the above definition of <hask>elementAt_w'pf</hask> does not conform to the order of arguments specified by the question, but the following does:<br />
<haskell><br />
elementAt_w'pf' = flip $ (last .) . take . (+ 1)<br />
</haskell></div>Davorakhttps://wiki.haskell.org/index.php?title=99_questions/Solutions/3&diff=5476099 questions/Solutions/32012-11-30T18:56:45Z<p>Davorak: Add new problem solution, reorder solutions to group all like solutions together.</p>
<hr />
<div>(*) Find the K'th element of a list. The first element in the list is number 1.<br />
<br />
This is (almost) the infix operator !! in Prelude, which is defined as:<br />
<br />
<haskell><br />
(!!) :: [a] -> Int -> a<br />
(x:_) !! 0 = x<br />
(_:xs) !! n = xs !! (n-1)<br />
</haskell><br />
<br />
Except this doesn't quite work, because !! is zero-indexed, and element-at should be one-indexed. So:<br />
<br />
<haskell><br />
elementAt :: [a] -> Int -> a<br />
elementAt list i = list !! (i-1)<br />
</haskell><br />
<br />
Or without using the infix operator:<br />
<br />
<haskell><br />
elementAt' :: [a] -> Int -> a<br />
elementAt' (x:_) 1 = x<br />
elementAt' [] _ = error "Index out of bounds"<br />
elementAt' (_:xs) k<br />
| k < 1 = error "Index out of bounds"<br />
| otherwise = elementAt' xs (k - 1)<br />
</haskell><br />
<br />
Alternative version:<br />
<br />
<haskell><br />
elementAt'' :: [a] -> Int -> a<br />
elementAt'' (x:_) 1 = x<br />
elementAt'' (_:xs) i = elementAt'' xs (i - 1)<br />
elementAt'' _ _ = error "Index out of bounds"<br />
</haskell><br />
'''This does not work correctly on invalid indexes and infinite lists, e.g.:'''<br />
<haskell><br />
elementAt'' [1..] 0<br />
</haskell><br />
<br />
A few more solutions using prelude functions:<br />
<br />
<haskell><br />
elementAt'' xs n | length xs < n = error "Index out of bounds"<br />
| otherwise = fst . last $ zip xs [1..n] <br />
</haskell><br />
<br />
<haskell><br />
elementAt''' xs n = head $ foldr ($) xs $ replicate (n - 1) tail<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w' xs n = last . take n $ xs -- wrong<br />
-- Main> map (elementAt_w' [1..4]) [1..10]<br />
-- [1,2,3,4,4,4,4,4,4,4]<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w'' xs n = head . reverse . take n $ xs -- wrong<br />
-- Main> map (elementAt_w'' [1..4]) [1..10]<br />
-- [1,2,3,4,4,4,4,4,4,4]<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w''' xs n = head . drop (n - 1) $ xs -- wrong<br />
-- Main> map (elementAt_w''' [1..4]) [0..10]<br />
-- [1,1,2,3,4,*** Exception: Prelude.head: empty list<br />
</haskell><br />
<br />
or <hask>elementAt_w'</hask> correctly in point-free style:<br />
<haskell><br />
elementAt_w'pf = (last .) . take . (+ 1)<br />
</haskell><br />
<br />
Pedantic note: the above definition of <hask>elementAt_w'pf</hask> does not conform to the order of arguments specified by the question, but the following does:<br />
<haskell><br />
elementAt_w'pf' = flip $ (last .) . take . (+ 1)<br />
</haskell></div>Davorakhttps://wiki.haskell.org/index.php?title=Talk:99_questions/Solutions/3&diff=54759Talk:99 questions/Solutions/32012-11-30T18:52:51Z<p>Davorak: New page: It seems like the questions should be changed to use 0 as the first index of the list. I think the only reason that it is 1 currently is because that is what it was in the prolog problems ...</p>
<hr />
<div>It seems like the questions should be changed to use 0 as the first index of the list. I think the only reason that it is 1 currently is because that is what it was in the Prolog problem set. Is there any reason to maintain this if Haskell starts at index 0 normally?</div>Davorakhttps://wiki.haskell.org/index.php?title=The_Monad.Reader&diff=54711The Monad.Reader2012-11-21T19:48:28Z<p>Davorak: I had to find the previous issues through google.</p>
<hr />
<div>[[Category:Community]]<br />
The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but more enduring than a wiki page or blog post. There have been a wide variety of articles, including: exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.<br />
<br />
'''Please note that the Monad.Reader has moved to [http://themonadreader.wordpress.com http://themonadreader.wordpress.com]. This site will no longer be updated.'''<br />
<br />
Older issues can be found [[The_Monad.Reader/Previous_issues|here.]]</div>Davorakhttps://wiki.haskell.org/index.php?title=GHC/CloudAndHPCHaskell&diff=45221GHC/CloudAndHPCHaskell2012-04-11T22:24:45Z<p>Davorak: </p>
<hr />
<div>= Haskell in the Cloud =<br />
<br />
This page serves to discuss the architecture of Cloud and High Performance Computing (HPC) frameworks, possibly with an emphasis on the distributed execution and network model.<br />
<br />
Subpages: <br />
* http://haskell.org/haskellwiki/GHC/CloudAndHPCHaskell/Transport<br />
<br />
== Relevant papers ==<br />
* [http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/remote.pdf Towards Haskell in the Cloud], Jeff Epstein, Andrew Black, and Simon Peyton Jones, ICFP 2011<br />
<br />
* [http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/epstein-thesis.pdf Functional programming for the data centre], Jeff Epstein, MPhil thesis, 2011<br />
<br />
* [http://www.macs.hw.ac.uk/~pm175/papers/Maier_Trinder_IFL2011_XT.pdf Implementing a High-level Distributed-Memory Parallel Haskell in Haskell], Patrick Maier and Phil Trinder, submitted to IFL 2011<br />
<br />
* [http://www.macs.hw.ac.uk/~mka19/paperreport.pdf Architecture Aware Parallel Programming in Glasgow Parallel Haskell (GPH)] M. KH. Aswad, P. W. Trinder, H. W. Loidl, Tech. Rep. 0086, Heriot-Watt University, Edinburgh, UK.<br />
<br />
== Peter Braam's thoughts about HPC & Cloud Compute Frameworks ==<br />
<br />
I want to describe here what is taking place in HPC and cloud computing, and how I think it could impact the architecture of the Haskell packages we are discussing. <br />
<br />
In HPC, the 1990s saw the emergence of clusters for computing with a distributed-memory model, and MPI became the de-facto standard for programming them some 10 years later. Then multicore entered the game, for which OpenMP became the software and communication "standard", and more recently GPGPUs (which have CUDA and OpenCL) and other accelerators (FPGA boards are seeing a revival). Moreover, other many-core chips which have a NOC (network on chip) are becoming important. It is likely that these hybrid, sometimes even re-configurable, architectures will remain prominent for the next 10 years or so.<br />
<br />
The HPC community is looking for a software platform to program these complex systems within one model - things like CUDA + OpenMP + MPI are way too complicated. The architecture of the system must be made a first-class citizen in further considerations for that to succeed. The architecture should be some kind of data structure, influenced by the hardware industry, that describes the networks - with their interfaces and topologies - that connect nodes, and includes the various kinds of memories present in the node, the CPU chips with cores, and accelerators with their memories and NOCs. As an example of this interest, AMD is developing the so-called FSA architecture, which does this at least for a node. It provides very low-level interfaces to the components for instruction / data / thread dispatch and scheduling. Elements of such architecture descriptions exist (e.g. LLNL "genders" and resources associated with schedulers), but nowhere close to what we need.<br />
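As a very rough illustration, such an architecture description could be a first-class Haskell data structure along the following lines. This is a hypothetical sketch: none of these type or field names correspond to any existing library.<br />
<br />
```haskell
-- Hypothetical sketch of an architecture description as a data structure.
-- All names are invented for illustration; no existing API is implied.
type NodeId = Int

data MemKind = DRAM | GPUMem | Scratchpad deriving Show

data Memory = Memory
  { memKind  :: MemKind
  , memBytes :: Integer
  } deriving Show

data Compute
  = Cores Int           -- number of conventional CPU cores
  | GPU Int Memory      -- streaming multiprocessors and device memory
  | FPGA Int            -- e.g. available logic elements
  deriving Show

data Node = Node
  { nodeMemories :: [Memory]
  , nodeCompute  :: [Compute]
  } deriving Show

data Topology = FatTree | Torus Int | NetworkOnChip deriving Show

data Network = Network
  { topology :: Topology
  , links    :: [(NodeId, NodeId)]
  } deriving Show

data Architecture = Architecture
  { nodes    :: [(NodeId, Node)]
  , networks :: [Network]
  } deriving Show
```
<br />
A scheduler could then consult such a value when deciding where to create running entities, instead of relying on compiler flags and runtime hints.<br />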
<br />
A second key element is the scheduler. Until recently, things like HPC batch schedulers or the Hadoop job scheduler simply started jobs. Now, much more needs to be scheduled: on different cores, on accelerator boards, and so on. The scheduler is responsible for creating running entities on the components of the architecture; importantly, these running entities normally read input data from a distributed file system or database. Currently scheduling is another messy story - one thing starts processes on remote nodes, another element spawns threads, and yet other facilities deal with the accelerators.<br />
<br />
The networks are increasingly complicated too. There are now facilities to send packets directly from CPU caches to Infiniband networks, and people are working to set up networking between the accelerators and the cluster network (often IB). Moreover, most of the architectures are NUMA now. There are APIs (like CCI) that aim to unify these various transports into one. As for MPI, one typically gets only about 50% of the available bandwidth; it is not a good platform anymore (see http://gasnet.cs.berkeley.edu/ for a comparison with a PGAS-oriented transport created for UPC, or see ETI's web pages about SWARM).<br />
<br />
Unlike client-server jobs and user applications running third-party network protocols, cloud and HPC jobs would probably be well served by an SPMD-like model even on these complex architectures. I think that in SPMD models much communication is the exchange of memory, with the one caveat that sending small messages is very costly. With that we can hide the network almost completely from the programmer, just like DPH hides the cores.<br />
<br />
Combining DPH with (i) networking, (ii) a BSP (Bulk Synchronous Parallel) mode, and (iii) other dispatch targets after vectorizing in DPH (e.g. to Accelerate) - probably totally obvious ideas to you - would provide the parallel programs that the Architecture and Scheduler can run. Encapsulating network and node failures in some monad should be possible, and would, as mentioned, be an important component, or hopefully an orthogonal extension. The OTP failure model is perfect for purely functional situations, but once state is involved, things like checkpoints and exactly-once semantics make it more involved.<br />
<br />
This is in some sense just elaborating on what SPJ said at a CUFP 2011 breakfast last week. I think that getting rid of compiler flags and runtime hints, and instead leveraging a built-in scheduler / architecture model, would be a good first step.<br />
<br />
== Phil Trinder's thoughts about large scale architectures ==<br />
<br />
I agree with Peter above that architectures are probably going to show through into the programming language. The trick is to make them visible in a suitably abstract way.<br />
<br />
We have started some work in this area where we use an abstract notion of communication distance to characterise the architecture. So a small unit of computation may only be communicated a small distance. However, unlike Cloud Haskell, we don't specify *which* of the many locations at that communication distance the work is allocated to.<br />
<br />
Some initial thoughts appear in the third paper above.<br />
<br />
== Other Sources of information ==<br />
<br />
* [http://hackage.haskell.org/trac/ghc/wiki/ErlangInHaskell GHC trac page on Erlang in Haskell/Cloud Haskell]</div>Davorakhttps://wiki.haskell.org/index.php?title=Video_presentations&diff=44290Video presentations2012-01-30T20:26:30Z<p>Davorak: Added video links to Haskell Implementors Workshops 2009, 2010, 2011</p>
<hr />
<div>[[Category:Tutorials]]<br />
Collected videos of Haskell tutorials and conference presentations, sorted by topic.<br />
<br />
For more recent videos, check:<br />
<br />
* '''[http://vimeo.com/channels/haskell The Haskell Vimeo Channel]'''<br />
* '''[http://vimeo.com/channels/galois The Galois Tech Talk Vimeo Channel]'''<br />
* '''[http://channel9.msdn.com/tags/Haskell/ Haskell videos on MSDN Channel 9]'''<br />
<br />
Maintained by the community.<br />
<br />
== Introductions to Haskell ==<br />
;<br />
http://panther2.video.blip.tv/OSCON-OSCON2007SimonPeytonJonesATasteOfHaskellPartI551-440.jpg<br />
A Taste of Haskell<br />
:[http://blip.tv/file/324976 Part 1] ([http://blip.tv/file/get/OSCON-OSCON2007SimonPeytonJonesATasteOfHaskellPartI455.flv Download])<br />
:[http://blip.tv/file/325646 Part 2] ([http://blip.tv/file/get/OSCON-OSCON2007SimonPeytonJonesATasteOfHaskellPartII749.flv Download])<br />
:[http://conferences.oreillynet.com/presentations/os2007/os_peytonjones.pdf Slides]<br />
:Simon Peyton-Jones, OSCON, July 2007.<br />
<blockquote><br />
Haskell is the world's leading purely functional programming language<br />
that offers a radical and elegant attack on the whole business of<br />
writing programs. In the last two or three years there has been an<br />
explosion of interest in Haskell, and it is now being used for a<br />
bewildering variety of applications. In this tutorial, I will try to<br />
show you why programming in Haskell is such fun, and how it makes you<br />
think about programming in a new way.<br />
</blockquote><br />
<br />
;[http://channel9.msdn.com/shows/Going+Deep/Lecture-Series-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-1/ Functional Programming Fundamentals - Erik Meijer]<br />
:Erik's 13-part lecture series on Haskell, using [http://www.amazon.com/Programming-Haskell-Graham-Hutton/dp/0521692695/ref=sr_1_1?ie=UTF8&qid=1287780326&sr=8-1 Programming in Haskell] by Graham Hutton.<br />
<br />
;[http://channel9.msdn.com/showpost.aspx?postid=326762 Programming language nirvana]<br />
:Simon Peyton-Jones, Erik Meijer, MSR, July 2007.<br />
<br />
;[http://ulf.wiger.net/weblog/2008/02/29/peyton-jones-taming-effects-the-next-big-challenge/ Taming Effects - The Next Big Challenge]<br />
:Simon Peyton-Jones at Ericsson, February 2008.<br />
<br />
;[http://video.google.com/videoplay?docid=-4167170843018186532 Faith, Evolution, and Programming Languages]<br />
:Phil Wadler, April 2007. Slides are [http://homepages.inf.ed.ac.uk/wadler/topics/gj.html#oopsla here]<br />
<br />
;[http://port25.technet.com/archive/2007/09/26/haskell-in-the-hallway-sam-interviews-simon-peyton-jones.aspx Haskell in the Hallway]<br />
: An interview with SPJ at OSCON, Sep 2007.<br />
<br />
;[http://video.s-inf.de/#FP.2005-SS-Giesl.(COt).HD_Videoaufzeichnung Lecture Functional Programming]<br />
: A computer science lecture at RWTH University Aachen (Germany) dealing with functional programming and haskell (including theoretical background)<br />
<br />
;[http://www.dotnetrocks.com/default.aspx?ShowNum=310 Simon Peyton Jones on Functional Programming and Haskell (Audio)]<br />
:Simon explains laziness, purity, parallelism, side effects, monads, software transactional memory<br />
<br />
;[http://ulf.wiger.net/weblog/2008/02/29/john-launchbury-high-assurance-software/ High-Assurance Software] <br />
:John Launchbury at Ericsson, 21 February 2008.<br />
<br />
== Haskell Implementors Workshop 2009, 2010, 2011 ==<br />
[[HaskellImplementorsWorkshop]]<br />
<br />
== Haskell Symposium 2008 ==<br />
<br />
[[/Haskell Symposium 2008]] videos. <br />
<br />
== ICFP 2007 and Workshops ==<br />
<br />
; [http://video.google.com/videoplay?docid=-1518197558546337776 The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction using an FPGA.]<br />
:The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction using an FPGA. A research talk given at IFL'2007 in Freiburg. Work by Matthew Naylor and Colin Runciman of the University of York. <br />
<br />
; [http://www.ludd.ltu.se/~pj/icfp2007/ICFP2007.html Selected videos from IFL 2007 and ICFP 2007.]<br />
<br />
; [http://www.ludd.ltu.se/~pj/hw2007/HaskellWorkshop.html All talks from Haskell Workshop 2007.]<br />
<br />
== Advanced topics in functional programming ==<br />
<br />
;[http://video.google.com/videoplay?docid=-4991530385753299192 Type-driven testing in Haskell]<br />
:Simon Peyton Jones talks about QuickCheck and SmallCheck<br />
<br />
;[http://www.youtube.com/user/TheCatsters The Catsters on YouTube]<br />
:Various interesting lectures on category theory, including monads, adjunctions, limits, and a variety of other topics.<br />
<br />
;[http://iba-cg.de/haskell.html Generic Programming in Haskell].<br />
:Johan Jeuring, July 2007.<br />
<br />
;[http://video.google.com/videoplay?docid=-4851250372422374791 Parametric Polymorphism and the Girard-Reynolds Isomorphism]<br />
:Phil Gossett, April 2007.<br />
<br />
;[http://channel9.msdn.com/ShowPost.aspx?PostID=358968#358968 Don't fear the monads]<br />
:Brian Beckman introducing monads<br />
<br />
== Concurrency and parallelism ==<br />
<br />
;[http://www.bayfp.org/blog/?p=25 Concurrent and multicore programming in Haskell] [http://blip.tv/file/913860 (alternate link, just video)]<br />
:[http://www.serpentine.com/blog/ Bryan O’Sullivan] at [http://bayfp.org Bay Area Functional Programmers], 8 May 2008<br />
<br />
;[http://ulf.wiger.net/weblog/2008/02/29/satnam-singh-declarative-programming-techniques-for-many-core-architectures/ Declarative Programming Techniques for Many-Core Architectures] <br />
:Satnam Singh at Ericsson, 21 February 2008<br />
<br />
;[http://www.blip.tv/file/317758/ Transactional Memory for Concurrent Programming]<br />
:Simon Peyton-Jones, OSCON, July 2007.<br />
<br />
;[http://channel9.msdn.com/Showpost.aspx?postid=231495 Programming in the Age of Concurrency: Software Transactional Memory]<br />
:Simon Peyton-Jones and Tim Harris, September 2006.<br />
<br />
;[http://www.londonhug.net/2007/09/25/nested-data-parallelism-video-returns/ Nested Data Parallelism in Haskell]<br />
:Simon Peyton-Jones, [http://www.londonhug.net/ London HUG], May 2007.<br />
:[http://research.microsoft.com/~simonpj/papers/ndp/NdpSlides.pdf Slides] (pdf)<br />
<br />
== The Web ==<br />
<br />
;[http://www.bayfp.org/blog/2007/10/16/alex-jacobson-on-happs-videos-slides/ HAppS]<br />
:Alex Jacobson, [http://www.bayfp.org/blog Bay Area FPers], Oct 2007.<br />
<br />
== Games ==<br />
<br />
;[http://www.londonhug.net/2007/09/24/better-video-for-games-in-haskell/ Games in Haskell]<br />
:2007 meeting of the London Haskell User Group. Matthew Sackman and Tristan Allwood of Imperial College talk about building 3D games in Haskell.<br />
<br />
;[http://uk.youtube.com/watch?v=uziCn2SBbxs Data parallel physics engine]<br />
:2008, Hpysics' visualization code now performs double buffering<br />
<br />
;[http://uk.youtube.com/watch?v=mwge13bX9W8 Yampa Space Invaders]<br />
:Space Invaders, using functional reactive programming.<br />
<br />
;[http://uk.youtube.com/watch?v=zqFgQiPKtOI Monadius]<br />
:A 2D space game<br />
<br />
;[http://uk.youtube.com/watch?v=gVLFGQGRsDw Super 'Nario' Bros]<br />
:Super 'Nario' Brothers in Haskell<br />
<br />
;[http://uk.youtube.com/watch?v=0jYdu2u8gAU Frag]<br />
: Frag<br />
<br />
== The ICFP contest ==<br />
<br />
;[http://video.google.com/videoplay?docid=6419094369756184531 2006 ICFP contest results]<br />
:ICFP, 2006<br />
<br />
;[http://www.ludd.ltu.se/~pj/icfp2007/ICFP%20contest%202007.mov 2007 ICFP contest results]<br />
:ICFP, 2007<br />
<br />
== Livecoding Haskell ==<br />
<br />
;[http://www.youtube.com/watch?v=045422s6xik Data Driven Programming in Haskell]<br />
: Uses the unscripted, “real-world” toy project of performing character recognition on an image that many other OCR tools fail on due to very low resolution.<br />
<br />
[[Category:Music]]<br />
<br />
;[http://video.google.de/videoplay?docid=-6594267962912965757&q=hal2+july+2007&total=7&start=0&num=50&so=0&type=search&plindex=3 Music and Sound generation]<br />
:Henning Thielemann, July 2007 in Leipzig, about music and sound generation using SuperCollider, CSound, MIDI and pure Haskell (German)<br />
<br />
;[http://youtube.com/watch?v=xaoLbKWMwoU Haskell music]<br />
:[http://doc.gold.ac.uk/~ma503am/ Yaxu], 2006.<br />
<br />
;[http://youtube.com/watch?v=eLS6GHXWMpA Hacking Haskell music]<br />
:More of Yaxu live coding music and Haskell, 2006.<br />
<br />
;[http://doc.gold.ac.uk/~ma503am/alex/asciirave ASCII Rave in Haskell]<br />
:Yaxu, using words to control the articulation of a physical modelling synthesiser based on the elegant Karplus-Strong algorithm<br />
<br />
== GHC Hackathon presentations ==<br />
<br />
;[http://hackage.haskell.org/trac/ghc/wiki/AboutVideos GHC commentary]<br />
:Simon Peyton Jones and Simon Marlow, 2006.<br />
<br />
== Commercial users ==<br />
<br />
;[http://www.londonhug.net/2008/08/11/video-paradise-a-dsel-for-derivatives-pricing/ Paradise]<br />
:An EDSL in Haskell developed by Credit Suisse for Derivatives Pricing<br />
<br />
== Haskell applications ==<br />
<br />
;[http://www.youtube.com/watch?v=oYdkrOMhFWU Emacs Flymake]<br />
:Daisuke IKEGAMI, a demo of editing a Haskell program in Emacs with on-the-fly syntax and type checking using flymake-mode (see also [http://www.emacswiki.org/cgi-bin/emacs/FlymakeHaskell EmacsWiki:FlymakeHaskell] for details), 11 November 2007<br />
<br />
;[http://video.google.de/videosearch?q=hal2+july+2007 HAL2]<br />
:HAL2 meeting in July 2007 in Leipzig, presenting talks about Generic Programming (English), Eclipse for Haskell (German), Grapefruit GUI (German) and Music+Sound generation (German)<br />
<br />
;[http://ftp.belnet.be/mirrors/FOSDEM/2006/FOSDEM2006-darcs.avi GADTs for darcs]<br />
:David Roundy, FOSDEM, 2006<br />
<br />
;[http://www.londonhug.net/2008/02/02/video-darcs-and-gadts/ Darcs and Generalised Algebraic Data Types]<br />
:Ganesh Sittampalam's talk on Darcs and GADTs<br />
<br />
;[http://www.uwtv.org/programs/displayevent.aspx?rID=2124&fID=368 Functional Image Synthesis]<br />
:Conal Elliott, talk at University of Washington, November 2000<br />
<br />
;[http://www.youtube.com/watch?v=faJ8N0giqzw Tangible Functional Programming]<br />
:Conal Elliott's Google Tech Talk<br />
<br />
;[http://ulf.wiger.net/weblog/2008/02/29/john-hughes-testing-with-quickcheck/ Testing with QuickCheck]<br />
:John Hughes at Ericsson, 21 February 2008<br />
<br />
;[http://ulf.wiger.net/weblog/2008/02/29/simon-peyton-jones-composing-contracts-an-adventure-in-financial-engineering/ Composing Contracts - An Adventure in Financial Engineering] <br />
:Simon Peyton Jones at Ericsson, 21 February 2008.<br />
<br />
;[http://www.ludd.ltu.se/~pj/hw2007/xmonad.mov xmonad]<br />
: Don Stewart at the Haskell Workshop, 2007.<br />
<br />
;[http://www.youtube.com/watch?v=yHd0u6zuWdw Coconut: COde CONstructing User Tool]<br />
:Haskell DSL to produce high-performance SIMD-parallel code<br />
<br />
;[http://covector.blogspot.com/2007/10/functional-augmented-reality.html Augmented reality using Haskell computer vision]<br />
:Functional augmented reality, Alberto Ruiz<br />
<br />
;[http://www.vimeo.com/1983774 Understanding HaskellDB trailer]<br />
<br />
== Other ==<br />
<br />
;[http://www.youtube.com/watch?v=faJ8N0giqzw Tangible Functional Programming: a modern marriage of usability and composability]<br />
:Conal Elliott. Google TechTalk, November 2007</div>Davorakhttps://wiki.haskell.org/index.php?title=Hac_Boston/Projects&diff=44110Hac Boston/Projects2012-01-21T08:33:39Z<p>Davorak: /* Refactoring combinators for Haskell */</p>
<hr />
<div>== Sharing your code ==<br />
<br />
If you need a place to host a project so that others can help with it, we suggest <br />
[http://github.com github], but if you are using darcs, [http://patch-tag.com/ patch-tag] is just dandy as well. <br />
<br />
You can also apply for an account on <br />
[http://community.haskell.org/admin/ the community server].<br />
<br />
== Projects ==<br />
<br />
If you have a project that you want to work on at the Hackathon, please describe it here.<br />
<br />
Since Hackathons are great for teamwork, consider joining one of the projects mentioned below. If you're interested in one of these projects, add your name to the list of hackers under that project.<br />
<br />
<!-- Copy this template<br />
=== Project name ===<br />
<br />
I am a project. Love me.<br />
<br />
Interested in this project:<br />
<br />
* Hacker 1<br />
* Hacker 2<br />
--><br />
<br />
=== Trifecta ===<br />
<br />
[http://hackage.haskell.org/package/trifecta Trifecta] is a library for dealing with both parsing and the ancillary concerns that arise once you have a parser.<br />
<br />
Interested in this project:<br />
<br />
* Edward Kmett<br />
* Doug McClean<br />
* Paul Martel<br />
<br />
=== Machine code analysis tools ===<br />
<br />
Haskell could be a great platform for analyzing and reverse-engineering machine code. We already have disassemblers ([http://hackage.haskell.org/package/hdis86 x86], [https://github.com/copumpkin/charm ARM]), object format parsers ([http://hackage.haskell.org/package/elf ELF], [http://hackage.haskell.org/package/pecoff PE/COFF], [http://hackage.haskell.org/package/macho MachO]), SMT and [http://hackage.haskell.org/package/sbv bitvector] solvers, [http://hackage.haskell.org/package/hoopl dataflow analysis], etc. Let's improve these tools and fill in the gaps.<br />
<br />
Some concrete projects in this area:<br />
<br />
* Write bindings to [http://radare.org/ radare]<br />
* Finish up the [https://github.com/copumpkin/charm charm] disassembler for ARM, and get it on Hackage<br />
* Modify [http://hackage.haskell.org/package/elf elf] to support parsing relocation records<br />
* Add support to the object format libraries for writing data structures back out to disk<br />
* Write a format-agnostic layer on top of the object format libraries<br />
<br />
Interested in this project:<br />
<br />
* Keegan McAllister<br />
* Ben Gamari (ARM support in GHC linker)<br />
<br />
=== G-code backend for Diagrams ===<br />
<br />
[http://projects.haskell.org/diagrams Diagrams] is a nice library for declarative vector graphics. With a [http://linuxcnc.org/docs/html/gcode_main.html G-code] backend, it could be used to control industrial cutting equipment.<br />
<br />
We already have a [https://github.com/kmcallister/gcode G-code output library]. For this project we would need to render Diagrams constructs to the simpler G-code commands.<br />
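As a rough illustration of that rendering step, here is a hedged Haskell sketch that turns a path, already flattened to straight-line segments, into G0/G1 commands. The `Point`, `emitPath`, and `showF` names are invented for this example; this is not the actual Diagrams or gcode library API:

```haskell
import Text.Printf (printf)

-- A path flattened to 2D points (curves would be approximated by
-- line segments before this step).
type Point = (Double, Double)

-- Emit a rapid move (G0) to the first point, then cutting moves (G1)
-- along the remaining segments.
emitPath :: [Point] -> [String]
emitPath []              = []
emitPath ((x0, y0) : ps) =
    ("G0 X" ++ showF x0 ++ " Y" ++ showF y0)
  : [ "G1 X" ++ showF x ++ " Y" ++ showF y | (x, y) <- ps ]

-- Fixed-precision coordinate formatting, as G-code interpreters expect.
showF :: Double -> String
showF = printf "%.3f"
```

A real backend would also have to handle feed rates, tool up/down moves, and approximating Bézier segments, but the core translation is this simple.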
<br />
Interested in this project:<br />
<br />
* Keegan McAllister<br />
* Ben Gamari (relevant hack: https://github.com/bgamari/GGen)<br />
<br />
=== Livecoding and Music ===<br />
<br />
Haskell has libraries for livecoding and music composition. We need more tutorials and tools. Focus will be on [http://slavepianos.org/rd/ut/hsc3-texts/ hsc3] for [http://www.haskell.org/haskellwiki/SuperCollider Supercollider].<br />
<br />
Interested in this project:<br />
<br />
* Tom Murphy (amindfv)<br />
<br />
=== Wide fanout sequences ===<br />
<br />
I'd like to build a drop-in replacement for [http://www.haskell.org/ghc/docs/latest/html/libraries/containers/Data-Sequence.html Data.Sequence] that uses wide-fanout trees, similar to the wide-fanout tries used by Johan Tibell in recent versions of [https://github.com/tibbe/unordered-containers unordered-containers]. The hope is to come up with something that's substantially faster than lists or vectors, even for short lists, while supporting efficient (lg n) concatenation and indexing.<br />
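A minimal sketch of the idea, assuming a fanout of 16 and plain lists at the leaves; a real implementation would use packed arrays and support (lg n) append, and all names here are invented for illustration:

```haskell
-- A wide-fanout sequence sketch: internal nodes cache their size and
-- hold at most 16 children, so indexing walks at most log_16 n nodes.
data WideSeq a
  = Leaf [a]                -- at most 16 elements
  | Node !Int [WideSeq a]   -- cached size; at most 16 children

size :: WideSeq a -> Int
size (Leaf xs)  = length xs
size (Node n _) = n

-- Indexing descends into the child that covers position i, using the
-- cached sizes to skip whole subtrees.
index :: WideSeq a -> Int -> a
index (Leaf xs)   i = xs !! i
index (Node _ cs) i = go cs i
  where
    go (c : rest) j
      | j < size c = index c j
      | otherwise  = go rest (j - size c)
    go [] _ = error "index: out of bounds"

chunk :: [a] -> [[a]]
chunk [] = []
chunk ys = take 16 ys : chunk (drop 16 ys)

-- Build by chunking into leaves, then repeatedly grouping 16 at a time.
fromList :: [a] -> WideSeq a
fromList [] = Leaf []
fromList xs = build (map Leaf (chunk xs))
  where
    build [t] = t
    build ts  = build (map node (chunk ts))
    node cs   = Node (sum (map size cs)) cs

toList :: WideSeq a -> [a]
toList (Leaf xs)   = xs
toList (Node _ cs) = concatMap toList cs

-- Naive O(n) append just to keep the sketch small; the project's whole
-- point is to make this (lg n) by merging tree spines instead.
append :: WideSeq a -> WideSeq a -> WideSeq a
append a b = fromList (toList a ++ toList b)
```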
<br />
Interested in this project:<br />
<br />
* Jan-Willem Maessen<br />
<br />
=== hxournal ===<br />
<br />
[http://ianwookim.org/hxournal hxournal] is a note-taking program being developed in Haskell with gtk2hs. It is modeled after the xournal program, but it is going to have more functionality and better flexibility. <br />
<br />
Interested in this project:<br />
<br />
* Ian-Woo Kim<br />
<br />
=== GObject Introspection for Haskell ===<br />
<br />
GObject Introspection provides machine-readable API descriptions for C libraries. <code>haskell-gi</code> generates Haskell bindings for C libraries using these descriptions.<br />
<br />
http://www.haskell.org/haskellwiki/GObjectIntrospection<br />
<br />
Interested in this project:<br />
<br />
* Dafydd Harries<br />
<br />
=== Refactoring combinators for Haskell ===<br />
<br />
Highly experimental project! Create a code database for Haskell, support querying and transformation via Datalog, and use this to implement a set of refactoring combinators that would allow arbitrary compilation-preserving transformations of Haskell codebases. I posted some more details here: <br />
<br />
http://pchiusano.blogspot.com/2012/01/possible-projects-for-boston-haskell.html<br />
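A very rough sketch of what such refactoring combinators might look like. Everything here is invented for illustration: the "codebase" is just named module sources, and a toy textual rename stands in for a real scope-aware, compilation-preserving transformation backed by a Datalog-queryable code database:

```haskell
-- A codebase as (module name, source text) pairs -- a stand-in for a
-- real code database.
type Codebase = [(String, String)]

-- A refactoring is a partial transformation of the codebase; Nothing
-- models "this refactoring does not apply / would break compilation".
newtype Refactor = Refactor { run :: Codebase -> Maybe Codebase }

-- Sequential composition: apply the first, then the second.
andThen :: Refactor -> Refactor -> Refactor
andThen r1 r2 = Refactor $ \cb -> run r1 cb >>= run r2

-- Alternation: try the first; fall back to the second on failure.
orElse :: Refactor -> Refactor -> Refactor
orElse r1 r2 = Refactor $ \cb -> maybe (run r2 cb) Just (run r1 cb)

-- Toy "rename" refactoring via naive textual replacement.
rename :: String -> String -> Refactor
rename old new = Refactor $ \cb ->
  Just [ (m, replaceAll old new src) | (m, src) <- cb ]

replaceAll :: String -> String -> String -> String
replaceAll old new s
  | null old  = s
  | otherwise = go s
  where
    n = length old
    go [] = []
    go str@(c : cs)
      | take n str == old = new ++ go (drop n str)
      | otherwise         = c : go cs
```

The interesting part of the project would be giving `run` a semantics where composed refactorings provably preserve compilation, which the textual sketch obviously does not.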
<br />
Interested in this project:<br />
<br />
* Paul Chiusano<br />
* Patrick Wheeler - while unfortunately not attending, I wanted to register my interest.<br />
<br />
=== Smarter evaluation strategies for lazy languages ===<br />
<br />
Highly experimental project! Whiteboard ideas for a specializing, strictness propagating evaluation strategy with the same termination properties as normal order evaluation, but better space usage. Then implement a prototype. The general idea is to propagate additional strictness information at runtime so that evaluation becomes more predictable for polymorphic and higher order code. Some more details here:<br />
<br />
http://pchiusano.blogspot.com/2012/01/possible-projects-for-boston-haskell.html<br />
<br />
Interested in this project:<br />
<br />
* Paul Chiusano<br />
<br />
== Experience ==<br />
<br />
Please list projects with which you are familiar. This way, people know whom to contact for more information or guidance on a particular project.<br />
<br />
{| class="wikitable"<br />
! Name<br />
! Projects<br />
|-<br />
| edwardk<br />
| [http://github.com/ekmett lots of projects], mtl, general libraries<br />
|-<br />
| ezyang<br />
| ghc<br />
|-<br />
| keegan<br />
| [https://github.com/kmcallister various], some GHC internals, FFI tricks<br />
|-<br />
| jmaessen<br />
| containers, unordered-containers, eager haskell, ancient haskell history<br />
|-<br />
| mokus<br />
| [https://github.com/mokus0 these], HOC, lambdabot, several others<br />
|}</div>Davorak