https://wiki.haskell.org/api.php?action=feedcontributions&user=Gphilip&feedformat=atomHaskellWiki - User contributions [en]2022-12-03T08:08:07ZUser contributionsMediaWiki 1.31.7https://wiki.haskell.org/index.php?title=User_talk:Gphilip&diff=54136User talk:Gphilip2012-09-30T22:43:27Z<p>Gphilip: </p>
<hr />
<div>Hi,<br />
<br />
I noticed that you switched the s and the a at the newtype page; the order in which you write these is not really important, but they are now reversed compared to the State type in the State library. See <br />
http://hackage.haskell.org/packages/archive/transformers/latest/doc/html/Control-Monad-Trans-State-Lazy.html#v:runState<br />
<br />
[[User:Henk-Jan van Tuyl|Henk-Jan van Tuyl]] 21:27, 30 September 2012 (UTC)<br />
<br />
: I switched these so that they match the order in the definition of <hask>runState</hask> earlier on in the page. [[User:Gphilip|Gphilip]] 22:43, 30 September 2012 (UTC)</div>Gphiliphttps://wiki.haskell.org/index.php?title=Newtype&diff=54134Newtype2012-09-30T18:32:14Z<p>Gphilip: Fixed spelling.</p>
<hr />
<div>A <hask>newtype</hask> declaration creates a new type in much the same way as <hask>data</hask>. The syntax and usage of newtypes are virtually identical to those of data declarations - in fact, you can replace the <hask>newtype</hask> keyword with <hask>data</hask> and it'll still compile; indeed, there's a good chance your program will still work. The converse is not true, however - <hask>data</hask> can only be replaced with <hask>newtype</hask> if the type has ''exactly one constructor'' with ''exactly one field'' inside it.<br />
<br />
Some examples:<br />
<br />
<haskell><br />
newtype Fd = Fd CInt<br />
-- data Fd = Fd CInt would also be valid<br />
<br />
-- newtypes can have deriving clauses just like normal types<br />
newtype Identity a = Identity a<br />
deriving (Eq, Ord, Read, Show)<br />
<br />
-- record syntax is still allowed, but only for one field<br />
newtype State s a = State { runState :: s -> (s, a) }<br />
<br />
-- this is *not* allowed:<br />
-- newtype Pair a b = Pair { pairFst :: a, pairSnd :: b }<br />
-- but this is:<br />
data Pair a b = Pair { pairFst :: a, pairSnd :: b }<br />
-- and so is this:<br />
newtype Pair' a b = Pair' (a, b)<br />
</haskell><br />
<br />
Sounds pretty limited! So why does anyone use <hask>newtype</hask>?<br />
<br />
== The short version ==<br />
<br />
The restriction to one constructor with one field means that the new type and the type of the field are in direct correspondence:<br />
<br />
<haskell><br />
State :: (s -> (s, a)) -> State s a<br />
runState :: State s a -> (s -> (s, a))<br />
</haskell><br />
<br />
or in mathematical terms they are ''isomorphic''. This means that after the type is checked at compile time, at run time the two types can be treated essentially the same, without the overhead or indirection normally associated with a data constructor. So if you want to declare different type class instances for a particular type, or want to make a type abstract, you can wrap it in a <hask>newtype</hask> and it'll be considered distinct by the type-checker, but identical at runtime. You can then use all sorts of deep trickery like phantom or recursive types without worrying about GHC shuffling buckets of bytes for no reason.<br />
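For example (a made-up sketch, not part of the original page), two newtype wrappers around <hask>Int</hask> can carry different instances while being plain <hask>Int</hask>s at runtime:

```haskell
-- Two zero-cost wrappers around Int that differ only in their Ord
-- instance; the type-checker keeps them apart, the runtime does not.
newtype Ascending  = Ascending Int  deriving (Eq, Show)
newtype Descending = Descending Int deriving (Eq, Show)

instance Ord Ascending where
  compare (Ascending a) (Ascending b) = compare a b

instance Ord Descending where
  compare (Descending a) (Descending b) = compare b a  -- reversed order

main :: IO ()
main = do
  print (max (Ascending 1)  (Ascending 2))   -- Ascending 2
  print (max (Descending 1) (Descending 2))  -- Descending 1
```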
<br />
== The messy bits ==<br />
<br />
Why doesn't everyone just use <hask>newtype</hask> whenever they can, then? Well, quite often they do. But there is a subtle yet semantically significant difference. When we create a data type supposedly isomorphic to <hask>Bool</hask> like so:<br />
<br />
<haskell>data Any = Any { getAny :: Bool }</haskell><br />
<br />
we actually find that the isomorphism isn't exact:<br />
<br />
<haskell><br />
Any . getAny $ Any True = Any True -- okay, fine<br />
Any . getAny $ Any False = Any False -- also fine<br />
Any . getAny $ Any ⊥ = Any ⊥ -- so far so good<br />
Any . getAny $ ⊥ = Any ⊥ -- wait a second...<br />
</haskell><br />
([[Bottom|what's that upside-down T thing?]])<br />
<br />
The problem is that types declared with the <hask>data</hask> keyword are ''lifted'' - that is, they contain their own ⊥ value that is distinct from all the others. In this example, we have <hask>⊥ :: Any</hask> distinct from <hask>Any ⊥ :: Any</hask>. What this means is that the following pattern match:<br />
<br />
<haskell><br />
case x of<br />
Any _ -> ()<br />
</haskell><br />
<br />
must evaluate its argument, even though it seems like the pattern match can't fail: we must check whether <hask>x</hask> is <hask>⊥</hask> or <hask>Any y</hask> for some <hask>y</hask>.<br />
<br />
This is intrinsic to Haskell's lazy, non-total semantics. The problem is that we must now track whether a value is wrapped in a constructor or not, which means keeping those extra constructors around at runtime even when all they do is distinguish an extra bottom value we don't even want. So in order to be consistent, but also to allow the exact isomorphism to be preserved, Haskell provides the <hask>newtype</hask> keyword, for the construction of unlifted types. Pattern-matching on a newtype constructor doesn't do any work, because there is no separate ⊥, so every value in the type is wrapped in the constructor.<br />
<br />
== What about strict types? ==<br />
<br />
You may notice that a type like<br />
<br />
<haskell>data Identity' a = Identity' !a</haskell><br />
<br />
has <hask>Identity' ⊥ = ⊥</hask> and so you might think you have your coveted isomorphism. But all the strictness annotation means is that <hask>Identity' ⊥</hask> really means <hask>Identity' $! ⊥</hask> - the semantics of the type are fundamentally the same, and in particular the case expression still forces the value.<br />
<br />
== Examples ==<br />
<br />
<haskell><br />
module Foo where<br />
<br />
data Foo1 = Foo1 Int -- Defines Foo1 constructor that lazily refers to an Int<br />
data Foo2 = Foo2 !Int -- Defines Foo2 constructor that strictly refers to an Int<br />
newtype Foo3 = Foo3 Int -- Defines Foo3 constructor that is synonymous with Int<br />
<br />
-- Argument is lazy and ignored, so <br />
-- undefined does not cause failure since<br />
-- the constructor pattern match succeeds.<br />
x1 = case Foo1 undefined of<br />
Foo1 _ -> 1 -- 1<br />
<br />
-- Argument is strict (because of !), so<br />
-- undefined does cause failure.<br />
x2 = case Foo2 undefined of<br />
Foo2 _ -> 1 -- undefined<br />
<br />
-- The newtype behaves like Int, see yInt below<br />
x3 = case Foo3 undefined of<br />
Foo3 _ -> 1 -- 1<br />
<br />
-- Constructor pattern match fails<br />
y1 = case undefined of<br />
Foo1 _ -> 1 -- undefined<br />
<br />
-- Constructor pattern match fails<br />
y2 = case undefined of<br />
Foo2 _ -> 1 -- undefined<br />
<br />
-- The newtype behaves like Int, there is no<br />
-- constructor at runtime.<br />
y3 = case undefined of<br />
Foo3 _ -> 1 -- 1<br />
<br />
-- Demonstration of Int behavior<br />
int :: Int<br />
int = undefined<br />
<br />
yInt = case int of<br />
_ -> 1 -- 1<br />
</haskell><br />
<br />
== See also ==<br />
<br />
The Haskell 98 Report defines newtypes in [http://www.haskell.org/onlinereport/decls.html#sect4.2.3 section 4.2.3].<br />
<br />
[[Category:FAQ]]<br />
[[Category:Language]]</div>Gphiliphttps://wiki.haskell.org/index.php?title=Solution4.html&diff=53954Solution4.html2012-09-21T18:01:53Z<p>Gphilip: Another definition of grandparent.</p>
<hr />
<div><haskell><br />
parent :: MonadPlus m => Sheep -> m Sheep<br />
parent s = (toMonad (father s)) `mplus` (toMonad (mother s))<br />
<br />
grandparent :: MonadPlus m => Sheep -> m Sheep<br />
grandparent s = (toMonad (paternalGrandfather s)) `mplus`<br />
(toMonad (paternalGrandmother s)) `mplus`<br />
(toMonad (maternalGrandfather s)) `mplus`<br />
(toMonad (maternalGrandmother s))<br />
<br />
toMonad :: MonadPlus m => Maybe a -> m a<br />
toMonad Nothing = mzero<br />
toMonad (Just s) = return s<br />
</haskell><br />
<br />
If the compiler cannot guess which MonadPlus to use, you will need to specify it when the function is called. So, <hask>parent someSheep :: Maybe Sheep</hask> will use the Maybe monad, and either <hask>parent someSheep :: [] Sheep</hask> or <hask>parent someSheep :: [Sheep]</hask> will use the list monad.<br />
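A quick way to see those annotations in action (a self-contained sketch; the <hask>Sheep</hask> type here is a made-up stand-in for the tutorial's, not something defined on this page):

```haskell
import Control.Monad (MonadPlus, mplus, mzero)

-- A minimal stand-in for the tutorial's Sheep type (assumed here, not
-- defined on this page), just enough to run the generic functions.
data Sheep = Sheep { name   :: String
                   , mother :: Maybe Sheep
                   , father :: Maybe Sheep }

toMonad :: MonadPlus m => Maybe a -> m a
toMonad Nothing  = mzero
toMonad (Just s) = return s

parent :: MonadPlus m => Sheep -> m Sheep
parent s = toMonad (father s) `mplus` toMonad (mother s)

molly, dolly :: Sheep
molly = Sheep "Molly" Nothing Nothing
dolly = Sheep "Dolly" (Just molly) Nothing   -- known mother, unknown father

main :: IO ()
main = do
  print (fmap name (parent dolly :: Maybe Sheep))  -- Just "Molly"
  print (map name  (parent dolly :: [Sheep]))      -- ["Molly"]
```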
<br />
Alternative grandparent:<br />
<haskell><br />
grandparent :: (MonadPlus m) => Sheep -> m Sheep<br />
grandparent s = parent s >>= parent<br />
</haskell></div>Gphiliphttps://wiki.haskell.org/index.php?title=Solution2.html&diff=53944Solution2.html2012-09-21T14:33:42Z<p>Gphilip: </p>
<hr />
<div><haskell><br />
parent :: Sheep -> Maybe Sheep<br />
parent s = father s `mplus` mother s<br />
<br />
grandparent :: Sheep -> Maybe Sheep<br />
grandparent s = paternalGrandfather s `mplus` <br />
paternalGrandmother s `mplus` <br />
maternalGrandfather s `mplus` <br />
maternalGrandmother s<br />
</haskell><br />
<br />
Alternative grandparent:<br />
<haskell><br />
grandparent :: Sheep -> Maybe Sheep<br />
grandparent s = parent s >>= parent<br />
</haskell></div>Gphiliphttps://wiki.haskell.org/index.php?title=Solution3.html&diff=53943Solution3.html2012-09-21T14:32:55Z<p>Gphilip: Another solution.</p>
<hr />
<div><haskell><br />
parent :: Sheep -> [Sheep]<br />
parent s = (maybeToList (mother s)) ++ (maybeToList (father s))<br />
<br />
grandparent :: Sheep -> [Sheep]<br />
grandparent s = (maybeToList (paternalGrandfather s)) ++<br />
(maybeToList (paternalGrandmother s)) ++<br />
(maybeToList (maternalGrandfather s)) ++<br />
(maybeToList (maternalGrandmother s))<br />
<br />
</haskell><br />
<br />
Alternate solution:<br />
<br />
<haskell><br />
parent :: Sheep -> [Sheep]<br />
parent s = (maybeToList $ mother s) `mplus` (maybeToList $ father s)<br />
<br />
grandparent :: Sheep -> [Sheep]<br />
grandparent s = parent s >>= parent<br />
</haskell></div>Gphiliphttps://wiki.haskell.org/index.php?title=Solution2.html&diff=53940Solution2.html2012-09-21T14:15:49Z<p>Gphilip: Another definition of grandparent.</p>
<hr />
<div><haskell><br />
parent :: Sheep -> Maybe Sheep<br />
parent s = father s `mplus` mother s<br />
<br />
grandparent :: Sheep -> Maybe Sheep<br />
grandparent s = paternalGrandfather s `mplus` <br />
paternalGrandmother s `mplus` <br />
maternalGrandfather s `mplus` <br />
maternalGrandmother s<br />
</haskell><br />
<br />
Alternative grandparent:<br />
<haskell><br />
grandparent :: Sheep -> Maybe Sheep<br />
grandparent s = (father s >>= parent) `mplus` (mother s >>= parent)<br />
</haskell></div>Gphiliphttps://wiki.haskell.org/index.php?title=99_questions/Solutions/3&diff=5335999 questions/Solutions/32012-09-15T21:21:24Z<p>Gphilip: Minor correction to the pointless version.</p>
<hr />
<div>(*) Find the K'th element of a list. The first element in the list is number 1.<br />
<br />
This is (almost) the infix operator !! in Prelude, which is defined as:<br />
<br />
<haskell><br />
(!!) :: [a] -> Int -> a<br />
(x:_) !! 0 = x<br />
(_:xs) !! n = xs !! (n-1)<br />
</haskell><br />
<br />
Except this doesn't quite work, because !! is zero-indexed, and element-at should be one-indexed. So:<br />
<br />
<haskell><br />
elementAt :: [a] -> Int -> a<br />
elementAt list i = list !! (i-1)<br />
</haskell><br />
<br />
Or without using the infix operator:<br />
<br />
<haskell><br />
elementAt' :: [a] -> Int -> a<br />
elementAt' (x:_) 1 = x<br />
elementAt' [] _ = error "Index out of bounds"<br />
elementAt' (_:xs) k<br />
| k < 1 = error "Index out of bounds"<br />
| otherwise = elementAt' xs (k - 1)<br />
</haskell><br />
<br />
Alternative version:<br />
<br />
<haskell><br />
elementAt'' :: [a] -> Int -> a<br />
elementAt'' (x:_) 1 = x<br />
elementAt'' (_:xs) i = elementAt'' xs (i - 1)<br />
elementAt'' _ _ = error "Index out of bounds"<br />
</haskell><br />
'''This does not work correctly on invalid indexes and infinite lists, e.g.:'''<br />
<haskell><br />
elementAt'' [1..] 0<br />
</haskell><br />
<br />
A few more solutions using prelude functions:<br />
<br />
<haskell><br />
elementAt_w' xs n = last . take n $ xs -- wrong<br />
-- Main> map (elementAt_w' [1..4]) [1..10]<br />
-- [1,2,3,4,4,4,4,4,4,4]<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w'' xs n = head . reverse . take n $ xs -- wrong<br />
-- Main> map (elementAt_w'' [1..4]) [1..10]<br />
-- [1,2,3,4,4,4,4,4,4,4]<br />
</haskell><br />
<br />
<haskell><br />
elementAt_w''' xs n = head . drop (n - 1) $ xs -- wrong<br />
-- Main> map (elementAt_w''' [1..4]) [0..10]<br />
-- [1,1,2,3,4,*** Exception: Prelude.head: empty list<br />
</haskell><br />
<br />
or <hask>elementAt_w'</hask> correctly in point-free style:<br />
<haskell><br />
elementAt_w'pf = (last .) . take<br />
</haskell><br />
<br />
Pedantic note: the above definition of <hask>elementAt_w'pf</hask> does not conform to the order of arguments specified by the question, but the following does:<br />
<haskell><br />
elementAt_w'pf' = flip $ (last .) . take<br />
</haskell></div>Gphiliphttps://wiki.haskell.org/index.php?title=Talk:Prime_numbers&diff=38798Talk:Prime numbers2011-02-16T15:43:01Z<p>Gphilip: The primesFrom function does not compile.</p>
<hr />
<div>Here's an interesting question: will the program go faster if we replace all those <hask>(n >)</hask> expressions with <hask>(\x -> floor (sqrt n) > x)</hask>?<br />
<br />
On one hand, a composite integer cannot possess a factor greater than its square root.<br />
<br />
On the other hand, since the list we're looking through contains all possible prime numbers, we are guaranteed to find a factor or an exact match eventually, so do we need the <hask>takeWhile</hask> at all?<br />
<br />
Throwing this over to somebody with a bigger brain than me...<br />
<br />
[[User:MathematicalOrchid|MathematicalOrchid]] 16:41, 5 February 2007 (UTC)<br />
<br />
A composite can indeed have factors greater than its square root, and indeed most do. What you mean is that a composite will definitely have at least one factor less than or equal to its square root.<br />
<br />
why not use <hask>(\x -> n > x*x)</hask> --[[User:JohannesAhlmann|Johannes Ahlmann]] 21:18, 5 February 2007 (UTC)<br />
<br />
LOL! That is ''indeed'' what I meant.<br />
<br />
It turns out my comment above is correct - the <hask>takeWhile</hask> filtering in <hask>factors</hask> is in fact unnecessary. The function works just fine without it. (Notice I have made some edits to correct the multiple bugs in the <hask>primes</hask> function. Oops!)<br />
<br />
Now the only use of <hask>takeWhile</hask> is in the <hask>is_prime</hask> function, which could be changed to 'give up' the search a lot faster and hence confirm large primes with much less CPU time and RAM usage. Maybe I'll wrap my brain around that later.<br />
<br />
[[User:MathematicalOrchid|MathematicalOrchid]] 10:17, 6 February 2007 (UTC)<br />
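For reference, the square-root bound discussed above looks like this as a stand-alone trial-division test (a sketch with made-up names, not code from the article):

```haskell
-- Trial division that stops once p*p exceeds n, so only candidate
-- factors up to the square root of n are ever examined.
isPrime :: Integer -> Bool
isPrime n
  | n < 2     = False
  | otherwise = all (\p -> n `mod` p /= 0) (takeWhile (\p -> p * p <= n) [2 ..])

main :: IO ()
main = print (filter isPrime [1 .. 30])
-- [2,3,5,7,11,13,17,19,23,29]
```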
<br />
The section [[Prime_numbers#Simple_Prime_Sieve_II|Simple Prime Sieve II]] is not a sieve in the same sense that the first one is. It really implements a primality test as a filter.<br />
<br />
A more "sieve-like" version of the simple sieve which exploits the fact that we need not check for primes larger than the square root would be<br />
<br />
<haskell><br />
primes :: [Integer]<br />
primes = sieve [2..]<br />
  where sieve (p:xs) = p : sieve [x | x <- xs, (x < p*p) || (x `mod` p /= 0)]<br />
</haskell><br />
<br />
However, this runs even slower than the original!<br />
<br />
[[User:Kapil|Kapil Hari Paranjape]] 06:51, 4 February 2009 (UTC)<br />
<br />
I want to thank Leon P. Smith for showing me the idea of producing the spans of odds directly, for version [http://www.haskell.org/haskellwiki/?title=Prime_numbers&oldid=31589#Simple_Prime_Sieve_IV IV]. I had a combination of span and infinite odds list, as in span (< p*p) [3,5..] etc. That sped it up some 20% more, when GHC-compiled.<br />
<br />
The mark-and-comb version that I put under [http://www.haskell.org/haskellwiki/?title=Prime_numbers&oldid=31589#Simple_Sieve_of_Eratosthenes Simple Sieve of Eratosthenes] seems to me very "faithful" to the original (IYKWIM). Strangely it shows exactly same asymptotic behavior when GHC-compiled (tested inside GHCi) as IV. Does this prove that priority queue-based code is <i>better</i> than the original? <i>:)</i><br />
<br />
BTW "unzip" is somehow screwed up inside "haskell" block, I don't know how to fix that.<br />
<br />
: not anymore [[User:WillNess|WillNess]] 13:39, 10 February 2011 (UTC)<br />
<br />
I've also added the postponed-filters version to the first sieve code to show that the <i>squares</i> optimization <i>does</i> matter and gives a <i>huge</i> efficiency advantage just by itself. The <i>odds only</i> trick gives it a dozen or two percent improvement, but it's nothing compared to this 20x massive speedup!<br />
<br />
Written in list-comprehension style, it's<br />
<br />
<haskell><br />
primes :: [Integer]<br />
primes = 2: 3: sieve (tail primes) [5,7..] where<br />
  sieve (p:ps) xs = h ++ sieve ps [x | x <- t, x `rem` p /= 0]<br />
    where (h, _:t) = span (< p*p) xs<br />
</haskell><br />
<br />
Which BTW is <i>faster</i> than the IV version itself, when interpreted in GHCi. So <i>what</i> are we comparing here, code versions or Haskell implementations??<br />
<br />
[[User:WillNess|WillNess]] 10:46, 15 November 2009 (UTC)<br />
<br />
I've added the code for Euler's sieve which is just the postponed filters with minimal modification, substituting <code>(t `minus` multiples p)</code> for <code>(filter (nodivs p) t)</code>.<br />
<br />
:as it later turned out it was not a Euler sieve, but rather an approximation. [[User:WillNess|WillNess]] 13:27, 10 February 2011 (UTC)<br />
<br />
Now it is obvious that <code>(...(((s - a) - b) - c) - ...)</code> is the same as <code>(s - (a + b + c + ...))</code> and this is the next code, the "merged multiples" variation of Euler's sieve.<br />
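For concreteness, here is a sketch of the ordered-list <code>minus</code> and <code>union</code> this identity relies on (standard definitions, assuming increasing input lists; not copied from the article):

```haskell
-- Ordered-list difference: assumes both inputs are increasing.
minus :: Ord a => [a] -> [a] -> [a]
minus (x:xs) (y:ys) = case compare x y of
  LT -> x : minus xs (y:ys)
  EQ ->     minus xs ys
  GT ->     minus (x:xs) ys
minus xs _ = xs

-- Ordered-list union: with compare, the clause order doesn't matter.
union :: Ord a => [a] -> [a] -> [a]
union (x:xs) (y:ys) = case compare x y of
  LT -> x : union xs (y:ys)
  EQ -> x : union xs ys
  GT -> y : union (x:xs) ys
union xs ys = xs ++ ys

main :: IO ()
main = do
  -- ((s - a) - b) agrees with (s - (a `union` b)):
  let s = [2..20]; a = [4,6..20]; b = [9,12..20]
  print (minus (minus s a) b == minus s (union a b))
```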
<br />
It is very much like the famous ''Richard Bird's code'', streamlined and further optimized (appearing in [http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf Melissa O'Neill's] [http://journals.cambridge.org/download.php?file=%2FJFP%2FJFP19_01%2FS0956796808007004a.pdf&code=1c69f7d6fafad8f6454f2789ffa4ab4d JFP article]), whose copyright status is unknown to me, so I couldn't reference it in the main article body. The code as written in the article has the wrong clause order in <code>merge</code>.<br />
<br />
:when using <code>compare</code>, clause order doesn't matter. [[User:WillNess|WillNess]] 15:32, 26 January 2010 (UTC)<br />
<br />
I've also changed the span pattern-binding to the more correct, lazy pattern, <code>(h,~(_:t))</code>.<br />
<br />
[[User:WillNess|WillNess]] 17:10, 5 December 2009 (UTC)<br />
<br />
<br />
New [http://haskell.org/haskellwiki/index.php?title=Prime_numbers&oldid=32749#Treefold_Merged_Multiples_Removal treefolding merge] is inspired by apfelmus's VIP code from [[Prime_numbers#Implicit_Heap|Implicit Heap]]; but it uses a different structure, better at generating the primes' multiples: instead of his 1+(2+(4+(8+...))) it's (2+4)+( (4+8) + ( (8+16) + ...)). The reason I put my version here is to show the natural progression of development from the postponed filters to Euler's sieve to merged multiples to treefold-merged multiples. I.e. it's not some ad-hoc invention; it's <i>logical</i>. It is also step-by-step.<br />
<br />
I estimate the total cost of producing the primes' multiples as Sum (1/p)*d, where d is the leaf's depth, i.e. the number of merge nodes its produced prime must pass on its way up to the top. The values of the cost function correspond very well with the actual time performance of the respective algorithms: the cost is better by 10%-12%, and the performance boost is 10%-12% too.<br />
<br />
I will also add this code, further improved with the Wheel optimization, here. That one beats the PQ-based code from Melissa O'Neill's ZIP file by a constant margin of around 20%, with its asymptotic behaviour *exactly* the same. <br />
<br />
:that was with the [http://haskell.org/haskellwiki/index.php?title=Prime_numbers&diff=prev&oldid=32778#Treefold_Merged_Multiples.2C_with_Wheel incomplete code] which only rolled the wheel on numbers supply, and not on multiples. It had e.g. [11*11,11*13,11*15,11*17...] but of course 11*15 could've been eliminated in advance too (and 11*25, 11*35, 11*45, 11*49, etc...). [http://haskell.org/haskellwiki/index.php?title=Prime_numbers&diff=next&oldid=32778#Treefold_Merged_Multiples.2C_with_Wheel Fixing that] made it run ''twice'' faster than before. [[User:WillNess|WillNess]] 08:33, 29 December 2009 (UTC)<br />
<br />
::these tests were most probably wrong, either on GHCi or without using the -O2 switch [[User:WillNess|WillNess]] 13:27, 10 February 2011 (UTC)<br />
<br />
I measure local asymptotics by taking the logarithm of the run time ratio in the base of the problem size ratio. I've settled on testing code performance as interpreted, inside GHCi. Running compiled code feels more like testing the compiler itself. Too many times I saw two operationally equivalent expressions running in wildly different times. That can't be anything other than the compiler's quirks, and we're not interested in those, here. :)<br />
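That measurement can be written down directly (a trivial helper, not from the article): if the run time behaves like c*n^k, then two timings t1 and t2 at problem sizes n1 and n2 give k as the logarithm of the time ratio in the base of the size ratio.<br />

```haskell
-- empirical local order of growth: for t(n) ~ c * n^k,
--   k  =  logBase (n2/n1) (t2/t1)
orderOfGrowth :: Double -> Double -> Double -> Double -> Double
orderOfGrowth n1 t1 n2 t2 = logBase (n2 / n1) (t2 / t1)
```

For example, doubling the problem size and seeing the run time quadruple gives a local order of growth of 2.<br />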
<br />
Apparently, <i>arrays</i> are <i><b>very</b></i> fast. :) (using accumArray as seen in [http://www.haskell.org/pipermail/haskell-cafe/2007-March/023095.html Thorkil Naur's Haskell cafe post], but still without the Wheel).<br />
<br />
[[User:WillNess|WillNess]] 14:47, 25 December 2009 (UTC)<br />
<br />
<br />
NEW! A *major* improvement to [http://haskell.org/haskellwiki/index.php?title=Prime_numbers&oldid=32868#Treefold_Merged_Multiples_Removal treefold merge] by [http://www.haskell.org/pipermail/haskell-cafe/2010-January/071807.html Daniel Fischer]'s reimplementing spMerge into a properly tail-recursive thingy, fixing a space leak there. :) [[User:WillNess|WillNess]] 06:56, 9 January 2010 (UTC)<br />
<br />
:AND his other idea: making `tfold' ''strict'' - which ''really'' brings down the memory consumption. The only caveat: use at least 6 primes to bootstrap the tree-folding. At each tree expansion it needs an additional 3*2^n, n=1,... primes, but is producing PI( (PRIMES !! SUM(i=1..n)(3*2^i)) ^ 2) which is way more than that. [[User:WillNess|WillNess]] 10:02, 25 January 2010 (UTC)<br />
<br />
:I've also inlined spMerge completely into mergeSP itself (now called unionSP), along the lines of Leon P. Smith's Data.OrdList.mergeAll implementation. It looks yet simpler that way. I haven't tested it, though. [[User:WillNess|WillNess]] 23:21, 28 February 2010 (UTC) <br />
<br />
:changed Data.OrdList to Data.List.Ordered as per the new version of [http://hackage.haskell.org/package/data-ordlist data-ordlist] package. [[User:WillNess|WillNess]] 07:45, 16 March 2010 (UTC)<br />
<br />
[[Prime_numbers#Euler.27s_Sieve | Euler's Sieve]] initial one-liner code is due to [http://www.haskell.org/pipermail/haskell-cafe/2010-January/071615.html Daniel Fischer]. <br />
[[User:WillNess|WillNess]] 16:35, 19 January 2010 (UTC)<br />
<br />
Here's new streamlined code for immutable arrays:<br />
<haskell><br />
primes = 2: 3: sieve [] (tail primes) 3<br />
  where<br />
    sieve fs (p:ps) x = [i*2 + x | (i,e) <- assocs a, e]<br />
                        ++ sieve fs' ps (p*p)<br />
      where<br />
        q = (p*p - x) `div` 2<br />
        fs' = (p,0) : [(s, rem (y-q) s) | (s,y) <- fs]<br />
        a = accumArray (\ b c -> False) True (1,q-1)<br />
                       [(i,()) | (s,y) <- fs, i <- [y+s, y+s+s..q]]<br />
</haskell><br />
It is twice as fast, but more obscure; so I thought I'd keep the previous version on the main page for didactic reasons. [[User:WillNess|WillNess]] 08:43, 18 July 2010 (UTC)<br />
<br />
: I've added it now to the main page and restyled the treefold code a bit. '''The test entries on Ideone.com are [http://ideone.com/willness/primes here]'''. [[User:WillNess|WillNess]] 11:00, 6 August 2010 (UTC)<br />
:The ST array code also becomes much faster (in fact 3x for the 1 millionth prime) when translated into working on odds only, like the immutable array version - but its memory footprint is also large. [[User:WillNess|WillNess]] 10:34, 13 August 2010 (UTC)<br />
<br />
Augmenting the latest [http://haskell.org/haskellwiki/index.php?title=Prime_numbers&oldid=38513#Treefold_Merged_Multiples.2C_with_Wheel Treefold Merged Multiples, with Wheel] version to work with [http://haskell.org/haskellwiki/index.php?title=Prime_numbers&oldid=38513#Implicit_Heap VIPs] does nothing except slow it down [http://ideone.com/r3tbr/ a little bit]. The lazy pattern in join/tfold also starts causing a space leak then, so primes' then has to be more fully defined upfront, to prevent a loop when the tilde is removed. [[User:WillNess|WillNess]] 18:31, 6 February 2011 (UTC)<br />
<br />
Compared with Melissa O'Neill's considerably more complex priority-queue-based code, it runs (compiled by ghc 6.10.1 with the -O2 switch) about 1.6x slower at producing 1 million primes, and about 1.7x slower at 10 to 20 million, with an empirical complexity of n^1.23 vs O'Neill's n^1.20; both versions have the same low and near-constant memory consumption. Ideone.com uses ghc-6.8.2 and limits run time to 15s; [https://ideone.com/H8Y5V there] the ratio is 1.16x .. 1.25x for generating 1 .. 6 million primes, and the empirical complexities are n^1.24 vs O'Neill's n^1.20. [[User:WillNess|WillNess]] 07:55, 13 February 2011 (UTC)<br />
<br />
The <code>primesFrom</code> function in Section 2.1.2.3 does not compile. Specifically, the <code>sieve</code> function is defined to take a list as its first argument, but is passed a number instead: <code>(length h)</code>.<br />
--[[User:Gphilip|Gphilip]] 15:43, 16 February 2011 (UTC)</div>Gphiliphttps://wiki.haskell.org/index.php?title=Lambdabot/Building&diff=38463Lambdabot/Building2011-02-06T05:19:33Z<p>Gphilip: /* To compile lambdabot on ghc >=6.10, follow the following steps */ The repos mentioned seem not to be available.</p>
<hr />
<div>Unfortunately, some of the packages that [[Lambdabot]] depends on in [[Hackage]] do not, as of 9 January 2009, support [[GHC]] 6.10.<br />
Fortunately, the [[darcs]] repositories of most of these packages do support it.<br />
<br />
== To compile lambdabot on ghc >=6.10, follow the following steps ==<br />
<br />
Get the stuff from [[darcs]] repositories:<br />
<code><br />
darcs get http://code.haskell.org/HSP/haskell-src-exts<br />
darcs get http://code.haskell.org/mubot<br />
darcs get http://code.haskell.org/lambdabot<br />
</code><br />
<br />
All three of these repos fail for me with: "darcs failed: Not a repository: ..."<br />
--[[User:Gphilip|Gphilip]] 05:19, 6 February 2011 (UTC)<br />
<br />
Let's install haskell-src-exts from darcs (TODO: this may not actually be required; the one from Hackage might be good enough):<br />
<code><br />
cd haskell-src-exts<br />
cabal install<br />
cd ..<br />
</code><br />
<br />
Install mueval:<br />
<code><br />
cd mubot/mueval<br />
cabal install<br />
cd ../..<br />
</code><br />
<br />
Apply a patch to lambdabot and install it:<br />
<code><br />
cd lambdabot<br />
wget http://www.haskell.org/sitewiki/images/e/ed/Lambdabot.patch -O -|patch -p1<br />
cabal install<br />
cd ..<br />
</code><br />
<br />
Edit your online.rc file at ~/.cabal/share/lambdabot-<i>version</i>/online.rc<br />
And run lambdabot via:<br />
<code><br />
lambdabot -e "rc ~/.cabal/share/lambdabot-<i>version</i>/online.rc"<br />
</code><br />
<br />
This may not be complete, please correct it if any more fixes are required, and as things are updated.</div>Gphiliphttps://wiki.haskell.org/index.php?title=Higher_order_function&diff=18020Higher order function2007-12-27T17:00:33Z<p>Gphilip: Corrected grammar.</p>
<hr />
<div>[[Category:Glossary]] [[Category:Idioms]]<br />
<br />
==Definition==<br />
A '''higher order function''' is a function that takes other functions as arguments or returns a function as its result.<br />
<br />
==Discussion==<br />
The major use is to abstract common behaviour into one place.<br />
<br />
===Examples===<br />
<br />
====In the libraries====<br />
<br />
Many functions in the libraries are higher order. The (probably) most commonly given examples are <hask>map</hask> and <hask>fold</hask>.<br />
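For instance, <hask>foldr</hask> captures the common pattern of combining all elements of a list with a binary operation; a quick sketch (the primed names are invented here for illustration):<br />

```haskell
-- sum and product are the same recursion with a different
-- operator and starting value; foldr abstracts that pattern
sum' :: Num a => [a] -> a
sum' = foldr (+) 0

product' :: Num a => [a] -> a
product' = foldr (*) 1
```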
<br />
Two other common ones are <hask>curry, uncurry</hask>. A possible implementation of these is:<br />
<haskell><br />
curry :: ((a,b)->c) -> a->b->c<br />
curry f a b = f (a,b)<br />
<br />
uncurry :: (a->b->c) -> ((a,b)->c)<br />
uncurry f (a,b)= f a b<br />
</haskell><br />
<br />
<hask>curry</hask>'s first argument must be a function which accepts a pair. It applies that function to its next two arguments.<br />
<br />
<hask>uncurry</hask> is the inverse of <hask>curry</hask>. Its first argument must be a function taking two values. <hask>uncurry</hask> then applies that function to the components of the pair which is the second argument.<br />
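A small usage sketch of the two (the names <hask>addPair</hask>, <hask>add</hask> and <hask>addPair'</hask> are invented for illustration; the Prelude versions of <hask>curry</hask> and <hask>uncurry</hask> are used):<br />

```haskell
addPair :: (Int, Int) -> Int
addPair (a, b) = a + b

-- curry turns the pair-taking function into one that
-- takes its arguments one at a time
add :: Int -> Int -> Int
add = curry addPair

-- uncurry goes the other way: (+) now expects a pair
addPair' :: (Int, Int) -> Int
addPair' = uncurry (+)
```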
====Simple code examples====<br />
Rather than writing <br />
<haskell><br />
doubleList [] = []<br />
doubleList (x:xs) = 2*x : doubleList xs<br />
</haskell><br />
and <br />
<haskell><br />
tripleList [] = []<br />
tripleList (x:xs) = 3*x : tripleList xs<br />
</haskell><br />
we can parameterize out the difference<br />
<haskell><br />
multList n [] = []<br />
multList n (x:xs) = n*x : multList n xs<br />
</haskell><br />
and define <br />
<haskell><br />
tripleList = multList 3<br />
doubleList = multList 2<br />
</haskell><br />
leading to a less error-prone definition of each.<br />
<br />
But now, if we had the function<br />
<haskell><br />
addToList n [] = []<br />
addToList n (x:xs) = n+x : addToList n xs<br />
</haskell><br />
we could parameterize the difference again<br />
<haskell><br />
operList n bop [] = []<br />
operList n bop (x:xs) = bop n x : operList n bop xs<br />
</haskell><br />
and define doubleList as<br />
<haskell><br />
doubleList = operList 2 (*) <br />
</haskell><br />
but this ties us to constant parameters<br />
<br />
and we could redefine things as<br />
<haskell><br />
mapList f [] = []<br />
mapList f (x:xs) = f x : mapList f xs<br />
</haskell><br />
and define doubleList as<br />
<haskell><br />
doubleList = mapList (2*)<br />
</haskell><br />
This higher order function "mapList" can be used in a wide range of areas to simplify code.<br />
It is called <hask>map</hask> in Haskell's Prelude.<br />
<br />
====Mathematical examples====<br />
<br />
In mathematics the counterpart to higher order functions are functionals (mapping functions to scalars) and function operators (mapping functions to functions).<br />
Typical functionals are the limit of a sequence, or the integral of an interval of a function.<br />
<haskell><br />
limit :: [Double] -> Double<br />
definiteIntegral :: (Double, Double) -> (Double -> Double) -> Double<br />
</haskell><br />
Typical operators are the indefinite integral, the derivative, the function inverse.<br />
<haskell><br />
indefiniteIntegral :: Double -> (Double -> Double) -> (Double -> Double)<br />
derive :: (Double -> Double) -> (Double -> Double)<br />
inverse :: (Double -> Double) -> (Double -> Double)<br />
</haskell><br />
Here is a numerical approximation:<br />
<haskell><br />
derive :: Double -> (Double -> Double) -> (Double -> Double)<br />
derive eps f x = (f(x+eps) - f(x-eps)) / (2*eps)<br />
</haskell><br />
<br />
==See also==<br />
[[Accumulator recursion]] where the accumulator is a higher order function is one interesting case of [[continuation passing style]].</div>Gphiliphttps://wiki.haskell.org/index.php?title=Higher_order_function&diff=18019Higher order function2007-12-27T16:02:08Z<p>Gphilip: Corrected grammar.</p>
<hr />
<div>[[Category:Glossary]] [[Category:Idioms]]<br />
<br />
==Definition==<br />
A '''higher order function''' is a function that takes other functions as arguments or returns a function as result.<br />
<br />
==Discussion==<br />
The major use is to abstract common behaviour into one place.<br />
<br />
===Examples===<br />
<br />
====In the libraries====<br />
<br />
Many functions in the libraries are higher order. The (probably) most commonly given examples are <hask>map</hask> and <hask>fold</hask>.<br />
<br />
Two other common ones are <hask>curry, uncurry</hask>. A possible implementation of these is:<br />
<haskell><br />
curry :: ((a,b)->c) -> a->b->c<br />
curry f a b = f (a,b)<br />
<br />
uncurry :: (a->b->c) -> ((a,b)->c)<br />
uncurry f (a,b)= f a b<br />
</haskell><br />
<br />
<hask>curry</hask>'s first argument must be a function which accepts a pair. It applies that function to its next two arguments.<br />
<br />
<hask>uncurry</hask> is the inverse of <hask>curry</hask>. Its first argument must be a function taking two values. <hask>uncurry</hask> then applies that function to the components of the pair which is the second argument.<br />
====Simple code examples====<br />
Rather than writing <br />
<haskell><br />
doubleList [] = []<br />
doubleList (x:xs) = 2*x : doubleList xs<br />
</haskell><br />
and <br />
<haskell><br />
tripleList [] = []<br />
tripleList (x:xs) = 3*x : tripleList xs<br />
</haskell><br />
we can parameterize out the difference<br />
<haskell><br />
multList n [] = []<br />
multList n (x:xs) = n*x : multList n xs<br />
</haskell><br />
and define <br />
<haskell><br />
tripleList = multList 3<br />
doubleList = multList 2<br />
</haskell><br />
leading to a less error prone definition of each.<br />
<br />
But now, if we had the function<br />
<haskell><br />
addToList n [] = []<br />
addToList n (x:xs) = n+x : addToList n xs<br />
</haskell><br />
we could parameterize the difference again<br />
<haskell><br />
operList n bop [] = []<br />
operList n bop (x:xs) = bop n x : operList n bop xs<br />
</haskell><br />
and define doubleList as<br />
<haskell><br />
doubleList = operList 2 (*) <br />
</haskell><br />
but this ties us to constant parameters<br />
<br />
and we could redefine things as<br />
<haskell><br />
mapList f [] = []<br />
mapList f (x:xs) = f x : mapList f xs<br />
</haskell><br />
and define doubleList as<br />
<haskell><br />
doubleList = mapList (2*)<br />
</haskell><br />
This higher order function "mapList" can be used in a wide range of areas to simplify code.<br />
It is called <hask>map</hask> in Haskell's Prelude.<br />
<br />
====Mathematical examples====<br />
<br />
In mathematics the counterpart to higher order functions are functionals (mapping functions to scalars) and function operators (mapping functions to functions).<br />
Typical functionals are the limit of a sequence, or the integral of an interval of a function.<br />
<haskell><br />
limit :: [Double] -> Double<br />
definiteIntegral :: (Double, Double) -> (Double -> Double) -> Double<br />
</haskell><br />
Typical operators are the indefinite integral, the derivative, the function inverse.<br />
<haskell><br />
indefiniteIntegral :: Double -> (Double -> Double) -> (Double -> Double)<br />
derive :: (Double -> Double) -> (Double -> Double)<br />
inverse :: (Double -> Double) -> (Double -> Double)<br />
</haskell><br />
Here is a numerical approximation:<br />
<haskell><br />
derive :: Double -> (Double -> Double) -> (Double -> Double)<br />
derive eps f x = (f(x+eps) - f(x-eps)) / (2*eps)<br />
</haskell><br />
<br />
==See also==<br />
[[Accumulator recursion]] where the accumulator is a higher order function is one interesting case of [[continuation passing style]].</div>Gphiliphttps://wiki.haskell.org/index.php?title=Introduction&diff=18018Introduction2007-12-27T14:57:46Z<p>Gphilip: </p>
<hr />
<div>[[Category:Tutorials]] [[Category:Language]]<br />
Haskell is a computer programming language. In particular, it is a<br />
'' [[Polymorphism|polymorphically]] [[typing|statically typed]], [[Lazy evaluation|lazy]], [[functional programming|purely functional]] '' language,<br />
quite different from most other programming languages. <br />
The language is named for [[Haskell Brooks Curry]], whose work in mathematical logic serves as a foundation for<br />
functional languages. <br />
Haskell is based on the ''[[lambda calculus]]'', hence the lambda we use as a logo.<br />
<br />
<br />
==Why use Haskell?==<br />
Writing large software systems that<br />
work is difficult and expensive. Maintaining those systems is even<br />
more difficult and expensive. Functional programming languages, such<br />
as Haskell, can make it easier and cheaper. For example, a new user who<br />
wrote a small relational DBMS in Haskell had this to say:<br />
<blockquote><br />
WOW! I basically wrote this without testing just thinking about my<br />
program in terms of transformations between types. I wrote the<br />
test/example code and had almost no implementation errors in the code! The<br />
compiler/type-system is really really good at preventing you from<br />
making coding mistakes! I've never in my life had a block of code<br />
this big work on the first try. I am WAY impressed.<br />
</blockquote><br />
Even if you are not in a position to use Haskell in your programming projects, learning Haskell can make you a better programmer in any language. <br />
<blockquote><br />
I learned Haskell a couple of years ago, having previously programmed in<br />
Python and (many) other languages. Recently, I've been using Python for a<br />
project (the choice being determined by both technical and non-technical<br />
issues), and find my Python programming style is now heavily influenced (for the better, I hope ;-) by my Haskell programming experience.<br><br><br />
Graham Klyne<br />
</blockquote><br />
<br />
<br />
Haskell offers you:<br />
<br />
*Substantially increased programmer productivity (Ericsson measured an improvement factor of between 9 and 25 using Erlang, a functional programming language similar to Haskell, in one set of experiments on telephony software).<br />
*Shorter, clearer, and more maintainable code.<br />
*Fewer errors, higher reliability.<br />
*A smaller &quot;semantic gap&quot; between the programmer and the language.<br />
*Shorter lead times.<br />
<br />
Haskell is a wide-spectrum language, suitable for a variety of<br />
applications. It is particularly suitable for programs which need to<br />
be highly modifiable and maintainable.<br />
<br />
Much of a software product's life is spent in ''specification'',<br />
''design'' and ''maintenance'', and not in ''programming''.<br />
Functional languages are superb for writing specifications which can<br />
actually be executed (and hence tested and debugged). Such a<br />
specification then ''is'' the first prototype of the final<br />
program.<br />
<br />
Functional programs are also relatively easy to maintain, because the<br />
code is shorter, clearer, and the rigorous control of side effects<br />
eliminates a huge class of unforeseen interactions.<br />
<br />
==What is functional programming?==<br />
C, Java, Pascal, Ada, and so on, are all ''imperative''<br />
languages. They are &quot;imperative&quot; in the sense that they<br />
consist of a sequence of commands, which are executed strictly one<br />
after the other. Haskell is a ''functional'' language. A<br />
functional program is a single expression, which is executed by<br />
evaluating the expression. <br />
<br />
Anyone who has used a spreadsheet has experience of functional<br />
programming. In a spreadsheet, one specifies the value of each cell<br />
in terms of the values of other cells. The focus is on ''what'' is<br />
to be computed, not ''how'' it should be computed. For<br />
example:<br />
*we do not specify the order in which the cells should be calculated -&nbsp;instead we take it for granted that the spreadsheet will compute cells in an order which respects their dependencies.<br />
*we do not tell the spreadsheet how to allocate its memory - rather, we expect it to present us with an apparently infinite plane of cells, and to allocate memory only to those cells which are actually in use.<br />
*for the most part, we specify the value of a cell by an ''expression'' (whose parts can be evaluated in any order), rather than by a ''sequence of commands '' which computes its value.<br />
<br />
An interesting consequence of the spreadsheet's unspecified order<br />
of re-calculation is that the notion of assignment is not very useful.<br />
After all, if you don't know exactly when an assignment will<br />
happen, you can't make much use of it! This contrasts strongly<br />
with programs in conventional languages like C, which consist<br />
essentially of a carefully-specified sequence of assignments, or Java,<br />
in which the ordering of method calls is crucial to the meaning of a<br />
program. <br />
<br />
This focus on the high-level &quot;what&quot; rather than the<br />
low-level &quot;how&quot; is a distinguishing characteristic of<br />
functional programming languages.<br />
<br />
Another well-known nearly-functional language is the standard database<br />
query language SQL. An SQL query is an expression involving<br />
projections, selections, joins and so forth. The query says what<br />
relation should be computed, without saying how it should be computed.<br />
Indeed, the query can be evaluated in any convenient order. SQL<br />
implementations often perform extensive query optimization which<br />
(among other things) figures out the best order in which to evaluate<br />
the expression.<br />
<br />
==What's good about functional programming?==<br />
<br />
Spreadsheets and SQL are both fairly specialized languages. Functional<br />
programming languages take the same ideas and move them into the realm<br />
of general-purpose programming. To get an idea of what a functional<br />
program is like, and the expressiveness of functional languages, look at<br />
the following quicksort programs. They both sort a sequence of numbers<br />
into ascending order using a standard method called "quicksort". The<br />
first program is written in Haskell and the second in C. <br />
<br />
Whereas the C program describes the particular steps the machine must<br />
make to perform a sort -- with most code dealing with the low-level<br />
details of data manipulation -- the Haskell program encodes the sorting<br />
algorithm at a much higher level, with improved brevity and clarity as<br />
a result.<br />
<br />
===Quicksort in Haskell===<br />
<br />
<haskell><br />
qsort [] = []<br />
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)<br />
</haskell><br />
<br />
===Quicksort in C===<br />
<br />
<pre><br />
void qsort(int a[], int lo, int hi)<br />
{<br />
int h, l, p, t;<br />
<br />
if (lo &lt; hi) {<br />
l = lo;<br />
h = hi;<br />
p = a[hi];<br />
<br />
do {<br />
while ((l &lt; h) &amp;&amp; (a[l] &lt;= p)) <br />
l = l+1;<br />
while ((h &gt; l) &amp;&amp; (a[h] &gt;= p))<br />
h = h-1;<br />
if (l &lt; h) {<br />
t = a[l];<br />
a[l] = a[h];<br />
a[h] = t;<br />
}<br />
} while (l &lt; h);<br />
<br />
t = a[l];<br />
a[l] = a[hi];<br />
a[hi] = t;<br />
<br />
qsort( a, lo, l-1 );<br />
qsort( a, l+1, hi );<br />
}<br />
}<br />
</pre><br />
<br />
A [[/Direct Translation | semi-direct translation]] of the C is here.<br />
<br />
Let's examine some of the benefits of Haskell and functional programming.<br />
A more detailed case for functional programming can be found in <br />
<br />
<BLOCKQUOTE><br />
[http://www.md.chalmers.se/~rjmh/Papers/whyfp.html '''Why Functional Programming Matters'''] by [http://www.md.chalmers.se/~rjmh/ John Hughes], The Computer<br />
Journal, Vol. 32, No. 2, 1989, pp. 98 - 107. Also in: David A. Turner<br />
(ed.): Research Topics in Functional Programming, Addison-Wesley,<br />
1990, pp. 17 - 42. <br />
</BLOCKQUOTE><br />
A slightly less formal essay inspired by the paper above can be found in<br />
<BLOCKQUOTE><br />
[[Why Haskell matters |'''Why Haskell Matters''']] originally by [mailto:sylvan@dtek.chalmers.se Sebastian Sylvan]<br />
</BLOCKQUOTE><br />
<br />
===Brevity===<br />
Functional programs tend to be much more '''concise''' than their<br />
imperative counterparts. Quicksort is a rather extreme case, but in<br />
general functional programs are much shorter (by a factor of two to ten).<br />
<br />
===Ease of understanding===<br />
Functional programs are often easier to '''understand'''. You should be able to understand the program without any previous knowledge of either Haskell or quicksort. The same certainly cannot be said of the C program. It takes quite a while to understand, and even when you do understand it, it is extremely easy to make a small slip and end up with an incorrect program. Here is a detailed explanation of the Haskell quicksort:<br />
<haskell><br />
qsort [] = []<br />
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)<br />
</haskell><br />
The first line reads:<br />
"When you sort an empty list (<tt>[]</tt>), the result is another empty list". <br />
The second line reads: "To sort a list whose first element is named <tt>x</tt> and<br />
the rest of which is named <tt>xs</tt>, sort the elements of <tt>xs</tt> that are less than <tt>x</tt>, sort the elements of <tt>xs</tt> that are greater than or equal to <tt>x</tt>, and concatenate (<tt>++</tt>) the results, with <tt>x</tt> sandwiched in the middle."<br />
<br />
===No core dumps===<br />
Most functional languages, and Haskell in particular, are '''strongly<br />
typed''', eliminating a huge class of easy-to-make errors at compile<br />
time. In particular, strong typing means '''no core dumps'''!<br />
There is simply no possibility of treating an integer as a pointer, or<br />
following a null pointer.<br />
<br />
===Code re-use===<br />
Of course, strong typing is available in many imperative languages, such as Ada or Pascal. However, Haskell's type system is much less restrictive than, say, Pascal's, because it uses '''[[polymorphism]]'''.<br />
<br />
For example, the qsort program given above will not only sort lists of integers, but also lists of floating point numbers, lists of characters, lists of lists; indeed, it will sort lists of ''anything'' for which it is meaningful to have "less-than" and "greater-than" operations. In contrast, the C version can only sort an array of integers, and nothing else.<br />
<br />
Polymorphism enhances re-usability.<br />
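To make this concrete, the very same <hask>qsort</hask> definition from above can be reused, unchanged, at two different element types:<br />

```haskell
-- the same definition as above; its inferred type is
-- polymorphic: Ord a => [a] -> [a]
qsort :: Ord a => [a] -> [a]
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
```

For example, <hask>qsort [3,1,2]</hask> yields <hask>[1,2,3]</hask>, while <hask>qsort "haskell"</hask> sorts a list of characters and yields <hask>"aehklls"</hask> - no second definition is needed.<br />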
<br />
===Strong glue===<br />
"Non-strict" functional languages, such as Haskell, have another powerful feature: they<br />
only evaluate as much of the program as is required to get the answer<br />
- this is called [[Haskell/Lazy evaluation |'''lazy evaluation''']]. This feature is rather<br />
like Unix pipes. For example, the Unix command<br />
<pre><br />
grep printf Foo.c | wc<br />
</pre><br />
counts the number of lines in the file <tt> Foo.c </tt>that include the<br />
string <tt>printf</tt>.<br />
The command <br />
<pre><br />
grep printf Foo.c<br />
</pre><br />
produces all lines which contain the string &quot;<tt>printf</tt>&quot;,<br />
while the &quot;<tt>wc</tt>&quot; command counts them. The pipe,<br />
written &quot;<tt>|</tt>&quot;, takes the output from the first command<br />
and delivers it to the second. The two commands execute together, so<br />
that the output of the first is consumed more-or-less immediately by<br />
the second. In this way, no large intermediate files need be<br />
produced. You can think of <tt>wc</tt> &quot;demanding&quot;<br />
lines from the <tt>grep</tt>.<br />
<br />
If the second command only needs some of the output of the first, then<br />
execution of the first command might never need to be completed. For<br />
example,<br />
<br />
<pre><br />
grep printf Foo.c | head -5<br />
</pre><br />
just prints the first 5 lines which contain &quot;<tt>printf</tt>&quot;.<br />
There is no need to modify the <tt>grep</tt> command to take account of<br />
the fact that its execution might be abandoned.<br />
<br />
[[Lazy vs. non-strict |Non-strict]] languages provide exactly this kind of demand-driven<br />
evaluation. Data structures are evaluated just enough to deliver the<br />
answer, and parts of them may not be evaluated at all. As in the case<br />
of Unix commands, this provides powerful &quot;glue&quot; with which<br />
to compose existing programs together. What this means is that it is<br />
possible to '''re-use programs''', or pieces of programs, much more<br />
often than can be done in an imperative setting. [[Haskell/Lazy evaluation |Lazy evaluation]] allows us to write more '''[[modular programs]]'''.<br />
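The same demand-driven behaviour can be seen directly in Haskell; a tiny sketch analogous to the pipe example above (the name <hask>firstFiveEvens</hask> is invented for illustration):<br />

```haskell
-- filter plays the role of grep, take the role of head:
-- only as much of the infinite list [1..] is ever generated
-- as the consumer actually demands
firstFiveEvens :: [Int]
firstFiveEvens = take 5 (filter even [1..])
```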
<br />
===Powerful abstractions===<br />
In general, functional languages offer powerful new ways to<br />
encapsulate '''abstractions'''. An abstraction allows you to define<br />
an object whose internal workings are hidden; a C procedure, for<br />
example, is an abstraction. Abstractions are ''the'' key to<br />
building modular, maintainable programs, so much so that a good<br />
question to ask of any new language is "what mechanisms for<br />
abstraction does it provide?". <br />
<br />
One powerful abstraction mechanism available in functional languages<br />
is the '''[[higher order function]]'''. In Haskell a function is a<br />
first-class citizen: it can freely be passed to other functions,<br />
returned as the result of a function, stored in a data structure, and<br />
so on. It turns out that the judicious use of higher order functions<br />
can substantially improve the structure and modularity of many<br />
programs.<br />
<br />
===Built-in memory management===<br />
Very many sophisticated programs need to allocate dynamic memory from a heap. In C this is done with a call to <tt> malloc</tt>, followed by code to initialize the store just allocated. The programmer is responsible for returning the store to the free pool when it isn't needed any more, a notorious source of "dangling-pointer" errors. To make matters worse, <tt>malloc</tt> is fairly expensive performance-wise, so programmers often <tt>malloc</tt> a single large chunk of store, and then allocate "by hand" out of this.<br />
<br />
Every functional language relieves the programmer of this storage management burden. Store is allocated and initialized implicitly, and recovered automatically by the garbage collector. The technology of storage allocation and garbage collection is now well developed, and the performance costs are rather slight.<br />
<br />
==When C is better==<br />
It isn't all roses, of course. The C quicksort uses an extremely<br />
ingenious technique, invented by Hoare, whereby it sorts the array<br />
''in place''; that is, without using any extra storage. As a<br />
result, it runs quickly, and in a small amount of memory. In<br />
contrast, the Haskell program allocates quite a lot of extra memory<br />
behind the scenes, and runs rather slower than the C program. <br />
<br />
In effect, the C quicksort does some very ingenious storage<br />
management, trading this algorithmic complexity for a reduction in<br />
run-time storage management costs.<br />
<br />
In applications where [[performance]] is required at any cost, or when the<br />
goal is detailed tuning of a low-level algorithm, an imperative<br />
language like C would probably be a better choice than Haskell,<br />
exactly because it provides more intimate control over the exact way<br />
in which the computation is carried out.<br />
<br />
===Functional vs imperative===<br />
But few programs require performance at any cost! After all, we all<br />
stopped writing assembly-language programs, except perhaps for key<br />
inner loops, long ago. The benefits of having a more supportive<br />
programming model (an arbitrary number of named, local variables<br />
instead of a fixed number of registers, for example) far outweigh the<br />
modest run-time costs.<br />
<br />
Similarly, we willingly accept the costs of a virtual memory paging<br />
system, in exchange for the more supportive programming model of an<br />
infinite virtual address space. The days of explicit memory overlays<br />
are over.<br />
<br />
Functional languages take another large step towards a higher-level<br />
programming model. Programs are easier to design, write and maintain,<br />
but the language offers the programmer less control over the machine.<br />
For most programs the result is perfectly acceptable.<br />
<br />
==What is Haskell?==<br />
Haskell is a modern, standard, non-strict, purely-functional<br />
programming language. It provides all the features sketched above,<br />
including polymorphic typing, lazy evaluation and higher-order<br />
functions. It also has an innovative type system which supports a<br />
systematic form of overloading and a module system.<br />
<br />
It is specifically designed to handle a wide range of applications,<br />
from numerical through to symbolic. To this end, Haskell has an<br />
expressive syntax, and a rich variety of built-in data types,<br />
including arbitrary-precision integers and rationals, as well as the<br />
more conventional integer, floating-point and boolean types.<br />
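<br />
For example (an illustrative sketch, not from the original paper), the built-in <hask>Integer</hask> type never overflows, and <hask>Rational</hask> values from <hask>Data.Ratio</hask> are exact fractions:<br />
<br />
```haskell
-- Illustrative sketch: arbitrary-precision integers and exact rationals
-- are built in; no special library code is needed by the programmer.
import Data.Ratio ((%))

big :: Integer
big = 2 ^ 100          -- does not overflow

third :: Rational
third = 1 % 3          -- stored exactly, no rounding error

main :: IO ()
main = do
  print big                       -- 1267650600228229401496703205376
  print (third + third + third)   -- exactly 1 % 1
```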
<br />
There are a number of [[compilers and interpreters]] available. All are<br />
free. First-time users may want to start with [[Hugs]], a small, portable Haskell interpreter. <br />
<br />
See also [[History of Haskell|the History of Haskell]]<br />
<br />
==Does anyone use functional programming?==<br />
Functional programming languages are used in substantial applications.<br />
For example:<br />
* Software AG, a major German software company, market an expert system (Natural Expert) which is programmed in a functional language. Their users find it easy to develop their applications in this language, through which they gain access to an underlying database system. It all runs on an IBM mainframe.<br />
*Ericsson have developed a new functional language, Erlang, to use in their future telephony applications. They have already written 130k-line Erlang applications, and find them very much shorter and faster to develop.<br />
*Amoco ran an experiment in which they re-coded in a functional language a substantial fraction of their main oil-reservoir simulation code, a critical application. The resulting program was vastly shorter, and its production revealed a number of errors in the existing software. Amoco subsequently transcribed the functional program into C++ with encouraging results.<br />
*A researcher at the MITRE corporation is using Haskell to prototype his digital signal-processing applications.<br />
*Researchers at Durham University used a functional language in a seven-year project to build LOLITA, a 30,000-line program for natural-language understanding.<br />
*O2Query is the query language of the O2 object-oriented database system. It is probably the most sophisticated commercially-available object-oriented database query language, and it is a functional language.<br />
*ICAD Inc market a CAD system for mechanical and aeronautical engineers. The language in which the engineers describe their design is functional, and it uses lazy evaluation extensively to avoid recomputing parts of the design which are not currently visible on the screen. This results in substantial performance improvements.<br />
*An incestuous example: the Glasgow Haskell compiler is written in Haskell: a 100,000-line application.<br />
*[http://pugscode.org Pugs], the leading Perl 6 implementation, is written in Haskell.<br />
*As is [http://darcs.net Darcs], a cutting-edge distributed revision control system.<br />
<br />
Some other examples of [[Haskell in practice]].<br />
<br />
Clifford Beshers, of [http://www.linspire.com/ Linspire Inc]., describes their experience with Haskell, and functional programming:<br />
<br />
<blockquote><br />
Linspire, Inc. has used functional programming since its inception in<br />
2001, beginning with extensive use of O'Caml, with a steady shift to<br />
Haskell as its implementations and libraries have matured. Hardware<br />
detection, software packaging and CGI web page generation are all areas<br />
where we have used functional programming extensively.<br />
</blockquote><br />
<br />
<blockquote><br />
Haskell's feature set lets us replace much of our use of little<br />
languages (e.g., bash or awk) and two-level languages (C or C++ bound to<br />
an interpreted language), allowing for faster development, better code<br />
sharing and ultimately faster implementations. Above all, we value<br />
static type checking for minimizing runtime errors in applications that<br />
run in unknown environments and for wrapping legacy programs in strongly<br />
typed functions to ensure that we pass valid arguments.<br />
</blockquote><br />
<br />
==Other frequently-asked questions==<br />
===''Is functional programming hard to learn?''===<br />
<blockquote><br />
Functional programming does require a change in perspective, which<br />
some programmers find hard. But Ericsson's experience in training<br />
programmers in Erlang is that most find the transition easy -<br />
provided they take the need for training seriously rather than assuming<br />
that they can "pick it up on the day".<br />
</blockquote><br />
===''Aren't functional programs very slow?''===<br />
<blockquote>They used to be, perhaps 20 years ago. But the compilers<br />
have long since caught up. Haskell programs run fast for all but the<br />
most performance-demanding applications. At the time of writing, Haskell<br />
compiled via GHC is in 2nd place (behind C) in the <br />
[http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all Great Language Shootout],<br />
with other functional languages also ranked highly.<br />
</blockquote><br />
<br />
===''I already have a large application in C or C++.''===<br />
Also worded as: ''Can I benefit from functional programming without rewriting my whole system?''<br />
<blockquote><br />
<br />
Haskell has been successfully integrated into existing applications in<br />
a number of ways. <br />
[http://www.haskell.org/hdirect/ HaskellDirect] is<br />
an IDL (Interface Description Language)-based tool that allows Haskell<br />
programs to work with software components. Low level C/C++ interfaces<br />
can be generated with<br />
[http://www.haskell.org/greencard Green Card] or <br />
[http://www.cse.unsw.edu.au/~chak/haskell/c2hs/ C->Haskell], allowing <br />
tight integration between Haskell and C. These tools have been used<br />
to build a large number of successful, mixed language systems.<br />
</blockquote><br />
<br />
==='' What libraries does Haskell support?''===<br />
<blockquote><br />
Many software libraries have been developed for Haskell. See the<br />
[[Libraries and tools| list of Haskell libraries]] for a list of much of<br />
what is available.<br />
<br />
</blockquote><br />
<br />
==='' What other software tools for Haskell are there? ''===<br />
<blockquote><br />
Glasgow Haskell comes with a profiler which allows you to find which<br />
parts of your program are consuming most time and space. Chalmers<br />
Haskell has a space-profiling tool, and a quasi-parallel simulator<br />
which allows you to experiment with running your program in<br />
parallel. Hugs also has some similar tools. For a complete list, check<br />
the [[Libraries and tools|tools page]].<br />
</blockquote><br />
<br />
===''Can I get a support contract or a help-line?''===<br />
<blockquote><br />
It used to be the case that if you wanted help, you had to persuade a<br />
Haskell research group that your problem was interesting enough or<br />
important enough that they should spend time helping you for free.<br />
<br><br />
Whilst that is still an option, there is now a<br />
[[Consultants|directory of Haskell Consultants]] who provide:<br />
<br />
*Support for compilers, tools and libraries.<br />
*Help with improving code quality (time, space, robustness, maintainability, etc.) using code reviews and tools.<br />
*Help with using libraries, tools and advanced Haskell features such as type system extensions, exception handling, the foreign function interface, test harnesses, and concurrency.<br />
*Library and application development.<br />
*Staff training.<br />
<br />
These companies and individuals tend to work closely with those<br />
developing Haskell (indeed, they have usually made major<br />
contributions to Haskell themselves). <br />
<br />
</blockquote><br />
===''How can I learn Haskell?''===<br />
<blockquote><br />
The easiest way to learn Haskell is with a [[Books|textbook]]. There are many [[Tutorials|online tutorials]], but you'll have a much easier time learning the basics from a book. After all, Haskell is very different from traditional mainstream languages; it's like learning programming anew.<br />
</blockquote><br />
<br />
===''Comparisons to other languages''===<br />
<blockquote><br />
[[Comparison of functional programming languages | Click to see a table comparing features of Haskell to similar languages]]<br />
</blockquote><br />
<br />
''Based on a paper by Simon Peyton Jones.''</div>