https://wiki.haskell.org/api.php?action=feedcontributions&user=Msouth&feedformat=atom HaskellWiki - User contributions [en] 2021-05-18T12:17:13Z User contributions MediaWiki 1.27.4
https://wiki.haskell.org/index.php?title=Hpaste.el&diff=14578 Hpaste.el 2007-07-21T17:43:23Z<p>Msouth: mentioned and linked to the makefile twb put on the talk page</p>
<hr />
<div>hpaste.el is an Emacs Lisp library that integrates [http://hpaste.org hpaste] into Emacs. It provides two functions, <code>hpaste-paste-region</code> and <code>hpaste-paste-buffer</code>, which send the region or buffer to the hpaste server as required. Most things are customisable: do M-x customize and browse to the Hpaste group to see what you can change. Code is available under the GPL license (see the [[Talk:Hpaste.el|talk page]] for a makefile that lets you keep up to date programmatically). Enjoy!<br />
<br />
<pre-lisp><br />
;;; hpaste.el -- Integration with hpaste: http://hpaste.org.<br />
<br />
;; Author: David House <dmhouse@gmail.com><br />
;; Created: 14th April 2007<br />
;; Version: 1.0<br />
;; License: GPL<br />
<br />
(require 'url)<br />
<br />
(defgroup hpaste nil "Integration with the hpaste pastebin")<br />
(defcustom hpaste-server "http://hpaste.org"<br />
  "Base URL for the hpaste server."<br />
  :type '(string)<br />
  :group 'hpaste)<br />
(defcustom hpaste-default-nick nil<br />
  "What to tell the server your nick is. If NIL, then prompt every time."<br />
  :type '(choice (string) (const :tag "Ask every time" nil))<br />
  :group 'hpaste)<br />
(defcustom hpaste-blank-title nil<br />
  "If non-NIL, don't send a title to the server."<br />
  :type '(boolean)<br />
  :group 'hpaste)<br />
(defcustom hpaste-announce 'ask<br />
  "Whether to announce the paste in the #haskell channel on<br />
Freenode. If ALWAYS, then announce every time. If ASK, then<br />
prompt every time. If NEVER, then never announce."<br />
  :type '(choice (const :tag "Always announce" always)<br />
                 (const :tag "Ask each time" ask)<br />
                 (const :tag "Never announce" never))<br />
  :group 'hpaste)<br />
<br />
(defvar hpaste-last-paste-id nil<br />
  "Numerical ID of the last paste.")<br />
<br />
(defun hpaste-after-paste (&optional redirect url)<br />
  "Callback that runs after a paste is made. Message the user to<br />
say that everything went smoothly, and save the paste ID for use<br />
as a default ID for annotations."<br />
  (message "Paste successful")<br />
  (if redirect<br />
      (progn<br />
        (string-match "/\\([0-9]*\\)\\(#.*\\)?$" url)<br />
        (let ((id (match-string 1 url)))<br />
          (if id (setq hpaste-last-paste-id id))))))<br />
<br />
(defun hpaste-prompt-for-annotate ()<br />
  "Ask the user whether they want to send the paste as an<br />
annotation, and if so, the ID of the paste to<br />
annotate (defaulting to the last paste made through this<br />
interface)."<br />
  (if (y-or-n-p "Send as annotation? ")<br />
      (let* ((prompt<br />
              (if hpaste-last-paste-id<br />
                  (format "Paste to annotate (default %s): "<br />
                          hpaste-last-paste-id)<br />
                "Paste to annotate: "))<br />
             (input (read-from-minibuffer prompt)))<br />
        (if (> (length input) 0) input hpaste-last-paste-id))))<br />
<br />
(defun hpaste-paste-region (beg end)<br />
  "Send the region to the hpaste server specified in<br />
`hpaste-server'. Use the nick in `hpaste-default-nick', or prompt<br />
for one if that is NIL. You can still appear as (anonymous) by<br />
just not filling out a nick when prompted (just hit RET). Prompt<br />
for a title, unless `hpaste-blank-title' is non-NIL, in which<br />
case just send a blank title. Pastes will be announced in<br />
#haskell on Freenode according to `hpaste-announce'; see the<br />
docstring of that variable for more information.<br />
<br />
For more information on hpaste, see http://hpaste.org"<br />
  (interactive "r")<br />
  (let* ((nick (or hpaste-default-nick (read-from-minibuffer "Nick: ")))<br />
         (title (if hpaste-blank-title "" (read-from-minibuffer "Title: ")))<br />
         (annot-id (hpaste-prompt-for-annotate))<br />
         (announce (if (or (eq hpaste-announce 'always)<br />
                           (and (eq hpaste-announce 'ask)<br />
                                (y-or-n-p "Announce paste? ")))<br />
                       "&announce=true"<br />
                     ""))<br />
         (url (concat hpaste-server<br />
                      (if annot-id (concat "/annotate/" annot-id)<br />
                        "/new")))<br />
         (url-request-method "POST")<br />
         (url-request-extra-headers<br />
          '(("Content-Type" . "application/x-www-form-urlencoded")))<br />
         (url-request-data<br />
          (format "content=%s&nick=%s&title=%s%s&x=0&y=0\r\n"<br />
                  (url-hexify-string (buffer-substring-no-properties beg end))<br />
                  (url-hexify-string nick)<br />
                  (url-hexify-string title)<br />
                  announce)))<br />
    (url-retrieve url 'hpaste-after-paste)))<br />
<br />
(defun hpaste-paste-buffer ()<br />
  "Like `hpaste-paste-region', but paste the entire buffer instead."<br />
  (interactive)<br />
  (hpaste-paste-region (point-min) (point-max)))<br />
<br />
(provide 'hpaste)<br />
</pre-lisp></div>Msouth<br />
https://wiki.haskell.org/index.php?title=Monads_as_containers&diff=14576 Monads as containers 2007-07-21T17:16:56Z<p>Msouth: changed link to monad laws from apparently non-existent server to internal wiki page</p>
<hr />
<div>''There now exists a translation of this article into [http://community.livejournal.com/ru_lambda/12467.html Russian]!''<br />
<br />
----<br />
<br />
A ''monad'' is a container type together with a few methods defined on it.<br />
Monads model different kinds of computations.<br />
<br />
As with Haskell lists, all the elements which a monadic container holds at any<br />
one time must be of the same type (the container is homogeneous).<br />
<br />
There are a few ways to choose the basic set of functions that one can perform<br />
on these containers to be able to define a monad. Haskell generally uses a<br />
pair of functions called return and bind <code>(>>=)</code>, but it is more natural<br />
sometimes to begin with map (<code>fmap</code>), return and join, as these are simpler<br />
to understand at first. We can later define bind in terms of these.<br />
<br />
The first of these three, generally called ''map'', (but called <code>fmap</code> in<br />
Haskell 98) actually comes from the definition of a ''functor''. We can think<br />
of a functor as a type of container where we are permitted to apply a single<br />
function to every object in the container.<br />
<br />
That is, if <code style="color:darkgreen">f</code> is a functor, and we are given a function of type <code>(<span style="color:firebrick">a</span> -> <span style="color:mediumblue">b</span>)</code>,<br />
and a container of type <code>(<span style="color:darkgreen">f</span> <span style="color:firebrick">a</span>)</code>, we can get a new container of type <code>(<span style="color:darkgreen">f</span> <span style="color:mediumblue">b</span>)</code>.<br />
<br />
This is expressed in the type of <code>fmap</code>:<br />
<haskell><br />
fmap :: (Functor f) => (a -> b) -> f a -> f b<br />
</haskell><br />
''If you will give me a blueberry for each apple I give you'' <code>(<span style="color:firebrick">a</span> -> <span style="color:mediumblue">b</span>)</code>'', and I have a box of apples ''<code>(<span style="color:darkgreen">f</span> <span style="color:firebrick">a</span>)</code>'', then I can get a box of blueberries ''<code>(<span style="color:darkgreen">f</span> <span style="color:mediumblue">b</span>)</code>''.''<br />
<br />
Every monad is a functor.<br />
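A couple of concrete uses of <code>fmap</code> with standard functors (lists and <code>Maybe</code>) make this tangible:

```haskell
-- fmap applies a function to the contents, leaving the shape of the
-- container unchanged.
listMapped :: [Int]
listMapped = fmap (+ 1) [1, 2, 3]           -- [2,3,4]

maybeMapped :: Maybe String
maybeMapped = fmap show (Just (42 :: Int))  -- Just "42"
```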
<br />
The second method, ''return'', is specific to monads. If <code style="color:indigo">m</code> is a monad, then<br />
return takes an element of type <code style="color:firebrick">a</code>, and gives a container of type <code>(<span style="color:indigo">m</span> <span style="color:firebrick">a</span>)</code><br />
with that element in it. So, its type in Haskell is<br />
<haskell><br />
return :: (Monad m) => a -> m a<br />
</haskell><br />
''If I have an apple'' <code>(<span style="color:firebrick">a</span>)</code> ''then I can put it in a box'' <code>(<span style="color:indigo">m</span> <span style="color:firebrick">a</span>)</code>''.''<br />
<br />
The third method, ''join'', also specific to monads, takes a container of<br />
containers <code><span style="color:indigo">m</span> (<span style="color:indigo">m</span> <span style="color:firebrick">a</span>)</code>, and combines them into one <code><span style="color:indigo">m</span> <span style="color:firebrick">a</span></code> in some sensible<br />
fashion. Its Haskell type is<br />
<haskell><br />
join :: (Monad m) => m (m a) -> m a<br />
</haskell><br />
''If I have a box of boxes of apples'' <code>(<span style="color:indigo">m</span> (<span style="color:indigo">m</span> <span style="color:firebrick">a</span>))</code> ''then I can take the apples from each, and put them in a new box'' <code>(<span style="color:indigo">m</span> <span style="color:firebrick">a</span>)</code>''.''<br />
<br />
From these, we can construct an important operation called ''bind'' or<br />
''extend'', which is commonly given the symbol <code>(>>=)</code>. When you define your<br />
own monad in Haskell, it will expect you to define just return and bind. It<br />
turns out that mapping and joining come for free from these two. Although only<br />
return and bind are needed to define a monad, it is usually simpler to think<br />
about map, return, and join first, and then get bind from these, as map and<br />
join are in general simpler than bind.<br />
<br />
What bind does is to take a container of type <code>(<span style="color:indigo">m</span> <span style="color:firebrick">a</span>)</code> and a function of type<br />
<code>(<span style="color:firebrick">a</span> -> <span style="color:indigo">m</span> <span style="color:mediumblue">b</span>)</code>. It first maps the function over the container, (which would give <br />
an <code><span style="color:indigo">m</span> (<span style="color:indigo">m</span> <span style="color:mediumblue">b</span>)</code>) and then applies join to the result to get a container of type<br />
<code>(<span style="color:indigo">m</span> <span style="color:mediumblue">b</span>)</code>. Its type and definition in Haskell is then<br />
<haskell><br />
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b<br />
xs >>= f = join (fmap f xs) <br />
-- how we would get bind (>>=) in Haskell if it were join and fmap<br />
-- that were chosen to be primitive<br />
</haskell><br />
''If I have a box of apples'' <code>(<span style="color:indigo">m</span> <span style="color:firebrick">a</span>)</code> ''and for each apple, you will give me a box of blueberries'' <code>(<span style="color:firebrick">a</span> -> <span style="color:indigo">m</span> <span style="color:mediumblue">b</span>)</code> ''then I can get a box with all the blueberries together'' <code>(<span style="color:indigo">m</span> <span style="color:mediumblue">b</span>)</code>''.''<br />
<br />
Note that for a given container type, there might be more than one way to<br />
define these basic operations (though for obvious reasons, Haskell will only<br />
permit one instance of the Monad class per actual type). [Technical side note:<br />
The functions return and bind need to satisfy<br />
[[Monad_laws | a few laws]] in order to<br />
make a monad, but if you define them in a sensible way given what they are<br />
supposed to do, the laws will work out. The laws are only a formal way to<br />
give the informal description of the meanings of return and bind I have here.]<br />
<br />
It would be good to have a concrete example of a monad at this point, as these<br />
functions are not very useful if we cannot find any examples of a type to<br />
apply them to.<br />
<br />
Lists are most likely the simplest, most illustrative example. Here, <code>fmap</code><br />
is just the usual <code>map</code>, <code>return</code> is just <code>(\x -> [x])</code> and <code>join</code> is <code>concat</code>.<br />
<br />
<haskell><br />
instance Monad [] where<br />
--return :: a -> [a]<br />
return x = [x] -- make a list containing the one element given<br />
<br />
--(>>=) :: [a] -> (a -> [b]) -> [b]<br />
xs >>= f = concat (map f xs) <br />
-- collect up all the results of f (which are lists)<br />
-- and combine them into a new list<br />
</haskell><br />
<br />
The list monad, in some sense, models computations which could return any<br />
number of values. Bind pumps values in, and catches all the values output.<br />
Such computations are known in computer science as ''nondeterministic''.<br />
That is, a list [x,y,z] represents a value which is all of the values x, y,<br />
and z at once.<br />
<br />
A couple of examples of using this definition of bind:<br />
<haskell><br />
[10,20,30] >>= \x -> [x, x+1] <br />
-- a function which takes a number and gives both it and its<br />
-- successor at once<br />
= [10,11,20,21,30,31]<br />
<br />
[10,20,30] >>= \x -> [x, x+1] >>= \y -> if y > 20 then [] else [y,y]<br />
= [10,10,11,11,20,20]<br />
</haskell><br />
<br />
And a simple fractal, exploiting the fact that lists are ordered:<br />
<haskell><br />
f x | x == '#' = "# #"<br />
    | otherwise = "   "<br />
<br />
"#" >>= f >>= f >>= f >>= f<br />
= "# #   # #         # #   # #                           # #   # #         # #   # #"<br />
</haskell><br />
<br />
You might notice a similarity here between bind and function application or<br />
composition, and this is no coincidence. The reason that bind is so important<br />
is that it serves to chain computations on monadic containers together.<br />
<br />
You might be interested in how, given just bind and return, we can get back<br />
to map and join.<br />
<br />
Mapping is equivalent to binding to a function which only returns containers<br />
with a single value in them -- the value that we get from the function of<br />
type <code>(a -> b)</code> which we are handed.<br />
<br />
The function that does this for any monad in Haskell is called <code>liftM</code> -- it<br />
can be written in terms of return and bind as follows:<br />
<haskell><br />
liftM :: (Monad m) => (a -> b) -> m a -> m b<br />
liftM f xs = xs >>= (return . f)<br />
-- take a container full of a's, to each, apply f,<br />
-- put the resulting value of type b in a new container,<br />
-- and then join all the containers together.<br />
</haskell><br />
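For example (<code>liftM</code> lives in <code>Control.Monad</code>; for lists it behaves exactly like <code>map</code>):

```haskell
import Control.Monad (liftM)

listLifted :: [Int]
listLifted = liftM (+ 1) [1, 2, 3]   -- [2,3,4], same as map (+ 1) [1,2,3]

maybeLifted :: Maybe Int
maybeLifted = liftM (* 2) (Just 10)  -- Just 20
```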
<br />
Joining is equivalent to binding a container with the identity map. This is<br />
indeed still called <code>join</code> in Haskell:<br />
<haskell><br />
join :: (Monad m) => m (m a) -> m a<br />
join xss = xss >>= id<br />
</haskell><br />
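For lists this is just <code>concat</code>, and for <code>Maybe</code> it collapses one layer of <code>Just</code>:

```haskell
import Control.Monad (join)

flatList :: [Int]
flatList = join [[1, 2], [3], []]   -- [1,2,3], i.e. concat

flatMaybe :: Maybe Int
flatMaybe = join (Just (Just 5))    -- Just 5
```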
<br />
It is common when constructing monadic computations that one ends up with a<br />
large chain of binds and lambdas. For this reason, some syntactic sugar called<br />
"do notation" was created to simplify this process, and at the same time, make<br />
the computations look somewhat like imperative programs.<br />
<br />
Note that in what follows, the syntax emphasizes the fact that the list monad<br />
models nondeterminism: the code <code>y <- xs</code> can be thought of as <code>y</code> taking on<br />
all the values in the list <code>xs</code> at once.<br />
<br />
The above (perhaps somewhat silly) list computations could be written:<br />
<haskell><br />
do x <- [10,20,30]<br />
[x, x+1]<br />
</haskell><br />
and,<br />
<haskell><br />
do x <- [10,20,30]<br />
y <- [x, x+1]<br />
if y > 20 then [] else [y,y]<br />
</haskell><br />
<br />
The code for liftM could be written:<br />
<haskell><br />
liftM f xs = do a <- xs<br />
return (f a)<br />
</haskell><br />
<br />
----<br />
<br />
If you understood the above, then you have a good start on understanding<br />
monads in Haskell. Check out Maybe (containers with at most one thing in them,<br />
modelling computations that might not return a value) and State (modelling<br />
computations that carry around a state parameter using an odd sort of<br />
container described below), and you'll start getting the picture as to how<br />
things work.<br />
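For reference, the <code>Maybe</code> instance from the Prelude is short enough to quote, and a hypothetical helper <code>safeDiv</code> (made up for illustration) shows the short-circuiting it gives you:

```haskell
-- The Prelude's instance (modulo the Applicative superclass in modern GHC):
--   instance Monad Maybe where
--     return        = Just
--     Nothing >>= _ = Nothing
--     Just x  >>= f = f x

-- Division that refuses to divide by zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

twoSteps :: Maybe Int
twoSteps = safeDiv 10 2 >>= \q -> safeDiv 100 q   -- Just 20

failing :: Maybe Int
failing = safeDiv 10 0 >>= \q -> safeDiv 100 q    -- Nothing: first step failed
```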
<br />
A good exercise is to figure out a definition of bind and return (or fmap,<br />
join and return) which make the following tree type a monad. Just keep in<br />
mind what they are supposed to do.<br />
<haskell><br />
data Tree a = Leaf a | Branch (Tree a) (Tree a)<br />
</haskell><br />
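If you want to check your answer afterwards, here is one set of definitions that works (modern GHC also requires the Functor and Applicative instances, included below; under Haskell 98 the Monad instance alone suffices):

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Show, Eq)

instance Functor Tree where
  fmap f (Leaf x)     = Leaf (f x)
  fmap f (Branch l r) = Branch (fmap f l) (fmap f r)

instance Applicative Tree where
  pure = Leaf
  fs <*> xs = fs >>= \f -> fmap f xs

-- return puts a value in a Leaf; bind grafts the tree produced by f
-- onto the position of each old leaf.
instance Monad Tree where
  return = pure
  Leaf x     >>= f = f x
  Branch l r >>= f = Branch (l >>= f) (r >>= f)
```

For example, <code>Branch (Leaf 1) (Leaf 2) >>= \x -> Branch (Leaf x) (Leaf (10 * x))</code> grows a small branch at each leaf.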
<br />
For more good examples of monads, and lots of explanation see<br />
[http://www.haskell.org/all_about_monads/html/ All About Monads] which has a catalogue of the commonest<br />
ones, more explanation as to why you might be interested in monads, and<br />
information about how they work.<br />
<br />
----<br />
<br />
The question that many people will be asking at this point is "What does this<br />
all have to do with IO?".<br />
<br />
Well, in Haskell, IO is a monad. How does this mesh with the notion of a<br />
container?<br />
<br />
Consider <code>getChar :: IO Char</code> -- that is, an IO container with a character<br />
value in it. The exact character that the container holds is determined when<br />
the program runs by which keys the user presses. Trying to get the character<br />
out of the box will cause a side effect: the program stops and waits for the<br />
user to press a key. This is generally true of IO values - when you get a<br />
value out with bind, side effects can occur. Many IO containers don't<br />
actually contain interesting values. For example,<br />
<haskell><br />
putStrLn "Hello, World!" :: IO ()<br />
</haskell><br />
That is, the value returned by <code>putStrLn "Hello, World!"</code> is an IO container<br />
filled with a value of type <code>()</code>, a not so interesting type. However, when<br />
you pull this value out during a bind operation, the string <code>Hello, World!</code><br />
is printed on the screen. So another way to think about values of type <code>IO t</code><br />
is as computations which when executed, may have side effects before<br />
returning a value of type t.<br />
<br />
One thing that you might notice as well, is that there is no ordinary Haskell<br />
function you can call (at least not in standard Haskell) to actually get a<br />
value out of an IO container/computation, other than bind, which puts it right<br />
back in. Such a function of type <code>IO a -> a</code> would be very unsafe in the pure<br />
Haskell world, because the value produced could be different each time it was<br />
called, and the IO computation could have side effects, and there would be no<br />
way to control when it was executed (Haskell is lazy after all). So how do IO<br />
actions ever get run? The IO action called <code>main</code> runs when the program is<br />
executed. It can make use of other IO actions in the process, and everything<br />
starts from there.<br />
<br />
When doing IO, a handy special form of bind when you just want the side<br />
effects and don't care about the values returned by the container on the left<br />
is this:<br />
<haskell><br />
(>>) :: Monad m => m a -> m b -> m b<br />
m >> k = m >>= \_ -> k<br />
</haskell><br />
<br />
An example of doing some IO in do notation:<br />
<haskell><br />
main = do putStrLn "Hello, what is your name?"<br />
name <- getLine<br />
putStrLn ("Hello " ++ name ++ "!")<br />
</haskell><br />
<br />
or in terms of bind, making use of the special form:<br />
<haskell><br />
main = putStrLn "Hello, what is your name?" >><br />
getLine >>= \name -><br />
putStrLn ("Hello " ++ name ++ "!")<br />
</haskell><br />
<br />
or, very primitive, without the special form for bind:<br />
<haskell><br />
main = putStrLn "Hello, what is your name?" >>= \x -><br />
getLine >>= \name -><br />
putStrLn ("Hello " ++ name ++ "!")<br />
</haskell><br />
<br />
----<br />
<br />
Another good example of a monad which perhaps isn't obviously a container at<br />
first, is the [[Reader monad]]. This monad basically consists<br />
of functions from a particular type: <code>((->) e)</code>, which might be written<br />
<code>(e ->)</code> if that were supported syntax. These can be viewed as containers<br />
indexed by values of type <code>e</code>, having one spot for each and every value of<br />
type <code>e</code>. The primitive operations on them follow naturally from thinking<br />
this way.<br />
<br />
The [[Reader monad]] models computations which read from (depend on) a shared<br />
environment. To clear up the correspondence, the type of the environment is<br />
the index type on our indexed containers.<br />
<br />
<haskell><br />
type Reader e = (->) e -- our monad<br />
</haskell><br />
<br />
Return simply produces the container having a given value at every spot.<br />
<haskell><br />
return :: a -> (Reader e a)<br />
return x = (\k -> x)<br />
</haskell><br />
<br />
Mapping a function over such a container turns out to be nothing more than<br />
what composition does.<br />
<br />
<haskell><br />
fmap :: (a -> b) -> Reader e a -> Reader e b<br />
= (a -> b) -> (e -> a) -> (e -> b)<br />
-- by definition, (Reader e a) = (e -> a)<br />
<br />
fmap f xs = f . xs<br />
</haskell><br />
<br />
How about join? Well, let's have a look at the types.<br />
<haskell><br />
join :: (Reader e) (Reader e a) -> (Reader e a)<br />
= (e -> e -> a) -> (e -> a) -- by definition of (Reader e)<br />
</haskell><br />
There is really only one sensible thing the constructed function of type<br />
<code>(e -> a)</code> could be doing:<br />
<br />
<haskell><br />
join xss = (\k -> xss k k)<br />
</haskell><br />
<br />
From the container perspective, we are taking an indexed container of indexed<br />
containers and producing a new one which at index <code>k</code>, has the value at index<br />
<code>k</code> in the container at index <code>k</code>.<br />
<br />
So we can derive what we want bind to do based on this:<br />
<haskell><br />
(>>=) :: (Reader e a) -> (a -> Reader e b) -> (Reader e b)<br />
= (e -> a) -> (a -> (e -> b)) -> (e -> b) -- by definition<br />
<br />
xs >>= f = join (fmap f xs)<br />
= join (f . xs)<br />
= (\k -> (f . xs) k k)<br />
= (\k -> f (xs k) k)<br />
</haskell><br />
<br />
Which is exactly what you'll find in other definitions of the Reader monad.<br />
What is it doing? Well, it's taking a container <code>xs</code>, and a function <code>f</code> from<br />
the values in it to new containers, and producing a new container which at<br />
index <code>k</code>, holds the result of looking up the value at <code>k</code> in <code>xs</code>, and then<br />
applying <code>f</code> to it to get a new container, and finally looking up the value<br />
in that container at <code>k</code>.<br />
<br />
The [[Monads as computation]] perspective makes the purpose of such a monad perhaps<br />
more obvious: bind is taking a computation which may read from the environment<br />
before producing a value of type <code>a</code>, and a function from values of type <code>a</code><br />
to computations which may read from the environment before returning a value<br />
of type <code>b</code>, and composing these together, to get a computation which might<br />
read from the (shared) environment, before returning a value of type <code>b</code>.<br />
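A small illustration (the function names are made up; the instance for <code>((->) e)</code> ships with GHC's base library): two "containers" read the same environment, and do-notation combines them without the argument ever being passed explicitly.

```haskell
double, addTen :: Int -> Int
double = (* 2)
addTen = (+ 10)

-- Both computations read the same shared Int environment.
combined :: Int -> Int
combined = do
  x <- double
  y <- addTen
  return (x + y)

-- combined 3 = (3 * 2) + (3 + 10) = 19
```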
<br />
----<br />
<br />
How about the [[State monad]]? Although I'll admit that<br />
with State and IO in particular, it is generally more natural to take the<br />
view of [[Monads as computation]], it is good to see that the container analogy<br />
doesn't break down. The state monad is a particular refinement of the reader<br />
monad discussed above.<br />
<br />
I won't go into huge detail about the state monad here, so if you don't<br />
already know what it's for, what follows may seem a bit unnatural. It's<br />
perhaps better taken as a secondary way to look at the structure.<br />
<br />
For reference to the analogy, a value of type <code>(State s a)</code> is like a<br />
container indexed by values of type <code>s</code>, and at each index, it has a value of<br />
type <code>a</code> and another, new value of type <code>s</code>. The function <code>runState</code> does<br />
this "lookup".<br />
<br />
<haskell><br />
newtype State s a = State { runState :: (s -> (a,s)) }<br />
</haskell><br />
<br />
What does return do? It gives a <code>State</code> container with the given element at<br />
every index, and with the "address" (a.k.a. state parameter) unchanged.<br />
<haskell><br />
return :: a -> State s a<br />
return x = State (\s -> (x,s))<br />
</haskell><br />
<br />
Mapping does the natural thing, applying a function to each of the values of<br />
type <code>a</code>, throughout the structure.<br />
<haskell><br />
fmap :: (a -> b) -> (State s a) -> (State s b)<br />
fmap f (State m) = State (onVal f . m)<br />
where onVal f (x, s) = (f x, s)<br />
</haskell><br />
<br />
Joining needs a bit more thought. We want to take a value of type<br />
<code>(State s (State s a))</code> and turn it into a <code>(State s a)</code> in a natural way.<br />
This is essentially removal of indirection. We take the new address and new<br />
box that we get from looking up a given address in the box, and we do another<br />
lookup -- note that this is almost the same as what we did with the reader<br />
monad, only we use the new address that we get at the location, rather than<br />
the same address as for the first lookup.<br />
<br />
So we get:<br />
<haskell><br />
join :: (State s (State s a)) -> (State s a)<br />
join xss = State (\s -> uncurry runState (runState xss s))<br />
</haskell><br />
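Putting the pieces together (bind as join of fmap) gives the familiar definition of bind for State; the sketch below uses local names like <code>bindS</code> and <code>tick</code> to avoid clashing with the standard library:

```haskell
newtype State s a = State { runState :: s -> (a, s) }

returnS :: a -> State s a
returnS x = State (\s -> (x, s))

-- bind: look up (a, s') at the current index, then continue from s'.
bindS :: State s a -> (a -> State s b) -> State s b
bindS m f = State (\s -> let (a, s') = runState m s
                         in  runState (f a) s')

-- A counter: return the current state and increment it.
tick :: State Int Int
tick = State (\s -> (s, s + 1))

-- runState (tick `bindS` \a -> tick `bindS` \b -> returnS (a, b)) 0
--   = ((0, 1), 2)
```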
<br />
----<br />
<br />
I hope that the above was a reasonably clear introduction to what monads are<br />
about. Feel free to [[Talk:Monads as containers|make criticisms and ask questions]].<br />
<br />
- [[User:CaleGibbard|CaleGibbard]]<br />
<br />
[[Category:Tutorials]] [[Category:Monad]]</div>Msouth<br />
https://wiki.haskell.org/index.php?title=Why_Haskell_matters&diff=14111 Why Haskell matters 2007-07-09T18:18:34Z<p>Msouth: to hard too => too hard to</p>
<hr />
<div>==What are functional programming languages?==<br />
Programming languages such as C/C++/Java/Python are called imperative programming languages because they consist of sequences of actions. The programmer quite explicitly tells the computer how to perform a task, step-by-step.<br />
Functional programming languages work differently. Rather than performing actions in a sequence, they evaluate expressions.<br />
<br />
<br />
=== The level of abstraction ===<br />
There are two areas that are fundamental to programming a computer - resource management and sequencing. Resource management (allocating registers and memory) has been the target of vast abstraction: most new languages (imperative as well as functional) implement garbage collection, which removes resource management from the problem and lets the programmer focus on the algorithm instead of the book-keeping task of allocating memory.<br />
Sequencing has also undergone some abstraction, although not nearly to the same extent. Imperative languages have done so by introducing new keywords and standard libraries. For example, most imperative languages have special syntax for constructing several slightly different loops, so you no longer have to manage these loops yourself.<br />
But imperative languages are based upon the notion of sequencing - they can never escape it completely. The only way to raise the level of abstraction in the sequencing area for an imperative language is to introduce more keywords or standard functions, thus cluttering up the language.<br />
This close relationship between imperative languages and the task of sequencing commands for the processor to execute means that imperative languages can never rise above the task of sequencing, and as such can never reach the level of abstraction that functional programming languages can.<br />
<br />
In Haskell, the sequencing task is removed. You only care about what the program is to compute, not how or when it is computed. This makes Haskell a more flexible and easier to use language. Haskell tends to be part of the solution for a problem, not a part of the problem itself.<br />
<br />
=== Functions and side-effects in functional languages ===<br />
Functions play an important role in functional programming languages. Functions are considered to be values just like Int or String. A function can return another function, it can take a function as a parameter, and it can even be constructed by composing functions.<br />
This offers a stronger "glue" to combine the modules of your program. A function that evaluates some expression can take part of the computation as an argument for instance, thus making the function more modular.<br />
You could also have a function construct another function. For instance, you could define a function "differentiate" that will differentiate a given function numerically. So if you then have a function "f" you could define "f' = differentiate f", and use it as you normally would in a mathematical context.<br />
These types of functions are called ''higher order functions''.<br />
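A sketch of that hypothetical <code>differentiate</code> (the step size <code>h</code> and the central-difference formula are arbitrary choices):

```haskell
-- Numerically differentiate a function using a central difference.
differentiate :: (Double -> Double) -> (Double -> Double)
differentiate f = \x -> (f (x + h) - f (x - h)) / (2 * h)
  where h = 1e-6

f :: Double -> Double
f x = x * x

f' :: Double -> Double
f' = differentiate f   -- f' x is approximately 2 * x
```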
<br />
Here is a short Haskell example of a function numOf that counts the number of elements in a list that satisfy a certain property.<br />
<br />
<haskell>numOf p xs = length (filter p xs)</haskell><br />
<br />
We will discuss Haskell syntax later, but what this line says is just "To get the result, filter the list xs by the test p and compute the length of the result".<br />
Now p is a function that takes an element and returns True or False, determining whether the element passes or fails the test. So numOf is a higher order function: some of the functionality is passed to it as an argument. Notice that filter is also a higher order function; it takes the "test function" as an argument.<br />
Let's play with this function and define some more specialized functions from it.<br />
<br />
<haskell>numOfEven xs = numOf even xs</haskell><br />
<br />
Here we define the function numOfEven which counts the number of even elements in a list. Note that we do not need to explicitly declare xs as a parameter. We could just as well write numOfEven = numOf even. A very clear definition indeed. But we'll explicitly type out the parameters for now.<br />
<br />
Let's define a function which counts the number of elements that are greater than or equal to 5:<br />
<br />
<haskell>numOfGE5 xs = numOf (>=5) xs</haskell><br />
<br />
Here the test function is just ">=5" which is passed to numOf to give us the functionality we need.<br />
<br />
Hopefully you can now see that the modularity of functional programming allows us to define a generic function where some of the functionality is passed as an argument, and then use it to define shorthands for any number of specialized functions.<br />
This small example is somewhat trivial - it wouldn't be too hard to re-write the definitions of all the functions above - but for more complex functions this comes in handy.<br />
You can, for instance, write only one function for traversing an auto-balancing binary tree and have it take some of the functionality as a parameter (for instance the comparison function). This would allow you to traverse the tree for any data type by simply providing the relevant comparison function for your needs. Thus you can expend some effort in making sure the general function is correct, and then all the specialized functions will also be correct. Not to mention you wouldn't have to copy and paste code all over your project.<br />
This concept is possible in some imperative languages as well. In some object oriented languages you often have to provide a "Comparator object" for trees and other standard data structures. The difference is that the Haskell way is a lot more intuitive and elegant (creating a separate type just for comparing two other types and then passing an object of this type is hardly an elegant way of doing it), so it's more likely to be used frequently (and not just in the standard libraries).<br />
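For comparison, here is what passing a comparison function looks like in Haskell (the <code>Person</code> type and the data are made up for illustration):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

data Person = Person { name :: String, age :: Int } deriving Show

people :: [Person]
people = [Person "Alice" 30, Person "Bob" 25]

-- The "comparator" is just an ordinary function value, passed directly;
-- no separate comparator class or object is needed.
byAge :: [Person]
byAge = sortBy (comparing age) people   -- Bob (25) first, then Alice (30)
```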
<br />
A central concept in functional languages is that the result of a function is determined by its input, and only by its input. There are no side-effects! This extends to variables as well - ''variables in Haskell do not vary''. This may sound strange if you're used to imperative programming (where ''most'' of the code consists of changing the "contents" of a variable), but it's really quite natural. A variable in Haskell is a name that is bound to some value, rather than an abstraction of some low-level concept of a memory cell like in imperative languages. When variables are thought of as short-hands for values (just like they are in mathematics), it makes perfect sense that variable updates are not allowed. You wouldn't expect "4 = 5" to be a valid assignment in ''any'' language, so it's really quite strange that "x = 4; x = 5" is.<br />
This is often hard to grasp for programmers who are very used to imperative languages, but it isn't as strange as it first seems. So when you start thinking things like "This is too weird, I'm going back to C++!", try to force yourself to continue learning Haskell - you'll be glad you did.<br />
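For example, a binding in Haskell is an equation, not an update; in this minimal illustration, the commented-out line would be a compile error if restored:<br />

```haskell
-- A name is bound to a value once per scope; "updating" it is a
-- compile-time error, just like writing 4 = 5 would be.
x :: Int
x = 4
-- x = 5   -- rejected: Multiple declarations of 'x'

y :: Int
y = x + 1  -- new values come from new names, not from mutation
```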
<br />
<br />
Removing side-effects from the equation allows expressions to be evaluated in any order. A function will always return the same result if passed the same input - no exceptions.<br />
This determinism removes a whole class of bugs found in imperative programs. In fact, you could even argue that ''most'' bugs in large systems can be traced back to side-effects - if not directly caused by them, then caused by a flawed design that relies on side-effects.<br />
This means that functional programs tend to have far fewer bugs than imperative ones.<br />
<br />
<br />
=== Conclusion ===<br />
Because functional languages are more intuitive and offer more and easier ways to get the job done, functional programs tend to be shorter (usually between 2 and 10 times shorter). The semantics are most often a lot closer to the problem than an imperative version, which makes it easier to verify that a function is correct. Furthermore, Haskell doesn't allow side-effects, which leads to fewer bugs. Thus Haskell programs are easier to write, more robust, and easier to maintain.<br />
<br />
== What can Haskell offer the programmer? ==<br />
Haskell is a modern general purpose language developed to incorporate the collective wisdom of the functional programming community into one elegant, powerful and general language.<br />
<br />
===Purity===<br />
Unlike some other functional programming languages Haskell is pure. It doesn't allow ''any'' side-effects. This is probably the most important feature of Haskell. We've already briefly discussed the benefits of pure, side-effect free, programming - and there's not much more we can say about that. You'll need to experience it yourself.<br />
<br />
===Laziness===<br />
Another feature of Haskell is that it is lazy (technically speaking, it's "non-strict"). This means that nothing is evaluated until it has to be evaluated. You could, for instance, define an infinite list of primes without ending up in infinite recursion. Only the elements of this list that are actually used will be computed. This allows for some very elegant solutions to many problems. A typical pattern for solving a problem is to define a list of all possible solutions and then filter away the illegal ones; the remaining list will then contain only legal solutions. Lazy evaluation makes this operation very clean. If you only need one solution you can simply extract the first element of the resulting list - lazy evaluation will make sure that nothing is needlessly computed.<br />
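As a small illustration (using a simple but inefficient sieve), an infinite list of primes is unproblematic under lazy evaluation:<br />

```haskell
-- An infinite list of primes; nothing is computed until demanded.
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- Only the first ten elements are ever computed here:
firstTen :: [Integer]
firstTen = take 10 primes

-- If you only need one solution, take the head of the filtered list:
firstPrimeOver100 :: Integer
firstPrimeOver100 = head (filter (> 100) primes)
```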
<br />
===Strong typing===<br />
Furthermore, Haskell is strongly typed; this means just what it sounds like. It's impossible to inadvertently convert a Double to an Int, or to follow a null pointer. This also leads to fewer bugs. It might be a pain in the neck in the rare cases where you need to convert an Int to a Double explicitly before performing some operation, but in practice this doesn't happen often enough to become a nuisance. In fact, forcing each conversion to be explicit often helps to highlight problem code.<br />
In other languages where these conversions are invisible, problems often arise when the compiler treats a double like an integer or, even worse, an integer like a pointer.<br />
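Such a conversion is always spelled out, typically with fromIntegral; a small sketch:<br />

```haskell
-- Ints never silently become Doubles; the conversion is explicit.
average :: [Int] -> Double
average xs = fromIntegral (sum xs) / fromIntegral (length xs)

-- average xs = sum xs / length xs   -- would be rejected by the type checker
```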
<br />
Unlike other strongly typed languages, types in Haskell are automatically inferred. This means that you very rarely have to declare the types of your functions, except as a means of code documentation. Haskell will look at how you use a variable and figure out from there what type it should be - and then everything is type-checked to ensure there are no type mismatches. <br />
Python has the notion of "duck typing", meaning "If it walks and talks like a duck, it's a duck!". You could argue that Haskell has a much better form of duck typing. If a value walks and talks like a duck, then it will be considered a duck through type inference, but unlike Python the compiler will also catch errors if later on it tries to bray like a donkey!<br />
So you get the benefits of strong typing (bugs are caught at compile-time, rather than run-time) without the hassle that comes with it in other languages.<br />
Furthermore, Haskell will always infer the most general type of a definition. So if you write, say, a sorting function without a type declaration, Haskell will make sure the function works for all values that can be sorted.<br />
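A small sketch of inference at work; neither signature below needs to be written, and GHC infers the fully polymorphic types shown in the comments:<br />

```haskell
-- No signatures needed; the inferred types are the most general possible.
swap (x, y) = (y, x)      -- inferred: swap :: (a, b) -> (b, a)

applyTwice f x = f (f x)  -- inferred: applyTwice :: (a -> a) -> a -> a
```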
<br />
Compare how you would do this in certain object oriented languages. To gain polymorphism you would have to use some base class, and then declare your variables as instances of subclasses of this base class. It all amounts to tons of extra work and ridiculously complex declarations just to proclaim the existence of a variable. Furthermore, you would have to perform tons of type conversions via explicit casts - definitely not an elegant solution.<br />
If you want to write a polymorphic function in these object oriented languages you would probably declare the parameters as an object of a global base class (like "Object" in Java), which essentially allows the programmer to send anything into the function, even objects which can't logically be passed to the function. The end result is that most functions you write in these languages are not general, they only work on a single data type. You're also moving the error checking from compile-time to run-time. In large systems where some of the functionality is rarely used, these bugs might never be caught until they cause a fatal crash at the worst possible time.<br />
<br />
Haskell provides an elegant, concise and safe way to write your programs. Programs will not crash unexpectedly, nor produce strangely garbled output.<br />
<br />
===Elegance===<br />
Another property of Haskell that is very important to the programmer, even though it doesn't mean as much in terms of stability or performance, is the elegance of Haskell. To put it simply: stuff just works like you'd expect it to.<br />
<br />
To highlight the elegance of Haskell we shall now take a look at a small example. We choose QuickSort because it's a simple algorithm that is actually useful. We will look at two versions - one written in C++, an imperative language, and one written in Haskell.<br />
Both versions use only the functionality available to the programmer without importing any extra modules (otherwise we could just call "sort" in each language's standard library and be done with it!). Thus, we use the standard sequence primitives of each language (a "list" in Haskell and an "array" in C++). Both versions must also be polymorphic (which is done "automatically" in Haskell, and with templates in C++). Both versions must use the same recursive algorithm.<br />
<br />
Please note that this is ''not'' intended as a definitive comparison between the two languages. It's intended to show the elegance of Haskell; the C++ version is only included for comparison (and would be coded quite differently if you used the Standard Template Library, for example).<br />
<br />
template <typename T><br />
void qsort (T *result, T *list, int n)<br />
{<br />
    if (n == 0) return;<br />
    T *smallerList, *largerList;<br />
    smallerList = new T[n];<br />
    largerList = new T[n];<br />
    T pivot = list[0];<br />
    int numSmaller = 0, numLarger = 0;<br />
    for (int i = 1; i < n; i++)<br />
        if (list[i] < pivot)<br />
            smallerList[numSmaller++] = list[i];<br />
        else<br />
            largerList[numLarger++] = list[i];<br />
<br />
    qsort(smallerList, smallerList, numSmaller);<br />
    qsort(largerList, largerList, numLarger);<br />
<br />
    int pos = 0;<br />
    for (int i = 0; i < numSmaller; i++)<br />
        result[pos++] = smallerList[i];<br />
<br />
    result[pos++] = pivot;<br />
<br />
    for (int i = 0; i < numLarger; i++)<br />
        result[pos++] = largerList[i];<br />
<br />
    delete [] smallerList;<br />
    delete [] largerList;<br />
}<br />
<br />
We will not explain this code further, just note how complex and difficult it is to understand at a glance, largely due to the programmer having to deal with low-level details which have nothing to do with the task at hand.<br />
Now, let's take a look at a Haskell version of QuickSort, which might look something like this:<br />
<br />
<haskell><br />
qsort [] = []<br />
qsort (x:xs) = qsort less ++ [x] ++ qsort more<br />
where less = filter (<x) xs<br />
more = filter (>=x) xs<br />
</haskell><br />
<br />
Let's dissect this code in detail, since it uses quite a lot of Haskell syntax that you might not be familiar with.<br />
The function is called qsort and takes a list as a parameter. We define a function in Haskell like so: funcname a b c = expr, where funcname is the name of the function, a, b and c are the parameters, and expr is the expression to be evaluated (most often using the parameters). Functions are called by simply putting the function name first and then the parameter(s): Haskell doesn't use parentheses for function application. Function application simply binds more tightly than anything else, so "f 5 * 2", for instance, would apply f to 5 and then multiply the result by 2; if we wanted the multiplication to occur before the function application, we would use parentheses, like so: "f (5*2)".<br />
<br />
Let's get back to QuickSort.<br />
First we see that we have two definitions of the function. This is called pattern matching: the argument passed to the function is tested against each definition top-to-bottom, and the first one that matches is used.<br />
The first definition matches against [] which in Haskell is the empty list (a list of 1,2 and 3 is [1,2,3] so it makes sense that an empty list is just two brackets).<br />
So when we try to sort an empty list, the result will be an empty list. Sounds reasonable enough, doesn't it?<br />
The second definition pattern matches against a list with at least one element. It does this by using (x:xs) for its argument. The "cons" operator is (:) and it simply puts an element in front of a list, so that 0 : [1,2,3] returns [0,1,2,3]. Pattern matching against (x:xs) is a match against the list with the head x and the tail xs (which may or may not be the empty list). In other words, (x:xs) is a list of at least one element.<br />
So, since we will need to use the head of the list later, we can extract it very elegantly using pattern matching. You can think of it as naming the contents of the list. This can be done on any data constructor, not just a list. (Alternatively, we could match the whole list against a single variable name and then apply the head function to it to retrieve the head.)<br />
Now, if we have a non-empty list, the sorted list is produced by sorting all elements that are smaller than x and putting them in front of x, then sorting all elements greater than or equal to x and putting them at the end. We do this by using the list concatenation operator ++. Notice that x is not a list, so the ++ operator won't work on it alone, which is why we make it a singleton list by putting it inside brackets.<br />
So the function reads "To sort the list, sandwich the head between the sorted list of all elements smaller than the head, and the sorted list of all elements larger than the head". Which could very well be the original algorithm description. This is very common in Haskell. A function definition usually resembles the informal description of the function very closely. This is why I say that Haskell has a smaller semantic gap than other languages.<br />
<br />
But wait, we're not done yet! How are the lists less and more computed? Well, remember that we don't care about sequencing in Haskell, so we've defined them below the function using the where notation (which allows the definitions to use the parameters of the function to which they belong). We use the standard Prelude function filter; without elaborating too much on it now, the line less = filter (<x) xs keeps only those elements of xs that pass the test (<x). You can see that we actually pass the function which will be used to filter the list to filter - an example of higher-order functions.<br />
The function (<x) should be read out "the function 'less than x'" and will return True if an element passed to it is less than x (notice how easy it was to construct a function on the fly, we put the expression "<x", "less than x", in parenthesis and sent it off to the function - functions really are just another value!).<br />
All elements that pass the test are returned by the filter function and bound to less. In the same way, (>=x) is used to filter the list for all elements greater than or equal to x.<br />
<br />
Now that you've had the syntax explained to you, read the function definition again. Notice how little time it takes to understand what the function does. A function definition in Haskell explains what it computes, not how.<br />
<br />
If you've already forgotten the syntax outlined above, don't worry! We'll go through it more thoroughly and at a slower pace in the tutorials. The important thing to get from this code example is that Haskell code is elegant and intuitive.<br />
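To see both the brevity and the polymorphism in action, here is the same definition again together with two uses on different element types:<br />

```haskell
qsort :: Ord a => [a] -> [a]   -- this signature is optional; it would be inferred
qsort [] = []
qsort (x:xs) = qsort less ++ [x] ++ qsort more
  where less = filter (< x)  xs
        more = filter (>= x) xs

numbers :: [Int]
numbers = qsort [3, 1, 2]      -- [1,2,3]

letters :: String
letters = qsort "haskell"      -- "aehklls"
```

The same four lines sort integers, characters, or anything else with an ordering.<br />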
<br />
===Haskell and bugs===<br />
We have several times stated that various features of Haskell help fight the occurrence of bugs. Let's recap these.<br />
<br />
Haskell programs have fewer bugs because Haskell is:<br />
<br />
* '''Pure'''. There are no side effects.<br />
<br />
* '''Strongly typed'''. There can be no dubious use of types - and no core dumps!<br />
<br />
* '''Concise'''. Programs are shorter, which makes it easier to look at a function and "take it all in" at once, convincing yourself that it's correct.<br />
<br />
* '''High level'''. Haskell programs most often read almost exactly like the algorithm description, which makes it easier to verify that the function does what the algorithm states. By coding at a higher level of abstraction, leaving the details to the compiler, there is less room for bugs to sneak in.<br />
<br />
* '''Memory managed'''. There's no worrying about dangling pointers; the garbage collector takes care of all that. The programmer can worry about implementing the algorithm, not about book-keeping of memory.<br />
<br />
* '''Modular'''. Haskell offers stronger and more flexible "glue" for composing your program from independently developed modules. Thus Haskell programs can be more modular. Frequently used modular functions can be proven correct by induction, and combining two functions that are proven to be correct will also give the correct result (assuming the combination is correct).<br />
<br />
Furthermore, most people agree that you just think differently when solving problems in a functional language. You subdivide the problem into smaller and smaller functions, then write these small (and "almost-guaranteed-to-be-correct") functions, which are composed in various ways to form the final result. There just isn't any room for bugs!<br />
<br />
<br />
== Haskell vs OOP ==<br />
The great benefit of Object Oriented Programming is rarely that you group your data with the functions that act upon it together into an object - it's that it allows for great data encapsulation (separating the interface from implementation) and polymorphism (letting a whole set of data types behave the same way). However:<br />
<br />
''Data encapsulation and polymorphism are not exclusive to OOP!''<br />
<br />
Haskell has tools for abstracting data. We can't really get into them without first going through the module system and how abstract data types (ADTs) work in Haskell, which is well beyond the scope of this essay. We will therefore settle for a short description of how ADTs and polymorphism work in Haskell.<br />
<br />
Data encapsulation is done in Haskell by declaring each data type in a separate module; from this module you only export the interface. Internally there might be a host of functions that touch the actual data, but the interface is all that's visible from outside of the module.<br />
Note that the data type and the functions that act upon the data type are not grouped into an "object", but they are (typically) grouped into the same module, so you can choose to only export certain functions (and not the constructors for the data type) thus making these functions the only way to manipulate the data type - "hiding" the implementation from the interface.<br />
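A sketch of the idea with a hypothetical Stack type (the names here are invented for illustration). In a real module you would add a header like module Stack (Stack, empty, push, pop) where, exporting the type but not its MkStack constructor:<br />

```haskell
-- The constructor MkStack would not be exported, so client code could
-- only manipulate a Stack through the functions below - the interface
-- is hidden from the implementation.
newtype Stack a = MkStack [a]

empty :: Stack a
empty = MkStack []

push :: a -> Stack a -> Stack a
push x (MkStack xs) = MkStack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (MkStack [])       = Nothing
pop (MkStack (x : xs)) = Just (x, MkStack xs)
```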
<br />
Polymorphism is done by using something called type classes. Now, if you come from a C++ or Java background you might associate classes with something resembling a template for how to construct an object, but that's not what they mean in Haskell. A type class in Haskell is really just what it sounds like. It's a set of rules for determining whether a type is an instance of that class.<br />
So Haskell separates class instantiation from the construction of the data type. You might declare a type "Porsche" to be an instance of a "Car" type class, say. All functions that can be applied to any other member of the Car type class can then be applied to a Porsche as well.<br />
A class that's included with Haskell is the Show type class, for which a type can be instantiated by providing a show function, which converts the data type to a String. Consequently almost all types in Haskell can be printed onto the screen by applying show on them to convert them to a String, and then using the relevant IO action (more on IO in the tutorials).<br />
Note how similar this is to the object notion in OOP when it comes to the polymorphism aspect.<br />
The Haskell system is a more intuitive way of handling polymorphism. You won't have to worry about inheriting in the correct hierarchical order, or about whether the inheritance is even sensible. A class is just a class, and types that are instances of this class don't have to share some parent-child inheritance relationship. If your data type fulfills the requirements of a class, then it can be made an instance of that class. Simple, isn't it?<br />
Remember the QuickSort example? Remember that I said it was polymorphic? The secret behind the polymorphism in qsort is that it is defined to work on any type in the Ord type class (for "ordered"). Ord has a set of functions defined; among them are "<" and ">=", which are sufficient for our needs because we only need to know whether an element is smaller than x or not. So if we were to define the Ord functions for our Porsche type (it's sufficient to implement, say, <= and ==; Haskell will figure out the rest from those) in an instantiation of the Ord type class, we could then use qsort to sort lists of Porsches (even though sorting Porsches might not make sense).<br />
Note that we never have to say which classes the elements of the list must be instances of; Haskell will infer this automatically just by looking at which functions we have used (in the qsort example, only "<" and ">=" are relevant).<br />
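A sketch of the Porsche example (the type and its ranking are invented for illustration): defining == and <= is enough, the remaining Eq and Ord operations get default definitions, and afterwards any Ord-polymorphic function such as qsort works on the type:<br />

```haskell
data Porsche = Boxster | Cayman | Carrera

-- An arbitrary ordering, purely for the sake of the instances.
rank :: Porsche -> Int
rank Boxster = 0
rank Cayman  = 1
rank Carrera = 2

instance Eq Porsche where
  a == b = rank a == rank b

instance Ord Porsche where
  a <= b = rank a <= rank b   -- <, >, >=, max, min all get defaults from this
```

In practice you would usually just write deriving (Eq, Ord) and let the compiler produce these instances.<br />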
<br />
So to summarize: Haskell does include mechanisms for data encapsulation that match or surpass those of OOP languages. The only thing Haskell does not provide is a way to group functions and data together into a single "object" (aside from creating a data type which includes a function - remember, functions are data!). This is, however, a very minor problem. To apply a function to an object you would write "func obj a b c" instead of something like "obj.func a b c".<br />
<br />
<br />
== Modularity ==<br />
A central concept in computing is modularity. A popular analogy is this: say you wanted to construct a wooden chair. If you construct the parts of it separately, and then glue them together, the task is solved easily. But if you were to carve the whole thing out of a solid piece of wood, it would prove to be quite a bit harder. John Hughes had this to say on the topic in his paper [http://www.cs.chalmers.se/~rjmh/Papers/whyfp.html Why Functional Programming Matters]:<br />
<br />
''"Languages which aim to improve productivity must support modular programming well. But new scope rules and mechanisms for separate compilation are not enough - modularity means more than modules. Our ability to decompose a problem into parts depends directly on our ability to glue solutions together. To assist modular programming, a language must provide good glue.''<br />
<br />
''Functional programming languages provide two new kinds of glue - higher-order functions and lazy evaluation."''<br />
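Both kinds of glue can be seen in a single line - a pipeline of higher-order functions over a lazy, conceptually infinite list:<br />

```haskell
-- map, filter and take are the glue; laziness keeps [1..] finite in practice.
firstSquaresOver50 :: [Integer]
firstSquaresOver50 = take 3 (filter (> 50) (map (^ 2) [1 ..]))
-- evaluates to [64, 81, 100]
```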
<br />
<br />
==The speed of Haskell ==<br />
Let me first state clearly that the following only applies to the general case in which speed isn't absolutely critical, where you can accept a few percent longer execution time for the benefit of reducing development time greatly. There are cases when speed is the primary concern, and then the following section will not apply to the same extent.<br />
<br />
Now, some C++ programmers might claim that the C++ version of QuickSort above is probably a bit faster than the Haskell version. And this might be true. For most applications, though, the difference in speed is so small that it's utterly irrelevant. For instance, take a look at the [http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all Computer Language Shootout], where Haskell compares favorably to most of the so called "fast" languages.<br />
Now, these benchmarks don't prove all that much about real-world performance, but they do show that Haskell isn't as slow as some people think. At the time of writing it's in 2nd position, only slightly behind C (with C++ fairly far behind).<br />
<br />
Almost all programs in use today have a fairly even spread of processing time among their functions. The most notable exceptions are applications like MPEG encoders, and artificial benchmarks, which spend a large portion of their execution time within a small portion of the code. If you ''really'' need speed at all costs, consider using C instead of Haskell.<br />
<br />
There's an old rule in computer programming called the "80/20 rule". It states that 80% of the time is spent in 20% of the code. The consequence of this is that any given function in your system will likely be of minimal importance when it comes to optimizations for speed. There may be only a handful of functions important enough to optimize. These important functions could be written in C (using the excellent foreign function interface in Haskell). The role of C could, and probably will, take over the role of assembler programming - you use it for the really time-critical bits of your system, but not for the whole system itself.<br />
<br />
We should continue to move to higher levels of abstraction, just like we've done before. We should trade application speed for increased productivity, stability and maintainability. Programmer time is almost always more expensive than CPU time.<br />
We aren't writing applications in assembler anymore for the same reason we shouldn't be writing applications in C anymore.<br />
<br />
Finally, remember that algorithmic optimization can give much better results than code optimization. In theoretical comparisons where factors such as development time and stability don't matter, C is often faster than Haskell. But in the real world, development times ''do'' matter. If you can develop your Haskell application in one tenth of the time it would take in C (from experience, this is not at all uncommon), you will have lots of time left to profile and implement better algorithms. So in the "real world", where we don't have infinite amounts of time to program our applications, Haskell programs can often be much faster than C programs.<br />
<br />
<br />
== Epilogue ==<br />
So if Haskell is so great, how come it isn't "mainstream"? Well, one reason is that the operating system is probably written in C or some other imperative language, so if your application mainly interacts with the internals of the OS, you may have an easier time using imperative languages. Another reason for the lack of Haskell, and other functional languages, in mainstream use is that programming languages are rarely thought of as tools (even though they are). To most people their favorite programming language is much more like religion - it just seems unlikely that any other language exists that can get the job done better and faster.<br />
<br />
There is an essay by Paul Graham called [http://www.paulgraham.com/avg.html Beating the Averages] describing his experience using Common Lisp, another functional language, at a startup company. In it he uses an analogy which he calls "The Blub Paradox".<br />
<br />
It goes a little something like this:<br />
If a programmer's favorite language is Blub, which is positioned somewhere in the middle of the "power spectrum", he can most often only identify languages that are lower down in the spectrum. He can look at COBOL and say "How can anyone get anything done in that language, it doesn't have feature x", x being a feature in Blub.<br />
<br />
However, this Blub programmer has a harder time looking the other way in the spectrum. If he examines languages that are higher up in the power spectrum, they will just seem "weird" because the Blub programmer is "thinking in Blub" and can not possibly see the uses for various features of more powerful languages. It goes without saying that this inductively leads to the conclusion that to be able to compare all languages you'll need to position yourself at the top of the power spectrum. It is my belief that functional languages, almost by definition, are closer to the top of the power spectrum than imperative ones.<br />
So languages can actually limit a programmer's frame of thought. If all you've ever programmed is Blub, you may not see the limitations of Blub - you may only do that by switching to a more powerful language.<br />
<br />
One of the reasons the mainstream doesn't use Haskell is that people feel that "their" language does "everything they need". And of course it does - because they are thinking in Blub! A language isn't just technology, it's a way of thinking. And if you're not thinking in Haskell, it is very hard to see the use of Haskell - even if Haskell would allow you to write better applications in a shorter amount of time!<br />
<br />
Hopefully this article has helped you break out of the Blub paradox. Even though you may not yet "think in Haskell", it is my hope that you are at least aware of any limitations in your frame of thought imposed by your current "favorite" language, and that you now have more motivation to expand it by learning something new.<br />
<br />
If you are committed to learn a functional language, to get a better view of the power spectrum, then Haskell is an excellent candidate.<br />
<br />
[[Category:Tutorials]]</div>Msouthhttps://wiki.haskell.org/index.php?title=Haskell_in_5_steps&diff=11672Haskell in 5 steps2007-03-01T14:48:17Z<p>Msouth: comma in sentences, clarification that it was "type" the verb, not the noun</p>
<hr />
<div>Haskell is a general purpose, purely functional programming language.<br />
This page will help you get started as quickly as possible.<br />
<br />
__TOC__<br />
<br />
== Install Haskell ==<br />
<br />
Haskell, like most other languages, comes in two flavors: batch oriented<br />
(''compiler'') and interactive (''interpreter''). An interactive system<br />
gives you a command line where you can experiment and evaluate<br />
expressions directly, and is probably a good choice to start with.<br />
<br />
{| class="wikitable"<br />
|[http://www.haskell.org/ghc/ GHC]<br />
|Compiler and interpreter (GHCi)<br />
|Probably the most feature-complete system<br />
|-<br />
|[http://www.haskell.org/hugs/ Hugs]<br />
|Interpreter only<br />
|Very portable, and more lightweight than GHC.<br />
|}<br />
<br />
While both GHC and Hugs work on Windows, Hugs has perhaps the best<br />
integration on that platform. There is also information available on<br />
[[Mac OS X|installing Haskell software on Mac OS X]].<br />
<br />
== Start Haskell ==<br />
<br />
Open a terminal. If you installed GHC, type '''ghci''' (the name of the executable of the GHC<br />
interpreter) at the command prompt. If you installed Hugs, type '''hugs'''. <br />
<br />
<pre><br />
$ ghci<br />
   ___         ___ _<br />
  / _ \ /\  /\/ __(_)<br />
 / /_\// /_/ / /  | |      GHC Interactive, version 6.4, for Haskell 98.<br />
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/<br />
\____/\/ /_/\____/|_|      Type :? for help.<br />
<br />
Loading package base-1.0 ... linking ... done.<br />
Prelude><br />
</pre><br />
<br />
And you are presented with a prompt. The Haskell system now attentively<br />
awaits your input.<br />
<br />
== Write your first Haskell program ==<br />
<br />
If you've learned to program another language, your first program<br />
probably was "Hello, world!", so let's do that:<br />
<br />
<haskell><br />
Prelude> "Hello, World!"<br />
"Hello, World!"<br />
</haskell><br />
<br />
The Haskell system evaluated the string, and printed the result. <br />
Or we can try a variation to print directly to standard output:<br />
<br />
<haskell><br />
Prelude> putStrLn "Hello World"<br />
Hello World<br />
</haskell><br />
<br />
Using a Haskell compiler, such as [http://haskell.org/ghc GHC], you can<br />
compile the code to a standalone executable. Create a source file<br />
'''hello.hs''' containing:<br />
<br />
<haskell><br />
main = putStrLn "Hello, World!"<br />
</haskell><br />
<br />
And compile it with:<br />
<br />
<pre><br />
$ ghc -o hello hello.hs<br />
</pre><br />
<br />
You can then run the executable ('''./hello''' on Unix systems, '''hello.exe''' on Windows):<br />
<br />
<pre><br />
$ ./hello<br />
Hello, World!<br />
</pre><br />
<br />
== Haskell the calculator ==<br />
<br />
Let's do something fun. In Haskell, your first true program is the<br />
factorial function. So back to the interpreter now and let's define it:<br />
<br />
<haskell><br />
Prelude> let fac n = if n == 0 then 1 else n * fac (n-1)<br />
</haskell><br />
<br />
This defines a new function called '''fac''' which computes the<br />
factorial of an integer.<br />
<br />
We can now run '''fac''' on some argument:<br />
<haskell><br />
Prelude> fac 42<br />
1405006117752879898543142606244511569936384000000000<br />
</haskell><br />
<br />
'''Congratulations!''' Programming made easy. Note that if you're using<br />
Hugs, you'll need to load the definition of '''fac''' from a file,<br />
'''fac.hs''', containing:<br />
<br />
<haskell><br />
fac n = if n == 0 then 1 else n * fac (n-1)<br />
</haskell><br />
<br />
And run it with Hugs as follows (this also works in GHCi):<br />
<br />
<haskell><br />
Hugs.Base> :load fac.hs<br />
Main> fac 42<br />
1405006117752879898543142606244511569936384000000000<br />
</haskell><br />
<br />
We can of course compile this program, to produce a standalone<br />
executable. In the file '''fac.hs''' we can write (and let's use<br />
elegant pattern matching syntax just for fun):<br />
<br />
<haskell><br />
fac 0 = 1<br />
fac n = n * fac (n-1)<br />
<br />
main = print (fac 42)<br />
</haskell><br />
<br />
which can then be compiled and run:<br />
<br />
<pre><br />
$ ghc -o fac fac.hs<br />
$ ./fac<br />
1405006117752879898543142606244511569936384000000000<br />
</pre><br />
<br />
'''Great!'''<br />
<br />
== Where to go from here ==<br />
<br />
There are many good Haskell tutorials and books. Here are some we recommend:<br />
<br />
'''Tutorials'''<br />
* [http://darcs.haskell.org/yaht/yaht.pdf Yet Another Haskell Tutorial] (English)<br />
* [http://www.haskell.org/tutorial/ A Gentle Introduction to Haskell] (English)<br />
<br />
For a complete list of textbooks, references and tutorials:<br />
<br />
* [[Books and tutorials]].<br />
<br />
'''Join the community!'''<br />
<br />
Talk to others in the Haskell community:<br />
<br />
* [http://haskell.org/mailman/listinfo/haskell-cafe Haskell-Cafe mailing list] <br />
* [[IRC channel]]<br />
<br />
[[Category:Tutorials]]</div>Msouth