IO inside

''Haskell I/O can be a source of confusion and surprises for new Haskellers -
if that's you, a good place to start is the [[Introduction to IO]] which can
help you learn the basics (e.g. the syntax of I/O expressions) before
continuing on.''

While simple I/O code in Haskell looks very similar to its equivalents in
imperative languages, attempts to write somewhat more complex code often
result in a total mess. This is because Haskell I/O is really very different
in how it actually works.

The following text is an attempt to explain the inner workings of I/O in
Haskell. This explanation should help you eventually learn all the
smart I/O tips. Moreover, I've added a detailed explanation of various traps
you might encounter along the way. After reading this text, you will be well
on your way towards mastering I/O in Haskell.

== Haskell is a pure language ==


Haskell is a pure language and even the I/O system can't break this purity.
Being pure means that the result of any function call is fully determined by
its arguments. Imperative routines like <code>rand()</code> or
<code>getchar()</code> in C, which return different results on each call, are
simply impossible to write in Haskell. Moreover, Haskell functions can't have
side effects, which means that they can't make any changes "outside the Haskell
program", like changing files, writing to the screen, printing, sending data
over the network, and so on. These two restrictions together mean that any
function call can be replaced by the result of a previous call with the same
parameters, and the language ''guarantees'' that all these rearrangements will
not change the program result! For example, the hyperbolic cosine function
<code>cosh</code> can be defined in Haskell as:


<haskell>
cosh r = (exp r + 1/exp r)/2
</haskell>

using identical calls to <code>exp</code>, which is another function. So
<code>cosh</code> can instead call <code>exp</code> once, and reuse the result:

<haskell>
cosh r = (x + 1/x)/2 where x = exp r
</haskell>


Let's compare this to C: optimizing C compilers try to guess which routines
have no side effects and don't depend on mutable global variables. If this
guess is wrong, an optimization can change the program's semantics! To avoid
this kind of disaster, C optimizers are conservative in their guesses or
require hints from the programmer about the purity of routines.

Compared to an optimizing C compiler, a Haskell compiler is a set of pure
mathematical transformations. This results in much better high-level
optimization facilities. Moreover, pure mathematical computations can be much
more easily divided into several threads that may be executed in parallel,
which is increasingly important in these days of multi-core CPUs. Finally, pure
computations are less error-prone and easier to verify, which adds to Haskell's
robustness and to the speed of program development using Haskell.

Haskell's purity allows the compiler to call only functions whose results are
really required to calculate the final value of a top-level definition (e.g.
<code>main</code>) - this is called lazy evaluation. It's a great thing for
pure mathematical computations, but how about I/O actions? Something like

<haskell>
putStrLn "Press any key to begin formatting"
</haskell>

can't return any meaningful result value, so how can we ensure that the
compiler will not omit or reorder its execution? And in general: how can we
work with stateful algorithms and side effects in an entirely lazy language?
This question has had many different solutions proposed while Haskell was
developed (see [[History of Haskell|A History of Haskell]]), with one
solution eventually making its way into the current standard.


== I/O in Haskell, simplified ==

So what is actually inside an I/O action? Let's look at how
[https://github.com/augustss/MicroHs MicroHs] defines the I/O type:

<haskell>
data IO a
</haskell>

just as described on page 95 of 329 in the Haskell 2010
[https://www.haskell.org/definition/haskell2010.pdf Report]: <i>no visible
data constructors.</i> So someone who is implementing Haskell could in fact
define functions and I/O actions in much the same way. The only difference
that really matters is this:

* Only I/O actions are allowed to make changes "outside the Haskell program".

(Or to state it more formally: only I/O actions are allowed to have externally
visible side effects).

It doesn't get much simpler than that <code>:-D</code>

=== The question of purity ===


So if Haskell uses the same side effects for I/O as an imperative language,
how can it possibly be "pure"?


Because of what doesn't work in Haskell:

:<haskell>
\msg x -> seq (putStrLn msg) x
</haskell>

No, that won't work. And neither will this:

:<haskell>
getChar >>= \c -> c
</haskell>

Remember, Haskell functions can't have side effects - they can't make any
changes "outside the Haskell program". Therefore if either of these examples
really did work, then they would no longer be Haskell functions!
(More importantly, just imagine if those two were being used in parallel
somewhere in a program...)

So in Haskell, the result of running an I/O action must be another I/O action.
This restriction ensures that Haskell functions really are [[pure]].


== Running with I/O ==

So, <code>main</code> just has the type <code>IO ()</code>. Let's look at
<code>main</code> calling <code>getChar</code> two times:

<haskell>
main :: IO ()
main = getChar >>= \a ->
       getChar >>= \b ->
       return ()
</haskell>

By defining a <code>Monad</code> instance for <code>IO a</code>:

<haskell>
unitIO :: a -> IO a                    -- implemented
bindIO :: IO a -> (a -> IO b) -> IO b  --  elsewhere

instance Monad IO where
    return = unitIO
    (>>=) = bindIO
</haskell>

we can then expand <code>main</code> to get:

<haskell>
main = getChar `bindIO` (\a ->
       getChar `bindIO` (\b ->
       unitIO ()))
</haskell>

Now to run <code>main</code>:

# <code>main = getChar `bindIO` (\a -> ...)</code> doesn't require evaluation, continuing;
# run <code>getChar</code> to obtain a character <code>c1 :: Char</code>;
# apply <code>(\a -> ...)</code> to <code>c1</code>;
# then evaluate the result to obtain the next action <code>getChar `bindIO` (\b -> ...)</code>;
# run <code>getChar</code> to obtain another character <code>c2 :: Char</code>;
# apply <code>(\b -> ...)</code> to <code>c2</code>;
# then evaluate the result to obtain the next action <code>unitIO ()</code>;
# run <code>unitIO ()</code> to obtain <code>() :: ()</code>, which ends the program.


From that example we can see that:

* Each action is run - it doesn't matter if what's obtained from running it isn't actually used.

* Each action is run in the order it appears in the program - there is no reordering of actions.

* Each action is run once, then the next action is obtained or the program ends - if a program only uses an action once, then it is only run once.

Overall, in order to obtain the final value of <code>main</code>, each I/O
action that is called from <code>main</code> - directly or indirectly - is
run. This means that each action inserted in the chain will be performed
just at the moment (relative to the other I/O actions) when we intended it
to be called. Let's consider the following program:


<haskell>
</haskell>


Now you have enough knowledge to rewrite it in a low-level way and check that
each operation that should be performed will really be performed with the
arguments it should have and in the order we expect.

But what about conditional execution? No problem. Let's define the well-known
<code>when</code> function:

<haskell>
when :: Bool -> IO () -> IO ()
when condition action =
     if condition
       then action
       else return ()
</haskell>


Because it's a function:

* it will be applied to two arguments;
* its result (the conditional expression) will be evaluated;
* then the chosen action will be run.

As you can see, we can easily include or exclude from the execution chain I/O
actions depending on the data values. If <code>condition</code> is
<code>False</code> on the call of <code>when</code>, <code>action</code> will
never be run.

Loops and more complex control structures can be implemented in the same way.
Try it as an exercise!
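
For example, here is a small made-up use of the <code>when</code> just defined (<code>verbose</code> and <code>report</code> are hypothetical names of ours; the library version of <code>when</code> lives in <code>Control.Monad</code>):

```haskell
-- `when` exactly as defined above (the Prelude does not export one).
when :: Bool -> IO () -> IO ()
when condition action =
  if condition
    then action
    else return ()

-- The putStrLn action is included in the chain only if the flag is True.
report :: Bool -> IO ()
report verbose = do
  when verbose (putStrLn "starting up")
  putStrLn "done"
```
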


== <code>(>>=)</code> and <code>do</code> notation ==

All beginners (including me) start by thinking that <code>do</code> is some
super-awesome statement that executes I/O actions. That's wrong - <code>do</code>
is just syntactic sugar that simplifies the writing of definitions that use I/O
(and also other monads, but that's beyond the scope of this manual). <code>do</code>
notation eventually gets translated to a series of I/O actions much like we've
manually written above. This simplifies the gluing of several
I/O actions together. You don't need to use <code>do</code> for just one
action; for example,

<haskell>
main = do putStr "Hello!"
</haskell>

is desugared to:

<haskell>
main = putStr "Hello!"
</haskell>


Let's examine how to desugar a <code>do</code>-expression with multiple actions
in the following example:

<haskell>
main = do putStr "Hello"
          putStr " "
          putStr "world!"
          putStr "\n"
</haskell>

The <code>do</code>-expression here just joins several I/O actions that should
be performed sequentially. It's translated to sequential applications of one
of the so-called "binding operators", namely <code>(>>)</code>:

<haskell>
main = putStr "Hello"
       >> putStr " "
       >> putStr "world!"
       >> putStr "\n"
</haskell>


Defining <code>(>>)</code> looks easy:

<haskell>
(>>) :: IO a -> IO b -> IO b
action1 >> action2 = action1 >>= \_ -> action2
</haskell>

Now you can substitute the definition of <code>(>>)</code> at the places of
its usage and check that the program constructed by the <code>do</code>
desugaring is actually the same as we could write by using I/O actions manually.
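
For instance, carrying out that substitution on a two-action chain (a check of ours, not part of the original text) gives:

```haskell
-- The chain written with (>>)...
greet :: IO ()
greet = putStr "Hello, " >> putStrLn "world!"

-- ...and the same chain after substituting
--   action1 >> action2 = action1 >>= \_ -> action2
greet' :: IO ()
greet' = putStr "Hello, " >>= \_ -> putStrLn "world!"
```

Both describe exactly the same action, so running either prints the same text.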

A more complex example involves the binding of variables using <code><-</code>:

<haskell>
main = do a <- readLn
          print a
</haskell>


where <code>(>>=)</code> corresponds to <code>bindIO</code>.

As you now know, the <code>(>>)</code> binding operator silently ignores
the value of its first action and returns as an overall result the result of
its second action only. On the other hand, the <code>(>>=)</code> binding
operator (note the extra <code>=</code> at the end) allows us to use the result
of its first action - it gets passed as an additional parameter to the second
one!


You can use <code>(>>)</code> and <code>(>>=)</code> to simplify your program.
For example, in the code above we don't need to introduce the variable, because
the result of running <code>readLn</code> can be passed directly to
<code>print</code>:

:<haskell>
main = readLn >>= print
</haskell>


 
As you see, the notation:

<haskell>
x <- action1
action2
</haskell>


where <code>action1</code> has type <code>IO a</code> and <code>action2</code>
has type <span style="white-space: nowrap"><code>IO b</code></span>, translates
into:

<haskell>
action1 >>= (\x -> action2)
</haskell>


where the second argument of <code>(>>=)</code> has the type
<span style="white-space: nowrap"><code>a -> IO b</code></span>. It's the way
the <code><-</code> binding is processed - the name on the left-hand side of
<code><-</code> just becomes a parameter of subsequent operations represented
as one large I/O action.  Note also that if <code>action1</code> has type
<span style="white-space: nowrap"><code>IO a</code></span> then <code>x</code>
will just have type <code>a</code>; you can think of the effect of <code><-</code>
as "unpacking" the I/O value of <code>action1</code> into <code>x</code>.
Note also that <code><-</code> is not a true operator; it's pure syntax, just
like <code>do</code> itself.  Its meaning results only from the way it gets
desugared.
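
Concretely (an example of ours, using <code>getLine</code>): here is a <code><-</code> binding next to its desugared form:

```haskell
-- The name bound by <- becomes the parameter of the lambda after desugaring.
shout :: IO ()
shout = do line <- getLine
           putStrLn (line ++ "!")

-- The same action, written with (>>=) directly:
shout' :: IO ()
shout' = getLine >>= \line -> putStrLn (line ++ "!")
```
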


Look at the next example:

<haskell>
main = putStr "What is your name?"
       >> readLn
       >>= \a ->
       putStr "How old are you?"
       >> readLn
       >>= \b ->
       print (a,b)
</haskell>


I omitted the parentheses here; both the <code>(>>)</code> and the
<code>(>>=)</code> operators are left-associative, but lambda-bindings
always stretch as far to the right as possible, which means that the
<code>a</code> and <code>b</code> bindings introduced here are valid for all
remaining actions. As an exercise, add the parentheses yourself and translate
this definition into action-level code. I
think it should be enough to help you finally realize how the <code>do</code>
translation and binding operators work.

Oh, no! I forgot the third monadic operator: <code>return</code>. But that's
understandable - it does very little! The resulting I/O action immediately
<i>returns</i> its given argument (when it is run).


How about translating a simple example of <code>return</code> usage? Say,

<haskell>
           return (a*2)
</haskell>


Programmers with an imperative language background often think that
<code>return</code> in Haskell, as in other languages, immediately returns
from the I/O definition. As you can see in its definition (and even just from
its type!), such an assumption is totally wrong. The only purpose of using
<code>return</code> is to "lift" some value (of type <code>a</code>) into the
result of a whole action (of type <code>IO a</code>) and therefore it should
generally be used only as the last executed action of some I/O sequence. For
example try to translate the following definition into the corresponding
low-level code:

<haskell>
</haskell>

and you will realize that the <code>print</code> call is executed even for
non-negative values of <code>a</code>. If you need to escape from the middle
of an I/O definition, you can use an <code>if</code> expression:

<haskell>
</haskell>


that may be useful for escaping from the middle of a longish
<code>do</code>-expression.

Last exercise: implement a function <code>liftM</code> that lifts operations
on plain values to the operations on monadic ones. Its type signature:

<haskell>
liftM :: (a -> b) -> IO a -> IO b
</haskell>

If that's too hard for you, start with the following high-level definition and
rewrite it in low-level fashion:

<haskell>
liftM f action = do x <- action
                    return (f x)
</haskell>
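
If you'd like to compare answers afterwards: one possible low-level version (a sketch of ours, specialised to <code>IO</code> like the rest of this text) replaces the <code>do</code>-block with a direct use of <code>(>>=)</code>:

```haskell
-- One possible answer: the do-block desugars to a single (>>=),
-- with `return` lifting the final value into an I/O action.
liftM :: (a -> b) -> IO a -> IO b
liftM f action = action >>= \x -> return (f x)
```

For example, <code>liftM length getLine</code> is an action that reads a line and results in its length.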




== Mutable data (references, arrays, hash tables...) ==
== Mutable data (references, arrays, hash tables...) ==


As you should know, every name in Haskell is bound to one fixed (immutable)
value. This greatly simplifies understanding algorithms and code optimization,
but it's inappropriate in some cases. As we all know, there are plenty of
algorithms that are simpler to implement in terms of updatable variables,
arrays and so on. This means that the value associated with a variable, for
example, can be different at different execution points, so reading its value
can't be considered as a pure function. Imagine, for example, the following
code:


<haskell>
main = do let a0 = readVariable varA
              _  = writeVariable varA 1
              a1 = readVariable varA
          print (a0, a1)
</haskell>


Does this look strange?
# The two calls to <code>readVariable</code> look the same, so the compiler can just reuse the value returned by the first call.
# The result of the <code>writeVariable</code> call isn't used so the compiler can (and will!) omit this call completely.
# These three calls may be rearranged in any order because they appear to be independent of each other.

This is obviously not what was intended.  What's the solution? You already know
this - use I/O actions! Doing that guarantees:

# the result of the "same" action (such as <span style="white-space: nowrap"><code>readVariable varA</code></span>) will not be reused
# each action will have to be executed
# the execution order will be retained as written


So, the code above really should be written as:


<haskell>
import Data.IORef
main = do varA <- newIORef 0  -- Create and initialize a new variable
          a0 <- readIORef varA
          writeIORef varA 1
          a1 <- readIORef varA
          print (a0, a1)
</haskell>


Here, <code>varA</code> has the type <span style="white-space: nowrap"><code>IORef Int</code></span>
which means "a variable (reference) in the I/O monad holding a value of type
<code>Int</code>". <code>newIORef</code> creates a new variable (reference)
and returns it, and then read/write actions use this reference. The value
returned by the <span style="white-space: nowrap"><code>readIORef varA</code></span>
action depends not only on the variable involved but also on the moment this
operation is performed so it can return different values on each call.
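As a further sketch (the helper name <code>countIO</code> is ours, not a library function), an <code>IORef</code> can serve as a mutable counter threaded through a loop:

```haskell
import Data.IORef

-- count the elements of a list that satisfy a predicate,
-- using an IORef as a mutable counter
countIO :: (a -> Bool) -> [a] -> IO Int
countIO p xs = do counter <- newIORef 0
                  mapM_ (\x -> if p x
                                 then modifyIORef counter (+1)
                                 else return ())
                        xs
                  readIORef counter

main :: IO ()
main = do n <- countIO even [1..10]
          print n  -- prints 5
```

Of course, a fold would be more idiomatic for this particular job - the point is only that mutable state lives comfortably inside <code>IO</code>.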


Arrays, hash tables and any other ''mutable'' data structures are defined in
the same way - for each of them, there's an operation that creates new "mutable
values" and returns a reference to it. Then value-specific read and write
operations in the I/O monad are used. The following code shows an example using
mutable arrays:


<haskell>
import Data.Array.IO
main = do arr <- newArray (1,10) 37 :: IO (IOArray Int Int)
          a <- readArray arr 1
          writeArray arr 1 64
          b <- readArray arr 1
          print (a, b)
</haskell>


Here, an array of 10 elements with 37 as the initial value at each location is
created. After reading the value of the first element (index 1) into
<code>a</code> this element's value is changed to 64 and then read again into
<code>b</code>. As you can see by executing this code, <code>a</code> will be
set to 37 and <code>b</code> to 64.


Other state-dependent operations are also often implemented with I/O actions.
For example, a random number generator should return a different value on each
call. It looks natural to give it a type involving <code>IO</code>:


<haskell>
rand :: IO Int
</haskell>


Moreover, when you import a C routine you should be careful - if this routine
is impure, i.e. its result depends on something "outside the Haskell program"
(file system, memory contents, its own <code>static</code> internal state and
so on), you should give it an <code>IO</code> type. Otherwise, the compiler
can "optimize" repetitive calls to the definition with the same parameters!


For example, we can write a non-<code>IO</code> type for:


<haskell>
foreign import ccall
   sin :: Double -> Double
</haskell>


because the result of <code>sin</code> depends only on its argument, but


<haskell>
foreign import ccall
   tell :: Int -> IO Int
</haskell>


If you declare <code>tell</code> as a pure function (without
<code>IO</code>) then you may get the same position on each call!
 
=== Encapsulated mutable data: ST ===


If you're going to be doing things like sending text to a screen or reading
data from a scanner, <code>IO</code> is the type to start with - you can then
customise existing I/O operations or add new ones as you see fit. But what if
that shiny-new (or classic) algorithm you're working on really only needs
mutable state - then having to drag that <code>IO</code> type from
<code>main</code> all the way through to wherever you're implementing the
algorithm can get quite irritating.


Fortunately there is a better way!  One that remains totally pure and yet
allows the use of references, arrays, and so on - and it's done using, you
guessed it, Haskell's versatile type system (and one extension).


Remember our definition of <code>IO</code>?

<haskell>
data IO a
</haskell>

Well, the new <code>ST</code> type makes just one change - in theory, it can
be used with any suitable state type:


<haskell>
data ST s a
</haskell>

If we wanted to, we could even use <code>ST</code> to define <code>IO</code>:


<haskell>
type IO a = ST RealWorld a  -- RealWorld defined elsewhere
</haskell>

Let's add some extra definitions:
 
<haskell>
newSTRef    :: a -> ST s (STRef s a)      -- these are
readSTRef    :: STRef s a -> ST s a        --  usually
writeSTRef  :: STRef s a -> a -> ST s ()  -- primitive
 
newSTArray  :: Ix i => (i, i) -> ST s (STArray s i e) -- also usually primitive
              ⋮
unitST      :: a -> ST s a
bindST      :: ST s a -> (a -> ST s b) -> ST s b
 
instance Monad (ST s) where
    return = unitST
    (>>=)  = bindST
</haskell>
 
...that's right - this new <code>ST</code> type is also monadic!
 
So what's the big difference between the <code>ST</code> and <code>IO</code>
types? In one word - <code>runST</code>:
<haskell>
runST :: (forall s . ST s a) -> a
</haskell>
 
Yes - it has a very unusual type. But that type allows you to run your
stateful computation ''as if it was a pure definition!''
 
The <code>s</code> type variable in <code>ST</code> is the type of the local
state.  Moreover, all the fun mutable stuff available for <code>ST</code> is
quantified over <code>s</code>:
<haskell>
newSTRef  :: a -> ST s (STRef s a)
newArray_ :: Ix i => (i, i) -> ST s (STArray s i e)
</haskell>
 
So why does <code>runST</code> have such a funky type? Let's see what would
happen if we wrote
<haskell>
makeSTRef :: a -> STRef s a
makeSTRef a = runST (newSTRef a)
</haskell>
This fails, because <code>newSTRef a</code> doesn't work for all state types
<code>s</code> - it only works for the <code>s</code> from the return type
<span style="white-space: nowrap"><code>STRef s a</code></span>.
 
This is all sort of wacky, but the result is that you can only run an
<code>ST</code> computation where the output type is functionally pure, and
makes no references to the internal mutable state of the computation. In
exchange for that, there's no access to I/O operations like writing to or
reading from the console. The monadic <code>ST</code> type only has references,
arrays, and such that are useful for performing pure computations.
 
Due to how similar <code>IO</code> and <code>ST</code> are internally, there's
this function:
 
<haskell>
stToIO :: ST RealWorld a -> IO a
</haskell>
 
The difference is that <code>ST</code> uses the type system to forbid unsafe
behavior like extracting mutable objects from their safe <code>ST</code>
wrapping, while still allowing purely functional results to be computed with
all the handy access to mutable references and arrays.
 
For example, here's a particularly convoluted way to compute the integer that
comes after zero:
 
<haskell>
oneST :: ST s Integer -- note that this works correctly for any s
oneST = do var <- newSTRef 0
           modifySTRef var (+1)
           readSTRef var

one :: Integer
one = runST oneST
</haskell>
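To see <code>runST</code> in action on something slightly less convoluted, here's a small sketch (the name <code>sumST</code> is ours): a sum computed with a local, mutable <code>STRef</code>, wrapped up as an ordinary pure function:

```haskell
import Control.Monad.ST
import Data.STRef

-- a pure function that uses mutable state internally
sumST :: [Int] -> Int
sumST xs = runST $ do acc <- newSTRef 0                      -- local mutable accumulator
                      mapM_ (\x -> modifySTRef acc (+x)) xs  -- bump it for each element
                      readSTRef acc                          -- the pure result

main :: IO ()
main = print (sumST [1..10])  -- prints 55
```

Callers of <code>sumST</code> can't tell that any mutation happened - the <code>STRef</code> never escapes the <code>runST</code> call, which is exactly what its rank-2 type enforces.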
 
 
== I/O actions as values ==
 
By this point you should understand why it's impossible to use I/O actions
inside non-I/O (pure) functions: when needed, fully-applied functions are
always evaluated - they aren't run like I/O actions. In addition, the
prohibition of using I/O actions inside pure functions is maintained by the
type system (as it usually is in Haskell).
 
But while pure code can't be used to run I/O actions, it can work with them as
with any other value - I/O actions can be stored in data structures, passed as
parameters, returned as results, collected in lists or in tuples. But what
won't work is something like:
 
<haskell>
\ msg x -> case putStrLn msg of _ -> x
</haskell>


because it will be treated as a function, not an I/O action.
 
To run an I/O action, we need to make it part of <code>main</code>:
 
* either directly:
:<code>main = action</code>
 
* or in the "action chain" of another action which is already a part of the <code>main</code> "chain":
:<code>main = ... >>= \ _ -> action >>= ...</code>


Only then will the action be run. For example, in:


<haskell>
main = do let skip2chars = getChar >> getChar >> return ()
          putStr "Press two keys"
          skip2chars
          return ()
</haskell>


the non-<code>let</code> actions are run in the exact order in which they're
written.
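A sketch that makes the ordering visible (the helper <code>order</code> is ours): two actions are bound with <code>let</code> in one order but run in the other, and only the run order matters:

```haskell
import Data.IORef

order :: IO String
order = do ref <- newIORef ""
           let logA = modifyIORef ref (++ "A")  -- nothing runs here...
               logB = modifyIORef ref (++ "B")  -- ...these are just values
           logB                                 -- run in the order written,
           logA                                 -- not the order of the bindings
           readIORef ref

main :: IO ()
main = order >>= putStrLn  -- prints "BA"
```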


=== Example: a list of I/O actions ===


Let's try defining a list of I/O actions:


<haskell>
ioActions :: [IO ()]
ioActions = [(print "Hello!"),
             (putStr "just kidding"),
             (getChar >> return ())
            ]
</haskell>


I used additional parentheses around each action, although they aren't really
required. If you still can't believe that these actions won't be run
immediately, remember that in this expression:
 
<haskell>
\ b -> if b then (putStr "started...") else (putStrLn "completed.")
</haskell>


both I/O actions won't immediately be run either.

Well, now we want to execute some of these actions. No problem, just insert
them into the <code>main</code> chain:


<haskell>
main = do head ioActions
          ioActions !! 1
          last ioActions
</haskell>


Looks strange, right? Really, any I/O action that you write in a
<code>do</code>-expression (or use as a parameter for the
<code>(>>)</code>/<code>(>>=)</code> operators) is an expression returning a
result of type <span style="white-space: nowrap"><code>IO a</code></span> for
some type <code>a</code>. Typically, you use some function that has the type
<span style="white-space: nowrap"><code>x -> y -> ... -> IO a</code></span>
and provide all the <code>x</code>, <code>y</code>, etc. parameters. But you're
not limited to this standard scenario - don't forget that Haskell is a
functional language and you're free to evaluate any value as required
(recall that <span style="white-space: nowrap"><code>IO a</code></span> is
really a function type) in any possible way. Here we just extracted several
functions from the list - no problem. This value can also be constructed
on-the-fly, as we've done in the previous example - that's also OK.
Want to see this value passed as a parameter? Just look at the
definition of <code>when</code>.  Hey, we can buy, sell, and rent these I/O
actions just like we can with any other values! For example, let's
define a function that executes all the I/O actions in the list:


<haskell>
sequence_ :: [IO a] -> IO ()
sequence_ [] = return ()
sequence_ (x:xs) = do x
                      sequence_ xs
</haskell>


No smoke or mirrors - we just extract I/O actions from the list and insert
them into a chain of I/O operations that should be performed one after another
(in the same order that they occurred in the list) to obtain the end result
of the entire <code>sequence_</code> call.


With the help of <code>sequence_</code>, we can rewrite our last <code>main</code>
action as:


<haskell>
main = sequence_ ioActions
</haskell>


 
Haskell's ability to work with I/O actions just like other values
allows us to define control structures of arbitrary
complexity. Try, for example, to define a control structure that repeats an
action until it returns the <code>False</code> result:


<haskell>
while :: IO Bool -> IO ()
</haskell>


Most programming languages don't allow you to define control structures at all,
and those that do often require you to use a macro-expansion system.  In
Haskell, control structures are just trivial functions anyone can write.
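If you want to check your attempt at the exercise above, here is one possible implementation (a sketch - other phrasings work just as well), together with a small usage example:

```haskell
import Data.IORef

-- repeat an action until it returns False
while :: IO Bool -> IO ()
while action = do b <- action
                  if b then while action
                       else return ()

main :: IO ()
main = do i <- newIORef (0 :: Int)
          while $ do modifyIORef i (+1)
                     n <- readIORef i
                     return (n < 3)  -- keep going while the counter is below 3
          readIORef i >>= print      -- prints 3
```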


=== Example: returning an I/O action as a result ===


How about returning an I/O action as the result of a function?  Well, we've
done this for each I/O definition - they all return I/O actions built up from
other I/O actions (or themselves, if they're recursive). While we usually just
execute them as part of a higher-level I/O definition, it's also possible to
just collect them without actual execution:


<haskell>
main = do let a = sequence ioActions
              b = when True getChar
              c = getChar >> getChar >> return ()
          putStr "These let-bindings are not executed!"
</haskell>


These assigned I/O actions can be used as parameters to other definitions, or
written to global variables, or processed in some other way, or just executed
later, as we did in the example with <code>skip2chars</code>.


But how about returning a parameterized I/O action from an I/O definition?
Here's a definition that returns the i'th byte from a file represented as a
Handle:


<haskell>
readi h i = do hSeek h AbsoluteSeek i
               hGetChar h
</haskell>


So far so good. But how about a definition that returns the i'th byte of a
file with a given name without reopening it each time?


<haskell>
readfilei name = do h <- openFile name ReadMode
                    return (readi h)
</haskell>


As you can see, it's an I/O definition that opens a file and returns...an I/O
action that will read the specified byte. But we can go further and include
the <code>readi</code> body in <code>readfilei</code>:


<haskell>
readfilei name = do h <- openFile name ReadMode
                    let readi h i = do hSeek h AbsoluteSeek i
                                       hGetChar h
                    return (readi h)
</haskell>


That's a little better.  But why do we add <code>h</code> as a parameter to
<code>readi</code> if it can be obtained from the environment where <code>readi</code>
is now defined?  An even shorter version is this:


<haskell>
readfilei name = do h <- openFile name ReadMode
                    let readi i = do hSeek h AbsoluteSeek i
                                     hGetChar h
                    return readi
</haskell>


What have we done here? We've built a parameterized I/O action involving local
names inside <code>readfilei</code> and returned it as the result. Now it can
be used in the following way:


<haskell>
main = do myfile <- readfilei "test"
          a <- myfile 0
          b <- myfile 1
          print (a, b)
</haskell>


 
This way of using I/O actions is very typical for Haskell programs - you just
construct one or more I/O actions that you need, with or without parameters,
possibly involving the parameters that your "constructor" received, and return
them to the caller. Then these I/O actions can be used in the rest of the
program without any knowledge of how you actually implemented them. One
thing this can be used for is to partially emulate the OOP (or more precisely,
the ADT) programming paradigm.


=== Example: a memory allocator generator ===


As an example, one of my programs has a module which is a memory suballocator.
It receives the address and size of a large memory block and returns two
specialised I/O operations - one to allocate a subblock of a given size and the
other to free the allocated subblock:


<haskell>
memoryAllocator :: Ptr a -> Int -> IO (Int -> IO (Ptr b),
                                       Ptr c -> IO ())
</haskell>


How is this implemented? <code>alloc</code> and <code>free</code> work with
references created inside the <code>memoryAllocator</code> definition. Because
the creation of these references is a part of the <code>memoryAllocator</code>
I/O-action chain, a new independent set of references will be created for each
memory block for which <code>memoryAllocator</code> is called:


<haskell>
memoryAllocator buf size =
  do start <- newIORef buf
     end <- newIORef (buf `plusPtr` size)
     ...
</haskell>


These two references are read and written in the <code>alloc</code> and
<code>free</code> definitions (we'll implement a very simple memory allocator
for this example):


<haskell>
     let alloc size = do addr <- readIORef start
                         writeIORef start (addr `plusPtr` size)
                         return addr

     let free ptr = do writeIORef start ptr
</haskell>


What we've defined here is just a pair of closures that use state available at
the moment of their definition. As you can see, it's as easy as in any other
functional language, despite Haskell's lack of direct support for impure
routines.

The following example uses the operations returned by <code>memoryAllocator</code>
to simultaneously allocate/free blocks in two independent memory buffers:


<haskell>
main = do buf1 <- mallocBytes (2^16)
          buf2 <- mallocBytes (2^20)
          (alloc1, free1) <- memoryAllocator buf1 (2^16)
          (alloc2, free2) <- memoryAllocator buf2 (2^20)
          ptr11 <- alloc1 100
          ptr21 <- alloc2 1000
          free1 ptr11
          ptr12 <- alloc1 100
          ptr22 <- alloc2 1000
</haskell>


=== Example: emulating OOP with record types ===


Let's implement the classical OOP example: drawing figures. There are figures
of different types: circles, rectangles and so on. The task is to create a
heterogeneous list of figures. All figures in this list should support the same
set of operations: draw, move and so on. We will define these operations using
I/O actions. Instead of a "class" let's define a structure from which all of
the required operations can be accessed:


<haskell>
data Figure = Figure { draw :: IO (),
                       move :: Displacement -> IO ()
                     }

type Displacement = (Int, Int)  -- horizontal and vertical displacement
</haskell>


 
The constructor of each figure's type should just return a <code>Figure</code>
record:


<haskell>
circle    :: Point -> Radius -> IO Figure
rectangle :: Point -> Point -> IO Figure

type Point = (Int, Int)  -- point coordinates
type Radius = Int        -- circle radius in points
</haskell>


 
We will "draw" figures by just printing their current parameters. Let's start
with implementing simplified <code>circle</code> and <code>rectangle</code>
constructors, without actual <code>move</code> support:


<haskell>
circle center radius = do
    let description = "  Circle at "++show center
                      ++" with radius "++show radius
    return $ Figure { draw = putStrLn description }

rectangle from to = do
    let description = "  Rectangle "++show from++"-"++show to
    return $ Figure { draw = putStrLn description }
</haskell>


 
As you see, each constructor just returns a fixed <code>draw</code> operation
that prints parameters with which the concrete figure was created. Let's test
it:


<haskell>
drawAll :: [Figure] -> IO ()
drawAll figures = do putStrLn "Drawing figures:"
                     mapM_ draw figures

main = do figures <- sequence [circle (10,10) 5,
                               rectangle (10,10) (20,20)]
          drawAll figures
</haskell>


 
Now let's define "full-featured" figures that can actually be moved around. In
order to achieve this, we should provide each figure with a mutable variable
that holds each figure's current screen location. The type of this variable
will be <span style="white-space: nowrap"><code>IORef Point</code></span>. This
variable should be created in the figure constructor and manipulated in I/O
operations (closures) enclosed in the <code>Figure</code> record:


<haskell>
circle center radius = do
    centerVar <- newIORef center

    let drawF = do center <- readIORef centerVar
                   putStrLn ("  Circle at "++show center
                             ++" with radius "++show radius)

    let moveF (addX,addY) = do (x,y) <- readIORef centerVar
                               writeIORef centerVar (x+addX, y+addY)

    return $ Figure { draw=drawF, move=moveF }


rectangle from to = do
    fromVar <- newIORef from
    toVar   <- newIORef to

    let drawF = do from <- readIORef fromVar
                   to   <- readIORef toVar
                   putStrLn ("  Rectangle "++show from++"-"++show to)

    let moveF (addX,addY) = do (fromX,fromY) <- readIORef fromVar
                               (toX,toY)     <- readIORef toVar
                               writeIORef fromVar (fromX+addX, fromY+addY)
                               writeIORef toVar   (toX+addX, toY+addY)

    return $ Figure { draw=drawF, move=moveF }
</haskell>


Now we can test the code which moves figures around:

<haskell>
...
</haskell>
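For reference, the pieces above can be assembled into a complete program roughly like this (a sketch: the <code>Figure</code> record, the <code>Point</code> alias, the <code>describe</code> helper and the test <code>main</code> are our assumptions, not the article's exact elided code):

<haskell>
import Data.IORef

type Point = (Double, Double)

data Figure = Figure { draw :: IO ()
                     , move :: Point -> IO ()
                     }

-- pure formatting helper, so the expected output is easy to state
describe :: Point -> Double -> String
describe c r = "  Circle at " ++ show c ++ " with radius " ++ show r

circle :: Point -> Double -> IO Figure
circle center radius = do
    centerVar <- newIORef center
    let drawF = readIORef centerVar >>= putStrLn . (`describe` radius)
    let moveF (addX, addY) = do (x, y) <- readIORef centerVar
                                writeIORef centerVar (x + addX, y + addY)
    return Figure { draw = drawF, move = moveF }

main :: IO ()
main = do
    c <- circle (10, 10) 5
    draw c            -- "  Circle at (10.0,10.0) with radius 5.0"
    move c (5, 5)
    draw c            -- "  Circle at (15.0,15.0) with radius 5.0"
</haskell>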


 
It's important to realize that we are not limited to including only I/O actions
in a record that's intended to simulate a C++/Java-style interface. The record
can also include values, <code>IORef</code>s, pure functions - in short, any
type of data. For example, we can easily add to the <code>Figure</code>
interface fields for area and origin:


<haskell>
...
</haskell>




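The extension itself is elided above, but it could look something like this (the field names and types are illustrative assumptions):

<haskell>
type Point = (Double, Double)

data Figure = Figure { draw   :: IO ()
                     , move   :: Point -> IO ()
                     , area   :: Double      -- a plain value
                     , origin :: IO Point    -- an I/O accessor
                     }
</haskell>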
== Exception handling (under development) ==

Although Haskell provides a set of exception raising/handling features
comparable to those in popular OOP languages (C++, Java, C#), this part of the
language receives much less attention. This is for two reasons:

* you just don't need to worry as much about them - most of the time it just works "behind the scenes".
* Haskell, lacking OOP-style inheritance, doesn't allow the programmer to easily subclass exception types, therefore limiting the flexibility of exception handling.


Haskell can raise more exceptions than other programming languages - pattern
match failures, calls with invalid arguments (such as
<span style="white-space: nowrap"><code>head []</code></span>) and computations
whose results depend on special values <code>undefined</code> and
<span style="white-space: nowrap"><code>error "...."</code></span> all raise
their own exceptions:


* example 1:
:<haskell>
main = print (f 2)

-- no equation matches the argument 2, so evaluating 'f 2'
-- raises a pattern-match exception
f 0 = "zero"
f 1 = "one"
</haskell>


* example 2:
:<haskell>
main = print (head [])
</haskell>


* example 3:
:<haskell>
main = print (1 + (error "Value that wasn't initialized or cannot be computed"))
</haskell>


This allows programs to be written in a much less defensive style: such errors
need not be checked for explicitly, and simply surface as run-time exceptions.
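Although this section doesn't cover handling yet, note that all of the exceptions above can be caught from <code>IO</code> code - for instance with <code>try</code> and <code>evaluate</code> from <code>Control.Exception</code> (a minimal sketch, not one of the article's examples):

<haskell>
import Control.Exception

main :: IO ()
main = do
    -- 'evaluate' forces the pure expression inside IO, so the
    -- error from 'head []' surfaces here and can be caught
    r <- try (evaluate (head ([] :: [Int]))) :: IO (Either SomeException Int)
    case r of
      Left _  -> putStrLn "caught the exception from 'head []'"
      Right v -> print v
</haskell>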
 


== Interfacing with C/C++ and foreign libraries (under development) ==


While Haskell is great at algorithm development, speed isn't its best side. We
can combine the best of both languages, though, by writing speed-critical parts
of the program in C and the rest in Haskell. We just need a way to call C
routines from Haskell and vice versa, and to marshal data between the two
languages.


We also need to interact with C to use Windows/Linux APIs and to link to
various libraries and DLLs. Even interfacing with other languages often
requires going through C, which acts as a "common denominator".
[https://www.haskell.org/onlinereport/haskell2010/haskellch8.html Chapter 8 of the Haskell 2010 report]
provides a complete description of interfacing with C.


We will learn to use the FFI via a series of examples. These examples include
C/C++ code, so they need C/C++ compilers to be installed; the same is true
if you need to include code written in C/C++ in your program (C/C++ compilers
are not required when you just need to link with existing libraries providing
APIs with the C calling convention). On Unix (and Mac OS?) systems, the
system-wide default C/C++ compiler is typically used by the GHC installation.
On Windows, no default compilers exist, so GHC is typically shipped with a C
compiler, and you may find on the download page a GHC distribution bundled with
C and C++ compilers. Alternatively, you may find and install a GCC/MinGW
version compatible with your GHC installation.


If you need to make your C/C++ code as fast as possible, you may compile your
code with the Intel compilers instead of GCC. However, these compilers are not
free; moreover, on Windows, code compiled by the Intel compilers may not
interact correctly with GHC-compiled code unless one of them is put into DLLs
(due to object file incompatibility).


[http://www.haskell.org/haskellwiki/Applications_and_libraries/Interfacing_other_languages More links]:

:A lightweight tool for implementing access to C libraries from Haskell.


;[[HSFFIG]]
:The Haskell FFI Binding Modules Generator (HSFFIG) is a tool that takes a C library header (".h") and generates Haskell Foreign Function Interface import declarations for items (functions, structures, etc.) the header defines.


;[http://quux.org/devel/missingpy MissingPy]
:MissingPy is really two libraries in one. At its lowest level, MissingPy is a library designed to make it easy to call into Python from Haskell. It provides full support for interpreting arbitrary Python code, interfacing with a good part of the Python/C API, and handling Python objects. It also provides tools for converting between Python objects and their Haskell equivalents. Memory management is handled for you, and Python exceptions get mapped to Haskell <code>Dynamic</code> exceptions. At a higher level, MissingPy contains Haskell interfaces to some Python modules.


;[[HsLua]]
:A Haskell interface to the Lua scripting language


=== Foreign calls ===


We begin by learning how to call C routines from Haskell and Haskell
definitions from C. The first example consists of three files:


''main.hs:''
<haskell>
{-# LANGUAGE ForeignFunctionInterface #-}

main = do print "Hello from main"
          c_routine

haskell_definition = print "Hello from haskell_definition"

foreign import ccall safe "prototypes.h"
    c_routine :: IO ()

foreign export ccall
    haskell_definition :: IO ()
</haskell>


''vile.c:''
<haskell>
#include <stdio.h>
#include "prototypes.h"

void c_routine (void)
{
   printf("Hello from c_routine\n");
   haskell_definition();
}
</haskell>


''prototypes.h:''
<haskell>
extern void c_routine (void);
extern void haskell_definition (void);
</haskell>


It may be compiled and linked in one step by ghc:
   ghc --make main.hs vile.c


Or, you may compile C module(s) separately and link in ".o" files (this may be
preferable if you use <code>make</code> and don't want to recompile unchanged
sources; ghc's <code>--make</code> option provides smart recompilation only for
".hs" files):
   ghc -c vile.c
   ghc --make main.hs vile.o


You may use gcc/g++ directly to compile your C/C++ files but I recommend
linking via ghc because it adds a lot of libraries required for execution of
Haskell code. For the same reason, even if <code>main</code> in your program is
written in C/C++, I recommend calling it from the Haskell action
<code>main</code> - otherwise you'll have to explicitly init/shutdown the GHC
RTS (run-time system).


We use "foreign import" specification to import foreign routines into our Haskell world, and "foreign export" to export Haskell routines into external world. Note that import statement creates new Haskell symbol (from external one), while export statement uses Haskell symbol previously defined. Techically speaking, both types of statements creates wrappers that converts naming and calling conventions from C to Haskell world or vice versa.
We use the <code>foreign import</code> declaration to import foreign routines
into Haskell, and <code>foreign export</code> to export Haskell definitions
"outside" for imperative languages to use. Note that <code>import</code>
creates a new Haskell symbol (from the external one), while <code>export</code>
uses a Haskell symbol previously defined. Technically speaking, both types of
declarations create a wrapper that converts the names and calling conventions
from C to Haskell or vice versa.


=== All about "foreign" statement ===
=== All about the <code>foreign</code> declaration ===


"ccall" specifier in foreign statements means use of C (not C++ !) calling convention. This means that if you want to write external function in C++ (instead of C) you should add '''export "C"''' specification to its declaration - otherwise you'll get linking error. Let's rewrite out first example to use C++ instead of C:
The <code>ccall</code> specifier in foreign declarations means the use of the C
(not C++ !) calling convention. This means that if you want to write the
external routine in C++ (instead of C) you should add <code>export "C"</code>
specification to its declaration - otherwise you'll get linking errors. Let's
rewrite our first example to use C++ instead of C:


''prototypes.h:''
<haskell>
#ifdef __cplusplus
extern "C" {
#endif

extern void c_routine (void);
extern void haskell_definition (void);

#ifdef __cplusplus
}
#endif
</haskell>
Compile it via:

   ghc --make main.hs vile.cpp


where "vile.cpp" is just a renamed copy of "vile.c" from the first example.
Note that the new "prototypes.h" is written to allow compiling it both as C and
C++ code. When it's included from "vile.cpp", it's compiled as C++ code. When
GHC compiles "main.hs" via the C compiler (enabled by the <code>-fvia-C</code>
option), it also includes "prototypes.h" but compiles it in C mode. This is why
you need to specify ".h" files in <code>foreign</code> declarations - depending
on which Haskell compiler you use, these files may be included to check
consistency of C and Haskell declarations.


The quoted part of the foreign declaration may also be used to give the import
or export another name - for example,


<haskell>
foreign import ccall safe "prototypes.h CRoutine"
    c_routine :: IO ()

foreign export ccall "HaskellDefinition"
    haskell_definition :: IO ()
</haskell>


specifies that:
* the C routine called <code>CRoutine</code> will become known as <code>c_routine</code> in Haskell,
* while the Haskell definition <code>haskell_definition</code> will be known as <code>HaskellDefinition</code> in C.

This is required when the C name doesn't conform to Haskell naming
requirements.

Although the Haskell FFI standard describes many other calling conventions in
addition to <code>ccall</code> (e.g. <code>cplusplus</code>, <code>jvm</code>,
<code>net</code>), current Haskell implementations support only
<code>ccall</code> and <code>stdcall</code>. The latter, also called the
"Pascal" calling convention, is used to interface with WinAPI:


<haskell>
...
</haskell>


And finally, about the <code>safe</code>/<code>unsafe</code> specifier: a C
routine imported with the <code>unsafe</code> keyword is called directly and
the Haskell runtime is stopped while the C routine is executed (when there are
several OS threads executing the Haskell program, only the current OS thread is
delayed). This call doesn't allow recursively entering back into Haskell by
calling any Haskell definition - the Haskell RTS is just not prepared for such
an event. However, <code>unsafe</code> calls are as quick as calls in C, which
makes them ideal for "momentary" calls that quickly return back to the caller.


When "safe" is specified, C function called in safe environment - Haskell execution context is saved, so it's possible to call back to Haskell and, if C call executed too much time, other OS thread may be started to execute Haskell code (of course, in threads other that one called C code). This has its own price, though - around 1000 CPU ticks per call.
When <code>safe</code> is specified, the C routine is called in a safe
environment - the Haskell execution context is saved, so it's possible to call
back to Haskell and, if the C call takes a long time, another OS thread may be
started to execute Haskell code (of course, in threads other than the one that
called the C code). This has its own price, though - around 1000 CPU ticks per
call.


You can read more about interaction between FFI calls and Haskell concurrency
in [[#readmore|[7]]].


=== Marshalling simple types ===


Calling by itself is relatively easy; the real problem of interfacing languages
with different data models is passing data between them. In this case, there is
no guarantee that Haskell's <code>Int</code> is represented in memory the same
way as C's <code>int</code>, nor Haskell's <code>Double</code> the same as C's
<code>double</code> and so on. While on ''some'' platforms they are the same
and you can write throw-away programs relying on these, the goal of portability
requires you to declare foreign imports and exports using special types
described in the FFI standard, which are guaranteed to correspond to C types.
These are:


<haskell>
...
</haskell>
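The standard's full table of types is elided here; as a reminder, the commonly used ones are exported by <code>Foreign.C.Types</code>:

<haskell>
import Foreign.C.Types
       ( CChar, CSChar, CUChar
       , CShort, CUShort, CInt, CUInt
       , CLong, CULong, CLLong, CULLong
       , CFloat, CDouble )
</haskell>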
</haskell>


Now we can typefully import and export to and from C and Haskell:
<haskell>
foreign import ccall unsafe "math.h"
   c_sin :: CDouble -> CDouble
</haskell>
</haskell>


Note that C routines <i>which behave like pure functions</i> (those whose
results depend only on their arguments) are imported without <code>IO</code>
in their return type. The <code>const</code> specifier in C is not reflected
in Haskell types, so appropriate compiler checks are not performed. <!-- What would these be? -->


All these numeric types are instances of the same classes as their Haskell
cousins (<code>Ord</code>, <code>Num</code>, <code>Show</code> and so on), so
you may perform calculations on these data directly. Alternatively, you may
convert them to native Haskell types. It's very typical to write simple
wrappers around foreign imports and exports just to provide interfaces having
native Haskell types:


<haskell>
...
</haskell>

<haskell>
-- |Type-conversion wrapper around c_strlen
strlen :: String -> Int
strlen = ....
</haskell>
Line 1,116: Line 1,317:
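As a concrete illustration of such a wrapper (a self-contained sketch of our own, using C's <code>sin</code>; the wrapper name <code>sin'</code> is arbitrary):

<haskell>
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

foreign import ccall unsafe "math.h sin"
    c_sin :: CDouble -> CDouble

-- |Wrapper giving the import a native-Haskell-typed interface
sin' :: Double -> Double
sin' = realToFrac . c_sin . realToFrac
</haskell>

Here <code>realToFrac</code> converts between <code>Double</code> and <code>CDouble</code> in both directions; since <code>sin</code>'s result depends only on its argument, the import can omit <code>IO</code>.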
=== Marshalling composite types ===


A C array may be manipulated in Haskell as
[http://haskell.org/haskellwiki/Arrays#StorableArray_.28module_Data.Array.Storable.29 StorableArray].


There is no built-in support for marshalling C structures and using C constants
in Haskell. These are implemented in the c2hs preprocessor, though.


Binary marshalling (serializing) of data structures of any complexity is
implemented in the library module "Binary".


=== Dynamic calls ===


=== DLLs ===
''Because I don't have experience of using DLLs, can someone write into this
section? Ultimately, we need to consider the following tasks:''
* using DLLs of 3rd-party libraries (such as ''ziplib'')
* putting your own C code into a DLL to use in Haskell
* putting Haskell code into a DLL which may be called from C code


== '''The dark side of the I/O monad''' ==

Unless you are a systems developer, postgraduate CS student, or have alternate
(and eminent!) verifiable qualifications you should have '''no need whatsoever'''
for this section -
[https://stackoverflow.com/questions/9449239/unsafeperformio-in-threaded-applications-does-not-work here]
is just one tiny example of what can go wrong if you don't know what you are
doing. Look for other solutions!

=== '''unsafePerformIO''' ===

Do you remember this definition?


<haskell>
getChar >>= \c -> c
</haskell>


Let's try to "define" something with it:

<haskell>
getchar :: Char
getchar = getChar >>= \c -> c

get2chars :: String
get2chars = [a, b] where a = getchar
                         b = getchar
</haskell>


But what makes all of that so wrong? Besides <code>getchar</code>
and <code>get2chars</code> not being I/O actions:

# Because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "unnecessary" calls to <code>getchar</code> and use one returned value twice;
# Even if it does make two calls, there is no way to determine which call should be performed first. Do you want to return the two characters in the order in which they were read, or in the opposite order? Nothing in the definition of <code>get2chars</code> answers this question.

Despite these problems, programmers coming from an imperative language
background often look for a way to do this - disguise one or more I/O
actions as a pure definition. Having seen procedural entities similar
in appearance to:
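For contrast, the legitimate version of <code>get2chars</code> is an I/O action; parameterising it over the character source (our own twist, to make the sequencing easy to exercise) looks like this:

<haskell>
get2charsFrom :: IO Char -> IO String
get2charsFrom get = do a <- get      -- first call, performed first
                       b <- get      -- second call, performed second
                       return [a, b]

get2chars :: IO String
get2chars = get2charsFrom getChar
</haskell>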


<haskell>
void putchar(char c);
</haskell>


the thought of just writing:


<haskell>
putchar :: Char -> ()
putchar c = ...
</haskell>


would definitely be more appealing - for example, defining
<code>readContents</code> as though it were a pure function:

<haskell>
readContents :: Filename -> String
</haskell>

will certainly simplify the code that uses it. However, those exact same
problems are also lurking here:

# Attempts to read the contents of files with the same name can be factored (''i.e.'' reduced to a single call) despite the fact that the file (or the current directory) can be changed between calls. Haskell considers all non-<code>IO</code> functions to be pure and feels free to merge multiple calls with the same parameters.
# This call is not inserted in a sequence of I/O actions all the way from <code>main</code> so the compiler doesn't know at what exact moment you want to execute this action. For example, if the file has one kind of contents at the beginning of the program and another at the end - which contents do you want to see?  You have no idea when (or even if) this function is going to get invoked, because Haskell sees this function as pure and feels free to reorder the execution of any or all pure functions as needed.


So, implementing supposedly-pure functions that interact with the '''Real World'''
is considered to be '''Bad Behavior'''. Nice programmers never do it <code>;-)</code>

Nevertheless, there are (semi-official) ways to use I/O actions inside of pure
functions - there is a ''(ahem)'' "special" definition that will (mis)use the
Haskell implementation to run an I/O action. This particular (and dangerous)
definition is:


- An IO action inside 'unsafePerformIO' will be performed only if
<haskell>
result of this operation is really used. The evaluation order is not
unsafePerformIO :: IO a -> a
guaranteed and you should not rely on it (except when you're sure about
</haskell>
whatever data dependencies may exist).


Using <code>unsafePerformIO</code>, you could easily write "pure-looking
functions" that actually do I/O inside. But don't do this without a real need,
and remember to follow this rule:


I should also say that inside 'unsafePerformIO' call you can organize
* the compiler doesn't know that you are cheating; it still considers each non-<code>IO</code> function to be a pure one. Therefore, all the usual optimization rules can (and will!) be applied to its execution.
a small internal chain of IO actions with the help of the same binding
operators and/or 'do' syntactic sugar we've seen above. For example, here's a particularly convoluted way to compute the integer that comes after zero:


<haskell>
So you must ensure that:
one :: Int
one = unsafePerformIO $ do var <- newIORef 0
                          modifyIORef var (+1)
                          readIORef var
</haskell>


and in this case ALL the operations in this chain will be performed as
* The result of each call depends only on its arguments.
long as the result of the 'unsafePerformIO' call is needed. To ensure this,
* You don't rely on side-effects of this function, which may be not executed if its results are not needed.
the actual 'unsafePerformIO' implementation evaluates the "world" returned
by the 'action':


<haskell>
Let's investigate this problem more deeply. Function evaluation in Haskell is
unsafePerformIO action = let (a,world1) = action createNewWorld
determined by a value's necessity - the language computes only the values that
                        in (world1 `seq` a)
are really required to calculate the final result. But what does this mean with
</haskell>
respect to the <code>main</code> action?  To run it to completion, all the
intermediate I/O actions that are included in <code>main</code>'s chain need to
be run. By using <code>unsafePerformIO</code> we call I/O actions outside of this
chain.  What guarantee do we have that they will be run at all? None. The only
time they will be run is if running them is required to compute the overall
function result (which in turn should be required to perform some action in the
<code>main</code> chain). This is an example of Haskell's evaluation-by-need
strategy. Now you should clearly see the difference:


(The 'seq' operation strictly evaluates its first argument before
* An I/O action inside an I/O definition is guaranteed to execute as long as it is (directly or indirectly) inside the <code>main</code> chain - even when its result isn't used (because it will be run anyway). You directly specify the order of the action's execution inside the I/O definition.
returning the value of the second one).


* An I/O action called by <code>unsafePerformIO</code> will be performed only if its result is really used. The evaluation order is not guaranteed and you should not rely on it (except when you're sure about whatever data dependencies may exist).


=== inlinePerformIO ===
I should also say that inside the <code>unsafePerformIO</code> call you can
organize a small internal chain of I/O actions with the help of the same
binding operators and/or <code>do</code> syntactic sugar we've seen above. So
here's how we'd rewrite our previous (pure!) definition of <code>one</code>
using <code>unsafePerformIO</code>:


inlinePerformIO has the same definition as unsafePerformIO but with addition of INLINE pragma:
<haskell>
<haskell>
-- | Just like unsafePerformIO, but we inline it. Big performance gains as
one :: Integer
-- it exposes lots of things to further inlining
one = unsafePerformIO $ do var <- newIORef 0
{-# INLINE inlinePerformIO #-}
                          modifyIORef var (+1)
inlinePerformIO action = let (a, world1) = action createNewWorld
                          readIORef var
                        in (world1 `seq` a)
#endif
</haskell>
</haskell>


Semantically inlinePerformIO = unsafePerformIO
and in this case ''all'' the I/O actions in this chain will be run when
in as much as either of those have any semantics at all.
the result of the <code>unsafePerformIO</code> call is needed.
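To see that such a chain really does behave like a pure constant, here's a small self-contained sketch of the same definition (the <code>NOINLINE</code> pragma is my own addition, used to stop the compiler from duplicating the computation):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A pure-looking value hiding a small chain of I/O actions; the whole
-- chain runs (at most once) when 'one' is first demanded, and the
-- resulting value is shared by all of its uses.
one :: Integer
one = unsafePerformIO $ do
  var <- newIORef 0
  modifyIORef var (+1)
  readIORef var
{-# NOINLINE one #-}
```

Every use of <code>one</code> then sees the same result, which is exactly what the optimizer assumes of any pure value.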


=== '''inlinePerformIO''' ===

The internal code for <code>inlinePerformIO</code> is similar to that of
<code>unsafePerformIO</code>, sometimes having an <code>INLINE</code> pragma.
Semantically <code>inlinePerformIO = unsafePerformIO</code> in as
much as either of those have any semantics at all.

The difference of course is that <code>inlinePerformIO</code> is even less safe
than <code>unsafePerformIO</code>. While GHC will try not to duplicate or
common up different uses of <code>unsafePerformIO</code>, we aggressively
inline <code>inlinePerformIO</code>. So you can really only use it where the
I/O content is really properly pure, like reading from an immutable memory
buffer (as in the case of <code>ByteString</code>s). However things like
allocating new buffers should not be done inside <code>inlinePerformIO</code>
since that can easily be floated out and performed just once for the whole
program, so you end up with many things sharing the same buffer, which would
be bad.

So the rule of thumb is that I/O actions wrapped in <code>unsafePerformIO</code>
have to be externally pure while with <code>inlinePerformIO</code> it has
to be really, ''really'' pure or it'll all go horribly wrong.


That said, here's some really hairy code. This should frighten any pure
functional programmer:
<haskell>
write !n body = Put $ \c buf@(Buffer fp o u l) ->
   if n <= l
     then write' c fp o u l
     else write' (flushOld c n fp o u) (newBuffer c n) 0 0 0

   where {-# NOINLINE write' #-}
         write' c !fp !o !u !l =
           -- warning: this is a tad hardcore
           inlinePerformIO
</haskell>


This does not adhere to my rule of thumb above. Don't ask exactly why we claim
it's safe <code>:-)</code> (and if anyone really wants to know, ask Ross Paterson who did it
first in the <code>Builder</code> monoid)

=== '''unsafeInterleaveIO''' ===

But there is an even stranger operation:

<haskell>
unsafeInterleaveIO :: IO a -> IO a
</haskell>

and here's one clear reason why:

<haskell>
{-# NOINLINE unsafeInterleaveIO #-}
unsafeInterleaveIO  :: IO a -> IO a
unsafeInterleaveIO a =  return (unsafePerformIO a)
</haskell>


So don't let that type signature fool you - <code>unsafeInterleaveIO</code>
also has to be used carefully! It too sets up its unsuspecting parameter to
run lazily, instead of running in the <code>main</code> action chain, with the
only difference being that the result of running the parameter can only be used
by another I/O action. But this is of little benefit - ideally the parameter
and the <code>main</code> action chain should have no other interactions with
each other, otherwise things can get ugly!

At least you have some appreciation as to why <code>unsafeInterleaveIO</code>
is, well, '''unsafe!''' Just don't ask - to talk further is bound to cause grief
and indignation. I won't say anything more about this ruffian I...use all the
time (darn it!)
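To make the deferral concrete, here's a small sketch of my own (not from any library): the interleaved action only runs once its result is actually forced.

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- 'demo' is a hypothetical example: the IORef flag records whether
-- the interleaved action has actually run yet.
demo :: IO (Bool, Bool)
demo = do
  ran <- newIORef False
  x   <- unsafeInterleaveIO $ do
           writeIORef ran True
           pure (42 :: Int)
  before <- readIORef ran   -- the action hasn't run yet
  _ <- pure $! x            -- forcing the result runs it now
  after  <- readIORef ran
  pure (before, after)     -- (False, True)
```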


One can use <code>unsafePerformIO</code> (not <code>unsafeInterleaveIO</code>)
to perform I/O operations not in some predefined order but by demand. For
example, the following code:


<haskell>
...
</haskell>


will perform the <code>getChar</code> I/O call only when the value of
<code>c</code> is really required by the calling code, i.e. this call will
be performed lazily like any regular Haskell computation.
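For instance, in this console-free sketch (my own construction, with a write to an <code>IORef</code> standing in for the I/O call), the write only happens once <code>c</code> is forced:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- The "I/O call" here is a write to an IORef, performed only at the
-- moment 'c' is demanded, not at the moment 'c' is defined.
onDemand :: IO (Bool, Bool)
onDemand = do
  ran <- newIORef False
  let c = unsafePerformIO (writeIORef ran True >> pure 'x')
  before <- readIORef ran   -- False: 'c' hasn't been demanded yet
  _ <- pure $! c            -- demanding 'c' performs the write
  after  <- readIORef ran   -- True
  pure (before, after)
```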


Now imagine the following code:
<haskell>
...
</haskell>


The three characters inside this list will be computed on demand too, and this
means that their values will depend on the order in which they are consumed. That
is not what we usually want.
 


<code>unsafeInterleaveIO</code> solves this problem - it performs I/O only on
demand but allows you to define the exact ''internal'' execution order for
parts of your data structure.


* <code>unsafeInterleaveIO</code> accepts an I/O action as a parameter and returns another I/O action as the result:

:<haskell>
do str <- unsafeInterleaveIO myGetContents
                    ⋮
</haskell>


* <code>unsafeInterleaveIO</code> doesn't perform any action immediately, it only creates a closure of type <code>a</code> which, upon being needed, will perform the action specified as the parameter.


* this action by itself may compute the whole value immediately...or use <code>unsafeInterleaveIO</code> again to defer calculation of some sub-components:

:<haskell>
myGetContents = do
   c <- getChar
   s <- unsafeInterleaveIO myGetContents
   return (c:s)
</haskell>


This code will be executed only at the moment when the value of <code>str</code>
is really demanded. At that moment, <code>getChar</code> will be performed
(with its result assigned to <code>c</code>) and a new lazy-I/O closure will be
created - for <code>s</code>. This new closure also contains a link to a
<code>myGetContents</code> call.


The resulting list is then returned. It contains the <code>Char</code> that was
just read and a link to another <code>myGetContents</code> call as a way to
compute the rest of the list. Only at the moment when the next value in the
list is required will this operation be performed again.


As a final result, the second <code>Char</code> in the list cannot be read
before the first one, but reading the characters as a whole remains lazy -
bingo!




PS: of course, actual code should include EOF checking; also note that you can
read multiple characters/records at each call:


<haskell>
myGetContents = do
   l <- replicateM 512 getChar
   s <- unsafeInterleaveIO myGetContents
   return (l++s)
</haskell>


and we can rewrite <code>myGetContents</code> to avoid needing to use
<code>unsafeInterleaveIO</code> where it's called:

<haskell>
myGetContents = unsafeInterleaveIO $ do
   l <- replicateM 512 getChar
   s <- myGetContents
   return (l++s)
</haskell>
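The same pattern can be exercised without a console by pulling characters on demand from a mutable source - a sketch with made-up names (<code>lazyDrain</code> is not a library function):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- Each list cell is produced lazily: the next character is consumed
-- from the source only when that cell of the result is demanded.
lazyDrain :: IORef String -> IO String
lazyDrain src = unsafeInterleaveIO $ do
  s <- readIORef src
  case s of
    []     -> pure []
    (c:cs) -> do
      writeIORef src cs
      rest <- lazyDrain src
      pure (c : rest)
```

Forcing the returned string consumes the source one character at a time, in order, just like the <code>getChar</code>-based version above.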


== Welcome to the machine: taking off the covers ==

A little disclaimer: I should say that I'm not describing here exactly what a
monad is (I don't even completely understand it myself) and my explanation
shows only one ''possible'' way to implement the I/O monad in Haskell. For
example, the hbc compiler and the Hugs interpreter implement the I/O monad via
continuations [[#readmore|[9]]]. I also haven't said anything about exception
handling, which is a natural part of the "monad" concept. You can read the
[[All About Monads]] guide to learn more about these topics.

But there is some good news: the I/O monad understanding you've just acquired
will work with any implementation and with many other monads.

=== The [[GHC]] implementation ===

:<haskell>
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
</haskell>

It uses the <code>State# RealWorld</code> type and the strict tuple type
<code>(# ... #)</code> for optimization. It also uses an <code>IO</code> data
constructor. Nevertheless, there are no significant changes from the standpoint
of our explanation.

Of course, other compilers, e.g. yhc/nhc (jhc, too?), define <code>IO</code> in
other ways.


=== The [[Yhc]]/nhc98 implementation ===

:<haskell>
newtype IO a = IO (World -> Either IOError a)
</haskell>


This implementation makes the <code>World</code> disappear somewhat[[#readmore|[10]]],
and returns <code>Either</code> a result of type <code>a</code>, or if an error
occurs then <code>IOError</code>. The lack of the <code>World</code> on the
right-hand side of the function can only be done because the compiler knows
special things about the <code>IO</code> type, and won't overoptimise it.




== <span id="readmore"></span>Further reading ==


[1] This manual is largely based on Simon Peyton Jones's paper [https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.9123&rep=rep1&type=pdf Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell]. I hope that my manual improves his original explanation of the Haskell I/O system and brings it closer to the point of view of new Haskell programmers. But if you need to learn about concurrency, exceptions and the FFI in Haskell/GHC, the original paper is the best source of information.

[2] You can find more information about concurrency, the FFI and STM at the [[GHC/Concurrency#Starting points]] page.

[3] The [[Arrays]] page contains exhaustive explanations about using mutable arrays.

[4] Look also at the [[Tutorials#Using_monads|Using monads]] page, which contains tutorials and papers really describing these mysterious monads.

[5] An explanation of the basic monad functions, with examples, can be found in the reference guide [https://web.archive.org/web/20201109033750/members.chello.nl/hjgtuyl/tourdemonad.html A tour of the Haskell Monad functions], by Henk-Jan van Tuyl.

[6] Official FFI specifications can be found on the page [http://www.cse.unsw.edu.au/~chak/haskell/ffi/ The Haskell 98 Foreign Function Interface 1.0: An Addendum to the Haskell 98 Report]

[7] Using the FFI in multithreaded programs is described in [http://www.haskell.org/~simonmar/bib/concffi04_abstract.html Extending the Haskell Foreign Function Interface with Concurrency]

[8] This particular behaviour is not a requirement of Haskell 2010, so the operation of <code>seq</code> may differ between various Haskell implementations - if you're not sure, staying within the I/O monad is the safest option.

[9] [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.3579&rep=rep1&type=pdf How to Declare an Imperative] by Phil Wadler provides an explanation of how this can be done.


Do you have more questions? Ask in the
[http://www.haskell.org/mailman/listinfo/haskell-cafe haskell-cafe mailing list].


== To-do list ==


If you are interested in adding more information to this manual, please add
your questions/topics here.


Topics:
* <code>fixIO</code> and <code>mdo</code>
* <code>Q</code> monad


Questions:
* split <code>(>>=)</code>/<code>(>>)</code>/<code>return</code> section and <code>do</code> section, more examples of using binding operators
* <code>IORef</code> detailed explanation (==<code>const*</code>), usage examples, syntax sugar, unboxed refs
* explanation of how the actual data "in" mutable references are inside GHC's <code>RealWorld</code>, rather than inside the references themselves (<code>IORef</code>, <code>IOArray</code> & co.)
* control structures developing - much more examples
* <code>unsafePerformIO</code> usage examples: global variable, <code>ByteString</code>, other examples
* how <code>unsafeInterLeaveIO</code> can be seen as a kind of concurrency, and therefore isn't so unsafe (unlike <code>unsafeInterleaveST</code> which really is unsafe)
* discussion about different senses of <code>safe</code>/<code>unsafe</code> (like breaking equational reasoning vs. invoking undefined behaviour (so can corrupt the run-time system))
* actual code used by GHC - how to write low-level definitions based on example of how <code>newIORef</code> is implemented


This manual is collective work, so feel free to add more information to it
yourself. The final goal is to collectively develop a comprehensive manual
for using the I/O monad.


----


[[Category:Tutorials]]

Latest revision as of 00:00, 8 March 2025

Haskell I/O can be a source of confusion and surprises for new Haskellers - if that's you, a good place to start is the Introduction to IO which can help you learn the basics (e.g. the syntax of I/O expressions) before continuing on.


While simple I/O code in Haskell looks very similar to its equivalents in imperative languages, attempts to write somewhat more complex code often result in a total mess. This is because Haskell I/O is really very different in how it actually works.

The following text is an attempt to explain the inner workings of I/O in Haskell. This explanation should help you eventually learn all the smart I/O tips. Moreover, I've added a detailed explanation of various traps you might encounter along the way. After reading this text, you will be well on your way towards mastering I/O in Haskell.


== Haskell is a pure language ==

Haskell is a pure language and even the I/O system can't break this purity. Being pure means that the result of any function call is fully determined by its arguments. Imperative routines like rand() or getchar() in C, which return different results on each call, are simply impossible to write in Haskell. Moreover, Haskell functions can't have side effects, which means that they can't make any changes "outside the Haskell program", like changing files, writing to the screen, printing, sending data over the network, and so on. These two restrictions together mean that any function call can be replaced by the result of a previous call with the same parameters, and the language guarantees that all these rearrangements will not change the program result! For example, the hyperbolic cosine function cosh can be defined in Haskell as:

<haskell>
cosh r = (exp r + 1/exp r)/2
</haskell>

using identical calls to exp, which is another function. So cosh can instead call exp once, and reuse the result:

<haskell>
cosh r = (x + 1/x)/2 where x = exp r
</haskell>

Let's compare this to C: optimizing C compilers try to guess which routines have no side effects and don't depend on mutable global variables. If this guess is wrong, an optimization can change the program's semantics! To avoid this kind of disaster, C optimizers are conservative in their guesses or require hints from the programmer about the purity of routines.

Compared to an optimizing C compiler, a Haskell compiler is a set of pure mathematical transformations. This results in much better high-level optimization facilities. Moreover, pure mathematical computations can be much more easily divided into several threads that may be executed in parallel, which is increasingly important in these days of multi-core CPUs. Finally, pure computations are less error-prone and easier to verify, which adds to Haskell's robustness and to the speed of program development using Haskell.

Haskell's purity allows the compiler to call only functions whose results are really required to calculate the final value of a top-level definition (e.g. main) - this is called lazy evaluation. It's a great thing for pure mathematical computations, but how about I/O actions? Something like

<haskell>
putStrLn "Press any key to begin formatting"
</haskell>

can't return any meaningful result value, so how can we ensure that the compiler will not omit or reorder its execution? And in general: how can we work with stateful algorithms and side effects in an entirely lazy language? This question has had many different solutions proposed while Haskell was developed (see A History of Haskell), with one solution eventually making its way into the current standard.


== I/O in Haskell, simplified ==

So what is actually inside an I/O action? Let's look at how MicroHs defines the I/O type:

<haskell>
data IO a
</haskell>

just as described on page 95 of the Haskell 2010 Report: no visible data constructors. So someone who is implementing Haskell could in fact define functions and I/O actions in much the same way. The only difference that really matters is this:

* Only I/O actions are allowed to make changes "outside the Haskell program".

(Or to state it more formally: only I/O actions are allowed to have externally visible side effects).

It doesn't get much simpler than that :-D

=== The question of purity ===

So if Haskell uses the same side effects for I/O as an imperative language, how can it possibly be "pure"?

Because of what doesn't work in Haskell:

<haskell>
\msg x -> seq (putStrLn msg) x
</haskell>

No, that won't work. And neither will this:

<haskell>
getChar >>= \c -> c
</haskell>

Remember, Haskell functions can't have side effects - they can't make any changes "outside the Haskell program". Therefore if either of these examples really did work, then they would no longer be Haskell functions! (More importantly, just imagine if those two were being used in parallel somewhere in a program...)

So in Haskell, the result of running an I/O action must be another I/O action. This restriction ensures that Haskell functions really are pure.
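A compilable counterpart of the broken example above (the names <code>wrap</code> and <code>echo</code> are mine): the function passed to <code>(>>=)</code> has to return another I/O action, typically built with <code>pure</code> or <code>return</code>.

```haskell
-- 'wrap' returns an I/O action, so it may appear on the right of (>>=).
wrap :: Char -> IO Char
wrap c = pure c

echo :: IO Char
echo = pure 'a' >>= wrap
-- pure 'a' >>= (\c -> c)   -- rejected: c :: Char is not an I/O action
```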


== Running with I/O ==

So, main just has the type IO (). Let's look at main calling getChar two times:

<haskell>
main :: IO ()
main = getChar >>= \a ->
       getChar >>= \b ->
       return ()
</haskell>

By defining a Monad instance for IO a:

<haskell>
unitIO :: a -> IO a                     -- implemented
bindIO :: IO a -> (a -> IO b) -> IO b   --  elsewhere

instance Monad IO where
    return = unitIO
    (>>=)  = bindIO
</haskell>

we can then expand main to get:

<haskell>
main = getChar `bindIO` (\a ->
       getChar `bindIO` (\b ->
       unitIO ()))
</haskell>

Now to run main:

# <code>main = getChar `bindIO` (\a -> ...)</code> doesn't require evaluation, continuing;
# run <code>getChar</code> to obtain a character <code>c1 :: Char</code>;
# apply <code>(\a -> ...)</code> to <code>c1</code>;
# then evaluate the result to obtain the next action <code>getChar `bindIO` (\b -> ...)</code>;
# run <code>getChar</code> to obtain another character <code>c2 :: Char</code>;
# apply <code>(\b -> ...)</code> to <code>c2</code>;
# then evaluate the result to obtain the next action <code>unitIO ()</code>;
# run <code>unitIO ()</code> to obtain <code>() :: ()</code>, which ends the program.

From that example we can see that:

* Each action is run - it doesn't matter if what's obtained from running it isn't actually used.
* Each action is run in the order it appears in the program - there is no reordering of actions.
* Each action is run once, then the next action is obtained or the program ends - if a program only uses an action once, then it is only run once.

Overall, in order to obtain the final value of main, each I/O action that is called from main - directly or indirectly - is run. This means that each action inserted in the chain will be performed just at the moment (relative to the other I/O actions) when we intended it to be called. Let's consider the following program:

<haskell>
main = do a <- ask "What is your name?"
          b <- ask "How old are you?"
          return ()

ask s = do putStr s
           readLn
</haskell>

Now you have enough knowledge to rewrite it in a low-level way and check that each operation that should be performed will really be performed with the arguments it should have and in the order we expect.

But what about conditional execution? No problem. Let's define the well-known when function:

when :: Bool -> IO () -> IO ()
when condition action =
    if condition
      then action
      else return ()

Because it's a function:

  • it will be applied to two arguments;
  • its result (the conditional expression) will be evaluated;
  • then the chosen action will be run.

As you can see, we can easily include I/O actions in, or exclude them from, the execution chain depending on data values. If condition is False when when is called, action will never be run.

Loops and more complex control structures can be implemented in the same way. Try it as an exercise!
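For instance, a counted loop can be built from plain recursion together with when (here taken from Control.Monad, which matches the definition above; the loop name and bounds are our own illustration):

```haskell
import Control.Monad (when)

-- Print the numbers from i to n using recursion and when;
-- the recursive call is just another I/O action in the chain.
loop :: Int -> Int -> IO ()
loop i n = when (i <= n) $ do
             print i
             loop (i + 1) n

main :: IO ()
main = loop 1 3
```

Running it prints 1, 2 and 3, each on its own line.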

(>>=) and do notation

All beginners (including me) start by thinking that do is some super-awesome statement that executes I/O actions. That's wrong - do is just syntactic sugar that simplifies the writing of definitions that use I/O (and also other monads, but that's beyond the scope of this manual). do notation eventually gets translated to a series of I/O actions much like we've manually written above. This simplifies the gluing of several I/O actions together. You don't need to use do for just one action; for example,

main = do putStr "Hello!"

is desugared to:

main = putStr "Hello!"

Let's examine how to desugar a do-expression with multiple actions in the following example:

main = do putStr "What is your name?"
          putStr "How old are you?"
          putStr "Nice day!"

The do-expression here just joins several I/O actions that should be performed sequentially. It's translated to sequential applications of one of the so-called "binding operators", namely (>>):

main = (putStr "What is your name?")
       >> ( (putStr "How old are you?")
            >> (putStr "Nice day!")
          )

Defining (>>) looks easy:

(>>) :: IO a -> IO b -> IO b
action1 >> action2 = action1 >>= \_ -> action2

Now you can substitute the definition of (>>) at the places where it's used and check that the program constructed by the do desugaring is exactly the same as what we could write by using I/O actions manually.
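Carrying out that substitution for the three-putStr example gives a version built only from (>>=) and lambdas (a sketch you can check your own answer against):

```haskell
-- The same program with each (>>) replaced by its definition
-- action1 >>= \_ -> action2; the results of putStr are ignored.
main :: IO ()
main = putStr "What is your name?" >>= \_ ->
       putStr "How old are you?"   >>= \_ ->
       putStr "Nice day!"
```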

A more complex example involves the binding of variables using <-:

main = do a <- readLn
          print a

This code is desugared into:

main = readLn
       >>= (\a -> print a)

where (>>=) corresponds to bindIO.

As you now know, the (>>) binding operator silently ignores the value of its first action and returns as an overall result the result of its second action only. On the other hand, the (>>=) binding operator (note the extra = at the end) allows us to use the result of its first action - it gets passed as an additional parameter to the second one!

You can use (>>) and (>>=) to simplify your program. For example, in the code above we don't need to introduce the variable, because the result of running readLn can be passed directly to print:

main = readLn >>= print

As you see, the notation:

 do x <- action1
    action2

where action1 has type IO a and action2 has type IO b, translates into:

 action1 >>= (\x -> action2)

where the second argument of (>>=) has the type a -> IO b. It's the way the <- binding is processed - the name on the left-hand side of <- just becomes a parameter of subsequent operations represented as one large I/O action. Note also that if action1 has type IO a then x will just have type a; you can think of the effect of <- as "unpacking" the I/O value of action1 into x. Note also that <- is not a true operator; it's pure syntax, just like do itself. Its meaning results only from the way it gets desugared.

Look at the next example:

main = do putStr "What is your name?"
          a <- readLn
          putStr "How old are you?"
          b <- readLn
          print (a,b)

This code is desugared into:

main = putStr "What is your name?"
       >> readLn
       >>= \a -> putStr "How old are you?"
       >> readLn
       >>= \b -> print (a,b)

I omitted the parentheses here; both the (>>) and the (>>=) operators are left-associative, but a lambda-binding always stretches as far to the right as possible, which means that the a and b bindings introduced here are valid for all the remaining actions. As an exercise, add the parentheses yourself and translate this definition into action-level code. I think it should be enough to help you finally realize how the do translation and binding operators work.

Oh, no! I forgot the third monadic operator: return. But that's understandable - it does very little! The resulting I/O action immediately returns its given argument (when it is run).

How about translating a simple example of return usage? Say,

main = do a <- readLn
          return (a*2)

Programmers with an imperative language background often think that return in Haskell, as in other languages, immediately returns from the I/O definition. As you can see in its definition (and even just from its type!), such an assumption is totally wrong. The only purpose of using return is to "lift" some value (of type a) into the result of a whole action (of type IO a) and therefore it should generally be used only as the last executed action of some I/O sequence. For example try to translate the following definition into the corresponding low-level code:

main = do a <- readLn
          when (a>=0) $ do
              return ()
          print "a is negative"

and you will realize that the print call is executed even for non-negative values of a. If you need to escape from the middle of an I/O definition, you can use an if expression:

main = do a <- readLn
          if (a>=0)
            then return ()
            else print "a is negative"

Moreover, Haskell layout rules allow us to use the following layout:

main = do a <- readLn
          if (a>=0) then return ()
            else do
          print "a is negative"
          ...

that may be useful for escaping from the middle of a longish do-expression.

Last exercise: implement a function liftM that lifts operations on plain values to the operations on monadic ones. Its type signature:

liftM :: (a -> b) -> (IO a -> IO b)

If that's too hard for you, start with the following high-level definition and rewrite it in low-level fashion:

liftM f action = do x <- action
                    return (f x)


Mutable data (references, arrays, hash tables...)

As you should know, every name in Haskell is bound to one fixed (immutable) value. This greatly simplifies understanding algorithms and code optimization, but it's inappropriate in some cases. As we all know, there are plenty of algorithms that are simpler to implement in terms of updatable variables, arrays and so on. This means that the value associated with a variable, for example, can be different at different execution points, so reading its value can't be considered as a pure function. Imagine, for example, the following code:

main = do let a0 = readVariable varA
              _  = writeVariable varA 1
              a1 = readVariable varA
          print (a0, a1)

Does this look strange?

  1. The two calls to readVariable look the same, so the compiler can just reuse the value returned by the first call.
  2. The result of the writeVariable call isn't used so the compiler can (and will!) omit this call completely.
  3. These three calls may be rearranged in any order because they appear to be independent of each other.

This is obviously not what was intended. What's the solution? You already know this - use I/O actions! Doing that guarantees:

  1. the result of the "same" action (such as readVariable varA) will not be reused
  2. each action will have to be executed
  3. the execution order will be retained as written

So, the code above really should be written as:

import Data.IORef
main = do varA <- newIORef 0  -- Create and initialize a new variable
          a0 <- readIORef varA
          writeIORef varA 1
          a1 <- readIORef varA
          print (a0, a1)

Here, varA has the type IORef Int which means "a variable (reference) in the I/O monad holding a value of type Int". newIORef creates a new variable (reference) and returns it, and then read/write actions use this reference. The value returned by the readIORef varA action depends not only on the variable involved but also on the moment this operation is performed so it can return different values on each call.
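Data.IORef also provides modifyIORef, which combines a read and a write into one action. A small counter sketch (the counter itself is our example, not from the text above):

```haskell
import Data.IORef

main :: IO ()
main = do counter <- newIORef (0 :: Int)
          modifyIORef counter (+1)   -- read the value, add 1, write it back
          modifyIORef counter (+1)
          n <- readIORef counter
          print n                    -- the two increments are both performed
```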

Arrays, hash tables and any other mutable data structures are defined in the same way - for each of them, there's an operation that creates a new "mutable value" and returns a reference to it. Then value-specific read and write operations in the I/O monad are used. The following code shows an example using mutable arrays:

import Data.Array.IO
main = do arr <- newArray (1,10) 37 :: IO (IOArray Int Int)
          a <- readArray arr 1
          writeArray arr 1 64
          b <- readArray arr 1
          print (a, b)

Here, an array of 10 elements with 37 as the initial value at each location is created. After reading the value of the first element (index 1) into a, that element's value is changed to 64 and then read again, into b. As you can see by executing this code, a will be set to 37 and b to 64.

Other state-dependent operations are also often implemented with I/O actions. For example, a random number generator should return a different value on each call. It looks natural to give it a type involving IO:

rand :: IO Int

Moreover, when you import a C routine you should be careful - if this routine is impure, i.e. its result depends on something "outside the Haskell program" (file system, memory contents, its own static internal state and so on), you should give it an IO type. Otherwise, the compiler can "optimize" repetitive calls to the definition with the same parameters!

For example, we can write a non-IO type for:

foreign import ccall
   sin :: Double -> Double

because the result of sin depends only on its argument, but

foreign import ccall
   tell :: Int -> IO Int

If you declare tell as a pure function (without IO), you may get the same position on each call!

Encapsulated mutable data: ST

If you're going to be doing things like sending text to a screen or reading data from a scanner, IO is the type to start with - you can then customise existing I/O operations or add new ones as you see fit. But what if that shiny-new (or classic) algorithm you're working on really only needs mutable state - then having to drag that IO type from main all the way through to wherever you're implementing the algorithm can get quite irritating.

Fortunately there is a better way! One that remains totally pure and yet allows the use of references, arrays, and so on - and it's done using, you guessed it, Haskell's versatile type system (and one extension).

Remember our definition of IO?

data IO a

Well, the new ST type makes just one change - in theory, it can be used with any suitable state type:

data ST s a

If we wanted to, we could even use ST to define IO:

type IO a = ST RealWorld a  -- RealWorld defined elsewhere

Let's add some extra definitions:

newSTRef     :: a -> ST s (STRef s a)      -- these are
readSTRef    :: STRef s a -> ST s a        --  usually
writeSTRef   :: STRef s a -> a -> ST s ()  -- primitive

newSTArray   :: Ix i => (i, i) -> ST s (STArray s i e) -- also usually primitive
              ⋮
unitST       :: a -> ST s a
bindST       :: ST s a -> (a -> ST s b) -> ST s b

instance Monad (ST s) where
    return = unitST
    (>>=)  = bindST

...that's right - this new ST type is also monadic!

So what's the big difference between the ST and IO types? In one word - runST:

runST :: (forall s . ST s a) -> a

Yes - it has a very unusual type. But that type allows you to run your stateful computation as if it was a pure definition!

The s type variable in ST is the type of the local state. Moreover, all the fun mutable stuff available for ST is quantified over s:

newSTRef  :: a -> ST s (STRef s a)
newArray_ :: Ix i => (i, i) -> ST s (STArray s i e)

So why does runST have such a funky type? Let's see what would happen if we wrote

makeSTRef :: a -> STRef s a
makeSTRef a = runST (newSTRef a)

This fails, because newSTRef a doesn't work for all state types s - it only works for the s from the return type STRef s a.

This is all sort of wacky, but the result is that you can only run an ST computation where the output type is functionally pure, and makes no references to the internal mutable state of the computation. In exchange for that, there's no access to I/O operations like writing to or reading from the console. The monadic ST type only has references, arrays, and such that are useful for performing pure computations.

Due to how similar IO and ST are internally, there's this function:

stToIO :: ST RealWorld a -> IO a

The difference is that ST uses the type system to forbid unsafe behavior like extracting mutable objects from their safe ST wrapping, while still allowing purely functional results to be computed with all the handy access to mutable references and arrays.

For example, here's a particularly convoluted way to compute the integer that comes after zero:

oneST :: ST s Integer -- note that this works correctly for any s
oneST = do var <- newSTRef 0
           modifySTRef var (+1)
           readSTRef var

one :: Integer
one = runST oneST
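A slightly more practical sketch of the same idea: summing a list with a mutable accumulator, entirely wrapped inside a pure function (the function name is our own):

```haskell
import Control.Monad.ST
import Data.STRef

-- A pure function that internally uses a mutable accumulator.
-- runST guarantees the mutation can't leak out.
sumST :: Num a => [a] -> a
sumST xs = runST $ do
    acc <- newSTRef 0
    mapM_ (\x -> modifySTRef acc (+ x)) xs
    readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10 :: Int])
```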


I/O actions as values

By this point you should understand why it's impossible to use I/O actions inside non-I/O (pure) functions: when needed, fully-applied functions are always evaluated - they aren't run like I/O actions. In addition, the prohibition of using I/O actions inside pure functions is maintained by the type system (as it usually is in Haskell).

But while pure code can't be used to run I/O actions, it can work with them as with any other value - I/O actions can be stored in data structures, passed as parameters, returned as results, collected in lists or in tuples. But what won't work is something like:

\ msg x -> case putStrLn msg of _ -> x

because it will be treated as a function, not an I/O action.

To run an I/O action, we need to make it part of main:

  • either directly:
main = action
  • or in the "action chain" of another action which is already a part of the main "chain":
main = ... >>= \ _ -> action >>= ...

Only then will the action be run. For example, in:

main = do let skip2chars = getChar >> getChar >> return ()
          putStr "Press two keys"
          skip2chars
          return ()

the non-let actions are run in the exact order in which they're written.

Example: a list of I/O actions

Let's try defining a list of I/O actions:

ioActions :: [IO ()]
ioActions = [(print "Hello!"),
             (putStr "just kidding"),
             (getChar >> return ())
            ]

I used additional parentheses around each action, although they aren't really required. If you still can't believe that these actions won't be run immediately, remember that in this expression:

\ b -> if b then (putStr "started...") else (putStrLn "completed.")

neither I/O action is run immediately either.

Well, now we want to execute some of these actions. No problem, just insert them into the main chain:

main = do head ioActions
          ioActions !! 1
          last ioActions

Looks strange, right? Really, any I/O action that you write in a do-expression (or use as a parameter for the (>>)/(>>=) operators) is an expression returning a result of type IO a for some type a. Typically, you use some function that has the type x -> y -> ... -> IO a and provide all the x, y, etc. parameters. But you're not limited to this standard scenario - don't forget that Haskell is a functional language and you're free to compute the action value itself in any possible way. Here we just extracted several actions from a list - no problem. The value can also be constructed on-the-fly, as we did in the previous example - that's also OK. Want to see this value passed as a parameter? Just look at the definition of when. Hey, we can buy, sell, and rent these I/O actions just like we can with any other values! For example, let's define a function that executes all the I/O actions in a list:

sequence_ :: [IO a] -> IO ()
sequence_ [] = return ()
sequence_ (x:xs) = do x
                      sequence_ xs

No smoke or mirrors - we just extract I/O actions from the list and insert them into a chain of I/O operations that should be performed one after another (in the same order that they occurred in the list) to obtain the end result of the entire sequence_ call.

With the help of sequence_, we can rewrite our last main action as:

main = sequence_ ioActions

Haskell's ability to work with I/O actions just like other values allows us to define control structures of arbitrary complexity. Try, for example, to define a control structure that repeats an action until it returns the False result:

while :: IO Bool -> IO ()
while action = ???

Most programming languages don't allow you to define new control structures at all, and those that do often require you to use a macro-expansion system. In Haskell, control structures are just ordinary functions anyone can write.
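For reference, here is one possible solution to the while exercise, together with a small demo loop (the IORef-based countdown is our addition, just to give the loop something to do):

```haskell
import Data.IORef

-- Repeat the action as long as it returns True.
while :: IO Bool -> IO ()
while action = do b <- action
                  if b then while action else return ()

main :: IO ()
main = do n <- newIORef (3 :: Int)
          while $ do i <- readIORef n
                     print i
                     writeIORef n (i - 1)
                     return (i > 1)   -- stop after printing 1
```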

Example: returning an I/O action as a result

How about returning an I/O action as the result of a function? Well, we've done this for each I/O definition - they all return I/O actions built up from other I/O actions (or themselves, if they're recursive). While we usually just execute them as part of a higher-level I/O definition, it's also possible to just collect them without actual execution:

main = do let a = sequence ioActions
              b = when True getChar
              c = getChar >> getChar >> return ()
          putStr "These let-bindings are not executed!"

These assigned I/O actions can be used as parameters to other definitions, or written to global variables, or processed in some other way, or just executed later, as we did in the example with skip2chars.

But how about returning a parameterized I/O action from an I/O definition? Here's a definition that returns the i'th byte from a file represented as a Handle:

readi h i = do hSeek h AbsoluteSeek i
               hGetChar h

So far so good. But how about a definition that returns the i'th byte of a file with a given name without reopening it each time?

readfilei :: String -> IO (Integer -> IO Char)
readfilei name = do h <- openFile name ReadMode
                    return (readi h)

As you can see, it's an I/O definition that opens a file and returns...an I/O action that will read the specified byte. But we can go further and include the readi body in readfilei:

readfilei name = do h <- openFile name ReadMode
                    let readi h i = do hSeek h AbsoluteSeek i
                                       hGetChar h
                    return (readi h)

That's a little better. But why do we add h as a parameter to readi if it can be obtained from the environment where readi is now defined? An even shorter version is this:

readfilei name = do h <- openFile name ReadMode
                    let readi i = do hSeek h AbsoluteSeek i
                                     hGetChar h
                    return readi

What have we done here? We've built a parameterized I/O action involving local names inside readfilei and returned it as the result. Now it can be used in the following way:

main = do myfile <- readfilei "test"
          a <- myfile 0
          b <- myfile 1
          print (a,b)

This way of using I/O actions is very typical for Haskell programs - you just construct one or more I/O actions that you need, with or without parameters, possibly involving the parameters that your "constructor" received, and return them to the caller. Then these I/O actions can be used in the rest of the program without any knowledge of how you actually implemented them. One thing this can be used for is to partially emulate the OOP (or more precisely, the ADT) programming paradigm.

Example: a memory allocator generator

As an example, one of my programs has a module which is a memory suballocator. It receives the address and size of a large memory block and returns two specialised I/O operations - one to allocate a subblock of a given size and the other to free the allocated subblock:

memoryAllocator :: Ptr a -> Int -> IO (Int -> IO (Ptr b),
                                       Ptr c -> IO ())

memoryAllocator buf size = do ......
                              let alloc size = do ...
                                                  ...
                                  free ptr = do ...
                                                ...
                              return (alloc, free)

How is this implemented? alloc and free work with references created inside the memoryAllocator definition. Because the creation of these references is part of the memoryAllocator I/O-action chain, a new independent set of references will be created for each memory block for which memoryAllocator is called:

memoryAllocator buf size =
   do start <- newIORef buf
      end <- newIORef (buf `plusPtr` size)
      ...

These two references are read and written in the alloc and free definitions (we'll implement a very simple memory allocator for this example):

      ...
      let alloc size = do addr <- readIORef start
                          writeIORef start (addr `plusPtr` size)
                          return addr

      let free ptr = do writeIORef start ptr

What we've defined here is just a pair of closures that use state available at the moment of their definition. As you can see, it's as easy as in any other functional language, despite Haskell's lack of direct support for impure routines.

The following example uses the operations returned by memoryAllocator, to simultaneously allocate/free blocks in two independent memory buffers:

main = do buf1 <- mallocBytes (2^16)
          buf2 <- mallocBytes (2^20)
          (alloc1, free1) <- memoryAllocator buf1 (2^16)
          (alloc2, free2) <- memoryAllocator buf2 (2^20)
          ptr11 <- alloc1 100
          ptr21 <- alloc2 1000
          free1 ptr11
          free2 ptr21
          ptr12 <- alloc1 100
          ptr22 <- alloc2 1000

Example: emulating OOP with record types

Let's implement the classical OOP example: drawing figures. There are figures of different types: circles, rectangles and so on. The task is to create a heterogeneous list of figures. All figures in this list should support the same set of operations: draw, move and so on. We will define these operations using I/O actions. Instead of a "class" let's define a structure from which all of the required operations can be accessed:

data Figure = Figure { draw :: IO (),
                       move :: Displacement -> IO ()
                     }

type Displacement = (Int, Int)  -- horizontal and vertical displacement in points

The constructor of each figure's type should just return a Figure record:

circle    :: Point -> Radius -> IO Figure
rectangle :: Point -> Point -> IO Figure

type Point = (Int, Int)  -- point coordinates
type Radius = Int        -- circle radius in points

We will "draw" figures by just printing their current parameters. Let's start with implementing simplified circle and rectangle constructors, without actual move support:

circle center radius = do
    let description = "  Circle at "++show center++" with radius "++show radius
    return $ Figure { draw = putStrLn description }

rectangle from to = do
    let description = "  Rectangle "++show from++"-"++show to
    return $ Figure { draw = putStrLn description }

As you see, each constructor just returns a fixed draw operation that prints parameters with which the concrete figure was created. Let's test it:

drawAll :: [Figure] -> IO ()
drawAll figures = do putStrLn "Drawing figures:"
                     mapM_ draw figures

main = do figures <- sequence [circle (10,10) 5,
                               circle (20,20) 3,
                               rectangle (10,10) (20,20),
                               rectangle (15,15) (40,40)]
          drawAll figures

Now let's define "full-featured" figures that can actually be moved around. In order to achieve this, we should provide each figure with a mutable variable that holds each figure's current screen location. The type of this variable will be IORef Point. This variable should be created in the figure constructor and manipulated in I/O operations (closures) enclosed in the Figure record:

circle center radius = do
    centerVar <- newIORef center

    let drawF = do center <- readIORef centerVar
                   putStrLn ("  Circle at "++show center
                             ++" with radius "++show radius)

    let moveF (addX,addY) = do (x,y) <- readIORef centerVar
                               writeIORef centerVar (x+addX, y+addY)

    return $ Figure { draw=drawF, move=moveF }

rectangle from to = do
    fromVar <- newIORef from
    toVar   <- newIORef to

    let drawF = do from <- readIORef fromVar
                   to   <- readIORef toVar
                   putStrLn ("  Rectangle "++show from++"-"++show to)

    let moveF (addX,addY) = do (fromX,fromY) <- readIORef fromVar
                               (toX,toY)     <- readIORef toVar
                               writeIORef fromVar (fromX+addX, fromY+addY)
                               writeIORef toVar   (toX+addX, toY+addY)

    return $ Figure { draw=drawF, move=moveF }

Now we can test the code which moves figures around:

main = do figures <- sequence [circle (10,10) 5,
                               rectangle (10,10) (20,20)]
          drawAll figures
          mapM_ (\fig -> move fig (10,10)) figures
          drawAll figures

It's important to realize that we are not limited to including only I/O actions in a record that's intended to simulate a C++/Java-style interface. The record can also include values, IORefs, pure functions - in short, any type of data. For example, we can easily add to the Figure interface fields for area and origin:

data Figure = Figure { draw :: IO (),
                       move :: Displacement -> IO (),
                       area :: Double,
                       origin :: IORef Point
                     }


Exception handling (under development)

Although Haskell provides a set of exception raising/handling features comparable to those in popular OOP languages (C++, Java, C#), this part of the language receives much less attention. This is for two reasons:

  • you just don't need to worry as much about them - most of the time it just works "behind the scenes".
  • Haskell, lacking OOP-style inheritance, doesn't allow the programmer to easily subclass exception types, therefore limiting the flexibility of exception handling.

Haskell programs can raise exceptions in ways that many other languages can't - pattern match failures, calls with invalid arguments (such as head []), and computations whose results depend on the special values undefined and error "...." all raise their own exceptions:

  • example 1:
main = print (f 2)

f 0 = "zero"
f 1 = "one"
  • example 2:
main = print (head [])
  • example 3:
main = print (1 + (error "Value that wasn't initialized or cannot be computed"))

This makes it all too easy to write programs that can fail at run time, even in code that performs no explicit I/O.
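When you do need to handle such failures, the standard Control.Exception module can catch them. A minimal sketch (the example values are our own):

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
    -- evaluate forces the pure expression inside IO, so try can
    -- catch the exception raised by head [].
    r <- try (evaluate (head ([] :: [Int])))
    case r of
      Left e  -> putStrLn ("caught: " ++ show (e :: SomeException))
      Right x -> print x
```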


Interfacing with C/C++ and foreign libraries (under development)

While Haskell is great for algorithm development, raw speed isn't its strongest point. We can combine the best of both worlds, though, by writing the speed-critical parts of a program in C and the rest in Haskell. We just need a way to call C routines from Haskell and vice versa, and to marshal data between the two languages.

We also need to interact with C to use Windows/Linux APIs, linking to various libraries and DLLs. Even interfacing with other languages often requires going through C, which acts as a "common denominator". Chapter 8 of the Haskell 2010 report provides a complete description of interfacing with C.

We will learn to use the FFI via a series of examples. These examples include C/C++ code, so a C/C++ compiler needs to be installed; the same is true if you need to include C/C++ code in your own program (a C/C++ compiler is not required when you just need to link with existing libraries providing APIs with the C calling convention). On Unix (and Mac OS?) systems, the system-wide default C/C++ compiler is typically used by the GHC installation. On Windows, no default compiler exists, so GHC is typically shipped with a C compiler, and you may find on the download page a GHC distribution bundled with C and C++ compilers. Alternatively, you may find and install a GCC/MinGW version compatible with your GHC installation.

If you need to make your C/C++ code as fast as possible, you may compile it with Intel's compilers instead of GCC. However, these compilers are not free. Moreover, on Windows, code compiled by Intel compilers may not interact correctly with GHC-compiled code unless one of them is put into a DLL (due to object file incompatibility).

More links:

C->Haskell
A lightweight tool for implementing access to C libraries from Haskell.
HSFFIG
The Haskell FFI Binding Modules Generator (HSFFIG) is a tool that takes a C library header (".h") and generates Haskell Foreign Function Interface import declarations for items (functions, structures, etc.) the header defines.
MissingPy
MissingPy is really two libraries in one. At its lowest level, MissingPy is a library designed to make it easy to call into Python from Haskell. It provides full support for interpreting arbitrary Python code, interfacing with a good part of the Python/C API, and handling Python objects. It also provides tools for converting between Python objects and their Haskell equivalents. Memory management is handled for you, and Python exceptions get mapped to Haskell Dynamic exceptions. At a higher level, MissingPy contains Haskell interfaces to some Python modules.
HsLua
A Haskell interface to the Lua scripting language

Foreign calls

We begin by learning how to call C routines from Haskell and Haskell definitions from C. The first example consists of three files:

main.hs:

{-# LANGUAGE ForeignFunctionInterface #-}

main = do print "Hello from main"
          c_routine

haskell_definition = print "Hello from haskell_definition"

foreign import ccall safe "prototypes.h"
    c_routine :: IO ()

foreign export ccall
    haskell_definition :: IO ()

vile.c:

#include <stdio.h>
#include "prototypes.h"

void c_routine (void)
{
  printf("Hello from c_routine\n");
  haskell_definition();
}

prototypes.h:

extern void c_routine (void);
extern void haskell_definition (void);

It may be compiled and linked in one step by ghc:

 ghc --make main.hs vile.c

Or, you may compile C module(s) separately and link in ".o" files (this may be preferable if you use make and don't want to recompile unchanged sources; ghc's --make option provides smart recompilation only for ".hs" files):

 ghc -c vile.c
 ghc --make main.hs vile.o

You may use gcc/g++ directly to compile your C/C++ files, but I recommend doing the linking via ghc because it adds a lot of libraries required for the execution of Haskell code. For the same reason, even if the main routine of your program is written in C/C++, I recommend calling it from the Haskell action main - otherwise you'll have to explicitly initialize and shut down the GHC RTS (run-time system).

We use the foreign import declaration to import foreign routines into Haskell, and foreign export to export Haskell definitions "outside" for imperative languages to use. Note that import creates a new Haskell symbol (from the external one), while export uses a Haskell symbol previously defined. Technically speaking, both types of declarations create a wrapper that converts the names and calling conventions from C to Haskell or vice versa.

All about the foreign declaration

The ccall specifier in foreign declarations means the use of the C (not C++ !) calling convention. This means that if you want to write the external routine in C++ (instead of C) you should add an extern "C" specification to its declaration - otherwise you'll get linking errors. Let's rewrite our first example to use C++ instead of C:

prototypes.h:

#ifdef __cplusplus
extern "C" {
#endif

extern void c_routine (void);
extern void haskell_definition (void);

#ifdef __cplusplus
}
#endif

Compile it via:

 ghc --make main.hs vile.cpp

where "vile.cpp" is just a renamed copy of "vile.c" from the first example. Note that the new "prototypes.h" is written so that it can be compiled both as C and as C++ code. When it's included from "vile.cpp", it's compiled as C++ code. When GHC compiles "main.hs" via the C compiler (enabled by the -fvia-C option), it also includes "prototypes.h", but compiles it in C mode. This is why you need to specify ".h" files in foreign declarations - depending on which Haskell compiler you use, these files may be included to check the consistency of the C and Haskell declarations.

The quoted part of the foreign declaration may also be used to give the import or export another name - for example,

foreign import ccall safe "prototypes.h CRoutine"
    c_routine :: IO ()

foreign export ccall "HaskellDefinition"
    haskell_definition :: IO ()

specifies that:

  • the C routine called CRoutine will become known as c_routine in Haskell,
  • while the Haskell definition haskell_definition will be known as HaskellDefinition in C.

It's required when the C name doesn't conform to Haskell naming requirements.

Although the Haskell FFI standard describes many other calling conventions in addition to ccall (e.g. cplusplus, jvm, net), current Haskell implementations support only ccall and stdcall. The latter, also called the "Pascal" calling convention, is used to interface with the WinAPI:

foreign import stdcall unsafe "windows.h SetFileApisToOEM"
  setFileApisToOEM :: IO ()

And finally, about the safe/unsafe specifier: a C routine imported with the unsafe keyword is called directly and the Haskell runtime is stopped while the C routine is executed (when there are several OS threads executing the Haskell program, only the current OS thread is delayed). This call doesn't allow recursively entering back into Haskell by calling any Haskell definition - the Haskell RTS is just not prepared for such an event. However, unsafe calls are as quick as calls in C. It's ideal for "momentary" calls that quickly return back to the caller.

When safe is specified, the C routine is called in a safe environment - the Haskell execution context is saved, so it's possible to call back to Haskell and, if the C call takes a long time, another OS thread may be started to execute Haskell code (of course, in threads other than the one that called the C code). This has its own price, though - around 1000 CPU ticks per call.
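As a concrete illustration of when unsafe is appropriate, here is a small, hedged example (the choice of abs is mine, not from the original text): it imports a momentary libc function that can never call back into Haskell, so the cheap unsafe call is fine.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CInt)

-- abs from the C standard library: it returns immediately and can never
-- call back into Haskell, so the cheap 'unsafe' call is appropriate.
-- Being a pure function, it is also imported without IO in its result type.
foreign import ccall unsafe "stdlib.h abs"
    c_abs :: CInt -> CInt

main :: IO ()
main = print (c_abs (-5))   -- 5
```

Had c_abs been able to invoke an exported Haskell definition, or to block for a long time, the import would need the safe specifier instead.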

You can read more about interaction between FFI calls and Haskell concurrency in [7].

=== Marshalling simple types ===

Calling by itself is relatively easy; the real problem of interfacing languages with different data models is passing data between them. In this case, there is no guarantee that Haskell's Int is represented in memory the same way as C's int, nor Haskell's Double the same as C's double and so on. While on some platforms they are the same and you can write throw-away programs relying on these, the goal of portability requires you to declare foreign imports and exports using special types described in the FFI standard, which are guaranteed to correspond to C types. These are:

import Foreign.C.Types (               -- equivalent to the following C type:
         CChar, CUChar,                --  char/unsigned char
         CShort, CUShort,              --  short/unsigned short
         CInt, CUInt, CLong, CULong,   --  int/unsigned/long/unsigned long
         CFloat, CDouble...)           --  float/double

Now we can typefully import and export to and from C and Haskell:

foreign import ccall unsafe "math.h"
    c_sin :: CDouble -> CDouble

Note that C routines which behave like pure functions (those whose results depend only on their arguments) are imported without IO in their return type. The const specifier in C is not reflected in Haskell types, so appropriate compiler checks are not performed.

All these numeric types are instances of the same classes as their Haskell cousins (Ord, Num, Show and so on), so you may perform calculations on these data directly. Alternatively, you may convert them to native Haskell types. It's very typical to write simple wrappers around foreign imports and exports just to provide interfaces having native Haskell types:

-- |Type-conversion wrapper around c_sin
sin :: Double -> Double
sin = realToFrac . c_sin . realToFrac

=== Memory management ===

=== Marshalling strings ===

import Foreign.C.String (   -- representation of strings in C
         CString,           -- = Ptr CChar
         CStringLen)        -- = (Ptr CChar, Int)
foreign import ccall unsafe "string.h"
    c_strlen :: CString -> IO CSize     -- CSize is defined in Foreign.C.Types and corresponds to size_t
-- |Type-conversion wrapper around c_strlen
strlen :: String -> Int
strlen = ....
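One possible way to fill in that wrapper - this is a hedged sketch, not the wiki's official answer: withCString marshals the String into a temporary NUL-terminated buffer, and (jumping ahead to a later section) unsafePerformIO gives strlen the pure type advertised above.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.String (CString, withCString)
import Foreign.C.Types (CSize)
import System.IO.Unsafe (unsafePerformIO)

foreign import ccall unsafe "string.h strlen"
    c_strlen :: CString -> IO CSize

-- withCString copies the String into a temporary NUL-terminated buffer,
-- calls the C routine, then frees the buffer. unsafePerformIO is arguably
-- justified here because c_strlen only reads that private buffer.
strlen :: String -> Int
strlen s = unsafePerformIO (withCString s (fmap fromIntegral . c_strlen))

main :: IO ()
main = print (strlen "hello")   -- 5
```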

=== Marshalling composite types ===

A C array may be manipulated in Haskell as StorableArray.

There is no built-in support for marshalling C structures and using C constants in Haskell. These are implemented in the c2hs preprocessor, though.
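Even without c2hs, the Foreign.Storable and Foreign.Marshal.Alloc modules provide the raw pieces a structure marshaller is built from. A minimal sketch (roundTrip is a made-up name for illustration):

```haskell
import Foreign.C.Types (CInt)
import Foreign.Marshal.Alloc (alloca)
import Foreign.Storable (peek, poke)

-- Write a value into temporary raw memory and read it back - the same
-- peek/poke machinery that marshalling a C struct field-by-field uses.
roundTrip :: CInt -> IO CInt
roundTrip x = alloca $ \p -> do
  poke p x    -- store the value at the raw pointer
  peek p      -- read it back

main :: IO ()
main = roundTrip 42 >>= print   -- 42
```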

Binary marshalling (serializing) of data structures of any complexity is implemented in the "binary" package (module Data.Binary).

=== Dynamic calls ===

=== DLLs ===

Because I don't have experience using DLLs, can someone contribute to this section? Ultimately, we need to consider the following tasks:

  • using DLLs of 3rd-party libraries (such as ziplib)
  • putting your own C code into a DLL to use in Haskell
  • putting Haskell code into a DLL which may be called from C code


== The dark side of the I/O monad ==

Unless you are a systems developer, postgraduate CS student, or have alternate (and eminent!) verifiable qualifications you should have no need whatsoever for this section - here is just one tiny example of what can go wrong if you don't know what you are doing. Look for other solutions!

=== unsafePerformIO ===

Do you remember this definition?

getChar >>= \c -> c

Let's try to "define" something with it:

getchar :: Char
getchar = getChar >>= \c -> c

get2chars :: String
get2chars = [a, b] where a = getchar
                         b = getchar

But what makes all of that so wrong? Besides getchar and get2chars not being I/O actions:

  1. Because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "unnecessary" calls to getchar and use one returned value twice;
  2. Even if it does make two calls, there is no way to determine which call should be performed first. Do you want to return the two characters in the order in which they were read, or in the opposite order? Nothing in the definition of get2chars answers this question.

Despite these problems, programmers coming from an imperative language background often look for a way to do this - disguise one or more I/O actions as a pure definition. Having seen procedural entities similar in appearance to:

void putchar(char c);

the thought of just writing:

putchar :: Char -> ()
putchar c = ...

would definitely be more appealing - for example, defining readContents as though it were a pure function:

readContents :: Filename -> String

will certainly simplify the code that uses it. However, those exact same problems are also lurking here:

  1. Attempts to read the contents of files with the same name can be factored (i.e. reduced to a single call) despite the fact that the file (or the current directory) can be changed between calls. Haskell considers all non-IO functions to be pure and feels free to merge multiple calls with the same parameters.
  2. This call is not inserted in a sequence of I/O actions all the way from main so the compiler doesn't know at what exact moment you want to execute this action. For example, if the file has one kind of contents at the beginning of the program and another at the end - which contents do you want to see? You have no idea when (or even if) this function is going to get invoked, because Haskell sees this function as pure and feels free to reorder the execution of any or all pure functions as needed.

So, implementing supposedly-pure functions that interact with the Real World is considered to be Bad Behavior. Nice programmers never do it ;-)

Nevertheless, there are (semi-official) ways to use I/O actions inside of pure functions - there is a (ahem) "special" definition that will (mis)use the Haskell implementation to run an I/O action. This particular (and dangerous) one is:

unsafePerformIO :: IO a -> a

Using unsafePerformIO, you could easily write "pure-looking functions" that actually do I/O inside. But don't do this without a real need, and remember to follow this rule:

  • the compiler doesn't know that you are cheating; it still considers each non-IO function to be a pure one. Therefore, all the usual optimization rules can (and will!) be applied to its execution.

So you must ensure that:

  • The result of each call depends only on its arguments.
  • You don't rely on side-effects of this function, which may be not executed if its results are not needed.
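A well-known idiom that lives by these rules is the "global variable" trick - this sketch uses my own hypothetical names counter and bump. The NOINLINE pragma is the crucial hedge: it stops the compiler from duplicating the unsafePerformIO call, which would otherwise create a fresh IORef at every use site.

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A global mutable counter. NOINLINE is essential: without it the compiler
-- could inline the unsafePerformIO call and allocate a new IORef at each
-- use site, silently breaking the "one global variable" illusion.
{-# NOINLINE counter #-}
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)

-- Increment the counter and return its new value.
bump :: IO Int
bump = do modifyIORef counter (+1)
          readIORef counter

main :: IO ()
main = do a <- bump
          b <- bump
          print (a, b)   -- (1,2)
```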

Let's investigate this problem more deeply. Function evaluation in Haskell is determined by a value's necessity - the language computes only the values that are really required to calculate the final result. But what does this mean with respect to the main action? To run it to completion, all the intermediate I/O actions that are included in main's chain need to be run. By using unsafePerformIO we call I/O actions outside of this chain. What guarantee do we have that they will be run at all? None. The only time they will be run is if running them is required to compute the overall function result (which in turn should be required to perform some action in the main chain). This is an example of Haskell's evaluation-by-need strategy. Now you should clearly see the difference:

  • An I/O action inside an I/O definition is guaranteed to execute as long as it is (directly or indirectly) inside the main chain - even when its result isn't used (because it will be run anyway). You directly specify the order of the action's execution inside the I/O definition.
  • An I/O action called by unsafePerformIO will be performed only if its result is really used. The evaluation order is not guaranteed and you should not rely on it (except when you're sure about whatever data dependencies may exist).

I should also say that inside the unsafePerformIO call you can organize a small internal chain of I/O actions with the help of the same binding operators and/or do syntactic sugar we've seen above. So here's how we'd rewrite our previous (pure!) definition of one using unsafePerformIO:

one :: Integer
one = unsafePerformIO $ do var <- newIORef 0
                           modifyIORef var (+1)
                           readIORef var

and in this case all the I/O actions in this chain will be run when the result of the unsafePerformIO call is needed.

=== inlinePerformIO ===

The internal code for inlinePerformIO is similar to that of unsafePerformIO, sometimes having an INLINE pragma. Semantically inlinePerformIO = unsafePerformIO, in as much as either of those has any semantics at all.

The difference of course is that inlinePerformIO is even less safe than unsafePerformIO. While ghc will try not to duplicate or common up different uses of unsafePerformIO, we aggressively inline inlinePerformIO. So you can really only use it where the I/O content is really properly pure, like reading from an immutable memory buffer (as in the case of ByteStrings). However things like allocating new buffers should not be done inside inlinePerformIO since that can easily be floated out and performed just once for the whole program, so you end up with many things sharing the same buffer, which would be bad.

So the rule of thumb is that I/O actions wrapped in unsafePerformIO have to be externally pure while with inlinePerformIO it has to be really, really pure or it'll all go horribly wrong.

That said, here's some really hairy code. This should frighten any pure functional programmer...

write :: Int -> (Ptr Word8 -> IO ()) -> Put ()
write !n body = Put $ \c buf@(Buffer fp o u l) ->
  if n <= l
    then write' c fp o u l
    else write' (flushOld c n fp o u) (newBuffer c n) 0 0 0

  where {-# NOINLINE write' #-}
        write' c !fp !o !u !l =
          -- warning: this is a tad hardcore
          inlinePerformIO
            (withForeignPtr fp
              (\p -> body $! (p `plusPtr` (o+u))))
          `seq` c () (Buffer fp o (u+n) (l-n))

It's used like this:

word8 w = write 1 (\p -> poke p w)

This does not adhere to my rule of thumb above. Don't ask exactly why we claim it's safe :-) (and if anyone really wants to know, ask Ross Paterson who did it first in the Builder monoid)

=== unsafeInterleaveIO ===

But there is an even stranger operation:

unsafeInterleaveIO :: IO a -> IO a

and here's one clear reason why:

{-# NOINLINE unsafeInterleaveIO #-}
unsafeInterleaveIO   :: IO a -> IO a
unsafeInterleaveIO a =  return (unsafePerformIO a)

So don't let that type signature fool you - unsafeInterleaveIO also has to be used carefully! It too sets up its unsuspecting parameter to run lazily, instead of running in the main action chain, with the only difference being the result of running the parameter can only be used by another I/O action. But this is of little benefit - ideally the parameter and the main action chain should have no other interactions with each other, otherwise things can get ugly!

At least you now have some appreciation as to why unsafeInterleaveIO is, well, unsafe! Just don't ask - to talk further is bound to cause grief and indignation. I won't say anything more about this ruffian I...use all the time (darn it!)

One can use unsafePerformIO (not unsafeInterleaveIO) to perform I/O operations not in some predefined order but by demand. For example, the following code:

do let c = unsafePerformIO getChar
   do_proc c

will perform the getChar I/O call only when the value of c is really required by the calling code, i.e. this call will be performed lazily, like any regular Haskell computation.

Now imagine the following code:

do let s = [unsafePerformIO getChar, unsafePerformIO getChar, unsafePerformIO getChar]
   do_proc s

The three characters inside this list will be computed on demand too, and this means that their values will depend on the order in which they are consumed. This is usually not what we want.

unsafeInterleaveIO solves this problem - it performs I/O only on demand but allows you to define the exact internal execution order for parts of your data structure.

  • unsafeInterleaveIO accepts an I/O action as a parameter and returns another I/O action as the result:
do str <- unsafeInterleaveIO myGetContents
                    ⋮
  • unsafeInterleaveIO doesn't perform any action immediately; it only creates a thunk of type a which, when eventually demanded, will perform the action specified as the parameter.
  • this action by itself may compute the whole value immediately...or use unsafeInterleaveIO again to defer calculation of some sub-components:
myGetContents = do
   c <- getChar
   s <- unsafeInterleaveIO myGetContents
   return (c:s)

This code will be executed only at the moment when the value of str is really demanded. At that moment, getChar will be performed (with its result assigned to c) and a new lazy-I/O closure will be created - for s. This new closure also contains a link to a myGetContents call.

The resulting list is then returned. It contains the Char that was just read and a link to another myGetContents call as a way to compute the rest of the list. Only at the moment when the next value in the list is required will this operation be performed again.

As a final result, the second Char in the list can never be read before the first one, but the reading of characters as a whole is lazy - bingo!


PS: of course, actual code should include EOF checking; also note that you can read multiple characters/records at each call:

myGetContents = do
   l <- replicateM 512 getChar
   s <- unsafeInterleaveIO myGetContents
   return (l++s)

and we can rewrite myGetContents to avoid needing to use unsafeInterleaveIO where it's called:

myGetContents = unsafeInterleaveIO $ do
   l <- replicateM 512 getChar
   s <- myGetContents
   return (l++s)
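The same pattern can be demonstrated without console input. In this self-contained variant (the name lazyNumbers is made up for this sketch), getChar is replaced by reads from a mutable counter, so you can observe that elements are fetched on demand yet still delivered in list order:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- Lazily produce an infinite list of values from a mutable counter.
-- Each element is fetched only when the consumer demands it, but the
-- elements are still delivered in the order they appear in the list.
lazyNumbers :: IORef Int -> IO [Int]
lazyNumbers ref = unsafeInterleaveIO $ do
  n <- readIORef ref
  writeIORef ref (n + 1)
  rest <- lazyNumbers ref   -- defer the tail, just like myGetContents
  return (n : rest)

main :: IO ()
main = do
  ref <- newIORef 0
  xs <- lazyNumbers ref
  print (take 5 xs)   -- [0,1,2,3,4]
```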


== Welcome to the machine: taking off the covers ==

A little disclaimer: I should say that I'm not describing here exactly what a monad is (I don't even completely understand it myself) and my explanation shows only one possible way to implement the I/O monad in Haskell. For example, the hbc compiler and the Hugs interpreter implement the I/O monad via continuations [9]. I also haven't said anything about exception handling, which is a natural part of the "monad" concept. You can read the All About Monads guide to learn more about these topics.

But there is some good news: the I/O monad understanding you've just acquired will work with any implementation and with many other monads.

=== The GHC implementation ===

newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))

It uses the State# RealWorld type and the unboxed tuple type (# ... #) for optimization. It also uses an IO data constructor. Nevertheless, there are no significant changes from the standpoint of our explanation.
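To see how such a definition threads the world token, here's a toy model. None of these names are GHC's own: World is a dummy stand-in for State# RealWorld, and the ordinary pair mirrors the unboxed tuple (# State# RealWorld, a #).

```haskell
-- A toy model of GHC's IO type, for illustration only.
data World = World   -- stand-in for GHC's State# RealWorld token

newtype MyIO a = MyIO (World -> (World, a))

-- 'return': pass the world through unchanged.
returnIO :: a -> MyIO a
returnIO x = MyIO (\w -> (w, x))

-- '>>=': thread the world from the first action into the second,
-- which is what forces them to run in sequence.
bindIO :: MyIO a -> (a -> MyIO b) -> MyIO b
bindIO (MyIO m) k = MyIO (\w -> let (w', x) = m w
                                    MyIO m' = k x
                                in  m' w')

-- What the runtime conceptually does with 'main'.
runMyIO :: MyIO a -> a
runMyIO (MyIO m) = snd (m World)

main :: IO ()
main = print (runMyIO (returnIO 1 `bindIO` \x -> returnIO (x + 1)))   -- 2
```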

Of course, other compilers e.g. yhc/nhc (jhc, too?) define IO in other ways.

=== The Yhc/nhc98 implementation ===

data World = World
newtype IO a = IO (World -> Either IOError a)

This implementation makes the World disappear somewhat[10], and returns Either a result of type a, or if an error occurs then IOError. The lack of the World on the right-hand side of the function can only be done because the compiler knows special things about the IO type, and won't overoptimise it.


== Further reading ==

[1] This manual is largely based on Simon Peyton Jones's paper Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell. I hope that my manual improves his original explanation of the Haskell I/O system and brings it closer to the point of view of new Haskell programmers. But if you need to learn about concurrency, exceptions and the FFI in Haskell/GHC, the original paper is the best source of information.

[2] You can find more information about concurrency, the FFI and STM at the GHC/Concurrency#Starting points page.

[3] The Arrays page contains exhaustive explanations about using mutable arrays.

[4] Look also at the Using monads page, which contains tutorials and papers really describing these mysterious monads.

[5] An explanation of the basic monad functions, with examples, can be found in the reference guide A tour of the Haskell Monad functions, by Henk-Jan van Tuyl.

[6] Official FFI specifications can be found on the page The Haskell 98 Foreign Function Interface 1.0: An Addendum to the Haskell 98 Report

[7] Using the FFI in multithreaded programs is described in Extending the Haskell Foreign Function Interface with Concurrency

[8] This particular behaviour is not a requirement of Haskell 2010, so the operation of seq may differ between various Haskell implementations - if you're not sure, staying within the I/O monad is the safest option.

[9] How to Declare an Imperative by Phil Wadler provides an explanation of how this can be done.

Do you have more questions? Ask in the haskell-cafe mailing list.

== To-do list ==

If you are interested in adding more information to this manual, please add your questions/topics here.

Topics:

  • fixIO and mdo
  • Q monad

Questions:

  • split (>>=)/(>>)/return section and do section, more examples of using binding operators
  • IORef detailed explanation (==const*), usage examples, syntax sugar, unboxed refs
  • explanation of how the actual data "in" mutable references are inside GHC's RealWorld, rather than inside the references themselves (IORef, IOArray & co.)
  • control structures developing - much more examples
  • unsafePerformIO usage examples: global variable, ByteString, other examples
  • how unsafeInterleaveIO can be seen as a kind of concurrency, and therefore isn't so unsafe (unlike unsafeInterleaveST which really is unsafe)
  • discussion about different senses of safe/unsafe (like breaking equational reasoning vs. invoking undefined behaviour (so can corrupt the run-time system))
  • actual code used by GHC - how to write low-level definitions based on example of how newIORef is implemented

This manual is collective work, so feel free to add more information to it yourself. The final goal is to collectively develop a comprehensive manual for using the I/O monad.