Haskell IO for Imperative Programmers
My goal is to help experienced procedural programmers make the transition to Haskell, not to provide a tour of language features. I don’t want to show you how to do imperative programming in Haskell; I want to show you how not to.
Introduction
There is a widespread misconception that basic input and output is hard to do in Haskell and that therefore the language is not suitable for writing practical programs. I’ve even heard it stated that one must master category theory in order to so much as write to standard output. Nothing could be further from the truth. Witness
main = putStrLn "Hello, world!"
About the only way that could be easier would be if there were a “Hello, world!”
primitive built into the language, and that would be silly. Stop and think about the basic premise — the whole point of running a program is to have side-effects — no one in their right mind is going to design a language that makes that hard to do. Who here thinks Erik Meijer is a nincompoop? Show of hands? It’s not like he’s brilliant as long as he’s working on VB and then turns into a moron the minute he starts thinking about Haskell.
Before proceeding please allow yourself to consider the possibility that I/O in Haskell might not be difficult. That said, I will concede that it is also not obvious, but for reasons that don’t have anything to do with some obtuse mathematical formalism. It is simply the result of a fundamental tension in functional languages:
- On the one hand you must have side-effects somewhere in order to justify all the energy you’re consuming,
- but on the other hand you don’t want side effects everywhere
mischief x = let y = 141421 * x
                 _ = putStrLn "x multiplied by 141421"
             in y
- because they make it harder to write correct and composable programs.
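To see the point concretely, here is a runnable sketch of the mischief function above (type signatures are mine). Because nothing ever demands the `_` binding, the putStrLn is just an unused value and the message never prints:

```haskell
mischief :: Int -> Int
mischief x = let y = 141421 * x
                 _ = putStrLn "x multiplied by 141421"  -- an IO value, never performed
             in y

main :: IO ()
main = print (mischief 2)  -- prints 282842; the message never appears
```

The side effect is there in the source, but it is inert data: a hint of the "I/O as a value" idea developed below.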
The way to resolve that tension is to think differently. Instead of thinking about I/O as something you do, you need to shift gears and think about I/O as a result that you achieve. The programming task then becomes one of describing the program’s interaction with the outside world. (That turns out to be a much more powerful idea, the full consequences of which are beyond the scope of this post.) The catch, of course, is that “thinking differently” is one of those things that is much easier to say than to do.
“Hello, world!” Reloaded
Returning to “Hello, world!”
— it’s cute, but it’s not a practical example. It doesn’t do any useful work. What is “useful work”? Your most basic utility program (say, to analyze a log file) needs to read some input line-by-line, process it, and write some output. Anything less than that is a waste of time. Coming from an imperative background you’re used to writing programs with this basic template
while (<>) { &processIt($_); }
So you naturally expect to write something in Haskell that looks like this
main = do
  line <- getLine
  processIt line
  main
This will work (mostly), but you’re already in trouble. You just don’t know it yet. There are a couple of problems here, so let’s examine the thinking behind it:
- First, you’ve written a recursive function. You did that because you know Haskell doesn’t have any looping constructs, which is annoying but no big deal so you get over it and get on with your work.
- Second, there is a very real muddying of concerns. There’s a routine that does input that calls a processing routine that, by implication, if the program is to actually do anything, must be calling an output routine. These are separate issues that should be handled separately, but it’s not clear at this stage how to accomplish that.
Let’s talk about the recursion. Recursion can be used, among other things, to express an iteration. I have often stated that you simply don’t write iterative constructs in Haskell. That’s not literally true; I’ve written almost as many as I can count on one hand, but it’s certainly not something you do every day. Not the way every third statement is a for or while in an imperative language. It’s all handled by library functions. If you define a recursive data structure you will provide a collection of routines that iterate over it and no one else ever needs to know the details. If you’re writing an “iterative” program that doesn’t involve an explicit data structure, odds are it can be expressed as an operation over a list
fac n = product [1 .. n]
Writing Haskell you want to learn to think in terms of operations on aggregates: “do this to that collection.” If you’re sweating the details then you’re probably thinking procedurally. That’s why if you write an iteration it should be taken as a warning sign that you’re not using the language appropriately. Stop and think.
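As a sketch of that mindset shift, here is one computation written both ways (the names are mine, chosen for illustration): first as the explicit recursion an imperative habit would produce, then as an operation on a collection built from library functions.

```haskell
-- Explicit recursion: the imperative habit, translated directly
sumSquaresRec :: [Int] -> Int
sumSquaresRec []     = 0
sumSquaresRec (x:xs) = x * x + sumSquaresRec xs

-- "Do this to that collection": the idiomatic version
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)
```

Both compute the same result, but the second says what is wanted and lets the library worry about how to walk the list.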
As for that separation of concerns, let’s expand the example a little more and add type signatures
main :: IO ()
main = do
  line <- getLine
  processIt line
  main

processIt :: String -> IO ()
processIt s = do
  print (length s)
At this point, the question you should be asking yourself is: does processIt also have to perform I/O?
Yes, main has to perform I/O; that’s required by the current Haskell standard. But for all other definitions, you should watch out for something I call “IO creep”. In this example:
- print lives inside IO,
- processIt uses print,
- thus processIt also lives inside IO.
There’s no way around it; the type signatures just make it obvious. This is the type system’s way of trying to sound warning bells. Some things need to happen inside IO, so do them there, but don’t do anything else. Haskell provides the tools you need to factor your code appropriately and avoid this sort of muddying of concerns. If you see IO starting to creep throughout your program, that’s another sign that you’re thinking imperatively. Stop and refactor.
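One way to halt the creep, sketched here (summarize is a name I’ve made up), is to keep the computation pure and confine IO to a thin wrapper:

```haskell
-- Pure core: easy to test, reuse, and compose
summarize :: String -> String
summarize s = show (length s)

-- Thin IO shell: the only part that lives inside IO
processIt :: String -> IO ()
processIt = putStrLn . summarize
```

Now only the wrapper carries the IO type; everything interesting is an ordinary function.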
So where are we? We wrote the obvious program to try to actually do something useful with this !@#$% language and instead of saying “see, it’s just as easy as Perl” I told you that it contained two big flashing red lights warning us we’re not using the language correctly. Disgusting, isn’t it?
It’s important to realize at this point that the problem isn't an annoying facet of some vague field of mathematics - the problem is imperative-think. We wrote an imperative program in Haskell. See, it is possible to write imperative programs in Haskell! Easy, even. Now that you know you can, don’t.
But it does no good for me to say, “stop thinking imperatively.” Habits of thought die hard. I have to give you something to replace it with. Haskell provides the means for you to organize your programs in new and exciting ways that really bear very little resemblance to what you’re used to doing in imperative languages. But they can have practical benefits: for example, if you can keep the majority of your Haskell code free of I/O, there's the real possibility of running much of that code in parallel, quickly and easily!
Are you interested now?
Minimising your I/O code is just one of a multitude of ways to arrange your programs and libraries effectively; the rest could fill a book. For right now, though, one of the things that makes I/O in Haskell seem difficult at first glance is a failure to appreciate the virtues of laziness.
Laziness
I’m sure you’ve seen some attempts to explain laziness by way of apparently contrived examples involving infinite lists, but if you’re coming to Haskell from an imperative background you don’t get it. Trust me, you don’t. It is a simple idea with far-reaching implications and it is one of the defining features of the language. After learning Haskell, I’d rather gnaw off my own arm than try to live without laziness. So, what is it?
First let’s deal with the notion of strictness:
- A language is strict if it evaluates the arguments to a function before it evaluates the function.
- A non-strict language doesn’t. For example, define
f x y = x * y
In a strict language a call to f would be evaluated as
f (1+2) (3+4) = f 3 7 = 3 * 7 = 21
In a non-strict language that same call would be evaluated as
f (1+2) (3+4) = (1+2) * (3+4) = 3 * 7 = 21
It looks like a small difference. To say that Haskell is lazy means that it is aggressively non-strict — it never evaluates anything before it absolutely needs to (unless you tell it otherwise, but that’s another story). This changes the way you code.
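A minimal demonstration: in Haskell an argument that is never used is never evaluated, so the program below runs happily even though one of its arguments would blow up if touched (konst is my stand-in for the standard const):

```haskell
-- konst never looks at its second argument, so the error is never forced
konst :: a -> b -> a
konst x _ = x

main :: IO ()
main = print (konst 42 (error "never evaluated"))  -- prints 42
```

In a strict language the call would crash before konst ever ran.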
Other languages have the occasional non-strict construct. Short-circuit booleans and ternary operators are very common and they let you write code such as
if (i < x.size() && x[i] > 0) {
which would otherwise explode. It’s a lot more convenient than nested ifs. Many languages let you create your own non-strict constructs, generally with macros
#define first(x, a, b) if (a > 0) x = a; else x = b

first(x, a++, b++);
In this horribly contrived example, b may never be incremented, depending on the value of a. That is, presumably, the programmer’s intent. There is also the obvious bug that a may well get incremented twice. Presumably, not the programmer’s intent. The example is deliberately sloppy to leave the door open for all kinds of other bugs the details of which aren’t really relevant.
The point is that, in an imperative language, this kind of thing is very dangerous. This is why the use of macros is generally discouraged. The “solution” here is to use an inline function (or Lisp). The inline would eliminate all the bugs, but it would also enforce strict semantics — that is, it will fail to do the thing we had hoped to achieve by using a macro in the first place.
There are a couple of points here:
- One is that while the notion of a non-strict operation may not seem mind-bending, coming from an imperative language it’s something you haven’t used much and therefore haven’t really internalized.
- The other is the relationship between laziness and side-effects. Clearly, they’re two great tastes that don’t go well together, like macaroni and blue-cheese dressing.
Haskell can’t have side effects because the laziness ensures that you don’t know what’s happening or when. That’s a great thing! Wonderful! There’s no reason to get bogged down in those sorts of low-level details unless you absolutely need to.
I/O
What does this have to do with I/O? While Hoogle-ing to find the getLine routine used in our original program you may have run across the functions getContents and lines:
- getContents “returns all user input as a single string”;
- lines “breaks a string up into a list of strings at newline characters”.
It’s clear that you could easily get a list of all the lines in the input using
do { s <- getContents ; return $ lines s }
It’s the equivalent of
@lines = <>;
But no way do you want to do that. Slurp in and split an entire file, not knowing how big it is? That’s crazy talk, man!
Or is it?
It’s probably clear from context that, no, it’s not. lines returns the lines from its input on demand. getContents reads from the input on demand. Every piece of the puzzle is doing its job as lazily as it possibly can. Even though you’ve written code that looks like it’s constructing a huge list of strings in memory, that’s not what’s happening. You’ve actually written instructions for generating a sequence of strings as you need them.
So rather than writing code that interleaves input with processing with output as you’re used to, you can write code that does input and then does processing and then does output, and all the messy interleaving is handled by the lazy semantics of the language. It’s as if you’ve defined a collection of cooperating processes, producers and consumers, all running in lockstep. The solution is much more modular, and your program will still run in constant space and linear time. It’s the closest thing to magic that I know of.
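Here is a small runnable sketch of that pipeline style (tag is a name I’ve chosen): it numbers every line of its input and, thanks to laziness, reads, transforms, and writes in lockstep, in constant space, no matter how large the input is.

```haskell
-- Pure per-line transformation: pairs each line with its number
tag :: Int -> String -> String
tag n l = show n ++ ": " ++ l

-- Looks like "read everything, transform everything, write everything",
-- but actually streams one line at a time
main :: IO ()
main = do
  s <- getContents
  putStr (unlines (zipWith tag [1 :: Int ..] (lines s)))
```

Note that tag knows nothing about I/O, and main knows nothing about numbering.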
That’s what laziness buys you. Laziness enables a new kind of modularity that you may not have seen before. With laziness, you can easily separate generation from processing. When generation and processing are less tightly coupled both are freer to evolve. Let’s evolve the example.
main = do
  s <- getContents
  let r = map processIt (lines s)
  putStr (unlines r)

processIt s = show (length s)
We’ve separated processIt from IO. Now, because functions are first-class values
main = interact (unlines . map processIt . lines)
interact f = do
  s <- getContents
  putStr (f s)
And because interact is actually a library function
main = interact (unlines . map processIt . lines)
or
main = io (map processIt)
io f = interact (unlines . f . lines)
That’s an idiomatic Haskell program for doing simple I/O. To write these programs you don’t have to know a thing about some inexplicable mathematical concept. You do have to rethink your overall strategy for solving the problem. No more can you think about reading this and writing that, and all of that busy, busy “doing stuff.” Now you need to think in terms of describing the overall result that you want. That’s hard. Or, rather, it requires some practice. Once you’ve succeeded in thinking clearly in terms of the result you want to achieve without letting yourself get bogged down in the details of how to go about it, step by step, then the mechanics of making it happen are, frankly, trivial.
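Putting the pieces together, here is the whole program, runnable as-is, with the processIt from the evolved example filled in:

```haskell
-- Pure line transformer, completely ignorant of I/O
processIt :: String -> String
processIt s = show (length s)

-- Reusable plumbing: lift a lines-to-lines function into a full program
io :: ([String] -> [String]) -> IO ()
io f = interact (unlines . f . lines)

main :: IO ()
main = io (map processIt)
```

Swap in any other String -> String function for processIt and the plumbing stays untouched.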
More
I’m anticipating two possible reactions to the above code:
- You may think that it looks so simple that you have a hard time believing that it actually does anything. It’s deceptive, because the end result is actually easier than the corresponding code in an imperative language. Probably not what you were expecting when you came here. If this is the case, then reread the last few code blocks until you can see the flow.
- Alternately, you may think that this still counts as trivial, not really significantly different from “Hello, world”, and you want your money back. In that case, let’s crank up the heat. The rest of this will be more examples of increasing complexity and decreasing explanation.
There’s more to life than standard input. Can’t we let the user specify input files on the command-line?
import System.Environment

main = do
  args <- getArgs
  mapM_ (interactFile (unlines . map processIt . lines)) args

interactFile f fileName = do
  s <- readFile fileName
  putStr (f s)
That processes each file individually, but maybe we’d rather do something more Perl-esque and treat all the command-line arguments as one big virtual file.
import System.Environment

main = do
  args <- getArgs
  interactFiles (unlines . map processIt . lines) args

interactFiles f fileNames = do
  ss <- mapM readFile fileNames
  putStr $ f (concat ss)
This is a hard program to write. It’s hard because never in a million years would you dream of opening an arbitrary list of files, all at once, loading the entire contents of every file into memory, glomming it all into one huge freaking string, breaking that string into an equally huge list of lines (a list, not even an array!), doing something to each element of that list to produce yet another giant list, concatenating all those output lines into still another massive string, and then, finally, writing that string out. That’s insanity! Of course, that’s not at all what’s happening — it’s an abstraction made possible by laziness. As for actually doing the I/O, there’s readFile and putStr and …, well, that’s it. That’s all you need to know. Mysterious mathematical musings be damned.
Don’t let it bother you. Nobody ever makes their first jump.
But aren’t both these examples just so much hooey? There’s no error handling, and you never know what a user is going to type in. I suppose we could substitute a safe version for readFile

import Control.Exception (IOException, catch)

safeRead :: String -> IO String
safeRead s = catch (readFile s) handler
  where
    handler :: IOException -> IO String
    handler _ = return ""
Contrasting this with Java is left as an exercise for the reader.
Maybe line-by-line processing isn’t your thing. We could, just as an example, add up all the rows in our input
main = interact $ show . sum . map read . lines
Even by column.
main = interact $ show . foldl addRows (repeat 0) . map row . lines
addRows = zipWith (+)
row = map read . words
That may make you a little cross-eyed. Sorry. I’m not going to dissect it, but I do want to call your attention to the fact that it is all about the what and not at all about the how. This is an I/O-intensive program, but you wouldn’t know it to look at it because there’s only one word — interact — that even looks like it has anything to do with I/O. You could take the argument to interact and drop it into either of the earlier programs as an argument to interactFile or interactFiles and get all the benefits of their approaches to I/O, without touching the processing code. That’s what decoupling is all about.
So far, so good. It’s easy to see how you can do simple file processing in Haskell. This is adequate for, say, reading and summarizing log files. What about something more interesting, such as configuration files? A configuration file is likely to have some sort of include mechanism.
In an imperative language

sub processFile {
    open FH, shift;
    while (<FH>) {
        if (&isInclude($_)) {
            &processFile(&includeFile($_));
        } else {
            # other stuff
        }
    }
    close FH;
}
vs.
readWithIncludes :: String -> IO [String]
readWithIncludes f = do
  s <- readFile f
  ss <- mapM expandIncludes (lines s)
  return (concat ss)

expandIncludes :: String -> IO [String]
expandIncludes s =
  if isInclude s
    then readWithIncludes (includeFile s)
    else return [s]
Two mutually recursive I/O functions… you’ll note that this isn’t a simple use of recursion to express an iteration, as even the imperative solution is naturally expressed recursively. What’s much more important is that these functions are only doing I/O. Processing is still “somewhere else,” not buried three levels deep inside an I/O routine. This means that our new include file mechanism can be freely mixed with all the other techniques we’ve been developing with no loss of modularity.
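To make the pair above runnable you only need definitions for isInclude and includeFile. The include syntax here is hypothetical (a line of the form “include somefile”), chosen just so the sketch compiles:

```haskell
import Data.List (isPrefixOf)

-- Hypothetical include syntax: a line reading "include somefile"
isInclude :: String -> Bool
isInclude = ("include " `isPrefixOf`)

-- Extract the file name from an include line
includeFile :: String -> String
includeFile = drop (length "include ")
```

Any other recognizer/extractor pair would slot in without touching readWithIncludes or expandIncludes.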
Monads are Frosting
Alert readers may have noticed I've not mentioned monads: now I have. But I only mention them because all the other tutorials do and so you’re expecting it. That said, forget about them; monads aren't the problem.
The thing that makes I/O in Haskell seem hard is learning to think in terms of a pure, lazy, functional language. If you can unlearn some old habits of thought it can be even easier than what you’re used to doing in imperative languages. I realize that’s like saying you can catch a bird if you put salt on its tail so I’ve tried to provide some help in that direction.
The secret to success is to start thinking about I/O as a result that you achieve rather than something that you do. Monadic I/O is a formalization of that idea. Yes, monads are not the easiest thing in the world to wrap your head around. We didn’t cover them because they obscure the real challenges of just learning to think functionally. But with experience, you can use them to make your life easier. They add more power, letting you think about actions as tangible objects that you can manipulate and even letting you define your own control structures. So being able to grok monads will help you to be a more effective Haskell programmer.
But that's for the future! Right now, you don’t need all of that power in order to just get started. You can write practical programs and do useful work by using just the techniques covered above. Then (and maybe with a dash of regular expressions) you can start to rock and roll.