Haskell for multicores
GHC Haskell comes with a large set of libraries and tools for building programs that exploit multicore architectures.
This site attempts to document all our available information on exploiting such hardware with Haskell.
Throughout, we focus on exploiting shared-memory SMP systems, with the aim of lowering absolute wall-clock times. The machines we target are typical 2- to 32-core desktop multicore machines, on which vanilla GHC will run.
Introduction
To get an idea of what we aim to do -- reduce running times by exploiting more cores -- here's a naive "hello, world" of parallel programs: parallel, naive fib. It simply tells us whether or not the SMP runtime is working:
import Control.Parallel
import Control.Monad
import Text.Printf

cutoff :: Int
cutoff = 35

fib' :: Int -> Integer
fib' 0 = 0
fib' 1 = 1
fib' n = fib' (n-1) + fib' (n-2)

fib :: Int -> Integer
fib n | n < cutoff = fib' n
      | otherwise  = r `par` (l `pseq` l + r)
  where
    l = fib (n-1)
    r = fib (n-2)

main = forM_ [0..45] $ \i ->
    printf "n=%d => %d\n" i (fib i)
We compile it with the `-threaded` flag:
$ ghc -O2 -threaded --make fib.hs
[1 of 1] Compiling Main             ( fib.hs, fib.o )
Linking fib ...
And run it with:
+RTS -Nx
where 'x' is the number of cores you have (or a slightly higher value). Here, on a quad core linux system:
./fib +RTS -N4 76.81s user 0.75s system 351% cpu 22.059 total
So we were able to use about 3.5 of the 4 available cores. This is typical: most problems don't scale perfectly, and we must trade off the benefit of spreading work across more cores against the overhead of communication.
Examples
Further reading
- GHC's multiprocessor guide
- runtime options to enable SMP parallelism
- API documentation for parallel strategies
- Real World Haskell: Concurrent and Parallel Programming
- Blog posts about parallelism
Thread primitives
Control.Concurrent
For explicit concurrency and/or parallelism, Haskell implementations have a lightweight thread system that schedules logical threads on the available operating system threads. These light and cheap threads can be created with forkIO. (We won't discuss full OS threads, which are created via forkOS, as they have significantly higher overhead and are only useful in a few situations, such as certain FFI scenarios.)
forkIO :: IO () -> IO ThreadId
Let's take a simple Haskell application that hashes two files and prints the result:
import Data.Digest.Pure.MD5 (md5)
import qualified Data.ByteString.Lazy as L
import System.Environment (getArgs)

main = do
    [fileA, fileB] <- getArgs
    hashAndPrint fileA
    hashAndPrint fileB

hashAndPrint f = L.readFile f >>= return . md5 >>= \h -> putStrLn (f ++ ": " ++ show h)
This is a straightforward solution that hashes the files one at a time, printing the resulting hash to the screen. What if we wanted to use more than one processor to hash the files in parallel?
One solution is to start a new thread, hash in parallel, and print the answers as they are computed:
import Control.Concurrent (forkIO)
import Data.Digest.Pure.MD5 (md5)
import qualified Data.ByteString.Lazy as L
import System.Environment (getArgs)

main = do
    [fileA, fileB] <- getArgs
    forkIO $ hashAndPrint fileA
    hashAndPrint fileB

hashAndPrint f = L.readFile f >>= return . md5 >>= \h -> putStrLn (f ++ ": " ++ show h)
Now we have a rough program with a good performance boost, which is expected given that the computation is trivially parallel.
But wait! You say there is a bug? Two, actually. The first is that if the main thread finishes hashing fileB first, the program will exit before the child thread is done with fileA. The second is the potential for garbled output, since two threads write to stdout at once. Both of these problems can be solved with some inter-thread communication; we'll pick this example up again in the MVar section.
Further reading
- A concurrent port scanner
- Research papers on concurrency in Haskell
- Research papers on parallel Haskell
Synchronisation with locks
Locking mutable variables (MVars) can be used to great effect, not only for communicating values (such as the resulting string for a single thread to print), but their locking behaviour is also commonly used as a signalling mechanism.
An MVar is a polymorphic mutable variable that might or might not contain a value at any given time. Common functions include:
newMVar :: a -> IO (MVar a)
newEmptyMVar :: IO (MVar a)
takeMVar :: MVar a -> IO a
putMVar :: MVar a -> a -> IO ()
isEmptyMVar :: MVar a -> IO Bool
While these are fairly self-explanatory, note that takeMVar blocks until the MVar is non-empty, and putMVar blocks until the MVar is empty. Taking an MVar leaves it empty once the value has been returned.
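To illustrate the signalling use before returning to the hashing program, here is a minimal sketch (not part of the original example; the variable name done and the messages are purely illustrative) in which an empty MVar acts as a "done" flag that the main thread blocks on:

import Control.Concurrent

main :: IO ()
main = do
    done <- newEmptyMVar              -- starts empty: nothing to take yet
    forkIO $ do
        putStrLn "child: working"
        putMVar done ()               -- signal completion
    takeMVar done                     -- blocks until the child has signalled
    putStrLn "main: child finished"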
In the forkIO example we developed a program to hash two files in parallel, and ended with a couple of small bugs: the program could terminate prematurely (the main thread would exit when it was done), and the two threads could clash over their use of stdout.
Let's now generalise the example to operate on any number of files, block until all the hashing is complete, and print all the results from a single thread so that no stdout garbling occurs.
{-# LANGUAGE BangPatterns #-}

import Data.Digest.Pure.MD5
import qualified Data.ByteString.Lazy as L
import System.Environment
import Control.Concurrent
import Control.Monad (replicateM_)

main = do
    files <- getArgs
    str <- newEmptyMVar
    mapM_ (forkIO . hashAndPrint str) files
    printNrResults (length files) str

printNrResults i var = replicateM_ i (takeMVar var >>= putStrLn)

hashAndPrint str f = do
    bs <- L.readFile f
    let !h = show $ md5 bs
    putMVar str (f ++ ": " ++ h)
We define a new variable, str, as an empty MVar. After hashing, each worker reports its result with putMVar; remember that this function blocks while the MVar is already full, so no hashes are dropped. Similarly, printNrResults uses takeMVar, which blocks until the MVar is full, i.e. until the next file has been hashed.
Note how the hash is evaluated (via the bang pattern) before the putMVar call. If the argument were an unevaluated thunk, printNrResults would have to evaluate it before printing the result, and our parallel effort would have been wasted.
Knowing that the str MVar will be filled 'length files' times, we can let the main thread exit after printing that number of results, thus terminating the program.
$ ghc exMVar.hs -o exMVar-threaded --make -O2 -threaded

$ time ./exMVar-threaded +RTS -N2 -RTS 2GB 2GB 2GB 2GB
2GB: b8f1f1faa6dda5426abffb3a7811c1fb
2GB: b8f1f1faa6dda5426abffb3a7811c1fb
2GB: b8f1f1faa6dda5426abffb3a7811c1fb
2GB: b8f1f1faa6dda5426abffb3a7811c1fb

real 0m40.524s

$ time ./exMVar-threaded +RTS -N1 -RTS 2GB 2GB 2GB 2GB
2GB: b8f1f1faa6dda5426abffb3a7811c1fb
2GB: b8f1f1faa6dda5426abffb3a7811c1fb
2GB: b8f1f1faa6dda5426abffb3a7811c1fb
2GB: b8f1f1faa6dda5426abffb3a7811c1fb

real 1m8.170s
Further reading
Message passing channels
For streaming data it is hard to beat the performance of channels. After declaring a channel (newChan), you can pipe data between threads (writeChan, readChan) and tee data to separate readers (dupChan). The flexibility of channels makes them useful for a wide range of communications.
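The main example below only uses writeChan and readChan, so here is a minimal sketch of dupChan on its own (the channel and variable names are purely illustrative, not from the original): after duplicating a channel, anything written from then on is seen by readers of both copies.

import Control.Concurrent
import Control.Monad (replicateM_)

main :: IO ()
main = do
    chan  <- newChan
    chan' <- dupChan chan              -- an independent read position
    done  <- newEmptyMVar
    forkIO $ readChan chan  >>= putStrLn . ("reader 1: " ++) >> putMVar done ()
    forkIO $ readChan chan' >>= putStrLn . ("reader 2: " ++) >> putMVar done ()
    writeChan chan "broadcast"         -- both readers receive this message
    replicateM_ 2 (takeMVar done)      -- wait for both readers to finish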
Continuing with our hashing example, let's say the names of the files to be hashed become available over time, or need to be streamed for some other reason. We can fork a set of worker threads and feed them the filenames through a channel. For consistency, the program has also been modified to communicate the results from the workers to the printer via a channel.
{-# LANGUAGE BangPatterns #-}

import Data.Digest.Pure.MD5
import qualified Data.ByteString.Lazy as L
import System.Environment
import Control.Concurrent
import Control.Concurrent.Chan
import Control.Monad (forever, forM_, replicateM_)

nrWorkers = 2

main = do
    files <- getArgs
    str <- newChan
    fileChan <- newChan
    forM_ [1..nrWorkers] (\_ -> forkIO $ worker str fileChan)
    forM_ files (writeChan fileChan)
    printNrResults (length files) str

printNrResults i var = replicateM_ i (readChan var >>= putStrLn)

worker :: Chan String -> Chan String -> IO ()
worker str fileChan = forever (readChan fileChan >>= hashAndPrint str)

hashAndPrint str f = do
    bs <- L.readFile f
    let !h = show $ md5 bs
    writeChan str (f ++ ": " ++ h)
Notice the advantages of this approach: results become available incrementally, and performance has improved because the number of parallel hash operations matches the number of cores.
Examples
Further reading
Lock-free synchronisation
- STM
Todo
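Until this section is filled in, here is a minimal sketch of software transactional memory using the stm package (the shared-counter example is purely illustrative, not from the original text): several threads update a shared TVar through atomic transactions, with no explicit locks.

import Control.Concurrent
import Control.Concurrent.STM
import Control.Monad (replicateM_)

main :: IO ()
main = do
    counter <- newTVarIO (0 :: Int)
    done    <- newEmptyMVar
    replicateM_ 10 $ forkIO $ do
        atomically $ modifyTVar' counter (+ 1)   -- an atomic transaction
        putMVar done ()
    replicateM_ 10 (takeMVar done)               -- wait for all ten threads
    readTVarIO counter >>= print                 -- always prints 10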
Further reading
Asynchronous messages
Control.Exception:asynchronous
- Async exceptions
Todo
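Until this section is filled in, here is a minimal sketch of an asynchronous exception in action (the worker/cleanup structure is purely illustrative, not from the original text): the main thread interrupts a sleeping worker with killThread, and the worker catches the resulting ThreadKilled exception before signalling that its cleanup has run.

import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
    done <- newEmptyMVar
    tid  <- forkIO $
        (threadDelay 10000000 `catch` \e ->
            putStrLn ("worker: " ++ show (e :: AsyncException)))
          `finally` putMVar done ()
    threadDelay 100000   -- give the worker a moment to start sleeping
    killThread tid       -- delivers the ThreadKilled asynchronous exception
    takeMVar done        -- wait for the worker's cleanup to finish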
Examples
Further reading
Parallelism strategies
- Parallel, pure strategies
Todo
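Until this section is filled in, here is a minimal sketch using Control.Parallel.Strategies from the parallel package (the naive Fibonacci workload is purely illustrative): a pure list of results is evaluated in parallel by attaching a strategy with `using`. Compile with -threaded and run with +RTS -N to see the effect.

import Control.Parallel.Strategies

main :: IO ()
main = print (sum results)
  where
    -- evaluate each list element in parallel (rseq: to weak head normal form,
    -- which for Integer means fully evaluated)
    results = map expensive [30 .. 37] `using` parList rseq

    -- a deliberately slow pure function: naive Fibonacci
    expensive :: Int -> Integer
    expensive 0 = 0
    expensive 1 = 1
    expensive n = expensive (n - 1) + expensive (n - 2)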
Further reading
Data parallel arrays
Todo
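Until this section is filled in on Data Parallel Haskell itself, here is a minimal sketch of flat data-parallel arrays using the related repa package (a stand-in library, not DPH; the vector-squaring workload is purely illustrative): bulk operations over unboxed arrays are evaluated in parallel by the computeP/sumAllP family of functions.

import Data.Array.Repa as R

main :: IO ()
main = do
    -- a one-dimensional unboxed array of a million doubles
    let xs = fromListUnboxed (Z :. (1000000 :: Int)) [1 .. 1000000 :: Double]
    -- square every element and sum the result, evaluated in parallel
    s <- sumAllP (R.map (^ 2) xs)
    print (s :: Double)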
Further reading
Foreign language calls and concurrency
Non-blocking foreign calls in concurrent threads.
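As a minimal sketch (POSIX-only, since it imports sleep from unistd.h; the example is purely illustrative, not from the original text): a foreign import marked safe lets other Haskell threads keep running while the call is blocked in C, at the cost of a somewhat more expensive call than an unsafe import.

{-# LANGUAGE ForeignFunctionInterface #-}

import Control.Concurrent
import Foreign.C.Types

-- block in C for a number of seconds; marked 'safe' so the RTS can keep
-- running other Haskell threads during the call
foreign import ccall safe "unistd.h sleep"
    c_sleep :: CUInt -> IO CUInt

main :: IO ()
main = do
    done <- newEmptyMVar
    forkIO $ c_sleep 1 >> putMVar done ()
    putStrLn "other Haskell threads keep running during the C call"
    takeMVar done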
Profiling and measurement
+RTS -sstderr
Further reading
Todo