Haskell for multicores
GHC Haskell comes with a large set of libraries and tools for building programs that exploit multicore architectures.
This site attempts to document all our available information on exploiting such hardware with Haskell.
Throughout, we focus on exploiting shared-memory SMP systems, with the aim of lowering absolute wall-clock times. The machines we target are typical 2- to 32-core desktop multicore machines, on which vanilla GHC will run.
Introduction
To get an idea of what we aim to do -- reduce running times by exploiting more cores -- here's a naive "hello, world" of parallel programs: parallel, naive fib. It simply tells us whether or not the SMP runtime is working:
import Control.Parallel
import Control.Monad
import Text.Printf

-- Below this threshold it is not worth sparking a parallel computation.
cutoff :: Int
cutoff = 35

-- Plain sequential fib.
fib' :: Int -> Integer
fib' 0 = 0
fib' 1 = 1
fib' n = fib' (n-1) + fib' (n-2)

-- Parallel fib: spark the right branch with `par` while evaluating
-- the left branch with `pseq`, then combine the results.
fib :: Int -> Integer
fib n | n < cutoff = fib' n
      | otherwise  = r `par` (l `pseq` l + r)
  where
    l = fib (n-1)
    r = fib (n-2)

main = forM_ [0..45] $ \i ->
    printf "n=%d => %d\n" i (fib i)
We compile it with the `-threaded` flag:
$ ghc -O2 -threaded --make fib.hs
[1 of 1] Compiling Main             ( fib.hs, fib.o )
Linking fib ...
And run it with:
+RTS -Nx
where 'x' is the number of cores you have (or a slightly higher value). Here, on a quad-core Linux system:
./fib +RTS -N4 76.81s user 0.75s system 351% cpu 22.059 total
So we were able to use about 3.5 of the 4 available cores. This is typical: most problems don't scale perfectly, and we trade off the gain from running on more cores against the overhead of communication.
Examples
- Riemann's zeta function
Further reading
- GHC's multiprocessor guide
- runtime options to enable SMP parallelism
- API documentation for parallel strategies
- Real World Haskell: Concurrent and Parallel Programming
- Blog posts about parallelism
Thread primitives
Control.Concurrent
- forkIO
Todo
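Until this section is written, here is a minimal sketch of forkIO, assuming nothing beyond the standard Control.Concurrent module: the main thread forks a lightweight Haskell thread and then waits for it using an MVar (introduced in the next section), since the program exits as soon as main returns.

import Control.Concurrent

main :: IO ()
main = do
    done <- newEmptyMVar
    -- forkIO starts a lightweight (green) Haskell thread
    _ <- forkIO $ do
        putStrLn "hello from a forked thread"
        putMVar done ()
    -- block until the forked thread signals completion
    takeMVar done

Forked threads are multiplexed by the runtime over the OS threads created by +RTS -N, so forking is cheap and many thousands of threads are practical.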
Further reading
- A concurrent port scanner
- Research papers on concurrency in Haskell
- Research papers on parallel Haskell
Synchronisation with locks
- MVar
Todo
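Until this section is written, here is a small sketch of an MVar used as a lock around shared state, assuming only Control.Concurrent; the counts (4 threads, 1000 increments) are arbitrary illustrative values.

import Control.Concurrent
import Control.Monad (replicateM_)

main :: IO ()
main = do
    counter <- newMVar (0 :: Int)
    done    <- newEmptyMVar
    replicateM_ 4 $ forkIO $ do
        -- modifyMVar_ takes the MVar, applies the update, and puts it back,
        -- so only one thread modifies the counter at a time
        replicateM_ 1000 $ modifyMVar_ counter (return . (+1))
        putMVar done ()
    replicateM_ 4 (takeMVar done)
    readMVar counter >>= print    -- expect 4000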
Further reading
Message passing channels
- Chan
Todo
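Until this section is written, here is a small sketch of message passing over a Chan, assuming only Control.Concurrent and Control.Concurrent.Chan: a producer thread writes strings into the channel and the main thread reads them back; readChan blocks while the channel is empty.

import Control.Concurrent
import Control.Concurrent.Chan
import Control.Monad (forM_, replicateM_)

main :: IO ()
main = do
    chan <- newChan
    -- producer: push ten messages into the channel
    _ <- forkIO $ forM_ [1..10 :: Int] $ \i ->
             writeChan chan ("message " ++ show i)
    -- consumer: readChan blocks until a message is available
    replicateM_ 10 (readChan chan >>= putStrLn)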
Examples
- Implementing a chat server
- Two reader threads
Further reading
Lock-free synchronisation
- STM
Todo
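Until this section is written, here is a hedged sketch of STM using the stm package shipped with GHC (Control.Concurrent.STM): two TVar "accounts" are updated inside one atomic transaction, so no other thread can ever observe the money in flight.

import Control.Concurrent.STM

-- Move n units from one account to another in a single transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
    x <- readTVar from
    writeTVar from (x - n)
    y <- readTVar to
    writeTVar to (y + n)

main :: IO ()
main = do
    (a, b) <- atomically $ do
        a <- newTVar (100 :: Int)
        b <- newTVar 0
        return (a, b)
    atomically (transfer a b 30)
    balances <- atomically $ do
        x <- readTVar a
        y <- readTVar b
        return (x, y)
    print balances    -- (70,30)

Because a conflicting transaction is simply retried, there is no lock to forget and no deadlock from lock ordering.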
Further reading
- Research papers on transactional memory
Asynchronous messages
Control.Exception (asynchronous exceptions)
- Async exceptions
Todo
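Until this section is written, here is a minimal sketch of asynchronous exceptions with the current Control.Exception API: the main thread interrupts a worker with killThread, which delivers the ThreadKilled exception asynchronously, and the worker catches it to announce that it was stopped.

import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
    tid <- forkIO $
        let loop = threadDelay 100000 >> loop
        in loop `catch` \e ->
               putStrLn ("worker interrupted: " ++ show (e :: SomeException))
    threadDelay 500000   -- let the worker run for a while
    killThread tid       -- throw an asynchronous exception at it
    threadDelay 100000   -- give the handler a chance to print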
Examples
- Graceful exit
Further reading
Parallelism strategies
- Parallel, pure strategies
Todo
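Until this section is written, here is a hedged sketch using the Strategies combinators from the parallel package (parMap with the rseq strategy; older releases of the library spell the strategy rwhnf): each list element becomes a spark, so the naive fib calls can be evaluated on different cores.

import Control.Parallel.Strategies

nfib :: Int -> Integer
nfib 0 = 0
nfib 1 = 1
nfib n = nfib (n-1) + nfib (n-2)

main :: IO ()
main = print (sum (parMap rseq nfib [30, 31, 32, 33]))

Compile with ghc -O2 -threaded --make and run with +RTS -N4, exactly as for the fib example in the introduction.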
Further reading
- Parallel binary trees benchmark
Data parallel arrays
Todo
Further reading
Foreign language calls and concurrency
Non-blocking foreign calls in concurrent threads.
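As a sketch of the idea (POSIX-only, since it binds sleep(3) from unistd.h): a foreign import marked safe is run by the threaded runtime on its own OS thread, so other Haskell threads keep running while the C call blocks.

{-# LANGUAGE ForeignFunctionInterface #-}
import Control.Concurrent
import Control.Monad (forM_)
import Foreign.C.Types

-- A blocking C call; 'safe' lets the rest of the program carry on while
-- the calling Haskell thread is stuck inside C.
foreign import ccall safe "unistd.h sleep"
    c_sleep :: CUInt -> IO CUInt

main :: IO ()
main = do
    done <- newEmptyMVar
    _ <- forkIO (c_sleep 2 >> putMVar done ())
    -- this thread keeps making progress while the C call sleeps
    _ <- forkIO $ forM_ [1..4 :: Int] $ \i ->
             print i >> threadDelay 500000
    takeMVar done

The program must be built with -threaded; without it, any foreign call blocks every Haskell thread for its duration.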
Profiling and measurement
+RTS -sstderr
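For example, running the fib program from the introduction with runtime statistics turned on (the exact fields printed vary between GHC versions):

$ ./fib +RTS -N4 -sstderr

The summary written to stderr reports total allocation, time spent in garbage collection, and elapsed versus CPU time, which is enough to tell whether adding cores is actually reducing wall-clock time.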
Further reading
Todo