== Parallel and Concurrent Programming in GHC ==

This page contains notes and information about how to write parallel programs in GHC.

You may be interested in [[GHC/Concurrency|concurrency]] instead. Have a look there too.

GHC provides multi-scale support for parallel programming, from very fine-grained, small "sparks", to coarse-grained explicit threads and locks (using concurrency), along with other models of parallel programming.
 
* See "Real World Haskell" [http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html chapter 24] for an introduction to the most common forms of concurrent and parallel programming in GHC.
* The [http://stackoverflow.com/questions/3063652/whats-the-status-of-multicore-programming-in-haskell status of parallel and concurrent programming] in Haskell.
 
The parallel programming models in GHC can be divided into the following forms:
 
* Very fine-grained: parallel sparks and futures, as described in the paper "[http://www.haskell.org/~simonmar/bib/multicore-ghc-09_abstract.html Runtime Support for Multicore Haskell]".
* Nested data parallelism: a parallel programming model based on bulk data parallelism, in the form of the [http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell DPH] and [http://hackage.haskell.org/package/repa Repa] libraries for transparently parallel arrays.
* Intel [http://software.intel.com/en-us/blogs/2010/05/27/announcing-intel-concurrent-collections-for-haskell-01/ Concurrent Collections for Haskell]: a graph-oriented parallel programming model.

The most important model (as of 2010) to get to know is implicit parallelism via sparks. If you're interested in scientific programming specifically, you may also be interested in current research on nested data parallelism in Haskell.
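
To give a flavour of spark-based parallelism, here is a minimal sketch using <tt>par</tt> and <tt>pseq</tt> from the [http://hackage.haskell.org/package/parallel parallel] package. The naive Fibonacci function is purely illustrative; it is not taken from this page or the paper above.

<haskell>
import Control.Parallel (par, pseq)

-- Naive Fibonacci. "x `par` e" sparks x for possible parallel
-- evaluation and then evaluates e; "y `pseq` e" evaluates y
-- before e, so the two recursive calls can proceed in parallel.
nfib :: Int -> Integer
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` x + y)
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 30)
</haskell>

Compile with <tt>ghc -O2 -threaded</tt> and run with <tt>+RTS -N</tt> to use multiple cores (see the Multicore GHC section below).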
 
=== Starting points ===
 
* '''Nested Data Parallelism'''. For an approach to exploiting the implicit parallelism in array programs for multiprocessors, see [[GHC/Data Parallel Haskell|Data Parallel Haskell]] (work in progress).
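
For a flavour of the bulk-array style, here is a minimal sketch using [http://hackage.haskell.org/package/repa Repa]. It is written against the Repa 3 API; Repa's interface has changed between versions, so treat this as illustrative rather than canonical.

<haskell>
import Data.Array.Repa as R

-- Double every element of a one-dimensional unboxed array.
-- computeP evaluates the delayed result of R.map in parallel
-- across the available cores.
main :: IO ()
main = do
  let xs = fromListUnboxed (Z :. 10) [1 .. 10 :: Double]
  ys <- computeP (R.map (* 2) xs) :: IO (Array U DIM1 Double)
  print (toList ys)
</haskell>

As with sparks, compile with <tt>-threaded</tt> and run with <tt>+RTS -N</tt> to actually use multiple cores.
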
=== Multicore GHC ===

Since 2004, GHC has supported running programs in parallel on an SMP or multi-core machine. How to do it:

* Compile your program using the <tt>-threaded</tt> switch.
* Run the program with <tt>+RTS -N2</tt> to use 2 threads, for example (RTS stands for runtime system; see the GHC users' guide). You should use a <tt>-N</tt> value equal to the number of CPU cores on your machine (not including Hyper-Threading cores). As of GHC 6.12, you can leave off the number of cores and all available cores will be used (you still need to pass <tt>-N</tt>, like so: <tt>+RTS -N</tt>).
* Concurrent threads (<tt>forkIO</tt>) will run in parallel, and you can also use the <tt>par</tt> combinator and Strategies from the <tt>Control.Parallel.Strategies</tt> module to create parallelism.
* Use <tt>+RTS -sstderr</tt> for timing stats.
* To debug parallel program performance, use [[ThreadScope]].
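
Tying the steps above together, here is a small sketch in the Strategies style; the workload function <tt>expensive</tt> is invented for illustration, and any speedup will depend on your machine.

<haskell>
import Control.Parallel.Strategies (parMap, rdeepseq)

-- An artificial CPU-bound workload (illustrative only).
expensive :: Int -> Int
expensive n = sum [1 .. n * 100000]

-- parMap creates one spark per list element; rdeepseq forces
-- each result to be fully evaluated inside its spark.
main :: IO ()
main = print (sum (parMap rdeepseq expensive [1 .. 8]))
</haskell>

For example (assuming the file is named <tt>Par.hs</tt>): compile with <tt>ghc -O2 -threaded Par.hs</tt>, run with <tt>./Par +RTS -N -sstderr</tt>, and look for converted sparks in the statistics. Compiling with <tt>-eventlog</tt> and running with <tt>+RTS -l</tt> produces a <tt>Par.eventlog</tt> file that ThreadScope can display.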

=== Related work ===