Parallel Programming in GHC

This page contains notes and information about how to use parallelism in GHC to speed up pure functions in your program.

You may be interested in concurrency instead, which would allow you to manage simultaneous IO actions.

GHC provides multi-scale support for parallel programming, from very fine-grained, small "sparks", to coarse-grained explicit threads and locks (using concurrency), along with other models of parallel programming.

  • See "Real World Haskell", chapter 24 (http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html), for an introduction to the most common forms of concurrent and parallel programming in GHC.
  • A reading list for parallelism in Haskell: http://donsbot.wordpress.com/2009/09/03/parallel-programming-in-haskell-a-reading-list/
  • The status of parallel and concurrent programming in Haskell: http://stackoverflow.com/questions/3063652/whats-the-status-of-multicore-programming-in-haskell

The parallel programming models in GHC can be divided into several forms. The most important one to get to know (as of 2010) is implicit parallelism via sparks. If you're interested in scientific programming specifically, you may also be interested in current research on nested data parallelism in Haskell.

Starting points

  • Control.Parallel. The place to start with parallel programming in Haskell is the par and pseq combinators from the parallel library; a minimal sketch follows this list.
  • Nested Data Parallelism. For an approach to exploiting the implicit parallelism in array programs on multiprocessors, see Data Parallel Haskell (work in progress).
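
For example, here is a minimal sketch of spark-based parallelism with par and pseq. It assumes the parallel package is installed; the sumTo function and the input sizes are invented purely for illustration. Compile it with the -threaded switch described under Multicore GHC below.

    import Control.Parallel (par, pseq)

    -- An arbitrary, pure computation; purely illustrative.
    sumTo :: Integer -> Integer
    sumTo n = sum [1 .. n]

    main :: IO ()
    main =
      -- par creates a spark for a (which may run on another core) while
      -- pseq forces b on the current thread, so the two computations can
      -- be evaluated at the same time before their sum is printed.
      let a = sumTo 20000000
          b = sumTo 30000000
      in  a `par` (b `pseq` print (a + b))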

Multicore GHC

Since 2004, GHC has supported running programs in parallel on an SMP or multi-core machine. How to do it:

  • Compile your program using the -threaded switch.
  • Run the program with +RTS -N2 to use 2 threads, for example (RTS stands for runtime system; see the GHC users' guide). You should use a -N value equal to the number of CPU cores on your machine (not including Hyper-threading cores). As of GHC v6.12, you can leave off the number of cores and all available cores will be used (you still need to pass -N however, like so: +RTS -N).
  • Concurrent threads (forkIO) will run in parallel, and you can also use the par combinator and Strategies from the Control.Parallel.Strategies module to create parallelism (see the sketch after this list).
  • Use +RTS -sstderr for timing stats.
  • To debug parallel program performance, use ThreadScope.
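
As an illustration of the Strategies approach, here is a hedged sketch that uses parMap and rdeepseq from Control.Parallel.Strategies to evaluate list elements in parallel. The fib function and the input range are invented for the example.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A deliberately slow, pure function; purely illustrative.
    fib :: Int -> Integer
    fib n | n < 2     = fromIntegral n
          | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = do
      -- parMap sparks one evaluation per list element; rdeepseq fully
      -- evaluates each result before the results are combined.
      let results = parMap rdeepseq fib [28 .. 34]
      print (sum results)

Compile the example with ghc -O2 -threaded and run it with, for example, +RTS -N2 -sstderr to see the timing statistics mentioned above.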

Related work

  • The Parallel portal on this wiki
  • Ongoing research in Parallel Haskell (the Parallel/Research page)
  • The Sun project to improve GHC performance on SPARC: http://ghcsparc.blogspot.com/
  • A Microsoft project to improve industrial applications of GHC parallelism: http://www.well-typed.com/blog/38
  • Simon Marlow's publications on parallelism and GHC: http://www.haskell.org/~simonmar/bib/bib.html
  • Glasgow Parallel Haskell: http://www.macs.hw.ac.uk/~dsg/gph/
  • Glasgow Distributed Haskell: http://www.macs.hw.ac.uk/~dsg/gdh/
  • http://www-i2.informatik.rwth-aachen.de/~stolz/dhs/
  • http://www.informatik.uni-kiel.de/~fhu/PUBLICATIONS/1999/ifl.html
  • Eden: http://www.mathematik.uni-marburg.de/~eden