Concurrency

This page contains notes and information about how to write concurrent programs in Haskell. If you're more interested in performance than non-determinism, learn about parallelism first.

For practicality, the content is GHC-centric at the moment, although this may change as Haskell evolves.

Overview

GHC provides multi-scale support for parallel and concurrent programming, from very fine-grained, lightweight "sparks" to coarse-grained explicit threads and locks, as well as other models of concurrent and parallel programming, including actors, CSP-style concurrency, nested data parallelism and Intel Concurrent Collections. Synchronization between tasks is possible via messages, ordinary Haskell variables, MVar shared state or transactional memory.

Getting started

The most important things to get to know first (as of 2010) are the basic "concurrent Haskell" model of threads using forkIO and MVar, and the use of transactional memory via STM. A minimal sketch of the forkIO/MVar style follows below.
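In this sketch (illustrative only; the worker's computation is just a placeholder), the main thread forks a lightweight thread and then blocks on an MVar until the worker puts its result there:

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  main :: IO ()
  main = do
    result <- newEmptyMVar               -- empty MVar used as a one-shot channel
    _ <- forkIO $ do                     -- fork a lightweight Haskell thread
      let answer = sum [1 .. 1000000 :: Integer]
      putMVar result answer              -- hand the result back to the main thread
    r <- takeMVar result                 -- blocks until the worker has finished
    print r

Because takeMVar blocks until the MVar is filled, the program only finishes after the worker has delivered its value.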

See "Tackling the Awkward Squad" to get started.

From there, try the reading list for parallelism in Haskell.

Digging deeper

  • Software Transactional Memory (STM) is a newer way to coordinate concurrent threads; there's a separate Wiki page devoted to STM.
    STM was added in GHC 6.4 and is described in the paper Composable memory transactions. The paper Lock-free data structures using STM in Haskell gives further examples of concurrent programming using STM. A small sketch of STM in use follows below.
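A minimal sketch of STM (the transfer function and the account names are illustrative, not taken from the papers above): two TVar balances are updated inside atomically, so no other thread can ever observe the intermediate state where the money has left one account but not yet arrived in the other.

  import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar)

  -- Move an amount between two shared balances in a single atomic transaction.
  transfer :: TVar Int -> TVar Int -> Int -> IO ()
  transfer from to amount = atomically $ do
    a <- readTVar from
    b <- readTVar to
    writeTVar from (a - amount)
    writeTVar to   (b + amount)

  main :: IO ()
  main = do
    alice <- newTVarIO 100
    bob   <- newTVarIO 0
    transfer alice bob 40
    balances <- atomically ((,) <$> readTVar alice <*> readTVar bob)
    print balances                       -- prints (60,40)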

GHC concurrency specifics

You get access to the concurrency operations by importing the module Control.Concurrent.

GHC has supported running programs in parallel on an SMP or multi-core machine since 2004. How to do it:

  • Compile your program using the -threaded switch.
  • Run the program with +RTS -N2 to use 2 threads, for example (RTS stands for runtime system; see the GHC users' guide). You should use a -N value equal to the number of CPU cores on your machine (not including Hyper-threading cores). As of GHC v6.12, you can leave off the number of cores and all available cores will be used (you still need to pass -N however, like so: +RTS -N).
  • Concurrent threads (forkIO) will run in parallel, and you can also use the par combinator and Strategies from the Control.Parallel.Strategies module to create parallelism (a small sketch follows after this list).
  • Use +RTS -sstderr for timing stats.
  • To debug parallel program performance, use ThreadScope.
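As a rough illustration of using Strategies (a minimal sketch; the naive fib is just a stand-in for any CPU-bound pure function), parMap evaluates the list elements as independent sparks:

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- Any expensive pure function will do; naive fib keeps the cores busy.
  fib :: Integer -> Integer
  fib n | n < 2     = n
        | otherwise = fib (n - 1) + fib (n - 2)

  main :: IO ()
  main = print (sum (parMap rdeepseq fib [28 .. 34]))

Compiled with ghc -threaded and run with +RTS -N, the list elements are evaluated in parallel across the available cores; the +RTS -sstderr statistics include a SPARKS line showing how many sparks were actually converted.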

Support for low-level parallelism features of modern processors is slowly coming along. As of version 7.8, GHC includes the ability to emit SIMD instructions, and also has a rudimentary ability to use atomic memory operations.

Alternative approaches

  • CHP: CSP-style concurrency for Haskell.

See also