Parallel/Glossary


Glossary: Parallelism and concurrency

A-H

bound thread
A bound thread is a Haskell thread that is bound to an operating system thread. While the bound thread is still scheduled by the Haskell run-time system, all foreign calls made by the bound thread are run on its operating system thread. All foreign-exported functions run in a bound thread (bound to the OS thread that called the function), and the main action of every Haskell program is run in a bound thread.
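A minimal sketch (GHC, compiled with -threaded) contrasting a bound thread created with forkOS against an unbound thread created with forkIO; isCurrentThreadBound reports which kind the current thread is:
<haskell>
import Control.Concurrent

main :: IO ()
main = do
  -- The main action always runs in a bound thread.
  isCurrentThreadBound >>= print                -- True
  -- forkOS creates a Haskell thread bound to a fresh OS thread;
  -- foreign calls made by it run on that OS thread.
  _ <- forkOS (isCurrentThreadBound >>= print)  -- True
  -- forkIO creates an ordinary, unbound Haskell thread.
  _ <- forkIO (isCurrentThreadBound >>= print)  -- False
  threadDelay 100000                            -- crude wait for the forked threads
</haskell>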
concurrency
Implementing a program using multiple I/O-performing threads. While a concurrent Haskell program can run on a parallel machine, the primary goal of using concurrency is not to gain performance; rather, concurrency is used because it is the simplest and most direct way to write the program. Since the threads perform I/O, the semantics of the program are necessarily non-deterministic.
see parallelism (vs concurrency)
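A small illustrative sketch: two I/O-performing threads forked with forkIO, whose output interleaves non-deterministically:
<haskell>
import Control.Concurrent
import Control.Monad (forM_)

main :: IO ()
main = do
  -- Two I/O-performing threads; how their output interleaves is
  -- up to the scheduler, i.e. non-deterministic.
  _ <- forkIO (forM_ [1 .. 3 :: Int] (\i -> putStrLn ("ping " ++ show i)))
  _ <- forkIO (forM_ [1 .. 3 :: Int] (\i -> putStrLn ("pong " ++ show i)))
  threadDelay 100000  -- give the forked threads time to finish
</haskell>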
data parallelism
dataflow parallelism
A model of parallelism in which the dependencies between sub-computations form a directed graph. Branches of the graph that do not depend on one another can be evaluated in parallel, while nodes connected by a dependency must be evaluated in sequence.
see monad-par
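A sketch of the dataflow style, assuming the monad-par package (Control.Monad.Par); example and the naive fib are just illustrative names. The two fork/put branches are independent and may run in parallel, while the gets must wait for their results:
<haskell>
import Control.Monad.Par  -- from the monad-par package

-- A tiny dataflow graph: the two branches have no dependency on each
-- other, so they may run in parallel; the final sum depends on both.
example :: Int -> Int
example n = runPar $ do
    a <- new
    b <- new
    fork (put a (fib n))        -- independent branch
    fork (put b (fib (n + 1)))  -- independent branch
    x <- get a                  -- get blocks until the IVar is full,
    y <- get b                  -- making the dependencies explicit
    return (x + y)
  where
    fib :: Int -> Int
    fib m = if m < 2 then m else fib (m - 1) + fib (m - 2)
</haskell>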
distributed
distributed memory model
Haskell thread
A Haskell thread is a thread of execution for IO code. Multiple Haskell threads can execute IO code concurrently and they can communicate using shared mutable variables and channels.
see spark (vs thread)
see Haskell thread (vs OS thread)
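A minimal sketch of two Haskell threads communicating over a channel (Control.Concurrent.Chan from base):
<haskell>
import Control.Concurrent
import Control.Concurrent.Chan
import Control.Monad (replicateM_)

main :: IO ()
main = do
  chan <- newChan
  -- A worker thread sends messages over the channel ...
  _ <- forkIO (mapM_ (writeChan chan) ["one", "two", "three"])
  -- ... and the main thread receives and prints them.
  replicateM_ 3 (readChan chan >>= putStrLn)
</haskell>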
Haskell thread (vs OS thread)
HEC (Haskell Execution Context)

I-M

MapReduce
TODO: non-Haskellers may have heard of MapReduce - what does it translate to in Haskell terms?
monad-par
A deterministic parallel Haskell library. It provides an API that resembles Concurrent Haskell, without sacrificing predictability. Notable traits: somewhat more verbose code, threads instead of sparks, and a hyperstrict default (a good thing for parallelism).
see Strategies
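A brief sketch of the monad-par API (Control.Monad.Par); sumTwo is an illustrative name. spawn forks a hyperstrict sub-computation and returns an IVar, and get waits for its result; the final value is deterministic:
<haskell>
import Control.Monad.Par  -- monad-par package

-- Two sub-computations are spawned and may run in parallel; get waits
-- for their fully evaluated results. The answer is the same
-- regardless of how the scheduler interleaves the work.
sumTwo :: Int -> Int
sumTwo n = runPar $ do
  i <- spawn (return (sum     [1 .. n]))
  j <- spawn (return (product [1 .. 20]))
  a <- get i
  b <- get j
  return (a + b)
</haskell>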
MVar
A locked mutable variable that can be shared across Haskell threads. MVars can be full or empty. When reading an empty MVar, the reading thread blocks until it is full; conversely, when writing to a full MVar, the writing thread blocks until it is empty.
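A minimal sketch of MVar-based communication between two Haskell threads:
<haskell>
import Control.Concurrent
import Control.Concurrent.MVar

main :: IO ()
main = do
  box <- newEmptyMVar
  -- The forked thread fills the MVar; takeMVar in the main thread
  -- blocks until that has happened.
  _ <- forkIO (putMVar box "hello from another thread")
  msg <- takeMVar box
  putStrLn msg
</haskell>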

N-R

nested data parallelism
parallelism
Running a Haskell program on multiple processors, with the goal of improving performance. Ideally, this should be done invisibly, and with no semantic changes.
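A minimal sketch, assuming the parallel package's Control.Parallel.Strategies (costlySums and costly are illustrative names). The result is the same as a plain map; the strategy only adds the opportunity to evaluate the elements in parallel (run the program with +RTS -N):
<haskell>
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Semantically the same as `map costly xs`; the strategy only adds
-- the opportunity to evaluate the list elements in parallel.
costlySums :: [Int] -> [Int]
costlySums xs = parMap rdeepseq costly xs
  where
    costly n = sum [1 .. n * 10000]
</haskell>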
parallelism (vs concurrency)
Discussed in Parallelism vs. Concurrency

S-Z

shared memory model
spark
Sparks are specific to parallel Haskell. Abstractly, a spark is a pure computation that may be evaluated in parallel. Sparks are introduced with the par combinator; the expression (x `par` y) "sparks off" x, telling the runtime that it may evaluate the value of x in parallel with other work. Whether a spark is actually evaluated in parallel with other computations, or with other Haskell threads, depends on what your hardware supports and on how your program is written. Sparks are placed in a work queue; when a CPU core is idle, it can take a spark from the queue and evaluate it.
see spark (vs thread)
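A minimal sketch of sparking with par and pseq from the parallel package (parSum is an illustrative name); it needs the threaded runtime and +RTS -N for the spark to actually run in parallel:
<haskell>
import Control.Parallel (par, pseq)

-- (a `par` b) sparks off a; (b `pseq` c) evaluates b before c,
-- so both summands are being worked on before they are combined.
parSum :: Int -> Int
parSum n = a `par` (b `pseq` (a + b))
  where
    a = sum     [1 .. n]   -- may be picked up by an idle core as a spark
    b = product [1 .. 20]  -- evaluated by the current thread
</haskell>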
spark (vs thread)
On a multi-core machine, both threads and sparks can be used to achieve parallelism. Threads give you concurrent, non-deterministic parallelism, while sparks give you pure, deterministic parallelism. Haskell threads are ideal for applications such as network servers, where you need to do lots of I/O and concurrency fits the nature of the problem. Sparks are ideal for speeding up pure calculations, where adding non-deterministic concurrency would only complicate things.
STM
Software transactional memory: shared state is kept in transactional variables (TVars) and modified inside atomic transactions run with atomically. A transaction sees a consistent view of memory and either commits as a whole or is retried, which makes it possible to compose operations on shared state without explicit locks.
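A minimal sketch, assuming the stm package (Control.Concurrent.STM); transfer is an illustrative name:
<haskell>
import Control.Concurrent.STM  -- stm package

-- Move money between two accounts atomically: no other thread can
-- observe a state where it has left one account but not yet
-- arrived in the other.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)
</haskell>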
task parallelism
OS thread
A thread created and scheduled by the operating system. OS threads are comparatively expensive to create and switch between; GHC's run-time system multiplexes many lightweight Haskell threads onto a much smaller number of OS threads.
thread
see Haskell thread, OS thread and bound thread