HaskellWiki user contributions for EricKow (feed retrieved 2024-03-28)

Parallel/Glossary (revision of 2014-04-17T16:32:07Z)
<hr />
<div>[[Category:Glossary]]<br />
<br />
== A-H ==<br />
<br />
; bound thread<br />
: A bound thread is a Haskell thread that is bound to a particular operating system thread. The bound thread is still scheduled by the Haskell run-time system, but all foreign calls it makes are executed on its OS thread. Every foreign exported function runs in a bound thread (bound to the OS thread that called the function), and the main action of every Haskell program is run in a bound thread.<br />
<br />
; concurrency<br />
: Implementing a program using multiple I/O-performing threads. While a concurrent Haskell program can run on a parallel machine, the primary goal of concurrency is not to gain performance, but rather that it is often the simplest and most direct way to write the program. Since the threads perform I/O, the semantics of the program is necessarily non-deterministic.<br />
: ''see parallelism (vs concurrency)''<br />
<br />
; data parallelism<br />
<br />
; dataflow parallelism<br />
: A model of parallelism in which the dependencies between sub-computations form a directed graph. Branches of the graph that do not depend on each other can run in parallel, while connected nodes must run sequentially, each waiting for the nodes it depends on.<br />
: ''see monad-par''<br />
<br />
; distributed<br />
<br />
; distributed memory model<br />
<br />
; Haskell thread<br />
: A Haskell thread is a thread of execution for IO code. Multiple Haskell threads can execute IO code concurrently and they can communicate using shared mutable variables and channels.<br />
: ''see spark (vs threads)''<br />
: ''see Haskell thread (vs OS thread)''<br />
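: A minimal sketch of two Haskell threads communicating over a channel (all names come from <code>base</code>'s <code>Control.Concurrent</code> modules):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan

-- One Haskell thread produces values; the main thread consumes them.
-- Both are lightweight threads multiplexed by the GHC runtime.
main :: IO ()
main = do
  ch <- newChan
  _  <- forkIO (mapM_ (writeChan ch) [1 .. 3 :: Int])
  xs <- mapM (const (readChan ch)) [1 .. 3 :: Int]
  print xs  -- prints [1,2,3]: a single writer keeps the order deterministic
```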
<br />
; Haskell thread (vs OS thread)<br />
<br />
; HEC (Haskell Execution Context)<br />
<br />
== I-M ==<br />
<br />
; MapReduce<br />
: ''TODO: non-Haskellers may have heard of MapReduce - what does it translate to in Haskell terms?''<br />
<br />
; monad-par<br />
: A deterministic parallel Haskell library. It provides an API that resembles Concurrent Haskell, without sacrificing predictability. Notable traits: somewhat more verbose code, threads instead of sparks, and a hyperstrict default (a good thing for parallelism).<br />
: ''see Strategies''<br />
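: As a sketch of the flavour of the API (this assumes the <code>monad-par</code> package is installed; <code>runPar</code>, <code>spawn</code> and <code>get</code> come from its <code>Control.Monad.Par</code> module):

```haskell
import Control.Monad.Par  -- from the monad-par package (assumed installed)

-- Two independent computations run as dataflow nodes; runPar is
-- deterministic, so the result never depends on scheduling.
parPair :: (Int, Int)
parPair = runPar $ do
  a <- spawn (return (sum [1 .. 1000]))
  b <- spawn (return (product [1 .. 10]))
  (,) <$> get a <*> get b

main :: IO ()
main = print parPair  -- prints (500500,3628800)
```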
<br />
; MVar<br />
: A locked mutable variable that can be shared across Haskell threads. An MVar can be full or empty. A thread reading an empty MVar blocks until it is full; conversely, a thread writing to a full MVar blocks until it is empty.<br />
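: For example (a sketch using <code>base</code>'s <code>Control.Concurrent</code>), the main thread blocks on an empty MVar until a forked thread fills it:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar

main :: IO ()
main = do
  mv <- newEmptyMVar
  _  <- forkIO $ do
          threadDelay 10000      -- 10 ms, so main blocks first
          putMVar mv "filled"
  msg <- takeMVar mv             -- blocks until the MVar is full
  putStrLn msg
```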
<br />
== N-R ==<br />
<br />
; nested data parallelism<br />
<br />
; parallelism<br />
: Running a Haskell program on multiple processors, with the goal of improving performance. Ideally, this should be done invisibly, and with no semantic changes.<br />
<br />
; parallelism (vs concurrency)<br />
: Discussed in [[Parallelism vs. Concurrency]]<br />
<br />
== S-Z ==<br />
<br />
; shared memory model<br />
<br />
; spark<br />
: Sparks are specific to parallel Haskell. Abstractly, a spark is a pure computation which may be evaluated in parallel. Sparks are introduced with the <code>par</code> combinator: the expression <code>x `par` y</code> "sparks off" <code>x</code>, telling the runtime that it may evaluate <code>x</code> in parallel with other work. Whether a spark is actually evaluated in parallel with other computations, or with other Haskell IO threads, depends on what your hardware supports and on how your program is written. Sparks are placed in a work queue; when a CPU core is idle, it can take a spark from the queue and evaluate it.<br />
: ''see spark (vs thread)''<br />
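: For example (a sketch; <code>par</code> and <code>pseq</code> are usually imported from <code>Control.Parallel</code> in the <code>parallel</code> package, but the same combinators are also exported by <code>GHC.Conc</code> in <code>base</code>, used here so the example is self-contained):

```haskell
import GHC.Conc (par, pseq)

-- nf1 is sparked off: the runtime may evaluate it on another core
-- while this thread evaluates nf2. Compile with -threaded and run
-- with +RTS -N to use multiple cores; the result is the same either way.
nfib :: Int -> Int
nfib n
  | n < 2     = 1
  | otherwise = nf1 `par` (nf2 `pseq` nf1 + nf2 + 1)
  where
    nf1 = nfib (n - 1)
    nf2 = nfib (n - 2)

main :: IO ()
main = print (nfib 20)  -- prints 21891
```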
<br />
; spark (vs thread)<br />
: On a multi-core machine, both threads and sparks can be used to achieve parallelism. Threads give you concurrent, non-deterministic parallelism, while sparks give you pure deterministic parallelism. Haskell threads are ideal for applications like network servers where you need to do lots of I/O and using concurrency fits the nature of the problem. Sparks are ideal for speeding up pure calculations where adding non-deterministic concurrency would just make things more complicated.<br />
<br />
; STM<br />
<br />
; task parallelism<br />
<br />
; OS thread<br />
<br />
; thread<br />
: ''see Haskell thread, OS thread and bound thread''</div>
Parallel GHC Project (revision of 2012-07-21T08:09:57Z)
<hr />
<div>[[Category:Parallel]]<br />
<br />
== Overview ==<br />
<br />
The Parallel GHC Project is an [http://research.microsoft.com MSR]-funded project to push the real-world use of [[Parallel|parallel Haskell]]. The aim is to demonstrate that parallel Haskell can be employed successfully in industrial projects.<br />
<br />
In the last few years GHC has gained impressive support for parallel programming on commodity multi-core systems. In addition to traditional threads and shared variables, it supports pure parallelism, software transactional memory (STM), and data parallelism. With much of this research and development complete, the next stage is to get the technology into more widespread use.<br />
<br />
This project aims to do the engineering work to solve whatever remaining practical problems are blocking organisations from making serious use of parallelism with GHC. The driving force is the ''applications'' rather than the ''technology''.<br />
<br />
The project involves a partnership with [[#Participating organisations|six groups from commercial and scientific organisations]]. Over the course of two years these groups are applying parallel Haskell in their specific domains. They are being supported by GHC HQ and [http://www.well-typed.com/ Well-Typed] who are providing advice on Haskell tools and techniques, and applying engineering effort to resolve any issues that are hindering these groups' progress.<br />
<br />
The project is being coordinated by [http://www.well-typed.com/ Well-Typed] and they are providing the bulk of the support and engineering effort. The project started in the summer of 2010.<br />
<br />
== Project News ==<br />
<br />
<br />
<br />
=== ThreadScope and friends ===<br />
<br />
We have been continuing our work to make [[ThreadScope]] more helpful and informative in tracking down your parallel and concurrent Haskell performance problems. We now have the ability to collect heap statistics from the GHC runtime system and present them in ThreadScope. These features will be available for users of a recent development GHC (7.5.x) or the eventual 7.6 release. In addition to heap statistics, we have been working on collecting information from hardware performance counters, more specifically adding support for Linux Perf Events. This could be useful for studying IO-heavy programs, the idea being to visualise system calls as being distinct from actual execution of Haskell code.<br />
<br />
=== Cloud Haskell ===<br />
<br />
We are continuing work on the new Cloud Haskell implementation, [http://sneezy.cs.nott.ac.uk/fun/2012-02/coutts-2012-02-28.pdf recently presented] by Duncan Coutts. Lately, we have been focused on reducing message latency. This consists of work in three areas: improving binary serialisation, investigating the implications of using Chan and MVar to pass messages between threads, and perhaps improving the Haskell network library implementation to compete better with a direct C implementation.<br />
<br />
For more information on our implementation, see the [https://github.com/haskell-distributed/distributed-process distributed-process GitHub page] and particularly the updated [https://github.com/haskell-distributed/distributed-process/wiki/New-backend-and-transport-design design document], which incorporates feedback on our initial design proposal.<br />
<br />
== Project artefacts == <br />
<br />
Some of the work by our project partners is available to the public:<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Partner<br />
! Description<br />
! Status<br />
|-<br />
| [http://www.mew.org/~kazu/proj/mighttpd/en/ mighttpd2]<br />
| IIJ<br />
| File/CGI server on top of Warp<br />
| version 2.5.7 released 2012-04-05<br />
|-<br />
| [http://hackage.haskell.org/package/webserver webserver]<br />
| IIJ<br />
| HTTP server library<br />
| version 0.4.6 released 2011-10-05<br />
|-<br />
| [http://hackage.haskell.org/package/wai-app-file-cgi wai-app-file-cgi]<br />
| IIJ<br />
| File/CGI WAI application (used by Mighttpd)<br />
| version 0.5.8 released 2012-04-05<br />
|-<br />
| [http://hackage.haskell.org/package/wai-logger wai-logger]<br />
| IIJ<br />
| Logging system for WAI (used by Mighttpd)<br />
| version 0.1.4 released 2012-02-13<br />
|-<br />
| [http://hackage.haskell.org/package/http-date http-date]<br />
| IIJ<br />
| Fast parser and formatter for HTTP Date<br />
| version 0.0.2 released 2012-02-17<br />
|-<br />
| dns<br />
| IIJ<br />
| DNS library<br />
| version 0.2.0 released 2011-08-31<br />
|-<br />
| [http://www.mew.org/~kazu/proj/iproute/en/ iproute]<br />
| IIJ<br />
| IP routing table<br />
| version 1.2.5 released 2012-04-02<br />
|-<br />
| [http://hackage.haskell.org/package/domain-auth domain-auth]<br />
| IIJ<br />
| Library for Sender Policy Framework, SenderID, DomainKeys and DKIM.<br />
| version 0.2.0 released 2011-08-31<br />
|-<br />
| [http://www.mew.org/~kazu/proj/rpf/en/ RPF]<br />
| IIJ<br />
| Receiver Policy Framework (milter)<br />
| version 0.2.0 released 2011-08-31<br />
|}<br />
<br />
In addition to helping the [[#participating organisations|participating organisations]], the project will whenever possible make improvements to libraries and tools that are useful to Haskell users more generally.<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Description<br />
! Status<br />
|-<br />
| multiprocess Threadscope<br />
| profiling of multi-process or distributed Haskell systems such as client/server or MPI programs.<br />
| '''in progress'''<br />
|-<br />
| [https://github.com/bjpop/lfg LFG]<br />
| Haskell implementation of some pseudo random number generators from the SPRNG library<br />
| '''testing'''<br />
|-<br />
| [https://github.com/bjpop/haskell-sprng SPRNG binding]<br />
| Haskell wrapper around SPRNG<br />
| '''in progress'''<br />
|-<br />
| ThreadScope improvements<br />
| new spark profiling tools, GUI enhancements, bug fixes<br />
| version 0.2.1 released 2012-01-14<br />
|-<br />
| ghc-events improvements<br />
| spark events support<br />
| version 0.4.0.0 released 2012-01-14<br />
|-<br />
| gtk2hs maintenance & release<br />
| GHC 7.2 support<br />
| version 0.12.2 released 2011-11-13<br />
|-<br />
| [http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
| Haskell bindings to C MPI library<br />
| version 1.2.1 released 2012-02-15<br />
|-<br />
| rowspan="5" | GHC RTS improvements<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4449 &nbsp;#4449] - GHC 7 can't do IO when daemonized<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4504 &nbsp;#4504] - "awaitSignal Nothing" does not block thread with -threaded<br />
| fixed in 7.0.2<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4512 &nbsp;#4512] - EventLog does not play well with forkProcess<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4514 &nbsp;#4514] - IO manager can deadlock if a file descriptor is closed behind its back<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4854 &nbsp;#4854] - Validating on a PPC Mac OS X: Fix miscellaneous errors and warnings<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://www.cse.unsw.edu.au/~chak/haskell/c2hs/ c2hs] improvements<br />
| marshalling functions now can have arguments supplied to them.<br />
| version 0.16.3 released 2011-03-24<br />
|}<br />
<br />
The project will also aim to document existing tools and parallel programming practices, making them accessible to a wider public.<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Description<br />
! Status<br />
|-<br />
| [[ThreadScope Tour]]<br />
| a short guide to using ThreadScope to help analyse parallel program performance<br />
| unveiled 2012-01-14<br />
|-<br />
| rowspan="2" | submissions to TMR 19<br />
| Mighttpd – a High Performance Web Server in Haskell (Kazu Yamamoto)<br />
| submitted<br />
|-<br />
| High Performance Haskell with MPI (Bernie Pope and Dmitry Astapov)<br />
| submitted<br />
|-<br />
| [[Parallel|Parallel Haskell Portal]]<br />
| one-stop resource oriented for users of parallelism and concurrency in Haskell<br />
| unveiled 2011-04-20<br />
|}<br />
<br />
== The Parallel Haskell Digest ==<br />
<br />
We have been publishing a regular newsletter containing project news, other parallel news from around the Haskell community, and short "Word of the Month" articles giving brief introductions to important concepts in parallelism.<br />
<br />
The back issues are here:<br />
<br />
* [http://www.well-typed.com/blog/52 Parallel Haskell Digest 1] with word of the month '''spark'''<br />
* [http://www.well-typed.com/blog/53 Parallel Haskell Digest 2] with word of the month '''thread of execution'''<br />
* [http://www.well-typed.com/blog/55 Parallel Haskell Digest 3] with word of the month '''parallel arrays'''<br />
* [http://www.well-typed.com/blog/56 Parallel Haskell Digest 4] with words of the month '''par''' and '''pseq'''<br />
* [http://www.well-typed.com/blog/58 Parallel Haskell Digest 5] with word of the month '''strategy'''<br />
* [http://www.well-typed.com/blog/60 Parallel Haskell Digest 6] with word of the month '''dataflow''' as in dataflow parallelism<br />
* [http://www.well-typed.com/blog/62 Parallel Haskell Digest 7] (catching up on community news)<br />
* [http://www.well-typed.com/blog/64 Parallel Haskell Digest 8] with word of the month '''MVar'''<br />
* [http://www.well-typed.com/blog/65 Parallel Haskell Digest 9] with the word of the month '''transaction'''<br />
* [http://www.well-typed.com/blog/66 Parallel Haskell Digest 10] with the word of the month '''channel'''<br />
* [http://www.well-typed.com/blog/67 Parallel Haskell Digest 11] with the word of the month '''actor''' (see part 2, [http://www.well-typed.com/blog/68 A Cloud Haskell Appetiser])<br />
<br />
== Getting involved ==<br />
<br />
Progress reports will be posted to the [http://groups.google.com/group/parallel-haskell parallel Haskell mailing list] and to the [http://www.well-typed.com/blog/ Well-Typed blog].<br />
<br />
The best starting point to get involved is to join the mailing list. Note that the list is for parallel Haskell generally, not just the Parallel GHC Project.<br />
<br />
== Participating organisations ==<br />
<br />
;[http://www.dragonfly.co.nz/ Dragonfly]<br />
:Cloudy Bayes: Hierarchical Bayesian modeling in Haskell<br />
<br />
:The Cloudy Bayes project aims to develop a fast Bayesian model fitter that takes advantage of modern multiprocessor machines. It will support model descriptions in the BUGS model description language (WinBUGS, OpenBUGS, and JAGS). It will be implemented as an embedded domain specific language (EDSL) within Haskell. A wide range of hierarchical Bayesian model structures will be possible, including many of the models used in the medical, ecological, and biological sciences.<br />
<br />
:Cloudy Bayes will provide an easy to use interface for describing models, running Monte Carlo Markov chain (MCMC) fitters, diagnosing performance and convergence criteria as it runs, and collecting output for post-processing. Haskell's strong type system will be used to ensure that model descriptions make sense, providing a fast, safe development cycle.<br />
<br />
;[http://www.iij-ii.co.jp/en/ IIJ Innovation Institute Inc.]<br />
:Haskell is suitable for many kinds of domain, and GHC's support for lightweight threads makes it attractive for concurrency applications. An exception has been network server programming because GHC 6.12 and earlier have an IO manager that is limited to 1024 network sockets. GHC 7 has a new IO manager implementation that gets rid of this limitation.<br />
<br />
:This project will implement several network servers to demonstrate that Haskell is suitable for network servers that handle a massive number of concurrent connections.<br />
<br />
;[http://www.lanl.gov/ Los Alamos National Laboratory]<br />
:This project will use parallel Haskell to implement high-performance Monte Carlo algorithms, a class of algorithms which use randomness to sample large or otherwise intractable solution spaces. The initial goal is a particle-based MC algorithm suitable for modeling the flow of radiation, with application to problems in astrophysics. From this, the project is expected to move to identification of suitable abstractions for expressing a wider variety of Monte Carlo algorithms, and using models for different physical phenomena.<br />
<br />
;[http://www.willowgarage.com/ Willow Garage Inc.]<br />
:Distributed Rigid Body Dynamics in ROS<br />
<br />
:Willow Garage seeks a high-level representation for a distributed rigid body dynamics simulation, capable of excellent parallel speedup on current and foreseeable hardware, yet linking to existing optimized libraries for low-level message passing and matrix math.<br />
<br />
:This project will drive API, performance, and profiling tool requirements for Haskell's interface to the Message Passing Interface (MPI) specification, an industry-standard in High Performance Computing (HPC), as used on clusters of many nodes.<br />
<br />
:Competing internal initiatives use C++/MPI and CUDA directly.<br />
<br />
:Willow Garage aims to lay the groundwork for personal robotics applications in everyday life. ROS ([http://ros.org Robot Operating System]) is an open source, meta-operating system for your robot.<br />
<br />
; [http://www.tid.es/en/ Telefónica I+D]<br />
<br />
: This project is to demonstrate parallel Haskell technology using the example of graph algorithms in large graphs representing social networks. The current work is on parallel versions of the [http://en.wikipedia.org/wiki/Bron%E2%80%93Kerbosch_algorithm Bron-Kerbosch algorithm] for finding maximal cliques in a graph. The initial goal is to demonstrate good speedups on multi-core and the overall aim to demonstrate good speedups of a distributed version of the algorithm using Cloud Haskell.<br />
<br />
; [http://www.vett.co.uk/ VETT UK]<br />
<br />
: VETT are working on a transaction processing application using Cloud Haskell. More details will be available shortly.</div>

Parallel/Digest (revision of 2012-07-21T08:09:12Z)
<hr />
<div>The Parallel Haskell Digest is a newsletter aiming to show off all the work that's going on using parallelism and concurrency in the Haskell community.<br />
<br />
We hope to offer a monthly recap of news, interesting blog posts, and discussions about parallelism in Haskell. For people who are new to parallelism and concurrency in Haskell, or who just have a passing interest, we offer small tastes of both, with regular features like the Word of the Month, Featured Code, and Parallel Puzzlers.<br />
<br />
== Archives ==<br />
<br />
# [http://www.well-typed.com/blog/52 2011-03-31] - spark and Hulk<br />
# [http://www.well-typed.com/blog/53 2011-05-11] - threads<br />
# [http://www.well-typed.com/blog/55 2011-06-16] - parallel arrays<br />
# [http://www.well-typed.com/blog/56 2011-07-22] - par and pseq<br />
# [http://www.well-typed.com/blog/58 2011-08-21] - strategy<br />
# [http://www.well-typed.com/blog/60 2011-10-06] - dataflow<br />
# [http://www.well-typed.com/blog/62 2011-12-24] - (news catch up)<br />
# [http://www.well-typed.com/blog/64 2012-03-02] - MVar (lock)<br />
# [http://www.well-typed.com/blog/65 2012-04-20] - transaction<br />
# [http://www.well-typed.com/blog/66 2012-05-18] - channel<br />
# [http://www.well-typed.com/blog/67 2012-07-05] - actor ([http://www.well-typed.com/blog/68 part 2])</div>

GHC/Data Parallel Haskell (revision of 2012-07-12T10:35:42Z)
<hr />
<div>[[Category:GHC|Data Parallel Haskell]]<br />
== Data Parallel Haskell ==<br />
<br />
:''Searching for Parallel Haskell? DPH is a fantastic effort, but it's not the only way to do parallelism in Haskell. Try the [[Parallel|Parallel Haskell portal]] for a more general view.''<br />
<br />
''Data Parallel Haskell'' is the codename for an extension to the Glasgow Haskell Compiler and its libraries to support [http://www.cs.cmu.edu/~scandal/cacm/cacm2.html nested data parallelism] with a focus to utilise multicore CPUs. Nested data parallelism extends the programming model of flat data parallelism, as known from parallel Fortran dialects, to irregular parallel computations (such as divide-and-conquer algorithms) and irregular data structures (such as sparse matrices and tree structures). An introduction to nested data parallelism in Haskell, including some examples, can be found in the paper [http://www.cse.unsw.edu.au/~chak/papers/papers.html#ndp-haskell Nepal – Nested Data-Parallelism in Haskell]. <br />
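To give a feel for the programming model, here is a sketch of a dot product in vectorised Haskell. This code targets the historical DPH implementation described on this page (GHC 7.4 with the dph packages) and will not compile with a modern GHC; the exact module and function names may also differ slightly between DPH versions.

```haskell
{-# LANGUAGE ParallelArrays #-}
{-# OPTIONS_GHC -fvectorise #-}
module DotP (dotp_wrapper) where

import qualified Data.Array.Parallel.Prelude.Double as D
import Data.Array.Parallel

-- [:Double:] is the type of parallel arrays; vectorised code must
-- use the special-purpose DPH prelude rather than the standard one.
dotp :: [:Double:] -> [:Double:] -> Double
dotp xs ys = D.sumP (zipWithP (D.*) xs ys)

-- Wrapper callable from ordinary, non-vectorised Haskell code.
dotp_wrapper :: PArray Double -> PArray Double -> Double
{-# NOINLINE dotp_wrapper #-}
dotp_wrapper v w = dotp (fromPArrayP v) (fromPArrayP w)
```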
<br />
<center><br />
http://17.media.tumblr.com/VtG26AnzIklk0sh6YkZSLYNPo1_400.png<br />
</center><br />
<br />
''This is the performance of a dot product of two vectors of 10 million doubles each using Data Parallel Haskell. Both machines have 8 cores. Each core of the T2 has 8 hardware thread contexts. ''<br />
<br />
__TOC__<br />
<br />
<br />
<br />
<br />
=== Project status ===<br />
<br />
Data Parallel Haskell (DPH) is available as an add-on for [http://haskell.org/ghc/download_ghc_7_4_1 GHC 7.4] in the form of a few separate cabal packages. All major components of DPH are implemented, including code vectorisation and parallel execution on multicore systems. However, the implementation has many limitations and probably also many bugs. Major limitations include the inability to mix vectorised and non-vectorised code in a single Haskell module, the need to use a feature-deprived, special-purpose Prelude for vectorised code, and a lack of optimisations (leading to poor performance in some cases).<br />
<br />
The current implementation should work well for code with nested parallelism, where the depth of nesting is statically fixed or no user-defined nested-parallel datatypes are used. Support for user-defined nested-parallel datatypes is still rather experimental and will likely result in inefficient code.<br />
<br />
DPH focuses on irregular data parallelism. For regular data parallel code in Haskell, please consider using the companion library [http://repa.ouroborus.net/ Repa], which builds on the parallel array infrastructure of DPH.<br />
<br />
'''Note:''' This page describes version 0.6.* of the DPH libraries. We only support this version of DPH as well as the current development version.<br />
<br />
'''Disclaimer:''' Data Parallel Haskell is very much '''work in progress.''' Some components are already usable, and we explain here how to use them. However, please be aware that APIs are still in flux and functionality may change during development.<br />
<br />
=== Where to get it ===<br />
<br />
To get DPH, install [http://haskell.org/ghc/download_ghc_7_4_1 GHC 7.4] and then install the DPH libraries with <code>cabal install</code> as follows:<br />
<blockquote><br />
<code>$ cabal update</code><br><br />
<code>$ cabal install dph-examples</code><br />
</blockquote><br />
This will install all DPH packages, including a set of simple examples; see [http://hackage.haskell.org/package/dph-examples dph-examples]. (The package [http://hackage.haskell.org/package/dph-examples dph-examples] depends on OpenGL and Gloss, as both are used in a visualiser for the ''n''-body example.)<br />
<br />
'''WARNING:''' The vanilla GHC distribution does '''not''' include <code>cabal install</code>. This is in contrast to the Haskell Platform, which does include <code>cabal install</code>. If you want to avoid installing the <code>cabal-install</code> package and its dependencies explicitly, simply install GHC 7.4.1 in addition to your current Haskell Platform installation. (How to do that depends on your platform and personal preferences. One option is to install a bindist into your home directory with symbolic links to the binaries including the version number.) Then, install DPH with the following command:<br />
<blockquote><br />
<code>cabal install --with-compiler=`which ghc-7.4.1` --with-hc-pkg=`which ghc-pkg-7.4.1` dph-examples</code><br />
</blockquote><br />
<br />
=== Overview ===<br />
<br />
From a user's point of view, Data Parallel Haskell adds a new data type to Haskell –namely, ''parallel arrays''– as well as operations on parallel arrays. Syntactically, parallel arrays are like lists, except that instead of square brackets <hask>[</hask> and <hask>]</hask>, parallel arrays use square brackets with a colon <hask>[:</hask> and <hask>:]</hask>. In particular, <hask>[:e:]</hask> is the type of parallel arrays with elements of type <hask>e</hask>; the expression <hask>[:x, y, z:]</hask> denotes a three element parallel array with elements <hask>x</hask>, <hask>y</hask>, and <hask>z</hask>; and <hask>[:x + 1 | x <- xs:]</hask> represents a simple array comprehension. More sophisticated array comprehensions (including the equivalent of [http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#parallel-list-comprehensions parallel list comprehensions]) as well as enumerations and pattern matching proceed in an analogous manner. Moreover, the array library of DPH defines variants of most list operations from the Haskell Prelude and the standard <hask>List</hask> library (e.g., we have <hask>lengthP</hask>, <hask>sumP</hask>, <hask>mapP</hask>, and so on).<br />
<br />
The two main differences between lists and parallel arrays are that (1) parallel arrays are a strict data structure and (2) they are not inductively defined. Parallel arrays are strict in that demanding a single element demands all elements of the array. Hence, all elements of a parallel array might be evaluated in parallel. To facilitate such parallel evaluation, operations on parallel arrays should treat arrays as aggregate structures that are manipulated in their entirety (instead of the inductive, element-wise processing that is the foundation of all Haskell list functions).<br />
<br />
As a consequence, parallel arrays are always finite, and standard functions that yield infinite lists, such as <hask>enumFrom</hask> and <hask>repeat</hask>, have no corresponding array operation. Moreover, parallel arrays only have an undirected fold function <hask>foldP</hask> that requires an associative function as an argument – such a fold function has a parallel step complexity of O(log ''n'') for arrays of length ''n''. Parallel arrays also come with some aggregate operations that are absent from the standard list library, such as <hask>permuteP</hask>.<br />
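To see why <hask>foldP</hask> insists on an associative argument, consider a sequential sketch in plain Haskell (no DPH required): a parallel fold combines chunks of the array in a tree shape, so the bracketing differs from a left fold, and only associativity guarantees an identical result. The <hask>treeFold</hask> helper below is purely illustrative and not part of the DPH API.<br />

```haskell
-- Sketch: a tree-shaped fold, mimicking how a parallel fold combines
-- sub-results; 'treeFold' is an illustrative helper, not a DPH function.
treeFold :: (a -> a -> a) -> a -> [a] -> a
treeFold _ z []  = z
treeFold _ _ [x] = x
treeFold f z xs  = f (treeFold f z ls) (treeFold f z rs)
  where (ls, rs) = splitAt (length xs `div` 2) xs

main :: IO ()
main = do
  let xs = [1 .. 100] :: [Int]
  print (treeFold (+) 0 xs == foldl (+) 0 xs)  -- True: (+) is associative
  print (treeFold (-) 0 xs == foldl (-) 0 xs)  -- False: (-) is not
```

For an associative operation the bracketing is irrelevant, which is exactly what lets the runtime pick an O(log ''n'')-depth combination tree.<br />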
<br />
=== A simple example ===<br />
<br />
As a simple example of a DPH program, consider the following code that computes the dot product of two vectors given as parallel arrays:<br />
<haskell><br />
dotp :: Num a => [:a:] -> [:a:] -> a<br />
dotp xs ys = sumP [:x * y | x <- xs | y <- ys:]<br />
</haskell><br />
This code uses an array variant of [http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#parallel-list-comprehensions parallel list comprehensions], which could alternatively be written as <hask>[:x * y | (x, y) <- zipP xs ys:]</hask>, but should otherwise be self-explanatory to any Haskell programmer.<br />
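For comparison, here is the ordinary sequential list version; the DPH code above has exactly the same shape, with <hask>sumP</hask> and the parallel-array comprehension replacing their list counterparts:<br />

```haskell
-- Sequential list analogue of dotp: same shape, Prelude functions
-- instead of their parallel-array variants.
dotp_list :: Num a => [a] -> [a] -> a
dotp_list xs ys = sum [x * y | (x, y) <- zip xs ys]

main :: IO ()
main = print (dotp_list [1, 2, 3] [4, 5, 6 :: Double])  -- 32.0
```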
<br />
=== Running DPH programs ===<br />
<br />
Unfortunately, we cannot use the above implementation of <hask>dotp</hask> directly in the current preliminary implementation of DPH. In the following, we will discuss how the code needs to be modified and how it needs to be compiled and run for parallel execution. GHC applies an elaborate transformation to DPH code, called ''vectorisation'', that turns nested into flat data parallelism. This transformation is only useful for code that is executed in parallel (i.e., code that manipulates parallel arrays), but for parallel code it dramatically simplifies load balancing.<br />
<br />
==== No type classes ====<br />
<br />
Vectorisation does not currently handle type classes. Hence, we need to avoid overloaded operations in parallel code. To account for that limitation, we specialise <hask>dotp</hask> to doubles.<br />
<haskell><br />
dotp_double :: [:Double:] -> [:Double:] -> Double<br />
dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]<br />
</haskell><br />
<br />
==== Special Prelude ====<br />
<br />
As the current implementation of vectorisation cannot handle some language constructs, we cannot use it to vectorise those parts of the standard Prelude that might be used in parallel code (such as arithmetic operations). Instead, DPH comes with its own (rather limited) Prelude in [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude.hs Data.Array.Parallel.Prelude] plus four extra modules, one per supported numeric type: [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Float.hs Data.Array.Parallel.Prelude.Float], [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Double.hs Data.Array.Parallel.Prelude.Double], [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Int.hs Data.Array.Parallel.Prelude.Int], and [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Word8.hs Data.Array.Parallel.Prelude.Word8]. These four modules support the same functions (on different types), and if a program needs to use more than one, they need to be imported qualified (as we cannot use type classes in vectorised code in the current version). Moreover, we have [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Bool.hs Data.Array.Parallel.Prelude.Bool]. If your code needs any other numeric types or functions that are not implemented in these Prelude modules, you currently need to implement and vectorise that functionality yourself.<br />
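Should a vectorised module need more than one of these numeric Preludes at once, the qualified-import pattern would look as follows (a sketch; only the module names are taken from the list above):<br />

```haskell
import qualified Data.Array.Parallel.Prelude.Double as D
import qualified Data.Array.Parallel.Prelude.Int    as I
```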
<br />
To compile <hask>dotp_double</hask>, we add the following import statements:<br />
<haskell><br />
import qualified Prelude<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.Prelude<br />
import Data.Array.Parallel.Prelude.Double<br />
</haskell><br />
<br />
==== Impedance matching ====<br />
<br />
Special care is needed at the interface between vectorised and non-vectorised code. Currently, only simple types can be passed between these different kinds of code. In particular, parallel arrays (which might be nested) '''cannot''' be passed. Instead, we need to pass flat arrays of type <hask>PArray</hask>. This type is exported by our special-purpose Prelude together with a conversion function <hask>fromPArrayP</hask> (which is specific to the element type due to the lack of type classes in vectorised code). <br />
<br />
Using this conversion function, we define a wrapper function for <hask>dotp_double</hask> that we export and use from non-vectorised code.<br />
<haskell><br />
dotp_wrapper :: PArray Double -> PArray Double -> Double<br />
{-# NOINLINE dotp_wrapper #-}<br />
dotp_wrapper v w = dotp_double (fromPArrayP v) (fromPArrayP w)<br />
</haskell><br />
It is important to mark this function as <hask>NOINLINE</hask> as we don't want it to be inlined into non-vectorised code.<br />
<br />
==== Compiling vectorised code ====<br />
<br />
The syntax for parallel arrays is an extension to Haskell 2010 that needs to be enabled with the language option <hask>ParallelArrays</hask>. Furthermore, we need to explicitly tell GHC if we want to vectorise a module by using the <hask>-fvectorise</hask> option.<br />
<br />
Currently, GHC either vectorises all code in a module or none. This can be inconvenient as some parts of a program cannot be vectorised – for example, code in the <hask>IO</hask> monad (the radical re-ordering of computations performed by the vectorisation transformation is only valid for pure code). As a consequence, the programmer currently needs to partition vectorised and non-vectorised code carefully over different modules.<br />
<br />
Overall, we get the following complete module definition for the dot-product code:<br />
<haskell><br />
{-# LANGUAGE ParallelArrays #-}<br />
{-# OPTIONS_GHC -fvectorise #-}<br />
<br />
module DotP (dotp_wrapper)<br />
where<br />
<br />
import qualified Prelude<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.Prelude<br />
import Data.Array.Parallel.Prelude.Double<br />
<br />
dotp_double :: [:Double:] -> [:Double:] -> Double<br />
dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]<br />
<br />
dotp_wrapper :: PArray Double -> PArray Double -> Double<br />
{-# NOINLINE dotp_wrapper #-}<br />
dotp_wrapper v w = dotp_double (fromPArrayP v) (fromPArrayP w)<br />
</haskell><br />
Assuming the module is in a file <hask>DotP.hs</hask>, we compile it as follows:<br />
<blockquote><br />
<code>ghc -c -Odph -fdph-par DotP.hs</code><br />
</blockquote><br />
The option <code>-Odph</code> enables a predefined set of GHC optimisation options that works best for DPH code and <code>-fdph-par</code> selects the standard parallel DPH backend library. (This is currently the only relevant backend, but there may be others in the future.)<br />
<br />
==== Using vectorised code ====<br />
<br />
Finally, we need a main module that calls the vectorised code, but is itself not vectorised, so that it may contain I/O. In this simple example, we convert two simple lists to parallel arrays, compute their dot product, and print the result:<br />
<haskell><br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.PArray (PArray, fromList)<br />
<br />
import DotP (dotp_wrapper) -- import vectorised code<br />
<br />
main :: IO ()<br />
main<br />
  = let v      = fromList [1..10]       -- convert lists...<br />
        w      = fromList [1,2..20]     -- ...to parallel arrays<br />
        result = dotp_wrapper v w       -- invoke vectorised code<br />
    in<br />
    print result                        -- print the result<br />
</haskell><br />
We compile this module with<br />
<blockquote><br />
<code>ghc -c -Odph -fdph-par Main.hs</code><br />
</blockquote><br />
and finally link the two modules into an executable <code>dotp</code> with<br />
<blockquote><br />
<code>ghc -o dotp -threaded -fdph-par -rtsopts DotP.o Main.o</code><br />
</blockquote><br />
We need the <code>-threaded</code> option to link with GHC's multi-threaded runtime and <code>-fdph-par</code> to link with the standard parallel DPH backend. We include <code>-rtsopts</code> to be able to explicitly determine the number of OS threads used to execute our code.<br />
<br />
==== Generating input data ====<br />
<br />
To see any benefit from parallel execution, a data-parallel program needs to operate on a sufficiently large data set. Hence, instead of two small constant vectors, we might want to generate some larger input data:<br />
<haskell><br />
import System.Random (newStdGen)<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.PArray (PArray, randomRs)<br />
<br />
import DotP (dotp_wrapper) -- import vectorised code<br />
<br />
main :: IO ()<br />
main<br />
  = do<br />
      gen1 <- newStdGen<br />
      gen2 <- newStdGen<br />
      let v = randomRs n range gen1<br />
          w = randomRs n range gen2<br />
      print $ dotp_wrapper v w          -- invoke vectorised code and print the result<br />
  where<br />
    n     = 10000                       -- vector length<br />
    range = (-100, 100)                 -- range of vector elements<br />
</haskell><br />
We compile and link the program as described above.<br />
<br />
'''NOTE:''' The code as presented is unsuitable for benchmarking as we wouldn't want to measure the purely sequential random number generation (that dominates this simple program). For benchmarking, we would want to guarantee that the generated vectors are fully evaluated before taking the time. The module [http://www.haskell.org/ghc/docs/latest/html/libraries/dph-par/Data-Array-Parallel-PArray.html Data.Array.Parallel.PArray] exports the function <hask>nf</hask> for this purpose. For a variant of the dot-product example code that determines the CPU time consumed by <hask>dotp_wrapper</hask>, see [[GHC/Data Parallel Haskell/MainTimed|timed dot product]].<br />
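The pattern can be sketched in plain Haskell, with lists standing in for <hask>PArray</hask>s, so that <hask>evaluate . sum</hask> plays the role of <hask>nf</hask> (the helper names here are illustrative, not DPH API):<br />

```haskell
import Control.Exception (evaluate)
import System.CPUTime    (getCPUTime)

-- Sequential sketch of the benchmarking pattern: force the randomly
-- generated inputs *before* starting the clock, so only the dot
-- product itself is measured.
dotp_list :: [Double] -> [Double] -> Double
dotp_list xs ys = sum (zipWith (*) xs ys)

main :: IO ()
main = do
  let v = [fromIntegral i * 0.5 | i <- [1 .. 1000000 :: Int]]
      w = [fromIntegral i * 2.0 | i <- [1 .. 1000000 :: Int]]
  _     <- evaluate (sum v + sum w)   -- force inputs (nf's job in DPH)
  start <- getCPUTime
  r     <- evaluate (dotp_list v w)
  end   <- getCPUTime
  putStrLn ("result: " ++ show r)
  putStrLn ("CPU picoseconds: " ++ show (end - start))
```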
<br />
==== Parallel execution ====<br />
<br />
By default, a Haskell program uses only one OS thread, and hence, also only one CPU core for execution. To use multiple cores, we need to invoke the executable with an explicit RTS command line option — e.g., <code>./dotp +RTS -N2</code> uses two cores. (Strictly speaking, it uses two OS threads, which will be scheduled on two separate cores if available.) To determine the runtime of parallel code, measuring CPU time, as demonstrated in the [[GHC/Data Parallel Haskell/MainTimed|timed variant of the dot product example]], is not sufficient anymore. We need to measure wall clock times instead. For proper benchmarking, it is advisable to use a library, such as [http://hackage.haskell.org/package/criterion criterion].<br />
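A minimal wall-clock measurement in plain Haskell might look as follows (using the <hask>time</hask> package that ships with GHC; the <hask>timed</hask> helper is our own, and for serious measurements criterion remains the better choice):<br />

```haskell
import Control.Exception (evaluate)
import Data.Time.Clock   (diffUTCTime, getCurrentTime)

-- Measure elapsed (wall-clock) time of an IO action; with multiple
-- cores this is the number that shrinks, while total CPU time may not.
timed :: IO a -> IO (a, Double)
timed act = do
  start <- getCurrentTime
  r     <- act
  end   <- getCurrentTime
  return (r, realToFrac (diffUTCTime end start))

main :: IO ()
main = do
  (r, secs) <- timed (evaluate (sum [1 .. 10000000 :: Double]))
  putStrLn (show r ++ " computed in " ++ show secs ++ "s")
```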
<br />
A beautiful property of DPH is that the number of threads used to execute a program only affects its performance, but not the result. So, it is fine to do all debugging concerning correctness with just one core and to move to multiple cores only for performance debugging.<br />
<br />
Data Parallel Haskell –and more generally, GHC's multi-threading support– currently only aims at multicore processors or uniform memory access (UMA) multi-processors. Performance on non-uniform memory access (NUMA) machines is often bad as GHC's runtime makes no effort at optimising placement.<br />
<br />
=== Further examples and documentation ===<br />
<br />
Further examples are available in the [http://darcs.haskell.org/packages/dph/dph-examples/ examples directory of the package dph source]. This code also includes reference implementations for some of the examples, which are useful for benchmarking. <br />
<br />
The interfaces of the various components of the DPH library are in the [http://hackage.haskell.org/package/dph-par library documentation] on Hackage.<br />
<br />
=== Designing parallel programs ===<br />
<br />
Data Parallel Haskell is a high-level language to code parallel algorithms. Like plain Haskell, DPH frees the programmer from many low-level operational considerations (such as thread creation, thread synchronisation, critical sections, and deadlock avoidance). Nevertheless, parallel algorithm design and many performance considerations (such as judging whether a computation has enough parallelism to be worth exploiting) remain the responsibility of the programmer.<br />
<br />
DPH encourages a data-driven style of parallel programming and, in good Haskell tradition, puts the choice of data types first. Specifically, the choice between using lists or parallel arrays for a data structure determines whether operations on the structure will be executed sequentially or in parallel. In addition to suitably combining standard lists and parallel arrays, it is often also useful to embed parallel arrays in a user-defined inductive structure, such as the following definition of parallel rose trees:<br />
<haskell><br />
data RTree a = RNode [:RTree a:]<br />
</haskell><br />
The tree is inductively defined; hence, tree traversals will proceed sequentially, level by level. However, the children of each node are held in parallel arrays, and hence, may be traversed in parallel. This structure is, for example, useful in parallel adaptive algorithms based on a hierarchical decomposition, such as the Barnes-Hut algorithm for solving the ''N''-body problem as discussed in more detail in the paper [http://www.cse.unsw.edu.au/~chak/papers/PLKC08.html Harnessing the Multicores: Nested Data Parallelism in Haskell.]<br />
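A sequential sketch of such a traversal, with ordinary lists standing in for the parallel arrays <hask>[: :]</hask> (plain Haskell; in the DPH version, the <hask>map</hask> over a node's children is the step that would run in parallel):<br />

```haskell
-- Rose tree with list children as a stand-in for [:RTree:].
data RTree = RNode [RTree]

-- Recursion over depth is sequential; mapping over the children of a
-- node is the data-parallel opportunity in the DPH version.
size :: RTree -> Int
size (RNode ts) = 1 + sum (map size ts)

depth :: RTree -> Int
depth (RNode []) = 1
depth (RNode ts) = 1 + maximum (map depth ts)

main :: IO ()
main = do
  let t = RNode [RNode [], RNode [RNode []]]
  print (size t, depth t)  -- (4,3)
```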
<br />
For a general introduction to nested data parallelism and its cost model, see Blelloch's [http://www.cs.cmu.edu/~scandal/cacm/cacm2.html Programming Parallel Algorithms.]<br />
<br />
=== Further reading and information on the implementation ===<br />
<br />
DPH has two major components: (1) the ''vectorisation transformation'' and (2) the ''generic DPH library for flat parallel arrays''. The vectorisation transformation turns nested into flat data-parallelism and is described in detail in the paper [http://www.cse.unsw.edu.au/~chak/papers/PLKC08.html Harnessing the Multicores: Nested Data Parallelism in Haskell.] The generic array library maps flat data-parallelism to GHC's multi-threaded multicore support and is described in the paper [http://www.cse.unsw.edu.au/~chak/papers/CLPKM06.html Data Parallel Haskell: a status report]. The same topics are also covered in the slides for the two talks [http://research.microsoft.com/~simonpj/papers/ndp/NdpSlides.pdf Nested data parallelism in Haskell] and [http://dataparallel.googlegroups.com/web/UNSW%20CGO%20DP%202007.pdf Compiling nested data parallelism by program transformation].<br />
<br />
For further reading, consult this [[GHC/Data Parallel Haskell/References|collection of background papers, and pointers to other people's work]]. If you are really curious and like to know implementation details and the internals of the Data Parallel Haskell project, much of it is described on the GHC developer wiki on the pages covering [http://hackage.haskell.org/trac/ghc/wiki/DataParallel data parallelism] and [http://hackage.haskell.org/trac/ghc/wiki/TypeFunctions type families].<br />
<br />
=== Feedback ===<br />
<br />
Please file bug reports at [http://hackage.haskell.org/trac/ghc/ GHC's bug tracker]. Moreover, comments and suggestions are very welcome. Please post them to the [mailto:glasgow-haskell-users@haskell.org GHC user's mailing list], or contact the DPH developers directly:<br />
* [http://www.cse.unsw.edu.au/~chak/ Manuel Chakravarty]<br />
* [http://www.cse.unsw.edu.au/~keller/ Gabriele Keller]<br />
* [http://www.cse.unsw.edu.au/~rl/ Roman Leshchinskiy]<br />
* [http://www.cse.unsw.edu.au/~benl/ Ben Lippmeier]<br />
* [http://research.microsoft.com/~simonpj/ Simon Peyton Jones]</div>EricKowhttps://wiki.haskell.org/index.php?title=GHC/Data_Parallel_Haskell&diff=46526GHC/Data Parallel Haskell2012-07-12T10:35:08Z<p>EricKow: /* Data Parallel Haskell */</p>
<hr />
<div>[[Category:GHC|Data Parallel Haskell]]<br />
== Data Parallel Haskell ==<br />
<br />
''Searching for Parallel Haskell? DPH is a fantastic effort, but it's not the only way to do parallelism in Haskell. Try the [[Parallel|Parallel Haskell portal]]''<br />
<br />
</blockquote><br />
and finally link the two modules into an executable <code>dotp</code> with<br />
<blockquote><br />
<code>ghc -o dotp -threaded -fdph-par -rtsopts DotP.o Main.o</code><br />
</blockquote><br />
We need the <code>-threaded</code> option to link with GHC's multi-threaded runtime and <code>-fdph-par</code> to link with the standard parallel DPH backend. We include <code>-rtsopts</code> so that we can explicitly set the number of OS threads used to execute our code at run time.<br />
<br />
==== Generating input data ====<br />
<br />
To see any benefit from parallel execution, a data-parallel program needs to operate on a sufficiently large data set. Hence, instead of two small constant vectors, we might want to generate some larger input data:<br />
<haskell><br />
import System.Random (newStdGen)<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.PArray (PArray, randomRs)<br />
<br />
import DotP (dotp_wrapper) -- import vectorised code<br />
<br />
main :: IO ()<br />
main<br />
= do <br />
gen1 <- newStdGen<br />
gen2 <- newStdGen<br />
let v = randomRs n range gen1<br />
w = randomRs n range gen2<br />
print $ dotp_wrapper v w -- invoke vectorised code and print the result<br />
where<br />
n = 10000 -- vector length<br />
range = (-100, 100) -- range of vector elements<br />
</haskell><br />
We compile and link the program as described above.<br />
<br />
'''NOTE:''' The code as presented is unsuitable for benchmarking as we wouldn't want to measure the purely sequential random number generation (that dominates this simple program). For benchmarking, we would want to guarantee that the generated vectors are fully evaluated before taking the time. The module [http://www.haskell.org/ghc/docs/latest/html/libraries/dph-par/Data-Array-Parallel-PArray.html Data.Array.Parallel.PArray] exports the function <hask>nf</hask> for this purpose. For a variant of the dot-product example code that determines the CPU time consumed by <hask>dotp_wrapper</hask>, see [[GHC/Data Parallel Haskell/MainTimed|timed dot product]].<br />
<br />
==== Parallel execution ====<br />
<br />
By default, a Haskell program uses only one OS thread, and hence, also only one CPU core for execution. To use multiple cores, we need to invoke the executable with an explicit RTS command line option — e.g., <code>./dotp +RTS -N2</code> uses two cores. (Strictly speaking, it uses two OS threads, which will be scheduled on two separate cores if available.) To determine the runtime of parallel code, measuring CPU time, as demonstrated in the [[GHC/Data Parallel Haskell/MainTimed|timed variant of the dot product example]], is not sufficient anymore. We need to measure wall clock times instead. For proper benchmarking, it is advisable to use a library, such as [http://hackage.haskell.org/package/criterion criterion].<br />
<br />
A beautiful property of DPH is that the number of threads used to execute a program affects only its performance, never its result. So, it is fine to do all debugging concerning correctness with just one core and to move to multiple cores only for performance debugging.<br />
<br />
Data Parallel Haskell (and, more generally, GHC's multi-threading support) currently targets only multicore processors and uniform memory access (UMA) multi-processors. Performance on non-uniform memory access (NUMA) machines is often poor, as GHC's runtime makes no effort to optimise data placement.<br />
<br />
=== Further examples and documentation ===<br />
<br />
Further examples are available in the [http://darcs.haskell.org/packages/dph/dph-examples/ examples directory of the dph package source]. This code also includes reference implementations for some of the examples, which are useful for benchmarking.<br />
<br />
The interfaces of the various components of the DPH library are in the [http://hackage.haskell.org/package/dph-par library documentation] on Hackage.<br />
<br />
=== Designing parallel programs ===<br />
<br />
Data Parallel Haskell is a high-level language for coding parallel algorithms. Like plain Haskell, DPH frees the programmer from many low-level operational considerations (such as thread creation, thread synchronisation, critical sections, and deadlock avoidance). Nevertheless, the full responsibility for parallel algorithm design and for many performance considerations (such as whether a computation has enough parallelism to be worth exploiting) still rests with the programmer.<br />
<br />
DPH encourages a data-driven style of parallel programming and, in good Haskell tradition, puts the choice of data types first. Specifically, the choice between using lists or parallel arrays for a data structure determines whether operations on the structure will be executed sequentially or in parallel. In addition to suitably combining standard lists and parallel arrays, it is often also useful to embed parallel arrays in a user-defined inductive structure, such as the following definition of parallel rose trees:<br />
<haskell><br />
data RTree a = RNode [:RTree a:]<br />
</haskell><br />
The tree is inductively defined; hence, tree traversals will proceed sequentially, level by level. However, the children of each node are held in parallel arrays, and hence, may be traversed in parallel. This structure is, for example, useful in parallel adaptive algorithms based on a hierarchical decomposition, such as the Barnes-Hut algorithm for solving the ''N''-body problem as discussed in more detail in the paper [http://www.cse.unsw.edu.au/~chak/papers/PLKC08.html Harnessing the Multicores: Nested Data Parallelism in Haskell.]<br />
<br />
For a general introduction to nested data parallelism and its cost model, see Blelloch's [http://www.cs.cmu.edu/~scandal/cacm/cacm2.html Programming Parallel Algorithms.]<br />
<br />
=== Further reading and information on the implementation ===<br />
<br />
DPH has two major components: (1) the ''vectorisation transformation'' and (2) the ''generic DPH library for flat parallel arrays''. The vectorisation transformation turns nested into flat data-parallelism and is described in detail in the paper [http://www.cse.unsw.edu.au/~chak/papers/PLKC08.html Harnessing the Multicores: Nested Data Parallelism in Haskell.] The generic array library maps flat data-parallelism to GHC's multi-threaded multicore support and is described in the paper [http://www.cse.unsw.edu.au/~chak/papers/CLPKM06.html Data Parallel Haskell: a status report]. The same topics are also covered in the slides for the two talks [http://research.microsoft.com/~simonpj/papers/ndp/NdpSlides.pdf Nested data parallelism in Haskell] and [http://dataparallel.googlegroups.com/web/UNSW%20CGO%20DP%202007.pdf Compiling nested data parallelism by program transformation].<br />
<br />
For further reading, consult this [[GHC/Data Parallel Haskell/References|collection of background papers, and pointers to other people's work]]. If you are really curious and like to know implementation details and the internals of the Data Parallel Haskell project, much of it is described on the GHC developer wiki on the pages covering [http://hackage.haskell.org/trac/ghc/wiki/DataParallel data parallelism] and [http://hackage.haskell.org/trac/ghc/wiki/TypeFunctions type families].<br />
<br />
=== Feedback ===<br />
<br />
Please file bug reports at [http://hackage.haskell.org/trac/ghc/ GHC's bug tracker]. Moreover, comments and suggestions are very welcome. Please post them to the [mailto:glasgow-haskell-users@haskell.org GHC user's mailing list], or contact the DPH developers directly:<br />
* [http://www.cse.unsw.edu.au/~chak/ Manuel Chakravarty]<br />
* [http://www.cse.unsw.edu.au/~keller/ Gabriele Keller]<br />
* [http://www.cse.unsw.edu.au/~rl/ Roman Leshchinskiy]<br />
* [http://www.cse.unsw.edu.au/~benl/ Ben Lippmeier]<br />
* [http://research.microsoft.com/~simonpj/ Simon Peyton Jones]</div>EricKowhttps://wiki.haskell.org/index.php?title=GHC/Data_Parallel_Haskell&diff=46525GHC/Data Parallel Haskell2012-07-12T10:34:54Z<p>EricKow: /* Data Parallel Haskell */ promote Parallel Haskell portal for confused people</p>
<hr />
<div>[[Category:GHC|Data Parallel Haskell]]<br />
== Data Parallel Haskell ==<br />
<br />
'''Searching for Parallel Haskell? DPH is a fantastic effort, but it's not the only way to do parallelism in Haskell. Try the [[Parallel|Parallel Haskell portal]]'''<br />
<br />
''Data Parallel Haskell'' is the codename for an extension to the Glasgow Haskell Compiler and its libraries to support [http://www.cs.cmu.edu/~scandal/cacm/cacm2.html nested data parallelism] with a focus on utilising multicore CPUs. Nested data parallelism extends the programming model of flat data parallelism, as known from parallel Fortran dialects, to irregular parallel computations (such as divide-and-conquer algorithms) and irregular data structures (such as sparse matrices and tree structures). An introduction to nested data parallelism in Haskell, including some examples, can be found in the paper [http://www.cse.unsw.edu.au/~chak/papers/papers.html#ndp-haskell Nepal – Nested Data-Parallelism in Haskell]. <br />
<br />
<center><br />
http://17.media.tumblr.com/VtG26AnzIklk0sh6YkZSLYNPo1_400.png<br />
</center><br />
<br />
''This is the performance of a dot product of two vectors of 10 million doubles each using Data Parallel Haskell. Both machines have 8 cores. Each core of the T2 has 8 hardware thread contexts. ''<br />
<br />
__TOC__<br />
<br />
=== Project status ===<br />
<br />
Data Parallel Haskell (DPH) is available as an add-on for [http://haskell.org/ghc/download_ghc_7_4_1 GHC 7.4] in the form of a few separate cabal packages. All major components of DPH are implemented, including code vectorisation and parallel execution on multicore systems. However, the implementation has many limitations and probably also many bugs. Major limitations include the inability to mix vectorised and non-vectorised code in a single Haskell module, the need to use a feature-deprived, special-purpose Prelude for vectorised code, and a lack of optimisations (leading to poor performance in some cases).<br />
<br />
The current implementation should work well for code with nested parallelism, where the depth of nesting is statically fixed or no user-defined nested-parallel datatypes are used. Support for user-defined nested-parallel datatypes is still rather experimental and will likely result in inefficient code.<br />
<br />
DPH focuses on irregular data parallelism. For regular data parallel code in Haskell, please consider using the companion library [http://repa.ouroborus.net/ Repa], which builds on the parallel array infrastructure of DPH.<br />
<br />
'''Note:''' This page describes version 0.6.* of the DPH libraries. We support only this version of DPH and the current development version.<br />
<br />
'''Disclaimer:''' Data Parallel Haskell is very much '''work in progress.''' Some components are already usable, and we explain here how to use them. However, please be aware that APIs are still in flux and functionality may change during development.<br />
<br />
=== Where to get it ===<br />
<br />
To get DPH, install [http://haskell.org/ghc/download_ghc_7_4_1 GHC 7.4] and then install the DPH libraries with <code>cabal install</code> as follows:<br />
<blockquote><br />
<code>$ cabal update</code><br><br />
<code>$ cabal install dph-examples</code><br />
</blockquote><br />
This will install all DPH packages, including a set of simple examples; see [http://hackage.haskell.org/package/dph-examples dph-examples]. (The package [http://hackage.haskell.org/package/dph-examples dph-examples] depends on OpenGL and Gloss, as both are used in a visualiser for the nbody example.)<br />
<br />
'''WARNING:''' The vanilla GHC distribution does '''not''' include <code>cabal install</code>. This is in contrast to the Haskell Platform, which does include <code>cabal install</code>. If you want to avoid installing the <code>cabal-install</code> package and its dependencies explicitly, simply install GHC 7.4.1 in addition to your current Haskell Platform installation. (How to do that depends on your platform and personal preferences. One option is to install a bindist into your home directory with symbolic links to the binaries including the version number.) Then, install DPH with the following command:<br />
<blockquote><br />
<code>cabal install --with-compiler=`which ghc-7.4.1` --with-hc-pkg=`which ghc-pkg-7.4.1` dph-examples</code><br />
</blockquote><br />
<br />
=== Overview ===<br />
<br />
From a user's point of view, Data Parallel Haskell adds a new data type to Haskell (namely, ''parallel arrays'') as well as operations on parallel arrays. Syntactically, parallel arrays are like lists, except that instead of square brackets <hask>[</hask> and <hask>]</hask>, parallel arrays use square brackets with a colon <hask>[:</hask> and <hask>:]</hask>. In particular, <hask>[:e:]</hask> is the type of parallel arrays with elements of type <hask>e</hask>; the expression <hask>[:x, y, z:]</hask> denotes a three-element parallel array with elements <hask>x</hask>, <hask>y</hask>, and <hask>z</hask>; and <hask>[:x + 1 | x <- xs:]</hask> represents a simple array comprehension. More sophisticated array comprehensions (including the equivalent of [http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#parallel-list-comprehensions parallel list comprehensions]) as well as enumerations and pattern matching proceed in an analogous manner. Moreover, the array library of DPH defines variants of most list operations from the Haskell Prelude and the standard <hask>List</hask> library (e.g., we have <hask>lengthP</hask>, <hask>sumP</hask>, <hask>mapP</hask>, and so on).<br />
<br />
The two main differences between lists and parallel arrays are that (1) parallel arrays are a strict data structure and (2) that they are not inductively defined. Parallel arrays are strict in that by using a single element, all elements of an array are demanded. Hence, all elements of a parallel array might be evaluated in parallel. To facilitate such parallel evaluation, operations on parallel arrays should treat arrays as aggregate structures that are manipulated in their entirety (instead of the inductive, element-wise processing that is the foundation of all Haskell list functions.)<br />
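The strictness contrast can be made concrete with ordinary lists. In the following plain-Haskell sketch (list code stands in for parallel arrays; <hask>Debug.Trace</hask> logs each evaluation to stderr), demanding the head of a lazy list evaluates only one element, whereas demanding any element of a parallel array forces all of them:<br />

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- A lazy list: each element logs a message when it is evaluated.
  let xs = map (\i -> trace ("evaluating " ++ show i) (i * i)) [1 .. 5 :: Int]
  -- Demanding the head evaluates only the first element, so only
  -- "evaluating 1" appears. A parallel array [:i * i | i <- ...:]
  -- would force all five elements as soon as any one is demanded.
  print (head xs)
```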
<br />
As a consequence, parallel arrays are always finite, and standard functions that yield infinite lists, such as <hask>enumFrom</hask> and <hask>repeat</hask>, have no corresponding array operation. Moreover, parallel arrays only have an undirected fold function <hask>foldP</hask> that requires an associative function as an argument – such a fold function has a parallel step complexity of O(log ''n'') for arrays of length ''n''. Parallel arrays also come with some aggregate operations that are absent from the standard list library, such as <hask>permuteP</hask>.<br />
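The associativity requirement on <hask>foldP</hask> can also be illustrated with ordinary lists. The sketch below (plain Haskell; <hask>parFoldSim</hask> is a name of our own choosing) simulates a two-way parallel fold by folding two halves independently and combining the partial results: for an associative operator such as <hask>(+)</hask> this agrees with the sequential fold, while for a non-associative operator such as <hask>(-)</hask> it does not, which is why the result of <hask>foldP</hask> with such an operator is unspecified:<br />

```haskell
import Data.List (foldl')

-- Simulate a two-way parallel fold: fold each half of the input
-- independently, then combine the partial results with the same operator.
parFoldSim :: (a -> a -> a) -> a -> [a] -> a
parFoldSim f z xs = f (foldl' f z left) (foldl' f z right)
  where
    (left, right) = splitAt (length xs `div` 2) xs

main :: IO ()
main = do
  -- (+) is associative: both strategies yield 55.
  print (parFoldSim (+) 0 [1 .. 10 :: Int], foldl' (+) 0 [1 .. 10 :: Int])
  -- (-) is not associative: 25 versus -55.
  print (parFoldSim (-) 0 [1 .. 10 :: Int], foldl' (-) 0 [1 .. 10 :: Int])
```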
<br />
=== A simple example ===<br />
<br />
As a simple example of a DPH program, consider the following code that computes the dot product of two vectors given as parallel arrays:<br />
<haskell><br />
dotp :: Num a => [:a:] -> [:a:] -> a<br />
dotp xs ys = sumP [:x * y | x <- xs | y <- ys:]<br />
</haskell><br />
This code uses an array variant of [http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#parallel-list-comprehensions parallel list comprehensions], which could alternatively be written as <hask>[:x * y | (x, y) <- zipP xs ys:]</hask>, but should otherwise be self-explanatory to any Haskell programmer.<br />
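For comparison, the ordinary list version of the same function is a direct transliteration (<hask>sumP</hask> and <hask>zipP</hask> become <hask>sum</hask> and <hask>zip</hask>); the name <hask>dotp_list</hask> is our own and the code runs with plain GHC, no DPH required:<br />

```haskell
-- List-based counterpart of dotp: sequential, but same structure.
dotp_list :: Num a => [a] -> [a] -> a
dotp_list xs ys = sum [x * y | (x, y) <- zip xs ys]

main :: IO ()
main = print (dotp_list [1, 2, 3] [4, 5, 6 :: Double])  -- prints 32.0
```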
<br />
=== Running DPH programs ===<br />
<br />
Unfortunately, we cannot use the above implementation of <hask>dotp</hask> directly in the current preliminary implementation of DPH. In the following, we will discuss how the code needs to be modified and how it needs to be compiled and run for parallel execution. GHC applies an elaborate transformation to DPH code, called ''vectorisation'', that turns nested into flat data parallelism. This transformation is only useful for code that is executed in parallel (i.e., code that manipulates parallel arrays), but for parallel code it dramatically simplifies load balancing.<br />
<br />
==== No type classes ====<br />
<br />
Unfortunately, vectorisation does not handle type classes at the moment. Hence, we currently need to avoid overloaded operations in parallel code. To account for that limitation, we specialise <hask>dotp</hask> on doubles.<br />
<haskell><br />
dotp_double :: [:Double:] -> [:Double:] -> Double<br />
dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]<br />
</haskell><br />
<br />
==== Special Prelude ====<br />
<br />
As the current implementation of vectorisation cannot handle some language constructs, we cannot use it to vectorise those parts of the standard Prelude that might be used in parallel code (such as arithmetic operations). Instead, DPH comes with its own (rather limited) Prelude in [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude.hs Data.Array.Parallel.Prelude] plus three extra modules to support one numeric type each [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Float.hs Data.Array.Parallel.Prelude.Float], [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Double.hs Data.Array.Parallel.Prelude.Double], [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Int.hs Data.Array.Parallel.Prelude.Int], and [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Word8.hs Data.Array.Parallel.Prelude.Word8]. These four modules support the same functions (on different types) and if a program needs to use more than one, they need to be imported qualified (as we cannot use type classes in vectorised code in the current version). Moreover, we have [http://darcs.haskell.org/packages/dph/dph-common/Data/Array/Parallel/Prelude/Bool.hs Data.Array.Parallel.Prelude.Bool]. If your code needs any other numeric types or functions that are not implemented in these Prelude modules, you currently need to implement and vectorise that functionality yourself.<br />
<br />
To compile <hask>dotp_double</hask>, we add the following import statements:<br />
<haskell><br />
import qualified Prelude<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.Prelude<br />
import Data.Array.Parallel.Prelude.Double<br />
</haskell><br />
<br />
==== Impedance matching ====<br />
<br />
Special care is needed at the interface between vectorised and non-vectorised code. Currently, only simple types can be passed between these different kinds of code. In particular, parallel arrays (which might be nested) '''cannot''' be passed. Instead, we need to pass flat arrays of type <hask>PArray</hask>. This type is exported by our special-purpose Prelude together with a conversion function <hask>fromPArrayP</hask> (which is specific to the element type due to the lack of type classes in vectorised code). <br />
<br />
Using this conversion function, we define a wrapper function for <hask>dotp_double</hask> that we export and use from non-vectorised code.<br />
<haskell><br />
dotp_wrapper :: PArray Double -> PArray Double -> Double<br />
{-# NOINLINE dotp_wrapper #-}<br />
dotp_wrapper v w = dotp_double (fromPArrayP v) (fromPArrayP w)<br />
</haskell><br />
It is important to mark this function as <hask>NOINLINE</hask> as we don't want it to be inlined into non-vectorised code.<br />
<br />
==== Compiling vectorised code ====<br />
<br />
The syntax for parallel arrays is an extension to Haskell 2010 that needs to be enabled with the language option <hask>ParallelArrays</hask>. Furthermore, we need to explicitly tell GHC if we want to vectorise a module by using the <hask>-fvectorise</hask> option.<br />
<br />
Currently, GHC either vectorises all code in a module or none. This can be inconvenient as some parts of a program cannot be vectorised – for example, code in the <hask>IO</hask> monad (the radical re-ordering of computations performed by the vectorisation transformation is only valid for pure code). As a consequence, the programmer currently needs to partition vectorised and non-vectorised code carefully over different modules.<br />
<br />
Overall, we get the following complete module definition for the dot-product code:<br />
<haskell><br />
{-# LANGUAGE ParallelArrays #-}<br />
{-# OPTIONS_GHC -fvectorise #-}<br />
<br />
module DotP (dotp_wrapper)<br />
where<br />
<br />
import qualified Prelude<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.Prelude<br />
import Data.Array.Parallel.Prelude.Double<br />
<br />
dotp_double :: [:Double:] -> [:Double:] -> Double<br />
dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]<br />
<br />
dotp_wrapper :: PArray Double -> PArray Double -> Double<br />
{-# NOINLINE dotp_wrapper #-}<br />
dotp_wrapper v w = dotp_double (fromPArrayP v) (fromPArrayP w)<br />
</haskell><br />
Assuming the module is in a file <hask>DotP.hs</hask>, we compile it as follows:<br />
<blockquote><br />
<code>ghc -c -Odph -fdph-par DotP.hs</code><br />
</blockquote><br />
The option <code>-Odph</code> enables a predefined set of GHC optimisation options that works best for DPH code and <code>-fdph-par</code> selects the standard parallel DPH backend library. (This is currently the only relevant backend, but there may be others in the future.)<br />
<br />
==== Using vectorised code ====<br />
<br />
Finally, we need a main module that calls the vectorised code, but is itself not vectorised, so that it may contain I/O. In this simple example, we convert two simple lists to parallel arrays, compute their dot product, and print the result:<br />
<haskell><br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.PArray (PArray, fromList)<br />
<br />
import DotP (dotp_wrapper) -- import vectorised code<br />
<br />
main :: IO ()<br />
main<br />
= let v = fromList [1..10] -- convert lists...<br />
w = fromList [1,2..20] -- ...to parallel arrays<br />
result = dotp_wrapper v w -- invoke vectorised code<br />
in<br />
print result -- print the result<br />
</haskell><br />
We compile this module with<br />
<blockquote><br />
<code>ghc -c -Odph -fdph-par Main.hs</code><br />
</blockquote><br />
and finally link the two modules into an executable <code>dotp</code> with<br />
<blockquote><br />
<code>ghc -o dotp -threaded -fdph-par -rtsopts DotP.o Main.o</code><br />
</blockquote><br />
We need the <code>-threaded</code> option to link with GHC's multi-threaded runtime and <code>-fdph-par</code> to link with the standard parallel DPH backend. We include <code>-rtsopts</code> so that we can explicitly set the number of OS threads used to execute our code at run time.<br />
<br />
==== Generating input data ====<br />
<br />
To see any benefit from parallel execution, a data-parallel program needs to operate on a sufficiently large data set. Hence, instead of two small constant vectors, we might want to generate some larger input data:<br />
<haskell><br />
import System.Random (newStdGen)<br />
import Data.Array.Parallel<br />
import Data.Array.Parallel.PArray (PArray, randomRs)<br />
<br />
import DotP (dotp_wrapper) -- import vectorised code<br />
<br />
main :: IO ()<br />
main<br />
= do <br />
gen1 <- newStdGen<br />
gen2 <- newStdGen<br />
let v = randomRs n range gen1<br />
w = randomRs n range gen2<br />
print $ dotp_wrapper v w -- invoke vectorised code and print the result<br />
where<br />
n = 10000 -- vector length<br />
range = (-100, 100) -- range of vector elements<br />
</haskell><br />
We compile and link the program as described above.<br />
<br />
'''NOTE:''' The code as presented is unsuitable for benchmarking as we wouldn't want to measure the purely sequential random number generation (that dominates this simple program). For benchmarking, we would want to guarantee that the generated vectors are fully evaluated before taking the time. The module [http://www.haskell.org/ghc/docs/latest/html/libraries/dph-par/Data-Array-Parallel-PArray.html Data.Array.Parallel.PArray] exports the function <hask>nf</hask> for this purpose. For a variant of the dot-product example code that determines the CPU time consumed by <hask>dotp_wrapper</hask>, see [[GHC/Data Parallel Haskell/MainTimed|timed dot product]].<br />
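The general pattern (force the inputs first, then time only the computation of interest) can be sketched with standard libraries alone; in this plain-list sketch, <hask>Control.Exception.evaluate</hask> together with <hask>sum</hask> plays the role that <hask>nf</hask> plays for <hask>PArray</hask> values:<br />

```haskell
import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)

main :: IO ()
main = do
  let v = [1 .. 100000] :: [Double]
      w = [2, 4 .. 200000] :: [Double]
  -- Force both input vectors before starting the clock, so that
  -- their (purely sequential) construction is not measured.
  _ <- evaluate (sum v `seq` sum w)
  start  <- getCPUTime
  result <- evaluate (sum (zipWith (*) v w))
  end    <- getCPUTime
  putStrLn ("result = " ++ show result)
  putStrLn ("CPU time in picoseconds = " ++ show (end - start))
```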
<br />
==== Parallel execution ====<br />
<br />
By default, a Haskell program uses only one OS thread, and hence, also only one CPU core for execution. To use multiple cores, we need to invoke the executable with an explicit RTS command line option — e.g., <code>./dotp +RTS -N2</code> uses two cores. (Strictly speaking, it uses two OS threads, which will be scheduled on two separate cores if available.) To determine the runtime of parallel code, measuring CPU time, as demonstrated in the [[GHC/Data Parallel Haskell/MainTimed|timed variant of the dot product example]], is not sufficient anymore. We need to measure wall clock times instead. For proper benchmarking, it is advisable to use a library, such as [http://hackage.haskell.org/package/criterion criterion].<br />
<br />
A beautiful property of DPH is that the number of threads used to execute a program affects only its performance, never its result. So, it is fine to do all debugging concerning correctness with just one core and to move to multiple cores only for performance debugging.<br />
<br />
Data Parallel Haskell (and, more generally, GHC's multi-threading support) currently targets only multicore processors and uniform memory access (UMA) multi-processors. Performance on non-uniform memory access (NUMA) machines is often poor, as GHC's runtime makes no effort to optimise data placement.<br />
<br />
=== Further examples and documentation ===<br />
<br />
Further examples are available in the [http://darcs.haskell.org/packages/dph/dph-examples/ examples directory of the dph package source]. This code also includes reference implementations for some of the examples, which are useful for benchmarking.<br />
<br />
The interfaces of the various components of the DPH library are in the [http://hackage.haskell.org/package/dph-par library documentation] on Hackage.<br />
<br />
=== Designing parallel programs ===<br />
<br />
Data Parallel Haskell is a high-level language for coding parallel algorithms. Like plain Haskell, DPH frees the programmer from many low-level operational considerations (such as thread creation, thread synchronisation, critical sections, and deadlock avoidance). Nevertheless, the full responsibility for parallel algorithm design and for many performance considerations (such as whether a computation has enough parallelism to be worth exploiting) still rests with the programmer.<br />
<br />
DPH encourages a data-driven style of parallel programming and, in good Haskell tradition, puts the choice of data types first. Specifically, the choice between using lists or parallel arrays for a data structure determines whether operations on the structure will be executed sequentially or in parallel. In addition to suitably combining standard lists and parallel arrays, it is often also useful to embed parallel arrays in a user-defined inductive structure, such as the following definition of parallel rose trees:<br />
<haskell><br />
data RTree a = RNode [:RTree a:]<br />
</haskell><br />
The tree is inductively defined; hence, tree traversals will proceed sequentially, level by level. However, the children of each node are held in parallel arrays, and hence, may be traversed in parallel. This structure is, for example, useful in parallel adaptive algorithms based on a hierarchical decomposition, such as the Barnes-Hut algorithm for solving the ''N''-body problem as discussed in more detail in the paper [http://www.cse.unsw.edu.au/~chak/papers/PLKC08.html Harnessing the Multicores: Nested Data Parallelism in Haskell.]<br />
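To make the sequential-versus-parallel split concrete, here is a list-based analogue of a traversal over such a tree (the function name <hask>size</hask> is our own). The recursion over levels is sequential; in the DPH version the children would be held in <hask>[:RTree a:]</hask>, the comprehension would be a parallel array comprehension, and <hask>sum</hask> would be <hask>sumP</hask>, so all subtrees of a node could be traversed in parallel:<br />

```haskell
-- List-based analogue: in DPH, children would be [:RTree a:].
data RTree a = RNode [RTree a]

-- Count the nodes of a tree. The level-by-level recursion is
-- sequential; with parallel arrays, the fold over children at each
-- node (sumP [:size t | t <- children:]) could run in parallel.
size :: RTree a -> Int
size (RNode children) = 1 + sum [size t | t <- children]

main :: IO ()
main = print (size tree)  -- prints 5
  where
    tree :: RTree ()
    tree = RNode [RNode [], RNode [RNode [], RNode []]]
```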
<br />
For a general introduction to nested data parallelism and its cost model, see Blelloch's [http://www.cs.cmu.edu/~scandal/cacm/cacm2.html Programming Parallel Algorithms.]<br />
<br />
=== Further reading and information on the implementation ===<br />
<br />
DPH has two major components: (1) the ''vectorisation transformation'' and (2) the ''generic DPH library for flat parallel arrays''. The vectorisation transformation turns nested into flat data-parallelism and is described in detail in the paper [http://www.cse.unsw.edu.au/~chak/papers/PLKC08.html Harnessing the Multicores: Nested Data Parallelism in Haskell.] The generic array library maps flat data-parallelism to GHC's multi-threaded multicore support and is described in the paper [http://www.cse.unsw.edu.au/~chak/papers/CLPKM06.html Data Parallel Haskell: a status report]. The same topics are also covered in the slides for the two talks [http://research.microsoft.com/~simonpj/papers/ndp/NdpSlides.pdf Nested data parallelism in Haskell] and [http://dataparallel.googlegroups.com/web/UNSW%20CGO%20DP%202007.pdf Compiling nested data parallelism by program transformation].<br />
<br />
For further reading, consult this [[GHC/Data Parallel Haskell/References|collection of background papers, and pointers to other people's work]]. If you are really curious and like to know implementation details and the internals of the Data Parallel Haskell project, much of it is described on the GHC developer wiki on the pages covering [http://hackage.haskell.org/trac/ghc/wiki/DataParallel data parallelism] and [http://hackage.haskell.org/trac/ghc/wiki/TypeFunctions type families].<br />
<br />
=== Feedback ===<br />
<br />
Please file bug reports at [http://hackage.haskell.org/trac/ghc/ GHC's bug tracker]. Moreover, comments and suggestions are very welcome. Please post them to the [mailto:glasgow-haskell-users@haskell.org GHC user's mailing list], or contact the DPH developers directly:<br />
* [http://www.cse.unsw.edu.au/~chak/ Manuel Chakravarty]<br />
* [http://www.cse.unsw.edu.au/~keller/ Gabriele Keller]<br />
* [http://www.cse.unsw.edu.au/~rl/ Roman Leshchinskiy]<br />
* [http://www.cse.unsw.edu.au/~benl/ Ben Lippmeier]<br />
* [http://research.microsoft.com/~simonpj/ Simon Peyton Jones]</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel_GHC_Project&diff=46515Parallel GHC Project2012-07-12T07:35:47Z<p>EricKow: /* The Parallel Haskell Digest */ ph-digest 11</p>
<hr />
<div>[[Category:Parallel]]<br />
<br />
== Overview ==<br />
<br />
The Parallel GHC Project is an [http://research.microsoft.com MSR]-funded project to push the real-world use of [[Parallel|parallel Haskell]]. The aim is to demonstrate that parallel Haskell can be employed successfully in industrial projects.<br />
<br />
In the last few years GHC has gained impressive support for parallel programming on commodity multi-core systems. In addition to traditional threads and shared variables, it supports pure parallelism, software transactional memory (STM), and data parallelism. With much of this research and development complete, the next stage is to get the technology into more widespread use.<br />
<br />
This project aims to do the engineering work to solve whatever remaining practical problems are blocking organisations from making serious use of parallelism with GHC. The driving force is the ''applications'' rather than the ''technology''.<br />
<br />
The project involves a partnership with [[#Participating organisations|six groups from commercial and scientific organisations]]. Over the course of two years these groups are applying parallel Haskell in their specific domains. They are being supported by GHC HQ and [http://www.well-typed.com/ Well-Typed] who are providing advice on Haskell tools and techniques, and applying engineering effort to resolve any issues that are hindering these groups' progress.<br />
<br />
The project is being coordinated by [http://www.well-typed.com/ Well-Typed] and they are providing the bulk of the support and engineering effort. The project started in the summer of 2010.<br />
<br />
== Project News ==<br />
<br />
<br />
<br />
=== ThreadScope and friends ===<br />
<br />
We have been continuing our work to make [[ThreadScope]] more helpful and informative in tracking down your parallel and concurrent Haskell performance problems. We now have the ability to collect heap statistics from the GHC runtime system and present them in ThreadScope. These features will be available for users of a recent development GHC (7.5.x) or the eventual 7.6 release. In addition to heap statistics, we have been working on collecting information from hardware performance counters, more specifically adding support for Linux Perf Events. This could be useful for studying IO-heavy programs, the idea being to visualise system calls as being distinct from actual execution of Haskell code.<br />
<br />
=== Cloud Haskell ===<br />
<br />
We are continuing work on the new Cloud Haskell implementation, [http://sneezy.cs.nott.ac.uk/fun/2012-02/coutts-2012-02-28.pdf recently presented] by Duncan Coutts. Lately, we have been focused on reducing message latency. This consists of work in three areas: improving binary serialisation, investigating the implications of using Chan and MVar to pass messages between threads, and perhaps improving the Haskell network library implementation to compete better with a direct C implementation.<br />
<br />
For more information on our implementation, see the [https://github.com/haskell-distributed/distributed-process distributed-process GitHub page] and particularly the updated [https://github.com/haskell-distributed/distributed-process/wiki/New-backend-and-transport-design design document], which incorporates feedback on our initial design proposal.<br />
<br />
== Project artefacts == <br />
<br />
Some of the work by our project partners is available to the public:<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Partner<br />
! Description<br />
! Status<br />
|-<br />
| [http://www.mew.org/~kazu/proj/mighttpd/en/ Mighttpd2]<br />
| IIJ<br />
| File/CGI server on top of Warp<br />
| version 2.5.7 released 2012-04-05<br />
|-<br />
| [http://hackage.haskell.org/package/webserver webserver]<br />
| IIJ<br />
| HTTP server library<br />
| version 0.4.6 released 2011-10-05<br />
|-<br />
| [http://hackage.haskell.org/package/wai-app-file-cgi wai-app-file-cgi]<br />
| IIJ<br />
| File/CGI WAI application (used by Mighttpd)<br />
| version 0.5.8 released 2012-04-05<br />
|-<br />
| [http://hackage.haskell.org/package/wai-logger wai-logger]<br />
| IIJ<br />
| Logging system for WAI (used by Mighttpd)<br />
| version 0.1.4 released 2012-02-13<br />
|-<br />
| [http://hackage.haskell.org/package/http-date http-date]<br />
| IIJ<br />
| Fast parser and formatter for HTTP Date<br />
| version 0.0.2 released 2012-02-17<br />
|-<br />
| dns<br />
| IIJ<br />
| DNS library<br />
| version 0.2.0 released 2011-08-31<br />
|-<br />
| [http://www.mew.org/~kazu/proj/iproute/en/ iproute]<br />
| IIJ<br />
| IP routing table<br />
| version 1.2.5 released 2012-04-02<br />
|-<br />
| [http://hackage.haskell.org/package/domain-auth domain-auth]<br />
| IIJ<br />
| Library for Sender Policy Framework, SenderID, DomainKeys and DKIM.<br />
| version 0.2.0 released 2011-08-31<br />
|-<br />
| [http://www.mew.org/~kazu/proj/rpf/en/ RPF]<br />
| IIJ<br />
| Receiver Policy Framework (milter)<br />
| version 0.2.0 released 2011-08-31<br />
|}<br />
<br />
In addition to helping the [[#Participating organisations|participating organisations]], the project will whenever possible make improvements to libraries and tools that are useful to Haskell users more generally.<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Description<br />
! Status<br />
|-<br />
| multiprocess Threadscope<br />
| profiling of multi-process or distributed Haskell systems such as client/server or MPI programs.<br />
| '''in progress'''<br />
|-<br />
| [https://github.com/bjpop/lfg LFG]<br />
| Haskell implementation of some pseudo random number generators from the SPRNG library<br />
| '''testing'''<br />
|-<br />
| [https://github.com/bjpop/haskell-sprng SPRNG binding]<br />
| Haskell wrapper around SPRNG<br />
| '''in progress'''<br />
|-<br />
| ThreadScope improvements<br />
| new spark profiling tools, GUI enhancements, bug fixes<br />
| version 0.2.1 released 2012-01-14<br />
|-<br />
| ghc-events improvements<br />
| spark events support<br />
| version 0.4.0.0 released 2012-01-14<br />
|-<br />
| gtk2hs maintenance & release<br />
| GHC 7.2 support<br />
| version 0.12.2 released 2011-11-13<br />
|-<br />
| [http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
| Haskell bindings to C MPI library<br />
| version 1.2.1 released 2012-02-15<br />
|-<br />
| rowspan="5" | GHC RTS improvements<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4449 &nbsp;#4449] - GHC 7 can't do IO when daemonized<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4504 &nbsp;#4504] - "awaitSignal Nothing" does not block thread with -threaded<br />
| fixed in 7.0.2<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4512 &nbsp;#4512] - EventLog does not play well with forkProcess<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4514 &nbsp;#4514] - IO manager can deadlock if a file descriptor is closed behind its back<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4854 &nbsp;#4854] - Validating on a PPC Mac OS X: Fix miscellaneous errors and warnings<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://www.cse.unsw.edu.au/~chak/haskell/c2hs/ c2hs] improvements<br />
| marshalling functions now can have arguments supplied to them.<br />
| version 0.16.3 released 2011-03-24<br />
|}<br />
<br />
The project will also aim to document existing tools and parallel programming practices, making them accessible to a wider public.<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Description<br />
! Status<br />
|-<br />
| [[ThreadScope Tour]]<br />
| a short guide to using ThreadScope to help analyse parallel program performance<br />
| unveiled 2012-01-14<br />
|-<br />
| rowspan="2" | submissions to TMR 19<br />
| Mighttpd – a High Performance Web Server in Haskell (Kazu Yamamoto)<br />
| submitted<br />
|-<br />
| High Performance Haskell with MPI (Bernie Pope and Dmitry Astapov)<br />
| submitted<br />
|-<br />
| [[Parallel|Parallel Haskell Portal]]<br />
| one-stop resource for users of parallelism and concurrency in Haskell<br />
| unveiled 2011-04-20<br />
|}<br />
<br />
== The Parallel Haskell Digest ==<br />
<br />
We have been publishing a regular newsletter containing project news, other parallelism news from around the Haskell community, and short "Word of the Month" articles giving brief introductions to important concepts in parallelism.<br />
<br />
The back issues are here:<br />
<br />
* [http://www.well-typed.com/blog/52 Parallel Haskell Digest 1] with word of the month '''spark'''<br />
* [http://www.well-typed.com/blog/53 Parallel Haskell Digest 2] with word of the month '''thread of execution'''<br />
* [http://www.well-typed.com/blog/55 Parallel Haskell Digest 3] with word of the month '''parallel arrays'''<br />
* [http://www.well-typed.com/blog/56 Parallel Haskell Digest 4] with words of the month '''par''' and '''pseq'''<br />
* [http://www.well-typed.com/blog/58 Parallel Haskell Digest 5] with word of the month '''strategy'''<br />
* [http://www.well-typed.com/blog/60 Parallel Haskell Digest 6] with word of the month '''dataflow''' as in dataflow parallelism<br />
* [http://www.well-typed.com/blog/62 Parallel Haskell Digest 7] (catching up on community news)<br />
* [http://www.well-typed.com/blog/64 Parallel Haskell Digest 8] with word of the month '''MVar'''<br />
* [http://www.well-typed.com/blog/65 Parallel Haskell Digest 9] with the word of the month '''transaction'''<br />
* [http://www.well-typed.com/blog/66 Parallel Haskell Digest 10] with the word of the month '''channel'''<br />
* [http://www.well-typed.com/blog/67 Parallel Haskell Digest 11] with the word of the month '''actor'''<br />
<br />
== Getting involved ==<br />
<br />
Progress reports will be posted to the [http://groups.google.com/group/parallel-haskell parallel Haskell mailing list] and to the [http://www.well-typed.com/blog/ Well-Typed blog].<br />
<br />
The best starting point to get involved is to join the mailing list. Note that the list is for parallel Haskell generally, not just the Parallel GHC Project.<br />
<br />
== Participating organisations ==<br />
<br />
;[http://www.dragonfly.co.nz/ Dragonfly]<br />
:Cloudy Bayes: Hierarchical Bayesian modeling in Haskell<br />
<br />
:The Cloudy Bayes project aims to develop a fast Bayesian model fitter that takes advantage of modern multiprocessor machines. It will support model descriptions in the BUGS model description language (WinBUGS, OpenBUGS, and JAGS). It will be implemented as an embedded domain specific language (EDSL) within Haskell. A wide range of hierarchical Bayesian model structures will be possible, including many of the models used in the medical, ecological, and biological sciences.<br />
<br />
:Cloudy Bayes will provide an easy to use interface for describing models, running Monte Carlo Markov chain (MCMC) fitters, diagnosing performance and convergence criteria as it runs, and collecting output for post-processing. Haskell's strong type system will be used to ensure that model descriptions make sense, providing a fast, safe development cycle.<br />
<br />
;[http://www.iij-ii.co.jp/en/ IIJ Innovation Institute Inc.]<br />
:Haskell is suitable for many kinds of domains, and GHC's support for lightweight threads makes it attractive for concurrent applications. An exception has been network server programming, because GHC 6.12 and earlier have an IO manager that is limited to 1024 network sockets. GHC 7 has a new IO manager implementation that removes this limitation.<br />
<br />
:This project will implement several network servers to demonstrate that Haskell is suitable for network servers that handle a massive number of concurrent connections.<br />
<br />
;[http://www.lanl.gov/ Los Alamos National Laboratory]<br />
:This project will use parallel Haskell to implement high-performance Monte Carlo algorithms, a class of algorithms which use randomness to sample large or otherwise intractable solution spaces. The initial goal is a particle-based MC algorithm suitable for modeling the flow of radiation, with application to problems in astrophysics. From this, the project is expected to move on to identifying suitable abstractions for expressing a wider variety of Monte Carlo algorithms, and to models of different physical phenomena.<br />
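As a toy illustration of the Monte Carlo idea described above (ours, not the Laboratory's code), here is a sequential estimate of pi by random sampling. A hand-rolled linear congruential generator (using glibc's constants) keeps the example dependent only on base, with no random-number package required.

```haskell
-- Toy Monte Carlo: estimate pi by sampling points in the unit square
-- and counting how many fall inside the quarter circle.

-- Hand-rolled linear congruential generator so that no extra
-- random-number package is needed. Illustrative only.
next :: Int -> Int
next s = (1103515245 * s + 12345) `mod` 2147483648

-- Infinite stream of pseudo-random values in [0, 1).
uniforms :: Int -> [Double]
uniforms seed = map toUnit (tail (iterate next seed))
  where toUnit s = fromIntegral s / 2147483648

estimatePi :: Int -> Int -> Double
estimatePi seed n = 4 * fromIntegral hits / fromIntegral n
  where
    hits = length [ () | (x, y) <- take n (pairs (uniforms seed))
                       , x * x + y * y < 1 ]
    pairs (a:b:rest) = (a, b) : pairs rest
    pairs _          = []
```

Because each sample is independent, the sampling loop is exactly the kind of work that parallelises well, which is what makes Monte Carlo methods a natural fit for parallel Haskell.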
<br />
;[http://www.willowgarage.com/ Willow Garage Inc.]<br />
:Distributed Rigid Body Dynamics in ROS<br />
<br />
:Willow Garage seeks a high-level representation for a distributed rigid body dynamics simulation, capable of excellent parallel speedup on current and foreseeable hardware, yet linking to existing optimized libraries for low-level message passing and matrix math.<br />
<br />
:This project will drive API, performance, and profiling tool requirements for Haskell's interface to the Message Passing Interface (MPI) specification, an industry-standard in High Performance Computing (HPC), as used on clusters of many nodes.<br />
<br />
:Competing internal initiatives use C++/MPI and CUDA directly.<br />
<br />
:Willow Garage aims to lay the groundwork for personal robotics applications in everyday life. ROS ([http://ros.org Robot Operating System]) is an open source, meta-operating system for your robot.<br />
<br />
; [http://www.tid.es/en/ Telefónica I+D]<br />
<br />
: This project aims to demonstrate parallel Haskell technology using the example of graph algorithms in large graphs representing social networks. The current work is on parallel versions of the [http://en.wikipedia.org/wiki/Bron%E2%80%93Kerbosch_algorithm Bron-Kerbosch algorithm] for finding maximal cliques in a graph. The initial goal is to demonstrate good speedups on multi-core, and the overall aim is to demonstrate good speedups of a distributed version of the algorithm using Cloud Haskell.<br />
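For reference, the basic sequential form of Bron-Kerbosch (without pivoting) is short enough to sketch in Haskell using the <code>containers</code> package; the project's parallel and distributed versions are of course more elaborate than this illustrative sketch.

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- Undirected graph as an adjacency map.
type Graph = M.Map Int (S.Set Int)

neighbours :: Graph -> Int -> S.Set Int
neighbours g v = M.findWithDefault S.empty v g

-- Basic Bron-Kerbosch: r is the clique being grown, p the candidate
-- vertices that could extend it, x the vertices already excluded.
bronKerbosch :: Graph -> [S.Set Int]
bronKerbosch g = go S.empty (M.keysSet g) S.empty
  where
    go r p x
      | S.null p && S.null x = [r]   -- r is a maximal clique
      | otherwise            = loop p x (S.toList p)
      where
        loop _  _  []     = []
        loop p' x' (v:vs) =
            go (S.insert v r) (S.intersection p' nv) (S.intersection x' nv)
              ++ loop (S.delete v p') (S.insert v x') vs
          where nv = neighbours g v
```

The recursive calls on different branch vertices are independent, which is the property the parallel versions exploit.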
<br />
; [http://www.vett.co.uk/ VETT UK]<br />
<br />
: VETT are working on a transaction processing application using Cloud Haskell. More details will be available shortly.</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel/Digest&diff=46274Parallel/Digest2012-07-05T15:45:10Z<p>EricKow: /* Archives */</p>
<hr />
<div>The Parallel Haskell Digest is a newsletter aiming to show off all the work that's going on using parallelism and concurrency in the Haskell community.<br />
<br />
We hope to offer a monthly recap of news, interesting blog posts and discussions about parallelism in Haskell. For people who are new to parallelism and concurrency in Haskell, or maybe just have a passing interest, we hope to offer small tastes of parallelism and concurrency, with regular features like the Word of the Month, Featured Code and Parallel Puzzlers.<br />
<br />
== Archives ==<br />
<br />
# [http://www.well-typed.com/blog/52 2011-03-31] - spark and Hulk<br />
# [http://www.well-typed.com/blog/53 2011-05-11] - threads<br />
# [http://www.well-typed.com/blog/55 2011-06-16] - parallel arrays<br />
# [http://www.well-typed.com/blog/56 2011-07-22] - par and pseq<br />
# [http://www.well-typed.com/blog/58 2011-08-21] - strategy<br />
# [http://www.well-typed.com/blog/60 2011-10-06] - dataflow<br />
# [http://www.well-typed.com/blog/62 2011-12-24] - (news catch up)<br />
# [http://www.well-typed.com/blog/64 2012-03-02] - MVar (lock)<br />
# [http://www.well-typed.com/blog/65 2012-04-20] - transaction<br />
# [http://www.well-typed.com/blog/66 2012-05-18] - channel<br />
# [http://www.well-typed.com/blog/67 2012-07-05] - actor</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel/Digest&diff=46273Parallel/Digest2012-07-05T15:43:57Z<p>EricKow: PH Digest 11</p>
<hr />
<div>The Parallel Haskell Digest is a newsletter aiming to show off all the work that's going on using parallelism and concurrency in the Haskell community.<br />
<br />
We hope to offer a monthly recap of news, interesting blog posts and discussions about parallelism in Haskell. For people who are new to parallelism and concurrency in Haskell, or maybe just have a passing interest, we hope to offer small tastes of parallelism and concurrency, with regular features like the Word of the Month, Featured Code and Parallel Puzzlers.<br />
<br />
== Archives ==<br />
<br />
# [http://www.well-typed.com/blog/67 2012-07-05] - actor<br />
# [http://www.well-typed.com/blog/66 2012-05-18] - channel<br />
# [http://www.well-typed.com/blog/65 2012-04-20] - transaction<br />
# [http://www.well-typed.com/blog/64 2012-03-02] - MVar (lock)<br />
# [http://www.well-typed.com/blog/62 2011-12-24] - (news catch up)<br />
# [http://www.well-typed.com/blog/60 2011-10-06] - dataflow<br />
# [http://www.well-typed.com/blog/58 2011-08-21] - strategy<br />
# [http://www.well-typed.com/blog/56 2011-07-22] - par and pseq<br />
# [http://www.well-typed.com/blog/55 2011-06-16] - Parallel Arrays<br />
# [http://www.well-typed.com/blog/53 2011-05-11] - Threads<br />
# [http://www.well-typed.com/blog/52 2011-03-31] - Spark and Hulk</div>EricKowhttps://wiki.haskell.org/index.php?title=WxHaskell/Mac&diff=45991WxHaskell/Mac2012-06-11T08:10:29Z<p>EricKow: /* Installing on MacOS X */ clarify path</p>
<hr />
<div>== Installing on MacOS X ==<br />
<br />
<ol><br />
<li> Install the Developer Tools<br />
<li> Install wxWidgets 2.9 by hand<br />
<ul><br />
<li>If you use HomeBrew:<br />
<br><code>brew install wxmac --devel</code><br />
<br>or on Lion, possibly <code>brew install wxmac --use-llvm --devel</code><br />
<li>If you use MacPorts:<br><br />
<code><br />
sudo port install wxWidgets-devel +universal<br />
</code><br />
<li>If you want to install it manually, download the source code and install it with<br />
<pre><br />
./configure --enable-unicode --disable-debug --with-osx_cocoa<br />
--prefix=/usr/local --enable-stc --enable-aui<br />
--enable-propgrid --enable-xrc --enable-ribbon<br />
--enable-richtext --enable-webkit --with-opengl<br />
make && make install<br />
</pre><br />
</ul><br />
<li> (OS X 10.6 or below) Check your path to make sure you are using your wxWidgets and not the default Mac one (should probably not be <code>/usr/bin</code>)<br><br />
<code>which wx-config</code><br />
<li> <code>cabal install wx cabal-macosx</code><br />
<li>Compile and run a [https://raw.github.com/jodonoghue/wxHaskell/master/samples/wxcore/HelloWorld.hs sample wxcore application]:<br />
<br><pre>ghc --make HelloWorld.hs<br />
cabal-macosx HelloWorld<br />
./HelloWorld.app/Contents/MacOS/HelloWorld<br />
</pre>(see note 2012-04-24-MacPorts if you use MacPorts)<br />
</li><br />
</ol><br />
<br />
== Known working configurations ==<br />
<br />
{|<br />
!|Date<br />
!|Arch<br />
!|OS/XCode<br />
!|GHC<br />
!|Haskell Platform<br />
!|wxWidgets<br />
!|wxHaskell<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), XCode 4.3<br />
|7.4.1<br />
|<br />
|2.9.3 (HomeBrew)<br />
|0.90 (see notes)<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), Xcode 4.3<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (HomeBrew)<br />
|0.90<br />
|-<br />
|2012-04<br />
|Intel 32-bit<br />
|Snow Leopard (10.6.8), Xcode 3.2.6<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (MacPorts)<br />
|0.90 (see notes)<br />
|}<br />
<br />
== Notes ==<br />
<br />
These notes tend to be a bit ephemeral and are thus dated to help you figure out if they may still apply or not.<br />
<br />
* 2012-04-24 MacPorts: If you use MacPorts, you may run into a problem with the iconv library. Tell GHC that you prefer the system libraries first: <code>ghc HelloWorld.hs -L/usr/lib</code><br />
* 2012-04-17: The MacPorts version of wxWidgets 2.9.3 can be used. I added a few flags to the Portfile, but they are probably not necessary.<br />
* 2012-04-14: On MacOS X Lion, to install wxWidgets 2.9 with HomeBrew, you may need to run <code>brew install wxmac --use-llvm --devel</code><br />
<br />
== Using wxHaskell on MacOS X platforms ==<br />
<br />
Even though graphical applications on MacOS X look great, it is a still a developers nightmare to get them working :-). This page describes how to circumvent some of the pitfalls.<br />
<br />
<ul><br />
<li>Executables generated with GHC do not work when executed directly if they use the graphical API; they need to be upgraded into so-called [https://en.wikipedia.org/wiki/Application_Bundle application bundles] for MacOS X. Use the [https://github.com/gimbo/cabal-macosx cabal-macosx] package to automate this. It can be integrated with Cabal and/or used as a standalone <code>macosx-app</code> script.<br />
</li><br />
<li><p>''Note: The following no longer applies to <code>wxcore >= 0.90.0.1</code>.''</p><br />
<p>Due to complicated MacOS X restrictions, graphical wxHaskell applications do not work directly when used from GHCi. Fortunately, Wolfgang Thaller has kindly provided an ingenious [http://wxhaskell.sourceforge.net/download/EnableGUI.hs Haskell module] that solves this problem. Just import the (compiled) module [http://wxhaskell.sourceforge.net/download/EnableGUI.hs <tt>EnableGUI</tt>] in your program and issue the following command to run <tt>main</tt> from your GHCi prompt:</p><br />
<pre>&gt; enableGUI &gt;&gt; main</pre><br />
<p>Compiling and using enableGUI needs some command line flags:</p><br />
<pre>&gt; ghc -XForeignFunctionInterface -c EnableGUI.hs<br />
&gt; ghci -framework Carbon HelloWorld.hs<br />
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help<br />
Loading package base ... linking ... done.<br />
Loading object (framework) Carbon ... done<br />
final link ... done<br />
[2 of 2] Compiling Main ( Main.hs, interpreted )<br />
Ok, modules loaded: Main, EnableGUI.<br />
*Main&gt; enableGUI<br />
*Main&gt; main</pre><br />
</li><br />
</ul><br />
<br />
== Troubleshooting ==<br />
<br />
See [[../Troubleshooting]] for help getting your wxhaskell applications running<br />
<br />
<ul><li><p>The dynamic link libraries used by wxHaskell cannot always be found. If your application seems to start (the icon bounces) but terminates mysteriously, you need to set the dynamic link library search path to the wxHaskell library directory. For example:</p><br />
<pre>&gt; setenv DYLD_LIBRARY_PATH /usr/local/wxhaskell/lib</pre><br />
<br />
or <br />
<br />
<pre>&gt; setenv DYLD_LIBRARY_PATH $HOME/.cabal/local/lib/wxhaskell-0.11.0/lib</pre></li></ul><br />
<br />
[[Category:wxHaskell|MacOS X]]</div>EricKowhttps://wiki.haskell.org/index.php?title=WxHaskell&diff=45990WxHaskell2012-06-11T08:07:44Z<p>EricKow: /* Documentation */</p>
<hr />
<div>__NOTOC__<br />
<br />
<br />
[[Image:Wxhaskell-black-small.png|center]]<br />
<br />
== What is it? ==<br />
<br />
wxHaskell is a portable and native GUI library for [http://www.haskell.org Haskell]. The goal of the project is to provide an industrial strength GUI library for Haskell, but without the burden of developing (and <br />
maintaining) one ourselves.<br />
<br />
wxHaskell is therefore built on top of [http://www.wxwidgets.org wxWidgets] – a comprehensive C++ library that is portable across all major GUI platforms; including GTK, Windows, X11, and MacOS X. Furthermore, it is a mature library (in development since 1992) that supports a wide range of widgets with the native look-and-feel, and it has a very active community (ranked among the top 25 most active projects on sourceforge).<br />
<br />
We maintain two branches of wxHaskell.<br />
<br />
The 0.13 branch supports wxWidgets 2.8.x, and is the easiest to get working. Many Linux distributions come with packaged wxWidgets 2.8.x, and Windows users can download the pre-built [http://wxpack.sourceforge.net wxPack] distribution. This branch is in a maintenance mode, and will not receive significant new development.<br />
<br />
The 0.90 branch supports wxWidgets 2.9.x. The downside of choosing this version is that you will likely need to build wxWidgets yourself, which is somewhat time-consuming and requires you to have g++ installed on your system. The benefit is that it supports quite a number of new and more modern GUI elements. ''wxHaskell 0.90 is essential if you want to build for 64-bit MacOS X targets (e.g. Lion)''.<br />
<br />
== Status ==<br />
<br />
The core interface of wxHaskell was originally derived from the [http://elj.sourceforge.net/projects/gui/ewxw/ wxEiffel] binding. Work on this has been dormant for several years, but the wxHaskell maintainers now support updates to the wxWidgets API themselves - we generally respond to new releases of wxWidgets within a few weeks at most.<br />
<br />
There are four key components of wxHaskell from version 0.90 onwards (three in earlier branches).<br />
* wxDirect parses specially written C headers and generates low level Haskell FFI bindings for the exported functions.<br />
* wxc is a C language binding for wxWidgets. It is needed because the Haskell FFI can only bind to C as it does not understand C++ name mangling. Because it is a C language wrapper over wxWidgets, and is generated as a standard dynamic library on all supported platforms, wxc could be used as the basis for a wxWidgets wrapper for any language which supports linking to C (so that would be all of them then). In older versions of wxHaskell, the wxc components were built as a monolithic static library with wxcore.<br />
* wxcore is a set of low-level Haskell bindings to wxc. A large part is generated automatically by wxdirect, with some key abstractions being hand-coded in Haskell. You can program directly to the wxcore interface if you wish (it is sometimes the only way, in fact).<br />
* wx is a set of higher-level wrappers over wxcore. It is intended to make it easier to write reasonably idiomatic Haskell. Most wxHaskell software is about 80% wx and 20% wxcore (at least in my experience).<br />
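The bindings wxdirect generates for wxcore all rest on the standard Haskell FFI mechanism: a <code>foreign import</code> of a plain (unmangled) C symbol. A real wxcore binding targets a function exported by wxc; the sketch below binds libm's <code>cos</code> instead, purely so that it runs without wxWidgets installed (the symbol and signature are not taken from wxc).

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- The same binding mechanism wxdirect relies on: a Haskell
-- `foreign import` of a plain C symbol. Bound here to libm's cos
-- so the example needs no wxWidgets installation.
foreign import ccall unsafe "math.h cos" c_cos :: Double -> Double
```

A generated wxcore binding looks just like this, only with a wxc function name and pointer arguments for the wrapped C++ objects.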
<br />
The C wrapper is, unfortunately, generated by hand, so there is some (mainly tedious boilerplate) work involved in porting a new set of widgets to wxHaskell. Some work has been done on automating this aspect, but we are far from being able to replicate the approach reliably over the entire API as yet.<br />
<br />
From the perspective of the user (rather than the developer) about 90% of the core wxWidgets functionality is already supported, excluding more &quot;exotic&quot; widgets like dockable windows. The library supports Windows, GTK (Linux) and MacOS X.<br />
<br />
== News ==<br />
<br />
; 13 April 2012: wxHaskell 0.90 is released. This version supports wxWidgets >= 2.9.x, and includes many new features.<br />
; 5 January 2012: wxHaskell 0.13.2 is [http://comments.gmane.org/gmane.comp.lang.haskell.wxhaskell.general/1123 released]. Mainly bugfixes. This is the last version to support wxWidgets 2.8.x.<br />
; 13 October 2009 : wxHaskell 0.12.1.2 is released. Since the previous news item we have added support for XRC files (XML GUI design) and installation via Cabal<br />
; 4 January 2009 : wxHaskell 0.11.0 is [http://sourceforge.net/project/showfiles.php?group_id=73133 released]. See the [http://www.nabble.com/ANN:-wxHaskell-0.11.1-td21277376.html announcement] (indicates rev. 0.11.1, SourceForge has rev. 0.11.0)<br />
; 8 August 2008 : Switched official darcs repository to code.haskell.org (<code>darcs get --partial http://code.haskell.org/wxhaskell</code>). You can use previous darcs.haskell.org's darcs repository, too.<br />
; 5 August 2008 : Homepage (except for screenshots) now moved to Haskell wiki<br />
; 23 March 2008 : wxHaskell 0.10.3 is [http://sourceforge.net/project/showfiles.php?group_id=73133&package_id=73173 released].<br />
; 20 January 2007 : wxHaskell has a new set of maintainers, led by Jeremy O'Donoghue. We are working on a release for version 0.10, with Unicode support, a Cabalized build process and more. All recent development is taking place under a new darcs repository (<code><nowiki>darcs get http://darcs.haskell.org/wxhaskell/</nowiki></code>).<br />
<br />
== Documentation ==<br />
<br />
* [http://wxhaskell.sourceforge.net/screenshots.html Screenshots]<br />
** [http://wxhaskell.sourceforge.net/samples.html Samples] ( the links to the source code on that page are broken, but you can see the sources [http://code.haskell.org/wxhaskell/samples/ here] )<br />
** [http://wxhaskell.sourceforge.net/applications.html Applications]<br />
* [[/Documentation/|Using wxHaskell]]<br />
** [[/License/]]<br />
** [[/Quick start/]]<br />
** [[/FAQ/]]<br />
** [[/Short guide/]]<br />
** [[/Tips and tricks/]]<br />
* [[/Download/]]<br />
* Building and installing. Please refer to your platform.<br />
** [[/Linux/]]<br />
** [[/Mac/]]<br />
** [[/Windows/]]<br />
** Additional information here. Please help migrating these parts to their appropriate platform.<br />
*** [[/2.8/|wxWidgets 2.8.x]]<br />
*** [[/0.13/|wxHaskell 0.13 for wxWidgets 2.8.x]]<br />
*** [[/0.90/|wxHaskell 0.90 for wxWidgets 2.9.x]]<br />
*[[/Development/]]<br />
** [[/Development/Environment |Working on wxHaskell]]<br />
*[[/Contribute/]]<br />
<br />
== Resources ==<br />
* [http://sourceforge.net/tracker/?group_id=73133 Bugtracker] <br />
* [https://lists.sourceforge.net/lists/listinfo/wxhaskell-devel The developer mailing list (wxhaskell-devel)] [http://sourceforge.net/mailarchive/forum.php?forum_name=wxhaskell-devel (archive)]<br />
* [https://lists.sourceforge.net/lists/listinfo/wxhaskell-users The wxHaskell users mailing list (wxhaskell-users)] [http://sourceforge.net/mailarchive/forum.php?forum_name=wxhaskell-users (archive)]<br />
<br />
== External links ==<br />
<br />
* Daan Leijen: [http://legacy.cs.uu.nl/daan/download/papers/wxhaskell.pdf wxHaskell / A Portable and Concise GUI Library for Haskell] (pdf)<br />
* Wei Tan: [http://www.cse.unsw.edu.au/~cs4132/lecture/wlta543.pdf GUI programming with wxHaskell] (pdf)<br />
* [http://www.cse.chalmers.se/edu/course/afp/lab1.html Assignment 1] part of the course [http://www.cse.chalmers.se/edu/course/afp/index.html Advanced Functional Programming], by [http://www.cs.chalmers.se/~koen/ Koen Lindström Claessen] and [http://www.cs.chalmers.se/~bringert/ Björn Bringert], a portal like page (html)<br />
* [http://en.wordpress.com/tag/wxhaskell/ Blog articles about wxHaskell]<br />
* Sander Evers, Peter Achten, and Jan Kuper: [http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf A Functional Programming Technique for Forms in GUI] (PDF)<br />
* [http://www.sandr.dds.nl/FunctionalForms/ FunctionalForms], a combinator library/domain specific language for wxHaskell which enables a very concise programming style for forms (not maintained since 2005)<br />
<br />
== See also ==<br />
<br />
* [http://hackage.haskell.org/package/wxhnotepad An example of how to implement a basic notepad with wxHaskell]<br />
* [http://en.wikibooks.org/wiki/Haskell/GUI The Haskell wikibook GUI chapter]<br />
* [http://lindstroem.wordpress.com/2008/05/21/using-wxgeneric/ WxGeneric]<br />
* [[wxFruit]]<br />
* [http://www.haskell.org/jcp/hw05.pdf Can GUI Programming Be Liberated From The IO Monad]<br />
* [[Phooey]]: a purely functional layer on top of wxHaskell<br />
* [[GuiTV]]: GUI-based tangible values & composable interfaces, on [[TV]], [[Phooey]] and wxHaskell.<br />
* [[wxAsteroids]]: a game demonstrating wxHaskell.<br />
* [[GeBoP]]: the General Boardgames Player, offers a set of board games: Ataxx, Bamp, Halma, Hez, Kram, Nim, Reversi, TicTacToe, and Zenix.<br />
* [https://github.com/HeinrichApfelmus/Haskell-BlackBoard/blob/master/README.md Haskell-BlackBoard:] a drawing application for making slideshows and videos, based on wxHaskell and [[Functional Reactive Programming]]<br />
* [[Reactive-banana|reactive-banana]] - FRP library with bindings to wxHaskell.<br />
<br />
<br />
[[Category:User interfaces]]<br />
[[Category:Libraries]]<br />
[[Category:wxHaskell]]<br />
[[Category:Packages]]</div>EricKowhttps://wiki.haskell.org/index.php?title=WxHaskell/Mac&diff=45849WxHaskell/Mac2012-05-31T11:09:11Z<p>EricKow: /* Notes */</p>
<hr />
<div>== Installing on MacOS X ==<br />
<br />
<ol><br />
<li> Install the Developer Tools<br />
<li> Install wxWidgets 2.9 by hand<br />
<ul><br />
<li>If you use HomeBrew:<br />
<br><code>brew install wxmac --devel</code><br />
<br>or, on Lion, possibly <code>brew install wxmac --use-llvm --devel</code><br />
<li>If you use MacPorts:<br><br />
<code><br />
sudo port install wxWidgets-devel +universal<br />
</code><br />
</ul><br />
<li> Check your path to make sure you are using your wxWidgets and not the default Mac one<br />
<li> <code>cabal install wx cabal-macosx</code><br />
<li>Compile and run a [https://raw.github.com/jodonoghue/wxHaskell/master/samples/wxcore/HelloWorld.hs sample wxcore application]:<br />
<br><pre>ghc --make HelloWorld.hs<br />
cabal-macosx HelloWorld<br />
./HelloWorld.app/Contents/MacOS/HelloWorld<br />
</pre>(see note 2012-04-24-MacPorts if you use MacPorts)<br />
</li><br />
</ol><br />
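For reference, the high-level <code>wx</code> counterpart of such a sample is quite short. The following is a sketch against the <code>wx</code> package's API, not the linked wxcore code itself:<br />

```haskell
import Graphics.UI.WX  -- from the wx package installed above

-- A minimal wxHaskell application: one frame containing a label.
-- 'start' initialises wxWidgets and runs the GUI event loop.
main :: IO ()
main = start $ do
  f  <- frame [text := "Hello wxHaskell"]
  st <- staticText f [text := "Hello, world!"]
  set f [layout := margin 10 (widget st)]
```

Like the wxcore sample, this needs to be wrapped into an application bundle with <code>cabal-macosx</code> before it will run on MacOS X.<br />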
<br />
== Known working configurations ==<br />
<br />
{|<br />
!|Date<br />
!|Arch<br />
!|OS/XCode<br />
!|GHC<br />
!|Haskell Platform<br />
!|wxWidgets<br />
!|wxHaskell<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), XCode 4.3<br />
|7.4.1<br />
|<br />
|2.9.3 (HomeBrew)<br />
|0.90 (see notes)<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), Xcode 4.3<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (HomeBrew)<br />
|0.90<br />
|-<br />
|2012-04<br />
|Intel 32-bit<br />
|Snow Leopard (10.6.8), Xcode 3.2.6<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (MacPorts)<br />
|0.90 (see notes)<br />
|}<br />
<br />
== Notes ==<br />
<br />
These notes tend to be a bit ephemeral and are thus dated to help you figure out if they may still apply or not.<br />
<br />
* 2012-04-17: The MacPorts version of wxWidgets 2.9.3 can be used, though I added a few flags to the Portfile. I seem to have a few issues with functionality, but they may not necessarily be related to MacPorts.<br />
* 2012-04-14: On MacOS X Lion, to install wxWidgets 2.9 with HomeBrew, you may need to run <code>brew install wxmac --use-llvm --devel</code><br />
* 2012-04-24 MacPorts: If you use MacPorts, you may run into a problem with the iconv library. Tell GHC that you prefer the system libraries first: <code>ghc HelloWorld.hs -L/usr/lib</code><br />
<br />
== Using wxHaskell on MacOS X platforms ==<br />
<br />
Even though graphical applications on MacOS X look great, it is a still a developers nightmare to get them working :-). Furthermore, the MacOS X port of wxWidgets is the least mature and still has some quirks. This page describes how to circumvent some of the pitfalls.<br />
<br />
<br />
<ul><br />
<li>Executables generated with GHC do not work when executed directly if they use the graphical API; they need to be upgraded into so called [https://en.wikipedia.org/wiki/Application_Bundle application bundles] for MacOS X. Use the [https://github.com/gimbo/cabal-macosx cabal-macosx] package to automate this. It can be integrated with Cabal and/or used as a standalone `macosx-app` script.<br />
</li><br />
<li><p>''Note: The following no longer applies to (future) versions of <code>wxcore > 0.90</code>.''</p><br />
<p>Due to complicated MacOS X restrictions, graphical wxHaskell applications do not work directly when used from GHCi. Fortunately, Wolfgang Thaller has kindly provided an ingenious [http://wxhaskell.sourceforge.net/download/EnableGUI.hs Haskell module] that solves this problem. Just import the (compiled) module [http://wxhaskell.sourceforge.net/download/EnableGUI.hs <tt>EnableGUI</tt>] in your program and issue the following command to run <tt>main</tt> from your GHCi prompt:</p><br />
<pre>&gt; enableGUI &gt;&gt; main</pre><br />
<p>Compiling and using enableGUI needs some command line flags:</p><br />
<pre>&gt; ghc -XForeignFunctionInterface -c EnableGUI.hs<br />
&gt; ghci -framework Carbon HelloWorld.hs<br />
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help<br />
Loading package base ... linking ... done.<br />
Loading object (framework) Carbon ... done<br />
final link ... done<br />
[2 of 2] Compiling Main ( Main.hs, interpreted )<br />
Ok, modules loaded: Main, EnableGUI.<br />
*Main&gt; enableGUI<br />
*Main&gt; main</pre><br />
</li><br />
<li><p>The dynamic link libraries used by wxHaskell can not always be found. If your application seems to start (the icon bounces) but terminates mysteriously, you need to set the dynamic link library search path to the wxHaskell library directory. For example:</p><br />
<pre>&gt; setenv DYLD_LIBRARY_PATH /usr/local/wxhaskell/lib</pre><br />
<br />
or <br />
<br />
<pre>&gt; setenv DYLD_LIBRARY_PATH $HOME/.cabal/local/lib/wxhaskell-0.11.0/lib</pre></li></li></ul><br />
<br />
== Troubleshooting ==<br />
<br />
See [[../Troubleshooting]] for help getting your wxhaskell applications running<br />
<br />
# Why do I have to <code>macosx-app</code> my binaries?<br />
#* 2009-04-01: we don't know for sure yet. <code>macosx-app</code> is just a shell script that runs <code>Rez</code> and also creates an application bundle. If you are a MacOS developer, especially a wxWidgets one, we would love some help answering this question.<br />
#* 2009-11-24: Please see also Andy Gimblett's [https://github.com/gimbo/cabal-macosx cabal-macosx] project<br />
<br />
[[Category:wxHaskell|MacOS X]]</div>EricKowhttps://wiki.haskell.org/index.php?title=WxHaskell/Mac&diff=45848WxHaskell/Mac2012-05-31T11:06:15Z<p>EricKow: wxcore sample</p>
<hr />
<div>== Installing on MacOS X ==<br />
<br />
<ol><br />
<li> Install the Developer Tools<br />
<li> Install wxWidgets 2.9 by hand<br />
<ul><br />
<li>If you use HomeBrew:<br />
<br><code>brew install wxmac --devel</code><br />
<br>or, on Lion, possibly <code>brew install wxmac --use-llvm --devel</code><br />
<li>If you use MacPorts:<br><br />
<code><br />
sudo port install wxWidgets-devel +universal<br />
</code><br />
</ul><br />
<li> Check your path to make sure you are using your wxWidgets and not the default Mac one<br />
<li> <code>cabal install wx cabal-macosx</code><br />
<li>Compile and run a [https://raw.github.com/jodonoghue/wxHaskell/master/samples/wxcore/HelloWorld.hs sample wxcore application]:<br />
<br><pre>ghc --make HelloWorld.hs<br />
cabal-macosx HelloWorld<br />
./HelloWorld.app/Contents/MacOS/HelloWorld<br />
</pre>(see note 2012-04-24-MacPorts if you use MacPorts)<br />
</li><br />
</ol><br />
<br />
== Known working configurations ==<br />
<br />
{|<br />
!|Date<br />
!|Arch<br />
!|OS/XCode<br />
!|GHC<br />
!|Haskell Platform<br />
!|wxWidgets<br />
!|wxHaskell<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), XCode 4.3<br />
|7.4.1<br />
|<br />
|2.9.3 (HomeBrew)<br />
|0.90 (see notes)<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), Xcode 4.3<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (HomeBrew)<br />
|0.90<br />
|-<br />
|2012-04<br />
|Intel 32-bit<br />
|Snow Leopard (10.6.8), Xcode 3.2.6<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (MacPorts)<br />
|0.90 (see notes)<br />
|}<br />
<br />
== Notes ==<br />
<br />
These notes tend to be a bit ephemeral and are thus dated to help you figure out if they may still apply or not.<br />
<br />
* 2012-04-17: The MacPorts version of wxWidgets 2.9.3 can be used, though I added a few flags to the Portfile. I seem to have a few issues with functionality, but they may not necessarily be related to MacPorts.<br />
* 2012-04-14: On MacOS X Lion, to install wxWidgets 2.9 with HomeBrew, you may need to run <code>brew install wxmac --use-llvm --devel</code><br />
* 2012-04-24 MacPorts: If you use MacPorts, you may run into a problem with the iconv library. Tell GHC that you prefer the system libraries first.<br />
<br><code>ghc HelloWorld.hs -L/usr/lib</code><br />
<br />
== Using wxHaskell on MacOS X platforms ==<br />
<br />
Even though graphical applications on MacOS X look great, it is still a developer's nightmare to get them working :-). Furthermore, the MacOS X port of wxWidgets is the least mature and still has some quirks. This page describes how to circumvent some of the pitfalls.<br />
<br />
<br />
<ul><br />
<li>Executables generated with GHC do not work when executed directly if they use the graphical API; they need to be upgraded into so-called [https://en.wikipedia.org/wiki/Application_Bundle application bundles] for MacOS X. Use the [https://github.com/gimbo/cabal-macosx cabal-macosx] package to automate this. It can be integrated with Cabal and/or used as a standalone <code>macosx-app</code> script.<br />
</li><br />
<li><p>''Note: The following no longer applies to (future) versions of <code>wxcore > 0.90</code>.''</p><br />
<p>Due to complicated MacOS X restrictions, graphical wxHaskell applications do not work directly when used from GHCi. Fortunately, Wolfgang Thaller has kindly provided an ingenious [http://wxhaskell.sourceforge.net/download/EnableGUI.hs Haskell module] that solves this problem. Just import the (compiled) module [http://wxhaskell.sourceforge.net/download/EnableGUI.hs <tt>EnableGUI</tt>] in your program and issue the following command to run <tt>main</tt> from your GHCi prompt:</p><br />
<pre>&gt; enableGUI &gt;&gt; main</pre><br />
<p>Compiling and using enableGUI needs some command line flags:</p><br />
<pre>&gt; ghc -XForeignFunctionInterface -c EnableGUI.hs<br />
&gt; ghci -framework Carbon HelloWorld.hs<br />
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help<br />
Loading package base ... linking ... done.<br />
Loading object (framework) Carbon ... done<br />
final link ... done<br />
[2 of 2] Compiling Main ( Main.hs, interpreted )<br />
Ok, modules loaded: Main, EnableGUI.<br />
*Main&gt; enableGUI<br />
*Main&gt; main</pre><br />
</li><br />
<li><p>The dynamic link libraries used by wxHaskell cannot always be found. If your application seems to start (the icon bounces) but then terminates mysteriously, you need to set the dynamic link library search path to the wxHaskell library directory. For example:</p><br />
<pre>&gt; setenv DYLD_LIBRARY_PATH /usr/local/wxhaskell/lib</pre><br />
<br />
or <br />
<br />
<pre>&gt; setenv DYLD_LIBRARY_PATH $HOME/.cabal/local/lib/wxhaskell-0.11.0/lib</pre></li></ul><br />
<br />
== Troubleshooting ==<br />
<br />
See [[../Troubleshooting]] for help getting your wxHaskell applications running.<br />
<br />
# Why do I have to <code>macosx-app</code> my binaries?<br />
#* 2009-04-01: we don't know for sure yet. <code>macosx-app</code> is just a shell script that runs <code>Rez</code> and also creates an application bundle. If you are a MacOS developer, especially a wxWidgets one, we would love some help answering this question.<br />
#* 2009-11-24: Please see also Andy Gimblett's [https://github.com/gimbo/cabal-macosx cabal-macosx] project<br />
<br />
[[Category:wxHaskell|MacOS X]]</div>EricKowhttps://wiki.haskell.org/index.php?title=WxHaskell/Mac&diff=45847WxHaskell/Mac2012-05-31T11:04:32Z<p>EricKow: hello world?</p>
<hr />
<div>== Installing on MacOS X ==<br />
<br />
<ol><br />
<li> Install the Developer Tools<br />
<li> Install wxWidgets 2.9 by hand<br />
<ul><br />
<li>If you use HomeBrew:<br />
<br><code>brew install wxmac --devel</code><br />
<br>or, on Lion, possibly <code>brew install wxmac --use-llvm --devel</code><br />
<li>If you use MacPorts:<br><br />
<code><br />
sudo port install wxWidgets-devel +universal<br />
</code><br />
</ul><br />
<li> Check your path to make sure you are using your wxWidgets and not the default Mac one<br />
<li> <code>cabal install wx cabal-macosx</code><br />
<li>Compile and run a [https://raw.github.com/jodonoghue/wxHaskell/master/samples/wxcore/HelloWorld.hs sample wxcore application]:<br />
<br><pre>ghc --make HelloWorld.hs<br />
cabal-macosx HelloWorld<br />
./HelloWorld.app/Contents/MacOS/HelloWorld<br />
</pre><br />
</li><br />
</ol><br />
<br />
<ul><br />
<li>If you use MacPorts, you may run into a problem with the iconv library. Tell GHC that you prefer the system libraries first.<br />
<br><code>ghc HelloWorld.hs -L/usr/lib</code><br />
</ul><br />
<br />
== Known working configurations ==<br />
<br />
{|<br />
!|Date<br />
!|Arch<br />
!|OS/XCode<br />
!|GHC<br />
!|Haskell Platform<br />
!|wxWidgets<br />
!|wxHaskell<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), XCode 4.3<br />
|7.4.1<br />
|<br />
|2.9.3 (HomeBrew)<br />
|0.90 (see notes)<br />
|-<br />
|2012-04<br />
|Intel 64-bit<br />
|Lion (10.7.3), Xcode 4.3<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (HomeBrew)<br />
|0.90<br />
|-<br />
|2012-04<br />
|Intel 32-bit<br />
|Snow Leopard (10.6.8), Xcode 3.2.6<br />
|7.0.4<br />
|2011.4.0.0<br />
|2.9.3 (MacPorts)<br />
|0.90 (see notes)<br />
|}<br />
<br />
== Notes ==<br />
<br />
These notes tend to be a bit ephemeral and are thus dated to help you figure out if they may still apply or not.<br />
<br />
* 2012-04-17: The MacPorts version of wxWidgets 2.9.3 can be used, though I added a few flags to the Portfile. I seem to have a few issues with functionality, but they may not necessarily be related to MacPorts.<br />
* 2012-04-14: On MacOS X Lion, to install wxWidgets 2.9 with HomeBrew, you may need to run <code>brew install wxmac --use-llvm --devel</code><br />
<br />
== Using wxHaskell on MacOS X platforms ==<br />
<br />
Even though graphical applications on MacOS X look great, it is still a developer's nightmare to get them working :-). Furthermore, the MacOS X port of wxWidgets is the least mature and still has some quirks. This page describes how to circumvent some of the pitfalls.<br />
<br />
<br />
<ul><br />
<li>Executables generated with GHC do not work when executed directly if they use the graphical API; they need to be upgraded into so-called [https://en.wikipedia.org/wiki/Application_Bundle application bundles] for MacOS X. Use the [https://github.com/gimbo/cabal-macosx cabal-macosx] package to automate this. It can be integrated with Cabal and/or used as a standalone <code>macosx-app</code> script.<br />
</li><br />
<li><p>''Note: The following no longer applies to (future) versions of <code>wxcore > 0.90</code>.''</p><br />
<p>Due to complicated MacOS X restrictions, graphical wxHaskell applications do not work directly when used from GHCi. Fortunately, Wolfgang Thaller has kindly provided an ingenious [http://wxhaskell.sourceforge.net/download/EnableGUI.hs Haskell module] that solves this problem. Just import the (compiled) module [http://wxhaskell.sourceforge.net/download/EnableGUI.hs <tt>EnableGUI</tt>] in your program and issue the following command to run <tt>main</tt> from your GHCi prompt:</p><br />
<pre>&gt; enableGUI &gt;&gt; main</pre><br />
<p>Compiling and using enableGUI needs some command line flags:</p><br />
<pre>&gt; ghc -XForeignFunctionInterface -c EnableGUI.hs<br />
&gt; ghci -framework Carbon HelloWorld.hs<br />
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help<br />
Loading package base ... linking ... done.<br />
Loading object (framework) Carbon ... done<br />
final link ... done<br />
[2 of 2] Compiling Main ( Main.hs, interpreted )<br />
Ok, modules loaded: Main, EnableGUI.<br />
*Main&gt; enableGUI<br />
*Main&gt; main</pre><br />
</li><br />
<li><p>The dynamic link libraries used by wxHaskell cannot always be found. If your application seems to start (the icon bounces) but then terminates mysteriously, you need to set the dynamic link library search path to the wxHaskell library directory. For example:</p><br />
<pre>&gt; setenv DYLD_LIBRARY_PATH /usr/local/wxhaskell/lib</pre><br />
<br />
or <br />
<br />
<pre>&gt; setenv DYLD_LIBRARY_PATH $HOME/.cabal/local/lib/wxhaskell-0.11.0/lib</pre></li></ul><br />
<br />
== Troubleshooting ==<br />
<br />
See [[../Troubleshooting]] for help getting your wxHaskell applications running.<br />
<br />
# Why do I have to <code>macosx-app</code> my binaries?<br />
#* 2009-04-01: we don't know for sure yet. <code>macosx-app</code> is just a shell script that runs <code>Rez</code> and also creates an application bundle. If you are a MacOS developer, especially a wxWidgets one, we would love some help answering this question.<br />
#* 2009-11-24: Please see also Andy Gimblett's [https://github.com/gimbo/cabal-macosx cabal-macosx] project<br />
<br />
[[Category:wxHaskell|MacOS X]]</div>EricKowhttps://wiki.haskell.org/index.php?title=WrapConc&diff=45736WrapConc2012-05-19T09:26:19Z<p>EricKow: chris says WrapConc is museum material</p>
<hr />
<div>''This code is now likely obsolete and has been archived in the wiki history''</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45735Applications and libraries/Concurrency and parallelism2012-05-19T09:25:23Z<p>EricKow: chris says WrapConc is museum material</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell offers a broad spectrum of tools for developing parallel or concurrent programs. For parallelism, Haskell libraries enable concise high-level parallel programs with results that are guaranteed to be deterministic, i.e., independent of the number of cores and the scheduling being used. Concurrency is supported with lightweight threads and high level abstractions such as [[software transactional memory]] for managing information shared across threads. Distributed programming is still mainly a research area. Some low-level tools (MPI bindings) and research prototypes are available, and new approaches are being developed, such as Cloud Haskell (Erlang-style actors as a Haskell library).<br />
<br />
This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell.<br />
<br />
See also the<br />
[[Parallel|parallel Haskell portal]] for research papers and tutorials on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Strategies and Par ===<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
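As a minimal sketch of the spark machinery both libraries build on (the primitives <code>par</code> and <code>pseq</code> live in GHC's base library and are re-exported as <code>Control.Parallel</code> by the parallel package; the function name <code>nfib</code> is our own):<br />

```haskell
import GHC.Conc (par, pseq)  -- re-exported as Control.Parallel by the parallel package

-- Naive Fibonacci with the two recursive calls evaluated in parallel:
-- x `par` e sparks x for possible evaluation on another capability,
-- and y `pseq` e forces y first, so the current thread computes y
-- while an idle core may pick up the spark for x.
nfib :: Int -> Integer
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` x + y)
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 20)
```

Compiled with <code>ghc -threaded</code> and run with <code>+RTS -N</code>, sparks can be taken by idle capabilities; Strategies layer a compositional interface over this mechanism, while monad-par schedules work explicitly instead.<br />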
<br />
=== Data parallelism ===<br />
<br />
;[http://repa.ouroborus.net/ Repa]<br />
:REgular PArallel arrays: high performance, regular, multi-dimensional, shape polymorphic parallel arrays.<br />
;[https://github.com/AccelerateHS/accelerate Accelerate]<br />
:An embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations).<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
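To give a taste of the Repa style, here is a sketch against the Repa 3 API (the helper name <code>double</code> is our own):<br />

```haskell
import Data.Array.Repa as R  -- from the repa package

-- Double every element of a one-dimensional unboxed array;
-- computeP evaluates the delayed result in parallel across
-- all available capabilities.
double :: Array U DIM1 Int -> IO (Array U DIM1 Int)
double arr = R.computeP (R.map (* 2) arr)

main :: IO ()
main = do
  let xs = R.fromListUnboxed (Z :. 5) [1 .. 5 :: Int]
  ys <- double xs
  print (R.toList ys)
```

The shape-polymorphic operations (<code>map</code>, reductions, permutations) look much like their list counterparts, but the library fuses and parallelises them over the array representation.<br />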
<br />
=== Research efforts ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
=== Concurrent Haskell ===<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
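A tiny sketch of the model (the name <code>runDemo</code> is ours): fork a lightweight thread and collect its result through a shared <code>MVar</code>:<br />

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)

-- The forked thread computes a sum and hands it back via an MVar;
-- takeMVar blocks the main thread until the value is available.
runDemo :: IO Int
runDemo = do
  result <- newEmptyMVar
  _ <- forkIO (putMVar result (sum [1 .. 100 :: Int]))
  takeMVar result

main :: IO ()
main = runDemo >>= print  -- prints 5050
```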
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
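For flavour, a small sketch of an atomic transfer between two shared accounts (the helper names <code>transfer</code> and <code>demo</code> are ours, not part of the library):<br />

```haskell
import Control.Concurrent.STM
  (STM, TVar, atomically, newTVarIO, readTVar, readTVarIO, writeTVar)

-- Move an amount between two accounts in one transaction; the
-- transaction commits as a whole, so no other thread can ever
-- observe the money missing from both accounts.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  readTVar to >>= writeTVar to . (+ amount)

demo :: IO (Int, Int)
demo = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  x <- readTVarIO a
  y <- readTVarIO b
  return (x, y)

main :: IO ()
main = demo >>= print  -- prints (70,30)
```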
<br />
=== Research efforts ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable mode that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research efforts ===<br />
<br />
;[https://github.com/haskell-distributed/distributed-process Cloud Haskell]<br />
:Erlang-style actors for distributed programming in Haskell, with typed channels for extra safety<br />
:*[http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/remote.pdf Haskell for the Cloud]: paper by Epstein et al<br />
:*[http://hackage.haskell.org/package/remote remote package]: working prototype developed in the original paper<br />
:*[https://github.com/haskell-distributed/distributed-process distributed-process] current reimplementation effort<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.<br />
<br />
<br />
[[Category:Parallel]]</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel_GHC_Project&diff=45727Parallel GHC Project2012-05-18T14:43:41Z<p>EricKow: /* The Parallel Haskell Digest */</p>
<hr />
<div>[[Category:Parallel]]<br />
<br />
== Overview ==<br />
<br />
The Parallel GHC Project is an [http://research.microsoft.com MSR]-funded project to push the real-world use of [[Parallel|parallel Haskell]]. The aim is to demonstrate that parallel Haskell can be employed successfully in industrial projects.<br />
<br />
In the last few years GHC has gained impressive support for parallel programming on commodity multi-core systems. In addition to traditional threads and shared variables, it supports pure parallelism, software transactional memory (STM), and data parallelism. With much of this research and development complete, the next stage is to get the technology into more widespread use.<br />
<br />
This project aims to do the engineering work to solve whatever remaining practical problems are blocking organisations from making serious use of parallelism with GHC. The driving force is the ''applications'' rather than the ''technology''.<br />
<br />
The project involves a partnership with [[#Participating organisations|six groups from commercial and scientific organisations]]. Over the course of two years these groups are applying parallel Haskell in their specific domains. They are being supported by GHC HQ and [http://www.well-typed.com/ Well-Typed] who are providing advice on Haskell tools and techniques, and applying engineering effort to resolve any issues that are hindering these groups' progress.<br />
<br />
The project is being coordinated by [http://www.well-typed.com/ Well-Typed] and they are providing the bulk of the support and engineering effort. The project started in the summer of 2010.<br />
<br />
== Project News ==<br />
<br />
=== ThreadScope and friends ===<br />
<br />
We have been continuing our work to make [[ThreadScope]] more helpful and informative in tracking down your parallel and concurrent Haskell performance problems. We now have the ability to collect heap statistics from the GHC runtime system and present them in ThreadScope. These features will be available for users of a recent development GHC (7.5.x) or the eventual 7.6 release. In addition to heap statistics, we have been working on collecting information from hardware performance counters, more specifically adding support for Linux Perf Events. This could be useful for studying IO-heavy programs, the idea being to visualise system calls as being distinct from actual execution of Haskell code.<br />
<br />
=== Cloud Haskell ===<br />
<br />
We are continuing work on the new Cloud Haskell implementation, [http://sneezy.cs.nott.ac.uk/fun/2012-02/coutts-2012-02-28.pdf recently presented] by Duncan Coutts. Lately, we have been focused on reducing message latency. This consists of work in three areas: improving binary serialisation, investigating the implications of using Chan and MVar to pass messages between threads, and perhaps improving the Haskell network library implementation to compete better with a direct C implementation.<br />
<br />
For more information on our implementation, see the [https://github.com/haskell-distributed/distributed-process distributed-process GitHub page] and particularly the updated [https://github.com/haskell-distributed/distributed-process/wiki/New-backend-and-transport-design design document], which incorporates feedback on our initial design proposal.<br />
<br />
== Project artefacts == <br />
<br />
Some of the work by our project partners is available to the public:<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Partner<br />
! Description<br />
! Status<br />
|-<br />
| [http://www.mew.org/~kazu/proj/mighttpd/en/ mightttpd2]<br />
| IIJ<br />
| File/CGI server on top of Warp<br />
| version 2.5.7 released 2012-04-05<br />
|-<br />
| [http://hackage.haskell.org/package/webserver webserver]<br />
| IIJ<br />
| HTTP server library<br />
| version 0.4.6 released 2011-10-05<br />
|-<br />
| [http://hackage.haskell.org/package/wai-app-file-cgi wai-app-file-cgi]<br />
| IIJ<br />
| File/CGI WAI application (used by Mighttpd)<br />
| version 0.5.8 released 2012-04-05<br />
|-<br />
| [http://hackage.haskell.org/package/wai-logger wai-logger]<br />
| IIJ<br />
| Logging system for WAI (used by Mighttpd)<br />
| version 0.1.4 released 2012-02-13<br />
|-<br />
| [http://hackage.haskell.org/package/http-date http-date]<br />
| IIJ<br />
| Fast parser and formatter for HTTP Date<br />
| version 0.0.2 released 2012-02-17<br />
|-<br />
| dns<br />
| IIJ<br />
| DNS library<br />
| version 0.2.0 released 2011-08-31<br />
|-<br />
| [http://www.mew.org/~kazu/proj/iproute/en/ iproute]<br />
| IIJ<br />
| IP routing table<br />
| version 1.2.5 released 2012-04-02<br />
|-<br />
| [http://hackage.haskell.org/package/domain-auth domain-auth]<br />
| IIJ<br />
| Library for Sender Policy Framework, SenderID, DomainKeys and DKIM.<br />
| version 0.2.0 released 2011-08-31<br />
|-<br />
| [http://www.mew.org/~kazu/proj/rpf/en/ RPF]<br />
| IIJ<br />
| Receiver Policy Framework (milter)<br />
| version 0.2.0 released 2011-08-31<br />
|}<br />
<br />
In addition to helping the [[#participating organisations|participating organisations]], the project will, whenever possible, make improvements to libraries and tools that are useful to Haskell users more generally.<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Description<br />
! Status<br />
|-<br />
| multiprocess Threadscope<br />
| profiling of multi-process or distributed Haskell systems such as client/server or MPI programs.<br />
| '''in progress'''<br />
|-<br />
| [https://github.com/bjpop/lfg LFG]<br />
| Haskell implementation of some pseudo random number generators from the SPRNG library<br />
| '''testing'''<br />
|-<br />
| [https://github.com/bjpop/haskell-sprng SPRNG binding]<br />
| Haskell wrapper around SPRNG<br />
| '''in progress'''<br />
|-<br />
| ThreadScope improvements<br />
| new spark profiling tools, GUI enhancements, bug fixes<br />
| version 0.2.1 released 2012-01-14<br />
|-<br />
| ghc-events improvements<br />
| spark events support<br />
| version 0.4.0.0 released 2012-01-14<br />
|-<br />
| gtk2hs maintenance & release<br />
| GHC 7.2 support<br />
| version 0.12.2 released 2011-11-13<br />
|-<br />
| [http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
| Haskell bindings to C MPI library<br />
| version 1.2.1 released 2012-02-15<br />
|-<br />
| rowspan="5" | GHC RTS improvements<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4449 &nbsp;#4449] - GHC 7 can't do IO when daemonized<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4504 &nbsp;#4504] - "awaitSignal Nothing" does not block thread with -threaded<br />
| fixed in 7.0.2<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4512 &nbsp;#4512] - EventLog does not play well with forkProcess<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4514 &nbsp;#4514] - IO manager can deadlock if a file descriptor is closed behind its back<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://hackage.haskell.org/trac/ghc/ticket/4854 &nbsp;#4854] - Validating on a PPC Mac OS X: Fix miscellaneous errors and warnings<br />
| fixed in 7.0.x branch<br />
|-<br />
| [http://www.cse.unsw.edu.au/~chak/haskell/c2hs/ c2hs] improvements<br />
| marshalling functions now can have arguments supplied to them.<br />
| version 0.16.3 released 2011-03-24<br />
|}<br />
<br />
The project will also aim to document existing tools and parallel programming practices, making them accessible to a wider public.<br />
<br />
{| class="wikitable"<br />
! Project<br />
! Description<br />
! Status<br />
|-<br />
| [[ThreadScope Tour]]<br />
| a short guide to using ThreadScope to help analyse parallel program performance<br />
| unveiled 2012-01-14<br />
|-<br />
| rowspan="2" | submissions to TMR 19<br />
| Mighttpd – a High Performance Web Server in Haskell (Kazu Yamamoto)<br />
| submitted<br />
|-<br />
| High Performance Haskell with MPI (Bernie Pope and Dmitry Astapov)<br />
| submitted<br />
|-<br />
| [[Parallel|Parallel Haskell Portal]]<br />
| one-stop resource oriented for users of parallelism and concurrency in Haskell<br />
| unveiled 2011-04-20<br />
|}<br />
<br />
== The Parallel Haskell Digest ==<br />
<br />
We have been publishing a regular newsletter containing project news, other parallel news from around the Haskell community, and short "Word of the Month" articles giving brief introductions to important concepts in parallelism.<br />
<br />
The back issues are here:<br />
<br />
* [http://www.well-typed.com/blog/52 Parallel Haskell Digest 1] with word of the month '''spark'''<br />
* [http://www.well-typed.com/blog/53 Parallel Haskell Digest 2] with word of the month '''thread of execution'''<br />
* [http://www.well-typed.com/blog/55 Parallel Haskell Digest 3] with word of the month '''parallel arrays'''<br />
* [http://www.well-typed.com/blog/56 Parallel Haskell Digest 4] with words of the month '''par''' and '''pseq'''<br />
* [http://www.well-typed.com/blog/58 Parallel Haskell Digest 5] with word of the month '''strategy'''<br />
* [http://www.well-typed.com/blog/60 Parallel Haskell Digest 6] with word of the month '''dataflow''' as in dataflow parallelism<br />
* [http://www.well-typed.com/blog/62 Parallel Haskell Digest 7] (catching up on community news)<br />
* [http://www.well-typed.com/blog/64 Parallel Haskell Digest 8] with word of the month '''MVar'''<br />
* [http://www.well-typed.com/blog/65 Parallel Haskell Digest 9] with the word of the month '''transaction'''<br />
* [http://www.well-typed.com/blog/66 Parallel Haskell Digest 10] with the word of the month '''channel'''<br />
<br />
== Getting involved ==<br />
<br />
Progress reports will be posted to the [http://groups.google.com/group/parallel-haskell parallel Haskell mailing list] and to the [http://www.well-typed.com/blog/ Well-Typed blog].<br />
<br />
The best starting point to get involved is to join the mailing list. Note that the list is for parallel Haskell generally, not just the Parallel GHC Project.<br />
<br />
== Participating organisations ==<br />
<br />
;[http://www.dragonfly.co.nz/ Dragonfly]<br />
:Cloudy Bayes: Hierarchical Bayesian modeling in Haskell<br />
<br />
:The Cloudy Bayes project aims to develop a fast Bayesian model fitter that takes advantage of modern multiprocessor machines. It will support model descriptions in the BUGS model description language (WinBUGS, OpenBUGS, and JAGS). It will be implemented as an embedded domain specific language (EDSL) within Haskell. A wide range of hierarchical Bayesian model structures will be possible, including many of the models used in medical, ecological, and biological sciences.<br />
<br />
:Cloudy Bayes will provide an easy-to-use interface for describing models, running Markov chain Monte Carlo (MCMC) fitters, diagnosing performance and convergence criteria as it runs, and collecting output for post-processing. Haskell's strong type system will be used to ensure that model descriptions make sense, providing a fast, safe development cycle.<br />
<br />
;[http://www.iij-ii.co.jp/en/ IIJ Innovation Institute Inc.]<br />
:Haskell is suitable for many kinds of domain, and GHC's support for lightweight threads makes it attractive for concurrency applications. An exception has been network server programming because GHC 6.12 and earlier have an IO manager that is limited to 1024 network sockets. GHC 7 has a new IO manager implementation that gets rid of this limitation.<br />
<br />
:This project will implement several network servers to demonstrate that Haskell is suitable for network servers that handle a massive number of concurrent connections.<br />
<br />
;[http://www.lanl.gov/ Los Alamos National Laboratory]<br />
:This project will use parallel Haskell to implement high-performance Monte Carlo algorithms, a class of algorithms which use randomness to sample large or otherwise intractable solution spaces. The initial goal is a particle-based MC algorithm suitable for modeling the flow of radiation, with application to problems in astrophysics. From this, the project is expected to move to identification of suitable abstractions for expressing a wider variety of Monte Carlo algorithms, and using models for different physical phenomena.<br />
<br />
;[http://www.willowgarage.com/ Willow Garage Inc.]<br />
:Distributed Rigid Body Dynamics in ROS<br />
<br />
:Willow Garage seeks a high-level representation for a distributed rigid body dynamics simulation, capable of excellent parallel speedup on current and foreseeable hardware, yet linking to existing optimized libraries for low-level message passing and matrix math.<br />
<br />
:This project will drive API, performance, and profiling tool requirements for Haskell's interface to the Message Passing Interface (MPI) specification, an industry-standard in High Performance Computing (HPC), as used on clusters of many nodes.<br />
<br />
:Competing internal initiatives use C++/MPI and CUDA directly.<br />
<br />
:Willow Garage aims to lay the groundwork for personal robotics applications in everyday life. ROS ([http://ros.org Robot Operating System]) is an open source, meta-operating system for your robot.<br />
<br />
; [http://www.tid.es/en/ Telefónica I+D]<br />
<br />
: This project is to demonstrate parallel Haskell technology using the example of graph algorithms in large graphs representing social networks. The current work is on parallel versions of the [http://en.wikipedia.org/wiki/Bron%E2%80%93Kerbosch_algorithm Bron-Kerbosch algorithm] for finding maximal cliques in a graph. The initial goal is to demonstrate good speedups on multi-core, and the overall aim is to demonstrate good speedups for a distributed version of the algorithm using Cloud Haskell.<br />
<br />
; [http://www.vett.co.uk/ VETT UK]<br />
<br />
: VETT are working on a transaction processing application using Cloud Haskell. More details will be available shortly.</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel/Digest&diff=45726Parallel/Digest2012-05-18T14:28:00Z<p>EricKow: PH Digest 10</p>
<hr />
<div>The Parallel Haskell Digest is a newsletter aiming to show off all the work that's going on using parallelism and concurrency in the Haskell community.<br />
<br />
We hope to offer a monthly recap of news, interesting blog posts and discussions about parallelism in Haskell. For people who are new to parallelism and concurrency in Haskell, or maybe just have a passing interest, we hope to offer small tastes of parallelism and concurrency, with regular features like the Word of the Month, Featured Code and Parallel Puzzlers.<br />
<br />
== Archives ==<br />
<br />
# [http://www.well-typed.com/blog/66 2012-05-18] - channel<br />
# [http://www.well-typed.com/blog/65 2012-04-20] - transaction<br />
# [http://www.well-typed.com/blog/64 2012-03-02] - MVar (lock)<br />
# [http://www.well-typed.com/blog/62 2011-12-24] - (news catch up)<br />
# [http://www.well-typed.com/blog/60 2011-10-06] - dataflow<br />
# [http://www.well-typed.com/blog/58 2011-08-21] - strategy<br />
# [http://www.well-typed.com/blog/56 2011-07-22] - par and pseq<br />
# [http://www.well-typed.com/blog/55 2011-06-16] - Parallel Arrays<br />
# [http://www.well-typed.com/blog/53 2011-05-11] - Threads<br />
# [http://www.well-typed.com/blog/52 2011-03-31] - Spark and Hulk</div>EricKowhttps://wiki.haskell.org/index.php?title=Performance&diff=45677Performance2012-05-17T08:34:56Z<p>EricKow: /* Additional Tips */ link to TS home page</p>
<hr />
<div>{{Performance infobox}}<br />
Welcome to the '''Haskell Performance Resource''', the collected wisdom on how to make your Haskell programs go faster. <br />
<br />
== Introduction ==<br />
<br />
One question that often comes up is along the general lines of "Can I write this program in Haskell so that it performs as well as, or better than, the same program written in some other language?"<br />
<br />
This is a difficult question to answer in general because Haskell is a language, not an implementation. Performance can only be measured relative to a specific language implementation.<br />
<br />
Moreover, it's often not clear whether two programs that supposedly have the same functionality really do the same thing. Different languages sometimes require very different ways of expressing the same intent. Certain types of bug that are common in other languages are rare in typical Haskell programs, and vice versa, due to strong typing, automatic memory management and lazy evaluation.<br />
<br />
Nonetheless, it is usually possible to write a Haskell program that performs as well as, or better than, the same program written in any other language. The main caveat is that you may have to modify your code significantly in order to improve its performance. Compilers such as GHC are good at eliminating layers of abstraction, but they aren't perfect, and often need some help. <br />
<br />
There are many non-invasive techniques: compiler options, for example. Then there are techniques that require adding some small amounts of performance cruft to your program: strictness annotations, for example. If you still don't get the best performance, though, it might be necessary to resort to larger refactorings.<br />
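As a small sketch of what a strictness annotation can look like (illustrative code, not from this page), a bang pattern on an accumulator forces it at every step instead of letting a chain of thunks build up:<br />

```haskell
{-# LANGUAGE BangPatterns #-}

-- Illustrative only: a strict accumulating mean. The bangs force the
-- running sum and count at each step, so no thunk chain accumulates.
mean :: [Double] -> Double
mean = go 0 0
  where
    go :: Double -> Int -> [Double] -> Double
    go !s !n []       = s / fromIntegral n
    go !s !n (x : xs) = go (s + x) (n + 1) xs

main :: IO ()
main = print (mean [1 .. 10])
```

Without the bangs this function can exhaust the stack on large inputs; with them the accumulators stay evaluated throughout.<br />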
<br />
Sometimes the code tweaks required to get the best performance are non-portable, perhaps because they require language extensions that aren't implemented in all compilers (e.g. unboxing), or because they require using platform-specific features or libraries. This might not be acceptable in your setting.<br />
<br />
If the worst comes to the worst, you can always write your critical code in C and use the FFI to call it. Beware of the boundaries though - marshaling data across the FFI can be expensive, and multi-language memory management can be complex and error-prone. It's usually better to stick to Haskell if possible.<br />
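As a minimal sketch of such a call (the marshalling here is trivial, since only a <tt>Double</tt> crosses the boundary), this imports <tt>sin</tt> from the C maths library:<br />

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Import sin directly from the C maths library. Only a Double crosses
-- the FFI boundary, so this call is cheap; marshalling structures or
-- arrays across the boundary is where the real expense appears.
foreign import ccall unsafe "math.h sin" c_sin :: Double -> Double

main :: IO ()
main = print (c_sin 0)
```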
<br />
== Basic techniques ==<br />
<br />
The key tool to use in making your Haskell program run faster is ''profiling''. Profiling is provided by [[GHC]] and [[nhc98]]. There is ''no substitute'' for finding where your program's time/space is ''really'' going, as opposed to where you imagine it is going.<br />
<br />
Another point to bear in mind: By far the best way to improve a program's performance ''dramatically'' is to use better algorithms. Once profiling has thrown the spotlight on the guilty time-consumer(s), it may be better to re-think your program than to try all the tweaks listed below.<br />
<br />
Another extremely efficient way to make your program snappy is to use library code that has been Seriously Tuned By Someone Else. You ''might'' be able to write a better sorting function than the one in <tt>Data.List</tt>, but it will take you much longer than typing <tt>import Data.List</tt>.<br />
<br />
We have chosen to organise the rest of this resource first by Haskell construct (data types, pattern matching, integers), and then within each category to describe techniques that apply across implementations, and also techniques that are specific to a certain Haskell implementation (e.g. GHC). There are some implementation-specific techniques that apply in general - those are linked from the [[Haskell Performance Resource#General_Implementation-Specific_Techniques | General Implementation-Specific Techniques]] section below.<br />
<br />
== Haskell constructs ==<br />
<br />
* [[/Data Types/]]<br />
* [[/Functions/]]<br />
* [[/Overloading/]]<br />
* [[/FFI/]]<br />
* [[/Arrays/]]<br />
* [[/Strings/]]<br />
* [[/Integers/]]<br />
* [[/IO | I/O ]]<br />
* [[/Floating Point/]]<br />
* [[/Concurrency/]]<br />
* [[/Modules/]]<br />
* [[/Monads/]]<br />
<br />
== General techniques ==<br />
<br />
* [[/Strictness/]]<br />
* [[/Laziness/]]<br />
* [[/Space | Avoiding space leaks]]<br />
* [[/Accumulating parameter|Accumulating parameters]]<br />
* [[Stack_overflow|Avoiding stack overflow]]<br />
<br />
== Compiler specific techniques ==<br />
<br />
* [[/GHC/]]<br />
* [[/NHC98| nhc98]]<br />
* [[/Hugs/]]<br />
* [[/Yhc/]]<br />
* [[/JHC/]]<br />
<br />
== More information ==<br />
<br />
* There are plenty of good examples of Haskell code written for performance in the [http://shootout.alioth.debian.org/ The Computer Language Shootout Benchmarks]<br />
* And many alternatives, with discussion, on the [http://web.archive.org/web/20060209215702/http://haskell.org/hawiki/ShootoutEntry old Haskell wiki]<br />
* There are ~100 [http://blog.johantibell.com/2010/09/slides-from-my-high-performance-haskell.html slides on High-Performance Haskell] from the 2010 CUFP tutorial on that topic. <br />
<br />
== Specific comparisons of data structures ==<br />
=== Data.Sequence vs. lists ===<br />
<br />
Data.Sequence has complexity O(log(min(i,n-i))) for access, insertion and update to position i of a sequence of length n.<br />
<br />
List has complexity O(i).<br />
<br />
List is a non-trivial constant-factor faster for operations at the head (cons and head), making it a more efficient choice for stack-like and stream-like access patterns. Data.Sequence is faster for every other access pattern, such as queue and random access.<br />
<br />
See the following program for proof:<br />
<haskell><br />
import Data.Sequence<br />
<br />
insert_million 0 sequence = sequence<br />
insert_million n sequence = insert_million (n - 1) (sequence |> n)<br />
<br />
main = print (Data.Sequence.length (insert_million 1000000 empty))<br />
</haskell><br />
<pre><br />
$ ghc -O2 --make InsertMillionElements.hs && time ./InsertMillionElements +RTS -K100M<br />
1000000<br />
real 0m7.238s<br />
user 0m6.804s<br />
sys 0m0.228s<br />
</pre><br />
<haskell><br />
insert_million 0 list = reverse list<br />
insert_million n list = insert_million (n - 1) (n:list)<br />
<br />
main = print (length (insert_million 1000000 []))<br />
</haskell><br />
<pre><br />
$ ghc -O2 --make InsertMillionElements.hs && time ./InsertMillionElementsList +RTS -K100M<br />
1000000<br />
real 0m0.588s<br />
user 0m0.528s<br />
sys 0m0.052s<br />
</pre><br />
Lists are substantially faster on this micro-benchmark.<br />
<br />
A sequence uses between 5/6 and 4/3 times as much space as the equivalent list (assuming an overhead of one word per node, as in GHC).<br />
If only deque operations are used, the space usage will be near the lower end of the range, because all internal nodes will be ternary.<br />
Heavy use of split and append will result in sequences using approximately the same space as lists.<br />
In detail:<br />
* a list of length ''n'' consists of ''n'' cons nodes, each occupying 3 words.<br />
* a sequence of length ''n'' has approximately ''n''/(''k''-1) nodes, where ''k'' is the average arity of the internal nodes (each 2 or 3). There is a pointer, a size and overhead for each node, plus a pointer for each element, i.e. ''n''(3/(''k''-1) + 1) words.<br />
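To make the arithmetic concrete, this small sketch (not part of the original analysis) plugs the two extreme arities into the formula above and compares with the 3 words per list cell:<br />

```haskell
-- Per-element space (in words) for Data.Sequence, from the formula
-- above; k is the average arity of internal nodes (between 2 and 3).
seqWordsPerElem :: Double -> Double
seqWordsPerElem k = 3 / (k - 1) + 1

-- A cons cell costs 3 words per element.
listWordsPerElem :: Double
listWordsPerElem = 3

main :: IO ()
main = mapM_ ratio [2, 3]
  where
    ratio k = putStrLn (show k ++ " -> " ++ show (seqWordsPerElem k / listWordsPerElem))
```

This reproduces the quoted range: a ratio of 4/3 at ''k'' = 2 and 5/6 at ''k'' = 3.<br />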
<br />
== Additional Tips ==<br />
<br />
* Use strict returns ( return $! ...) unless you absolutely need them lazy.<br />
* Profile, profile, profile - understand who is hanging on to the memory (+RTS -hc) and how it's being used (+RTS -hb).<br />
* Use +RTS -p to understand who's doing all the allocations and where your time is being spent.<br />
* Approach profiling like a science experiment - make one change, observe if anything is different, rollback and make another change - observe the change. Keep notes!<br />
* Use [[ThreadScope]] to visualize GHC eventlog traces.<br />
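To illustrate the first tip (names here are illustrative): <tt>$!</tt> forces the value to weak head normal form before <tt>return</tt> wraps it, so the caller receives an evaluated result rather than a thunk:<br />

```haskell
-- Sketch of a strict return: ($!) evaluates the sum before it is
-- wrapped in IO, so no large unevaluated thunk is handed back.
sumStrict :: [Int] -> IO Int
sumStrict xs = return $! sum xs

main :: IO ()
main = sumStrict [1 .. 1000000] >>= print
```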
<br />
[[Category:Idioms]]<br />
[[Category:Language]]<br />
[[Category:Performance|*]]</div>EricKowhttps://wiki.haskell.org/index.php?title=ThreadScope&diff=45676ThreadScope2012-05-17T07:48:52Z<p>EricKow: call TS Tour the user guide</p>
<hr />
<div>'''ThreadScope''' is a tool for performance profiling of parallel Haskell programs.<br />
<br />
The ThreadScope program allows us to debug the parallel performance of Haskell programs. Using ThreadScope we can check to see that work is well balanced across the available processors and spot performance issues relating to garbage collection or poor load balancing.<br />
<br />
== Getting Started ==<br />
<br />
Have gtk on your machine? (Note that you don't need all of gtk and gtk2hs, e.g., libxml is unneeded.) See the gtk2hs install instructions <span style="font-size:8pt">([[Gtk2Hs/Windows|Windows]] ∙ [[Gtk2Hs/Mac|Mac]] ∙ [[Gtk2Hs/Linux|Linux]])</span> and then<br />
<br />
cabal update<br />
gtk-demo<br />
cabal install gtk<br />
cabal install threadscope<br />
<br />
Next, check out the user guide. We call it the [[ThreadScope Tour]].<br />
<br />
== Features ==<br />
<br />
ThreadScope is a graphical viewer for thread profile information generated by the Glasgow Haskell compiler (GHC). An example is shown below:<br />
<br />
[[Image:ThreadScope-Screenshot1.png]]<br />
<br />
ThreadScope version 0.2.0 can be used to help debug performance issues with parallel and concurrent Haskell programs. The program has the following features.<br />
<br />
* The program displays the activity on each Haskell Execution Context (HEC), which roughly corresponds to an operating system thread. For each HEC you can see whether it is running a Haskell thread or performing garbage collection. You can find out when Haskell threads are ready to run and why a Haskell thread was suspended.<br />
<br />
* An activity profile indicates the rough utilization of the HECs, and when the number of HECs is greater than the number of processing cores this gives a rough guide to the overall utilization.<br />
<br />
* You can place bookmarks at various points in the time profile to help with navigation. Bookmarks can be emitted from Haskell code using the `traceEvent` action.<br />
<br />
* You can view the rate at which "par sparks" are created and evaluated during the program, and the size of the spark queue on each HEC. (This feature requires GHC-7.3 or later which is currently the [http://hackage.haskell.org/trac/ghc/wiki/Building development version].)<br />
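As a sketch of the bookmark events mentioned above (assuming GHC 7.4's <tt>base-4.5</tt>, which provides <tt>Debug.Trace.traceEventIO</tt>), a program can label phases in its eventlog like this; compile with <tt>-eventlog</tt> and run with <tt>+RTS -ls</tt> to see the markers in ThreadScope:<br />

```haskell
import Debug.Trace (traceEventIO)

-- Emit labelled markers into the GHC eventlog around a unit of work.
-- Compiled with -eventlog and run with +RTS -ls, the markers appear
-- as events that ThreadScope can show as bookmarks.
main :: IO ()
main = do
  traceEventIO "START summing"
  print (sum [1 .. 1000000 :: Int])
  traceEventIO "END summing"
```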
<br />
== Using ThreadScope ==<br />
<br />
To compile a program for parallel profiling use the -eventlog flag and you will also want to use the -threaded flag to compile with the multi-threaded runtime e.g.<br />
<br />
ghc -threaded -eventlog -rtsopts --make Wombat.hs<br />
<br />
To execute a program and generate a profile use the -ls flag after +RTS. Then pass the profile to ThreadScope:<br />
<br />
./Wombat +RTS -ls -N2<br />
threadscope Wombat.eventlog # on Windows: Wombat.exe.eventlog <br />
<br />
The -N2 flag specifies the use of two Haskell Execution Contexts (i.e. cores). Once the program has been run it will produce a profile file called Wombat.eventlog or Wombat.exe.eventlog (depending on your operating system). You can now view this file with threadscope by specifying the eventlog filename as a command line argument or by navigating to it from the File menu of ThreadScope.<br />
<br />
For more detailed instructions, have a look at the [[ThreadScope Tour]].<br />
<br />
== Installing ThreadScope ==<br />
<br />
The recommendation is to use the [http://hackage.haskell.org/platform/ Haskell Platform]. This includes GHC and the cabal package tool. At minimum you need GHC-6.12.<br />
<br />
ThreadScope itself is [http://hackage.haskell.org/package/threadscope available from hackage].<br />
<br />
ThreadScope has a dependency on the Haskell Gtk+ binding (Gtk2Hs) which involves a bit of manual work on Windows and Mac OS X to install the Gtk+ C libraries.<br />
<br />
See the Gtk2Hs installation instructions for details:<br />
<br />
* [[Gtk2Hs/Linux]] (and other unix)<br />
* [[Gtk2Hs/Windows]]<br />
* [[Gtk2Hs/Mac]]<br />
<br />
Once you have the Gtk+ C libraries installed it is just a matter of running:<br />
<br />
cabal install threadscope<br />
<br />
You can now try to run ThreadScope to make sure it built correctly by viewing a built-in sample trace:<br />
<br />
threadscope --test ch8<br />
<br />
You should see something like<br />
[[Image:ThreadScope-ch8.png|600px]]<br />
<br />
== More Information ==<br />
<br />
* [http://research.microsoft.com/apps/pubs/default.aspx?id=80976 Parallel Performance Tuning for Haskell].<br />
* [http://research.microsoft.com/apps/pubs/default.aspx?id=79856 Runtime Support for Multicore Haskell].<br />
* [http://research.microsoft.com/apps/pubs/default.aspx?id=74058 A Tutorial on Parallel and Concurrent Programming in Haskell].<br />
* A [http://www.youtube.com/watch?v=qZXq8fxebKU video] by Simon Marlow which demos ThreadScope.<br />
<br />
Please send comments, corrections etc. to [mailto:satnams@microsoft.com satnams@microsoft.com]<br />
<br />
You may also wish to join the [http://groups.google.com/group/parallel-haskell parallel-haskell google group].<br />
<br />
== Development and reporting bugs ==<br />
<br />
There is a [http://trac.haskell.org/ThreadScope/ bug tracker and developer wiki].<br />
<br />
The source for ghc-events and threadscope is available:<br />
<br />
darcs get http://code.haskell.org/ghc-events/<br />
darcs get http://code.haskell.org/ThreadScope/<br />
<br />
== People ==<br />
<br />
* Donnie Jones, donnie@darthik.com<br />
* Simon Marlow, simonmar@microsoft.com, http://www.haskell.org/~simonmar/<br />
* Satnam Singh, s.singh@acm.org, http://cs.bham.ac.uk/~singhsu/<br />
* Duncan Coutts, duncan@well-typed.com, http://www.well-typed.com/who_we_are<br />
* Mikolaj Konarski, mikolaj@well-typed.com<br />
* Nicolas Wu, nick@well-typed.com<br />
* Eric Kow, eric@well-typed.com<br />
<br />
== Publications and Talks ==<br />
<br />
Simon Marlow, Simon Peyton Jones, and Satnam Singh, [http://research.microsoft.com/apps/pubs/default.aspx?id=79856 Runtime Support for Multicore Haskell], in ''ICFP 2009'', Association for Computing Machinery, Inc., 5 September 2009<br />
<br />
Don Jones Jr., Simon Marlow, and Satnam Singh, [http://research.microsoft.com/apps/pubs/default.aspx?id=80976 Parallel Performance Tuning for Haskell], in ''ACM SIGPLAN 2009 Haskell Symposium'', Association for Computing Machinery, Inc., 3 September 2009<br />
<br />
Duncan Coutts, Mikolaj Konarski and Andres Loeh, [[HaskellImplementorsWorkshop/2011/Coutts|Spark Visualization in ThreadScope]], Haskell Implementors Workshop 2011<br />
<br />
[[Category:ThreadScope]]</div>EricKowhttps://wiki.haskell.org/index.php?title=ThreadScope_Tour&diff=45675ThreadScope Tour2012-05-17T07:47:13Z<p>EricKow: maybe better wording?</p>
<hr />
<div>__NOTOC__<br />
''A guided tour of ThreadScope''<br />
<br />
Have parallel Haskell but not enough performance? Try [[ThreadScope]]! It won't fix your program for you, but it may help you to understand what is slowing your program down. We in the ThreadScope team have put together this user guide to help you get started and make the most of this tool.<br />
<br />
You can also treat this manual as a tutorial. We'll be working through concrete examples on using ThreadScope to debug the performance of parallel programs. We aim to keep each module in this tutorial self-contained, so you can either follow the progression suggested or jump straight to the sections you need.<br />
<br />
<div class="subtitle">Software</div><br />
<br />
This tutorial is written with the following software versions in mind.<br />
<br />
* [[ThreadScope]] 0.2.1<br />
* GHC 7.4. (earlier versions work, but lack more advanced features like spark events)<br />
<br />
<div class="subtitle">Getting started</div><br />
<br />
[[Image:ThreadScope-ch8.png|thumb]]<br />
<br />
# [[ThreadScope_Tour/Install|Installation]]: install ThreadScope and run a sample trace<br />
# [[ThreadScope_Tour/Run|Hello world]]: run ThreadScope on a small test program<br />
<br />
<div class="subtitle">Basic skills</div><br />
<br />
[[Image:ThreadScope-sudoku2.png|thumb]]<br />
<br />
<ol start="3" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/Statistics|Initial statistics]]: collect some simple statistics</li><br />
<li>[[ThreadScope_Tour/Profile|Profile]]: examine the profile for a real program </li><br />
<li>[[ThreadScope_Tour/Profile2|Profile 2]]: examine the profile for an improved program</li><br />
<li>[[ThreadScope_Tour/Zoom|Zoom]]: zoom in to see performance behaviour at a finer resolution</li><br />
<li>[[ThreadScope_Tour/Bookmark|Bookmark]]: place a temporary marker in the eventlog</li><br />
<li>[[ThreadScope_Tour/Consolidate|Consolidate]]: tease out the sequential parts of code</li></ol><br />
<br />
<div class="subtitle">Digging into a program with spark events</div><br />
<br />
[[Image:spark-lifecycle.png|thumb]]<br />
<br />
<ol start="9" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/SparkOverview|Spark overview]]<br />
<ul><br />
<li>[[ThreadScope_Tour/RTS|GHC RTS flags]]: a subset of flags relevant to ThreadScope</li><br />
<li>[[Special:FilePath/spark-lifecycle.png|Spark lifecycle]]: Lifecycle of a spark</li></ul><br />
</li><br />
<li>[[ThreadScope_Tour/Spark|Spark rates]]: study spark creation/conversion <br />
</li><br />
<li>[[ThreadScope_Tour/Spark2|Spark rates 2]]: spark debugging continued</li><br />
</ol><br />
<br />
<div class="subtitle">Reference</div><br />
<br />
* [[ThreadScope_Tour/Downloads|Downloads]]: examples used in this tutorial<br />
<br />
''This tutorial was initially written by Well-Typed in the context of the [[Parallel GHC Project]]. [mailto:eric@well-typed.com Feedback] would be most appreciated!''<br />
<br />
[[Category:ThreadScope]]</div>EricKowhttps://wiki.haskell.org/index.php?title=ThreadScope_Tour&diff=45674ThreadScope Tour2012-05-17T07:44:36Z<p>EricKow: </p>
<hr />
<div>__NOTOC__<br />
''A guided tour of ThreadScope''<br />
<br />
Have parallel Haskell but not enough performance? Try [[ThreadScope]]! It won't fix your program for you, but it may help you to understand what is slowing your program down. To help you get started and make the most of this tool, the ThreadScope team have put together a little tour guide. We'll be working through concrete examples on using ThreadScope to debug the performance of parallel programs. We aim to keep each module in this tutorial self-contained, so you can either follow the progression suggested or jump to just the sections we need.<br />
<br />
<div class="subtitle">Software</div><br />
<br />
This tutorial is written with the following software versions in mind.<br />
<br />
* [[ThreadScope]] 0.2.1<br />
* GHC 7.4. (earlier versions work, but lack more advanced features like spark events)<br />
<br />
<div class="subtitle">Getting started</div><br />
<br />
[[Image:ThreadScope-ch8.png|thumb]]<br />
<br />
# [[ThreadScope_Tour/Install|Installation]]: install ThreadScope and run a sample trace<br />
# [[ThreadScope_Tour/Run|Hello world]]: run ThreadScope on a small test program<br />
<br />
<div class="subtitle">Basic skills</div><br />
<br />
[[Image:ThreadScope-sudoku2.png|thumb]]<br />
<br />
<ol start="3" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/Statistics|Initial statistics]]: collect some simple statistics</li><br />
<li>[[ThreadScope_Tour/Profile|Profile]]: examine the profile for a real program </li><br />
<li>[[ThreadScope_Tour/Profile2|Profile 2]]: examine the profile for an improved program</li><br />
<li>[[ThreadScope_Tour/Zoom|Zoom]]: zoom in to see performance behaviour at a finer resolution</li><br />
<li>[[ThreadScope_Tour/Bookmark|Bookmark]]: place a temporary marker in the eventlog</li><br />
<li>[[ThreadScope_Tour/Consolidate|Consolidate]]: tease out the sequential parts of code</li></ol><br />
<br />
<div class="subtitle">Digging into a program with spark events</div><br />
<br />
[[Image:spark-lifecycle.png|thumb]]<br />
<br />
<ol start="9" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/SparkOverview|Spark overview]]<br />
<ul><br />
<li>[[ThreadScope_Tour/RTS|GHC RTS flags]]: a subset of flags relevant to ThreadScope</li><br />
<li>[[Special:FilePath/spark-lifecycle.png|Spark lifecycle]]: Lifecycle of a spark</li></ul><br />
</li><br />
<li>[[ThreadScope_Tour/Spark|Spark rates]]: study spark creation/conversion <br />
</li><br />
<li>[[ThreadScope_Tour/Spark2|Spark rates 2]]: spark debugging continued</li><br />
</ol><br />
<br />
<div class="subtitle">Reference</div><br />
<br />
* [[ThreadScope_Tour/Downloads|Downloads]]: examples used in this tutorial<br />
<br />
''This tutorial was initially written by Well-Typed in the context of the [[Parallel GHC Project]]. [mailto:eric@well-typed.com Feedback] would be most appreciated!''<br />
<br />
[[Category:ThreadScope]]</div>EricKowhttps://wiki.haskell.org/index.php?title=ThreadScope_Tour&diff=45673ThreadScope Tour2012-05-17T07:44:19Z<p>EricKow: compact</p>
<hr />
<div>__NOTOC__<br />
''A guided tour of ThreadScope''<br />
<br />
Have parallel Haskell but not enough performance. Try [[ThreadScope]]! It won't fix your program for you, but it may help you to understand what is slowing your program down. To help you get started and make the most of this tool, the ThreadScope team have put together a little tour guide. We'll be working through concrete examples on using ThreadScope to debug the performance of parallel programs. We aim to keep each module in this tutorial self-contained, so you can either follow the progression suggested or jump to just the sections we need.<br />
<br />
<div class="subtitle">Software</div><br />
<br />
This tutorial is written with the following software versions in mind.<br />
<br />
* [[ThreadScope]] 0.2.1<br />
* GHC 7.4. (earlier versions work, but lack more advanced features like spark events)<br />
<br />
<div class="subtitle">Getting started</div><br />
<br />
[[Image:ThreadScope-ch8.png|thumb]]<br />
<br />
# [[ThreadScope_Tour/Install|Installation]]: install ThreadScope and run a sample trace<br />
# [[ThreadScope_Tour/Run|Hello world]]: run ThreadScope on a small test program<br />
<br />
<div class="subtitle">Basic skills</div><br />
<br />
[[Image:ThreadScope-sudoku2.png|thumb]]<br />
<br />
<ol start="3" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/Statistics|Initial statistics]]: collect some simple statistics</li><br />
<li>[[ThreadScope_Tour/Profile|Profile]]: examine the profile for a real program </li><br />
<li>[[ThreadScope_Tour/Profile2|Profile 2]]: examine the profile for an improved program</li><br />
<li>[[ThreadScope_Tour/Zoom|Zoom]]: zoom in to see performance behaviour at a finer resolution</li><br />
<li>[[ThreadScope_Tour/Bookmark|Bookmark]]: place a temporary marker in the eventlog</li><br />
<li>[[ThreadScope_Tour/Consolidate|Consolidate]]: tease out the sequential parts of code</li></ol><br />
<br />
<div class="subtitle">Digging into a program with spark events</div><br />
<br />
[[Image:spark-lifecycle.png|thumb]]<br />
<br />
<ol start="9" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/SparkOverview|Spark overview]]<br />
<ul><br />
<li>[[ThreadScope_Tour/RTS|GHC RTS flags]]: a subset of flags relevant to ThreadScope</li><br />
<li>[[Special:FilePath/spark-lifecycle.png|Spark lifecycle]]: Lifecycle of a spark</li></ul><br />
</li><br />
<li>[[ThreadScope_Tour/Spark|Spark rates]]: study spark creation/conversion <br />
</li><br />
<li>[[ThreadScope_Tour/Spark2|Spark rates 2]]: spark debugging continued</li><br />
</ol><br />
<br />
<div class="subtitle">Reference</div><br />
<br />
* [[ThreadScope_Tour/Downloads|Downloads]]: examples used in this tutorial<br />
<br />
''This tutorial was initially written by Well-Typed in the context of the [[Parallel GHC Project]]. [mailto:eric@well-typed.com Feedback] would be most appreciated!''<br />
<br />
[[Category:ThreadScope]]</div>EricKowhttps://wiki.haskell.org/index.php?title=ThreadScope_Tour&diff=45672ThreadScope Tour2012-05-17T07:40:19Z<p>EricKow: make it a bit clearer that this is *the* user's guide</p>
<hr />
<div>__NOTOC__<br />
''A guided tour of ThreadScope''<br />
<br />
Want to get the best performance out of your parallel Haskell program? [[ThreadScope]] is just the tool for the job. We in the ThreadScope<br />
team have put together a little user guide to help you get started and<br />
hopefully make the most of ThreadScope. <br />
<br />
In this tutorial, we'll work through concrete examples of using ThreadScope to debug the performance of parallel programs. We aim to keep each module in the tutorial self-contained, so you can either follow the suggested progression or jump straight to the sections you need.<br />
<br />
<div class="subtitle">Software</div><br />
<br />
This tutorial is written with the following software versions in mind.<br />
<br />
* [[ThreadScope]] 0.2.1<br />
* GHC 7.4 (earlier versions work, but lack more advanced features like spark events)<br />
<br />
<div class="subtitle">Getting started</div><br />
<br />
[[Image:ThreadScope-ch8.png|thumb]]<br />
<br />
# [[ThreadScope_Tour/Install|Installation]]: install ThreadScope and run a sample trace<br />
# [[ThreadScope_Tour/Run|Hello world]]: run ThreadScope on a small test program<br />
<br />
<div class="subtitle">Basic skills</div><br />
<br />
[[Image:ThreadScope-sudoku2.png|thumb]]<br />
<br />
<ol start="3" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/Statistics|Initial statistics]]: collect some simple statistics</li><br />
<li>[[ThreadScope_Tour/Profile|Profile]]: examine the profile for a real program </li><br />
<li>[[ThreadScope_Tour/Profile2|Profile 2]]: examine the profile for an improved program</li><br />
<li>[[ThreadScope_Tour/Zoom|Zoom]]: zoom in to see performance behaviour at a finer resolution</li><br />
<li>[[ThreadScope_Tour/Bookmark|Bookmark]]: place a temporary marker in the eventlog</li><br />
<li>[[ThreadScope_Tour/Consolidate|Consolidate]]: tease out the sequential parts of code</li></ol><br />
<br />
<div class="subtitle">Digging into a program with spark events</div><br />
<br />
[[Image:spark-lifecycle.png|thumb]]<br />
<br />
<ol start="9" style="list-style-type: decimal;"><br />
<li>[[ThreadScope_Tour/SparkOverview|Spark overview]]<br />
<ul><br />
<li>[[ThreadScope_Tour/RTS|GHC RTS flags]]: a subset of flags relevant to ThreadScope</li><br />
<li>[[Special:FilePath/spark-lifecycle.png|Spark lifecycle]]: Lifecycle of a spark</li></ul><br />
</li><br />
<li>[[ThreadScope_Tour/Spark|Spark rates]]: study spark creation/conversion <br />
</li><br />
<li>[[ThreadScope_Tour/Spark2|Spark rates 2]]: spark debugging continued</li><br />
</ol><br />
<br />
<div class="subtitle">Reference</div><br />
<br />
* [[ThreadScope_Tour/Downloads|Downloads]]: examples used in this tutorial<br />
<br />
''This tutorial was initially written by Well-Typed in the context of the [[Parallel GHC Project]]. [mailto:eric@well-typed.com Feedback] would be most appreciated!''<br />
<br />
[[Category:ThreadScope]]</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45610Applications and libraries/Concurrency and parallelism2012-05-08T16:06:28Z<p>EricKow: parallel haskell portal</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell offers a broad spectrum of tools for developing parallel or concurrent programs. For parallelism, Haskell libraries enable concise high-level parallel programs with results that are guaranteed to be deterministic, i.e., independent of the number of cores and the scheduling being used. Concurrency is supported with lightweight threads and high-level abstractions such as [[software transactional memory]] for managing information shared across threads. Distributed programming is still mainly a research area. Some low-level tools (MPI bindings) and research prototypes are available, and new approaches are being developed, such as Cloud Haskell (Erlang-style actors as a Haskell library).<br />
<br />
This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell.<br />
<br />
See also the<br />
[[Parallel|parallel Haskell portal]] for research papers and tutorials on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Strategies and Par ===<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
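For a flavour of the Strategies API, here is a minimal sketch using the parallel package (the <code>parSum</code> helper is illustrative, not part of the library): <code>parList rseq</code> sparks the evaluation of each list element, and <code>using</code> attaches the strategy to the list.<br />

```haskell
import Control.Parallel.Strategies (parList, rseq, using)

-- Evaluate the elements of a list in parallel, then sum them.
-- The strategy only annotates *how* to evaluate; the result is
-- deterministic regardless of how many cores run the sparks.
parSum :: [Int] -> Int
parSum xs = sum (xs `using` parList rseq)

main :: IO ()
main = print (parSum [1 .. 1000])  -- 500500
```

Compile with <code>ghc -threaded</code> and run with <code>+RTS -N</code> to actually use multiple cores; without that, the sparks are simply evaluated sequentially.<br />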
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
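A minimal sketch of the monad-par style (the <code>sumOfSquares</code> helper is illustrative): <code>spawnP</code> forks a pure computation whose result lands in an IVar, and <code>get</code> blocks until that IVar is filled, making the dependency structure explicit.<br />

```haskell
import Control.Monad.Par (runPar, spawnP, get)

-- Fork two independent pure computations and combine their results.
sumOfSquares :: Int -> Int -> Int
sumOfSquares x y = runPar $ do
  a  <- spawnP (x * x)  -- forked; may run on another core
  b  <- spawnP (y * y)
  a' <- get a           -- blocks until the IVar is filled
  b' <- get b
  return (a' + b')

main :: IO ()
main = print (sumOfSquares 3 4)  -- 25
```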
<br />
=== Data parallelism ===<br />
<br />
;[http://repa.ouroborus.net/ Repa]<br />
:REgular PArallel arrays: high performance, regular, multi-dimensional, shape polymorphic parallel arrays.<br />
;[https://github.com/AccelerateHS/accelerate Accelerate]<br />
:An embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations).<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research efforts ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
=== Concurrent Haskell ===<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
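For a flavour of the API, here is a minimal account-transfer sketch using the stm package (the <code>transfer</code> helper is illustrative, not part of the library): the transaction retries until the source balance suffices, and either both updates happen or neither does.<br />

```haskell
import Control.Concurrent.STM

-- Move money between two shared accounts in one atomic transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)         -- block/retry until funds suffice
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  print =<< atomically ((,) <$> readTVar a <*> readTVar b)  -- (70,30)
```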
<br />
=== Research efforts ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrency and Control.Exception that provides versions of forkIO that have more guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research efforts ===<br />
<br />
;[https://github.com/haskell-distributed/distributed-process Cloud Haskell] :Erlang-style actors for distributed programming in Haskell, with typed channels for extra safety<br />
:*[http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/remote.pdf Haskell for the Cloud]: paper by Epstein et al<br />
:*[http://hackage.haskell.org/package/remote remote package]: working prototype developed in the original paper<br />
:*[https://github.com/haskell-distributed/distributed-process distributed-process] current reimplementation effort<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.<br />
<br />
<br />
[[Category:Parallel]]</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel/Glossary&diff=45609Parallel/Glossary2012-05-08T15:48:38Z<p>EricKow: /* S-Z */ task parallelism</p>
<hr />
<div>== A-H ==<br />
<br />
; bound thread<br />
: A bound thread is a Haskell thread that is bound to an operating system thread. While the bound thread is still scheduled by the Haskell run-time system, the operating system thread takes care of all the foreign calls made by the bound thread. All foreign exported functions are run in a bound thread (bound to the OS thread that called the function). Also, the main action of every Haskell program is run in a bound thread.<br />
<br />
; concurrency<br />
: Implementing a program by using multiple I/O-performing threads. While a concurrent Haskell program can run on a parallel machine, the primary goal of using concurrency is not to gain performance, but rather because that is the simplest and most direct way to write the program. Since the threads perform I/O, the semantics of the program is necessarily non-deterministic.<br />
: ''see parallelism (vs concurrency)''<br />
<br />
; data parallelism<br />
<br />
; dataflow parallelism<br />
: A model for parallelism where dependencies are seen as forming a directed graph between sub-computations. Divergent parts of the graph with a common ancestor can be seen as computations that can be run in parallel. Connected nodes can be seen as forcibly sequential computations.<br />
: ''see monad-par''<br />
<br />
; distributed<br />
<br />
; distributed memory model<br />
<br />
; Haskell thread<br />
: A Haskell thread is a thread of execution for IO code. Multiple Haskell threads can execute IO code concurrently and they can communicate using shared mutable variables and channels.<br />
: ''see spark (vs threads)''<br />
: ''see Haskell thread (vs OS thread)''<br />
<br />
; Haskell thread (vs OS thread)<br />
<br />
; HEC (Haskell Execution Context)<br />
<br />
== I-M ==<br />
<br />
; MapReduce<br />
: ''TODO: non-Haskellers may have heard of MapReduce - what does it translate to in Haskell terms?''<br />
<br />
; monad-par<br />
: A deterministic parallel Haskell library. It provides an API that resembles Concurrent Haskell (without sacrificing predictability). Notable traits: more verbose code, threads instead of sparks, and a hyperstrict default (a good thing for parallelism)<br />
: ''see Strategies''<br />
<br />
; MVar<br />
: A locked mutable variable that can be shared across Haskell threads. MVars can be full or empty. When reading an empty MVar, the reading thread blocks until it is full; conversely, when writing to a full MVar, the writing thread blocks until it is empty.<br />
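A minimal sketch of MVar communication between two Haskell threads, using <code>Control.Concurrent</code>:<br />

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- The forked thread fills the MVar; the main thread blocks on
-- takeMVar until a value is available.
main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box (sum [1 .. 1000 :: Int]))
  print =<< takeMVar box  -- 500500
```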
<br />
== N-R ==<br />
<br />
; nested data parallelism<br />
<br />
; parallelism<br />
: Running a Haskell program on multiple processors, with the goal of improving performance. Ideally, this should be done invisibly, and with no semantic changes.<br />
<br />
; parallelism (vs concurrency)<br />
: Discussed in [[Parallelism vs. Concurrency]]<br />
<br />
== S-Z ==<br />
<br />
; shared memory model<br />
<br />
; spark<br />
: Sparks are specific to parallel Haskell. Abstractly, a spark is a pure computation which may be evaluated in parallel. Sparks are introduced with the par combinator; the expression (<code>x `par` y</code>) "sparks off" <code>x</code>, telling the runtime that it may evaluate the value of <code>x</code> in parallel to other work. Whether or not a spark is evaluated in parallel with other computations, or other Haskell IO threads, depends on what your hardware supports and on how your program is written. Sparks are put in a work queue and when a CPU core is idle, it can execute a spark by taking one from the work queue and evaluating it.<br />
: ''see spark (vs thread)''<br />
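A minimal sketch of sparking with <code>par</code> from <code>Control.Parallel</code> (the <code>parFib</code> helper is illustrative): <code>par</code> sparks one recursive call while <code>pseq</code> forces the other locally before combining.<br />

```haskell
import Control.Parallel (par, pseq)

-- Naive parallel Fibonacci: spark the larger subproblem, evaluate
-- the other in the current thread, then add the results.
parFib :: Int -> Int
parFib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = parFib (n - 1)
    y = parFib (n - 2)

main :: IO ()
main = print (parFib 20)  -- 6765
```

In practice you would stop sparking below some cutoff size, since very small sparks cost more to manage than they save.<br />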
<br />
; spark (vs thread)<br />
: On a multi-core machine, both threads and sparks can be used to achieve parallelism. Threads give you concurrent, non-deterministic parallelism, while sparks give you pure deterministic parallelism. Haskell threads are ideal for applications like network servers where you need to do lots of I/O and using concurrency fits the nature of the problem. Sparks are ideal for speeding up pure calculations where adding non-deterministic concurrency would just make things more complicated.<br />
<br />
; STM<br />
<br />
; task parallelism<br />
<br />
; OS thread<br />
<br />
; thread<br />
: ''see Haskell thread, OS thread and bound thread''</div>EricKowhttps://wiki.haskell.org/index.php?title=Parallel/Glossary&diff=45608Parallel/Glossary2012-05-08T15:48:00Z<p>EricKow: /* I-M */ MVar</p>
<hr />
<div>== A-H ==<br />
<br />
; bound thread<br />
: A bound thread is a Haskell thread that is bound to an operating system thread. While the bound thread is still scheduled by the Haskell run-time system, the operating system thread takes care of all the foreign calls made by the bound thread. All foreign exported functions are run in a bound thread (bound to the OS thread that called the function). Also, the main action of every Haskell program is run in a bound thread.<br />
<br />
; concurrency<br />
: Implementing a program by using multiple I/O-performing threads. While a concurrent Haskell program can run on a parallel machine, the primary goal of using concurrency is not to gain performance, but rather because that is the simplest and most direct way to write the program. Since the threads perform I/O, the semantics of the program is necessarily non-deterministic.<br />
: ''see parallelism (vs concurrency)''<br />
<br />
; data parallelism<br />
<br />
; dataflow parallelism<br />
: A model for parallelism where dependencies are seen as forming a directed graph between sub-computations. Divergent parts of the graph with a common ancestor can be seen as computations that can be run in parallel. Connected nodes can be seen as forcibly sequential computations.<br />
: ''see monad-par''<br />
<br />
; distributed<br />
<br />
; distributed memory model<br />
<br />
; Haskell thread<br />
: A Haskell thread is a thread of execution for IO code. Multiple Haskell threads can execute IO code concurrently and they can communicate using shared mutable variables and channels.<br />
: ''see spark (vs threads)''<br />
: ''see Haskell thread (vs OS thread)''<br />
<br />
; Haskell thread (vs OS thread)<br />
<br />
; HEC (Haskell Execution Context)<br />
<br />
== I-M ==<br />
<br />
; MapReduce<br />
: ''TODO: non-Haskellers may have heard of MapReduce - what does it translate to in Haskell terms?''<br />
<br />
; monad-par<br />
: A deterministic parallel Haskell library. It provides an API that resembles Concurrent Haskell (without sacrificing predictability). Interesting traits: more verbose code, threads instead of sparks, a hyperstrict default (a good thing for parallelism)<br />
: ''see Strategies''<br />
<br />
; MVar<br />
: A locked mutable variable that can be shared across Haskell threads. MVars can be full or empty. When reading an empty MVar, the reading thread blocks until it is full; conversely, when writing to a full MVar, the writing thread blocks until it is empty.<br />
<br />
== N-R ==<br />
<br />
; nested data parallelism<br />
<br />
; parallelism<br />
: Running a Haskell program on multiple processors, with the goal of improving performance. Ideally, this should be done invisibly, and with no semantic changes.<br />
<br />
; parallelism (vs concurrency)<br />
: Discussed in [[Parallelism vs. Concurrency]]<br />
<br />
== S-Z ==<br />
<br />
; shared memory model<br />
<br />
; spark<br />
: Sparks are specific to parallel Haskell. Abstractly, a spark is a pure computation which may be evaluated in parallel. Sparks are introduced with the par combinator; the expression (<code>x `par` y</code>) "sparks off" <code>x</code>, telling the runtime that it may evaluate the value of <code>x</code> in parallel to other work. Whether or not a spark is evaluated in parallel with other computations, or other Haskell IO threads, depends on what your hardware supports and on how your program is written. Sparks are put in a work queue and when a CPU core is idle, it can execute a spark by taking one from the work queue and evaluating it.<br />
: ''see spark (vs thread)''<br />
<br />
; spark (vs thread)<br />
: On a multi-core machine, both threads and sparks can be used to achieve parallelism. Threads give you concurrent, non-deterministic parallelism, while sparks give you pure deterministic parallelism. Haskell threads are ideal for applications like network servers where you need to do lots of I/O and using concurrency fits the nature of the problem. Sparks are ideal for speeding up pure calculations where adding non-deterministic concurrency would just make things more complicated.<br />
<br />
; STM<br />
<br />
; OS thread<br />
<br />
; thread<br />
: ''see Haskell thread, OS thread and bound thread''</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45489Applications and libraries/Concurrency and parallelism2012-04-28T12:22:01Z<p>EricKow: </p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell offers a broad spectrum of tools for developing parallel or concurrent programs. For parallelism, Haskell libraries enable concise high-level parallel programs with results that are guaranteed to be deterministic, i.e., independent of the number of cores and the scheduling being used. Concurrency is supported with lightweight threads and high-level abstractions such as [[software transactional memory]] for managing information shared across threads. Distributed programming is still mainly a research area. Some low-level tools (MPI bindings) and research prototypes are available, and new approaches are being developed, such as Cloud Haskell (Erlang-style actors as a Haskell library).<br />
<br />
This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell.<br />
<br />
See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials and on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Strategies and Par ===<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
<br />
=== Data parallelism ===<br />
<br />
;[http://repa.ouroborus.net/ Repa]<br />
:REgular PArallel arrays: high performance, regular, multi-dimensional, shape polymorphic parallel arrays.<br />
;[https://github.com/AccelerateHS/accelerate Accelerate]<br />
:An embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations).<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research efforts ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
=== Concurrent Haskell ===<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Research efforts ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrency and Control.Exception that provides versions of forkIO that have more guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research efforts ===<br />
<br />
;[https://github.com/haskell-distributed/distributed-process Cloud Haskell] :Erlang-style actors for distributed programming in Haskell, with typed channels for extra safety<br />
:*[http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/remote.pdf Haskell for the Cloud]: paper by Epstein et al<br />
:*[http://hackage.haskell.org/package/remote remote package]: working prototype developed in the original paper<br />
:*[https://github.com/haskell-distributed/distributed-process distributed-process] current reimplementation effort<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.<br />
<br />
<br />
[[Category:Parallel]]</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45488Applications and libraries/Concurrency and parallelism2012-04-28T12:21:15Z<p>EricKow: rewrite introduction to libraries</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell offers a broad spectrum of tools for developing parallel or concurrent programs. For parallelism, Haskell libraries enable concise high-level parallel programs with results that are guaranteed to be deterministic, i.e., independent of the number of cores and the scheduling being used. Concurrency is supported with lightweight threads and high-level abstractions such as [[software transactional memory]] for managing information shared across threads. Distributed programming is still mainly a research area. Some low-level tools (MPI bindings) and research prototypes are available, and new approaches, e.g. Cloud Haskell (Erlang-style actors as a Haskell library), are being actively developed.<br />
<br />
This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell.<br />
<br />
See also the<br />
[[Parallel|parallel portal]] for research papers and tutorials on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Strategies and Par ===<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
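As a small illustration of the Strategies API, the sketch below evaluates the elements of a list in parallel with <code>parMap</code>. It assumes the parallel package is installed; the function <code>fib</code> and the input range are our own example, not part of the library.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately expensive pure function (our own example).
fib :: Integer -> Integer
fib n | n < 2     = n
      | otherwise = fib (n - 1) + fib (n - 2)

-- parMap sparks one parallel evaluation per list element;
-- rdeepseq forces each result to normal form inside its spark.
main :: IO ()
main = print (sum (parMap rdeepseq fib [25 .. 30]))
```

Compile with <code>ghc -threaded</code> and run with <code>+RTS -N</code> to use multiple cores; the result is the same regardless of how many cores are used.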
<br />
=== Data parallelism ===<br />
<br />
;[http://repa.ouroborus.net/ Repa]<br />
:REgular PArallel arrays: high-performance, regular, multi-dimensional, shape-polymorphic parallel arrays.<br />
;[https://github.com/AccelerateHS/accelerate Accelerate]<br />
:An embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations).<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
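For a flavour of the flat data-parallel style, here is a sketch against the Repa 3 interface (assuming the repa package is installed; the array <code>xs</code> and its contents are an arbitrary example):

```haskell
import Data.Array.Repa as R

-- A one-dimensional unboxed array of ten Ints.
xs :: Array U DIM1 Int
xs = fromListUnboxed (Z :. 10) [0 .. 9]

-- R.map builds a delayed array; computeP evaluates it in
-- parallel, splitting the work across the available cores.
main :: IO ()
main = do
  ys <- computeP (R.map (* 2) xs) :: IO (Array U DIM1 Int)
  print (toList ys)
```

The whole-array operations (<code>map</code>, folds, permutations) carry the parallelism; no explicit threads or sparks appear in the program.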
<br />
=== Research efforts ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
=== Concurrent Haskell ===<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
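A minimal Concurrent Haskell sketch, using only what <code>base</code> provides (the workload <code>compute</code> is an arbitrary example): fork a lightweight thread and hand its result back through an <code>MVar</code>.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- An arbitrary pure workload for the forked thread.
compute :: Integer
compute = sum [1 .. 1000000]

main :: IO ()
main = do
  done <- newEmptyMVar
  -- forkIO spawns a lightweight Haskell thread; the MVar both
  -- communicates the result and lets main wait for completion.
  _ <- forkIO (putMVar done compute)
  result <- takeMVar done
  print result   -- 500000500000
```

<code>takeMVar</code> blocks until the forked thread has filled the <code>MVar</code>, so the program never reads an uninitialised result.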
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated implementation of Software Transactional Memory (STM), a composable way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
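To give the flavour of STM, a small sketch using the stm package (the account/transfer setup is our own example): two balances are updated in one atomic transaction, so no other thread can ever observe an intermediate state.

```haskell
import Control.Concurrent.STM

-- Move an amount between two balances atomically.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances   -- (70,30)
```

Because <code>transfer</code> is an <code>STM</code> action, it composes: several transfers can be sequenced inside a single <code>atomically</code> and still commit as one transaction.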
<br />
=== Research efforts ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that becomes available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard 1.1/1.2. The programmer has full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research efforts ===<br />
<br />
;[https://github.com/haskell-distributed/distributed-process Cloud Haskell] :Erlang-style actors for distributed programming in Haskell, with typed channels for extra safety<br />
:*[http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/remote.pdf Haskell for the Cloud]: paper by Epstein et al<br />
:*[http://hackage.haskell.org/package/remote remote package]: working prototype developed in the original paper<br />
:*[https://github.com/haskell-distributed/distributed-process distributed-process] current reimplementation effort<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.<br />
<br />
<br />
[[Category:Parallel]]</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45460Applications and libraries/Concurrency and parallelism2012-04-26T14:58:36Z<p>EricKow: /* Research efforts */</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials and on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Strategies and Par ===<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
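For a taste of what these APIs abstract over, here is a minimal sketch of semi-explicit parallelism written directly against the <code>par</code> and <code>pseq</code> primitives from GHC.Conc in base, the machinery on which Strategies are built (no extra packages needed; the function name <code>pfib</code> is illustrative):<br />

```haskell
import GHC.Conc (par, pseq)

-- Naive Fibonacci, sparking the first recursive call for possible
-- parallel evaluation while the current thread computes the second.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 20)   -- prints 6765
```

Compiled with <code>-threaded</code> and run with <code>+RTS -N</code>, the sparks created by <code>par</code> may be picked up by idle capabilities; without those flags the program gives the same answer, just sequentially.<br />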
<br />
=== Data parallelism ===<br />
<br />
;[http://repa.ouroborus.net/ Repa]<br />
:REgular PArallel arrays: high performance, regular, multi-dimensional, shape polymorphic parallel arrays.<br />
;[https://github.com/AccelerateHS/accelerate Accelerate]<br />
:An embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations).<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research efforts ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
=== Concurrent Haskell ===<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
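A minimal sketch of lightweight threads communicating through a shared <code>MVar</code>, using only the base library:<br />

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- The work we hand to the forked thread.
total :: Int
total = sum [1 .. 1000000]

main :: IO ()
main = do
  box <- newEmptyMVar
  -- Fork a lightweight Haskell thread; it hands its result
  -- back to the main thread through the shared MVar.
  _ <- forkIO (putMVar box total)
  result <- takeMVar box   -- blocks until the worker has finished
  print result             -- prints 500000500000
```

<code>takeMVar</code> doubles as synchronisation here: the main thread blocks until the worker has filled the <code>MVar</code>.<br />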
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
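A minimal sketch of an atomic update on a shared <code>TVar</code>, using the stm package (shipped as a boot library with recent GHCs); the <code>withdraw</code> helper is illustrative, not a library function:<br />

```haskell
import Control.Concurrent.STM

-- Withdraw an amount atomically and return the new balance.
-- The whole transaction commits or retries as a unit, so other
-- threads never observe an intermediate state; no locks needed.
withdraw :: TVar Int -> Int -> IO Int
withdraw account amount = atomically $ do
  balance <- readTVar account
  let balance' = balance - amount
  writeTVar account balance'
  return balance'

main :: IO ()
main = do
  account <- newTVarIO 100
  newBalance <- withdraw account 30
  print newBalance   -- prints 70
```

Because transactions compose, two calls to <code>withdraw</code> can also be combined into one larger <code>atomically</code> block without changing the helper.<br />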
<br />
=== Research efforts ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take whichever becomes available first.<br />
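CHP itself is on Hackage; purely to illustrate the choice idea without installing it, here is a hedged sketch (this is not CHP's API) that approximates input choice using STM's <code>orElse</code>, which commits whichever alternative can proceed:<br />

```haskell
import Control.Concurrent.STM

-- Choice between two input "channels": take from whichever queue
-- has a value available (left-biased if both do). If neither can
-- proceed, the whole transaction retries until one can.
readEither :: TQueue a -> TQueue a -> STM a
readEither l r = readTQueue l `orElse` readTQueue r

main :: IO ()
main = do
  a <- newTQueueIO
  b <- newTQueueIO
  atomically (writeTQueue b "message on b")
  atomically (readEither a b) >>= putStrLn   -- prints: message on b
```

Real CHP channels are synchronous (rendezvous) rather than buffered queues, so this only captures the choice aspect, not the synchronisation.<br />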
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
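As an illustration of the kind of guarantee meant, here is a sketch of a fork that propagates a child's uncaught exception back to its parent; <code>forkRethrow</code> and <code>childFailureSeen</code> are illustrative names under plain base, not part of WrapConc's API:<br />

```haskell
import Control.Concurrent (ThreadId, forkIO, myThreadId, threadDelay)
import Control.Exception (SomeException, throwTo, try)

-- Hypothetical safer fork: uncaught exceptions in the child thread
-- are re-thrown (asynchronously) in the parent instead of being lost.
forkRethrow :: IO () -> IO ThreadId
forkRethrow act = do
  parent <- myThreadId
  forkIO $ do
    r <- try act
    case r of
      Left e  -> throwTo parent (e :: SomeException)
      Right _ -> pure ()

-- Run an action in a child thread and report whether its failure
-- reached the parent within roughly 0.2 seconds.
childFailureSeen :: IO () -> IO Bool
childFailureSeen act = do
  r <- try (forkRethrow act >> threadDelay 200000)
  pure $ case (r :: Either SomeException ()) of
    Left _  -> True
    Right _ -> False

main :: IO ()
main = do
  propagated <- childFailureSeen (error "worker failed")
  putStrLn (if propagated then "child failure propagated" else "no exception seen")
```

With plain <code>forkIO</code>, the child's <code>error</code> would merely kill the child thread silently; the wrapper surfaces it in the parent.<br />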
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard 1.1/1.2. The programmer has full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research efforts ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45453Applications and libraries/Concurrency and parallelism2012-04-26T14:49:55Z<p>EricKow: /* Parallelism */ flesh out the hierarchy a bit (repa, accelerate)</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers and tutorials on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Strategies and Par ===<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
<br />
=== Data parallelism ===<br />
<br />
;[http://repa.ouroborus.net/ Repa]<br />
:REgular PArallel arrays: high performance, regular, multi-dimensional, shape polymorphic parallel arrays.<br />
;[https://github.com/AccelerateHS/accelerate Accelerate]<br />
:An embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations).<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research tools ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
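For example, forking a lightweight thread and collecting its result through an MVar needs only base (a minimal sketch):

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork a lightweight thread and hand its result back via an MVar.
main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO $ do
    threadDelay 10000      -- pretend to do 10 ms of work
    putMVar box "done"
  msg <- takeMVar box      -- blocks until the forked thread replies
  putStrLn msg             -- prints "done"
```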
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
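A small worked example using the STM primitives that base re-exports through GHC.Conc (the stm package layers the full library on top; `transfer` is an illustrative helper):

```haskell
import GHC.Conc (STM, TVar, atomically, newTVarIO, readTVar, writeTVar)

-- An atomic transfer between two accounts: either both updates
-- happen or neither does, even under concurrent access.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  f <- readTVar from
  t <- readTVar to
  writeTVar from (f - amount)
  writeTVar to (t + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances             -- (70,30)
```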
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice between communications, so that a process may offer either to read on one channel or to write on another, taking whichever becomes available first.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard, versions 1.1/1.2. The programmer has full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research tools ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45450Applications and libraries/Concurrency and parallelism2012-04-26T07:50:08Z<p>EricKow: /* Parallelism */</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials, and more on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
;Strategies<br />
:A high-level compositional API for parallel programming.<br />
:* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
:* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
:* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
:* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
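Strategies are layered over the `par` and `pseq` primitives, which base exposes through GHC.Conc; a minimal sketch of the underlying idea (not the Strategies API itself — `parPair` and `fib` are illustrative):

```haskell
import GHC.Conc (par, pseq)

-- Evaluate both components of a pair in parallel: spark the first
-- while the current thread evaluates the second.
parPair :: Int -> Int -> (Int, Int)
parPair x y = a `par` (b `pseq` (a, b))
  where
    a = fib x
    b = fib y

fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (parPair 20 21)  -- (6765,10946)
```

Compiled with -threaded and run with +RTS -N, the spark may run on another core; without it, the program still gives the same (deterministic) result.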
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research tools ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
:* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
:* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
:* [http://hackage.haskell.org/package/stm Documentation]<br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
:* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice between communications, so that a process may offer either to read on one channel or to write on another, taking whichever becomes available first.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard, versions 1.1/1.2. The programmer has full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research tools ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45446Applications and libraries/Concurrency and parallelism2012-04-26T07:45:59Z<p>EricKow: /* Concurrency */ maybe we don't need an extra header?</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials, and more on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
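As a hedged illustration only (assuming the parallel package is installed; parMap and rdeepseq are exported by Control.Parallel.Strategies), a minimal sketch of evaluating a pure function over a list in parallel:<br />

```haskell
-- Build with: ghc -threaded -O2 Fib.hs   Run with: ./Fib +RTS -N
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately naive, CPU-heavy function worth parallelising.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (sum (parMap rdeepseq fib [20 .. 27]))  -- prints 503283
```

Each list element becomes a spark evaluated to normal form (rdeepseq); the result is deterministic regardless of how many cores are used.<br />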
<br />
=== Monad-par ===<br />
<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
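A minimal sketch of the Par monad (assuming the monad-par package; runPar, fork, new, put and get are as exported by Control.Monad.Par; parSum is an illustrative name, not part of the library):<br />

```haskell
import Control.Monad.Par (runPar, fork, new, put, get)

-- Sum a list in two halves, forked as parallel sub-computations
-- that report their results through IVars.
parSum :: [Int] -> Int
parSum xs = runPar $ do
  left  <- new                       -- IVar for the first half
  right <- new                       -- IVar for the second half
  let (as, bs) = splitAt (length xs `div` 2) xs
  fork (put left  (sum as))          -- runs in parallel...
  fork (put right (sum bs))          -- ...with this computation
  a <- get left                      -- get blocks until the IVar is full
  b <- get right
  return (a + b)

main :: IO ()
main = print (parSum [1 .. 100])     -- prints 5050
```

Because an IVar is written at most once and get waits for that write, runPar remains deterministic even though the forks race.<br />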
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high-performance (nested) arrays for programming large multicore machines.<br />
<br />
=== Research tools ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
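For example, a minimal sketch (using Control.Concurrent from base) of forking a lightweight thread and collecting its result through an MVar:<br />

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork a lightweight thread and wait for its result via an MVar.
main :: IO ()
main = do
  result <- newEmptyMVar
  _ <- forkIO $ do
    let n = sum [1 .. 1000 :: Int]   -- work done on the forked thread
    putMVar result n                 -- hand the result back
  n <- takeMVar result               -- blocks until the thread is done
  print n                            -- prints 500500
```

takeMVar doubles as synchronisation here: the main thread cannot exit before the forked thread has delivered its value.<br />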
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
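A minimal STM sketch (using Control.Concurrent.STM from the stm package); the account-transfer example below is illustrative, not taken from the papers above:<br />

```haskell
import Control.Concurrent.STM
  (TVar, atomically, newTVarIO, readTVar, writeTVar)

-- Move an amount between two shared accounts; the whole transaction
-- commits or retries as one atomic unit, with no explicit locks.
transfer :: Int -> TVar Int -> TVar Int -> IO ()
transfer amount from to = atomically $ do
  x <- readTVar from
  writeTVar from (x - amount)
  y <- readTVar to
  writeTVar to (y + amount)

main :: IO ()
main = do
  a <- newTVarIO (100 :: Int)
  b <- newTVarIO 0
  transfer 30 a b
  pair <- atomically ((,) <$> readTVar a <*> readTVar b)
  print pair                         -- prints (70,30)
```

Two threads calling transfer concurrently can never observe a state where the money has left one account but not arrived in the other; conflicting transactions are re-run automatically.<br />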
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice over communications, so that a process may offer either to read on one channel or to write on another, taking whichever becomes available first.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI stands for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard, versions 1.1/1.2. The programmer retains full control over communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research tools ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45445Applications and libraries/Concurrency and parallelism2012-04-26T07:45:15Z<p>EricKow: /* Monad-par */</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials and on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
<br />
=== Monad-par ===<br />
<br />
;[http://hackage.haskell.org/package/monad-par Monad-par]<br />
:An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research tools ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable mode that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrency and Control.Exception that provides versions of forkIO that have more guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research tools ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45444Applications and libraries/Concurrency and parallelism2012-04-26T07:43:27Z<p>EricKow: /* Modelling concurrent and distributed systems */</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials and on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
<br />
=== Monad-par ===<br />
<br />
This library offers an alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
<br />
* [http://hackage.haskell.org/package/monad-par Monad-par hackage page]<br />
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research tools ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable mode that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrency and Control.Exception that provides versions of forkIO that have more guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Research tools ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45443Applications and libraries/Concurrency and parallelism2012-04-26T07:43:03Z<p>EricKow: consolidate research tools</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials and on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
<br />
=== Monad-par ===<br />
<br />
This library offers an alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
<br />
* [http://hackage.haskell.org/package/monad-par Monad-par hackage page]<br />
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Research tools ===<br />
<br />
; Feedback-directed implicit parallelism<br />
: Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity ([http://research.microsoft.com/~tharris/papers/2007-fdip.pdf FDIP paper])<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable mode that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrency and Control.Exception that provides versions of forkIO that have more guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Modelling concurrent and distributed systems ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45442Applications and libraries/Concurrency and parallelism2012-04-26T07:41:38Z<p>EricKow: /* Parallelism */ add monad-par, kill par/pseq</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials and on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
<br />
=== Monad-par ===<br />
<br />
This library offers an alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.<br />
<br />
* [http://hackage.haskell.org/package/monad-par Monad-par hackage page]<br />
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high performance (nested) arrays, supporting large multicore programming.<br />
<br />
=== Feedback-directed implicit parallelism ===<br />
<br />
[http://research.microsoft.com/~tharris/papers/2007-fdip.pdf Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity]<br />
<br />
=== Glasgow Parallel Haskell ===<br />
<br />
''EYK: is this redundant with strategies?'' <br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:GHC supports a sophisticated version of software transactional memory. Software Transactional Memory (STM) is a new way to coordinate concurrent threads.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable mode that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that is available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Modelling concurrent and distributed systems ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45441Applications and libraries/Concurrency and parallelism2012-04-26T07:38:52Z<p>EricKow: kill some seemingly old/dead packages</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials, and more on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
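As a small sketch of the Strategies API (assuming the parallel package is installed), `using` attaches a strategy such as parList to a computation without changing its value:<br />

```haskell
import Control.Parallel.Strategies (using, parList, rseq)

-- A stand-in for real work; the specific function is illustrative.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main = do
  -- parList rseq sparks each list element for parallel evaluation;
  -- the result is the same as the sequential map.
  let results = map expensive [10000, 20000, 30000] `using` parList rseq
  print (sum results)
```

The key property is that the strategy only describes evaluation order and parallelism; removing `` `using` parList rseq`` leaves the program's result unchanged.<br />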
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high-performance (nested) arrays, supporting multicore programming at scale.<br />
<br />
=== Low-level parallelism: par and pseq ===<br />
<br />
The Control.Parallel module provides the low-level operations for parallelism on which Strategies are built.<br />
<br />
* [http://hackage.haskell.org/packages/archive/parallel/latest/doc/html/Control-Parallel.html Control.Parallel Documentation]<br />
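A minimal sketch of these primitives: par sparks an evaluation that may run in parallel, and pseq orders evaluation on the current thread. (GHC.Conc in base exports the same par and pseq if the parallel package is not installed.)<br />

```haskell
import Control.Parallel (par, pseq)

-- Spark `a` for parallel evaluation, force `b` locally first,
-- then combine; the result is the same with or without sparks.
parSum :: Int
parSum = a `par` (b `pseq` (a + b))
  where
    a = sum [1 .. 500000 :: Int]
    b = sum [500001 .. 1000000 :: Int]

main :: IO ()
main = print parSum
```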
<br />
=== Feedback-directed implicit parallelism ===<br />
<br />
[http://research.microsoft.com/~tharris/papers/2007-fdip.pdf Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity]<br />
<br />
=== Glasgow Parallel Haskell ===<br />
<br />
''EYK: is this redundant with strategies?'' <br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:Software Transactional Memory (STM) coordinates concurrent threads through composable atomic transactions; GHC supports a sophisticated implementation.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer either to read on one channel or to write on another, but will take only the first that becomes available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Modelling concurrent and distributed systems ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKowhttps://wiki.haskell.org/index.php?title=Applications_and_libraries/Concurrency_and_parallelism&diff=45440Applications and libraries/Concurrency and parallelism2012-04-26T07:35:02Z<p>EricKow: /* Distributed Haskell */ kill dead stuff</p>
<hr />
<div>'''Concurrent and Parallel Programming'''<br />
<br />
Haskell has been designed for parallel and concurrent programming since<br />
its inception. In particular, Haskell's [http://haskell.org/haskellwiki/Why_Haskell_matters purity] greatly simplifies reasoning about<br />
parallel programs. This page lists libraries and extensions for programming<br />
concurrent and parallel applications in Haskell. See also the<br />
[[Parallel|parallel portal]] for research papers, tutorials, and more on<br />
parallel and concurrent Haskell.<br />
<br />
== Parallelism ==<br />
<br />
=== Parallel Strategies ===<br />
<br />
Strategies provide a high-level compositional API for parallel programming.<br />
<br />
* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]<br />
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies Documentation]<br />
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]<br />
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]<br />
<br />
=== Data Parallel Haskell ===<br />
<br />
;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]<br />
:Implicitly parallel, high-performance (nested) arrays, supporting multicore programming at scale.<br />
<br />
=== Low-level parallelism: par and pseq ===<br />
<br />
The Control.Parallel module provides the low-level operations for parallelism on which Strategies are built.<br />
<br />
* [http://hackage.haskell.org/packages/archive/parallel/latest/doc/html/Control-Parallel.html Control.Parallel Documentation]<br />
<br />
=== Feedback-directed implicit parallelism ===<br />
<br />
[http://research.microsoft.com/~tharris/papers/2007-fdip.pdf Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity]<br />
<br />
=== Glasgow Parallel Haskell ===<br />
<br />
''EYK: is this redundant with strategies?'' <br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]<br />
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.<br />
<br />
----<br />
<br />
== Concurrency ==<br />
<br />
;[[Concurrency|Concurrent Haskell]]<br />
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.<br />
* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]<br />
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]<br />
<br />
=== Software transactional memory ===<br />
<br />
;[[Software transactional memory|Software Transactional Memory]]<br />
:Software Transactional Memory (STM) coordinates concurrent threads through composable atomic transactions; GHC supports a sophisticated implementation.<br />
* [http://hackage.haskell.org/package/stm Documentation]<br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]. <br />
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.<br />
<br />
=== Actors ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]<br />
:Actor-based concurrency for Haskell<br />
;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]<br />
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer either to read on one channel or to write on another, but will take only the first that becomes available.<br />
<br />
=== Helper tools ===<br />
<br />
;[[WrapConc | Wrapped Concurrency]]<br />
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.<br />
<br />
=== Experimental tools ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/transactional-events Transactional events, based on Concurrent ML]<br />
:Transactional events for Haskell.<br />
<br />
----<br />
<br />
== Distributed programming ==<br />
<br />
=== MPI ===<br />
<br />
;[http://www.foldr.org/~michaelw/hmpi/ hMPI]<br />
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.<br />
<br />
;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]<br />
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.<br />
<br />
=== Distributed Haskell ===<br />
<br />
;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]<br />
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former on the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.<br />
<br />
;[http://www.mathematik.uni-marburg.de/~eden Eden]<br />
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.<br />
<br />
=== Ports ===<br />
<br />
;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ports-0.4.3.2 The Haskell Ports Library (HPL)]<br />
:Ports are an abstraction for modelling variables whose values evolve over time without the need to resort to mutable variables, such as IORefs. More precisely, a port represents all values that a time-dependent variable successively takes as a stream, where each element of the stream corresponds to a state change - we can also say that a port represents a time series. Moreover, a port supports concurrent construction of the time series, or stream of values.<br />
<br />
=== Modelling concurrent and distributed systems ===<br />
<br />
;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]<br />
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.</div>EricKow