Revision as of 12:55, 7 November 2012
Cloud Haskell is a domain-specific language for developing programs for a distributed computing environment. Implemented as a shallow embedding in Haskell, it provides a message passing communication model, inspired by Erlang, without introducing incompatibility with Haskell's established shared-memory concurrency.
Cloud Haskell is available from Hackage as distributed-process. You will probably also want to install a backend:
- The distributed-process-simplelocalnet backend is designed to get you started and experiment with Cloud Haskell on your local machine or local network.
- The distributed-process-azure backend makes it possible to run Cloud Haskell applications on Microsoft Azure virtual machines.
The cutting-edge development version of Cloud Haskell is on GitHub.
2 Documentation and Support
For an overview of Cloud Haskell it's probably a good idea to read Towards Haskell in the Cloud (details below). The relevant API documentation of the distributed-process package (in order of importance) is:
If you want to know more details about Closure or Static (without the Template Haskell magic on top) you might want to read
Probably the best place to ask questions is the parallel-haskell Google group.
3 Current Status
In summary: the new implementation exists, it works, it is on Hackage, and we think it is now ready for serious experiments.
Compared to the previous prototype:
- it is much faster;
- it can run on multiple kinds of network;
- it has backends to support different environments (such as clusters or clouds);
- it has a new system for dealing with node disconnect and reconnect, and a more precisely defined semantics (see the Semantics section below);
- supports composable, polymorphic serialisable closures;
- and internally the code is better structured and easier to work with.
We need your help! The issue tracker on GitHub lists all currently known issues; we also maintain a separate page with the most important open issues. The wiki generally contains more developer-oriented documentation, though possibly not enough (the implementation of the network transport layer is documented in more detail). Patches are most welcome! (Before you spend serious time on an issue, it is a good idea to leave a comment on that issue describing what you intend to do.)
In addition, if you are experimenting with Cloud Haskell and find problems, or even just areas where the documentation is unclear, please open new issues documenting those problems.
There is also an effort underway to develop an OTP-like platform for Cloud Haskell. Your help would be much appreciated there too!
5 Videos and Blog Posts
Cloud Haskell intros
- blog: A Cloud Haskell Appetiser (Parallel Haskell Digest 11)
- video: (1hr) Cloud Haskell: a general introduction and tutorial, focusing on what it does and how to use it. It also covers some details about the current implementation.
- video: (1hr) Towards Haskell in the Cloud: an older but more detailed introduction by Simon Peyton Jones about the problem area and the design decisions and internals of Cloud Haskell. In particular it covers the details of how sending functions over the wire really works.
- video: (25min) Cloud Haskell 2.0: a more technical overview of the new implementation (the slides are available too).
Well-Typed have a series of blog posts "Communication Patterns in Cloud Haskell"
- Part 1: Master-Slave, Work-Stealing and Work-Pushing
- Part 2: Performance
- Part 3: Map-Reduce
- Part 4: K-Means
Alen Ribic has a series of blog posts about (Cloud) Haskell on the Raspberry Pi
Other blog posts
- Using Cloud Haskell in HPC Cluster by Mal Código
Cloud Haskell Semantics (PDF) is a draft document that gives a more precise semantics to messaging in Cloud Haskell. The semantics is based on the Unified Semantics for Future Erlang paper, but extends it with a notion of "reconnecting" (this is described in detail in the introduction of the document).
The document also describes some open issues, in particular concerning the ordering of link and monitor notifications relative to regular messages (and messages sent on typed channels). Note, however, that Cloud Haskell backends that use the TCP transport (that is, all backends currently available) do not suffer from the problems described in that section, essentially because the TCP transport maintains a single TCP connection between Cloud Haskell nodes and orders all messages sent on that connection. Be aware, though, that if you rely on this in your code, your code may not work with Cloud Haskell backends that use more esoteric network transports.
- Towards Haskell in the Cloud, Jeff Epstein, Andrew Black, and Simon Peyton Jones. Haskell Symposium, Tokyo, Sept 2011.
- Functional programming for the data centre, Jeff Epstein. Master's thesis, University of Cambridge, 2011.
8 Other Useful Packages
A core concept in Cloud Haskell is that of serializable values. The Serializable type class combines Typeable and Binary. GHC can automatically derive Typeable instances for custom data types, but you need a package to derive Binary. There are various packages available that assist with this:
binary-generic and derive have been confirmed to work with Cloud Haskell; the status of the other packages is unknown -- YMMV (please feel free to update this wiki page if you have more information).
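As a minimal sketch (the Task type and its fields here are made up for illustration), a Serializable-ready type needs a Typeable instance, which GHC can derive, and a Binary instance, which you can also write by hand if you prefer not to depend on a deriving package:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Binary (Binary (..), decode, encode)
import Data.Typeable (Typeable)

-- A hypothetical message type we want to send between processes.
data Task = Task { taskId :: Int, payload :: String }
  deriving (Show, Eq, Typeable)

-- Hand-written Binary instance; packages such as 'derive' can
-- generate this boilerplate for you instead.
instance Binary Task where
  put (Task n s) = put n >> put s
  get            = Task <$> get <*> get

main :: IO ()
main = print (decode (encode (Task 1 "hello")) :: Task)
-- prints: Task {taskId = 1, payload = "hello"}
```

Whichever package you use, make sure that put followed by get round-trips every constructor of your type, or messages will be mangled in transit.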
9 Migration from remote
Here are some suggestions that might ease the migration from the Cloud Haskell prototype remote to distributed-process.
- The "implicit" type of mkClosure has changed (implicit because mkClosure is a Template Haskell function). In distributed-process mkClosure takes a function of type T1 -> T2 and returns a function of type T1 -> Closure T2. In other words, the first argument to your function becomes the closure environment; if you want two items in your closure environment, create a function of type (T1, T2) -> T3; if you want none, create a function of type () -> T1.
- distributed-process follows the naming conventions in Towards Haskell in the Cloud rather than in remote so the functions that deal with typed channels are called sendChan, receiveChan and newChan instead of sendChannel, receiveChannel and newChannel.
- sendChan, receiveChan (and send) never fail in distributed-process (in remote they might throw a TransmitException). Instead, if you want to be notified of communication failure, you need to use monitor or link.
- The function forkProcess in remote is called spawnLocal in distributed-process.
- The Process monad is called Process in distributed-process (rather than ProcessM). Similarly, the type Match replaces MatchM (and is no longer a monad).
- Initialization is different. See the documentation of the Control.Distributed.Process.SimpleLocalnet to get started (note that the master/slave distinction in SimpleLocalnet is optional and does not need to be used).
- Peer discovery is different. The functions getPeers and nameQuery are no longer available. The function findPeers from SimpleLocalnet replaces some, but not all, of the functionality of getPeers. You can use whereisRemoteAsync to find processes that have been registered by name.
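The migration points above can be pulled together in one sketch. This is not a complete program (it omits node setup and the remote table plumbing), and the registered name "worker" and the function sayHello are hypothetical; it only illustrates the distributed-process idioms mentioned: mkClosure over a one-argument function, spawnLocal in place of forkProcess, monitor instead of transmit exceptions, and whereisRemoteAsync for name lookup:

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Control.Distributed.Process
import Control.Distributed.Process.Closure (mkClosure, remotable)

-- The first argument becomes the closure environment, so mkClosure
-- wants a one-argument function; use a tuple for several items.
sayHello :: String -> Process ()
sayHello name = say ("hello, " ++ name)

remotable ['sayHello]

example :: NodeId -> Process ()
example nid = do
  -- spawnLocal replaces remote's forkProcess.
  _ <- spawnLocal (sayHello "local")

  -- $(mkClosure 'sayHello) :: String -> Closure (Process ())
  _ <- spawn nid ($(mkClosure 'sayHello) "remote")

  -- whereisRemoteAsync replaces nameQuery; the answer arrives as an
  -- ordinary WhereIsReply message.
  whereisRemoteAsync nid "worker"
  WhereIsReply _ mPid <- expect
  case mPid of
    Nothing  -> say "no process registered as \"worker\""
    Just pid -> do
      -- send never throws on failure; observe failure via monitor.
      ref <- monitor pid
      send pid "ping"
      receiveWait
        [ matchIf (\(ProcessMonitorNotification ref' _ _) -> ref' == ref)
                  (\(ProcessMonitorNotification _ _ reason) ->
                     say ("worker died: " ++ show reason)) ]
```

Note that the monitor notification only ever arrives if the worker actually dies or becomes unreachable; real code would typically also match on the expected reply message.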