Concurrency with oracles

From HaskellWiki
Revision as of 22:49, 18 March 2021 by Atravers (talk | contribs) (Attempt to improve clarity of description...)


Oracles, defined

An oracle is a value that can be viewed as having the ability to predict, for example:

  • which out of two computations will finish first;
  • which input event will arrive first;
  • whether a computation will finish before an input event arrives.

In practice this apparent predictive power is the result of the careful use of outside information (possibly obtained through external effects). By seeming to contain the prediction, the oracle preserves the referential transparency of a language while allowing the expression of computations whose outcomes depend on execution time and arrival time.
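As a minimal sketch of the idea (the type and function names here are illustrative, not from any particular library): an oracle can be modelled as a value that already contains the answer to one binary question, and consulting it is an ordinary pure function.

```haskell
-- Illustrative sketch: an oracle "contains" the answer to one
-- binary question, e.g. which of two computations finished first.
data Choice = First | Second
  deriving (Eq, Show)

newtype Oracle = Oracle Choice

-- Consulting an oracle is pure: the same oracle always yields the
-- same answer, which is what preserves referential transparency.
consult :: Oracle -> a -> a -> a
consult (Oracle First)  x _ = x
consult (Oracle Second) _ y = y
```

In a real system the `Choice` inside each oracle would be fixed by the runtime from observed execution or arrival order; the program itself only ever sees a fixed, pure value.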

Solutions tend to involve infinite trees of oracles, so that one can be pulled out whenever needed and an infinite subtree passed on to future computations. Of course, once an oracle has been used, it cannot be reused: referential transparency demands that the outcome of consulting an oracle is fixed.
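Such a tree of oracles can be sketched as follows (again with illustrative names; the oracle payload is left abstract as a type parameter). Splitting the tree yields the root oracle plus two disjoint subtrees, so no oracle is ever consulted twice as long as each subtree goes to at most one consumer:

```haskell
-- Illustrative sketch: an infinite binary tree whose nodes carry
-- oracles, abstracted here as the type parameter o.
data OracleTree o = Node o (OracleTree o) (OracleTree o)

-- Take the oracle at the root, and hand the two disjoint subtrees
-- to separate subcomputations.
split :: OracleTree o -> (o, OracleTree o, OracleTree o)
split (Node o l r) = (o, l, r)

-- A trivially constant tree, useful only for exercising the plumbing;
-- in practice the answers would be fixed by the runtime as execution
-- unfolds.
constTree :: o -> OracleTree o
constTree x = let t = Node x t t in t
```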

Connections to concurrency

On page 32 of 60 in Tackling the Awkward Squad, Simon Peyton Jones introduces non-determinism as a result of adding concurrency to the operational semantics he provides for I/O. As shown by Peter Dybjer, Herbert Sander and Mieke Massink, the thoughtful use of oracles can help to recover referential transparency in models of concurrency even though those models are non-deterministic.
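A hedged sketch of the core idea (not the actual formulation from those papers): merging two streams is non-deterministic in general, but if a stream of choices, conceptually supplied by the runtime to reflect actual arrival order, is passed in as an argument, the merge itself becomes an ordinary pure function.

```haskell
-- Illustrative sketch: a stream of oracle answers deterministically
-- fixes the interleaving of two input streams, making the merge pure.
data Choice = L | R
  deriving (Eq, Show)

merge :: [Choice] -> [a] -> [a] -> [a]
merge _        []     ys     = ys
merge _        xs     []     = xs
merge (L : cs) (x:xs) ys     = x : merge cs xs ys
merge (R : cs) xs     (y:ys) = y : merge cs xs ys
merge []       xs     ys     = xs ++ ys
```

Given the same choice stream, `merge` always produces the same interleaving; the non-determinism lives entirely in how the runtime fixed the choices, outside the pure program.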