Lazy vs. non-strict (HaskellWiki, 2012-01-03, SimonFarnsworth: Minimal change to reference the existing page on WHNF)
<hr />
<div>Haskell is often described as a lazy language.<br />
However, the language specification simply states that Haskell is [[Non-strict semantics|non-strict]], which is not quite the same thing as [[lazy evaluation|lazy]].<br />
<br />
<br />
== Direction of evaluation ==<br />
<br />
[[Non-strict semantics|Non-strictness]] means that [[reduction]] (the mathematical term for [[evaluation]])<br />
proceeds from the outside in:<br />
given <hask>(a+(b*c))</hask>, the outermost application, the <hask>+</hask>, is reduced first,<br />
and the inner <hask>(b*c)</hask> only when the <hask>+</hask> demands its value.<br />
Strict languages work the other way around, starting with the innermost brackets and working outwards.<br />
<br />
This matters to the semantics because if you have an expression that evaluates to [[bottom]]<br />
(i.e. an <hask>error</hask> or an endless loop), then any language that starts at the inside and<br />
works outwards will always hit that bottom value, and hence the bottom will propagate outwards.<br />
However, if you start from the outside and work in, then some of the sub-expressions are eliminated by the outer reductions,<br />
so they are never evaluated and you don't get bottom.<br />
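For example, a minimal sketch using only the Prelude's <hask>fst</hask> and <hask>undefined</hask>:<br />

```haskell
-- Outside-in reduction discards the bottom: fst throws away the
-- second component of the pair, so the undefined inside it is
-- never evaluated and no error is raised.
safeFirst :: Int
safeFirst = fst (1, undefined)

-- In a strict language, the components of the pair would be
-- evaluated before fst is applied, and the program would hit
-- the error instead.
```

In GHCi, <hask>safeFirst</hask> evaluates to <hask>1</hask>; a strict evaluation order would instead fail with an exception.<br />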
<br />
[[Lazy evaluation]], on the other hand, means only evaluating an expression <br />
when its results are needed (note the shift from "reduction" to "evaluation").<br />
So when the evaluation engine sees an expression it builds a [[thunk]] data structure<br />
containing whatever values are needed to evaluate the expression, plus a pointer to the expression itself.<br />
When the result is actually needed, the evaluation engine forces the expression<br />
and then overwrites the thunk with the result, so the work is never repeated.<br />
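The sharing of a thunk's result can be observed with <hask>Debug.Trace.trace</hask> (a sketch: <hask>trace</hask> prints its message to stderr the first time its result is demanded):<br />

```haskell
import Debug.Trace (trace)

-- x is a single thunk; demanding it twice prints "forcing" only
-- once, because the thunk is overwritten with 5 after the first use.
sharedTwice :: Int
sharedTwice = let x = trace "forcing" (2 + 3)
              in x + x

-- Here x is never demanded, so the thunk is never entered and
-- nothing is printed at all.
neverForced :: Int
neverForced = let x = trace "forcing" (2 + 3)
              in 42
```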
<br />
Obviously there is a strong correspondence between a thunk and a partly-evaluated expression.<br />
Hence in most cases the terms "lazy" and "non-strict" are synonyms. But not quite.<br />
For instance you could imagine an evaluation engine on highly parallel hardware<br />
that fires off sub-expression evaluation eagerly, but then throws away results that are not needed.<br />
<br />
In practice Haskell is not a purely lazy language:<br />
for instance pattern matching is usually strict<br />
(so matching a value against a pattern forces evaluation at least far enough to accept or reject the match;<br />
you can prepend a <hask>~</hask> to make a pattern match lazy, i.e. irrefutable).<br />
The [[strictness analyzer]] also looks for cases where sub-expressions are ''always'' required by the outer expression,<br />
and converts those into eager evaluation.<br />
It can do this because the semantics (in terms of "bottom") don't change.<br />
Programmers can also use the <hask>seq</hask> primitive to force an expression to evaluate<br />
regardless of whether the result will ever be used.<br />
<hask>$!</hask> is defined in terms of <hask>seq</hask>.<br />
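As a sketch, here is <hask>seq</hask> in use, a strict application mirroring the standard definition of <hask>($!)</hask>, and an irrefutable pattern:<br />

```haskell
-- seq evaluates its first argument to weak head normal form
-- before returning its second argument.
forcedAdd :: Int -> Int -> Int
forcedAdd x y = x `seq` x + y

-- Strict application; ($!) is defined in terms of seq like this:
strictApply :: (a -> b) -> a -> b
strictApply f x = x `seq` f x

-- An irrefutable (~) pattern: the pair is not inspected when the
-- function is applied, only when one of its fields is demanded.
lazyFst :: (a, b) -> a
lazyFst ~(x, _) = x
```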
<br />
<br />
Source:<br />
* Paul Johnson in Haskell Cafe [http://www.haskell.org/pipermail/haskell-cafe/2007-November/034814.html What is the role of $! ?]<br />
<br />
<br />
== WHNF ==<br />
<br />
WHNF is an abbreviation for [[weak head normal form]].<br />
<br />
<br />
== Further references ==<br />
<br />
Laziness is simply a common implementation technique for non-strict languages, but it is not the only possible technique. One major drawback with lazy implementations is that they are not generally amenable to parallelisation. This paper states that experiments indicate that little parallelism can be extracted from lazy programs:<br />
<br />
"The Impact of Laziness on Parallelism and the Limits of Strictness Analysis"<br />
(G. Tremblay G. R. Gao)<br />
http://citeseer.ist.psu.edu/tremblay95impact.html<br />
<br />
Lenient, or optimistic, evaluation is an implementation approach that lies somewhere between lazy and strict, and combines eager evaluation with non-strict semantics. This seems to be considered more promising for parallelisation.<br />
<br />
This paper implies (section 2.2.1) that lenient evaluation can handle circular data structures and recursive definitions, but cannot express infinite structures without explicit use of delays:<br />
<br />
"How Much Non-strictness do Lenient Programs Require?"<br />
(Klaus E. Schauser, Seth C. Goldstein)<br />
http://citeseer.ist.psu.edu/schauser95how.html<br />
<br />
Some experiments with non-lazy Haskell compilers have been attempted:<br />
[[Research_papers/Runtime_systems#Optimistic_Evaluation]]<br />
<br />
[[Category:Theoretical_foundations]]</div>

Weak head normal form (HaskellWiki, 2011-11-21, SimonFarnsworth)
<hr />
<div>An expression is in weak head normal form (WHNF) if and only if it is either:<br />
* a constructor, possibly applied to arguments, such as <hask>True</hask>, <hask>Just (square 42)</hask> or <hask>(:) 1</hask>,<br />
* a built-in function applied to too few arguments (perhaps none), such as <hask>(+) 2</hask> or <hask>sqrt</hask>,<br />
* or a lambda abstraction <hask>\x -> expression</hask>.<br />
<br />
Note that the arguments do not themselves have to be fully evaluated for an expression to be in weak head normal form; thus, while <hask>square 42</hask> can be reduced to <hask>42 * 42</hask>, which can itself be reduced to the normal form <hask>1764</hask>, <hask>Just (square 42)</hask> is in WHNF without further evaluation. Similarly, <hask>(+) (2 * 3 * 4)</hask> is in WHNF, even though <hask>2 * 3 * 4</hask> could be reduced to the normal form <hask>24</hask>.<br />
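This can be checked with <hask>seq</hask>, which evaluates its first argument only as far as WHNF (a minimal sketch):<br />

```haskell
-- Just undefined is already in WHNF (a constructor applied to a
-- thunk), so seq does not touch the undefined inside it:
constructorIsWhnf :: String
constructorIsWhnf = Just undefined `seq` "fine"

-- undefined itself is not in WHNF; forcing it to WHNF raises an
-- error, so the following would crash if its value were demanded:
-- notWhnf = undefined `seq` "never reached"
```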
<br />
== See also ==<br />
* [http://en.wikibooks.org/wiki/Haskell/Graph_reduction#Weak_Head_Normal_Form The Haskell wikibook]<br />
* [http://stackoverflow.com/questions/6872898/haskell-what-is-weak-head-normal-form Stack Overflow]<br />
* [http://encyclopedia2.thefreedictionary.com/Weak+Head+Normal+Form TheFreeDictionary]</div>