# Lazy vs. non-strict

### From HaskellWiki


## Revision as of 09:11, 22 March 2012

Haskell is often described as a lazy language. However, the language specification simply states that Haskell is non-strict, which is not quite the same thing as lazy.

## 1 Direction of evaluation

Non-strictness means that reduction (the mathematical term for evaluation) proceeds from the outside in, so if you have `(a+(b*c))` then first you reduce the `+`, then you reduce the inner `(b*c)`. Strict languages work the other way around, starting with the innermost brackets and working outwards.

This matters to the semantics because if you have an expression that evaluates to bottom (i.e. an `error` or endless loop), then any language that starts at the inside and works outwards will always find that bottom value, and hence the bottom will propagate outwards. However, if you start from the outside and work in, then some of the sub-expressions are eliminated by the outer reductions, so they never get evaluated and you don't get bottom.

Lazy evaluation, on the other hand, means only evaluating an expression when its result is needed (note the shift from "reduction" to "evaluation"). So when the evaluation engine sees an expression it builds a thunk data structure containing whatever values are needed to evaluate the expression, plus a pointer to the expression itself. When the result is actually needed, the evaluation engine calls the expression and then replaces the thunk with the result for future reference.
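A minimal sketch of outside-in reduction eliminating a bottom sub-expression (the names `safeResult` and `forcedBottom` are illustrative, not from the article): `const` discards its second argument, so an outer reduction removes the bottom before it is ever evaluated.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- 'const' ignores its second argument, so outside-in reduction
-- never touches the bottom value: the result is simply 42.
safeResult :: Int
safeResult = const 42 (undefined :: Int)

-- Forcing an expression that actually contains bottom does
-- propagate the bottom outwards (here caught as an exception).
forcedBottom :: IO (Either SomeException Int)
forcedBottom = try (evaluate (1 + undefined))
```

In a strict language the argument `undefined` would be evaluated before `const` was applied, so the first definition would be bottom too.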

Obviously there is a strong correspondence between a thunk and a partly-evaluated expression. Hence in most cases the terms "lazy" and "non-strict" are synonyms. But not quite. For instance you could imagine an evaluation engine on highly parallel hardware that fires off sub-expression evaluation eagerly, but then throws away results that are not needed.
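The thunk mechanism described above can be modelled directly. This is a toy sketch, not GHC's actual representation: a thunk is either an unevaluated computation or a cached result, and forcing it runs the computation at most once.

```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- A toy thunk: either an unevaluated computation or a cached result.
data Thunk a = Unevaluated (() -> a) | Evaluated a

-- Build a thunk holding a pointer to the (suspended) expression.
newThunk :: (() -> a) -> IO (IORef (Thunk a))
newThunk f = newIORef (Unevaluated f)

-- Force the thunk: run the computation once, then overwrite the
-- thunk with its result for future reference.
force :: IORef (Thunk a) -> IO a
force ref = do
  t <- readIORef ref
  case t of
    Evaluated x   -> return x
    Unevaluated f -> do
      let x = f ()
      writeIORef ref (Evaluated x)
      return x
```

Real implementations do this update in-place on the heap object, which is why a shared expression is evaluated at most once.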

In practice Haskell is not a purely lazy language: for instance, pattern matching is usually strict, so trying a pattern match forces evaluation to happen at least far enough to accept or reject the match. (You can prepend a `~` to make a pattern match lazy.) The strictness analyzer also looks for cases where sub-expressions are *always* required by the outer expression, and converts those into eager evaluation. It can do this because the semantics (in terms of "bottom") don't change. Programmers can also use the `seq` primitive to force an expression to evaluate regardless of whether the result will ever be used. `$!` is defined in terms of `seq`.
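These three points can be demonstrated together. The function names are illustrative; `strictApply` is the standard definition of `$!` in terms of `seq`:

```haskell
-- Ordinary pattern match: matching (_, _) forces the argument to
-- weak head normal form, so passing bottom makes the call bottom.
strictMatch :: (Int, Int) -> Int
strictMatch (_, _) = 1

-- Irrefutable pattern: ~(_, _) succeeds without evaluating the
-- argument, so a bottom argument is harmless while unused.
lazyMatch :: (Int, Int) -> Int
lazyMatch ~(_, _) = 1

-- ($!) in terms of seq: force the argument before applying
-- the function, whether or not the function would use it.
strictApply :: (a -> b) -> a -> b
strictApply f x = x `seq` f x
```

So `lazyMatch undefined` is `1`, while `strictMatch undefined` is bottom.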

Source:

- Paul Johnson in Haskell Cafe: [What is the role of $! ?](http://www.haskell.org/pipermail/haskell-cafe/2007-November/034814.html)

## 2 WHNF

WHNF is an abbreviation for weak head normal form.
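A brief illustration of what WHNF means for `seq` (the definition names are illustrative): `seq` evaluates its first argument only as far as the outermost constructor, leaving the fields as unevaluated thunks.

```haskell
-- seq stops at the outermost constructor: the pair constructor is
-- already WHNF, so the bottom inside it is never forced.
headIsEnough :: Int
headIsEnough = ((1, undefined) :: (Int, Int)) `seq` 42

-- Likewise, a cons cell is WHNF even if both fields are bottom.
listHeadIsEnough :: Int
listHeadIsEnough = ((undefined : undefined) :: [Int]) `seq` 42
```

By contrast, `(undefined :: Int) `seq` 42` is bottom, because there is no outer constructor to stop at.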

## 3 Further references

Laziness is simply a common implementation technique for non-strict languages, but it is not the only possible technique. One major drawback with lazy implementations is that they are not generally amenable to parallelisation. This paper states that experiments indicate that little parallelism can be extracted from lazy programs:

"The Impact of Laziness on Parallelism and the Limits of Strictness Analysis" (G. Tremblay, G. R. Gao) http://citeseer.ist.psu.edu/tremblay95impact.html

Lenient, or optimistic, evaluation is an implementation approach that lies somewhere between lazy and strict, and combines eager evaluation with non-strict semantics. This seems to be considered more promising for parallelisation.

This paper implies (section 2.2.1) that lenient evaluation can handle circular data structures and recursive definitions, but cannot express infinite structures without explicit use of delays:

"How Much Non-strictness do Lenient Programs Require?" (Klaus E. Schauser, Seth C. Goldstein) http://citeseer.ist.psu.edu/schauser95how.html
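By contrast, under lazy evaluation an infinite structure needs no explicit delays: it is just a thunk that unfolds on demand, and a consumer forces only a finite prefix. A minimal sketch:

```haskell
-- An infinite list, defined with no explicit delay annotations:
-- laziness suspends the tail until it is demanded.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1

-- 'take' forces only the first five cells of the infinite list.
firstFive :: [Integer]
firstFive = take 5 powersOfTwo
```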

Some experiments with non-lazy Haskell compilers have been attempted: Research_papers/Runtime_systems#Optimistic_Evaluation