Difference between revisions of "Performance/Strictness"
Haskell is a nonstrict language, and most implementations use a strategy called ''laziness'' to run your program. Basically laziness == nonstrictness + sharing.

[[Performance/Laziness|Laziness]] can be a useful tool for improving performance, but more often than not it reduces performance by adding a constant overhead to everything. Because of laziness, the compiler can't evaluate a function argument and pass the value to the function; it has to record the expression in the heap in a ''suspension'' (or ''[[thunk]]'') in case it is evaluated later. Storing and evaluating suspensions is costly, and unnecessary if the expression was going to be evaluated anyway.

== Strictness analysis ==

Optimising compilers like GHC try to reduce the cost of laziness using ''strictness analysis'', which attempts to determine which function arguments are always evaluated by the function, and hence can be evaluated by the caller instead. Sometimes this leads to bigger gains; a strict <hask>Int</hask> can be passed as an unboxed value, for example. Strictness analysis sometimes does wonderful things; for example it is very good at optimising <hask>fac</hask>:

<haskell>
fac :: Int -> Int
fac n = if n <= 1 then 1 else n * fac (n-1)
</haskell>

Strictness analysis can spot the fact that the argument <hask>n</hask> is strict, and can be represented unboxed. The resulting function won't use any heap while it is running, as you'd expect.
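Roughly, the effect of the analysis resembles the following hand-written worker/wrapper split. This is only a sketch using GHC's unboxed <hask>Int#</hask> primitives; GHC derives something like this internally, not literally this code:

<haskell>
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), Int#, (*#), (-#), (<=#))

-- The "worker" runs entirely on unboxed Int# values, so the loop
-- allocates nothing on the heap.
wfac :: Int# -> Int#
wfac n = case n <=# 1# of
  1# -> 1#
  _  -> n *# wfac (n -# 1#)

-- The "wrapper" unboxes the argument once and reboxes the result.
fac :: Int -> Int
fac (I# n) = I# (wfac n)

main :: IO ()
main = print (fac 5)
</haskell>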
A common misunderstanding of strictness analysis arises when [[Fold|folding]] (reducing) lists. If this program

<haskell>
main = print (foldl (+) 0 [1..1000000])
</haskell>

is compiled in GHC without the <tt>-O</tt> flag, it uses a lot of heap and stack. A programmer knows that the long list (<hask>[1..1000000]</hask>) is stored as a thunk, not fully, because the programmer has read about [[nonstrict semantics]] and [[lazy vs. nonstrict]]. The programmer explicitly wrote the sum as [[Tail recursion|tail recursive]], so the program should use a small amount of stack, because the programmer knows about [[stack overflow]]. So the behavior of the program looks mysterious to the programmer.

The programmer concludes that the program somehow decides to store the long list fully in the heap, or that the garbage collector is not able to remove the dead prefix of the long list. Wrong. The long list is fine.

Look at the definition of <hask>foldl</hask> from the standard library.

<haskell>
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z0 xs0 = lgo z0 xs0
  where
    lgo z []     = z
    lgo z (x:xs) = lgo (f z x) xs
</haskell>

<hask>lgo</hask>, instead of adding elements of the long list, creates '''a thunk''' for <hask>(f z x)</hask>. <hask>z</hask> is stored within that thunk, and <hask>z</hask> is itself a thunk, created during the previous call to <hask>lgo</hask>. The program creates a long chain of thunks, and the stack is bloated when that chain is finally evaluated.
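The chain can be made visible by unfolding the definition by hand. The following sketch (assuming the <hask>lgo</hask> helper above, shown on a small list for readability) traces how the unevaluated expression grows at each step:

<haskell>
-- Unfolding foldl (+) 0 [1,2,3] by hand (a sketch):
--
--   lgo 0 [1,2,3]
--   = lgo (0+1) [2,3]        -- (0+1) is left as a thunk
--   = lgo ((0+1)+2) [3]      -- the chain of thunks grows
--   = lgo (((0+1)+2)+3) []
--   = ((0+1)+2)+3            -- only forced here, at the very end
main :: IO ()
main = print (foldl (+) 0 [1,2,3])
</haskell>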

With the <tt>-O</tt> flag, GHC performs strictness analysis; it then knows that <hask>lgo</hask> is strict in its <hask>z</hask> argument, so thunks are not needed and are not created.
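For comparison, the strict variant <hask>foldl'</hask> from <hask>Data.List</hask> forces the accumulator before each recursive call, so it runs in constant space with or without <tt>-O</tt>. A minimal sketch:

<haskell>
import Data.List (foldl')

-- foldl' evaluates the accumulator at every step, so no chain of
-- thunks is built up, even when compiled without optimisation.
main :: IO ()
main = print (foldl' (+) 0 [1..1000000])
</haskell>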

== Limitations of strictness analysis ==

It's easy to accidentally write functions that aren't strict, though. Often a lazy function can be sitting around eating up your performance, when making it strict wouldn't change the meaning of the program. For example:

<haskell>
suminit :: [Int] -> Int -> Int -> (Int,[Int])
suminit xs len acc = case len == 0 of
  True  -> (acc,xs)
  False -> case xs of
    []   -> (acc,[])
    x:xs -> suminit xs (len-1) (acc+x)

main = print (fst (suminit [1..] 1000000 0))
</haskell>

This function sums the first <hask>len</hask> elements of a list, returning the sum and the remaining list. We've already tried to improve performance by using an [[Performance/Accumulating parameter|accumulating parameter]]. However, the parameter <hask>acc</hask> isn't strict, because there's no guarantee that the caller will evaluate it. The compiler will use a fully boxed <hask>Int</hask> to represent <hask>acc</hask>, although it will probably use an unboxed <hask>Int</hask> to represent <hask>len</hask>. The expression <hask>(acc+x)</hask> will be saved as a suspension, rather than evaluated on the spot. (Incidentally, this is a common pattern that crops up time and again in small recursive functions with a few parameters.)

== Explicit strictness ==

We can make an argument strict explicitly.

In the <hask>foldl</hask> example, replace <hask>foldl</hask> with <hask>foldl'</hask>.

For <hask>suminit</hask>, we need to make <hask>acc</hask> strict. The way to do this is using <hask>seq</hask>:

<haskell>
suminit :: [Int] -> Int -> Int -> (Int,[Int])
suminit xs len acc = acc `seq` case len == 0 of
  True  -> (acc,xs)
  False -> case xs of
    []   -> (acc,[])
    x:xs -> suminit xs (len-1) (acc+x)
</haskell>

Some other languages (e.g. Clean) have strictness annotations on types, which is a less ugly way to express this, but for now there are no Haskell compilers that support this.

With the <tt>BangPatterns</tt> GHC extension enabled, the above can be written as

{{Note|For strict data structures, see [[Performance/Data_types]].}}

<haskell>
suminit xs !len !acc = …
</haskell>

Incidentally, GHC will also eliminate the tuple returned by this function if the caller immediately deconstructs it.
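For illustration, a complete bang-pattern version might look as follows. This is a sketch that reuses the body of the <hask>seq</hask> version above; the bangs force <hask>len</hask> and <hask>acc</hask> on every call, so no suspensions accumulate:

<haskell>
{-# LANGUAGE BangPatterns #-}

suminit :: [Int] -> Int -> Int -> (Int,[Int])
suminit xs !len !acc = case len == 0 of
  True  -> (acc,xs)
  False -> case xs of
    []    -> (acc,[])
    x:xs' -> suminit xs' (len-1) (acc+x)

main :: IO ()
main = print (fst (suminit [1..] 1000000 0))
</haskell>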

== Evaluating expressions strictly ==

There's a useful variant of the infix application operator <hask>($)</hask> that evaluates its argument strictly: <hask>($!)</hask>. This can often be used to great effect in eliminating unnecessary suspensions that the compiler hasn't spotted. E.g. in a function application

<haskell>
f (g x)
</haskell>

writing instead

<haskell>
f $! (g x)
</haskell>

will be more efficient if (a) you were going to evaluate <hask>(g x)</hask> anyway, and (b) <hask>f</hask> isn't visibly strict, or inlined. If <hask>f</hask> is strict or inlined, then the chances are that <hask>($!)</hask> is unnecessary cruft here.
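The relationship between <hask>($!)</hask> and <hask>seq</hask> can be sketched as follows. <hask>strictApply</hask> is a name made up here for illustration; the real <hask>($!)</hask> comes from the Prelude:

<haskell>
-- ($!) forces its argument to weak head normal form before the call;
-- behaviourally it can be modelled with seq.
strictApply :: (a -> b) -> a -> b
strictApply f x = x `seq` f x

main :: IO ()
main = print (strictApply negate (3 + 4))  -- (3 + 4) is forced before negate runs
</haskell>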

A good example is the monadic return. If you find yourself writing

<haskell>
do …
   …
   return (fn x)
</haskell>

then consider instead writing

<haskell>
do …
   …
   return $! fn x
</haskell>

It is very rare to actually need laziness in the argument of <hask>return</hask> here.

Warning: Using any kind of strictness annotations as above can have unexpected impact on program semantics, in particular when certain optimizations are performed by the compiler. See [[correctness of short cut fusion]].

== Rule of Thumb for Strictness Annotation ==

A rule of thumb for when strictness annotation might be needed:

When a function <hask>f</hask> with argument <hask>x</hask> satisfies both conditions:

* <hask>f</hask> calls a function on a function of <hask>x</hask>: <hask>(h (g x))</hask>
* <hask>f</hask> is not already strict in <hask>x</hask> (does not inspect <hask>x</hask>'s value),

then it can be helpful to force evaluation.

Example:

<haskell>
-- Force strictness: make g's argument smaller.
f x = g $! (h x)

-- Don't force: f isn't building on x, so just let g deal with it.
f x = g x

-- Don't force: f is already strict in x.
f x = case x of
  0 -> (h (g x))
</haskell>
Latest revision as of 17:16, 8 June 2022