Infinity and efficiency
Latest revision as of 18:41, 25 April 2009
In this article we demonstrate how to check the efficiency of an implementation by checking for proper results for infinite input. In general, it is harder to reason about time and memory complexity of an implementation than about its correctness. In fact, in Haskell inefficient implementations sometimes turn out to be wrong implementations.
A very simple example is the function
reverse . reverse, which seems to be an inefficient implementation of id.
In a language with strict semantics, these two functions are the same.
But since the non-strict semantics of Haskell allows infinite data structures, there is a subtle difference,
because for an infinite input list, the function
reverse . reverse is undefined, whereas
id is defined.
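The difference can be observed directly. A small sketch (the helper names are ours, for illustration only):

```haskell
-- On a finite list, reverse . reverse and id agree.
finiteAgree :: Bool
finiteAgree = (reverse . reverse) [1, 2, 3] == id [1, 2, 3 :: Int]

-- On an infinite list, id still produces output lazily ...
firstFive :: [Int]
firstFive = take 5 (id [1 ..])
-- firstFive == [1,2,3,4,5]

-- ... whereas take 5 ((reverse . reverse) [1 ..]) would loop forever,
-- because reverse must reach the end of the list before emitting anything.
```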
Now let's consider a more complicated example.
Say we want to program a function that removes elements from the end of a list,
just as dropWhile removes elements from the beginning of a list.
We want to call it dropWhileRev.
(As a more concrete example, imagine a function which removes trailing spaces.)
A simple implementation is
dropWhileRev :: (a -> Bool) -> [a] -> [a]
dropWhileRev p = reverse . dropWhile p . reverse
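For instance, the trailing-spaces use mentioned above could look like this (a hypothetical wrapper, not from the original text):

```haskell
-- Naive implementation: go through the reversed list.
dropWhileRev :: (a -> Bool) -> [a] -> [a]
dropWhileRev p = reverse . dropWhile p . reverse

-- Hypothetical example use: remove trailing spaces from a string.
stripTrailing :: String -> String
stripTrailing = dropWhileRev (== ' ')
-- stripTrailing "hello   " == "hello"
```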
You probably have already guessed that it also does not work for infinite input lists.
Incidentally, it is also inefficient, because
reverse must compute all list nodes
(although it need not compute the values stored in the nodes).
Thus the full list skeleton must be held in memory.
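That the spine is forced but the values are not can be checked with undefined elements; a minimal sketch:

```haskell
-- reverse builds every node of its result but never inspects the
-- stored values, so undefined elements are harmless as long as we
-- only ask for the spine (here: the length).
spineOnly :: Int
spineOnly = length ((reverse . reverse) [undefined, undefined, undefined])
-- spineOnly == 3, even though forcing any element would crash
```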
However, it is possible to implement
dropWhileRev in a way that works for more kinds of inputs.
dropWhileRev :: (a -> Bool) -> [a] -> [a]
dropWhileRev p = foldr (\x xs -> if p x && null xs then [] else x:xs) []
foldr formally inspects the list from right to left,
but it actually processes data from left to right.
Whenever a run of elements that match the condition
p occurs, these elements are held until either the end of the list is encountered (then they are dropped)
or a non-matching list element is found (then they are emitted).
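Because of this behaviour, the foldr version even works on certain infinite inputs, as long as every run of matching elements is finite. A sketch (the example inputs are ours):

```haskell
dropWhileRev :: (a -> Bool) -> [a] -> [a]
dropWhileRev p = foldr (\x xs -> if p x && null xs then [] else x : xs) []

-- Finite input: trailing spaces are dropped.
finiteCase :: String
finiteCase = dropWhileRev (== ' ') "hello   "
-- finiteCase == "hello"

-- Infinite input: every space is followed by a non-space, so each
-- buffered run is flushed and output is produced lazily.
infiniteCase :: String
infiniteCase = take 8 (dropWhileRev (== ' ') (cycle "ab "))
-- infiniteCase == "ab ab ab"
```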
The crux is the part
null xs, which requires recursive calls within foldr.
This works in many cases, but it fails if a run of matching elements is infinite, since those elements must be buffered until the run ends.
The maximum memory consumption depends on the length of the longest run of matching elements,
which makes this much more efficient than the naive implementation above.