seq

While its name might suggest otherwise, the seq function's purpose is to introduce strictness to a Haskell program. As indicated by its type signature:

seq :: a -> b -> b

it takes two arguments of any type, and returns the second. However, it also has the important property that it is always strict in its first argument. In essence, seq is defined by the following two equations:

⊥ `seq` b         = ⊥
a `seq` b | a ≠ ⊥ = b

(See Bottom for an explanation of the symbol.)
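For example (an illustrative pair of definitions, not taken from any report), both equations can be observed directly:

example1, example2 :: Int
example1 = 1 `seq` 2          -- = 2: the first argument is not ⊥, so the second is returned
example2 = undefined `seq` 2  -- = ⊥: demanding example2 forces undefined first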


History

The need to specify an order of evaluation in otherwise-nonstrict programming languages has appeared before:

We can use VAL to define a "sequential evaluation" operator which evaluates its first argument and then returns its second:

a ; b = (λx. b) val a

and we can define other functions which control evaluation order [...]

The Design and Implementation of Programming Languages (page 88 of 159).

10.1. Forcing Sequential Execution
One of the primary complications of side-effects is that one must be able to order their execution. In conventional languages the flow of control explicit in the semantics accomplishes this ordering. In ALFL, however, there is no explicit flow of control (a feature!), and so it is necessary to introduce a special operator to do this if we are to admit any form of side-effects.

ALFL Reference Manual and Programmer's Guide (page 17 of 29).

`seq' applied to two values, returns the second but checks that the first value is not completely undefined. Sometimes needed, e.g. to ensure correct synchronisation in interactive programs.

> seq :: *->**->** ||defined internally
The Miranda Standard Environment © Research Software Limited 1989
/*
**      seq:            sequentially evaluate two values and return the second.
**                      The first value is evaluated to WHNF.
*/
src/lib/seq.m - LML 0.99

and in Haskell since at least 1996:

The seq combinator implements sequential composition. When the expression e1 `seq` e2 is evaluated, e1 is evaluated to weak head normal form first, and then the value of e2 is returned. In the following parallel nfib function, seq is used to force the evaluation of n2 before the addition takes place. This is because Haskell does not specify which operand is evaluated first, and if n1 was evaluated before n2, there would be no parallelism.

nfib :: Int -> Int
nfib n | n <= 1    = 1
       | otherwise = n1 `par` (n2 `seq` n1 + n2 + 1)
                     where
                       n1 = nfib (n-1)
                       n2 = nfib (n-2)
Accidents always Come in Threes: A Case Study of Data-intensive Programs in Parallel Haskell (page 2 of 14).

...the same year seq was introduced in Haskell 1.3 as a method of the (now-abandoned) Eval type class:

class Eval a where
   strict :: (a -> b) -> a -> b
   seq    :: a -> b -> b

   strict f x = x `seq` f x

However, despite that need, by the time Haskell 98 was released seq had been reduced to a primitive strictness definition. But in 2009, any remaining doubts about the need for a primitive sequencing definition were dispelled:

2.1 The need for pseq

The pseq combinator is used for sequencing; informally, it evaluates its first argument to weak-head normal form, and then evaluates its second argument, returning the value of its second argument. Consider this definition of parMap:

parMap f []     = []
parMap f (x:xs) = y `par` (ys `pseq` y:ys)
   where y  = f x
         ys = parMap f xs

The intention here is to spark the evaluation of f x, and then evaluate parMap f xs, before returning the new list y:ys. The programmer is hoping to express an ordering of the evaluation: first spark y, then evaluate ys.

Runtime Support for Multicore Haskell (page 2 of 12).

Alas, this confirmation failed to influence Haskell 2010: to this day, seq remains just a primitive strictness definition. So, for enhanced confusion, the only Haskell implementation still in widespread use (GHC) now provides both seq and pseq.

Demystifying seq

A common misconception regarding seq is that seq x "evaluates" x. Well, sort of. seq doesn't evaluate anything just by virtue of appearing in the source file; all it does is introduce an artificial data dependency of one value on another: when the result of seq is evaluated, the first argument must also (sort of; see below) be evaluated. As an example, suppose x :: Integer; then seq x b behaves essentially like if x == 0 then b else b: unconditionally equal to b, but forcing x along the way. In particular, the expression x `seq` x is completely redundant, and always has exactly the same effect as just writing x.
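A minimal sketch of this behaviour, using Debug.Trace (the names noisy and lazyPair are illustrative only): nothing is forced until the result of the seq expression is itself demanded:

import Debug.Trace (trace)

noisy :: Int
noisy = trace "noisy was forced" 42

lazyPair :: (Int, Int)
lazyPair = (noisy `seq` 1, 2)   -- building the pair forces nothing yet

main :: IO ()
main = do
  print (snd lazyPair)          -- prints 2; the trace message does not appear
  print (fst lazyPair)          -- demands the seq expression, so noisy is forced first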

Strictly speaking, the two equations of seq are all it must satisfy, and if the compiler can statically prove that the first argument is not ⊥, or that its second argument is, it doesn't have to evaluate anything to meet its obligations. In practice, this almost never happens, and would probably be considered highly counterintuitive behaviour on the part of GHC (or whatever else you use to run your code). The equations also say nothing about evaluation order, so for example, in seq a b it is perfectly legitimate for seq to:

1. evaluate b (its second argument),
2. before evaluating a (its first argument),
3. and then return b.

In this larger example:

let x = ... in
let y = sum [0..47] in
x `seq` 3 + y + y^2

seq immediately evaluating its second argument (3 + y + y^2) avoids having to allocate a thunk for y:

let x = ... in
case sum [0..47] of
  y -> x `seq` 3 + y + y^2

However, sometimes this ambiguity is undesirable, hence the need for pseq.
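As a sketch of the difference (using par and pseq from GHC.Conc; parSum and its helpers are illustrative names, not from the original text):

import GHC.Conc (par, pseq)

-- Spark the evaluation of a, then use pseq to guarantee that b is evaluated
-- before the addition touches a, so the spark has a chance to run in parallel.
-- With seq in place of pseq, the compiler would be free to evaluate (a + b)
-- first, forcing a on the main thread and wasting the spark.
parSum :: Int -> Int -> Int
parSum x y = a `par` (b `pseq` (a + b))
  where
    a = costly x
    b = costly y
    costly n = sum [1 .. n]   -- stand-in for an expensive computation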

Common uses of seq

seq is typically used in the semantic interpretation of other strictness techniques, like strictness annotations in data types, or GHC's BangPatterns extension. For example, the meaning of this:

f !x !y = z

is this:

f x y | x `seq` y `seq` False = undefined
      | otherwise = z

although that literal translation may not actually take place.
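For instance (a small sketch assuming GHC's BangPatterns extension; the names are illustrative), the bang pattern makes the function diverge as soon as it is applied to ⊥, whether or not the argument is otherwise used:

{-# LANGUAGE BangPatterns #-}

constLazy :: Int -> Int -> Int
constLazy x y = x            -- constLazy 1 undefined  =  1

constStrict :: Int -> Int -> Int
constStrict x !y = x         -- constStrict 1 undefined  =  ⊥, since y is forced first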

seq is frequently used with accumulating parameters to ensure that they don't become huge thunks, which will be forced at the end anyway. For example, strict foldl:

foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' _ z [] = z
foldl' f z (x:xs) = let z' = f z x in z' `seq` foldl' f z' xs
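For example (an illustrative comparison rather than a benchmark), summing a long list with the lazy foldl builds one huge nested thunk, whereas the foldl' just defined forces the accumulator at every step:

-- foldl (+) 0 [1..n] builds the thunk (...((0 + 1) + 2) + ...) + n before
-- performing a single addition, which can exhaust memory for large n;
-- foldl' keeps the accumulator evaluated throughout.
sumLazy, sumStrict :: Int
sumLazy   = foldl  (+) 0 [1 .. 1000000]
sumStrict = foldl' (+) 0 [1 .. 1000000]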

It's also used to define strict application:

($!) :: (a -> b) -> a -> b
f $! x = x `seq` f x

which is useful for some of the same reasons.
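For example (an illustrative sketch; sumTo is not a standard function), ($!) can keep an accumulating parameter evaluated in much the same way as foldl' does:

-- the accumulator is forced on every recursive call instead of growing
-- into the nested thunk (((0 + 1) + 2) + ...)
sumTo :: Int -> Int
sumTo = go 0
  where
    go acc 0 = acc
    go acc n = (go $! acc + n) (n - 1)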

Controversy?

The presence of seq in Haskell does have some disadvantages:

1. It is the only reason why Haskell programs are able to distinguish between the following two values:
undefined       :: a -> b
const undefined :: a -> b

This violates the principle of extensionality of functions, or eta-conversion from the lambda calculus, because f and \x -> f x can be told apart, even though they return the same output for every input.
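Concretely (a small sketch, fixing the type to Int -> Int for illustration), it is seq that makes the two values observably different:

distinguish1, distinguish2 :: ()
distinguish1 = (undefined       :: Int -> Int) `seq` ()   -- = ⊥: undefined has no WHNF
distinguish2 = (const undefined :: Int -> Int) `seq` ()   -- = (): a partial application is already in WHNF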

2. It can invalidate optimisation techniques which would normally be safe, causing the following two expressions to differ:
foldr ⊥ 0 (build seq) = foldr ⊥ 0 (seq (:) []) = foldr ⊥ 0 [] = 0
seq ⊥ 0                                                       = ⊥

This weakens the ability to use parametricity, which implies foldr k z (build g) == g k z for suitable values of g, k and z.

3. It can invalidate laws which would otherwise hold, also causing expressions to have differing results:
seq (⊥ >>= return :: State s a) True = True
seq (⊥ :: State s a) True            = ⊥

This violates the first monad law, that m >>= return == m.

But seq (sequential or otherwise) isn't alone in causing such difficulties:

1. When combined with call-by-need semantics, the use of weak-head normal form is also detrimental to extensionality.
2. When combined with GADTs, the associated map functions which uphold the functor laws are also problematic for parametricity.
3. The ability to define the fixed-point combinator in Haskell using recursion:
yet :: (a -> a) -> a
yet f = f (yet f)

means parametricity is further restricted.

4. Strictness must be considered when deriving laws for various recursive algorithms (see chapter 6).
5. Similar to seq and ⊥, the use of division and zero presents its own challenges!

Therefore all such claims against seq (or calls for its outright removal) should be examined in this context.

Amelioration

Parametricity

In 2004, Johann and Voigtländer provided a solution to the loss of parametricity caused by seq:

(first page)

[...] parametricity results can be recovered in the presence of seq by restricting attention to left-closed, total, and admissible relations instead.

Extensionality

In 2016, Johnson-Freyd, Downen and Ariola suggested the use of head evaluation instead of weak-head evaluation to preserve (functional) extensionality, and not just in the call-by-name lambda calculus:

(page 16 of 18)

[...] it may be that effective approaches to head evaluation such as those in this paper are of interest even in call-by-value languages or in settings (such as Haskell with seq) that lack the η axiom.

One complication is that compiled Haskell functions are usually implemented directly as machine code, which would somehow have to be run without arguments to obtain the necessary result. Fortunately, head evaluation isn't needed everywhere in Haskell: the η axiom fails only because of the seq function (and its use of weak-head evaluation), so restoring extensionality only requires seq itself to use head evaluation.

To discover what changes seq requires, compare how ⊥-containing and ordinary functions behave when applied:

  (\ x -> ⊥ x) False
= ⊥ False
= ⊥

  (\ x -> not x) False
= not False
= True

  (\ x -> (⊥ . not) x) False
= (⊥ . not) False
= ⊥ (not False)
= ⊥

  (\ x -> (seq True . not) x) False
= (seq True . not) False
= seq True (not False)
= not False
= True

Hence, for seq to respect the η axiom:

  seq (\ x -> ⊥ x) y
= ⊥

  seq (\ x -> not x) y
= y

  seq (\ x -> (⊥ . not) x) y
= ⊥

  seq (\ x -> (seq True . not) x) y
= y

Therefore:

  (\ x -> ⊥ x) UKN
= ⊥ UKN
= ⊥

  (\ x -> not x) UKN
= not UKN
= UKN

  (\ x -> (⊥ . not) x) UKN
= (⊥ . not) UKN
= ⊥ (not UKN)
= ⊥

  (\ x -> (seq UKN . not) x) UKN
= (seq UKN . not) UKN
= seq UKN (not UKN)
= not UKN
= UKN

Here the ordinary Haskell values False and True have been replaced by the generic unknown value UKN.

For (seq UKN . not) UKN, the intermediate steps:

  • seq UKN (not UKN) replaced by not UKN
  • not UKN replaced by UKN

suggest an exception-driven implementation, with seq catching UKN, and the implementation intervening directly if an attempt to evaluate UKN occurs (by replacing the original application with UKN).

Using a context more suitable for exceptions:

data Ukn = Ukn                  -- UKN, realised as an exception

instance Show Ukn where show Ukn = "unknown value"
instance Exception Ukn

-- a sketch: Exception, catch and throw are as in Control.Exception, while
-- evalM, runM and the monad they run in are assumed to be supplied by the
-- implementation (evalM is outlined further below)
seqM x y = catch (evalM x >> return y) (\ (_ :: Ukn) -> return y)
seq x y  = runM (seqM x y)

ukn      = throw Ukn            -- two separate occurrences of the Ukn
ukn'     = throw Ukn            -- exception, since the example uses it twice

that example, which uses the Ukn exception twice (hence the two bindings ukn and ukn'), can be verified:

  (\ x -> (seq ukn' . not) x) ukn
= (seq ukn' . not) ukn
= seq ukn' (not ukn)
= not ukn
= ukn

evalM can then be defined as follows:

-- another sketch: evalWHNF, tryFunction and applyM are assumed primitives
-- of the implementation's evaluation monad
evalM x =
 do a <- evalWHNF x                 -- reduce x to weak-head normal form
    e <- tryFunction a              -- is the result a function (and of what arity)?
    case e of
      Left a           -> return a  -- not a function: head evaluation is complete
      Right (h, arity) -> do r <- applyM h (replicate arity (throw Ukn))
                             evalM r   -- apply to UKN arguments and keep going

though the need for tryFunction to inspect its argument so intensively in order to return a value h of arbitrary arity is further evidence that it would be more appropriate to define seq directly within the implementation.

So for the price of a few extra definitions in the implementation (or reusing existing ones with sufficient ingenuity), seq and the extensionality of the η axiom can both exist in Haskell.

See also