Monad laws

From HaskellWiki
Revision as of 12:38, 15 March 2021

The three laws

All instances of the Monad typeclass should satisfy the following laws:

Left identity:   return a >>= h       ≡   h a

Right identity:  m >>= return         ≡   m

Associativity:   (m >>= g) >>= h      ≡   m >>= (\x -> g x >>= h)

Here, p ≡ q simply means that you can replace p with q and vice versa, and the behaviour of your program will not change: p and q are equivalent.
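As a concrete illustration (a sketch, not part of the article; the choice of the Maybe monad and the arrows g and h below is arbitrary), the three laws can be checked at specific values:

```haskell
-- Illustrative check of the three laws in the Maybe monad.
-- The arrows g and h and the test values are arbitrary examples.
g :: Int -> Maybe Int
g x = if even x then Just (x `div` 2) else Nothing

h :: Int -> Maybe Int
h x = Just (x + 1)

m :: Maybe Int
m = Just 6

main :: IO ()
main = do
  print ((return 6 >>= h) == h 6)                        -- left identity
  print ((m >>= return) == m)                            -- right identity
  print (((m >>= g) >>= h) == (m >>= (\x -> g x >>= h))) -- associativity
```

Each check prints True; of course, checking a law at a few points does not prove it, but it shows what the equivalences mean operationally.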

Using eta-expansion, the associativity law can be re-written for clarity as:

(m  >>=  (\x -> g x))  >>=  h
    ≡
 m  >>=  (\x -> g x    >>=  h)

or equally:

(m  >>=  (\x -> g x))  >>=  (\y -> h y)
    ≡
 m  >>=  (\x -> g x    >>=  (\y -> h y))

Is that really an "associative law"?


In this form, not at first glance. To see precisely why they're known as identity and associative laws, you have to change your notation slightly.

The monad-composition operator (>=>) (also known as the Kleisli-composition operator) is defined in Control.Monad:

infixr 1 >=>
(>=>)   :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g =  \x -> f x >>= g
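To see the operator in action, here is a small sketch (the helper names parseInt and halveEven are invented for this illustration) that chains two Kleisli arrows over Maybe:

```haskell
import Control.Monad ((>=>))

-- Hypothetical helpers: parse a string to an Int, then halve it if even.
parseInt :: String -> Maybe Int
parseInt s = case reads s of
               [(n, "")] -> Just n
               _         -> Nothing

halveEven :: Int -> Maybe Int
halveEven n = if even n then Just (n `div` 2) else Nothing

parseAndHalve :: String -> Maybe Int
parseAndHalve = parseInt >=> halveEven

main :: IO ()
main = do
  print (parseAndHalve "10")   -- Just 5
  print (parseAndHalve "7")    -- Nothing (odd)
  print (parseAndHalve "x")    -- Nothing (parse failure)
```

Failure in either stage short-circuits the whole pipeline, without any explicit case analysis at the composition site.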

Using this operator, the three laws can be expressed like this:

Left identity:   return >=> h      ≡   h

Right identity:  f >=> return      ≡   f

Associativity:   (f >=> g) >=> h   ≡   f >=> (g >=> h)

It's now easy to see that monad composition is an associative operator with left and right identities.
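Since Kleisli arrows are functions, these equivalences can only be observed pointwise. A small sketch (with arbitrary arrows f, g, h and a range of sample inputs chosen for the illustration):

```haskell
import Control.Monad ((>=>))

f, g, h :: Int -> Maybe Int
f x = Just (x + 1)
g x = if x >= 0 then Just (x * 2) else Nothing
h x = Just (x - 3)

-- Compare two Kleisli arrows pointwise over some sample inputs.
agree :: (Int -> Maybe Int) -> (Int -> Maybe Int) -> Bool
agree p q = all (\x -> p x == q x) [-5 .. 5]

main :: IO ()
main = do
  print (agree (return >=> h) h)                      -- left identity
  print (agree (f >=> return) f)                      -- right identity
  print (agree ((f >=> g) >=> h) (f >=> (g >=> h)))   -- associativity
```

All three checks print True, mirroring the identity and associativity shape of the table above.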

This is a very important way to express the three monad laws, because they are precisely the laws that are required for monads to form a mathematical category. To summarise in haiku:

Monad axioms;
Kleisli composition forms
a category.

The monad laws in practice

If we re-write the laws using Haskell's do-notation:

Left identity:

    do { x′ <- return x;
         f x′
       }

  ≡ do { f x }

Right identity:

    do { x <- m;
         return x
       }

  ≡ do { m }

Associativity:

    do { y <- do { x <- m;
                   f x
                 };
         g y
       }

  ≡ do { x <- m;
         do { y <- f x;
              g y
            }
       }

  ≡ do { x <- m;
         y <- f x;
         g y
       }

we can see that the laws represent plain, ordinary common-sense transformations of imperative programs.

But why should monadic types satisfy these laws?


When we see a program written in the form on the left-hand side, we expect it to do the same thing as the corresponding right-hand side, and vice versa. And in practice, people do occasionally write code in the lengthier left-hand form.

  • First example: beginners tend to write
skip_and_get = do unused <- getLine
                  line <- getLine
                  return line
and it would really throw off both beginners and veterans if that did not act like (by right identity):
skip_and_get = do unused <- getLine
                  getLine
  • Second example: Next, you go ahead and use skip_and_get:
main = do answer <- skip_and_get
          putStrLn answer
The most popular way of comprehending this program is by inlining (whether the compiler actually does so or not is an orthogonal issue):
main = do answer <- (do unused <- getLine
                        getLine)
          putStrLn answer
and applying associativity so you can pretend it is:
main = do unused <- getLine
          answer <- getLine
          putStrLn answer

The associativity law is amazingly pervasive: you have always assumed it, and you have never noticed it.

The associativity of a binary operator means that any number of operands can be combined, with any arbitrary grouping, to give the same well-defined result, just as the sum of a list of numbers is fully defined by the binary (+) operator no matter which parenthesization is used (and likewise when folding a list of values of any monoid).
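For instance (a trivial but checkable illustration), left-nested and right-nested grouping of the combining operator give the same result:

```haskell
xs :: [Int]
xs = [1 .. 10]

main :: IO ()
main = do
  -- Left-nested and right-nested grouping of (+) give the same sum.
  print (foldl (+) 0 xs == foldr (+) 0 xs)
  -- Same for any monoid, e.g. lists under (++).
  print (foldl (++) [] [[1], [2], [3]] == foldr (++) [] [[1], [2], [3]])
```

(This relies on (+) and (++) being associative; for a non-associative operator such as (-), foldl and foldr generally disagree.)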

Whether or not compilers exploit these laws, you still want them for your own sake: they spare you from pulling your hair out over counter-intuitive program behaviour that brittlely depends on, say, how many redundant returns you insert or how you nest your do-blocks.