== The three laws ==

All instances of the <code>Monad</code> typeclass should satisfy the following laws:
{|
|-
| ''Left identity:''
| <haskell>return a</haskell>
| <tt>'''>>='''</tt>
| <haskell>h</haskell>
| ≡
| <haskell>h a</haskell>
|-
| ''Right identity:''
| <haskell>m</haskell>
| <tt>'''>>='''</tt>
| <haskell>return</haskell>
| ≡
| <haskell>m</haskell>
|-
| ''Associativity:''
| <haskell>(m >>= g)</haskell>
| <tt>'''>>='''</tt>
| <haskell>h</haskell>
| ≡
| <haskell>m >>= (\x -> g x >>= h)</haskell>
|}

Here, ''p'' ≡ ''q'' simply means that you can replace ''p'' with ''q'' and vice versa, and the behaviour of your program will not change: ''p'' and ''q'' are equivalent.
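
For a concrete feel for what the laws promise, here is a minimal sketch that checks all three for the <code>Maybe</code> monad at particular values (the function <code>half</code> below is a made-up example, not part of any library):

<haskell>
-- Checking the three laws for Maybe at a few concrete values.
-- 'half' is a made-up example function.
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing

lawsHold :: Bool
lawsHold = and
  [ (return 4 >>= half) == half 4                  -- left identity
  , (Just 4 >>= return) == Just 4                  -- right identity
  , ((Just 16 >>= half) >>= half)
      == (Just 16 >>= (\x -> half x >>= half))     -- associativity
  ]
</haskell>

Of course, a handful of test cases is only a sanity check, not a proof that an instance is lawful.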


Using eta-expansion, the ''associativity'' law can be rewritten for clarity as:

{|
|-
| <haskell>(m >>= (\x -> g x)) >>= h</haskell>
| ≡
| <haskell>m >>= (\x -> g x >>= h)</haskell>
|}

or equally:

{|
|-
| <haskell>(m >>= (\x -> g x)) >>= (\y -> h y)</haskell>
| ≡
| <haskell>m >>= (\x -> g x >>= (\y -> h y))</haskell>
|}
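
Both groupings really do produce the same result. As a quick sanity check, here is a small sketch evaluating them in the list monad (the functions <code>g</code> and <code>h</code> below are arbitrary made-up examples):

<haskell>
-- Both groupings of the associativity law, evaluated in the list monad.
-- 'g' and 'h' are arbitrary made-up example functions.
g :: Int -> [Int]
g x = [x, x + 1]

h :: Int -> [Int]
h x = [x * 10]

leftNested, rightNested :: [Int]
leftNested  = ([1, 2] >>= (\x -> g x)) >>= h     -- == [10,20,20,30]
rightNested =  [1, 2] >>= (\x -> g x >>= h)      -- == [10,20,20,30]
</haskell>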

=== Is that really an "associative law"? ===

In this form, not at first glance. To see precisely why they're known as ''identity'' and ''associative'' laws, you have to change your notation slightly.

The monad-composition operator <code>(>=>)</code> (also known as the ''Kleisli-composition'' operator) is defined in <code>Control.Monad</code>:

<haskell>
infixr 1 >=>

(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \x -> f x >>= g
</haskell>

Using this operator, the three laws can be expressed like this:

{|
|-
| ''Left identity:''
| <haskell>return</haskell>
| <tt>'''>=>'''</tt>
| <haskell>h</haskell>
| ≡
| <haskell>h</haskell>
|-
| ''Right identity:''
| <haskell>f</haskell>
| <tt>'''>=>'''</tt>
| <haskell>return</haskell>
| ≡
| <haskell>f</haskell>
|-
| ''Associativity:''
| <haskell>(f >=> g)</haskell>
| <tt>'''>=>'''</tt>
| <haskell>h</haskell>
| ≡
| <haskell>f >=> (g >=> h)</haskell>
|}

It's now easy to see that monad composition is an associative operator with left and right identities.
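
To make the composed arrows concrete, here is a small sketch chaining two <code>Maybe</code>-returning functions with <code>(>=>)</code> (the helpers <code>parseInt</code> and <code>positive</code> are made-up examples); note that by ''right identity'' the trailing <code>return</code> changes nothing:

<haskell>
import Control.Monad ((>=>))
import Text.Read (readMaybe)

-- Two made-up Kleisli arrows for illustration.
parseInt :: String -> Maybe Int
parseInt = readMaybe

positive :: Int -> Maybe Int
positive n = if n > 0 then Just n else Nothing

-- Composed left to right; by right identity the final 'return' is redundant.
parsePositive :: String -> Maybe Int
parsePositive = parseInt >=> positive >=> return
-- parsePositive "42"  == Just 42
-- parsePositive "-5"  == Nothing
-- parsePositive "oop" == Nothing
</haskell>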

This is a very important way to express the three monad laws, because they are precisely the laws that are required for monads to form a mathematical category. To summarise in haiku:

:<tt>
:Monad axioms:
:Kleisli composition forms
:a category.
:</tt>
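
This is not just an analogy: in <code>base</code>, the <code>Kleisli</code> wrapper from <code>Control.Arrow</code> packages monadic functions as the arrows of a <code>Category</code>, with identity built from <code>return</code> and composition built from <code>(>>=)</code>. A small sketch (the arrow <code>double</code> is a made-up example):

<haskell>
import Prelude hiding (id, (.))
import Control.Category (Category(..), (>>>))
import Control.Arrow (Kleisli(..))

-- A made-up example arrow in the Kleisli category of Maybe.
double :: Kleisli Maybe Int Int
double = Kleisli (\x -> Just (x * 2))

-- By the identity laws, composing with 'id' changes nothing; by the
-- associativity law, any grouping of the chain below means the same thing.
example :: Maybe Int
example = runKleisli (id >>> double >>> double) 3   -- Just 12
</haskell>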

== The monad laws in practice ==

If we rewrite the laws using Haskell's <code>do</code>-notation:
{|
|-
| ''Left identity:''
| <haskell>
do { x′ <- return x;
     f x′
   }
</haskell>
| ≡
| <haskell>
do { f x }
</haskell>
|-
| ''Right identity:''
| <haskell>
do { x <- m;
     return x
   }
</haskell>
| ≡
| <haskell>
do { m }
</haskell>
|-
| ''Associativity:''
| <haskell>
do { y <- do { x <- m;
               f x
             };
     g y
   }
</haskell>
| ≡
| <haskell>
do { x <- m;
     do { y <- f x;
          g y
        }
   }
</haskell>
| ≡
| <haskell>
do { x <- m;
     y <- f x;
     g y
   }
</haskell>
|}

we can see that the laws represent plain, ordinary common-sense transformations of imperative programs.

=== But why should monadic types satisfy these laws? ===

When we see a program written in a form on the left-hand side, we expect it to do the same thing as the corresponding right-hand side; and vice versa. And in practice, people do write code like the lengthier left-hand side once in a while.

* First example: beginners tend to write
:<haskell>
skip_and_get = do unused <- getLine
                  line <- getLine
                  return line
</haskell>
:and it would really throw off both beginners and veterans if that did not act like (by ''right identity''):
:<haskell>
skip_and_get = do unused <- getLine
                  getLine
</haskell>

* Second example: Next, you go ahead and use <code>skip_and_get</code>:
:<haskell>
main = do answer <- skip_and_get
          putStrLn answer
</haskell>
:The most popular way of comprehending this program is by ''inlining'' (whether the compiler does or not is an orthogonal issue):
:<haskell>
main = do answer <- (do unused <- getLine
                        getLine)
          putStrLn answer
</haskell>
:and applying associativity so you can pretend it is:
:<haskell>
main = do unused <- getLine
          answer <- getLine
          putStrLn answer
</haskell>

The associativity law is amazingly pervasive: you have always assumed it, and you have never noticed it.

The associativity of a ''binary'' operator allows any number of operands to be combined, by applying the operator with any arbitrary grouping, to get the same well-defined result: just as the sum of a list of numbers is fully defined by the binary <code>(+)</code> operator no matter which parenthesization is used (yes, just like folding a list of any type of monoidal values).
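
The same point in code, as a minimal sketch using <code>Sum</code> from <code>Data.Monoid</code>: any grouping of an associative operator, and any fold over a list of monoidal values, produces the same total.

<haskell>
import Data.Monoid (Sum(..))

-- Any parenthesization of an associative operator gives the same result.
groupedLeft, groupedRight :: Int
groupedLeft  = ((1 + 2) + 3) + 4
groupedRight = 1 + (2 + (3 + 4))
-- groupedLeft == groupedRight == 10

-- The same idea for an arbitrary monoid: the total of a list of Sums is
-- fully determined, however mconcat chooses to group the combinations.
total :: Sum Int
total = mconcat [Sum 1, Sum 2, Sum 3, Sum 4]   -- Sum {getSum = 10}
</haskell>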

Whether compilers make use of them or not, you still want the laws for your own sake, just so you can avoid pulling your hair out over counterintuitive program behaviour, which depends (in brittle fashion!) on e.g. how many redundant <code>return</code>s you insert or how you nest your <code>do</code>-blocks...

[[Category:Standard_classes]]