== The three laws ==

All instances of the [[Monad]] typeclass should satisfy the following laws:

* ''Left identity:'' <hask>return a >>= h</hask> ≡ <hask>h a</hask>
* ''Right identity:'' <hask>m >>= return</hask> ≡ <hask>m</hask>
* ''Associativity:'' <hask>(m >>= g) >>= h</hask> ≡ <hask>m >>= (\x -> g x >>= h)</hask>
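These laws can be spot-checked for any particular monad. As an illustrative sketch (the arrows <hask>g</hask> and <hask>h</hask> below are arbitrary examples chosen here, not canonical), here is a direct check in the <hask>Maybe</hask> monad:

```haskell
-- Example Kleisli arrows in the Maybe monad (arbitrary choices):
g :: Int -> Maybe Int
g x = Just (x + 1)

h :: Int -> Maybe Int
h x = if x >= 0 then Just (2 * x) else Nothing

-- Each predicate is True exactly when the corresponding law holds
-- for the given inputs:
leftIdentity :: Int -> Bool
leftIdentity a = (return a >>= h) == h a

rightIdentity :: Maybe Int -> Bool
rightIdentity m = (m >>= return) == m

associativity :: Maybe Int -> Bool
associativity m = ((m >>= g) >>= h) == (m >>= (\x -> g x >>= h))
```

Evaluating e.g. <hask>associativity (Just 5)</hask> or <hask>rightIdentity Nothing</hask> yields <hask>True</hask>; of course, finitely many checks only illustrate the laws, they do not prove them.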

Here, ''p'' ≡ ''q'' simply means that you can replace ''p'' with ''q'' and vice versa, and the behaviour of your program will not change: ''p'' and ''q'' are equivalent.

Using eta-expansion, the ''associativity'' law can be rewritten for clarity as:

:<hask>(m >>= (\x -> g x)) >>= h</hask> ≡ <hask>m >>= (\x -> g x >>= h)</hask>

or equally:

:<hask>(m >>= (\x -> g x)) >>= (\y -> h y)</hask> ≡ <hask>m >>= (\x -> g x >>= (\y -> h y))</hask>

=== Is that really an "associative law"? ===

In this form, not at first glance. To see precisely why they're known as ''identity'' and ''associative'' laws, you have to change your notation slightly.

The monad composition operator <code>(>=>)</code> (also known as the ''Kleisli composition'' operator) is defined in <code>Control.Monad</code>:

<haskell>
infixr 1 >=>
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \x -> f x >>= g
</haskell>
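For instance, <code>(>=>)</code> can be used to chain functions that may fail. The helper names below are illustrative choices, not part of any library:

```haskell
import Control.Monad ((>=>))
import Text.Read (readMaybe)

-- Two Kleisli arrows in the Maybe monad (illustrative names):
parseInt :: String -> Maybe Int
parseInt = readMaybe

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Their composition parses a string and then halves the result;
-- it returns Nothing if either step fails.
parseAndHalve :: String -> Maybe Int
parseAndHalve = parseInt >=> halve
```

Here <hask>parseAndHalve "10"</hask> gives <hask>Just 5</hask>, while <hask>parseAndHalve "7"</hask> (odd) and <hask>parseAndHalve "ten"</hask> (unparsable) both give <hask>Nothing</hask>.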

Using this operator, the three laws can be expressed like this:

* ''Left identity:'' <hask>return >=> h</hask> ≡ <hask>h</hask>
* ''Right identity:'' <hask>f >=> return</hask> ≡ <hask>f</hask>
* ''Associativity:'' <hask>(f >=> g) >=> h</hask> ≡ <hask>f >=> (g >=> h)</hask>
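Again, these can be spot-checked for a concrete monad. Here is a small sketch in the list monad, with arbitrary example arrows of our own choosing:

```haskell
import Control.Monad ((>=>))

-- Arbitrary example Kleisli arrows in the list monad:
f, g, h :: Int -> [Int]
f x = [x, x + 1]
g x = [x * 2]
h x = [x, negate x]

-- Both groupings of the composition denote the same function:
leftGrouped, rightGrouped :: Int -> [Int]
leftGrouped  = (f >=> g) >=> h
rightGrouped = f >=> (g >=> h)
```

Applying both to the same argument, e.g. <hask>leftGrouped 1 == rightGrouped 1</hask>, illustrates associativity; similarly <hask>(return >=> h) 3 == h 3</hask> and <hask>(f >=> return) 7 == f 7</hask> illustrate the identities.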

It's now easy to see that monad composition is an associative operator with left and right identities.

This is a very important way to express the three monad laws, because they are precisely the laws that are required for monads to form a mathematical [[Category theory|category]]. To summarise in ''haiku'':

:<tt>
:''Monad axioms:''
:''Kleisli composition forms''
:''a category.''
:</tt>

== The monad laws in practice ==

If we rewrite the laws using Haskell's <code>do</code>-notation:

''Left identity:''
<haskell>
do { x' <- return x;
     f x'
   }
</haskell>
≡
<haskell>
do { f x }
</haskell>

''Right identity:''
<haskell>
do { x <- m;
     return x
   }
</haskell>
≡
<haskell>
do { m }
</haskell>

''Associativity:''
<haskell>
do { y <- do { x <- m;
               f x
             };
     g y
   }
</haskell>
≡
<haskell>
do { x <- m;
     do { y <- f x;
          g y
        }
   }
</haskell>
≡
<haskell>
do { x <- m;
     y <- f x;
     g y
   }
</haskell>
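The same spot-checking works for the <code>do</code>-blocks; a small sketch, again in the <hask>Maybe</hask> monad with arbitrary example functions:

```haskell
-- Arbitrary example functions:
f :: Int -> Maybe Int
f x = Just (x + 1)

g :: Int -> Maybe Int
g y = if y > 0 then Just (y * y) else Nothing

-- The nested and flattened forms of the associativity law:
nested :: Maybe Int -> Maybe Int
nested m = do { y <- do { x <- m; f x }; g y }

flattened :: Maybe Int -> Maybe Int
flattened m = do { x <- m; y <- f x; g y }
```

For any <hask>m</hask>, both forms agree: <hask>nested (Just 2)</hask> and <hask>flattened (Just 2)</hask> both give <hask>Just 9</hask>, and both give <hask>Nothing</hask> for <hask>Nothing</hask> or <hask>Just (-5)</hask>.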

we can see that the laws represent plain, ordinary common-sense transformations of imperative programs.

=== But why should monadic types satisfy these laws? ===

When we see a program written in a form on the left-hand side, we expect it to do the same thing as the corresponding right-hand side; and vice versa. And in practice, people do write code in the lengthier left-hand form once in a while.

* First example: beginners tend to write

:<haskell>
skip_and_get = do unused <- getLine
                  line <- getLine
                  return line
</haskell>

:and it would really throw off both beginners and veterans if that did not act like (by ''right identity''):

:<haskell>
skip_and_get = do unused <- getLine
                  getLine
</haskell>

* Second example: Next, you go ahead and use <code>skip_and_get</code>:

:<haskell>
main = do answer <- skip_and_get
          putStrLn answer
</haskell>

:The most popular way of comprehending this program is by ''inlining'' (whether the compiler does or not is an orthogonal issue):

:<haskell>
main = do answer <- (do unused <- getLine
                        getLine)
          putStrLn answer
</haskell>

:and applying associativity so you can pretend it is:

:<haskell>
main = do unused <- getLine
          answer <- getLine
          putStrLn answer
</haskell>

The associativity law is amazingly pervasive: you have always assumed it, and you have never noticed it.

The associativity of a ''binary'' operator allows any number of operands to be combined with any arbitrary grouping while giving the same well-defined result, just as the sum of a list of numbers is fully determined by the binary <code>(+)</code> operator no matter which parenthesisation is used (just like folding a list of any type of monoidal values).
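A quick illustration of that point:

```haskell
-- Any grouping of an associative operator gives the same result:
sumLeft, sumRight :: Int
sumLeft  = ((1 + 2) + 3) + 4
sumRight = 1 + (2 + (3 + 4))

-- The same holds when folding monoidal values from either end:
concatLeft, concatRight :: String
concatLeft  = foldl (<>) mempty ["ab", "cd", "ef"]
concatRight = foldr (<>) mempty ["ab", "cd", "ef"]
```

Both sums evaluate to <hask>10</hask>, and both folds evaluate to <hask>"abcdef"</hask>.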

Whether compilers make use of them or not, you still want the laws for your own sake, just so you can avoid pulling your hair out over counterintuitive program behaviour, which depends (in brittle fashion!) on e.g. how many redundant <code>return</code>s you insert or how you nest your <code>do</code>-blocks.

[[Category:Standard_classes]]