Monomorphism restriction

The monomorphism restriction is probably the most annoying and controversial feature of Haskell's type system. It can be turned off with the NoMonomorphismRestriction language pragma. All seem to agree that it is evil, but whether or not it is considered a necessary evil depends on who you ask.
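In GHC, the pragma goes at the top of the module; a minimal sketch:

<haskell>
{-# LANGUAGE NoMonomorphismRestriction #-}

-- With the pragma, this binding keeps the fully polymorphic type
-- f :: Show a => a -> String rather than being monomorphised or rejected.
f = show
</haskell>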

The definition of the restriction is fairly technical, but to a first approximation it means that you often cannot overload a function unless you provide an explicit type signature. In summary:

<haskell>
-- This is allowed
f1 x = show x

-- This is not allowed
f2 = \x -> show x

-- ...but this is allowed
f3 :: (Show a) => a -> String
f3 = \x -> show x

-- This is not allowed
f4 = show

-- ...but this is allowed
f5 :: (Show a) => a -> String
f5 = show
</haskell>

Arguably, these should all be equivalent, but thanks to the monomorphism restriction, they are not.

The difference between the first and second version is that the first version binds x via a function binding and is therefore unrestricted, whereas the second version is a "simple pattern binding" (see section 4.4.3.2 of the Haskell 98 report) with no type signature, and so falls under the restriction. The reason why one is allowed and the other is not is that it's considered clear that sharing f1 will not share any computation, and less clear that sharing f2 will have the same effect. If this seems arbitrary, that's because it is. It is difficult to design an objective rule which disallows subjective unexpected behaviour. Some people are going to fall foul of the rule even though they're doing quite reasonable things.
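In practice, the difference shows up like this (a sketch; the exact error messages vary between GHC versions):

<haskell>
f2 = \x -> show x

-- With no other uses of f2 in the module, the binding above is rejected:
-- the restriction leaves an unresolved Show constraint, and GHC reports an
-- ambiguous type variable, mentioning the monomorphism restriction.

-- A use elsewhere in the module instead pins f2 to one monomorphic type:
ok :: String
ok = f2 (5 :: Int)   -- f2 is now Int -> String

-- bad = f2 True     -- ...so a second use at Bool would be rejected
</haskell>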

So why is the restriction imposed? The reasoning behind it is fairly subtle, and is fully explained in the Haskell 98 report. Basically, it solves one practical problem (without the restriction, there would be some ambiguous types) and one semantic problem (without the restriction, there would be some repeated evaluation where a programmer might expect the evaluation to be shared). Those who are for the restriction argue that these cases should be dealt with correctly. Those who are against the restriction argue that these cases are so rare that it's not worth sacrificing the type-independence of eta reduction.
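The ambiguity problem can be seen in a binding along the lines of the reads example in the Haskell 98 report (a sketch; the input string is made up):

<haskell>
main :: IO ()
main = print (n + 1 :: Int)
  where
    t = "42 rest"
    -- reads :: Read a => String -> [(a, String)]
    -- Under the restriction, n and s get a single monomorphic type, fixed
    -- here to Int by the use of n above.  Without the restriction, s would
    -- get the type Read a => String, which is inherently ambiguous: no use
    -- of s could ever determine which instance's parser produced it.
    [(n, s)] = reads t
</haskell>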

An example, from A History of Haskell: consider the genericLength function from Data.List:

<haskell>
genericLength :: Num a => [b] -> a
</haskell>

And consider the function:

<haskell>
f xs = (len,len)
     where
       len = genericLength xs
</haskell>

len has type Num a => a and, without the monomorphism restriction, it could be computed twice. --ARG

It is not clear to me how this whole thing about being computed once or twice works. Isn't type checking/inference something that happens at compile-time and shouldn't have any effect on what happens at run-time, as long as the typecheck passes? Dainichi

The trouble is that typeclasses essentially introduce additional function parameters -- specifically, the dictionary of code implementing the instances in question. In the case of typeclass polymorphic pattern bindings, you end up turning something that looked like a pattern binding -- a constant that would only ever be evaluated once, into what is really a function binding, something which will not be memoised. CaleGibbard 23:46, 1 February 2008 (UTC)

If no signature is given for f, the compiler doesn't know that the two elements of the returned pair have the same type. Its inferred type will be:

<haskell>
f :: (Num a, Num b) => [x] -> (a, b)
</haskell>

This means that while compiling f the compiler is unable to memoise len - clearly if a /= b then different code is executed to compute the first and second appearance of len in the pair. It's possible the compiler could do something more clever when f is actually applied if a == b, but I'm supposing this isn't a straight-forward thing to implement in the compilers. Dozer 23:54, 4 February 2008 (GMT)

Thank you, the nature of the problem is getting clearer now, but I'm still confused about how the restriction of top level definitions is supposed to solve this problem. To me, the given example explains why f's type is inferred to be <haskell>Num a => [x] -> (a, a)</haskell>, not <haskell>(Num a, Num b) => [x] -> (a, b)</haskell>, but not why this means that you cannot define top-level overloading outside pattern bindings. Is there an example which makes this clearer?

Maybe I need to read up on the Hindley–Milner type system, but this seems related to the existence of functions (such as genericLength and read) that are polymorphic in their return type. Would MR need to exist without these functions?

I'm a bit confused about functions like this, since I somehow feel they belong to more of a system with dependent types.

--Dainichi 06:53 15 Aug 2011 (UTC)
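(To make the dictionary translation described by CaleGibbard and Dozer above a bit more concrete, here is a rough sketch; the names are invented and real GHC Core looks quite different.)

<haskell>
-- A cut-down Num dictionary: just the operations needed here.
data NumDict a = NumDict
  { plusD        :: a -> a -> a
  , fromIntegerD :: Integer -> a
  }

-- genericLength after the translation: the class constraint has become an
-- ordinary function argument.
genericLengthD :: NumDict a -> [b] -> a
genericLengthD d []     = fromIntegerD d 0
genericLengthD d (_:xs) = plusD d (fromIntegerD d 1) (genericLengthD d xs)

-- Without the restriction, len keeps its Num constraint, i.e. it takes a
-- dictionary argument; what looked like a shared constant is really a
-- function, so each component of the pair recomputes the length.
f :: NumDict a -> NumDict b -> [x] -> (a, b)
f da db xs = (len da, len db)
  where
    len d = genericLengthD d xs
</haskell>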



Oversimplifying the debate somewhat: Those in favour tend to be those who have written Haskell Implementations and those against tend to be those who have written complex combinator libraries (and hence have hit their collective heads against the restriction all too often). It often boils down to the fact that programmers want to avoid legalese, and language implementors want to avoid cruft.

In almost all cases, you can get around the restriction by including explicit type declarations. Those who are for the restriction are usually quick to point out that including explicit type declarations is good programming practice anyway. In a few very rare cases, however, you may need to supply a type signature which is not valid Haskell 98. (Such type signatures require a type system extension such as Scoped type variables.) Unless you're writing some weird combinator libraries, or are in the habit of not writing type declarations, you're unlikely to come across it. Even so, most Haskell Implementations provide a way to turn the restriction off.

See also: Section 4.5.5, Haskell 98 report.

-- Andrew Bromage

A question or suggestion: as I understand it, the problem arises from the fact that two different forms of binding are described by the same notation. There are two forms of binding, namely the inspection of data structures ("unpacking", "pattern binding") and the definition of functions ("function binding"). Unambiguous examples are:

<haskell>
let f x = y  -- function definition
let F x = y  -- data structure decomposition
</haskell>

In the first case the identifier f starts with a lower-case letter, so this is a function binding. The second binding starts with F, which must be a constructor, so it is a pattern binding. The monomorphism restriction applies only to the pattern binding. I think this was not defined in order to please compiler writers, but has proven useful in practice, or am I wrong? But the different handling of these binding forms leads to a problem, since both share a common case:

<haskell>
let x = y  -- function or pattern binding?
</haskell>

So, what would speak against distinguishing the two forms of binding notationally, say

<haskell>
let f x  = y  -- function definition
let F x <= y  -- data structure decomposition
</haskell>

and keep the monomorphism restriction as it is?

-- Henning Thielemann

The problem isn't just pattern bindings, it's that pattern bindings which are typeclass polymorphic are actually function bindings in disguise, since the usual implementation of typeclasses adds parameters to such definitions, to allow the definition to take the typeclass dictionaries involved. Thus, such pattern bindings have different properties with respect to sharing (they're generally less shared than you want). In especially bad cases, without the MR, it is possible to have programs which run exponentially slower without type signatures than when signatures are added. Just distinguishing pattern bindings with a new notation doesn't solve the problem, since they'll have to be converted into function bindings in that case anyway. If you intend to keep the MR, then you don't need to change anything. The issue with the MR is just the fact that it's annoying to have eta-reduction fail in the absence of explicit type signatures, and the fact that it makes otherwise perfectly valid programs fail to compile on speculation that there might be loss of sharing (when there usually isn't, or at least the impact isn't large enough to worry about).

John Hughes recently advocated the removal of the MR on the Haskell Prime mailing list, and suggested replacing it with two forms of pattern binding: one for call-by-name (polymorphic, not shared), and one for call-by-need (monomorphic, guaranteed shared). This might be similar to what you're suggesting. If you don't look at it too closely, it seems like a good solution, but the overall impact on Haskell code seems too large to me, to resolve a distinction which it ought to be statically possible to determine.

I'm of the opinion that it would be better to find a way to restore sharing lost through the typeclass transformation in some way, or else implement typeclasses in an altogether different way which doesn't run into this problem. Additional runtime machinery seems like a likely candidate for this -- the interactions with garbage collection are somewhat subtle, but I think it should be doable. It's also possible to restore the sharing via whole-program analysis, but advocates of separate compilation will probably complain, unless we were to find a mechanism to fix the problem from the object code (and potentially temporaries) at link time.

- Cale Gibbard
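(A contrived sketch of the compounding loss of sharing mentioned above. Without the restriction each binding below keeps its (Num a, Enum a) constraint, so it is really a function of a dictionary rather than a shared constant, and every level re-evaluates the one below it twice; compile without optimisation to see the effect, since GHC may recover some sharing at -O.)

<haskell>
{-# LANGUAGE NoMonomorphismRestriction #-}

-- Each binding has type (Num a, Enum a) => [a], so none is a shared constant.
level0 = [1 .. 100000]
level1 = level0 ++ level0   -- re-evaluates level0 twice
level2 = level1 ++ level1   -- ... four times
level3 = level2 ++ level2   -- ... eight times

main :: IO ()
main = print (length level3)
</haskell>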

I think it'd be useful to collect a set of examples of the Monomorphism Restriction biting people in an unexpected way. This may help to inform the debate over the MR by giving real-life examples. Add more examples here if (and only if) they constitute an unexpected MR-related incident in your life or someone else's. No invented examples! -- Simon Peyton Jones

  • GHC Trac bug 1749
  • In trying to build an editor with undoable actions:
<haskell>
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class EditAction e a | e -> a where
  apply :: a -> e -> a

data ListAction a = Append a | Remove

instance EditAction (ListAction a) [a] where
  apply list (Append a) = a:list
  apply (x:xs) Remove = xs

-- Apply all the EditActions to the input
--edit :: EditAction e a => a -> [e] -> a -- monomorphism restriction - I have to put this in!
edit = foldl apply
</haskell>

Back before forM was in the Control.Monad library, I once spent about half an hour trying to figure out why my action in the ST monad was having its 's' parameter squished to (). I tore the code apart for quite a while before discovering that the MR was applying to my definition of forM:

<haskell>
forM = flip mapM
</haskell>
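(Two sketches of fixes that would have avoided this, assuming the list-specialised mapM of the time: give the intended signature, or use a function binding, which the restriction does not touch.)

<haskell>
-- Either say what you mean...
forM :: Monad m => [a] -> (a -> m b) -> m [b]
forM = flip mapM

-- ...or eta-expand, turning the pattern binding into a function binding:
-- forM xs f = mapM f xs
</haskell>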

I recently got tired of typing print "blah" in a ghci session and tried let p = print. Thanks to MR and Haskell defaulting, the type of p silently became () -> IO (). No surprise that my new "short" version of print was only capable of printing void values:

<haskell>
Prelude> p ()
()
Prelude> p "blah"

<interactive>:1:2:
    Couldn't match expected type `()' against inferred type `[Char]'
    In the first argument of `p', namely `"blah"'
    In the expression: p "blah"
    In the definition of `it': it = p "blah"
</haskell>


<haskell>
import Graphics.UI.Gtk
import Graphics.UI.Gtk.Glade

-- xmlGetWidget' :: WidgetClass widget => (GObject -> widget) -> String -> IO widget
xmlGetWidget' = xmlGetWidget undefined

main :: IO ()
main = do
  initGUI
  window <- xmlGetWidget' castToWindow "window1"
  button <- xmlGetWidget' castToButton "button1"
  widgetShowAll window
  mainGUI
</haskell>

If I comment out main, I cannot compile this code because of the monomorphism restriction. With main present, the compiler infers the type:

<haskell>
xmlGetWidget' :: (GObject -> Window) -> String -> IO Window
</haskell>

and gives me a type error on the button line. If I uncomment the type signature, it works.


I wasn't expecting the following to fail...

<haskell>
import Text.Printf (printf)

square :: (Num a) => a -> a
square x = x * x

dx = 0.0000001

deriv1 :: (Fractional a) => (a -> a) -> (a -> a)
deriv1 g = (\x -> ((g (x + 2) - (g x)) / dx ))

main = printf "res==%g %g\n" (square 5.12::Double) ((deriv1 square) 2::Float)
</haskell>

and for this to work.

<haskell>
import Text.Printf (printf)

square :: (Num a) => a -> a
square x = x * x

dx = 0.0000001

deriv1 :: (Fractional a) => (a -> a) -> (a -> a)
deriv1 g = (\x -> ((g (x + 2) - (g x)) / 0.0000001 ))

main = printf "res==%g %g\n" (square 5.12::Double) ((deriv1 square) 2::Float)
</haskell>

The fix was to add:

<haskell>
dx :: Fractional a => a
</haskell>

--Harry


Along the same lines as Simon's question above, does anyone have any real examples of being bitten by the lack of MR? I know what it's for, but I can't really think of any realistic cases when it would be a problem. --pumpkin