Monomorphism restriction
The "monomorphism restriction" is a counter-intuitive rule in Haskell type inference. If you forget to provide a type signature, sometimes this rule will fill the free type variables with specific types using "type defaulting" rules. The resulting type signature is always less polymorphic than you'd expect, so often this results in the compiler throwing type errors at you in situations where you expected it to infer a perfectly sane type for a polymorphic expression.
A simple example is plus = (+). Without an explicit signature for plus, the compiler will not give it the general type plus :: (Num a) => a -> a -> a, but will instead apply the defaulting rules and fix it at plus :: Integer -> Integer -> Integer. When applied as plus 3.5 2.7, GHCi will then produce the somewhat misleading-looking error No instance for (Fractional Integer) arising from the literal ‘3.5’.
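For concreteness, here is the example written out as it would appear in a compiled module (a sketch; the commented-out line is the one that triggers the error quoted above):

plus = (+)              -- with the restriction on, inferred as
                        -- plus :: Integer -> Integer -> Integer

usesPlus :: Integer
usesPlus = plus 3 2     -- fine

-- bad = plus 3.5 2.7   -- error: No instance for (Fractional Integer)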
The restriction is turned on by default in compiled modules, and turned off by default at the GHCi prompt (since GHC 7.8.1). You can override these defaults by using the MonomorphismRestriction and NoMonomorphismRestriction language pragmas.
The exact definition of the restriction is fairly technical; it can be found in section 4.5.5 of the Haskell 2010 Report. To a first approximation it means that you often cannot overload a function unless you provide an explicit type signature. In summary:
-- This yields f1 :: (Show x) => x -> String
f1 x = show x
-- But this doesn't. Instead, f2 :: () -> String
f2 = \x -> show x
-- ...but we can fix that with an explicit type signature.
f3 :: (Show a) => a -> String
f3 = \x -> show x
-- Similarly this produces () -> String
f4 = show
-- ...but this is allowed
f5 :: (Show a) => a -> String
f5 = show
Arguably, these should all be equivalent, but thanks to the monomorphism restriction, they are not.
The difference between the first and second version is that the first version binds x via a "function binding" (see section 4.4.3 of the Haskell 2010 Report), and is therefore unrestricted, but the second version does not. The reason why one is allowed and the other is not is that it's considered clear that sharing f1 will not share any computation, and less clear that sharing f2 will have the same effect. If this seems arbitrary, that's because it is. It is difficult to design an objective rule which disallows subjective unexpected behaviour. Some people are going to fall foul of the rule even though they're doing quite reasonable things.
The monomorphism restriction is probably the most annoying and controversial feature of Haskell's type system. All seem to agree that it is evil - it is commonly called "The Dreaded Monomorphism Restriction" - but whether or not it is considered a necessary evil depends on whom you ask.
So why is the restriction imposed? The reasoning behind it is fairly subtle, and is fully explained in section 4.5.5 of the Haskell 2010 Report. Basically, it solves one practical problem (without the restriction, there would be some ambiguous types) and one semantic problem (without the restriction, there would be some repeated evaluation where a programmer might expect the evaluation to be shared). Those who are for the restriction argue that these cases should be dealt with correctly. Those who are against the restriction argue that these cases are so rare that it's not worth sacrificing the type-independence of eta reduction.
- An example, from A History of Haskell: consider the genericLength function from Data.List:
genericLength :: Num a => [b] -> a
And consider the function:
f xs = (len, len)
  where
    len = genericLength xs
Here len has type Num a => a and, without the monomorphism restriction, it could be computed twice. --ARG
(This introductory section slightly expanded and re-arranged by YitzGale.)
It is not clear to me how this whole thing about being computed once or twice works. Isn't type checking/inference something that happens at compile-time and shouldn't have any effect on what happens at run-time, as long as the typecheck passes? Dainichi
The trouble is that typeclasses essentially introduce additional function parameters -- specifically, the dictionary of code implementing the instances in question. In the case of typeclass-polymorphic pattern bindings, you end up turning something that looked like a pattern binding -- a constant that would only ever be evaluated once -- into what is really a function binding: something which will not be memoised. CaleGibbard 23:46, 1 February 2008 (UTC)
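To make the dictionary translation concrete, here is a hand-written analogue (an illustrative sketch only: NumDict, genericLengthD and fD are invented names, and GHC's real dictionary representation differs):

-- A cut-down "Num dictionary": just enough to produce a numeric result.
newtype NumDict a = NumDict { fromIntegerD :: Integer -> a }

-- A dictionary-passing version of genericLength.
genericLengthD :: NumDict a -> [b] -> a
genericLengthD d = fromIntegerD d . fromIntegral . length

-- Without the restriction, the "constant" len is really a function of the
-- dictionary, so each use of len traverses xs again instead of sharing a result.
fD :: NumDict a -> NumDict b -> [x] -> (a, b)
fD da db xs = (len da, len db)
  where
    len d = genericLengthD d xs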
If no signature is given for f, the compiler doesn't know that the two elements of the returned pair have the same type. Its inferred type will be:
f :: (Num a, Num b) => [x] -> (a, b)
This means that while compiling f the compiler is unable to memoise len: clearly, if a /= b, then different code is executed to compute the first and the second appearance of len in the pair. It's possible the compiler could do something more clever when f is actually applied with a == b, but I'm supposing this isn't a straightforward thing to implement in the compilers. Dozer 23:54, 4 February 2008 (GMT)
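To see this concretely, switch the restriction off and demand the two components at different numeric types (a sketch):

{-# LANGUAGE NoMonomorphismRestriction #-}

import Data.List (genericLength)

f :: (Num a, Num b) => [x] -> (a, b)
f xs = (len, len)
  where
    len = genericLength xs   -- generalised to Num c => c, hence not shared

-- The two lengths are demanded at different types, so they are computed separately:
example :: (Int, Double)
example = f "hello"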
Thank you, the nature of the problem is getting clearer now, but I'm still confused about how the restriction of top-level definitions is supposed to solve this problem. To me, the given example explains why f's type is inferred to be Num a => [x] -> (a, a) rather than (Num a, Num b) => [x] -> (a, b).
Maybe I need to read up on the Hindley–Milner type system, but this seems related to the existence of functions (such as genericLength and read) that are polymorphic in their return type. Would MR need to exist without these functions?
I'm a bit confused about functions like this, since I somehow feel they belong to more of a system with dependent types.
--Dainichi 06:53 15 Aug 2011 (UTC)
The following compiles for me on GHC 7.4.1:
module Main where
import Data.List
main = putStrLn $ show (f [])
f xs = (len, len)
  where
    len = genericLength xs
If I add f2 = \x -> show x, compilation fails, demonstrating that I still have the monomorphism restriction enabled.
This is not due to type inference from the putStrLn $ show (f []) doing anything; in fact, this compiles fine:
module Main where
import Data.List
main = putStrLn ""
f xs = (len, len)
  where
    len = genericLength xs
-- f2 = \x -> show x
whereas this fails to compile:
module Main where
import Data.List
main = putStrLn ""
--f xs = (len,len)
-- where
-- len = genericLength xs
f2 = \x -> show x
Why does the example from History of Haskell not fail? -- BayleShanks
Oversimplifying the debate somewhat: Those in favour tend to be those who have written Haskell Implementations and those against tend to be those who have written complex combinator libraries (and hence have hit their collective heads against the restriction all too often). It often boils down to the fact that programmers want to avoid legalese, and language implementors want to avoid cruft.
In almost all cases, you can get around the restriction by including explicit type declarations. Those who are for the restriction are usually quick to point out that including explicit type declarations is good programming practice anyway. In a few very rare cases, however, you may need to supply a type signature which is not valid Haskell. (Such type signatures require a type system extension such as Scoped type variables.) Unless you're writing some weird combinator libraries, or are in the habit of not writing type declarations, you're unlikely to come across it. Even so, most Haskell Implementations provide a way to turn the restriction off.
See also: Section 4.5.5, Haskell 98 report.
A question or suggestion: as I understand it, the problem arises because two different forms of binding are described by the same notation. There are two forms, namely the inspection of data structures ("unpacking", a pattern binding) and the definition of functions (a function binding). Unambiguous examples are:
let f x = y -- function definition
let F x = y -- data structure decomposition
In the first case the identifier f starts with a lower-case letter and is applied to an argument, so this is a function binding. The second binding starts with F, which must be a constructor, so it is a pattern binding. The monomorphism restriction applies only to the pattern binding. I think this was not defined in order to please compiler writers, but because it has proven useful in practice, or am I wrong? However, the different handling of the two binding forms leads to a problem, since they share a common case:
let x = y -- function or pattern binding?
So, what speaks against differentiating the assignments notationally, say
let f x = y -- function definition
let F x <= y -- data structure decomposition
and keep the monomorphism restriction as it is?
The problem isn't just pattern bindings, it's that pattern bindings which are typeclass polymorphic are actually function bindings in disguise, since the usual implementation of typeclasses adds parameters to such definitions, to allow the definition to take the typeclass dictionaries involved. Thus, such pattern bindings have different properties with respect to sharing (they're generally less shared than you want). In especially bad cases, without the MR, it is possible to have programs which run exponentially slower without type signatures than when signatures are added. Just distinguishing pattern bindings with a new notation doesn't solve the problem, since they'll have to be converted into function bindings in that case anyway. If you intend to keep the MR, then you don't need to change anything. The issue with the MR is just the fact that it's annoying to have eta-reduction fail in the absence of explicit type signatures, and the fact that it makes otherwise perfectly valid programs fail to compile on speculation that there might be loss of sharing (when there usually isn't, or at least the impact isn't large enough to worry about).
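To illustrate that worst case, here is an invented example (a sketch; compiled without optimisation): with the restriction disabled and no signatures, every binding below stays overloaded, i.e. is really a function of a Num dictionary, so the work doubles at each level instead of being shared.

{-# LANGUAGE NoMonomorphismRestriction #-}

import Data.List (genericLength)

xs :: [Int]
xs = [1 .. 100000]

l1 = genericLength xs   -- l1 :: Num a => a, recomputed at every use
l2 = l1 + l1            -- two traversals of xs
l3 = l2 + l2            -- four traversals
l4 = l3 + l3            -- eight traversals, and so on

main :: IO ()
main = print (l4 :: Integer)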
John Hughes recently advocated the removal of the MR on the Haskell Prime mailing list, and suggested replacing it with two forms of pattern binding: one for call-by-name (polymorphic, not shared), and one for call-by-need (monomorphic, guaranteed shared). This might be similar to what you're suggesting. If you look at it too closely, it seems like a good solution, but the overall impact on Haskell code seems too large to me, to resolve a distinction which it ought to be statically possible to determine.
I'm of the opinion that it would be better to find a way to restore sharing lost through the typeclass transformation in some way, or else implement typeclasses in an altogether different way which doesn't run into this problem. Additional runtime machinery seems like a likely candidate for this -- the interactions with garbage collection are somewhat subtle, but I think it should be doable. It's also possible to restore the sharing via whole-program analysis, but advocates of separate compilation will probably complain, unless we were to find a mechanism to fix the problem from the object code (and potentially temporaries) at link time.
I think it'd be useful to collect a set of examples of the Monomorphism Restriction biting people in an unexpected way. This may help to inform the debate over the MR by giving real-life examples. Add more examples here if (and only if) they constitute an unexpected MR-related incident in your life or someone else's. No invented examples! -- Simon Peyton Jones
- GHC Trac bug 1749
- In trying to build an editor with undoable actions:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class EditAction e a | e -> a where
  apply :: a -> e -> a

data ListAction a = Append a | Remove

instance EditAction (ListAction a) [a] where
  apply list (Append a) = a:list
  apply (x:xs) Remove = xs
-- Apply all the EditActions to the input
--edit :: EditAction e a => a -> [e] -> a -- monomorphism restriction - I have to put this in!
edit = foldl apply
Back before forM was in the Control.Monad library, I once spent about half an hour trying to figure out why my action in the ST monad was having its 's' parameter squished to (). I tore the code apart for quite a while before discovering that the MR was applying to my definition of forM:
forM = flip mapM
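For reference, a sketch of the usual ways out of that situation (forM' here is an invented name, to avoid clashing with today's Control.Monad.forM): either give the binding a signature, or eta-expand it so that it becomes a function binding, to which the restriction does not apply.

-- Option 1: keep the point-free definition, but supply a signature.
forM' :: Monad m => [a] -> (a -> m b) -> m [b]
forM' = flip mapM

-- Option 2: eta-expand; a function binding is unrestricted, so the
-- polymorphic type is inferred even without a signature.
forM'' xs f = mapM f xs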
I recently got tired of typing print "blah" in a GHCi shell session and tried let p = print. Thanks to the MR and Haskell's defaulting, the type of p silently became () -> IO (). No surprise that my new "short" version of print was only capable of printing unit values:
Prelude> p ()
()
Prelude> p "blah"

<interactive>:1:2:
    Couldn't match expected type `()' against inferred type `[Char]'
    In the first argument of `p', namely `"blah"'
    In the expression: p "blah"
    In the definition of `it': it = p "blah"
import Graphics.UI.Gtk
import Graphics.UI.Gtk.Glade
-- xmlGetWidget' :: WidgetClass widget => (GObject -> widget) -> String -> IO widget
xmlGetWidget' = xmlGetWidget undefined
main :: IO ()
main
  = do
      initGUI
      window <- xmlGetWidget' castToWindow "window1"
      button <- xmlGetWidget' castToButton "button1"
      widgetShowAll window
      mainGUI
If I comment out main, I cannot compile this code because of the monomorphism restriction. With main present, it infers the type:
xmlGetWidget' :: (GObject -> Window) -> String -> IO Window
and gives a type error on the button line. If I uncomment the type signature, it works.
I wasn't expecting the following to fail...
import Text.Printf (printf)

square :: (Num a) => a -> a
square x = x * x
dx = 0.0000001
deriv1 :: (Fractional a) => (a -> a) -> (a -> a)
deriv1 g = (\x -> ((g (x + 2) - (g x)) / dx ))
main = printf "res==%g %g\n" (square 5.12::Double) ((deriv1 square) 2::Float)
and for this to work:

import Text.Printf (printf)

square :: (Num a) => a -> a
square x = x * x
dx = 0.0000001
deriv1 :: (Fractional a) => (a -> a) -> (a -> a)
deriv1 g = (\x -> ((g (x + 2) - (g x)) / 0.0000001 ))
main = printf "res==%g %g\n" (square 5.12::Double) ((deriv1 square) 2::Float)
The fix was to add
dx :: Fractional a => a
--Harry
Along the same lines as Simon's question above, does anyone have any real examples of being bitten by the lack of MR? I know what it's for, but I can't really think of any realistic cases when it would be a problem. --pumpkin
I tried to define foldl in terms of foldr, and discovered
foldl' = foldr (\x y -> (\a h -> y (h a x) h)) const
doesn't typecheck while
foldl' xs v f = foldr (\x y -> (\a h -> y (h a x) h)) const xs v f
does. I spent a solid couple of hours trying to find where my derivation had gone wrong before I accidentally stumbled upon the solution. -- wzy