Syntactic sugar/Cons
This page collects arguments against syntactic sugar. Requests for more syntactic sugar come up all the time, and the reasons in its favour are obvious, but there are also serious objections. The objections listed here may help to decide when to do without syntactic sugar and which special notations had better be dropped in future versions of Haskell.
General
Haskell's basic syntax consists of function definition and function application. In some cases, however, plain function application is hard to read and drags in details that are not essential to the situation being described. For this reason special syntaxes such as do notation, guards, list notation, list comprehensions and infix notation were introduced for some frequent programming tasks to allow a more pleasant look.
Many people seem to like Haskell only because of its syntactic sugar. But adding syntactic sugar to a language is not a big achievement. Python, Perl and C++ have lots of syntactic sugar, but I wouldn't prefer them to Haskell. Why? Because they lack the transparent data dependencies of functional programming languages, they lack static but easy-to-use polymorphism, they lack lazy evaluation, they lack reliable modularisation. It is not amazing that Haskell provides a lot of syntactic sugar. It is amazing that every piece of syntactic sugar has a pure functional explanation. That proves the power of the functional concept.
Syntactic heroin
Compiler writers can only lose if they give way to the insistence of users requesting more syntactic sugar. Every user has his own preferred applications, everyone has his own taste, and everyone wants his special application and his taste to be respected in future language revisions. Who is authorised to decide which application is general and which is too special? Is it more important to have many syntactic alternatives, so that everyone can write in his individual style, or is it more important that code by several authors has a homogeneous appearance, so that it can be read by everyone?
You can bet that when new syntactic sugar arrives, many users will rush at it and forget about the analytic expression that the special notation replaces. Arguing against that is like trying to take a child's most beloved toy away.
Every special notation leads to the question of whether it can be extended and generalised. Guards have been extended to pattern guards and list comprehensions have been generalised to parallel list comprehensions in current versions of Haskell compilers. Infix notation for alphanumeric functions is already possible in Haskell98, but it "lacks" the possibility to add arguments, as in x `rel c` y. The latter is not implemented, but has already been requested; a solution using only Haskell98 infix operators has already been invented. Furthermore, the more general MixFix notation has already been proposed, not to forget the silent lifting of map data structures to functions, postfix operators, sections of tuples like (?,x,?), and symbolic prefix operators. What comes next?
Rodney Bates called the phenomenon not just "syntactic sugar" but "syntactic heroin". People start with a small dose of syntactic sugar and quickly want more, because the initial dose is no longer enough for ecstasy. If one drug no longer helps, stronger ones are requested. It is so tempting because the users requesting syntactic sugar are not the ones responsible for implementing it and for avoiding interference with other language features.
Parse errors
Compiler users have contradictory wishes. On the one hand they want more syntactic sugar, on the other hand they want better parser error messages. They don't realise that the one works against the other.
E.g. when a parser reads an opening bracket, it doesn't know whether it is the start of a list comprehension like [f x | x <- xs] or the start of a list of comma separated expressions like [f x, f y, g z]. Thus if you accidentally mix bars and commas, the parser cannot know whether you wanted to write a list comprehension or a comma separated list, so it can't tell you precisely what you did wrong.
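A minimal sketch of the two forms that begin identically (the definitions of f and xs here are placeholders, not from the original):

f :: Int -> Int
f = (* 2)

xs :: [Int]
xs = [1, 2, 3]

comprehension :: [Int]
comprehension = [f x | x <- xs]   -- generator after the bar

plainList :: [Int]
plainList = [f 1, f 2, f 3]       -- comma separated elements

-- mixing the two, e.g. [f x, x <- xs], is a parse error,
-- and the parser cannot tell which of the two forms was intended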
Type error messages of GHC have already reached a complexity which many Haskell newbies can't process. It is the price to be paid for a type system which tries to get by with as few type annotations as possible.
Let's consider another example from the view of a compiler. Internally it transforms the source code (+1) to flip (+) 1 and then compiles it like regular functional code. But what happens if it encounters an error? If it reports the error as a type error in flip (+) 1 (as Hugs of November 2002 does), you won't understand it, because you typed (+1), not flip (+) 1. A compiler which handles this properly must support syntactic sugar at the same level as regular syntax, which is obviously more complicated.
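A tiny demonstration of the equivalence (the internal translation is compiler specific; this only shows that both spellings denote the same function):

inc, inc' :: Int -> Int
inc  = (+1)          -- the sugared section
inc' = flip (+) 1    -- its expansion
-- inc 5 == inc' 5 == 6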
Sugar adds complexity
Syntactic sugar usually consists of special grammatical constructions, and these can interfere badly with other constructions.
But syntactic sugar does not only touch the compilers. Many other tools are affected as well, such as those for syntax highlighting (emacs, nedit), source code markup (lhs2TeX), source code formatting (Language.Haskell.Pretty), source code transformation (e.g. symbolic differentiation), program proofs, debugging, dependency analysis, and documentation extraction (haddock). Each tool becomes more complicated with more syntactic sugar.
Flexibility
The use of functions and of functions of functions (i.e. higher order functions) allows for very flexible combination of program units. This flexibility is available with plain function notation, but some syntactic sugar loses it.
E.g. map can be used with partial application, which is not possible with list comprehension syntax. Thus map toLower can be generalised to lists of strings simply by lifting map toLower with map, again, leading to map (map toLower). In contrast to that, \s -> [toLower c | c <- s] has to be turned into \ss -> [[toLower c | c <- s] | s <- ss] or \ss -> map (\s -> [toLower c | c <- s]) ss.
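Spelled out as compilable code (the function names here are mine):

import Data.Char (toLower)

lowerString :: String -> String
lowerString = map toLower

-- lifting to lists of strings is just another application of map:
lowerStrings :: [String] -> [String]
lowerStrings = map (map toLower)

-- the comprehension version must be rewritten by hand instead:
lowerStrings' :: [String] -> [String]
lowerStrings' ss = [[toLower c | c <- s] | s <- ss]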
A function can gain more arguments as development goes on. If you are used to writing x `rel` y, then you have to switch to rel c x y after adding a new parameter to rel. The extended infix notation x `rel c` y is (currently?) not allowed, probably because then nested infix expressions like x `a `superRel` b` y would also have to be handled. The prefix notation rel x y tends to need less rewriting.
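A small illustration with a hypothetical relation rel:

-- a binary relation, comfortable to use infix: x `rel` y
rel :: Int -> Int -> Bool
rel x y = x < y

-- after adding a tolerance parameter c the infix habit breaks,
-- since x `relC c` y is not valid syntax; call sites must
-- change to prefix style: relC c x y
relC :: Int -> Int -> Int -> Bool
relC c x y = x + c < y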
Guards need to be rewritten to ifs or to case expressions when the result of a function needs post-processing.
Say we have the functions
isLeapYear :: Int -> Bool
isLeapYear year = mod year 4 == 0 && (mod year 100 /= 0 || mod year 400 == 0)
leapYearText :: Int -> String
leapYearText year
   | isLeapYear year = "A leap year"
   | otherwise       = "Not a leap year"
where leapYearText shall be extended to other languages using the fictitious function translate.
If you stick to guards, you will possibly rewrite it to the clumsy
leapYearText :: Language -> Int -> String
leapYearText lang year =
   translate lang
      (case () of
         () | isLeapYear year -> "A leap year"
            | otherwise       -> "Not a leap year")
But what about
leapYearText :: Language -> Int -> String
leapYearText lang year =
   translate lang
      (if isLeapYear year
         then "A leap year"
         else "Not a leap year")
So if you find that simpler, why not use if in the original definition as well?
leapYearText :: Int -> String
leapYearText year =
   if isLeapYear year
      then "A leap year"
      else "Not a leap year"
Examples
The following sections consider several notations and their specific problems.
Infix notation
Precedences
Infix notation is problematic for both human readers and source code formatters. The reader doesn't know the precedences of custom infix operators and has to look them up in the modules the operators are imported from. This is made even harder by the fact that infix operators are usually imported unqualified, so you don't even know which module an operator comes from. The same problem arises for source code formatters. You certainly prefer the formatting
a +
b * c
to
a + b *
c
because the first formatting reflects the higher precedence of *.
A source code formatter can format this properly only if it has access to the imported modules. This is certainly uncommon for a plain source code formatter.
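For example, the fixity of a custom operator lives in its defining module (the operators here are hypothetical), so a formatter that sees only the use site has no way to know it:

module Ops where

infixl 6 .+.
infixl 7 .*.

(.+.), (.*.) :: Int -> Int -> Int
(.+.) = (+)
(.*.) = (*)

-- a use site in another module shows no trace of these fixities:
-- a .+. b .*. c   -- parses as a .+. (b .*. c)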
The problem also occurs if you use an infix operator that you forgot to import. E.g. GHC-6.4.1 may then say
Main.hs:52:6:
precedence parsing error
cannot mix `($)' [infixl 9] and `(.)' [infixr 9] in the same infix expression
Main.hs:52:13: Not in scope: `$'
Actually, only the second error is relevant.
Many people have noticed that integer-numbered precedences are not enough for describing the relations between all the infix operators.
http://www.haskell.org/pipermail/haskell-cafe/2005-February/009260.html
Fractional and negative fixities were already proposed:
http://www.haskell.org/pipermail/haskell-cafe/2006-November/019293.html
Indeed, rules like "multiplication and division precede addition and subtraction" would be more natural.
However, the Show class would no longer be so simple.
"Infixisation"
You can't pass an argument to a function written in infix notation: x `rel c` y or x `lift rel` y is not allowed.
Some library functions are designed for a "reversed" order of arguments, meaning that on partial application you will most often leave out the first argument rather than the second one. E.g. the functions div and mod have their parameters in the order of common mathematical notation, but you will use flip div x more often than div x, and flip mod x more often than mod x. This is because the library designers expect that the user will prefer the infix style, writing x `div` y and thus `div` y. For functions which are not bound to a traditional notation, this argument order should be avoided!
A bad example in this respect is the module Data.Bits in the version that comes with GHC-6.2. Many of the functions of this module alter some bits in a machine word, so they can be considered update functions, and their type signatures should end in a -> a. Then you could easily combine several operations by
shiftL 2 . clearBit 7 . setBit 4 . setBit 1
instead of
flip shiftL 2 . flip clearBit 7 . flip setBit 4 . flip setBit 1
or
(`shiftL` 2) . (`clearBit` 7) . (`setBit` 4) . (`setBit` 1)
.
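A sketch of such argument-flipped variants, wrapping the real Data.Bits functions under hypothetical primed names:

import Data.Bits (Bits)
import qualified Data.Bits as B

-- update-style variants whose signatures end in a -> a:
setBit', clearBit', shiftL' :: Bits a => Int -> a -> a
setBit'   = flip B.setBit
clearBit' = flip B.clearBit
shiftL'   = flip B.shiftL

-- updates now compose without flip or sections:
tweak :: Int -> Int
tweak = shiftL' 2 . clearBit' 7 . setBit' 4 . setBit' 1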
Lists
Special notation for the list type
The type of a list over type a is named [a] rather than List a. This is confusing, since [a] looks like the notation for a single element list. For beginners it becomes even more complicated to distinguish between the type and the value of a list.
Some people try to do some kind of list comprehension by enclosing expressions in brackets, just as it is done for the list type. See Singleton list confusion.
I don't see the advantage of [a] and would like to see List a in a future Haskell.
Comma separated list elements
We are used to the list notation [0,1,2,3]. I think many Haskell users are not aware that it is a special notation. They don't know that it is a replacement for (0:1:2:3:[]), and because of that they also can't derive that a function for constructing a single element list can be written as (:[]). The comma separated list notation [0,1,2,3] is very common, but is it sensible? There are two reasons against it:
- The theoretical reason: The intuitive list notation using comma separation requires one comma less than the number of elements; an empty list would need -1 commas, which obviously can't be written.
- The practical reason: The colon is like a terminator. Each list element is followed by a colon, so it is easier to reorder the elements of a list in an editor. If you have written (1:2:3:[]), you can simply cut some elements together with their subsequent ':'s and insert them wherever you want.
Although the list type gets so much special support from the Haskell 98 language, none of this syntactic support is really needed. A definition in the style of

data List a = End | (:) a (List a)

would be regular Haskell98 code if the colon were not reserved for the built-in list constructor. The colon should have a precedence below that of ($). Then a list type can be List Int and a list value can be 1 : 2 : 3 : End. Again, this proves the power of the basic features of Haskell98.
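Since the colon itself is reserved syntax, a sketch of the same idea with a fresh constructor name compiles today:

infixr 5 :-

data List a = End | a :- List a
   deriving Show

example :: List Int
example = 1 :- 2 :- 3 :- End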
Parallel list comprehension
Parallel list comprehensions can be replaced by using zip in many (all?) cases.
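For instance, the parallel comprehension [x + y | x <- xs | y <- ys] (a GHC extension) can be written in plain Haskell98 as:

sums :: [Int] -> [Int] -> [Int]
sums xs ys = [x + y | (x, y) <- zip xs ys]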
(n+k) patterns
There are some notational ambiguities concerning (n+k) patterns. See Why I hate n+k.
If-Then-Else
The construction if-then-else can be considered syntactic sugar for a function if of type Bool -> a -> a -> a, as presented on Case. Defining it as a plain function would have the advantages that it could be used with foldr and zipWith3 and that then and else would become regular identifiers.
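A sketch of such a function (named if' here, since if is a keyword):

if' :: Bool -> a -> a -> a
if' True  x _ = x
if' False _ y = y

-- usable as a higher-order argument, e.g. with zipWith3:
pick :: [Bool] -> [Int] -> [Int] -> [Int]
pick = zipWith3 if'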
Some people prefer the explicit then and else for readability reasons. A generalisation of this syntactic exception has already been proposed as "MixFix" notation. But it is worth turning the question round: what is so special about if that it needs a special syntax?
Conclusion
- Guards can be dropped completely. if should be turned into a regular function. case expr of could be turned into a function, i.e. case 0 -> 'a'; 1 -> 'b'; could be an expression of type Int -> Char. It should be complemented by a select function like the one on Case (see the sketch after this list).
- Infix notation is good for nested application, because (0:1:2:[]) reflects the represented structure better than ((:) 0 ((:) 1 ((:) 2 []))).
- Infix usage of functions with alphanumeric names is often just a matter of habit, just for the sake of fanciness, such as toLower `map` s, which doesn't add anything to readability. If this feature is kept, it should remain restricted to function names; it should not be extended to partially applied functions.
- List comprehensions should be used rarely; parallel list comprehensions should be dropped completely.
- do notation is good for representing imperative and stateful program structures.
- (n+k) patterns simulate a number representation which is not used internally and thus must be emulated with much effort. They should be dropped. Numeric patterns such as 0 involve conversions like fromInteger and real comparisons (Eq class!) for matching. Dropping them, too, should be considered.
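A sketch of the select function mentioned above, in the spirit of the Case page (the exact definition there may differ):

import Data.List (find)

-- the first pair with a True guard wins, otherwise the default:
select :: a -> [(Bool, a)] -> a
select def = maybe def snd . find fst

-- example usage:
classify :: Int -> String
classify n = select "zero" [(n < 0, "negative"), (n > 0, "positive")]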