A Haskell enthusiast, a forever newbie, I don't do Haskell, partly because I never (officially) took a computing course, and partly because Haskell is cryptic. Yes, cryptic, even though it has the most lucid lexical and syntactic structure.
I write Java code occasionally (again, for no particular reason; this "new generation"... they are so... lost!) and therefore crave the functional approach.
My love for Haskell is difficult to explain; I know because I have tried to explain it to my friends every now and then. I think the reason is that Haskell was the first programming language I came across that took the propositions-as-types (or programs-as-proofs) principle seriously.
So here is my own understanding and criticism of the language.
Part of life (...sigh...). Haskell is particularly rough here, as far as I can tell. Whenever an "I don't know what to do here!" error occurs, the program terminates abruptly. This is bad, both computationally and logically. Under the isomorphism, "bottom" (or its counterpart in Haskell, undefined, equivalent to non-termination) is reminiscent of an error. Behold! error and undefined have the respective types String -> a and a, which is not at all funny. If you can prove absurdity (if you have a value of a type that is not inhabited) then you can prove everything, they say.
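To make the absurdity concrete, here is a sketch (the function names are mine, just for illustration): both error and undefined type-check at any type whatsoever, so the type system happily accepts a "proof" of anything, and laziness lets the bogus proof sit around unexamined.

```haskell
-- 'error' and 'undefined' inhabit every type, so under
-- propositions-as-types they "prove" every proposition.
anythingGoes :: a -> b
anythingGoes _ = error "proved the unprovable"

-- Laziness means the bogus value causes no trouble
-- until someone actually forces it:
unevaluated :: Int
unevaluated = length [anythingGoes () :: Int]
```

Here unevaluated is 1, because length never looks at the element; evaluating the element itself would crash the program.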
The designers of Haskell seem to have concluded that making the language logically sound is too hard and that they should give it up. They were right.
Now, when you cannot return a value, what type should the returned value have? Obviously it should have the type ⊥, which does not have a value. So in Haskell it makes sense to type it as
a, since you had better leave off any attempt to type it more precisely: there is no type called ⊥, and there is no type without a value (but... but...)
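For what it's worth, GHC does let you declare a type with no constructors and derive anything from a (necessarily non-existent) value of it; this is only a sketch, and the names Empty and exFalso are mine (the standard library later grew Data.Void for exactly this purpose).

```haskell
{-# LANGUAGE EmptyCase #-}

-- An empty type: no constructors, hence no (non-bottom) values.
data Empty

-- Ex falso quodlibet: a case expression with no branches
-- covers all zero constructors of Empty.
exFalso :: Empty -> a
exFalso e = case e of {}
```

Of course, laziness being what it is, undefined still inhabits Empty, so this is only morally an uninhabited type.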
It just struck me that constructive type theory is not necessarily extensional. Instead of having just one ⊥ you can have several phantom types, each one being a distinguishable error. I was so inspired by Ben Rudiak-Gould et al.'s "Haskell is not not ML" paper (and Philip Wadler's dual calculus), but nobody seems to take them too seriously. To be totally honest, I have no clue what they talk about.
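A rough sketch of what "several distinguishable errors" might look like with phantom types; everything here (Err, the tag types) is invented for illustration, not taken from the paper.

```haskell
-- A phantom parameter 'e' tags an error with *which* impossibility
-- it represents; the tag types themselves are empty.
newtype Err e = Err String

data DivByZero
data NotFound

divErr :: Err DivByZero
divErr = Err "division by zero"

lookupErr :: Err NotFound
lookupErr = Err "key not found"

-- The tag is erased at runtime but distinguishes errors in the types.
describe :: Err e -> String
describe (Err s) = s
```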
Even then, my principal motivation for going through the pain of writing this down is that in Haskell we have a computational interpretation of all the logical operators (although "interpreting" is a bit vague because of currying), bar negation. Why be partial? :-)
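The computational readings alluded to here, spelled out as type synonyms (the names are mine; the continuation reading of negation is one common choice, not the only one):

```haskell
-- Conjunction, disjunction and implication have direct readings:
type And a b     = (a, b)       -- a proof of both
type Or  a b     = Either a b   -- a proof of one, tagged
type Implies a b = a -> b       -- turns proofs of a into proofs of b

-- Negation is the awkward one; fixing an answer type r,
-- continuations suggest reading "not a" as a -> r.
type Not r a = a -> r

-- Example: commutativity of conjunction, as a program.
andComm :: And a b -> And b a
andComm (x, y) = (y, x)
```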
So what I am asking for is a systematic approach to error handling built into the language (not just passing an exception handler to every function). This is not a Haskell' wish, as Haskell' is not ready for this.
- The advantage of this approach is that you can write recursive functions. For instance:
power2 0 = 1
power2 n = 2 * power2 (n - 1)
- You see how power2 refers to itself? It's the exact same mechanism that allows one to write
fix f = let x = f x in x
- You'll notice fix has the type
(a -> a) -> a, which is usually known as the fallacy of circular reasoning. But if we forbid it, we can't have recursive functions. —Ashley Y 08:42, 9 June 2006 (UTC)
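To connect the two points above: power2 can be written with its self-reference factored out through fix. This is just the standard trick, shown for concreteness.

```haskell
fix :: (a -> a) -> a
fix f = let x = f x in x

-- power2 with the recursion abstracted into fix: 'rec' stands
-- for the function being defined.
power2 :: Integer -> Integer
power2 = fix (\rec n -> if n == 0 then 1 else 2 * rec (n - 1))
```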
- I am aware of that; I was just trying to promote the continuations-as-negations point of view. I like it very much. --Pirated Dreams 14:31, 18 November 2006 (UTC)