https://wiki.haskell.org/api.php?action=feedcontributions&user=MathematicalOrchid&feedformat=atomHaskellWiki - User contributions [en]2016-09-27T15:55:10ZUser contributionsMediaWiki 1.19.14+dfsg-1https://wiki.haskell.org/Talk:Structure_of_a_Haskell_projectTalk:Structure of a Haskell project2008-08-06T20:02:30Z<p>MathematicalOrchid: </p>
<hr />
<div>Do we ''really'' want <code>runtests.sh</code>? Shell scripts are only likely to work on Unix-like operating systems. Something more portable would be better.<br />
<br />
[[User:MathematicalOrchid|MathematicalOrchid]] 20:02, 6 August 2008 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Cabal/FAQCabal/FAQ2008-05-25T20:29:21Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:FAQ]]<br />
== What is this hidden package? ==<br />
<br />
You build a package and get a message like:<br />
<br />
<pre><br />
Could not find module `Data.Map': it is a member of package<br />
containers-0.1.0.0, which is hidden.<br />
</pre><br />
<br />
This is because the package has not been updated for GHC 6.8, which split the base package into lots of smaller packages. The package needs to be updated to declare that it depends on these new split-out packages, such as containers, process and several others.<br />
<br />
If you just want to get the package to build, add the missing package names to the build-depends: line in the .cabal file. For example, given the above error message, we would add the 'containers' package to the build-depends.<br />
<br />
Developers of packages who want to know how to update their package properly so that it will continue to work with old and new compilers should see [[Upgrading_packages]].<br />
<br />
== How do I handle extensions? ==<br />
If your code uses some of the advanced Haskell extensions, you have a number of options.<br />
# If you're distributing via Cabal, you can simply add <code>ghc-options: -fglasgow-exts</code> to your .cabal file.<br />
# You can use the <code>OPTIONS_GHC</code> pragma to supply -fglasgow-exts on a per-file basis (as opposed to the cabal file, which applies to every file), like this: <haskell>{-# OPTIONS_GHC -fglasgow-exts #-}</haskell>.<br />
# The best way, if you know your users are on GHC 6.8.x, is the new LANGUAGE pragmas. That is, each extension has a name, and you list only the ones you are actually using (you can find the list in [http://haskell.org/ghc/docs/latest/html/libraries/Cabal/Distribution-Extension.html Distribution.Extension]). As before, you can enable them in the cabal file, mentioning them like <code>extensions: CPP, ForeignFunctionInterface</code>.<br />
# Of course, even better is specifying them only in the files that use them. A pragma might look like: <haskell>{-# LANGUAGE CPP, ForeignFunctionInterface #-}</haskell> (You'll probably still want to use the extensions field, just to make clear at the top level what extensions the project uses.)<br />
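To make the last two options concrete, here is a minimal sketch of a file enabling both extensions from the example pragma. The module contents are illustrative assumptions, not from any real package: the FFI extension is exercised by importing C's <code>sin</code> from the standard maths library, and CPP by testing the always-defined <code>__GLASGOW_HASKELL__</code> macro.

```haskell
{-# LANGUAGE CPP, ForeignFunctionInterface #-}

-- ForeignFunctionInterface: call straight into the C maths library.
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double

-- CPP: vary the code by compiler version.
#if __GLASGOW_HASKELL__ >= 608
compilerNote :: String
compilerNote = "built with GHC 6.8 or later"
#else
compilerNote :: String
compilerNote = "built with an older GHC"
#endif
```

A file like this compiles without any extensions field or command-line flags, because the pragma carries all the information.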
<br />
== [Windows] I tried to install a Haskell binding to (some external C library), but I get build errors ==<br />
<br />
Packages consisting of 100% Haskell code almost always build perfectly on Windows. However, packages which implement bindings to external C libraries (e.g., OpenSSH, libSDL, etc.) sometimes won't build on Windows without prodding.<br />
<br />
# Check that the external C library is actually installed on your system. (Cabal does ''not'' do this for you.)<br />
# Check the package contents, package home page, etc., to see if the author has ''told'' you how to get this package to work on Windows.<br />
<br />
If those two fail to get you any further, proceed as follows:<br />
<br />
* Cabal probably needs to be able to find header files in order to compile the package. In the future there will be switches for the 'configure' step to allow you to specify the path to these. For now, you'll have to manually ''hack'' the Cabal information file to tell Cabal where to look. Try adding a line in the 'library' section saying something like <code>include-dirs: "C:\\Program Files\\My External Library\\include"</code> (Note carefully the quotes and double backslashes!) Obviously the actual path varies depending on where you installed the library.<br />
* Cabal may also need to find object files that need to be statically linked. Again, a future Cabal release will allow you to specify these during the configure step with switches, but for now try adding <code>extra-lib-dirs: "C:\\Program Files\\My External Library\\lib"</code> or similar.<br />
* Assuming you get your library to compile, you may still need to add DLLs or other resources to your PATH variable to get any programs ''using'' the package to actually run. (But the installer for the external library might have done this for you already.)</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:GHC/Type_familiesTalk:GHC/Type families2007-11-11T12:55:25Z<p>MathematicalOrchid: </p>
<hr />
<div>How does this change now that GHC 6.8.1 is out? [[User:MathematicalOrchid|MathematicalOrchid]] 12:55, 11 November 2007 (UTC)<br />
<br />
<br />
<br />
In 4.3.2 Examples <br />
<br />
module GMap (GMapKey(..), GMap(..)) where...: As before, but also exports all the data constructors GMapInt, GMapChar, GMapUnit, GMapPair, and GMapUnit. <br />
<br />
should probably be:<br />
<br />
module GMap (GMapKey(..), GMap(..)) where...: As before, but also exports all the data constructors GMapInt, GMapChar, GMapUnit, GMapPair, and GMapEither.<br />
<br />
<br />
Paragraph 6 is a copy of Paragraph 4 except Collects is used.</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-13T18:47:09Z<p>MathematicalOrchid: More details in dead code elimination.</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Dead code elimination ===<br />
<br />
Does GHC remove code that you're not actually using?<br />
<br />
Yes and no. If there is something in a module that isn't exported and isn't used by anything that ''is'' exported, it gets ignored. (This makes your compiled program smaller.) So at the module level, yes, GHC does dead code elimination.<br />
<br />
On the other hand, if you import a module and use just one function from it, ''all'' of the code for ''all'' of the functions in that module gets linked in. So in this sense, no, GHC doesn't do dead code elimination.<br />
<br />
(There is a switch to make GHC emit a separate object file for each individual function in a module. If you use this, only the functions that are actually used will get linked into your executable. But this tends to freak out the linker program...)<br />
<br />
If you want to be warned about unused code (Why do you have it there if it's unused? Did you forget to type something?) you can use the <code>-fwarn-unused-binds</code> option (or just <code>-Wall</code>).<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
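Doing CSE by hand just means binding the repeated expression with <hask>let</hask> or <hask>where</hask>, exactly as in the transformation above. A small self-contained sketch (<hask>expensiveBar</hask> is a hypothetical stand-in for any costly function):

```haskell
-- Hypothetical stand-in for an expensive computation.
expensiveBar :: Int -> Int
expensiveBar x = sum [1..x]

-- Without manual CSE: expensiveBar may be called twice.
fooNaive :: Int -> Int
fooNaive x = expensiveBar x * expensiveBar x

-- With manual CSE: the shared result is computed once.
fooShared :: Int -> Int
fooShared x = let x' = expensiveBar x in x' * x'
```

Both versions return the same answer; the second guarantees the sharing rather than hoping the compiler spots it.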
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more; it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' type, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
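Whatever the heuristic, you can override it per function with GHC's INLINE and NOINLINE pragmas. A small sketch (the function names are made up for illustration):

```haskell
-- Ask GHC to inline this function at every call site.
square :: Int -> Int
square x = x * x
{-# INLINE square #-}

-- Forbid inlining, e.g. to keep code size down.
bigTable :: [Int]
bigTable = map square [1..1000]
{-# NOINLINE bigTable #-}
```

The pragmas change only how the code is compiled, not what it computes.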
<br />
=== Specialisation ===<br />
<br />
Flexibility is the enemy of performance. Take <hask>(+)</hask> for example. As you know, it adds two numbers together. However, would that be two integers? Two floating-point numbers? Two complex numbers? Two vectors? The generated machine code is very, very different in each case!<br />
<br />
It's easy enough to make a function such as <hask>sum</hask>, which will work for ''any'' type of number. However, in the interests of performance, if it can be determined exactly which type of number we're going to be working on, the compiler can generate exactly the right machine code, without having to do lots of runtime lookups.<br />
<br />
GHC tries to do this where possible. However (as I understand it?) this tends to work less well across module boundaries. For example, suppose you write<br />
<br />
<haskell><br />
module Physics where<br />
<br />
data Force = ...<br />
<br />
instance Num Force where ...<br />
<br />
resultant_force :: [Force] -> Force<br />
resultant_force = sum<br />
</haskell><br />
<br />
One might ''hope'' that <hask>resultant_force</hask> would get compiled using a special version of <hask>sum</hask> tailored to adding up only <hask>Force</hask> objects. This may or may not happen.<br />
<br />
Generally GHC won't just take an existing function and recompile it with a new type signature. What ''might'' happen is that the function gets inlined, and specialised from there. (Can someone say something more concrete here?)<br />
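What you ''can'' do is ask for specialisation explicitly with a SPECIALISE pragma on your own overloaded functions. A hedged sketch (here <hask>mySum</hask> is an illustrative reimplementation, not the Prelude's <hask>sum</hask>):

```haskell
-- A polymorphic sum over any Num type; calls go through a
-- dictionary of Num operations at runtime.
mySum :: Num a => [a] -> a
mySum = foldr (+) 0

-- Ask GHC to also compile monomorphic copies, so these common
-- cases avoid the dictionary lookup entirely.
{-# SPECIALISE mySum :: [Int] -> Int #-}
{-# SPECIALISE mySum :: [Double] -> Double #-}
```

Calls at type <hask>[Int]</hask> or <hask>[Double]</hask> are then rewritten to use the specialised copies.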
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "this program will never halt" --- see The Halting Problem. (Good link?) But GHC does its best, and can give big speedups in some cases.<br />
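When the analysis misses a case, you can force strictness by hand with <hask>seq</hask>, or use <hask>Data.List.foldl'</hask>, which does it for you. A sketch of the classic example, summing a list with a strict accumulator:

```haskell
import Data.List (foldl')

-- Lazy left fold: builds a long chain of (+) thunks first.
sumLazy :: [Integer] -> Integer
sumLazy = foldl (+) 0

-- Hand-forced version: seq evaluates the accumulator at each
-- step, which is what strictness analysis tries to arrange.
sumSeq :: [Integer] -> Integer
sumSeq = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x in acc' `seq` go acc' xs

-- The standard-library tool for this pattern:
sumStrict :: [Integer] -> Integer
sumStrict = foldl' (+) 0
```

All three compute the same result; the strict versions do it in constant space.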
<br />
=== Fusion ===<br />
<br />
In Haskell, it is common to write expressions such as<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
This style of writing makes it very clear what the function ''does'' (it takes a list of lists, concatenates them all, applies f to every element, applies g to every element, throws away all elements that fail p, and then calculates the length of the result). However, if executed literally, it's very inefficient.<br />
<br />
When executed, <hask>concat</hask> takes a list of lists and constructs a flat list. Then <hask>map</hask> constructs another list. Then the second <hask>map</hask> function creates yet another list...<br />
<br />
Since Haskell is a lazy language, these intermediate lists never exist in memory in their entirety. One element will be generated by one function, and then immediately consumed by the next function in the chain. So as each element is generated, it instantly becomes garbage. So the memory usage isn't that great, but the GC load is quite high. (Not to mention all the time wasted on creating thunks, evaluating thunks, and allocating/deallocating RAM.) So we really want to avoid all this!<br />
<br />
The term ''fusion'' refers to program transformations aimed at removing intermediate data structures. (''Deforestation'' refers specifically to lists, but in general fusion is applicable to operations on any structure.)<br />
<br />
The standard libraries provide a function <hask>concatMap</hask> such that<br />
<br />
<haskell><br />
concatMap f = concat . map f<br />
</haskell><br />
<br />
As you can see, we don't 'need' this function --- we can define it in terms of other, simpler functions. However, it's more efficient to run because it doesn't generate an intermediate list of lists. (It's also used to define the list monad, which is probably why it's there.)<br />
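The two definitions are easy to compare side by side (an illustrative sketch; <hask>concatMap'</hask> and <hask>concatMapNaive</hask> are made-up names for the two versions):

```haskell
-- The fused definition: one traversal, no intermediate list of lists.
concatMap' :: (a -> [b]) -> [a] -> [b]
concatMap' f = foldr (\x rest -> f x ++ rest) []

-- The two-pass composition it replaces.
concatMapNaive :: (a -> [b]) -> [a] -> [b]
concatMapNaive f = concat . map f
```

Both produce the same list; only the intermediate structure differs.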
<br />
Having <hask>concatMap</hask> is nice. But we really don't want to define new functions for every possible combination of list operators. (Do ''you'' fancy implementing a <hask>lengthFilterMapMapConcat</hask> function?) So one of the optimisations that GHC performs is to attempt to perform fusion automatically.<br />
<br />
One way that we could try to do this is by inlining all the function definitions. But list processing functions are generally recursive, which makes matters rather complicated. (I.e., this doesn't really work.)<br />
<br />
Currently (GHC 6.6.1) we have build/foldr fusion. That is, where a function ''builds'' a list and passes the result to a function that ''consumes'' a list, GHC can (usually) elide the list itself. There are also other transformations that can be applied. For example, map fusion. Map fusion simply says that<br />
<br />
<haskell><br />
map g . map f<br />
</haskell><br />
<br />
is equivalent to<br />
<br />
<haskell><br />
map (g . f)<br />
</haskell><br />
<br />
(but the latter is more efficient).<br />
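The map-fusion law is easy to check on a concrete example (names are illustrative):

```haskell
-- Two traversals, one intermediate list.
twoPasses :: [Int] -> [Int]
twoPasses = map (+1) . map (*2)

-- One traversal, no intermediate list.
onePass :: [Int] -> [Int]
onePass = map ((+1) . (*2))
```

Both give the same answer on every input; the fused form simply avoids building the list in the middle.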
<br />
All of this is implemented using GHC's ''transformation rules'' facility. See the manual. (Section??) This functionality is only turned on with -O or -O2.<br />
<br />
In the future (GHC 6.7?) we will have ''stream fusion''. In layman's terms, this increases the number of functions that can be fused, which means big speedups.<br />
<br />
To be more technical, a ''stream'' represents a traversal of a list (or, indeed, some other structure such as an array). All the list functions become stream functions --- but, crucially, stream operations are ''non-recursive'', meaning they can all be glued together. Taking our example above:<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
becomes something like<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p . toStream .<br />
fromStream . streamMap g . toStream .<br />
fromStream . streamMap f . toStream .<br />
fromStream . streamConcat . toStream<br />
</haskell><br />
<br />
which, obviously, is massively ''less'' efficient than the original. However, since<br />
<br />
<haskell><br />
toStream . fromStream = id<br />
</haskell><br />
<br />
we can simplify that down to<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p .<br />
streamMap g .<br />
streamMap f .<br />
streamConcat . toStream<br />
</haskell><br />
<br />
In other words, we have a <hask>toStream</hask> at one end, and a <hask>fromStream</hask> at the other end, with a bunch of stream operations in the middle. These are all non-recursive; only <hask>fromStream</hask> actually performs a recursive loop, so once GHC does all its inlining we'll end up with something like<br />
<br />
<code><br />
foreach x in xs do<br />
... concat ...<br />
... map f ...<br />
... map g ...<br />
... filter p ...<br />
... length ...<br />
</code><br />
<br />
which is what we want.<br />
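To make "stream operations are non-recursive" concrete, here is a toy model of the idea. This is a sketch, not GHC's actual internals: the names are illustrative, and real stream fusion uses an extra <hask>Skip</hask> step so that even <hask>streamFilter</hask>'s stepper is non-recursive, which is omitted here for brevity.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- One move of a traversal: either finished, or an element plus
-- the next state.
data Step s a = Done | Yield a s

-- A stream packages a stepper function with its current state.
data Stream a = forall s. Stream (s -> Step s a) s

toStream :: [a] -> Stream a
toStream = Stream step
  where
    step []     = Done
    step (x:xs) = Yield x xs

-- The only recursive loop: unfold the stream back into a list.
fromStream :: Stream a -> [a]
fromStream (Stream step s0) = go s0
  where
    go s = case step s of
             Done       -> []
             Yield x s' -> x : go s'

-- Non-recursive: just wraps the old stepper, so it can be
-- inlined and glued onto its neighbours.
streamMap :: (a -> b) -> Stream a -> Stream b
streamMap f (Stream step s0) = Stream step' s0
  where
    step' s = case step s of
                Done       -> Done
                Yield x s' -> Yield (f x) s'

streamFilter :: (a -> Bool) -> Stream a -> Stream a
streamFilter p (Stream step s0) = Stream step' s0
  where
    step' s = case step s of
                Done -> Done
                Yield x s'
                  | p x       -> Yield x s'
                  | otherwise -> step' s'   -- real fusion uses Skip here
```

With these, a pipeline like <hask>length . fromStream . streamFilter even . streamMap (*2) . toStream</hask> contains exactly one recursive loop, in <hask>fromStream</hask>.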
<br />
(...Add link to papers...)<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some of those objects are functions (because Haskell lets you pass functions around like data). And some of these are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because it can't stand up to bullies. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps. For example, avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)<br />
<br />
=== Primitive data types ===<br />
<br />
Haskell-98 provides some standard types such as <hask>Int</hask>, etc. GHC defines these as 'boxed' versions of GHC-specific 'unboxed' types:<br />
<br />
<haskell><br />
-- From GHC.Exts:<br />
data Int = I# Int#<br />
data Word = W# Word#<br />
data Double = D# Double#<br />
-- etc.<br />
</haskell><br />
<br />
Here <hask>Int#</hask> is a GHC-specific internal type representing, literally, a plain ordinary bundle of 32 or 64 bits inside the computer somewhere. (Depending on whether it's a 32 or 64-bit architecture.)<br />
<br />
In particular, an <hask>Int#</hask> is strict, whereas an <hask>Int</hask> isn't.<br />
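You can work with the unboxed values directly via GHC.Exts, which requires the MagicHash extension for the <hask>#</hask>-suffixed names. A small sketch (the function name is made up for illustration):

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (Int (I#), (+#), (*#))

-- Unpack the boxes, do arithmetic on the raw machine integers
-- (no thunks, no allocation), then box the result up again.
addMul :: Int -> Int -> Int
addMul (I# x) (I# y) = I# ((x +# y) *# y)
```

With optimisation on, GHC performs this kind of unboxing automatically for strict arithmetic code, so writing it by hand is rarely necessary.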
<br />
=== Algebraic data types ===<br />
<br />
(I'm not sure about the basic memory layout. Somebody fill in the general case?)<br />
<br />
There are a few special cases:<br />
<br />
==== Types with 1 constructor ====<br />
<br />
If a function returns a tuple of values, and the caller immediately takes the tuple apart again, GHC will attempt to eliminate the tuple completely at the machine code level. Actually, this works for ''all'' types having exactly 1 constructor.<br />
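A sketch of the pattern in question (names are illustrative; whether the tuple is actually eliminated depends on the optimisation level and, as I understand it, GHC's constructed-product-result analysis):

```haskell
-- Returns two results via a single-constructor type (a pair).
minMax :: Int -> Int -> (Int, Int)
minMax x y
  | x <= y    = (x, y)
  | otherwise = (y, x)

-- The caller takes the pair apart immediately, so GHC can often
-- arrange for the two results to be passed back directly,
-- without the tuple ever existing at runtime.
range :: Int -> Int -> Int
range x y = case minMax x y of
              (lo, hi) -> hi - lo
```

The observable behaviour is unchanged either way; only the allocation disappears.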
<br />
==== Constructors with no fields ====<br />
<br />
Booleans are a good example:<br />
<br />
<haskell><br />
data Bool = False | True<br />
</haskell><br />
<br />
GHC will construct a single object in memory representing <hask>False</hask>, and another representing <hask>True</hask>. All <hask>Bool</hask> values are thus pointers to one or the other of these objects. (And hence, consume either 32 or 64 bits.)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-13T18:32:33Z<p>MathematicalOrchid: Fixed note about tuples.</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Dead code elimination ===<br />
<br />
Does GHC remove code that you're not actually using?<br />
<br />
Yes and no. If there is something in a module that isn't exported and isn't used by anything that ''is'' exported, it gets ignored. (This makes your compiled program smaller.) So at the module level, yes, GHC does dead code elimination.<br />
<br />
On the other hand, if you import a module and use just 1 function from it, ''all' of the code for ''all'' of the functions in that module get linked in. So in this sense, no, GHC doesn't do dead code elimination.<br />
<br />
(There is a switch to make GHC spit out a separate object file for each individual function in a module. If you use this, only the functions are actually use will get linked into your executable. But this tends to freak out the linker program...)<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elemination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/lazyness of the program. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more, it just does the work directly. (This might also expose new optimisations opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
<br />
=== Specialisation ===<br />
<br />
Flexibility is the enemy of performance. Take <hask>(+)</hask> for example. As you know, it adds two numbers together. However, would that be two integers? Two floating-point numbers? Two complex numbers? Two vectors? The generated machine code is very, very different in each case!<br />
<br />
It's easy enough to make a function such as <hask>sum</hask>, which will work for ''any'' type of number. However, in the interests of performance, if it can be determined exactly which type of number we're going to be working on, the compiler can generate exactly the right machine code, without having to do lots of runtime lookups.<br />
<br />
GHC tries to do this where possible. However (as I understand it?) this tends to work less well across module boundaries. For example, suppose you write<br />
<br />
<haskell><br />
module Physics where<br />
<br />
data Force = ...<br />
<br />
instance Num Force where ...<br />
<br />
resultant_force :: [Force] -> Force<br />
resultant_force = sum<br />
</haskell><br />
<br />
One might ''hope'' that <hask>resultant_force</hask> would get compiled using a special version of <hask>sum</hask> tailored to adding up only <hask>Force</hask> objects. This may or may not happen.<br />
<br />
Generally GHC won't just take an existing function and recompile it with a new type signature. What ''might'' happen is that the function gets inlined, and specialised from there. (Can someone say something more concrete here?)<br />
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "this program will never halt" --- see The Halting Problem. (Good link?) But GHC does its best, and can give big speedups in some cases.<br />
<br />
=== Fusion ===<br />
<br />
In Haskell, it is common to write expressions such as<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
This style of writing makes it very clear what the function ''does'' (it takes a list of lists, concatenates them all, applies f to every element, applies g to every element, throws away all elements that fail p, and then calculates the length of the result). However, if executed literally, it's very inefficient.<br />
<br />
When executed, <hask>concat</hask> takes a list of lists and constructs a flat list. Then <hask>map</hask> constructs another list. Then the second <hask>map</hask> function creates yet another list...<br />
<br />
Since Haskell is a lazy language, these intermediate lists never exist in memory in their entirety. One element will be generated by one function, and then immediately consumed by the next function in the chain. So as each element is generated, it instantly becomes garbage. So the memory usage isn't that great, but the GC load is quite high. (Not to mention all the time wasted on creating thunks, evaluating thunks, and allocating/deallocating RAM.) So we really want to avoid all this!<br />
<br />
The term ''fusion'' refers to program transformations aimed at removing intermediate data structures. (''Deforestation'' refers specifically to lists, but in general fusion is applicable to operations on any structure.)<br />
<br />
The standard libraries provide a function <hask>concatMap</hask> such that<br />
<br />
<haskell><br />
concatMap f = concat . map f<br />
</haskell><br />
<br />
As you can see, we don't 'need' this function --- we can define it in turns of other, simpler functions. However, it's more efficient to run because it doesn't generate an intermediate list of lists. (It's also used to define the list monad, which is probably why it's there.)<br />
<br />
Having <hask>concatMap</hask> is nice. But we really don't want to define new functions for every possible combination of list operators. (Do ''you'' fancy implementing a <hask>lengthFilterMapMapConat</hask> function?) So one of the optimisations that GHC performs is to attempt to perform fusion automatically.<br />
<br />
One way that we could try to do this is by inlining all the function definitions. But list processing functions are generally recursive, which makes matters rather complicated. (I.e., this doesn't really work.)<br />
<br />
Currently (GHC 6.6.1) we have build/foldr fusion. That is, where a function ''builds'' a list and passes the result to a function that ''consumes'' a list, GHC can (usually) elide the list itself. There are also other transformations that can be applied. For example, map fusion. Map fusion simply says that<br />
<br />
<haskell><br />
map g . map f<br />
</haskell><br />
<br />
is equivilent to<br />
<br />
<haskell><br />
map (g . f)<br />
</haskell><br />
<br />
(but the latter is more efficient).<br />
<br />
All of this is implemented using GHC's ''transformation rules'' facility. See the manual. (Section??) This functionality is only turned on with -O or -O2.<br />
<br />
In the future (GHC 6.7?) we will have ''stream fusion''. In layman's terms, this increases the number of functions that can be fused = big speedups.<br />
<br />
To be more technical, a ''stream'' represents a traversal of a list (or, indeed, some other structure such as an array). All the list functions become stream functions --- but, crucially, stream operations are ''non-recursive'', meaning they can all be glued together. Taking our example above:<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
becomes something like<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p . toStream .<br />
fromStream . streamMap g . toStream .<br />
fromStream . streamMap f . toStream .<br />
fromStream . concat . toStream<br />
</haskell><br />
<br />
which, obviously, is massively ''less'' efficient than the original. However, since<br />
<br />
<haskell><br />
toStream . fromStream = id<br />
</haskell><br />
<br />
we can simplify that down to<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p .<br />
streamMap g .<br />
streamMap f .<br />
streamConcat . toStream<br />
</haskell><br />
<br />
In other words, we have a <hask>toStream</hask> at one end, and a <hask>fromStream</hask> at the other end, with a bunch of stream operations in the middle. These are all non-recursive; onto <hask>fromStream</hask> actually performs a recursive loop, so once GHC does all its inlining we'll end up with something like<br />
<br />
<code><br />
foreach x in xs do<br />
... concat ...<br />
... map f ...<br />
... map g ...<br />
... filter p ...<br />
... length ...<br />
</code><br />
<br />
which is what we want.<br />
<br />
(...Add link to papers...)<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some of those objects are functions (because Haskell lets you pass functions around like data). And some of these are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
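The overwriting of thunks is observable with <hask>Debug.Trace.trace</hask> (a real function, though using it for this is only a demonstration). With GHC's normal evaluation, the traced computation below runs only once even though <hask>x</hask> is demanded twice, because the first demand overwrites the thunk with its result:<br />

```haskell
import Debug.Trace (trace)

-- 'x' is stored once as a thunk.  The first demand runs the computation
-- (printing the trace message to stderr) and overwrites the thunk with
-- 3628800; the second use of 'x' just reads that result, so the message
-- appears only once.
sharedSum :: Integer
sharedSum = let x = trace "evaluating x" (product [1 .. 10]) in x + x
```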
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because it can't stand up to bullies. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps. For example, avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)<br />
<br />
=== Primitive data types ===<br />
<br />
Haskell-98 provides some standard types such as <hask>Int</hask>, etc. GHC defines these as 'boxed' versions of GHC-specific 'unboxed' types:<br />
<br />
<haskell><br />
-- From GHC.Exts:<br />
data Int = I# Int#<br />
data Word = W# Word#<br />
data Double = D# Double#<br />
-- etc.<br />
</haskell><br />
<br />
Here <hask>Int#</hask> is a GHC-specific internal type representing, literally, a plain ordinary bundle of 32 or 64 bits inside the computer somewhere. (Depending on whether it's a 32 or 64-bit architecture.)<br />
<br />
In particular, an <hask>Int#</hask> is strict, whereas an <hask>Int</hask> isn't.<br />
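You can manipulate the unboxed values directly. This needs the <code>MagicHash</code> extension on recent GHCs (older GHCs enabled the <code>#</code> syntax via <code>-fglasgow-exts</code>); a minimal sketch:<br />

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (Int (I#), (+#))

-- Pattern-matching on the I# constructor unwraps the raw machine word;
-- (+#) then adds two unboxed Int# values directly, with no boxing and
-- no thunks involved.
plusOne :: Int -> Int
plusOne (I# n) = I# (n +# 1#)
```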
<br />
=== Algebraic data types ===<br />
<br />
(I'm not sure about the basic memory layout. Somebody fill in the general case?)<br />
<br />
There are a few special cases:<br />
<br />
==== Types with 1 constructor ====<br />
<br />
If a function returns a tuple of values, and the caller immediately takes the tuple apart again, GHC will attempt to eliminate the tuple completely at the machine code level. Actually, this works for ''all'' types having exactly 1 constructor.<br />
<br />
==== Constructors with no fields ====<br />
<br />
Booleans are a good example:<br />
<br />
<haskell><br />
data Bool = False | True<br />
</haskell><br />
<br />
GHC will construct a single object in memory representing <hask>False</hask>, and another representing <hask>True</hask>. All <hask>Bool</hask> values are thus pointers to one or the other of these objects. (And hence, consume either 32 or 64 bits.)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-13T18:30:20Z<p>MathematicalOrchid: Added dead code elimination.</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Dead code elimination ===<br />
<br />
Does GHC remove code that you're not actually using?<br />
<br />
Yes and no. If there is something in a module that isn't exported and isn't used by anything that ''is'' exported, it gets ignored. (This makes your compiled program smaller.) So at the module level, yes, GHC does dead code elimination.<br />
<br />
On the other hand, if you import a module and use just 1 function from it, ''all'' of the code for ''all'' of the functions in that module gets linked in. So in this sense, no, GHC doesn't do dead code elimination.<br />
<br />
(There is a switch, <code>-split-objs</code>, to make GHC emit a separate object file for each individual function in a module. If you use this, only the functions you actually use will get linked into your executable. But this tends to freak out the linker program...)<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more, it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
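The size threshold can be tuned with flags such as <code>-funfolding-use-threshold</code>, and you can override the heuristic per function with an <code>INLINE</code> (or <code>NOINLINE</code>) pragma. A small sketch:<br />

```haskell
-- An INLINE pragma overrides the size heuristic and asks GHC to inline
-- 'square' at every call site; NOINLINE does the opposite.
square :: Int -> Int
square x = x * x
{-# INLINE square #-}

-- With 'square' inlined here, GHC sees '(x * x) * (x * x)' directly
-- and can optimise it further.
fourth :: Int -> Int
fourth x = square (square x)
```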
<br />
=== Specialisation ===<br />
<br />
Flexibility is the enemy of performance. Take <hask>(+)</hask> for example. As you know, it adds two numbers together. However, would that be two integers? Two floating-point numbers? Two complex numbers? Two vectors? The generated machine code is very, very different in each case!<br />
<br />
It's easy enough to make a function such as <hask>sum</hask>, which will work for ''any'' type of number. However, in the interests of performance, if it can be determined exactly which type of number we're going to be working on, the compiler can generate exactly the right machine code, without having to do lots of runtime lookups.<br />
<br />
GHC tries to do this where possible. However (as I understand it?) this tends to work less well across module boundaries. For example, suppose you write<br />
<br />
<haskell><br />
module Physics where<br />
<br />
data Force = ...<br />
<br />
instance Num Force where ...<br />
<br />
resultant_force :: [Force] -> Force<br />
resultant_force = sum<br />
</haskell><br />
<br />
One might ''hope'' that <hask>resultant_force</hask> would get compiled using a special version of <hask>sum</hask> tailored to adding up only <hask>Force</hask> objects. This may or may not happen.<br />
<br />
Generally GHC won't just take an existing function and recompile it with a new type signature. What ''might'' happen is that the function gets inlined, and specialised from there. (Can someone say something more concrete here?)<br />
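If you want to be sure, you can request specialisation explicitly with a <code>SPECIALISE</code> pragma. The <hask>Force</hask> type below is a made-up stand-in for the example above:<br />

```haskell
-- A hypothetical numeric type, standing in for the Physics example.
newtype Force = Force Double
  deriving (Eq, Show)

instance Num Force where
  Force a + Force b = Force (a + b)
  Force a * Force b = Force (a * b)
  negate (Force a)  = Force (negate a)
  abs (Force a)     = Force (abs a)
  signum (Force a)  = Force (signum a)
  fromInteger n     = Force (fromInteger n)

-- The SPECIALISE pragma asks GHC to compile an extra copy of this
-- polymorphic function at the given monomorphic type, so calls at that
-- type avoid runtime dictionary lookups.
total :: Num a => [a] -> a
total = sum
{-# SPECIALISE total :: [Force] -> Force #-}

resultant_force :: [Force] -> Force
resultant_force = total
```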
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "this program will never halt" --- see The Halting Problem. (Good link?) But GHC does its best, and can give big speedups in some cases.<br />
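When strictness analysis doesn't fire (for instance, with optimisation off), you can force evaluation yourself with <hask>seq</hask>. The classic case is a left fold's accumulator:<br />

```haskell
-- A left fold with a lazy accumulator builds a long chain of thunks:
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs   -- 'acc + x' is suspended as a thunk

-- Forcing the accumulator with 'seq' keeps it evaluated at every step,
-- which is what strictness analysis would do automatically here under -O:
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go acc []       = acc
    go acc (x : xs) = let acc' = acc + x in acc' `seq` go acc' xs
```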
<br />
=== Fusion ===<br />
<br />
In Haskell, it is common to write expressions such as<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
This style of writing makes it very clear what the function ''does'' (it takes a list of lists, concatenates them all, applies f to every element, applies g to every element, throws away all elements that fail p, and then calculates the length of the result). However, if executed literally, it's very inefficient.<br />
<br />
When executed, <hask>concat</hask> takes a list of lists and constructs a flat list. Then <hask>map</hask> constructs another list. Then the second <hask>map</hask> function creates yet another list...<br />
<br />
Since Haskell is a lazy language, these intermediate lists never exist in memory in their entirety. One element will be generated by one function, and then immediately consumed by the next function in the chain. So as each element is generated, it instantly becomes garbage. So the memory usage isn't that great, but the GC load is quite high. (Not to mention all the time wasted on creating thunks, evaluating thunks, and allocating/deallocating RAM.) So we really want to avoid all this!<br />
<br />
The term ''fusion'' refers to program transformations aimed at removing intermediate data structures. (''Deforestation'' refers specifically to lists, but in general fusion is applicable to operations on any structure.)<br />
<br />
The standard libraries provide a function <hask>concatMap</hask> such that<br />
<br />
<haskell><br />
concatMap f = concat . map f<br />
</haskell><br />
<br />
As you can see, we don't 'need' this function --- we can define it in terms of other, simpler functions. However, it's more efficient to run because it doesn't generate an intermediate list of lists. (It's also used to define the list monad, which is probably why it's there.)<br />
<br />
Having <hask>concatMap</hask> is nice. But we really don't want to define new functions for every possible combination of list operators. (Do ''you'' fancy implementing a <hask>lengthFilterMapMapConcat</hask> function?) So one of the optimisations that GHC performs is to attempt to perform fusion automatically.<br />
<br />
One way that we could try to do this is by inlining all the function definitions. But list processing functions are generally recursive, which makes matters rather complicated. (I.e., this doesn't really work.)<br />
<br />
Currently (GHC 6.6.1) we have build/foldr fusion. That is, where a function ''builds'' a list and passes the result to a function that ''consumes'' a list, GHC can (usually) elide the list itself. There are also other transformations that can be applied. For example, map fusion. Map fusion simply says that<br />
<br />
<haskell><br />
map g . map f<br />
</haskell><br />
<br />
is equivalent to<br />
<br />
<haskell><br />
map (g . f)<br />
</haskell><br />
<br />
(but the latter is more efficient).<br />
<br />
All of this is implemented using GHC's ''transformation rules'' facility. See the manual. (Section??) This functionality is only turned on with -O or -O2.<br />
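For illustration, the map fusion law above can be written as a rewrite rule. This is only a sketch --- the actual rules shipped with the libraries are phrased in terms of <hask>build</hask>/<hask>foldr</hask> --- but it shows the shape of the facility:<br />

```haskell
-- A minimal rewrite rule implementing the map/map law.  GHC applies
-- rules like this during simplification, but only with -O; without
-- optimisation the pragma is ignored and the code still works.
{-# RULES
"map/map"  forall f g xs.  map g (map f xs) = map (g . f) xs
  #-}

-- With the rule firing, the two traversals below become one.
doubleThenShow :: [Int] -> [String]
doubleThenShow xs = map show (map (* 2) xs)
```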
<br />
In the future (GHC 6.7?) we will have ''stream fusion''. In layman's terms, this increases the number of functions that can be fused, which means big speedups.<br />
<br />
To be more technical, a ''stream'' represents a traversal of a list (or, indeed, some other structure such as an array). All the list functions become stream functions --- but, crucially, stream operations are ''non-recursive'', meaning they can all be glued together. Taking our example above:<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
becomes something like<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p . toStream .<br />
fromStream . streamMap g . toStream .<br />
fromStream . streamMap f . toStream .<br />
fromStream . streamConcat . toStream<br />
</haskell><br />
<br />
which, obviously, is massively ''less'' efficient than the original. However, since<br />
<br />
<haskell><br />
toStream . fromStream = id<br />
</haskell><br />
<br />
we can simplify that down to<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p .<br />
streamMap g .<br />
streamMap f .<br />
streamConcat . toStream<br />
</haskell><br />
<br />
In other words, we have a <hask>toStream</hask> at one end, and a <hask>fromStream</hask> at the other end, with a bunch of stream operations in the middle. These are all non-recursive; only <hask>fromStream</hask> actually performs a recursive loop, so once GHC does all its inlining we'll end up with something like<br />
<br />
<code><br />
foreach x in xs do<br />
... concat ...<br />
... map f ...<br />
... map g ...<br />
... filter p ...<br />
... length ...<br />
</code><br />
<br />
which is what we want.<br />
<br />
(...Add link to papers...)<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some of those objects are functions (because Haskell lets you pass functions around like data). And some of these are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because it can't stand up to bullies. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps. For example, avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)<br />
<br />
=== Primitive data types ===<br />
<br />
Haskell-98 provides some standard types such as <hask>Int</hask>, etc. GHC defines these as 'boxed' versions of GHC-specific 'unboxed' types:<br />
<br />
<haskell><br />
-- From GHC.Exts:<br />
data Int = I# Int#<br />
data Word = W# Word#<br />
data Double = D# Double#<br />
-- etc.<br />
</haskell><br />
<br />
Here <hask>Int#</hask> is a GHC-specific internal type representing, literally, a plain ordinary bundle of 32 or 64 bits inside the computer somewhere. (Depending on whether it's a 32 or 64-bit architecture.)<br />
<br />
In particular, an <hask>Int#</hask> is strict, whereas an <hask>Int</hask> isn't.<br />
<br />
=== Algebraic data types ===<br />
<br />
(I'm not sure about the basic memory layout. Somebody fill in the general case?)<br />
<br />
There are a few special cases:<br />
<br />
==== Types with 1 constructor ====<br />
<br />
If a function puts a bunch of things into a type value, and the caller immediately takes the things out of the bunch again, GHC will try to eliminate the bundle type altogether. (Or is that just for ''tuples''?)<br />
<br />
==== Constructors with no fields ====<br />
<br />
Booleans are a good example:<br />
<br />
<haskell><br />
data Bool = False | True<br />
</haskell><br />
<br />
GHC will construct a single object in memory representing <hask>False</hask>, and another representing <hask>True</hask>. All <hask>Bool</hask> values are thus pointers to one or the other of these objects. (And hence, consume either 32 or 64 bits.)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-12T08:43:56Z<p>MathematicalOrchid: Fusion goodness.</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elemination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/lazyness of the program. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more, it just does the work directly. (This might also expose new optimisations opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
<br />
=== Specialisation ===<br />
<br />
Flexibility is the enemy of performance. Take <hask>(+)</hask> for example. As you know, it adds two numbers together. However, would that be two integers? Two floating-point numbers? Two complex numbers? Two vectors? The generated machine code is very, very different in each case!<br />
<br />
It's easy enough to make a function such as <hask>sum</hask>, which will work for ''any'' type of number. However, in the interests of performance, if it can be determined exactly which type of number we're going to be working on, the compiler can generate exactly the right machine code, without having to do lots of runtime lookups.<br />
<br />
GHC tries to do this where possible. However (as I understand it?) this tends to work less well across module boundaries. For example, suppose you write<br />
<br />
<haskell><br />
module Physics where<br />
<br />
data Force = ...<br />
<br />
instance Num Force where ...<br />
<br />
resultant_force :: [Force] -> Force<br />
resultant_force = sum<br />
</haskell><br />
<br />
One might ''hope'' that <hask>resultant_force</hask> would get compiled using a special version of <hask>sum</hask> tailored to adding up only <hask>Force</hask> objects. This may or may not happen.<br />
<br />
Generally GHC won't just take an existing function and recompile it with a new type signature. What ''might'' happen is that the function gets inlined, and specialised from there. (Can someone say something more concrete here?)<br />
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "this program will never halt" --- see The Halting Problem. (Good link?) But GHC does its best, and can give big speedups in some cases.<br />
<br />
=== Fusion ===<br />
<br />
In Haskell, it is common to write expressions such as<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
This style of writing makes it very clear what the function ''does'' (it takes a list of lists, concatenates them all, applies f to every element, applies g to every element, throws away all elements that fail p, and then calculates the length of the result). However, if executed literally, it's very inefficient.<br />
<br />
When executed, <hask>concat</hask> takes a list of lists and constructs a flat list. Then <hask>map</hask> constructs another list. Then the second <hask>map</hask> function creates yet another list...<br />
<br />
Since Haskell is a lazy language, these intermediate lists never exist in memory in their entirety. One element will be generated by one function, and then immediately consumed by the next function in the chain. So as each element is generated, it instantly becomes garbage. So the memory usage isn't that great, but the GC load is quite high. (Not to mention all the time wasted on creating thunks, evaluating thunks, and allocating/deallocating RAM.) So we really want to avoid all this!<br />
<br />
The term ''fusion'' refers to program transformations aimed at removing intermediate data structures. (''Deforestation'' refers specifically to lists, but in general fusion is applicable to operations on any structure.)<br />
<br />
The standard libraries provide a function <hask>concatMap</hask> such that<br />
<br />
<haskell><br />
concatMap f = concat . map f<br />
</haskell><br />
<br />
As you can see, we don't 'need' this function --- we can define it in turns of other, simpler functions. However, it's more efficient to run because it doesn't generate an intermediate list of lists. (It's also used to define the list monad, which is probably why it's there.)<br />
<br />
Having <hask>concatMap</hask> is nice. But we really don't want to define new functions for every possible combination of list operators. (Do ''you'' fancy implementing a <hask>lengthFilterMapMapConat</hask> function?) So one of the optimisations that GHC performs is to attempt to perform fusion automatically.<br />
<br />
One way that we could try to do this is by inlining all the function definitions. But list processing functions are generally recursive, which makes matters rather complicated. (I.e., this doesn't really work.)<br />
<br />
Currently (GHC 6.6.1) we have build/foldr fusion. That is, where a function ''builds'' a list and passes the result to a function that ''consumes'' a list, GHC can (usually) elide the list itself. There are also other transformations that can be applied. For example, map fusion. Map fusion simply says that<br />
<br />
<haskell><br />
map g . map f<br />
</haskell><br />
<br />
is equivilent to<br />
<br />
<haskell><br />
map (g . f)<br />
</haskell><br />
<br />
(but the latter is more efficient).<br />
<br />
All of this is implemented using GHC's ''transformation rules'' facility. See the manual. (Section??) This functionality is only turned on with -O or -O2.<br />
<br />
In the future (GHC 6.7?) we will have ''stream fusion''. In layman's terms, this increases the number of functions that can be fused = big speedups.<br />
<br />
To be more technical, a ''stream'' represents a traversal of a list (or, indeed, some other structure such as an array). All the list functions become stream functions --- but, crucially, stream operations are ''non-recursive'', meaning they can all be glued together. Taking our example above:<br />
<br />
<haskell><br />
foo = length . filter p . map g . map f . concat<br />
</haskell><br />
<br />
becomes something like<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p . toStream .<br />
fromStream . streamMap g . toStream .<br />
fromStream . streamMap f . toStream .<br />
fromStream . concat . toStream<br />
</haskell><br />
<br />
which, obviously, is massively ''less'' efficient than the original. However, since<br />
<br />
<haskell><br />
toStream . fromStream = id<br />
</haskell><br />
<br />
we can simplify that down to<br />
<br />
<haskell><br />
foo =<br />
length .<br />
fromStream . streamFilter p .<br />
streamMap g .<br />
streamMap f .<br />
streamConcat . toStream<br />
</haskell><br />
<br />
In other words, we have a <hask>toStream</hask> at one end, and a <hask>fromStream</hask> at the other end, with a bunch of stream operations in the middle. These are all non-recursive; onto <hask>fromStream</hask> actually performs a recursive loop, so once GHC does all its inlining we'll end up with something like<br />
<br />
<code><br />
foreach x in xs do<br />
... concat ...<br />
... map f ...<br />
... map g ...<br />
... filter p ...<br />
... length ...<br />
</code><br />
<br />
which is what we want.<br />
<br />
(...Add link to papers...)<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some of those objects are functions (because Haskell lets you pass functions around like data). And some of these are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagness G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because it can't stand up to bullies. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps. For example, avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)<br />
<br />
=== Primitive data types ===<br />
<br />
Haskell-98 provides some standard types such as <hask>Int</hask>, etc. GHC defines these as 'boxed' versions of GHC-specific 'unboxed' types:<br />
<br />
<haskell><br />
-- From GHC.Exts:<br />
data Int = I# Int#<br />
data Word = W# Word#<br />
data Double = D# Double#<br />
-- etc.<br />
</haskell><br />
<br />
Here <hask>Int#</hask> is a GHC-specific internal type representing, literally, a plain ordinary bundle of 32 or 64 bits inside the computer somewhere. (Depending on whether it's a 32- or 64-bit architecture.)<br />
<br />
In particular, an <hask>Int#</hask> is strict --- it is always an actual bit pattern --- whereas an <hask>Int</hask> isn't; it may be an unevaluated thunk.<br />
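As an illustration (not standard Haskell --- it relies on GHC's 'MagicHash' extension and the names exported from <hask>GHC.Exts</hask>), boxed addition can be written by hand in terms of the unboxed type:<br />

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int(I#), (+#))

-- Unbox both arguments, add the raw machine words with (+#), and
-- box the result back up. This is roughly what GHC's own definition
-- of (+) for Int looks like.
plusInt :: Int -> Int -> Int
plusInt (I# x) (I# y) = I# (x +# y)
```

Pattern-matching on <hask>I#</hask> is what forces the argument: to get at the raw bits, any thunk standing in for the <hask>Int</hask> must be evaluated first.<br />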
<br />
=== Algebraic data types ===<br />
<br />
(I'm not sure about the basic memory layout. Somebody fill in the general case?)<br />
<br />
There are a few special cases:<br />
<br />
==== Types with 1 constructor ====<br />
<br />
If a function packs several things into a single-constructor value and the caller immediately takes them out again, GHC will try to eliminate the intermediate value altogether. (Or is that just for ''tuples''?)<br />
<br />
==== Constructors with no fields ====<br />
<br />
Booleans are a good example:<br />
<br />
<haskell><br />
data Bool = False | True<br />
</haskell><br />
<br />
GHC will construct a single object in memory representing <hask>False</hask>, and another representing <hask>True</hask>. All <hask>Bool</hask> values are thus pointers to one or the other of these objects. (And hence, consume either 32 or 64 bits.)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-11T15:54:15Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program, and the extra sharing it introduces can keep large values alive longer than intended. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
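A concrete (hypothetical) sketch of doing CSE by hand --- the <hask>trace</hask> call is only there so you can count how many times <hask>bar</hask> actually runs:<br />

```haskell
import Debug.Trace (trace)

bar :: Int -> Int
bar x = trace "bar called" (x * x)

-- Two textual occurrences of 'bar x'; whether they end up shared is
-- up to the compiler.
fooNaive :: Int -> Int
fooNaive x = bar x * bar x

-- Manual CSE: the let-binding guarantees 'bar' is called only once.
fooShared :: Int -> Int
fooShared x = let x' = bar x in x' * x'
```

Both versions compute the same result; in GHCi (no optimisation), <hask>fooNaive</hask> prints the trace message twice and <hask>fooShared</hask> prints it once.<br />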
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more; it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
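One thing that ''is'' available is the <hask>INLINE</hask> pragma, which asks GHC to inline a specific function regardless of its size heuristic (the GHC manual also documents flags for tuning the size threshold, such as <code>-funfolding-use-threshold</code> --- check the manual for your version). A sketch:<br />

```haskell
-- The INLINE pragma asks GHC to inline 'double' at every call site,
-- overriding its usual 'small enough' heuristic.
double :: Int -> Int
double x = x + x
{-# INLINE double #-}

-- After inlining, this should compile as if it were written
--   quadruple x = (x + x) + (x + x)
quadruple :: Int -> Int
quadruple x = double (double x)
```
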
<br />
=== Specialisation ===<br />
<br />
Flexibility is the enemy of performance. Take <hask>(+)</hask> for example. As you know, it adds two numbers together. However, would that be two integers? Two floating-point numbers? Two complex numbers? Two vectors? The generated machine code is very, very different in each case!<br />
<br />
It's easy enough to make a function such as <hask>sum</hask>, which will work for ''any'' type of number. However, in the interests of performance, if it can be determined exactly which type of number we're going to be working on, the compiler can generate exactly the right machine code, without having to do lots of runtime lookups.<br />
<br />
GHC tries to do this where possible. However (as I understand it?) this tends to work less well across module boundaries. For example, suppose you write<br />
<br />
<haskell><br />
module Physics where<br />
<br />
data Force = ...<br />
<br />
instance Num Force where ...<br />
<br />
resultant_force :: [Force] -> Force<br />
resultant_force = sum<br />
</haskell><br />
<br />
One might ''hope'' that <hask>resultant_force</hask> would get compiled using a special version of <hask>sum</hask> tailored to adding up only <hask>Force</hask> objects. This may or may not happen.<br />
<br />
Generally GHC won't just take an existing compiled function and recompile it at a new type. What ''might'' happen is that the function gets inlined, and specialised from there. (Can someone say something more concrete here?)<br />
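One concrete thing you ''can'' do is request a specialised copy explicitly with a <hask>SPECIALISE</hask> pragma, placed in the module where the overloaded function is defined. A sketch with a hypothetical <hask>mySum</hask>:<br />

```haskell
-- An overloaded function: it works for any Num type, by passing a
-- dictionary of numeric operations at runtime.
mySum :: Num a => [a] -> a
mySum = foldr (+) 0

-- Ask GHC to also compile a monomorphic copy for Int, with the
-- dictionary lookups resolved at compile time.
{-# SPECIALISE mySum :: [Int] -> Int #-}
```

Calls of <hask>mySum</hask> at type <hask>[Int] -> Int</hask> can then be routed to the specialised copy instead of the general, dictionary-passing one.<br />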
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "will this program ever halt?" --- it is undecidable in general; see [http://en.wikipedia.org/wiki/Halting_problem the halting problem]. But GHC does its best, and can give big speedups in some cases.<br />
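When the analyser can't prove a value is needed, you can force evaluation yourself with <hask>seq</hask>. Here is a sketch of a sum with a manually strict accumulator --- roughly the kind of code strictness analysis aims to produce automatically:<br />

```haskell
-- 'seq' forces acc' before the recursive call, so the accumulator is
-- always an evaluated number rather than a growing chain of
-- (...((0 + x1) + x2)...) thunks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in acc' `seq` go acc' xs
```

Without the <hask>seq</hask>, the additions pile up as unevaluated thunks until the end of the list is reached, which can cost a lot of memory on long lists.<br />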
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some are functions (because Haskell lets you pass functions around like data). And some are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because the 'spine' of an expression --- the chain of function applications --- is never built as an explicit data structure. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps. For example, avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)<br />
<br />
=== Primitive data types ===<br />
<br />
Haskell-98 provides some standard types such as <hask>Int</hask>, etc. GHC defines these as 'boxed' versions of GHC-specific 'unboxed' types:<br />
<br />
<haskell><br />
-- From GHC.Exts:<br />
data Int = I# Int#<br />
data Word = W# Word#<br />
data Double = D# Double#<br />
-- etc.<br />
</haskell><br />
<br />
Here <hask>Int#</hask> is a GHC-specific internal type representing, literally, a plain ordinary bundle of 32 or 64 bits inside the computer somewhere. (Depending on whether it's a 32- or 64-bit architecture.)<br />
<br />
In particular, an <hask>Int#</hask> is strict --- it is always an actual bit pattern --- whereas an <hask>Int</hask> isn't; it may be an unevaluated thunk.<br />
<br />
=== Algebraic data types ===<br />
<br />
(I'm not sure about the basic memory layout. Somebody fill in the general case?)<br />
<br />
There are a few special cases:<br />
<br />
==== Types with 1 constructor ====<br />
<br />
If a function packs several things into a single-constructor value and the caller immediately takes them out again, GHC will try to eliminate the intermediate value altogether. (Or is that just for ''tuples''?)<br />
<br />
==== Constructors with no fields ====<br />
<br />
Booleans are a good example:<br />
<br />
<haskell><br />
data Bool = False | True<br />
</haskell><br />
<br />
GHC will construct a single object in memory representing <hask>False</hask>, and another representing <hask>True</hask>. All <hask>Bool</hask> values are thus pointers to one or the other of these objects. (And hence, consume either 32 or 64 bits.)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-11T15:43:16Z<p>MathematicalOrchid: More babble.</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program, and the extra sharing it introduces can keep large values alive longer than intended. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more; it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "will this program ever halt?" --- it is undecidable in general; see [http://en.wikipedia.org/wiki/Halting_problem the halting problem]. But GHC does its best, and can give big speedups in some cases.<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some are functions (because Haskell lets you pass functions around like data). And some are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because the 'spine' of an expression --- the chain of function applications --- is never built as an explicit data structure. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps. For example, avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)<br />
<br />
=== Primitive data types ===<br />
<br />
Haskell-98 provides some standard types such as <hask>Int</hask>, etc. GHC defines these as 'boxed' versions of GHC-specific 'unboxed' types:<br />
<br />
<haskell><br />
-- From GHC.Exts:<br />
data Int = I# Int#<br />
data Word = W# Word#<br />
data Double = D# Double#<br />
-- etc.<br />
</haskell><br />
<br />
Here <hask>Int#</hask> is a GHC-specific internal type representing, literally, a plain ordinary bundle of 32 or 64 bits inside the computer somewhere. (Depending on whether it's a 32- or 64-bit architecture.)<br />
<br />
In particular, an <hask>Int#</hask> is strict --- it is always an actual bit pattern --- whereas an <hask>Int</hask> isn't; it may be an unevaluated thunk.<br />
<br />
=== Algebraic data types ===<br />
<br />
(I'm not sure about the basic memory layout. Somebody fill in the general case?)<br />
<br />
There are a few special cases:<br />
<br />
==== Types with 1 constructor ====<br />
<br />
If a function packs several things into a single-constructor value and the caller immediately takes them out again, GHC will try to eliminate the intermediate value altogether. (Or is that just for ''tuples''?)<br />
<br />
==== Constructors with no fields ====<br />
<br />
Booleans are a good example:<br />
<br />
<haskell><br />
data Bool = False | True<br />
</haskell><br />
<br />
GHC will construct a single object in memory representing <hask>False</hask>, and another representing <hask>True</hask>. All <hask>Bool</hask> values are thus pointers to one or the other of these objects. (And hence, consume either 32 or 64 bits.)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-11T11:13:30Z<p>MathematicalOrchid: OK, I'm done with this - for now.</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program, and the extra sharing it introduces can keep large values alive longer than intended. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more; it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
<br />
=== Strictness analysis ===<br />
<br />
Haskell is a lazy language. Calculations are notionally not performed until their results are 'needed'. However, if the result definitely ''will'' be needed, it's a waste of time and effort to save up the expression and execute it later; more efficient to just execute it right now.<br />
<br />
''Strictness analysis'' is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.<br />
<br />
Unfortunately, looking at a program and saying "will this data be needed?" is a bit like looking at a program and saying "will this program ever halt?" --- it is undecidable in general; see [http://en.wikipedia.org/wiki/Halting_problem the halting problem]. But GHC does its best, and can give big speedups in some cases.<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some are functions (because Haskell lets you pass functions around like data). And some are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above). 'G-machine' because it does graph reduction. 'Spineless' because the 'spine' of an expression --- the chain of function applications --- is never built as an explicit data structure. 'Tagless' because the graph nodes don't have 'tags' on them to say what they are.<br />
<br />
Instead of tags, the nodes have access pointers. If the node is a thunk, its pointer points to the code to evaluate the thunk and return the real result. Otherwise the pointer points to some 'do-nothing' code. So to access any type of node, you just do an indirect jump on this pointer; no case analysis is necessary.<br />
<br />
(Gosh I hope I got that lot right!)<br />
<br />
Internally, GHC uses a kind of 'machine code' that runs on this non-existent G-machine. It does a number of optimisations on that representation, before finally compiling it into ''real'' machine code (possibly via C using GCC).<br />
<br />
=== STG optimisations ===<br />
<br />
There are a number of optimisations done at the STG level. These mainly involve trying to avoid unnecessary steps:<br />
<br />
* Avoid creating a thunk which immediately creates another thunk when executed; make it evaluate all the way down to a final result in one go. (If we 'need' the thunk's value, we're going to evaluate all the way down anyway, so let's leave out the overhead...)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-11T10:49:23Z<p>MathematicalOrchid: More content. Still not finished...</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program, and the extra sharing it introduces can keep large values alive longer than intended. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more; it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)<br />
<br />
== Execution Model ==<br />
<br />
In order to understand how to write efficient code, and what GHC does with your code to optimise it, it helps to know a bit about what your compiled code looks like and how it works.<br />
<br />
=== Graph reduction ===<br />
<br />
To a first approximation, at any moment your program is a 'graph' of objects in memory. ('Graph' in the graph theory sense --- nodes connected by arcs.) Some of the objects are 'data' --- booleans, integers, strings, lists, etc. Some are functions (because Haskell lets you pass functions around like data). And some are ''thunks'' --- unevaluated expressions (because Haskell only evaluates expressions 'as needed').<br />
<br />
The program starts off with a single node representing the unevaluated call to <hask>main</hask>, and proceeds to execute from there. Each time a thunk is executed, the result (whatever it is) overwrites the thunk data. (It's possible that the result of evaluating a thunk is a new thunk of course.)<br />
<br />
=== About STG ===<br />
<br />
GHC compiles to the ''spineless tagless G-machine'' (STG). (Finish me!)</div>MathematicalOrchidhttps://wiki.haskell.org/GHC_optimisationsGHC optimisations2007-08-11T10:31:14Z<p>MathematicalOrchid: Initial version; still working on this...</p>
<hr />
<div>[[Category:GHC]]<br />
<br />
== Introduction ==<br />
<br />
This page collects together information about the optimisations that GHC does and does ''not'' perform.<br />
<br />
* GHC experts: Please check that the info in this page is correct.<br />
* Everybody else: Feel free to add questions!<br />
<br />
== General optimisations ==<br />
<br />
=== Common subexpression elimination ===<br />
<br />
First of all, ''common subexpression elimination'' (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:<br />
<br />
<haskell><br />
foo x = (bar x) * (bar x)<br />
</haskell><br />
<br />
might be transformed into<br />
<br />
<haskell><br />
foo x = let x' = bar x in x' * x'<br />
</haskell><br />
<br />
thus, the <hask>bar</hask> function is only called once. (And if <hask>bar</hask> is a particularly expensive function, this might save quite a lot of work.)<br />
<br />
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program, and the extra sharing it introduces can keep large values alive longer than intended. So GHC ''does'' do CSE, but only in specific circumstances --- see the GHC manual. (Section??)<br />
<br />
Long story short: "If you care about CSE, do it by hand."<br />
<br />
=== Inlining ===<br />
<br />
Inlining is where a function call is replaced by that function's definition. For example, the standard <hask>map</hask> function can be defined as<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
map f [] = []<br />
map f (x:xs) = f x : map f xs<br />
</haskell><br />
<br />
Now if you write something like<br />
<br />
<haskell><br />
foo = map bar<br />
</haskell><br />
<br />
it's possible that the compiler might ''inline'' the definition of <hask>map</hask>, yielding something like<br />
<br />
<haskell><br />
foo [] = []<br />
foo (x:xs) = bar x : foo xs<br />
</haskell><br />
<br />
which is (hopefully!) faster, because it doesn't involve a call to the <hask>map</hask> function any more; it just does the work directly. (This might also expose new optimisation opportunities; <hask>map</hask> works for ''any'' types, whereas <hask>foo</hask> probably works for only ''one'' type.)<br />
<br />
So, that's what inlining is. By default, GHC will inline things if they are 'small enough'. Every time you inline a function, you are in a sense making a (customised) ''copy'' of that function. Do too much of this and the compiled program will be enormous. So it's only worth it for 'small' functions.<br />
<br />
(How does GHC determine 'small'? Isn't there a switch that adjusts this?)</div>MathematicalOrchidhttps://wiki.haskell.org/Rank-N_typesRank-N types2007-07-09T12:40:01Z<p>MathematicalOrchid: +category</p>
<hr />
<div>[[Category:Language extensions]]<br />
[[Category:Stub articles]]<br />
<br />
== About ==<br />
<br />
As best as I can tell, rank-N types are exactly like [[existential type]]s - except that they're completely different.<br />
<br />
Rank-2 types are a special case of rank-N types, and normal Haskell 98 types are all rank-1 types.<br />
<br />
== Also see ==<br />
<br />
[http://hackage.haskell.org/trac/haskell-prime/wiki/RankNTypes Rank-N types] on the Haskell' website.</div>MathematicalOrchidhttps://wiki.haskell.org/Existential_typeExistential type2007-07-09T12:37:44Z<p>MathematicalOrchid: Corrected invalid syntax. (I wonder if anybody ever checks this...?)</p>
<hr />
<div>__TOC__<br />
This is an extension of Haskell available in [[GHC]]. See the GHC documentation:<br />
http://www.haskell.org/ghc/docs/latest/html/users_guide/type-extensions.html<br />
<br />
==Introduction to existential types==<br />
<br />
=== Overview ===<br />
<br />
Normally when creating a new type using <hask>type</hask>, <hask>newtype</hask>, <hask>data</hask>, etc., every type variable that appears on the right-hand side must also appear on the left-hand side. Existential types are a way of turning this off.<br />
<br />
=== Basics ===<br />
<br />
Existential types can be ''used'' for several different purposes. But what they ''do'' is to 'hide' a type variable on the right-hand side.<br />
<br />
Normally, any type variable appearing on the right must also appear on the left:<br />
<br />
<haskell><br />
data Worker x y = Worker {buffer :: b, input :: x, output :: y}<br />
</haskell><br />
<br />
This is an error, since the type of the buffer isn't specified on the right (it's a type variable rather than a type) but also isn't specified on the left (there's no 'b' in the left part). In Haskell98, you would have to write<br />
<br />
<haskell><br />
data Worker b x y = Worker {buffer :: b, input :: x, output :: y}<br />
</haskell><br />
<br />
That may or may not be an actual problem.<br />
<br />
Usually there is no problem at all with this state of affairs (which is why Haskell98 works this way). However, suppose that a <hask>Worker</hask> can use ''any'' type 'b' so long as it belongs to some particular class. Then every function that uses a <hask>Worker</hask> will have a type like<br />
<br />
<haskell><br />
foo :: (Buffer b) => Worker b Int Int<br />
</haskell><br />
<br />
or something. (In particular, failing to write an explicit type signature will invoke the dreaded [[monomorphism restriction]].) Using existential types, we can avoid this:<br />
<br />
<haskell><br />
data Worker x y = forall b. Buffer b => Worker {buffer :: b, input :: x, output :: y}<br />
<br />
foo :: Worker Int Int<br />
</haskell><br />
<br />
The type of the buffer now does ''not'' appear in the <hask>Worker</hask> type at all.<br />
<br />
This has a number of consequences. First of all, it is now impossible for a function to demand a <hask>Worker</hask> having a specific type of buffer. Second, the type of <hask>foo</hask> can now be derived automatically without needing an explicit type signature. (No [[monomorphism restriction]].) Thirdly, since code now has ''no idea'' what type the <hask>buffer</hask> function returns, you are more limited in what you can do to it.<br />
<br />
In general, when you use a 'hidden' type in this way, you will usually want that type to belong to a specific class, or you will want to pass some functions along that can work on that type. Otherwise you'll have some value belonging to a random unknown type, and you won't be able to ''do'' anything to it!<br />
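<br />
For instance, continuing the <hask>Worker</hask> idea as a minimal compilable sketch (the one-method <hask>Buffer</hask> class and the <hask>ByteBuf</hask> type are invented for illustration), the only thing code can do with the hidden buffer is call class methods on it:<br />
<br />
<haskell><br />
{-# LANGUAGE ExistentialQuantification #-}<br />
<br />
-- An invented class and instance, purely for illustration:<br />
class Buffer b where<br />
  size :: b -> Int<br />
<br />
newtype ByteBuf = ByteBuf [Int]<br />
instance Buffer ByteBuf where<br />
  size (ByteBuf xs) = length xs<br />
<br />
data Worker x y = forall b. Buffer b => Worker b x y<br />
<br />
-- The buffer's concrete type is hidden, so all we can do<br />
-- with it is call Buffer class methods:<br />
bufferSize :: Worker x y -> Int<br />
bufferSize (Worker b _ _) = size b<br />
</haskell><br />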
<br />
Note: You can use existential types to convert a more specific type into a less specific one. (See the examples below.) There is ''no way'' to perform the reverse conversion!<br />
<br />
== Examples ==<br />
<br />
===A short example===<br />
<br />
This illustrates creating a heterogeneous list, all of whose members implement "Show", and progressing through that list to show these items:<br />
<br />
<haskell><br />
data Obj = forall a. (Show a) => Obj a<br />
<br />
xs = [Obj 1, Obj "foo", Obj 'c']<br />
<br />
doShow :: [Obj] -> String<br />
doShow [] = ""<br />
doShow ((Obj x):xs) = show x ++ doShow xs<br />
</haskell><br />
<br />
With output: <code>doShow xs ==> "1\"foo\"'c'"</code><br />
<br />
===Expanded example - rendering objects in a raytracer===<br />
<br />
====Problem statement====<br />
<br />
In a raytracer, a requirement is to be able to render several different objects (like a ball, mesh or whatever). The first step is a type class for Renderable like so:<br />
<br />
<haskell><br />
class Renderable a where<br />
boundingSphere :: a -> Sphere<br />
hit :: a -> [Fragment] -- returns the "fragments" of all hits with ray<br />
{- ... etc ... -}<br />
</haskell><br />
<br />
To solve the problem, the <hask>hit</hask> function must apply to several objects (like a sphere and a polygon for instance).<br />
<br />
<haskell><br />
hits :: Renderable a => [a] -> [Fragment]<br />
hits xs = sortByDistance $ concatMap hit xs<br />
</haskell><br />
<br />
However, this does not work as written since the elements of the list can be of '''SEVERAL''' different types (like a sphere and a polygon and a mesh etc. etc.) but<br />
lists need to have elements of the same type.<br />
<br />
====The solution====<br />
<br />
Use 'existential types' - an extension to Haskell that can be found in most compilers.<br />
<br />
The following example is based on GHC:<br />
<br />
<haskell><br />
{-# OPTIONS -fglasgow-exts #-}<br />
<br />
{- ...-}<br />
<br />
data AnyRenderable = forall a. Renderable a => AnyRenderable a<br />
<br />
instance Renderable AnyRenderable where<br />
boundingSphere (AnyRenderable a) = boundingSphere a<br />
hit (AnyRenderable a) = hit a<br />
{- ... -}<br />
</haskell><br />
<br />
Now, create lists with type <hask>[AnyRenderable]</hask>, for example,<br />
<haskell><br />
[ AnyRenderable x<br />
, AnyRenderable y<br />
, AnyRenderable z ]<br />
</haskell><br />
where x, y, z can be from different instances of <hask>Renderable</hask>.<br />
=== Dynamic dispatch mechanism of OOP ===<br />
<br />
'''Existential types''' in conjunction with type classes can be used to emulate the dynamic dispatch mechanism of object-oriented programming languages. To illustrate this concept, I show how a classic example from object-oriented programming can be encoded in Haskell.<br />
<br />
<haskell><br />
class Shape_ a where<br />
perimeter :: a -> Double<br />
area :: a -> Double<br />
<br />
data Shape = forall a. Shape_ a => Shape a<br />
<br />
type Radius = Double<br />
type Side = Double<br />
<br />
data Circle = Circle Radius<br />
data Rectangle = Rectangle Side Side<br />
data Square = Square Side<br />
<br />
<br />
instance Shape_ Circle where<br />
perimeter (Circle r) = 2 * pi * r<br />
area (Circle r) = pi * r * r<br />
<br />
instance Shape_ Rectangle where<br />
perimeter (Rectangle x y) = 2*(x + y)<br />
area (Rectangle x y) = x * y<br />
<br />
instance Shape_ Square where<br />
perimeter (Square s) = 4*s<br />
area (Square s) = s*s<br />
<br />
instance Shape_ Shape where<br />
perimeter (Shape shape) = perimeter shape<br />
area (Shape shape) = area shape<br />
<br />
<br />
--<br />
-- Smart constructor<br />
--<br />
<br />
circle :: Radius -> Shape<br />
circle r = Shape (Circle r)<br />
<br />
rectangle :: Side -> Side -> Shape<br />
rectangle x y = Shape (Rectangle x y)<br />
<br />
square :: Side -> Shape<br />
square s = Shape (Square s)<br />
<br />
shapes :: [Shape]<br />
shapes = [circle 2.4, rectangle 3.1 4.4, square 2.1]<br />
</haskell><br />
<br />
(You may see other [[Smart constructors]] for other purposes).<br />
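<br />
Because each <hask>Shape</hask> value carries its own instance, a heterogeneous list can be processed uniformly through the class methods. The hypothetical <hask>totalArea</hask> function below sums the areas; it is shown as a condensed, self-contained sketch of the code above, keeping only <hask>area</hask>:<br />
<br />
<haskell><br />
{-# LANGUAGE ExistentialQuantification #-}<br />
<br />
class Shape_ a where<br />
  area :: a -> Double<br />
<br />
data Shape = forall a. Shape_ a => Shape a<br />
<br />
instance Shape_ Shape where<br />
  area (Shape s) = area s<br />
<br />
data Circle = Circle Double<br />
instance Shape_ Circle where<br />
  area (Circle r) = pi * r * r<br />
<br />
data Square = Square Double<br />
instance Shape_ Square where<br />
  area (Square s) = s * s<br />
<br />
-- Every element of the list dispatches to its own instance:<br />
totalArea :: [Shape] -> Double<br />
totalArea = sum . map area<br />
</haskell><br />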
<br />
=== [[Generalised algebraic datatype]] ===<br />
<br />
The type of the <hask>parse</hask> function for [[Generalised algebraic datatype#Motivating example|this GADT]] is a good example to illustrate the concept of existential type.<br />
<br />
==Alternate methods==<br />
===Concrete data types===<br />
====Universal instance of a Class====<br />
Here is one way to simulate existentials (Hawiki note: (Borrowed from somewhere...))<br />
<br />
<br />
Suppose I have a type class Shape a<br />
<haskell><br />
type Point = (Float,Float)<br />
<br />
class Shape a where<br />
draw :: a -> IO ()<br />
translate :: a-> Point -> a<br />
<br />
</haskell><br />
<br />
Then we can pack shapes up into a [[concrete data type]] like this:<br />
<haskell><br />
data SHAPE = SHAPE (IO ()) (Point -> SHAPE)<br />
</haskell><br />
with a function like this<br />
<haskell><br />
packShape :: Shape a => a -> SHAPE<br />
packShape s = SHAPE (draw s) (\(x,y) -> packShape (translate s (x,y)))<br />
</haskell><br />
This would be useful if we needed a list of shapes that we would need to translate and draw.<br />
<br />
In fact we can make <hask>SHAPE</hask> an instance of <hask>Shape</hask>:<br />
<haskell><br />
instance Shape SHAPE where<br />
draw (SHAPE d t) = d<br />
translate (SHAPE d t) = t<br />
</haskell><br />
<br />
So SHAPE is a sort of universal instance.<br />
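<br />
For example (with an invented <hask>Dot</hask> shape for illustration), a list of packed <hask>SHAPE</hask>s can be translated and drawn uniformly, since <hask>SHAPE</hask> itself is an instance of <hask>Shape</hask>:<br />
<br />
<haskell><br />
type Point = (Float, Float)<br />
<br />
class Shape a where<br />
  draw      :: a -> IO ()<br />
  translate :: a -> Point -> a<br />
<br />
data SHAPE = SHAPE (IO ()) (Point -> SHAPE)<br />
<br />
packShape :: Shape a => a -> SHAPE<br />
packShape s = SHAPE (draw s) (\p -> packShape (translate s p))<br />
<br />
instance Shape SHAPE where<br />
  draw      (SHAPE d _) = d<br />
  translate (SHAPE _ t) = t<br />
<br />
-- An invented concrete shape, for illustration:<br />
data Dot = Dot Point<br />
instance Shape Dot where<br />
  draw (Dot p) = print p<br />
  translate (Dot (x, y)) (dx, dy) = Dot (x + dx, y + dy)<br />
<br />
-- Translate every packed shape, then draw them all:<br />
moveAndDrawAll :: Point -> [SHAPE] -> IO ()<br />
moveAndDrawAll v shapes = mapM_ draw [ translate s v | s <- shapes ]<br />
</haskell><br />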
<br />
====Using constructors and combinators====<br />
Why bother with class <hask>Shape</hask>? Why not just go straight to<br />
<br />
<haskell><br />
data Shape = Shape {<br />
draw :: IO (),<br />
translate :: (Int, Int) -> Shape<br />
}<br />
</haskell><br />
<br />
Then you can create a library of shape [[constructor]]s and [[combinator]]s<br />
that each have defined "draw" and "translate" in their "where" clauses.<br />
<br />
<haskell><br />
circle :: (Int, Int) -> Int -> Shape<br />
circle (x,y) r =<br />
Shape draw1 translate1<br />
where<br />
draw1 = ...<br />
translate1 (x1,y1) = circle (x+x1, y+y1) r<br />
<br />
shapeGroup :: [Shape] -> Shape<br />
shapeGroup shapes = Shape draw1 translate1<br />
where<br />
draw1 = sequence_ $ map draw shapes<br />
translate1 v = shapeGroup $ map (translate v) shapes<br />
</haskell><br />
<br />
===Cases that really require existentials===<br />
<br />
There are cases where this sort of trick doesn't work. Here are two examples from a Haskell mailing list discussion (from K. Claussen) that don't seem expressible without<br />
existentials. (But maybe one can rethink the whole thing :)<br />
<haskell><br />
data Expr a = Val a | forall b . Apply (Expr (b -> a)) (Expr b)<br />
</haskell><br />
and<br />
<haskell><br />
data Action = forall b . Act (IORef b) (b -> IO ())<br />
</haskell><br />
(Maybe this last one could be done as a <hask>type Act (IORef b) (IORef b -> IO ())</hask>; then we could hide the <hask>IORef</hask> as above, that is, go ahead and apply the second argument to the first.)<br />
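<br />
For reference, here is a small runnable sketch of the <hask>Action</hask> type: the hidden type never escapes, because the continuation consumes exactly what the <hask>IORef</hask> yields. (<hask>runAction</hask> is our own name for the driver, not something from the discussion.)<br />
<br />
<haskell><br />
{-# LANGUAGE ExistentialQuantification #-}<br />
import Data.IORef<br />
<br />
data Action = forall b. Act (IORef b) (b -> IO ())<br />
<br />
-- The element type is hidden, but reading the ref and applying<br />
-- the continuation still type-checks: both agree on 'b'.<br />
runAction :: Action -> IO ()<br />
runAction (Act ref k) = readIORef ref >>= k<br />
<br />
main :: IO ()<br />
main = do<br />
  r <- newIORef (42 :: Int)<br />
  runAction (Act r print)   -- prints 42<br />
</haskell><br />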
<br />
== Examples from the [http://www.cs.uu.nl/wiki/Ehc/ Essential Haskell Compiler] project ==<br />
<br />
See the [http://www.cs.uu.nl/wiki/Ehc/#On_EHC documentation on EHC], each paper at the ''Version 4'' part:<br />
* Chapter 8 (EH4) of Atze Dijkstra's [http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-book.pdf Essential Haskell PhD thesis] (most recent version). A detailed explanation. It explains also that existential types can be expressed in Haskell, but their use is restricted to data declarations, and the notation (using keyword <hask>forall</hask>) may be confusing. In Essential Haskell, existential types can occur not only in data declarations, and a separate keyword <hask>exists</hask> is used for their notation.<br />
* [http://www.cs.uu.nl/wiki/pub/Ehc/WebHome/20050107-eh-intro.pdf Essential Haskell Compiler overview]<br />
* [http://www.cs.uu.nl/wiki/Ehc/Examples#EH_4_forall_and_exists_everywher Examples]<br />
<br />
==See also==<br />
* A mailing list discussion: http://haskell.org/pipermail/haskell-cafe/2003-October/005231.html<br />
* An example of encoding existentials using RankTwoPolymorphism: http://haskell.org/pipermail/haskell-cafe/2003-October/005304.html<br />
=== Trac ===<br />
<br />
[http://hackage.haskell.org/trac/haskell-prime/wiki/ExistentialQuantification Existential Quantification] is a detailed material on the topic. It has link also to the smaller [http://hackage.haskell.org/trac/haskell-prime/wiki/ExistentialQuantifier Existential Quantifier] page.<br />
<br />
[[Category:Idioms]]<br />
[[Category:Glossary]]<br />
[[Category:Language extensions]]</div>MathematicalOrchidhttps://wiki.haskell.org/User:MathematicalOrchidUser:MathematicalOrchid2007-07-09T12:36:03Z<p>MathematicalOrchid: </p>
<hr />
<div>=== Status ===<br />
<br />
Enthusiastic Haskell newbie.<br />
<br />
=== Main Interests ===<br />
<br />
* Using Haskell to write triposcopic mathematical algorithms with only a tiny amount of code.<br />
* Using Haskell to do seriously compute-bounded work in a multiprocessor setup.<br />
<br />
=== Projects ===<br />
<br />
==== Active ====<br />
<br />
* Toy compression implementations in Haskell.<br />
<br />
==== On Hold ====<br />
<br />
* ''Indoculate'' &mdash; Program to convert a single (custom) source to both HTML and LaTeX, and also do cross-linking. (Status: in production use)<br />
* ''Chaos'' &mdash; chaos pendulum simulator (Status: moderately working, needs UI)<br />
* ''Haktal'' &mdash; fractal generator. (Status: minimal functionality)<br />
* ''HoJ'' &mdash; Haskell to Java compiler. (Status: skeletal)<br />
* ''Evlor'' &mdash; Interactive Haskell step-line debugger. (Status: skeletal)<br />
* Sorting algorithm benchmarks.<br />
* Audio DSP in Haskell.<br />
* [[POV-Ray SDL project|Haskell SDL]] for [http://www.povray.org/ POV-Ray].<br />
<br />
==== Failed ====<br />
<br />
* Haskell ray tracer.<br />
* Haskell type deducer.<br />
* Haskell program to cause world peace.<br />
<br />
=== Darcs ===<br />
<br />
==== Indoculate ====<br />
<br />
* <code>darcs get http://www.orphi.me.uk/darcs/Indoculate</code><br />
* <code>ghc --make MakeHTML</code><br />
* <code>ghc --make MakeSite</code><br />
* <code>ghc --make MakeLaTeX</code><br />
* Comes with a minimal manual. (<code>Manual.html</code> in the darcs repo.)<br />
<br />
==== Chaos pendulum simulator ====<br />
<br />
* <code>darcs get http://www.orphi.me.uk/darcs/Chaos</code> (Chaos pendulum simulator.)<br />
* <code>ghc -O2 --make System1</code><br />
* <code>System1</code><br />
* Go have a cup of tea, watch some TV, go to bed, come back next day, and it might have finished. Will draw 500 frames at 200x200 pixels each, and save them as PPM image files. Make an animation out of these, and enjoy the light show!<br />
<br />
==== Toy Compression ====<br />
<br />
* <code>darcs get http://www.orphi.me.uk/darcs/ToyCompression</code><br />
* <code>ghc -O2 --make Encode</code><br />
* <code>ghc -O2 --make Decode</code><br />
* <code>Encode algorithm file</code> (Compress <code>file</code> using specified algorithm, and save as <code>file-algorithm</code>.)<br />
* <code>Decode algorithm file</code> (Decompress <code>file</code> using specified algorithm, and save as <code>file-unalgorithm</code>.)<br />
<br />
Currently working algorithms:<br />
* '<code>RLE</code>': ''Run-length encoding''. Works well on files containing lots of 'runs' of the same value - e.g., pixel data. Works horribly on text.<br />
* '<code>BWT</code>': ''Burrows-Wheeler transform''. Doesn't actually do any compression, but tends to make data more compressible.<br />
* '<code>MTF</code>': ''Move-to-front encoding''. Again, doesn't compress, but makes the data more compressible.<br />
* '<code>Fib</code>': ''Fibonacci codes''. Low numbers take up fewer bits than large numbers.<br />
* '<code>LZW</code>':'' Lempel-Ziv-Welch''. Works well on just about everything!<br />
<br />
Notes:<br />
* Danger: BWT is ''extremely'' slow. It also uses ''absurd'' amounts of RAM! Run this algorithm only on small files. (Less than about 10 KB.)<br />
* LZW works very well, but BWT+MTF+Fib is currently unbeaten...<br />
<br />
=== Contributed Code ===<br />
<br />
* [[Library for binary]]<br />
* [[Library for vectors]]<br />
* [[Library for colours]]<br />
* [[Library for PPM images]]<br />
* [[Toy compression implementations]]<br />
<br />
=== Current Unsolved Questions ===<br />
<br />
* Why do Haskell language extensions exist?<br />
* How do you do graphics in Haskell?<br />
* How come (e.g.) Smalltalk provides 27 different types of collection, but Haskell only ever involves singly-linked lists and binary trees?<br />
* Why is <hask>putStr xs1; putStr xs2</hask> slower than <hask>putStr (xs1 ++ xs2)</hask>?</div>MathematicalOrchidhttps://wiki.haskell.org/User:MathematicalOrchidUser:MathematicalOrchid2007-07-09T12:31:05Z<p>MathematicalOrchid: </p>
<hr />
<div>=== Status ===<br />
<br />
Enthusiastic Haskell newbie.<br />
<br />
=== Main Interests ===<br />
<br />
* Using Haskell to write triposcopic mathematical algorithms with only a tiny amount of code.<br />
* Using Haskell to do seriously compute-bounded work in a multiprocessor setup.<br />
<br />
=== Projects ===<br />
<br />
==== Active ====<br />
<br />
* Toy compression implementations in Haskell.<br />
<br />
==== On Hold ====<br />
<br />
* ''Indoculate'' &mdash; Program to convert a single (custom) source to both HTML and LaTeX, and also do cross-linking. (Status: in production use)<br />
* ''Chaos'' &mdash; chaos pendulum simulator (Status: moderately working, needs UI)<br />
* ''Haktal'' &mdash; fractal generator. (Status: minimal functionality)<br />
* ''HoJ'' &mdash; Haskell to Java compiler. (Status: skeletal)<br />
* ''Evlor'' &mdash; Interactive Haskell step-line debugger. (Status: skeletal)<br />
* Sorting algorithm benchmarks.<br />
* Audio DSP in Haskell.<br />
* [[POV-Ray SDL project|Haskell SDL]] for [http://www.povray.org/ POV-Ray].<br />
<br />
==== Failed ====<br />
<br />
* Haskell ray tracer.<br />
* Haskell type deducer.<br />
* Haskell program to cause world peace.<br />
<br />
=== Darcs ===<br />
<br />
==== Indoculate ====<br />
<br />
* <code>darcs get http://www.orphi.me.uk/darcs/Indoculate</code><br />
* <code>ghc --make MakeHTML</code><br />
* <code>ghc --make MakeSite</code><br />
* <code>ghc --make MakeLaTeX</code><br />
<br />
==== Chaos pendulum simulator ====<br />
<br />
* <code>darcs get http://www.orphi.me.uk/darcs/Chaos</code> (Chaos pendulum simulator.)<br />
<br />
==== Toy Compression ====<br />
<br />
* <code>darcs get http://www.orphi.me.uk/darcs/ToyCompression</code><br />
* <code>ghc -O2 --make Encode</code><br />
* <code>ghc -O2 --make Decode</code><br />
* <code>Encode algorithm file</code> (Compress <code>file</code> using specified algorithm, and save as <code>file-algorithm</code>.)<br />
* <code>Decode algorithm file</code> (Decompress <code>file</code> using specified algorithm, and save as <code>file-unalgorithm</code>.)<br />
<br />
Currently working algorithms:<br />
* '<code>RLE</code>': ''Run-length encoding''. Works well on files containing lots of 'runs' of the same value - e.g., pixel data. Works horribly on text.<br />
* '<code>BWT</code>': ''Burrows-Wheeler transform''. Doesn't actually do any compression, but tends to make data more compressible.<br />
* '<code>MTF</code>': ''Move-to-front encoding''. Again, doesn't compress, but makes the data more compressible.<br />
* '<code>Fib</code>': ''Fibonacci codes''. Low numbers take up fewer bits than large numbers.<br />
* '<code>LZW</code>':'' Lempel-Ziv-Welch''. Works well on just about everything!<br />
<br />
Notes:<br />
* Danger: BWT is ''extremely'' slow. It also uses ''absurd'' amounts of RAM! Run this algorithm only on small files. (Less than about 10 KB.)<br />
* LZW works very well, but BWT+MTF+Fib is currently unbeaten...<br />
<br />
=== Contributed Code ===<br />
<br />
* [[Library for binary]]<br />
* [[Library for vectors]]<br />
* [[Library for colours]]<br />
* [[Library for PPM images]]<br />
* [[Toy compression implementations]]<br />
<br />
=== Current Unsolved Questions ===<br />
<br />
* Why do Haskell language extensions exist?<br />
* How do you do graphics in Haskell?<br />
* How come (e.g.) Smalltalk provides 27 different types of collection, but Haskell only ever involves singly-linked lists and binary trees?<br />
* Why is <hask>putStr xs1; putStr xs2</hask> slower than <hask>putStr (xs1 ++ xs2)</hask>?</div>MathematicalOrchidhttps://wiki.haskell.org/User_talk:MathematicalOrchidUser talk:MathematicalOrchid2007-07-09T12:07:01Z<p>MathematicalOrchid: </p>
<hr />
<div></div>MathematicalOrchidhttps://wiki.haskell.org/Rank-N_typesRank-N types2007-07-09T12:05:49Z<p>MathematicalOrchid: +category</p>
<hr />
<div>[[Category:Language extensions]]<br />
<br />
== About ==<br />
<br />
As best as I can tell, rank-N types are exactly like [[existential type]]s - except that they're completely different.<br />
<br />
Rank-2 types are a special case of rank-N types, and normal Haskell 98 types are all rank-1 types.<br />
<br />
== Also see ==<br />
<br />
[http://hackage.haskell.org/trac/haskell-prime/wiki/RankNTypes Rank-N types] on the Haskell' website.</div>MathematicalOrchidhttps://wiki.haskell.org/Rank-N_typesRank-N types2007-07-09T12:05:20Z<p>MathematicalOrchid: Can somebody add some content here?</p>
<hr />
<div>== About ==<br />
<br />
As best as I can tell, rank-N types are exactly like [[existential type]]s - except that they're completely different.<br />
<br />
Rank-2 types are a special case of rank-N types, and normal Haskell 98 types are all rank-1 types.<br />
<br />
== Also see ==<br />
<br />
[http://hackage.haskell.org/trac/haskell-prime/wiki/RankNTypes Rank-N types] on the Haskell' website.</div>MathematicalOrchidhttps://wiki.haskell.org/TypeType2007-07-09T12:01:06Z<p>MathematicalOrchid: Added link to the language extensions category. (Clumsy, but works.)</p>
<hr />
<div>In Haskell, '''types''' are how you describe the data your program will work with.<br />
<br />
[[Category:Language]]<br />
<br />
==Data declarations==<br />
<br />
One introduces, or declares, a type in Haskell via the <code>data</code> statement. In general a data declaration looks like:<br />
<br />
 data [context =>] type tv1 ... tvi = con1 c1t1 c1t2 ... c1tn |<br />
... | conm cmt1 ... cmtq<br />
[deriving]<br />
<br />
which probably explains nothing if you don't already know Haskell!<br />
<br />
The essence of the above statement is that you use the keyword <hask>data</hask>,<br />
supply an optional context, give the type name and a variable number of <br />
[[type variable]]s. This is then followed by a variable number of [[constructor]]s, each of which has a list of [[type variable]]s or [[type constant]]s. At the end, there is an optional <code>deriving</code>.<br />
<br />
There are a number of other subtleties associated with this, such as requiring<br />
parameters to the data constructors to be [[eager]], what [[class]]es are<br />
allowed in the [[deriving]], use of [[field]] names in the constructors<br />
and what the [[context]] actually does. Please refer to the specific articles for more on each of those.<br />
<br />
Let's look at some examples. The Haskell standard data type [[Maybe]] is typically declared as:<br />
<haskell><br />
data Maybe a = Just a | Nothing<br />
</haskell><br />
What this means is that the type '''Maybe''' has one type variable, represented by the ''a'' and two [[constructor]]s '''Just''' and '''Nothing'''. (Note that Haskell requires type names and constructor names to begin with an uppercase letter). The '''Just''' constructor takes one parameter, ''a''.<br />
<br />
As another example, consider binary [[Tree]]s. They could be represented by:<br />
<haskell><br />
data Tree a = Branch (Tree a) (Tree a) | Leaf a<br />
</haskell><br />
Here, one of the constructors, '''Branch''' of '''Tree''' takes two trees as<br />
parameters to the constructor, while '''Leaf''' takes the type variable ''a''. This type of recursion is a very common [[:Category:Idioms |pattern]] in Haskell.<br />
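<br />
Functions over such a type usually mirror its recursive structure, with one equation per constructor. For example, a (hypothetical) leaf-counting function:<br />
<br />
<haskell><br />
data Tree a = Branch (Tree a) (Tree a) | Leaf a<br />
<br />
-- One equation per constructor, recursing where the type recurses:<br />
leaves :: Tree a -> Int<br />
leaves (Leaf _)     = 1<br />
leaves (Branch l r) = leaves l + leaves r<br />
</haskell><br />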
<br />
==Type and newtype==<br />
<br />
The other two ways one may introduce types to Haskell programs are via the<br />
<hask>type</hask> and <hask>newtype</hask> statements.<br />
<br />
<hask>type</hask> introduces a synonym for a type and uses the same data<br />
constructors. <hask>newtype</hask> introduces a renaming of a type and <br />
requires you to provide a new constructor.<br />
<br />
When using a <hask>type</hask> declaration, the type synonym and its base type<br />
are interchangeable almost everywhere (there are some restrictions when dealing with [[instance]] declarations). For example, if you had the declaration:<br />
<haskell><br />
type Name = String<br />
</haskell><br />
then any [[function]] you had declared that had <hask>String</hask> in its<br />
signature could be used on any element of type <code>Name</code>.<br />
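<br />
For example (using an invented <hask>greet</hask> function): a plain <hask>String</hask> is accepted wherever a <hask>Name</hask> is expected, and vice versa, because the two names denote the same type.<br />
<br />
<haskell><br />
type Name = String<br />
<br />
greet :: Name -> String<br />
greet n = "Hello, " ++ n<br />
<br />
-- A String literal is fine where a Name is expected:<br />
greeting :: String<br />
greeting = greet "world"<br />
</haskell><br />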
<br />
However, if one had the declaration:<br />
<haskell> <br />
newtype FirstName = FirstName String<br />
</haskell><br />
this would no longer be the case. Functions would have to be declared that <br />
actually were defined on '''FirstName'''. Often, one creates a deconstructor<br />
at the same time, which helps alleviate this requirement, e.g.:<br />
<haskell><br />
unFirstName :: FirstName -> String<br />
unFirstName (FirstName s) = s<br />
</haskell><br />
This is often done by the use of [[field]]s in the <code>newtype</code>. (Note<br />
that many consider the Haskell field implementation sub-optimal, while <br />
others use it extensively. See [[Programming guidelines]] and [[Future of Haskell]])<br />
<br />
==A simple example==<br />
<br />
Suppose you want to create a program to play bridge. You need something to represent cards. Here is one way to do that.<br />
<br />
First, create data types for the suit and card number.<br />
<haskell><br />
data Suit = Club | Diamond | Heart | Spade<br />
deriving (Read, Show, Enum, Eq, Ord)<br />
<br />
data CardValue = Two | Three | Four<br />
| Five | Six | Seven | Eight | Nine | Ten <br />
| Jack | Queen | King | Ace<br />
deriving (Read, Show, Enum, Eq, Ord)<br />
</haskell><br />
Each of these uses a [[deriving]] clause to allow us to convert them from / to [[String]] and Int, and to test them for equality and ordering. With types like this, <br />
where there are no [[type variable]]s, equality is decided by which constructor is used, and ordering follows the order in which the constructors are written. e.g. <code>Three</code> is less than <code>Queen</code>.<br />
<br />
Now we define an actual <code>Card</code><br />
<haskell> <br />
data Card = Card {value::CardValue, <br />
suit::Suit}<br />
deriving (Read, Show, Eq)<br />
</haskell><br />
In this definition, we use [[field]]s, which give us ready made functions to<br />
access the two parts of a <code>Card</code>. Again, [[type variables]] were not<br />
used, but the data [[constructor]] requires its two parameters to be of<br />
specific types, <code>CardValue</code> and <code>Suit</code>.<br />
<br />
The deriving clause here only specifies three of our desired [[Class]]es; we supply [[instance]] declarations for [[Ord]] and [[Enum]] ourselves.<br />
<haskell><br />
instance Ord Card where<br />
compare c1 c2 | (value c1 == (value c2)) = compare (suit c1) (suit c2)<br />
| otherwise = compare (value c1) (value c2)<br />
<br />
instance Enum Card where<br />
toEnum n = Card (toEnum (n `div` 4)) (toEnum (n `mod` 4))<br />
fromEnum c = 4*(fromEnum (value c)) + (fromEnum (suit c))<br />
</haskell><br />
Finally, we alias the type <code>Deck</code> to a list of <code>Card</code>s<br />
and populate the deck with a [[list comprehension]] <br />
<haskell><br />
type Deck = [Card]<br />
<br />
deck::Deck<br />
deck = [Card val su | val <- [Two .. Ace], su <- [Club .. Spade]]<br />
</haskell><br />
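<br />
As a quick sanity check, the comprehension pairs each of the 13 card values with each of the 4 suits, so <hask>length deck</hask> is 52. A condensed, self-contained version of the declarations:<br />
<br />
<haskell><br />
data Suit = Club | Diamond | Heart | Spade<br />
  deriving (Read, Show, Enum, Eq, Ord)<br />
<br />
data CardValue = Two | Three | Four | Five | Six | Seven<br />
               | Eight | Nine | Ten | Jack | Queen | King | Ace<br />
  deriving (Read, Show, Enum, Eq, Ord)<br />
<br />
data Card = Card CardValue Suit deriving (Read, Show, Eq)<br />
<br />
deck :: [Card]<br />
deck = [Card val su | val <- [Two .. Ace], su <- [Club .. Spade]]<br />
-- 13 values x 4 suits = 52 cards<br />
</haskell><br />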
<br />
==Please add==<br />
<br />
Further illustrative examples would be most appreciated.<br />
<br />
==See also==<br />
Read the (wanted) articles about data [[constructor]]s and [[class]]es. The<br />
[http://haskell.org/definition/haskell98-report.pdf Haskell 98 report] and<br />
your chosen implementation (e.g. [[GHC/Documentation]]) also have the latest word.<br />
<br />
*[http://www.haskell.org/haskellwiki/Category:Language_extensions Language extensions] - many language extensions are to do with changes to the type system.<br />
*[[Smart constructors]] shows some interesting examples including a non-trivial usage of <code>newtype</code>.<br />
*[[Unboxed type]] shows ways to have values closer to the bare metal :).<br />
*[[Phantom type]] discusses types without constructors.<br />
*[[Type witness]] gives an example of [[Generalised algebraic datatype | GADTs]], a [[GHC]] extension.<br />
*[[Existential type]] shows how to implement a common O-O programming paradigm.<br />
*[[Type arithmetic]] implements the [[Peano numbers]].<br />
*[[Reified type]], [[Non-trivial type synonyms]], [[Abstract data type]], [[Concrete data type]], [[Algebraic data type]].<br />
*[[Research_papers/Type_systems]] allow the curious to delve deeper.</div>MathematicalOrchidhttps://wiki.haskell.org/Existential_typeExistential type2007-07-05T09:20:16Z<p>MathematicalOrchid: Idiot-features here doesn't know left from right... >_<</p>
<hr />
<div>__TOC__<br />
This is an extension of Haskell available in [[GHC]]. See the GHC documentation:<br />
http://www.haskell.org/ghc/docs/latest/html/users_guide/type-extensions.html<br />
<br />
==Introduction to existential types==<br />
<br />
=== Overview ===<br />
<br />
Normally when creating a new type using <hask>type</hask>, <hask>newtype</hask>, <hask>data</hask>, etc., every type variable that appears on the right-hand side must also appear on the left-hand side. Existential types are a way of turning this off.<br />
<br />
=== Basics ===<br />
<br />
Existential types can be ''used'' for several different purposes. But what they ''do'' is to 'hide' a type variable on the right-hand side.<br />
<br />
Normally, any type variable appearing on the right must also appear on the left:<br />
<br />
<haskell><br />
data Worker x y = Worker {buffer :: b, input :: x, output :: y}<br />
</haskell><br />
<br />
This is an error, since the type of the buffer isn't specified on the right (it's a type variable rather than a type) but also isn't specified on the left (there's no 'b' in the left part). In Haskell98, you would have to write<br />
<br />
<haskell><br />
data Worker b x y = Worker {buffer :: b, input :: x, output :: y}<br />
</haskell><br />
<br />
That may or may not be an actual problem.<br />
<br />
Usually there is no problem at all with this state of affairs (which is why Haskell98 works this way). However, suppose that a <hask>Worker</hask> can use ''any'' type 'b' so long as it belongs to some particular class. Then every function that uses a <hask>Worker</hask> will have a type like<br />
<br />
<haskell><br />
foo :: (Buffer b) => Worker b Int Int<br />
</haskell><br />
<br />
or something. (In particular, failing to write an explicit type signature will invoke the dreaded [[monomorphism restriction]].) Using existential types, we can avoid this:<br />
<br />
<haskell><br />
data Worker x y = forall b. Buffer b => Worker {buffer :: b, input :: x, output :: y}<br />
<br />
foo :: Worker Int Int<br />
</haskell><br />
<br />
The type of the buffer now does ''not'' appear in the <hask>Worker</hask> type at all.<br />
<br />
This has a number of consequences. First of all, it is now impossible for a function to demand a <hask>Worker</hask> having a specific type of buffer. Second, the type of <hask>foo</hask> can now be derived automatically without needing an explicit type signature. (No [[monomorphism restriction]].) Thirdly, since code now has ''no idea'' what type the <hask>buffer</hask> function returns, you are more limited in what you can do to it.<br />
<br />
In general, when you use a 'hidden' type in this way, you will usually want that type to belong to a specific class, or you will want to pass some functions along that can work on that type. Otherwise you'll have some value belonging to a random unknown type, and you won't be able to ''do'' anything to it!<br />
<br />
Note: You can use existential types to convert a more specific type into a less specific one. (See the examples below.) There is ''no way'' to perform the reverse conversion!<br />
<br />
== Examples ==<br />
<br />
===A short example===<br />
<br />
This illustrates creating a heterogeneous list, all of whose members implement "Show", and progressing through that list to show these items:<br />
<br />
<haskell><br />
data Obj = forall a. (Show a) => Obj a<br />
<br />
xs = [Obj 1, Obj "foo", Obj 'c']<br />
<br />
doShow :: [Obj] -> String<br />
doShow [] = ""<br />
doShow ((Obj x):xs) = show x ++ doShow xs<br />
</haskell><br />
<br />
With output: <code>doShow xs ==> "1\"foo\"'c'"</code><br />
<br />
===Expanded example - rendering objects in a raytracer===<br />
<br />
====Problem statement====<br />
<br />
In a raytracer, a requirement is to be able to render several different objects (like a ball, mesh or whatever). The first step is a type class for Renderable like so:<br />
<br />
<haskell><br />
class Renderable a where<br />
boundingSphere :: a -> Sphere<br />
hit :: a -> [Fragment] -- returns the "fragments" of all hits with ray<br />
{- ... etc ... -}<br />
</haskell><br />
<br />
To solve the problem, the <hask>hit</hask> function must apply to several objects (like a sphere and a polygon for instance).<br />
<br />
<haskell><br />
hits :: Renderable a => [a] -> [Fragment]<br />
hits xs = sortByDistance $ concatMap hit xs<br />
</haskell><br />
<br />
However, this does not work as written since the elements of the list can be of '''SEVERAL''' different types (like a sphere and a polygon and a mesh etc. etc.) but<br />
lists need to have elements of the same type.<br />
<br />
====The solution====<br />
<br />
Use 'existential types' - an extension to Haskell that can be found in most compilers.<br />
<br />
The following example is based on GHC:<br />
<br />
<haskell><br />
{-# OPTIONS -fglasgow-exts #-}<br />
<br />
{- ...-}<br />
<br />
data AnyRenderable = forall a. Renderable a => AnyRenderable a<br />
<br />
instance Renderable AnyRenderable where<br />
boundingSphere (AnyRenderable a) = boundingSphere a<br />
hit (AnyRenderable a) = hit a<br />
{- ... -}<br />
</haskell><br />
<br />
Now, create lists with type <hask>[AnyRenderable]</hask>, for example,<br />
<haskell><br />
[ AnyRenderable x<br />
, AnyRenderable y<br />
, AnyRenderable z ]<br />
</haskell><br />
where x, y, z can be from different instances of <hask>Renderable</hask>.<br />
=== Dynamic dispatch mechanism of OOP ===<br />
<br />
'''Existential types''' in conjunction with type classes can be used to emulate the dynamic dispatch mechanism of object-oriented programming languages. To illustrate this concept, here is how a classic example from object-oriented programming can be encoded in Haskell.<br />
<br />
<haskell><br />
class Shape_ a where<br />
perimeter :: a -> Double<br />
area :: a -> Double<br />
<br />
data Shape = forall a. Shape_ a => Shape a<br />
<br />
type Radius = Double<br />
type Side = Double<br />
<br />
data Circle = Circle Radius<br />
data Rectangle = Rectangle Side Side<br />
data Square = Square Side<br />
<br />
<br />
instance Shape_ Circle where<br />
perimeter (Circle r) = 2 * pi * r<br />
area (Circle r) = pi * r * r<br />
<br />
instance Shape_ Rectangle where<br />
perimeter (Rectangle x y) = 2*(x + y)<br />
area (Rectangle x y) = x * y<br />
<br />
instance Shape_ Square where<br />
perimeter (Square s) = 4*s<br />
area (Square s) = s*s<br />
<br />
instance Shape_ Shape where<br />
perimeter (Shape shape) = perimeter shape<br />
area (Shape shape) = area shape<br />
<br />
<br />
--<br />
-- Smart constructor<br />
--<br />
<br />
circle :: Radius -> Shape<br />
circle r = Shape (Circle r)<br />
<br />
rectangle :: Side -> Side -> Shape<br />
rectangle x y = Shape (Rectangle x y)<br />
<br />
square :: Side -> Shape<br />
square s = Shape (Square s)<br />
<br />
shapes :: [Shape]<br />
shapes = [circle 2.4, rectangle 3.1 4.4, square 2.1]<br />
</haskell><br />
<br />
(You may see other [[Smart constructors]] for other purposes).<br />
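As a brief usage sketch (assuming the declarations above are in scope; the names <hask>totalArea</hask> and <hask>main</hask> are ours, purely for illustration), the heterogeneous <hask>shapes</hask> list can be processed uniformly through the <hask>Shape_ Shape</hask> instance:

```haskell
-- Every element of the heterogeneous list supports the Shape_ methods
-- via the existential wrapper, so ordinary list functions apply.
totalArea :: Double
totalArea = sum (map area shapes)

main :: IO ()
main = mapM_ (\s -> putStrLn (show (perimeter s) ++ " / " ++ show (area s))) shapes
```

This is exactly the dynamic-dispatch idiom: the call <hask>area s</hask> picks the right instance method for whichever concrete shape is hidden inside each <hask>Shape</hask>.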
<br />
=== [[Generalised algebraic datatype]] ===<br />
<br />
The type of the <hask>parse</hask> function for [[Generalised algebraic datatype#Motivating example|this GADT]] is a good example to illustrate the concept of existential type.<br />
<br />
==Alternate methods==<br />
===Concrete data types===<br />
====Universal instance of a Class====<br />
Here is one way to simulate existentials (HaWiki note: borrowed from somewhere...).<br />
<br />
<br />
Suppose I have a type class Shape a<br />
<haskell><br />
type Point = (Float,Float)<br />
<br />
class Shape a where<br />
draw :: a -> IO ()<br />
translate :: a -> Point -> a<br />
<br />
</haskell><br />
<br />
Then we can pack shapes up into a [[concrete data type]] like this:<br />
<haskell><br />
data SHAPE = SHAPE (IO ()) (Point -> SHAPE)<br />
</haskell><br />
with a function like this<br />
<haskell><br />
packShape :: Shape a => a -> SHAPE<br />
packShape s = SHAPE (draw s) (\(x,y) -> packShape (translate s (x,y)))<br />
</haskell><br />
This would be useful if we needed a list of shapes that we would need to translate and draw.<br />
<br />
In fact we can make <hask>SHAPE</hask> an instance of <hask>Shape</hask>:<br />
<haskell><br />
instance Shape SHAPE where<br />
draw (SHAPE d t) = d<br />
translate (SHAPE d t) = t<br />
</haskell><br />
<br />
So SHAPE is a sort of universal instance.<br />
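A short usage sketch (assuming the declarations above; the helper names are ours, for illustration only): a list of packed shapes can be drawn and translated without ever knowing their original types.

```haskell
-- Operate on packed shapes via the universal SHAPE instance.
drawAll :: [SHAPE] -> IO ()
drawAll = mapM_ draw

translateAll :: Point -> [SHAPE] -> [SHAPE]
translateAll p = map (\s -> translate s p)
```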
<br />
====Using constructors and combinators====<br />
Why bother with class <hask>Shape</hask>? Why not just go straight to<br />
<br />
<haskell><br />
data Shape = Shape {<br />
  draw :: IO (),<br />
  translate :: (Int, Int) -> Shape<br />
}<br />
</haskell><br />
<br />
Then you can create a library of shape [[constructor]]s and [[combinator]]s<br />
that each have defined "draw" and "translate" in their "where" clauses.<br />
<br />
<haskell><br />
circle :: (Int, Int) -> Int -> Shape<br />
circle (x,y) r =<br />
Shape draw1 translate1<br />
where<br />
draw1 = ...<br />
translate1 (x1,y1) = circle (x+x1, y+y1) r<br />
<br />
shapeGroup :: [Shape] -> Shape<br />
shapeGroup shapes = Shape draw1 translate1<br />
where<br />
draw1 = sequence_ $ map draw shapes<br />
translate1 v = shapeGroup $ map (translate v) shapes<br />
</haskell><br />
<br />
===Cases that really require existentials===<br />
<br />
There are cases where this sort of trick doesn't work. Here are two examples from a Haskell mailing list discussion (from K. Claussen) that don't seem expressible without<br />
existentials. (But maybe one can rethink the whole thing :)<br />
<haskell><br />
data Expr a = Val a | forall b . Apply (Expr (b -> a)) (Expr b)<br />
</haskell><br />
and<br />
<haskell><br />
data Action = forall b . Act (IORef b) (b -> IO ())<br />
</haskell><br />
(Maybe this last one could be done as <hask>Act (IORef b) (IORef b -> IO ())</hask>; then we could hide the <hask>IORef</hask> as above, that is, go ahead and apply the second argument to the first.)<br />
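For the <hask>Expr</hask> example, consuming the existential is still straightforward: inside each <hask>Apply</hask> the hidden type <hask>b</hask> is used consistently by both sub-expressions, so an evaluator type-checks. (The <hask>eval</hask> function below is our illustration, not taken from the cited discussion.)

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Expr a = Val a | forall b . Apply (Expr (b -> a)) (Expr b)

-- The hidden type 'b' never escapes: within each Apply, the function and
-- its argument agree on it, so evaluation is well typed.
eval :: Expr a -> a
eval (Val x)     = x
eval (Apply f x) = eval f (eval x)

-- eval (Apply (Val (+1)) (Val (41 :: Int)))  ==>  42
```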
<br />
== Examples from the [http://www.cs.uu.nl/wiki/Ehc/ Essential Haskell Compiler] project ==<br />
<br />
See the [http://www.cs.uu.nl/wiki/Ehc/#On_EHC documentation on EHC], each paper at the ''Version 4'' part:<br />
* Chapter 8 (EH4) of Atze Dijkstra's [http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-book.pdf Essential Haskell PhD thesis] (most recent version). A detailed explanation. It explains also that existential types can be expressed in Haskell, but their use is restricted to data declarations, and the notation (using keyword <hask>forall</hask>) may be confusing. In Essential Haskell, existential types can occur not only in data declarations, and a separate keyword <hask>exists</hask> is used for their notation.<br />
* [http://www.cs.uu.nl/wiki/pub/Ehc/WebHome/20050107-eh-intro.pdf Essential Haskell Compiler overview]<br />
* [http://www.cs.uu.nl/wiki/Ehc/Examples#EH_4_forall_and_exists_everywher Examples]<br />
<br />
==See also==<br />
* A mailinglist discussion: http://haskell.org/pipermail/haskell-cafe/2003-October/005231.html<br />
*An example of encoding existentials using RankTwoPolymorphism : http://haskell.org/pipermail/haskell-cafe/2003-October/005304.html<br />
=== Trac ===<br />
<br />
[http://hackage.haskell.org/trac/haskell-prime/wiki/ExistentialQuantification Existential Quantification] is detailed material on the topic. It also links to the smaller [http://hackage.haskell.org/trac/haskell-prime/wiki/ExistentialQuantifier Existential Quantifier] page.<br />
<br />
[[Category:Idioms]]<br />
[[Category:Glossary]]<br />
[[Category:Language extensions]]</div>MathematicalOrchidhttps://wiki.haskell.org/Existential_typeExistential type2007-07-05T09:16:50Z<p>MathematicalOrchid: Added an explanation that mere mortals can (hopefully!) understand.</p>
<hr />
<div>__TOC__<br />
<div>__TOC__<br />
This is an extension of Haskell available in [[GHC]]. See the GHC documentation:<br />
http://www.haskell.org/ghc/docs/latest/html/users_guide/type-extensions.html<br />
<br />
==Introduction to existential types==<br />
<br />
=== Overview ===<br />
<br />
Normally when creating a new type using <hask>type</hask>, <hask>newtype</hask>, <hask>data</hask>, etc., every type variable that appears on the left-hand side must also appear on the right-hand side. Existential types are a way of turning this off.<br />
<br />
=== Basics ===<br />
<br />
Existential types can be ''used'' for several different purposes. But what they ''do'' is to 'hide' a type variable on the right-hand side.<br />
<br />
Normally, any type variable appearing on the right must also appear on the left:<br />
<br />
<haskell><br />
data Worker x y = Worker {buffer :: b, input :: x, output :: y}<br />
</haskell><br />
<br />
This is an error, since the type of the buffer isn't specified on the right (it's a type variable rather than a type) but also isn't specified on the left (there's no 'b' in the left part). In Haskell98, you would have to write<br />
<br />
<haskell><br />
data Worker b x y = Worker {buffer :: b, input :: x, output :: y}<br />
</haskell><br />
<br />
That may or may not be an actual problem.<br />
<br />
Usually there is no problem at all with this state of affairs (which is why Haskell98 works this way). However, suppose that a <hask>Worker</hask> can use ''any'' type 'b' so long as it belongs to some particular class. Then every function that uses a <hask>Worker</hask> will have a type like<br />
<br />
<haskell><br />
foo :: (Buffer b) => Worker b Int Int<br />
</haskell><br />
<br />
or something. (In particular, failing to write an explicit type signature will invoke the dreaded [[monomorphism restriction]].) Using existential types, we can avoid this:<br />
<br />
<haskell><br />
data Worker x y = forall b. (Buffer b) => Worker {buffer :: b, input :: x, output :: y}<br />
<br />
foo :: Worker Int Int<br />
</haskell><br />
<br />
The type of the buffer now does ''not'' appear in the <hask>Worker</hask> type at all.<br />
<br />
This has a number of consequences. First, it is now impossible for a function to demand a <hask>Worker</hask> having a specific type of buffer. Second, the type of <hask>foo</hask> can now be derived automatically without needing an explicit type signature. (No [[monomorphism restriction]].) Third, since the code now has ''no idea'' what type the <hask>buffer</hask> function returns, you are more limited in what you can do with it.<br />
<br />
In general, when you use a 'hidden' type in this way, you will usually want that type to belong to a specific class, or you will want to pass some functions along that can work on that type. Otherwise you'll have some value belonging to a random unknown type, and you won't be able to ''do'' anything to it!<br />
<br />
Note: You can use existential types to convert a more specific type into a less specific one. (See the examples below.) There is ''no way'' to perform the reverse conversion!<br />
<br />
== Examples ==<br />
<br />
===A short example===<br />
<br />
This illustrates creating a heterogeneous list, all of whose members implement "Show", and progressing through that list to show these items:<br />
<br />
<haskell><br />
data Obj = forall a. (Show a) => Obj a<br />
<br />
xs = [Obj 1, Obj "foo", Obj 'c']<br />
<br />
doShow :: [Obj] -> String<br />
doShow [] = ""<br />
doShow ((Obj x):xs) = show x ++ doShow xs<br />
</haskell><br />
<br />
With output: <code>doShow xs ==> "1\"foo\"'c'"</code><br />
<br />
===Expanded example - rendering objects in a raytracer===<br />
<br />
====Problem statement====<br />
<br />
In a raytracer, a requirement is to be able to render several different objects (like a ball, mesh or whatever). The first step is a type class for Renderable like so:<br />
<br />
<haskell><br />
class Renderable a where<br />
boundingSphere :: a -> Sphere<br />
hit :: a -> [Fragment] -- returns the "fragments" of all hits with ray<br />
{- ... etc ... -}<br />
</haskell><br />
<br />
To solve the problem, the <hask>hit</hask> function must apply to several objects (like a sphere and a polygon for instance).<br />
<br />
<haskell><br />
hits :: Renderable a => [a] -> [Fragment]<br />
hits xs = sortByDistance $ concatMap hit xs<br />
</haskell><br />
<br />
However, this does not work as written since the elements of the list can be of '''SEVERAL''' different types (like a sphere and a polygon and a mesh etc. etc.) but<br />
lists need to have elements of the same type.<br />
<br />
====The solution====<br />
<br />
Use 'existential types' - an extension to Haskell that can be found in most compilers.<br />
<br />
The following example is based on GHC:<br />
<br />
<haskell><br />
{-# OPTIONS -fglasgow-exts #-}<br />
<br />
{- ...-}<br />
<br />
data AnyRenderable = forall a. Renderable a => AnyRenderable a<br />
<br />
instance Renderable AnyRenderable where<br />
boundingSphere (AnyRenderable a) = boundingSphere a<br />
hit (AnyRenderable a) = hit a<br />
{- ... -}<br />
</haskell><br />
<br />
Now, create lists with type <hask>[AnyRenderable]</hask>, for example,<br />
<haskell><br />
[ AnyRenderable x<br />
, AnyRenderable y<br />
, AnyRenderable z ]<br />
</haskell><br />
where x, y, z can be from different instances of <hask>Renderable</hask>.<br />
=== Dynamic dispatch mechanism of OOP ===<br />
<br />
'''Existential types''' in conjunction with type classes can be used to emulate the dynamic dispatch mechanism of object-oriented programming languages. To illustrate this concept, here is how a classic example from object-oriented programming can be encoded in Haskell.<br />
<br />
<haskell><br />
class Shape_ a where<br />
perimeter :: a -> Double<br />
area :: a -> Double<br />
<br />
data Shape = forall a. Shape_ a => Shape a<br />
<br />
type Radius = Double<br />
type Side = Double<br />
<br />
data Circle = Circle Radius<br />
data Rectangle = Rectangle Side Side<br />
data Square = Square Side<br />
<br />
<br />
instance Shape_ Circle where<br />
perimeter (Circle r) = 2 * pi * r<br />
area (Circle r) = pi * r * r<br />
<br />
instance Shape_ Rectangle where<br />
perimeter (Rectangle x y) = 2*(x + y)<br />
area (Rectangle x y) = x * y<br />
<br />
instance Shape_ Square where<br />
perimeter (Square s) = 4*s<br />
area (Square s) = s*s<br />
<br />
instance Shape_ Shape where<br />
perimeter (Shape shape) = perimeter shape<br />
area (Shape shape) = area shape<br />
<br />
<br />
--<br />
-- Smart constructor<br />
--<br />
<br />
circle :: Radius -> Shape<br />
circle r = Shape (Circle r)<br />
<br />
rectangle :: Side -> Side -> Shape<br />
rectangle x y = Shape (Rectangle x y)<br />
<br />
square :: Side -> Shape<br />
square s = Shape (Square s)<br />
<br />
shapes :: [Shape]<br />
shapes = [circle 2.4, rectangle 3.1 4.4, square 2.1]<br />
</haskell><br />
<br />
(You may see other [[Smart constructors]] for other purposes).<br />
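As a brief usage sketch (assuming the declarations above are in scope; the names <hask>totalArea</hask> and <hask>main</hask> are ours, purely for illustration), the heterogeneous <hask>shapes</hask> list can be processed uniformly through the <hask>Shape_ Shape</hask> instance:

```haskell
-- Every element of the heterogeneous list supports the Shape_ methods
-- via the existential wrapper, so ordinary list functions apply.
totalArea :: Double
totalArea = sum (map area shapes)

main :: IO ()
main = mapM_ (\s -> putStrLn (show (perimeter s) ++ " / " ++ show (area s))) shapes
```

This is exactly the dynamic-dispatch idiom: the call <hask>area s</hask> picks the right instance method for whichever concrete shape is hidden inside each <hask>Shape</hask>.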
<br />
=== [[Generalised algebraic datatype]] ===<br />
<br />
The type of the <hask>parse</hask> function for [[Generalised algebraic datatype#Motivating example|this GADT]] is a good example to illustrate the concept of existential type.<br />
<br />
==Alternate methods==<br />
===Concrete data types===<br />
====Universal instance of a Class====<br />
Here is one way to simulate existentials (HaWiki note: borrowed from somewhere...).<br />
<br />
<br />
Suppose I have a type class Shape a<br />
<haskell><br />
type Point = (Float,Float)<br />
<br />
class Shape a where<br />
draw :: a -> IO ()<br />
translate :: a -> Point -> a<br />
<br />
</haskell><br />
<br />
Then we can pack shapes up into a [[concrete data type]] like this:<br />
<haskell><br />
data SHAPE = SHAPE (IO ()) (Point -> SHAPE)<br />
</haskell><br />
with a function like this<br />
<haskell><br />
packShape :: Shape a => a -> SHAPE<br />
packShape s = SHAPE (draw s) (\(x,y) -> packShape (translate s (x,y)))<br />
</haskell><br />
This would be useful if we needed a list of shapes that we would need to translate and draw.<br />
<br />
In fact we can make <hask>SHAPE</hask> an instance of <hask>Shape</hask>:<br />
<haskell><br />
instance Shape SHAPE where<br />
draw (SHAPE d t) = d<br />
translate (SHAPE d t) = t<br />
</haskell><br />
<br />
So SHAPE is a sort of universal instance.<br />
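A short usage sketch (assuming the declarations above; the helper names are ours, for illustration only): a list of packed shapes can be drawn and translated without ever knowing their original types.

```haskell
-- Operate on packed shapes via the universal SHAPE instance.
drawAll :: [SHAPE] -> IO ()
drawAll = mapM_ draw

translateAll :: Point -> [SHAPE] -> [SHAPE]
translateAll p = map (\s -> translate s p)
```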
<br />
====Using constructors and combinators====<br />
Why bother with class <hask>Shape</hask>? Why not just go straight to<br />
<br />
<haskell><br />
data Shape = Shape {<br />
  draw :: IO (),<br />
  translate :: (Int, Int) -> Shape<br />
}<br />
</haskell><br />
<br />
Then you can create a library of shape [[constructor]]s and [[combinator]]s<br />
that each have defined "draw" and "translate" in their "where" clauses.<br />
<br />
<haskell><br />
circle :: (Int, Int) -> Int -> Shape<br />
circle (x,y) r =<br />
Shape draw1 translate1<br />
where<br />
draw1 = ...<br />
translate1 (x1,y1) = circle (x+x1, y+y1) r<br />
<br />
shapeGroup :: [Shape] -> Shape<br />
shapeGroup shapes = Shape draw1 translate1<br />
where<br />
draw1 = sequence_ $ map draw shapes<br />
translate1 v = shapeGroup $ map (translate v) shapes<br />
</haskell><br />
<br />
===Cases that really require existentials===<br />
<br />
There are cases where this sort of trick doesn't work. Here are two examples from a Haskell mailing list discussion (from K. Claussen) that don't seem expressible without<br />
existentials. (But maybe one can rethink the whole thing :)<br />
<haskell><br />
data Expr a = Val a | forall b . Apply (Expr (b -> a)) (Expr b)<br />
</haskell><br />
and<br />
<haskell><br />
data Action = forall b . Act (IORef b) (b -> IO ())<br />
</haskell><br />
(Maybe this last one could be done as <hask>Act (IORef b) (IORef b -> IO ())</hask>; then we could hide the <hask>IORef</hask> as above, that is, go ahead and apply the second argument to the first.)<br />
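For the <hask>Expr</hask> example, consuming the existential is still straightforward: inside each <hask>Apply</hask> the hidden type <hask>b</hask> is used consistently by both sub-expressions, so an evaluator type-checks. (The <hask>eval</hask> function below is our illustration, not taken from the cited discussion.)

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Expr a = Val a | forall b . Apply (Expr (b -> a)) (Expr b)

-- The hidden type 'b' never escapes: within each Apply, the function and
-- its argument agree on it, so evaluation is well typed.
eval :: Expr a -> a
eval (Val x)     = x
eval (Apply f x) = eval f (eval x)

-- eval (Apply (Val (+1)) (Val (41 :: Int)))  ==>  42
```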
<br />
== Examples from the [http://www.cs.uu.nl/wiki/Ehc/ Essential Haskell Compiler] project ==<br />
<br />
See the [http://www.cs.uu.nl/wiki/Ehc/#On_EHC documentation on EHC], each paper at the ''Version 4'' part:<br />
* Chapter 8 (EH4) of Atze Dijkstra's [http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-book.pdf Essential Haskell PhD thesis] (most recent version). A detailed explanation. It explains also that existential types can be expressed in Haskell, but their use is restricted to data declarations, and the notation (using keyword <hask>forall</hask>) may be confusing. In Essential Haskell, existential types can occur not only in data declarations, and a separate keyword <hask>exists</hask> is used for their notation.<br />
* [http://www.cs.uu.nl/wiki/pub/Ehc/WebHome/20050107-eh-intro.pdf Essential Haskell Compiler overview]<br />
* [http://www.cs.uu.nl/wiki/Ehc/Examples#EH_4_forall_and_exists_everywher Examples]<br />
<br />
==See also==<br />
* A mailinglist discussion: http://haskell.org/pipermail/haskell-cafe/2003-October/005231.html<br />
*An example of encoding existentials using RankTwoPolymorphism : http://haskell.org/pipermail/haskell-cafe/2003-October/005304.html<br />
=== Trac ===<br />
<br />
[http://hackage.haskell.org/trac/haskell-prime/wiki/ExistentialQuantification Existential Quantification] is detailed material on the topic. It also links to the smaller [http://hackage.haskell.org/trac/haskell-prime/wiki/ExistentialQuantifier Existential Quantifier] page.<br />
<br />
[[Category:Idioms]]<br />
[[Category:Glossary]]<br />
[[Category:Language extensions]]</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:Haskell_in_industryTalk:Haskell in industry2007-06-23T07:44:50Z<p>MathematicalOrchid: Erlang...?</p>
<hr />
<div>Erm... Erlang Training and Consultancy Ltd? What's that got to do with ''Haskell''? [[User:MathematicalOrchid|MathematicalOrchid]] 07:44, 23 June 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-24T10:45:20Z<p>MathematicalOrchid: Coming soon...</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.<br />
<br />
=== ASCII PPM ===<br />
<br />
This is the 'P3' PPM format. The entire thing is plain ASCII. This makes it very easy to read and write, and extremely inefficient. Don't be surprised if a 800x800 pixel image takes up a couple of MB of space!<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
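A hypothetical usage example (the gradient and file name are ours, not part of the library). It assumes the <hask>Colour</hask> type from [[Library for colours]] is an RGB triple of <hask>Double</hask>s in the range 0..1, i.e. something like <hask>data Colour = Colour Double Double Double</hask>, which is what the <hask>colour</hask> function above pattern-matches on:

```haskell
import PPM (save_ppm)
import Colour  -- assumed: data Colour = Colour Double Double Double

-- A 100x100 red/green ramp; channels outside 0..1 would be clipped anyway.
gradient :: [[Colour]]
gradient = [ [ Colour (x/99) (y/99) 0 | x <- [0..99] ] | y <- [0..99] ]

main :: IO ()
main = save_ppm "gradient.ppm" gradient
```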
=== Binary PPM ===<br />
<br />
This is the 'P6' PPM format. The header is still plain ASCII, but the actual raster data is binary. This makes the file roughly 10x smaller. I suspect it also makes it go ''faster''. This library is a drop-in replacement for the one above; import whichever one you prefer depending on the output you want.<br />
<br />
<haskell><br />
module Fast_PPM (make_ppm, save_ppm) where<br />
<br />
import Data.Word<br />
import qualified Data.ByteString as BIN<br />
import Colour<br />
<br />
quant8 :: Double -> Word8<br />
quant8 x = floor $ x * 0xFF<br />
<br />
cquant8 :: Colour -> [Word8]<br />
cquant8 (Colour r g b) = [quant8 r, quant8 g, quant8 b]<br />
<br />
string_to_bin :: String -> BIN.ByteString<br />
string_to_bin = BIN.pack . map (fromIntegral . fromEnum)<br />
<br />
header :: [[Colour]] -> BIN.ByteString<br />
header pss =<br />
let nx = length $ head pss<br />
ny = length pss<br />
in string_to_bin $ "P6\n" ++ show nx ++ " " ++ show ny ++ " 255\n"<br />
<br />
body :: [[Colour]] -> BIN.ByteString<br />
body pss = BIN.pack $ concatMap (cquant8 . cclip) $ concat pss<br />
<br />
make_ppm :: [[Colour]] -> BIN.ByteString<br />
make_ppm pss = BIN.append (header pss) (body pss)<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f pss = BIN.writeFile f (make_ppm pss)<br />
</haskell><br />
<br />
=== Binary PPM using Arrays ===<br />
<br />
Coming soon. External interface looks something like this:<br />
<br />
<haskell><br />
module FrameBuffer where<br />
<br />
import Colour<br />
<br />
data FrameBuffer<br />
<br />
make_fb :: (Int,Int) -> IO FrameBuffer<br />
<br />
write_pixel :: FrameBuffer -> (Int,Int) -> Colour -> IO ()<br />
<br />
save_ppm :: FrameBuffer -> FilePath -> IO ()<br />
</haskell><br />
<br />
Uses IOUArrays to drastically improve save speed. (And, in general, improves the efficiency of the rest of the program by 1) being more strict, and 2) using constant space for all drawing operations.)</div>MathematicalOrchidhttps://wiki.haskell.org/User:MathematicalOrchidUser:MathematicalOrchid2007-04-24T10:41:24Z<p>MathematicalOrchid: </p>
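Since the implementation itself is "coming soon", here is a purely hypothetical sketch of how the interface above might be filled in with an <hask>IOUArray</hask>. Everything in it is an assumption, including the local stand-in for the <hask>Colour</hask> type; it is not the author's implementation.

```haskell
module FrameBuffer (FrameBuffer, Colour(..), make_fb, write_pixel, save_ppm) where

import Data.Array.IO
import Data.Word
import qualified Data.ByteString as BIN

-- Stand-in for the Colour library: RGB channels in the range 0..1.
data Colour = Colour Double Double Double

-- Dimensions plus a flat array of one Word8 per channel.
data FrameBuffer = FrameBuffer (Int,Int) (IOUArray Int Word8)

make_fb :: (Int,Int) -> IO FrameBuffer
make_fb (nx,ny) = do
  arr <- newArray (0, nx*ny*3 - 1) 0
  return (FrameBuffer (nx,ny) arr)

quant8 :: Double -> Word8
quant8 = floor . (255*) . min 1 . max 0

write_pixel :: FrameBuffer -> (Int,Int) -> Colour -> IO ()
write_pixel (FrameBuffer (nx,_) arr) (x,y) (Colour r g b) = do
  let base = 3 * (y*nx + x)
  writeArray arr  base    (quant8 r)
  writeArray arr (base+1) (quant8 g)
  writeArray arr (base+2) (quant8 b)

save_ppm :: FrameBuffer -> FilePath -> IO ()
save_ppm (FrameBuffer (nx,ny) arr) f = do
  ws <- getElems arr
  let header = map (fromIntegral . fromEnum)
                   ("P6\n" ++ show nx ++ " " ++ show ny ++ " 255\n")
  BIN.writeFile f (BIN.pack (header ++ ws))
```

Writing each pixel in place avoids building the whole <hask>[[Colour]]</hask> list, which is where the constant-space and strictness benefits mentioned above would come from.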
<hr />
<div>=== Status ===<br />
<br />
Enthusiastic Haskell newbie.<br />
<br />
=== Main Interests ===<br />
<br />
* Using Haskell to write triposcopic mathematical algorithms with only a tiny amount of code.<br />
* Using Haskell to do seriously compute-bounded work in a multiprocessor setup.<br />
<br />
=== Projects ===<br />
<br />
==== Active ====<br />
<br />
* (no name) &mdash; chaos pendulum simulator (Status: moderately working, needs UI)<br />
* ''Haktal'' &mdash; fractal generator. (Status: minimal functionality)<br />
* ''HoJ'' &mdash; Haskell to Java compiler. (Status: skeletal)<br />
* ''Evlor'' &mdash; Interactive Haskell step-line debugger. (Status: skeletal)<br />
* ''Indoculate'' &mdash; Program to convert a single (custom) source to both HTML and LaTeX, and also do cross-linking. (Status: in production use)<br />
<br />
==== On Hold ====<br />
<br />
* Sorting algorithm benchmarks.<br />
* Audio DSP in Haskell.<br />
* [[Toy compression implementations|Haskell implementation of compression algorithms]].<br />
* [[POV-Ray SDL project|Haskell SDL]] for [http://www.povray.org/ POV-Ray].<br />
<br />
==== Failed ====<br />
<br />
* Haskell ray tracer.<br />
* Haskell type deducer.<br />
* Haskell program to cause world peace.<br />
<br />
=== Contributed Code ===<br />
<br />
* [[Library for binary]]<br />
* [[Library for vectors]]<br />
* [[Library for colours]]<br />
* [[Library for PPM images]]<br />
* [[Toy compression implementations]]<br />
<br />
=== Current Unsolved Questions ===<br />
<br />
* Why do Haskell language extensions exist?<br />
* How do you do graphics in Haskell?<br />
* How come (e.g.) Smalltalk provides 27 different types of collection, but Haskell only ever involves singly-linked lists and binary trees?<br />
* Why is <hask>putStr xs1; putStr xs2</hask> slower than <hask>putStr (xs1 ++ xs2)</hask>?</div>MathematicalOrchidhttps://wiki.haskell.org/User:MathematicalOrchidUser:MathematicalOrchid2007-04-24T10:37:30Z<p>MathematicalOrchid: New projects, better formatting, etc.</p>
<hr />
<div>'''Status''': Enthusiastic Haskell newbie.<br />
<br />
=== Main Interests ===<br />
<br />
* Using Haskell to write triposcopic mathematical algorithms with only a tiny amount of code.<br />
* Using Haskell to do seriously compute-bounded work in a multiprocessor setup.<br />
<br />
=== Current Projects ===<br />
<br />
* (no name) &mdash; chaos pendulum simulator (Status: moderately working, needs UI)<br />
* ''Haktal'' &mdash; fractal generator. (Status: minimal functionality)<br />
* ''HoJ'' &mdash; Haskell to Java compiler. (Status: skeletal)<br />
* ''Evlor'' &mdash; Interactive Haskell step-line debugger. (Status: skeletal)<br />
* ''Indoculate'' &mdash; Program to convert a single (custom) source to both HTML and LaTeX, and also do cross-linking. (Status: in production use)<br />
<br />
=== Projects On Hold ===<br />
<br />
* Sorting algorithm benchmarks.<br />
* Audio DSP in Haskell.<br />
* [[Toy compression implementations|Haskell implementation of compression algorithms]].<br />
* [[POV-Ray SDL project|Haskell SDL]] for [http://www.povray.org/ POV-Ray].<br />
<br />
=== Failed Projects ===<br />
<br />
* Haskell ray tracer.<br />
* Haskell program to cause world peace.<br />
<br />
=== Contributed Code ===<br />
<br />
* [[Library for binary]]<br />
* [[Library for vectors]]<br />
* [[Library for colours]]<br />
* [[Library for PPM images]]<br />
<br />
=== Current Unsolved Questions ===<br />
<br />
* Why do Haskell language extensions exist?<br />
* How do you do graphics in Haskell?<br />
* How come (e.g.) Smalltalk provides 27 different types of collection, but Haskell only ever involves singly-linked lists and binary trees?<br />
* Why is <hask>putStr xs1; putStr xs2</hask> slower than <hask>putStr (xs1 ++ xs2)</hask>?</div>MathematicalOrchidhttps://wiki.haskell.org/User_talk:MathematicalOrchidUser talk:MathematicalOrchid2007-04-24T10:27:46Z<p>MathematicalOrchid: Interesting idea...</p>
<hr />
<div>==Indoculate==<br />
<br />
My [http://www.haskell.org/pipermail/haskell-cafe/2007-March/023335.html survey] on a HTML+LaTeX generator resulted in [http://sophos.berkeley.edu/macfarlane/pandoc/ PanDoc].<br />
<br />
Interesting tip. However, Indoculate performs 2 functions. First, it provides a tool <code>MakeHTML</code>, which takes a source file and converts it to HTML. Similarly, <code>MakeLaTeX</code> takes the same source file and converts it to LaTeX instead. However, it also provides a second function. The <code>MakeSite</code> tool takes an entire hierarchical tree of source files and generates a complete fully cross-linked website. I very much doubt PanDoc can do that. [[User:MathematicalOrchid|MathematicalOrchid]] 10:27, 24 April 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-18T14:18:24Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.<br />
<br />
=== ASCII PPM ===<br />
<br />
This is the 'P3' PPM format. The entire thing is plain ASCII. This makes it very easy to read and write, and extremely inefficient. Don't be surprised if a 800x800 pixel image takes up a couple of MB of space!<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
=== Binary PPM ===<br />
<br />
This is the 'P6' PPM format. The header is still plain ASCII, but the actual raster data is binary. This makes the file roughly 10x smaller. I suspect it also makes it go ''faster''. This library is a drop-in replacement for the one above; import whichever one you prefer depending on the output you want.<br />
<br />
<haskell><br />
module Fast_PPM (make_ppm, save_ppm) where<br />
<br />
import Data.Word<br />
import qualified Data.ByteString as BIN<br />
import Colour<br />
<br />
quant8 :: Double -> Word8<br />
quant8 x = floor $ x * 0xFF<br />
<br />
cquant8 :: Colour -> [Word8]<br />
cquant8 (Colour r g b) = [quant8 r, quant8 g, quant8 b]<br />
<br />
string_to_bin :: String -> BIN.ByteString<br />
string_to_bin = BIN.pack . map (fromIntegral . fromEnum)<br />
<br />
header :: [[Colour]] -> BIN.ByteString<br />
header pss =<br />
let nx = length $ head pss<br />
ny = length pss<br />
in string_to_bin $ "P6\n" ++ show nx ++ " " ++ show ny ++ " 255\n"<br />
<br />
body :: [[Colour]] -> BIN.ByteString<br />
body pss = BIN.pack $ concatMap (cquant8 . cclip) $ concat pss<br />
<br />
make_ppm :: [[Colour]] -> BIN.ByteString<br />
make_ppm pss = BIN.append (header pss) (body pss)<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f pss = BIN.writeFile f (make_ppm pss)<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-18T14:17:24Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.<br />
<br />
=== ASCII PPM ===<br />
<br />
This is the 'P3' PPM format. The entire thing is plain ASCII. This makes it very easy to read and write, and extremely inefficient. Don't be surprised if an 800x800 pixel image takes up a couple of MB of space!<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
=== Binary PPM ===<br />
<br />
This is the 'P6' PPM format. The header is still plain ASCII, but the actual raster data is binary. This makes the file roughly 10x smaller. I suspect it also makes it go ''faster''. This library is a drop-in replacement for the one above; include whichever one suits the output you want.<br />
<br />
<haskell><br />
module Fast_PPM (make_ppm, save_ppm) where<br />
<br />
import Data.Word<br />
import qualified Data.ByteString as BIN<br />
import Colour<br />
<br />
quant8 :: Double -> Word8<br />
quant8 x = floor $ x * 0xFF<br />
<br />
cquant8 :: Colour -> [Word8]<br />
cquant8 (Colour r g b) = [quant8 r, quant8 g, quant8 b]<br />
<br />
string_to_bin :: String -> BIN.ByteString<br />
string_to_bin = BIN.pack . map (fromIntegral . fromEnum)<br />
<br />
header :: [[Colour]] -> BIN.ByteString<br />
header pss =<br />
let nx = length $ head pss<br />
ny = length pss<br />
in string_to_bin $ "P6\n" ++ show nx ++ " " ++ show ny ++ " 255\n"<br />
<br />
body :: [[Colour]] -> BIN.ByteString<br />
body pss = BIN.pack $ concatMap (cquant8 . cclip) $ concat pss<br />
<br />
make_ppm :: [[Colour]] -> BIN.ByteString<br />
make_ppm pss = BIN.append (header pss) (body pss)<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f pss = BIN.writeFile f (make_ppm pss)<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_coloursLibrary for colours2007-04-18T14:12:33Z<p>MathematicalOrchid: Removed old quantinize function. (Doesn't really belong here.)</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Simple thing for working on colours in the RGB colour space. (The intention being that each component is in the interval 0 &le; x &le; 1.) You could just use tuples, but this library provides simple colour arithmetic.<br />
<br />
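To illustrate the component-wise arithmetic the <hask>Num</hask> instance below provides, here is a self-contained sketch. It re-declares a minimal <hask>Colour</hask> with the same shape so it runs on its own:<br />

```haskell
-- Minimal standalone version of the colour arithmetic defined by the
-- Num instance in the library below.
data Colour = Colour Double Double Double deriving (Eq, Show)

czip :: (Double -> Double -> Double) -> Colour -> Colour -> Colour
czip f (Colour r1 g1 b1) (Colour r2 g2 b2) =
  Colour (f r1 r2) (f g1 g2) (f b1 b2)

instance Num Colour where
  (+) = czip (+)
  (-) = czip (-)
  (*) = czip (*)
  negate (Colour r g b) = Colour (negate r) (negate g) (negate b)
  abs (Colour r g b) = Colour (abs r) (abs g) (abs b)
  signum (Colour r g b) = Colour (signum r) (signum g) (signum b)
  fromInteger n = let x = fromInteger n in Colour x x x

main :: IO ()
main = do
  print (Colour 0.5 0.25 0.0 + Colour 0.25 0.25 0.5)  -- Colour 0.75 0.5 0.5
  -- fromInteger promotes the literal 2 to Colour 2 2 2, so colours and
  -- numeric literals mix freely in arithmetic:
  print (Colour 0.5 0.5 0.5 * 2)                      -- Colour 1.0 1.0 1.0
```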
<haskell><br />
module Colour where<br />
<br />
data Colour = Colour {red, green, blue :: Double} deriving (Eq, Show)<br />
<br />
cmap :: (Double -> Double) -> Colour -> Colour<br />
cmap f (Colour r g b) = Colour (f r) (f g) (f b)<br />
<br />
czip :: (Double -> Double -> Double) -> Colour -> Colour -> Colour<br />
czip f (Colour r1 g1 b1) (Colour r2 g2 b2) = Colour (f r1 r2) (f g1 g2) (f b1 b2)<br />
<br />
cfold :: (Double -> Double -> Double) -> Colour -> Double<br />
cfold f (Colour r g b) = r `f` g `f` b<br />
<br />
cpromote :: Double -> Colour<br />
cpromote x = Colour x x x<br />
<br />
instance Num Colour where<br />
(+) = czip (+)<br />
(-) = czip (-)<br />
(*) = czip (*)<br />
negate = cmap negate<br />
abs = cmap abs<br />
signum = cmap signum<br />
fromInteger x = cpromote (fromInteger x)<br />
<br />
instance Fractional Colour where<br />
(/) = czip (/)<br />
recip = cmap recip<br />
fromRational x = cpromote (fromRational x)<br />
<br />
clip :: (Num n, Ord n) => n -> n<br />
clip n<br />
| n < 0 = 0<br />
| n > 1 = 1<br />
| otherwise = n<br />
<br />
cclip :: Colour -> Colour<br />
cclip = cmap clip<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_vectorsLibrary for vectors2007-04-18T14:10:32Z<p>MathematicalOrchid: Added vmag_sqr</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Most people just use <hask>(Int,Int)</hask> or similar for a 2-vector. However, if you find yourself wanting to do lots of vector arithmetic, that becomes annoying quite quickly. Below is what I use; feel free to adapt it to your needs.<br />
<br />
<haskell><br />
module Vector where<br />
<br />
type Scalar = Double<br />
<br />
class Vector v where<br />
vmap :: (Scalar -> Scalar) -> v -> v<br />
vzip :: (Scalar -> Scalar -> Scalar) -> v -> v -> v<br />
vfold :: (x -> Scalar -> x) -> x -> v -> x<br />
<br />
vdot :: Vector v => v -> v -> Scalar<br />
vdot v0 v1 = vfold (+) 0 $ vzip (*) v0 v1<br />
<br />
vmag_sqr :: Vector v => v -> Scalar<br />
vmag_sqr v = v `vdot` v<br />
<br />
vmag :: Vector v => v -> Scalar<br />
vmag = sqrt . vmag_sqr<br />
<br />
vscale :: Vector v => Scalar -> v -> v<br />
vscale s = vmap (s*)<br />
<br />
vunit :: Vector v => v -> v<br />
vunit v =<br />
if vmag v == 0<br />
then v<br />
else vscale (1 / vmag v) v<br />
<br />
<br />
data Vector2 = Vector2 {v2x, v2y :: Scalar} deriving (Eq)<br />
<br />
instance Show Vector2 where<br />
show (Vector2 x y) = "<" ++ (show x) ++ ", " ++ (show y) ++ ">"<br />
<br />
instance Vector Vector2 where<br />
vmap f (Vector2 x y) = Vector2 (f x) (f y)<br />
vfold f i (Vector2 x y) = (i `f` x) `f` y<br />
vzip f (Vector2 x0 y0) (Vector2 x1 y1) = Vector2 (f x0 x1) (f y0 y1)<br />
<br />
instance Num Vector2 where<br />
(+) = vzip (+)<br />
(-) = vzip (-)<br />
(*) = vzip (*)<br />
negate = vmap negate<br />
fromInteger s = Vector2 (fromInteger s) (fromInteger s)<br />
<br />
instance Fractional Vector2 where<br />
(/) = vzip (/)<br />
fromRational s = Vector2 (fromRational s) (fromRational s)<br />
<br />
<br />
data Vector3 = Vector3 {v3x, v3y, v3z :: Scalar} deriving (Eq)<br />
<br />
instance Show Vector3 where<br />
show (Vector3 x y z) = "<" ++ (show x) ++ ", " ++ (show y) ++ ", " ++ (show z) ++ ">"<br />
<br />
instance Vector Vector3 where<br />
vmap f (Vector3 x y z) = Vector3 (f x) (f y) (f z)<br />
vfold f i (Vector3 x y z) = ((i `f` x) `f` y) `f` z<br />
vzip f (Vector3 x0 y0 z0) (Vector3 x1 y1 z1) = Vector3 (f x0 x1) (f y0 y1) (f z0 z1)<br />
<br />
instance Num Vector3 where<br />
(+) = vzip (+)<br />
(-) = vzip (-)<br />
(*) = vzip (*)<br />
negate = vmap negate<br />
fromInteger s = Vector3 (fromInteger s) (fromInteger s) (fromInteger s)<br />
<br />
instance Fractional Vector3 where<br />
(/) = vzip (/)<br />
fromRational s = Vector3 (fromRational s) (fromRational s) (fromRational s)<br />
<br />
v3cross :: Vector3 -> Vector3 -> Vector3<br />
v3cross (Vector3 x0 y0 z0) (Vector3 x1 y1 z1) = Vector3 (y0*z1 - z0*y1) (z0*x1 - x0*z1) (x0*y1 - y0*x1)<br />
</haskell><br />
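As a quick sanity check of the operations derived from the class, here is a standalone sketch. For brevity it uses a bare 2-vector with the same definitions, rather than the type class above:<br />

```haskell
-- Standalone sketch of vdot / vmag from the Vector class above,
-- specialised to a plain 2-vector.
type Scalar = Double

data Vector2 = Vector2 Scalar Scalar

-- Dot product: sum of component-wise products.
vdot :: Vector2 -> Vector2 -> Scalar
vdot (Vector2 x0 y0) (Vector2 x1 y1) = x0 * x1 + y0 * y1

-- Magnitude: square root of the dot product with itself.
vmag :: Vector2 -> Scalar
vmag v = sqrt (vdot v v)

main :: IO ()
main = do
  print (vdot (Vector2 3 4) (Vector2 3 4))  -- 25.0
  print (vmag (Vector2 3 4))                -- 5.0, the 3-4-5 triangle
```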
<br />
PS. If anybody knows a way to make every instance of <hask>Vector</hask> automatically become an instance of <hask>Num</hask>, etc., let me know!</div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-17T12:48:49Z<p>MathematicalOrchid: Added 'P6' PPM file format.</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
=== ASCII PPM ===<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.<br />
<br />
=== Binary PPM ===<br />
<br />
This is the 'P6' PPM format. The header is still plain ASCII, but the actual raster data is binary. This makes the file roughly 10x smaller. I suspect it also makes it go ''faster''. This library is a drop-in replacement for the one above; include whichever one suits the output you want.<br />
<br />
<haskell><br />
module Fast_PPM (make_ppm, save_ppm) where<br />
<br />
import Data.Word<br />
import qualified Data.ByteString as BIN<br />
import Colour<br />
<br />
quant8 :: Double -> Word8<br />
quant8 x = floor $ x * 0xFF<br />
<br />
cquant8 :: Colour -> [Word8]<br />
cquant8 (Colour r g b) = [quant8 r, quant8 g, quant8 b]<br />
<br />
string_to_bin :: String -> BIN.ByteString<br />
string_to_bin = BIN.pack . map (fromIntegral . fromEnum)<br />
<br />
header :: [[Colour]] -> BIN.ByteString<br />
header pss =<br />
let nx = length $ head pss<br />
ny = length pss<br />
in string_to_bin $ "P6\n" ++ show nx ++ " " ++ show ny ++ " 255\n"<br />
<br />
body :: [[Colour]] -> BIN.ByteString<br />
body pss = BIN.pack $ concatMap (cquant8 . cclip) $ concat pss<br />
<br />
make_ppm :: [[Colour]] -> BIN.ByteString<br />
make_ppm pss = BIN.append (header pss) (body pss)<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f pss = BIN.writeFile f (make_ppm pss)<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_coloursLibrary for colours2007-04-16T11:50:20Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:Code]]<br />
<br />
Simple thing for working on colours in the RGB colour space. (The intention being that each component is in the interval 0 &le; x &le; 1.) You could just use tuples, but this library provides simple colour arithmetic.<br />
<br />
<haskell><br />
module Colour where<br />
<br />
data Colour = Colour {red, green, blue :: Double} deriving (Eq, Show)<br />
<br />
cmap :: (Double -> Double) -> Colour -> Colour<br />
cmap f (Colour r g b) = Colour (f r) (f g) (f b)<br />
<br />
czip :: (Double -> Double -> Double) -> Colour -> Colour -> Colour<br />
czip f (Colour r1 g1 b1) (Colour r2 g2 b2) = Colour (f r1 r2) (f g1 g2) (f b1 b2)<br />
<br />
cfold :: (Double -> Double -> Double) -> Colour -> Double<br />
cfold f (Colour r g b) = r `f` g `f` b<br />
<br />
cpromote :: Double -> Colour<br />
cpromote x = Colour x x x<br />
<br />
instance Num Colour where<br />
(+) = czip (+)<br />
(-) = czip (-)<br />
(*) = czip (*)<br />
negate = cmap negate<br />
abs = cmap abs<br />
signum = cmap signum<br />
fromInteger x = cpromote (fromInteger x)<br />
<br />
instance Fractional Colour where<br />
(/) = czip (/)<br />
recip = cmap recip<br />
fromRational x = cpromote (fromRational x)<br />
<br />
clip :: (Num n, Ord n) => n -> n<br />
clip n<br />
| n < 0 = 0<br />
| n > 1 = 1<br />
| otherwise = n<br />
<br />
cclip :: Colour -> Colour<br />
cclip = cmap clip<br />
<br />
quantinize :: Int -> Double -> Int<br />
quantinize scale v = floor (v * (fromIntegral scale))<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-16T11:46:07Z<p>MathematicalOrchid: I'll get it right in a minute...</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.</div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-16T11:45:43Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.</div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-16T11:45:25Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
<haskell><br />
module PPM (make_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ make_ppm css<br />
<br />
make_ppm :: [[Colour]] -> String<br />
make_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.</div>MathematicalOrchidhttps://wiki.haskell.org/User_talk:MathematicalOrchidUser talk:MathematicalOrchid2007-04-12T15:55:12Z<p>MathematicalOrchid: </p>
<hr />
<div></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_coloursLibrary for colours2007-04-12T15:54:19Z<p>MathematicalOrchid: Ooops! Meant to press preview, not save...</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Simple thing for working on colours in the RGB colour space. (The intention being that each component is in the interval 0 &le; x &le; 1.) You could just use tuples, but this library provides simple colour arithmetic.<br />
<br />
<haskell><br />
module Colour where<br />
<br />
data Colour = Colour {red, green, blue :: Double} deriving (Eq, Show)<br />
<br />
cmap :: (Double -> Double) -> Colour -> Colour<br />
cmap f (Colour r g b) = Colour (f r) (f g) (f b)<br />
<br />
czip :: (Double -> Double -> Double) -> Colour -> Colour -> Colour<br />
czip f (Colour r1 g1 b1) (Colour r2 g2 b2) = Colour (f r1 r2) (f g1 g2) (f b1 b2)<br />
<br />
cfold :: (Double -> Double -> Double) -> Colour -> Double<br />
cfold f (Colour r g b) = r `f` g `f` b<br />
<br />
cpromote :: Double -> Colour<br />
cpromote x = Colour x x x<br />
<br />
instance Num Colour where<br />
(+) = czip (+)<br />
(-) = czip (-)<br />
(*) = czip (*)<br />
negate = cmap negate<br />
abs = cmap abs<br />
signum = cmap signum<br />
fromInteger x = cpromote (fromInteger x)<br />
<br />
instance Fractional Colour where<br />
(/) = czip (/)<br />
recip = cmap recip<br />
fromRational x = cpromote (fromRational x)<br />
<br />
clip :: (Num n, Ord n) => n -> n<br />
clip n<br />
| n < 0 = 0<br />
| n > 1 = 1<br />
| otherwise = n<br />
<br />
cclip :: Colour -> Colour<br />
cclip = cmap clip<br />
<br />
quantinize :: Int -> Double -> Int<br />
quantinize max v = floor (v * (fromIntegral max))<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_coloursLibrary for colours2007-04-12T15:52:51Z<p>MathematicalOrchid: </p>
<hr />
<div>[[Category:Code]]<br />
<br />
<haskell><br />
module Colour where<br />
<br />
data Colour = Colour {red, green, blue :: Double} deriving (Eq, Show)<br />
<br />
cmap :: (Double -> Double) -> Colour -> Colour<br />
cmap f (Colour r g b) = Colour (f r) (f g) (f b)<br />
<br />
czip :: (Double -> Double -> Double) -> Colour -> Colour -> Colour<br />
czip f (Colour r1 g1 b1) (Colour r2 g2 b2) = Colour (f r1 r2) (f g1 g2) (f b1 b2)<br />
<br />
cfold :: (Double -> Double -> Double) -> Colour -> Double<br />
cfold f (Colour r g b) = r `f` g `f` b<br />
<br />
cpromote :: Double -> Colour<br />
cpromote x = Colour x x x<br />
<br />
instance Num Colour where<br />
(+) = czip (+)<br />
(-) = czip (-)<br />
(*) = czip (*)<br />
negate = cmap negate<br />
abs = cmap abs<br />
signum = cmap signum<br />
fromInteger x = cpromote (fromInteger x)<br />
<br />
instance Fractional Colour where<br />
(/) = czip (/)<br />
recip = cmap recip<br />
fromRational x = cpromote (fromRational x)<br />
<br />
clip :: (Num n, Ord n) => n -> n<br />
clip n<br />
| n < 0 = 0<br />
| n > 1 = 1<br />
| otherwise = n<br />
<br />
cclip :: Colour -> Colour<br />
cclip = cmap clip<br />
<br />
quantinize :: Int -> Double -> Int<br />
quantinize max v = floor (v * (fromIntegral max))<br />
</haskell></div>MathematicalOrchidhttps://wiki.haskell.org/Library_for_PPM_imagesLibrary for PPM images2007-04-12T15:51:59Z<p>MathematicalOrchid: Simple PPM image saving.</p>
<hr />
<div>[[Category:Code]]<br />
<br />
Here's a trivial little thing I wrote for saving PPM images.<br />
<br />
For those that don't know, PPM is probably the simplest possible image file format that other software will actually read! For example, [http://www.irfanview.com/ IrfanView] will read it. Thus, this is a simple, light-weight way to write programs that will output graphics files, using only pure Haskell 98 I/O.<br />
<br />
<haskell><br />
module PPM (write_ppm, save_ppm) where<br />
<br />
import Colour<br />
<br />
save_ppm :: FilePath -> [[Colour]] -> IO ()<br />
save_ppm f css = writeFile f $ write_ppm css<br />
<br />
write_ppm :: [[Colour]] -> String<br />
write_ppm css =<br />
"P3\n" ++ (show $ length $ head css) ++ " " ++ (show $ length css) ++ " 255\n" ++<br />
(unlines $ map unwords $ group 15 $ map show $ concatMap colour $ concat css)<br />
<br />
group _ [] = []<br />
group n xs =<br />
let (xs0,xs1) = splitAt n xs<br />
in xs0 : group n xs1<br />
<br />
colour (Colour r g b) = [channel r, channel g, channel b]<br />
<br />
channel :: Double -> Int<br />
channel = floor . (255*) . min 1 . max 0<br />
</haskell><br />
<br />
The code is actually designed to work with my [[Library for colours]] - but you can supply something of your own if you prefer.</div>MathematicalOrchidhttps://wiki.haskell.org/User:MathematicalOrchidUser:MathematicalOrchid2007-04-12T15:48:00Z<p>MathematicalOrchid: Added links to random stuff I wrote.</p>
<hr />
<div>'''Status''': Enthusiastic Haskell newbie.<br />
<br />
'''Main Interests''':<br />
<br />
* Using Haskell to write triposcopic mathematical algorithms in tiny amounts of code.<br />
* Using Haskell to do seriously compute-bounded work in a multiprocessor setup.<br />
<br />
'''Current Projects''':<br />
<br />
* Haskell to Java compiler. (Status: skeletal)<br />
* ''Evlor'' &mdash; Interactive Haskell step-line debugger. (Status: broken/incomplete/major work required)<br />
* ''Indoculate'' &mdash; Program to convert a single (custom) source to both HTML and LaTeX. (Status: medium-complete)<br />
<br />
'''Projects On Hold''':<br />
<br />
* Sorting algorithm benchmarks.<br />
* Audio DSP in Haskell.<br />
* [[Toy compression implementations|Haskell implementation of compression algorithms]].<br />
* [[POV-Ray SDL project|Haskell SDL]] for [http://www.povray.org/ POV-Ray].<br />
<br />
'''Failed Projects''':<br />
<br />
* Haskell fractal generator.<br />
* Haskell ray tracer.<br />
* Haskell program to cause world peace.<br />
<br />
'''Contributed Code'''<br />
<br />
* [[Library for binary]]<br />
* [[Library for vectors]]<br />
* [[Library for colours]]<br />
* [[Library for PPM images]]<br />
<br />
'''Current Unsolved Questions''':<br />
<br />
* Why do Haskell language extensions exist?<br />
* How do you do graphics in Haskell?<br />
* Why does Hugs crash so much?<br />
* How come (e.g.) Smalltalk provides 27 different types of collection, but Haskell only ever involves singly-linked lists and binary trees?<br />
* Is <hask>putStr xs1; putStr xs2</hask> faster or slower than <hask>putStr (xs1 ++ xs2)</hask>?</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:Haskell_in_5_stepsTalk:Haskell in 5 steps2007-03-26T12:54:31Z<p>MathematicalOrchid: Just noticing...</p>
<hr />
<div>1. Is there a difference between <code>ghc -o hello hello.hs</code> and <code>ghc --make hello</code>? Which one is preferable?<br />
<br />
2. The example with <code>let</code> is only going to work in GHCi, not Hugs. (Don't want to confuse beginners with that one...)<br />
<br />
[[User:MathematicalOrchid|MathematicalOrchid]] 12:54, 26 March 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/MonomorphismMonomorphism2007-03-16T22:01:12Z<p>MathematicalOrchid: This is a wild guess! Somebody check it...</p>
<hr />
<div>[[Category:Glossary]]<br />
<br />
Monomorphism is the opposite of [[polymorphism]]. That is, a function is polymorphic if it works for several different types - and thus, a function is ''monomorphic'' if it works only for ''one'' type.<br />
<br />
As an example, <hask>map</hask> is polymorphic. Its type is simply<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
</haskell><br />
<br />
However, the function<br />
<br />
<haskell><br />
foo :: (Int -> Int) -> [Int] -> [Int]<br />
foo = map<br />
</haskell><br />
<br />
performs an identical operation to <hask>map</hask> (as is evident from the second line), but has a monomorphic type; it will ''only'' accept lists of <hask>Int</hask> and functions over them.<br />
<br />
Perhaps you were looking for [[monomorphism restriction]]?</div>MathematicalOrchidhttps://wiki.haskell.org/Parametric_polymorphismParametric polymorphism2007-03-16T21:56:27Z<p>MathematicalOrchid: ...have I got this right?</p>
<hr />
<div>Parametric polymorphism is when a function's type signature allows various arguments to take on arbitrary types, but the types must be ''related'' to each other in some way.<br />
<br />
For example, in Java one can write a function that accepts two arguments of any possible type. However, Haskell goes further by allowing a function to accept two arguments of any type so long as they are both ''the same'' type.<br />
<br />
As a specific (and slightly more complicated) example, the well-known <hask>map</hask> function has a parametrically polymorphic type<br />
<br />
<haskell><br />
map :: (a -> b) -> [a] -> [b]<br />
</haskell><br />
<br />
which means that the function will accept ''any'' type of list and ''any'' type of function, '''provided''' the types match up. This makes <hask>map</hask> highly polymorphic, yet there is still no risk of a runtime type mismatch.</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:CombinatorTalk:Combinator2007-03-12T20:27:14Z<p>MathematicalOrchid: </p>
<hr />
<div>I heard Parsec described as a "combinator library", but I have literally ''no concept'' of what the hell that actually means. This page doesn't leave me feeling any less confused. Any hints? [[User:MathematicalOrchid|MathematicalOrchid]] 12:39, 7 March 2007 (UTC)<br />
<br />
I've created the referenced page "combinator pattern". Some knowledgeable soul should probably check I haven't said something dumb...</div>MathematicalOrchidhttps://wiki.haskell.org/Combinator_patternCombinator pattern2007-03-12T20:26:19Z<p>MathematicalOrchid: Somebody who knows stuff should check this...</p>
<hr />
<div>[[Category:Idioms]]<br />
<br />
Libraries such as Parsec use the ''combinator pattern'', where complex structures are built by defining a small set of very simple 'primitives', and a set of 'combinators' for combining them into more complicated structures. It's somewhat similar to the Composition pattern found in object-oriented programming.<br />
<br />
In the case of Parsec, the library provides a set of extremely simple (almost trivial) parsers, and ways to combine small parsers into bigger parsers. Many other libraries and programs use the same ideas to build other structures:<br />
<br />
* Parsec builds parsers out of smaller parsers.<br />
* The School of Expression (SOE) graphics library builds pictures out of individual shapes.<br />
* The SOE book also mentions a library to build music out of individual notes and rests.<br />
* Another textbook describes building financial contracts.<br />
* [[Software transactional memory]] builds big transactions out of smaller ones.<br />
* The Haskell IO system itself builds whole programs out of small I/O actions using <hask>>>=</hask> and <hask>return</hask>.</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:Type_arithmeticTalk:Type arithmetic2007-03-12T20:07:35Z<p>MathematicalOrchid: Is this why?</p>
<hr />
<div>== Why? ==<br />
<br />
This page seems to explain ''what'' but not ''why''. I don't know about anyone else, but when I read 'arithmetic at the type level', the very first thought that pops into my head is 'why in the name of God would you ''want'' to do such an insane thing?' [[User:MathematicalOrchid|MathematicalOrchid]] 11:53, 12 March 2007 (UTC)<br />
<br />
== This why? ==<br />
<br />
Following some discussions in #haskell, I understand this is related to that widespread Haskell obsession with attempting to "prove" things about programs (in spite of the fact that this is obviously impossible).<br />
<br />
If I'm understanding this right, the idea is to be able to construct a type that means not merely "a List containing Integers", but "a List containing at least 6 Integers". And the "arithmetic" part comes in when one wants to say something like<br />
<br />
: "This function takes a List containing at least X objects of type T and another List containing at least Y objects of type T, and returns a List containing at least X+Y objects of type T."<br />
<br />
In other words, the "arithmetic" part is calculating X+Y at compile-time. And any function that calls the one so-described must prove to the type system that it satisfies the constraints. And, once the constraints are statically verified, no further runtime checks are required.<br />
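A modern sketch of that idea, using GHC extensions that postdate this discussion (none of this code comes from the page itself; it is a hypothetical illustration):<br />

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Hypothetical sketch: length-indexed lists where append computes
-- X + Y in the type, so the length claim is checked at compile time.
data Nat = Z | S Nat

-- Type-level addition of natural numbers.
type family Add (x :: Nat) (y :: Nat) :: Nat where
  Add 'Z     y = y
  Add ('S x) y = 'S (Add x y)

-- A list whose length is part of its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The type says: the result has exactly x + y elements. If the body
-- didn't deliver that, the program would not compile.
append :: Vec x a -> Vec y a -> Vec (Add x y) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

vlength :: Vec n a -> Int
vlength Nil         = 0
vlength (Cons _ xs) = 1 + vlength xs

main :: IO ()
main = print (vlength (append (Cons 'a' (Cons 'b' Nil)) (Cons 'c' Nil)))
```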
<br />
Is that roughly what this is all about? (And if so, can somebody add some statements to that effect to the content page?) [[User:MathematicalOrchid|MathematicalOrchid]] 20:07, 12 March 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:Type_arithmeticTalk:Type arithmetic2007-03-12T11:53:05Z<p>MathematicalOrchid: Why?</p>
<hr />
<div>This page seems to explain ''what'' but not ''why''. I don't know about anyone else, but when I read 'arithmetic at the type level', the very first thought that pops into my head is 'why in the name of God would you ''want'' to do such an insane thing?' [[User:MathematicalOrchid|MathematicalOrchid]] 11:53, 12 March 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:Toy_compression_implementationsTalk:Toy compression implementations2007-03-09T13:34:17Z<p>MathematicalOrchid: I'm impressed...</p>
<hr />
<div>Much kudos for fixing the underflow error. The new LZW implementation is much smaller, but... how in the name of God does it actually work? o_O [[User:MathematicalOrchid|MathematicalOrchid]] 11:36, 9 March 2007 (UTC)<br />
<br />
To understand it I rewrote it a bit:<br />
<br />
<haskell><br />
import Data.List (elemIndex, inits, tails)<br />
import Data.Maybe (fromJust)<br />
<br />
encode_LZW :: (Eq t) => [t] -> [t] -> [Int]<br />
encode_LZW alphabet = work (map (:[]) alphabet) where<br />
work table [] = []<br />
work table lst = index : work table' rest<br />
where (tok, rest) = last . takeWhile ((`elem` table) . fst) . tail $ zip (inits lst) (tails lst)<br />
index = fromJust (elemIndex tok table)<br />
table' = table ++ [tok']<br />
tok' = tok ++ [head rest]<br />
</haskell><br />
<br />
The idea of the table, which is the 1st argument to 'work', is that some prefix of the input is already in the table.<br />
<br />
(encode_LZW chars) uses 'chars' to make the initial table for the 'work' function by turning the list of characters into a list of length 1 strings.<br />
<br />
The <hask>where (tok,rst)</hask> definition can be read right to left:<br />
* The <hask>zip (inits lst) (tails lst)</hask> computes every possible way to split the input <hask>lst</hask> into a prefix and a suffix, in increasing length of prefix.<br />
* The <hask>tail</hask> function just drops the head because it doesn't want to consider the length 0 prefix<br />
* <hask>takeWhile</hask> applies the predicate <hask>(`elem` table)</hask> to the prefix. This will always succeed on the length 1 prefix, and may find longer prefixes in the table.<br />
* The <hask>last</hask> function takes the last prefix in the table, which will always be the longest such prefix<br />
* <hask>tok</hask> is this prefix, and <hask>rest</hask> is the remaining suffix to process.<br />
<br />
<br />
Wow... a most ingenious (and inefficient) approach! Well, now it makes sense anyway. [[User:MathematicalOrchid|MathematicalOrchid]] 13:34, 9 March 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Talk:Toy_compression_implementationsTalk:Toy compression implementations2007-03-09T11:36:12Z<p>MathematicalOrchid: My mind is blown...</p>
<hr />
<div>Much kudos for fixing the underflow error. The new LZW implementation is much smaller, but... how in the name of God does it actually work? o_O [[User:MathematicalOrchid|MathematicalOrchid]] 11:36, 9 March 2007 (UTC)</div>MathematicalOrchidhttps://wiki.haskell.org/Toy_compression_implementationsToy compression implementations2007-03-08T15:04:49Z<p>MathematicalOrchid: Added Huffman compression.</p>
<hr />
<div>[[Category:Code]]<br />
<br />
== About ==<br />
<br />
This code is provided in the hope that someone might find it interesting/entertaining, and to demonstrate what an excellent programming language Haskell truly is. (A working polymorphic LZW implementation in 10 lines? Try ''that'' in Java!)<br />
<br />
This is 'toy' code. Please don't try to use it to compress multi-GB of data. It has not been thoroughly checked for correctness, and I shudder to think what the time and space complexity would be like! However, it is enlightening and entertaining to see how many algorithms you can implement with a handful of lines...<br />
<br />
[[User:MathematicalOrchid|MathematicalOrchid]] 16:46, 15 February 2007 (UTC)<br />
<br />
== Main module ==<br />
<br />
<haskell><br />
module Compression where<br />
<br />
import Data.List<br />
import Data.Word -- In case you want it. (Not actually used anywhere!)<br />
<br />
chars = [' '..'~'] -- Because ' ' = 0x20 and '~' = 0x7E.<br />
<br />
<br />
-- Run-length encoding<br />
<br />
encode_RLE :: (Eq t) => [t] -> [(Int,t)]<br />
encode_RLE = map (\xs -> (length xs, head xs)) . groupBy (==)<br />
<br />
decode_RLE :: [(Int,t)] -> [t]<br />
decode_RLE = concatMap (uncurry replicate)<br />
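<br />
-- For example:<br />
--   encode_RLE "aaab"            == [(3,'a'),(1,'b')]<br />
--   decode_RLE [(3,'a'),(1,'b')] == "aaab"<br />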
<br />
<br />
-- Lempel-Ziv-Welch encoding<br />
<br />
encode_LZW :: (Eq t) => [t] -> [t] -> [Int]<br />
encode_LZW _ [] = []<br />
encode_LZW alphabet (x:xs) = work (make alphabet) [x] xs where<br />
  make = map (\x -> [x])<br />
  work table buffer []     = [maybe undefined id $ elemIndex buffer table]<br />
  work table buffer (x:xs) =<br />
    let new = buffer ++ [x]<br />
    in  case elemIndex new table of<br />
          Nothing -> maybe undefined id (elemIndex buffer table) : work (table ++ [new]) [x] xs<br />
          Just _  -> work table new xs<br />
<br />
decode_LZW :: [t] -> [Int] -> [t]<br />
decode_LZW _ [] = []<br />
decode_LZW alphabet xs = work (length alphabet) (make alphabet) [] xs where<br />
  make = map (\x -> [x])<br />
  work _ t _ [] = []<br />
  work n table prev (x:xs) = case x >= n of<br />
    True  -> error "underflow" -- THIS NEEDS FIXING!<br />
    False -> let out = table !! x<br />
             in  out ++<br />
                 if null prev<br />
                   then work n table out xs<br />
                   else work (n+1) (table ++ [prev ++ [head out]]) out xs<br />
<br />
</haskell><br />
<br />
Some examples are in order:<br />
<br />
<haskell><br />
> encode_RLE "AAAABBBBDDCCCCEEEGGFFFF"<br />
<br />
[(4,'A'),(4,'B'),(2,'D'),(4,'C'),(3,'E'),(2,'G'),(4,'F')]<br />
<br />
<br />
> decode_RLE [(4,'A'),(4,'B'),(2,'D'),(4,'C'),(3,'E'),(2,'G'),(4,'F')]<br />
<br />
"AAAABBBBDDCCCCEEEGGFFFF"<br />
<br />
<br />
> encode_LZW chars "This is just a simple test."<br />
<br />
[52,72,73,83,0,97,0,74,85,83,84,0,65,0,83,73,77,80,76,69,0,84,69,104,14]<br />
<br />
<br />
> decode_LZW chars [52,72,73,83,0,97,0,74,85,83,84,0,65,0,83,73,77,80,76,69,0,84,69,104,14]<br />
<br />
"This is just a simple test."<br />
</haskell><br />
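<br />
As a sanity check, each decoder should invert its encoder. Here is a sketch of the round-trip properties (the names <hask>prop_RLE</hask> and <hask>prop_LZW</hask> are invented here; they assume the module above is in scope and can be fed to QuickCheck):<br />
<br />
<haskell><br />
-- RLE round-trips on any list:<br />
prop_RLE :: String -> Bool<br />
prop_RLE s = decode_RLE (encode_RLE s) == s<br />
<br />
-- The LZW pair does *not* round-trip yet: repeated symbols can make<br />
-- the decoder ask for a table entry it has not built, e.g.<br />
--   encode_LZW chars "aaa" == [65,95]<br />
-- and decoding [65,95] hits the "underflow" branch above.<br />
prop_LZW :: String -> Bool<br />
prop_LZW s = decode_LZW chars (encode_LZW chars s) == s<br />
</haskell><br />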
<br />
== Huffman coding ==<br />
<br />
<haskell><br />
module Huffman<br />
(count, markov1, Tree, encode_huffman, decode_huffman)<br />
where<br />
<br />
import Data.List (nub)<br />
<br />
-- Markov-1 probability model...<br />
<br />
count :: (Eq t) => [t] -> [(t,Int)]<br />
count xs = map (\x -> (x, length $ filter (x ==) xs)) $ nub xs<br />
<br />
markov1 :: (Eq t) => [t] -> [(t,Double)]<br />
markov1 xs =<br />
  let n = fromIntegral $ length xs<br />
  in  map (\(x,c) -> (x, fromIntegral c / n)) $ count xs<br />
<br />
<br />
-- Build a Huffman tree...<br />
<br />
data Tree t = Leaf Double t | Branch Double (Tree t) (Tree t) deriving Show<br />
<br />
prob :: Tree t -> Double<br />
prob (Leaf p _) = p<br />
prob (Branch p _ _) = p<br />
<br />
get_tree :: [Tree t] -> (Tree t, [Tree t])<br />
get_tree (t:ts) = work t [] ts where<br />
  work x xs []     = (x,xs)<br />
  work x xs (y:ys)<br />
    | prob y < prob x = work y (x:xs) ys<br />
    | otherwise       = work x (y:xs) ys<br />
<br />
huffman_build :: [(t,Double)] -> Tree t<br />
huffman_build = build . map (\(t,p) -> Leaf p t) where<br />
  build [t] = t<br />
  build ts  =<br />
    let (t0,ts0) = get_tree ts<br />
        (t1,ts1) = get_tree ts0<br />
    in  build $ Branch (prob t0 + prob t1) t0 t1 : ts1<br />
<br />
<br />
-- Make codebook...<br />
<br />
data Bit = Zero | One deriving (Eq, Show)<br />
type Bits = [Bit]<br />
<br />
huffman_codebook :: Tree t -> [(t,Bits)]<br />
huffman_codebook = work [] where<br />
  work bs (Leaf _ x)       = [(x,bs)]<br />
  work bs (Branch _ t0 t1) = work (bs ++ [Zero]) t0 ++ work (bs ++ [One]) t1<br />
<br />
<br />
-- Do the coding!<br />
<br />
encode :: (Eq t) => [(t,Bits)] -> [t] -> Bits<br />
encode cb = concatMap (\x -> maybe undefined id $ lookup x cb)<br />
<br />
decode :: (Eq t) => Tree t -> Bits -> [t]<br />
decode t = work t t where<br />
  work _ (Leaf _ x) [] = [x]<br />
  work t (Leaf _ x) bs = x : work t t bs<br />
  work t (Branch _ t0 t1) (b:bs)<br />
    | b == Zero = work t t0 bs<br />
    | otherwise = work t t1 bs<br />
<br />
encode_huffman :: (Eq t) => [t] -> (Tree t, Bits)<br />
encode_huffman xs =<br />
  let t  = huffman_build $ markov1 xs<br />
      bs = encode (huffman_codebook t) xs<br />
  in  (t,bs)<br />
<br />
decode_huffman :: (Eq t) => Tree t -> Bits -> [t]<br />
decode_huffman = decode<br />
</haskell><br />
<br />
If anybody can make this code shorter / more elegant, feel free!<br />
<br />
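A worked micro-example, traced by hand against the definitions above (it assumes the Huffman module is loaded):<br />
<br />
<haskell><br />
-- count   "abbb" == [('a',1),('b',3)]<br />
-- markov1 "abbb" == [('a',0.25),('b',0.75)]<br />
-- The resulting two-leaf tree gives the one-bit codebook:<br />
-- huffman_codebook (huffman_build (markov1 "abbb"))<br />
--   == [('a',[Zero]),('b',[One])]<br />
</haskell><br />
<br />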
A short demo:<br />
<haskell><br />
> encode_huffman "this is just a simple test"<br />
<loads of data><br />
<br />
> decode_huffman (fst it) (snd it)<br />
"this is just a simple test"<br />
</haskell></div>MathematicalOrchid