https://wiki.haskell.org/api.php?action=feedcontributions&user=Conal&feedformat=atomHaskellWiki - User contributions [en]2017-05-26T00:17:32ZUser contributionsMediaWiki 1.19.14+dfsg-1https://wiki.haskell.org/GPUGPU2017-03-26T16:15:49Z<p>Conal: /* Links */ paper</p>
<hr />
<div>{{Stub}}<br />
<br />
== Introduction ==<br />
<br />
A graphics processing unit (GPU) is a processor designed for handling graphics data, but it can also be used to accelerate other kinds of computation.<br />
<br />
See also [[CUDA]].<br />
<br />
For more information, see [http://en.wikipedia.org/wiki/Graphics_processing_unit the Wikipedia article].<br />
<br />
<br />
== Software ==<br />
<br />
* [https://hackage.haskell.org/packages/#cat:GPU Category GPU] at Hackage<br />
<br />
* [https://hackage.haskell.org/package/accelerate-cuda accelerate-cuda library: Accelerate backend for NVIDIA GPUs]<br />
<br />
* [https://hackage.haskell.org/package/cuda cuda library: FFI binding to the CUDA interface for programming NVIDIA GPUs]<br />
<br />
* [https://hackage.haskell.org/package/GPipe GPipe library: A functional graphics API for programmable GPUs]<br />
<br />
* [https://hackage.haskell.org/package/shady-graphics shady-graphics library: Functional GPU programming - DSEL & compiler]<br />
<br />
* [https://hackage.haskell.org/package/Obsidian Obsidian library: Embedded language for GPU Programming]<br />
<br />
* [https://hackage.haskell.org/package/shady-gen shady-gen library: Functional GPU programming - DSEL & compiler]<br />
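Accelerate, for example, expresses GPU computations as collective operations over arrays, with an API shaped like ordinary list combinators. As a sketch only, here is a pure-list analogue of its canonical dot-product example (this is plain Haskell, not actual Accelerate code, which works on <code>Acc (Vector Float)</code> with <code>A.zipWith</code> and <code>A.fold</code> from <code>Data.Array.Accelerate</code>):

```haskell
-- Pure-list analogue of Accelerate's canonical dot-product example.
-- Real Accelerate code has the same combinator shape, but runs on the
-- GPU over Acc (Vector Float) instead of ordinary lists.
dotp :: [Float] -> [Float] -> Float
dotp xs ys = sum (zipWith (*) xs ys)

main :: IO ()
main = print (dotp [1, 2, 3] [4, 5, 6])  -- prints 32.0
```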
<br />
<br />
== Links ==<br />
<br />
* [http://conal.net/papers/Vertigo/ Programming Graphics Processors Functionally] (PDF)<br />
<br />
* [http://chimera.labs.oreilly.com/books/1230000000929/ch06.html Parallel and Concurrent Programming in Haskell, Chapter 6. GPU Programming with Accelerate]<br />
<br />
* [http://looprecur.com/blog/gpu-programming-in-haskell/ GPU Programming in Haskell] (video)<br />
<br />
* [http://community.haskell.org/~simonmar/slides/cadarache2012/7%20-%20accelerate.pdf GPU programming with Accelerate] (PDF slides)<br />
<br />
* [http://parfunk.blogspot.com/2012/05/how-to-write-hybrid-cpugpu-programs.html How to write hybrid CPU/GPU programs with Haskell] (blog)<br />
<br />
* [http://www.cse.chalmers.se/~joels/writing/GPUFL.pdf GPU Programming in Functional Languages, A Comparison of Haskell GPU Embedded Domain Specific Languages] (PDF)<br />
<br />
* [http://www.cse.chalmers.se/~joels/writing/dccpaper_obsidian.pdf Obsidian: GPU Programming in Haskell] (PDF)<br />
<br />
* [http://www.cse.unsw.edu.au/~keller/Papers/gpugen.pdf GPU Kernels as Data-Parallel Array Computations in Haskell] (PDF)<br />
<br />
* [http://www.wotug.org/papers/CPA-2012/Cole12a/Cole12a.pdf Beauty And The Beast: Exploiting GPUs In Haskell] (PDF)<br />
<br />
* [http://hiperfit.dk/pdf/FHPC12HIPERFIT.pdf Financial Software on GPUs: Between Haskell and Fortran] (PDF)<br />
<br />
* [http://www.youtube.com/watch?v=8DcpPWLkla4 Haskell on the GPU, Game of Life] (video)<br />
<br />
<br />
[[Category:Concurrency]]</div>Conalhttps://wiki.haskell.org/The_JavaScript_ProblemThe JavaScript Problem2015-02-25T16:57:20Z<p>Conal: /* TypeScript */ "better then" --> "better than"</p>
<hr />
<div>== The problem ==<br />
<br />
The JavaScript problem is two-fold and can be described thus:<br />
<br />
# '''JavaScript sucks.''' The depths to which JavaScript sucks are well-documented and well-understood. Its main faults are: the lack of a module system, weak typing, verbose function syntax<sup>1</sup>, late binding<sup>2</sup> (which has led to the creation of various static analysis tools<sup>3</sup>, with limited success<sup>4</sup>, and even a static type checker<sup>5</sup>), finicky equality and automatic conversion, <code>this</code> behaviour, and the lack of static types.<br />
<br />
# '''We need JavaScript.''' It is the platform the browser provides, so the desirable course is to use it for what it is good for, i.e. as a compilation target, without using the language ''per se''; many are working to achieve this, in varying forms. There are various ways to do it, but we ought to opt for compiling an existing language, Haskell, to JavaScript, because we do not have the time to learn or teach people a new language, to grow a new library set, or to rebuild the type checker and everything else that Haskell implementations already provide.<br />
<br />
== Mainstream alternatives ==<br />
<br />
=== CoffeeScript ===<br />
It makes many aspects of JavaScript sane and convenient, and you get a compilation check that verifies syntax; however, it still suffers greatly from weak typing.<br />
<br />
=== TypeScript ===<br />
<br />
Structural typing with traditional generics on top of JavaScript.<br />
Of all the alternatives, TypeScript's advantage is that it makes no changes to JavaScript: existing JavaScript code that passes jshint is valid TypeScript code. TypeScript also adds features from the latest JavaScript standards that it can compile down to older versions of JavaScript. TypeScript is by far the easiest JavaScript variant to learn. The downside is that one might desire a better language than just JavaScript + types.<br />
<br />
TypeScript defaults to dynamic typing when it cannot infer a type. However, it now has a <code>noImplicitAny</code> setting that turns any such implicit <code>any</code> into a compilation error.<br />
<br />
Structural sub-typing seems a good fit for JavaScript.<br />
Perhaps TypeScript's biggest problem is that null is a valid value of every type.<br />
<br />
== Haskell -> JS ==<br />
<br />
=== UHC ===<br />
<br />
Original blog post [https://github.com/atzedijkstra/javascript-runtime-for-UHC here.] Quickstart guide [http://chrisdone.com/posts/2012-01-06-uhc-javascript.html here.] A more in-depth discussion about the current capabilities of the backend [http://www.norm2782.com/improving-uhc-js-report.pdf here.] For an example of using the JavaScript compilation for a real app see this [http://alessandrovermeulen.me/2012/01/26/getting-rid-of-javascript-with-haskell/ blog post], there is also a port of wxAsteroids to the browser (see [http://uu-computerscience.github.io/js-asteroids/ github] or a [http://www.rubendegooijer.nl/posts/2013-04-06-haskell-oop.html blog post]). <br />
<br />
* Beta.<br />
* Only works for UHC, but promising. <br />
* UHC compiles enough of Hackage to be very useful.<br />
* Doesn't produce an explosion of code, seemingly.<br />
* Fairly substantial JS/DOM/W3C/HTML5 API.<br />
* Currently works.<br />
<br />
=== Fay ===<br />
<br />
Website: https://github.com/faylang/fay/wiki<br />
Discussion on Reddit: [http://www.reddit.com/r/haskell/comments/11yrpi/fay_slides/ Fay slides]. The package is on [http://hackage.haskell.org/package/fay Hackage]. Fetch with Git: <br />
git clone git://github.com/faylang/fay.git<br />
<br />
* Compiles a subset of Haskell, needs more<br />
* Currently works.<br />
<br />
=== GHCJS ===<br />
<br />
The GitHub page is [https://github.com/ghcjs/ghcjs here.]<br />
<br />
* Alpha.<br />
* Works.<br />
* Incomplete.<br />
* Nicely designed.<br />
* Compiles most pure Haskell libraries no problem.<br />
* FFI to JS works, and the author, sviperll, is a helpful guy.<br />
<br />
=== Haste ===<br />
<br />
[http://haste-lang.org Website], [http://hackage.haskell.org/package/haste-compiler Hackage]<br />
<br />
* Seamless, type-safe single program framework for client-server communication<br />
* Easy JavaScript interoperability<br />
* Generates small, fast, minifiable code.<br />
* Lightweight concurrency, Cabal integration, FFI and GHC extensions supported.<br />
* Cross platform.<br />
* Works.<br />
<br />
=== [[JMacro]] ===<br />
<br />
On the Haskell wiki (see above) and on [http://hackage.haskell.org/package/jmacro Hackage]<br />
<br />
* Mature, Maintained<br />
* Not Haskell, but an EDSL ''in'' Haskell nonetheless.<br />
* JMacro Panels provides a purely Haskell combinator library that generates dynamically updating html and js with asynchronous client-server communication.<br />
* Syntax is a fusion of Haskell and JavaScript<br />
* Untyped, but with syntactic correctness (at least) enforced at compile-time.<br />
* Embeddable through quasi-quoting<br />
* Support for various forms of code-generation<br />
<br />
=== Others ===<br />
<br />
* [https://github.com/johang88/haskellinjavascript Haskell interpreter in JS] — An interpreter. I haven't tried it, and the project is apparently dead.<br />
* YHC JS backend — Beta-ish. Apparently works, but I was unable to compile YHC, so haven't tried yet. I would be interested in anyone's experience using it. There's [http://www.haskell.org/haskellwiki/Yhc/Javascript an old wiki page] about Yhc's JavaScript support, but Yhc itself is a dead project.<br />
* Emscripten — not Haskell→JS, but compiles LLVM/Clang output to JavaScript. Could possibly be used for GHC→LLVM→JS compilation, which I tried and which works, but for it to actually run one would also have to compile the GHC runtime, which is not straightforward (to me). <br />
* HJScript — Beta. EDSL, not Haskell→JS. Works. Not ''very'' annoying to program in, but is JS semantics, not Haskell. Hackage package [http://hackage.haskell.org/package/HJScript here.]<br />
* Some have also tried writing a Haskell→JS compiler to make a more direct JS-aware translation of code (to not have huge code output a la GHCJS, YHC, Emscripten).<br />
* I've tried [http://lpaste.net/84342 compiling via JHC and Emscripten] a while ago, which worked, but IIRC the output was rather slow.<br />
* It's also possible to compile Hugs via Emscripten, which works (with minor tweaks), but again, it's too slow.<br />
<br />
<br />
== FP -> JS ==<br />
<br />
=== Ur/Web ===<br />
<br />
http://www.impredicative.com/ur/<br />
<br />
Perhaps the problem with Ur is that they are selling both a backend and a frontend together. Being a new language, the backend is lacking in libraries to be practical for many tasks. However, there is an RSS reader that is using Ur for the front-end and Haskell for the backend: https://bazqux.com/<br />
<br />
=== Opa ===<br />
<br />
Similar to Ur/Web, write one language for the front-end and backend: http://opalang.org/ I haven't tried it and don't know what its type system is like.<br />
<br />
=== OCaml ===<br />
<br />
The OCaml -> JS compiler is supposed to be good; it is now used at Facebook for an internal in-browser code editor.<br />
http://ocsigen.org/js_of_ocaml/<br />
<br />
=== GorillaScript ===<br />
<br />
http://ckknight.github.io/gorillascript/<br />
<br />
Immutable by default, global type inference, macros: what CoffeeScript should have been. The syntax is similar to CoffeeScript's.<br />
<br />
=== Roy ===<br />
<br />
[http://roy.brianmckenna.org/ Roy]: melds JavaScript semantics with functional languages. Experimental, but has many bread-and-butter Haskell features.<br />
Roy is written in JavaScript.<br />
<br />
=== PureScript ===<br />
<br />
[http://purescript.org/ PureScript] aims to provide a type system for a fragment of JavaScript. It includes many features which are similar to features of Haskell, such as type classes and RankNTypes, and its syntax mirrors that of Haskell very closely, but it is a fundamentally different language with the execution model of JavaScript. PureScript is written in Haskell. The project has a focus on the generation of efficient, readable JavaScript.<br />
<br />
=== Idris ===<br />
<br />
Idris is a compiled language with dependent types, implemented in Haskell, with backends for both LLVM and JavaScript. Experimental.<br />
<br />
* Full dependent types with dependent pattern matching, where clauses, the with rule, simple case expressions, and pattern-matching let and lambda bindings<br />
* Dependent records with projection and update<br />
* Type classes<br />
* Monad comprehensions<br />
* Syntactic conveniences for lists, tuples, dependent pairs, do notation, and idiom brackets<br />
* Indentation-significant syntax<br />
* Extensible syntax<br />
* Tactic based theorem proving (influenced by Coq)<br />
* Cumulative universes<br />
* Totality checking<br />
* Simple foreign function interface (to C)<br />
* Hugs style interactive environment<br />
<br />
Links:<br />
* [http://idris-lang.org/ Website idris-lang.org] <br />
* [[Dependent_type|Dependent type on the Haskell wiki]]<br />
* [http://en.wikipedia.org/wiki/Dependent_type WP (en) Dependent type] (with Idris listed under language comparison)<br />
<br />
== Links ==<br />
<br />
* [http://www.reddit.com/r/haskell/comments/28o7my/what_is_the_state_of_the_javascript_problem_what/ What is the state of "The JavaScript Problem"? What is the currently preferred way to solve in a real world application?] (reddit, 2014-06-20)<br />
* [https://github.com/yesodweb/yesod/wiki/JavaScript-Options Yesod - JavaScript Options]<br />
* [http://chrisdone.com/tags/javascript Chris Done Blog] - Tag: JavaScript<br />
<br />
== Footnotes ==<br />
<br />
# Its support for closures is commonly noted as being one of JavaScript’s redeeming features.<br />
# Early binding allows for static verification of the existence of method-signature pairs (e.g. v-tables). Late binding does not give the compiler (or an IDE) enough information for existence verification; the lookup has to happen at run-time.<br />
# There are several hinting libraries, which developers insist are indispensable tools when developing JavaScript seriously, such as JavaScript lint, JSLint, and JSure.<br />
# “Any non-trivial analysis is very difficult due to JavaScript’s dynamic nature.” — Berke Durak, Ph.D., author of jsure.<br />
# Google Inc. thought it necessary to develop a compiler, Google Closure, which does type-checking and limited inference.<br />
<br />
<br />
[[Category:Web|*]]</div>Conalhttps://wiki.haskell.org/BayHac2014BayHac20142014-06-13T17:52:19Z<p>Conal: /* Lightning Talks */ Fixed spelling of my last name</p>
<hr />
<div>__NOTOC__<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
[[Image:BayHac14_banner.png]]<br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
<center><br />
<big>Sign-up Here:<br /> [https://docs.google.com/forms/d/16QEHqAioGQeHHOlnMTEmjdgtO4YNN2_Qc-rbgLOFatU/viewform BayHac '14 Attendee Form]</big><br />
</center><br />
<br />
Special thanks to [http://engineering.imvu.com/ IMVU], [https://developers.google.com/open-source/ Google], Aleph Cloud and Twitter for sponsoring BayHac '14!<br />
<br />
----<br />
<br />
{|<br />
|When:<br />
|Friday, May 16th – Sunday, May 18th, 2014<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo]<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
<br />
<div style="text-align: right; float: right; width: 250px"><br />
[[Image:BayHac14 Poster Small.png|237px]]<br />
<br /><br />
<small><i>[https://drive.google.com/file/d/0B1eCSfs15HPRZjRIWWtCNmJjSms/edit?usp=sharing Full size PDF poster available]</i></small><br />
<br />
</div><br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Basic timing... details to be developed. Expect lightning talks, hacking, and other activities:<br />
<br />
{|<br />
|Friday, May 16th<br />
|3pm - 7pm<br />
|-<br />
|Saturday, May 17th<br />
|10am ~ 7pm<br />
|-<br />
|Sunday, May 18th<br />
|10am - 4pm<br />
|}<br />
<br />
== Classes ==<br />
=== Friday ===<br />
* 5:15pm - 6:15pm '''Programming with Pipes''' by Gabriel Gonzalez (Large Room) — [https://drive.google.com/folderview?id=0B60EFlB9qDBNMGhKNHY3NXZLbHM&usp=sharing slides]<br />
* 6:15pm - 7pm '''A Tutorial on Free Monads''' by Dan Piponi (Large Room) — [https://plus.google.com/u/0/events/cu5t5s2g14t4fqmapft5bcatqeg video]<br />
<br />
=== Saturday ===<br />
* 10am - 11am '''Beginning Haskell''' by Bob Ippolito (Small Room) - [http://bob.ippoli.to/beginning-haskell-bayhac-2014/ slides]<br />
* 11am - 12pm '''Haskell for Scala Programmers''' by Runar Bjarnason (Small Room)<br />
* 12 pm - 1pm '''Conquering Cabal''' by Jonathan Fischoff (Small Room)<br />
* 2pm - 3pm [http://johnmacfarlane.net/BayHac2014/ '''Pandoc for Haskell Hackers'''] by John MacFarlane (Small Room)<br />
* 3pm - 4pm [https://github.com/alephcloud/bayhac2014 '''Haste: Front End Web Development with Haskell'''] by Lars Kuhtz (Small Room)<br />
* 4pm - 5pm [http://www.haskell.org/haskellwiki/BayHac2014/Prolog '''From Prolog to Hindley-Milner'''] by Tikhon Jelvis (Small Room)<br />
* 5pm - 6pm [https://goo.gl/gMrmnv '''Yesod: Up and Running'''] by [[User:drb226|Dan Burton]] (Small Room) - [https://www.fpcomplete.com/user/DanBurton/yesod-beginner source]<br />
* 6pm - 7pm '''Lens: Inside and Out''' by Shachaf Ben-Kiki (Small Room)<br />
<br />
=== Sunday ===<br />
* 10am - 11:30am '''GHC iOS: Up and Running''' by Luke Iannini (Small Room)<br />
* 11:30am - 1pm [https://vimeo.com/95694918 '''Programming with Vinyl''' ] by Jonathan Sterling (Small Room) - [https://github.com/VinylRecords/BayHac2014-Talk/blob/master/Talk.pdf slides]<br />
* 1pm - 2pm '''Functional Reactive Programming with Elm''' by Evan Czaplicki (Large Room)<br />
* 2pm - 3pm [https://github.com/conal/talk-2014-bayhac-denotational-design/blob/master/README.md '''Denotational Design: from meanings to programs'''] by Conal Elliott (Large Room)<br />
* 3pm - 4pm [https://docs.google.com/presentation/d/1suMuLRo1xS5NxWn-L9lGHtVNpOH48F9ZnDyv5PyxEpI/edit?usp=sharing '''Getting Stuff Done with Haskell'''] by Greg Weber (Large Room) [https://app.usedox.com/d/rbczklzyvgczkfgh/Getting-it-Done-with-Haskell-pdf view presentation on Dox]<br />
<br />
== Saturday Demos and Experience Reports (Large Room) ==<br />
1pm - 2pm<br />
* '''Haskell at IMVU''' by Andy Friesen<br />
* '''Haskell at Aleph Cloud''' by Jeff Polakow<br />
* '''Haskell at Docmunch''' by Greg Weber<br />
* '''Haskell at Pingwell''' by Tim Sears<br />
* '''Tree.is demo''' by Luke Iannini<br />
<br />
== Lightning Talks ==<br />
<br />
* Aaron Wolf - '''Snowdrift.coop: FLO fundraising built with Yesod'''<br />
* Harold Carr - '''a Haskell Bitly Client using Template Haskell & Aeson'''<br />
* Tad Doxsee - [http://www.planit9.com/blog/learning_web_programming.pdf '''PlanIt9: Learning Web Programming via Haskell (pdf)''']<br />
* Paul Ivanov - '''IHaskell Notebook'''<br />
* Ben Burdotle - '''Cyclophone'''<br />
* John Millikin - '''The "options" package'''<br />
* Jon Sterling - '''Vinyl'''<br />
* Conal Elliott - '''Haskell to HW'''<br />
<br />
== Attendees == <br />
<br />
* Jonathan Fischoff - organizer<br />
* [http://www.ozonehouse.com/mark/ Mark Lentczner] - asst. organizer<br />
* [mailto:capn.freako@gmail.com David Banas] - amateur Haskeller<br />
* [mailto:michael@schmong.org Michael Litchard] - Haskeller<br />
* [http://conal.net Conal Elliott]<br />
* [http://jelv.is Tikhon Jelvis]<br />
<br />
== Projects ==<br />
# [http://www.haskell.org/haskellwiki/Treeviz TreeViz] - a computation breakdown visualization project hosted by [mailto:capn.freako@gmail.com David Banas]<br />
# [https://github.com/haskell/haskell-platform/tree/new-build Haskell Platform, the new build] - We are working on a new build system for all of Haskell Platform: Generating tarballs, installers, and even the web site from one single Shake based build tool. Lots to do! See Mark Lentczner.<br />
# [https://github.com/conal/lambda-ccc/ lambda-ccc] - a project for compiling Haskell to hardware. I'm doing this work for my day job, but the development is open, and the result will be shared freely. The project starts with a GHC plugin that transforms Core in order to generate a convenient-to-manipulate GADT representation of the original. Then convert to an <code>Arrow</code>-like algebraic interface that can be interpreted in various ways, including as circuits. See [mailto:conal@conal.net Conal Elliott].<br />
# [https://ghc.haskell.org/trac/ghc/ticket/8624#comment:12 see what Template Haskell generates]. For those interested in hacking on the GHC compiler, see Greg Weber<br />
# [https://snowdrift.coop Snowdrift.coop] — a community-engagement and fundraising platform strictly for Free/Libre/Open projects, built on Yesod; Head developer David Thomas and co-founder (and Haskell beginner) Aaron Wolf will be on hand. We have a wide range of projects at different levels and sizes to hack on.<br />
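The lambda-ccc project above converts Core to a GADT representation with an algebraic interface that can be interpreted in various ways. The general shape of that idea can be sketched as follows (illustrative only; these are not lambda-ccc's actual types): a small typed expression GADT with more than one interpreter, e.g. an evaluator and a printer standing in for a circuit generator.

```haskell
{-# LANGUAGE GADTs #-}

-- Illustrative typed expression language with multiple interpretations.
data E a where
  Lit :: Int -> E Int
  Add :: E Int -> E Int -> E Int
  Mul :: E Int -> E Int -> E Int

-- One interpretation: evaluate to a value.
eval :: E a -> a
eval (Lit n)   = n
eval (Add x y) = eval x + eval y
eval (Mul x y) = eval x * eval y

-- Another interpretation: render as text (a stand-in for, say,
-- generating a circuit description).
render :: E a -> String
render (Lit n)   = show n
render (Add x y) = "(" ++ render x ++ " + " ++ render y ++ ")"
render (Mul x y) = "(" ++ render x ++ " * " ++ render y ++ ")"

main :: IO ()
main = do
  print (eval (Add (Lit 2) (Mul (Lit 3) (Lit 4))))      -- prints 14
  putStrLn (render (Add (Lit 2) (Mul (Lit 3) (Lit 4)))) -- prints (2 + (3 * 4))
```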
<br />
== IRC channel ==<br />
<br />
We'll be hanging out on #bayhac on FreeNode.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2014/DenotationalDesignBayHac2014/DenotationalDesign2014-05-23T05:35:42Z<p>Conal: Now point to GitHub README.</p>
<hr />
<div>Speaker: [http://conal.net Conal Elliott]<br />
<br />
Title: ''Denotational Design: from meanings to programs''<br />
<br />
For slides, abstract and remarks on &quot;why continuous time matters&quot;, see [https://github.com/conal/talk-2014-bayhac-denotational-design/blob/master/README.md this talk's README on GitHub].</div>Conalhttps://wiki.haskell.org/BayHac2014/DenotationalDesignBayHac2014/DenotationalDesign2014-05-23T05:28:08Z<p>Conal: Tweak remarks on integration</p>
<hr />
<div>Speaker: [http://conal.net Conal Elliott]<br />
<br />
Title: ''Denotational Design: from meanings to programs''<br />
<br />
[http://conal.net/talks/bayhac-2014.pdf Slides now available (PDF)].<br />
<br />
''Edit of May 19, 2014'': Added remarks on continuous time below.<br />
<br />
== Abstract ==<br />
<br />
In this talk, I'll share a methodology that I have applied many times over the last 20+ years when designing high-level libraries for functional programming. Functional libraries are usually organized around small collections of domain-specific data types together with operations for forming and combining values of those types. When done well, the result has the elegance and precision of algebra on numbers while capturing much larger and more interesting ideas.<br />
<br />
A library has two aspects with which all programmers are familiar: the programming interface (API) and its implementation. We want the implementation to be efficient and ''correct'', since it's (usually) not enough to select arbitrary code for the implementation. To get clear about what constitutes correctness, and avoid fooling ourselves with vague, hand-waving statements, we'll want a precise specification, independent of any implementation. Fortunately, there is an elegant means of specification available to functional programmers: give a (preferably simple) mathematical ''meaning'' (model) for the types provided by a library, and then define each operation as if it worked on meanings rather than on representations. This practice, which goes by the fancy name of &quot;denotational semantics&quot; (invented to explain programming languages rigorously), is very like functional programming itself, and so can be easily assimilated by functional programmers.<br />
<br />
Rather than using semantics to ''explain'' an existing library (or language), we can instead use it to ''design'' one. It is often much easier and more enlightening to define a denotation than an implementation, because it does not have any constraints or distractions of efficiency, or even of executability. As an example, this style gave rise to [http://stackoverflow.com/questions/5875929/specification-for-a-functional-reactive-programming-language/5878525#5878525 Functional Reactive Programming (FRP)], where the semantic model of &quot;behaviors&quot; (dynamic values) is simply functions of infinite, continuous time. Similarly, the [http://conal.net/Pan Pan system] applies this same idea to space instead of time, defining the semantics of an &quot;image&quot; to be a function over infinite, continuous 2D space. Such meanings effectively and precisely capture the essence of a library's intent without the distraction of operational details. By doing so, these meanings offer library users a simpler but precise understanding of a library, while giving library developers an unambiguous definition of exactly ''what'' specification they must implement, while leaving a great deal of room for creativity about ''how''. I call this methodology &quot;Denotational Design&quot;, because it is design focused on meaning (denotation).<br />
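The Pan-style model mentioned above can be written down directly. A minimal sketch (names are illustrative, not Pan's actual API): an image is simply a function of continuous 2D space, and operations are defined on those meanings.

```haskell
-- Denotational sketch: an image is a function over infinite,
-- continuous 2D space.
type Image a = (Double, Double) -> a

-- The unit disk as a Boolean image.
disk :: Image Bool
disk (x, y) = x*x + y*y <= 1

-- Spatial transformation is just pre-composition on the meaning.
translate :: (Double, Double) -> Image a -> Image a
translate (dx, dy) im (x, y) = im (x - dx, y - dy)

main :: IO ()
main = do
  print (disk (0, 0))                   -- prints True
  print (translate (2, 0) disk (2, 0))  -- prints True
  print (translate (2, 0) disk (0, 0))  -- prints False
```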
<br />
The talk and workshop will present the principles and practice of Denotational Design through examples. I will use Haskell, where purity and type classes are especially useful to guide the process. Once understood, the techniques are transferable to other functional languages as well. If you'd like a sneak peek at the principles and applications, see the paper [http://conal.net/papers/type-class-morphisms/ ''Denotational design with type class morphisms''] and some [http://conal.net/blog/tag/type-class-morphism related blog articles].<br />
<br />
== Why continuous time matters ==<br />
<br />
Some follow-up remarks, based on questions &amp; discussion during and after the talk:<br />
<br />
* Continuous time matters for exactly the same reason that laziness (non-strictness) matters, namely modularity. (See [http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html ''Why Functional Programming Matters''].) Modularity comes from providing information while making as few restrictive assumptions as possible about how that information can be used. Laziness lets us build infinite data structures, thus not assuming what finite subset any particular usage will access. By also not assuming the ''frequency'' of sampling (even that it’s constant), continuous time and space place even fewer restrictions on what finite subset of information will be accessed, and are thus even more modular.<br />
* Continuous time allows integration and differentiation to be expressed directly and meaningfully. In discrete-time systems, one instead has to clutter programs by including numeric approximation algorithms for integration and differentiation, usually via very badly behaved algorithms such as Euler integration and naive finite differencing. For instance, in games, it’s typical to have a character move based on user-chosen direction, plus forces like gravity &amp; drag. The easiest path for an implementor is to use Euler integration (<code>(x,y) := (x + dt * vx, y + dt * vy); (vx,vy) := (vx + dt * ax, vy + dt * ay)</code>) at the visual sampling rate. In comparison with a direct continuous specification via integration, the result is necessarily inaccurate, and the program still fails to say what it means (and instead says one way to approximate it). Switching to a more accurate and efficient algorithm would mean further complicating an application with significant operational distractions (especially if the sampling rate changes). In contrast, even in [http://conal.net/tbag/ TBAG] (an early ’90s predecessor to the FRP systems [http://conal.net/papers/ActiveVRML/ ActiveVRML] and [http://conal.net/papers/icfp97/ Fran]), thanks to continuous time we were able to express examples in a very natural way as systems of ODEs (expressed via mutually recursive integrals) and then solve them automatically, using a fourth-order Runge-Kutta with adaptive step size determination. The result: a simple declarative specification and an efficient, accurate implementation. Importantly, the numeric integration sampling pattern and the visual sampling rates were entirely decoupled, so each could be chosen well.<br />
* Multiple discrete input sources typically enter the system at different rates. Combining them in discrete-time systems thus leads to awkward issues of alignment. With continuous behaviors/signals, there are no rates to be out of sync. In other words, the alignment is done automatically (to infinite resolution) as soon as the discrete streams enter the system. Afterward, combining them is effortless and easily given a precise description.<br />
* With continuous time, implementations can intelligently adapt sampling rates for accuracy and efficiency. Within a single execution, there can be ''many'' different sampling rates, for intelligent allocation of effort where it’s most helpful. For instance, slowly-changing signals can be sampled (discretized for output) less frequently than rapidly-changing signals. In contrast, discrete-time systems commit prematurely (and often arbitrarily) to sampling rates, usually a single uniform rate chosen before usage patterns are known. Uniform rates waste computation on some signals while under-sampling others.</div>Conalhttps://wiki.haskell.org/BayHac2014/DenotationalDesignBayHac2014/DenotationalDesign2014-05-19T16:54:33Z<p>Conal: /* Why continuous time matters */ Fixed previous oops (pasted markdown)</p>
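For concreteness, the Euler stepping quoted in the integration bullet above can be written out as a short, self-contained Haskell sketch (purely illustrative; none of this code is from TBAG, ActiveVRML, or Fran). Note how the fixed step <code>dt</code>, an operational detail, is baked into the very definition of the motion:

```haskell
-- Hypothetical sketch of explicit Euler stepping for 2D motion.

type R = Double

-- One Euler step for position and velocity under a constant acceleration,
-- directly transcribing (x,y) := (x + dt*vx, y + dt*vy);
-- (vx,vy) := (vx + dt*ax, vy + dt*ay).
eulerStep :: R -> ((R, R), (R, R)) -> (R, R) -> ((R, R), (R, R))
eulerStep dt ((x, y), (vx, vy)) (ax, ay) =
  ((x + dt * vx, y + dt * vy), (vx + dt * ax, vy + dt * ay))

-- Running the stepper bakes a fixed sampling rate (dt) into the program:
-- the specification (position as a double integral of acceleration) is gone,
-- and only one crude way of approximating it remains.
simulate :: R -> Int -> (R, R) -> ((R, R), (R, R)) -> ((R, R), (R, R))
simulate dt n acc s = iterate (\st -> eulerStep dt st acc) s !! n
```

A continuous-time formulation instead states the meaning directly and leaves step-size choices to the implementation, e.g. an adaptive Runge-Kutta solver, decoupled from the visual sampling rate.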
<hr />
<div>Speaker: [http://conal.net Conal Elliott]<br />
<br />
Title: ''Denotational Design: from meanings to programs''<br />
<br />
[http://conal.net/talks/bayhac-2014.pdf Slides now available (PDF)].<br />
<br />
''Edit of May 19, 2014'': Added remarks on continuous time below.<br />
<br />
== Abstract ==<br />
<br />
In this talk, I'll share a methodology that I have applied many times over the last 20+ years when designing high-level libraries for functional programming. Functional libraries are usually organized around small collections of domain-specific data types together with operations for forming and combining values of those types. When done well, the result has the elegance and precision of algebra on numbers while capturing much larger and more interesting ideas.<br />
<br />
A library has two aspects with which all programmers are familiar: the programming interface (API) and its implementation. We want the implementation to be efficient and ''correct'', since it's (usually) not enough to select arbitrary code for the implementation. To get clear about what constitutes correctness, and avoid fooling ourselves with vague, hand-waving statements, we'll want a precise specification, independent of any implementation. Fortunately, there is an elegant means of specification available to functional programmers: give a (preferably simple) mathematical ''meaning'' (model) for the types provided by a library, and then define each operation as if it worked on meanings rather than on representations. This practice, which goes by the fancy name of &quot;denotational semantics&quot; (invented to explain programming languages rigorously), is very like functional programming itself, and so can be easily assimilated by functional programmers.<br />
<br />
Rather than using semantics to ''explain'' an existing library (or language), we can instead use it to ''design'' one. It is often much easier and more enlightening to define a denotation than an implementation, because it does not have any constraints or distractions of efficiency, or even of executability. As an example, this style gave rise to [http://stackoverflow.com/questions/5875929/specification-for-a-functional-reactive-programming-language/5878525#5878525 Functional Reactive Programming (FRP)], where the semantic model of &quot;behaviors&quot; (dynamic values) is simply functions of infinite, continuous time. Similarly, the [http://conal.net/Pan Pan system] applies this same idea to space instead of time, defining the semantics of an &quot;image&quot; to be a function over infinite, continuous 2D space. Such meanings effectively and precisely capture the essence of a library's intent without the distraction of operational details. By doing so, these meanings offer library users a simpler but precise understanding of a library, while giving library developers an unambiguous definition of exactly ''what'' specification they must implement, while leaving a great deal of room for creativity about ''how''. I call this methodology &quot;Denotational Design&quot;, because it is design focused on meaning (denotation).<br />
<br />
The talk and workshop will present the principles and practice of Denotational Design through examples. I will use Haskell, where purity and type classes are especially useful to guide the process. Once understood, the techniques are transferable to other functional languages as well. If you'd like a sneak peek at the principles and applications, see the paper [http://conal.net/papers/type-class-morphisms/ ''Denotational design with type class morphisms''] and some [http://conal.net/blog/tag/type-class-morphism related blog articles].<br />
<br />
== Why continuous time matters ==<br />
<br />
Some follow-up remarks, based on questions &amp; discussion during and after the talk:<br />
<br />
* Continuous time matters for exactly the same reason that laziness (non-strictness) matters, namely modularity. (See [http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html ''Why Functional Programming Matters''].) Modularity comes from providing information while making as few restrictive assumptions as possible about how that information can be used. Laziness lets us build infinite data structures, thus not assuming what finite subset any particular usage will access. By also not assuming the ''frequency'' of sampling (even that it's constant), continuous time (and space) places even fewer restrictions on what finite subset of information will be accessed, and is thus even more modular.<br />
* Continuous time allows integration and differentiation to be expressed directly and meaningfully. In discrete-time systems, one instead has to clutter the program by including numeric approximation algorithms for integration and differentiation, usually via badly behaved algorithms such as Euler integration and naïve finite differencing. The result is inaccurate, and the program fails to say what it means (and instead says one way to approximate it). Switching to a better algorithm means further complicating an application with operational distractions. In contrast, even in [http://conal.net/tbag/ TBAG] (an early '90s predecessor to the FRP systems ActiveVRML and Fran), thanks to continuous time we were able to express examples in a very natural way as systems of ODEs (expressed via mutually recursive continuous integrals) and then solve them automatically, using a fourth-order Runge-Kutta with adaptive step size determination.<br />
* Multiple discrete input sources typically enter the system at different rates. Combining them in discrete-time systems thus leads to awkward issues of alignment. With continuous behaviors/signals, there are no rates to be out of sync. In other words, the alignment is done automatically (to infinite resolution) as soon as the discrete streams enter the system. Afterward, combining them is effortless and easily given a precise description.<br />
* With continuous time, implementations can intelligently adapt sampling rates for accuracy and efficiency. For instance, slowly-changing signals can be sampled (discretized for output) less frequently than rapidly-changing signals. In contrast, discrete-time systems prematurely (and often arbitrarily) commit to sampling rates before knowing how the signals will be used, and usually to a single sampling rate. Uniform rates waste computation for some signals while under-sampling others.</div>Conalhttps://wiki.haskell.org/BayHac2014/DenotationalDesignBayHac2014/DenotationalDesign2014-05-19T04:21:08Z<p>Conal: /* Abstract */ slides link</p>
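The points above can be made concrete with a toy model (a hypothetical sketch with assumed names, not the Fran or TBAG code): a behavior is simply a function of continuous time, so combining behaviors needs no rate alignment, and any sampling strategy is chosen only at output time.

```haskell
-- Toy continuous-time model (names Time, Behavior, lift2, integral are
-- illustrative assumptions, not a real FRP library's API).
type Time = Double
type Behavior a = Time -> a

-- Combining behaviors is pointwise application; there are no sampling
-- rates to keep in sync, so "alignment" is automatic.
lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f x y = \t -> f (x t) (y t)

-- Integration is defined on the model itself.  This crude fixed-step
-- left Riemann sum is just one of many possible implementations; an
-- adaptive solver could be substituted without changing the meaning.
integral :: Double -> Behavior Double -> Behavior Double
integral dt f = \t -> dt * sum [f s | s <- [0, dt .. t - dt]]
```

Because `integral` approximates a well-defined mathematical value, swapping in a better solver changes only accuracy, not what the program says.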
<hr />
<div>Speaker: [http://conal.net Conal Elliott]<br />
<br />
Title: ''Denotational Design: from meanings to programs''<br />
<br />
[http://conal.net/talks/bayhac-2014.pdf Slides now available (PDF)].<br />
<br />
== Abstract ==<br />
<br />
In this talk, I'll share a methodology that I have applied many times over the last 20+ years when designing high-level libraries for functional programming. Functional libraries are usually organized around small collections of domain-specific data types together with operations for forming and combining values of those types. When done well, the result has the elegance and precision of algebra on numbers while capturing much larger and more interesting ideas.<br />
<br />
A library has two aspects with which all programmers are familiar: the programming interface (API) and its implementation. We want the implementation to be efficient and ''correct'', since it's (usually) not enough to select arbitrary code for the implementation. To get clear about what constitutes correctness, and avoid fooling ourselves with vague, hand-waving statements, we'll want a precise specification, independent of any implementation. Fortunately, there is an elegant means of specification available to functional programmers: give a (preferably simple) mathematical ''meaning'' (model) for the types provided by a library, and then define each operation as if it worked on meanings rather than on representations. This practice, which goes by the fancy name of &quot;denotational semantics&quot; (invented to explain programming languages rigorously), is very like functional programming itself, and so can be easily assimilated by functional programmers.<br />
<br />
Rather than using semantics to ''explain'' an existing library (or language), we can instead use it to ''design'' one. It is often much easier and more enlightening to define a denotation than an implementation, because it does not have any constraints or distractions of efficiency, or even of executability. As an example, this style gave rise to [http://stackoverflow.com/questions/5875929/specification-for-a-functional-reactive-programming-language/5878525#5878525 Functional Reactive Programming (FRP)], where the semantic model of &quot;behaviors&quot; (dynamic values) is simply functions of infinite, continuous time. Similarly, the [http://conal.net/Pan Pan system] applies this same idea to space instead of time, defining the semantics of an &quot;image&quot; to be a function over infinite, continuous 2D space. Such meanings effectively and precisely capture the essence of a library's intent without the distraction of operational details. By doing so, these meanings offer library users a simpler but precise understanding of a library, while giving library developers an unambiguous definition of exactly ''what'' specification they must implement, while leaving a great deal of room for creativity about ''how''. I call this methodology &quot;Denotational Design&quot;, because it is design focused on meaning (denotation).<br />
<br />
The talk and workshop will present the principles and practice of Denotational Design through examples. I will use Haskell, where purity and type classes are especially useful to guide the process. Once understood, the techniques are transferable to other functional languages as well. If you'd like a sneak peek at the principles and applications, see the paper [http://conal.net/papers/type-class-morphisms/ ''Denotational design with type class morphisms''] and some [http://conal.net/blog/tag/type-class-morphism related blog articles].</div>Conalhttps://wiki.haskell.org/BayHac2014BayHac20142014-05-15T18:35:18Z<p>Conal: /* Projects */ punctuation</p>
<hr />
<div>__NOTOC__<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
[[Image:BayHac14_banner.png]]<br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
<center><br />
<big>Sign-up Here:<br /> [https://docs.google.com/forms/d/16QEHqAioGQeHHOlnMTEmjdgtO4YNN2_Qc-rbgLOFatU/viewform BayHac '14 Attendee Form]</big><br />
</center><br />
<br />
Special thanks to [http://engineering.imvu.com/ IMVU], [https://developers.google.com/open-source/ Google], Aleph Cloud and Twitter for sponsoring BayHac '14!<br />
<br />
----<br />
<br />
{|<br />
|When:<br />
|Friday, May 16th – Sunday, May 18th, 2014<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo]<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
<br />
<div style="text-align: right; float: right; width: 250px"><br />
[[Image:BayHac14 Poster Small.png|237px]]<br />
<br /><br />
<small><i>[https://drive.google.com/file/d/0B1eCSfs15HPRZjRIWWtCNmJjSms/edit?usp=sharing Full size PDF poster available]</i></small><br />
<br />
</div><br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Basic timing... details to be developed. Expect lightning talks, hacking, and other activities:<br />
<br />
{|<br />
|Friday, May 16th<br />
|3pm - 7pm<br />
|-<br />
|Saturday, May 17th<br />
|10am ~ 7pm<br />
|-<br />
|Sunday, May 18th<br />
|10am - 4pm<br />
|}<br />
<br />
== Classes ==<br />
=== Friday ===<br />
* 5:15pm - 6:15pm '''Programming with Pipes''' by Gabriel Gonzalez<br />
* 6:15pm - 7pm '''A Tutorial on Free Monads''' by Dan Piponi<br />
<br />
=== Saturday ===<br />
* 10am - 11am '''Beginning Haskell''' by Bob Ippolito<br />
* 11am - 12pm '''Haskell for Scala Programmers''' by Runar Bjarnason<br />
* 12 pm - 1pm '''Conquering Cabal''' by Jonathan Fischoff<br />
* 2pm - 3pm '''Pandoc''' by John MacFarlane<br />
* 3pm - 4pm '''Haste: Front End Web Development with Haskell''' by Lars Kuhtz<br />
* 4pm - 5pm '''From Prolog to Hindley-Milner''' by Tikhon Jelvis<br />
* 5pm - 6pm '''Yesod: Up and Running''' by Dan Burton<br />
* 6pm - 7pm '''Lens: Inside and Out''' by Shachaf Ben-Kiki<br />
<br />
=== Sunday ===<br />
* 10am - 11:30am '''GHC iOS: Up and Running''' by Luke Iannini<br />
* 11:30am - 1pm '''Programming with Vinyl''' by Jonathan Sterling<br />
* 1pm - 2pm '''Functional Reactive Programming with Elm''' by Evan Czaplicki<br />
* 2pm - 3pm [http://www.haskell.org/haskellwiki/BayHac2014/DenotationalDesign '''Denotational Design: from meanings to programs'''] by Conal Elliott<br />
* 3pm - 4pm '''Getting Stuff Done with Haskell''' by Greg Weber<br />
<br />
== Saturday Demos and Experience Reports ==<br />
1pm - 2pm<br />
* '''Haskell at IMVU''' by Andy Friesen<br />
* '''Haskell at Aleph Cloud''' by Jeff Polakow<br />
* '''Haskell at Docmunch''' by Greg Weber<br />
* '''Haskell at Pingwell''' by Tim Sears<br />
* '''Tree.is demo''' by Luke Iannini<br />
<br />
== Lightning Talks ==<br />
<br />
''determined at the event''<br />
<br />
== Attendees == <br />
<br />
* Jonathan Fischoff - organizer<br />
* [http://www.ozonehouse.com/mark/ Mark Lentczner] - asst. organizer<br />
* [mailto:capn.freako@gmail.com David Banas] - amateur Haskeller<br />
* [mailto:michael@schmong.org Michael Litchard] - Haskeller<br />
* [http://conal.net Conal Elliott]<br />
<br />
== Projects ==<br />
# [http://www.haskell.org/haskellwiki/Treeviz TreeViz] - a computation breakdown visualization project hosted by [mailto:capn.freako@gmail.com David Banas]<br />
# [https://github.com/haskell/haskell-platform/tree/new-build Haskell Platform, the new build] - We are working on a new build system for all of Haskell Platform: Generating tarballs, installers, and even the web site from one single Shake based build tool. Lots to do! See Mark Lentczner.<br />
# [https://github.com/conal/lambda-ccc/ lambda-ccc] - a project for compiling Haskell to hardware. I'm doing this work for my day job, but the development is open, and the result will be shared freely. The project starts with a GHC plugin that transforms Core into a convenient-to-manipulate GADT representation of the original, which is then converted to an <code>Arrow</code>-like algebraic interface that can be interpreted in various ways, including as circuits. See [mailto:conal@conal.net Conal Elliott].<br />
<br />
== IRC channel ==<br />
<br />
We'll be hanging out on #bayhac on FreeNode.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2014BayHac20142014-05-15T18:27:29Z<p>Conal: /* Projects */ lambda-ccc</p>
<hr />
<div>__NOTOC__<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
[[Image:BayHac14_banner.png]]<br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
<center><br />
<big>Sign-up Here:<br /> [https://docs.google.com/forms/d/16QEHqAioGQeHHOlnMTEmjdgtO4YNN2_Qc-rbgLOFatU/viewform BayHac '14 Attendee Form]</big><br />
</center><br />
<br />
Special thanks to [http://engineering.imvu.com/ IMVU], [https://developers.google.com/open-source/ Google], Aleph Cloud and Twitter for sponsoring BayHac '14!<br />
<br />
----<br />
<br />
{|<br />
|When:<br />
|Friday, May 16th – Sunday, May 18th, 2014<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo]<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
<br />
<div style="text-align: right; float: right; width: 250px"><br />
[[Image:BayHac14 Poster Small.png|237px]]<br />
<br /><br />
<small><i>[https://drive.google.com/file/d/0B1eCSfs15HPRZjRIWWtCNmJjSms/edit?usp=sharing Full size PDF poster available]</i></small><br />
<br />
</div><br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Basic timing... details to be developed. Expect lightning talks, hacking, and other activities:<br />
<br />
{|<br />
|Friday, May 16th<br />
|3pm - 7pm<br />
|-<br />
|Saturday, May 17th<br />
|10am ~ 7pm<br />
|-<br />
|Sunday, May 18th<br />
|10am - 4pm<br />
|}<br />
<br />
== Classes ==<br />
=== Friday ===<br />
* 5:15pm - 6:15pm '''Programming with Pipes''' by Gabriel Gonzalez<br />
* 6:15pm - 7pm '''A Tutorial on Free Monads''' by Dan Piponi<br />
<br />
=== Saturday ===<br />
* 10am - 11am '''Beginning Haskell''' by Bob Ippolito<br />
* 11am - 12pm '''Haskell for Scala Programmers''' by Runar Bjarnason<br />
* 12 pm - 1pm '''Conquering Cabal''' by Jonathan Fischoff<br />
* 2pm - 3pm '''Pandoc''' by John MacFarlane<br />
* 3pm - 4pm '''Haste: Front End Web Development with Haskell''' by Lars Kuhtz<br />
* 4pm - 5pm '''From Prolog to Hindley-Milner''' by Tikhon Jelvis<br />
* 5pm - 6pm '''Yesod: Up and Running''' by Dan Burton<br />
* 6pm - 7pm '''Lens: Inside and Out''' by Shachaf Ben-Kiki<br />
<br />
=== Sunday ===<br />
* 10am - 11:30am '''GHC iOS: Up and Running''' by Luke Iannini<br />
* 11:30am - 1pm '''Programming with Vinyl''' by Jonathan Sterling<br />
* 1pm - 2pm '''Functional Reactive Programming with Elm''' by Evan Czaplicki<br />
* 2pm - 3pm [http://www.haskell.org/haskellwiki/BayHac2014/DenotationalDesign '''Denotational Design: from meanings to programs'''] by Conal Elliott<br />
* 3pm - 4pm '''Getting Stuff Done with Haskell''' by Greg Weber<br />
<br />
== Saturday Demos and Experience Reports ==<br />
1pm - 2pm<br />
* '''Haskell at IMVU''' by Andy Friesen<br />
* '''Haskell at Aleph Cloud''' by Jeff Polakow<br />
* '''Haskell at Docmunch''' by Greg Weber<br />
* '''Haskell at Pingwell''' by Tim Sears<br />
* '''Tree.is demo''' by Luke Iannini<br />
<br />
== Lightning Talks ==<br />
<br />
''determined at the event''<br />
<br />
== Attendees == <br />
<br />
* Jonathan Fischoff - organizer<br />
* [http://www.ozonehouse.com/mark/ Mark Lentczner] - asst. organizer<br />
* [mailto:capn.freako@gmail.com David Banas] - amateur Haskeller<br />
* [mailto:michael@schmong.org Michael Litchard] - Haskeller<br />
* [http://conal.net Conal Elliott]<br />
<br />
== Projects ==<br />
# [http://www.haskell.org/haskellwiki/Treeviz TreeViz] - a computation breakdown visualization project hosted by [mailto:capn.freako@gmail.com David Banas]<br />
# [https://github.com/haskell/haskell-platform/tree/new-build Haskell Platform, the new build] - We are working on a new build system for all of Haskell Platform: Generating tarballs, installers, and even the web site from one single Shake based build tool. Lots to do! See Mark Lentczner.<br />
# [https://github.com/conal/lambda-ccc/ lambda-ccc] - a project for compiling Haskell to hardware. I'm doing this work for my day job, but the development is open, and the result will be shared freely. The project starts with a GHC plugin that transforms Core into a convenient-to-manipulate GADT representation of the original, which is then converted to an <code>Arrow</code>-like algebraic interface that can be interpreted in various ways, including as circuits. See [mailto:conal@conal.net Conal Elliott].<br />
<br />
== IRC channel ==<br />
<br />
We'll be hanging out on #bayhac on FreeNode.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2014BayHac20142014-05-12T19:45:40Z<p>Conal: /* Sunday */ Abstract link and full title</p>
<hr />
<div>__NOTOC__<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
[[Image:BayHac14_banner.png]]<br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
<center><br />
<big>Sign-up Here:<br /> [https://docs.google.com/forms/d/16QEHqAioGQeHHOlnMTEmjdgtO4YNN2_Qc-rbgLOFatU/viewform BayHac '14 Attendee Form]</big><br />
</center><br />
<br />
Special thanks to [http://engineering.imvu.com/ IMVU], [https://developers.google.com/open-source/ Google], Aleph Cloud and Twitter for sponsoring BayHac '14!<br />
<br />
----<br />
<br />
{|<br />
|When:<br />
|Friday, May 16th – Sunday, May 18th, 2014<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo]<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
<br />
<div style="text-align: right; float: right; width: 250px"><br />
[[Image:BayHac14 Poster Small.png|237px]]<br />
<br /><br />
<small><i>[https://drive.google.com/file/d/0B1eCSfs15HPRZjRIWWtCNmJjSms/edit?usp=sharing Full size PDF poster available]</i></small><br />
<br />
</div><br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Basic timing... details to be developed. Expect lightning talks, hacking, and other activities:<br />
<br />
{|<br />
|Friday, May 16th<br />
|3pm - 7pm<br />
|-<br />
|Saturday, May 17th<br />
|10am ~ 7pm<br />
|-<br />
|Sunday, May 18th<br />
|10am - 4pm<br />
|}<br />
<br />
== Classes ==<br />
=== Friday ===<br />
* 5:15pm - 6:15pm '''Programming with Pipes''' by Gabriel Gonzalez<br />
* 6:15pm - 7pm '''A Tutorial on Free Monads''' by Dan Piponi<br />
<br />
=== Saturday ===<br />
* 10am - 11am '''Beginning Haskell''' by Bob Ippolito<br />
* 11am - 12pm '''Haskell for Scala Programmers''' by Runar Bjarnason<br />
* 12 pm - 1pm '''Conquering Cabal''' by Jonathan Fischoff<br />
* 2pm - 3pm '''Pandoc''' by John MacFarlane<br />
* 3pm - 4pm '''Haste: Front End Web Development with Haskell''' by Lars Kuhtz<br />
* 4pm - 5pm '''From Prolog to Hindley-Milner''' by Tikhon Jelvis<br />
* 5pm - 6pm '''Yesod: Up and Running''' by Dan Burton<br />
* 6pm - 7pm '''Lens: Inside and Out''' by Shachaf Ben-Kiki<br />
<br />
=== Sunday ===<br />
* 10am - 11:30am '''GHC iOS: Up and Running''' by Luke Iannini<br />
* 11:30am - 1pm '''Programming with Vinyl''' by Jonathan Sterling<br />
* 1pm - 2pm '''Functional Reactive Programming with Elm''' by Evan Czaplicki<br />
* 2pm - 3pm [http://www.haskell.org/haskellwiki/BayHac2014/DenotationalDesign '''Denotational Design: from meanings to programs'''] by Conal Elliott<br />
* 3pm - 4pm '''Getting Stuff Done with Haskell''' by Greg Weber<br />
<br />
== Saturday Demos and Experience Reports ==<br />
1pm - 2pm<br />
* '''Haskell at IMVU''' by Andy Friesen<br />
* '''Haskell at Aleph Cloud''' by Jeff Polakow<br />
* '''Haskell at Docmunch''' by Greg Weber<br />
* '''Haskell at Pingwell''' by Tim Sears<br />
* '''Tree.is demo''' by Luke Iannini<br />
<br />
== Lightning Talks ==<br />
<br />
''determined at the event''<br />
<br />
== Attendees == <br />
<br />
* Jonathan Fischoff - organizer<br />
* [http://www.ozonehouse.com/mark/ Mark Lentczner] - asst. organizer<br />
* [mailto:capn.freako@gmail.com David Banas] - amateur Haskeller<br />
* [mailto:michael@schmong.org Michael Litchard] - Haskeller<br />
* [http://conal.net Conal Elliott]<br />
<br />
== Projects ==<br />
# [http://www.haskell.org/haskellwiki/Treeviz TreeViz] - a computation breakdown visualization project hosted by [mailto:capn.freako@gmail.com David Banas]<br />
<br />
== IRC channel ==<br />
<br />
We'll be hanging out on #bayhac on FreeNode.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2014/DenotationalDesignBayHac2014/DenotationalDesign2014-05-12T19:44:02Z<p>Conal: Talk abstract</p>
<hr />
<div>Speaker: [http://conal.net Conal Elliott]<br />
<br />
Title: ''Denotational Design: from meanings to programs''<br />
<br />
== Abstract ==<br />
<br />
In this talk, I'll share a methodology that I have applied many times over the last 20+ years when designing high-level libraries for functional programming. Functional libraries are usually organized around small collections of domain-specific data types together with operations for forming and combining values of those types. When done well, the result has the elegance and precision of algebra on numbers while capturing much larger and more interesting ideas.<br />
<br />
A library has two aspects with which all programmers are familiar: the programming interface (API) and its implementation. We want the implementation to be efficient and ''correct'', since it's (usually) not enough to select arbitrary code for the implementation. To get clear about what constitutes correctness, and avoid fooling ourselves with vague, hand-waving statements, we'll want a precise specification, independent of any implementation. Fortunately, there is an elegant means of specification available to functional programmers: give a (preferably simple) mathematical ''meaning'' (model) for the types provided by a library, and then define each operation as if it worked on meanings rather than on representations. This practice, which goes by the fancy name of &quot;denotational semantics&quot; (invented to explain programming languages rigorously), is very like functional programming itself, and so can be easily assimilated by functional programmers.<br />
<br />
Rather than using semantics to ''explain'' an existing library (or language), we can instead use it to ''design'' one. It is often much easier and more enlightening to define a denotation than an implementation, because it does not have any constraints or distractions of efficiency, or even of executability. As an example, this style gave rise to [http://stackoverflow.com/questions/5875929/specification-for-a-functional-reactive-programming-language/5878525#5878525 Functional Reactive Programming (FRP)], where the semantic model of &quot;behaviors&quot; (dynamic values) is simply functions of infinite, continuous time. Similarly, the [http://conal.net/Pan Pan system] applies this same idea to space instead of time, defining the semantics of an &quot;image&quot; to be a function over infinite, continuous 2D space. Such meanings effectively and precisely capture the essence of a library's intent without the distraction of operational details. By doing so, these meanings offer library users a simpler but precise understanding of a library, while giving library developers an unambiguous definition of exactly ''what'' specification they must implement, while leaving a great deal of room for creativity about ''how''. I call this methodology &quot;Denotational Design&quot;, because it is design focused on meaning (denotation).<br />
<br />
The talk and workshop will present the principles and practice of Denotational Design through examples. I will use Haskell, where purity and type classes are especially useful to guide the process. Once understood, the techniques are transferable to other functional languages as well. If you'd like a sneak peek at the principles and applications, see the paper [http://conal.net/papers/type-class-morphisms/ ''Denotational design with type class morphisms''] and some [http://conal.net/blog/tag/type-class-morphism related blog articles].</div>Conalhttps://wiki.haskell.org/BayHac2014BayHac20142014-05-12T19:25:47Z<p>Conal: /* Attendees */</p>
<hr />
<div>__NOTOC__<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
[[Image:BayHac14_banner.png]]<br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
<center><br />
<big>Sign-up Here:<br /> [https://docs.google.com/forms/d/16QEHqAioGQeHHOlnMTEmjdgtO4YNN2_Qc-rbgLOFatU/viewform BayHac '14 Attendee Form]</big><br />
</center><br />
<br />
Special thanks to [http://engineering.imvu.com/ IMVU], [https://developers.google.com/open-source/ Google], Aleph Cloud and Twitter for sponsoring BayHac '14!<br />
<br />
----<br />
<br />
{|<br />
|When:<br />
|Friday, May 16th – Sunday, May 18th, 2014<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo]<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
<br />
<div style="text-align: right; float: right; width: 250px"><br />
[[Image:BayHac14 Poster Small.png|237px]]<br />
<br /><br />
<small><i>[https://drive.google.com/file/d/0B1eCSfs15HPRZjRIWWtCNmJjSms/edit?usp=sharing Full size PDF poster available]</i></small><br />
<br />
</div><br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Basic timing... details to be developed. Expect lightning talks, hacking, and other activities:<br />
<br />
{|<br />
|Friday, May 16th<br />
|3pm - 7pm<br />
|-<br />
|Saturday, May 17th<br />
|10am ~ 7pm<br />
|-<br />
|Sunday, May 18th<br />
|10am - 4pm<br />
|}<br />
<br />
== Classes ==<br />
=== Friday ===<br />
* 5:15pm - 6:15pm '''Programming with Pipes''' by Gabriel Gonzalez<br />
* 6:15pm - 7pm '''A Tutorial on Free Monads''' by Dan Piponi<br />
<br />
=== Saturday ===<br />
* 10am - 11am '''Beginning Haskell''' by Bob Ippolito<br />
* 11am - 12pm '''Haskell for Scala Programmers''' by Runar Bjarnason<br />
* 12 pm - 1pm '''Conquering Cabal''' by Jonathan Fischoff<br />
* 2pm - 3pm '''Pandoc''' by John MacFarlane<br />
* 3pm - 4pm '''Haste: Front End Web Development with Haskell''' by Lars Kuhtz<br />
* 4pm - 5pm '''From Prolog to Hindley-Milner''' by Tikhon Jelvis<br />
* 5pm - 6pm '''Yesod: Up and Running''' by Dan Burton<br />
* 6pm - 7pm '''Lens: Inside and Out''' by Shachaf Ben-Kiki<br />
<br />
=== Sunday ===<br />
* 10am - 11:30am '''GHC iOS: Up and Running''' by Luke Iannini<br />
* 11:30am - 1pm '''Programming with Vinyl''' by Jonathan Sterling<br />
* 1pm - 2pm '''Functional Reactive Programming with Elm''' by Evan Czaplicki<br />
* 2pm - 3pm '''Denotational Design''' by Conal Elliott<br />
* 3pm - 4pm '''Getting Stuff Done with Haskell''' by Greg Weber<br />
<br />
== Saturday Demos and Experience Reports ==<br />
1pm - 2pm<br />
* '''Haskell at IMVU''' by Andy Friesen<br />
* '''Haskell at Aleph Cloud''' by Jeff Polakow<br />
* '''Haskell at Docmunch''' by Greg Weber<br />
* '''Haskell at Pingwell''' by Tim Sears<br />
* '''Tree.is demo''' by Luke Iannini<br />
<br />
== Lightning Talks ==<br />
<br />
''determined at the event''<br />
<br />
== Attendees == <br />
<br />
* Jonathan Fischoff - organizer<br />
* [http://www.ozonehouse.com/mark/ Mark Lentczner] - asst. organizer<br />
* [mailto:capn.freako@gmail.com David Banas] - amateur Haskeller<br />
* [mailto:michael@schmong.org Michael Litchard] - Haskeller<br />
* [http://conal.net Conal Elliott]<br />
<br />
== Projects ==<br />
# [http://www.haskell.org/haskellwiki/Treeviz TreeViz] - a computation breakdown visualization project hosted by [mailto:capn.freako@gmail.com David Banas]<br />
<br />
== IRC channel ==<br />
<br />
We'll be hanging out on #bayhac on FreeNode.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/MemoizationMemoization2014-02-17T21:16:11Z<p>Conal: /* Memoizing polymorphic functions */ link to Memoizing polymorphic functions via unmemoization</p>
<hr />
<div>[[Category:Idioms]]<br />
<br />
'''Memoization''' is a technique for storing values of a function instead of recomputing them each time the function is called.<br />
<br />
== Memoization without recursion ==<br />
<br />
You can just write a memoization function using a data structure that is suitable for your application.<br />
We don't go into the details of this case.<br />
If you want a general solution for several types,<br />
you need a type class, say <hask>Memoizable</hask>.<br />
<haskell><br />
memoize :: Memoizable a => (a->b) -> (a->b)<br />
</haskell><br />
<br />
Now, how to implement something like this? Of course, one needs a finite<br />
map that stores values <hask>b</hask> for keys of type <hask>a</hask>.<br />
It turns out that such a map can be constructed recursively based on the structure of <hask>a</hask>:<br />
<haskell><br />
Map () b := b<br />
Map (Either a a') b := (Map a b, Map a' b)<br />
Map (a,a') b := Map a (Map a' b)<br />
</haskell><br />
<br />
Here, <hask>Map a b</hask> is the type of a finite map from keys <hask>a</hask> to values <hask>b</hask>.<br />
Its construction is based on the following laws for functions<br />
<haskell><br />
() -> b =~= b<br />
(a + a') -> b =~= (a -> b) x (a' -> b) -- = case analysis<br />
(a x a') -> b =~= a -> (a' -> b) -- = currying<br />
</haskell><br />
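As a concrete (if minimal) sketch of how these laws turn into code, here is one possible <hask>Memoizable</hask> class whose instances follow the three isomorphisms directly. The class and instances here are illustrative, not a library API:<br />

```haskell
-- Illustrative Memoizable class built directly from the three laws above.
class Memoizable a where
  memoize :: (a -> b) -> (a -> b)

-- () -> b  =~=  b : evaluate once, share the result.
instance Memoizable () where
  memoize f = let v = f () in \() -> v

-- (a + a') -> b  =~=  (a -> b) x (a' -> b) : case analysis.
-- The two memoized branches are shared across all calls.
instance (Memoizable a, Memoizable a') => Memoizable (Either a a') where
  memoize f = either (memoize (f . Left)) (memoize (f . Right))

-- (a x a') -> b  =~=  a -> (a' -> b) : currying.
instance (Memoizable a, Memoizable a') => Memoizable (a, a') where
  memoize f = uncurry (memoize (\a -> memoize (\a' -> f (a, a'))))
```

Note how each instance memoizes a structurally simpler function, so the recursion bottoms out at <hask>()</hask>.<br />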
<br />
For further and detailed explanations, see<br />
<br />
* Ralf Hinze: [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.3272 Memo functions, polytypically!]<br />
* Ralf Hinze: [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.4069 Generalizing generalized tries]<br />
* Conal Elliott: [http://conal.net/blog/posts/elegant-memoization-with-functional-memo-tries/ Elegant memoization with functional memo tries] and other [http://conal.net/blog/tag/memoization/ posts on memoization].<br />
* Conal Elliott: [http://conal.net/papers/type-class-morphisms/ Denotational design with type class morphisms], section 9 (Memo tries).<br />
<br />
== Memoization with recursion ==<br />
<br />
Things become more complicated if the function is recursively defined<br />
and it should use memoized calls to itself.<br />
A classic example is the recursive computation of [[The Fibonacci sequence|Fibonacci numbers]].<br />
<br />
The naive implementation of Fibonacci numbers without memoization is horribly slow.<br />
Try <hask>slow_fib 30</hask>; at arguments not much higher than that, it appears to hang.<br />
<haskell><br />
slow_fib :: Int -> Integer<br />
slow_fib 0 = 0<br />
slow_fib 1 = 1<br />
slow_fib n = slow_fib (n-2) + slow_fib (n-1)<br />
</haskell><br />
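To see why it is so slow, count the calls: the call tree has the same shape as the recursion itself, so the count obeys the same recurrence (a quick, illustrative sketch):<br />

```haskell
-- Number of calls slow_fib n makes, including the initial one.
-- It satisfies calls n == 2 * fib (n+1) - 1, i.e. exponential growth.
calls :: Int -> Integer
calls 0 = 1
calls 1 = 1
calls n = 1 + calls (n-2) + calls (n-1)
```

For example, <hask>calls 30</hask> is already 2692537, and each step up roughly multiplies the work by the golden ratio.<br />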
<br />
The memoized version is much faster.<br />
Try <hask>memoized_fib 10000</hask>.<br />
<br />
<haskell><br />
memoized_fib :: Int -> Integer<br />
memoized_fib = (map fib [0 ..] !!)<br />
   where fib 0 = 0<br />
         fib 1 = 1<br />
         fib n = memoized_fib (n-2) + memoized_fib (n-1)<br />
</haskell><br />
<br />
<br />
=== Memoizing fix point operator ===<br />
<br />
You can factor out the memoizing trick to a function, the memoizing fix point operator,<br />
which we will call <hask>memoFix</hask> here.<br />
<br />
<haskell><br />
fib :: (Int -> Integer) -> Int -> Integer<br />
fib f 0 = 0<br />
fib f 1 = 1<br />
fib f n = f (n-1) + f (n-2)<br />
<br />
fibonacci :: Int -> Integer<br />
fibonacci = memoFix fib<br />
<br />
</haskell><br />
<br />
I suppose if you want to "put it in a library",<br />
you should just put <hask>fib</hask> in,<br />
and allow the user to call <hask>memoFix fib</hask> to make a new version when necessary.<br />
This allows the user e.g. to define the data structure used for memoization.<br />
<br />
The memoising fixpoint operator works<br />
by putting the result of the first call of the function<br />
for each argument into a data structure and<br />
using that value for subsequent calls ;-)<br />
<br />
In general it is<br />
<haskell><br />
memoFix :: ((a -> b) -> (a -> b)) -> a -> b<br />
memoFix f =<br />
   let mf = memoize (f mf) in mf<br />
</haskell><br />
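To make this concrete without a <hask>Memoizable</hask> class, here is a monomorphic sketch specialised to non-negative <hask>Int</hask> arguments, reusing the lazy-list trick from <hask>memoized_fib</hask> above (the names <hask>memoFixInt</hask> and <hask>fibMemo</hask> are illustrative):<br />

```haskell
-- A monomorphic memoizing fixpoint for non-negative Int arguments,
-- using a lazily built list as the memo table.
memoFixInt :: ((Int -> a) -> (Int -> a)) -> (Int -> a)
memoFixInt f = mf
  where mf = (map (f mf) [0 ..] !!)

-- The open-recursive Fibonacci from above, tied with memoFixInt.
fibMemo :: Int -> Integer
fibMemo = memoFixInt fib'
  where
    fib' _ 0 = 0
    fib' _ 1 = 1
    fib' g n = g (n-1) + g (n-2)
```

Here the list-based table makes lookup O(n); the tree in the next section brings that down to O(log n).<br />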
<br />
== Efficient tree data structure for maps from Int to somewhere ==<br />
<br />
Here we present a special tree data type<br />
({{HackagePackage|id=data-inttrie}})<br />
which is useful as a memoizing data structure, e.g. for the Fibonacci function.<br />
<haskell><br />
memoizeInt :: (Int -> a) -> (Int -> a)<br />
memoizeInt f = (fmap f (naturals 1 0) !!!)<br />
</haskell><br />
<br />
A data structure with a node corresponding to each natural number, to use as a memo table.<br />
<haskell><br />
data NaturalTree a = Node a (NaturalTree a) (NaturalTree a)<br />
</haskell><br />
<br />
Map the nodes to the naturals in this order:<br />
<br />
<code><br />
       0<br />
   1       2<br />
 3   5   4   6<br />
7 ...<br />
</code><br />
<br />
Look up the node for a particular number<br />
<br />
<haskell><br />
Node a tl tr !!! 0 = a <br />
Node a tl tr !!! n =<br />
    if odd n<br />
      then tl !!! top<br />
      else tr !!! (top-1)<br />
  where top = n `div` 2<br />
</haskell><br />
<br />
We surely want to be able to map on these things...<br />
<br />
<haskell><br />
instance Functor NaturalTree where<br />
   fmap f (Node a tl tr) = Node (f a) (fmap f tl) (fmap f tr)<br />
</haskell><br />
<br />
If only so that we can write cute<br />
but inefficient things like the following,<br />
which is just a <hask>NaturalTree</hask><br />
such that <hask>naturals !!! n == n</hask>:<br />
<br />
<haskell><br />
naturals = Node 0 (fmap ((+1).(*2)) naturals) (fmap ((*2).(+1)) naturals)<br />
</haskell><br />
<br />
The following is probably more efficient<br />
(and, since it takes arguments, it won't hang around at top level, I think)<br />
-- have I put in more <hask>$!</hask>s than necessary?<br />
<br />
<haskell><br />
naturals r n =<br />
   Node n<br />
        ((naturals $! r2) $! (n+r))<br />
        ((naturals $! r2) $! (n+r2))<br />
  where r2 = 2*r<br />
</haskell><br />
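Putting the pieces of this section together (restated here so the fragment is self-contained, and using the simpler argument-free <hask>naturals</hask>), the tree can serve as the memo table for Fibonacci. The name <hask>fibTree</hask> is illustrative:<br />

```haskell
-- Self-contained version of the tree-based memoizer from this section.
data NaturalTree a = Node a (NaturalTree a) (NaturalTree a)

instance Functor NaturalTree where
  fmap f (Node a tl tr) = Node (f a) (fmap f tl) (fmap f tr)

-- O(log n) lookup, following the node numbering described above.
(!!!) :: NaturalTree a -> Int -> a
Node a _  _  !!! 0 = a
Node _ tl tr !!! n
  | odd n     = tl !!! top
  | otherwise = tr !!! (top - 1)
  where top = n `div` 2

-- The tree such that naturals !!! n == n.
naturals :: NaturalTree Int
naturals = Node 0 (fmap ((+1).(*2)) naturals) (fmap ((*2).(+1)) naturals)

memoizeInt :: (Int -> a) -> (Int -> a)
memoizeInt f = (fmap f naturals !!!)

-- Fibonacci memoized over the tree.
fibTree :: Int -> Integer
fibTree = mf
  where
    mf = memoizeInt fib'
    fib' 0 = 0
    fib' 1 = 1
    fib' n = mf (n-1) + mf (n-2)
```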
<br />
== Memoising CAFs ==<br />
'''Note: This is migrated from the old wiki.'''<br />
<br />
Memoising constructor functions gives you hash consing, and you can sometimes use memoising CAFs ([[constant applicative form]]s) to implement that.<br />
<br />
The memoising-CAF idiom also supports recursion.<br />
<br />
Consider, for example:<br />
<br />
<haskell><br />
wonderous :: Integer -> Integer<br />
wonderous 1 = 0<br />
wonderous x<br />
  | even x    = 1 + wonderous (x `div` 2)<br />
  | otherwise = 1 + wonderous (3*x+1)<br />
</haskell><br />
This function counts the steps of the famous Collatz ''3x+1'' iteration. Its recursion pattern is surprisingly complex and not at all understood by mathematicians, so if you need to call it many times with different values, optimising it directly would not be easy.<br />
<br />
However, we can memoise some of the domain using an array CAF:<br />
<haskell><br />
import Data.Array<br />
<br />
wonderous2 :: Integer -> Integer<br />
wonderous2 x<br />
  | x <= maxMemo = memoArray ! x<br />
  | otherwise    = wonderous2' x<br />
  where<br />
    maxMemo = 100<br />
    memoArray = array (1,maxMemo)<br />
                      [ (x, wonderous2' x) | x <- [1..maxMemo] ]<br />
<br />
    wonderous2' 1 = 0<br />
    wonderous2' x<br />
      | even x    = 1 + wonderous2 (x `div` 2)<br />
      | otherwise = 1 + wonderous2' (3*x+1)<br />
</haskell><br />
When using this pattern in your own code, note carefully when to call the memoised version (wonderous2 in the above example) and when not to. In general, the partially memoised version (wonderous2' in the above example) should call the memoised version if it needs to perform a recursive call. However, in this instance, we only memoize for small values of x, so the branch of the recursion that passes a larger argument need not bother checking the memo table. (This does slow the array initialization, however.)<br />
Thanks to [[lazy evaluation]], we can even memoise an infinite domain, though we lose constant-time lookup. Lookup in the following data structure is O(log N):<br />
<br />
<haskell><br />
type MemoTable a = [(Integer, BinTree a)]<br />
data BinTree a = Leaf a | Node Integer (BinTree a) (BinTree a)<br />
<br />
wonderous3 :: Integer -> Integer<br />
wonderous3 x<br />
  = searchMemoTable x memoTable<br />
  where<br />
    memoTable :: MemoTable Integer<br />
    memoTable = buildMemoTable 1 5<br />
<br />
    buildMemoTable n i<br />
      = (nextn, buildMemoTable' n i) : buildMemoTable nextn (i+1)<br />
      where<br />
        nextn = n + 2^i<br />
<br />
    buildMemoTable' base 0<br />
      = Leaf (wonderous3' base)<br />
    buildMemoTable' base i<br />
      = Node (base + midSize)<br />
             (buildMemoTable' base (i-1))<br />
             (buildMemoTable' (base + midSize) (i-1))<br />
      where<br />
        midSize = 2 ^ (i-1)<br />
<br />
    searchMemoTable x ((x',tree):ms)<br />
      | x < x'    = searchMemoTree x tree<br />
      | otherwise = searchMemoTable x ms<br />
<br />
    searchMemoTree x (Leaf y) = y<br />
    searchMemoTree x (Node mid l r)<br />
      | x < mid   = searchMemoTree x l<br />
      | otherwise = searchMemoTree x r<br />
<br />
    wonderous3' 1 = 0<br />
    wonderous3' x<br />
      | even x    = 1 + wonderous3 (x `div` 2)<br />
      | otherwise = 1 + wonderous3 (3*x+1)<br />
</haskell><br />
<br />
Naturally, these techniques can be combined, say, by using a fast CAF data structure for the most common part of the domain and an infinite CAF data structure for the rest.<br />
<br />
-- [[AndrewBromage]]<br />
<br />
== Memoizing polymorphic functions ==<br />
<br />
What about memoizing polymorphic functions defined with polymorphic recursion?<br />
How can such functions be memoized?<br />
The caching data structures used in memoization typically handle only one type of argument at a time.<br />
For instance, one can have finite maps of differing types, but each concrete finite map holds just one type of key and one type of value.<br />
<br />
See the discussion on ''Memoizing polymorphic functions'', [http://conal.net/blog/posts/memoizing-polymorphic-functions-part-one/ part one] and [http://conal.net/blog/posts/memoizing-polymorphic-functions-part-two/ part two], as well as [http://conal.net/blog/posts/memoizing-polymorphic-functions-via-unmemoization/ ''Memoizing polymorphic functions via unmemoization''].<br />
<br />
== See also ==<br />
<br />
* [http://www.haskell.org/pipermail/haskell-cafe/2007-February/021288.html Haskell-Cafe "speeding up fibonacci with memoizing"]<br />
* [http://www.haskell.org/pipermail/haskell-cafe/2007-May/024689.html Haskell-Cafe about memoization utility function]<br />
* [http://www.haskell.org/pipermail/haskell-cafe/2007-February/021563.html Haskell-Cafe "memoisation"]<br />
* [http://www.haskell.org/pipermail/haskell-cafe/2005-October/010287.html Haskell-Cafe about Memoization and Data.Map]<br />
* http://programming.reddit.com/info/16ofr/comments<br />
* [http://www.cs.utexas.edu/~wcook/Drafts/2006/MemoMixins.pdf Monadic Memoization Mixins] by Daniel Brown and William R. Cook<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/data-memocombinators data-memocombinators: Combinators for building memo tables.]<br />
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/MemoTrie MemoTrie: Trie-based memo functions]<br />
* [http://hackage.haskell.org/package/monad-memo monad-memo: memoization monad transformer]<br />
* [http://hackage.haskell.org/package/memoize memoize: uses Template Haskell to derive memoization code]</div>Conalhttps://wiki.haskell.org/LubLub2014-02-04T21:38:15Z<p>Conal: /* Abstract */ repo link fix</p>
<hr />
<div>[[Category:Packages]]<br />
[[Category:Concurrency]]<br />
<br />
== Abstract ==<br />
<br />
Lub is an experiment in computing least upper information bounds on (partially defined) functional values.<br />
It provides a <hask>lub</hask> function that is consistent with the [[unamb]] operator but has a more liberal precondition.<br />
Where <hask>unamb</hask> requires its arguments to be equal when neither is bottom, <hask>lub</hask> is able to synthesize a value from the partial information contained in each of its arguments, which is useful with non-flat types.<br />
<br />
Besides this wiki page, here are more ways to find out about lub:<br />
* Read the blog post ''[http://conal.net/blog/posts/merging-partial-values/ Merging partial values]''<br />
* Visit the [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/lub Hackage page] for library documentation and to download & install.<br />
* Or install with <tt>cabal install lub</tt>.<br />
* Get the [https://github.com/conal/lub code repository].<br />
<!-- * See the [[lub/Versions| version history]]. --><br />
<!-- Please leave comments at the [[Talk:lub|Talk page]]. --><br />
<br />
I was inspired to write this package by [http://tunes.org/~nef/logs/haskell/08.11.17 stimulating discussions] with Thomas Davie, Russell O'Connor and others in the #haskell gang.</div>Conalhttps://wiki.haskell.org/UnambUnamb2014-02-04T21:36:21Z<p>Conal: /* Abstract */ fixed the repo pointer fix</p>
<hr />
<div>[[Category:Packages]]<br />
[[Category:Concurrency]]<br />
<br />
== Abstract ==<br />
<br />
'''unamb''' is a package containing the ''unambiguous choice'' operator <hask>unamb</hask>, which wraps thread racing up in a purely functional, semantically simple wrapper.<br />
Given any two arguments <hask>u</hask> and <hask>v</hask> that agree unless bottom, the value of <hask>unamb u v</hask> is the more terminating of <hask>u</hask> and <hask>v</hask>.<br />
Operationally, the value of <hask>unamb u v</hask> becomes available when the earlier of <hask>u</hask> and <hask>v</hask> does.<br />
The agreement precondition ensures unamb's referential transparency.<br />
For more info about <hask>unamb</hask> and its use, see the paper ''[http://conal.net/papers/push-pull-frp/ Push-pull functional reactive programming]'', sections 10 and 11.<br />
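To convey the operational idea (and only that), here is a minimal thread-racing sketch using nothing but base libraries. It is emphatically ''not'' the unamb implementation, which must also deal with nested races, blackholes, and thread cleanup (see the Issues section); the name <hask>race</hask> is illustrative:<br />

```haskell
import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (SomeException, evaluate, try)
import Control.Monad (void)
import System.IO.Unsafe (unsafePerformIO)

-- Race two values; return whichever finishes evaluating first.
-- A thread whose value is bottom either blocks forever or throws an
-- exception; either way, the other thread can still supply the answer.
-- Referential transparency relies on the unamb agreement precondition.
race :: a -> a -> a
race a b = unsafePerformIO $ do
  v <- newEmptyMVar
  let eval x = void (try (evaluate x >>= putMVar v)
                       :: IO (Either SomeException ()))
  ta <- forkIO (eval a)
  tb <- forkIO (eval b)
  x  <- takeMVar v
  mapM_ killThread [ta, tb]  -- the loser is no longer needed
  return x
```

For example, <hask>race undefined 'x'</hask> yields <hask>'x'</hask>, since the first argument's thread dies while the second delivers a value.<br />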
<br />
<hask>unamb</hask> was originally a part of [[Reactive]]. I moved it to its own package in order to encourage experimentation.<br />
<br />
Besides this wiki page, here are more ways to find out about unamb:<br />
* Visit the [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/unamb Hackage page] for library documentation and to download & install.<br />
* Read [http://conal.net/blog/tag/unamb/ related blog posts].<br />
* Or install with <tt>cabal install unamb</tt>.<br />
* Get the [https://github.com/conal/unamb code repository].<br />
<br />
<!-- * See the [[unamb/Versions| version history]]. --><br />
<br />
<!-- Please leave comments at the [[Talk:unamb|Talk page]]. --><br />
See also the [[lub]] package, which extends unamb's usefulness with non-flat types.<br />
<br />
== Issues ==<br />
<br />
Although semantically very simple, unamb has been quite tricky to implement correctly and efficiently.<br />
<br />
As of version 0.1.1, unamb requires ghc 6.10 or better.<br />
<br />
As of version 0.1.6, unamb correctly handles recursive termination of sub-efforts and automatic restarting, but only with the GHC RTS fixes that first appeared (stably, by my testing) in GHC HEAD version 6.11.20090115.<br />
The problems and solution can be found in a few places:<br />
* Email thread: ''[http://n2.nabble.com/problem-with-unamb----doesn%27t-kill-enough-threads-tt1674917.html Problem with unamb -- doesn't kill enough threads]''<br />
* Blog post: ''[http://conal.net/blog/posts/smarter-termination-for-thread-racing/ Smarter termination for thread racing]''<br />
* Email thread: ''[http://n2.nabble.com/Re%3A-black-hole-detection-and-concurrency-td2016290.htm Black hole detection and concurrency]''<br />
<br />
unamb seems to be working well in version 0.2.2, under GHC 6.10.3.</div>Conalhttps://wiki.haskell.org/UnambUnamb2014-02-04T21:35:50Z<p>Conal: /* Abstract */ fix repo pointer</p>
<hr />
<div>[[Category:Packages]]<br />
[[Category:Concurrency]]<br />
<br />
== Abstract ==<br />
<br />
'''unamb''' is a package containing the ''unambiguous choice'' operator <hask>unamb</hask>, which wraps thread racing up in a purely functional, semantically simple wrapper.<br />
Given any two arguments <hask>u</hask> and <hask>v</hask> that agree unless bottom, the value of <hask>unamb u v</hask> is the more terminating of <hask>u</hask> and <hask>v</hask>.<br />
Operationally, the value of <hask>unamb u v</hask> becomes available when the earlier of <hask>u</hask> and <hask>v</hask> does.<br />
The agreement precondition ensures unamb's referential transparency.<br />
For more info about <hask>unamb</hask> and its use, see the paper ''[http://conal.net/papers/push-pull-frp/ Push-pull functional reactive programming]'', sections 10 and 11.<br />
<br />
<hask>unamb</hask> was originally a part of [[Reactive]]. I moved it to its own package in order to encourage experimentation.<br />
<br />
Besides this wiki page, here are more ways to find out about unamb:<br />
* Visit the [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/unamb Hackage page] for library documentation and to download & install.<br />
* Read [http://conal.net/blog/tag/unamb/ related blog posts].<br />
* Or install with <tt>cabal install unamb</tt>.<br />
* Get the [https://github.com/conal/unamb code repository].<br />
<br />
<!-- * See the [[unamb/Versions| version history]]. --><br />
<br />
<!-- Please leave comments at the [[Talk:unamb|Talk page]]. --><br />
See also the [[lub]] package, which extends unamb's usefulness with non-flat types.<br />
<br />
== Issues ==<br />
<br />
Although semantically very simple, unamb has been quite tricky to implement correctly and efficiently.<br />
<br />
As of version 0.1.1, unamb requires ghc 6.10 or better.<br />
<br />
As of version 0.1.6, unamb correctly handles recursive termination of sub-efforts and automatic restarting, but only with the GHC RTS fixes that first appeared (stably, by my testing) in GHC HEAD version 6.11.20090115.<br />
The problems and solution can be found in a few places:<br />
* Email thread: ''[http://n2.nabble.com/problem-with-unamb----doesn%27t-kill-enough-threads-tt1674917.html Problem with unamb -- doesn't kill enough threads]''<br />
* Blog post: ''[http://conal.net/blog/posts/smarter-termination-for-thread-racing/ Smarter termination for thread racing]''<br />
* Email thread: ''[http://n2.nabble.com/Re%3A-black-hole-detection-and-concurrency-td2016290.htm Black hole detection and concurrency]''<br />
<br />
unamb seems to be working well in version 0.2.2, under GHC 6.10.3.</div>Conalhttps://wiki.haskell.org/Tangible_ValueTangible Value2013-08-14T15:39:33Z<p>Conal: fix cabal install directions</p>
<hr />
<div>[[Category:User interfaces]]<br />
[[Category:IO]]<br />
[[Category:Arrow]]<br />
[[Category:Libraries]]<br />
[[Category:Packages]]<br />
<br />
== Abstract ==<br />
<br />
'''TV''' is a library for composing ''tangible values'' ("TVs"), i.e., values that carry along external interfaces. In particular, TVs can be composed to create new TVs, ''and'' they can be directly executed with a friendly GUI, a process that reads and writes character streams, or many other kinds of interfaces. Values and interfaces are ''combined'' for direct use, and ''separable'' for composition. This combination makes for software that is ''ready to use and ready to reuse''.<br />
<br />
TV can be thought of as a simple functional formulation of the Model-View-Controller pattern. (My thanks to an anonymous ICFP referee for pointing out this connection.) The value part of a TV is the ''model'', and the "interface" part, or "output" as it is called below, is the ''viewer''. Outputs are built up compositionally from other outputs and from inputs (the ''controllers''), as described below.<br />
<br />
Besides this wiki page, here are more ways to learn about TV:<br />
* Visit the [http://hackage.haskell.org/package/TV Hackage page] for library documentation.<br />
* Install with <tt>cabal install TV</tt>.<br />
<br />
As of version 0.2, I have moved the GUI functionality out of TV and into a small new package [[GuiTV]]. I moved it out to eliminate the dependency of core TV on [[Phooey]] and hence on [[wxHaskell]], as the latter can be difficult to install. The GUI examples below require [[GuiTV]].<br />
<br />
GuiTV (better named "wxTV") is bit-rotten. There is also a very similar [http://hackage.haskell.org/package/GtkTV package to generate Gtk-based GUIs].<br />
<br />
I'd love to hear your comments at the [[Talk:TV]] page.<br />
<br />
== First Example ==<br />
<br />
Here is a tangible reverse function:<br />
<br />
<haskell><br />
reverseT :: CTV (String -> String)<br />
reverseT = tv (oTitle "reverse" defaultOut) reverse<br />
</haskell><br />
<br />
The <hask>tv</hask> function combines an interface and a value. In this example, the interface is the default for string functions, wrapped with the title "reverse".<br />
<br />
TV "interfaces" are more than just GUIs. Here are two different renderings of <hask>reverseT</hask>. (User input is shown <tt><b><i>in italics</i></b></tt> in the <hask>runIO</hask> version).<br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI reverseT</hask> !! <hask>runIO reverseT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:reverseT.png]]<br />
| style="padding:20px;" |<br />
*Examples> runIO reverseT<br />
reverse: <b><i>Hello, reversible world.</i></b><br />
.dlrow elbisrever ,olleH<br />
*Examples> <br />
|}<br />
</blockquote><br />
<br />
We'll see [[#The_general_story|later]] that "<hask>runUI</hask>" and "<hask>runIO</hask>" are both type-specialized synonyms for a more general function.<br />
<br />
== Outputs ==<br />
<br />
What I've been calling an "interface" is a value of type <hask>COutput a</hask> for a type <hask>a</hask>. For instance, for <hask>reverseT</hask>, <hask>a</hask> is <hask>String->String</hask>. The reason for the <hask>C</hask> prefix is explained below. At the heart of TV is a small algebra for constructing these outputs. We've already seen one output function, <hask>oTitle</hask>. Another one is <hask>showOut</hask>, which is an output for all <hask>Show</hask> types. For instance,<br />
<br />
<haskell><br />
total :: Show a => COutput a<br />
total = oTitle "total" showOut<br />
</haskell><br />
<br />
== Inputs and function-valued outputs ==<br />
<br />
Just as an output is a way to ''deliver'' (or ''consume'') a value, an "input" is a way to ''obtain'' (or ''produce'') a value. For example, here are two inputs, each specifying an initial value and a value range, and each given a title.<br />
<br />
<haskell><br />
apples, bananas :: CInput Int<br />
apples = iTitle "apples" defaultIn<br />
bananas = iTitle "bananas" defaultIn<br />
</haskell><br />
<br />
Now for the fun part. Let's combine the <hask>apples</hask> and <hask>bananas</hask> inputs and the <hask>total</hask> output to make a ''function-valued'' output.<br />
<br />
<haskell><br />
shoppingO :: COutput (Int -> Int -> Int)<br />
shoppingO = oTitle "shopping list" $<br />
            oLambda apples (oLambda bananas total)<br />
</haskell><br />
<br />
And a TV:<br />
<haskell><br />
shopping :: CTV (Int -> Int -> Int)<br />
shopping = tv shoppingO (+)<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shopping</hask> !! <hask>runIO shopping</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shopping.png]]<br />
| style="padding:20px;" |<br />
shopping list: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
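One way to see why function-valued outputs compose this way is a toy, ''pure'' model of the idea, in which a source is just a canned value and a sink renders to a string. These are not the TV library's actual types (TV's sources and sinks are applicative and contravariant functors, as described below); the sketch only shows the shape of <hask>oLambda</hask>:<br />

```haskell
-- Toy model: an input is a canned value; an output renders to a String.
newtype In  a = In  a
newtype Out a = Out (a -> String)

-- To show a function, feed it the input's value and show the result.
oLambda :: In a -> Out b -> Out (a -> b)
oLambda (In a) (Out render) = Out (\f -> render (f a))

runOut :: Out a -> a -> String
runOut (Out render) = render

-- Mirroring the shopping example: two Int inputs and a "total" output.
apples, bananas :: In Int
apples  = In 8
bananas = In 5

total :: Out Int
total = Out (\n -> "total: " ++ show n)

shoppingO :: Out (Int -> Int -> Int)
shoppingO = oLambda apples (oLambda bananas total)
```

Here <hask>runOut shoppingO (+)</hask> yields <hask>"total: 13"</hask>, just as the real <hask>shopping</hask> TV displays 13 for inputs 8 and 5.<br />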
<br />
== A variation ==<br />
<br />
Here is an uncurried variation:<br />
<br />
<haskell><br />
shoppingPr :: CTV ((Int,Int) -> Int)<br />
shoppingPr = tv ( oTitle "shopping list -- uncurried" $ <br />
                  oLambda (iPair apples bananas) total )<br />
                (uncurry (+))<br />
</haskell><br />
However, there's a much more elegant formulation, using <hask>uncurryA</hask> and <hask>$$</hask> from [[DeepArrow]]:<br />
<haskell><br />
shoppingPr = uncurryA $$ shopping<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shoppingPr</hask> !! <hask>runIO shoppingPr</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shoppingPr.png]]<br />
| style="padding:20px;" |<br />
shopping list -- uncurried: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== The general story ==<br />
<br />
TVs, outputs, and inputs are not restricted to GUIs and IO. In general, they are parameterized by the mechanics of "transmitting values", i.e., delivering ("sinking") output and gathering ("sourcing") input.<br />
<br />
<haskell><br />
data Input src a<br />
data Output src snk a<br />
type TV src snk a<br />
</haskell><br />
<br />
The "sources" will be [[applicative functor]]s (AFs), and the "sinks" will be contravariant functors.<br />
<br />
In the examples above, we've used two different mechanisms, namely [[Phooey]]'s <hask>UI</hask> AF and <hask>IO</hask>. The sinks are counterparts <hask>IU</hask> and <hask>OI</hask>.<br />
<br />
The functions <hask>runUI</hask> and <hask>runIO</hask> used in examples above are simply type-specialized synonyms for [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV.html#v%3ArunTV <hask>runTV</hask>].<br />
<haskell><br />
runUI :: TV UI IU a -> IO ()<br />
runUI = runTV<br />
<br />
runIO :: TV IO OI a -> IO ()<br />
runIO = runTV<br />
</haskell><br />
<br />
== Common Ins and Outs ==<br />
<br />
The examples <hask>reverseT</hask> and <hask>shopping</hask> above used not only the generic <hask>Output</hask> and <hask>Input</hask> operations, but also some operations that apply to AFs having a few methods for sourcing and sinking a few common types (strings, readables, showables, and booleans). The type constructors <hask>CInput</hask>, <hask>COutput</hask>, and <hask>CTV</hask> are universally quantified over sources and sinks having the required methods.<br />
<br />
<haskell><br />
type CInput a = forall src.<br />
   (CommonIns src) => Input src a<br />
type COutput a = forall src snk.<br />
   (CommonIns src, CommonOuts snk) => Output src snk a<br />
type CTV a = forall src snk.<br />
   (CommonIns src, CommonOuts snk) => TV src snk a<br />
</haskell><br />
<br />
== Sorting examples ==<br />
<br />
Here's a sorting TV (see [http://hackage.haskell.org/packages/archive/TV/latest/doc/html/Interface-TV-Common.html#v:interactLineRS <hask>interactLineRS</hask>]), tested with <hask>runUI</hask>:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
sortT :: (Read a, Show a, Ord a) => CTV ([a] -> [a])<br />
sortT = tv (oTitle "sort" $ interactLineRS []) sort<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:sortT.png]]<br />
|}<br />
</blockquote><br />
<br />
Note that <hask>sortT</hask> is polymorphic in value, and the type variable <hask>a</hask> has defaulted to <hask>Int</hask>. You could instead type-annotate its uses, e.g.,<br />
<br />
: <hask>runUI (sortT :: CTV ([String] -> [String]))</hask><br />
<br />
== Composition of TVs ==<br />
<br />
So far, we've done a little composition of interfaces and combined them with values to construct TVs. Now let's look at composition of TVs.<br />
<br />
First, wrap up the <hask>words</hask> and <hask>unwords</hask> functions:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
wordsT :: CTV (String -> [String]) <br />
wordsT = tv ( oTitle "function: words" $<br />
              oLambda (iTitle "sentence in" defaultIn)<br />
                      (oTitle "words out" defaultOut))<br />
            words<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:wordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
unwordsT :: CTV ([String] -> String) <br />
unwordsT = tv ( oTitle "function: unwords" $<br />
                oLambda (iTitle "words in" defaultIn)<br />
                        (oTitle "sentence out" defaultOut))<br />
              unwords<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:unwordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
Finally, compose <hask>wordsT</hask>, <hask>unwordsT</hask>, and <hask>sortT</hask><br />
<br />
<haskell><br />
sortWordsT :: CTV (String -> String)<br />
sortWordsT = wordsT ->| sortT ->| unwordsT<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI sortWordsT</hask> !! <hask>runIO sortWordsT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:sortWordsT.png]]<br />
| style="padding:20px;" |<br />
sentence in: <b><i>The night Max wore his wolf suit</i></b><br />
sentence out: Max The his night suit wolf wore<br />
|}<br />
</blockquote><br />
<br />
The operator "[http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3A-%3E%7C <hask>->|</hask>]" is part of a general approach to value composition from [[DeepArrow]].<br />
<br />
== Transmission-specific interfaces ==<br />
<br />
While some interfaces can be implemented for different means of transmission, others are more specialized.<br />
<br />
=== GUIs ===<br />
<br />
Here are inputs for our shopping example above that specifically work with [[Phooey]]'s UI applicative functor.<br />
<haskell><br />
applesU, bananasU :: Input UI Int<br />
applesU = iTitle "apples" (islider 3 (0,10))<br />
bananasU = iTitle "bananas" (islider 7 (0,10))<br />
<br />
shoppingUO :: Output UI IU (Int -> Int -> Int)<br />
shoppingUO = oTitle "shopping list" $ oLambda applesU (oLambda bananasU total)<br />
</haskell><br />
<br />
We can then make curried and uncurried TVs:<br />
<blockquote><br />
{| class="wikitable"<br />
! code !! runUI rendering <br />
|-<br />
| style="padding:20px;" align=right| <hask>tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingU.png]]<br />
|-<br />
| style="padding:20px;" align=right | <hask>uncurryA $$ tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingPrU.png]]<br />
|}<br />
</blockquote><br />
<br />
'''Note''': We could define other type classes, besides <hask>CommonInsOuts</hask>. For instance, <hask>islider</hask> could be made a method of a <hask>GuiArrow</hask> class, allowing it to be rendered in different ways with different GUI toolkits or even using HTML and Javascript.<br />
<br />
=== IO ===<br />
<br />
We can use <hask>IO</hask> operations in TV interfaces. The corresponding sink is <hask>OI</hask>, defined in [[TypeCompose]]. TV provides a few functions in its [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV-IO.html <hask>IO</hask> module], including a close counterpart to the standard <hask>interact</hask> function.<br />
<haskell><br />
interactOut :: Output IO OI (String -> String)<br />
interactOut = oLambda contentsIn stringOut<br />
</haskell><br />
<br />
Assuming we have a file <tt>"test.txt"</tt> containing some lines of text, we can use it to test string transformations.<br />
<haskell><br />
testO :: Output IO OI (String -> String)<br />
testO = oLambda (fileIn "test.txt") defaultOut<br />
</haskell><br />
<br />
First, let's define higher-order functions that apply another function to the lines or to the words of a string.<br />
<haskell><br />
onLines, onWords :: ([String] -> [String]) -> (String -> String)<br />
onLines f = unlines . f . lines<br />
onWords f = unwords . f . words<br />
</haskell><br />
Next, specializations that operate on ''each'' line or word:<br />
<haskell><br />
perLine,perWord :: (String -> String) -> (String -> String)<br />
perLine f = onLines (map f)<br />
perWord f = onWords (map f)<br />
</haskell><br />
<br />
Some examples:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
! string function <hask>f</hask> !! <hask>runIO (tv testO f)</hask><br />
|-<br />
| style="padding:20px;" align=right| <hask>id</hask><br />
| style="padding:20px;" align="center" |<br />
To see a World in a Grain of Sand<br />
And a Heaven in a Wild Flower, <br />
Hold Infinity in the palm of your hand<br />
And Eternity in an hour.<br />
- William Blake<br />
|-<br />
| style="padding:20px;" align=right| <hask>reverse</hask><br />
| style="padding:20px;" align="center" |<br />
<br />
ekalB mailliW - <br />
.ruoh na ni ytinretE dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnaS fo niarG a ni dlroW a ees oT<br />
|-<br />
| style="padding:20px;" align=right| <hask>onLines reverse</hask><br />
| style="padding:20px;" align="center" |<br />
- William Blake<br />
And Eternity in an hour.<br />
Hold Infinity in the palm of your hand<br />
And a Heaven in a Wild Flower, <br />
To see a World in a Grain of Sand<br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine reverse</hask><br />
| style="padding:20px;" align="center" |<br />
dnaS fo niarG a ni dlroW a ees oT<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
.ruoh na ni ytinretE dnA<br />
ekalB mailliW - <br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine (perWord reverse)</hask><br />
| style="padding:20px;" align="center" |<br />
oT ees a dlroW ni a niarG fo dnaS<br />
dnA a nevaeH ni a dliW ,rewolF<br />
dloH ytinifnI ni eht mlap fo ruoy dnah<br />
dnA ytinretE ni na .ruoh<br />
- mailliW ekalB<br />
|}<br />
</blockquote><br />
<br />
There are more examples [http://code.haskell.org/~conal/code/TV/src/Examples.hs in the TV repository] and [http://code.haskell.org/~conal/code/GuiTV/src/Examples.hs in the GuiTV repository]. See also "[http://journal.conal.net/#%5B%5Bseparating%20IO%20from%20logic%20--%20example%5D%5D separating IO from logic -- example]".</div>Conalhttps://wiki.haskell.org/BayHac2013/Denotative_ProgrammingBayHac2013/Denotative Programming2013-05-22T02:29:04Z<p>Conal: talk notes</p>
<hr />
<div><ul><br />
<li><em>[http://www.scribd.com/doc/12878059/The-Next-700-Programming-Languages The Next 700 Programming Languages]</em>:<br />
<ul><br />
<li>By Peter Landin in 1966. Enormously important figure in improving our understanding of programming languages.<br />
<ul><br />
<li>Prime mover with Algol, about which Tony Hoare said “Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors.”</li><br />
<li>Realized that lambda calculus can model PLs.</li><br />
<li>Worked out how to mechanically evaluate lambda calculus.</li><br />
</ul></li><br />
<li>The seminal paper on DSELs</li><br />
<li>Led to the lambda-calculus-based languages, including Scheme, ML, and Haskell.</li><br />
<li>Recommends replacing the fuzzy terms “functional”, “declarative”, and “non-procedural” with the substantive “denotative”.</li><br />
<li>“… gives us a test for whether the notation is genuinely functional or merely masquerading.”</li><br />
</ul></li><br />
<li>Denotative:<br />
<ul><br />
<li>Give meaning by mapping to a mathematical type</li><br />
<li>Meaning of a composite is a function of the meanings of the components</li><br />
<li>Applies to:<br />
<ul><br />
<li>Programming <em>languages</em></li><br />
<li>Programming <em>libraries</em>:<br />
<ul><br />
<li>Give your data type a <em>meaning</em> (model). Examples: <code>Image</code> means function of space; <code>Animation</code> means function of time; <code>Sequence</code> means list.</li><br />
<li>Every operation on your data type must be explainable via the meaning.</li><br />
<li>Guides API, and defines correctness.</li><br />
<li>Reason on meanings, not on implementations.</li><br />
<li>For instance:<br />
<ul><br />
<li>Is append on tricky-sequences associative?</li><br />
<li>Is my type a monoid, functor, applicative, etc?</li><br />
</ul></li><br />
</ul></li><br />
</ul></li><br />
<li>Strong &amp; simple (practical) foundation for (correct) reasoning</li><br />
</ul></li><br />
<li>What about Haskell?<br />
<ul><br />
<li>“Purely functional”? Ill-defined question.</li><br />
<li>“Purely denotative”? If not, what parts are and what parts are not?</li><br />
</ul></li><br />
</ul></div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-05-22T02:28:17Z<p>Conal: /* Lightning Talks */ Denotative Programming notes link</p>
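The "give your data type a meaning" idea in the notes above can be sketched in a few lines of Haskell. The <hask>Image</hask> type and operations here are hypothetical illustrations of the style, not from any particular library: the meaning of an <hask>Image</hask> is a function of 2D space, and correctness of each operation is defined by what it does to that meaning.

```haskell
-- Hypothetical denotational model: an Image *means* a function of 2D space.
type Point = (Double, Double)
newtype Image a = Image { at :: Point -> a }

-- Pointwise lifting; correctness is defined by the meaning:
--   at (lift2 f im1 im2) p == f (at im1 p) (at im2 p)
lift2 :: (a -> b -> c) -> Image a -> Image b -> Image c
lift2 f (Image g) (Image h) = Image (\p -> f (g p) (h p))

-- A simple image: distance from the origin at each point.
dist :: Image Double
dist = Image (\(x, y) -> sqrt (x*x + y*y))

main :: IO ()
main = print (at (lift2 (+) dist dist) (3, 4))  -- 10.0
```

Reasoning happens on the meaning (functions of space), not on any eventual optimized representation; for instance, <hask>lift2 (+)</hask> is commutative because <hask>(+)</hask> is commutative pointwise.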
<hr />
<div>__NOTOC__<br />
<br />
[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
Special thanks to [https://developers.google.com/open-source/ Google] for sponsoring BayHac '13, and providing '''lunch on Saturday!'''<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
<big>'''Post-Event Survey:''' Please take [https://docs.google.com/forms/d/18xWYQm1nwcyM6GxXkqaGjfd2kY8dvYWt8Y1mRwYCz3w/viewform this survey] if you attended BayHac'13.</big><br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Anticipated, but subject to group whim and circumstance:<br />
<br />
{|<br />
|Friday, May 17th<br />
|3pm - 7pm<br />
|Meet-n-Greet-n-Hack<br />
|-<br />
|Saturday, May 18th<br />
|10am ~ 7pm<br />
|Hacking all day<br />
|-<br />
|<br />
|1pm - 2pm<br />
|Lunch, sponsored by Google<br />
|-<br />
|<br />
|2pm - 5pm<br />
|[https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform Code Kata] (purely optional)<br />
|-<br />
|<br />
|6pm - 7pm<br />
|Lightning Talks<br />
|-<br />
|Sunday, May 19th<br />
|10am - 4pm<br />
|Hacking<br />
|-<br />
|<br />
|1pm - 2pm<br />
|Lunch<br />
|-<br />
|<br />
|2pm - 3pm<br />
|Lightning Talks<br />
|-<br />
|<br />
|3pm - 4pm<br />
|Good-Byes, Clean up, Go get beer!<br />
|}<br />
<br />
== Code Kata ==<br />
<br />
Code Katas are programming exercises with the aim of just practicing the skill of programming.<br />
<br />
We'll have a Code Kata session at BayHac '13 on Saturday afternoon. There'll be a coding problem, and we'll break into groups or singles and tackle it in Haskell. The problem will expand as time progresses. After about two hours of coding, we'll re-group and do some quick reviews of the solutions people came up with and discuss the challenges and design trade-offs.<br />
<br />
# [https://github.com/mzero/bayhac-13-kata Mark's repo]<br />
<br />
== Lightning Talks ==<br />
<br />
{|<br />
| colspan="2" | '''Saturday, 6pm'''<br />
|- <br />
| Steve Severance || Exception Handling<br />
|-<br />
| Jonathan Fischoff || BNFC Meta<br />
|-<br />
| Kyle Butt || Art and Dan explained<br />
|-<br />
| Lev Walkin || Explaining Charts to Blind<br />
|-<br />
| Evan Laforge || Music Sequencer<br />
|-<br />
| [http://jelv.is Tikhon Jelvis] || GreenArray Synthesis -- [http://jelv.is/af-slides.html slides]<br />
|-<br />
| Stephen Balaban || agda-mode<br />
|-<br />
| Eric Nedervold || FP to OOP and back<br />
|-<br />
| [http://conal.net Conal Elliott] || Denotative Programming -- [[/Denotative Programming|notes]]<br />
|}<br />
<br />
{|<br />
| colspan="2" | '''Sunday, 2pm'''<br />
|-<br />
| Paul Rubin || [http://hpaste.org/70413 Memoization benchmark]<br />
|-<br />
| Gabriel Gonzalez || Protein Search Engine<br />
|-<br />
| Luis Casillas || Denotational OLAP<br />
|-<br />
| Steve Severance || Accelerate<br />
|-<br />
| [http://jelv.is Tikhon Jelvis] || Algebras and Coalgebras -- [http://jelv.is/algebras.html slides]<br />
|-<br />
| Jonathan Fischoff || MTL<br />
|-<br />
| Russell O'Connor || Prisms!<br />
|-<br />
| John Millikin || re2 bindings<br />
|}<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/mark/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
# [http://www.mega-nerd.com/erikd/Blog/ Erik de Castro Lopo]<br />
# [http://haskellforall.com/ Gabriel Gonzalez]<br />
# Aaron Culich<br />
<br />
== Projects ==<br />
<br />
If you plan on working on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
# [http://github.com/jkff/minxmod minxmod] (a tiny concurrent modelchecker) - Eugene Kirpichov<br />
# [https://github.com/tommythorn/Reduceron Reduceron] - Tommy Thorn<br />
# Compiling Haskell to circuits (netlists) via GHC Core. Or other means of getting familiar with the GHC API. - Conal<br />
# Cross-platform, OpenGL- and GHCi-friendly GUIs & graphics. - Conal<br />
# [https://github.com/ktvoelker/HsTools HsTools] - continuous compilation with vim integration (and more to come?) - Karl Voelker<br />
<br />
== IRC channel ==<br />
<br />
We'll be hanging out on #bayhac on FreeNode.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2013/BayHac2013/Denotative_ProgrammingBayHac2013/BayHac2013/Denotative Programming2013-05-22T02:26:15Z<p>Conal: oops. accidental extra indirection.</p>
<hr />
<div>oops. accidental extra indirection.</div>Conalhttps://wiki.haskell.org/BayHac2013/BayHac2013/Denotative_ProgrammingBayHac2013/BayHac2013/Denotative Programming2013-05-22T02:23:14Z<p>Conal: Talk notes</p>
<hr />
<div><ul><br />
<li><em>[http://www.scribd.com/doc/12878059/The-Next-700-Programming-Languages The Next 700 Programming Languages]</em>:<br />
<ul><br />
<li>By Peter Landin in 1966. Enormously important figure in improving our understanding of programming languages.<br />
<ul><br />
<li>Prime mover with Algol, about which Tony Hoare said “Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors.”</li><br />
<li>Realized that lambda calculus can model PLs.</li><br />
<li>Worked out how to mechanically evaluate lambda calculus.</li><br />
</ul></li><br />
<li>The seminal paper on DSELs</li><br />
<li>Led to the lambda-calculus-based languages, including Scheme, ML, and Haskell.</li><br />
<li>Recommends replacing the fuzzy terms “functional”, “declarative”, and “non-procedural” with the substantive “denotative”.</li><br />
<li>“… gives us a test for whether the notation is genuinely functional or merely masquerading.”</li><br />
</ul></li><br />
<li>Denotative:<br />
<ul><br />
<li>Give meaning by mapping to a mathematical type</li><br />
<li>Meaning of a composite is a function of the meanings of the components</li><br />
<li>Applies to:<br />
<ul><br />
<li>Programming <em>languages</em></li><br />
<li>Programming <em>libraries</em>:<br />
<ul><br />
<li>Give your data type a <em>meaning</em> (model). Examples: <code>Image</code> means function of space; <code>Animation</code> means function of time; <code>Sequence</code> means list.</li><br />
<li>Every operation on your data type must be explainable via the meaning.</li><br />
<li>Guides API, and defines correctness.</li><br />
<li>Reason on meanings, not on implementations.</li><br />
<li>For instance:<br />
<ul><br />
<li>Is append on tricky-sequences associative?</li><br />
<li>Is my type a monoid, functor, applicative, etc?</li><br />
</ul></li><br />
</ul></li><br />
</ul></li><br />
<li>Strong &amp; simple (practical) foundation for (correct) reasoning</li><br />
</ul></li><br />
<li>What about Haskell?<br />
<ul><br />
<li>“Purely functional”? Ill-defined question.</li><br />
<li>“Purely denotative”? If not, what parts are and what parts are not?</li><br />
</ul></li><br />
</ul></div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-05-07T20:41:04Z<p>Conal: /* Projects */ missing space</p>
<hr />
<div>__NOTOC__<br />
<br />
[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Anticipated, but subject to group whim and circumstance:<br />
<br />
{|<br />
|Friday, May 17th<br />
|3pm - 7pm<br />
|Meet-n-Greet-n-Hack<br />
|-<br />
|Saturday, May 18th<br />
|10am ~ 7pm<br />
|Hacking all day<br />
|-<br />
|<br />
|2pm - 5pm<br />
|[https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform Code Kata] (purely optional)<br />
|-<br />
|Sunday, May 19th<br />
|10am - 1pm<br />
|Hacking<br />
|-<br />
|<br />
|1pm - 3pm<br />
|Lightning Talks<br />
|-<br />
|<br />
|3pm - 4pm<br />
|Good-Byes, Clean up, Go get beer!<br />
|}<br />
<br />
== Code Kata ==<br />
<br />
Code Katas are programming exercises with the aim of just practicing the skill of programming.<br />
<br />
We'll have a Code Kata session at BayHac '13 on Saturday afternoon. There'll be a coding problem, and we'll break into groups or singles and tackle it in Haskell. The problem will expand as time progresses. After about two hours of coding, we'll re-group and do some quick reviews of the solutions people came up with and discuss the challenges and design trade-offs.<br />
<br />
If you're interested in this activity, [https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform please let us know].<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/mark/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
# [http://www.mega-nerd.com/erikd/Blog/ Erik de Castro Lopo]<br />
# [http://haskellforall.com/ Gabriel Gonzalez]<br />
<br />
== Projects ==<br />
<br />
If you plan on working on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
# [http://github.com/jkff/minxmod minxmod] (a tiny concurrent modelchecker) - Eugene Kirpichov<br />
# [https://github.com/tommythorn/Reduceron Reduceron] - Tommy Thorn<br />
# Compiling Haskell to circuits (netlists) via GHC Core. Or other means of getting familiar with the GHC API. - Conal<br />
# Cross-platform, OpenGL- and GHCi-friendly GUIs & graphics. - Conal<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-05-07T20:40:45Z<p>Conal: /* Projects */ Cross-platform, OpenGL- and GHCi-friendly GUIs & graphics.</p>
<hr />
<div>__NOTOC__<br />
<br />
[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Anticipated, but subject to group whim and circumstance:<br />
<br />
{|<br />
|Friday, May 17th<br />
|3pm - 7pm<br />
|Meet-n-Greet-n-Hack<br />
|-<br />
|Saturday, May 18th<br />
|10am ~ 7pm<br />
|Hacking all day<br />
|-<br />
|<br />
|2pm - 5pm<br />
|[https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform Code Kata] (purely optional)<br />
|-<br />
|Sunday, May 19th<br />
|10am - 1pm<br />
|Hacking<br />
|-<br />
|<br />
|1pm - 3pm<br />
|Lightning Talks<br />
|-<br />
|<br />
|3pm - 4pm<br />
|Good-Byes, Clean up, Go get beer!<br />
|}<br />
<br />
== Code Kata ==<br />
<br />
Code Katas are programming exercises with the aim of just practicing the skill of programming.<br />
<br />
We'll have a Code Kata session at BayHac '13 on Saturday afternoon. There'll be a coding problem, and we'll break into groups or singles and tackle it in Haskell. The problem will expand as time progresses. After about two hours of coding, we'll re-group and do some quick reviews of the solutions people came up with and discuss the challenges and design trade-offs.<br />
<br />
If you're interested in this activity, [https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform please let us know].<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/mark/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
# [http://www.mega-nerd.com/erikd/Blog/ Erik de Castro Lopo]<br />
# [http://haskellforall.com/ Gabriel Gonzalez]<br />
<br />
== Projects ==<br />
<br />
If you plan on working on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
# [http://github.com/jkff/minxmod minxmod] (a tiny concurrent modelchecker) - Eugene Kirpichov<br />
# [https://github.com/tommythorn/Reduceron Reduceron] - Tommy Thorn<br />
# Compiling Haskell to circuits (netlists) via GHC Core. Or other means of getting familiar with the GHC API. - Conal<br />
# Cross-platform, OpenGL- and GHCi-friendly GUIs & graphics. -Conal<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-05-07T20:22:38Z<p>Conal: /* Schedule */ "lightening" --> "lightning"</p>
<hr />
<div>__NOTOC__<br />
<br />
[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Anticipated, but subject to group whim and circumstance:<br />
<br />
{|<br />
|Friday, May 17th<br />
|3pm - 7pm<br />
|Meet-n-Greet-n-Hack<br />
|-<br />
|Saturday, May 18th<br />
|10am ~ 7pm<br />
|Hacking all day<br />
|-<br />
|<br />
|2pm - 5pm<br />
|[https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform Code Kata] (purely optional)<br />
|-<br />
|Sunday, May 19th<br />
|10am - 1pm<br />
|Hacking<br />
|-<br />
|<br />
|1pm - 3pm<br />
|Lightning Talks<br />
|-<br />
|<br />
|3pm - 4pm<br />
|Good-Byes, Clean up, Go get beer!<br />
|}<br />
<br />
== Code Kata ==<br />
<br />
Code Katas are programming exercises with the aim of just practicing the skill of programming.<br />
<br />
We'll have a Code Kata session at BayHac '13 on Saturday afternoon. There'll be a coding problem, and we'll break into groups or singles and tackle it in Haskell. The problem will expand as time progresses. After about two hours of coding, we'll re-group and do some quick reviews of the solutions people came up with and discuss the challenges and design trade-offs.<br />
<br />
If you're interested in this activity, [https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform please let us know].<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/mark/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
# [http://www.mega-nerd.com/erikd/Blog/ Erik de Castro Lopo]<br />
# [http://haskellforall.com/ Gabriel Gonzalez]<br />
<br />
== Projects ==<br />
<br />
If you plan on working on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
# [http://github.com/jkff/minxmod minxmod] (a tiny concurrent modelchecker) - Eugene Kirpichov<br />
# [https://github.com/tommythorn/Reduceron Reduceron] - Tommy Thorn<br />
# Compiling Haskell to circuits (netlists) via GHC Core. Or other means of getting familiar with the GHC API.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-05-07T20:20:59Z<p>Conal: /* Projects */ rewording</p>
<hr />
<div>__NOTOC__<br />
<br />
[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Anticipated, but subject to group whim and circumstance:<br />
<br />
{|<br />
|Friday, May 17th<br />
|3pm - 7pm<br />
|Meet-n-Greet-n-Hack<br />
|-<br />
|Saturday, May 18th<br />
|10am ~ 7pm<br />
|Hacking all day<br />
|-<br />
|<br />
|2pm - 5pm<br />
|[https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform Code Kata] (purely optional)<br />
|-<br />
|Sunday, May 19th<br />
|10am - 1pm<br />
|Hacking<br />
|-<br />
|<br />
|1pm - 3pm<br />
|Lightening Talks<br />
|-<br />
|<br />
|3pm - 4pm<br />
|Good-Byes, Clean up, Go get beer!<br />
|}<br />
<br />
== Code Kata ==<br />
<br />
Code Katas are programming exercises with the aim of just practicing the skill of programming.<br />
<br />
We'll have a Code Kata session at BayHac '13 on Saturday afternoon. There'll be a coding problem, and we'll break into groups or singles and tackle it in Haskell. The problem will expand as time progresses. After about two hours of coding, we'll re-group and do some quick reviews of the solutions people came up with and discuss the challenges and design trade-offs.<br />
<br />
If you're interested in this activity, [https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform please let us know].<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/mark/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
# [http://www.mega-nerd.com/erikd/Blog/ Erik de Castro Lopo]<br />
# [http://haskellforall.com/ Gabriel Gonzalez]<br />
<br />
== Projects ==<br />
<br />
If you plan on working on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
# [http://github.com/jkff/minxmod minxmod] (a tiny concurrent modelchecker) - Eugene Kirpichov<br />
# [https://github.com/tommythorn/Reduceron Reduceron] - Tommy Thorn<br />
# Compiling Haskell to circuits (netlists) via GHC Core. Or other means of getting familiar with the GHC API.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-05-07T20:20:11Z<p>Conal: /* Projects */ Haskell-to-gates & GHC Core</p>
<hr />
<div>__NOTOC__<br />
<br />
[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Schedule ==<br />
<br />
Anticipated, but subject to group whim and circumstance:<br />
<br />
{|<br />
|Friday, May 17th<br />
|3pm - 7pm<br />
|Meet-n-Greet-n-Hack<br />
|-<br />
|Saturday, May 18th<br />
|10am ~ 7pm<br />
|Hacking all day<br />
|-<br />
|<br />
|2pm - 5pm<br />
|[https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform Code Kata] (purely optional)<br />
|-<br />
|Sunday, May 19th<br />
|10am - 1pm<br />
|Hacking<br />
|-<br />
|<br />
|1pm - 3pm<br />
|Lightening Talks<br />
|-<br />
|<br />
|3pm - 4pm<br />
|Good-Byes, Clean up, Go get beer!<br />
|}<br />
<br />
== Code Kata ==<br />
<br />
Code Katas are programming exercises with the aim of just practicing the skill of programming.<br />
<br />
We'll have a Code Kata session at BayHac '13 on Saturday afternoon. There'll be a coding problem, and we'll break into groups or singles and tackle it in Haskell. The problem will expand as time progresses. After about two hours of coding, we'll re-group and do some quick reviews of the solutions people came up with and discuss the challenges and design trade-offs.<br />
<br />
If you're interested in this activity, [https://docs.google.com/forms/d/1lFGAjnPAcvKdYEFdoGtw5-A7z9L3JKF68O_AF_aAEH4/viewform please let us know].<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/mark/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
# [http://www.mega-nerd.com/erikd/Blog/ Erik de Castro Lopo]<br />
# [http://haskellforall.com/ Gabriel Gonzalez]<br />
<br />
== Projects ==<br />
<br />
If you plan on working on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
# [http://github.com/jkff/minxmod minxmod] (a tiny concurrent modelchecker) - Eugene Kirpichov<br />
# [https://github.com/tommythorn/Reduceron Reduceron] - Tommy Thorn<br />
# Compiling Haskell to circuits (netlists) via GHC Core. Or other uses of the GHC API to help me get familiar with it.<br />
<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/BayHac2013BayHac20132013-03-13T20:22:57Z<p>Conal: /* Attendees */</p>
<hr />
<div>[[Image:BayHac13_banner.png]]<br />
<br />
<b><span style="color:#e73">San Francisco Bay Area</span> <span style="color:#aaa">&amp;</span> <span style="color:#930">Silicon Valley</span> <span style="color:#aaa">Haskell Hackathon</span></b><br />
<br />
Come join a group of Haskell hackers to work on a wide variety of projects. All levels welcome.<br />
<br />
----<br />
{|<br />
|When:<br />
|Friday, May 17th – Sunday, May 19th, 2013<br />
|-<br />
|Hours:<br />
|10am ~ 7pm (still in early planning stages)<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo] ''- note the Dojo has moved''<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
== Location ==<br />
<br />
[http://www.hackerdojo.com/ Hacker Dojo], 599 Fairchild Drive, Mountain View, CA ([https://maps.google.com/maps?ie=UTF8&cid=11488539903009648209&q=Hacker+Dojo&iwloc=A&gl=US&hl=en-US Google Map])<br />
<br />
== Attendees == <br />
<br />
If you're attending, please use the [https://docs.google.com/forms/d/1u502QHmyFC_Wi4fqv_jYTTRun8E6D_gwcbf6bB3dvrs/viewform sign-up form] to help the organizers plan. Add your name here if you want to let others know you're coming.<br />
<br />
# [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
# [http://www.johantibell.com/ Johan Tibell] - organizer<br />
# [http://www.linkedin.com/pub/david-banas/1/6ab/a48 David Banas] - ''AMITool'' project lead<br />
# [http://www.linkedin.com/in/danburtonhaskeller/ Dan Burton] - Just another Haskeller<br />
# [http://conal.net Conal Elliott]<br />
<br />
== Projects ==<br />
<br />
If you plan to work on a project at the Hackathon, you can put it up here so other interested hackers can see what projects are afoot. If you don't have a project, look here and find one!<br />
<br />
# [http://code.google.com/p/plush/ Plush] - Mark L.<br />
# [http://www.haskell.org/haskellwiki/AMI_Tool AMITool] - David Banas<br />
[[Category:Community]]</div>Conalhttps://wiki.haskell.org/Cabal-makeCabal-make2013-03-10T21:56:49Z<p>Conal: updated repo location to code.darcs.org</p>
<hr />
<div>== Abstract ==<br />
<br />
Cabal-make is an include file for [http://www.gnu.org/software/make GNU make] files to be used with [[Cabal]] in sharing Haskell packages. It is intended mainly for package authors; people who just build & install packages can do so entirely with [[Cabal]] commands. In particular, it's a bit hairy to get the best results from [[Haddock]] & [[hscolour]] without it.<br />
<br />
== Features ==<br />
<br />
* Web-based, cross-package links in [[Haddock]] docs (documentation generated by [[Haddock]]).<br />
* Syntax coloring via [[hscolour]], with per-project CSS.<br />
* Links from the [[Haddock]] docs to [[hscolour]]'d code (per-module, and per-entity).<br />
* Links from [[Haddock]] docs to wiki-based user comment pages (per-project and per-module), with automatic subscription (for email notification).<br />
* Set up with [[darcs]] repositories on http://code.haskell.org or elsewhere.<br />
* Make distribution tarballs and install on server.<br />
* Automated download and build in a fresh local temp directory for testing.<br />
* Copy [[Haddock]] docs to server (deprecated now that hackage has caught up with ghc).<br />
* Generate editor tags files (via [[hasktags]]).<br />
* Convert source files between dos-style and unix-style line endings.<br />
* Customizable.<br />
<br />
== Packages using cabal-make ==<br />
<br />
To get a concrete sense of the first few of the features listed above, here are some links to docs for packages that use cabal-make. (Please add your own packages to this list when you use cabal-make.)<br />
* [[Phooey]]: a simple, arrow-based functional GUI library<br />
* [[DeepArrow]]: a framework for composable semantic editors<br />
* [[TV]]: combined and separable packaging of functionality and interface<br />
* [[GuiTV]]: GUIs for TV<br />
* [[Checkers]]: Some [[QuickCheck]] helpers<br />
* [[FieldTrip]]: Functional 3D<br />
* [[Reactive]]: Functional reactive programming with a data-driven implementation<br />
<br />
== Example use ==<br />
<br />
On my Windows system, I've placed cabal-make at <code>c:\Haskell\cabal-make</code>, and I like to install Haskell packages under <code>c:\Haskell\packages</code>. I might write a <code>Makefile</code> for the package [[checkers]] as follows:<br />
<br />
{| class="wikitable"<br />
| style="padding:0px 20px 0px 20px;" |<br />
<pre><br />
user = conal<br />
cabal-make = c:/Haskell/cabal-make<br />
configure-dirs = --prefix=c:/Haskell/packages --datadir=c:/Haskell/packages --libdir=c:/Haskell/packages --bindir=c:/Haskell/packages/bin<br />
hscolour-css = $(cabal-make)/hscolour.css<br />
<br />
server = code.haskell.org<br />
server-dir = /srv/code<br />
server-url-dir =<br />
include ../my-cabal-make.inc<br />
</pre><br />
|}<br />
<br />
To build [[checkers]], I run "<code>make</code>" with targets like "<code>configure</code>", "<code>build</code>", "<code>doc</code>", and "<code>install</code>". Or "<code>all</code>" (default) for all of these targets.<br />
<br />
A few [[darcs]]-related targets: <br />
* <code>pull</code> and <code>push</code>.<br />
* <code>repo</code>: makes a remote repository<br />
* <code>tag</code>: do "<code>darcs tag</code>" using current version (extracted from project [[Cabal]] file)<br />
* <code>darcs-dist</code>: make a tarball and copy to server.<br />
* <code>web-doc</code>: copy docs & colored sources to the server.<br />
* <code>test-get-build</code>: Test by doing "<code>darcs get</code>", configure, and build in a fresh temp directory.<br />
<br />
The target "<code>watch-comments</code>" sets up a subscription to the Haskell wiki talk pages that correspond to the package's modules (for the user comment links inserted in the [[Haddock]] docs).<br />
<br />
There are a few other targets as well. See the source.<br />
<br />
== Specializing ==<br />
<br />
I use a trick for collecting my favorite settings to be shared across my own<br />
packages. The file is called "<code>my-cabal-make.inc</code>":<br />
<br />
{| class="wikitable"<br />
| style="padding:0px 20px 0px 20px;" |<br />
<pre><br />
user = conal<br />
cabal-make = c:/Haskell/cabal-make<br />
configure-dirs = --prefix=c:/Haskell/packages --datadir=c:/Haskell/packages<br />
hscolour-css = $(cabal-make)/hscolour.css<br />
<br />
include $(cabal-make)/cabal-make.inc<br />
</pre><br />
|}<br />
<br />
Then I just have to define <code>haddock_interfaces</code> and include <code>my-cabal-make</code>. My [[checkers]] <code>Makefile</code> is really<br />
<br />
{| class="wikitable"<br />
| style="padding:0px 20px 0px 20px;" |<br />
<pre><br />
server = code.haskell.org<br />
server-dir = /srv/code<br />
server-url-dir =<br />
<br />
include ../my-cabal-make.inc<br />
</pre><br />
|}<br />
<br />
== Dependencies ==<br />
<br />
* [[Cabal]]<br />
* [[darcs]] for push & pull targets<br />
* [[Haddock]] and [[hscolour]] if you make your own docs<br />
* [http://www.gnu.org/software/make GNU make]<br />
<br />
== Use guidelines ==<br />
<br />
* In order for cabal-make to work, you have to list each of your source modules on a line by itself, ''including'' the first one in the list (instead of placing it alongside the Cabal directive). You can use "<code>make show-modules</code>" to see if your list of source modules is extracted correctly.<br />
* Cabal-make assumes your source code to be under <code>src</code>. Overridable via <code>top-src-dir</code>.<br />
<br />
== Get it ==<br />
<br />
<blockquote><br />
<tt>darcs get --partial http://code.haskell.org/~conal/code/cabal-make</tt><br />
</blockquote><br />
<br />
== Customization ==<br />
<br />
There are several customization variables defined in cabal-make that can be overridden. Simply define these variables in your makefile before including "<code>cabal-make.inc</code>". See the "Settings" section of the source.<br />
<br />
== To do ==<br />
<br />
* Eliminate the restrictions/assumptions listed in [[#Use guidelines]].<br />
<br />
[[Category:Tools]]<br />
[[Category:Cabal]]<br />
[[Category:Libraries]]<br />
[[Category:Packages]]</div>Conalhttps://wiki.haskell.org/CheckersCheckers2013-01-03T19:41:16Z<p>Conal: moved to github</p>
<hr />
<div>[[Category:Packages]]<br />
<br />
Content moved [https://github.com/conal/checkers to github].</div>Conalhttps://wiki.haskell.org/MemoTrieMemoTrie2012-11-29T17:20:19Z<p>Conal: redirect to github</p>
<hr />
<div>[[Category:Packages]]<br />
<br />
Description moved to [https://github.com/conal/MemoTrie github page].</div>Conalhttps://wiki.haskell.org/Research_papers/Domain_specific_languagesResearch papers/Domain specific languages2012-02-14T23:27:49Z<p>Conal: /* Graphics */ Replaced paper with the version that superceded it.</p>
<hr />
<div>__TOC__<br />
<br />
==Domain specific languages==<br />
<br />
;[http://www.jucs.org/jucs_9_8/implementation_of_an_embedded/Alves_N_M_M.html Implementation of an Embedded Hardware Description Language Using Haskell]<br />
:Nelio Muniz Mendes Alves and Sergio de Mello Schneider, 2003<br />
<br />
;[http://legacy.cs.uu.nl/daan/download/papers/dsec.ps Domain Specific Embedded Compilers]<br />
:Daan Leijen and Erik Meijer. 2nd USENIX Conference on Domain-Specific Languages (DSL'99), Austin, Texas, October 1999. Also appeared in ACM SIGPLAN Notices 35, 1, January 2000.<br />
<br />
;[http://conal.net/papers/jfp-saig/ Compiling Embedded Languages]<br />
:Conal Elliott, Sigbjorn Finne, Oege de Moor. Journal of Functional Programming, 13(2), 2003.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.5061 Modular Domain Specific Languages and Tools]<br />
:Hudak (1998) (cited by 92)<br />
<br />
;[http://www.kestrel.edu/~jullig/rio97/position-papers/ACM-WS.ps Building Domain-Specific Embedded Languages] ([http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.6020 citeseer link])<br />
:Paul Hudak (cited by 102)<br />
<br />
;[http://www.usenix.org/publications/library/proceedings/dsl99/full_papers/leijen/leijen.ps Domain Specific Embedded Compilers] <br />
:D Leijen, E Meijer (cited by 88)<br />
<br />
;[http://www.haskell.org/wikiupload/c/c6/ICMI45-paper-en.pdf How to build a monadic interpreter in one day] (pdf)<br />
:By Dan Popa. A small tutorial on how to build a language in one day, using the Parser Monad in the front end and a monad with state and I/O string in the back end. <br />
<br />
;[http://portal.acm.org/citation.cfm%3Fid%3D317765.317794 Haskell and XML: generic combinators or type-based translation?]<br />
:M Wallace, C Runciman - ACM SIGPLAN Notices, 1999 <br />
<br />
;[http://www.springerlink.com/index/RPRWGR7LHTXBT2DW.pdf Modeling HTML in Haskell]<br />
:P Thiemann - Practical Applications of Declarative Languages, 2000 (cited by 24)<br />
<br />
;[http://portal.acm.org/citation.cfm%3Fcoll%3DGUIDE%26dl%3DGUIDE%26id%3D331975 DSL implementation using staging and monads]<br />
:T Sheard, E Pasalic - Proceedings of the 2nd conference on Domain-specific languages, 1999<br />
<br />
;[http://www.informatik.uni-freiburg.de/~thiemann/papers/wflp02.ps.gz Programmable type systems for domain specific languages]<br />
:P Thiemann, Electronic Notes in Theoretical Computer Science, 2002<br />
<br />
;[http://portal.acm.org/citation.cfm%3Fid%3D1052935 An embedded domain-specific language for type-safe server-side web scripting]<br />
:P Thiemann, ACM Transactions on Internet Technology (TOIT), 2005<br />
<br />
;[http://www.informatik.uni-freiburg.de/~thiemann/papers/modeling.ps.gz A typed representation for HTML and XML documents in Haskell]<br />
:P Thiemann, Journal of Functional Programming, 2003 (cited by 38)<br />
<br />
;[http://www.haskell.org/wikiupload/f/f5/Types2.pdf.zip Adaptable Software - Modular Extensible Monadic Entry-pointless Type Checker in Haskell]<br />
:Ro/Haskell Group, Univ. “V.Alecsandri”, Bacau, Romania, 2011<br />
<br />
;[http://portal.acm.org/citation.cfm%3Fid%3D331963.331976 Monadic robotics]<br />
:J Peterson, G Hager.<br />
<br />
;[ftp://cse.ogi.edu/pub/pacsoft/papers/dsl-tools.ps Defining and Implementing Closed, Domain-Specific Languages]<br />
:RB Kieburtz - Invited talk, 2000 <br />
<br />
===Rapid prototyping===<br />
<br />
;[http://www.haskell.org/frob/icse99/visionpaper.ps Prototyping Real-Time Vision Systems: An Experiment in DSL Design]<br />
:A. Reid, J. Peterson, G. Hager and P. Hudak, In Proceedings of International Conference on Software Engineering (ICSE'99), Los Angeles, CA. 16-22 May, 1999.<br />
<br />
;[http://haskell.cs.yale.edu/yale/papers/padl01-vision/index.html FVision: A Declarative Language for Visual Tracking]<br />
:J. Peterson, P. Hudak, A. Reid and G. Hager. In Proceedings of Third International Symposium on Practical Applications of Declarative Languages PADL'01, March 2001.<br />
<br />
===Graphics===<br />
<br />
;[http://conal.net/papers/tse-modeled-animation/ An Embedded Modeling Language Approach to Interactive 3D and Multimedia Animation]<br />
:Conal Elliott. IEEE Transactions on Software Engineering, May/June 1999.<br />
<br />
;[http://conal.net/papers/Vertigo/ Programming Graphics Processors Functionally]<br />
:Conal Elliott. Proceedings of the 2004 Haskell Workshop.<br />
<br />
;[http://conal.net/papers/functional-images/ Functional Images]<br />
:Conal Elliott. In The Fun of Programming, March 2003.<br />
<br />
;[http://conal.net/papers/Eros/ Tangible Functional Programming]<br />
:Conal Elliott, ICFP 07<br />
<br />
===Hardware design===<br />
<br />
;[http://purl.utwente.nl/essays/59381 Haskell as a higher order structural hardware description language]<br />
:Kooijman, M. (2009); master's thesis.<br />
<br />
;[http://purl.utwente.nl/essays/59482 CλasH : from Haskell to hardware]<br />
:Baaij, C. (2009); master's thesis.<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/charme01-sorter.pdf The Design and Verification of a Sorter Core]<br />
:Koen Claessen, Mary Sheeran, and Satnam Singh. In Proc. of Conference on Correct Hardware Design and Verification Methods (CHARME), Lecture Notes in Computer Science, Springer Verlag, 2001.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/lic.ps An Embedded Language Approach to Hardware Description and Verification]<br />
:Koen Claessen. Dept. of Computer Science and Engineering, Chalmers University of Technology, Lic. thesis, August 2000.<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/phd01-thesis.ps Embedded Languages for Describing and Verifying Hardware]<br />
:Koen Claessen. Dept. of Computer Science and Engineering, Chalmers University of Technology, Ph.D. thesis, April 2001.<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/fdpe02-lava.ps An Embedded Language Approach to Teaching Hardware Compilation]<br />
:Koen Claessen and Gordon Pace. In Proc. of Workshop on Functional and Declarative Programming in Education (FDPE), 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/constructive.ps Safety Property Verification of Cyclic Circuits]<br />
:Koen Claessen. June 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/paps.ps Verification of Hardware Systems with First-Order Logic]<br />
:Koen Claessen, Reiner Hähnle, Johan Mårtensson. PaPS 2002. 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/dcc-hwcomp.ps An Embedded Language Framework for Hardware Compilation]<br />
:Koen Claessen, Gordon Pace. DCC 2002. 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/obs-shar.ps Observable Sharing for Functional Circuit Description]<br />
:Koen Claessen and David Sands. ASIAN '99. 1999.<br />
<br />
;[http://www.cs.chalmers.se/~bjesse/fftpaper.ps.gz Automatic Verification of Combinational and Pipelined FFT Circuits]<br />
:Per Bjesse. CAV. 1999<br />
<br />
;[http://content.ohsu.edu/cdm4/item_viewer.php?CISOROOT=/etd&CISOPTR=212&CISOBOX=1&REC=9 Algebraic Specification and Verification of Processor Microarchitectures]<br />
:John Matthews. PhD. Thesis. Oregon Graduate Institute. 2000.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.8326 Symbolic Simulation of Microprocessor Models using Type Classes in Haskell]<br />
:Nancy A. Day, Jeffrey R. Lewis and Byron Cook. CHARME'99. September 1999.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.4284 On Embedding a Microarchitectural Design Language within Haskell]<br />
:John Launchbury, Jeff Lewis and Byron Cook. ICFP'99. 1999. <br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.5519 Elementary Microarchitecture Algebra]<br />
:John Matthews and John Launchbury. CAV '99. 1999.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.1486 Specifying Superscalar Microprocessors with Hawk]<br />
:Byron Cook, John Launchbury and John Matthews. FTH '98. 1998.<br />
<br />
;[https://wiki.ittc.ku.edu/lambda/Image:Matthews-Microprocessor_Specification_in_HAWK.pdf Microprocessor Specification in Hawk]<br />
:John Matthews, John Launchbury and Byron Cook. ICCL '98. 1998.<br />
<br />
====Lava====<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/lava.ps Lava: Hardware Design in Haskell]<br />
:Per Bjesse, Koen Claessen, Mary Sheeran, Satnam Singh<br />
<br />
;[http://www.math.chalmers.se/~koen/pubs/entry-sttt03-lava.html Using Lava to Design and Verify Recursive and Periodic Sorters]<br />
:Koen Claessen, Mary Sheeran, and Satnam Singh. In International Journal on Software Tools for Technology Transfer, vol. 4 (3), pp. 349--358, Springer Verlag, 2003.<br />
<br />
;[http://www.math.chalmers.se/~koen/pubs/entry-fop-lava.html Functional Hardware Description in Lava]<br />
:Koen Claessen, Mary Sheeran, and Satnam Singh. In Jeremy Gibbons and Oege de Moor (eds.), The Fun of Programming, Cornerstones of Computing, pp. 151--176, Palgrave, 2003.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Lava/tutorial.ps A Lava Tutorial]<br />
:Koen Claessen, Mary Sheeran. April 2000.<br />
<br />
===Network programming===<br />
<br />
;[http://www.fujipress.jp/finder/xslt.php?mode=present&inputfile=IPSTP004700160002.xml A Network Programming Framework in Haskell Based on Asynchronous Localized pi-calculus]<br />
:Keigo Imai, Shoji Yuen, Kiyoshi Agusa. Graduate School of Information Science, Nagoya University. Journal ref: IPSJ Transactions on Programming, Vol.47, No.16, pp. 10-28, 2006<br />
<br />
===Logic and constraint programming===<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/haskell00-typedlp.ps Typed Logical Variables in Haskell]<br />
:Koen Claessen and Peter Ljunglöf. In Proc. of Haskell Workshop, ACM SIGPLAN, 2000. 1999<br />
<br />
;[http://www.comlab.ox.ac.uk/ralf.hinze/publications/Prolog.ps.gz Prolog's control constructs in a functional setting - Axioms and implementation]<br />
:Ralf Hinze. International Journal of Foundations of Computer Science. 12 (2). 2001.<br />
<br />
;[http://www.cs.kuleuven.be/~toms/Research/papers/modref2009.pdf Monadic Constraint Programming]<br />
:Tom Schrijvers, Peter Stuckey and Phil Wadler. Journal of Functional Programming, 2009.<br />
<br />
[[Category:Research]]</div>Conalhttps://wiki.haskell.org/Research_papers/Domain_specific_languagesResearch papers/Domain specific languages2012-02-13T18:32:03Z<p>Conal: /* Graphics */ fix link for my DSL97 paper</p>
<hr />
<div>__TOC__<br />
<br />
==Domain specific languages==<br />
<br />
;[http://www.jucs.org/jucs_9_8/implementation_of_an_embedded/Alves_N_M_M.html Implementation of an Embedded Hardware Description Language Using Haskell]<br />
:Nelio Muniz Mendes Alves and Sergio de Mello Schneider, 2003<br />
<br />
;[http://legacy.cs.uu.nl/daan/download/papers/dsec.ps Domain Specific Embedded Compilers]<br />
:Daan Leijen and Erik Meijer. 2nd USENIX Conference on Domain-Specific Languages (DSL'99), Austin, Texas, October 1999. Also appeared in ACM SIGPLAN Notices 35, 1, January 2000.<br />
<br />
;[http://conal.net/papers/jfp-saig/ Compiling Embedded Languages]<br />
:Conal Elliott, Sigbjorn Finne, Oege de Moor. Journal of Functional Programming, 13(2), 2003.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.5061 Modular Domain Specific Languages and Tools]<br />
:Hudak (1998) (cited by 92)<br />
<br />
;[http://www.kestrel.edu/~jullig/rio97/position-papers/ACM-WS.ps Building Domain-Specific Embedded Languages] ([http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.6020 citeseer link])<br />
:Paul Hudak (cited by 102)<br />
<br />
;[http://www.usenix.org/publications/library/proceedings/dsl99/full_papers/leijen/leijen.ps Domain Specific Embedded Compilers] <br />
:D Leijen, E Meijer (cited by 88)<br />
<br />
;[http://www.haskell.org/wikiupload/c/c6/ICMI45-paper-en.pdf How to build a monadic interpreter in one day] (pdf)<br />
:By Dan Popa. A small tutorial on how to build a language in one day, using the Parser Monad in the front end and a monad with state and I/O string in the back end. <br />
<br />
;[http://portal.acm.org/citation.cfm%3Fid%3D317765.317794 Haskell and XML: generic combinators or type-based translation?]<br />
:M Wallace, C Runciman - ACM SIGPLAN Notices, 1999 <br />
<br />
;[http://www.springerlink.com/index/RPRWGR7LHTXBT2DW.pdf Modeling HTML in Haskell]<br />
:P Thiemann - Practical Applications of Declarative Languages, 2000 (cited by 24)<br />
<br />
;[http://portal.acm.org/citation.cfm%3Fcoll%3DGUIDE%26dl%3DGUIDE%26id%3D331975 DSL implementation using staging and monads]<br />
:T Sheard, E Pasalic - Proceedings of the 2nd conference on Domain-specific languages, 1999<br />
<br />
;[http://www.informatik.uni-freiburg.de/~thiemann/papers/wflp02.ps.gz Programmable type systems for domain specific languages]<br />
:P Thiemann, Electronic Notes in Theoretical Computer Science, 2002<br />
<br />
;[http://portal.acm.org/citation.cfm%3Fid%3D1052935 An embedded domain-specific language for type-safe server-side web scripting]<br />
:P Thiemann, ACM Transactions on Internet Technology (TOIT), 2005<br />
<br />
;[http://www.informatik.uni-freiburg.de/~thiemann/papers/modeling.ps.gz A typed representation for HTML and XML documents in Haskell]<br />
:P Thiemann, Journal of Functional Programming, 2003 (cited by 38)<br />
<br />
;[http://www.haskell.org/wikiupload/f/f5/Types2.pdf.zip Adaptable Software - Modular Extensible Monadic Entry-pointless Type Checker in Haskell]<br />
:Ro/Haskell Group, Univ. “V.Alecsandri”, Bacau, Romania, 2011<br />
<br />
;[http://portal.acm.org/citation.cfm%3Fid%3D331963.331976 Monadic robotics]<br />
:J Peterson, G Hager.<br />
<br />
;[ftp://cse.ogi.edu/pub/pacsoft/papers/dsl-tools.ps Defining and Implementing Closed, Domain-Specific Languages]<br />
:RB Kieburtz - Invited talk, 2000 <br />
<br />
===Rapid prototyping===<br />
<br />
;[http://www.haskell.org/frob/icse99/visionpaper.ps Prototyping Real-Time Vision Systems: An Experiment in DSL Design]<br />
:A. Reid, J. Peterson, G. Hager and P. Hudak, In Proceedings of International Conference on Software Engineering (ICSE'99), Los Angeles, CA. 16-22 May, 1999.<br />
<br />
;[http://haskell.cs.yale.edu/yale/papers/padl01-vision/index.html FVision: A Declarative Language for Visual Tracking]<br />
:J. Peterson, P. Hudak, A. Reid and G. Hager. In Proceedings of Third International Symposium on Practical Applications of Declarative Languages PADL'01, March 2001.<br />
<br />
===Graphics===<br />
<br />
;[http://conal.net/papers/dsl97/ Modeling Interactive 3D and Multimedia Animation with an Embedded Language]<br />
:Conal Elliott (cited by 35)<br />
<br />
;[http://conal.net/papers/Vertigo/ Programming Graphics Processors Functionally]<br />
:Conal Elliott. Proceedings of the 2004 Haskell Workshop.<br />
<br />
;[http://conal.net/papers/functional-images/ Functional Images]<br />
:Conal Elliott. In The Fun of Programming, March 2003.<br />
<br />
;[http://conal.net/papers/Eros/ Tangible Functional Programming]<br />
:Conal Elliott, ICFP 07<br />
<br />
===Hardware design===<br />
<br />
;[http://purl.utwente.nl/essays/59381 Haskell as a higher order structural hardware description language]<br />
:Kooijman, M. (2009); master's thesis.<br />
<br />
;[http://purl.utwente.nl/essays/59482 CλasH : from Haskell to hardware]<br />
:Baaij, C. (2009); master's thesis.<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/charme01-sorter.pdf The Design and Verification of a Sorter Core]<br />
:Koen Claessen, Mary Sheeran, and Satnam Singh. In Proc. of Conference on Correct Hardware Design and Verification Methods (CHARME), Lecture Notes in Computer Science, Springer Verlag, 2001.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/lic.ps An Embedded Language Approach to Hardware Description and Verification]<br />
:Koen Claessen. Dept. of Computer Science and Engineering, Chalmers University of Technology, Lic. thesis, August 2000.<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/phd01-thesis.ps Embedded Languages for Describing and Verifying Hardware]<br />
:Koen Claessen. Dept. of Computer Science and Engineering, Chalmers University of Technology, Ph.D. thesis, April 2001.<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/fdpe02-lava.ps An Embedded Language Approach to Teaching Hardware Compilation]<br />
:Koen Claessen and Gordon Pace. In Proc. of Workshop on Functional and Declarative Programming in Education (FDPE), 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/constructive.ps Safety Property Verification of Cyclic Circuits]<br />
:Koen Claessen. June 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/paps.ps Verification of Hardware Systems with First-Order Logic]<br />
:Koen Claessen, Reiner Hähnle, Johan Mårtensson. PaPS 2002. 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/dcc-hwcomp.ps An Embedded Language Framework for Hardware Compilation]<br />
:Koen Claessen, Gordon Pace. DCC 2002. 2002.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/obs-shar.ps Observable Sharing for Functional Circuit Description]<br />
:Koen Claessen and David Sands. ASIAN '99. 1999.<br />
<br />
;[http://www.cs.chalmers.se/~bjesse/fftpaper.ps.gz Automatic Verification of Combinational and Pipelined FFT Circuits]<br />
:Per Bjesse. CAV. 1999<br />
<br />
;[http://content.ohsu.edu/cdm4/item_viewer.php?CISOROOT=/etd&CISOPTR=212&CISOBOX=1&REC=9 Algebraic Specification and Verification of Processor Microarchitectures]<br />
:John Matthews. PhD. Thesis. Oregon Graduate Institute. 2000.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.8326 Symbolic Simulation of Microprocessor Models using Type Classes in Haskell]<br />
:Nancy A. Day, Jeffrey R. Lewis and Byron Cook. CHARME'99. September 1999.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.4284 On Embedding a Microarchitectural Design Language within Haskell]<br />
:John Launchbury, Jeff Lewis and Byron Cook. ICFP'99. 1999. <br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.5519 Elementary Microarchitecture Algebra]<br />
:John Matthews and John Launchbury. CAV '99. 1999.<br />
<br />
;[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.1486 Specifying Superscalar Microprocessors with Hawk]<br />
:Byron Cook, John Launchbury and John Matthews. FTH '98. 1998.<br />
<br />
;[https://wiki.ittc.ku.edu/lambda/Image:Matthews-Microprocessor_Specification_in_HAWK.pdf Microprocessor Specification in Hawk]<br />
:John Matthews, John Launchbury and Byron Cook. ICCL '98. 1998.<br />
<br />
====Lava====<br />
<br />
;[http://www.cs.chalmers.se/~koen/Papers/lava.ps Lava: Hardware Design in Haskell]<br />
:Per Bjesse, Koen Claessen, Mary Sheeran, Satnam Singh<br />
<br />
;[http://www.math.chalmers.se/~koen/pubs/entry-sttt03-lava.html Using Lava to Design and Verify Recursive and Periodic Sorters]<br />
:Koen Claessen, Mary Sheeran, and Satnam Singh. In International Journal on Software Tools for Technology Transfer, vol. 4 (3), pp. 349--358, Springer Verlag, 2003.<br />
<br />
;[http://www.math.chalmers.se/~koen/pubs/entry-fop-lava.html Functional Hardware Description in Lava]<br />
:Koen Claessen, Mary Sheeran, and Satnam Singh. In Jeremy Gibbons and Oege de Moor (eds.), The Fun of Programming, Cornerstones of Computing, pp. 151--176, Palgrave, 2003.<br />
<br />
;[http://www.cs.chalmers.se/~koen/Lava/tutorial.ps A Lava Tutorial]<br />
:Koen Claessen, Mary Sheeran. April 2000.<br />
<br />
===Network programming===<br />
<br />
;[http://www.fujipress.jp/finder/xslt.php?mode=present&inputfile=IPSTP004700160002.xml A Network Programming Framework in Haskell Based on Asynchronous Localized pi-calculus]<br />
:Keigo Imai, Shoji Yuen, Kiyoshi Agusa. Graduate School of Information Science, Nagoya University. Journal ref: IPSJ Transactions on Programming, Vol.47, No.16, pp. 10-28, 2006<br />
<br />
===Logic and constraint programming===<br />
<br />
;[http://www.cs.chalmers.se/~koen/pubs/haskell00-typedlp.ps Typed Logical Variables in Haskell]<br />
:Koen Claessen and Peter Ljunglöf. In Proc. of Haskell Workshop, ACM SIGPLAN, 2000. 1999<br />
<br />
;[http://www.comlab.ox.ac.uk/ralf.hinze/publications/Prolog.ps.gz Prolog's control constructs in a functional setting - Axioms and implementation]<br />
:Ralf Hinze. International Journal of Foundations of Computer Science. 12 (2). 2001.<br />
<br />
;[http://www.cs.kuleuven.be/~toms/Research/papers/modref2009.pdf Monadic Constraint Programming]<br />
:Tom Schrijvers, Peter Stuckey and Phil Wadler. Journal of Functional Programming, 2009.<br />
<br />
[[Category:Research]]</div>Conalhttps://wiki.haskell.org/IO_insideIO inside2011-08-05T19:20:58Z<p>Conal: /* Welcome to the RealWorld, baby :) */ small typo fix ("in incorrect" --> "is incorrect")</p>
<hr />
<div>Haskell I/O has always been a source of confusion and surprises for new Haskellers. While simple I/O code in Haskell looks very similar to its equivalents in imperative languages, attempts to write somewhat more complex code often result in a total mess. This is because Haskell I/O is really very different internally. Haskell is a pure language and even the I/O system can't break this purity.<br />
<br />
The following text is an attempt to explain the details of Haskell I/O implementations. This explanation should help you eventually master all the smart I/O tricks. Moreover, I've added a detailed explanation of various traps you might encounter along the way. After reading this text, you will receive a "Master of Haskell I/O" degree that is equal to a Bachelor in Computer Science and Mathematics, simultaneously :)<br />
<br />
If you are new to Haskell I/O you may prefer to start by reading the [[Introduction to IO]] page.<br />
<br />
<br />
== Haskell is a pure language ==<br />
<br />
Haskell is a pure language, which means that the result of any function call is fully determined by its arguments. Pseudo-functions like rand() or getchar() in C, which return different results on each call, are simply impossible to write in Haskell. Moreover, Haskell functions can't have side effects, which means that they can't effect any changes to the "real world", like changing files, writing to the screen, printing, sending data over the network, and so on. These two restrictions together mean that any function call can be replaced by the result of a previous call with the same parameters, and the language '''guarantees''' that all these rearrangements will not change the program result!<br />
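As a minimal sketch of this guarantee (the names here are invented for illustration), consider a pure function applied twice to the same argument:<br />

```haskell
-- A pure function: its result is fully determined by its argument.
square :: Int -> Int
square x = x * x

-- Since 'square 3' always denotes the same value, the compiler may compute
-- it once and share the result between both uses without changing the
-- program's meaning.
twoSquares :: Int
twoSquares = square 3 + square 3   -- same as: let s = square 3 in s + s
```

No such rewrite would be safe for a C call like getchar(), whose two invocations may return different values.<br />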
<br />
Let's compare this to C: optimizing C compilers try to guess which functions have no side effects and don't depend on mutable global variables. If this guess is wrong, an optimization can change the program's semantics! To avoid this kind of disaster, C optimizers are conservative in their guesses or require hints from the programmer about the purity of functions.<br />
<br />
Compared to an optimizing C compiler, a Haskell compiler is a set of pure mathematical transformations. This results in much better high-level optimization facilities. Moreover, pure mathematical computations can be much more easily divided into several threads that may be executed in parallel, which is increasingly important in these days of multi-core CPUs. Finally, pure computations are less error-prone and easier to verify, which adds to Haskell's robustness and to the speed of program development using Haskell.<br />
<br />
Haskell's purity allows the compiler to call only those functions whose<br />
results are really required to calculate the final value of the top-level<br />
function (i.e., main) - this is called lazy evaluation. It's a great thing for<br />
pure mathematical computations, but what about I/O actions? A function<br />
like (<hask>putStrLn "Press any key to begin formatting"</hask>) can't return any<br />
meaningful result value, so how can we ensure that the compiler will not<br />
omit or reorder its execution? And in general: how can we work with<br />
stateful algorithms and side effects in an entirely lazy language?<br />
Many different solutions to this question have been proposed over the years of<br />
Haskell's development (see [[History of Haskell]]), though a solution based on [[monad]]s is now<br />
the standard.<br />
<br />
== What is a monad? ==<br />
<br />
What is a [[monad]]? It's something from mathematical category theory, which I<br />
don't know anymore :) In order to understand how monads are used to<br />
solve the problem of I/O and side effects, you don't need to know it. It's<br />
enough to just know elementary mathematics, like I do :)<br />
<br />
Let's imagine that we want to implement in Haskell the well-known<br />
'getchar' function. What type should it have? Let's try:<br />
<br />
<haskell><br />
getchar :: Char<br />
<br />
get2chars = [getchar,getchar]<br />
</haskell><br />
<br />
What will we get with 'getchar' having just the 'Char' type? You can see<br />
all the possible problems in the definition of 'get2chars':<br />
<br />
# Because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "excessive" calls to 'getchar' and use one returned value twice.<br />
# Even if it does make two calls, there is no way to determine which call should be performed first. Do you want to return the two chars in the order in which they were read, or in the opposite order? Nothing in the definition of 'get2chars' answers this question.<br />
<br />
How can these problems be solved, from the programmer's viewpoint?<br />
Let's introduce a fake parameter of 'getchar' to make each call<br />
"different" from the compiler's point of view:<br />
<br />
<haskell><br />
getchar :: Int -> Char<br />
<br />
get2chars = [getchar 1, getchar 2]<br />
</haskell><br />
<br />
Right away, this solves the first problem mentioned above - now the<br />
compiler will make two calls because it sees them as having different<br />
parameters. The whole 'get2chars' function should also have a<br />
fake parameter, otherwise we will have the same problem calling it:<br />
<br />
<haskell><br />
getchar :: Int -> Char<br />
get2chars :: Int -> String<br />
<br />
get2chars _ = [getchar 1, getchar 2]<br />
</haskell><br />
<br />
<br />
Now we need to give the compiler some clue to determine which function it<br />
should call first. The Haskell language doesn't provide any way to express<br />
order of evaluation... except for data dependencies! How about adding an<br />
artificial data dependency which prevents evaluation of the second<br />
'getchar' before the first one? In order to achieve this, we will<br />
return an additional fake result from 'getchar' that will be used as a<br />
parameter for the next 'getchar' call:<br />
<br />
<haskell><br />
getchar :: Int -> (Char, Int)<br />
<br />
get2chars _ = [a,b] where (a,i) = getchar 1<br />
(b,_) = getchar i<br />
</haskell><br />
<br />
So far so good - now we can guarantee that 'a' is read before 'b'<br />
because reading 'b' needs the value ('i') that is returned by reading 'a'!<br />
<br />
We've added a fake parameter to 'get2chars', but there is a problem: the<br />
Haskell compiler is too smart! It may believe that the external 'getchar'<br />
function really depends on its parameter, but for 'get2chars' it<br />
can see that we're just cheating, because we throw the parameter away! Therefore it won't feel obliged to execute the calls in the order we want. How can we fix this? How about passing this fake parameter on to the 'getchar' calls?! In this case<br />
the compiler can't guess that it is really unused :)<br />
<br />
<haskell><br />
get2chars i0 = [a,b] where (a,i1) = getchar i0<br />
(b,i2) = getchar i1<br />
</haskell><br />
<br />
<br />
What's more, 'get2chars' has all the same purity problems as the 'getchar'<br />
function. If you need to call it two times, you need a way to describe<br />
the order of these calls. Look at:<br />
<br />
<haskell><br />
get4chars = [get2chars 1, get2chars 2] -- order of 'get2chars' calls isn't defined<br />
</haskell><br />
<br />
We already know how to deal with these problems - 'get2chars' should<br />
also return some fake value that can be used to order calls:<br />
<br />
<haskell><br />
get2chars :: Int -> (String, Int)<br />
<br />
get4chars i0 = (a++b) where (a,i1) = get2chars i0<br />
(b,i2) = get2chars i1<br />
</haskell><br />
<br />
<br />
But what's the fake value 'get2chars' should return? If we use some integer constant, the excessively-smart Haskell compiler will guess that we're cheating again :) What about returning the value returned by 'getchar'? See:<br />
<br />
<haskell><br />
get2chars :: Int -> (String, Int)<br />
get2chars i0 = ([a,b], i2) where (a,i1) = getchar i0<br />
(b,i2) = getchar i1<br />
</haskell><br />
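To see the data dependency at work, here is a complete toy version you can actually run - assuming, purely for illustration, that the "world" is just an index into a fixed input string:<br />
<br />
<haskell><br />
input :: String<br />
input = "xyz"<br />
<br />
-- A fake 'getchar': reads the character at the current index<br />
-- and returns the next index as the new fake "world".<br />
getchar :: Int -> (Char, Int)<br />
getchar i = (input !! i, i + 1)<br />
<br />
get2chars :: Int -> (String, Int)<br />
get2chars i0 = ([a,b], i2) where (a,i1) = getchar i0<br />
                                 (b,i2) = getchar i1<br />
<br />
main = print (get2chars 0)   -- prints ("xy",2)<br />
</haskell><br />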
<br />
Believe it or not, we've just constructed the whole "monadic"<br />
Haskell I/O system.<br />
<br />
== Welcome to the RealWorld, baby :) ==<br />
<br />
Warning: The following story about IO is incorrect in that it cannot actually explain some important aspects of IO (including interaction and concurrency). However, some people find it useful to begin developing an understanding.<br />
<br />
The 'main' Haskell function has the type:<br />
<br />
<haskell><br />
main :: RealWorld -> ((), RealWorld)<br />
</haskell><br />
<br />
where 'RealWorld' is a fake type used instead of our Int. It's something<br />
like the baton passed in a relay race. When 'main' calls some IO function,<br />
it passes the "RealWorld" it received as a parameter. All IO functions have<br />
similar types involving RealWorld as a parameter and result. To be<br />
exact, "IO" is a type synonym defined in the following way:<br />
<br />
<haskell><br />
type IO a = RealWorld -> (a, RealWorld)<br />
</haskell><br />
<br />
So, 'main' just has type "IO ()", 'getChar' has type "IO Char" and so<br />
on. You can think of the type "IO Char" as meaning "take the current RealWorld, do something to it, and return a Char and a (possibly changed) RealWorld". Let's look at 'main' calling 'getChar' two times:<br />
<br />
<haskell><br />
getChar :: RealWorld -> (Char, RealWorld)<br />
<br />
main :: RealWorld -> ((), RealWorld)<br />
main world0 = let (a, world1) = getChar world0<br />
(b, world2) = getChar world1<br />
in ((), world2)<br />
</haskell><br />
<br />
<br />
Look at this closely: 'main' passes the "world" it received to the first 'getChar'. This 'getChar' returns some new value of type RealWorld<br />
that gets used in the next call. Finally, 'main' returns the "world" it got<br />
from the second 'getChar'.<br />
<br />
# Is it possible here to omit any call of 'getChar' if the Char it read is not used? No, because we need to return the "world" that is the result of the second 'getChar' and this in turn requires the "world" returned from the first 'getChar'.<br />
# Is it possible to reorder the 'getChar' calls? No: the second 'getChar' can't be called before the first one because it uses the "world" returned from the first call.<br />
# Is it possible to duplicate calls? In Haskell semantics - yes, but real compilers never duplicate work in such simple cases (otherwise, the programs generated will not have any speed guarantees).<br />
<br />
<br />
As we already said, RealWorld values are used like a baton which gets passed<br />
between all routines called by 'main' in strict order. Inside each<br />
routine called, RealWorld values are used in the same way. Overall, in<br />
order to "compute" the world to be returned from 'main', we should perform<br />
each IO procedure that is called from 'main', directly or indirectly.<br />
This means that each procedure inserted in the chain will be performed<br />
just at the moment (relative to the other IO actions) when we intended it<br />
to be called. Let's consider the following program:<br />
<br />
<haskell><br />
main = do a <- ask "What is your name?"<br />
b <- ask "How old are you?"<br />
return ()<br />
<br />
ask s = do putStr s<br />
readLn<br />
</haskell><br />
<br />
Now you have enough knowledge to rewrite it in a low-level way and<br />
check that each operation that should be performed will really be<br />
performed with the arguments it should have and in the order we expect.<br />
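Here is one possible low-level translation, in the same fictional RealWorld style (a sketch only - this is not compilable code, since real RealWorld values can't be constructed):<br />
<br />
<haskell><br />
main world0 = let (a, world1) = ask "What is your name?" world0<br />
                  (b, world2) = ask "How old are you?" world1<br />
              in ((), world2)<br />
<br />
ask s world0 = let ((), world1) = putStr s world0<br />
               in readLn world1<br />
</haskell><br />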
<br />
<br />
But what about conditional execution? No problem. Let's define the<br />
well-known 'when' operation:<br />
<br />
<haskell><br />
when :: Bool -> IO () -> IO ()<br />
when condition action world =<br />
if condition<br />
then action world<br />
else ((), world)<br />
</haskell><br />
<br />
As you can see, we can easily include IO procedures (actions) in the<br />
execution chain or exclude them from it, depending on data values. If 'condition'<br />
is False when 'when' is called, 'action' will never be performed, because<br />
real Haskell compilers, again, never call functions whose results<br />
are not required to calculate the final result (''i.e.'', here, the final "world" value of 'main').<br />
<br />
Loops and more complex control structures can be implemented in<br />
the same way. Try it as an exercise!<br />
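For instance, a loop that repeats an action a fixed number of times can be written in the same fictional style (a sketch for this tutorial's fake "IO" synonym, not the real library definition):<br />
<br />
<haskell><br />
repeatN :: Int -> IO () -> IO ()<br />
repeatN 0 action world = ((), world)<br />
repeatN n action world = let ((), world1) = action world<br />
                         in repeatN (n-1) action world1<br />
</haskell><br />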
<br />
<br />
Finally, you may want to know how much passing these RealWorld<br />
values around the program costs. It's free! These fake values exist solely for the compiler while it analyzes and optimizes the code, but when it gets to assembly code generation, it "suddenly" realizes that this type is like "()", so<br />
all these parameters and result values can be omitted from the final generated code. Isn't it beautiful? :)<br />
<br />
== '>>=' and 'do' notation ==<br />
<br />
All beginners (including me :)) start by thinking that 'do' is some<br />
magic statement that executes IO actions. That's wrong - 'do' is just<br />
syntactic sugar that simplifies the writing of procedures that use IO (and also other monads, but that's beyond the scope of this tutorial). 'do' notation eventually gets translated to statements passing "world" values around like we've manually written above and is used to simplify the gluing of several<br />
IO actions together. You don't need to use 'do' for just one statement; for instance,<br />
<br />
<haskell><br />
main = do putStr "Hello!"<br />
</haskell><br />
<br />
is desugared to:<br />
<br />
<haskell><br />
main = putStr "Hello!"<br />
</haskell><br />
<br />
But nevertheless it's considered Good Style to use 'do' even for one statement<br />
because it simplifies adding new statements in the future.<br />
<br />
<br />
Let's examine how to desugar a 'do' with multiple statements in the<br />
following example: <br />
<br />
<haskell><br />
main = do putStr "What is your name?"<br />
putStr "How old are you?"<br />
putStr "Nice day!"<br />
</haskell><br />
<br />
The 'do' statement here just joins several IO actions that should be<br />
performed sequentially. It's translated to sequential applications<br />
of one of the so-called "binding operators", namely '>>':<br />
<br />
<haskell><br />
main = (putStr "What is your name?")<br />
>> ( (putStr "How old are you?")<br />
>> (putStr "Nice day!")<br />
)<br />
</haskell><br />
<br />
This binding operator just combines two IO actions, executing them<br />
sequentially by passing the "world" between them:<br />
<br />
<haskell><br />
(>>) :: IO a -> IO b -> IO b<br />
(action1 >> action2) world0 =<br />
let (a, world1) = action1 world0<br />
(b, world2) = action2 world1<br />
in (b, world2)<br />
</haskell><br />
<br />
If defining operators this way looks strange to you, read this<br />
definition as follows:<br />
<br />
<haskell><br />
action1 >> action2 = action<br />
where<br />
action world0 = let (a, world1) = action1 world0<br />
(b, world2) = action2 world1<br />
in (b, world2)<br />
</haskell><br />
<br />
Now you can substitute the definition of '>>' at the places of its usage<br />
and check that program constructed by the 'do' desugaring is actually the<br />
same as we could write by manually manipulating "world" values.<br />
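Carrying out that substitution for the three-'putStr' example above yields (again in the fictional RealWorld style):<br />
<br />
<haskell><br />
main world0 =<br />
  let (_, world1) = putStr "What is your name?" world0<br />
      (_, world2) = putStr "How old are you?"   world1<br />
      (_, world3) = putStr "Nice day!"          world2<br />
  in ((), world3)<br />
</haskell><br />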
<br />
<br />
A more complex example involves the binding of variables using "<-":<br />
<br />
<haskell><br />
main = do a <- readLn<br />
print a<br />
</haskell><br />
<br />
This code is desugared into:<br />
<br />
<haskell><br />
main = readLn<br />
>>= (\a -> print a)<br />
</haskell><br />
<br />
As you should remember, the '>>' binding operator silently ignores<br />
the value of its first action and returns as an overall result<br />
the result of its second action only. On the other hand, the '>>=' binding operator (note the extra '=' at the end) allows us to use the result of its first action - it gets passed as an additional parameter to the second one! Look at the definition:<br />
<br />
<haskell><br />
(>>=) :: IO a -> (a -> IO b) -> IO b<br />
(action1 >>= action2) world0 =<br />
let (a, world1) = action1 world0<br />
(b, world2) = action2 a world1<br />
in (b, world2)<br />
</haskell><br />
<br />
First, what does the type of the second "action" (more precisely, a function which returns an IO action), namely "a -> IO b", mean? By<br />
substituting the "IO" definition, we get "a -> RealWorld -> (b, RealWorld)".<br />
This means that the second action actually has two parameters<br />
- the value of type 'a' actually used inside it, and a value of type RealWorld used for sequencing IO actions. That's always the case - any IO procedure has one<br />
more parameter compared to what you see in its type signature. This<br />
parameter is hidden inside the definition of the type alias "IO".<br />
<br />
Second, you can use these '>>' and '>>=' operations to simplify your<br />
program. For example, in the code above we don't need to introduce the<br />
variable, because the result of 'readLn' can be sent directly to 'print':<br />
<br />
<haskell><br />
main = readLn >>= print<br />
</haskell><br />
<br />
<br />
And third - as you see, the notation:<br />
<br />
<haskell><br />
do x <- action1<br />
action2<br />
</haskell><br />
<br />
where 'action1' has type "IO a" and 'action2' has type "IO b",<br />
translates into:<br />
<br />
<haskell><br />
action1 >>= (\x -> action2)<br />
</haskell><br />
<br />
where the second argument of '>>=' has the type "a -> IO b". It's the way<br />
the '<-' binding is processed - the name on the left-hand side of '<-' just becomes a parameter of subsequent operations represented as one large IO action. Note also that if 'action1' has type "IO a" then 'x' will just have type "a"; you can think of the effect of '<-' as "unpacking" the IO value of 'action1' into 'x'. Note also that '<-' is not a true operator; it's pure syntax, just like 'do' itself. Its meaning results only from the way it gets desugared.<br />
<br />
Look at the next example: <br />
<br />
<haskell><br />
main = do putStr "What is your name?"<br />
a <- readLn<br />
putStr "How old are you?"<br />
b <- readLn<br />
print (a,b)<br />
</haskell><br />
<br />
This code is desugared into:<br />
<br />
<haskell><br />
main = putStr "What is your name?"<br />
>> readLn<br />
>>= \a -> putStr "How old are you?"<br />
>> readLn<br />
>>= \b -> print (a,b)<br />
</haskell><br />
<br />
I omitted the parentheses here; both the '>>' and the '>>=' operators are<br />
left-associative, but a lambda-binding always stretches as far to the right as possible, which means that the 'a' and 'b' bindings introduced<br />
here are valid for all remaining actions. As an exercise, add the<br />
parentheses yourself and translate this procedure into the low-level<br />
code that explicitly passes "world" values. I think it should be enough to help you finally realize how the 'do' translation and binding operators work.<br />
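For reference, one way to write out the left-associative grouping fully parenthesized (remembering that each lambda body extends to the right) is:<br />
<br />
<haskell><br />
main = ((putStr "What is your name?") >> readLn)<br />
       >>= (\a -> ((putStr "How old are you?") >> readLn)<br />
                  >>= (\b -> print (a,b)))<br />
</haskell><br />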
<br />
<br />
Oh, no! I forgot the third monadic operator - 'return'. It just<br />
combines its two parameters - the value passed and "world":<br />
<br />
<haskell><br />
return :: a -> IO a<br />
return a world0 = (a, world0)<br />
</haskell><br />
<br />
How about translating a simple example of 'return' usage? Say,<br />
<br />
<haskell><br />
main = do a <- readLn<br />
return (a*2)<br />
</haskell><br />
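In the low-level style this desugars to something like the following (a sketch in the fictional RealWorld notation - note how 'return' just pairs the value with the current "world"):<br />
<br />
<haskell><br />
main world0 = let (a, world1) = readLn world0<br />
              in (a*2, world1)<br />
</haskell><br />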
<br />
<br />
Programmers with an imperative language background often think that<br />
'return' in Haskell, as in other languages, immediately returns from<br />
the IO procedure. As you can see in its definition (and even just from its<br />
type!), such an assumption is totally wrong. The only purpose of using<br />
'return' is to "lift" some value (of type 'a') into the result of<br />
a whole action (of type "IO a"), and therefore it should generally be used only as the last executed statement of some IO sequence. For example, try to<br />
translate the following procedure into the corresponding low-level code:<br />
<br />
<haskell><br />
main = do a <- readLn<br />
when (a>=0) $ do<br />
return ()<br />
print "a is negative"<br />
</haskell><br />
<br />
and you will realize that the 'print' statement is executed even for non-negative values of 'a'. If you need to escape from the middle of an IO procedure, you can use the 'if' statement:<br />
<br />
<haskell><br />
main = do a <- readLn<br />
if (a>=0)<br />
then return ()<br />
else print "a is negative"<br />
</haskell><br />
<br />
Moreover, Haskell layout rules allow us to use the following layout:<br />
<br />
<haskell><br />
main = do a <- readLn<br />
if (a>=0) then return ()<br />
else do<br />
print "a is negative"<br />
...<br />
</haskell><br />
<br />
that may be useful for escaping from the middle of a longish 'do' statement.<br />
<br />
<br />
Last exercise: implement a function 'liftM' that lifts operations on<br />
plain values to the operations on monadic ones. Its type signature:<br />
<br />
<haskell><br />
liftM :: (a -> b) -> (IO a -> IO b)<br />
</haskell><br />
<br />
If that's too hard for you, start with the following high-level<br />
definition and rewrite it in low-level fashion:<br />
<br />
<haskell><br />
liftM f action = do x <- action<br />
return (f x)<br />
</haskell><br />
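You can check your low-level answer against the library version ('liftM' from Control.Monad) with a small test that needs no user input - here an IORef stands in for some outside source of values:<br />
<br />
<haskell><br />
import Control.Monad (liftM)<br />
import Data.IORef<br />
<br />
main = do ref <- newIORef (20 :: Int)<br />
          doubled <- liftM (*2) (readIORef ref)<br />
          print doubled   -- prints 40<br />
</haskell><br />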
<br />
<br />
<br />
== Mutable data (references, arrays, hash tables...) ==<br />
<br />
As you should know, every name in Haskell is bound to one fixed (immutable) value. This greatly simplifies understanding algorithms and code optimization, but it's inappropriate in some cases. As we all know, there are plenty of algorithms that are simpler to implement in terms of updatable<br />
variables, arrays and so on. This means that the value associated with<br />
a variable, for example, can be different at different execution points,<br />
so reading its value can't be considered as a pure function. Imagine,<br />
for example, the following code:<br />
<br />
<haskell><br />
main = do let a0 = readVariable varA<br />
_ = writeVariable varA 1<br />
a1 = readVariable varA<br />
print (a0, a1)<br />
</haskell><br />
<br />
Does this look strange? First, the two calls to 'readVariable' look the same, so the compiler can just reuse the value returned by the first call. Second,<br />
the result of the 'writeVariable' call isn't used so the compiler can (and will!) omit this call completely. To complete the picture, these three calls may be rearranged in any order because they appear to be independent of each<br />
other. This is obviously not what was intended. What's the solution? You already know this - use IO actions! Using IO actions guarantees that:<br />
<br />
# the execution order will be retained as written<br />
# each action will have to be executed<br />
# the result of the "same" action (such as "readVariable varA") will not be reused<br />
<br />
So, the code above really should be written as:<br />
<br />
<haskell><br />
import Data.IORef<br />
main = do varA <- newIORef 0 -- Create and initialize a new variable<br />
a0 <- readIORef varA<br />
writeIORef varA 1<br />
a1 <- readIORef varA<br />
print (a0, a1)<br />
</haskell><br />
<br />
Here, 'varA' has the type "IORef Int" which means "a variable (reference) in<br />
the IO monad holding a value of type Int". newIORef creates a new variable<br />
(reference) and returns it, and then read/write actions use this<br />
reference. The value returned by the "readIORef varA" action depends not<br />
only on the variable involved but also on the moment this operation is performed so it can return different values on each call.<br />
<br />
Arrays, hash tables and any other ''mutable'' data structures are<br />
defined in the same way - for each of them, there's an operation that creates a new "mutable value" and returns a reference to it. Then special read and write<br />
operations in the IO monad are used. The following code shows an example<br />
using mutable arrays:<br />
<br />
<haskell><br />
import Data.Array.IO<br />
main = do arr <- newArray (1,10) 37 :: IO (IOArray Int Int)<br />
a <- readArray arr 1<br />
writeArray arr 1 64<br />
b <- readArray arr 1<br />
print (a, b)<br />
</haskell><br />
<br />
Here, an array of 10 elements with 37 as the initial value at each location is created. After reading the value of the first element (index 1) into 'a' this element's value is changed to 64 and then read again into 'b'. As you can see by executing this code, 'a' will be set to 37 and 'b' to 64.<br />
<br />
<br />
<br />
Other state-dependent operations are also often implemented as IO<br />
actions. For example, a random number generator should return a different<br />
value on each call. It looks natural to give it a type involving IO:<br />
<br />
<haskell><br />
rand :: IO Int<br />
</haskell><br />
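A runnable toy version of such a generator keeps its hidden state in an IORef; the linear congruential step used here is an arbitrary choice, just for the example:<br />
<br />
<haskell><br />
import Data.IORef<br />
<br />
main = do state <- newIORef (12345 :: Int)<br />
          let rand :: IO Int<br />
              rand = do s <- readIORef state<br />
                        let s' = (1103515245 * s + 12345) `mod` 2147483648<br />
                        writeIORef state s'<br />
                        return s'<br />
          a <- rand<br />
          b <- rand<br />
          print (a /= b)   -- prints True: each call returned a new value<br />
</haskell><br />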
<br />
Moreover, when you import C routines you should be careful - if this<br />
routine is impure, i.e. its result depends on something in the "real<br />
world" (file system, memory contents...), internal state and so on,<br />
you should give it an IO type. Otherwise, the compiler can<br />
"optimize" repetitive calls of this procedure with the same parameters! :)<br />
<br />
For example, we can write a non-IO type for:<br />
<br />
<haskell><br />
foreign import ccall<br />
sin :: Double -> Double<br />
</haskell><br />
<br />
because the result of 'sin' depends only on its argument, but<br />
<br />
<haskell><br />
foreign import ccall<br />
tell :: Int -> IO Int<br />
</haskell><br />
<br />
If you declare 'tell' as a pure function (without IO), then you may<br />
get the same position on each call! :)<br />
<br />
== IO actions as values ==<br />
<br />
By this point you should understand why it's impossible to use IO<br />
actions inside non-IO (pure) procedures. Such procedures just don't<br />
get a "baton"; they don't know any "world" value to pass to an IO action.<br />
The RealWorld type is an abstract datatype, so pure functions also can't construct RealWorld values by themselves, and it's a strict type, so 'undefined' also can't be used. So, the prohibition of using IO actions inside pure procedures is just a type system trick (as it usually is in Haskell :)).<br />
<br />
But while pure code can't ''execute'' IO actions, it can work with them<br />
as with any other functional values - they can be stored in data<br />
structures, passed as parameters, returned as results, collected in<br />
lists, and partially applied. But an IO action will remain a<br />
functional value because we can't apply it to the last argument - of<br />
type RealWorld.<br />
<br />
In order to ''execute'' the IO action we need to apply it to some<br />
RealWorld value. That can be done only inside some IO procedure,<br />
in its "actions chain". And real execution of this action will take<br />
place only when this procedure is called as part of the process of<br />
"calculating the final value of world" for 'main'. Look at this example:<br />
<br />
<haskell><br />
main world0 = let get2chars = getChar >> getChar<br />
((), world1) = putStr "Press two keys" world0<br />
(answer, world2) = get2chars world1<br />
in ((), world2)<br />
</haskell><br />
<br />
Here we first bind a value to 'get2chars' and then write a binding<br />
involving 'putStr'. But what's the execution order? It's not defined<br />
by the order of the 'let' bindings, it's defined by the order of processing<br />
"world" values! You can arbitrarily reorder the binding statements - the execution order will be defined by the data dependency with respect to the <br />
"world" values that get passed around. Let's see what this 'main' looks like in the 'do' notation:<br />
<br />
<haskell><br />
main = do let get2chars = getChar >> getChar<br />
putStr "Press two keys"<br />
get2chars<br />
return ()<br />
</haskell><br />
<br />
As you can see, we've eliminated two of the 'let' bindings and left only the one defining 'get2chars'. The non-'let' statements are executed in the exact order in which they're written, because they pass the "world" value from statement to statement as we described above. Thus, this version of the function is much easier to understand because we don't have to mentally figure out the data dependency of the "world" value.<br />
<br />
Moreover, IO actions like 'get2chars' can't be executed directly<br />
because they are functions with a RealWorld parameter. To execute them,<br />
we need to supply the RealWorld parameter, i.e. insert them in the 'main'<br />
chain, placing them in some 'do' sequence executed from 'main' (either directly in the 'main' function, or indirectly in an IO function called from 'main'). Until that's done, they will remain like any function, in partially<br />
evaluated form. And we can work with IO actions as with any other<br />
functions - bind them to names (as we did above), save them in data<br />
structures, pass them as function parameters and return them as results - and<br />
they won't be performed until you give them the magic RealWorld<br />
parameter!<br />
<br />
<br />
<br />
=== Example: a list of IO actions ===<br />
<br />
Let's try defining a list of IO actions:<br />
<br />
<haskell><br />
ioActions :: [IO ()]<br />
ioActions = [(print "Hello!"),<br />
(putStr "just kidding"),<br />
(getChar >> return ())<br />
]<br />
</haskell><br />
<br />
I used additional parentheses around each action, although they aren't really required. If you still can't believe that these actions won't be executed immediately, just recall the real type of this list:<br />
<br />
<haskell><br />
ioActions :: [RealWorld -> ((), RealWorld)]<br />
</haskell><br />
<br />
Well, now we want to execute some of these actions. No problem, just<br />
insert them into the 'main' chain:<br />
<br />
<haskell><br />
main = do head ioActions<br />
ioActions !! 1<br />
last ioActions<br />
</haskell><br />
<br />
Looks strange, right? :) Really, any IO action that you write in a 'do'<br />
statement (or use as a parameter for the '>>'/'>>=' operators) is an expression<br />
returning a result of type 'IO a' for some type 'a'. Typically, you use some function that has the type 'x -> y -> ... -> IO a' and provide all the x, y, etc. parameters. But you're not limited to this standard scenario -<br />
don't forget that Haskell is a functional language and you're free to<br />
compute the functional value required (recall that "IO a" is really a function<br />
type) in any possible way. Here we just extracted several functions<br />
from the list - no problem. This functional value can also be<br />
constructed on-the-fly, as we've done in the previous example - that's also<br />
OK. Want to see this functional value passed as a parameter?<br />
Just look at the definition of 'when'. Hey, we can buy, sell, and rent<br />
these IO actions just like we can with any other functional values! For example, let's define a function that executes all the IO actions in the list:<br />
<br />
<haskell><br />
sequence_ :: [IO a] -> IO ()<br />
sequence_ [] = return ()<br />
sequence_ (x:xs) = do x<br />
sequence_ xs<br />
</haskell><br />
<br />
No black magic - we just extract IO actions from the list and insert<br />
them into a chain of IO operations that should be performed one after another (in the same order that they occurred in the list) to "compute the final world value" of the entire 'sequence_' call.<br />
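Here is a quick runnable check of this definition (the standard Prelude exports its own 'sequence_', so we hide it to use ours):<br />
<br />
<haskell><br />
import Prelude hiding (sequence_)<br />
<br />
sequence_ :: [IO a] -> IO ()<br />
sequence_ [] = return ()<br />
sequence_ (x:xs) = do x<br />
                      sequence_ xs<br />
<br />
main = sequence_ [print 1, print 2, print 3]   -- prints 1, 2, 3 on separate lines<br />
</haskell><br />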
<br />
With the help of 'sequence_', we can rewrite our last 'main' function as:<br />
<br />
<haskell><br />
main = sequence_ ioActions<br />
</haskell><br />
<br />
<br />
Haskell's ability to work with IO actions as with any other<br />
(functional and non-functional) values allows us to define control<br />
structures of arbitrary complexity. Try, for example, to define a control<br />
structure that repeats an action until it returns the 'False' result:<br />
<br />
<haskell><br />
while :: IO Bool -> IO ()<br />
while action = ???<br />
</haskell><br />
<br />
Most programming languages don't allow you to define control structures at all, and those that do often require you to use a macro-expansion system. In Haskell, control structures are just trivial functions anyone can write.<br />
<br />
<br />
=== Example: returning an IO action as a result ===<br />
<br />
How about returning an IO action as the result of a function? Well, we've done<br />
this each time we've defined an IO procedure - they all return IO actions<br />
that need a RealWorld value to be performed. While we usually just<br />
execute them as part of a higher-level IO procedure, it's also<br />
possible to just collect them without actual execution:<br />
<br />
<haskell><br />
main = do let a = sequence ioActions<br />
b = when True getChar<br />
c = getChar >> getChar<br />
putStr "These 'let' statements are not executed!"<br />
</haskell><br />
<br />
These assigned IO procedures can be used as parameters to other<br />
procedures, or written to global variables, or processed in some other<br />
way, or just executed later, as we did in the example with 'get2chars'.<br />
<br />
But how about returning a parameterized IO action from an IO procedure? Let's define a procedure that returns the i'th byte from a file represented as a Handle:<br />
<br />
<haskell><br />
readi h i = do hSeek h i AbsoluteSeek<br />
hGetChar h<br />
</haskell><br />
<br />
So far so good. But how about a procedure that returns the i'th byte of a file<br />
with a given name without reopening it each time?<br />
<br />
<haskell><br />
readfilei :: String -> IO (Integer -> IO Char)<br />
readfilei name = do h <- openFile name ReadMode<br />
return (readi h)<br />
</haskell><br />
<br />
As you can see, it's an IO procedure that opens a file and returns...<br />
another IO procedure that will read the specified byte. But we can go<br />
further and include the 'readi' body in 'readfilei':<br />
<br />
<haskell><br />
readfilei name = do h <- openFile name ReadMode<br />
let readi h i = do hSeek h i AbsoluteSeek<br />
hGetChar h<br />
return (readi h)<br />
</haskell><br />
<br />
That's a little better. But why do we add 'h' as a parameter to 'readi' if it can be obtained from the environment where 'readi' is now defined? An even shorter version is this:<br />
<br />
<haskell><br />
readfilei name = do h <- openFile name ReadMode<br />
let readi i = do hSeek h i AbsoluteSeek<br />
hGetChar h<br />
return readi<br />
</haskell><br />
<br />
What have we done here? We've built a parameterized IO action involving local<br />
names inside 'readfilei' and returned it as the result. Now it can be<br />
used in the following way:<br />
<br />
<haskell><br />
main = do myfile <- readfilei "test"<br />
          a <- myfile 0<br />
          b <- myfile 1<br />
          print (a,b)<br />
</haskell><br />
<br />
<br />
This way of using IO actions is very typical of Haskell programs - you<br />
just construct one or more IO actions that you need,<br />
with or without parameters, possibly involving the parameters that your<br />
"constructor" received, and return them to the caller. Then these IO actions<br />
can be used in the rest of the program without any knowledge about your<br />
internal implementation strategy. One thing this can be used for is to<br />
partially emulate the OOP (or more precisely, the ADT) programming paradigm.<br />
<br />
<br />
=== Example: a memory allocator generator ===<br />
<br />
As an example, one of my programs has a module which is a memory suballocator. It receives the address and size of a large memory block and returns two<br />
procedures - one to allocate a subblock of a given size and the other to<br />
free the allocated subblock:<br />
<br />
<haskell><br />
memoryAllocator :: Ptr a -> Int -> IO (Int -> IO (Ptr b),<br />
                                       Ptr c -> IO ())<br />
<br />
memoryAllocator buf size = do ......<br />
                              let alloc size = do ...<br />
                                                  ...<br />
                                  free ptr = do ...<br />
                                                ...<br />
                              return (alloc, free)<br />
</haskell><br />
<br />
How is this implemented? 'alloc' and 'free' work with references<br />
created inside the memoryAllocator procedure. Because the creation of these references is a part of the memoryAllocator IO actions chain, a new independent set of references will be created for each memory block for which<br />
memoryAllocator is called:<br />
<br />
<haskell><br />
memoryAllocator buf size = do start <- newIORef buf<br />
                              end <- newIORef (buf `plusPtr` size)<br />
...<br />
</haskell><br />
<br />
These two references are read and written in the 'alloc' and 'free' definitions (we'll implement a very simple memory allocator for this example):<br />
<br />
<haskell><br />
...<br />
let alloc size = do addr <- readIORef start<br />
                    writeIORef start (addr `plusPtr` size)<br />
                    return addr<br />
<br />
let free ptr = do writeIORef start ptr<br />
</haskell><br />
<br />
What we've defined here is just a pair of closures that use state<br />
available at the moment of their definition. As you can see, it's as<br />
easy as in any other functional language, despite Haskell's lack<br />
of direct support for impure functions.<br />
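Assembled from the fragments above, one compilable version of the suballocator might look like this - a minimal sketch only, with no bounds checking (the 'end' reference is omitted, so the size parameter is unused):<br />
<br />

```haskell
import Foreign.Ptr (Ptr, castPtr, plusPtr, minusPtr, nullPtr)
import Data.IORef (newIORef, readIORef, writeIORef)

-- A "bump" allocator: 'alloc' advances the start pointer, and 'free'
-- simply resets it to the freed address, so deallocation is only
-- correct for stack-like (LIFO) usage.  No bounds checking is done.
memoryAllocator :: Ptr a -> Int -> IO (Int -> IO (Ptr b), Ptr c -> IO ())
memoryAllocator buf _size = do
    start <- newIORef (castPtr buf :: Ptr ())
    let alloc n = do addr <- readIORef start
                     writeIORef start (addr `plusPtr` n)
                     return (castPtr addr)
        free ptr = writeIORef start (castPtr ptr)
    return (alloc, free)
```

With a buffer obtained from mallocBytes, two successive calls 'alloc 100' return pointers exactly 100 bytes apart.<br />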
<br />
The following example uses the procedures returned by memoryAllocator to<br />
simultaneously allocate/free blocks in two independent memory buffers:<br />
<br />
<haskell><br />
main = do buf1 <- mallocBytes (2^16)<br />
          buf2 <- mallocBytes (2^20)<br />
          (alloc1, free1) <- memoryAllocator buf1 (2^16)<br />
          (alloc2, free2) <- memoryAllocator buf2 (2^20)<br />
          ptr11 <- alloc1 100<br />
          ptr21 <- alloc2 1000<br />
          free1 ptr11<br />
          free2 ptr21<br />
          ptr12 <- alloc1 100<br />
          ptr22 <- alloc2 1000<br />
</haskell><br />
<br />
<br />
<br />
=== Example: emulating OOP with record types ===<br />
<br />
Let's implement the classical OOP example: drawing figures. There are<br />
figures of different types: circles, rectangles and so on. The task is<br />
to create a heterogeneous list of figures. All figures in this list should<br />
support the same set of operations: draw, move and so on. We will<br />
represent these operations as IO procedures. Instead of a "class" let's<br />
define a structure containing implementations of all the procedures<br />
required:<br />
<br />
<haskell><br />
data Figure = Figure { draw :: IO (),<br />
                       move :: Displacement -> IO ()<br />
                     }<br />
<br />
type Displacement = (Int, Int) -- horizontal and vertical displacement in points<br />
</haskell><br />
<br />
<br />
The constructor of each figure's type should just return a Figure record:<br />
<br />
<haskell><br />
circle :: Point -> Radius -> IO Figure<br />
rectangle :: Point -> Point -> IO Figure<br />
<br />
type Point = (Int, Int) -- point coordinates<br />
type Radius = Int -- circle radius in points<br />
</haskell><br />
<br />
<br />
We will "draw" figures by just printing their current parameters.<br />
Let's start with a simplified implementation of the 'circle' and 'rectangle'<br />
constructors, without actual 'move' support:<br />
<br />
<haskell><br />
circle center radius = do<br />
    let description = " Circle at "++show center++" with radius "++show radius<br />
    return $ Figure { draw = putStrLn description }<br />
<br />
rectangle from to = do<br />
    let description = " Rectangle "++show from++"-"++show to<br />
    return $ Figure { draw = putStrLn description }<br />
</haskell><br />
<br />
<br />
As you can see, each constructor just returns a fixed 'draw' procedure that prints<br />
the parameters with which that concrete figure was created. Let's test it:<br />
<br />
<haskell><br />
drawAll :: [Figure] -> IO ()<br />
drawAll figures = do putStrLn "Drawing figures:"<br />
                     mapM_ draw figures<br />
<br />
main = do figures <- sequence [circle (10,10) 5,<br />
                               circle (20,20) 3,<br />
                               rectangle (10,10) (20,20),<br />
                               rectangle (15,15) (40,40)]<br />
          drawAll figures<br />
</haskell><br />
<br />
<br />
Now let's define "full-featured" figures that can actually be<br />
moved around. In order to achieve this, we should provide each figure<br />
with a mutable variable that holds its current screen location. The<br />
type of this variable will be "IORef Point". This variable should be created in the figure constructor and manipulated in IO procedures (closures) enclosed in<br />
the Figure record:<br />
<br />
<haskell><br />
circle center radius = do<br />
    centerVar <- newIORef center<br />
<br />
    let drawF = do center <- readIORef centerVar<br />
                   putStrLn (" Circle at "++show center<br />
                             ++" with radius "++show radius)<br />
<br />
    let moveF (addX,addY) = do (x,y) <- readIORef centerVar<br />
                               writeIORef centerVar (x+addX, y+addY)<br />
<br />
    return $ Figure { draw=drawF, move=moveF }<br />
<br />
<br />
rectangle from to = do<br />
    fromVar <- newIORef from<br />
    toVar <- newIORef to<br />
<br />
    let drawF = do from <- readIORef fromVar<br />
                   to <- readIORef toVar<br />
                   putStrLn (" Rectangle "++show from++"-"++show to)<br />
<br />
    let moveF (addX,addY) = do (fromX,fromY) <- readIORef fromVar<br />
                               (toX,toY) <- readIORef toVar<br />
                               writeIORef fromVar (fromX+addX, fromY+addY)<br />
                               writeIORef toVar (toX+addX, toY+addY)<br />
<br />
    return $ Figure { draw=drawF, move=moveF }<br />
</haskell><br />
<br />
<br />
Now we can test the code which moves figures around:<br />
<br />
<haskell><br />
main = do figures <- sequence [circle (10,10) 5,<br />
                               rectangle (10,10) (20,20)]<br />
          drawAll figures<br />
          mapM_ (\fig -> move fig (10,10)) figures<br />
          drawAll figures<br />
</haskell><br />
<br />
<br />
It's important to realize that we are not limited to including only IO actions<br />
in a record that's intended to simulate a C++/Java-style interface. The record can also include values, IORefs, pure functions - in short, any type of data. For example, we can easily add to the Figure interface fields for area and origin:<br />
<br />
<haskell><br />
data Figure = Figure { draw   :: IO (),<br />
                       move   :: Displacement -> IO (),<br />
                       area   :: Double,<br />
                       origin :: IORef Point<br />
                     }<br />
</haskell><br />
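Here is a sketch (not from the original text) of how the 'circle' constructor could fill these extra fields: 'area' is an ordinary value computed once, and 'origin' simply exposes the same IORef that the closures already share:<br />
<br />

```haskell
import Data.IORef

type Point        = (Int, Int)
type Displacement = (Int, Int)
type Radius       = Int

data Figure = Figure { draw   :: IO ()
                     , move   :: Displacement -> IO ()
                     , area   :: Double
                     , origin :: IORef Point
                     }

circle :: Point -> Radius -> IO Figure
circle center radius = do
    centerVar <- newIORef center
    let drawF = do c <- readIORef centerVar
                   putStrLn ("  Circle at " ++ show c
                             ++ " with radius " ++ show radius)
        moveF (dx, dy) = modifyIORef centerVar (\(x, y) -> (x + dx, y + dy))
    return Figure { draw   = drawF
                  , move   = moveF
                  , area   = pi * fromIntegral radius ^ 2  -- computed once
                  , origin = centerVar                     -- shared with drawF/moveF
                  }
```

Moving such a figure is immediately visible through its 'origin' field, since both refer to the same IORef.<br />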
<br />
<br />
<br />
== Exception handling (under development) ==<br />
<br />
Although Haskell provides a set of exception-raising and handling features comparable to those of popular OOP languages (C++, Java, C#), this part of the language receives much less attention. The first reason is that you mostly don't need to pay attention to it - most of the time it just works "behind the scenes". The second reason is that Haskell, lacking OOP inheritance, doesn't let you easily subclass exception types, which limits the flexibility of exception handling.<br />
<br />
First, the Haskell RTS raises more kinds of exceptions than traditional languages do - pattern-match failures, calls with invalid arguments (such as '''head []'''), and computations whose results depend on the special values '''undefined''' and '''error "...."''' all raise their own exceptions:<br />
<br />
example 1:<br />
<haskell><br />
main = print (f 2)<br />
<br />
f 0 = "zero"<br />
f 1 = "one"<br />
</haskell><br />
<br />
example 2:<br />
<haskell><br />
main = print (head [])<br />
</haskell><br />
<br />
example 3:<br />
<haskell><br />
main = print (1 + (error "Value that wasn't initialized or cannot be computed"))<br />
</haskell><br />
<br />
Because all these errors raise exceptions instead of passing unnoticed, programs fail early and loudly, which makes mistakes much easier to detect and debug.<br />
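Such exceptions can also be caught. A minimal sketch using the modern Control.Exception API (not covered by the examples above): 'evaluate' forces the value inside IO, and 'try' turns a thrown exception into a Left result:<br />
<br />

```haskell
import Control.Exception (SomeException, try, evaluate)

-- Force 'head xs' inside IO and catch anything it throws.
safeHead :: [Int] -> IO (Either SomeException Int)
safeHead xs = try (evaluate (head xs))
```

Applied to an empty list this yields a Left value wrapping the "empty list" exception; applied to a non-empty list it yields Right with the first element.<br />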
<br />
== Interfacing with C/C++ and foreign libraries (under development) ==<br />
<br />
While Haskell is great for algorithm development, raw speed isn't its strongest point. We can combine the best of both worlds, though, by writing the speed-critical parts of a program in C and the rest in Haskell. We just need a way to call C functions from Haskell and vice versa, and to marshal data between the two worlds.<br />
<br />
We also need to interact with the C world to use Windows/Linux APIs and to link against various libraries and DLLs. Even interfacing with other languages usually requires going through C as a "common denominator". Appendix [6] to the Haskell'98 standard provides a complete description of interfacing with C.<br />
<br />
We will learn the FFI via a series of examples. These examples include C/C++ code, so a C/C++ compiler needs to be installed; the same is true if you want to include code written in C/C++ in your own program (no C/C++ compiler is required if you just need to link against existing libraries that provide APIs with the C calling convention). On Unix (and MacOS?) systems, GHC typically uses the system-wide default C/C++ compiler. On Windows there is no default compiler, so GHC is typically shipped with a C compiler, and on the download page you may find a GHC distribution bundled with both C and C++ compilers. Alternatively, you may find and install a gcc/mingw32 version compatible with your GHC installation.<br />
<br />
If you need your C/C++ code to be as fast as possible, you may compile it with the Intel compilers instead of gcc. However, these compilers are not free; moreover, on Windows, code compiled by the Intel compilers may only interact with GHC-compiled code if one of them is put into a DLL (due to RTS incompatibility) [not checked! please correct if I'm wrong].<br />
<br />
[http://www.haskell.org/haskellwiki/Applications_and_libraries/Interfacing_other_languages More links]:<br />
<br />
;[http://www.cse.unsw.edu.au/~chak/haskell/c2hs/ C-&gt;Haskell]<br />
:A lightweight tool for implementing access to C libraries from Haskell.<br />
<br />
;[[HSFFIG]]<br />
:Haskell FFI Binding Modules Generator (HSFFIG) is a tool that takes a C library include file (.h) and generates Haskell Foreign Functions Interface import declarations for items (functions, structures, etc.) the header defines.<br />
<br />
;[http://quux.org/devel/missingpy MissingPy]<br />
:MissingPy is really two libraries in one. At its lowest level, MissingPy is a library designed to make it easy to call into Python from Haskell. It provides full support for interpreting arbitrary Python code, interfacing with a good part of the Python/C API, and handling Python objects. It also provides tools for converting between Python objects and their Haskell equivalents. Memory management is handled for you, and Python exceptions get mapped to Haskell Dynamic exceptions. At a higher level, MissingPy contains Haskell interfaces to some Python modules.<br />
<br />
[[HsLua Haskell interface to Lua scripting language]]<br />
<br />
=== Calling functions ===<br />
<br />
First, we will learn how to call C functions from Haskell and Haskell functions from C. The first example consists of three files:<br />
<br />
main.hs:<br />
<haskell><br />
{-# LANGUAGE ForeignFunctionInterface #-}<br />
<br />
main = do print "Hello from main"<br />
          c_function<br />
<br />
haskell_function = print "Hello from haskell_function"<br />
<br />
foreign import ccall safe "prototypes.h"<br />
    c_function :: IO ()<br />
<br />
foreign export ccall<br />
    haskell_function :: IO ()<br />
</haskell><br />
<br />
evil.c:<br />
<haskell><br />
#include <stdio.h><br />
#include "prototypes.h"<br />
<br />
void c_function (void)<br />
{<br />
    printf("Hello from c_function\n");<br />
    haskell_function();<br />
}<br />
</haskell><br />
<br />
prototypes.h:<br />
<haskell><br />
extern void c_function (void);<br />
extern void haskell_function (void);<br />
</haskell><br />
<br />
It may be compiled and linked in one step by ghc:<br />
ghc --make main.hs evil.c<br />
<br />
Or, you may compile C module(s) separately and link in .o files (this may be preferable if you use make and don't want to recompile unchanged sources; ghc's --make option provides smart recompilation only for .hs files):<br />
ghc -c evil.c<br />
ghc --make main.hs evil.o<br />
<br />
You may use gcc/g++ directly to compile your C/C++ files, but I recommend doing the linking via ghc, because it adds the many libraries required for executing Haskell code. For the same reason, even if your main routine is written in C/C++, I recommend calling it from the Haskell function main - otherwise you'll have to explicitly initialize and shut down the GHC RTS (run-time system).<br />
<br />
We use the "foreign import" specification to import foreign routines into our Haskell world, and "foreign export" to export Haskell routines into the external world. Note that the import statement creates a new Haskell symbol (from the external one), while the export statement uses a Haskell symbol previously defined. Technically speaking, both types of statements create a wrapper that converts the names and calling conventions from C to Haskell or vice versa.<br />
<br />
=== All about the "foreign" statement ===<br />
<br />
The "ccall" specifier in foreign statements means the use of the C (not C++ !) calling convention. This means that if you want to write the external function in C++ (instead of C) you should add an '''extern "C"''' specification to its declaration - otherwise you'll get a linking error. Let's rewrite our first example to use C++ instead of C:<br />
<br />
prototypes.h:<br />
<haskell><br />
#ifdef __cplusplus<br />
extern "C" {<br />
#endif<br />
<br />
extern void c_function (void);<br />
extern void haskell_function (void);<br />
<br />
#ifdef __cplusplus<br />
}<br />
#endif<br />
</haskell><br />
<br />
Compile it via:<br />
<br />
ghc --make main.hs evil.cpp<br />
<br />
where evil.cpp is just a renamed copy of evil.c from the first example. Note that the new prototypes.h is written so that it can be compiled both as C and as C++ code. When it's included from evil.cpp, it's compiled as C++ code. When GHC compiles main.hs via the C compiler (enabled by the -fvia-C option), it also includes prototypes.h, but compiles it in C mode. This is why you need to specify .h files in "foreign" declarations - depending on which Haskell compiler you use, these files may be included to check the consistency of the C and Haskell declarations.<br />
<br />
The quoted part of the foreign statement may also be used to import or export a function under another name--for example,<br />
<br />
<haskell><br />
foreign import ccall safe "prototypes.h CFunction"<br />
    c_function :: IO ()<br />
<br />
foreign export ccall "HaskellFunction"<br />
    haskell_function :: IO ()<br />
</haskell><br />
<br />
specifies that the C function called CFunction will become known as the Haskell function c_function, while the Haskell function haskell_function will be known in the C world as HaskellFunction. It's required when the C name doesn't conform to Haskell naming requirements.<br />
<br />
Although the Haskell FFI standard describes many other calling conventions in addition to ccall (e.g. cplusplus, jvm, net), current Haskell implementations support only ccall and stdcall. The latter, also called the "Pascal" calling convention, is used to interface with WinAPI:<br />
<br />
<haskell><br />
foreign import stdcall unsafe "windows.h SetFileApisToOEM"<br />
    setFileApisToOEM :: IO ()<br />
</haskell><br />
<br />
And finally, about the safe/unsafe specifier: a C function imported with the "unsafe" keyword is called directly, and the Haskell runtime is stopped while the C function executes (when there are several OS threads executing the Haskell program, only the current OS thread is delayed). Such a call cannot re-enter the Haskell world by calling back into any Haskell function - the Haskell RTS is just not prepared for such an event. However, unsafe calls are as quick as calls in the C world, which makes them ideal for "momentary" calls that quickly return to the caller.<br />
<br />
When "safe" is specified, the C function is called in a safe environment - the Haskell execution context is saved, so it's possible to call back into Haskell and, if the C call takes a long time, another OS thread may be started to execute Haskell code (of course, in threads other than the one that called the C code). This has its own price, though - around 1000 CPU ticks per call.<br />
<br />
You can read more about interaction between FFI calls and Haskell concurrency in [7].<br />
<br />
=== Marshalling simple types ===<br />
<br />
Calling by itself is relatively easy; the real problem of interfacing languages with different data models is passing data between them. In this case, there is no guarantee that Haskell's Int is represented in memory the same way as C's int, nor Haskell's Double the same as C's double and so on. While on *some* platforms they are the same and you can write throw-away programs relying on these, the goal of portability requires you to declare imported and exported functions using special types described in the FFI standard, which are guaranteed to correspond to C types. These are:<br />
<br />
<haskell><br />
import Foreign.C.Types ( -- equivalent to the following C types:<br />
    CChar, CUChar,               -- char/unsigned char<br />
    CShort, CUShort,             -- short/unsigned short<br />
    CInt, CUInt, CLong, CULong,  -- int/unsigned/long/unsigned long<br />
    CFloat, CDouble...)          -- float/double<br />
</haskell><br />
<br />
Now we can import and export typeful C/Haskell functions:<br />
<haskell><br />
foreign import ccall unsafe "math.h"<br />
    c_sin :: CDouble -> CDouble<br />
</haskell><br />
<br />
Note that pure C functions (those whose results depend only on their arguments) are imported without IO in their return type. The "const" specifier in C is not reflected in Haskell types, so appropriate compiler checks are not performed. <!-- What would these be? --><br />
<br />
All these numeric types are instances of the same classes as their Haskell cousins (Ord, Num, Show and so on), so you may perform calculations on these data directly. Alternatively, you may convert them to native Haskell types. It's very typical to write simple wrappers around imported and exported functions just to provide interfaces having native Haskell types:<br />
<br />
<haskell><br />
-- |Type-conversion wrapper around c_sin<br />
sin :: Double -> Double<br />
sin = realToFrac . c_sin . realToFrac<br />
</haskell><br />
<br />
=== Memory management ===<br />
<br />
=== Marshalling strings ===<br />
<br />
<haskell><br />
import Foreign.C.String ( -- representation of strings in C<br />
    CString,     -- = Ptr CChar<br />
    CStringLen)  -- = (Ptr CChar, Int)<br />
</haskell><br />
<br />
<haskell><br />
foreign import ccall unsafe "string.h"<br />
    c_strlen :: CString -> IO CSize  -- CSize is defined in Foreign.C.Types and corresponds to size_t<br />
</haskell><br />
<br />
<haskell><br />
-- |Type-conversion wrapper around c_strlen<br />
strlen :: String -> Int<br />
strlen = ....<br />
</haskell><br />
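The elided body can be completed, for example, like this - a sketch using withCString to marshal the argument. unsafePerformIO is defensible here only because c_strlen merely reads the temporary buffer:<br />
<br />

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.String (CString, withCString)
import Foreign.C.Types (CSize)
import System.IO.Unsafe (unsafePerformIO)

foreign import ccall unsafe "string.h strlen"
    c_strlen :: CString -> IO CSize

-- Marshal the Haskell String to a temporary NUL-terminated C string,
-- call strlen on it, and convert the result back to a native Int.
strlen :: String -> Int
strlen s = fromIntegral (unsafePerformIO (withCString s c_strlen))
```

Note that withCString allocates the C string only for the duration of the call, so no manual memory management is needed.<br />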
<br />
=== Marshalling composite types ===<br />
<br />
C arrays may be manipulated in Haskell as a [http://haskell.org/haskellwiki/Arrays#StorableArray_.28module_Data.Array.Storable.29 StorableArray].<br />
<br />
There is no built-in support for marshalling C structures or using C constants in Haskell. These are implemented in the c2hs preprocessor, though.<br />
<br />
Binary marshalling (serializing) of data structures of any complexity is implemented in the Binary library.<br />
<br />
=== Dynamic calls ===<br />
<br />
=== DLLs ===<br />
''Because I don't have experience using DLLs, could someone else write this section? Ultimately, we need to consider the following tasks:''<br />
* using DLLs of 3rd-party libraries (such as ziplib)<br />
* putting your own C code into DLL to use in Haskell<br />
* putting Haskell code into DLL which may be called from C code<br />
<br />
== Dark side of IO monad ==<br />
=== unsafePerformIO ===<br />
<br />
Programmers coming from an imperative language background often look for a way to execute IO actions inside a pure procedure. But what does this mean?<br />
Imagine that you're trying to write a procedure that reads the contents of a file with a given name, and you try to write it as a pure (non-IO) function:<br />
<br />
<haskell><br />
readContents :: FilePath -> String<br />
</haskell><br />
<br />
Defining readContents as a pure function will certainly simplify the code that uses it. But it will also create problems for the compiler:<br />
<br />
# This call is not inserted in a sequence of "world transformations", so the compiler doesn't know at what exact moment you want to execute this action. For example, if the file has one kind of contents at the beginning of the program and another at the end - which contents do you want to see? You have no idea when (or even if) this function is going to get invoked, because Haskell sees this function as pure and feels free to reorder the execution of any or all pure functions as needed.<br />
# Attempts to read the contents of files with the same name can be factored (''i.e.'' reduced to a single call) despite the fact that the file (or the current directory) can be changed between calls. Again, Haskell considers all non-IO functions to be pure and feels free to omit multiple calls with the same parameters.<br />
<br />
So, implementing pure functions that interact with the Real World is<br />
considered to be Bad Behavior. Good boys and girls never do it ;)<br />
<br />
<br />
Nevertheless, there are (semi-official) ways to use IO actions inside<br />
of pure functions. As you should remember this is prohibited by<br />
requiring the RealWorld "baton" in order to call an IO action. Pure functions don't have the baton, but there is a special "magic" procedure that produces this baton from nowhere, uses it to call an IO action and then throws the resulting "world" away! It's a little low-level magic :) This very special (and dangerous) procedure is:<br />
<br />
<haskell><br />
unsafePerformIO :: IO a -> a<br />
</haskell><br />
<br />
Let's look at its (possible) definition:<br />
<br />
<haskell><br />
unsafePerformIO :: (RealWorld -> (a, RealWorld)) -> a<br />
unsafePerformIO action = let (a, world1) = action createNewWorld<br />
                         in a<br />
</haskell><br />
<br />
where 'createNewWorld' is an internal function producing a new value of<br />
the RealWorld type.<br />
<br />
Using unsafePerformIO, you can easily write pure functions that do<br />
I/O inside. But don't do this without a real need, and remember to<br />
follow this rule: the compiler doesn't know that you are cheating; it still<br />
considers each non-IO function to be a pure one. Therefore, all the usual<br />
optimization rules can (and will!) be applied to its execution. So<br />
you must ensure that:<br />
<br />
# The result of each call depends only on its arguments.<br />
# You don't rely on side-effects of this function, which may be not executed if its results are not needed.<br />
<br />
<br />
Let's investigate this problem more deeply. Function evaluation in Haskell<br />
is determined by a value's necessity - the language computes only the values that are really required to calculate the final result. But what does this mean with respect to the 'main' function? To "calculate the final world's" value, you need to perform all the intermediate IO actions that are included in the 'main' chain. By using 'unsafePerformIO' we call IO actions outside of this chain. What guarantee do we have that they will be run at all? None. The only time they will be run is if running them is required to compute the overall function result (which in turn should be required to perform some action in the<br />
'main' chain). This is an example of Haskell's evaluation-by-need strategy. Now you should clearly see the difference:<br />
<br />
- An IO action inside an IO procedure is guaranteed to execute as long as<br />
it is (directly or indirectly) inside the 'main' chain - even when its result isn't used (because the implicit "world" value it returns ''will'' be used). You directly specify the order of the action's execution inside the IO procedure. Data dependencies are simulated via the implicit "world" values that are passed from each IO action to the next.<br />
<br />
- An IO action inside 'unsafePerformIO' will be performed only if<br />
result of this operation is really used. The evaluation order is not<br />
guaranteed and you should not rely on it (except when you're sure about<br />
whatever data dependencies may exist).<br />
<br />
<br />
I should also say that inside 'unsafePerformIO' call you can organize<br />
a small internal chain of IO actions with the help of the same binding<br />
operators and/or 'do' syntactic sugar we've seen above. For example, here's a particularly convoluted way to compute the integer that comes after zero:<br />
<br />
<haskell><br />
one :: Int<br />
one = unsafePerformIO $ do var <- newIORef 0<br />
                           modifyIORef var (+1)<br />
                           readIORef var<br />
</haskell><br />
<br />
and in this case ALL the operations in this chain will be performed as<br />
long as the result of the 'unsafePerformIO' call is needed. To ensure this,<br />
the actual 'unsafePerformIO' implementation evaluates the "world" returned<br />
by the 'action':<br />
<br />
<haskell><br />
unsafePerformIO action = let (a,world1) = action createNewWorld<br />
                         in (world1 `seq` a)<br />
</haskell><br />
<br />
(The 'seq' operation strictly evaluates its first argument before<br />
returning the value of the second one).<br />
<br />
<br />
=== inlinePerformIO ===<br />
<br />
inlinePerformIO has the same definition as unsafePerformIO, but with the addition of an INLINE pragma:<br />
<haskell><br />
-- | Just like unsafePerformIO, but we inline it. Big performance gains as<br />
-- it exposes lots of things to further inlining<br />
{-# INLINE inlinePerformIO #-}<br />
inlinePerformIO action = let (a, world1) = action createNewWorld<br />
                         in (world1 `seq` a)<br />
</haskell><br />
<br />
Semantically, inlinePerformIO = unsafePerformIO,<br />
in as much as either of them has any semantics at all.<br />
<br />
The difference of course is that inlinePerformIO is even less safe than<br />
unsafePerformIO. While ghc will try not to duplicate or common up<br />
different uses of unsafePerformIO, we aggressively inline<br />
inlinePerformIO. So you can really only use it where the IO content is<br />
really properly pure, like reading from an immutable memory buffer (as<br />
in the case of ByteStrings). However things like allocating new buffers<br />
should not be done inside inlinePerformIO since that can easily be<br />
floated out and performed just once for the whole program, so you end up<br />
with many things sharing the same buffer, which would be bad.<br />
<br />
So the rule of thumb is that IO things wrapped in unsafePerformIO have<br />
to be externally pure while with inlinePerformIO it has to be really<br />
really pure or it'll all go horribly wrong.<br />
<br />
That said, here's some really hairy code. This should frighten any pure<br />
functional programmer...<br />
<br />
<haskell><br />
write :: Int -> (Ptr Word8 -> IO ()) -> Put ()<br />
write !n body = Put $ \c buf@(Buffer fp o u l) -><br />
    if n <= l<br />
      then write' c fp o u l<br />
      else write' (flushOld c n fp o u) (newBuffer c n) 0 0 0<br />
<br />
  where {-# NOINLINE write' #-}<br />
        write' c !fp !o !u !l =<br />
          -- warning: this is a tad hardcore<br />
          inlinePerformIO<br />
            (withForeignPtr fp<br />
              (\p -> body $! (p `plusPtr` (o+u))))<br />
          `seq` c () (Buffer fp o (u+n) (l-n))<br />
<br />
it's used like:<br />
<haskell><br />
word8 w = write 1 (\p -> poke p w)<br />
</haskell><br />
<br />
This does not adhere to my rule of thumb above. Don't ask exactly why we<br />
claim it's safe :-) (and if anyone really wants to know, ask Ross<br />
Paterson who did it first in the Builder monoid)<br />
<br />
=== unsafeInterleaveIO ===<br />
<br />
But there is an even stranger operation called 'unsafeInterleaveIO' that<br />
gets the "official baton", makes its own pirate copy, and then runs<br />
an "illegal" relay-race in parallel with the main one! I can't talk further<br />
about its behavior without causing grief and indignation, so it's no surprise<br />
that this operation is widely used in countries that are hotbeds of software piracy such as Russia and China! ;) Don't even ask me - I won't say anything more about this dirty trick I use all the time ;)<br />
<br />
One can use unsafePerformIO (not unsafeInterleaveIO) to perform I/O<br />
operations not in a predefined order, but on demand. For example, the<br />
following code:<br />
<br />
<haskell><br />
do let c = unsafePerformIO getChar<br />
   do_proc c<br />
</haskell><br />
<br />
will perform the getChar I/O call only when the value of c is actually<br />
required by the code; i.e., the call will be performed lazily, like any<br />
ordinary Haskell computation.<br />
<br />
Now imagine the following code:<br />
<br />
<haskell><br />
do let s = [unsafePerformIO getChar, unsafePerformIO getChar, unsafePerformIO getChar]<br />
   do_proc s<br />
</haskell><br />
<br />
The three chars in this list will also be computed on demand, which<br />
means that their values will depend on the order in which they are<br />
consumed. That is not usually what we need :)<br />
<br />
<br />
unsafeInterleaveIO solves this problem - it performs I/O only on<br />
demand, but lets you define the exact *internal* execution order for parts<br />
of your data structure. This is why I wrote that unsafeInterleaveIO makes<br />
an illegal copy of the baton :)<br />
<br />
First, unsafeInterleaveIO takes an (IO a) action as a parameter; binding<br />
its result gives us a value of type 'a':<br />
<br />
<haskell><br />
do str <- unsafeInterleaveIO myGetContents<br />
</haskell><br />
<br />
Second, unsafeInterleaveIO doesn't perform its action immediately; it<br />
only creates a box of type 'a' which, when its value is requested, will<br />
perform the action passed as the parameter.<br />
<br />
Third, this action may itself compute the whole value immediately<br />
or... use unsafeInterleaveIO again to defer the calculation of some of<br />
its sub-components:<br />
<br />
<haskell><br />
myGetContents = do<br />
    c <- getChar<br />
    s <- unsafeInterleaveIO myGetContents<br />
    return (c:s)<br />
</haskell><br />
<br />
This code will be executed only at the moment when the value of str is<br />
actually demanded. At that moment, getChar will be performed (with the<br />
result assigned to c) and one more lazy IO box will be created - for s.<br />
This box again contains a link to the myGetContents call.<br />
<br />
A list cell is then returned that contains the one char just read, plus a<br />
link to the myGetContents call as a way to compute the rest of the list.<br />
Only at the moment when the next value in the list is required will this<br />
operation be performed again.<br />
<br />
As a final result, the second char in the list cannot be read before the<br />
first one, but the reading as a whole is lazy. Bingo!<br />
<br />
<br />
PS: Of course, real code should include EOF checking. Also note that<br />
you can read many chars/records at each call:<br />
<br />
<haskell><br />
myGetContents = do<br />
    c <- replicateM 512 getChar<br />
    s <- unsafeInterleaveIO myGetContents<br />
    return (c++s)<br />
</haskell><br />
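<br />
To see the on-demand behaviour directly, here is a small self-contained sketch (lazyRepeat and demo are names made up for this illustration): it builds an infinite lazy list and counts how many times the underlying action has actually run after only the first three elements are forced.<br />
<br />
<haskell><br />
import System.IO.Unsafe (unsafeInterleaveIO)<br />
import Data.IORef<br />
<br />
-- Build an infinite list whose elements are produced on demand, in order.<br />
lazyRepeat :: IO a -> IO [a]<br />
lazyRepeat act = unsafeInterleaveIO $ do<br />
  x  <- act<br />
  xs <- lazyRepeat act<br />
  return (x:xs)<br />
<br />
-- Force only the first three elements and report how many actions ran.<br />
demo :: IO ([Int], Int)<br />
demo = do<br />
  counter <- newIORef (0 :: Int)<br />
  xs <- lazyRepeat (modifyIORef counter (+1) >> readIORef counter)<br />
  let firstThree = take 3 xs<br />
  sum firstThree `seq` return ()   -- forcing them runs exactly three actions<br />
  performed <- readIORef counter<br />
  return (firstThree, performed)<br />
<br />
main :: IO ()<br />
main = demo >>= print            -- prints ([1,2,3],3)<br />
</haskell><br />
<br />
Even though the list is infinite, only three actions are ever performed, and always in list order - exactly the "lazy in whole, ordered inside" behaviour described above.<br />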
<br />
== A safer approach: the ST monad ==<br />
<br />
We said earlier that we can use unsafePerformIO to perform computations that are totally pure but nevertheless interact with the Real World in some way. There is, however, a better way! One that remains totally pure and yet allows the use of references, arrays, and so on -- and it's done using, you guessed it, type magic. This is the ST monad.<br />
<br />
The ST monad's version of unsafePerformIO is called runST, and it has a very unusual type.<br />
<haskell><br />
runST :: (forall s . ST s a) -> a<br />
</haskell><br />
<br />
The s variable in the ST monad is the state type. Moreover, all the fun mutable stuff available in the ST monad is quantified over s:<br />
<haskell><br />
newSTRef :: a -> ST s (STRef s a)<br />
newArray_ :: Ix i => (i, i) -> ST s (STArray s i e)<br />
</haskell><br />
<br />
So why does runST have such a funky type? Let's see what would happen if we wrote<br />
<haskell><br />
makeSTRef :: a -> STRef s a<br />
makeSTRef a = runST (newSTRef a)<br />
</haskell><br />
This fails, because newSTRef a doesn't work for all state types s -- it only works for the s from the return type STRef s a.<br />
<br />
This is all sort of wacky, but the result is that you can only run an ST computation where the output type is functionally pure, and makes no references to the internal mutable state of the computation. The ST monad doesn't have access to I/O operations like writing to the console, either -- only references, arrays, and suchlike that come in handy for pure computations.<br />
<br />
Important note -- the state type doesn't actually mean anything. We never have a value of type s, for instance. It's just a way of getting the type system to do the work of ensuring purity for us, with smoke and mirrors.<br />
<br />
It's really just type system magic: secretly, on the inside, runST runs a computation with the real world baton just like unsafePerformIO. Their internal implementations are almost identical: in fact, there's a function<br />
<haskell><br />
stToIO :: ST RealWorld a -> IO a<br />
</haskell><br />
<br />
The difference is that ST uses type system magic to forbid unsafe behavior like extracting mutable objects from their safe ST wrapping, while still allowing purely functional outputs to be computed with all the handy access to mutable references and arrays.<br />
<br />
So here's how we'd rewrite our function using unsafePerformIO from above:<br />
<br />
<haskell><br />
oneST :: ST s Int -- note that this works correctly for any s<br />
oneST = do var <- newSTRef 0<br />
           modifySTRef var (+1)<br />
           readSTRef var<br />
<br />
one :: Int<br />
one = runST oneST<br />
</haskell><br />
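<br />
As another sketch of the same idea (sumST is a made-up helper, not a library function), here is the classic example of summing a list with a mutable accumulator. The mutation is completely invisible from outside, so the function is pure:<br />
<br />
<haskell><br />
import Control.Monad.ST<br />
import Data.STRef<br />
<br />
-- Sum a list using a mutable STRef accumulator; runST seals the mutation in.<br />
sumST :: Num a => [a] -> a<br />
sumST xs = runST $ do<br />
  acc <- newSTRef 0<br />
  mapM_ (\x -> modifySTRef acc (+x)) xs<br />
  readSTRef acc<br />
</haskell><br />
<br />
For example, sumST [1..10] evaluates to 55, with no IO anywhere in sight.<br />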
<br />
== Welcome to the machine: the actual [[GHC]] implementation ==<br />
<br />
A little disclaimer: I should say that I'm not describing<br />
here exactly what a monad is (I don't even completely understand it myself) and my explanation shows only one _possible_ way to implement the IO monad in<br />
Haskell. For example, the hbc Haskell compiler implements the IO monad via<br />
continuations. I also haven't said anything about exception handling,<br />
which is a natural part of the "monad" concept. You can read the "All About<br />
Monads" guide to learn more about these topics.<br />
<br />
But there is some good news: first, the IO monad understanding you've just acquired will work with any implementation and with many other monads. You just can't work with RealWorld<br />
values directly.<br />
<br />
Second, the IO monad implementation described here is really used in the GHC and<br />
yhc/nhc98 (Hugs/jhc, too?) compilers. Here is the actual IO definition<br />
from the GHC sources:<br />
<br />
<haskell><br />
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))<br />
</haskell><br />
<br />
It uses the "State# RealWorld" type instead of our RealWorld, it uses the "(# #)" unboxed tuple for optimization, and it adds an IO data constructor<br />
around the type. Nevertheless, there are no significant changes from the standpoint of our explanation. Knowing the principle of "chaining" IO actions via fake "state of the world" values, you can now easily understand and write low-level implementations of GHC I/O operations.<br />
<br />
<br />
=== The [[Yhc]]/nhc98 implementation ===<br />
<br />
<haskell><br />
data World = World<br />
newtype IO a = IO (World -> Either IOError a)<br />
</haskell><br />
<br />
This implementation makes the "World" disappear somewhat, and returns either a<br />
result of type "a" or, if an error occurs, an "IOError". The lack of the World on the right-hand side of the function can only be done because the compiler knows special things about the IO type, and won't overoptimise it.<br />
<br />
<br />
== Further reading ==<br />
<br />
[1] This tutorial is largely based on Simon Peyton Jones' paper [http://research.microsoft.com/%7Esimonpj/Papers/marktoberdorf Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell]. I hope that my tutorial improves his original explanation of the Haskell I/O system and brings it closer to the point of view of beginning Haskell programmers. But if you need to learn about concurrency, exceptions and FFI in Haskell/GHC, the original paper is the best source of information.<br />
<br />
[2] You can find more information about concurrency, FFI and STM at the [[GHC/Concurrency#Starting points]] page.<br />
<br />
[3] The [[Arrays]] page contains exhaustive explanations about using mutable arrays.<br />
<br />
[4] Look also at the [[Tutorials#Using_monads|Using monads]] page, which contains tutorials and papers really describing these mysterious monads :)<br />
<br />
[5] An explanation of the basic monad functions, with examples, can be found in the reference guide [http://members.chello.nl/hjgtuyl/tourdemonad.html A tour of the Haskell Monad functions], by Henk-Jan van Tuyl.<br />
<br />
[6] Official FFI specifications can be found on the page [http://www.cse.unsw.edu.au/~chak/haskell/ffi/ The Haskell 98 Foreign Function Interface 1.0: An Addendum to the Haskell 98 Report]<br />
<br />
[7] The use of the FFI in multithreaded programs is described in the paper [http://www.haskell.org/~simonmar/bib/concffi04_abstract.html Extending the Haskell Foreign Function Interface with Concurrency]<br />
<br />
Do you have more questions? Ask in the [http://www.haskell.org/mailman/listinfo/haskell-cafe haskell-cafe mailing list].<br />
<br />
== To-do list ==<br />
<br />
If you are interested in adding more information to this manual, please add your questions/topics here.<br />
<br />
Topics:<br />
* fixIO and 'mdo'<br />
* Q monad<br />
<br />
Questions:<br />
* split '>>='/'>>'/return section and 'do' section, more examples of using binding operators<br />
* IORef detailed explanation (==const*), usage examples, syntax sugar, unboxed refs<br />
* control structures developing - much more examples<br />
* unsafePerformIO usage examples: global variable, ByteString, other examples<br />
* actual GHC implementation - how to write low-level routines on example of newIORef implementation<br />
<br />
This manual is collective work, so feel free to add more information to it yourself. The final goal is to collectively develop a comprehensive manual for using the IO monad.<br />
<br />
----<br />
<br />
[[Category:Tutorials]]</div>Conalhttps://wiki.haskell.org/IO_insideIO inside2011-08-05T19:09:13Z<p>Conal: /* Welcome to the RealWorld, baby :) */ Added a warning about the falseness of the IO story.</p>
<hr />
<div>Haskell I/O has always been a source of confusion and surprises for new Haskellers. While simple I/O code in Haskell looks very similar to its equivalents in imperative languages, attempts to write somewhat more complex code often result in a total mess. This is because Haskell I/O is really very different internally. Haskell is a pure language and even the I/O system can't break this purity.<br />
<br />
The following text is an attempt to explain the details of Haskell I/O implementations. This explanation should help you eventually master all the smart I/O tricks. Moreover, I've added a detailed explanation of various traps you might encounter along the way. After reading this text, you will receive a "Master of Haskell I/O" degree that is equal to a Bachelor in Computer Science and Mathematics, simultaneously :)<br />
<br />
If you are new to Haskell I/O you may prefer to start by reading the [[Introduction to IO]] page.<br />
<br />
<br />
== Haskell is a pure language ==<br />
<br />
Haskell is a pure language, which means that the result of any function call is fully determined by its arguments. Pseudo-functions like rand() or getchar() in C, which return different results on each call, are simply impossible to write in Haskell. Moreover, Haskell functions can't have side effects, which means that they can't effect any changes to the "real world", like changing files, writing to the screen, printing, sending data over the network, and so on. These two restrictions together mean that any function call can be replaced by the result of a previous call with the same parameters, and the language '''guarantees''' that all these rearrangements will not change the program result!<br />
<br />
Let's compare this to C: optimizing C compilers try to guess which functions have no side effects and don't depend on mutable global variables. If this guess is wrong, an optimization can change the program's semantics! To avoid this kind of disaster, C optimizers are conservative in their guesses or require hints from the programmer about the purity of functions.<br />
<br />
Compared to an optimizing C compiler, a Haskell compiler is a set of pure mathematical transformations. This results in much better high-level optimization facilities. Moreover, pure mathematical computations can be much more easily divided into several threads that may be executed in parallel, which is increasingly important in these days of multi-core CPUs. Finally, pure computations are less error-prone and easier to verify, which adds to Haskell's robustness and to the speed of program development using Haskell.<br />
<br />
Haskell's purity allows the compiler to call only functions whose results<br />
are really required to calculate the final value of a high-level function<br />
(i.e., main) - this is called lazy evaluation. It's a great thing for<br />
pure mathematical computations, but how about I/O actions? A function<br />
like (<hask>putStrLn "Press any key to begin formatting"</hask>) can't return any<br />
meaningful result value, so how can we ensure that the compiler will not<br />
omit or reorder its execution? And in general: how can we work with<br />
stateful algorithms and side effects in an entirely lazy language?<br />
Many different solutions to this question have been proposed over 18 years of<br />
Haskell development (see [[History of Haskell]]), though a solution based on [[monad]]s is now<br />
the standard.<br />
<br />
== What is a monad? ==<br />
<br />
What is a [[monad]]? It's something from mathematical category theory, which I<br />
don't know anymore :) In order to understand how monads are used to<br />
solve the problem of I/O and side effects, you don't need to know it. It's<br />
enough to just know elementary mathematics, like I do :)<br />
<br />
Let's imagine that we want to implement in Haskell the well-known<br />
'getchar' function. What type should it have? Let's try:<br />
<br />
<haskell><br />
getchar :: Char<br />
<br />
get2chars = [getchar,getchar]<br />
</haskell><br />
<br />
What will we get with 'getchar' having just the 'Char' type? You can see<br />
all the possible problems in the definition of 'get2chars':<br />
<br />
# Because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "excessive" calls to 'getchar' and use one returned value twice.<br />
# Even if it does make two calls, there is no way to determine which call should be performed first. Do you want to return the two chars in the order in which they were read, or in the opposite order? Nothing in the definition of 'get2chars' answers this question.<br />
<br />
How can these problems be solved, from the programmer's viewpoint?<br />
Let's introduce a fake parameter of 'getchar' to make each call<br />
"different" from the compiler's point of view:<br />
<br />
<haskell><br />
getchar :: Int -> Char<br />
<br />
get2chars = [getchar 1, getchar 2]<br />
</haskell><br />
<br />
Right away, this solves the first problem mentioned above - now the<br />
compiler will make two calls because it sees them as having different<br />
parameters. The whole 'get2chars' function should also have a<br />
fake parameter, otherwise we will have the same problem calling it:<br />
<br />
<haskell><br />
getchar :: Int -> Char<br />
get2chars :: Int -> String<br />
<br />
get2chars _ = [getchar 1, getchar 2]<br />
</haskell><br />
<br />
<br />
Now we need to give the compiler some clue to determine which function it<br />
should call first. The Haskell language doesn't provide any way to express<br />
order of evaluation... except for data dependencies! How about adding an<br />
artificial data dependency which prevents evaluation of the second<br />
'getchar' before the first one? In order to achieve this, we will<br />
return an additional fake result from 'getchar' that will be used as a<br />
parameter for the next 'getchar' call:<br />
<br />
<haskell><br />
getchar :: Int -> (Char, Int)<br />
<br />
get2chars _ = [a,b] where (a,i) = getchar 1<br />
                          (b,_) = getchar i<br />
</haskell><br />
<br />
So far so good - now we can guarantee that 'a' is read before 'b'<br />
because reading 'b' needs the value ('i') that is returned by reading 'a'!<br />
<br />
We've added a fake parameter to 'get2chars' but the problem is that the<br />
Haskell compiler is too smart! It may believe that the external 'getchar'<br />
function really depends on its parameter, but for 'get2chars' it<br />
will see that we're just cheating, because we throw the parameter away! Therefore it won't feel obliged to execute the calls in the order we want. How can we fix this? How about passing this fake parameter to the 'getchar' function?! In this case<br />
the compiler can't guess that it is really unused :)<br />
<br />
<haskell><br />
get2chars i0 = [a,b] where (a,i1) = getchar i0<br />
                           (b,i2) = getchar i1<br />
</haskell><br />
<br />
<br />
And more - 'get2chars' has all the same purity problems as the 'getchar'<br />
function. If you need to call it two times, you need a way to describe<br />
the order of these calls. Look at:<br />
<br />
<haskell><br />
get4chars = [get2chars 1, get2chars 2] -- order of 'get2chars' calls isn't defined<br />
</haskell><br />
<br />
We already know how to deal with these problems - 'get2chars' should<br />
also return some fake value that can be used to order calls:<br />
<br />
<haskell><br />
get2chars :: Int -> (String, Int)<br />
<br />
get4chars i0 = (a++b) where (a,i1) = get2chars i0<br />
                            (b,i2) = get2chars i1<br />
</haskell><br />
<br />
<br />
But what's the fake value 'get2chars' should return? If we use some integer constant, the excessively-smart Haskell compiler will guess that we're cheating again :) What about returning the value returned by 'getchar'? See:<br />
<br />
<haskell><br />
get2chars :: Int -> (String, Int)<br />
get2chars i0 = ([a,b], i2) where (a,i1) = getchar i0<br />
                                 (b,i2) = getchar i1<br />
</haskell><br />
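<br />
To make the idea concrete, here is a tiny runnable sketch (getcharSim and get2charsSim are invented stand-ins - a pure 'getchar' doesn't really exist): a fake 'getchar' that deterministically derives its "read" character from the baton it receives, so you can actually check the ordering:<br />
<br />
<haskell><br />
-- A fake getchar: the Int baton both orders the calls and determines the<br />
-- "character read", making the data dependency visible to the compiler.<br />
getcharSim :: Int -> (Char, Int)<br />
getcharSim i = (toEnum (fromEnum 'a' + i), i + 1)<br />
<br />
get2charsSim :: Int -> (String, Int)<br />
get2charsSim i0 = ([a,b], i2) where (a,i1) = getcharSim i0<br />
                                    (b,i2) = getcharSim i1<br />
</haskell><br />
<br />
Here get2charsSim 0 evaluates to ("ab", 2): the second fake read cannot possibly happen before the first, because it needs i1.<br />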
<br />
Believe it or not, we've just constructed the whole "monadic"<br />
Haskell I/O system.<br />
<br />
== Welcome to the RealWorld, baby :) ==<br />
<br />
Warning: The following story about IO is incorrect in that it cannot actually explain some important aspects of IO (including interaction and concurrency). However, some people find it a useful way to begin developing an understanding.<br />
<br />
The 'main' Haskell function has the type:<br />
<br />
<haskell><br />
main :: RealWorld -> ((), RealWorld)<br />
</haskell><br />
<br />
where 'RealWorld' is a fake type used instead of our Int. It's something<br />
like the baton passed in a relay race. When 'main' calls some IO function,<br />
it passes the "RealWorld" it received as a parameter. All IO functions have<br />
similar types involving RealWorld as a parameter and result. To be<br />
exact, "IO" is a type synonym defined in the following way:<br />
<br />
<haskell><br />
type IO a = RealWorld -> (a, RealWorld)<br />
</haskell><br />
<br />
So, 'main' just has type "IO ()", 'getChar' has type "IO Char" and so<br />
on. You can think of the type "IO Char" as meaning "take the current RealWorld, do something to it, and return a Char and a (possibly changed) RealWorld". Let's look at 'main' calling 'getChar' two times:<br />
<br />
<haskell><br />
getChar :: RealWorld -> (Char, RealWorld)<br />
<br />
main :: RealWorld -> ((), RealWorld)<br />
main world0 = let (a, world1) = getChar world0<br />
                  (b, world2) = getChar world1<br />
              in ((), world2)<br />
</haskell><br />
<br />
<br />
Look at this closely: 'main' passes the "world" it received to the first 'getChar'. This 'getChar' returns some new value of type RealWorld<br />
that gets used in the next call. Finally, 'main' returns the "world" it got<br />
from the second 'getChar'.<br />
<br />
# Is it possible here to omit any call of 'getChar' if the Char it read is not used? No, because we need to return the "world" that is the result of the second 'getChar' and this in turn requires the "world" returned from the first 'getChar'.<br />
# Is it possible to reorder the 'getChar' calls? No: the second 'getChar' can't be called before the first one because it uses the "world" returned from the first call.<br />
# Is it possible to duplicate calls? In Haskell semantics - yes, but real compilers never duplicate work in such simple cases (otherwise, the programs generated will not have any speed guarantees).<br />
<br />
<br />
As we already said, RealWorld values are used like a baton which gets passed<br />
between all routines called by 'main' in strict order. Inside each<br />
routine called, RealWorld values are used in the same way. Overall, in<br />
order to "compute" the world to be returned from 'main', we should perform<br />
each IO procedure that is called from 'main', directly or indirectly.<br />
This means that each procedure inserted in the chain will be performed<br />
just at the moment (relative to the other IO actions) when we intended it<br />
to be called. Let's consider the following program:<br />
<br />
<haskell><br />
main = do a <- ask "What is your name?"<br />
          b <- ask "How old are you?"<br />
          return ()<br />
<br />
ask s = do putStr s<br />
           readLn<br />
</haskell><br />
<br />
Now you have enough knowledge to rewrite it in a low-level way and<br />
check that each operation that should be performed will really be<br />
performed with the arguments it should have and in the order we expect.<br />
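<br />
One possible low-level rewrite, sketched as a pure simulation (the primed names and the Int stand-in for RealWorld are inventions for this exercise; the real RealWorld is abstract, and the real putStr/readLn do actual I/O):<br />
<br />
<haskell><br />
type World = Int                      -- a fake, checkable stand-in<br />
<br />
putStr' :: String -> World -> ((), World)<br />
putStr' _ w = ((), w + 1)             -- pretend printing changes the world<br />
<br />
readLn' :: World -> (Int, World)<br />
readLn' w = (w, w + 1)                -- pretend to read the current world<br />
<br />
ask' :: String -> World -> (Int, World)<br />
ask' s world0 = let ((), world1) = putStr' s world0<br />
                in  readLn' world1<br />
<br />
main' :: World -> ((), World)<br />
main' world0 = let (a, world1) = ask' "What is your name?" world0<br />
                   (b, world2) = ask' "How old are you?"   world1<br />
               in ((), world2)<br />
</haskell><br />
<br />
Tracing main' 0 by hand shows each fake action is performed exactly once and in order: the world advances one step per putStr'/readLn' call, ending at 4.<br />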
<br />
<br />
But what about conditional execution? No problem. Let's define the<br />
well-known 'when' operation:<br />
<br />
<haskell><br />
when :: Bool -> IO () -> IO ()<br />
when condition action world =<br />
   if condition<br />
     then action world<br />
     else ((), world)<br />
</haskell><br />
<br />
As you can see, we can easily include or exclude IO procedures (actions)<br />
from the execution chain depending on data values. If 'condition'<br />
is False when 'when' is called, 'action' will never be performed, because<br />
real Haskell compilers, again, never call functions whose results<br />
are not required to calculate the final result (''i.e.'', here, the final "world" value of 'main').<br />
<br />
Loops and more complex control structures can be implemented in<br />
the same way. Try it as an exercise!<br />
<br />
<br />
Finally, you may want to know how much it costs to pass these RealWorld<br />
values around the program. It's free! These fake values exist solely for the compiler while it analyzes and optimizes the code, but when it gets to assembly code generation, it "suddenly" realizes that this type is like "()", so<br />
all these parameters and result values can be omitted from the final generated code. Isn't it beautiful? :)<br />
<br />
== '>>=' and 'do' notation ==<br />
<br />
All beginners (including me :)) start by thinking that 'do' is some<br />
magic statement that executes IO actions. That's wrong - 'do' is just<br />
syntactic sugar that simplifies the writing of procedures that use IO (and also other monads, but that's beyond the scope of this tutorial). 'do' notation eventually gets translated to statements passing "world" values around like we've manually written above and is used to simplify the gluing of several<br />
IO actions together. You don't need to use 'do' for just one statement; for instance,<br />
<br />
<haskell><br />
main = do putStr "Hello!"<br />
</haskell><br />
<br />
is desugared to:<br />
<br />
<haskell><br />
main = putStr "Hello!"<br />
</haskell><br />
<br />
But nevertheless it's considered Good Style to use 'do' even for one statement<br />
because it simplifies adding new statements in the future.<br />
<br />
<br />
Let's examine how to desugar a 'do' with multiple statements in the<br />
following example: <br />
<br />
<haskell><br />
main = do putStr "What is your name?"<br />
putStr "How old are you?"<br />
putStr "Nice day!"<br />
</haskell><br />
<br />
The 'do' statement here just joins several IO actions that should be<br />
performed sequentially. It's translated to sequential applications<br />
of one of the so-called "binding operators", namely '>>':<br />
<br />
<haskell><br />
main = (putStr "What is your name?")<br />
       >> ( (putStr "How old are you?")<br />
            >> (putStr "Nice day!")<br />
          )<br />
</haskell><br />
<br />
This binding operator just combines two IO actions, executing them<br />
sequentially by passing the "world" between them:<br />
<br />
<haskell><br />
(>>) :: IO a -> IO b -> IO b<br />
(action1 >> action2) world0 =<br />
   let (a, world1) = action1 world0<br />
       (b, world2) = action2 world1<br />
   in (b, world2)<br />
</haskell><br />
<br />
If defining operators this way looks strange to you, read this<br />
definition as follows:<br />
<br />
<haskell><br />
action1 >> action2 = action<br />
  where<br />
    action world0 = let (a, world1) = action1 world0<br />
                        (b, world2) = action2 world1<br />
                    in (b, world2)<br />
</haskell><br />
<br />
Now you can substitute the definition of '>>' at the places of its usage<br />
and check that the program constructed by the 'do' desugaring is actually the<br />
same as what we could write by manually manipulating "world" values.<br />
<br />
<br />
A more complex example involves the binding of variables using "<-":<br />
<br />
<haskell><br />
main = do a <- readLn<br />
print a<br />
</haskell><br />
<br />
This code is desugared into:<br />
<br />
<haskell><br />
main = readLn<br />
       >>= (\a -> print a)<br />
</haskell><br />
<br />
As you should remember, the '>>' binding operator silently ignores<br />
the value of its first action and returns as an overall result<br />
the result of its second action only. On the other hand, the '>>=' binding operator (note the extra '=' at the end) allows us to use the result of its first action - it gets passed as an additional parameter to the second one! Look at the definition:<br />
<br />
<haskell><br />
(>>=) :: IO a -> (a -> IO b) -> IO b<br />
(action1 >>= action2) world0 =<br />
   let (a, world1) = action1 world0<br />
       (b, world2) = action2 a world1<br />
   in (b, world2)<br />
</haskell><br />
<br />
First, what does the type of the second "action" (more precisely, a function which returns an IO action), namely "a -> IO b", mean? By<br />
substituting the "IO" definition, we get "a -> RealWorld -> (b, RealWorld)".<br />
This means that the second action actually has two parameters<br />
- the type 'a' actually used inside it, and the value of type RealWorld used for sequencing of IO actions. That's always the case - any IO procedure has one<br />
more parameter compared to what you see in its type signature. This<br />
parameter is hidden inside the definition of the type alias "IO".<br />
<br />
Second, you can use these '>>' and '>>=' operations to simplify your<br />
program. For example, in the code above we don't need to introduce the<br />
variable, because the result of 'readLn' can be sent directly to 'print':<br />
<br />
<haskell><br />
main = readLn >>= print<br />
</haskell><br />
<br />
<br />
And third - as you see, the notation:<br />
<br />
<haskell><br />
do x <- action1<br />
   action2<br />
</haskell><br />
<br />
where 'action1' has type "IO a" and 'action2' has type "IO b",<br />
translates into:<br />
<br />
<haskell><br />
action1 >>= (\x -> action2)<br />
</haskell><br />
<br />
where the second argument of '>>=' has the type "a -> IO b". It's the way<br />
the '<-' binding is processed - the name on the left-hand side of '<-' just becomes a parameter of subsequent operations represented as one large IO action. Note also that if 'action1' has type "IO a" then 'x' will just have type "a"; you can think of the effect of '<-' as "unpacking" the IO value of 'action1' into 'x'. Note also that '<-' is not a true operator; it's pure syntax, just like 'do' itself. Its meaning results only from the way it gets desugared.<br />
<br />
Look at the next example: <br />
<br />
<haskell><br />
main = do putStr "What is your name?"<br />
          a <- readLn<br />
          putStr "How old are you?"<br />
          b <- readLn<br />
          print (a,b)<br />
</haskell><br />
<br />
This code is desugared into:<br />
<br />
<haskell><br />
main = putStr "What is your name?"<br />
       >> readLn<br />
       >>= \a -> putStr "How old are you?"<br />
       >> readLn<br />
       >>= \b -> print (a,b)<br />
</haskell><br />
<br />
I omitted the parentheses here; both the '>>' and the '>>=' operators are<br />
left-associative, but a lambda-binding always stretches as far to the right as possible, which means that the 'a' and 'b' bindings introduced<br />
here are valid for all remaining actions. As an exercise, add the<br />
parentheses yourself and translate this procedure into the low-level<br />
code that explicitly passes "world" values. I think it should be enough to help you finally realize how the 'do' translation and binding operators work.<br />
<br />
<br />
Oh, no! I forgot the third monadic operator - 'return'. It just<br />
pairs up its two parameters - the value passed and the "world":<br />
<br />
<haskell><br />
return :: a -> IO a<br />
return a world0 = (a, world0)<br />
</haskell><br />
<br />
How about translating a simple example of 'return' usage? Say,<br />
<br />
<haskell><br />
main = do a <- readLn<br />
          return (a*2)<br />
</haskell><br />
<br />
<br />
Programmers with an imperative language background often think that<br />
'return' in Haskell, as in other languages, immediately returns from<br />
the IO procedure. As you can see in its definition (and even just from its<br />
type!), such an assumption is totally wrong. The only purpose of using<br />
'return' is to "lift" some value (of type 'a') into the result of<br />
a whole action (of type "IO a") and therefore it should generally be used only as the last executed statement of some IO sequence. For example try to<br />
translate the following procedure into the corresponding low-level code:<br />
<br />
<haskell><br />
main = do a <- readLn<br />
          when (a>=0) $ do<br />
              return ()<br />
          print "a is negative"<br />
</haskell><br />
<br />
and you will realize that the 'print' statement is executed even for non-negative values of 'a'. If you need to escape from the middle of an IO procedure, you can use the 'if' statement:<br />
<br />
<haskell><br />
main = do a <- readLn<br />
          if (a>=0)<br />
            then return ()<br />
            else print "a is negative"<br />
</haskell><br />
<br />
Moreover, Haskell layout rules allow us to use the following layout:<br />
<br />
<haskell><br />
main = do a <- readLn<br />
          if (a>=0) then return ()<br />
            else do<br />
          print "a is negative"<br />
          ...<br />
</haskell><br />
<br />
that may be useful for escaping from the middle of a longish 'do' statement.<br />
<br />
<br />
Last exercise: implement a function 'liftM' that lifts operations on<br />
plain values to operations on monadic ones. Its type signature is:<br />
<br />
<haskell><br />
liftM :: (a -> b) -> (IO a -> IO b)<br />
</haskell><br />
<br />
If that's too hard for you, start with the following high-level<br />
definition and rewrite it in low-level fashion:<br />
<br />
<haskell><br />
liftM f action = do x <- action<br />
                    return (f x)<br />
</haskell><br />
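<br />
One possible low-level rewrite, again in the article's fake model (IOSim, readWorld, and liftM' are invented names for this sketch; real IO keeps its world hidden):<br />
<br />
<haskell><br />
type World = Int                 -- fake stand-in for the abstract RealWorld<br />
type IOSim a = World -> (a, World)<br />
<br />
-- Run the action, apply f to its result, and thread the world through.<br />
liftM' :: (a -> b) -> IOSim a -> IOSim b<br />
liftM' f action world0 = let (x, world1) = action world0<br />
                         in (f x, world1)<br />
<br />
-- A fake action to try it on: "reads" the current world value.<br />
readWorld :: IOSim Int<br />
readWorld w = (w, w + 1)<br />
</haskell><br />
<br />
For instance, liftM' (*2) readWorld 5 evaluates to (10, 6): the result is doubled while the world advances exactly once.<br />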
<br />
<br />
<br />
== Mutable data (references, arrays, hash tables...) ==<br />
<br />
As you should know, every name in Haskell is bound to one fixed (immutable) value. This greatly simplifies understanding algorithms and code optimization, but it's inappropriate in some cases. As we all know, there are plenty of algorithms that are simpler to implement in terms of updatable<br />
variables, arrays and so on. This means that the value associated with<br />
a variable, for example, can be different at different execution points,<br />
so reading its value can't be considered as a pure function. Imagine,<br />
for example, the following code:<br />
<br />
<haskell><br />
main = do let a0 = readVariable varA<br />
              _  = writeVariable varA 1<br />
              a1 = readVariable varA<br />
          print (a0, a1)<br />
</haskell><br />
<br />
Does this look strange? First, the two calls to 'readVariable' look the same, so the compiler can just reuse the value returned by the first call. Second,<br />
the result of the 'writeVariable' call isn't used so the compiler can (and will!) omit this call completely. To complete the picture, these three calls may be rearranged in any order because they appear to be independent of each<br />
other. This is obviously not what was intended. What's the solution? You already know this - use IO actions! Using IO actions guarantees that:<br />
<br />
# the execution order will be retained as written<br />
# each action will have to be executed<br />
# the result of the "same" action (such as "readVariable varA") will not be reused<br />
<br />
So, the code above really should be written as:<br />
<br />
<haskell><br />
import Data.IORef<br />
main = do varA <- newIORef 0 -- Create and initialize a new variable<br />
          a0 <- readIORef varA<br />
          writeIORef varA 1<br />
          a1 <- readIORef varA<br />
          print (a0, a1)<br />
</haskell><br />
<br />
Here, 'varA' has the type "IORef Int" which means "a variable (reference) in<br />
the IO monad holding a value of type Int". newIORef creates a new variable<br />
(reference) and returns it, and then read/write actions use this<br />
reference. The value returned by the "readIORef varA" action depends not<br />
only on the variable involved but also on the moment this operation is performed so it can return different values on each call.<br />
<br />
Arrays, hash tables and any other _mutable_ data structures are<br />
defined in the same way - for each of them, there's an operation that creates a new "mutable value" and returns a reference to it. Then special read and write<br />
operations in the IO monad are used. The following code shows an example<br />
using mutable arrays:<br />
<br />
<haskell><br />
import Data.Array.IO<br />
main = do arr <- newArray (1,10) 37 :: IO (IOArray Int Int)<br />
          a <- readArray arr 1<br />
          writeArray arr 1 64<br />
          b <- readArray arr 1<br />
          print (a, b)<br />
</haskell><br />
<br />
Here, an array of 10 elements with 37 as the initial value at each location is created. After reading the value of the first element (index 1) into 'a' this element's value is changed to 64 and then read again into 'b'. As you can see by executing this code, 'a' will be set to 37 and 'b' to 64.<br />
<br />
<br />
<br />
Other state-dependent operations are also often implemented as IO<br />
actions. For example, a random number generator should return a different<br />
value on each call. It looks natural to give it a type involving IO:<br />
<br />
<haskell><br />
rand :: IO Int<br />
</haskell><br />
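As a quick illustration of why the IO type is needed here, this sketch implements such a generator on top of an IORef (the function names and the linear-congruential constants are invented for this example; this is not a real library API):

<haskell>
import Data.IORef

-- 'mkRand' creates a mutable seed and returns a 'rand :: IO Int' action.
-- Each call to 'rand' updates the seed, so repeated calls give different
-- results - exactly the behavior a pure function could not have.
mkRand :: Int -> IO (IO Int)
mkRand seed = do
  ref <- newIORef seed
  let rand = do s <- readIORef ref
                let s' = (s * 1103515245 + 12345) `mod` 2147483648
                writeIORef ref s'
                return s'
  return rand

main :: IO ()
main = do rand <- mkRand 42
          a <- rand
          b <- rand
          print (a /= b)   -- True: the two calls return different values
</haskell>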
<br />
Moreover, when you import C routines, you should be careful - if the<br />
routine is impure, i.e. its result depends on something in the "real<br />
world" (file system, memory contents...), on internal state and so on,<br />
you should give it an IO type. Otherwise, the compiler can<br />
"optimize" away repeated calls to this procedure with the same parameters! :)<br />
<br />
For example, we can write a non-IO type for:<br />
<br />
<haskell><br />
foreign import ccall<br />
sin :: Double -> Double<br />
</haskell><br />
<br />
because the result of 'sin' depends only on its argument, but<br />
<br />
<haskell><br />
foreign import ccall<br />
tell :: Int -> IO Int<br />
</haskell><br />
<br />
If you declare 'tell' as a pure function (without IO), then you may<br />
get the same file position on each call! :)<br />
<br />
== IO actions as values ==<br />
<br />
By this point you should understand why it's impossible to use IO<br />
actions inside non-IO (pure) procedures. Such procedures just don't<br />
get a "baton"; they don't know any "world" value to pass to an IO action.<br />
The RealWorld type is an abstract datatype, so pure functions also can't construct RealWorld values by themselves, and it's a strict type, so 'undefined' also can't be used. So, the prohibition of using IO actions inside pure procedures is just a type system trick (as it usually is in Haskell :)).<br />
<br />
But while pure code can't _execute_ IO actions, it can work with them<br />
as with any other functional values - they can be stored in data<br />
structures, passed as parameters, returned as results, collected in<br />
lists, and partially applied. But an IO action will remain a<br />
functional value because we can't apply it to the last argument - of<br />
type RealWorld.<br />
<br />
In order to _execute_ the IO action we need to apply it to some<br />
RealWorld value. That can be done only inside some IO procedure,<br />
in its "actions chain". And real execution of this action will take<br />
place only when this procedure is called as part of the process of<br />
"calculating the final value of world" for 'main'. Look at this example:<br />
<br />
<haskell><br />
main world0 = let get2chars = getChar >> getChar<br />
((), world1) = putStr "Press two keys" world0<br />
(answer, world2) = get2chars world1<br />
in ((), world2)<br />
</haskell><br />
<br />
Here we first bind a value to 'get2chars' and then write a binding<br />
involving 'putStr'. But what's the execution order? It's not defined<br />
by the order of the 'let' bindings, it's defined by the order of processing<br />
"world" values! You can arbitrarily reorder the binding statements - the execution order will be defined by the data dependency with respect to the <br />
"world" values that get passed around. Let's see what this 'main' looks like in the 'do' notation:<br />
<br />
<haskell><br />
main = do let get2chars = getChar >> getChar<br />
putStr "Press two keys"<br />
get2chars<br />
return ()<br />
</haskell><br />
<br />
As you can see, we've eliminated two of the 'let' bindings and left only the one defining 'get2chars'. The non-'let' statements are executed in the exact order in which they're written, because they pass the "world" value from statement to statement as we described above. Thus, this version of the function is much easier to understand because we don't have to mentally figure out the data dependency of the "world" value.<br />
<br />
Moreover, IO actions like 'get2chars' can't be executed directly<br />
because they are functions with a RealWorld parameter. To execute them,<br />
we need to supply the RealWorld parameter, i.e. insert them in the 'main'<br />
chain, placing them in some 'do' sequence executed from 'main' (either directly in the 'main' function, or indirectly in an IO function called from 'main'). Until that's done, they will remain like any function, in partially<br />
evaluated form. And we can work with IO actions as with any other<br />
functions - bind them to names (as we did above), save them in data<br />
structures, pass them as function parameters and return them as results - and<br />
they won't be performed until you give them the magic RealWorld<br />
parameter!<br />
<br />
<br />
<br />
=== Example: a list of IO actions ===<br />
<br />
Let's try defining a list of IO actions:<br />
<br />
<haskell><br />
ioActions :: [IO ()]<br />
ioActions = [(print "Hello!"),<br />
(putStr "just kidding"),<br />
(getChar >> return ())<br />
]<br />
</haskell><br />
<br />
I used additional parentheses around each action, although they aren't really required. If you still can't believe that these actions won't be executed immediately, just recall the real type of this list:<br />
<br />
<haskell><br />
ioActions :: [RealWorld -> ((), RealWorld)]<br />
</haskell><br />
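If it helps, here is a minimal self-contained check (my own sketch): pure operations on the list, such as taking its length, work fine and trigger no I/O at all - even though one of the stored actions would block waiting for a keypress if it were executed.

<haskell>
ioActions :: [IO ()]
ioActions = [ print "Hello!"
            , putStr "just kidding"
            , getChar >> return ()
            ]

-- Only the length of the list is computed; none of the actions runs,
-- so nothing is read from stdin and no greeting is printed.
main :: IO ()
main = print (length ioActions)   -- prints 3
</haskell>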
<br />
Well, now we want to execute some of these actions. No problem, just<br />
insert them into the 'main' chain:<br />
<br />
<haskell><br />
main = do head ioActions<br />
ioActions !! 1<br />
last ioActions<br />
</haskell><br />
<br />
Looks strange, right? :) Really, any IO action that you write in a 'do'<br />
statement (or use as a parameter for the '>>'/'>>=' operators) is an expression<br />
returning a result of type 'IO a' for some type 'a'. Typically, you use some function that has the type 'x -> y -> ... -> IO a' and provide all the x, y, etc. parameters. But you're not limited to this standard scenario -<br />
don't forget that Haskell is a functional language and you're free to<br />
compute the functional value required (recall that "IO a" is really a function<br />
type) in any possible way. Here we just extracted several functions<br />
from the list - no problem. This functional value can also be<br />
constructed on-the-fly, as we've done in the previous example - that's also<br />
OK. Want to see this functional value passed as a parameter?<br />
Just look at the definition of 'when'. Hey, we can buy, sell, and rent<br />
these IO actions just like we can with any other functional values! For example, let's define a function that executes all the IO actions in the list:<br />
<br />
<haskell><br />
sequence_ :: [IO a] -> IO ()<br />
sequence_ [] = return ()<br />
sequence_ (x:xs) = do x<br />
sequence_ xs<br />
</haskell><br />
<br />
No black magic - we just extract IO actions from the list and insert<br />
them into a chain of IO operations that should be performed one after another (in the same order that they occurred in the list) to "compute the final world value" of the entire 'sequence_' call.<br />
<br />
With the help of 'sequence_', we can rewrite our last 'main' function as:<br />
<br />
<haskell><br />
main = sequence_ ioActions<br />
</haskell><br />
<br />
<br />
Haskell's ability to work with IO actions as with any other<br />
(functional and non-functional) values allows us to define control<br />
structures of arbitrary complexity. Try, for example, to define a control<br />
structure that repeats an action until it returns the 'False' result:<br />
<br />
<haskell><br />
while :: IO Bool -> IO ()<br />
while action = ???<br />
</haskell><br />
<br />
Most programming languages don't allow you to define control structures at all, and those that do often require you to use a macro-expansion system. In Haskell, control structures are just trivial functions anyone can write.<br />
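For reference, here is one possible answer to that exercise (just a sketch - the standard libraries don't provide a combinator under this exact name):

<haskell>
import Data.IORef
import Control.Monad (when)

-- Repeat the action as long as it returns True.
while :: IO Bool -> IO ()
while action = do b <- action
                  when b (while action)

main :: IO ()
main = do counter <- newIORef (0 :: Int)
          while $ do n <- readIORef counter
                     writeIORef counter (n + 1)
                     return (n < 4)
          readIORef counter >>= print   -- prints 5: the body ran five times
</haskell>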
<br />
<br />
=== Example: returning an IO action as a result ===<br />
<br />
How about returning an IO action as the result of a function? Well, we've done<br />
this each time we've defined an IO procedure - they all return IO actions<br />
that need a RealWorld value to be performed. While we usually just<br />
execute them as part of a higher-level IO procedure, it's also<br />
possible to just collect them without actual execution:<br />
<br />
<haskell><br />
main = do let a = sequence ioActions<br />
b = when True getChar<br />
c = getChar >> getChar<br />
putStr "These 'let' statements are not executed!"<br />
</haskell><br />
<br />
These assigned IO procedures can be used as parameters to other<br />
procedures, or written to global variables, or processed in some other<br />
way, or just executed later, as we did in the example with 'get2chars'.<br />
<br />
But how about returning a parameterized IO action from an IO procedure? Let's define a procedure that returns the i'th byte from a file represented as a Handle:<br />
<br />
<haskell><br />
readi h i = do hSeek h AbsoluteSeek i<br />
hGetChar h<br />
</haskell><br />
<br />
So far so good. But how about a procedure that returns the i'th byte of a file<br />
with a given name without reopening it each time?<br />
<br />
<haskell><br />
readfilei :: String -> IO (Integer -> IO Char)<br />
readfilei name = do h <- openFile name ReadMode<br />
return (readi h)<br />
</haskell><br />
<br />
As you can see, it's an IO procedure that opens a file and returns...<br />
another IO procedure that will read the specified byte. But we can go<br />
further and include the 'readi' body in 'readfilei':<br />
<br />
<haskell><br />
readfilei name = do h <- openFile name ReadMode<br />
let readi h i = do hSeek h AbsoluteSeek i<br />
hGetChar h<br />
return (readi h)<br />
</haskell><br />
<br />
That's a little better. But why do we add 'h' as a parameter to 'readi' if it can be obtained from the environment where 'readi' is now defined? An even shorter version is this:<br />
<br />
<haskell><br />
readfilei name = do h <- openFile name ReadMode<br />
let readi i = do hSeek h AbsoluteSeek i<br />
hGetChar h<br />
return readi<br />
</haskell><br />
<br />
What have we done here? We've built a parameterized IO action involving local<br />
names inside 'readfilei' and returned it as the result. Now it can be<br />
used in the following way:<br />
<br />
<haskell><br />
main = do myfile <- readfilei "test"<br />
a <- myfile 0<br />
b <- myfile 1<br />
print (a,b)<br />
</haskell><br />
<br />
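To try this out end to end, here is a self-contained variant that first creates the file it then reads (the file name "test.tmp" and its contents are made up for this sketch):

<haskell>
import System.IO

-- Open the file once; return an action that reads the i'th byte.
readfilei :: String -> IO (Integer -> IO Char)
readfilei name = do h <- openFile name ReadMode
                    let readi i = do hSeek h AbsoluteSeek i
                                     hGetChar h
                    return readi

main :: IO ()
main = do writeFile "test.tmp" "haskell"
          myfile <- readfilei "test.tmp"
          a <- myfile 0
          b <- myfile 1
          print (a, b)   -- prints ('h','a')
</haskell>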
<br />
This way of using IO actions is very typical for Haskell programs - you<br />
just construct one or more IO actions that you need,<br />
with or without parameters, possibly involving the parameters that your<br />
"constructor" received, and return them to the caller. Then these IO actions<br />
can be used in the rest of the program without any knowledge about your<br />
internal implementation strategy. One thing this can be used for is to<br />
partially emulate the OOP (or more precisely, the ADT) programming paradigm.<br />
<br />
<br />
=== Example: a memory allocator generator ===<br />
<br />
As an example, one of my programs has a module which is a memory suballocator. It receives the address and size of a large memory block and returns two<br />
procedures - one to allocate a subblock of a given size and the other to<br />
free the allocated subblock:<br />
<br />
<haskell><br />
memoryAllocator :: Ptr a -> Int -> IO (Int -> IO (Ptr b),<br />
Ptr c -> IO ())<br />
<br />
memoryAllocator buf size = do ......<br />
let alloc size = do ...<br />
...<br />
free ptr = do ...<br />
...<br />
return (alloc, free)<br />
</haskell><br />
<br />
How is this implemented? 'alloc' and 'free' work with references<br />
created inside the memoryAllocator procedure. Because the creation of these references is part of the memoryAllocator IO actions chain, a new independent set of references will be created for each memory block for which<br />
memoryAllocator is called:<br />
<br />
<haskell><br />
memoryAllocator buf size = do start <- newIORef buf<br />
end <- newIORef (buf `plusPtr` size)<br />
...<br />
</haskell><br />
<br />
These two references are read and written in the 'alloc' and 'free' definitions (we'll implement a very simple memory allocator for this example):<br />
<br />
<haskell><br />
...<br />
let alloc size = do addr <- readIORef start<br />
writeIORef start (addr `plusPtr` size)<br />
return addr<br />
<br />
let free ptr = do writeIORef start ptr<br />
</haskell><br />
<br />
What we've defined here is just a pair of closures that use state<br />
available at the moment of their definition. As you can see, it's as<br />
easy as in any other functional language, despite Haskell's lack<br />
of direct support for impure functions.<br />
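Putting the fragments together, here is a compilable toy version of the suballocator. To keep it self-contained it hands out plain Int offsets instead of real pointers; that simplification, and all the names, are mine rather than the original program's:

<haskell>
import Data.IORef

-- Returns (alloc, free) operating on offsets in [start, start + size).
-- 'free' here is a trivial "rewind" - enough to show the closure idea.
memoryAllocator :: Int -> Int -> IO (Int -> IO Int, Int -> IO ())
memoryAllocator start _size = do
  next <- newIORef start
  let alloc n = do addr <- readIORef next
                   writeIORef next (addr + n)
                   return addr
      free ptr = writeIORef next ptr
  return (alloc, free)

main :: IO ()
main = do (alloc, free) <- memoryAllocator 0 65536
          p1 <- alloc 100
          p2 <- alloc 200
          free p1
          p3 <- alloc 50
          print (p1, p2, p3)   -- prints (0,100,0)
</haskell>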
<br />
The following example uses the procedures returned by memoryAllocator to<br />
allocate/free blocks in two independent memory buffers simultaneously:<br />
<br />
<haskell><br />
main = do buf1 <- mallocBytes (2^16)<br />
buf2 <- mallocBytes (2^20)<br />
(alloc1, free1) <- memoryAllocator buf1 (2^16)<br />
(alloc2, free2) <- memoryAllocator buf2 (2^20)<br />
ptr11 <- alloc1 100<br />
ptr21 <- alloc2 1000<br />
free1 ptr11<br />
free2 ptr21<br />
ptr12 <- alloc1 100<br />
ptr22 <- alloc2 1000<br />
</haskell><br />
<br />
<br />
<br />
=== Example: emulating OOP with record types ===<br />
<br />
Let's implement the classical OOP example: drawing figures. There are<br />
figures of different types: circles, rectangles and so on. The task is<br />
to create a heterogeneous list of figures. All figures in this list should<br />
support the same set of operations: draw, move and so on. We will<br />
represent these operations as IO procedures. Instead of a "class" let's<br />
define a structure containing implementations of all the procedures<br />
required:<br />
<br />
<haskell><br />
data Figure = Figure { draw :: IO (),<br />
move :: Displacement -> IO ()<br />
}<br />
<br />
type Displacement = (Int, Int) -- horizontal and vertical displacement in points<br />
</haskell><br />
<br />
<br />
The constructor of each figure's type should just return a Figure record:<br />
<br />
<haskell><br />
circle :: Point -> Radius -> IO Figure<br />
rectangle :: Point -> Point -> IO Figure<br />
<br />
type Point = (Int, Int) -- point coordinates<br />
type Radius = Int -- circle radius in points<br />
</haskell><br />
<br />
<br />
We will "draw" figures by just printing their current parameters.<br />
Let's start with a simplified implementation of the 'circle' and 'rectangle'<br />
constructors, without actual 'move' support:<br />
<br />
<haskell><br />
circle center radius = do<br />
let description = " Circle at "++show center++" with radius "++show radius<br />
return $ Figure { draw = putStrLn description }<br />
<br />
rectangle from to = do<br />
let description = " Rectangle "++show from++"-"++show to<br />
return $ Figure { draw = putStrLn description }<br />
</haskell><br />
<br />
<br />
As you see, each constructor just returns a fixed 'draw' procedure that prints<br />
parameters with which the concrete figure was created. Let's test it:<br />
<br />
<haskell><br />
drawAll :: [Figure] -> IO ()<br />
drawAll figures = do putStrLn "Drawing figures:"<br />
mapM_ draw figures<br />
<br />
main = do figures <- sequence [circle (10,10) 5,<br />
circle (20,20) 3,<br />
rectangle (10,10) (20,20),<br />
rectangle (15,15) (40,40)]<br />
drawAll figures<br />
</haskell><br />
<br />
<br />
Now let's define "full-featured" figures that can actually be<br />
moved around. In order to achieve this, we should provide each figure<br />
with a mutable variable that holds each figure's current screen location. The<br />
type of this variable will be "IORef Point". This variable should be created in the figure constructor and manipulated in IO procedures (closures) enclosed in<br />
the Figure record:<br />
<br />
<haskell><br />
circle center radius = do<br />
centerVar <- newIORef center<br />
<br />
let drawF = do center <- readIORef centerVar<br />
putStrLn (" Circle at "++show center<br />
++" with radius "++show radius)<br />
<br />
let moveF (addX,addY) = do (x,y) <- readIORef centerVar<br />
writeIORef centerVar (x+addX, y+addY)<br />
<br />
return $ Figure { draw=drawF, move=moveF }<br />
<br />
<br />
rectangle from to = do<br />
fromVar <- newIORef from<br />
toVar <- newIORef to<br />
<br />
let drawF = do from <- readIORef fromVar<br />
to <- readIORef toVar<br />
putStrLn (" Rectangle "++show from++"-"++show to)<br />
<br />
let moveF (addX,addY) = do (fromX,fromY) <- readIORef fromVar<br />
(toX,toY) <- readIORef toVar<br />
writeIORef fromVar (fromX+addX, fromY+addY)<br />
writeIORef toVar (toX+addX, toY+addY)<br />
<br />
return $ Figure { draw=drawF, move=moveF }<br />
</haskell><br />
<br />
<br />
Now we can test the code which moves figures around:<br />
<br />
<haskell><br />
main = do figures <- sequence [circle (10,10) 5,<br />
rectangle (10,10) (20,20)]<br />
drawAll figures<br />
mapM_ (\fig -> move fig (10,10)) figures<br />
drawAll figures<br />
</haskell><br />
<br />
<br />
It's important to realize that we are not limited to including only IO actions<br />
in a record that's intended to simulate a C++/Java-style interface. The record can also include values, IORefs, pure functions - in short, any type of data. For example, we can easily add to the Figure interface fields for area and origin:<br />
<br />
<haskell><br />
data Figure = Figure { draw :: IO (),<br />
move :: Displacement -> IO (),<br />
area :: Double,<br />
origin :: IORef Point<br />
}<br />
</haskell><br />
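A sketch of how the new fields might be filled in by the circle constructor (the area formula is the standard one; the structure otherwise mirrors the constructors above):

<haskell>
import Data.IORef

type Point        = (Int, Int)
type Displacement = (Int, Int)

data Figure = Figure { draw   :: IO ()
                     , move   :: Displacement -> IO ()
                     , area   :: Double
                     , origin :: IORef Point
                     }

circle :: Point -> Int -> IO Figure
circle center radius = do
  centerVar <- newIORef center
  let drawF = do c <- readIORef centerVar
                 putStrLn (" Circle at " ++ show c
                           ++ " with radius " ++ show radius)
      moveF (dx, dy) = do (x, y) <- readIORef centerVar
                          writeIORef centerVar (x + dx, y + dy)
  return Figure { draw   = drawF
                , move   = moveF
                , area   = pi * fromIntegral radius ^ 2  -- a plain pure value
                , origin = centerVar }                   -- the IORef itself

main :: IO ()
main = do fig <- circle (10, 10) 5
          move fig (5, 5)
          readIORef (origin fig) >>= print   -- prints (15,15)
</haskell>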
<br />
<br />
<br />
== Exception handling (under development) ==<br />
<br />
Although Haskell provides a set of exception raising/handling features comparable to those in popular OOP languages (C++, Java, C#), this part of the language receives much less attention. The first reason is that you usually don't need to pay attention to it - most of the time it just works "behind the scenes". The second reason is that Haskell, lacking OOP inheritance, doesn't make it easy to subclass exception types, which limits the flexibility of exception handling.<br />
<br />
First, the Haskell RTS raises more exceptions than traditional languages do - pattern match failures, calls with invalid arguments (such as '''head []''') and computations whose results depend on the special values '''undefined''' and '''error "...."''' all raise their own exceptions:<br />
<br />
example 1:<br />
<haskell><br />
main = print (f 2)<br />
<br />
f 0 = "zero"<br />
f 1 = "one"<br />
</haskell><br />
<br />
example 2:<br />
<haskell><br />
main = print (head [])<br />
</haskell><br />
<br />
example 3:<br />
<haskell><br />
main = print (1 + (error "Value that wasn't initialized or cannot be computed"))<br />
</haskell><br />
<br />
Because such failures raise ordinary exceptions, they can be detected and handled like any other error condition.<br />
<br />
== Interfacing with C/C++ and foreign libraries (under development) ==<br />
<br />
While Haskell is great at algorithm development, speed isn't its strongest point. We can combine the best of both worlds, though, by writing the speed-critical parts of a program in C and the rest in Haskell. We just need a way to call C functions from Haskell and vice versa, and to marshal data between the two worlds.<br />
<br />
We also need to interact with the C world to use Windows/Linux APIs and to link with various libraries and DLLs. Even interfacing with other languages often requires going through C as the "common denominator". Appendix [6] to the Haskell'98 standard provides a complete description of interfacing with C.<br />
<br />
We will learn the FFI via a series of examples. These examples include C/C++ code, so a C/C++ compiler needs to be installed; the same is true if you need to include code written in C/C++ in your own program (a C/C++ compiler is not required if you just need to link with existing libraries providing APIs with the C calling convention). On Unix (and Mac OS?) systems, the system-wide default C/C++ compiler is typically used by the GHC installation. On Windows, no default compiler exists, so GHC is typically shipped with a C compiler, and you may find on the download page a GHC distribution with bundled C and C++ compilers. Alternatively, you may find and install a gcc/mingw32 version compatible with your GHC installation.<br />
<br />
If you need to make your C/C++ code as fast as possible, you may compile it with the Intel compilers instead of gcc. However, these compilers are not free; moreover, on Windows, code compiled by the Intel compilers may interact with GHC-compiled code only if one of them is put into a DLL (due to RTS incompatibility) [not checked! please correct this if I'm wrong].<br />
<br />
[http://www.haskell.org/haskellwiki/Applications_and_libraries/Interfacing_other_languages More links]:<br />
<br />
;[http://www.cse.unsw.edu.au/~chak/haskell/c2hs/ C-&gt;Haskell]<br />
:A lightweight tool for implementing access to C libraries from Haskell.<br />
<br />
;[[HSFFIG]]<br />
:Haskell FFI Binding Modules Generator (HSFFIG) is a tool that takes a C library include file (.h) and generates Haskell Foreign Functions Interface import declarations for items (functions, structures, etc.) the header defines.<br />
<br />
;[http://quux.org/devel/missingpy MissingPy]<br />
:MissingPy is really two libraries in one. At its lowest level, MissingPy is a library designed to make it easy to call into Python from Haskell. It provides full support for interpreting arbitrary Python code, interfacing with a good part of the Python/C API, and handling Python objects. It also provides tools for converting between Python objects and their Haskell equivalents. Memory management is handled for you, and Python exceptions get mapped to Haskell Dynamic exceptions. At a higher level, MissingPy contains Haskell interfaces to some Python modules.<br />
<br />
;[[HsLua]]<br />
:A Haskell interface to the Lua scripting language.<br />
<br />
=== Calling functions ===<br />
<br />
First, we will learn how to call C functions from Haskell and Haskell functions from C. The first example consists of three files:<br />
<br />
main.hs:<br />
<haskell><br />
{-# LANGUAGE ForeignFunctionInterface #-}<br />
<br />
main = do print "Hello from main"<br />
c_function<br />
<br />
haskell_function = print "Hello from haskell_function"<br />
<br />
foreign import ccall safe "prototypes.h"<br />
c_function :: IO ()<br />
<br />
foreign export ccall<br />
haskell_function :: IO ()<br />
</haskell><br />
<br />
evil.c:<br />
<haskell><br />
#include <stdio.h><br />
#include "prototypes.h"<br />
<br />
void c_function (void)<br />
{<br />
printf("Hello from c_function\n");<br />
haskell_function();<br />
} <br />
</haskell><br />
<br />
prototypes.h:<br />
<haskell><br />
extern void c_function (void);<br />
extern void haskell_function (void);<br />
</haskell><br />
<br />
It may be compiled and linked in one step by ghc:<br />
ghc --make main.hs evil.c<br />
<br />
Or, you may compile C module(s) separately and link in the .o files (this may be preferable if you use make and don't want to recompile unchanged sources; ghc's --make option provides smart recompilation only for .hs files):<br />
ghc -c evil.c<br />
ghc --make main.hs evil.o<br />
<br />
You may use gcc/g++ directly to compile your C/C++ files, but I recommend doing the linking via ghc because it adds the many libraries required for the execution of Haskell code. For the same reason, even if your main routine is written in C/C++, I recommend calling it from the Haskell function main - otherwise you'll have to explicitly init/shutdown the GHC RTS (run-time system).<br />
<br />
We use the "foreign import" specification to import foreign routines into our Haskell world, and "foreign export" to export Haskell routines into the external world. Note that the import statement creates a new Haskell symbol (from the external one), while the export statement uses a Haskell symbol previously defined. Technically speaking, both types of statements create a wrapper that converts the names and calling conventions from C to Haskell or vice versa.<br />
<br />
=== All about the "foreign" statement ===<br />
<br />
The "ccall" specifier in foreign statements means the use of the C (not C++!) calling convention. This means that if you want to write the external function in C++ (instead of C) you should add an '''extern "C"''' specification to its declaration - otherwise you'll get a linking error. Let's rewrite our first example to use C++ instead of C:<br />
<br />
prototypes.h:<br />
<haskell><br />
#ifdef __cplusplus<br />
extern "C" {<br />
#endif<br />
<br />
extern void c_function (void);<br />
extern void haskell_function (void);<br />
<br />
#ifdef __cplusplus<br />
}<br />
#endif<br />
</haskell><br />
<br />
Compile it via:<br />
<br />
ghc --make main.hs evil.cpp<br />
<br />
where evil.cpp is just a renamed copy of evil.c from the first example. Note that the new prototypes.h is written so that it can be compiled both as C and as C++ code. When it's included from evil.cpp, it's compiled as C++ code. When GHC compiles main.hs via the C compiler (enabled by the -fvia-C option), it also includes prototypes.h, but compiles it in C mode. That's why you need to specify .h files in "foreign" declarations - depending on which Haskell compiler you use, these files may be included to check the consistency of the C and Haskell declarations.<br />
<br />
The quoted part of the foreign statement may also be used to import or export a function under another name--for example,<br />
<br />
<haskell><br />
foreign import ccall safe "prototypes.h CFunction"<br />
c_function :: IO ()<br />
<br />
foreign export ccall "HaskellFunction"<br />
haskell_function :: IO ()<br />
</haskell><br />
<br />
specifies that the C function called CFunction will become known as the Haskell function c_function, while the Haskell function haskell_function will be known in the C world as HaskellFunction. It's required when the C name doesn't conform to Haskell naming requirements.<br />
<br />
Although the Haskell FFI standard describes many other calling conventions in addition to ccall (e.g. cplusplus, jvm, net), current Haskell implementations support only ccall and stdcall. The latter, also called the "Pascal" calling convention, is used to interface with WinAPI:<br />
<br />
<haskell><br />
foreign import stdcall unsafe "windows.h SetFileApisToOEM"<br />
setFileApisToOEM :: IO ()<br />
</haskell><br />
<br />
And finally, about the safe/unsafe specifier: a C function imported with the "unsafe" keyword is called directly and the Haskell runtime is stopped while the C function is executed (when there are several OS threads executing the Haskell program, only the current OS thread is delayed). This call doesn't allow recursively entering into the Haskell world by calling any Haskell function - the Haskell RTS is just not prepared for such an event. However, unsafe calls are as quick as calls in C world. It's ideal for "momentary" calls that quickly return back to the caller.<br />
<br />
When "safe" is specified, the C function is called in a safe environment - the Haskell execution context is saved, so it's possible to call back into Haskell and, if the C call takes a long time, another OS thread may be started to execute Haskell code (in threads other than the one that called the C code, of course). This has its own price, though - around 1000 CPU ticks per call.<br />
<br />
You can read more about interaction between FFI calls and Haskell concurrency in [7].<br />
<br />
=== Marshalling simple types ===<br />
<br />
Calling by itself is relatively easy; the real problem of interfacing languages with different data models is passing data between them. In this case, there is no guarantee that Haskell's Int is represented in memory the same way as C's int, nor Haskell's Double the same as C's double and so on. While on *some* platforms they are the same and you can write throw-away programs relying on these, the goal of portability requires you to declare imported and exported functions using special types described in the FFI standard, which are guaranteed to correspond to C types. These are:<br />
<br />
<haskell><br />
import Foreign.C.Types ( -- equivalent to the following C type:<br />
CChar, CUChar, -- char/unsigned char<br />
CShort, CUShort, -- short/unsigned short<br />
CInt, CUInt, CLong, CULong, -- int/unsigned/long/unsigned long<br />
CFloat, CDouble...) -- float/double<br />
</haskell><br />
<br />
Now we can import and export typeful C/Haskell functions:<br />
<haskell><br />
foreign import ccall unsafe "math.h"<br />
c_sin :: CDouble -> CDouble<br />
</haskell><br />
<br />
Note that pure C functions (those whose results depend only on their arguments) are imported without IO in their return type. The "const" specifier in C is not reflected in Haskell types, so the corresponding compiler checks are not performed.<br />
<br />
All these numeric types are instances of the same classes as their Haskell cousins (Ord, Num, Show and so on), so you may perform calculations on these data directly. Alternatively, you may convert them to native Haskell types. It's very typical to write simple wrappers around imported and exported functions just to provide interfaces having native Haskell types:<br />
<br />
<haskell><br />
-- |Type-conversion wrapper around c_sin<br />
sin :: Double -> Double<br />
sin = realToFrac . c_sin . realToFrac<br />
</haskell><br />
<br />
=== Memory management ===<br />
<br />
=== Marshalling strings ===<br />
<br />
<haskell><br />
import Foreign.C.String ( -- representation of strings in C<br />
CString, -- = Ptr CChar<br />
CStringLen) -- = (Ptr CChar, Int)<br />
</haskell><br />
<br />
<haskell><br />
foreign import ccall unsafe "string.h"<br />
c_strlen :: CString -> IO CSize -- CSize is defined in Foreign.C.Types and corresponds to size_t<br />
</haskell><br />
<br />
<haskell><br />
-- |Type-conversion wrapper around c_strlen <br />
strlen :: String -> Int<br />
strlen = ....<br />
</haskell><br />
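The elided wrapper could be completed along these lines. Since c_strlen is an IO action, the faithful wrapper also lives in IO; getting the pure 'String -> Int' signature from the text would require unsafePerformIO, which is discussed later:

<haskell>
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.String (CString, withCString)
import Foreign.C.Types (CSize)

foreign import ccall unsafe "string.h strlen"
  c_strlen :: CString -> IO CSize

-- Marshal the Haskell String into a temporary NUL-terminated C string,
-- call strlen on it, and convert the result back to a native Int.
strlen :: String -> IO Int
strlen s = withCString s (fmap fromIntegral . c_strlen)

main :: IO ()
main = strlen "hello" >>= print   -- prints 5
</haskell>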
<br />
=== Marshalling composite types ===<br />
<br />
C arrays may be manipulated in Haskell as [http://haskell.org/haskellwiki/Arrays#StorableArray_.28module_Data.Array.Storable.29 StorableArray].<br />
<br />
There is no built-in support for marshalling C structures or for using C constants in Haskell. These are implemented in the c2hs preprocessor, though.<br />
<br />
Binary marshalling (serialization) of data structures of any complexity is implemented in the Binary library.<br />
<br />
=== Dynamic calls ===<br />
<br />
=== DLLs ===<br />
''Because I don't have experience using DLLs, could someone write this section? Ultimately, we need to cover the following tasks:''<br />
* using DLLs of 3rd-party libraries (such as ziplib)<br />
* putting your own C code into DLL to use in Haskell<br />
* putting Haskell code into DLL which may be called from C code<br />
<br />
== Dark side of IO monad ==<br />
=== unsafePerformIO ===<br />
<br />
Programmers coming from an imperative language background often look for a way to execute IO actions inside a pure procedure. But what does this mean?<br />
Imagine that you're trying to write a procedure that reads the contents of a file with a given name, and you try to write it as a pure (non-IO) function:<br />
<br />
<haskell><br />
readContents :: Filename -> String<br />
</haskell><br />
<br />
Defining readContents as a pure function will certainly simplify the code that uses it. But it will also create problems for the compiler:<br />
<br />
# This call is not inserted in a sequence of "world transformations", so the compiler doesn't know at what exact moment you want to execute this action. For example, if the file has one kind of contents at the beginning of the program and another at the end - which contents do you want to see? You have no idea when (or even if) this function is going to get invoked, because Haskell sees this function as pure and feels free to reorder the execution of any or all pure functions as needed.<br />
# Attempts to read the contents of files with the same name can be factored (''i.e.'' reduced to a single call) despite the fact that the file (or the current directory) can be changed between calls. Again, Haskell considers all non-IO functions to be pure and feels free to omit multiple calls with the same parameters.<br />
<br />
So, implementing pure functions that interact with the Real World is<br />
considered to be Bad Behavior. Good boys and girls never do it ;)<br />
<br />
<br />
Nevertheless, there are (semi-official) ways to use IO actions inside<br />
of pure functions. As you should remember this is prohibited by<br />
requiring the RealWorld "baton" in order to call an IO action. Pure functions don't have the baton, but there is a special "magic" procedure that produces this baton from nowhere, uses it to call an IO action and then throws the resulting "world" away! It's a little low-level magic :) This very special (and dangerous) procedure is:<br />
<br />
<haskell><br />
unsafePerformIO :: IO a -> a<br />
</haskell><br />
<br />
Let's look at its (possible) definition:<br />
<br />
<haskell><br />
unsafePerformIO :: (RealWorld -> (a, RealWorld)) -> a<br />
unsafePerformIO action = let (a, world1) = action createNewWorld<br />
                         in a<br />
</haskell><br />
<br />
where 'createNewWorld' is an internal function producing a new value of<br />
the RealWorld type.<br />
<br />
Using unsafePerformIO, you can easily write pure functions that do<br />
I/O inside. But don't do this without a real need, and remember to<br />
follow this rule: the compiler doesn't know that you are cheating; it still<br />
considers each non-IO function to be a pure one. Therefore, all the usual<br />
optimization rules can (and will!) be applied to its execution. So<br />
you must ensure that:<br />
<br />
# The result of each call depends only on its arguments.<br />
# You don't rely on side effects of this function, which may not be executed if its results are not needed.<br />
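One idiom that follows these rules (to a first approximation) is the well-known top-level mutable variable trick - a sketch, not an endorsement. Note the NOINLINE pragma, which is needed precisely because of the optimization rules just described: without it, the compiler might duplicate the call and hand out several different refs.<br />
<br />
```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A global mutable counter, created "outside" of the main chain.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}  -- without this, GHC may duplicate the call

-- Ordinary IO actions can then use the global ref.
nextTicket :: IO Int
nextTicket = do
  n <- readIORef counter
  writeIORef counter (n + 1)
  return n
```
<br />
Every call to nextTicket reads and bumps the same shared counter, so successive calls yield 0, 1, 2, and so on.<br />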
<br />
<br />
Let's investigate this problem more deeply. Function evaluation in Haskell<br />
is determined by a value's necessity - the language computes only the values that are really required to calculate the final result. But what does this mean with respect to the 'main' function? To "calculate the final world's" value, you need to perform all the intermediate IO actions that are included in the 'main' chain. By using 'unsafePerformIO' we call IO actions outside of this chain. What guarantee do we have that they will be run at all? None. The only time they will be run is if running them is required to compute the overall function result (which in turn should be required to perform some action in the<br />
'main' chain). This is an example of Haskell's evaluation-by-need strategy. Now you should clearly see the difference:<br />
<br />
- An IO action inside an IO procedure is guaranteed to execute as long as<br />
it is (directly or indirectly) inside the 'main' chain - even when its result isn't used (because the implicit "world" value it returns ''will'' be used). You directly specify the order of the action's execution inside the IO procedure. Data dependencies are simulated via the implicit "world" values that are passed from each IO action to the next.<br />
<br />
- An IO action inside 'unsafePerformIO' will be performed only if the<br />
result of this operation is really used. The evaluation order is not<br />
guaranteed, and you should not rely on it (except when you're sure about<br />
whatever data dependencies may exist).<br />
<br />
<br />
I should also say that inside an 'unsafePerformIO' call you can organize<br />
a small internal chain of IO actions, with the help of the same binding<br />
operators and/or 'do' syntactic sugar we've seen above. For example, here's a particularly convoluted way to compute the integer that comes after zero:<br />
<br />
<haskell><br />
one :: Int<br />
one = unsafePerformIO $ do var <- newIORef 0<br />
                           modifyIORef var (+1)<br />
                           readIORef var<br />
</haskell><br />
<br />
and in this case ALL the operations in this chain will be performed as<br />
long as the result of the 'unsafePerformIO' call is needed. To ensure this,<br />
the actual 'unsafePerformIO' implementation evaluates the "world" returned<br />
by the 'action':<br />
<br />
<haskell><br />
unsafePerformIO action = let (a,world1) = action createNewWorld<br />
                         in (world1 `seq` a)<br />
</haskell><br />
<br />
(The 'seq' operation strictly evaluates its first argument before<br />
returning the value of the second one).<br />
<br />
<br />
=== inlinePerformIO ===<br />
<br />
inlinePerformIO has the same definition as unsafePerformIO, but with the addition of an INLINE pragma:<br />
<haskell><br />
-- | Just like unsafePerformIO, but we inline it. Big performance gains as<br />
-- it exposes lots of things to further inlining<br />
{-# INLINE inlinePerformIO #-}<br />
inlinePerformIO action = let (a, world1) = action createNewWorld<br />
                         in (world1 `seq` a)<br />
</haskell><br />
<br />
Semantically, inlinePerformIO = unsafePerformIO, insofar as either of them has any semantics at all.<br />
<br />
The difference, of course, is that inlinePerformIO is even less safe than<br />
unsafePerformIO. While GHC will try not to duplicate or common up<br />
different uses of unsafePerformIO, it aggressively inlines<br />
inlinePerformIO. So you can really only use it where the IO content is<br />
properly pure, like reading from an immutable memory buffer (as<br />
in the case of ByteStrings). Things like allocating new buffers, however,<br />
should not be done inside inlinePerformIO, since that can easily be<br />
floated out and performed just once for the whole program; you would end up<br />
with many things sharing the same buffer, which would be bad.<br />
<br />
So the rule of thumb is that IO actions wrapped in unsafePerformIO have<br />
to be externally pure, while with inlinePerformIO they have to be really,<br />
really pure, or it will all go horribly wrong.<br />
<br />
That said, here's some really hairy code. This should frighten any pure<br />
functional programmer...<br />
<br />
<haskell><br />
write :: Int -> (Ptr Word8 -> IO ()) -> Put ()<br />
write !n body = Put $ \c buf@(Buffer fp o u l) -><br />
    if n <= l<br />
      then write' c fp o u l<br />
      else write' (flushOld c n fp o u) (newBuffer c n) 0 0 0<br />
<br />
  where {-# NOINLINE write' #-}<br />
        write' c !fp !o !u !l =<br />
          -- warning: this is a tad hardcore<br />
          inlinePerformIO<br />
            (withForeignPtr fp<br />
              (\p -> body $! (p `plusPtr` (o+u))))<br />
          `seq` c () (Buffer fp o (u+n) (l-n))<br />
</haskell><br />
<br />
It's used like this:<br />
<haskell><br />
word8 w = write 1 (\p -> poke p w)<br />
</haskell><br />
<br />
This does not adhere to my rule of thumb above. Don't ask exactly why we<br />
claim it's safe :-) (and if anyone really wants to know, ask Ross<br />
Paterson who did it first in the Builder monoid)<br />
<br />
=== unsafeInterleaveIO ===<br />
<br />
But there is an even stranger operation called 'unsafeInterleaveIO' that<br />
gets the "official baton", makes its own pirate copy, and then runs<br />
an "illegal" relay-race in parallel with the main one! I can't talk further<br />
about its behavior without causing grief and indignation, so it's no surprise<br />
that this operation is widely used in countries that are hotbeds of software piracy such as Russia and China! ;) Don't even ask me - I won't say anything more about this dirty trick I use all the time ;)<br />
<br />
One can use unsafePerformIO (not unsafeInterleaveIO) to perform I/O<br />
operations not in a predefined order, but on demand. For example, the<br />
following code:<br />
<br />
<haskell><br />
do let c = unsafePerformIO getChar<br />
   do_proc c<br />
</haskell><br />
<br />
will perform the getChar I/O call only when the value of c is actually<br />
required by the calling code, i.e. the call will be performed lazily,<br />
just like any ordinary Haskell computation.<br />
<br />
Now imagine the following code:<br />
<br />
<haskell><br />
do let s = [unsafePerformIO getChar, unsafePerformIO getChar, unsafePerformIO getChar]<br />
   do_proc s<br />
</haskell><br />
<br />
The three chars inside this list will be computed on demand too, which<br />
means that their values will depend on the order in which they are<br />
consumed. That is not usually what we want :)<br />
<br />
<br />
unsafeInterleaveIO solves this problem - it performs I/O only on<br />
demand, but lets you define the exact *internal* execution order for parts<br />
of your data structure. That is why I wrote that unsafeInterleaveIO makes<br />
an illegal copy of the baton :)<br />
<br />
First, unsafeInterleaveIO takes an (IO a) action as a parameter and<br />
returns a deferred (IO a) action:<br />
<br />
<haskell><br />
do str <- unsafeInterleaveIO myGetContents<br />
</haskell><br />
<br />
Second, unsafeInterleaveIO doesn't perform any action immediately; it<br />
only creates a box of type 'a' which, when its value is requested, will<br />
perform the action given as its parameter.<br />
<br />
Third, this action by itself may compute the whole value immediately<br />
or... use unsafeInterleaveIO again to defer the calculation of some<br />
sub-components:<br />
<br />
<haskell><br />
myGetContents = do<br />
   c <- getChar<br />
   s <- unsafeInterleaveIO myGetContents<br />
   return (c:s)<br />
</haskell><br />
<br />
This code will be executed only at the moment when the value of str is<br />
actually demanded. At that moment, getChar will be performed (with its<br />
result bound to c) and one more lazy IO box will be created - for s.<br />
This box again contains a link to the myGetContents call.<br />
<br />
Then a list cell is returned that contains the one char just read and a<br />
link to the myGetContents call as the way to compute the rest of the<br />
list. Only when the next value in the list is required will this<br />
operation be performed again.<br />
<br />
As a final result, we get the inability to read the second char in the list<br />
before the first one, yet reading remains lazy overall. Bingo!<br />
<br />
<br />
PS: of course, real code should include EOF checking. Also note that<br />
you can read many chars/records on each call:<br />
<br />
<haskell><br />
myGetContents = do<br />
   c <- replicateM 512 getChar<br />
   s <- unsafeInterleaveIO myGetContents<br />
   return (c++s)<br />
</haskell><br />
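A sketch of such an EOF-aware variant, using isEOF from System.IO (back to one char per step, for clarity):<br />
<br />
```haskell
import System.IO (isEOF)
import System.IO.Unsafe (unsafeInterleaveIO)

-- Like the versions above, but terminates the list at end of input
-- instead of blocking forever; each tail is again deferred lazily.
myGetContents :: IO String
myGetContents = do
  eof <- isEOF
  if eof
    then return []
    else do c <- getChar
            s <- unsafeInterleaveIO myGetContents
            return (c : s)
```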
<br />
== A safer approach: the ST monad ==<br />
<br />
We said earlier that we can use unsafePerformIO to perform computations that are totally pure but nevertheless interact with the Real World in some way. There is, however, a better way! One that remains totally pure and yet allows the use of references, arrays, and so on -- and it's done using, you guessed it, type magic. This is the ST monad.<br />
<br />
The ST monad's version of unsafePerformIO is called runST, and it has a very unusual type.<br />
<haskell><br />
runST :: (forall s . ST s a) -> a<br />
</haskell><br />
<br />
The s variable in the ST monad is the state type. Moreover, all the fun mutable stuff available in the ST monad is quantified over s:<br />
<haskell><br />
newSTRef :: a -> ST s (STRef s a)<br />
newArray_ :: Ix i => (i, i) -> ST s (STArray s i e)<br />
</haskell><br />
<br />
So why does runST have such a funky type? Let's see what would happen if we wrote<br />
<haskell><br />
makeSTRef :: a -> STRef s a<br />
makeSTRef a = runST (newSTRef a)<br />
</haskell><br />
This fails, because newSTRef a doesn't work for all state types s -- it only works for the s from the return type STRef s a.<br />
<br />
This is all sort of wacky, but the result is that you can only run an ST computation where the output type is functionally pure, and makes no references to the internal mutable state of the computation. The ST monad doesn't have access to I/O operations like writing to the console, either -- only references, arrays, and suchlike that come in handy for pure computations.<br />
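For example, here is a function that is pure from the outside but imperatively accumulates a sum in a mutable reference on the inside (a small sketch using Data.STRef):<br />
<br />
```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef, newSTRef, readSTRef)

-- The mutable reference never escapes the ST computation, so the
-- result is an honest pure value.
sumST :: Num a => [a] -> a
sumST xs = runST $ do
  acc <- newSTRef 0                          -- allocate a mutable cell
  mapM_ (\x -> modifySTRef acc (+ x)) xs     -- imperative-style loop
  readSTRef acc                              -- final, pure result
```
<br />
Callers see nothing but an ordinary pure function: sumST [1..10] is simply 55.<br />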
<br />
Important note -- the state type doesn't actually mean anything. We never have a value of type s, for instance. It's just a way of getting the type system to do the work of ensuring purity for us, with smoke and mirrors.<br />
<br />
It's really just type system magic: secretly, on the inside, runST runs a computation with the real world baton just like unsafePerformIO. Their internal implementations are almost identical: in fact, there's a function<br />
<haskell><br />
stToIO :: ST RealWorld a -> IO a<br />
</haskell><br />
<br />
The difference is that ST uses type system magic to forbid unsafe behavior, like extracting mutable objects from their safe ST wrapping, while still allowing purely functional results to be computed with all the handy access to mutable references and arrays.<br />
<br />
So here's how we'd rewrite our function using unsafePerformIO from above:<br />
<br />
<haskell><br />
oneST :: ST s Int -- note that this works correctly for any s<br />
oneST = do var <- newSTRef 0<br />
           modifySTRef var (+1)<br />
           readSTRef var<br />
<br />
one :: Int<br />
one = runST oneST<br />
</haskell><br />
<br />
== Welcome to the machine: the actual [[GHC]] implementation ==<br />
<br />
A little disclaimer: I should say that I'm not describing<br />
here exactly what a monad is (I don't even completely understand it myself) and my explanation shows only one _possible_ way to implement the IO monad in<br />
Haskell. For example, the hbc Haskell compiler implements IO monad via<br />
continuations. I also haven't said anything about exception handling,<br />
which is a natural part of the "monad" concept. You can read the "All About<br />
Monads" guide to learn more about these topics.<br />
<br />
But there is some good news: first, the IO monad understanding you've just acquired will work with any implementation and with many other monads. You just can't work with RealWorld<br />
values directly.<br />
<br />
Second, the IO monad implementation described here is really used in the GHC,<br />
yhc/nhc (Hugs/jhc, too?) compilers. Here is the actual IO definition<br />
from the GHC sources:<br />
<br />
<haskell><br />
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))<br />
</haskell><br />
<br />
It uses the "State# RealWorld" type instead of our RealWorld, it uses the "(# #)" strict tuple for optimization, and it adds an IO data constructor<br />
around the type. Nevertheless, there are no significant changes from the standpoint of our explanation. Knowing the principle of "chaining" IO actions via fake "state of the world" values, you can now easily understand and write low-level implementations of GHC I/O operations.<br />
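For instance, return and bind for this type are defined in GHC's base library along roughly these lines (a sketch of the code in GHC.Base; it needs the MagicHash and UnboxedTuples extensions, and the exact module names may differ between GHC versions):<br />
<br />
```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts (RealWorld, State#)
import GHC.IO (IO (..))

-- Wrap a value without touching the world.
returnIO :: a -> IO a
returnIO x = IO (\s -> (# s, x #))

-- Thread the world through both actions, in order.
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO (\s -> case m s of (# s', a #) -> unIO (k a) s')

-- Unwrap the IO constructor to get at the state-passing function.
unIO :: IO a -> State# RealWorld -> (# State# RealWorld, a #)
unIO (IO m) = m
```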
<br />
<br />
=== The [[Yhc]]/nhc98 implementation ===<br />
<br />
<haskell><br />
data World = World<br />
newtype IO a = IO (World -> Either IOError a)<br />
</haskell><br />
<br />
This implementation makes the "World" disappear somewhat: it returns either a<br />
result of type "a" or, if an error occurs, an "IOError". The World can be left off the right-hand side of the function only because the compiler knows special things about the IO type, and won't over-optimise it.<br />
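To make the chaining concrete, a bind for this representation could look like the following sketch (primed names to avoid clashing with the real IO type, and a simplified error type standing in for IOError):<br />
<br />
```haskell
-- Standalone sketch of the Yhc/nhc98-style representation.
data World = World

type IOErr = String  -- stands in for the real IOError

newtype IO' a = IO' (World -> Either IOErr a)

returnIO' :: a -> IO' a
returnIO' x = IO' (\_ -> Right x)

-- An error from the first action short-circuits the whole chain;
-- otherwise the (conceptual) world is passed on to the continuation.
bindIO' :: IO' a -> (a -> IO' b) -> IO' b
bindIO' (IO' m) k = IO' $ \w ->
  case m w of
    Left e  -> Left e
    Right a -> let IO' m' = k a in m' w
```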
<br />
<br />
== Further reading ==<br />
<br />
[1] This tutorial is largely based on Simon Peyton Jones' paper [http://research.microsoft.com/%7Esimonpj/Papers/marktoberdorf Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell]. I hope that my tutorial improves on his original explanation of the Haskell I/O system and brings it closer to the point of view of beginning Haskell programmers. But if you need to learn about concurrency, exceptions and the FFI in Haskell/GHC, the original paper is the best source of information.<br />
<br />
[2] You can find more information about concurrency, FFI and STM at the [[GHC/Concurrency#Starting points]] page.<br />
<br />
[3] The [[Arrays]] page contains exhaustive explanations about using mutable arrays.<br />
<br />
[4] Look also at the [[Tutorials#Using_monads|Using monads]] page, which contains tutorials and papers really describing these mysterious monads :)<br />
<br />
[5] An explanation of the basic monad functions, with examples, can be found in the reference guide [http://members.chello.nl/hjgtuyl/tourdemonad.html A tour of the Haskell Monad functions], by Henk-Jan van Tuyl.<br />
<br />
[6] Official FFI specifications can be found on the page [http://www.cse.unsw.edu.au/~chak/haskell/ffi/ The Haskell 98 Foreign Function Interface 1.0: An Addendum to the Haskell 98 Report]<br />
<br />
[7] Using the FFI in multithreaded programs is described in the paper [http://www.haskell.org/~simonmar/bib/concffi04_abstract.html Extending the Haskell Foreign Function Interface with Concurrency]<br />
<br />
Do you have more questions? Ask in the [http://www.haskell.org/mailman/listinfo/haskell-cafe haskell-cafe mailing list].<br />
<br />
== To-do list ==<br />
<br />
If you are interested in adding more information to this manual, please add your questions/topics here.<br />
<br />
Topics:<br />
* fixIO and 'mdo'<br />
* Q monad<br />
<br />
Questions:<br />
* split '>>='/'>>'/return section and 'do' section, more examples of using binding operators<br />
* IORef detailed explanation (==const*), usage examples, syntax sugar, unboxed refs<br />
* control structures developing - much more examples<br />
* unsafePerformIO usage examples: global variable, ByteString, other examples<br />
* actual GHC implementation - how to write low-level routines on example of newIORef implementation<br />
<br />
This manual is a collective work, so feel free to add more information to it yourself. The final goal is to collectively develop a comprehensive manual for using the IO monad.<br />
<br />
----<br />
<br />
[[Category:Tutorials]]</div>Conalhttps://wiki.haskell.org/Functional_Reactive_ProgrammingFunctional Reactive Programming2011-06-30T23:08:07Z<p>Conal: /* Material */ tightened Conal's pointers</p>
<hr />
<div>Functional Reactive Programming (FRP) integrates time flow and compositional events into functional programming. This provides an elegant way to express computation in domains such as interactive animations, robotics, computer vision, user interfaces, and simulation.<br />
<br />
== Libraries ==<br />
* [http://conal.net/fran/ Fran]<br />
* [[Grapefruit]]<br />
* [[Reactive]]<br />
* [[DataDriven]]<br />
* [[Yampa]]<br />
* [[WxFruit|wxFruit]]<br />
* [http://hackage.haskell.org/package/elerea Elerea]<br />
* [[Reactive-banana|reactive-banana]]<br />
<br />
* [http://hackage.haskell.org/packages/archive/pkg-list.html#cat:frp Hackage packages in the category FRP]<br />
<br />
== Material ==<br />
* Conal Elliott’s FRP-related [http://conal.net/papers/frp.html papers] and [http://conal.net/blog/tag/functional-reactive-programming/ blog posts]<br />
* [[Grapefruit#Publications and talks|Grapefruit-related publications and talks]]<br />
* [http://www.haskell.org/yale/publications.html The Yale Haskell group’s latest publications] (mostly related to FRP)<br />
* [http://apfelmus.nfshost.com/blog.html#functional-reactive-programming-frp FRP section] of Heinrich Apfelmus' blog<br />
<br />
== People ==<br />
* [http://apfelmus.nfshost.com/ Heinrich Apfelmus]<br />
* [http://www.apocalypse.org/pub/u/antony/work/index.html Antony Courtney]<br />
* [http://conal.net/ Conal Elliott]<br />
* [http://sgate.emt.bme.hu/patai/ Patai Gergely]<br />
* [http://www.ittc.ku.edu/~andygill Andy Gill]<br />
* Liwen Huang<br />
* Paul Hudak<br />
* [http://www.tu-cottbus.de/fakultaet1/de/programmiersprachen-compilerbau/lehrstuhl/mitarbeiter/wolfgang-jeltsch.html Wolfgang Jeltsch]<br />
* [http://www.cs.nott.ac.uk/~nhn/ Henrik Nilsson]<br />
* [http://mcis.western.edu/~jpeterson/ John Peterson]<br />
<br />
== Blog articles ==<br />
* [http://lukepalmer.wordpress.com/2008/11/28/relative-time-frp/ Relative time FRP]<br />
* Several on [http://conal.net/blog Conal's blog]<br />
* [http://blog.edwardamsden.com/2011/03/demonstrating-time-leak-in-arrowized.html Demonstrating a Time Leak in Arrowized FRP]</div>Conalhttps://wiki.haskell.org/Bot/VersionsBot/Versions2011-05-06T20:21:18Z<p>Conal: Undo spam revision 39770 by Kate resaz (Talk)</p>
<hr />
<div>== Version 0 ==<br />
<br />
=== Version 0.1 ===<br />
<br />
* Updated according to blog post [http://conal.net/blog/posts/applicative-bots Applicative bots].<br />
<br />
=== Version 0.0 ===<br />
<br />
* New project. See [http://conal.net/blog/tag/bot/ some related blog posts].</div>Conalhttps://wiki.haskell.org/UnambUnamb2011-02-18T16:45:23Z<p>Conal: /* Abstract */ removed stray newline</p>
<hr />
<div>[[Category:Packages]]<br />
[[Category:Concurrency]]<br />
<br />
== Abstract ==<br />
<br />
'''unamb''' is a package containing the ''unambiguous choice'' operator <hask>unamb</hask>, which wraps thread racing up in a purely functional, semantically simple wrapper.<br />
Given any two arguments <hask>u</hask> and <hask>v</hask> that agree unless bottom, the value of <hask>unamb u v</hask> is the more terminating of <hask>u</hask> and <hask>v</hask>.<br />
Operationally, the value of <hask>unamb u v</hask> becomes available when the earlier of <hask>u</hask> and <hask>v</hask> does.<br />
The agreement precondition ensures unamb's referential transparency.<br />
For more info about <hask>unamb</hask> and its use, see the paper ''[http://conal.net/papers/push-pull-frp/ Push-pull functional reactive programming]'', sections 10 and 11.<br />
<br />
<hask>unamb</hask> was originally a part of [[Reactive]]. I moved it to its own package in order to encourage experimentation.<br />
<br />
Besides this wiki page, here are more ways to find out about unamb:<br />
* Visit the [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/unamb Hackage page] for library documentation and to download & install.<br />
* Read [http://conal.net/blog/tag/unamb/ related blog posts].<br />
* Or install with <tt>cabal install unamb</tt>.<br />
* Get the code repository: <tt>darcs get http://code.haskell.org/unamb</tt>.<br />
<!-- * See the [[unamb/Versions| version history]]. --><br />
<br />
<!-- Please leave comments at the [[Talk:unamb|Talk page]]. --><br />
See also the [[lub]] package, which extends unamb's usefulness with non-flat types.<br />
<br />
== Issues ==<br />
<br />
Although semantically very simple, unamb has been quite tricky to implement correctly and efficiently.<br />
<br />
As of version 0.1.1, unamb requires ghc 6.10 or better.<br />
<br />
As of version 0.1.6, unamb correctly handles recursive termination of sub-efforts and automatic restarting, but only with the GHC RTS fixes that first appeared (stably, by my testing) in GHC HEAD version 6.11.20090115.<br />
The problems and solution can be found in a few places:<br />
* Email thread: ''[http://n2.nabble.com/problem-with-unamb----doesn%27t-kill-enough-threads-tt1674917.html Problem with unamb -- doesn't kill enough threads]''<br />
* Blog post: ''[http://conal.net/blog/posts/smarter-termination-for-thread-racing/ Smarter termination for thread racing]''<br />
* Email thread: ''[http://n2.nabble.com/Re%3A-black-hole-detection-and-concurrency-td2016290.htm Black hole detection and concurrency]''<br />
<br />
unamb seems to be working well in version 0.2.2, under GHC 6.10.3.</div>Conalhttps://wiki.haskell.org/BayHac2011BayHac20112011-02-07T19:00:03Z<p>Conal: /* Attendees */</p>
<hr />
<div>[[Image:Haskell-and-dojo.png]]<br />
<br />
<span style="color:#930;font-weight:bold;">Haskell Project Hackathon</span><br />
<br />
''Friday, February 11th, 2pm – Sunday, February 13th 2pm''<br />
<br />
Come join a group of Haskell hackers. Bring your own projects, or work on ours: It is more fun to do it in a group!<br />
<br />
<br />
<span style="color:#930;font-weight:bold;">Learn Haskell Workshop </span><br />
<br />
''Saturday, February 12th, 10am – 4pm''<br />
<br />
Come learn Haskell! No prior Haskell experience needed. Bring your laptop and a willingness to have your brain stretched in enjoyable ways. We’ll be doing some web programming in Haskell.<br />
----<br />
{|<br />
|When:<br />
|Feb 11-13, 2011<br />
|-<br />
|Where:<br />
|[http://www.hackerdojo.com/ Hacker Dojo], 140A South Whisman Road, Mountain View, CA ([http://maps.google.com/maps/place?cid=2122486601784397611&q=hacker+dojo&gl=us Google Map])<br />
|-<br />
|Cost:<br />
|Free<br />
|-<br />
|Sign up:<br />
|[https://spreadsheets.google.com/viewform?formkey=dEc5cW1fa3hjQ3JheVF5dHAwdTk0eGc6MQ sign-up form]<br />
|-<br />
|News and Discussion:<br />
|[http://groups.google.com/group/bayhac BayHac Google Group]<br />
|}<br />
<br />
----<br />
<br />
''As of Feb 1st, there are over 60 people signed up!''<br />
<br />
== Attendees == <br />
<br />
If you're attending, please put your name up!<br />
<br />
* [http://www.serpentine.com/blog Bryan O'Sullivan] - organizer<br />
* [http://www.ozonehouse.com/mark/ Mark Lentczner] - organizer<br />
* [http://hacks.yi.org/~as/ Austin Seipp]<br />
* [http://blog.johantibell.com/ Johan Tibell]<br />
* [http://blog.gregweber.info/ Greg Weber]<br />
* [http://conal.net/ Conal Elliott]<br />
<br />
== Projects == <br />
<br />
If you're going to attend and plan on working on a project, please put it up here so other interested hackers can see what sorts of projects are afoot.<br />
<br />
=== Compiler plugins for GHC ===<br />
<br />
I plan on helping integrate and maintain compiler plugins for GHC. My work at the hackathon will be to try and get the current patch (allowing you to write Core optimizations) applied to GHC HEAD. Following that, my plan is to extend the functionality, so you can write Cmm passes as well.<br />
<br />
There's a wiki-page describing this on-going work: http://hackage.haskell.org/trac/ghc/wiki/NewPlugins<br />
<br />
* Hackers: Austin Seipp<br />
<br />
=== Hashing-based containers ===<br />
<br />
I plan to finish a first release of my new HashMap container, based on the hash array mapped trie data structure.<br />
<br />
* Hackers: Johan Tibell<br />
<br />
=== music sequencer ===<br />
<br />
It's a music sequencer in Haskell, rather different from other sequencers out there. I'm (elaforge) planning on lazifying score interpretation, or, if I'm done with that by the time the hackathon comes up, picking something from the list below:<br />
<br />
* language / score design<br />
* performance tuning (garbage reduction, parallelization, ...)<br />
* port to linux (i.e. write alsa midi or jack midi bindings)<br />
* alternate backends (osc, csound, lilypond, ...)<br />
* GUI / UI<br />
* if someone else is interested, whatever it is they're interested in!<br />
<br />
* Hackers: Evan Laforge<br />
<br />
=== project watcher ===<br />
Code to watch for changes to a project and automatically re-compile, run tests, etc. I have this working for me on Linux now. I would like to release this as an easy-to-use, platform-independent package.<br />
<br />
* Hackers: Greg Weber<br />
<br />
=== Yesod ===<br />
I will probably work on the MongoDB backend for Persistent. If anyone has questions about web development with Yesod, or wants to hack on the framework, let me know.<br />
<br />
* Hackers: Greg Weber<br />
<br />
=== Projects listed in signups ===<br />
<br />
* GHC<br />
** Compiler Plugins<br />
** LLVM backend<br />
* Haskell Platform<br />
* Yices-Painless EDSL<br />
* network package re-write<br />
* BLAS bindings<br />
* iterIO/haskellDB<br />
* music sequencer<br />
* Yesod<br />
* binary file/block device analyzer/editor<br />
* solidsnack<br />
* Erlang in Haskell<br />
* Graphics<br />
* NoSQL DB<br />
* HWordNet<br />
* barley<br />
* machine learning<br />
* natural language processing<br />
* matrix and tensor manipulation<br />
* DSL for microcontrollers<br />
* hledger<br />
* darcsden<br />
* theorem prover<br />
* interface to Swiss Ephemeris<br />
<br />
<br />
== Links ==<br />
<br />
* [http://wiki.hackerdojo.com/w/page/32992961/Haskell-Hackathon-2011 Hacker Dojo wiki page] for the event</div>Conalhttps://wiki.haskell.org/Tangible_ValueTangible Value2010-10-02T16:43:24Z<p>Conal: /* Abstract */ Mention GuiTV broken</p>
<hr />
<div>[[Category:Interfaces]]<br />
[[Category:IO]]<br />
[[Category:Arrow]]<br />
[[Category:Libraries]]<br />
[[Category:Packages]]<br />
<br />
== Abstract ==<br />
<br />
<br />
'''TV''' is a library for composing ''tangible values'' ("TVs"), i.e., values that carry along external interfaces. In particular, TVs can be composed to create new TVs, ''and'' they can be directly executed with a friendly GUI, a process that reads and writes character streams, or many other kinds of interfaces. Values and interfaces are ''combined'' for direct use, and ''separable'' for composition. This combination makes for software that is ''ready to use and ready to reuse''.<br />
<br />
TV can be thought of as a simple functional formulation of the Model-View-Controller pattern. (My thanks to an anonymous ICFP referee for pointing out this connection.) The value part of a TV is the ''model'', and the "interface" part, or "output" as it is called below, is the ''viewer''. Outputs are built up compositionally from other outputs and from inputs (the ''controllers''), as described below.<br />
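This model/viewer pairing can be illustrated with a tiny self-contained sketch. (The names <hask>ToyTV</hask>, <hask>runToy</hask>, and <hask>greeting</hask> are invented here for illustration only; they are not part of the TV library, whose real types are considerably richer.)<br />
<br />
```haskell
-- Toy sketch only (invented names, not TV's actual definitions):
-- a "tangible value" pairs a plain value (the model) with a way to
-- present it (the viewer).
data ToyTV a = ToyTV { render :: a -> String, value :: a }

-- "Running" a toy TV applies its interface to its value.
runToy :: ToyTV a -> String
runToy (ToyTV r v) = r v

-- A value that carries a trivial textual interface along with it.
greeting :: ToyTV String
greeting = ToyTV ("greeting: " ++) "hello"
```
Here <hask>runToy greeting</hask> yields <tt>"greeting: hello"</tt>; TV generalizes this pairing so the same value can be run under GUIs, stream IO, and other interfaces.<br />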
<br />
Besides this wiki page, here are more ways to learn about TV:<br />
* Visit the [http://hackage.haskell.org/package/TV Hackage page] for library documentation and to download & install.<br />
* Or install with <tt>cabal install TV</tt>.<br />
* See the use of TV in [[Eros]].<br />
<br />
As of version 0.2, I have moved the GUI functionality out of TV and into a small new package [[GuiTV]]. I moved it out to eliminate the dependency of core TV on [[Phooey]] and hence on [[wxHaskell]], as the latter can be difficult to install. The GUI examples below require [[GuiTV]].<br />
<br />
GuiTV (which would have been better named "wxTV") has bit-rotted. There is also a very similar [http://hackage.haskell.org/package/GtkTV package to generate Gtk-based GUIs].<br />
<br />
I'd love to hear your comments at the [[Talk:TV]] page.<br />
<br />
== First Example ==<br />
<br />
Here is a tangible reverse function:<br />
<br />
<haskell><br />
reverseT :: CTV (String -> String)<br />
reverseT = tv (oTitle "reverse" defaultOut) reverse<br />
</haskell><br />
<br />
The <hask>tv</hask> function combines an interface and a value. In this example, the interface is the default for string functions, wrapped with the title "reverse".<br />
<br />
TV "interfaces" are more than just GUIs. Here are two different renderings of <hask>reverseT</hask>. (User input is shown <tt><b><i>in italics</i></b></tt> in the <hask>runIO</hask> version).<br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI reverseT</hask> !! <hask>runIO reverseT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:reverseT.png]]<br />
| style="padding:20px;" |<br />
*Examples> runIO reverseT<br />
reverse: <b><i>Hello, reversible world.</i></b><br />
.dlrow elbisrever ,olleH<br />
*Examples> <br />
|}<br />
</blockquote><br />
<br />
We'll see [[#The_general_story|later]] that "<hask>runUI</hask>" and "<hask>runIO</hask>" are both type-specialized synonyms for a more general function.<br />
<br />
== Outputs ==<br />
<br />
What I've been calling an "interface" is a value of type <hask>COutput a</hask> for a type <hask>a</hask>. For instance, for <hask>reverseT</hask>, <hask>a</hask> is <hask>String->String</hask>. The reason for the <hask>C</hask> prefix is explained below. At the heart of TV is a small algebra for constructing these outputs. We've already seen one output function, <hask>oTitle</hask>. Another one is <hask>showOut</hask>, which is an output for all <hask>Show</hask> types. For instance,<br />
<br />
<haskell><br />
total :: Show a => COutput a<br />
total = oTitle "total" showOut<br />
</haskell><br />
<br />
== Inputs and function-valued outputs ==<br />
<br />
Just as an output is a way to ''deliver'' (or ''consume'') a value, an "input" is a way to ''obtain'' (or ''produce'') a value. For example, here are two inputs, each built from the default input for its type and given a title.<br />
<br />
<haskell><br />
apples, bananas :: CInput Int<br />
apples = iTitle "apples" defaultIn<br />
bananas = iTitle "bananas" defaultIn<br />
</haskell><br />
<br />
Now for the fun part. Let's combine the <hask>apples</hask> and <hask>bananas</hask> inputs and the <hask>total</hask> output to make a ''function-valued'' output.<br />
<br />
<haskell><br />
shoppingO :: COutput (Int -> Int -> Int)<br />
shoppingO = oTitle "shopping list" $<br />
oLambda apples (oLambda bananas total)<br />
</haskell><br />
<br />
And a TV:<br />
<haskell><br />
shopping :: CTV (Int -> Int -> Int)<br />
shopping = tv shoppingO (+)<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shopping</hask> !! <hask>runIO shopping</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shopping.png]]<br />
| style="padding:20px;" |<br />
shopping list: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== A variation ==<br />
<br />
Here is an uncurried variation:<br />
<br />
<haskell><br />
shoppingPr :: CTV ((Int,Int) -> Int)<br />
shoppingPr = tv ( oTitle "shopping list -- uncurried" $ <br />
oLambda (iPair apples bananas) total )<br />
(uncurry (+))<br />
</haskell><br />
However, there's a much more elegant formulation, using [http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3AcurryA <hask>uncurryA</hask>] and [http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Data-FunArr.html#v%3A%24%24 <hask>$$</hask>] from [[DeepArrow]]:<br />
<haskell><br />
shoppingPr = uncurryA $$ shopping<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shoppingPr</hask> !! <hask>runIO shoppingPr</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shoppingPr.png]]<br />
| style="padding:20px;" |<br />
shopping list -- uncurried: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== The general story ==<br />
<br />
TVs, outputs, and inputs are not restricted to GUIs and IO. In general, they are parameterized by the mechanics of "transmitting values", i.e., delivering ("sinking") output and gathering ("sourcing") input.<br />
<br />
<haskell><br />
data Input src a<br />
data Output src snk a<br />
type TV src snk a<br />
</haskell><br />
<br />
The "sources" will be [[applicative functor]]s (AFs), and the "sinks" will be contravariant functors.<br />
<br />
In the examples above, we've used two different mechanisms, namely [[Phooey]]'s <hask>UI</hask> AF and <hask>IO</hask>. The sinks are counterparts <hask>IU</hask> and <hask>OI</hask>.<br />
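To make the "contravariant" part concrete, here is a minimal, pure stand-in for a sink. (The <hask>Sink</hask> type is invented for illustration and is much simpler than TV's actual <hask>IU</hask> and <hask>OI</hask>.)<br />
<br />
```haskell
import Data.Functor.Contravariant (Contravariant (..))

-- A toy sink: a consumer that renders a value to a String.
-- (Invented type; TV's real sinks deliver to GUIs or IO instead.)
newtype Sink a = Sink { sink :: a -> String }

-- Sinks are contravariant functors: to feed a 'Sink a' a value of
-- type 'b', first map the 'b' back into an 'a'.
instance Contravariant Sink where
  contramap f (Sink k) = Sink (k . f)

-- A sink for any showable type.
showSink :: Show a => Sink a
showSink = Sink show
```
For example, <hask>sink (contramap length showSink) "hello"</hask> renders the length of the string, <tt>"5"</tt>.<br />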
<br />
The functions <hask>runUI</hask> and <hask>runIO</hask> used in examples above are simply type-specialized synonyms for [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV.html#v%3ArunTV <hask>runTV</hask>].<br />
<haskell><br />
runUI :: TV UI IU a -> IO ()<br />
runUI = runTV<br />
<br />
runIO :: TV IO OI a -> IO ()<br />
runIO = runTV<br />
</haskell><br />
<br />
== Common Ins and Outs ==<br />
<br />
The examples <hask>reverseT</hask> and <hask>shoppingT</hask> above used not only the generic <hask>Output</hask> and <hask>Input</hask> operations, but also some operations that apply to AFs having a few methods for sourcing and sinking a few common types (strings, readables, showables, and booleans). The type constructors <hask>CInput</hask>, <hask>COutput</hask>, and <hask>CTV</hask> are universally quantified over sources and sinks having the required methods.<br />
<br />
<haskell><br />
type CInput a = forall src.<br />
(CommonIns src) => Input src a<br />
type COutput a = forall src snk.<br />
(CommonIns src, CommonOuts snk) => Output src snk a<br />
type CTV a = forall src snk.<br />
(CommonIns src, CommonOuts snk) => TV src snk a<br />
</haskell><br />
<br />
== Sorting examples ==<br />
<br />
Here's a sorting TV (see [http://hackage.haskell.org/packages/archive/TV/latest/doc/html/Interface-TV-Common.html#v:interactLineRS <hask>interactLineRS</hask>]), tested with <hask>runUI</hask>:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
sortT :: (Read a, Show a, Ord a) => CTV ([a] -> [a])<br />
sortT = tv (oTitle "sort" $ interactLineRS []) sort<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:sortT.png]]<br />
|}<br />
</blockquote><br />
<br />
Note that <hask>sortT</hask> is polymorphic in value, and the type variable <hask>a</hask> has defaulted to <hask>Int</hask>. You could instead type-annotate its uses, e.g.,<br />
<br />
: <hask>runUI (sortT :: CTV ([String] -> [String]))</hask><br />
<br />
== Composition of TVs ==<br />
<br />
So far, we've done a little composition of interfaces and combined them with values to construct TVs. Now let's look at composition of TVs.<br />
<br />
First, wrap up the <hask>words</hask> and <hask>unwords</hask> functions:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
wordsT :: CTV (String -> [String]) <br />
wordsT = tv ( oTitle "function: words" $<br />
oLambda (iTitle "sentence in" defaultIn)<br />
(oTitle "words out" defaultOut))<br />
words<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:wordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
unwordsT :: CTV ([String] -> String) <br />
unwordsT = tv ( oTitle "function: unwords" $<br />
oLambda (iTitle "words in" defaultIn)<br />
(oTitle "sentence out" defaultOut))<br />
unwords<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:unwordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
Finally, compose <hask>wordsT</hask>, <hask>unwordsT</hask>, and <hask>sortT</hask><br />
<br />
<haskell><br />
sortWordsT :: CTV (String -> String)<br />
sortWordsT = wordsT ->| sortT ->| unwordsT<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI sortWordsT</hask> !! <hask>runIO sortWordsT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:sortWordsT.png]]<br />
| style="padding:20px;" |<br />
sentence in: <b><i>The night Max wore his wolf suit</i></b><br />
sentence out: Max The his night suit wolf wore<br />
|}<br />
</blockquote><br />
<br />
The operator "[http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3A-%3E%7C <hask>->|</hask>]" is part of a general approach to value composition from [[DeepArrow]].<br />
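Stripped of its interfaces, the composed TV denotes nothing more than ordinary function composition, which can be checked directly with the Prelude:<br />
<br />
```haskell
import Data.List (sort)

-- The value part of sortWordsT: split a sentence into words,
-- sort them, and join them back into a sentence.
sortWords :: String -> String
sortWords = unwords . sort . words
```
Applied to the sentence above, <hask>sortWords "The night Max wore his wolf suit"</hask> gives <tt>"Max The his night suit wolf wore"</tt>, matching the <hask>runIO</hask> transcript.<br />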
<br />
== Transmission-specific interfaces ==<br />
<br />
While some interfaces can be implemented for different means of transmission, others are more specialized.<br />
<br />
=== GUIs ===<br />
<br />
Here are inputs for our shopping example above that specifically work with [[Phooey]]'s UI applicative functor.<br />
<haskell><br />
applesU, bananasU :: Input UI Int<br />
applesU = iTitle "apples" (islider 3 (0,10))<br />
bananasU = iTitle "bananas" (islider 7 (0,10))<br />
<br />
shoppingUO :: Output UI IU (Int -> Int -> Int)<br />
shoppingUO = oTitle "shopping list" $ oLambda applesU (oLambda bananasU total)<br />
</haskell><br />
<br />
We can then make curried and uncurried TVs:<br />
<blockquote><br />
{| class="wikitable"<br />
! code !! runUI rendering <br />
|-<br />
| style="padding:20px;" align=right| <hask>tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingU.png]]<br />
|-<br />
| style="padding:20px;" align=right | <hask>uncurryA $$ tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingPrU.png]]<br />
|}<br />
</blockquote><br />
<br />
'''Note''': We could define other type classes, besides <hask>CommonInsOuts</hask>. For instance, <hask>islider</hask> could be made a method of a <hask>GuiArrow</hask> class, allowing it to be rendered in different ways with different GUI toolkits or even using HTML and Javascript.<br />
<br />
=== IO ===<br />
<br />
We can use <hask>IO</hask> operations in TV interfaces. The corresponding sink is <hask>OI</hask>, defined in [[TypeCompose]]. TV provides a few functions in its [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV-IO.html <hask>IO</hask> module], including a close counterpart to the standard <hask>interact</hask> function.<br />
<haskell><br />
interactOut :: Output IO OI (String -> String)<br />
interactOut = oLambda contentsIn stringOut<br />
</haskell><br />
<br />
Assuming we have a file <tt>"test.txt"</tt> containing some lines of text, we can use it to test string transformations.<br />
<haskell><br />
testO :: Output IO OI (String -> String)<br />
testO = oLambda (fileIn "test.txt") defaultOut<br />
</haskell><br />
<br />
First, let's define higher-order functions that apply another function to the lines or to the words of a string.<br />
<haskell><br />
onLines, onWords :: ([String] -> [String]) -> (String -> String)<br />
onLines f = unlines . f . lines<br />
onWords f = unwords . f . words<br />
</haskell><br />
Next, specializations that operate on ''each'' line or word:<br />
<haskell><br />
perLine,perWord :: (String -> String) -> (String -> String)<br />
perLine f = onLines (map f)<br />
perWord f = onWords (map f)<br />
</haskell><br />
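These combinators need only the Prelude, so they can be tried in GHCi without any of the TV machinery (the definitions are repeated here to keep the block self-contained):<br />
<br />
```haskell
-- The combinators from the text, repeated so this block stands alone.
onLines, onWords :: ([String] -> [String]) -> (String -> String)
onLines f = unlines . f . lines
onWords f = unwords . f . words

-- Apply a string function to each line or each word.
perLine, perWord :: (String -> String) -> (String -> String)
perLine f = onLines (map f)
perWord f = onWords (map f)
```
For instance, <hask>perWord reverse "ab cd"</hask> yields <tt>"ba dc"</tt>.<br />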
<br />
Some examples:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
! string function <hask>f</hask> !! <hask>runIO (tv testO f)</hask><br />
|-<br />
| style="padding:20px;" align=right| <hask>id</hask><br />
| style="padding:20px;" align="center" |<br />
To see a World in a Grain of Sand<br />
And a Heaven in a Wild Flower, <br />
Hold Infinity in the palm of your hand<br />
And Eternity in an hour.<br />
- William Blake<br />
|-<br />
| style="padding:20px;" align=right| <hask>reverse</hask><br />
| style="padding:20px;" align="center" |<br />
<br />
ekalB mailliW - <br />
.ruoh na ni ytinretE dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnaS fo niarG a ni dlroW a ees oT<br />
|-<br />
| style="padding:20px;" align=right| <hask>onLines reverse</hask><br />
| style="padding:20px;" align="center" |<br />
- William Blake<br />
And Eternity in an hour.<br />
Hold Infinity in the palm of your hand<br />
And a Heaven in a Wild Flower, <br />
To see a World in a Grain of Sand<br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine reverse</hask><br />
| style="padding:20px;" align="center" |<br />
dnaS fo niarG a ni dlroW a ees oT<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
.ruoh na ni ytinretE dnA<br />
ekalB mailliW - <br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine (perWord reverse)</hask><br />
| style="padding:20px;" align="center" |<br />
oT ees a dlroW ni a niarG fo dnaS<br />
dnA a nevaeH ni a dliW ,rewolF<br />
dloH ytinifnI ni eht mlap fo ruoy dnah<br />
dnA ytinretE ni na .ruoh<br />
- mailliW ekalB<br />
|}<br />
</blockquote><br />
<br />
There are more examples [http://code.haskell.org/~conal/code/TV/src/Examples.hs in the TV repository] and [http://code.haskell.org/~conal/code/GuiTV/src/Examples.hs in the GuiTV repository]. See also "[http://journal.conal.net/#%5B%5Bseparating%20IO%20from%20logic%20--%20example%5D%5D separating IO from logic -- example]".</div>Conalhttps://wiki.haskell.org/LubLub2010-07-13T04:09:52Z<p>Conal: /* Abstract */ added "and"</p>
<hr />
<div>[[Category:Packages]]<br />
[[Category:Concurrency]]<br />
<br />
== Abstract ==<br />
<br />
Lub is an experiment in computing least upper information bounds on (partially defined) functional values.<br />
It provides a <hask>lub</hask> function that is consistent with the [[unamb]] operator but has a more liberal precondition.<br />
Where <hask>unamb</hask> requires its arguments to be equal when neither is bottom, <hask>lub</hask> is able to synthesize a value from the partial information contained in both of its arguments, which is useful with non-flat types.<br />
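A rough intuition for merging partial information, using <hask>Maybe</hask> with <hask>Nothing</hask> standing in for bottom. (This is a toy, total model with invented names, not the lub library's implementation, which works on genuine Haskell partial values and needs concurrency.)<br />
<br />
```haskell
-- Toy model: Nothing plays the role of an undefined (bottom) component.
lubMaybe :: Eq a => Maybe a -> Maybe a -> Maybe a
lubMaybe Nothing  y        = y
lubMaybe x        Nothing  = x
lubMaybe (Just a) (Just b)
  | a == b    = Just a
  | otherwise = error "inconsistent arguments"  -- precondition violated

-- Pairs merge componentwise, as lub does for non-flat types.
lubPair :: (Eq a, Eq b)
        => (Maybe a, Maybe b) -> (Maybe a, Maybe b) -> (Maybe a, Maybe b)
lubPair (a1, b1) (a2, b2) = (lubMaybe a1 a2, lubMaybe b1 b2)
```
For example, <hask>lubPair (Just 1, Nothing) (Nothing, Just 2)</hask> yields <hask>(Just 1, Just 2)</hask>, mirroring how <hask>lub (1, undefined) (undefined, 2)</hask> is <hask>(1, 2)</hask>.<br />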
<br />
Besides this wiki page, here are more ways to find out about lub:<br />
* Read the blog post ''[http://conal.net/blog/posts/merging-partial-values/ Merging partial values]''<br />
* Visit the [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/lub Hackage page] for library documentation and to download & install.<br />
* Or install with <tt>cabal install lub</tt>.<br />
* Get the code repository: <tt>darcs get http://code.haskell.org/lub</tt>.<br />
<!-- * See the [[lub/Versions| version history]]. --><br />
<!-- Please leave comments at the [[Talk:lub|Talk page]]. --><br />
<br />
I was inspired to write this package after [http://tunes.org/~nef/logs/haskell/08.11.17 stimulating discussions] with Thomas Davie, Russell O'Connor and others in the #haskell gang.</div>Conalhttps://wiki.haskell.org/Tangible_ValueTangible Value2010-03-18T17:42:09Z<p>Conal: /* Abstract */ GtkTV</p>
<hr />
<div>[[Category:Interfaces]]<br />
[[Category:IO]]<br />
[[Category:Arrow]]<br />
[[Category:Libraries]]<br />
[[Category:Packages]]<br />
<br />
== Abstract ==<br />
<br />
<br />
'''TV''' is a library for composing ''tangible values'' ("TVs"), i.e., values that carry along external interfaces. In particular, TVs can be composed to create new TVs, ''and'' they can be directly executed with a friendly GUI, a process that reads and writes character streams, or many other kinds of interfaces. Values and interfaces are ''combined'' for direct use, and ''separable'' for composition. This combination makes for software that is ''ready to use and ready to reuse''.<br />
<br />
TV can be thought of as a simple functional formulation of the Model-View-Controller pattern. (My thanks to an anonymous ICFP referee for pointing out this connection.) The value part of a TV is the ''model'', and the "interface" part, or "output" as it is called below, is the ''viewer''. Outputs are built up compositionally from other outputs and from inputs (the ''controllers''), as described below.<br />
<br />
Besides this wiki page, here are more ways to learn about TV:<br />
* See the documentation [http://hackage.haskell.org/package/TV on Hackage].<br />
* Get the code repository: '''<tt>darcs get http://code.haskell.org/~conal/code/TV</tt>'''.<br />
* See the use of TV in [[Eros]].<br />
<br />
As of version 0.2, I have moved the GUI functionality out of TV and into a small new package [[GuiTV]]. I moved it out to eliminate the dependency of core TV on [[Phooey]] and hence on [[wxHaskell]], as the latter can be difficult to install. The GUI examples below require [[GuiTV]]. There is also a very similar [http://hackage.haskell.org/package/GtkTV package to generate Gtk-based GUIs].<br />
<br />
I'd love to hear your comments at the [[Talk:TV]] page.<br />
<br />
== First Example ==<br />
<br />
Here is a tangible reverse function:<br />
<br />
<haskell><br />
reverseT :: CTV (String -> String)<br />
reverseT = tv (oTitle "reverse" defaultOut) reverse<br />
</haskell><br />
<br />
The <hask>tv</hask> function combines an interface and a value. In this example, the interface is the default for string functions, wrapped with the title "reverse".<br />
<br />
TV "interfaces" are more than just GUIs. Here are two different renderings of <hask>reverseT</hask>. (User input is shown <tt><b><i>in italics</i></b></tt> in the <hask>runIO</hask> version).<br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI reverseT</hask> !! <hask>runIO reverseT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:reverseT.png]]<br />
| style="padding:20px;" |<br />
*Examples> runIO reverseT<br />
reverse: <b><i>Hello, reversible world.</i></b><br />
.dlrow elbisrever ,olleH<br />
*Examples> <br />
|}<br />
</blockquote><br />
<br />
We'll see [[#The_general_story|later]] that "<hask>runUI</hask>" and "<hask>runIO</hask>" are both type-specialized synonyms for a more general function.<br />
<br />
== Outputs ==<br />
<br />
What I've been calling an "interface" is a value of type <hask>COutput a</hask> for a type <hask>a</hask>. For instance, for <hask>reverseT</hask>, <hask>a</hask> is <hask>String->String</hask>. The reason for the <hask>C</hask> prefix is explained below. At the heart of TV is a small algebra for constructing these outputs. We've already seen one output function, <hask>oTitle</hask>. Another one is <hask>showOut</hask>, which is an output for all <hask>Show</hask> types. For instance,<br />
<br />
<haskell><br />
total :: Show a => COutput a<br />
total = oTitle "total" showOut<br />
</haskell><br />
<br />
== Inputs and function-valued outputs ==<br />
<br />
Just as an output is a way to ''deliver'' (or ''consume'') a value, an "input" is a way to ''obtain'' (or ''produce'') a value. For example, here are two inputs, each built from the default input for its type and given a title.<br />
<br />
<haskell><br />
apples, bananas :: CInput Int<br />
apples = iTitle "apples" defaultIn<br />
bananas = iTitle "bananas" defaultIn<br />
</haskell><br />
<br />
Now for the fun part. Let's combine the <hask>apples</hask> and <hask>bananas</hask> inputs and the <hask>total</hask> output to make a ''function-valued'' output.<br />
<br />
<haskell><br />
shoppingO :: COutput (Int -> Int -> Int)<br />
shoppingO = oTitle "shopping list" $<br />
oLambda apples (oLambda bananas total)<br />
</haskell><br />
<br />
And a TV:<br />
<haskell><br />
shopping :: CTV (Int -> Int -> Int)<br />
shopping = tv shoppingO (+)<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shopping</hask> !! <hask>runIO shopping</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shopping.png]]<br />
| style="padding:20px;" |<br />
shopping list: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== A variation ==<br />
<br />
Here is an uncurried variation:<br />
<br />
<haskell><br />
shoppingPr :: CTV ((Int,Int) -> Int)<br />
shoppingPr = tv ( oTitle "shopping list -- uncurried" $ <br />
oLambda (iPair apples bananas) total )<br />
(uncurry (+))<br />
</haskell><br />
However, there's a much more elegant formulation, using [http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3AcurryA <hask>uncurryA</hask>] and [http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Data-FunArr.html#v%3A%24%24 <hask>$$</hask>] from [[DeepArrow]]:<br />
<haskell><br />
shoppingPr = uncurryA $$ shopping<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shoppingPr</hask> !! <hask>runIO shoppingPr</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shoppingPr.png]]<br />
| style="padding:20px;" |<br />
shopping list -- uncurried: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== The general story ==<br />
<br />
TVs, outputs, and inputs are not restricted to GUIs and IO. In general, they are parameterized by the mechanics of "transmitting values", i.e., delivering ("sinking") output and gathering ("sourcing") input.<br />
<br />
<haskell><br />
data Input src a<br />
data Output src snk a<br />
type TV src snk a<br />
</haskell><br />
<br />
The "sources" will be [[applicative functor]]s (AFs), and the "sinks" will be contravariant functors.<br />
<br />
In the examples above, we've used two different mechanisms, namely [[Phooey]]'s <hask>UI</hask> AF and <hask>IO</hask>. The sinks are counterparts <hask>IU</hask> and <hask>OI</hask>.<br />
<br />
The functions <hask>runUI</hask> and <hask>runIO</hask> used in examples above are simply type-specialized synonyms for [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV.html#v%3ArunTV <hask>runTV</hask>].<br />
<haskell><br />
runUI :: TV UI IU a -> IO ()<br />
runUI = runTV<br />
<br />
runIO :: TV IO OI a -> IO ()<br />
runIO = runTV<br />
</haskell><br />
<br />
== Common Ins and Outs ==<br />
<br />
The examples <hask>reverseT</hask> and <hask>shoppingT</hask> above used not only the generic <hask>Output</hask> and <hask>Input</hask> operations, but also some operations that apply to AFs having a few methods for sourcing and sinking a few common types (strings, readables, showables, and booleans). The type constructors <hask>CInput</hask>, <hask>COutput</hask>, and <hask>CTV</hask> are universally quantified over sources and sinks having the required methods.<br />
<br />
<haskell><br />
type CInput a = forall src.<br />
(CommonIns src) => Input src a<br />
type COutput a = forall src snk.<br />
(CommonIns src, CommonOuts snk) => Output src snk a<br />
type CTV a = forall src snk.<br />
(CommonIns src, CommonOuts snk) => TV src snk a<br />
</haskell><br />
<br />
== Sorting examples ==<br />
<br />
Here's a sorting TV (see [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV.html#v%3AinteractLineRS <hask>interactLineRS</hask>]), tested with <hask>runUI</hask>:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
sortT :: (Read a, Show a, Ord a) => CTV ([a] -> [a])<br />
sortT = tv (oTitle "sort" $ interactLineRS []) sort<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:sortT.png]]<br />
|}<br />
</blockquote><br />
<br />
Note that <hask>sortT</hask> is polymorphic in value, and the type variable <hask>a</hask> has defaulted to <hask>Int</hask>. You could instead type-annotate its uses, e.g.,<br />
<br />
: <hask>runUI (sortT :: CTV ([String] -> [String]))</hask><br />
<br />
== Composition of TVs ==<br />
<br />
So far, we've done a little composition of interfaces and combined them with values to construct TVs. Now let's look at composition of TVs.<br />
<br />
First, wrap up the <hask>words</hask> and <hask>unwords</hask> functions:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
wordsT :: CTV (String -> [String]) <br />
wordsT = tv ( oTitle "function: words" $<br />
oLambda (iTitle "sentence in" defaultIn)<br />
(oTitle "words out" defaultOut))<br />
words<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:wordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
unwordsT :: CTV ([String] -> String) <br />
unwordsT = tv ( oTitle "function: unwords" $<br />
oLambda (iTitle "words in" defaultIn)<br />
(oTitle "sentence out" defaultOut))<br />
unwords<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:unwordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
Finally, compose <hask>wordsT</hask>, <hask>unwordsT</hask>, and <hask>sortT</hask><br />
<br />
<haskell><br />
sortWordsT :: CTV (String -> String)<br />
sortWordsT = wordsT ->| sortT ->| unwordsT<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI sortWordsT</hask> !! <hask>runIO sortWordsT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:sortWordsT.png]]<br />
| style="padding:20px;" |<br />
sentence in: <b><i>The night Max wore his wolf suit</i></b><br />
sentence out: Max The his night suit wolf wore<br />
|}<br />
</blockquote><br />
<br />
The operator "[http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3A-%3E%7C <hask>->|</hask>]" is part of a general approach to value composition from [[DeepArrow]].<br />
<br />
== Transmission-specific interfaces ==<br />
<br />
While some interfaces can be implemented for different means of transmission, others are more specialized.<br />
<br />
=== GUIs ===<br />
<br />
Here are inputs for our shopping example above that specifically work with [[Phooey]]'s UI applicative functor.<br />
<haskell><br />
applesU, bananasU :: Input UI Int<br />
applesU = iTitle "apples" (islider 3 (0,10))<br />
bananasU = iTitle "bananas" (islider 7 (0,10))<br />
<br />
shoppingUO :: Output UI IU (Int -> Int -> Int)<br />
shoppingUO = oTitle "shopping list" $ oLambda applesU (oLambda bananasU total)<br />
</haskell><br />
<br />
We can then make curried and uncurried TVs:<br />
<blockquote><br />
{| class="wikitable"<br />
! code !! runUI rendering <br />
|-<br />
| style="padding:20px;" align=right| <hask>tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingU.png]]<br />
|-<br />
| style="padding:20px;" align=right | <hask>uncurryA $$ tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingPrU.png]]<br />
|}<br />
</blockquote><br />
<br />
'''Note''': We could define other type classes, besides <hask>CommonInsOuts</hask>. For instance, <hask>islider</hask> could be made a method of a <hask>GuiArrow</hask> class, allowing it to be rendered in different ways with different GUI toolkits or even using HTML and JavaScript.<br />
<br />
=== IO ===<br />
<br />
We can use <hask>IO</hask> operations in TV interfaces. The corresponding sink is <hask>OI</hask>, defined in [[TypeCompose]]. TV provides a few functions in its [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV-IO.html <hask>IO</hask> module], including a close counterpart to the standard <hask>interact</hask> function.<br />
<haskell><br />
interactOut :: Output IO OI (String -> String)<br />
interactOut = oLambda contentsIn stringOut<br />
</haskell><br />
<br />
Assuming we have a file <tt>"test.txt"</tt> containing some lines of text, we can use it to test string transformations.<br />
<haskell><br />
testO :: Output IO OI (String -> String)<br />
testO = oLambda (fileIn "test.txt") defaultOut<br />
</haskell><br />
<br />
First, let's define higher-order functions that apply another function to the lines or to the words of a string.<br />
<haskell><br />
onLines, onWords :: ([String] -> [String]) -> (String -> String)<br />
onLines f = unlines . f . lines<br />
onWords f = unwords . f . words<br />
</haskell><br />
Next, specializations that operate on ''each'' line or word:<br />
<haskell><br />
perLine,perWord :: (String -> String) -> (String -> String)<br />
perLine f = onLines (map f)<br />
perWord f = onWords (map f)<br />
</haskell><br />
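These combinators need nothing beyond the Prelude; here is a self-contained sketch restating them, with sample results as comments:<br />

```haskell
-- Apply a list transformation to the lines, or to the words, of a string.
onLines, onWords :: ([String] -> [String]) -> (String -> String)
onLines f = unlines . f . lines
onWords f = unwords . f . words

-- Apply a string transformation to each line, or to each word.
perLine, perWord :: (String -> String) -> (String -> String)
perLine f = onLines (map f)
perWord f = onWords (map f)

-- perWord reverse "To see"   == "oT ees"
-- perLine reverse "ab\ncd\n" == "ba\ndc\n"
```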
<br />
Some examples:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
! string function <hask>f</hask> !! <hask>runIO (tv testO f)</hask><br />
|-<br />
| style="padding:20px;" align=right| <hask>id</hask><br />
| style="padding:20px;" align="center" |<br />
To see a World in a Grain of Sand<br />
And a Heaven in a Wild Flower, <br />
Hold Infinity in the palm of your hand<br />
And Eternity in an hour.<br />
- William Blake<br />
|-<br />
| style="padding:20px;" align=right| <hask>reverse</hask><br />
| style="padding:20px;" align="center" |<br />
<br />
ekalB mailliW - <br />
.ruoh na ni ytinretE dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnaS fo niarG a ni dlroW a ees oT<br />
|-<br />
| style="padding:20px;" align=right| <hask>onLines reverse</hask><br />
| style="padding:20px;" align="center" |<br />
- William Blake<br />
And Eternity in an hour.<br />
Hold Infinity in the palm of your hand<br />
And a Heaven in a Wild Flower, <br />
To see a World in a Grain of Sand<br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine reverse</hask><br />
| style="padding:20px;" align="center" |<br />
dnaS fo niarG a ni dlroW a ees oT<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
.ruoh na ni ytinretE dnA<br />
ekalB mailliW - <br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine (perWord reverse)</hask><br />
| style="padding:20px;" align="center" |<br />
oT ees a dlroW ni a niarG fo dnaS<br />
dnA a nevaeH ni a dliW ,rewolF<br />
dloH ytinifnI ni eht mlap fo ruoy dnah<br />
dnA ytinretE ni na .ruoh<br />
- mailliW ekalB<br />
|}<br />
</blockquote><br />
<br />
There are more examples [http://code.haskell.org/~conal/code/TV/src/Examples.hs in the TV repository] and [http://code.haskell.org/~conal/code/GuiTV/src/Examples.hs in the GuiTV repository]. See also "[http://journal.conal.net/#%5B%5Bseparating%20IO%20from%20logic%20--%20example%5D%5D separating IO from logic -- example]".</div>Conalhttps://wiki.haskell.org/DeepArrowDeepArrow2010-03-18T17:39:45Z<p>Conal: /* Abstract */ simplified</p>
<hr />
<div>== Abstract ==<br />
The '''DeepArrow''' library is a framework for composable "editors" of pure values.<br />
<br />
Besides this wiki page, here are more ways to learn about DeepArrow:<br />
* See the documentation [http://hackage.haskell.org/package/DeepArrow on Hackage].<br />
* Get the code repository: '''<tt>darcs get http://code.haskell.org/~conal/code/DeepArrow</tt>'''<br />
* Or grab a [http://code.haskell.org/~conal/code/DeepArrow/dist distribution tarball].<br />
* See the use of DeepArrow in [[TV]] and [[Eros]].<br />
<br />
Please leave comments at the [[Talk:DeepArrow|Talk page]].<br />
<br />
== Introduction ==<br />
By an "editor", I mean a function that targets a transformation at some part of a value, such as the first half of the second half of a value of type <hask>(a,(b,c))</hask>. In such a case, the transformation being targeted would have type <hask>b -> b'</hask>, and the overall transformation would have type <hask>(a,(b,c)) -> (a,(b',c))</hask>.<br />
<br />
If you've fooled around with arrows, you might guess that the arrow methods <hask>first</hask> and <hask>second</hask> have something to do with this game, and you'd be right. The main idea of DeepArrow is to play with compositions of <hask>first</hask> and <hask>second</hask> and of an analogous third combinator called <hask>result</hask>. I was stunned to realize that arbitrarily complex value editors can be made by stringing together compositions of these three combinators and delighted to find that the composition chains directly spell out the paths to the value subpart to be edited.<br />
<br />
The DeepArrow library is about "deep function application" in two senses. First, compositions of <hask>first</hask>, <hask>second</hask>, and <hask>result</hask> apply functions deeply inside of values. Second, another set of combinators extract "deep functions" so they can be applied. Combining these two abilities allows a buried function to be applied to a buried argument, with the two contexts being carried along.<br />
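As a plain-Haskell sketch (not the DeepArrow API itself, which generalizes over arrows): for ordinary functions, <hask>first</hask> and <hask>second</hask> come from <hask>Control.Arrow</hask>, and a <hask>result</hask> combinator is just composition. Composing them spells out the path to the subpart being edited:<br />

```haskell
import Control.Arrow (first, second)

-- For ordinary functions, a 'result' combinator is just composition:
-- it targets a transformation at the result of a function.
result :: (b -> b') -> ((a -> b) -> (a -> b'))
result = (.)

-- Target the first half of the second half of a value of type (a,(b,c)),
-- as in the editor example above: the composition spells out the path.
inSecondFirst :: (b -> b') -> ((a,(b,c)) -> (a,(b',c)))
inSecondFirst = second . first
```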
<br />
== Examples ==<br />
<br />
See [http://darcs.haskell.org/packages/DeepArrow/doc/html/Control-Arrow-DeepArrow-Examples.html example docs] in the library documentation and follow the source code links.<br />
<br />
== Background ==<br />
<br />
The inspiration for value-editing paths came while I was looking for a way for non-programmers to be able to create [http://conal.net/Pan functional images]. I've had a growing intuition over the last fifteen years that media authoring tools can be usefully looked at as environments for functional programming. I'd been wondering how to map a user's gestures into operations on a functional program. Lots of noodling led to ideas of composable interfaces and "tangible values" (term thanks to Sean Seefried) and gestural composition in [http://conal.net/papers/Eros Eros].<br />
<br />
Eros was more complicated than I like, so I started splitting it into pieces:<br />
* [http://haskell.org/haskellwiki/phooey Phooey] is a functional GUI library that has much of Eros's GUI implementation techniques, but much more carefully structured than in the Eros paper.<br />
* [http://haskell.org/haskellwiki/DeepArrow DeepArrow] has the general notion of "deep application".<br />
* [http://haskell.org/haskellwiki/TV TV] has the algebra of ''composable interfaces'', or visualizations of pure values, and it has ''tangible values'', which are separable combinations of interface and value. It uses Phooey to generate GUIs very simply from interfaces.<br />
<br />
-- [[User:Conal|Conal Elliott]]<br />
[[Category:Libraries]]<br />
[[Category:Arrow]]<br />
[[Category:Combinators]]</div>Conalhttps://wiki.haskell.org/Tangible_ValueTangible Value2010-03-18T17:38:28Z<p>Conal: /* Abstract */ shortened hackage link</p>
<hr />
<div>[[Category:Interfaces]]<br />
[[Category:IO]]<br />
[[Category:Arrow]]<br />
[[Category:Libraries]]<br />
[[Category:Packages]]<br />
<br />
== Abstract ==<br />
<br />
<br />
'''TV''' is a library for composing ''tangible values'' ("TVs"), i.e., values that carry along external interfaces. In particular, TVs can be composed to create new TVs, ''and'' they can be directly executed with a friendly GUI, a process that reads and writes character streams, or many other kinds of interfaces. Values and interfaces are ''combined'' for direct use, and ''separable'' for composition. This combination makes for software that is ''ready to use and ready to reuse''.<br />
<br />
TV can be thought of as a simple functional formulation of the Model-View-Controller pattern. (My thanks to an anonymous ICFP referee for pointing out this connection.) The value part of a TV is the ''model'', and the "interface" part, or "output" as it is called below, is the ''viewer''. Outputs are built up compositionally from other outputs and from inputs (the ''controllers''), as described below.<br />
<br />
Besides this wiki page, here are more ways to learn about TV:<br />
* See the documentation [http://hackage.haskell.org/package/TV on Hackage].<br />
* Get the code repository: '''<tt>darcs get http://code.haskell.org/~conal/code/TV</tt>'''.<br />
* See the use of TV in [[Eros]].<br />
<br />
As of version 0.2, I have moved the GUI functionality out of TV and into a small new package [[GuiTV]]. I moved it out to eliminate the dependency of core TV on [[Phooey]] and hence on [[wxHaskell]], as the latter can be difficult to install. The GUI examples below require [[GuiTV]].<br />
<br />
I'd love to hear your comments at the [[Talk:TV]] page.<br />
<br />
== First Example ==<br />
<br />
Here is a tangible reverse function:<br />
<br />
<haskell><br />
reverseT :: CTV (String -> String)<br />
reverseT = tv (oTitle "reverse" defaultOut) reverse<br />
</haskell><br />
<br />
The <hask>tv</hask> function combines an interface and a value. In this example, the interface is the default for string functions, wrapped with the title "reverse".<br />
<br />
TV "interfaces" are more than just GUIs. Here are two different renderings of <hask>reverseT</hask>. (User input is shown <tt><b><i>in italics</i></b></tt> in the <hask>runIO</hask> version).<br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI reverseT</hask> !! <hask>runIO reverseT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:reverseT.png]]<br />
| style="padding:20px;" |<br />
*Examples> runIO reverseT<br />
reverse: <b><i>Hello, reversible world.</i></b><br />
.dlrow elbisrever ,olleH<br />
*Examples> <br />
|}<br />
</blockquote><br />
<br />
We'll see [[#The_general_story|later]] that "<hask>runUI</hask>" and "<hask>runIO</hask>" are both type-specialized synonyms for a more general function.<br />
<br />
== Outputs ==<br />
<br />
What I've been calling an "interface" is a value of type <hask>COutput a</hask> for some type <hask>a</hask>. For instance, for <hask>reverseT</hask>, <hask>a</hask> is <hask>String->String</hask>. The reason for the <hask>C</hask> prefix is explained below. At the heart of TV is a small algebra for constructing these outputs. We've already seen one output function, <hask>oTitle</hask>. Another one is <hask>showOut</hask>, which is an output for all <hask>Show</hask> types. For instance,<br />
<br />
<haskell><br />
total :: Show a => COutput a<br />
total = oTitle "total" showOut<br />
</haskell><br />
<br />
== Inputs and function-valued outputs ==<br />
<br />
Just as an output is a way to ''deliver'' (or ''consume'') a value, an "input" is a way to ''obtain'' (or ''produce'') a value. For example, here are two inputs, each built from the default input for its type and given a title.<br />
<br />
<haskell><br />
apples, bananas :: CInput Int<br />
apples = iTitle "apples" defaultIn<br />
bananas = iTitle "bananas" defaultIn<br />
</haskell><br />
<br />
Now for the fun part. Let's combine the <hask>apples</hask> and <hask>bananas</hask> inputs and the <hask>total</hask> output to make a ''function-valued'' output.<br />
<br />
<haskell><br />
shoppingO :: COutput (Int -> Int -> Int)<br />
shoppingO = oTitle "shopping list" $<br />
oLambda apples (oLambda bananas total)<br />
</haskell><br />
<br />
And a TV:<br />
<haskell><br />
shopping :: CTV (Int -> Int -> Int)<br />
shopping = tv shoppingO (+)<br />
</haskell><br />
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shopping</hask> !! <hask>runIO shopping</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shopping.png]]<br />
| style="padding:20px;" |<br />
shopping list: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== A variation ==<br />
<br />
Here is an uncurried variation:<br />
<br />
<haskell><br />
shoppingPr :: CTV ((Int,Int) -> Int)<br />
shoppingPr = tv ( oTitle "shopping list -- uncurried" $ <br />
oLambda (iPair apples bananas) total )<br />
(uncurry (+))<br />
</haskell><br />
However, there's a much more elegant formulation, using [http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3AuncurryA <hask>uncurryA</hask>] and [http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Data-FunArr.html#v%3A%24%24 <hask>$$</hask>] from [[DeepArrow]]:<br />
<haskell><br />
shoppingPr = uncurryA $$ shopping<br />
</haskell><br />
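At the value level, <hask>uncurryA</hask> behaves like the Prelude's <hask>uncurry</hask>. A minimal plain-functions sketch (ignoring the interface part) of the transformation applied to the shopping model:<br />

```haskell
-- The model inside 'shopping' is the curried (+); at the value level,
-- 'uncurryA $$' corresponds to applying the Prelude's uncurry.
shoppingFun :: Int -> Int -> Int
shoppingFun = (+)

shoppingPrFun :: (Int, Int) -> Int
shoppingPrFun = uncurry shoppingFun
```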
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI shoppingPr</hask> !! <hask>runIO shoppingPr</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:shoppingPr.png]]<br />
| style="padding:20px;" |<br />
shopping list -- uncurried: apples: <b><i>8</i></b><br />
bananas: <b><i>5</i></b><br />
total: 13<br />
|}<br />
</blockquote><br />
<br />
== The general story ==<br />
<br />
TVs, outputs, and inputs are not restricted to GUIs and IO. In general, they are parameterized by the mechanics of "transmitting values", i.e., delivering ("sinking") output and gathering ("sourcing") input.<br />
<br />
<haskell><br />
data Input src a<br />
data Output src snk a<br />
type TV src snk a<br />
</haskell><br />
<br />
The "sources" will be [[applicative functor]]s (AFs), and the "sinks" will be contravariant functors.<br />
<br />
In the examples above, we've used two different mechanisms, namely [[Phooey]]'s <hask>UI</hask> AF and <hask>IO</hask>. The sinks are counterparts <hask>IU</hask> and <hask>OI</hask>.<br />
<br />
The functions <hask>runUI</hask> and <hask>runIO</hask> used in examples above are simply type-specialized synonyms for [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV.html#v%3ArunTV <hask>runTV</hask>].<br />
<haskell><br />
runUI :: TV UI IU a -> IO ()<br />
runUI = runTV<br />
<br />
runIO :: TV IO OI a -> IO ()<br />
runIO = runTV<br />
</haskell><br />
<br />
== Common Ins and Outs ==<br />
<br />
The examples <hask>reverseT</hask> and <hask>shopping</hask> above used not only the generic <hask>Output</hask> and <hask>Input</hask> operations, but also some operations that apply to AFs having a few methods for sourcing and sinking a few common types (strings, readables, showables, and booleans). The type constructors <hask>CInput</hask>, <hask>COutput</hask>, and <hask>CTV</hask> are universally quantified over sources and sinks having the required methods.<br />
<br />
<haskell><br />
type CInput a = forall src.<br />
(CommonIns src) => Input src a<br />
type COutput a = forall src snk.<br />
(CommonIns src, CommonOuts snk) => Output src snk a<br />
type CTV a = forall src snk.<br />
(CommonIns src, CommonOuts snk) => TV src snk a<br />
</haskell><br />
<br />
== Sorting examples ==<br />
<br />
Here's a sorting TV (see [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV.html#v%3AinteractLinesRS <hask>interactLinesRS</hask>]), tested with <hask>runUI</hask>:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
sortT :: (Read a, Show a, Ord a) => CTV ([a] -> [a])<br />
sortT = tv (oTitle "sort" $ interactLinesRS []) sort<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:sortT.png]]<br />
|}<br />
</blockquote><br />
<br />
Note that <hask>sortT</hask> is polymorphic in its value, and here the type variable <hask>a</hask> has defaulted to <hask>Int</hask>. You could instead type-annotate its uses, e.g.,<br />
<br />
: <hask>runUI (sortT :: CTV ([String] -> [String]))</hask><br />
<br />
== Composition of TVs ==<br />
<br />
So far, we've done a little composition of interfaces and combined them with values to construct TVs. Now let's look at composition of TVs.<br />
<br />
First, wrap up the <hask>words</hask> and <hask>unwords</hask> functions:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
wordsT :: CTV (String -> [String]) <br />
wordsT = tv ( oTitle "function: words" $<br />
oLambda (iTitle "sentence in" defaultIn)<br />
(oTitle "words out" defaultOut))<br />
words<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:wordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
<blockquote><br />
{| class="wikitable"<br />
| style="padding-right:2em;" |<br />
<haskell><br />
unwordsT :: CTV ([String] -> String) <br />
unwordsT = tv ( oTitle "function: unwords" $<br />
oLambda (iTitle "words in" defaultIn)<br />
(oTitle "sentence out" defaultOut))<br />
unwords<br />
</haskell><br />
|- <br />
| style="padding:20px;text-align:center;" | [[Image:unwordsT.png]]<br />
|}<br />
</blockquote><br />
<br />
Finally, compose <hask>wordsT</hask>, <hask>unwordsT</hask>, and <hask>sortT</hask>:<br />
<br />
<haskell><br />
sortWordsT :: CTV (String -> String)<br />
sortWordsT = wordsT ->| sortT ->| unwordsT<br />
</haskell><br />
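The value carried by <hask>sortWordsT</hask> is ordinary function composition (the <hask>->|</hask> operator composes the interfaces as well). A plain-Haskell sketch of that underlying model:<br />

```haskell
import Data.List (sort)

-- The value part of sortWordsT: split into words, sort, rejoin.
sortWords :: String -> String
sortWords = unwords . sort . words
```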
<br />
Running:<br />
<blockquote><br />
{| class="wikitable"<br />
! <hask>runUI sortWordsT</hask> !! <hask>runIO sortWordsT</hask> <br />
|- <br />
| style="padding:20px;" | [[Image:sortWordsT.png]]<br />
| style="padding:20px;" |<br />
sentence in: <b><i>The night Max wore his wolf suit</i></b><br />
sentence out: Max The his night suit wolf wore<br />
|}<br />
</blockquote><br />
<br />
The operator "[http://hackage.haskell.org/package/DeepArrow/latest/doc/html/Control-Arrow-DeepArrow.html#v%3A-%3E%7C <hask>->|</hask>]" is part of a general approach to value composition from [[DeepArrow]].<br />
<br />
== Transmission-specific interfaces ==<br />
<br />
While some interfaces can be implemented for different means of transmission, others are more specialized.<br />
<br />
=== GUIs ===<br />
<br />
Here are inputs for our shopping example above that specifically work with [[Phooey]]'s UI applicative functor.<br />
<haskell><br />
applesU, bananasU :: Input UI Int<br />
applesU = iTitle "apples" (islider 3 (0,10))<br />
bananasU = iTitle "bananas" (islider 7 (0,10))<br />
<br />
shoppingUO :: Output UI (Int -> Int -> Int)<br />
shoppingUO = oTitle "shopping list" $ oLambda applesU (oLambda bananasU total)<br />
</haskell><br />
<br />
We can then make curried and uncurried TVs:<br />
<blockquote><br />
{| class="wikitable"<br />
! code !! runUI rendering <br />
|-<br />
| style="padding:20px;" align=right| <hask>tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingU.png]]<br />
|-<br />
| style="padding:20px;" align=right | <hask>uncurryA $$ tv shoppingUO (+)</hask><br />
| style="padding:20px;" align="center" | [[Image:shoppingPrU.png]]<br />
|}<br />
</blockquote><br />
<br />
'''Note''': We could define other type classes, besides <hask>CommonInsOuts</hask>. For instance, <hask>islider</hask> could be made a method of a <hask>GuiArrow</hask> class, allowing it to be rendered in different ways with different GUI toolkits or even using HTML and JavaScript.<br />
<br />
=== IO ===<br />
<br />
We can use <hask>IO</hask> operations in TV interfaces. The corresponding sink is <hask>OI</hask>, defined in [[TypeCompose]]. TV provides a few functions in its [http://hackage.haskell.org/package/TV/latest/doc/html/Interface-TV-IO.html <hask>IO</hask> module], including a close counterpart to the standard <hask>interact</hask> function.<br />
<haskell><br />
interactOut :: Output IO OI (String -> String)<br />
interactOut = oLambda contentsIn stringOut<br />
</haskell><br />
<br />
Assuming we have a file <tt>"test.txt"</tt> containing some lines of text, we can use it to test string transformations.<br />
<haskell><br />
testO :: Output IO OI (String -> String)<br />
testO = oLambda (fileIn "test.txt") defaultOut<br />
</haskell><br />
<br />
First, let's define higher-order functions that apply another function to the lines or to the words of a string.<br />
<haskell><br />
onLines, onWords :: ([String] -> [String]) -> (String -> String)<br />
onLines f = unlines . f . lines<br />
onWords f = unwords . f . words<br />
</haskell><br />
Next, specializations that operate on ''each'' line or word:<br />
<haskell><br />
perLine,perWord :: (String -> String) -> (String -> String)<br />
perLine f = onLines (map f)<br />
perWord f = onWords (map f)<br />
</haskell><br />
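These combinators need nothing beyond the Prelude; here is a self-contained sketch restating them, with sample results as comments:<br />

```haskell
-- Apply a list transformation to the lines, or to the words, of a string.
onLines, onWords :: ([String] -> [String]) -> (String -> String)
onLines f = unlines . f . lines
onWords f = unwords . f . words

-- Apply a string transformation to each line, or to each word.
perLine, perWord :: (String -> String) -> (String -> String)
perLine f = onLines (map f)
perWord f = onWords (map f)

-- perWord reverse "To see"   == "oT ees"
-- perLine reverse "ab\ncd\n" == "ba\ndc\n"
```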
<br />
Some examples:<br />
<br />
<blockquote><br />
{| class="wikitable"<br />
! string function <hask>f</hask> !! <hask>runIO (tv testO f)</hask><br />
|-<br />
| style="padding:20px;" align=right| <hask>id</hask><br />
| style="padding:20px;" align="center" |<br />
To see a World in a Grain of Sand<br />
And a Heaven in a Wild Flower, <br />
Hold Infinity in the palm of your hand<br />
And Eternity in an hour.<br />
- William Blake<br />
|-<br />
| style="padding:20px;" align=right| <hask>reverse</hask><br />
| style="padding:20px;" align="center" |<br />
<br />
ekalB mailliW - <br />
.ruoh na ni ytinretE dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnaS fo niarG a ni dlroW a ees oT<br />
|-<br />
| style="padding:20px;" align=right| <hask>onLines reverse</hask><br />
| style="padding:20px;" align="center" |<br />
- William Blake<br />
And Eternity in an hour.<br />
Hold Infinity in the palm of your hand<br />
And a Heaven in a Wild Flower, <br />
To see a World in a Grain of Sand<br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine reverse</hask><br />
| style="padding:20px;" align="center" |<br />
dnaS fo niarG a ni dlroW a ees oT<br />
,rewolF dliW a ni nevaeH a dnA<br />
dnah ruoy fo mlap eht ni ytinifnI dloH<br />
.ruoh na ni ytinretE dnA<br />
ekalB mailliW - <br />
|-<br />
| style="padding:20px;" align=right| <hask>perLine (perWord reverse)</hask><br />
| style="padding:20px;" align="center" |<br />
oT ees a dlroW ni a niarG fo dnaS<br />
dnA a nevaeH ni a dliW ,rewolF<br />
dloH ytinifnI ni eht mlap fo ruoy dnah<br />
dnA ytinretE ni na .ruoh<br />
- mailliW ekalB<br />
|}<br />
</blockquote><br />
<br />
There are more examples [http://code.haskell.org/~conal/code/TV/src/Examples.hs in the TV repository] and [http://code.haskell.org/~conal/code/GuiTV/src/Examples.hs in the GuiTV repository]. See also "[http://journal.conal.net/#%5B%5Bseparating%20IO%20from%20logic%20--%20example%5D%5D separating IO from logic -- example]".</div>Conal