From HaskellWiki

Talk page for "Types Are Calling Conventions"

This is a discussion page for the paper Types Are Calling Conventions.

If you are kind enough to read this paper, you may like to jot down any thoughts it triggers off, and see what others have written. This talk-page lets you do just that.

You can identify your entries by preceding them with four tildes. Doing so adds your name and the date. Thus:

Batterseapower 08:42, 19 April 2007 (UTC) Note from Max

If you say who you are in this way, we'll be able to acknowledge your help in a revised version of the paper.

OK, I can add the first erratum myself: I've just noticed that my section about an alternative translation for type lambdas is total nonsense, because it doesn't preserve type variable binding. That's what I get for adding a new section at the last moment... Batterseapower 22:17, 10 May 2009 (UTC)

I'm only half way through, but it immediately makes me ask whether this helps us to express data types containing functions where we want to specify a particular data representation (i.e. the calling convention of the function). My main example is types for monads, e.g.:

newtype Put a = Put {
        runPut :: (a -> {-# UNPACK #-} !Buffer -> [B.ByteString])
                     -> {-# UNPACK #-} !Buffer -> [B.ByteString]
    }

See this old thread
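For contrast, what GHC accepts today is UNPACK only on strict constructor fields, not on the argument types of a stored function as in the Put example above. A minimal sketch (this Buffer type and its two fields are made up for illustration):

```haskell
-- A stand-in Buffer type; both Int fields are hypothetical.
-- GHC stores the two Ints directly in the constructor, with no
-- pointer indirections -- but nothing similar is currently possible
-- for the Buffer *argument* of the continuation inside Put.
data Buffer = Buffer {-# UNPACK #-} !Int   -- offset
                     {-# UNPACK #-} !Int   -- length

bufferLength :: Buffer -> Int
bufferLength (Buffer _ len) = len
```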

A couple of typos: the final sentence of section 1 has a spurious '(' before the full stop, and section 6.2 (Use-site arity raising) contains a doubled "than than".

Two thirds of the way through now... :-) The basic idea is very appealing. To make it possible (sanely) to express these calling convention optimisations we extend the language to express the before and after picture. It's like the addition of unboxed types/kinds to the intermediate language which enabled the optimiser to express the current worker wrapper transform.

You mention "The source language, HL" in the introduction without it being clear whether HL is an acronym we should know or just a name for a language you will introduce later. The language you do introduce later, however, is Featherweight Haskell (FH).

DuncanCoutts 10:22, 11 May 2009 (UTC)

I'm glad you found the paper interesting! Yes, HL is the old name for the source language -- we clearly failed to search-and-replace comprehensively enough.

Your post on the mailing list RE unpack on argument types is very interesting. I hadn't considered the possibility, but it certainly makes sense. I don't think it would be possible to UNPACK non-strict arguments without causing a semantic change (e.g. consider UNPACKing an Int argument to an Int# - unlifted types may not point to a thunk), but certainly UNPACKing strict stuff makes sense.
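The lifted/unlifted difference is observable even without UNPACK. A small sketch (the function names are invented):

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int#)

-- A lifted Int argument may be an unevaluated thunk, which the
-- function is free to ignore: ignoreLifted undefined is fine.
ignoreLifted :: Int -> Int
ignoreLifted _ = 0

-- An unlifted Int# must already be a value at the call site, so
-- translating the call above to this calling convention would have
-- to evaluate the argument first -- a semantic change for non-strict
-- code.
ignoreUnlifted :: Int# -> Int
ignoreUnlifted _ = 0
```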

Batterseapower 16:08, 11 May 2009 (UTC)

Another minor erratum for when I get around to fixing it: we probably need more kind restrictions in the typing rule to prevent things of kinds other than * from entering the value environment. In particular, consider data constructors.

Batterseapower 08:21, 16 May 2009 (UTC)

First, I have some minor comments on sections 3.5 and 3.6:

  • Typo: "delcaration" -> "declaration"
  • Typo: "the a" -> "a"
  • You claim that case expressions in Haskell perform evaluation, but that is not necessarily so.
  • You claim that it doesn't matter in which order arguments are evaluated in a pure strict language. While this holds if we only consider termination and side effects, evaluation order does matter if we want to reliably reason about space usage.
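On the case point: a case whose only alternative is a variable or wildcard pattern matches lazily in Haskell, so it performs no evaluation at all:

```haskell
-- Matching a wildcard (or plain variable) pattern never forces the
-- scrutinee, so this returns 42 rather than diverging on undefined:
noForce :: Int
noForce = case (undefined :: Int) of
  _ -> 42
```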

The related work section lacks a comparison to the work of Karl-Filip Faxén, which is probably the work most closely related to what you're doing.

Josef 15:27, 27 May 2009 (UTC)

Josef, thanks for your comments - I've corrected those typos and added a comparison to FLEET -- the new version is on the webpage. However, I don't understand your point about order of argument evaluation. If I have:

let x = e1 in
let y = e2 in

Then reordering the x and y assignments won't change the peak memory requirements, will it?

Batterseapower 21:49, 28 May 2009 (UTC)

Max, the reordering can indeed change the peak memory requirements. Let's see if I can make the example understandable. First, suppose that we have a variable 'a' which contains a data structure occupying n units of space. Secondly, assume that 'a' occurs in e1 but not in e2. Thirdly, let e1 compute something small, say an integer, and let e2 compute something big, taking up m units of space. Are you with me so far?

First, let's execute the code you have above. What will happen is that a is used while computing e1, whose result is then bound to x. The neat thing now is that we can garbage collect the structure in a, leaving us with only the integer in memory. Then we compute e2, which creates the big structure of m units. This results in a peak memory consumption of max(n,m). I hope that's clear.

Now, let's swap the two bindings as you suggest. This has the unfortunate effect that while e2 executes and builds up the structure of m units, we still have the structure referenced by the variable a in memory, because it will be used later when evaluating e1. This results in a peak memory usage of n+m, which can certainly be more than max(n,m).
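The scenario can be sketched concretely (all names and sizes here are made up, and the comments describe the idealized behaviour; GHC's laziness and optimizations can blur the picture in practice):

```haskell
-- 'a' is a structure of roughly n units needed only by e1; e2 builds
-- another structure of roughly m units. Forcing e1 before e2 lets the
-- GC reclaim 'a' before e2's structure exists (peak ~ max(n,m));
-- forcing e2 first keeps 'a' alive while e2 runs (peak ~ n+m).
example :: Int
example =
  let a  = [1 .. 1000] :: [Int]          -- the n-unit structure
      e1 = sum a                          -- small result; 'a' dead after
      e2 = length (replicate 1000 ())    -- the m-unit structure
  in e1 `seq` e2 `seq` (e1 + e2)
```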

I hope my explanation makes sense to you.

Josef 20:36, 10 June 2009 (UTC)

Yes Josef, that does make sense - thank you. I had stupidly not considered the effect of garbage collection!

Batterseapower 21:53, 12 June 2009 (UTC)

I'm confused - in the paper (sec 3.3, 2nd paragraph) you say you denote the fact that the heap has a mapping from x to v by 'H[x -> v]', and you denote the fact that the heap does not have such a mapping by 'H,x -> v'.

Then, in the definition of the operational semantics, the ENTER rule is:

<H,x -> (\<> -> e); x (); Stack> ----> <H,x -> Blackhole; e; update x. Stack>

However, wouldn't x need to be a closure (in this case, a thunk) in the heap in order for 'x ()' - to make sense? So shouldn't it be:

<H[x -> (\<> -> e)]; x (); Stack> ----> <H[x -> Blackhole]; e; update x. Stack>

If not, then maybe this should be noted or something (or at least explained to me?)

thoughtpolice 22:25, 9 June 2009 (UTC)

Hi Thoughtpolice,

Thanks for reading the paper! Let me see if I understand your confusion. This thing:

 <H, x -> (\<> -> e); x (); Stack>

Says that x () is executing in a heap (H, x -> (\<> -> e)). Furthermore, H does not itself contain a binding for x -- so the only binding for x is the "x -> (\<> -> e)", which is __additional__ to H but nonetheless "in scope" over the x () call.

So H excludes the x binding, and yet "x ()" still makes sense.

The alternative notation:

 H[x -> (\<> -> e)]

Says that H contains a binding for x to (\<> -> e), and when I refer to H later on I'll be referring to __that__ H, rather than one which lacks an x binding.
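One way to see the difference is to model the two notations directly. A toy sketch (all the names here are invented; thunk bodies are just strings for brevity):

```haskell
import qualified Data.Map as Map

type Var = String
data Closure = Thunk String   -- \<> -> e, with the body e as a string
             | Blackhole
             deriving (Eq, Show)
type Heap = Map.Map Var Closure

-- "H, x -> v": an additional binding; x must not already be bound in H.
extendFresh :: Heap -> Var -> Closure -> Heap
extendFresh h x v
  | x `Map.member` h = error "\"H, x -> v\" requires x fresh in H"
  | otherwise        = Map.insert x v h

-- The ENTER rule: entering the thunk bound to x blackholes it and
-- returns the body to evaluate next.
enter :: Heap -> Var -> Maybe (Heap, String)
enter h x = case Map.lookup x h of
  Just (Thunk e) -> Just (Map.insert x Blackhole h, e)
  _              -> Nothing
```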

Does that clear anything up?

(Alas your comments come too late to adjust the published version of the paper, which I submitted earlier this week).

Batterseapower 21:53, 12 June 2009 (UTC)