Difference between revisions of "Talk:Open research problems"

From HaskellWiki

Revision as of 21:36, 17 December 2021

Denotative languages and the I/O problem

My view is that the next logical step for programming is to split into two non-overlapping programming domains:

  • runtime building for …
  • … mathematical programming languages

Gabriella Gonzalez.

Let's assume:

  • a denotative language exists - here, it's called DL.
  • the implementation of DL is written in an imperative language - let's call that IL.

Let's also assume:

  • DL is initially successful.
  • solid-state Turing machines remain in use, so IL is still needed.

As time goes on, technology advances, which means an ever-expanding list of hardware to cater for. Unfortunately, the computing architecture remains mired in state and effects - supporting the new hardware usually means a visit to IL to add extra subroutines/procedures to the implementation (or modify existing ones).

DL will still attract some interest:

  • Parts of the logic required to support hardware can be more usefully written as DL definitions, to be called by the implementation where needed - there's no problem with imperative code calling denotative code (see the sketch below).
  • Periodic refactoring of the implementation reveals suitable candidates for replacement with calls to DL expressions.
  • DL is occasionally extended to cater for new patterns of use - mostly in the form of new abstractions and their supporting libraries, or (more rarely) changes to the language itself and therefore to its implementation in IL.

...in any case, DL remains denotative - if you want a computer to do something new to its surroundings, that usually means using IL to modify the implementation of DL.
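To illustrate the first point in the list above, here is a minimal sketch of pure logic being called from the imperative side, using Haskell as a stand-in for DL and C as a stand-in for IL; the module and function names are hypothetical:

{-# LANGUAGE ForeignFunctionInterface #-}
-- Hypothetical example: a pure piece of driver logic, exported through the
-- FFI so that the imperative implementation (assumed here to be C) can
-- call it where needed.
module DriverLogic where

import Foreign.C.Types (CInt)

-- Pure, denotative decision logic: choose the next transfer size from the
-- current queue depth and a hardware-imposed maximum.
nextBufferSize :: CInt -> CInt -> CInt
nextBufferSize queueDepth maxSize = min maxSize (64 * (queueDepth + 1))

-- Imperative code calling denotative code poses no difficulty - the
-- exported entry point is still just a function of its arguments.
foreign export ccall nextBufferSize :: CInt -> CInt -> CInt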

So the question is this: which language will programmers use more often, out of habit - DL or IL?

Here's a clue:

They [monadic types] are especially useful for structuring large systems. In fact, there's a danger of programming in this style too much (I know I do), and almost forgetting about the 'pure' style of Haskell.

Noel Winstanley.

Instead of looking at the whole system in a consistently denotational style (with simple & precise semantics) by using DL alone, most users would be working on the implementation using IL - being denotative makes for nice libraries, but getting the job done means being imperative. Is this an improvement over the current situation in Haskell? No - instead of having the denotative/imperative division in Haskell by way of types, users would be contending with that division at the language level in the form of differing syntax and semantics, annoying foreign calls, and so forth.
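For comparison, the division "by way of types" in present-day Haskell looks like this (a small illustrative fragment; the names are arbitrary):

square :: Int -> Int        -- denotative: an ordinary function
square n = n * n

report :: Int -> IO ()      -- imperative: an I/O action that uses it
report n = print (square n)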

The advent of Haskell's FFI is an additional aggravation - it allows countless more effect-centric operations to be accessed. Moving all of them into a finite implementation isn't just impractical - it's impossible.

But if you still think being denotative is worth all that bother (or you just want to prove me wrong :-) this could be a useful place to start:

  • OneHundredPercentPure: Gone is the catch-all lawless IO monad. And no trace of those scary unsafeThisAndThat functions. All functions must be pure.

Atravers Fri Oct 22 06:36:41 UTC 2021


Outsourcing the I/O problem

4 Compiling Agda programs

This section deals with the topic of getting Agda programs to interact with the real world. Type checking Agda programs requires evaluating arbitrary terms, and as long as all terms are pure and normalizing this is not a problem, but what happens when we introduce side effects? Clearly, we don't want side effects to happen at compile time. Another question is what primitives the language should provide for constructing side effecting programs. In Agda, these problems are solved by allowing arbitrary Haskell functions to be imported as axioms. At compile time, these imported functions have no reduction behaviour, only at run time is the Haskell function executed.

Dependently Typed Programming in Agda (http://www.cse.chalmers.se/~ulfn/papers/afp08/tutorial.pdf), Ulf Norell and James Chapman [emphasis added].

In Haskell:

  • just extend the FFI enough to replace the usually-abstract I/O definitions with calls to foreign definitions (a further sketch follows after this list):
instance Monad IO where
    return = primUnitIO
    (>>=)  = primBindIO
    
foreign import ccall unsafe primUnitIO :: a -> IO a
foreign import ccall unsafe primBindIO :: IO a -> (a -> IO b) -> IO b
                
  • the IO type is then just a type-level tag, as specified in the Haskell 2010 report:
data IO a  -- that's all folks!
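
Continuing the sketch under the same assumptions (primPutChar and primGetChar are hypothetical foreign names): the remaining primitive I/O actions would be supplied in the same way, as foreign imports with no Haskell definitions of their own.

-- Hypothetical further primitives, imported rather than defined in Haskell:
foreign import ccall unsafe primPutChar :: Char -> IO ()
foreign import ccall unsafe primGetChar :: IO Char

putChar :: Char -> IO ()
putChar = primPutChar

getChar :: IO Char
getChar = primGetChar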

Voilà! In this example, the I/O problem has been outsourced to C - if you're not happy with C's solution to the I/O problem, just use another programming language for the Haskell implementation: there are plenty of them to choose from (...but don't use Haskell, to avoid going ⊥-up ;-).

By defining them in this way, IO and its actions in Haskell can also be thought of as being "axiomatic": they have no effect when a Haskell program is compiled; only at run time is the foreign definition executed, and only then do its effects occur.

Atravers Thu Dec 9 01:55:47 UTC 2021