# Difference between revisions of "Talk:Open research problems"


## Revision as of 22:43, 28 January 2022

## Denotative languages and the I/O problem

My view is that the next logical step for programming is to split into two non-overlapping programming domains:

- runtime building for …
- … mathematical programming languages

Let's assume:

- a denotative language exists; here, it's called *DL*.
- the implementation of *DL* is written in an imperative language; let's call that *IL*.

Let's also assume:

- *DL* is initially successful.
- solid-state Turing machines remain in use, so *IL* is still needed.

As time goes on, technology advances, which means an ever-expanding list of hardware to cater for. Unfortunately, the computing architecture remains mired in state and effects: supporting the new hardware usually means a visit to *IL* to add extra subroutines/procedures (or modify existing ones) in the implementation.

*DL* will still attract some interest:

- Parts of the logic required to support hardware can be more usefully written as *DL* definitions, to be called by the implementation where needed; there's no problem with imperative code calling denotative code.
- Periodic refactoring of the implementation reveals suitable candidates for replacement with calls to *DL* expressions.
- *DL* is occasionally extended to cater for new patterns of use, mostly in the form of new abstractions and their supporting libraries, or (more rarely) the language itself, and therefore its implementation in *IL*.
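The point about imperative code calling denotative code can be sketched in Haskell itself (the names here are illustrative, not taken from any actual implementation): treat a pure definition as the *DL* side, and the `IO`-based `main` as the *IL* side:

```haskell
-- The "DL" side: a pure, denotative definition (illustrative name).
checksum :: String -> Int
checksum = sum . map fromEnum

-- The "IL" side: imperative code calling the denotative definition.
main :: IO ()
main = do
    let line = "hello"
    print (checksum line)
```

The call in the opposite direction - denotative code making use of imperative operations - is exactly where the difficulty lies.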

...in any case, *DL* remains denotative - if you want a computer to do something new to its surroundings, that usually means using *IL* to modify the implementation of *DL*.

So the question is this: which language will programmers use more often, out of habit - *DL* or *IL*?

Here's a clue:

They [monadic types] are especially useful for structuring large systems. In fact, there's a danger of programming in this style too much (I know I do), and almost forgetting about the 'pure' style of Haskell.

Instead of looking at the whole system in a consistently denotational style (with simple & precise semantics) by using *DL* alone, most users would be working on the implementation using *IL* - being denotative makes for nice libraries, but getting the job done means being imperative. Is this an improvement over the current situation in Haskell? No - instead of having the denotative/imperative division in Haskell by way of types, users would be contending with that division *at the language level* in the forms of differing syntax and semantics, annoying foreign calls, and so forth.

The advent of Haskell's FFI is an additional aggravation - it allows countless more effect-centric operations to be accessed. Moving **all** of them into a *finite* implementation isn't just impractical - it's impossible.
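As a small, concrete illustration of what the FFI opens up (a standard textbook-style example, not taken from the text above): a single declaration imports a C function, and effect-centric imports are simply given `IO` types:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble, CInt)

-- A pure foreign import: C's sin() from the maths library.
foreign import ccall unsafe "math.h sin" c_sin :: CDouble -> CDouble

-- An effect-centric foreign import: C's getchar(), so it gets an IO type.
foreign import ccall unsafe "stdio.h getchar" c_getchar :: IO CInt

main :: IO ()
main = print (c_sin 0)  -- prints 0.0; the foreign call only happens at run time
```

Every C library reachable this way adds more such operations, which is why moving them all into the implementation is a non-starter.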

But if you still think being denotative is worth all that bother (or you just want to prove me wrong :-) this could be a useful place to start:

`OneHundredPercentPure`: Gone is the catch-all lawless `IO` monad. And no trace of those scary `unsafeThisAndThat` functions. All functions must be pure.

— Atravers Fri Oct 22 06:36:41 UTC 2021

## Outsourcing the I/O problem

**4 Compiling Agda programs**

This section deals with the topic of getting Agda programs to interact
with the real world. Type checking Agda programs requires evaluating
arbitrary terms, and as long as all terms are pure and normalizing this is
not a problem, but what happens when we introduce side effects? Clearly,
we don't want side effects to happen at compile time. Another question is
what primitives the language should provide for constructing side effecting
programs. In Agda, these problems are solved by *allowing arbitrary
Haskell functions to be imported as axioms.* At compile time, these imported
functions have no reduction behaviour, *only at run time is the
Haskell function executed*.

`Dependently Typed Programming in Agda, Ulf Norell and James Chapman [emphasis added].`

In Haskell:

- just extend the FFI enough to replace the *usually*-abstract I/O definitions with calls to foreign definitions:

      instance Monad IO where
          return = primUnitIO
          (>>=)  = primBindIO

      foreign import ccall unsafe primUnitIO :: a -> IO a
      foreign import ccall unsafe primBindIO :: IO a -> (a -> IO b) -> IO b
      ⋮

- the `IO` type is then just a type-level tag, as specified in the Haskell 2010 report:

      data IO a -- that's all folks!

*Voilà!* In this example, the I/O problem has been outsourced to C - if you're not happy with C's solution to the I/O problem, just use another programming language for the Haskell implementation: there are plenty of them to choose from (...but don't use Haskell, to avoid going `⊥`-up ;-).

By defining them in this way, `IO` and its actions in Haskell can also be thought of as being "axiomatic": they have no effect when a Haskell program is compiled; only at run time is the foreign definition executed and its effects occur.
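That "axiomatic" reading can be modelled in miniature (a toy model with invented names, not the report's actual `IO`): a value of the type is only a description of effects, and nothing happens until an interpreter on the implementation's side runs it:

```haskell
-- A toy, first-order model of an "axiomatic" I/O type: building a
-- Program has no effects; it is only a description.
data Program = Done | Output String Program

greet :: Program
greet = Output "hello" (Output "world" Done)

-- Only the interpreter (the implementation's side) produces effects.
run :: Program -> IO ()
run Done         = pure ()
run (Output s k) = putStrLn s >> run k

-- A pure interpreter, handy for testing: collect the outputs instead.
outputs :: Program -> [String]
outputs Done         = []
outputs (Output s k) = s : outputs k

main :: IO ()
main = run greet
```

Swapping `run` for `outputs` changes what the same description means - which is the sense in which the description itself is effect-free.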

— Atravers Thu Dec 9 01:55:47 UTC 2021

## Mathematics and the I/O problem

Mathematics is abstract:

In a preliminary sense, mathematics is abstract because it is studied using highly general and formal resources.

`The Applicability of Mathematics, from the Encyclopedia of Philosophy.`

In particular, it abstracts from the existence of an external environment. This poses a challenge for languages like Haskell which pursue the mathematical notion of a function (rather than use subroutines or procedures):

How must interactions between a program and an external environment (consisting of, e.g., input/output-devices, file systems, ...) be described in a programming language that abstracts from the existence of an outside world?

`Functions, Frames and Interactions, Claus Reinke (page 10 of 210).`

So the seeking of a "pure"/"mathematical"/"functional"/"formal"/... **general-purpose** solution to I/O which __isn't__ awkward should focus on finding an extension of mathematics which can elegantly describe interactions with external environments:

- If one exists, denotational semantics can then be used to map it into Haskell.
- If one cannot exist (like the solution to the halting problem), then implementors everywhere can just select the most-suitable (or least-ugly) model of I/O as they see fit.

— Atravers Fri Jan 28 22:33:43 UTC 2022