# Runners in action

- 28 October 2019
- Programming languages, Software, Publications

It has been almost a decade since Matija Pretnar and I posted the first blog posts about programming with algebraic effects and handlers and the programming language Eff. Since then handlers have become a well-known control mechanism in programming languages.

Handlers and monads excel at *simulating* effects, either in terms of other
effects or as pure computations. For example, the familiar state
monad implements mutable state with
(pure) state-passing functions, and there are many more examples. But I have
always felt that handlers and monads are not very good at explaining how a
program interacts with its external environment and how it gets to perform
*real-world* effects.
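To make the "simulation" point concrete, here is the familiar state monad as a minimal, self-contained Haskell sketch (my own definitions, not tied to any particular library): mutable state is implemented entirely by pure state-passing functions, with no real-world effect anywhere in sight.

```haskell
-- A state monad: "mutable" state simulated by pure state-passing functions.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g =
    State (\s -> let (h, s')  = f s
                     (a, s'') = g s'
                 in (h a, s''))

instance Monad (State s) where
  State g >>= k = State (\s -> let (a, s') = g s in runState (k a) s')

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s = State (\_ -> ((), s))

-- A "mutable" counter, entirely pure: returns the old value, increments the state.
tick :: State Int Int
tick = do n <- get
          put (n + 1)
          return n
```

Running `tick` with `runState tick 41` yields `(41, 42)`: the "effect" is nothing but a pure function from states to value-state pairs.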

Danel Ahman and I have worked for a while on the question of how to better
model external resources and which programming constructs are appropriate for
working with them. The time is right for us to
show what we have done so far. The theoretical side of things is explained in
our paper **Runners in action**, Danel
implemented a Haskell library
**Haskell-Coop** to go with the
paper, and I implemented a programming language
**Coop**.

General-purpose programming languages, even the so-called pure ones, have to have
*some* account of interaction with the external environment. A popular choice is
to provide a foreign-function interface that connects the language with an
external library, and through it with an operating system and the universe. A
more nuanced approach would differentiate between a function that just happens
to be written in a different language, and one that actually performs an effect.
The latter kind is known as an *algebraic operation* in the
algebraic-effects-and-handlers way of doing things.

A *bad* approach to modeling the external world is to pretend that it is
internal to the language. One would think that this is obvious but it is not.
For instance, Haskell represents the interface to the external world through the
`IO` monad.
But what is this monad *really*? How does it get to interact with the external
world? The Haskell Wiki page which answers this question has the following
disclaimer:

"Warning: The following story about `IO` is incorrect in that it cannot actually
explain some important aspects of `IO` (including interaction and concurrency).
However, some people find it useful to begin developing an understanding."

The Wiki goes on to say how `IO` is a bit like a state monad with an imaginary
`RealWorld` state, except that of course `RealWorld` is not really a Haskell
type, or at least not one that actually holds the state of the real world.

The situation with Eff is not much better: it treats some operations at the
top level in a special way. For example, if `print` percolates to the top
level, it turns into a *real* `print` that actually causes an effect. So it looks like
there is some sort of "top level handler" that models the real world, but that
cannot be the case: a handler may discard the continuation or run it twice, but
Eff hardly has the ability to discard the external world, or to make it
bifurcate into two separate realities.

If the `IO` monad is not an honest monad and a top-level handler is not really a
handler, then what we have is a case of ingenious hackery in need of proper
programming-language design.

How precisely does an operation call in the program cause an effect in the
external world? As we have just seen, some sort of runtime environment or top
level needs to relate it to the external world. From the viewpoint of the
program, the external world appears as state which is not directly accessible,
or even representable in the language. The effect of calling an operation
$\mathtt{op}(a,\kappa)$ is to change the state of the world, and to get back a
result. We can model the situation with a map $\overline{\mathtt{op}} : A \times
W \to B \times W$, where $W$ is the set of all states of the world, $A$ is the
set of parameters, and $B$ the set of results of the operation. The operation
call $\mathtt{op}(a, \kappa)$ is "executed" in the current world $w \in W$ by
computing $\overline{\mathtt{op}}(a,w) = (b, w')$ to get the next world $w'$ and
a result $b$. The program then proceeds with the continuation $\kappa\,b$ in the
world $w'$. Notice how the world $w$ is an external entity that is manipulated
by the external map $\overline{\mathtt{op}}$ realistically in a *linear*
fashion, i.e., the world is neither discarded nor copied, just transformed.
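This execution model is easy to sketch in Haskell. In the fragment below all names (`Comp`, `World`, `coop`, `run`) are my own illustrative choices, and the signature has a single integer-valued operation for simplicity: a computation is either a value or an operation call paired with a continuation, a co-operation maps a parameter and a world to a result and a new world, and `run` threads the world through linearly, never discarding or duplicating it.

```haskell
-- A computation over a signature with one operation:
-- either a final value, or an operation call op(a, k).
data Comp a = Return a | Op Int (Int -> Comp a)

-- The external world; here, illustratively, a log of parameters seen so far.
type World = [Int]

-- A co-operation: A x W -> B x W.
type CoOp = Int -> World -> (Int, World)

-- Example co-operation: result is a fresh id, the world records the parameter.
coop :: CoOp
coop a w = (length w, w ++ [a])

-- Executing a computation threads the world linearly through co-operations:
-- each Op node transforms the current world exactly once, then continues.
run :: Comp a -> World -> (a, World)
run (Return x) w = (x, w)
run (Op a k)   w = let (b, w') = coop a w in run (k b) w'
```

For instance, `run (Op 10 (\b -> Op 20 (\c -> Return (b + c)))) []` produces `(1, [10, 20])`: each call transformed the world exactly once, in order.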

What I have just described is *not* a monad or a handler, but a *comodel*, also
known as a *runner*, and the map $\overline{\mathtt{op}}$ is not an operation,
but a *co-operation*. This was all observed a while ago by Gordon Plotkin and
John Power, and by Tarmo Uustalu, and later generalized by Rasmus Møgelberg and
Sam Staton; see our paper for references. Perhaps we should replace "top-level"
handlers and "special" monads with runners?

Danel and I worked out how *effectful* runners (a generalization of runners that
supports other effects in addition to state) provide a mathematical model of
resource management. They also give rise to a programming concept that models
top-level external resources and allows programmers to modularly define their
own “virtual machines” and run code inside them. Such virtual machines can
be nested and combined in interesting ways. We capture the core ideas of
programming with runners in an equational calculus $\lambda_{\mathsf{coop}}$,
which guarantees the linear use of resources and the execution of finalization code.

An interesting practical aspect of $\lambda_{\mathsf{coop}}$, one begotten by
theory, is the modeling of extra-ordinary circumstances. The external environment
should have the ability to signal back to the program an extra-ordinary
circumstance that prevents it from returning a result. This is normally
accomplished by an exception mechanism, but since the external world is
stateful, there are *two* ways of combining it with exceptions, namely the sum
and the tensor of algebraic theories. Which one is the right one? Both! After a
bit of head scratching we realized that the two options are (analogous to) what
is variously called "checked" and "unchecked" exceptions, errors and signals,
or synchronous and asynchronous exceptions.
And so we included in $\lambda_{\mathsf{coop}}$ both mechanisms: ordinary
*exceptions*, which are special events that disrupt the flow of user code but
can be caught and attended to, and *signals* which are unrecoverable failures
that irrevocably *kill* user code, but can still be finalized. We proved a finalization
theorem which gives strong guarantees about resources always being properly
finalized.
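The sum-versus-tensor distinction can be made concrete in a small Haskell sketch (the type and names below are my own illustration, not Coop's actual API): an exceptional outcome still carries the world state, so the caller can recover and continue (the tensor with state), whereas a signal outcome carries no world at all, so nothing remains to be done except finalization (the sum with state).

```haskell
-- Possible outcomes of running user code in a world of type w.
data Result w a
  = Val a w       -- normal return: value together with the final world
  | Exc String w  -- exception: the world survives, so recovery is possible
  | Sig String    -- signal: the world is gone; only finalization remains

-- Finalization handles all three outcomes; note that the signal case
-- receives no world, which is exactly what makes signals unrecoverable.
finalize :: (a -> w -> r) -> (String -> w -> r) -> (String -> r)
         -> Result w a -> r
finalize onVal onExc onSig res = case res of
  Val a w -> onVal a w
  Exc e w -> onExc e w
  Sig s   -> onSig s
```

The totality of `finalize` is a miniature of the finalization theorem: every outcome, including a fatal signal, passes through exactly one finalizing clause.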

If you are familiar with handlers, as a first approximation you can think of
runners as handlers that use the continuation at most once in a tail-call
position. Many handlers are already of this form, but not all. Non-determinism,
probability, and handlers that hijack the continuation (`delimcc`, threads, and
selection functionals) fall outside the scope of runners. Perhaps in the
future we can resurrect some of these (in particular it seems like threads, or
even some form of concurrency would be worth investigating). There are many
other directions of possible future investigations: efficient compilation, notions
of correctness, extensions to the simple effect subtyping discipline that we
implemented, etc.

To find out more, we kindly invite you to have a look at the paper, and to try out the implementations. The prototype programming language Coop implements and extends $\lambda_{\mathsf{coop}}$. You can start by skimming the Coop manual and the examples. If you prefer to experiment on your own, you might prefer the Haskell-Coop library, as it allows you to combine runners with everything else that Haskell has to offer.

## Comments

Very interesting, thank you.

FYI, the link to the paper is broken.

regards, -F

Thanks, I fixed the link.

So previous work on the enriched effect calculus by Møgelberg, Staton and others proves a representation theorem that "all monads are linear state monads". Can you take advantage of this in some way to share some code between uses of runners and uses of effects?

@Max: I am not sure what you mean by "share some code"? Do you mean "combine in a single programming language" or literally "share code" as in "avoid duplication of code"? Also, what do you mean by "uses of effects"? I guess I just don't understand your question.