I had the honor of lecturing at the Oregon Programming Language Summer School 2018 on the topic of algebraic effects and handlers. The notes, materials, and lectures are available online:

- the GitHub repository with the course material
- the OPLSS lecture materials, including notes and video recordings of the lectures

I gave four lectures which started with the mathematics of algebraic theories, explained how they can be used to model computational effects, how we make a programming language out of them, and how to program with handlers.

Nice to see that the lectures were recorded!

I just looked at the associated notes https://arxiv.org/abs/1807.05923, which are very nice. I feel like I am starting to finally understand algebraic effects. I have heard about them before but didn’t really understand what benefit you get from observing that many of the “computational” monads (e.g. as used in Haskell) happen to be algebraic. After reading your notes I think I see (at least) three such benefits: (1) an algebraic presentation of a monad specifies it in terms of what operations can be invoked in it and what laws they satisfy, which is what you want in practice for the syntax of a programming language, as opposed to Haskell-style monads which are specified more “semantically” and one then has to extract relevant “operations” in an ad hoc manner; (2) algebraic monads can be combined in a universal way, namely the monad coproduct, which has a sensible syntactic counterpart (put together the operations and axioms with no interaction); (3) one obtains the useful general notion of handler. Is that correct? Are there other benefits too?
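To make benefit (1) concrete, here is a minimal sketch in Python (not the formalism of the notes — the names `Return`, `Op`, and `run_state` are invented for illustration). A computation is a tree of operations, and the state monad is presented by the two operations `get` and `put`; a handler then interprets those operations by folding over the tree:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any          # a pure computation that just returns a value

@dataclass
class Op:
    name: str           # which operation is invoked: "get" or "put"
    arg: Any            # the operation's argument (None for "get")
    k: Callable         # continuation receiving the operation's result

# Example program: read the state, increment it, return the old value.
prog = Op("get", None, lambda s:
       Op("put", s + 1, lambda _: Return(s)))

def run_state(comp, s):
    """Interpret get/put by threading a state through the tree."""
    while isinstance(comp, Op):
        if comp.name == "get":
            comp = comp.k(s)
        elif comp.name == "put":
            s, comp = comp.arg, comp.k(None)
    return comp.value, s

# run_state(prog, 41) == (41, 42)
```

The point is that the syntax of the program mentions only the operations; the monad itself never appears, which matches how a programming language would expose the effect.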

Regarding (2), in Haskell one often has the problem of “combining” monads, which has no general solution; algebraic effects seem to solve this. Is the algebraic “monad coproduct” approach at all related to Haskell’s “monad transformers” that allow certain monads to be combined with any other monad?
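As a rough illustration of the syntactic side of the coproduct (again an invented Python sketch, not the notes' formalism): combining state with exceptions amounts to taking the disjoint union of their operations, with no interaction imposed, and a single handler can interpret both signatures at once:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str           # "get"/"put" from state, "raise" from exceptions
    arg: Any
    k: Callable

# The combined signature is just the union of the two sets of operations.
prog = Op("get", None, lambda s:
       Op("raise", "neg", lambda _: Return(None)) if s < 0 else
       Op("put", s * 2, lambda _: Return(s)))

def run(comp, s):
    """Handle both effect signatures in one pass."""
    while isinstance(comp, Op):
        if comp.name == "get":
            comp = comp.k(s)
        elif comp.name == "put":
            s, comp = comp.arg, comp.k(None)
        elif comp.name == "raise":
            return ("error", comp.arg)
    return ("ok", comp.value, s)

# run(prog, 5)  == ("ok", 5, 10)
# run(prog, -1) == ("error", "neg")
```

Contrast this with monad transformers, where one must choose an order of stacking (`StateT s (Either e)` versus `ExceptT e (State s)`) and the two orders give different semantics.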

Regarding (3), I am a little puzzled by the notion of handler from a mathematical perspective. It seems to me that the mathematically natural way to talk about a homomorphism from F_T(X) to F_{T’}(X’) would be to ask for a monad morphism from T to T’, thereby making every T’-model into a T-model functorially, and then a T-homomorphism from F_T(X) to F_{T’}(X’) (hence a set-map from X to F_{T’}(X’)). Syntactically, I think this means that the operation handlers would not be allowed to know or use the “return type” of the with-handle expression, instead being forced to return a value constructed in a generic way from the supplied continuation. Of course this would rule out basic things like exception handlers (Example 3.1), so it’s not as useful. But can you give any more mathematical intuition for the notion of handler?

Regarding (2): yes, the fact that algebraic effects can be combined more easily is often considered to be an advantage over monads.

Regarding (3): you’re right, with your notion of handler the handlers would have to be generic in the return values. Many handlers are like that (state, I/O), but some important ones are not; for instance, a handler which handles non-deterministic choice and computes the maximum possible return value relies on the fact that return values can be compared. I think handlers arise as a combination of two requirements: we want to use free algebras to model computations, and we want handlers to be algebra homomorphisms. Assuming these requirements, we have several options. Of these, the one presented is the one that programmers seem to be most happy with. This is an empirical observation. (In fact, some programmers want “shallow” handlers which correspond to maps between carriers, ignoring algebra structures.) I know this is not much of a mathematical explanation, but you are treading into computer science territory :)
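The maximum-over-choice handler mentioned above can be sketched as follows (a Python toy encoding with invented names `Return`, `Op`, `max_result`, not the notes' syntax). Note that the handler applies `max` to the return values themselves, so it is exactly the kind of handler that cannot be generic in the return type:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str           # the single operation "choose"
    arg: Any
    k: Callable         # continuation receiving the boolean outcome

# A program making two binary nondeterministic choices.
prog = Op("choose", None, lambda b1:
       Op("choose", None, lambda b2:
       Return((10 if b1 else 3) + (1 if b2 else 7))))

def max_result(comp):
    """Explore both branches of every choice and keep the maximum
    return value -- this compares return values, so it only works
    when the return type is ordered."""
    if isinstance(comp, Return):
        return comp.value
    if comp.name == "choose":
        return max(max_result(comp.k(True)), max_result(comp.k(False)))

# max_result(prog) == 17   (the branch 10 + 7)
```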