I show how monads in Haskell can be used to structure infinite search algorithms, and indeed get them for free. This is a follow-up to my blog post Seemingly impossible functional programs. In the two papers Infinite sets that admit fast exhaustive search (LICS07) and Exhaustible sets in higher-type computation (LMCS08), I discussed what kinds of infinite sets admit exhaustive search in finite time, and how to systematically build such sets. Here I build them using monads, which makes the algorithms more transparent (and economical).
An abstract data type for searchable sets
I want to define a Haskell type constructor S such that, for any type a, the elements of type S a are the searchable subsets of a. Thus, S is like the powerset constructor in set theory, except that not all subsets are allowed here. Like the powerset construction, S will be a monad. Before implementing S concretely, I list the operations I would like to be able to perform with searchable sets:
search    :: S a -> (a -> Bool) -> Maybe a
image     :: (a -> b) -> S a -> S b
singleton :: a -> S a
union     :: S a -> S a -> S a
bigUnion  :: S (S a) -> S a
I adopt the notational convention, borrowed from the list data type, that if x is used to denote an element of type a, then the plural xs is used to denote a searchable set of elements of a. So, notationally speaking, if x :: a then xs :: S a.
Given a set xs :: S a and a predicate p :: a -> Bool, we require that the expression search xs p either produces Just an example of some x :: a in the set xs with p x = True, if such an element exists, or else Nothing. For a function f :: a -> b and a set xs :: S a, the expression image f xs denotes the f-image of the set xs in the sense of set theory, that is, the set ys :: S b such that y ∈ ys iff f(x) = y for some x ∈ xs. The other operations construct singletons, binary unions, and unions of sets of sets, also in the usual sense of set theory.
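To make these specifications concrete, here is a small finite illustration. It repeats the representation and definitions developed later in this post (and uses the later doubleton constructor in place of a union of singletons), so that the fragment compiles on its own; it is a sanity check, not part of the development.

```haskell
-- Representation and operations repeated from later in the post,
-- so that this fragment stands alone.
newtype S a = S { find :: (a -> Bool) -> a }

search :: S a -> (a -> Bool) -> Maybe a
search xs p = let x = find xs p in if p x then Just x else Nothing

singleton :: a -> S a
singleton x = S (\_ -> x)

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

image :: (a -> b) -> S a -> S b
image f xs = S (\q -> f (find xs (\x -> q (f x))))

main :: IO ()
main = do
  -- Searching the set {1,2} for an even element:
  print (search (doubleton 1 2) even)              -- Just 2
  -- The image of {1,2} under doubling is {2,4}, which has no odd element:
  print (search (image (* 2) (doubleton 1 2)) odd) -- Nothing
```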
The monad
It is defined from the abstract data type as
instance Monad S where
  return   = singleton
  xs >>= f = bigUnion (image f xs)
Notice that xs :: S a and f :: a -> S b, and hence image f xs :: S (S b).
Consequences of having a monad
Finite products of searchable sets are searchable:
times :: S a -> S b -> S (a,b)
xs `times` ys = do
  x <- xs
  y <- ys
  return (x,y)
That is, the elements of the set xs `times` ys are the pairs (x,y) with x ∈ xs and y ∈ ys. This turns out to be the same algorithm given in the above two papers, although the definition given there looks very different, as it doesn’t use monads.
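As a quick sanity check, the binary product can be exercised on finite sets. The sketch below repeats the representation from later in the post, and, to keep the fragment self-contained without the Monad instance, defines times directly in terms of image and bigUnion, which is what the do-notation above desugars to.

```haskell
newtype S a = S { find :: (a -> Bool) -> a }

search :: S a -> (a -> Bool) -> Maybe a
search xs p = let x = find xs p in if p x then Just x else Nothing

forsome :: S a -> (a -> Bool) -> Bool
forsome xs p = p (find xs p)

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

image :: (a -> b) -> S a -> S b
image f xs = S (\q -> f (find xs (\x -> q (f x))))

bigUnion :: S (S a) -> S a
bigUnion xss = S (\p -> find (find xss (\xs -> forsome xs p)) p)

-- The binary product, written without do-notation:
times :: S a -> S b -> S (a, b)
xs `times` ys = bigUnion (image (\x -> image (\y -> (x, y)) ys) xs)

main :: IO ()
main =
  -- Search {0,1} x {0,1} for a pair of distinct components:
  print (search (doubleton 0 1 `times` doubleton 0 1) (\(x, y) -> x /= y))
  -- Just (0,1)
```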
Using lazy lists, the binary product can be iterated to finite and infinite products as follows:
product :: [S a] -> S [a]
product []       = return []
product (xs:yss) = do
  x  <- xs
  ys <- product yss
  return (x:ys)
Notice that

- xs :: S a is a set,
- x :: a is an element of xs,
- yss :: [S a] is a list of sets,
- product yss :: S [a] is a set of lists,
- ys :: [a] is a list.
Again, this turns out to be the same infinite product algorithm given in the above papers, more precisely the one given in [LICS07], page 9, or [LMCS08], Section 8.1. I am working on a paper addressed to the Haskell and functional programming communities that will spell out this and other claims and details.
Haskell’s sequence
However, there is no need to define product, because it is already defined in the Haskell’98 Standard Prelude under the name sequence (using foldr and the monad operations rather than recursion and do-notation, but this makes no difference), and moreover for any monad, not just S.
Example
The Cantor space of infinite sequences of binary digits, discussed in the previous blog post, can be constructed as follows:
bit :: S Int
bit = singleton 0 `union` singleton 1

cantor :: S [Int]
cantor = sequence (repeat bit)
This amounts to cantor = bit × bit × bit × ⋯
Quantifiers
Notice that we can existentially and universally quantify over searchable sets in an algorithmic fashion, even when they are infinite, as is the case for the Cantor space:
forsome, forevery :: S a -> (a -> Bool) -> Bool

forsome xs p = case search xs p of
                 Nothing -> False
                 Just _  -> True

forevery xs p = not (forsome xs (\x -> not (p x)))
Here forsome xs and forevery xs quantify over the set xs:

- forsome xs p = True iff p x = True for some x ∈ xs,
- forevery xs p = True iff p x = True for every x ∈ xs.
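A tiny finite check of the two quantifiers (repeating the implementation given below in the post, including the shortcut definition of forsome, so the fragment compiles on its own):

```haskell
newtype S a = S { find :: (a -> Bool) -> a }

-- The shortcut implementations from later in the post:
forsome, forevery :: S a -> (a -> Bool) -> Bool
forsome xs p  = p (find xs p)
forevery xs p = not (forsome xs (\x -> not (p x)))

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

bit :: S Int
bit = doubleton 0 1

main :: IO ()
main = do
  print (forevery bit (\x -> x * x == x))  -- True: holds for both 0 and 1
  print (forsome bit (> 1))                -- False: no binary digit exceeds 1
```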
(The function forsome is in fact a monad morphism from S into the continuation monad.)
Contrived examples that nevertheless make a point
Consider the program
f :: Int -> Int -> Int -> Int
f n2 n1 n0 = 4 * n2 + 2 * n1 + n0

p :: Int -> Bool
p k = forevery cantor (\xs -> forsome cantor (\ys ->
        f (xs !! 10) (xs !! 100) (xs !! 1000) == k - f (ys !! 2000) (ys !! 200) (ys !! 20)))
Then the expressions p 6, p 7 and p 8 evaluate to False, True and False respectively, together in a fraction of a second using the Glasgow Haskell interpreter ghci, with our implementation of the abstract data type S given below.
As another example, consider
q = forevery cantor (\us -> forsome cantor (\vs ->
      forevery cantor (\xs -> forsome cantor (\ys ->
        us !! 333 + xs !! 17000 == vs !! 3000 * ys !! 1000))))
Then q evaluates to True in less than a second.
These examples are certainly contrived, but they do illustrate that something nontrivial is going on from an algorithmic point of view, as the quantification algorithms have no access to the source code of the predicates they apply to. I’ll discuss more meaningful examples another time. (I can use the quantifiers to find bugs in programs for exact real number computation using infinite lazy lists of digits. An application by Alex Simpson to integration was mentioned in the previous post.)
Representation of searchable sets
How should we represent infinite searchable sets to get an implementation of our abstract data type?
Obstacles
An infinite searchable set can be uncountable, like the Cantor space, and hence we cannot implement it using a lazy list of its elements. Moreover, this wouldn’t help regarding exhaustive search in finite time. It can be argued that the computable elements of the Cantor space form a countable set. However, they are not computably countable (or r.e.), with the very same proof given by Cantor in the noncomputable case, by diagonalization. (Exercise: write a Haskell program that given any infinite lazy list of elements of the Cantor space, produces an element that is not in the list. This amounts to an implementation of Cantor’s proof of the nondenumerability of the Cantor space.)
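One possible solution sketch for the exercise (there are many): flip the n-th digit of the n-th sequence. The result differs from the n-th element of the given list at position n, and hence belongs to none of them, which is exactly Cantor's diagonal argument.

```haskell
-- Given an infinite lazy list of Cantor-space points (each an infinite
-- list of binary digits), produce a point that differs from the n-th
-- point at position n, so it cannot occur anywhere in the list.
diag :: [[Int]] -> [Int]
diag xss = [1 - (xss !! n !! n) | n <- [0 ..]]

main :: IO ()
main =
  -- Diagonalizing against the constant list of all-zero sequences
  -- yields the all-one sequence:
  print (take 5 (diag (repeat (repeat 0))))  -- [1,1,1,1,1]
```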
Almost a solution
The crucial operation we can perform with a searchable set is to search it. Hence it is natural to represent a searchable set by a search function. The following is what this line of reasoning tempted me to do at the beginning of this work:
newtype S a = S { search :: (a -> Bool) -> Maybe a }
or, equivalently,
newtype S a = S ((a -> Bool) -> Maybe a)

search :: S a -> (a -> Bool) -> Maybe a
search (S searcher) = searcher
This has the advantage that it accounts for the empty set. However, it breaks the theorem that countable products of searchable sets are searchable, precisely because the empty set is present. In fact, if one of the factors is empty, then the product is empty. But an empty set may be present arbitrarily far away in the lazy list of sets, and the search algorithm can only output the first element of the lazy list when it knows that the product is nonempty. Hence it can never output it, because it can never determine nonemptiness of the product. In practice, we get an infinite loop when we run the product algorithm sequence if we work with the above implementation of S. However, with the exception of countable products, everything we do here works with the above implementation. But infinite products are what is needed to get infinite searchable sets.
The solution
Hence we do as we did in the above two papers instead:
newtype S a = S { find :: (a -> Bool) -> a }
or, equivalently,
newtype S a = S ((a -> Bool) -> a)

find :: S a -> (a -> Bool) -> a
find (S finder) = finder
This forces the sets to be nonempty, but has a defect: it also forces the find operator to tell lies when there is no correct element it can choose. We impose a requirement to overcome this: although lies are allowed, find xs p must always choose an element of the set xs, and if there is an element x ∈ xs with p(x) = True, then the answer must be honest. Given this requirement, we can easily check whether find is lying, and this is what I do in the implementation of search:
search :: S a -> (a -> Bool) -> Maybe a
search xs p = let x = find xs p in if p x then Just x else Nothing
With this representation of searchable sets, we have a shortcut for the implementation of the existential quantifier
forsome :: S a -> (a -> Bool) -> Bool
forsome xs p = p (find xs p)
Also notice that the above implementation of search is equivalent to

search xs p = if forsome xs p then Just (find xs p) else Nothing
Implementation of the abstract data type
Given the above representation of searchable sets, we have already implemented the first of the operations that define our abstract data type:
search    :: S a -> (a -> Bool) -> Maybe a
image     :: (a -> b) -> S a -> S b
singleton :: a -> S a
union     :: S a -> S a -> S a
bigUnion  :: S (S a) -> S a
The others can be implemented as follows:
image :: (a -> b) -> S a -> S b
image f xs = S (\q -> f (find xs (\x -> q (f x))))
That is, given a predicate q :: b -> Bool, to find y ∈ f(xs) such that q(y) holds, first find x ∈ xs such that q(f x) holds, and then apply f to this x.
singleton :: a -> S a
singleton x = S (\p -> x)
Given the requirement of the previous section, we can only answer x, and this is what we do. To implement binary unions, I first implement doubletons, and then reduce binary unions to arbitrary unions:
doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

union :: S a -> S a -> S a
xs `union` ys = bigUnion (doubleton xs ys)
Arbitrary unions are a bit trickier:
bigUnion :: S (S a) -> S a
bigUnion xss = S (\p -> find (find xss (\xs -> forsome xs p)) p)
By definition of union, as in set theory, x ∈ ⋃ xss iff x ∈ xs for some xs ∈ xss. What our definition says is that, in order to find x ∈ ⋃ xss such that p(x) holds, we first find xs ∈ xss such that p(x) holds for some x ∈ xs, and then find a specific x ∈ xs such that p(x) holds.
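A small worked instance of this two-stage search, with the representation repeated so the fragment compiles standalone: searching the union of {0,1} and {5} for an element greater than 3 first selects the member set {5}, and then searches inside it.

```haskell
newtype S a = S { find :: (a -> Bool) -> a }

search :: S a -> (a -> Bool) -> Maybe a
search xs p = let x = find xs p in if p x then Just x else Nothing

forsome :: S a -> (a -> Bool) -> Bool
forsome xs p = p (find xs p)

singleton :: a -> S a
singleton x = S (\_ -> x)

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

bigUnion :: S (S a) -> S a
bigUnion xss = S (\p -> find (find xss (\xs -> forsome xs p)) p)

main :: IO ()
main =
  -- The union of {0,1} and {5}, searched for an element > 3.
  -- forsome (doubleton 0 1) (> 3) is False, so find xss picks
  -- the singleton {5}, inside which find returns 5:
  print (search (bigUnion (doubleton (doubleton 0 1) (singleton 5))) (> 3))
  -- Just 5
```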
That’s it, we are done
It remains to make a few remarks. After them you’ll find the complete Haskell program, which is embarrassingly short in comparison with the size of this post.
Algorithms for free
There is a Haskell program called Djinn that, given a Haskell type, automatically gives you a recursion-free Haskell program of that type, provided one exists, and lets you know if there isn’t any. All the programs discussed here are correctly guessed by Djinn, just from knowledge of their types, except the binary and infinite product algorithms. For the binary product, Djinn gives four guesses. Only two of them are product algorithms, and one of those is equivalent to the one we have defined (the other is its symmetric version). The infinite product is hopeless, because it crucially relies on recursion. But singleton, image and big union are correctly guessed, with a unique guess each. Moreover, Djinn can directly guess the monad structure, in the sense of Haskell, without the detour via singleton, image and big union, which amount to the definition of monad in the sense of category theory.
Thus, all we have to do in principle is to define the type constructor S of the monad: once we have done that, Djinn can tell us what the monad operations are, and the Haskell prelude gives us the infinite product algorithm sequence. But this is not what happened historically. And of course, once we do have the product algorithm, we have to show that it does accomplish what we claim. The proof is tricky, and is developed in the above two papers, where the first gives a sketch and the second gives complete details.
The monad laws
The monad laws are rather laborious to prove. Hence I wrote a Haskell program that wrote the proof for me, as follows. The laws are a set of equations. To prove each one, I calculate the normal form of each side of the equation, and check that they are the same except perhaps for the names of the bound variables. The output of the Haskell program is five pages of chains of equations that one can routinely check if one wants to. Presumably I could have done this using a proof assistant, but I found it quicker to write a proof generator for this specific theorem than to familiarize myself with proof assistants (shame on me).
There are two interesting aspects of the monad S that become apparent when we develop this proof: (1) not all elements of S a are search functions, but nevertheless the monad laws hold even when the “junk” elements of S a are taken into account; (2) the type of booleans in the definition of S can be replaced by any type whatsoever, and the monad laws still hold. This second point brings me to my final comment.
Bar recursion
Before I made the connection with monads earlier this year, Paulo Oliva discovered that the product algorithm given in [LICS07] is a manifestation of bar recursion, provided we replace the booleans by the integers. See his papers at his web page to get an idea of what bar recursion is and what its applications are. Now, putting this together with the above findings, it follows that the Haskell prelude function sequence turns out to be a form of bar recursion (technically known as modified bar recursion). Paulo and I have been working together trying to clarify all these connections from a theoretical point of view, and we are writing a paper with these and other findings, this time addressed to the theoretical computer science and logic communities.
The complete program
I summarize and rearrange the code discussed above, without the contrived examples:
newtype S a = S { find :: (a -> Bool) -> a }

search :: S a -> (a -> Bool) -> Maybe a
search xs p = let x = find xs p in if p x then Just x else Nothing

forsome, forevery :: S a -> (a -> Bool) -> Bool
forsome xs p  = p (find xs p)
forevery xs p = not (forsome xs (\x -> not (p x)))

singleton :: a -> S a
singleton x = S (\p -> x)

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

image :: (a -> b) -> S a -> S b
image f xs = S (\q -> f (find xs (\x -> q (f x))))

bigUnion :: S (S a) -> S a
bigUnion xss = S (\p -> find (find xss (\xs -> forsome xs p)) p)

union :: S a -> S a -> S a
xs `union` ys = bigUnion (doubleton xs ys)

instance Monad S where
  return   = singleton
  xs >>= f = bigUnion (image f xs)

times :: S a -> S b -> S (a,b)
xs `times` ys = do
  x <- xs
  y <- ys
  return (x,y)

bit :: S Int
bit = doubleton 0 1

cantor :: S [Int]
cantor = sequence (repeat bit)
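One caveat for readers trying this today: with modern GHC, a Monad instance must come with Functor and Applicative instances. A sketch of the complete program with the necessary additions (fmap via image, pure via singleton, and (<*>) via ap from Control.Monad), together with a couple of sanity checks over the Cantor space whose expected results follow from the discussion above:

```haskell
import Control.Monad (ap)

newtype S a = S { find :: (a -> Bool) -> a }

forsome, forevery :: S a -> (a -> Bool) -> Bool
forsome xs p  = p (find xs p)
forevery xs p = not (forsome xs (\x -> not (p x)))

singleton :: a -> S a
singleton x = S (\_ -> x)

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

image :: (a -> b) -> S a -> S b
image f xs = S (\q -> f (find xs (\x -> q (f x))))

bigUnion :: S (S a) -> S a
bigUnion xss = S (\p -> find (find xss (\xs -> forsome xs p)) p)

-- Modern GHC requires these superclasses of Monad:
instance Functor S where
  fmap = image

instance Applicative S where
  pure  = singleton
  (<*>) = ap

instance Monad S where
  xs >>= f = bigUnion (image f xs)

bit :: S Int
bit = doubleton 0 1

cantor :: S [Int]
cantor = sequence (repeat bit)

main :: IO ()
main = do
  print (forsome cantor (\xs -> xs !! 5 == 1))  -- True
  print (forevery cantor (\xs -> xs !! 5 == 0)) -- False
```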
The making of this work
Here is how I came up with the idea of using monads for this purpose. It emerged from the above two papers, and from the earlier Barbados notes, that searchable sets are compact in the sense of topology. And I was familiar, from denotational semantics, domain theory and topology, with a number of monads of compact sets, notably the Smyth powerdomain monad and the Vietoris hyperspace monad. So although this took some time to mature and be realized, it was inevitable.
What was a surprise to me is that this monadic view of searchable sets shows that the countable product functional, which implements the countable Tychonoff theorem from topology, and which amounts to modified bar recursion from logic in view of Paulo’s discovery (which is in itself a surprise), found its way independently into Haskell via the standard prelude function sequence. I don’t know whether sequence was originally intended to be applied to infinite lists, but it can be, and moreover in an interesting way. However, for most monads I considered, including the continuation monad and of course excluding the search monad, the functional sequence applied to an infinite list gives rise to a divergent computation.
Very nice!
The function ‘sequence’ arose from a desire to combine the list monad with other monads. Barr and Wells show that you can combine monads S and T into a monad ST if you have a natural transformation from TS to ST, and ‘sequence’ fills the bill when T is the list monad. See Combining Monads by myself and David King. We did have streams in mind as well as lists, although I can’t recall whether we considered infinite streams.
But we certainly hadn’t thought of using ‘sequence’ this way. Lovely!
Hi Martin, great article!
I put (approximately) your code up on Hackage, under the name “infinite-search”. However, I’m having trouble characterizing when search is total. How would you characterize the fact that:
sequence (repeat (doubleton True False))
is searchable, but:
foldr union (singleton 0) (map singleton [1..])
is not? How can we be sure whether an expression denotes a totally searchable set?
Incidentally, I am quite pleased that the above expression, while not exhaustively searchable, does as well as it can (i.e. it will find the least element which satisfies the predicate).
Thanks for the positive reactions and the history of sequence.
luqui: excellent that you have added this to Hackage. (I would prefer to rename pair back to doubleton, because pairs and doubletons are rather different kinds of sets. Also, there are different kinds of subsets that one might consider for other monads (e.g. closed subsets to get the Hoare powerdomain), and hence renaming S to Set, as you did, is not ideal.) To answer your question: in general you don’t know, and you have to prove a theorem. The papers I mention show that the infinite product algorithm always works. There are other closure properties; see the papers. The example you show that works as well as possible is related to the following example (not in the papers). Existential quantifiers live in the continuation monad. If you take an infinite product of existential quantifiers using sequence in that monad, then the existential quantifier you get can never answer False, but it can answer True whenever the answer is True. What is remarkable is that with the search monad both cases, False and True, are accounted for using sequence. Notice also that, for the list monad, sequence gives cartesian products listed in the lexicographic order.
Prelude> sequence [[0,1],[10,11],[100,101]]
[[0,10,100],[0,10,101],[0,11,100],[0,11,101],[1,10,100],[1,10,101],[1,11,100],[1,11,101]]
But (sequence (repeat [0,1])) diverges. The convergent search operator (sequence (repeat bit)) in the search monad also turns out to scan the Cantor space in the lexicographic order: it always finds the smallest witness (or, in fact, the infimum of the set of witnesses, which is the largest element in the lexicographic order when the set of witnesses is empty).
One last remark, regarding your question: you are attempting to define a search operator for the set of natural numbers. But searchable sets are compact and the natural numbers are not, and hence this is impossible. Compactness, if you develop a good intuition for it, is a good guide: whenever your set is not compact, it is definitely not searchable. Whenever it is compact, it is usually searchable (although not necessarily, because we need an effective version of compactness to get searchability).
It’s really nice, but I think it can be slightly improved.
I’ll use “r” instead of “Bool” for the return type, since it really works for any type.
Instead of (a -> r) -> a, we can use (a -> r) -> (r -> a). Semantically, given a “predicate” of type a -> r and a value of type r, this function would look for an element x :: a such that this predicate, applied to x, results in the given value.

Your invariant for the function find :: S a -> (a -> Bool) -> a is the following: if there is x such that f x == True, then f (find xs f) == True.

My invariant is: if there is x such that f x == y, then f (find xs f y) == y; this can be written as f (find xs f (f x)) == f x (without conditions), or, more nicely, f . find xs f . f = f, which looks algebraically nice.
Now, your “image” is just “liftM”, “bigUnion” is just “join”, “times” is “liftM2 (,)”, and the Monad instance can be defined straightforwardly:
instance Monad (S r) where
  return x = S (\_ _ -> x)
  xs >>= f =
    S $ \yp z ->
      let try x = find (f x) yp z
          x0    = find xs (yp . try) z
      in  try x0
The fun part, I think, is a definition of “union”:
xs1 `union` xs2 =
do b bp True == b)
That’s just minor remarks, though.
Sorry, some symbols got lost; I’ve uploaded all code (with a minor changes in notation) here: http://hpaste.org/13260
This looks like an interesting idea. I have some questions. Have you checked that this is indeed a monad? Does “sequence” converge for an infinite lazy list? (When? Always?) In what sense is your invariant an invariant? What does the monad multiplication (=join) do? There is a candidate of a morphism from the search monad to your “inversion” monad, but not the other way round. And there is a morphism from the search monad to the continuation monad. But there aren’t maps connecting the monads in the other directions (according to djinn).
And, of course, you define >>= recursively, in a way that is unlikely to produce a total function. Hence it is hard to see this as a generalization of the search monad.
Sorry for commenting so late in the game, but I also noticed: aren’t forevery and forsome just folds, with the arguments reversed? In particular, using the “S is a Monad” trait, then forevery and forsome are like disjunctive and conjunctive monoid versions of MonadPlus, except that MonadPlus isn’t exactly a monoid (and thus, isn’t a Monoid, either).
S is also a Comonad. The extract function is simple: extract xs = find xs (const True). duplicate is a little bit more difficult: duplicate xs = S $ \p -> S $ \q -> find xs (\a -> p (S $ const a) && q a)
Intuitively, what duplicate does, given a predicate p, is produce another S. When that S is given a predicate q, it checks every value a in the original to see if it satisfies both p (singleton a) and q a.
This Comonad is not unique: it could also use False and (||) instead of True and (&&), or even False and xor.
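These proposed comonad operations can be checked on a small example. A sketch, repeating the representation so it compiles standalone, and using the extract and duplicate definitions from the comment above:

```haskell
newtype S a = S { find :: (a -> Bool) -> a }

doubleton :: a -> a -> S a
doubleton x y = S (\p -> if p x then x else y)

-- The comonad operations proposed in the comment above:
extract :: S a -> a
extract xs = find xs (const True)

duplicate :: S a -> S (S a)
duplicate xs = S $ \p -> S $ \q -> find xs (\a -> p (S $ const a) && q a)

main :: IO ()
main = do
  -- extract answers the trivial predicate, so it picks the first element:
  print (extract (doubleton 3 4))                        -- 3
  -- Checking the comonad law extract . duplicate = id on this element:
  print (extract (extract (duplicate (doubleton 3 4))))  -- 3
```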