A position is available for a PhD student at the University of Ljubljana in the general research area of modelling and reasoning about computational effects. The precise topic is somewhat flexible, and will be decided in discussion with the student. The PhD will be supervised by Alex Simpson who is Professor of Computer Science at the Faculty of Mathematics and Physics.

The position will be funded by the Effmath project (see project description). Full tuition & stipend will be provided.

The candidate should have a master’s (or equivalent) degree in either mathematics or computer science, with background knowledge relevant to the project area. The student will officially enrol in October 2015 at the University of Ljubljana. No knowledge of the Slovene language is required.

Candidates should contact Alex Simpson by email at Alex.Simpson@fmf.uni-lj.si as soon as possible. Please include a short CV and a statement of interest.

The candidate should have as many of the following desiderata as possible, and at the very least a master’s (or equivalent) degree:

- a master’s degree in mathematics, with good knowledge of computer science
- a master’s degree in computer science, with good knowledge of mathematics
- experience with functional programming
- experience with proof assistants
- familiarity with homotopy type theory

The student will officially enrol in October 2015 at the University of Ljubljana. No knowledge of Slovene is required. However, it is possible, and even desirable, to start the actual work (and stipend) earlier, as early as the spring of 2015. Candidates should contact me by email as soon as possible. Please include a short CV and a statement of interest.

**Update 2015-03-28:** the position has been taken.

The zeroes GitHub repository contains the code.

There really is not much to show. The C program which computes the zeroes uses the GNU Scientific Library routines for zero finding and is just 105 lines long. It generates a PPM image, which I then processed with ImageMagick and ffmpeg. The real work was in the image processing and the composition of the movies. I wrote a helper Python program that lets me create flyovers of the big image, and I became something of an expert in the use of ImageMagick.

The code also contains a Python program for generating a variation of the picture in which roots of lower degrees are represented by big circles. I did not show any of this in the TEDx talk but it is available on GitHub.

Oh, and a piece of Mathematica code that generates the zeroes fits into a tweet.
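For a flavour of what the computation involves, here is a rough sketch in Haskell (not the original C, Python or Mathematica code; the names are mine, and the Durand-Kerner iteration stands in for the GSL zero-finding routines):

```haskell
import Data.Complex

-- Evaluate a polynomial, given its coefficients (constant term first).
evalPoly :: [Complex Double] -> Complex Double -> Complex Double
evalPoly cs z = foldr (\c acc -> c + z * acc) 0 cs

-- One Durand-Kerner step: improve every root approximation simultaneously.
dkStep :: [Complex Double] -> [Complex Double] -> [Complex Double]
dkStep cs rs =
  [ r - evalPoly cs r / product [ r - s | s <- rs, s /= r ] | r <- rs ]

-- Approximate all zeroes of a polynomial, e.g. one with coefficients +1/-1.
zeroes :: [Complex Double] -> [Complex Double]
zeroes cs = iterate (dkStep monic) seeds !! 60
  where
    monic = map (/ last cs) cs                    -- normalise to a monic polynomial
    seeds = [ (0.4 :+ 0.9) ^ k | k <- [1 .. length cs - 1] ]  -- distinct start points
```

The actual image is then a matter of accumulating the zeroes of all the polynomials of a given degree into a big raster, which is where PPM, ImageMagick and ffmpeg come in.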

The “Zeroes” Vimeo album contains the animations. The ones I showed in the TEDx talk are in Full HD (1920 by 1080). There is also a lower resolution animation of how zeroes move around when we change the coefficients. Here is one of the movies, but you really should watch it in Full HD to see all the details.

- The computed image zeroes.ppm.gz (125 MB) at 20000 by 17500 pixels is stored in the PPM format. The picture is dull gray, and is not meant to be viewed directly.
- The official image zeroes26.png (287 MB), at 20000 by 17500 pixels, is in orange tones. Beware, it can bring an image viewing program to its knees.
- I computed tons of closeups to generate the movies. Here are the beginnings of each animation available at Vimeo, at a measly 1920 by 1080 pixels each (click on them).

**Abstract:** In constructive mathematics we often consider implications between non-constructive reasoning principles. For instance, it is well known that the limited principle of omniscience implies that equality of real numbers is decidable. Most such reductions proceed by reducing an instance of the consequent to an instance of the antecedent. We may therefore define a notion of *instance reducibility*, which turns out to have a very rich structure. Even better, under Kleene’s function realizability interpretation instance reducibility corresponds to Weihrauch reducibility, while Kleene’s number realizability relates it to truth-table reducibility. We may also ask about a constructive treatment of other reducibilities in computability theory. I shall discuss how one can tackle Turing reducibility constructively via Kleene’s number realizability.

**Slides with talk notes:** lc2014-slides-notes.pdf

$\forall p : X \to 2. (\exists x:X.p(x)=0) \vee (\forall x:X.p(x)=1)$

for $X=2^\mathbb{N}$, the Cantor set of infinite binary sequences, where $2$ is the set of binary digits. Then in the post A Haskell monad for infinite search in finite time I looked at ways of systematically constructing such sets $X$ with corresponding Haskell realizers of the above *omniscience principle*.
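For readers who have not seen those posts, the Haskell realizer can be sketched as follows (a condensed variant of the program from the cited posts, with abbreviated names):

```haskell
-- Infinite binary sequences, i.e. points of the Cantor set 2^N.
type Cantor = Integer -> Integer

-- Prepend a digit to a sequence.
(#) :: Integer -> Cantor -> Cantor
(x # a) n = if n == 0 then x else a (n - 1)

-- Find a sequence satisfying p if one exists. For a total p, laziness
-- means p inspects only finitely many digits, which is why this terminates.
find :: (Cantor -> Bool) -> Cantor
find p = if p (0 # find (\a -> p (0 # a)))
           then 0 # find (\a -> p (0 # a))
           else 1 # find (\a -> p (1 # a))

-- The omniscience principle, realized: decide existential and universal
-- quantification over the Cantor set.
forsome, forevery :: (Cantor -> Bool) -> Bool
forsome p = p (find p)
forevery p = not (forsome (not . p))
```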

In this post I give examples of infinite sets $X$ and corresponding constructive *proofs* of their omniscience that are intended to be valid in Bishop mathematics, and which I have formalized in Martin-Löf type theory in Agda notation. This rules out the example $X=2^\mathbb{N}$, as discussed below, but includes many interesting infinite examples. I also look at ways of constructing new omniscient sets from given ones. Such sets include, in particular, ordinals, for which we can find minimal witnesses if any witness exists.

Agda is a dependently typed functional programming language based on Martin-Löf type theory. By the Curry-Howard correspondence, Agda is also a language for formulating mathematical theorems (types) and writing down their proofs (programs). Agda acts as a thorough referee, only accepting correct theorems and proofs. Moreover, Agda can run your proofs. Here is a graph of the main Agda modules for this post, and here is a full graph with all modules.

The omniscience of $\mathbb{N}$ is not provable in constructive mathematics and comes under the fancy name LPO, or limited principle of omniscience. Its negation is not provable either, at least not in Bishop’s or Martin-Löf’s schools of constructive mathematics, and so we can’t say that LPO is false. Following Peter Aczel, we call such statements *taboos*. They are also known as Brouwerian counterexamples:

Taboo (LPO). $\quad\forall p : \mathbb{N} \to 2. (\exists x:\mathbb{N}.p(x)=0) \vee (\forall x:\mathbb{N}.p(x)=1)$.

However, if we enlarge the type of natural numbers with a point at infinity, we get a constructive proof of

Theorem. $\quad\forall p : \mathbb{N}_\infty \to 2. (\exists x:\mathbb{N}_\infty.p(x)=0) \vee (\forall x:\mathbb{N}_\infty.p(x)=1)$.

We remark that the proof doesn’t rely on continuity axioms, which are not available in Bishop’s and Martin-Löf’s approaches to constructive mathematics (see below).

How do we add a point at infinity to $\mathbb{N}$? We can’t simply take $\mathbb{N}_\infty = \mathbb{N} + 1$, because this is isomorphic to $\mathbb{N}$. The folklore way to add a limit point to $\mathbb{N}$ is to let $\mathbb{N}_\infty$ be the type of infinite, decreasing binary sequences. (There are other isomorphic possibilities, such as increasing sequences, or sequences with at most one $1$, among others.)

Such sequences represent finite and infinite counting processes of the form

$\underline{n} = 1^n 0^\omega$, $\quad\infty = 1^\omega$

($n$ ones followed by infinitely many zeros, and infinitely many ones respectively). This gives the *extended natural numbers*, also known as the generic convergent sequence or the *one-point compactification of the natural numbers*. The above encoding gives an embedding

$(n \mapsto \underline{n}) : \mathbb{N} \to \mathbb{N}_\infty$.

This shows that the reason why the omniscience of $\mathbb{N}$ fails is not simply that it is infinite, as the larger set $\mathbb{N}_\infty$ is omniscient.
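To convey the flavour of why $\mathbb{N}_\infty$ is omniscient (the actual proof is in the Agda development), here is one way the witness construction might be transcribed into Haskell, with lazy lists standing in for decreasing binary sequences; the names are my own:

```haskell
-- N∞ as lazy decreasing sequences of Booleans: n̲ = True^n False^ω, ∞ = True^ω.
type NInf = [Bool]

nfin :: Int -> NInf                  -- the embedding n ↦ n̲
nfin n = replicate n True ++ repeat False

ninfty :: NInf                       -- the point at infinity
ninfty = repeat True

-- Witness for p: digit n records whether p holds at 0̲, ..., n̲.
-- If some p(k̲) fails, the least such k is the witness (a counterexample);
-- if none fails, the witness is ∞, so testing p at it also tests p(∞).
eps :: (NInf -> Bool) -> NInf
eps p = [ and [ p (nfin k) | k <- [0 .. n] ] | n <- [0 ..] ]
```

A total p inspects only finitely many digits of eps p, and each digit is computed from finitely many values p (nfin k), so evaluating p at the witness terminates.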

Using this theorem, we can show that for every $p : \mathbb{N}_\infty \to 2$, the proposition $\forall n : \mathbb{N}.p(\underline{n})=1$ is decidable. The point is that now the quantification is over the natural numbers, without violating (weak) LPO. In turn, using this, it follows that the non-continuity of a function $f : \mathbb{N}_\infty \to \mathbb{N}$ is decidable. It is also a corollary that the function type $\mathbb{N}_\infty \to \mathbb{N}$ has decidable equality.

The proof of the above theorem was presented at the Types’2011 meeting and is published in the Journal of Symbolic Logic (JSL), Volume 78, Number 3, September 2013, pages 764-784, under the title Infinite sets that satisfy the principle of omniscience in any variety of constructive mathematics. To save some work in the writing of this post, I offer the slides of the Types talk as an informal account of the original formal proof in Agda or the current one.

One (categorical) way of convincing ourselves that we have the right definition of $\mathbb{N}_\infty$ is to prove that it is the final coalgebra of the functor $1+(-)$.

The realizer given in Seemingly impossible functional programs relied on the hypothesis that all functions from the Cantor set $2^\mathbb{N}$ of infinite binary sequences to the two-point set $2$ are uniformly continuous.

This statement fails, or rather is independent, in Bishop mathematics and in Martin-Löf’s type theory: it can’t be proved or disproved. In Brouwer’s intuitionism the statement holds, but it is false in Markov’s Russian school of constructive mathematics, as it is in classical mathematics, where discontinuous functions are easily defined by case analysis using excluded middle. Bishop mathematics is by design compatible with classical mathematics and with all varieties of constructive mathematics, in the sense that any theorem of Bishop mathematics is a theorem of classical mathematics and of each of its constructive varieties. Martin-Löf’s type theory is a formal system intended to be capable of formalizing Bishop mathematics and running it on computers.

However, although Agda is based on Martin-Löf type theory, and hence the above remarks apply, we can disable the termination checker in a particular module to prove a countable Tychonoff Theorem for inhabited omniscient sets, which gives the omniscience of $X=2^\mathbb{N}$. This is the only place in our development that we commit such a sin, relying on an external termination proof. But notice that, although this goes beyond the realms of Bishop and Martin-Löf mathematics, the result is still compatible with classical mathematics, as all sets are omniscient in classical mathematics, by virtue of the principle of excluded middle.
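In Haskell, which has no termination checker to disable, the corresponding rogue program is short. Here is a hedged sketch along the lines of the paper with Paulo Oliva mentioned below (the Searcher type and all names are mine):

```haskell
-- A searcher returns a witness x0 such that if p x0 holds then p holds
-- everywhere; contrapositively, x0 is a counterexample whenever one exists.
type Searcher x = (x -> Bool) -> x

-- Searcher for the two-point set.
searchBool :: Searcher Bool
searchBool p = if p False then True else False

-- Countable Tychonoff for streams, iterating the binary construction.
-- There is no structurally decreasing argument: the recursion terminates
-- only because a total predicate inspects finitely many components --
-- exactly the external argument Agda's termination checker cannot see.
seqSearch :: [Searcher a] -> Searcher [a]
seqSearch []       _ = []
seqSearch (e : es) p = x0 : seqSearch es (\xs -> p (x0 : xs))
  where x0 = e (\x -> p (x : seqSearch es (\xs -> p (x : xs))))
```

With searchBool at every coordinate this searches the Cantor space $2^\mathbb{N}$.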

(At this point I should disclose that when I say *set* I mean *type*. Apologies to HoTT people, for whom (h)sets are particular kinds of types.)

I formulated the above in terms of the notion of omniscience because it is perhaps more familiar. But, for many purposes, including showing the omniscience of $\mathbb{N}_\infty$, it is more convenient to work with the notion of searchable set. We say that $X$ is searchable if

$\forall p:X \to 2.\exists x_0 : X. p(x_0)=1 \to \forall x:X.p(x)=1$.

This is the Drinker Paradox, which says that in every (inhabited) pub $X$ there is a person $x_0$ such that if $x_0$ drinks then everybody drinks, where $p(x)=1$ means that $x$ drinks. Thus, a pub $X$ is searchable iff it satisfies the Drinker Paradox for every $2$-valued notion $p$ of “drinking”.

It is immediate that any searchable set is inhabited. To get the omniscience of $X$ from its searchability, given $p$ we first find a “universal witness” $x_0$ by the searchability of $X$. We can then check whether $p(x_0)=0$ or $p(x_0)=1$ by the decidability of equality on the set $2$. In the first case we conclude $\exists x:X.p(x)=0$ directly, and in the second case we get $\forall x:X.p(x)=1$ by the definition of searchability, and so $X$ is indeed omniscient. Conversely, suppose we know an inhabitant $a : X$ and that $\exists x:X.p(x)=0$ or $\forall x:X.p(x)=1$: in the first case we let $x_0$ be the witness $x$, and in the second case we let $x_0=a$, to conclude that $X$ is searchable. This shows that a set is searchable iff it is both inhabited and omniscient.
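The two directions of this argument are short enough to transcribe as Haskell functions (a sketch only; the Searcher type and all names are mine, with Bool standing for the two-point set and Left/Right for the two disjuncts):

```haskell
-- A searcher witnesses searchability: it returns x0 with p x0 → ∀x. p x.
type Searcher x = (x -> Bool) -> x

-- The two-point set itself is searchable.
epsBool :: Searcher Bool
epsBool p = if p False then True else False

-- Searchable ⇒ omniscient: Left x0 gives a counterexample (∃x. p x = False),
-- Right () asserts p holds everywhere, justified by universality of x0.
omniscience :: Searcher x -> (x -> Bool) -> Either x ()
omniscience eps p = let x0 = eps p in if p x0 then Right () else Left x0

-- Inhabited + omniscient ⇒ searchable: use the counterexample if there is
-- one, and the given inhabitant otherwise.
searcher :: x -> ((x -> Bool) -> Either x ()) -> Searcher x
searcher a omn p = case omn p of
                     Left x  -> x
                     Right _ -> a
```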

We have already mentioned the countable Tychonoff Theorem, but this is a rogue example, as it departs from the constructive foundations that interest us in this post.

If we have a searchable set $X$ and a family $(Y_x)_{x : X}$ of searchable sets, then the sum, or disjoint union,

$\sum_{x : X} Y_x$

is also searchable. This follows directly from the ideas discussed in the paper with Paulo Oliva published in Mathematical Structures in Computer Science, Volume 20, Issue 2, April 2010, Cambridge University Press.

Two particular cases of interest are

- (binary Tychonoff) If $X$ and $Y$ are searchable then so is $X \times Y$ (when $Y$ doesn’t depend on $X$), and
- If $Y_0$ and $Y_1$ are searchable then so is $Y_0 + Y_1$ (when $X=2$).

The rogue proof of the countable Tychonoff Theorem mentioned above recursively iterates the binary Tychonoff Theorem, as in the above paper with Paulo.
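In the binary case the construction is short. Here is a hedged Haskell sketch in the spirit of that paper (the Searcher type and all names are mine):

```haskell
-- A searcher returns a witness x0 such that if p x0 holds, p holds everywhere.
type Searcher x = (x -> Bool) -> x

epsBool :: Searcher Bool
epsBool p = if p False then True else False

-- Binary Tychonoff: pick the best first coordinate, taking into account
-- the best response in the second, then the best second coordinate.
prodSearch :: Searcher a -> Searcher b -> Searcher (a, b)
prodSearch ea eb p = (a0, b0)
  where a0 = ea (\a -> p (a, eb (\b -> p (a, b))))
        b0 = eb (\b -> p (a0, b))

-- Binary sums: if the left witness fails, it is already a counterexample;
-- otherwise the right witness is universal for the whole sum.
sumSearch :: Searcher a -> Searcher b -> Searcher (Either a b)
sumSearch ea eb p = if p (Left a0) then Right b0 else Left a0
  where a0 = ea (p . Left)
        b0 = eb (p . Right)
```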

The mysterious terminology comes from the above JSL paper, where this is a construction that really does perform some squashing (in a metric sense) within the Cantor space $2^\mathbb{N}$. Here we give a generalization of the construction that makes sense for sets that are not necessarily embedded in the Cantor space (unpublished, but done in 2011 and implemented in Agda in 2012).

Given countably many sets $X_n$, we want to construct their disjoint sum with a limit point at infinity, very much as we added a point at infinity to the natural numbers. We do this in two steps. We first extend the family $X : \mathbb{N} \to U$, where $U$ is the universe of types, to a family $Y : \mathbb{N}_\infty \to U$ as follows:

$Y_i = \prod_{n : \mathbb{N}} X_n^{i = \underline{n}}$.

Here $=$ is the identity type and we used exponential notation for the function space $i = \underline{n} \to X_n$.

The idea is that this product has at most one non-trivial factor, so that we get

$Y_{\underline{n}} \cong X_n$, $\quad Y_{\infty} \cong 1$

where $1$ is the one-point set. In the second step, we take the ordinary sum of the family $Y$. So, in summary, we define the squashed sum of the family $X$ to be

$\sum^1 X = \sum_{i : \mathbb{N}_\infty} \prod_{n : \mathbb{N}} X_n^{i = \underline{n}}$.
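The two isomorphisms claimed above can be checked directly. Here is a sketch of my own reasoning, using that the embedding $n \mapsto \underline{n}$ is injective, that $\underline{n} = \underline{m}$ holds iff $n = m$, and that $\mathbb{N}_\infty$ is a set (by function extensionality):

$Y_{\underline{n}} = \prod_{m : \mathbb{N}} X_m^{\underline{n} = \underline{m}} \cong X_n^{\underline{n} = \underline{n}} \times \prod_{m \neq n} X_m^{\underline{n} = \underline{m}} \cong X_n \times 1 \cong X_n$,

since the factor at $m = n$ has contractible exponent, while for $m \neq n$ the exponent $\underline{n} = \underline{m}$ is empty and $X_m^0 \cong 1$. Similarly, $\infty = \underline{n}$ is empty for every $n$, so all factors of $Y_\infty$ are trivial and $Y_\infty \cong 1$.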

The theorem is that if $X_n$ is searchable for every $n:\mathbb{N}$ then the squashed sum $\sum^1 X$ is also searchable. The subtlety is to prove that $Y_i$ is searchable for every $i : \mathbb{N}_\infty$ without performing the taboo case analysis $i=\underline{n}$ or $i = \infty$, as this amounts to LPO. This subtlety amounts to a curious, provable instance of the Tychonoff Theorem.

If $X$ has at most one point (any two of its points are equal), or, in HoTT terminology, $X$ is an hproposition or a $-1$-truncated type, and $Y : X \to U$ is a family of searchable sets, then

$\prod_{x : X} Y_x$

is also searchable. In other words, searchable sets are closed under the formation of products with at most one factor. This is clear with excluded middle, but the point is that it is not needed.

The above extension of $X : \mathbb{N} \to U$ to $Y:\mathbb{N}_\infty \to U$ is an instance of a more general phenomenon. If $A$ and $X : A \to U$ are arbitrary, and $j : A \to B$ is an embedding, then $X$ extends to a family $X / j : B \to U$ along $j$.

For this we need the right definition of embedding, namely that each fiber

$j^{-1}(b) = \sum_{a : A} (b = j(a))$

is an hproposition. Then $X / j$ is constructed as

$(X / j)(b) = \prod_{(a , r) : j^{-1}(b)} X_a \cong \prod_{a : A} X_a^{b = j(a)}$.

We then have that if $X$ is a family of searchable sets, then so is $X / j$, by the hproposition-indexed Tychonoff Theorem; moreover, the extended sum $\sum(X / j)$ is searchable whenever $B$ is searchable, even if $A$ isn’t, because sums preserve searchability.

If $X_n$ is a family of ordinals in the sense of the HoTT book, then so is its squashed sum, under the lexicographic order with the added point at infinity as the largest element. I haven’t proved this in Agda yet (it is on my to-do list), but I have proved some preliminary facts towards it in Agda, in the form of modules that still need the order structure added.

To avoid setoids (what Bishop actually calls sets) I have invoked the axiom of function extensionality (any two pointwise equal functions are equal). This will be available in the next generation of proof assistants / programming languages based on HoTT which are under active development in various research centers. However, for the moment, it turns out that for the majority of theorems, we can get away with the double negation of function extensionality, which can be postulated without loss of computational content. Chuangjie Xu has formally checked that doubly negated function extensionality is enough for a few of the theorems discussed above, starting from the crucial one that $\mathbb{N}_\infty$ is searchable. What one observes in practice in my implementation, which uses (positive) function extensionality, is that when one runs examples one gets numerical results that don’t get stuck in irreducible terms containing the function extensionality constant funext.

I plan to write a paper with the informalization of the Agda development (other than that already provided in the JSL paper mentioned above). For the moment, this blog post is a first approximation, with an informal discussion of some of the results proved in the Agda development.
