**Abstract:** In constructive mathematics we often consider implications between non-constructive reasoning principles. For instance, it is well known that the limited principle of omniscience implies that equality of real numbers is decidable. Most such reductions proceed by reducing an instance of the consequent to an instance of the antecedent. We may therefore define a notion of *instance reducibility*, which turns out to have a very rich structure. Even better, under Kleene’s function realizability interpretation instance reducibility corresponds to Weihrauch reducibility, while Kleene’s number realizability relates it to truth-table reducibility. We may also ask about a constructive treatment of other reducibilities in computability theory. I shall discuss how one can tackle Turing reducibility constructively via Kleene’s number realizability.

**Slides with talk notes:** lc2014-slides-notes.pdf

In the post Seemingly impossible functional programs I discussed Haskell programs that realize the statement

$\forall p : X \to 2. (\exists x:X.p(x)=0) \vee (\forall x:X.p(x)=1)$

for $X=2^\mathbb{N}$, the Cantor set of infinite binary sequences, where $2$ is the set of binary digits. Then in the post A Haskell monad for infinite search in finite time I looked at ways of systematically constructing such sets $X$ with corresponding Haskell realizers of the above *omniscience principle*.

In this post I give examples of infinite sets $X$ and corresponding constructive *proofs* of their omniscience that are intended to be valid in Bishop mathematics, and which I have formalized in Martin-Löf type theory in Agda notation. This rules out the example $X=2^\mathbb{N}$, as discussed below, but includes many interesting infinite examples. I also look at ways of constructing new omniscient sets from given ones. Such sets include, in particular, ordinals, for which we can find minimal witnesses if any witness exists.

Agda is a dependently typed functional programming language based on Martin-Löf type theory. By the Curry-Howard correspondence, Agda is also a language for formulating mathematical theorems (types) and writing down their proofs (programs). Agda acts as a thorough referee, only accepting correct theorems and proofs. Moreover, Agda can run your proofs. Here is a graph of the main Agda modules for this post, and here is a full graph with all modules.

The omniscience of $\mathbb{N}$ is not provable in constructive mathematics and comes under the fancy name LPO, or limited principle of omniscience. Its negation is not provable either, at least not in Bishop’s or Martin-Löf’s schools of constructive mathematics, and so we can’t say that LPO is false. Following Peter Aczel, we call such statements *taboos*. They are also known as Brouwerian counterexamples:

**Taboo (LPO).**$\quad\forall p : \mathbb{N} \to 2. (\exists x:\mathbb{N}.p(x)=0) \vee (\forall x:\mathbb{N}.p(x)=1)$.

However, if we enlarge the type of natural numbers with a point at infinity, we get a constructive proof of

**Theorem.**$\quad\forall p : \mathbb{N}_\infty \to 2. (\exists x:\mathbb{N}_\infty.p(x)=0) \vee (\forall x:\mathbb{N}_\infty.p(x)=1)$.

We remark that the proof doesn’t rely on continuity axioms, which are not available in Bishop’s and Martin-Löf’s approaches to constructive mathematics (see below).

How do we add a point at infinity to $\mathbb{N}$? We can’t simply take $\mathbb{N}_\infty = \mathbb{N} + 1$, because this is isomorphic to $\mathbb{N}$. The folklore way to add a limit point to $\mathbb{N}$ is to let $\mathbb{N}_\infty$ be the type of infinite, decreasing binary sequences. (There are other isomorphic possibilities, such as increasing sequences, or sequences with at most one $1$, among others.)

Such sequences represent finite and infinite counting processes of the form

$\underline{n} = 1^n 0^\omega$, $\quad\infty = 1^\omega$

($n$ ones followed by infinitely many zeros, and infinitely many ones respectively). This gives the *extended natural numbers*, also known as the generic convergent sequence or the *one-point compactification of the natural numbers*. The above encoding gives an embedding

$(n \mapsto \underline{n}) : \mathbb{N} \to \mathbb{N}_\infty$.

This shows that the failure of provability of the omniscience of $\mathbb{N}$ is not simply due to its being infinite, as the larger set $\mathbb{N}_\infty$ is omniscient.
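The construction behind the theorem can be sketched in Haskell. The following is a minimal sketch under my own names (`NatInf`, `finite`, `infinity`, `epsilon`, `forallNatInf`), not those of the Agda development; correctness of the search relies on the predicate inspecting only finitely many entries of its argument (continuity), which is the case for the predicates one writes in practice:

```haskell
-- N∞ as infinite decreasing binary sequences, with True for 1 and
-- False for 0.  A sketch: the names are mine, not the Agda development's.
type NatInf = [Bool]          -- invariant: pointwise decreasing

finite :: Int -> NatInf       -- the embedding n ↦ 1^n 0^ω
finite n = replicate n True ++ repeat False

infinity :: NatInf            -- 1^ω
infinity = repeat True

-- Universal witness: its i-th entry is True iff p accepts all of
-- 0̄, …, ī.  If p rejects some finite point, epsilon p is the least
-- such point; otherwise it is ∞.  Hence p (epsilon p) == True
-- implies that p accepts every point of N∞.
epsilon :: (NatInf -> Bool) -> NatInf
epsilon p = [ all (p . finite) [0..i] | i <- [0..] ]

-- Deciding the universal quantifier over N∞:
forallNatInf :: (NatInf -> Bool) -> Bool
forallNatInf p = p (epsilon p)
```

For example, `forallNatInf (\x -> x !! 2)` evaluates to `False`, with `epsilon` producing the counterexample $\underline{0}$, while tautological predicates are accepted.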

Using this theorem, we can show that for every $p : \mathbb{N}_\infty \to 2$, the proposition $\forall n : \mathbb{N}.p(\underline{n})=1$ is decidable. The point is that the quantification is now over the natural numbers, yet without violating (weak) LPO. In turn, using this, it follows that the non-continuity of a function $f : \mathbb{N}_\infty \to \mathbb{N}$ is decidable. It is also a corollary that the function type $\mathbb{N}_\infty \to \mathbb{N}$ has decidable equality.

The proof of the above theorem was presented at the Types’2011 meeting and is published in the Journal of Symbolic Logic (JSL), Volume 78, Number 3, September 2013, pages 764-784, under the title Infinite sets that satisfy the principle of omniscience in any variety of constructive mathematics. To save some work in the writing of this post, I offer the slides of the Types talk as an informal account of the original formal proof in Agda or the current one.

One (categorical) way of convincing ourselves that we have the right definition of $\mathbb{N}_\infty$ is to prove that it is the final coalgebra of the functor $1+(-)$.

The realizer given in Seemingly impossible functional programs relied on the hypothesis that all functions from the Cantor set $2^\mathbb{N}$ of infinite binary sequences to the two-point set $2$ are uniformly continuous.

This statement fails to be provable, or rather is independent, in Bishop mathematics and in Martin-Löf’s type theory: it can’t be proved or disproved. In Brouwer’s intuitionism the statement holds, but it is false in Markov’s Russian school of constructive mathematics, as it is in classical mathematics, because discontinuous functions can easily be defined by case analysis using excluded middle. Bishop mathematics is by design intended to be compatible with both classical mathematics and all varieties of constructive mathematics, in the sense that any theorem of Bishop mathematics is a theorem of classical mathematics and of its constructive varieties. Martin-Löf’s type theory is a formal system intended to be capable of formalizing Bishop mathematics and of running it on computers.

However, although Agda is based on Martin-Löf type theory, and hence the above remarks apply, we can disable the termination checker in a particular module to prove a countable Tychonoff Theorem for inhabited omniscient sets, which gives the omniscience of $X=2^\mathbb{N}$. This is the only place in our development where we commit such a sin, relying on an external termination proof. But notice that, although this goes beyond the realms of Bishop and Martin-Löf mathematics, the result is still compatible with classical mathematics, as all sets are omniscient in classical mathematics by virtue of the principle of excluded middle.

(At this point I should disclose that when I say *set* I mean *type*. Apologies to HoTT people, for whom (h)sets are particular kinds of types.)

I formulated the above in terms of the notion of omniscience because it is perhaps more familiar. But, for many purposes, including showing the omniscience of $\mathbb{N}_\infty$, it is more convenient to work with the notion of searchable set. We say that $X$ is searchable if

$\forall p:X \to 2.\exists x_0 : X. p(x_0)=1 \to \forall x:X.p(x)=1$.

This is the Drinker Paradox, which says that in every (inhabited) pub $X$ there is a person $x_0$ such that if $x_0$ drinks then everybody drinks, where $p(x)=1$ means that $x$ drinks. Thus, a pub $X$ is searchable iff it satisfies the Drinker Paradox for every $2$-valued notion $p$ of “drinking”.

It is immediate that any searchable set is inhabited. To get the omniscience of $X$ from its searchability, given $p$ we first find a “universal witness” $x_0$ by the searchability of $X$. We can then check whether $p(x_0)=0$ or $p(x_0)=1$ by the decidability of equality of the set $2$. In the first case we conclude $\exists x:X.p(x)=0$ directly, and the second case gives $\forall x:X.p(x)=1$ by the definition of searchability, and so $X$ is indeed omniscient. Conversely, if we know an inhabitant $a : X$ and that $\exists x:X.p(x)=0$ or $\forall x:X.p(x)=1$, in the first case we let $x_0$ be the witness $x$, and in the second case we let $x_0=a$, to conclude that $X$ is searchable. This shows that a set is searchable iff it is both inhabited and omniscient.
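In Haskell, a searchable set can be represented by a selection functional in the style of the Escardó–Oliva MSCS paper mentioned below, and the argument above becomes a two-line program. This is a sketch; the type `Eps` and the names are mine:

```haskell
-- A selection functional for a searchable set x: given p, it returns
-- a universal witness x0 such that p x0 == True implies that p holds
-- everywhere (the Drinker Paradox).
type Eps x = (x -> Bool) -> x

-- Searchability gives decidability of both quantifiers (omniscience):
forAll, exists :: Eps x -> (x -> Bool) -> Bool
forAll eps p = p (eps p)                     -- check p at the witness
exists eps p = not (forAll eps (not . p))    -- ∃ = ¬∀¬ for decidable p

-- The two-point set is searchable: return a counterexample if there
-- is one, and otherwise an arbitrary point.
epsBool :: Eps Bool
epsBool p = if p True then False else True
```

For instance, `forAll epsBool id` is `False` (the witness is the non-drinker `False`), while `exists epsBool id` is `True`.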

We have already mentioned the countable Tychonoff Theorem, but this is a rogue example, as it departs from the constructive foundations considered in this post.

If we have a searchable set $X$ and a family $(Y_x)_{x : X}$ of searchable sets, then its sum or disjoint union

$\sum_{x : X} Y_x$

is also searchable. This follows directly from the ideas discussed in the paper with Paulo Oliva published in Mathematical Structures in Computer Science, Volume 20, Issue 2, April 2010, Cambridge University Press.

Two particular cases of interest are

- (binary Tychonoff) If $X$ and $Y$ are searchable then so is $X \times Y$ (when $Y$ doesn’t depend on $X$), and
- If $Y_0$ and $Y_1$ are searchable then so is $Y_0 + Y_1$ (when $X=2$).
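The binary case has a short Haskell realizer in terms of selection functionals, following the idea of the product of selection functions from the Escardó–Oliva paper. A sketch; `Eps` and the names are mine:

```haskell
-- p (eps p) == True implies p holds everywhere (Drinker Paradox).
type Eps x = (x -> Bool) -> x

-- Binary Tychonoff: the product of two searchable sets is searchable.
-- For each candidate a we first compute the best partner b for a,
-- then select the best a against those partners.
prodEps :: Eps a -> Eps b -> Eps (a, b)
prodEps ea eb p = (a0, best a0)
  where best a = eb (\b -> p (a, b))
        a0     = ea (\a -> p (a, best a))

-- The two-point set is searchable (repeated to keep this self-contained):
epsBool :: Eps Bool
epsBool p = if p True then False else True
```

If `p (a0, b0)` holds at the returned pair, then `p` holds on all pairs, provided `ea` and `eb` have the corresponding property.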

The rogue proof of the countable Tychonoff Theorem mentioned above recursively iterates the binary Tychonoff Theorem, as in the above paper with Paulo.
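In Haskell, where general recursion is unrestricted and laziness stands in for the missing termination proof, the rogue iteration can be written down directly, in the style of Seemingly impossible functional programs. A sketch with my own names (`Eps`, `prodList`, `epsCantor`); evaluation terminates when the predicate inspects only finitely many coordinates:

```haskell
-- p (eps p) == True implies p holds everywhere (Drinker Paradox).
type Eps x = (x -> Bool) -> x

-- Countable Tychonoff by iterating the binary product: the head is
-- selected against the best tails, and the tail is then selected for
-- the chosen head.  This is general recursion: Agda's termination
-- checker would (rightly) not accept it as it stands.
prodList :: [Eps a] -> Eps [a]
prodList (e : es) p = a0 : prodList es (\as -> p (a0 : as))
  where a0 = e (\a -> p (a : prodList es (\as -> p (a : as))))
prodList [] _ = []   -- only reached for finite lists of sets

epsBool :: Eps Bool
epsBool p = if p True then False else True

-- The Cantor set 2^N is searchable:
epsCantor :: Eps [Bool]
epsCantor = prodList (repeat epsBool)
```

For a continuous predicate such as `\c -> c !! 0 && not (c !! 1)`, evaluating `p (epsCantor p)` terminates and decides the universal quantifier over $2^\mathbb{N}$.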

The mysterious terminology *squashed sum*, introduced below, comes from the above JSL paper, where the construction really does perform some squashing (in a metric sense) within the Cantor space $2^\mathbb{N}$. Here we give a generalization of the construction that makes sense for sets that are not necessarily embedded in the Cantor space (unpublished, but done in 2011 and implemented in Agda in 2012).

Given countably many sets $X_n$, we want to construct their disjoint sum with a limit point at infinity added, much as we added a point at infinity to the natural numbers. We do this in two steps. We first extend the family $X : \mathbb{N} \to U$, where $U$ is the universe of types, to a family $Y : \mathbb{N}_\infty \to U$ as follows:

$Y_i = \prod_{n : \mathbb{N}} X_n^{i = \underline{n}}$.

Here $=$ is the identity type and we used exponential notation for the function space $i = \underline{n} \to X_n$.

The idea is that this product has at most one non-trivial factor, so that we get

$Y_{\underline{n}} \cong X_n$, $\quad Y_{\infty} \cong 1$

where $1$ is the one-point set. In the second step, we take the ordinary sum of the family $Y$. So, in summary, we define the squashed sum of the family $X$ to be

$\sum^1 X = \sum_{i : \mathbb{N}_\infty} \prod_{n : \mathbb{N}} X_n^{i = \underline{n}}$.
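To see that the product collapses as claimed at the two kinds of points, note that the exponential $X_m^{i = \underline{m}}$ is isomorphic to $X_m$ when the equation holds and to $1$ when it is refuted, and that $\underline{n} = \underline{m}$ holds precisely for $m = n$, while $\infty \ne \underline{m}$ for every $m$. A sketch of the computation:

$Y_{\underline{n}} = \prod_{m : \mathbb{N}} X_m^{\underline{n} = \underline{m}} \cong X_n \times \prod_{m \neq n} 1 \cong X_n$, $\qquad Y_{\infty} = \prod_{m : \mathbb{N}} X_m^{\infty = \underline{m}} \cong \prod_{m : \mathbb{N}} 1 \cong 1$.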

The theorem is that if $X_n$ is searchable for every $n:\mathbb{N}$ then the squashed sum $\sum^1 X$ is also searchable. The subtlety is to prove that $Y_i$ is searchable for every $i : \mathbb{N}_\infty$ without performing the taboo case analysis $i=\underline{n}$ or $i = \infty$, as this amounts to LPO. This subtlety amounts to a provable, curious instance of the Tychonoff Theorem.

If $X$ has at most one point (any two of its points are equal), or, in HoTT terminology, $X$ is an hproposition or a $-1$-truncated type, and $Y : X \to U$ is a family of searchable sets, then

$\prod_{x : X} Y_x$

is also searchable. In other words, searchable sets are closed under the formation of products with at most one factor. This is clear with excluded middle, but the point is that it is not needed.

The above extension of $X : \mathbb{N} \to U$ to $Y:\mathbb{N}_\infty \to U$ is an instance of a more general phenomenon. If $A$ and $X : A \to U$ are arbitrary, and $j : A \to B$ is an embedding, then $X$ extends to a family $X / j : B \to U$ along $j$.

For this we need the right definition of embedding, namely that each fiber

$j^{-1}(b) = \sum_{a : A} b = j(a)$

is an hproposition. Then $X / j$ is constructed as

$(X / j)(b) = \prod_{(a , r) : j^{-1}(b)} X_a \cong \prod_{a : A} X_a^{b = j(a)}$.

We then have that if $X$ is a family of searchable sets, so is $X / j$, using the hproposition-indexed Tychonoff Theorem, and that the extended sum $\sum(X / j)$ is searchable if $B$ is searchable, even if $A$ isn’t searchable, using the fact that sums preserve searchability.
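In particular, taking $j$ to be the embedding $(n \mapsto \underline{n}) : \mathbb{N} \to \mathbb{N}_\infty$, the extension $X / j$ is precisely the family $Y$ used in the definition of the squashed sum, so that the squashed sum is recovered as an extended sum:

$\sum^1 X = \sum (X / (n \mapsto \underline{n}))$.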

If $X_n$ is a family of ordinals in the sense of the HoTT book, then so is its squashed sum, under the lexicographic order with the added point at infinity as the largest element. I haven’t proved this in Agda yet (it is on my to-do list), but I have developed some preliminary Agda modules towards it; what remains to be done is to add the order structure.

To avoid setoids (what Bishop actually calls sets) I have invoked the axiom of function extensionality (any two pointwise equal functions are equal). This will be available in the next generation of proof assistants / programming languages based on HoTT, which are under active development in various research centers. However, for the moment, it turns out that for the majority of theorems we can get away with the double negation of function extensionality, which can be postulated without loss of computational content. Chuangjie Xu has formally checked that doubly negated function extensionality is enough for a few of the theorems discussed above, starting from the crucial one that $\mathbb{N}_\infty$ is searchable. What one observes in practice in my implementation, which uses (positive) function extensionality, is that running examples yields numerical results that don’t get stuck in irreducible terms containing the function extensionality constant funext.

I plan to write a paper with the informalization of the Agda development (other than that already provided in the JSL paper mentioned above). For the moment, this blog post is a first approximation, with an informal discussion of some of the results proved in the Agda development.

**Download slides:** brazilian-type-checking.pdf

**GitHub repository:** https://github.com/andrejbauer/tt

**Abstract:** Proof assistants verify that inputs are correct up to judgmental equality. Proofs are easier and smaller if equalities without computational content are verified by an oracle, because proof terms for these equations can be omitted. In order to keep judgmental equality decidable, though, typical proof assistants use a limited definition implemented by a fixed equivalence algorithm. While other equalities can be expressed using propositional identity types and explicit equality proofs and coercions, in some situations these create prohibitive levels of overhead in the proof.

Voevodsky has proposed a type theory with two identity types, one propositional and one judgmental. This lets us hypothesize new judgmental equalities for use during type checking, but generally renders the equational theory undecidable without help from the user.

Rather than reimpose the full overhead of term-level coercions for judgmental equality, we propose algebraic effect handlers as a general mechanism to provide local extensions to the proof assistant’s algorithms. As a special case, we retain a simple form of handlers even in the final proof terms, small proof-specific hints that extend the trusted verifier in sound ways.

**Abstract:** Intuitionistic mathematics perceives subtle variations in meaning where classical mathematics asserts equivalence, and permits geometrically and computationally motivated axioms that classical mathematics prohibits. It is therefore well-suited as a logical foundation on which questions about computability in the real world are studied. The realizability interpretation explains the computational content of intuitionistic mathematics, and relates it to classical models of computation, as well as to more speculative ones that push the laws of physics to their limits. Through the realizability interpretation Brouwerian continuity principles and Markovian computability axioms become statements about the computational nature of the physical world.

**Download:** real-world-realizability.pdf

*Univalent foundations subsume classical mathematics!*

The next time you hear someone having doubts about this point, please refer them to this post. A more detailed explanation follows.

In standard mathematics we take classical logic and set theory as a foundation:

$\text{logic} + \text{sets}$

On top of this we build everything else. In Univalent foundations this picture is *extended.* Logic and sets are still basic, but they become part of a much larger universe of objects that we call *types.* The types are stratified into levels according to their homotopy-theoretic complexity. For a historical reason we start counting at $-2$:

$\text{$(-2)$-types} \subseteq \text{$(-1)$-types} \subseteq \text{$0$-types} \subseteq \text{$1$-types} \subseteq \cdots \subseteq \text{types}$

We have finite levels $-2, -1, 0, 1, 2, \ldots$, as well as types which lie outside finite levels. Levels $-1$ and $0$ correspond to logic and sets, respectively:

- $(-2)$-types are the contractible ones,
- $(-1)$-types are the truth values,
- $0$-types are the sets,
- $1$-types are the groupoids,
- etc.

For instance, $\mathbb{N}$ is a $0$-type, the circle $S^1$ is a $1$-type, while the sphere $S^2$ does not have a finite level. If you are familiar with homotopy theory it may be helpful to think of $n$-types as those homotopy types whose homotopy structure above dimension $n$ is trivial.
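Concretely, the levels can be defined by recursion, starting from contractibility (a sketch in the notation of the HoTT book):

$\text{isContr}(X) = \sum_{x : X} \prod_{y : X} x = y$, $\qquad \text{is-}(n{+}1)\text{-type}(X) = \prod_{x, y : X} \text{is-}n\text{-type}(x = y)$.

Thus a $(-1)$-type is one any two of whose points are equal, and a $0$-type is one whose identity types are truth values.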

In Univalent foundations we *may* assume excluded middle for $(-1)$-types and we *may* assume the axiom of choice for $0$-types. This then precisely recovers classical mathematics, as sitting inside a larger foundation. Without excluded middle and choice we do indeed obtain a “constructive” version of Univalent foundations, which however is still *compatible* with classical mathematics. Thus every theorem in the HoTT book is compatible with classical math, even though the HoTT book does *not* assume excluded middle or choice (except in the sections on cardinals and ordinals).

Any mathematics that is formalized using Univalent foundations is compatible with classical mathematics. Moreover, in principle *all* classical mathematics could be so formalized.

As the univalent universe is larger than just sets, we have to get used to certain phenomena. For instance, there is the quotient set $\mathbb{R}/\mathbb{Z}$ at level $0$, and then there is the circle as a $1$-type (a groupoid with one object and a generating loop). This is no different from classical mathematics, where we are used to talking about the circle as a set vs. the circle as a topological space.

A classical mathematician and a constructive type theorist will both ask about “excluded middle and axiom of choice for all types”, each for his own reasons. Such principles can indeed be formulated, and then shown to be inconsistent with the Axiom of Univalence. However, this is of little consequence because the “higher-level” versions of excluded middle and choice are simply the wrong statements to consider. Excluded middle is about logic and so it should only apply to $(-1)$-types, while choice is about sets and it should apply only to $0$-types. (To imagine a similar situation in classical mathematics, consider what the axiom of choice would say if we applied it to topological spaces: “every topological bundle whose fibers are non-empty has a continuous global section”, which is obvious nonsense.)
