**Abstract:** Intuitionistic mathematics perceives subtle variations in meaning where classical mathematics asserts equivalence, and permits geometrically and computationally motivated axioms that classical mathematics prohibits. It is therefore well-suited as a logical foundation on which questions about computability in the real world are studied. The realizability interpretation explains the computational content of intuitionistic mathematics, and relates it to classical models of computation, as well as to more speculative ones that push the laws of physics to their limits. Through the realizability interpretation Brouwerian continuity principles and Markovian computability axioms become statements about the computational nature of the physical world.

**Download:** real-world-realizability.pdf

*Univalent foundations subsume classical mathematics!*

The next time you hear someone having doubts about this point, please refer them to this post. A more detailed explanation follows.

In standard mathematics we take classical logic and set theory as a foundation:

$\text{logic} + \text{sets}$

On top of this we build everything else. In Univalent foundations this picture is *extended.* Logic and sets are still basic, but they become part of a much larger universe of objects that we call *types.* The types are stratified into levels according to their homotopy-theoretic complexity. For a historical reason we start counting at $-2$:

$\text{$(-2)$-types} \subseteq \text{$(-1)$-types} \subseteq \text{$0$-types} \subseteq \text{$1$-types} \subseteq \cdots \subseteq \text{types}$

We have finite levels $-2, -1, 0, 1, 2, \ldots$, as well as types which lie outside finite levels. Levels $-1$ and $0$ correspond to logic and sets, respectively:

- $(-2)$-types are the contractible ones,
- $(-1)$-types are the truth values,
- $0$-types are the sets,
- $1$-types are the groupoids,
- etc.

For instance, $\mathbb{N}$ is a $0$-type, the circle $S^1$ is a $1$-type, while the sphere $S^2$ does not have a finite level. If you are familiar with homotopy theory it may be helpful to think of $n$-types as those homotopy types whose homotopy structure above dimension $n$ is trivial.
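To make the levels concrete, the first few can be written out as definitions. Here is a minimal sketch in Lean 4 syntax (the names `IsContr`, `IsProp`, `IsSet` are ad hoc; note that plain Lean is not a homotopy type theory, so proof irrelevance for its `Prop`-valued equality makes the last definition trivially true — in a univalent setting these definitions carry real content):

```lean
universe u

-- (-2)-types: contractible, i.e. there is a point that every point equals
def IsContr (A : Type u) : Prop := ∃ a : A, ∀ b : A, b = a

-- (-1)-types: truth values, i.e. any two points are equal
def IsProp (A : Type u) : Prop := ∀ a b : A, a = b

-- 0-types: sets, i.e. any two parallel equality proofs are equal
def IsSet (A : Type u) : Prop := ∀ (a b : A) (p q : a = b), p = q
```

Each level requires the identity types one dimension down to be of the previous level, which is why the hierarchy is cumulative.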

In Univalent foundations we *may* assume excluded middle for $(-1)$-types and we *may* assume the axiom of choice for $0$-types. This then precisely recovers classical mathematics, as sitting inside a larger foundation. Without excluded middle and choice we do indeed obtain a “constructive” version of Univalent foundations, which however is still *compatible* with classical mathematics. Thus every theorem in the HoTT book is compatible with classical math, even though the HoTT book does *not* assume excluded middle or choice (except in the sections on cardinals and ordinals).

Any mathematics that is formalized using Univalent foundations is compatible with classical mathematics. Moreover, in principle *all* classical mathematics could be so formalized.

As the univalent universe is larger than just sets, we have to get used to certain phenomena. For instance, there is the quotient set $\mathbb{R}/\mathbb{Z}$ at level $0$, and then there is the circle as a $1$-type (a groupoid with one object and a generating loop). This is no different from classical mathematics, where we are used to talking about the circle as a set vs. the circle as a topological space.

A classical mathematician and a constructive type theorist will both ask about “excluded middle and axiom of choice for all types”, each for his own reasons. Such principles can indeed be formulated, and then shown to be inconsistent with the Axiom of Univalence. However, this is of little consequence because the “higher-level” versions of excluded middle and choice are simply the wrong statements to consider. Excluded middle is about logic and so it should only apply to $(-1)$-types, while choice is about sets and it should apply only to $0$-types. (To imagine a similar situation in classical mathematics, consider what the axiom of choice would say if we applied it to topological spaces: “every topological bundle whose fibers are non-empty has a continuous global section”, which is obvious nonsense.)

I went back to my undergraduate days when I actually did differential geometry and churned out the normals with Mathematica. It took a bit of work, kind advice from my colleague Pavle Saksida, and a pinch of black magic (to extract the Delaunay triangulation from Mathematica), so I thought I might as well publish the result at my GitHub costa-surface repository. The code is released into the public domain. Have fun making pictures of Costa’s surface! Here is mine (deliberately non-fancy):

An inductive type contains exactly those elements that we obtain by repeatedly using the constructors.

If you believe the above statement you should keep reading. I am going to convince you that the statement is unfounded, or that at the very least it is preventing you from understanding type theory.

Let us consider the most famous inductive type, the natural numbers $\mathtt{nat}$. It has two constructors:

- the constant $0$, called *zero*,
- the unary constructor $S$, called *successor*.
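Written as an inductive declaration in Lean 4 syntax (using the ad hoc name `Nat'` to avoid clashing with the built-in `Nat`):

```lean
inductive Nat' where
  | zero : Nat'          -- the constant 0
  | succ : Nat' → Nat'   -- the unary constructor S
```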

So one expects that the elements of $\mathtt{nat}$ are $0$, $S(0)$, $S(S(0))$, …, and no others. If our type theory is good enough, we can prove a meta-theorem:

**Metatheorem:** Every closed term of type $\mathtt{nat}$ is of the form $S(S(\cdots S(0) \cdots ))$.

(Incidentally, before another reddit visitor finds it productive to interpret “is of the form” as syntactic equality, let me point out that obviously I mean definitional equality.) This can be a very, very misleading metatheorem if you read it the wrong way. Suppose I told you:

**Metatheorem:** There are countably many closed terms of type $\mathtt{nat} \to \mathtt{nat}$.

Would you conclude that the type of functions $\mathtt{nat} \to \mathtt{nat}$ is countable? How would that reconcile with the usual diagonalization proof that $\mathtt{nat} \to \mathtt{nat}$ is uncountable?
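The diagonal argument itself is internal to type theory and easy to formalize. Here is a sketch in Lean 4 (the name `no_enumeration` is ad hoc) showing that no function enumerates all of $\mathtt{nat} \to \mathtt{nat}$:

```lean
-- Cantor's diagonal argument: no f : Nat → (Nat → Nat) is surjective.
theorem no_enumeration (f : Nat → (Nat → Nat)) :
    ¬ ∀ g : Nat → Nat, ∃ n, f n = g := by
  intro hsurj
  -- diagonalize: this g differs from f n at input n
  obtain ⟨n, hn⟩ := hsurj (fun k => f k k + 1)
  have h : f n n = f n n + 1 := congrFun hn n
  omega
```

The internal theorem holds in every model, countable syntax notwithstanding, which is exactly the tension the metatheorem above is meant to expose.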

Notice that I am carefully labeling above as “metatheorems”. That is because they are telling us something about the *formal language* that we are using to speak about mathematical objects, *not* about mathematical objects themselves. We must look at the objects themselves, that is, we must *interpret* the type in a *model*.

Let us call a model *exotic* if not every element of (the interpretation of) $\mathtt{nat}$ is obtained by finitely many uses of the successor constructor, starting from zero. Here are some *non-exotic* models:

- In set theory the natural numbers are just the usual ones $\mathbb{N}$.
- In the category of topological spaces and continuous maps the natural numbers are the usual ones, equipped with the discrete topology. (Exercise: why not the indiscrete topology?)
- In the syntactic category whose objects are types and whose morphisms are expressions, quotiented by provable equality, the above metatheorem tells us that there are no exotic numbers.

And here are some exotic models:

- Consider the category of pointed sets. Recall that a pointed set $(A, a)$ is a set $A$ with a chosen element $a \in A$, called the *point*. A morphism $f : (A, a) \to (B, b)$ is a function $f : A \to B$ which preserves the point, $f(a) = b$. Computer scientists can think of the points as special “error” values. You might try to take $(\mathbb{N}, 0)$ as the interpretation of $\mathtt{nat}$, but this cannot be right because it would force $0$ to always be mapped to the point. In fact, the correct thing is to *adjoin* a new point $\bot$ to $\mathbb{N}$, so we get $(\mathbb{N} \cup \lbrace \bot \rbrace, \bot)$. The exotic element $\bot$ does not create any trouble because all morphisms are forced to map it to the point.
- Consider the category of sheaves on a topological space $X$. The type $\mathtt{nat}$ corresponds to the sheaf of continuous maps into $\mathbb{N}$, where $\mathbb{N}$ is equipped with the discrete topology. The elements corresponding to $0$, $S(0)$, $S(S(0))$, etc. are the constant maps. If $X$ is connected then these are all of them, but otherwise there are going to be many more continuous maps $X \to \mathbb{N}$. For example, if $X = 2 = \lbrace 0, 1 \rbrace$ is the discrete space on two points, then the object of natural numbers in sheaves on $X$ will consist of *pairs* of natural numbers, because $\mathsf{Sh}(2) = \mathsf{Set} \times \mathsf{Set}$.
- It would be a crime not to mention the non-standard models, historically the first exotic models (I hope that’s correct). In these the natural numbers obtain many new exotic elements through the ultrapower construction. (To fit these into our scheme you need to consider the entire set-theoretic model formed by an ultrapower, not just an ultrapower of $\mathbb{N}$.)
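The underlying set of the pointed-set model is easy to simulate. Here is a sketch in Lean 4, using `Option Nat` for $\mathbb{N} \cup \lbrace \bot \rbrace$ with `none` playing the role of $\bot$ (the names `zero'` and `succ'` are ad hoc, and this only illustrates the carrier with its adjoined point, not the full categorical structure):

```lean
def zero' : Option Nat := some 0

def succ' : Option Nat → Option Nat
  | none   => none            -- point-preserving: ⊥ goes to ⊥
  | some n => some (n + 1)

-- The adjoined point `none` is exotic: the constructors never produce it.
example (n : Nat) : succ' (some n) ≠ none := by
  simp [succ']
```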

In all of these $\mathtt{nat}$ gains new elements through a process of *completion* which makes sure that the structure obtained has certain properties. In the case of pointed sets the completion process makes sure that there is a distinguished point. In the case of sheaves the completion process is sheafification, which makes sure that the natural numbers are described by local information. I do not know what the ultrapower construction is doing in terms of it being a completion. (The ultrapower is really a two-step process: first we take a power and then a quotient by an ultrafilter. The power is just a special case of sheaves, so that is a completion process. But in what sense is quotienting a completion process?)

The situation is a bit similar to that of *freely generated* algebraic structures. The group freely generated by a single generator (constructor) is $\mathbb{Z}$, because in addition to the generator we have to throw in its inverse, square, cube, etc. These are *not* equal to the generator because there is a group with an element which is not equal to its own inverse, square, cube, etc. If the freely generated group is to be free it must make equal only those things which all groups make equal.

It is therefore better to think of an inductive type as *freely generated.* The generation process throws in everything that can be obtained by successive applications of the constructors, and possibly more, depending on what our model looks like.

Once you convince yourself that the natural numbers really can contain exotic elements, and that this is not just some sort of pathology cooked up by logicians, it should be easier to believe that the identity types may be non-trivial.

Let me further address two issues. The first one is this: in type theory we can *prove* that every natural number is either zero or a successor of something. Does this not exclude the exotic elements? For instance, in sheaves on the discrete space on two points, how is the element $(0, 42)$ either zero or a successor? It is not zero, because that would be $(0, 0)$, but it is not a successor either because its first component is $0$. Ah, but we must remember the *local character* of sheaves: in order to establish a disjunction it is sufficient to establish it locally on the elements of a cover. In the case of $\mathsf{Sh}(2)$ we just have to do it separately on each component: the first component of $(0, 42)$ is zero, and the second one is a successor, so all is fine. There is a difference between the *internal* statement “every number is zero or a successor” and its external meaning.
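The internal disjunction is a one-step case analysis; in Lean 4:

```lean
-- Internally, every natural number is zero or a successor.
example (n : Nat) : n = 0 ∨ ∃ m, n = m + 1 := by
  cases n with
  | zero   => exact Or.inl rfl
  | succ m => exact Or.inr ⟨m, rfl⟩
```

What this internal proof *means* in a given model is then dictated by the model's interpretation of disjunction, such as the local one in sheaves.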

The second issue is a bit more philosophical. You might dislike certain models, and because of that you deny or ignore arguments that employ them. The reasons for disliking models vary: some people just ignore things they do not know about (“Oh sheaves, I do not know about those, but here is what happens in sets.”), some insist on doing things in certain ways (“We shall only truly understand the Univalence axiom when we have a computational re-interpretation of it in type theory”), and some just believe what they were taught (“Classical set theory suffices for all of mathematics”).

There are any number of theorems in logic which tell us that *the “unintended” models are unavoidable* (recall for instance the Löwenheim-Skolem theorem). Instead of turning a blind eye to them we should accept them because they allow us to complete an important process in mathematics:

- Based on previous mathematical experience, formulate a theory. Examples:
- from drawing lines and points, formulate some geometric axioms,
- from high-school math class, formulate some arithmetical laws,
- from ideas of what computation is about, formulate type theory.

- Look for other models, even if they are not “desired”. Examples:
- discover non-Euclidean geometries,
- discover exponential fields,
- discover homotopy-theoretic models of type theory.

- Study the new models to gain new mathematical experience.

In a typical situation an author submits a paper accompanied by some source code which contains the formalized parts of the work. Sometimes the code is enclosed with the paper, and sometimes it is available for download somewhere. **It is easy to ignore the code!** The journal finds it difficult to archive the code, the editor naturally focuses on the paper itself, the reviewer trusts the authors and the proof assistant, and the authors are tempted not to mention dirty little secrets about their code. If the proponents of formalized mathematics want to avert a disaster that could destroy their efforts in a single blow, they must adopt a set of rules that will ensure high standards. There is much more to trusting a piece of formalized mathematics than just running it through a proof checker.

Before I say anything about reviewing formalized mathematics, let me just point out that being anonymous does not give the referee the right to be impolite or even abusive. A good guiding principle is to never write anything in a review that you would not say to the author’s face. You can be harsh and you can criticize, but do it politely and ground your opinions in arguments. After all, you expect no less of the author.

Let us imagine a submission to a journal in which the author claims to have checked proofs using a computer proof assistant or some other such tool. Almost everything I write below follows from the simple observation that *the code contains proofs and that proofs are an essential part of the paper*. If as a reviewer or an editor you are ever in doubt, just imagine how you would act if the formalized part were actually an appendix written informally as ordinary math.

Here is a set of guidelines that I think should be followed when formalized mathematics is reviewed.

**The rules for the author:**

- *Enclose the code with the paper submission.*
- *Provide information on how to independently verify the correctness of the code.*
- *License the code so that anyone who wants to check the correctness of the proofs is free to do so.*
- *Provide explicit information on what parts of the code correspond to what parts of the paper.*

Comments:

- It is not acceptable to just provide a link where the code is available for download, for several reasons:
- When the anonymous reviewer downloads the code, he will give away his location and therefore very likely his identity. The anonymity of the reviewer must be respected. While there are methods that allow the reviewer to download the code anonymously, it is not for him to worry about such things.
- There is no guarantee that the code will be available from the given link in the future. Even if the code is on GitHub or some other such established service, in the long run the published paper is likely going to outlive the service.
- It must be perfectly clear which version of the code goes with the paper. Code that is available for download is likely going to be updated and change, which will put it out of sync with the paper. The author is of course always free to mention that the code is *also* available on some web site.

- Without instructions on how to verify correctness of the code, the reviewer and the readers may have a very hard time figuring things out. The information provided must:
- List the prerequisites: which proof assistant the code works with and which libraries it depends on, with exact version information for all of them.
- Include step-by-step instructions on how to compile the code.
- Provide an outline of how the code is organized.

- Formalized mathematics is a form of software. I am not a copyright expert, but I suspect that the rules for software are not the same as those for published papers. Therefore, the code should be licensed separately. I strongly urge everybody to release their code under an open source license, otherwise the evil journals will think of ways to hide the code from the readers, or to charge extra for access to the code.
- The reviewer must check that all theorems stated in the paper have actually been proved in the code. To make his job possible the author should make it clear how to pair up the paper theorems with the machine proofs. It is *not* easy for the reviewer to wade through the code and try to figure out what is what. Imagine a paper in which all proofs were put in an appendix but they were not numbered, so that the reader would have to figure out which theorem goes with which proof.

**The rules for the reviewer:**

- *Review the paper part according to established standards.*
- *Verify that the code compiles as described in the provided instructions.*
- *Check that the code correctly formulates all the definitions.*
- *Check that the code proves each theorem stated in the paper and that the machine version of the theorem is the same as the paper version.*
- *Check that the code does not contain unfinished proofs or hypotheses that are not stated in the paper.*
- *Review the code for quality.*

Comments:

- Of course the reviewer should not forget the traditional part of his job.
- It is absolutely critical that the reviewer independently compile the code. This may require some effort, but skipping this step is like not reading proofs.
- Because the work is presented in two separate parts, the paper and the code, there is potential for mismatch. It is the job of the reviewer to make sure that the two parts fit together. The reviewer can reject the paper if he cannot easily figure out which part of the code corresponds to which part of the paper.
- The code is part of the paper and is therefore subject to reviewing. Just because a piece of code is accepted by a proof checker, that does not automatically make it worthy. Again, think how you would react to a convoluted paper proof which was nevertheless correct. You would most definitely comment on it and ask for an improvement.

**The rules for the journal:**

- *The journal must archive the code and make it permanently available with the paper, under exactly the same access permissions as the paper itself.*

This is an extremely difficult thing to accomplish, so the journal should do whatever it can. Here are just two things to worry about:

- It is unacceptable to make the code less accessible than the paper, because the code *is* the paper.
- The printed version of the journal should have the code enclosed on a medium that lasts as long as paper.
- If the code is placed on a web site, it is easy for it to disappear in the future when the site is re-organized.

**The rules for the editor:**

- *The editor must find a reviewer who is not only competent to judge the math, but can also verify that the code is as advertised.*
- *The editor must make sure that the author, the journal, and the reviewer follow the rules.*

Comments:

- It may be quite hard to find a reviewer who both knows the math and can deal with the code. In such a case the best strategy might be to find two reviewers whose joint abilities cover all the tasks. But it is a very bad idea for the two reviewers to work independently, for who is going to check that the paper theorems really correspond to the computer proofs? It is not enough to have a reviewer run the code and report back that “it compiles”.
- In particular, the editor must:
- insist that the code be enclosed with the paper
- convince the journal that the code be archived appropriately
- require that the reviewer explicitly describe to what extent he verified the code: did he run it, did he check it corresponds to the paper theorems, etc? (See the list under “The rules for the reviewer”.)

I think the answer is *not without a diligent reviewing process.* While a computer verified proof can instill quite a bit of confidence in a mathematical result, there are things that the machine simply cannot check. So even though the reviewer need not check the proofs themselves, there is still a lot for him to do. Trust is in the domain of humans. Let us not replace it with blind belief in the power of machines.