If we really do believe that computation is basic then

proof checkers should allow general and explicit computation inside

the trusted core.

Certainly not!

I have moved away from the LCF/Nuprl style[*] and now actively prefer

what Bob would call axiomatic or formal type theory with a normalizing

notion of judgmental equality.

To be usable in practice, a proof assistant needs a bunch of

convenience features like type inference, pattern matching,

overloading, proof search, and so on. However, I think it is a bad

idea to implement these features by LCF style elaboration on top of a

Nuprl-style core. This approach works great for verifying that a

complete proof is correct, but most of the time, our “proof” objects

are either incomplete or actively wrong. So we need very strong

guarantees about broken proofs even more than we do for correct

proofs.

Type inference offers a very good example of the issues

involved. Suppose we are doing type inference, and know that a

variable has type A → B and that it is being used at type X → Y. To check

this, we must check for the judgmental equality of these types, and

algorithmically we want to proceed by checking that A = X and B = Y.

That is, we want judgmental equality to be injective for the function

type constructor, which in Nuprl is obviously false: for example,

consider (x:0) → String and (x:0) → Bool, which are equated in Nuprl's

model of the universe.

In contrast, in pure MLTT, the injectivity of the Π-former is an

admissible property of judgmental equality. As a result, a type

inference procedure that relies upon this property will be complete

(at least in this respect), and so a failure of type inference will

reflect a genuine type error rather than a mere failure of a

heuristic.
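The reasoning above can be sketched in code. Below is a hypothetical, minimal conversion checker (the names `Base`, `Arrow`, and `equal_ty` are illustrative, not taken from any real proof assistant): it decides equality of simple types by structural decomposition, and its `Arrow` case is exactly the injectivity assumption under discussion.

```python
# Hypothetical sketch: deciding judgmental equality of simple types
# by structural decomposition.
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    name: str          # a base type, e.g. Bool or String

@dataclass(frozen=True)
class Arrow:
    dom: object        # domain A in A -> B
    cod: object        # codomain B in A -> B

def equal_ty(t1, t2) -> bool:
    """Check t1 = t2 by recursion on structure.

    The Arrow case relies on injectivity of the function type
    constructor: A -> B equals X -> Y iff A = X and B = Y.  In pure
    MLTT this is admissible, so a False answer here reflects a
    genuine type error; under Nuprl-style semantics injectivity
    fails, so this procedure would be incomplete.
    """
    if isinstance(t1, Base) and isinstance(t2, Base):
        return t1.name == t2.name
    if isinstance(t1, Arrow) and isinstance(t2, Arrow):
        return equal_ty(t1.dom, t2.dom) and equal_ty(t1.cod, t2.cod)
    return False
```

On the Nuprl counterexample above, this checker would report the two types unequal even though the semantics equates them, which is precisely the incompleteness at issue.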

The fragility resulting from heuristic tactics is not hypothetical:

George Gonthier’s proof of the four colour theorem no longer builds on

modern versions of Coq, due to small changes in how elaboration

works. (Still more inconceivable would be porting his proof to a

system like Matita, despite it having the same kernel logic as Coq!)

Basically, for a proof to be bearable, we need a bunch of convenience

features. Specifying these features in a reasonably declarative way

requires specifying when they *don’t* work just as much as when they

do, and doing this in an implementable way requires a bunch of

admissible properties from the kernel type theory, most of which do

not hold in Nuprl-style calculi.

[*] I have not, of course, rejected Nuprl-style semantics. For

example, it would be impossible to prove the decidability of

typechecking for the formal theories I prefer without logical

relations/realizability models. (Alas, usually two such models!)

The argument that non-deterministic computation may be exponential is bogus. Deterministic computation can be exponential as well.

Non-deterministic computation may be quite desirable, especially in situations where you do not care which witness is found. Correctness of such computations is a separate issue which needs to be addressed – but certainly not by forcing everything to be deterministic.
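A small illustration of this point, under assumed names (`some_factor` is hypothetical): when searching for a factor of a composite number, any divisor will do, and checking the witness afterwards is cheap and independent of how it was found.

```python
# Hypothetical illustration: a search where we do not care which
# witness is found, only that the result can be checked afterwards.
def some_factor(n: int) -> int:
    # Any proper divisor will do; the iteration order (ascending
    # here, but any order, including a nondeterministic one, would
    # be acceptable) is an implementation detail.
    for d in range(2, n):
        if n % d == 0:
            return d
    raise ValueError(f"{n} has no proper divisor")

# Correctness of the witness is a separate, cheap check:
d = some_factor(91)
assert 91 % d == 0
```

The verification step (`91 % d == 0`) does not depend on the search being deterministic, which is the point: correctness of the witness and determinism of the search are separate concerns.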

As for computation being a chain of equations: everything is wrong with that because it prevents us from thinking about computation as an independent concept. Consequently, we do not have a good notion of *transformation* of computation (like we do for inductively defined structures such as formal derivations).

For example, I trust my computer to compute, for two 32-bit binary natural numbers x and y, a third one, z, such that x+y is congruent to z modulo 2^32. Do you have in mind that we would manually link that computational ability to an appropriate formalization?
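The trusted behaviour in question is tiny when written out as a model (a sketch, with an assumed name `add32`): machine addition of 32-bit naturals returns the sum reduced modulo 2^32.

```python
# A one-line model of trusted 32-bit machine addition: the result z
# satisfies x + y ≡ z (mod 2^32).
def add32(x: int, y: int) -> int:
    return (x + y) % (1 << 32)
```

For instance, `add32(2**32 - 1, 1)` wraps around to 0, exactly as the hardware adder does.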
