I am thinking of a number. I will tell you which number it is. It’s the one that is printed by the program:

```python
k = 0
n = 1000000000000
while n != 1:
    if n % 2 == 0:
        n = n // 2
    else:
        n = 3 * n + 1
    k = k + 1
print(k)
```

Now we are both thinking of the same number. In this particular case there are other, more efficient ways of communicating the same number, but that will not be the case in general. The most efficient way to communicate numbers (or any kind of information) is to send programs.
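A minimal Python sketch of the point about efficiency: a short program can denote a number whose decimal expansion is much longer than the program itself. The specific number `2**1000` is my illustrative choice, not taken from the discussion above.

```python
# Communicating a large number by sending a short program.
# The decimal expansion of 2**1000 has 302 digits, but a program
# that prints it is only 14 characters long.
program = "print(2**1000)"
digits = str(2 ** 1000)
print(len(program), len(digits))  # 14 302
```

The gap widens without bound: the program for `2**n` grows logarithmically in the length of the decimal expansion it produces.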

> An independent verification of a computation involves independently re-running the computation, not verifying someone else's trace of it encoded as a derivation.

If “verifying a computation” means confirming that *foo* is a possible outcome, and the computation is nondeterministic, then re-running the computation is a bad way to verify it.

Of course this doesn’t mean nondeterministic computations cannot be used; it just means that particular possible outcomes are generally hard to find by re-running. So a proof that an outcome of a computation is possible should not just be the computation itself.
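A toy Python sketch of why re-running fails here, and what a better proof looks like. The functions `compute` and `verify_witness` are my illustrative names: the recorded sequence of nondeterministic choices acts as a witness that a verifier can replay deterministically.

```python
import random

def compute(choices):
    # A toy nondeterministic computation: each step branches on one bit.
    total = 0
    for bit in choices:
        total = total * 2 + bit
    return total

def run_randomly(steps=10):
    # Fresh nondeterminism on every run.
    return compute([random.randint(0, 1) for _ in range(steps)])

def verify_witness(witness, claimed_outcome):
    # Deterministic replay of the recorded choices: cheap and reliable.
    return compute(witness) == claimed_outcome

# The outcome 1023 is possible (all ten bits equal 1), but a fresh random
# run produces it with probability 2**-10, so re-running is a poor verifier.
# The witness, by contrast, certifies possibility in one replay.
witness = [1] * 10
print(verify_witness(witness, 1023))  # True
```

The same shape appears in practice as recorded random seeds, schedules in concurrency testing, or certificates accompanying nondeterministic search.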

> We are therefore either doomed to have many proof assistants, or to have extensible proof assistants. If the latter, what [form] do such extensions take? Building internal models inside some logic is just one way of providing such flexibility.

How do internal models extend the proof assistant? It seems like you can always do this (modulo size issues), and it doesn’t affect proof assistant functionality.

Indeed, it seems that with non-extensible proof assistants, working internally to some defined logic is a prime example of something that’s harder than it ought to be. For example, setoid hell is working internally to the setoid model the hard way.
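A minimal Lean 4 sketch of what “working internally to the setoid model the hard way” means in the small. The names `parityRel` and `succ_respects` are illustrative; the pain point is that every operation on the carrier needs its own hand-written proof that it respects the equivalence relation, with no tool support from the assistant itself.

```lean
-- A toy setoid: natural numbers identified up to parity.
def parityRel (a b : Nat) : Prop := a % 2 = b % 2

instance paritySetoid : Setoid Nat where
  r := parityRel
  iseqv := ⟨fun _ => rfl, fun h => h.symm, fun h1 h2 => h1.trans h2⟩

-- "Setoid hell" in miniature: before successor can be used on the
-- quotient, one must separately prove it respects the relation.
theorem succ_respects (a b : Nat) (h : parityRel a b) :
    parityRel (a + 1) (b + 1) := by
  simp only [parityRel] at *
  omega
```

In an extensible assistant, such congruence obligations could be discharged or hidden by the extension mechanism rather than proved one operation at a time.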
