Mathematics and Computation

A blog about mathematics for computers

First Steps in Synthetic Computability Theory (Fischbachau)

At the EST training workshop in Fischbachau, Germany, I gave two lectures on synthetic computability theory. This version of the talk contains material on recursive analysis which is not found in the MFPS XXI version of a similar talk.

Abstract:
Computability theory, which investigates computable functions and computable sets, lies at the foundation of logic and computer science. Its classical presentations usually involve a fair amount of Goedel encodings. Consequently, there have been a number of presentations of computability theory that aimed to present the subject in an abstract and conceptually pleasing way. We build on two such approaches, Hyland's effective topos and Richman's formulation in Bishop-style constructive mathematics, and develop basic computability theory, starting from a few simple axioms. Because we want a theory that resembles ordinary mathematics as much as possible, we never speak of Turing machines and Goedel encodings, but rather use familiar concepts from set theory and topology.

Download slides: est.pdf

Comments

logicnazi

I'm not entirely sure if I agree with your notion of elegance. Certainly Godel numbering and all that crap are a big pain in the ass to learn, but once they are learned one never uses them. However, it is frequently very helpful to think of what is going on as a real computation or some kind of coding. Thus at some point it seems necessary that everyone be exposed to Godel numberings and the like, if for no other reason than to prove this system does the same thing as we intuitively expect a computer to do.

So separating things into the two categories, I'm not sure I see the elegance advantage. Since I'm studying recursion theory I'm probably biased toward finding the current practice pleasant, but by ignoring details about coding and having a good grasp of which details we can wave away, we end up with a very elegant practice. In fact I think it is far more elegant than almost every other branch of mathematics precisely because one almost never has to deal with annoying details (well, less so).

Now I certainly agree all the cruft one goes through while learning recursion theory is ugly crap. However, if people are going to need to know this anyway so they can move to the next level of ignoring these details then there isn't much we can do to get rid of it. In other words it seems like an elegant approach in need of an audience. Though perhaps I misunderstand what your goal is.

This isn't to say I didn't think your paper made a very interesting and perhaps useful contribution. In particular I would be fascinated to know whether this approach can be extended to work with various extensions of recursion theory to other structures, such as search computability, prime computability, and several other notions. This, I think, is where this sort of elegant abstract approach is particularly appropriate.

One thing I was a bit confused about was what you meant by P 1. I assume the P is supposed to be the power set and the 1 is just the standard ordinal definition of 1. Yet wouldn't this just be {0,1}? I presume I am missing something in the intuitionistic framework, but I would appreciate a hint.

P.S. Don't listen too much to my elegance complaints. The people who learned the old system never like the new one.

Dear logicnazi (nice username): I have been told by recursion theorists that (1) it is not so cumbersome to work with Goedel encodings once one gets used to it, and (2) as you yourself claim, most of the time one doesn't have to use Goedel encodings anyway, once they have been mastered. But this raises the question: what do you use instead? I understand that there may be an informal way of doing "high-level" recursion theory, but I am interested in a precise axiomatization. I would appreciate a reference if you have one.

What I am attempting is a precise "synthetic" formulation of computability, one that looks like ordinary set theory and topology. It may lack an audience, and it may be next to impossible to learn it without first learning classical recursion theory. It is hard to tell without performing a nasty experiment on a young mathematician.

Intuitionistically the powerset Omega = P(1) of the singleton set is far from being "just {0,1}". Think of it this way. We have three different sets 2 subset Sigma subset Omega which all look the same to a classical mathematician. To get some feel for them, it is informative to look at functions with codomains 2, Sigma, and Omega. The functions in the space 2^N are the characteristic functions of recursive sets, those in Sigma^N of r.e. sets, and those in Omega^N of arbitrary subsets of N. In fact, between Sigma and Omega we can find further subsets Sigma_n and Pi_n such that functions N -> Sigma_n and N -> Pi_n are the characteristic functions of sets in the arithmetical hierarchy. So Omega is a strange "two element" set which contains a whole lot of structure. The whole thing is a bit confusing at first. Perhaps you can look at my post "How many is two?" for a start.
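(A compact restatement of the chain described above, in the notation of the comment; this adds no new claims, and the correspondences are stated up to characteristic functions:)

```latex
2 \;\subseteq\; \Sigma \;\subseteq\; \Omega,
\qquad\text{where}\qquad
\begin{aligned}
2^{\mathbb{N}} &\;\cong\; \text{decidable (recursive) subsets of } \mathbb{N},\\
\Sigma^{\mathbb{N}} &\;\cong\; \text{recursively enumerable subsets of } \mathbb{N},\\
\Omega^{\mathbb{N}} &\;\cong\; \text{all subsets of } \mathbb{N}.
\end{aligned}
```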

kurtvanetten

The idea of developing computability theory from a strictly logical or set theoretical point of view is fascinating and, if by 'elegant' you mean 'using fewer symbols', then certainly this is more elegant than the classical approach. But I'm bewildered by the terse notation; I feel like someone who's always programmed in Cobol trying to understand a C program. So I'm wondering, 1) what kind of background is needed to appreciate the lecture notes, and 2) can a similar theory be developed using classical logic, or would that just degenerate back into classical recursion theory?

By "elegant" I mean "conceptually elegant", although I do think there is also a slight (but ultimately insignificant) difference in notation. It is elegant to think of concepts from computability theory as just "ordinary" concepts from set theory and topology, i.e., explicit mention of computability is not necessary. You should really feel like someone who has always programmed in Cobol trying to understand Prolog. That would be a comparable shift in thinking.

The background needed to understand this, I suspect, is first of all familiarity with classical computability theory, as this is what we're trying to get at. Then you would also need some familiarity with constructive mathematics.

Your second question, whether a similar theory may be developed using classical logic, is somewhat misguided. If we want to have a synthetic approach to computability based on the Axiom of Enumerability, then it follows that logic is non-classical (see theorem in the notes which states that the Axiom of Enumerability implies that the Law of Excluded Middle is false). I cannot choose logic here, so the answer to your question is "no", we cannot do the same thing using classical logic. Many people seem to think that mathematics is done in the following order: (1) pick logic (classical or intuitionistic), then (2) do mathematics using the chosen logic. This is not always possible as your choice of mathematical topic may force you to choose a particular logic. For example, you might want to consider an axiom which is inconsistent with classical logic.

I certainly did not ask "How can I do computability theory in intuitionistic logic?" Instead I asked: "The effective topos is a mathematical universe (a model of a certain kind of set theory) which has computability built in. Therefore, it must be ready-made for an abstract treatment of computability theory. What sort of thing does a mathematician who lives in this universe believe in?" (Note: a mathematician who lives in a topos does not see how the topos is constructed. The building blocks of the topos, which to us on the outside look like complicated structures involving Turing machines, look like ordinary sets and functions to him.) As is well known to specialists in this area of research, the answer turns out to be that mathematicians inside the effective topos happen to believe in the Axiom of Enumerability and happen to disbelieve the Law of Excluded Middle, and that things they say about sets mean to us something about computability.

kurtvanetten

Thanks for your response. I'm having a hard time wrapping my mind around this. Since I understand classical computability theory (or at least I think I do; I'm starting to have doubts), my difficulty must come from not knowing constructive mathematics/intuitionistic logic. Could you recommend a tutorial or a text (at a beginner's level) that will get me started?

Deprogramming oneself from classical logic is not easy. In my experience it takes a couple of years. As for an introductory text in constructive logic, I am not sure what to recommend. For example, Troelstra and van Dalen's "Constructivism in Mathematics" is very exhaustive and explains the ideas well, but is written by logicians for logicians. Or you might try Beeson's book. Bishop and Bridges' "Constructive Analysis" is another option--it is after all the constructivist bible. But they simply do constructive mathematics, without explaining much how to do it. In fact, this may be the best approach: just jump into the constructivist swimming pool and hope there is water in it.

Perhaps the following exercise will help: prove that there exists a subset of N which is not recursively enumerable. Precisely where did you use classical logic in that proof? Probably it was something like "either n in W_n or not." Now try proving it without this step (you will fail). Welcome to constructive mathematics.

(Sorry, I know this is a very old comment to reply to, but it caught my eye)

Did you mean to say "prove there exists an element of 2^N which is not recursively enumerable" instead, or have some particular constructive system in mind more restrictive than intuitionistic logic? To be explicit, it seems to me the following serves as an intuitionistically acceptable proof that there exists a non-r.e. subset of N:

Let Halt(a, b) mean that the program coded by a halts on input b. Consider {x | ~Halt(x, x)}. If this were r.e. (i.e., semidecidable), then there would be some p such that Halt(p, x) is equivalent to ~Halt(x, x) for all x. But then, Halt(p, p) would be equivalent to ~Halt(p, p), a contradiction (even intuitionistically). Thus we conclude that {x | ~Halt(x, x)} is non-r.e.
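The diagonalization at the heart of this proof can be illustrated with a tiny runnable sketch (the names `W` and `diag` are made up for illustration and are not from the post): given any listing W_0, W_1, ... of sets, modelled here as Python predicates on natural numbers, the anti-diagonal set D = {x | x not in W_x} disagrees with every W_x at the point x itself, so D cannot appear in the listing.

```python
# A finite illustration of the diagonal argument.  Each W_x is a set of
# natural numbers, modelled as a predicate; the particular sets chosen
# here are arbitrary examples.

W = [
    lambda n: n % 2 == 0,   # W_0: the even numbers
    lambda n: n > 1,        # W_1: numbers greater than 1
    lambda n: False,        # W_2: the empty set
    lambda n: True,         # W_3: all numbers
]

def diag(x):
    """Membership test for the anti-diagonal set D = {x | x not in W_x}
    (defined here only for x < len(W))."""
    return not W[x](x)

# No W_x can equal D, because they disagree at the point x itself:
for x in range(len(W)):
    assert diag(x) != W[x](x)
```

In the halting-problem instance above, W_x is the x-th r.e. set and D is {x | ~Halt(x, x)}; the disagreement at x requires no case analysis on whether x is in W_x, which is exactly why the proof goes through intuitionistically.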

The only thing which we wouldn't be able to do intuitionistically, it seems to me, is go on to conclude that this set has a 2-valued characteristic function. I'm almost certain you just made a small typo, but just in case not, have I missed some flaw in the above ostensibly constructive proof?

You are absolutely correct. This is a good constructive proof that there exists a non-r.e. subset of N. I have no idea what I was thinking when I wrote my comment. As you noticed, we need to make the statement stronger: "Prove that there is a function N -> {0,1} (defined everywhere on N!) which is not computed by any Turing machine". In your proof, pinpoint the use of classical logic. (Note: it is possible to avoid classical logic in this case by using non-classical Brouwerian axioms. But you cannot prove the statement straight in constructive mathematics because constructive mathematics is consistent with Formalized Church's Thesis.)

How to comment on this blog: At present comments are disabled because the relevant script died. If you comment on this post on Mastodon and mention andrejbauer@mathstodon.xyz, I will gladly respond. You are also welcome to contact me directly.