Intuitionistic mathematics for physics
 13 August 2008
 Constructive math, Tutorial
At MSFP 2008 in Iceland I chatted with Dan Piponi about physics and intuitionistic mathematics, and he encouraged me to write down some of the ideas. I have little, if anything, original to say, so this seems like an excellent opportunity for a blog post. So let me explain why I think intuitionistic mathematics is good for physics.
Intuitionistic mathematics, whose main proponent was L.E.J. Brouwer, is largely misunderstood by mathematicians. Consequently, physicists have strange ideas about it, too. For example, David Deutsch somehow managed to write in his otherwise excellent popular science book “The Fabric of Reality” that intuitionists deny the existence of infinitely many natural numbers (those would be the ultrafinitists, if there are any). He also produced rather silly arguments against intuitionistic mathematics, which I explained to myself by believing that he never had a chance to learn that intuitionistic mathematics supports his point of view.
While Brouwer's and other pre-intuitionists' reasons for intuitionistic mathematics were philosophical in nature, there is today a vibrant community of mathematicians, logicians, computer scientists, and even the odd physicist, who work with intuitionistic mathematics not because of their philosophical conviction but because it is simply the right kind of math for what they are doing.
Intuitionistic understanding of truth
A common obstacle in understanding intuitionistic logic is the opinion that the difference between classical and intuitionistic logic arises because classicists and intuitionists just happen to disagree about what is true. A typical example of this is the principle known as Proof by Contradiction:
For every proposition $\phi$, if $\phi$ is not false then $\phi$ is true.
With a formula we write this as
$\forall \phi \in \mathsf{Prop}, \lnot \lnot \phi \Rightarrow \phi$.
Classical mathematicians accept it as true. Intuitionists do not accept it, but neither do they claim it is false. In fact, they claim that the principle has no counterexamples, that is
$\lnot \exists \phi \in \mathsf{Prop},
\lnot (\lnot \lnot \phi \Rightarrow \phi)$.
This becomes very confusing for classical mathematicians who think that the two displayed formulae are equivalent, because they believe in Proof by Contradiction. It is like believing that the Earth is flat while trying to make sense of Kepler's Laws of planetary motion.
The difference between intuitionistic and classical logic is in the criteria for truth, i.e., what evidence must be provided before a statement is accepted as true. Speaking vaguely, intuitionistic logic demands positive evidence, while classical logic is happy with lack of negative evidence. The intuitionist view is closer to the criterion of truth in science, where we normally confirm a statement with an experiment (positive evidence), but this analogy should not be taken too far.
What counts as “evidence” is open to interpretation. Before I describe the three most common ones below, let me just explain the difference between $\phi$ (“$\phi$ is true”) and $\lnot \lnot \phi$ (“$\phi$ is not false”). Intuitionistically:
 $\phi$ holds if there is positive evidence supporting it,
 $\lnot \phi$ holds if it is contradictory to assume $\phi$, that is to say, evidence of $\phi$ would entail a contradiction.
 $\lnot \lnot \phi$ holds if it is contradictory to assume that it is contradictory to assume $\phi$.
That is a bit complicated. In essence, it says that $\lnot \lnot \phi$ is accepted when there is no evidence against it. In other words, $\lnot \lnot \phi$ means something like “$\phi$ cannot be falsified” or “$\phi$ is potentially true”. For example, if someone says
“There is a particle which does not interact with anything in the universe.”
that would be a statement which is not accepted as true, for how would you ever present positive evidence? But it is accepted as potentially true, for how would you ever falsify it?
A statement which is logically equivalent to one of the form $\lnot \lnot \phi$ is called doubly negated. For the purposes of this post I shall call a statement $\phi$ potentially true if its double negation $\lnot \lnot \phi$ is true. It seems nontrivial to come up with useful statements in physics which are only potentially true (but see the discussion about infinitesimals below). Perhaps Karl Popper would have something to say about that.
Let me now describe the three most common interpretations of “evidence” in intuitionistic logic.
Computational interpretation
This is the interpretation of intuitionistic logic commonly presented in computer science. We view all sets as represented by suitable data structures—a reasonable point of view for a computer scientist. Then a statement is taken to be true if there exists a program (computational evidence) witnessing its truth. To demonstrate the idea, consider the statement
$\forall x \in A, \exists y \in B, \phi(x, y)$.
This is taken to be true if there exists a program which accepts $x$ and outputs $y$ together with computational evidence that $\phi(x,y)$ holds. Another example: the statement
$\forall x \in A, \phi(x) \lor \psi(x)$
is true if there exists a program which takes $x$ as input and outputs either $0$ and evidence of $\phi(x)$, or $1$ and evidence of $\psi(x)$. In other words, the program is a decision procedure which tells us which of the two disjuncts holds, and why. Under this interpretation the Law of Excluded Middle fails because there are unsolvable decision problems, such as the Halting problem.
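To make this concrete, here is a small sketch of computational evidence for the statement “every natural number is even or odd”, i.e. $\forall n \in \mathbb{N}, (\exists k, n = 2k) \lor (\exists k, n = 2k+1)$. This is my own illustrative example (the function name is made up): the witness is a program returning a tag that says which disjunct holds, together with the evidence $k$.

```python
def even_or_odd(n):
    """Computational evidence for: forall n, (exists k, n = 2k) or (exists k, n = 2k+1).

    Returns a pair (tag, k): tag 0 means n = 2k, tag 1 means n = 2k + 1.
    The tag tells us *which* disjunct holds, and k is the evidence *why*.
    """
    if n % 2 == 0:
        return (0, n // 2)
    else:
        return (1, n // 2)

tag, k = even_or_odd(7)
assert tag == 1 and 7 == 2 * k + 1
```

No analogous program exists for deciding, say, whether an arbitrary Turing machine halts, which is exactly why the Law of Excluded Middle is unavailable under this interpretation.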
The computationally minded readers might entertain themselves by figuring out a computational explanation of potentially true statements (Hint: first interpret Peirce's Law in terms of continuations). I have not done it myself.
Topological interpretation
We may replace the phrases “data structure” and “program” in the computational interpretation by “topological space” and “continuous function”, respectively. Thus a statement is true if it is witnessed by a continuous function which transforms input (hypotheses) to output (conclusions).
The basis for this explanation may be found in physics if we think about what it means for a function to be continuous in terms of communication or information processing. Suppose an observer wants to communicate a real-valued quantity $x$ to another observer. They can do it in many ways: by making sounds, by sending electromagnetic signals, by sending particles from one place to another, by manufacturing and sending a stick of length $x$ by mail, etc. However, as long as they use up only a finite amount of resources (time, space, energy) they will be able to communicate only a finite amount of information about $x$. Similarly, in any physical process (computer, brain, abacus) which transforms an input value $x$ to an output value $f(x)$ the rate of information flow is finite. Consequently, in finite time the process will obtain only a finite amount of information about $x$, on the basis of which it will output a finite amount of information about $f(x)$. This is just the definition of continuity of $f$ phrased in terms of information flow rather than $\epsilon$ and $\delta$. Notice that we are not assuming that $f$ is computable because we do not want to make the rather sweeping assumption that all physical processes are computable.
The conclusion is that “all functions are continuous”, including those that witness truth of statements.
You might be thinking that an analog-to-digital converter is a counterexample to the above argument. It is a device which takes as input an electric signal and outputs either 0 or 1, depending on whether the voltage of the signal is below or above a given threshold. Indeed, this would be a discontinuous function, if only such converters worked exactly. But they do not: they always have a tolerance level, and the manufacturer makes no guarantees about correct behavior very close to the threshold value.
A useful exercise is to think about the difference between “all functions are continuous”, “potentially all functions are continuous”, and “all functions are potentially continuous”. Which one does the above argument about finite rate of information processing support?
Local truth
This explanation of intuitionistic logic is a bit more subtle, but also much more powerful and versatile. It is known by categorical logicians as the Kripke-Joyal or sheaf semantics, while most logicians are familiar at least with the older Kripke semantics.
Imagine a planet and a meteorologist at each point of the surface, measuring the local temperature $T$. We assume that $T$ varies continuously with position. A statement such as $T > 273$ is true at some points of the planet and false at others. We say that it is locally true at $x$ if there exists a small neighborhood around $x$ where it is true. In other words, a statement is locally, or stably true at a given point if it remains true when we perturb the point a little.
On this planet a statement is globally true if it is locally true everywhere, and it is globally false if its negation is locally true everywhere. There are also many intermediate levels of truth. The truth value (a measure of truth) of a statement is the set of those points at which the statement is locally true. Such a set is always open.
The explanation so far is a bit wrong. For a statement to be locally true at $x$, not only must it be true in a neighborhood of $x$, but it must also be true everywhere in the neighborhood “for the same reason”. For example, the statement
$T > 273$ or $T \leq 273$
is true at $x$ if there exists a neighborhood $U$ of $x$ such that $T > 273$ everywhere on $U$, or $T \leq 273$ everywhere on $U$. The reason, namely which of the two possibilities holds, must be the same everywhere on $U$.
The truth value of $T = 273$ is the interior of the set of those points at which $T$ equals 273, while the truth value of $T \neq 273$ is the exterior of the set of those points at which $T$ equals 273. Thus the truth value of the disjunction
$T = 273$ or $T \neq 273$
need not be the entire planet—it will miss isolated points at which $T$ is 273. The Law of Excluded Middle is not valid.
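As a toy instance of this (my own illustration, not part of the original example), take the planet to be the real line and let the temperature be $T(x) = 273 + x^2$, which equals 273 only at the isolated point $x = 0$. The truth value of $T = 273$ is the interior of $\lbrace 0 \rbrace$, which is empty; the truth value of $T \neq 273$ is the exterior of $\lbrace 0 \rbrace$, which is $\mathbb{R} \setminus \lbrace 0 \rbrace$. Hence the truth value of the disjunction “$T = 273$ or $T \neq 273$” is $\emptyset \cup (\mathbb{R} \setminus \lbrace 0 \rbrace) = \mathbb{R} \setminus \lbrace 0 \rbrace$, which misses the point $0$ and is therefore not the whole line.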
By changing the underlying space and topology, we can express various notions of truth. We can, for example, incorporate passage of time, or a universe branching into possible worlds. In the most general case the underlying space need not even be a space, but a category with a so-called Grothendieck topology which determines what “locally” means.
Apart from being a wonderful mathematical tool, it should be possible to use sheaf semantics to clarify concepts in physics. I would expect the notions of “truth stable under small perturbation” and “truth local to an observer” to appeal to physicists. Fancy kinds of sheaf semantics have been proposed to explain features of quantum mechanics, see for example this paper by Bas Spitters and his coworkers.
Smooth infinitesimal analysis
Philosophical explanations and entertaining stories about intuitionistic mathematics are one thing, but getting actual benefits out of it are another. For physicists this means that they will want to calculate things with it. The good news is that they are already doing it, they just don't know it!
There is something odd about how physicists are taught mathematics—at least in my department. Physics majors learn the differential and integral calculus in the style of Cauchy and Weierstrass, with $\epsilon$–$\delta$ definitions of continuity and differentiability. They are told by math professors that it is a sin to differentiate a nondifferentiable function. They might even be told that the original differential and integral calculus, as invented by Leibniz and Newton, was flawed because it used the unclear concept of infinitesimals, which were supposed to be infinitely small yet positive quantities.
Then these same students go to a physics class in which a physics professor never performs $\epsilon$–$\delta$ calculations, freely differentiates everything in sight, and tops it off by using the outlawed infinitesimals to calculate lots of cool things. What are the students supposed to think? Clearly, the “correct” mathematics is useless to them. It's a waste of time. Why aren't they taught mathematics that gives a foundation to what the physics professors are actually doing? Is there such math?
Yes there is. It's the mathematics of infinitesimal calculus, brought forward to the 20th century by Anders Kock and Bill Lawvere under the name Synthetic Differential Geometry (SDG), or Smooth Infinitesimal Analysis. (I am too young to know exactly who invented what, but I've heard people say that Eduardo Dubuc also played a part. I would be happy to correct bibliographical omissions on my part.) By the way, I am not talking about Robinson's nonstandard analysis, which uses classical logic.
This is not the place to properly introduce synthetic differential geometry. I will limit myself to a few basic ideas and results. For a first reading I highly recommend John Bell's booklet A Primer of Infinitesimal Analysis. If you refuse to read physical books, you may try his shorter An Invitation to Smooth Infinitesimal Analysis online. For further reading Anders Kock's Synthetic Differential Geometry is an obvious choice (available online!), and there is also Moerdijk and Reyes's Models of Smooth Infinitesimal Analysis, which shows in detail how to construct models of SDG using sheaves of germs of smooth functions.
To get a feeling for what is going on, and why intuitionistic logic is needed, let us review the usual proof that infinitesimals do not exist. This requires a bit of logical nitpicking, so bear with me. Both intuitionistic and classical mathematics agree that there is no real number $x$ which is neither negative, nor zero, nor positive:
$\lnot \exists x \in \mathbb{R}, \lnot (x < 0) \land \lnot (x = 0) \land \lnot (x > 0)$.
(There is some disagreement as to whether every number is either negative, zero, or positive, but that is beside the point right now.) A nilpotent infinitesimal of second degree, or just infinitesimal for short, is a real number $dx$ whose square is zero. Any such $dx$ is neither negative nor positive, because both $dx > 0$ and $dx < 0$ imply $dx^2 > 0$, which contradicts $dx^2 = 0$. If $dx$ were also nonzero, we would have a number which is neither negative, zero, nor positive. Thus we proved that an infinitesimal cannot be nonzero:
$dx^2 = 0 \Rightarrow \lnot \lnot (dx = 0)$.
A classical mathematician will now conclude that $dx = 0$ by applying Proof by Contradiction. Intuitionistically we have only shown that infinitesimals are potentially equal to zero.
But are there any infinitesimals which are actually different from zero? It can be shown from the main axiom of SDG (see below) that nonzero infinitesimals potentially exist. It is a confusing world: on one hand all infinitesimals are potentially zero, but on the other nonzero ones potentially exist. Like all good things in life, intuitionistic mathematics is an acquired taste (and addictive).
Can a physicist make sense of all this? We may think of infinitesimals as quantities so small that they cannot be experimentally distinguished from zero (they are potentially zero), but neither can they be shown to all equal zero (potentially there are some nonzero ones). By the way, we are not talking about lengths below the Planck length, as there are clearly real numbers smaller than $1.6 \times 10^{-35}$ whose square is positive.
The actual axiom which gets the infinitesimal calculus going does not explicitly state anything about nonzero infinitesimals. Instead, it expresses the principle of microaffinity (sometimes called microlinearity) that physicists use in their calculations.
Principle of microaffinity: An infinitesimal change in the independent variable $x$ causes an affine (linear) change in the dependent variable $y = f(x)$.
More precisely, if $f : R \to R$ is any function, $x \in R$ and $dx$ is an infinitesimal, then there exists a unique number $f'(x)$, called the derivative of $f$ at $x$, such that $f(x + dx) = f(x) + f'(x) dx$. This principle has many consequences, such as potential existence of nonzero infinitesimals described above. For actual calculations the most important consequence is
Law of cancellation: If $a$ and $b$ are real numbers such that $a \cdot dx = b \cdot dx$ for all infinitesimals $dx$ then $a = b$.
What this says is that we may cancel infinitesimals when they are arbitrary. This is important because infinitesimals do not have inverses (they are potentially zero). Nevertheless, we may cancel them in an equation, as long as they are arbitrary.
Let me show how this works in practice by calculating the derivative of $f(x) = x^2$. For arbitrary infinitesimal $dx$ we have
$f'(x) \cdot dx = f(x + dx) - f(x) = (x + dx)^2 - x^2 = x^2 + 2 x \cdot dx + dx^2 - x^2 = 2 x \cdot dx$
where we used the fact that $dx^2 = 0$. Because $dx$ is arbitrary, we may cancel it on both sides and get $f'(x) = 2 x$. I emphasize that this is a mathematically precise and logically correct calculation. It is in fact very close to the usual treatment which goes like this:
$f'(x) = (f(x+dx) - f(x))/dx = (x^2 + 2 x \cdot dx + dx^2 - x^2)/dx = 2 x + dx = 2 x$
There are two incorrect steps here: we divided by an infinitesimal $dx$ without knowing that it is different from zero (it isn't!), and we pretended that $2 x + dx$ is equal to $2 x$ because “$dx$ is very small”. By the same reasoning we should have concluded that $f(x+dx) - f(x) = f(x) - f(x) = 0$, but we did not. Why?
The principle of microaffinity allows us to easily derive the usual rules for computing derivatives, the potential existence of nonzero infinitesimals, prove the fundamental theorem of calculus in two lines, derive the wave equation like physicists do it, etc. And it is all correct, exact math. No approximations, no guilty feeling about throwing away “negligible terms” here but not there, and other hocuspocus that physicists have to resort to because nobody told them about this stuff.
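To illustrate the style, here is roughly how such a two-line proof of the fundamental theorem of calculus goes (a sketch in the spirit of Bell's Primer, not a verbatim quotation). Let $A(x)$ denote the area under the graph of $f$ between $0$ and $x$. For an arbitrary infinitesimal $dx$, the added strip is a rectangle of height $f(x)$ plus a sliver whose area is a multiple of $dx^2 = 0$:

$A'(x) \cdot dx = A(x + dx) - A(x) = f(x) \cdot dx + \tfrac{1}{2} f'(x) \cdot dx^2 = f(x) \cdot dx$

Cancelling the arbitrary infinitesimal $dx$ on both sides gives $A'(x) = f(x)$.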
Just for fun, let me compute more derivatives. The general strategy in computing $f'(x)$ is to consider an arbitrary infinitesimal $dx$ and express $f'(x) \cdot dx = f(x + dx) - f(x)$ as a quantity multiplied by $dx$. Then we cancel $dx$ on both sides and get $f'(x)$. Throughout we use the fact that $dx^2 = 0$. Here we go:

The derivative of $x^n$ is $n \cdot x^{n-1}$:
> $(x+dx)^n - x^n = x^n + n x^{n-1} \cdot dx - x^n = n x^{n-1} \cdot dx$ 
Leibniz's formula for derivatives of products $(f(x)\cdot g(x))' = f'(x) \cdot g(x) + f(x) \cdot g'(x)$:
> $f(x+dx) \cdot g(x+dx) - f(x) \cdot g(x) =$
> $(f(x) + f'(x) \cdot dx) (g(x) + g'(x) \cdot dx) - f(x) \cdot g(x) =$
> $(f'(x) g(x) + f(x) \cdot g'(x)) \cdot dx$. 
Chain rule $f(g(x))' = f'(g(x)) \cdot g'(x)$:
> $f(g(x+dx)) - f(g(x)) =$
> $f(g(x) + g'(x) \cdot dx) - f(g(x)) =$
> $f(g(x)) + f'(g(x)) \cdot g'(x) \cdot dx - f(g(x)) =$
> $f'(g(x)) \cdot g'(x) \cdot dx$

where we used the fact that $g'(x) \cdot dx$ is infinitesimal because its square is zero.
There you have it, in a paragraph we derived precisely and in sufficient detail what usually takes a whole lecture of $\epsilon$–$\delta$ manipulations.
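Incidentally, nilpotent infinitesimals of second degree have a well-known computational cousin: the dual numbers $a + b\,\varepsilon$ with $\varepsilon^2 = 0$ used in forward-mode automatic differentiation. The following sketch is my own illustration (the class and function names are made up): evaluating a function at $x + \varepsilon$ carries the derivative along in the $\varepsilon$-coefficient, just like the calculations with $dx$ above.

```python
class Dual:
    """Dual number a + b*eps, where eps^2 = 0 (a nilpotent 'infinitesimal')."""

    def __init__(self, a, b=0.0):
        self.a = a  # ordinary part
        self.b = b  # coefficient of eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, because eps^2 = 0
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    """Read off f'(x) as the eps-coefficient of f(x + eps)."""
    return f(Dual(x, 1.0)).b

# (x^2)' = 2x, so the derivative at 3 is 6
assert derivative(lambda x: x * x, 3.0) == 6.0
```

This is exactly the mechanical content of $f(x + dx) = f(x) + f'(x) \cdot dx$: the multiplication table simply forgets all terms of order $dx^2$.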
If we stick to classical logic, the Principle of microaffinity is false. To see this, consider a function with a jump, such as
$j(x) = 0$ if $x < 0$
$j(x) = 1$ if $x \geq 0$

At $x = 0$ the principle of microaffinity fails. This is a counterexample only in classical mathematics because intuitionistically we cannot prove that there is a function with a jump. Concretely, the above definition of $j(x)$ is not intuitionistically valid because it presupposes $\forall x \in R, x < 0 \lor x \geq 0$.
Spacetime anomalies
But wait! Intuitionistically we can construct nondifferentiable continuous functions, such as the absolute value $f(x) = |x|$, for which the principle of microaffinity fails, too. Well, I am not telling the whole story. The smooth real line $R$ of infinitesimal analysis is not the usual real line $\mathbb{R}$, as constructed by Richard Dedekind. It does not support computation of absolute values.
This seems pretty bad. If we cannot have a function such as the absolute value, then it is not clear how to model phenomena that involve a sudden change of direction, such as reflection of light and collision of particles. Can rays not bounce off mirrors, intuitionistically? Yes they can, it is just that the intuitionistic treatment of sudden changes is more profound than the classical one.
Consider a particle which moves freely up to time $t_0$, then bounces off a wall, and moves freely after that. Its position $p$ is described as a function of time $t$ in two parts,
$p(t) = p_1(t)$ if $t \leq t_0$
$p(t) = p_2(t)$ if $t \geq t_0$

where $p_1$ and $p_2$ are smooth functions and $p_1(t_0) = p_2(t_0)$. Because $p$ is defined separately for $t \leq t_0$ and for $t \geq t_0$, its domain of definition is the union of two half-lines

$D = \lbrace t \in R \mid t \leq t_0 \rbrace \cup \lbrace t \in R \mid t \geq t_0 \rbrace$.

Classical mathematics proves that $D = R$, which amounts to forgetting that $t_0$ is a special moment. In the smooth world, $D$ is only a subset of $R$; it is not equal to $R$ because it carries more information than $R$. As strange as this may seem, it is useful because it encodes moments in time or places in space where special things happen, such as a sudden change of movement or a sudden change of density. Smooth spacetime, say $R^4$, allows only smooth motion and smooth distribution of mass. If we place nonsmooth mass in it, the space will change to a subset of $R^4$ which carries additional information about the anomalies contained in it.
This post has become very long so I will stop here.
If you have comments, you can reach me at andrejbauer@mathstodon.xyz and I will gladly respond. You are also welcome to contact me directly.
Comments
First of all, great post!
I had encountered intuitionistic maths before and thought "more mathematical logicians playing games with definitions". As I did more computer science I could see why one would like to use constructive maths but the mathematician in me thought proof by contradiction was too large a sacrifice to make.
Recently, I read about a nonconstructive proof that there are irrational a and b such that a^b is rational. This pushed me further away from the nonconstructive viewpoint.
I think the treatment of infinitesimals you describe has sent me over the line to the intuitionist camp. Furthermore, it provides a logically sound basis for automatic differentiation using dual numbers, exactly the nilpotent infinitesimals of second degree you talk about.
That said, I'm still uncomfortable with the interpretation of the absolute value function. Could you please expand on this some more? Does it have any relationship with your earlier post in which you talk about all computable functions being continuous?
Answer to mark's question about absolute value: suppose we had an operation which assigned to every number x a number |x| such that x < 0 => |x| = -x and x > 0 => |x| = x, where a <= b is defined as not (b < a). Above we proved that for an infinitesimal dx we have 0 <= dx <= 0, therefore both |dx| = dx and |dx| = -dx, from which it follows that dx = 0. So, if the absolute value exists then all infinitesimals are zero. But this contradicts the principle of microaffinity, because if all infinitesimals are zero, then for all infinitesimal dx we have 0 * dx = 0 = 1 * dx, and now by the law of cancellation we get 0 = 1.

But this is nothing to despair about, because the absolute value exists as a function {x in R | x <= 0 or 0 <= x} -> R, defined by |x| = -x if x <= 0 and |x| = x if 0 <= x. It's just that you need to know the sign of x in order to compute its absolute value.

Having read the invitation by Bell that you link to I can see why things appear strange to me. In particular, the notion of the "indecomposability" of the intuitionist's reals is what appears to be behind the lack of discontinuities. Furthermore, you cannot define the absolute value function in the usual way because that presupposes being able to split the reals at 0.
My ideas of continuity are tied up with the point set topology I studied many years ago. I think I will have to rethink several basic definitions, such as open and closed sets, in light of intuitionist logic to better understand what's going on here.
Is it fair to say "nonzero infinitesimals potentially exist" (which I would parse as ¬¬∃dx∈R [¬(dx = 0) ∧ dx^2 = 0])? I would actually take this to be provably false (as an intuitionistic field, nonzeros should be the same as invertibles, and thus be closed under squaring). It seems the more accurate statement would be "not all infinitesimals are zero" (which I would parse as ¬∀dx∈R [dx^2 = 0 => dx = 0]). But perhaps I should just use a different translation from ordinary language to formal intuitionistic logic...
Answer to Sridhar: it is not true that in an intuitionistic field nonzeros coincide with invertibles. In intuitionistic algebra we use an apartness relation, which is a constructivized version of nonequality, rather than nonequality itself. Thus the axiom about invertible elements in the field is "if x is apart from zero, then it is invertible" (and not "if x is not zero, then it is invertible").

Concretely, in an ordered field apartness is defined as: x and y are apart iff x < y or x > y. So you can only invert an element if you know that it is negative or positive. It does not generally follow that every nonzero element is negative or positive (I actually mention this issue in the post above). I hope that answers your question.

In the case of smooth reals we can show that for every infinitesimal x we have "x is not apart from zero". So you cannot have an invertible infinitesimal (although there are variants of the axioms of smooth analysis which give you that).

Hm, I see. But it's still not clear to me how to derive the potential existence of nonzero infinitesimals from the principle of microaffinity. Could you expand on that?
For example, taking the formalization of this statement as I stated before, not not exists dx in R, (not (dx = 0) and dx^2 = 0), this should be intuitionistically equivalent to not forall dx in R, not (not (dx = 0) and dx^2 = 0), which in turn should be equivalent to not forall dx in R, (dx^2 = 0 => not not (dx = 0)), but this is the negation of what you prove above. Where am I going wrong?

Also, my impression that the relevant notion of "intuitionistic field" was one in which "¬(x = 0) iff x is invertible" came from, for example, the last field axiom in section R_1 of p. 102 of Bell's "A Primer of Infinitesimal Analysis". Of course, I understand now that there are also other reasonable conceptions of intuitionistic fields, but this would seem to be the one at use in at least Bell's formulation of smooth infinitesimal analysis.
Hm, somehow, my math symbols have turned garbled in some (but not all) of my above comments, even though they were fine before. Odd...
I feel like i have poked a hole in it. Something that is potentially true might actually be true: Let A be a true statement, now suppose A is false; this conflicts with A being true, so A is not false. So infinitesimals are potentially in danger.
x0 => f(x)>0 or f(x)>0 f(0)=0
Say we have a z, a 'finfinitessimal': f(z)=0. Then z>0 or z or f(z)f(g(x)) 'hinfinitessimal'. But if f(g(z))=0, if g(z)=0, then f(g(z))= f(0)=0, so it is in there, so the 'ginfinitessimals' are a subset of the 'hinfinitesimals'. Now, make h the identity, for which e(z)=0 implies z=0, you see that for all the infinitesimals based on functions like this are zero; there are no infinitesimals.
So... What am i doing wrong?
Sorry for having to post here again, i copied my reply, but a part seems to have gone off. Also, i made a mistake. http://www.reddit.com/r/math/comments/6w5ws/mathematics_and_computation_intuitionistic/c0bnuud
Dear Jasper, I am glad you are interested in this topic. My post is not intended to be a complete description of intuitionistic logic and infinitesimal analysis. It is quite impossible to learn these topics just from my post, so I recommend that you look at the literature cited in the post, especially the "Primer" by John Bell.
Regarding your comments, let me just say that my post contains at least one error. Namely, the interpretation of x != y should be apartness rather than inequality. By this I mean that x != y is defined as x < y or y < x, rather than the classical not (x = y). I thank Sridhar Ramesh for noticing the problem.

Let me also comment briefly on what you wrote. Firstly, in comment 11, I am not sure what the purpose of the beginning of the post is: you assume A is true and false at the same time, so no wonder you can get anything you like out of that.

Yes, the principle of microaffinity states that the derivative is unique, I did not write that clearly enough. The Law of Cancellation is proved from the Axiom of Microaffinity, see John Bell's primer. When I speak of "arbitrary" infinitesimals in the Law of Cancellation, what I mean is that in order to cancel dx in an equation a * dx = b * dx you first have to prove forall dx, dx^2 = 0 => a * dx = b * dx, which is not the usual law of cancellation forall a, b, c, c != 0 and a * c = b * c => a = b.

There are metatheorems saying that the usual derivatives are the same as the ones developed in smooth analysis, again see John Bell's Primer. Further, if dx^2 = 0 then not (dx != 0), i.e., you cannot have an infinitesimal which is apart from 0. So when you assumed that you had one like that, of course you can derive anything, among other things your dx < 0 or dx > 0. If dx is infinitesimal then -1/n < dx < 1/n for all natural numbers n, i.e., this is what we mean when we say that infinitesimals are "infinitely small".

I am not sure what you mean by those f-infinitesimals. I think you should think more carefully about them and make sure you do not use classical logic on the way.

Hello!
I'm interested in understanding what you've written about.
I've been convinced of intuitionist logic, philosophically, since taking Logic 101 as an undergrad philosophy major. I don't know how many times I heard the professor say, "Matthew, we're NOT studying Brouwer!" I had a great deal of difficulty in that class, in no small part because the law of the excluded middle doesn't make sense to me, and we were expected to reason with it. (I scored a 1530 on the SAT; intelligence wasn't the problem.)
For the purpose of this question, let's just say my current level of mathematics proficiency is basic, collegelevel algebra. (I could use a review.)
Can you recommend a series of texts I could study, or courses I could take, that would equip me to even understand the "Primer" by John Bell?
I've asked this same question over the years, but the answer I've gotten repeatedly is that I first need to study logics and maths that acknowledge and use the law of the excluded middle... and then I can get into intuitionist logic and math. I'm not interested in doing this. Surely, I keep thinking: there has to be a way for me to learn more logic and math that doesn't require me to turn a blind eye to reality!
Thank you for your suggestions, and thank you for this fine post!
Dear Matthew, the math we are talking about here is at about the advanced undergraduate level. It would help if you knew the basics of calculus. For a super-quick and dirty course in calculus I would probably recommend one of Schaum's outline series books, perhaps "Calculus Demystified", but before you cough up 20 dollars for that book have a look at the Wikibooks Calculus.
Since you are coming from the philosophical angle I definitely suggest that you have a look at Errett Bishop's manifesto, which is the introduction to his Foundations of Constructive Analysis.
It is a complete lie that you have to study classical mathematics in order to be able to later get to intuitionistic mathematics. Quite the contrary, once your brain has been trained how to use the law of excluded middle it is a real effort to deprogram yourself and be able to reason constructively. Your teachers were just trying to shut you up.
Dear Andrej,
Thank you for your prompt reply and encouragement!
Though the Amazon.com reviews of Calculus Demystified weren't very positive, by and large, the Schaum's outline series looks good. The Life of Fred series, which was recommended to me in the past, also looks like it might be a winner.
It's good to hear that it's likely possible to get to intuitionist mathematics without first studying the classical kind, and also that my concern about the difficulty of unlearning the classical material wasn't unfounded.
Again, thanks!
This goes on my list of things which I would gain immeasurable benefit from being told 10 years ago.
This is very intriguing. Thank you for this post!
What I don't get is how you compute the derivative of $x^n$. It seems to me that you make the assumption that the derivative is $n x^{n-1}$, put that in, and arrive at your assumption. I guess I'm missing something here. I tried to compute the derivative of $f(x) = a^x$ (for some constant $a$), starting with $f'(x)\, dx = f(x + dx) - f(x) = a^{x + dx} - a^x$. Now, I know that I can express $a^{x + dx}$ by $a^x + f'(x)\, dx$, but that only leads me back to $f'(x) = f'(x)$. What would be the next step to complete the derivation?
[...] Intuitionistic mathematics for physics « Mathematics and Computation (tags: math physics logic philosophy) [...]
Janno: For computing the derivative of $x^n$, Andrej uses the binomial theorem to expand $(x + dx)^n$.
Here's how to compute the derivative of $e^x$: $e^{x + dx} - e^x = e^x e^{dx} - e^x = e^x (1 + dx + (dx)^2/2 + \cdots - 1) = e^x (1 + dx - 1) = e^x\, dx$. The infinite series is actually finite, as all terms of order 2 and greater are identically zero.
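Spelled out, the binomial-theorem computation for $x^n$ that Andrej mentions goes like this (a sketch, using only the axiom that infinitesimals square to zero):

$$(x + dx)^n = x^n + n x^{n-1}\, dx + \binom{n}{2} x^{n-2} (dx)^2 + \cdots = x^n + n x^{n-1}\, dx,$$

since every term containing $(dx)^2$ or a higher power vanishes. Comparing with $f(x + dx) = f(x) + f'(x)\, dx$ gives $f'(x) = n x^{n-1}$.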
Anonymous: You're right about the derivative of $x^n$ and wrong about the derivative of $e^x$. Before you write down an infinite sum, like you did, you have to argue that it makes sense, which it doesn't in synthetic differential geometry. There are no axioms that ensure completeness of the smooth real line with respect to Cauchy sequences. Nor can there be such axioms, because they would allow us to define a non-smooth function as an infinite sum.
Actually, the exponential function is introduced as the solution of the differential equation $y' = y$ with the initial condition $y(0) = 1$, so the derivative of $e^x$ is postulated to be $e^x$.
Well, there must be at least one because his/her clever way of responding to sceptics was noted here: http://math.stackexchange.com/a/532/1457
For years I have looked to find a community of mathematicians who are not comfortable with analysis as it is traditionally taught. When I was young in college I started to explore intuitionism, but my life took a different path, yet I always retained an interest in mathematics as a 'talented amateur'!
I often wondered if it wasn't possible to construct a mathematics in which dx was a new number outside of the systems constructed for the real numbers, such that dx^2 vanished but dx had no multiplicative inverse.
Much appreciate your links.
A couple months ago, I did a course on smooth infinitesimal analysis with high school students, based on this post. It is of course tricky to do logic stuff with school students, since they typically have no background in writing and understanding (informal, but reasonably precise) proofs. Still, they liked it, were able to work with infinitesimals, and definitely got the main points.
Here are the notes (in German only, I'm sorry).
[…] highly recommend Andrej Bauer’s blog for such matters. The post http://math.andrej.com/2008/08/13/intuitionisticmathematics… is a good starting […]
Since we're working with constructive maths, it's worth pointing out that you can construct $dx$ such that $dx \cdot dx = 0$ quite easily. Consider the $2 \times 2$ matrix $$\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}$$ Its square is zero. Now an ordinary number $a$ is $$\begin{matrix} a & 0 \\ 0 & a \end{matrix}$$ and $a + dx$ is $$\begin{matrix} a & 1 \\ 0 & a \end{matrix}$$ Now simply carry out $(a + dx)^2$ using ordinary linear algebra. You will get $$\begin{matrix} a^2 & 2a \\ 0 & a^2 \end{matrix}$$ which shows that $2a\, dx$ is correct. This'll work for $x^n$ as well as everything else. You just need to expand the definition of 'number' to $2 \times 2$ matrices. Also look at nilpotent Lie groups for the multivariate case.
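This $2 \times 2$ matrix computation is easy to check numerically. Here is a minimal sketch in Python with NumPy (the names `dx` and `num` are mine, not from the comment):

```python
import numpy as np

# dx as a nilpotent 2x2 matrix: its square is the zero matrix
dx = np.array([[0.0, 1.0],
               [0.0, 0.0]])

def num(a):
    """Embed an ordinary number a as the scalar matrix a*I."""
    return a * np.eye(2)

a = 3.0
m = num(a) + dx          # represents a + dx

square = m @ m           # (a + dx)^2
# The diagonal holds a^2 and the upper-right entry holds 2a,
# i.e. (a + dx)^2 = a^2 + 2a*dx, matching d(x^2)/dx = 2x.
print(square)
```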
Your suggestion is a well known one. If I am not mistaken you're just suggesting that we work in the ring $\mathbb{R}[x,y]/(y^2)$, but you're giving us the ring in terms of matrix representations. This gives us the infinitesimals, but not everything. For example, in synthetic differential geometry the smooth reals $R$ form an (intuitionistic) ordered field, but you've only got a ring. There are also more global obstacles to be overcome. For instance, the total tangent space of any object $A$ is simply the exponential $A^\Delta$. To get these sorts of features you would have to add a lot more structure to your setup, and I think by the time you were done you would have reinvented models of synthetic differential geometry.
I agree, it's only an entry point. It's useful for students, I would say, to have a concrete example of existence. Also, automatic differentiation schemes are based on little more than this, and have become very important in neural networks.
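To make the connection to automatic differentiation concrete, here is a minimal forward-mode sketch with dual numbers $a + b\,\varepsilon$, $\varepsilon^2 = 0$ (the class and function names are mine, not a standard library):

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 == 0: forward-mode AD in miniature."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b   # value part and derivative part

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps and read off the coefficient of eps."""
    return f(Dual(x, 1.0)).b

# d/dx of x**3 at x = 2 is 3*x**2 = 12
print(derivative(lambda x: x * x * x, 2.0))
```

Evaluating a polynomial on `Dual(x, 1.0)` carries the derivative along automatically, which is exactly the trick forward-mode AD systems scale up.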
SIA is amazing, on the other hand, for the view of the continuum that it provides; the continuum cannot be explained entirely in terms of the discrete, which is as it should be. Dual numbers really should be taught in high school, as well as intuitionistic logic.
For that matter, I've had enough of people calling $\sqrt{-1}$ an "imaginary" number. It's not. Here it is: $$\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix}$$ The square of that is $$\begin{matrix} -1 & 0 \\ 0 & -1 \end{matrix}$$ which is of course $-I$, where $I$ is the unit. "Complex" numbers are just $2 \times 2$ matrices $$\begin{matrix} a & -b \\ b & a \end{matrix}$$
People joke that math is just 'castles floating in the air', and 'a religion', but that's because teachers don't talk about these well known constructions that completely ground the subject, and lead to a better understanding.
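This matrix representation of the complex numbers is also easy to check numerically (a sketch; `J` and `cnum` are my names for the generator and the embedding):

```python
import numpy as np

# The 2x2 real matrix standing in for sqrt(-1)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def cnum(a, b):
    """Represent the complex number a + b*i as a 2x2 real matrix."""
    return a * np.eye(2) + b * J

# J squared is -I
print(J @ J)

# Matrix multiplication agrees with complex multiplication:
# (1 + 2i)(3 + 4i) = 3 + 10i + 8i^2 = -5 + 10i
print(cnum(1, 2) @ cnum(3, 4))
```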
http://arxiv.org/abs/1104.1492
I liked this paper on Fermat reals. They also discuss the philosophical points, and how all of this is really accessible to high-school students. As somebody else said, this is yet more stuff it would have been great to know 10 years ago (or, well, 20 in my case).
Thanks again for your post, it really helped me.
Absolutely, if I taught synthetic infinitesimal analysis at a graduate level I'd first discuss things like dual numbers.
This is great! I particularly liked the description of how a particle bouncing off a wall may require a different notion of the continuum. It reminds me of something that I've read about Aristotle's close examination of Zeno's paradox of motion: that the continuum is only potentially infinitely divisible but not actually so, and that the continuum is indecomposable. Do you happen to know how well received intuitionistic concepts of the continuum are in interpretations of QM?
@AndrejBauer: How do you justify writing $\forall \phi \in \mathsf{Prop}$? Sets and elements are first-class mathematical objects of the theory, but propositions are objects about which one can only speak in the metatheory.
Wherever did you get the idea that logic is outside mathematics? Anyhow, you should read $\mathsf{Prop}$ as the set of all truth values, not the set of all formulas. In other words $\mathsf{Prop}$ is isomorphic to the powerset of a singleton $\mathcal{P}(\lbrace\star\rbrace)$. For instance, you can express excluded middle as $\forall S \in \mathcal{P}(\lbrace\star\rbrace) . \star \in S \lor \lnot (\star \in S)$. Does that make you happier?
You say that Prop is in fact not the set of all propositions (= formulas). Why, then, do you denote the set of truth values "Prop"? "Prop" immediately suggests that it is the set of all propositions.
You say that I can imagine $\mathsf {Prop}$ to be the set $\mathcal P({\star})$. But then the law of excluded middle $\forall\phi\in\mathsf{Prop}. \, \phi\lor\neg\phi$. becomes $\forall S\in \mathcal P({\star}).\, S\lor\neg S$. But this latter statement does not make sense, since $S$ is a mathematical object and not a statement. Thus one can't use $S$ to express a statement like "$S\lor\neg S$". That's where my confusion comes from. I hope you can clarify.
Ups, I meant $\mathcal P(\lbrace \star\rbrace)$ and not $\mathcal P(\star)$
It is traditional (and in my opinion misleading) to refer to the object of all truth values as "the object of propositions". I do not know why that is. In Coq, for instance, the object is called
Prop
. In a topos it is $\Omega$, and it is isomorphic to $\mathcal{P}(\lbrace \star \rbrace)$. You are mistaken about excluded middle. It is not expressed as $\forall S \in \mathcal{P}(\lbrace \star \rbrace) . S \lor \lnot S$ but rather as $\forall S \in \mathcal{P}(\lbrace \star \rbrace) . \star \in S \lor \lnot (\star \in S)$. In any case, this is a technicality which can be dealt with in many ways. For instance, one can observe that $\mathcal{P}(\lbrace \star \rbrace)$ is a complete Heyting algebra under the $\subseteq$ ordering, and that logical formulas denote elements of this Heyting algebra. Then one can really write $S \lor \lnot S$, and this would mean the same thing as $S \cup (\lbrace \star \rbrace \setminus S)$. I recommend that you consult a textbook such as Lambek and Scott's "Introduction to Higher-Order Categorical Logic". They explain these things there. You will also learn from that textbook about internal languages, which allow us to mix syntactic forms with semantic objects.
Hey Andrej,
I saw a pop-science article on Nicolas Gisin, a Swiss physicist who is ... I'm not sure. Let's just say he is mentioning intuitionistic math and physics in the same sentence.
There's a paper of his on the arXiv. https://arxiv.org/abs/2002.01653
My take is that his understanding of intuitionistic math is screwy. But if he and other physicists want a mathematics where a finite amount of physical space can only hold a finite amount of information, he's on the right track.