# How to implement dependent type theory II

- 11 November 2012
- Type theory, Programming, Software, Tutorial

I am on a roll. In the second post on how to implement dependent type theory we are going to:

- Spiff up the syntax by allowing more flexible syntax for bindings in functions and products.
- Keep track of source code locations so that we can report *where* the error has occurred.
- Perform normalization by evaluation.

The relevant GitHub repository is andrejbauer/tt/ (branch blog-part-II). By the way, there are probably bugs in the implementation, as I am not spending a huge amount of time on testing (mental note: put “implement testing” on the to-do list). If you discover one, please tell me, or preferably make a pull request with a fix. This also applies to old branches.

### Source code positions

Source code positions seem like an annoyance because they pollute our nice datatypes. Nevertheless, an even bigger annoyance is an error message without an indication of its position.

The OCaml lexer keeps track of positions, and menhir has support for them, so we just need to incorporate them into our program. Every expression should be tagged with the source code position it came from. Sometimes we generate expressions with no associated position, so we define:

```ocaml
type position =
  | Position of Lexing.position * Lexing.position
  | Nowhere
```
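As an aside, here is a small sketch of helpers one might define around this type; the names `nowhere` and `string_of_position` and the exact message format are illustrative, not necessarily what the repository uses:

```ocaml
type position =
  | Position of Lexing.position * Lexing.position
  | Nowhere

(* Tag an expression that has no known source location. *)
let nowhere x = (x, Nowhere)

(* Render a position for error messages; columns are computed from the
   character offset relative to the beginning of the line. *)
let string_of_position = function
  | Nowhere -> "unknown position"
  | Position (p1, p2) ->
      Printf.sprintf "line %d, characters %d-%d"
        p1.Lexing.pos_lnum
        (p1.Lexing.pos_cnum - p1.Lexing.pos_bol)
        (p2.Lexing.pos_cnum - p2.Lexing.pos_bol)
```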

The type `Lexing.position` is the one from the OCaml lexer. Each expression is associated with two such positions, its beginning and end. To tag expressions with positions we define two types: `expr` is an expression with a position and `expr'` is one without (I stole the idea from Matija Pretnar's eff code):

```ocaml
(** Abstract syntax of expressions. *)
type expr = expr' * position

and expr' =
  | Var of variable
  | Universe of int
  | Pi of abstraction
  | Lambda of abstraction
  | App of expr * expr

(** An abstraction [(x,t,e)] indicates that [x] of type [t] is bound in [e]. *)
and abstraction = variable * expr * expr
```

Note that `expr'` refers back to `expr`, so that subexpressions come equipped with their positions. We generally follow the rule that an apostrophe is attached to a type or a function which is position-less. Except that apostrophes are not valid in the names of grammatical rules in the parser, so in parser.mly we write `plain_expr` instead of `expr'`.

We also extend the pretty printer and error reporting with positions, feel free to consult the source code.

### Better syntax for bindings

This is a fairly trivial change. It is annoying to have to write things like

```
fun x : A => fun y : A => fun z : B => fun w : B => ...
```

We improve the parser so that it accepts syntax like

```
fun (x y : A) (z w : B) => ...
```

Let us read out the relevant portion of parser.mly, namely the rules `abstraction`, `bind1` and `binds`:

```
abstraction:
  | b = bind1    { [b] }
  | bs = binds   { bs }

bind1: mark_position(plain_bind1) { $1 }
plain_bind1:
  | xs = nonempty_list(NAME) COLON t = expr
      { (List.map (fun x -> String x) xs, t) }

binds:
  | LPAREN b = bind1 RPAREN             { [b] }
  | LPAREN b = bind1 RPAREN lst = binds { b :: lst }
```
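Each group in the parsed abstraction binds several names at one type, so somewhere the list has to be unfolded into nested one-variable abstractions. Here is a sketch of that step over a simplified, position-less AST; `make_lambdas` is an illustrative name, not necessarily the one used in the repository:

```ocaml
(* Simplified AST: no positions, lambdas only. *)
type variable = String of string

type expr =
  | Var of variable
  | Lambda of variable * expr * expr  (* Lambda (x, t, e) is: fun x : t => e *)

(* Unfold [(x y : t) (z : s)] applied to a body [e] into
   fun x : t => fun y : t => fun z : s => e. *)
let make_lambdas binds body =
  List.fold_right
    (fun (xs, t) body ->
       List.fold_right (fun x body -> Lambda (x, t, body)) xs body)
    binds body
```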

A `bind1` is something of the form `x y ... z : t`. A `binds` is a non-empty list of parenthesized `bind1`'s. An abstraction is either a `bind1` or a `binds`. Thus we can write `fun x y z : t => ...` and `fun (x y z : t) => ...` and `fun (x y : t) (z : t) => ...`, but not `fun x y : t (z : t) => ...`.

### Normalization by evaluation

In the first version we performed normalization by substitution, just like the theory books say we should. But this is horribly inefficient. We could improve efficiency by keeping a current substitution (a “runtime” environment) which maps variables to expressions. When we encounter a variable we look up its value in the current substitution. This way at least we do not keep traversing expressions during substitution.

An even cooler way to normalize is known as normalization by evaluation. We first “evaluate” expressions to actual OCaml values in such a way that definitionally equal expressions evaluate to (observationally) equivalent values, and then we reconstruct the expression from the value (the fancy speak is that we reify the value). Apart from giving us a normal form there are all sorts of other benefits (Dan Grayson keeps asking me which, perhaps the more knowledgeable readers can point them out).

We need a datatype `value` into which we evaluate expressions. We need to evaluate expressions with free variables, which means that we are going to get stuck on applications of the form `x v1 v2 .. vn` where `x` is a free variable (these are called head-normal). We collect those in a separate datatype `neutral`:

```ocaml
type value =
  | Neutral of neutral
  | Universe of int
  | Pi of abstraction
  | Lambda of abstraction

and abstraction = variable * value * (value -> value)

and neutral =
  | Var of variable
  | App of neutral * value
```

Abstractions will be evaluated to OCaml functions (so OCaml will take care of substitutions). Thus an abstraction like `fun x : t => e` should be evaluated to a pair `(u, v)` where `u` is the value of `t` and `v` is the function $x \mapsto e$. But if you look at the definition of `abstraction` above, you see that we also keep the variable name around. This we do for pretty-printing purposes: when we reify an evaluated abstraction back to its expression form, we use the variable name as a hint.
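To make the shape of the algorithm concrete, here is a self-contained miniature of normalization by evaluation for the untyped lambda calculus. The real value.ml also handles `Pi`, `Universe`, and the types in abstractions, and is more careful about generating fresh names; this sketch just appends a counter to the stored name hint:

```ocaml
type expr =
  | Var of string
  | Lam of string * expr
  | App of expr * expr

type value =
  | VNeutral of neutral
  | VLam of string * (value -> value)  (* the name is kept as a printing hint *)
and neutral =
  | NVar of string
  | NApp of neutral * value

(* Evaluate in an environment mapping variables to values; unbound
   variables become neutral. OCaml closures do the substitution for us. *)
let rec eval env = function
  | Var x -> (try List.assoc x env with Not_found -> VNeutral (NVar x))
  | Lam (x, e) -> VLam (x, fun v -> eval ((x, v) :: env) e)
  | App (e1, e2) ->
      (match eval env e1 with
       | VLam (_, f) -> f (eval env e2)
       | VNeutral n -> VNeutral (NApp (n, eval env e2)))

(* Reify a value back into a normal-form expression, using the name hint
   plus a counter to freshen bound variables. *)
let rec reify k = function
  | VLam (x, f) ->
      let x' = x ^ string_of_int k in
      Lam (x', reify (k + 1) (f (VNeutral (NVar x'))))
  | VNeutral n -> reify_neutral k n
and reify_neutral k = function
  | NVar x -> Var x
  | NApp (n, v) -> App (reify_neutral k n, reify k v)

(* Normalization is evaluation followed by reification. *)
let normalize e = reify 0 (eval [] e)
```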

You should take the time to read `value.ml`, which contains evaluation, reification, and comparison of values. Also note that `Infer.normalize` really is just the composition of evaluation and reification.

We are now at 759 lines of code. We added 90 lines for normalization by evaluation and 51 for keeping track of source code positions.

### Trying something out

Ok, let us try something fun. How about Church numerals?

```
Parameter N : Type 0.
Parameter z : N.
Parameter s : N -> N.

Definition numeral := forall A : Type 0, (A -> A) -> (A -> A).

Definition zero  := fun (A : Type 0) (f : A -> A) (x : A) => x.
Definition one   := fun (A : Type 0) (f : A -> A) => f.
Definition two   := fun (A : Type 0) (f : A -> A) (x : A) => f (f x).
Definition three := fun (A : Type 0) (f : A -> A) (x : A) => f (f (f x)).

Definition plus :=
  fun (m n : numeral) (A : Type 0) (f : A -> A) (x : A) => m A f (n A f x).
Definition times :=
  fun (m n : numeral) (A : Type 0) (f : A -> A) (x : A) => m A (n A f) x.
Definition power :=
  fun (m n : numeral) (A : Type 0) => m (A -> A) (n A).

Definition four := plus two two.
Definition five := plus two three.
```
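As a quick sanity check, most of these definitions can be transcribed into plain OCaml, where `m A f x` becomes ordinary application `m f x` (we omit `power`, whose faithful typing runs into OCaml's let-polymorphism):

```ocaml
(* Church numerals in plain OCaml. *)
let zero f x = x
let one f = f
let two f x = f (f x)
let three f x = f (f (f x))

let plus m n f x = m f (n f x)
let times m n f x = m (n f) x

(* Convert back to an ordinary integer by counting applications. *)
let to_int n = n (fun k -> k + 1) 0
```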

If you put the above code in `church.tt` you can load it into `tt` with

```
./tt.native -l church.tt
```

Dare we compute $2^{16}$? Sure:

```
# Eval power two (power two four) N s z.
    = s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (s (... (...)))))))))))))))))))))))))))))))))))))))
    : N
```

It takes a moment, but that is mostly because of pretty-printing (if you evaluate the same expression in non-interactive mode by placing it in a file, you will notice no delay). How about $2^{20}$?

```
# Eval power two (times four five).
Fatal error: exception Stack_overflow
```

Oh well. We will have to do something smarter (I am open to suggestions), or increase the stack size. Next time we are going to add some more types, and then I would like to focus on how to implement an interactive mode.

**How to comment on this blog:** At present comments are disabled because the relevant script died. If you comment on this post on Mastodon and mention `andrejbauer@mathstodon.xyz`, I will gladly respond. You are also welcome to contact me directly.
## Comments

Hello Andrej. I'm not sure what you mean by "other benefits" of normalization-by-evaluation -- providing a normal form is benefit enough! The point is that normal forms are useful for many reasons, and NbE is a concise way of describing a strategy for calculating them directly. In contrast, naively applying rewriting rules in the hopes of eventually reaching a normal form is often an inefficient way of performing normalization... and sometimes it doesn't even work: to guarantee that an arbitrary rewriting strategy will terminate you need a strong normalization result, and in some situations SN fails. A well-known example is lambda calculus with explicit substitutions, but at an even more basic level SN fails in the presence of eta laws if you interpret eta by expansion, which is a more natural direction than eta-reduction.

[On the other hand, having a rewriting relation on terms (rather than a mere normalization procedure) is useful in itself for reasoning about programs, so long as you know that this rewriting relation is well-behaved (e.g., respected by evaluation).]

As for the stack overflow, one way you could get rid of it is by porting your implementation to Standard ML :) Actually that's only half joking: the point is that with NbE, your object language may depend on semantic features of the meta language (in particular of its function spaces). John Reynolds pointed out this problem 30 years ago, and also provided a recipe for addressing it by systematically converting a higher-order definitional interpreter into a first-order abstract machine (via CPS conversion + closure conversion and/or defunctionalization, a mantra which Olivier Danvy and company have been driving home for the last decade).

Although they do not explicitly derive their abstract machine from an NbE procedure but only provide the end result, you may be interested in reading Altenkirch and Chapman's Big-step normalisation, which goes through the task of writing a provably correct environment machine for a lambda calculus with explicit substitutions in Agda.

One benefit is that NBE produces (in this case) eta-long normal forms.

A theoretical one is the observation, due to Catarina Coquand, that the evaluation phase corresponds to the soundness theorem wrt Kripke semantics, and the reification phase to the completeness theorem.

This "normalization by evaluation" reminds me of the HOAS representation of substitutions by functions that you were trying out when you gave your tutorial, before settling on ordinary textual substitution. What is the difference here that makes this work where that failed?

Your type of values is what we had in Agdalight. Having function spaces in your data type became awkward when working with meta-variables or a monadic evaluator. I do not remember the exact reason why, but HOAS-values were abandoned; Agda 2 is back to de Bruijn indices (with a new set of drawbacks).

@Mike: I was trying to do type inference on the values, without reification back to expressions. That got horribly confused (it is probably somewhere in the git history).

@Andreas: that is interesting. Note I am reifying back to expressions after evaluation. Did you actually use functions all the time? I tried that and failed.

Thanks again for sharing this. I just want to signal a small bug in the example code: for the power function, shouldn't it be [n (A -> A) (m A)] instead of [m (A -> A) (n A)]?

Your side remark on the Stack Overflow got me inspired a few weeks ago. Valentin Robert kindly wrote a blog post describing our translation to a partial CPS form, just on the right side of application nodes. Problem solved! (for this particular input)

Normalization by Evaluation is indeed an elegant approach, which is in fact also used in Coq: it is behind the "Eval vm_compute in ..." command. This reduction based on bytecode compilation implemented by Benjamin Grégoire [1] is also an instance of untyped NbE. More recently, this machinery has been adapted to the stock OCaml compiler, so that it can be used to compile Coq's conversion test to native code [2]. A prototype branch is available at: https://github.com/maximedenes/native-coq

Just to mention one key difference with what is described above, in order to avoid breaking compiler optimizations, an App node in an expression is interpreted by an OCaml application (without an explicit App tag). Same is true for pattern matching when you have it in your dependently-typed language.

[1] Benjamin Grégoire and Xavier Leroy. A compiled implementation of strong reduction. In International Conference on Functional Programming 2002, pages 235-246. ACM Press, 2002. [2] Mathieu Boespflug, Maxime Dénès and Benjamin Grégoire. Full reduction at full throttle. In First International Conference on Certified Programs and Proofs, Taiwan, December 7-9, Lecture Notes in Computer Science. Springer, 2011.