Mathematics and Computation

A blog about mathematics for computers

On programming language design

In a recent post I claimed that Python's lambda construct is broken. This attracted some angry responses by people who thought I was confused about how Python works. Luckily there were also many useful responses from which I learnt. This post is a response to comment 27, which asks me to say more about my calling certain design decisions in Python crazy.

Language design is like architecture. The architect is bound by the rules of nature, he has to take into account the properties of the building materials, and he must never forget the purpose that the building will serve. Likewise, the designer of a programming language is bound by the theorems of computability theory, he must take into account the properties of the underlying hardware, and he must never forget that the language is used by programmers.

When I teach the theory of programming languages, I tell my students that there is a design principle from which almost everything else follows:

“Programmers are just humans: forgetful, lazy, and they make every mistake imaginable.”

Therefore, it is the task of the designer to make a programming language which counters these deficiencies. A language must not be too complex, lest the programmer forget half of it. A language must support the programmer's laziness by providing lots of useful libraries, and by making it possible to express ideas directly and succinctly. The language must allow good organization of source code, otherwise the programmer will use the copy-paste method. The language must try really hard to catch programming mistakes, especially the mundane ones that happen to everyone all the time. When it finds a mistake, it must point to the true reason for it, preferably with an error message that humans understand.

You will notice that so far I have not said a word about efficiency. If this were the year 1972 we would talk about efficiency first and forget about the programmers, because 37 years ago hardware and processing time were the scarcest resources. Today we live in different times when the most expensive resource is development time. In 1972 it was a good design decision to implement arrays in C so that they did not carry with them information about their lengths (save a couple of bytes on each array), it was a good decision not to check for out-of-bounds errors in array indexing (save a couple of CPU cycles), and it was a good decision not to have garbage collection (it didn't work well anyhow). From today's point of view all these decisions were horrible mistakes. Buffer overflows, which are a consequence of missing out-of-bounds checks, cost the industry huge amounts of money every year, while lack of automated garbage collection results in memory leaks that cause programs to be unstable.

Of course, even today C might be just the right tool for your specific task. I am not saying that memory efficiency and speed are not important. They are not as important as they used to be. The first objective in a programming language design today should be friendliness to programmers. A lot is known about how to write an optimizing compiler and how to generate efficient code, so usually the design of the language does not prevent generation of efficient compiled or interpreted code.

People do not make bad design decisions because they are evil or stupid. They make them because they judge that the advantages of the decision outweigh the disadvantages. What they often do not see is that they could have achieved the same advantages in a different way, without introducing the disadvantages. Therefore, it is very important to get the order right: first make sure the design avoids the disadvantages, then think about how to get the advantages back.

Let us now apply these principles to several examples.

Undefined values (NULL, null, undef, None)

Suppose we want a language with references (pointers in C). The principle tells us that it is a bad idea to allow invalid references because programmers will create them. Indeed, most recently designed languages, say Java and Python, do not allow you to write obviously risky things, such as

int *p = (int *)0xabcdef;

Unfortunately, many designers have still not learnt that the special NULL pointer or null object is an equally bad idea. Python's None, perl's undef, and SQL's NULL all fall in the same category. I can hear you list lots of advantages of having these. But stick to the principle: NULL is wrong because it causes horrible and tricky mistakes which appear even after the program was tested thoroughly. You cannot introduce NULL into the language and tell the programmer to be careful about it. The programmer is not capable of being careful! There is plenty of evidence to support this sad fact.

Therefore NULL, null, None and undef must go. I shall collectively denote these with Python's None. Of course, if we take away None, we must put something else back in. To see what is needed, consider the fact that None is intended as a special constant that signifies “missing value”. Problems occur when a given value could be either “proper” or “missing” and the programmer forgets to consider the case of missing value. The solution is to design the language in such a way that the programmer is always forced to consider both possibilities.

For example, Haskell does this with the datatype Maybe, which has two kinds of values: Nothing, representing a missing value, and Just x, representing a proper value x.

The only way to use such a value in Haskell is to consider both cases, otherwise the compiler complains. The language is forcing the programmer to do the right thing. Is this annoying? You will probably feel annoyed if you are used to ugly hacks with None, but a bit of experience will quickly convince you that the advantages easily outweigh your tendency for laziness. By the way, Haskell actually supports your laziness. Once you tell it that the type of a value is Maybe, it will find for you all the places where you need to be careful about Nothing. C, Java, Python, and perl stay silent and let you suffer through your own mistaken uses of NULL's, null's, None's, and undef's.
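For readers who think in Python, here is a rough sketch of the same discipline. The Nothing and Just classes are hypothetical, not a standard library feature, and unlike Haskell nothing is checked before the program runs; the point is only to show the shape of the idea.

```python
# A sketch of the Maybe discipline: a value is either Nothing or Just(x),
# and the only way to extract a result is to say what to do in BOTH cases.
class Maybe:
    pass

class Nothing(Maybe):
    def match(self, if_nothing, if_just):
        return if_nothing()

class Just(Maybe):
    def __init__(self, value):
        self.value = value

    def match(self, if_nothing, if_just):
        return if_just(self.value)

def safe_div(a, b):
    # Return a Maybe instead of raising or silently returning None.
    return Nothing() if b == 0 else Just(a / b)

# The caller must handle the missing-value case explicitly:
result = safe_div(10, 2).match(if_nothing=lambda: "undefined",
                               if_just=lambda x: x)
```

Of course, in Python this is enforced only by convention; a Haskell compiler enforces it for you.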

Other languages that let you have a datatype like Haskell's Maybe are ML and OCaml, because they have sum types. Pascal, Modula-2, and C have broken sum types because they require the programmer to handle the tag by hand.

Everything is an object (or list, or array)

Many languages are advertised as “simple” because in them everything is expressed with just a couple of basic concepts. Lisp and Scheme programmers proudly represent all sorts of data with conses and lists. Fortran programmers implement linked lists and trees with arrays. In Java and Python “everything is an object”, more or less.

It is good to have a simple language, but it is not good to sacrifice its expressiveness to the point where most of the time the programmer has to encode the concepts that he really needs indirectly with those available in the language. Programmers cannot do such things reliably, and the compiler cannot help them with the task because it does not know what is in the programmer's head.

Let us look at a typical example in Scheme. Suppose we would like to represent binary trees in which the nodes are labeled with integers. In Scheme we might do this by representing the empty tree as (), and use a three-element list (k l r) to represent a tree whose root is labeled by k, the left subtree is l, and the right subtree is r. A quick search on Google shows that this is a popular way of implementing trees in Scheme. It's simple, it's cool, it's easy to explain to the students, but Scheme will have no idea whatsoever what you're doing. There are a number of trivial mistakes which can be made with such a representation, and Scheme won't detect them (at best you will get a runtime error): you might write (l k r) instead of (k l r), you might mistakenly pass a four-element list to a function expecting a tree, you might mistakenly think that the integer 42 is a valid representation of the tree (42 () ()), you might mistakenly try to compute the left subtree of the empty tree, etc. And remember, the programmer will make all these mistakes.
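The same fragility is easy to reproduce in Python by transcribing the list encoding into tuples; the helper functions here are made up for illustration.

```python
# The Scheme representation above, transcribed into Python tuples:
# () is the empty tree, (k, l, r) is a node with label k.
EMPTY = ()

def node(k, l, r):
    return (k, l, r)

def root(t):
    return t[0]

# All of the "trivial mistakes" listed above are accepted silently:
bad1 = (node(2, EMPTY, EMPTY), 1, EMPTY)  # (l, k, r) instead of (k, l, r)
bad2 = (1, EMPTY, EMPTY, EMPTY)           # a four-element "tree"
bad3 = 42                                 # an int mistaken for (42, (), ())

# Each error surfaces, if at all, only at run time, far from its source:
try:
    left_of_empty = root(EMPTY)           # indexing into the empty tree
except IndexError:
    left_of_empty = None
```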

With objects the situation is somewhat better. In Java we would define a class Tree with three attributes root, left, and right. It will be impossible to build a tree with a missing attribute, or too many attributes. But we will hit another problem: how to represent the empty tree? There are several choices, none of which is ideal:

  1. the empty tree is null: this is the worst solution, as any Java programmer knows
  2. we define a class Tree with subclasses EmptyTree and NodeTree to represent the two different kinds of tree
  3. we add a fourth attribute empty of type boolean which tells us whether the tree is empty

There are probably other options. The first solution is horrible, as every Java programmer knows, because it leads to many NullPointerExceptions. The second solution is probably the most “object-orientedly correct”, but people find it impractical, as it spreads code across two classes. When I taught Java I presented the third solution, but that one has the big disadvantage that the programmer is responsible for checking every time whether a tree is empty or not.
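For concreteness, here is the second solution transposed into Python (the class names are illustrative); it shows both how the code gets spread across two classes and how the mistakes remain runtime errors.

```python
# A Tree superclass with one subclass per kind of tree.
class Tree:
    pass

class EmptyTree(Tree):
    def size(self):
        return 0

class NodeTree(Tree):
    def __init__(self, root, left, right):
        self.root, self.left, self.right = root, left, right

    def size(self):
        return 1 + self.left.size() + self.right.size()

# Each operation is now spread across both classes, and nothing stops a
# caller from asking a node-only question (say, t.root) of an empty tree;
# the mistake appears only at run time as an AttributeError.
t = NodeTree(1, EmptyTree(), NodeTree(2, EmptyTree(), EmptyTree()))
```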

A decent programming language should help with the following common problems regarding binary trees:

  1. Prevent the construction of an invalid tree, such as one with missing parts, or dangling pointers.
  2. Prevent at compile time access to a component which is not there. For example, the compiler should detect the fact that the programmer is trying to access the left subtree of the empty tree.
  3. Make sure the programmer never forgets to consider both possibilities: the empty tree and the non-empty tree.

The above Scheme representation does not help with the first problem. A C implementation with pointers would allow dangling pointers. An object-oriented solution typically won't help with the second and the third problems.

You might wonder what it is that I want. The answer is that the programming language should have built-in inductive data types, because that's what binary trees are. In Haskell, which has inductive data types, trees are defined directly in terms of their structure:

data Tree = Empty | Node Int Tree Tree

This expresses the definition of trees directly: a tree is either empty or a node composed of an integer and two trees. Haskell will be able to catch all the common problems listed above. Other languages supporting this sort of definition are ML, Ocaml, F#, and interestingly Visual Prolog (I am told by Wikipedia).

We might ask for more. Suppose we wanted to implement binary search trees. Then we would require that the left subtree only contains nodes that are smaller than the root, and the right subtree only nodes that are larger than the root. Can a programming language be designed so that this property is guaranteed? Yes, for example the compiler could insert suitable checks into the code so that anomalies are detected during execution as soon as they occur. This might be nice for debugging purposes, but what is production code supposed to do if it discovers an anomalous data structure during its execution? Ignore it? Raise an exception? It is much more useful to know before the program is run that the data structure will never be corrupted. Here we hit against a law of nature: there is no algorithm that would analyze an arbitrary piece of code and determine whether it will only produce valid search trees. It is a fact of life. If you really want to check that your programs are correct you will have to help the compiler. There are excellent tools for doing that, such as Coq and Agda; have a look to see how programmers might develop their code in the future.

Confusing definitions and variables

A definition binds an identifier to a particular fixed value. A variable or a mutable value is a memory location which holds a value that can be read and changed. These two notions should not be confused. Unfortunately, traditional programming languages only provide variables, so many programmers don't even understand what definitions are. Java tries to fix this with the final declaration, and C++ with the const declaration, but these are not used by programmers as much as they could be (which is a typical sign of dubious design decisions).

Using variables instead of definitions is wrong for a number of reasons. First, if the compiler knows which identifiers are bound to immutable values it can optimize the code better. It can, for example, decide to store the value in a register, or to keep around several copies without worrying about synchronization between them (think threaded applications). Second, if we allow the programmer to change a value which is supposed to be constant, then he will do so.

If you observe how variables are typically used, you will see several distinct uses: a name given to a value that, once computed, never changes again (which is really a definition in disguise); a loop counter; and genuinely mutable state, such as an accumulator.

Should loop counters be mutable? I think not. Code that changes the loop counter in the body of the loop is confusing and error-prone. If you want to fiddle with counters, use the while loop instead. So in two out of three cases we want our variables to be immutable, but the popular programming languages only give us variables. That's silly. We should design the language so that the default case is an immutable value. If the programmer wants a mutable value, he should say so explicitly. This is just the opposite of what Java and C++ do. Examples of languages that are designed this way are ML and OCaml. In Haskell you have to jump through hoops to get mutable values (now I am going to hear it from a monad aficionado, please spare me an unnecessary burst of anger).
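Python offers no immutable local bindings at all, but for record-like data the spirit of "immutable by default" can at least be approximated with frozen dataclasses; this is a sketch, not a substitute for language-level support, and the check happens only at run time.

```python
from dataclasses import dataclass, FrozenInstanceError

# A frozen dataclass makes its fields behave like definitions:
# once constructed, they cannot be reassigned.
@dataclass(frozen=True)
class Config:
    host: str
    port: int

c = Config("localhost", 8080)
try:
    c.port = 9090            # accidental mutation of a "definition"
    mutated = True
except FrozenInstanceError:
    mutated = False          # caught, though only at run time
```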

Out of scope variables

I thought I would not have to explain why undefined identifiers are a bad idea, but the reader in comment 27 explicitly asked about this.

If a programmer refers to an undefined name then an error should be reported. Concretely, I claimed that Python should complain about the following definition:

def f(n): return i + n

What is i? Pythonists will quickly point out that i will be defined later, and that deferred definitions are useful because they allow us to define mutually recursive functions. Indeed, Java and Haskell also accept mutually recursive definitions. But unlike Python they make sure that nothing is missing at the time of definition, whereas Python will only complain when the above function f is used. To be honest, Python kindly displays the correct error message showing that the trouble is with the definition of f. But why should this be a runtime error when the mistake can easily be detected at compile time? Actually, this question leads to a more general question, which I consider next.
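Both halves of the argument are easy to observe directly; the functions below are made up for illustration.

```python
# Accepted by Python at definition time even though `i` is undefined:
def f(n):
    return i + n

# The mistake surfaces only when f is actually called:
try:
    f(1)
    caught = False
except NameError:
    caught = True

# The same deferred name lookup is what makes mutual recursion work
# without forward declarations:
def even(n):
    return True if n == 0 else odd(n - 1)

def odd(n):
    return False if n == 0 else even(n - 1)
```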

When should mistakes be discovered?

Should programming bugs be discovered by the programmer or by the user? The answer seems pretty clear. Therefore, a language should be designed so that as many programming errors as possible are discovered early on, that is, before the program is sent to the user. In fact, in order to speed up development (remember that development time is expensive) the programmer should be told about errors without having to run the program and direct its execution to the place where the latest code change actually gets executed.

This philosophy leads to the design of statically checked languages. A typical feature of such a language is that all the types are known at compile time. In contrast, a dynamically typed language checks the types during runtime. Java, Haskell, and OCaml are of the former kind; Scheme, JavaScript, and Python of the latter.

There are situations in which a statically checked language is better, for example if you're writing a program that will control a laser during eye surgery. But there are also situations in which maximum flexibility is required of a program, for example programs that are embedded in web pages. The web as we know it would not exist if every JavaScript error caused the browser to reject the entire web page (try finding a page on a major web site that does not have any JavaScript errors).

Let me also point out that testing cannot replace good language design. Testing is very important, but it should be used to discover problems that cannot be discovered earlier in the development cycle.

I used to think that statically checked languages are better for teaching because they prevent the students from doing obviously stupid things. About two years ago I changed my mind. The students learn much better by doing stupid things than by being told by an oppressive compiler that their programs are stupid. So this year I switched to Python. The students are happier, and so am I (because I don't have to explain that code must be properly indented). Python does not whine all the time. Instead it lets them explore the possibilities, and by discovering which ones crash their programs they seem to understand better how the machine works.


This is a nice summary of many popular fallacies in language design, which many defend to the death against all reason. I would point out that a commonality here is the absence of sum types in popular languages. Another is over-emphasis of mutation and state, ranging from the mutable variables problems you point out up through piles and piles of object nonsense that makes matters worse and worse and worse. Just try to implement "delete" from a binary search tree represented in the contorted "object-oriented style". It's appalling because the class of a node can change from non-empty to empty, but with everything being a reference the effect is only felt further up the tree, and then you have to worry about aliasing. It's a disaster. Finally, I would suggest heading off the usual nonsensical argument about static vs dynamic typing by pointing out that dynamic typing is a mode of use of static typing (so-called dynamic languages pick a single recursive type and obsess over it for no useful purpose), so it can hardly be opposed to static typing, much less an alternative to it. Moreover, every single thing you can do in a so-called dynamic language can be done directly and easily in a static language---but as soon as you do, you realize that you really don't want to do this after all (but you can, if you're obsessive about that one true type that everything must have).


"Lisp and scheme programmers proudly represent all sorts of data with conses and lists"

That is not true. Both Common Lisp and Scheme have arrays; and Common Lisp even has bit vectors! Depending on your system, it is compiled as efficiently as it would be if you used an int or char array in C -- with the advantage that in Common Lisp you declare it as a bit vector and use bit operators on its elements (in C you have to use bitwise operations on chars/ints, create masks to select specific bits, etc).

Also, Common Lisp has structs (see defstruct), which (in SBCL/CMUCL) are compiled as contiguous memory, just like a C struct.

I don't know how to respond to this other than to say that Python's philosophy doesn't match yours. As C. A. R. Hoare said:

"I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, the other way is to make it so complicated there are no obvious deficiencies."

Python definitely encourages the former. To steal a quote from Perl, does the language give you enough rope to hang yourself with? Sure. But Python is so simple a language that these kinds of stupid mistakes are usually pretty easy to spot.

Haskell's designers seem to disagree. Peyton Jones once said that "any language large enough to be useful must be dauntingly complex."

Take the piece of code you post:

def f(n): return i + n

The problem is that based on this piece of code, we don't know if i is in scope or not. The above code becomes valid if it's placed in this context:

def g():
    i = 0
    def f(n): return i + n
    return f

Are there ways to allow this behavior and remove the possibility of stupid mistakes? Sure. But not without adding several layers of complexity to the language. Does that mean it's bad? Not necessarily. But then we go back to the dichotomy: do you encourage reliable code through simplicity or do you create reliability through better compiler checks?

I'm of the belief that compiler checks don't eliminate or even reduce programmer stupidity. They just make it go deeper by creating complexity. Have I written my fair share of stupid python code? Sure. But it's rare that I ever find myself saying "WTF is this code doing?"

But you obviously view things differently and that's a good thing.

Jason, I intended that piece of code defining f to be a complete module. Isn't that obvious? Otherwise everything I wrote about it does not make any sense.

Secondly, the claims you and I make are verifiable. There must be research out there measuring whether "simple" languages have fewer bugs, and whether static typing reduces programming errors. It would seem tricky however to measure such things reliably.


> Make sure the programmer never forgets to consider both possibilities–the empty tree and the non-empty tree.

I'm not sure about this one. I often have to put error "won't reach this" in match statements in ML.

[...] Andrej Bauer added an interesting post today on Mathematics and Computation » On programming language design [...]

Let's say I agree that programmers are stupid: Then perhaps we should educate them instead of giving them blunt tools that can't hurt them.

Now to change the subject. As any good Java programmer will tell you, representing the empty tree by null is the best solution. The only disadvantage you gave, that NullPointerException will happen, is simply not a pragmatic reason. For if it happens, it will happen quickly and will make you think harder about the code. The two sub-classes solution sucks for the reason you mentioned: It makes the code hard to read, because it's spread out. The boolean solution is by far the worst: Just as in the null-solution you are responsible to check for empty trees with no help from the compiler, when you get an exception it doesn't directly translate to "you forgot the empty tree case", and you occupy extra memory. (Yes, using extra memory is bad, especially when you don't gain anything.) There is absolutely no excuse for using this solution, let alone teach it.

Most people find it easier to think about a stateful world and some find it easier to think about a stateless world. Of course, at some deep level both formalisms are equivalent. It's sad to see time and again people from one camp claiming that the others are ignorants, instead of acknowledging that different people have different styles of thinking.


Jules: I've realized that dummy "can't happen" matching (with assert false) is becoming rare in my OCaml code, as I tend to construct the data types in ways that allow exhaustive matching without impossible branches, for instance by using polymorphic variants instead of regular sum types or encodings like ['a * 'a list] for non-empty lists.

At any rate, even if you do end up using "impossible" matches, at least the type system forces you to be explicit and to take a minute to think about whether the branch can be reached or not.

Reply to Radu: You misunderstand me. I am not saying that programmers are stupid. I am saying they make mistakes because they are humans. They can't help it, no matter how smart they are. Most mistakes are really, really dumb, like mismatching parentheses, writing pritn instead of print, forgetting to initialize a variable, etc. This has nothing to do with intelligence. It's about how our brains work. We're not robots.

You are suggesting that the programmer should think about his program hard. Yes, he should be thinking hard about how to make his program good, not about how to get rid of silly errors. That is just a waste of energy.

The tools I am suggesting are far from being blunt. Programming languages such as ML and Haskell are way, way more expressive than Java or Python.


Captain Morgan

Arrgh, all this brainy talk is makin' me head spin. Get back to swabbin' the deck, matey, unless you'd rather be hangin' from the mast.


I'm not sure if this was covered in the comments to the closure post, but here's the main issue:

Suppose you have a bunch of inline code that you want to turn into a zero-arg closure. Suppose that code modifies some variable i. If the closure takes the value of i at definition time and turns it into a local variable, the code works differently when you create the closure than when you inline it.

This adds an extra layer of confusion since macros and closures would behave differently. You can't apply the logic of a pure functional language to an imperative stateful one. The example above is a reasonably common one, that comes up if you realize you have the same bit of code in both branches of an if.

Jonathan Allen

Reply to Radu:

You are dead-wrong about using nulls for empty trees, or any other collection for that matter. You can't even call myTree.count() safely if you have a null. That's why most guidelines say to always return empty collections instead.

Jonathan Allen

> But then we go back to the dichotomy: do you encourage reliable code through simplicity or do you create reliability through better compiler checks?

Why not both?

Visual Basic will give you a compiler warning if you try to use a loop variable in a lambda. If you still want to do it you can, but at least you are warned.

Fritz Ruehr

Thanks for a concise but thorough summary of important issues.

Regarding your example of binary search trees, there are some interesting recent alternatives to Coq and Agda for enforcing these sorts of constraints. I am thinking specifically of the Sage language from Cormac Flanagan and his students at UC Santa Cruz (also Steve Freund at Williams); I heard about it from Aaron Tomb during a recent tech talk at Galois. In their tech report Sage: Unified Hybrid Checking for First-Class Types, General Refinement Types, and Dynamic, they in fact use binary search trees as a motivating example (quoting from their Section 2.1):

Note that Sage supports dependent function types, and so the type of the third argument to search can depend on the values of the first and second arguments. ... The Sage compiler uses an automatic theorem prover to statically verify that the specified ordering invariants on binary search trees are satisfied by these two functions -- no run-time checking is required. ... This precisely-typed BST implementation can inter-operate cleanly with dynamically-typed client code, while still preserving the ordering invariant on BSTs.

Sage constitutes an interesting design which manages to accommodate dependent types in a statically-typed setting; I'd be interested in hearing your opinion of it.

Oh, and a PS to Bob Harper: the "one true type that everything must have", is that the One True Type that in the Darkness Binds Them? (at run-time, presumably :) )

Lists and arrays are objects.


"Programmers are just humans: forgetful, lazy, and make every mistake imaginable. Therefore, it is the task of the designer to make a programming language which counters these deficiencies."

Wait, what? Aren't programming language designers also human?

Shouldn't one of the designer's top priorities, then, be to make sure the programmer can work around possible deficiencies of the language design?

Jorge Ortiz

Have you looked at Scala? It addresses most of the issues you bring up. While it does have null (for compatibility with Java), the idiomatic alternative is the Option type, which is the equivalent to Haskell's Maybe. Case classes and pattern matching are a statically checked way to define and operate on datatypes. Definitions (vals) and variables (vars) are syntactic equals, but convention dictates that vals are preferred over vars. Scala's lambdas aren't broken in the way that Python's are. There's even a special syntax for lambdas. For-comprehensions are like Python's list comprehensions but on steroids (closer in expressiveness to Haskell's do-notation).

Scala is statically checked, it has a powerful type system, ok type inference, and it's fully compatible with Java. It basically takes Java, drags it halfway to Haskell, halfway to ML, and ends up looking a little like Python.

Reply to 15: This sounds like a joke and I am not really sure what you want. Perhaps the fact that a general purpose language is Turing complete?



Jonathan Allen: what do the "left" and "right" methods return for a Java EmptyCollection instance? By default they will return NULL with Java inheritance, right? So what would you prefer them to return? An exception? Another EmptyCollection? An infinite tree of EmptyCollections?

Perhaps the following analogy is a good response to comment 8 by Radu:

Mathematicians are certainly intelligent enough to calculate 128+395 in their head or on paper. But it will be boring, error-prone and tedious. Fortunately, mathematicians have recognized that this task can be performed by a machine and they can now spend their time with more interesting things.

Similarly, programmers are certainly intelligent enough to carefully handle null pointers, but why bother? Let the language and compiler figure it out.

This is a reply to smallpaul, an angry person from Reddit. I have edited the post to make the idea with trees as subclasses more precise. Thank you for pointing out the lack of clarity in an insulting way.

Commenting further on smallpaul's, uhm, "review" on Reddit (comment 21): recursion or something equivalent to it (loops, for example) is necessary in a programming language that is Turing complete, but NULL's are not. This is the reason I would not advocate elimination of recursion. Although I do think it would be beneficial to separate structural recursion (a.k.a. primitive recursion), which is guaranteed to terminate, from general recursion. I just don't know how to present it in a language without making it annoying.
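One way to see the distinction: structural recursion consumes its argument along the datatype's own structure, so termination is guaranteed. A sketch in Python (the `fold` helper is made up), using trees encoded as nested tuples in the style of the Scheme example above:

```python
# Structural recursion over trees encoded with () for the empty tree and
# (k, l, r) for a node: each recursive call of `fold` works on a strictly
# smaller subtree, so it must terminate on any well-formed finite tree.
def fold(t, empty, node):
    if t == ():
        return empty
    k, l, r = t
    return node(k, fold(l, empty, node), fold(r, empty, node))

tree = (1, (2, (), ()), (3, (), ()))
total = fold(tree, 0, lambda k, l, r: k + l + r)   # sum of labels
size = fold(tree, 0, lambda k, l, r: 1 + l + r)    # number of nodes
```

General recursion, by contrast, carries no such guarantee; nothing stops a recursive function from calling itself on a larger argument.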


> There are probably other options. The first solution is horrible, as every Java programmer knows, because it leads to many NullPointerExceptions. The second solution is probably the most “object-orientedly correct” but people find it impractical, as it spreads code around in two classes. When I taught java I lectured the third solution, but that one has the big disadvantage that the programmer is responsible for checking every time whether a tree is empty or not.

What code does the EmptyTree class have? If we are emulating your Haskell example then all of these classes would have data and no code. Conversely, if we wanted to add code, we'd put it only on the NodeTree (given that the EmptyTree has no state and serves as simply a sentinel).

You can use these classes almost exactly like Haskell's constructors, and the consumer will be forced to cast to the right subclass (as in Haskell) before fishing out a value or a child node. The only issue is that the consumer must use instanceof before the cast rather than doing both in the same expression as Haskell does. Still, that's not a huge issue.

> we add a fourth attribute empty of type boolean which tells us whether the tree is empty

> When I taught java I lectured the third solution, but that one has the big disadvantage that the programmer is responsible for checking every time whether a tree is empty or not.

If you use this solution, then what do the "right" and "left" properties return?

Why is it easier or better to check for "is_empty" than to check "==null"? As long as the responsibility to check is on the programmer, what's the difference?

"... In fact, in order to speed up development (remember that the development time is expensive) the programmer should be told about errors without having to run the program and directing its execution to the place where the latest code change actually gets executed."

Are you suggesting that the expense of code written in a strongly typed language can be reduced by cutting down test coverage? Well, I hope not. Especially in combination with the classic FUD "eye surgery" example.

I'm always dazzled by this kind of argument from static-typing advocates. It completely ignores the nature of decision problems and creates the illusion that there has recently been significant progress towards a static type-checking/static-analysis system that makes "number of compiler errors in the program" converge with "number of bugs in the program" in any significant way. As long as that doesn't happen (and imho it never will), your test coverage needs to be high, no matter what language you use.

So if your test coverage needs to be high anyway, the extra hassle of a static type-checking compiler becomes a matter of taste rather than a matter of "extra security".

Simon Hawkin

Another way to represent empty trees (say) in Java would be to define an interface CanBeEmpty and derive trees from it. The method IsEmpty() is obvious, and the method Empty() would create an empty object (yes, a cast to the tree would be required; an inconvenience, but one resolved at compile time). This does not solve the fundamental problem of the value of an uninitialized tree, but it will help the programmer build trees properly.

Dear Sebastian: No, I am not suggesting that static typing can replace testing. And nowhere did I say that. But I also disagree with you when you say that testing can replace static type checking. How are you going to perform tests which GUARANTEE that type errors will not occur, other than by performing complete type checking?

Read my post again and try to understand it in a POSITIVE way (the incredible amount of hostility and misunderstanding is getting a bit tiresome). I said it is good to tell the programmer about errors as early in the development cycle as possible. You cannot possibly object to that. You say that static typechecking is a hassle. Perhaps it is. But imagine a complex program which has big startup costs. I make a very small change in the code (in a way that cannot be easily tested as a unit). Then I have to run the program just to be told something is wrong. Then I have to look at the code to discover myself that I made a trivial mistake. How is that less hassle than having the compiler immediately refuse my code change?

I write programs in statically and dynamically typed languages (mostly ocaml and Python), and have a pretty good idea about what each brings. I wonder how many of the angry readers here have active experience with a wide spectrum of languages, or how many have bothered to actually implement different kinds of languages. The types of statically typed languages are not a hassle, they are a blessing. The dynamic nature of Python is not such a big problem as the proponents of statically typed languages would have you think. (For example, in web development it's really nice that changes in the server code are immediately reflected on the web page--without recompilation and server restart.) In my opinion, at the end of the day the quality of the code depends more on the quality of its author than on the quality of the language he uses. But this does not mean that the quality of the programming language does not matter.

I teach the theory of programming languages, where I teach both statically and dynamically typed languages. I do NOT tell my students that dynamically typed languages are bad. I tell them that no one language is the solution to all programming tasks. I emphasize the benefits of static typing (obviously) but I do not belittle other options. Most of all I want the students to understand the reasoning behind design decisions. I also teach programming to beginners. Mostly because of my advocacy, this year my department switched from a statically typed language (Java) to a dynamically typed one (Python). I believe Python is better for teaching than Java. My limited experience with teaching ocaml to beginners is not sufficient to say what happens if you teach them ML or Haskell. I am most definitely not saying that Haskell is the solution to everything, and my actions clearly support that claim.

So please stop the religious wars. They are pointless. And one more thing: when you write your comments be polite, or I will reject the comment. If you think I said something incredibly stupid, then you should suspect there is a misunderstanding (either by you, or because I wrote something in an unclear way, or I made a mistake). You can always ask politely instead of making insulting claims right off the bat.

After wading through all the replies, I see several valid points in the comments. The most interesting so far is: how would you implement the Maybe datatype (with all its good properties) in a dynamically typed language? It's an interesting design question. So let's talk about that, shall we?


Here's a direct translation to Ruby:

class Maybe
  def nothing?
    self.class == Nothing
  end

  def just?
    not nothing?
  end
end

class Nothing < Maybe
end

class Just < Maybe
  attr_accessor :value

  def initialize(x)
    @value = x
  end
end

One of the problems with static typing is that there is no clear choice for language designers. If you choose dynamic typing you're done, and you're at a local maximum in the design space. If you choose static typing you have to design the type system very carefully. There is a whole spectrum between most safety and most flexibility.

Dynamic typing also allows you to solve problems like pickling/distribution in a more straightforward way.


Or this version:

class Maybe
  def nothing?
  end

  def just?
    not nothing?
  end
end

class Nothing < Maybe
  def nothing?
    true
  end
end

class Just < Maybe
  attr_accessor :value

  def initialize(x)
    @value = x
  end

  def nothing?
    false
  end
end


The title should be: "On programming language design or why Haskell is my favorite language". And seriously, (left-tree my-tree), (right-tree my-tree), (value my-tree), (is-empty? my-tree).

I tend to agree with ken.

Your article suggests to me a dilemma. If programmers are human and fallible, are not the designers, who are also human, just as capable of error? If what you seek is the one true language that can properly detect the intent of the programmer and protect them from their own mistakes, then I would be amazed if you ever discover it.

Instead, wouldn't a more natural solution be a language which gives the programmer the ability to modify it to suit the problem domain? To take your binary tree example, you chose languages whose primitives can be used to emulate a solution, but whose implementations weren't built to understand those compositions. But what if there were a language that allowed you to construct a grammar, and even a syntax, that would operate on binary trees as you define them?

Such a language already exists, and I believe it's called Common Lisp. Its advantage is that whatever construct or operator or even syntax the language designers failed to include can be added by programmers as they explore their problem domain.

I don't know Haskell at all, but I've always heard that it's not trivial to control the memory consumption of a Haskell program. If this is true, and if I apply your theory that programmers are "forgetful, lazy, and make every mistake imaginable", isn't that a strong reason not to use Haskell? Of course, if you know what you're doing, you probably can write Haskell programs that behave reasonably, memory-wise. But then, if you know what you're doing, you can write very nice programs in C too.

It wasn't and isn't my intention to be impolite. Sorry, if my previous comment was perceived this way. But to be perfectly clear: I read your essay, I understood your points (I guess) and I thoroughly disagree. Furthermore, the tone your last two essays are written in at least suggests that you're familiar with the concept of dealing some out:

In your paragraph "Undefined values...", you're not giving the creators of SQL, Perl and Python the benefit of the doubt. They didn't learn the lesson that introducing nil into a language is a mistake, period. Others might call this a design decision. A questionable one in your eyes, maybe, but oh well.

The reference post "Python's lambda is broken!" is basically a rant, ending with "What were the implementors of Python thinking?!". Considering that, you got many helpful replies pointing out the different implementations of closures in different languages.

If there is a way to read the final paragraph "When should mistakes be discovered?" other than as a dichotomy between (static==secure) and (dynamic==flexible but less secure), tell me how. It's difficult not to resolve the initial question into "In programs written in dynamic languages, errors are discovered by the user."

But towards your question concerning "Maybe". A function returning "Maybe Integer" is basically returning a tuple of a potential return value (type Integer) plus the potential information that something bad happened. Of course it is absolutely possible to implement this kind of thing in Ruby, as Jules pointed out. But the very concept is already implemented in all major languages as exceptions, in the form of begin/rescue in Ruby, or $x=f($y) or blah() in Perl. But asking how to enforce error handling up the call stack in a dynamic language is equivalent to asking how to enforce types in a dynamic language. You don't. This leads us right back to the central question:
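To make the comparison concrete, here is a sketch of the two encodings in Python (the helper names are hypothetical): a Maybe-style flagged return value versus an exception.

```python
# A fallible lookup, encoded two ways (hypothetical helpers).

def find_maybe(d, key):
    # "Maybe" style: return a (found, value) pair that the caller
    # must inspect before using the value.
    if key in d:
        return (True, d[key])
    return (False, None)

def find_or_raise(d, key):
    # Exception style: the failure case travels up the call stack
    # until someone rescues/handles it.
    if key in d:
        return d[key]
    raise KeyError(key)
```

As the comment notes, neither encoding can force a dynamic-language caller to handle the failure case; the difference lies only in how the failure is represented and propagated.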

Does solidly implemented software that is reasonably well covered by tests get even more secure by formally enforcing a type system? After years of software development, starting in Eiffel, then Java and C/C++, and now Ruby, JavaScript and C, I have absolutely no reason to assume so.

Therefore I consider it a matter of taste (which is the least amount of religiosity I can offer :).


In this essay, the comments and the reddit comments you write, I hear three different things. I'd like you to please select one of the following and say which one represents your feelings. I'd also suggest that if you want to be clear in your writing that you stick to the one you select in subsequent articles and be clear about it.

The three statements are:

  1. I prefer static type checking languages. It's a personal preference.

  2. Static type checked languages are better. They are always better.

  3. Statically type-checked languages are better for CERTAIN KINDS OF PROJECTS AND PROGRAMMERS, and dynamically typed languages are better for others.

With each post you shift between those three statements randomly, and in fact seem to switch around within this single post. At the beginning you say Python's design decisions are crazy and short-sighted. Then by the end you say that you find Python and Javascript better for certain tasks.

Reply to Freek (33): Yes, memory consumption in Haskell can be quite unpredictable. This has always been one of the main criticisms of functional languages by John Reynolds, a pioneer of programming language theory.

Reply to smallpaul (35): Statically typed languages are my personal preference, especially because of the kind of programming that I do. But I also write programs in dynamic languages and they're perfectly ok. In theory statically typed languages should always be a win, but practice shows that dynamically typed languages fare pretty well in the real world.

I need to do some real work, so I'll stop replying now. Feel free to rant and so on. When I have a bit of time I'll implement Python as I would have done it: the constraint is that it has to be a dynamically typed language with lots of reflection and maximum flexibility. See you then.

Hi Andrej, It seems I'm a little late to the party!

I would like to thank you, sincerely, no smirking, for your considered reply to my original comment. This is one of the few posts I have seen that contains, but is not limited to, a discussion of a good dynamic language versus a good static language.

I have not programmed in OCaml, Haskell, or ML, but I do see their examples on Rosetta Code, which makes comparison easy. I started my interest in Python in the mid-90s. I have personal experience of languages changing the way I think about problem solving after being exposed to the SKILL programming language (a Lisp dialect), but also of how a language must be more 'complete' or 'well rounded' before it would be prudent to invest too much time in it.

I think I should spend some time now investigating one of these functional, type-inferred, languages so I can more meaningfully join the debate.

I did welcome the addition of at least a syntax for adding type information to Python. But looking back, what I would take from my post is this:

"You would need to think of static/dynamic as a continuum where instead of having languages bunched at either end, you would have a smearing out towards the middle."

  • Paddy.

The problem with def f(n): return i + n is not as straightforward as you might think, even if this function is the only thing in a module, because you can do import mymodule; mymodule.i = 10; print mymodule.f(1). Obviously, doing exactly that is silly, but it hints at a fundamental property of Python objects (be they classes or modules) that is extremely useful.

Of course, it is most useful with classes because of the existence of standard interception vectors such as __getattr__, so one may argue that modules could be made different, at least requiring all attributes of a module to be explicitly declared (but allowing redefinition, as that is a widely used way of providing custom handlers, and just a plain easy way of quickly tweaking module behaviour). Yet there might be legitimate uses for the whole functionality (some kind of metaprogramming, maybe). I'm aware of one real case where it is used in the core library: sys.setdefaultencoding is deleted during startup; undoubtedly there are more.

So, basically, changing the way it works would complicate the language (making modules really different from all other kinds of objects) and violate the Python ideology of not restricting users from doing what they want, because they could come up with some wonderful unexpected ways of doing things. And the price is not so high, considering Python's primary area of application, which is not eye surgery.
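The late binding described here can be demonstrated without a file on disk by building a stand-in module object (the name mymodule is hypothetical):

```python
import types

# A throwaway module standing in for "mymodule" (hypothetical name).
mymodule = types.ModuleType("mymodule")
exec("i = 5\ndef f(n): return i + n", mymodule.__dict__)

assert mymodule.f(1) == 6
mymodule.i = 10              # rebinding a module attribute from outside
assert mymodule.f(1) == 11   # f looks i up in the module namespace at call time
```

The point is that f does not capture the value of i when it is defined; it looks the name up in its module's namespace at every call, which is what makes the external rebinding visible.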

[...] programming language design Andrej Bauer has a very nice article about language design on his blog, which happens to reflect a lot of what I believe about [...]

[...] and Learning April 12, 2009 Via Michael Feather’s blog: “On programming language design” I used to think that statically checked languages are better for teaching because they prevent the [...]

@Johnatan Allen: Of course you would use an empty collection instead of null! I was talking about how you implement a tree, which can be used to represent a collection, not about the interface of a collection. These are very different things.

@Heinrich Apfelmus: I agree. I think that Java should have a non-null data type, and its absence is one of the things that makes me prefer Haskell and ML. I'm not sure, though, how that contradicts what I said.

@Andrej Bauer: Forgetting that trees can be empty is not a "silly" mistake in my opinion. YMMV.


Haskell and ML have some really nice features; the Maybe type is one of my favorite constructs too. But these languages will never be very useful until they have a real tool chain. Anyone who has ever tried to find a bug in a large unfamiliar Haskell code base will know what I mean. It is shockingly time consuming, because the debugging tool chain is so primitive and things like dynamic linking have only experimental support. The reality is that, as fun and beautiful as it is to write in Haskell and as good as the language itself is, it is just not a terribly useful platform for everyday software development. The ecosystem around it is not filled with people who are interested in getting things done.

John Beattie

Is it true that you are against untyped None but are happy with typed None?

For some type A, Maybe A provides a typed value of Nothing. The compiler can use the type information to do some checking. OTOH, all the constructions you dislike, i.e. NULL, null, None, ... are untyped.

This point occurred to me when reading the paper on Adjunctions by Fokkinga and Meertens, 1993. At the end of that paper there is a discussion of an adjunction between the category of Total functions and of Partial functions.

I am not "against" anything because this is not politics. I think it is a design mistake to adjoin to a type a value None which signifies "error" or "missing" or "invalid" and then not to provide mechanisms which help the programmer catch all the places where the None must be considered as an option. It is a design mistake to think that programmers "can be a bit careful". Practice shows that they are just human beings.

If someone can tell me how this is to be achieved in a dynamic language, I'd like to hear it. I have an idea, but I need some time to implement it.

Sandro Magi

Many of the criticisms leveled against you charge you with a bias against dynamic typing, despite the fact that you're merely trying to suggest more robust, orthogonal language features that have nothing in particular to do with typing. I would suggest you edit your post to link to a safe dynamic language, like Pure, that includes these features to emphasize this point.

I believe a dynamic language can have strong type checking. The type checking can only be done in the context of whole-program analysis, of course. But we can leverage the fact that developers know which contexts they are developing for, and the compiler should use those contexts to do the type-checking analysis. Unfortunately, few IDEs can consider the greater context of the executing program, and all are still in the stone age when it comes to using the optimizing compiler's type analysis.

JavaScript is a good example: any particular web page specifies the source JavaScript and the order in which it is executed. In the context of a web page, many simple JavaScript type errors could be revealed by the compiler at compile time. Having the compiler look at all pages (all contexts) in a website would allow it to conclude even more about the correctness of the JavaScript they contain.

What I want is to combine languages simply so that languages serving as context for others can be declared succinctly. I also want an IDE that accepts notifications from the optimizing compiler about type analysis.

"People do not make bad design decisions because they are evil or stupid." Well, it's politically correct to say so, but in fact people make design mistakes for both of these reasons. More the latter than the former, which is the origin of Hanlon's Razor: "Never attribute to malice that which can be adequately explained by stupidity."

And of course, smart people are not immune to really stupid decisions (design or otherwise). Why just today I was stupid enough to point out on a public blog that programmers quite often make stupid design decisions...

I always thought it was Napoleon who said that. Interesting, Wikipedia explains well who said what. Napoleon talked about incompetence and Hanlon about stupidity.

Pascal Bourguignon

Re: the NULL, the problem is in the languages where it is not an object.

In Objective-C, nil behaves like an object: you can send it any message (the result is always nil).

Even better, in Common Lisp, NIL is an object like any other Lisp value, of class NULL, and on which you can define any method you want. So you can use NIL to represent the end of list and define:

(defgeneric len (object)
  (:method ((x null)) 0)                     ; (len nil) --> 0
  (:method ((x cons)) (1+ (len (cdr x)))))   ; (len '(a b c . nil)) --> 3

and then if you want to use nil to represent an empty tree node, you can define:

(defgeneric height (object)
  (:method ((x null)) 0)
  (:method ((x node)) (1+ (max (height (left x))
                               (height (right x))))))

and so on.

Seth Fogarty

I disagree with your second point, that you need inductive data types in an object-oriented language. Now, I LOVE inductive data types. But they are not necessary once you have option types/maybe objects (your first point).

Quite clearly, a tree is an object with a root, left, and right: where left is an option type (Maybe Tree) and right is an option type (Maybe Tree).

A full binary tree is an object with a root and maybe children: Maybe (Tree * Tree).

Reply to Pascal: Having a NULL object which accepts all methods and always returns NULL is even worse than getting an exception when a method is invoked on null. That's a terrific way of creating all sorts of well-hidden bugs.

Reply to Seth: Optional type (Maybe) is just a special case of a sum type. So we might as well have sum types then. And once we have sum types and (recursive) objects, we have the usual recursive data structures etc. Also, how would you treat the empty tree? It really is easiest to have a sum type so that a tree can be either Empty or a Node. That's awfully close to having inductive types, isn't it?

I am not saying that a language must have inductive types, but if you're going to work with inductive mathematical structures, inductive (or recursive) types will surely make your life easier.

Seth Fogarty

Reply to Andrej: I certainly wouldn't argue too much, as I find inductive types the most natural thing in the world. But for an object-oriented language, full sum types are probably a bit of overkill: you start overlapping ways to do things, and then there is more for the programmer to get confused about. A possibly empty tree is a Maybe Tree, and you keep it to the one instance of sum typing.

Seth, but a Maybe Tree is not a tree, it's a Maybe. How do you expect to call tree methods on it? And I disagree strongly. I believe object oriented languages can benefit greatly from sum types.

Looking back at this, I guess I should say that my initial comment was about the only two points on which I disagree. (Not as strongly as my language suggests, though. Perhaps I was trying to be intentionally provocative to stimulate discussion?) That may give the impression that I disliked the post but, actually, I agree with pretty much all else, and I like it.

So funny...“Programmers are just humans: forgetful, lazy, and they make every mistake imaginable.”

> Seth, but a Maybe Tree is not a tree, it's a Maybe. How do you expect to call tree methods on it?

Give them to the Maybe, which will forward the call (or not) and wrap the return value in a Maybe itself, essentially implementing method-level lifting natively in the language. You might want a different operator for lifted method calls, in order to discriminate between calls on the Maybe and lifted calls on whatever is inside it.
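One way to sketch such lifting, in Python for concreteness (these classes are hypothetical, not an existing library): forward unknown method calls to the wrapped value and re-wrap the result, while Nothing swallows every call.

```python
# Method-level lifting on a Maybe (hypothetical classes).

class Nothing:
    def __getattr__(self, name):
        # Any method call on Nothing is a no-op that yields Nothing again.
        return lambda *args, **kwargs: self

class Just:
    def __init__(self, value):
        self.value = value

    def __getattr__(self, name):
        # Forward the call to the wrapped value and re-wrap the result.
        method = getattr(self.value, name)
        return lambda *args, **kwargs: Just(method(*args, **kwargs))
```

For example, Just("tree").upper() yields Just("TREE"), while Nothing().upper() yields another Nothing; the commenter's point about wanting a separate operator for lifted calls shows up here as the inability to call methods of Just or Nothing themselves once everything is forwarded.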


[…] high-level languages like Python or MATLAB are optimized for humans, while lower-level languages, like Fortran and C, are optimized for […]

Saeed Baig

"We should design the language so that the default case is an immutable value. If the programmer wants a mutable value, he should say so explicitly. "

Just wanted to say that another language that does exactly this is Rust. Example of variable declaration and initialisation:

let x = 5;

x there is immutable. If you want to make it mutable, you have to add the "mut" qualifier like so:

let mut x = 5;

How to comment on this blog: At present comments are disabled because the relevant script died. If you comment on this post on Mastodon and mention me, I will gladly respond. You are also welcome to contact me directly.