The Pinocchio Problem
I only permit myself about 4 hours a month for blogging. That's been my rough budget for the past year or so. Normally I have all sorts of topics I'd like to write about, things I'm kicking around, and it's not too hard to pick one. But for the past month, I've only had one thing on my mind, and I've been going nuts trying to find a way to voice it — you know, to present it in a nice, concise way so you can gulp it down. And so far I'm failing. I think that means I don't understand it very well. Maybe trying to write it down will help.
It's about designing software. See, it seems like there's a good way to design software. A best way, even. And nobody does it. Well, a few people do, but even in those rare instances, I think it's accidental half the time.
I've been thinking about this problem on and off for quite a while, and wouldn't you know it, suddenly 18 years have gone by and I still can't quite articulate this... well, design principle, if that's what it is. But for the past month, I feel like I've been getting closer. Maybe you can help! I'll tell you what I know, and you can tell me what you know, and maybe we'll figure out something extraordinary.
By way of setting context for today's blog, I'll summarize everything I've ever written to date in one sentence: I think most software is crap. Well, that's not quite right. It's fairer to say that I think all software is crap. Yeah. There you have it, Stevey in a Nutshell: software is crap.
Even so, I think some software systems are better than others: the producer of the crap in question swallowed some pennies, maybe, so their crap is shiny in places. Once in a while someone will even swallow a ruby, and their crap is both beautiful and valuable, viewed from a certain, ah, distance. But a turd is still a turd, no matter how many precious stones someone ate to make it.
Lovely metaphor, eh? Made it myself!
Whenever I reflect on the software systems I like best — the ones that feel like old friends, like nice places to live — I see that they have some properties in common. I'll tell you what they are, at least the ones I've noticed, in a minute. Promise. But it's not the whole story. Once you add these properties to a software system, if you do them right, then you usually get a system that's as good as we can make them today. But they're still crap!
The real problem comes when I start thinking about what would happen if we could build software systems that aren't crappy. That thought exercise raises all sorts of interesting questions, and I don't know the answers to any of them. But I'll throw them out there too, maybe, as long as I don't go over my 4-hour time limit. Life beckons, and all that.
Favorite Systems
The big realization I had, sometime in the last month or so, is that all of the common properties of my favorite software systems can be derived from a single root cause: one property, or design principle, that if present will cause software to take on the right characteristics automatically.
What are my favorite software systems? Here are a few of the very best: Unix. Windows XP. Mac OS X. Emacs. Microsoft Excel. Firefox. Ruby on Rails. Python. Ruby. Scheme. Common Lisp. LP Muds. The Java Virtual Machine.
A few more that just barely make the cut, for now: Microsoft Word. OmniGraffle Pro. JavaScript. Perforce.
Some that I think would make the cut if I learned how to use them effectively: The GIMP. Mathematica. VIM. Lua. Internet Explorer.
Most popular software systems out there don't make the cut. Most of them are quite useful, but I think they lack the essential properties of ideal software design. Examples: IntelliJ. Eclipse. Visual Studio. Java. C++. Perl. Nethack. Microsoft PowerPoint. All Nintendo and PlayStation console games. Nearly all PC games, with the notable exceptions of Doom and Quake. Most web applications, including highly useful ones like Amazon.com or Google Maps.
I won't keep you in suspense. I think the most important principle in all of software design is this: Systems should never reboot.
If you design a system so that it never needs to reboot, then you will eventually, even if it's by a very roundabout path, arrive at a system that will live forever.
All the systems I've listed need to reboot occasionally, which means their practical lifespan is anywhere from a dozen to a hundred more years. Some of them are getting up there — lots of them are in their twenties and thirties now, and there are even a few in their early forties. But they're all still a far cry from immortal.
I think the second most important design principle, really a corollary to the first, is that systems must be able to grow without rebooting. A system that can't grow over time is static, so it really isn't a system at all; it's a function. It might be a very complex function with lots of possible inputs and outputs. It might be a very useful function, and it might live for a long time. But functions are always either replaced or subsumed by systems that can grow. And I've come to believe, over nearly two decades of thinking about this, that systems that can grow and change without rebooting can live forever.
Essential Features
Here are some of the properties shared by the best software systems in the world today. Not all systems have all the properties I'll list here; I think very few of them have the full superset. I think you'll see that the more of the essential properties a system has, the more powerful, important, and long-lived it is.
Note: most of these are features for programmers. Features for non-technical end-users don't contribute to a system's lifespan. In the fullness of time, I believe programming fluency will become as ubiquitous as literacy, so it won't matter.
First: every great system has a command shell. It is always an integral part of the system. It's been there since the system was born. The designer of the system couldn't imagine life without a command shell. The command shell is a full interface to the system: anything you can do with the system in some other way can also be done in the command shell. Great command shells are a big topic in their own right (most of the essential properties of living systems are, come to think of it.) A rough sketch: a great command shell always has a command language, an interactive help facility, a scripting language, an extension system, a command-history facility, and a rich command-line editor. A truly great command shell is an example of a living system in its own right: it can survive its parent system and migrate elsewhere.
All existing command shells are crap, but they are an essential component of building the best software that can be built today.
Emacs can be thought of as the ultimate command shell: what happens when command shells are taken to their logical extreme, or at least as far as anyone has taken the idea to date.
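Just to make that rough sketch a little more concrete, here's roughly what the skeleton of such a shell looks like in Python, using the standard cmd and readline modules. Consider it a toy, not a recipe: the greet and stats commands, and the system_state dictionary, are made-up placeholders for whatever a real system would actually expose.

```python
# A toy command shell: a command language, interactive help (via "help" or
# "?"), and line editing with history courtesy of readline. The commands
# here are hypothetical stand-ins, not part of any real system.
import cmd
try:
    import readline  # noqa: F401  (enables history and line editing where available)
except ImportError:
    pass

class SystemShell(cmd.Cmd):
    intro = "Welcome to the system shell. Type help or ? to list commands."
    prompt = "(sys) "

    def __init__(self, system_state):
        super().__init__()
        self.state = system_state  # the live system this shell fronts

    def do_greet(self, arg):
        """greet NAME -- say hello; a stand-in for a real command."""
        print(f"Hello, {arg or 'world'}!")

    def do_stats(self, arg):
        """stats -- dump a little of the live system's state."""
        for key, value in self.state.items():
            print(f"{key}: {value}")

    def do_quit(self, arg):
        """quit -- leave the shell (the system itself keeps running)."""
        return True

if __name__ == "__main__":
    SystemShell({"uptime_seconds": 42, "plugins_loaded": 3}).cmdloop()
```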
Command shells aren't the only common feature of the greatest software systems in the world, though, so we'll have to leave them for now.
Great systems also have advice. There's no universally accepted name for this feature. Sometimes it's called hooks, or filters, or aspect-oriented programming. As far as I know, Lisp had it first, and it's called advice in Lisp. Advice is a mini-framework that provides before, around, and after hooks by which you can programmatically modify the behavior of some action or function call in the system. Not all advice systems are created equal. The more scope that is given to an advice system — that is, the more reach it has in the system it's advising — the more powerful the parent system will be.
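Here's a minimal sketch of the before/around/after idea in Python. It's not Lisp's advice facility, and it's nowhere near as capable; it just illustrates the core mechanism, which is wrapping an existing function without touching its source. The render_page function is invented for the example.

```python
# Minimal before/around/after advice: wrap a live function with extra
# behavior without editing or recompiling it. Purely illustrative.
from functools import wraps

def advise(target, before=None, after=None, around=None):
    """Return target wrapped with optional before/after/around advice."""
    @wraps(target)
    def advised(*args, **kwargs):
        if before:
            before(*args, **kwargs)
        if around:
            result = around(target, *args, **kwargs)  # around decides if/how to call
        else:
            result = target(*args, **kwargs)
        if after:
            after(result, *args, **kwargs)
        return result
    return advised

def render_page(name):
    return f"<html>{name}</html>"

# Advise the live function: log before, audit after; render_page's source
# is untouched.
render_page = advise(
    render_page,
    before=lambda name: print(f"rendering {name}"),
    after=lambda result, name: print(f"rendered {len(result)} bytes"),
)

print(render_page("home"))
```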
Ruby on Rails has a minor advice system, which Rails calls "filters". It is (only) capable of advising controller actions, which are special functions that render page requests. You can't put a before-, after-, or around-filter on any old API function in the system, although Ruby itself makes this possible to some extent through its metaprogramming facilities. But it's insufficient for advice to be merely theoretically possible. Advice must be built into the system from the ground up, and must be exposed as a first-class, well-documented programmer interface. Advice is very, very powerful. Even the simple action-filtering system in Rails offers amazing flexibility; it's hard to imagine writing a Rails app without it.
Emacs has a sophisticated advice system. Common Lisp has, arguably, the most powerful advice system in the world, which contributes in no small part to the power of the language. Aspect-Oriented Programming is a herculean attempt to bring advice to the Java language, but due to fundamental limitations of Java, it has to be implemented as a language extension with its own compiler and other language tools, which has severely hampered adoption. Another impediment is that Java programmers prefer to write dead systems, and any hint of a breath of life in the system bothers them greatly.
To be sure, it bothers me too. Living software is a little scary. It's no surprise that most programmers prefer writing marionettes instead of real people; marionettes are a lot easier to manage. But I think living software is more interesting, and more useful, and quite frankly it's an inevitability in any event, so we might as well strive to understand it better. To me, that means building it.
Moving right along, world-class software systems always have an extension language and a plug-in system — a way for programmers to extend the base functionality of the application. Sometimes plugins are called "mods". It's a way for your users to grow the system in ways the designer didn't anticipate.
Microsoft Excel has an excellent mod system. It's quite a remarkable programming framework, verging on being a platform in its own right. Like all the best mod systems, it's tiered, with the easy entry point being Excel macros, working its way up through a full COM interface that can be scripted using VB or even Python or Ruby: any language with COM bindings.
One of the killer features that made Doom and Quake so popular was their mod systems. Doom had some basic facilities for scripting your own levels, sprites, and even game logic. Quake had QuakeC, which is still, I think, the gold standard for PC game scripting, but my info is a little dated, since for the past five years or so I've played mostly console games, which are sadly all deader than a coffin-nail.
The very best plug-in systems are powerful enough that the entire application can be built in its own plug-in system. This has been the core philosophy behind both Emacs and Eclipse. There's a minimal bootstrap layer, which as we will see functions as the system's hardware, and the rest of the system, to the greatest extent possible (as dictated by performance, usually), is written in the extension language.
Firefox has a plugin system. It's a real piece of crap, but it has one, and one thing you'll quickly discover if you build a plug-in system is that there will always be a few crazed programmers who learn to use it and push it to its limits. This may fool you into thinking you have a good plug-in system, but in reality it has to be both easy to use and possible to use without rebooting the system; Firefox breaks both of these cardinal rules, so it's in an unstable state: either it'll get fixed, or something better will come along and everyone will switch to that.
What's really amazing to me is that there isn't even a special bypass for Firefox extension developers. Their development cycle is amazingly painful, as they have to manually re-install the plugin (using the GUI) on every change. With some symlink trickery you can get around about half the steps, but the community actually frowns on this! They evidently feel that if the user is going to experience the installation pain one time, then the programmer must experience it every single time a line of code changes, as a perpetual reminder that Firefox has a crappy plug-in system that the programmer can't actually do anything about. Fun!
Firefox was given a gift by a programmer named Aaron Boodman, now at Google. The gift was GreaseMonkey, which provides an alternate way of writing Firefox extensions. Unlike Firefox's regular plug-in system, GreaseMonkey extensions can be installed and updated without rebooting Firefox, and they're relatively easy to write. This gift has given Firefox new legs. I doubt most Firefox developers (let alone the Firefox user community at large) fully appreciate the importance of GreaseMonkey to the long-term survival of Firefox.
Interestingly, GreaseMonkey is implemented as a Firefox plugin that offers its own plugin system. This is a common pattern in plug-in systems: some plugins will grow large and configurable enough to be viewed as standalone applications. Emacs has many such plugins: the advice package is a good example. For that matter, the Rails filtering system is also implemented in the style of an optional system extension. Plugins, like other software systems, also have a lifespan that's determined by how well they incorporate the features I'm discussing today. They eventually need command shells, advice, extension languages, and so on. However, plugins usually get these things for free by virtue of having been built within a living system.
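For flavor, here's a rough sketch in Python of a plug-in host that can pick up new or changed plugins while the process keeps running, which is exactly the property the stock Firefox extension cycle lacks and GreaseMonkey restores for its scripts. The plugins directory and the register() convention are assumptions invented for the example.

```python
# A plug-in host that (re)loads plugin files in place -- no reboot of the
# host process required. Directory layout and register() hook are invented.
import importlib.util
import pathlib

class PluginHost:
    def __init__(self, plugin_dir="plugins"):
        self.plugin_dir = pathlib.Path(plugin_dir)
        self.plugins = {}  # name -> loaded module

    def load_all(self):
        """(Re)load every *.py file in the plugin directory, in place."""
        for path in self.plugin_dir.glob("*.py"):
            name = path.stem
            spec = importlib.util.spec_from_file_location(name, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)   # runs the plugin's top level
            if hasattr(module, "register"):
                module.register(self)         # plugin hooks itself into the host
            self.plugins[name] = module

if __name__ == "__main__":
    host = PluginHost()
    host.load_all()   # call again later to pick up edits; no reboot needed
    print(sorted(host.plugins))
```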
Building extensible systems is much harder than the alternative; it's commonly estimated to be three to five times harder than building the equivalent non-extensible version of the system. And that's when you add the plug-in system up front, which is the easy case. Adding extensibility to an existing legacy system is fantastically difficult, and requires massive refactoring of the system. The refactoring needed is rarely of the pretty little automatable type currently so fashionable in Java circles. It typically requires effort of the same order as a full rewrite of the system, although as with all refactoring, the risk can be mitigated by putting thorough unit tests in place first.
Many software systems these days are accessible only remotely, over a network, e.g. via a browser and HTTP. Some of the bigger companies that have built such systems, including Yahoo!, Amazon.com and eBay, have begun to realize that programmer extensibility is a critical feature for the long-term survival of their systems. They have begun selectively opening up access to their internal systems, usually via web service interfaces. This provides a certain level of extensibility: in particular, it enables independent software developers to create their own interfaces to the system. Over time, I think a key differentiator in the web-application space will be the quality of the programmer access to the systems backing the web apps.
Plugin systems have security issues. They have usability issues. They have namespace issues. They have dependency-graph issues. They have backwards-compatibility issues. Plugin systems are damn hard to do at all, let alone do well. And they are absolutely essential for ensuring the long-term survival of software systems.
What other features do great software systems have?
I think one important element, at least today, is that they either have to be a killer app, or they need one. Every world-class software system is, by its very nature, a platform. If you have a command-shell, and an extension language with advice, and a plug-in architecture, then you've already got the makings of a platform. But you need to give users some reason for using it in the first place. So the GIMP is for editing images, Eclipse is for editing Java code, Emacs is for editing plain text, Rails is for building web apps, Firefox is for browsing, Python is for scripting, Lua is for embedding, Mathematica is for math, and so on. Each of them can be stretched into other domains, but they're strongest when they're being used in their niche.
The most generic software systems, I think, are operating systems and programming languages, and even they have a primary purpose. OSes are primarily for resource management, and the most successful programming languages have usually carved out some niche: you use C++ when you need speed, Perl when you need Unix system administration, and Java when you need an especially fat API to impress a client. JavaScript has a lock on browser programming, Lisp has been mostly carried by Emacs all these years, and Ruby has taken off mostly due to Rails.
So software systems need a niche. Someday there may be a generic software system so powerful and well-designed that it's the best system to use for literally everything. Maybe you'll be the one to build it.
The last big feature I'll enumerate today, and it's just as important as the rest, is that great software systems are introspective. You can poke around and examine them at runtime, and ideally they poke around and examine themselves as well. At the very least they should have some basic health monitoring in place. Even in a relatively small system with lots of static checking, you still need monitoring on things like input and output queues, and for large systems, you need to monitor just about everything, including the monitoring systems. (If you don't have meta-monitoring, one bad thing that can happen is that your system gets into a state where all the health checks are returning OK, but the system is in fact totally wedged.)
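As a concrete illustration, here's a small Python sketch of health monitoring with a layer of meta-monitoring on top: each check reports OK or not, and a watchdog verifies that the checks themselves have actually run recently enough to be trusted. The queue-depth check and its threshold are invented for the example.

```python
# Health checks plus a check on the checks. If a check stops being run,
# its "OK" can't be trusted, which is the meta-monitoring failure mode
# described above. Everything here is illustrative, not a real framework.
import time

class HealthMonitor:
    def __init__(self, staleness_limit=60.0):
        self.checks = {}        # name -> callable returning True if healthy
        self.last_run = {}      # name -> timestamp of the check's last run
        self.staleness_limit = staleness_limit

    def register(self, name, check):
        self.checks[name] = check

    def run_checks(self):
        results = {}
        for name, check in self.checks.items():
            try:
                results[name] = bool(check())
            except Exception:
                results[name] = False
            self.last_run[name] = time.time()
        return results

    def meta_check(self):
        """Return the names of checks that haven't run recently enough."""
        now = time.time()
        return [name for name in self.checks
                if now - self.last_run.get(name, 0) > self.staleness_limit]

def input_queue_depth():
    return 42  # stand-in for a real measurement

monitor = HealthMonitor()
monitor.register("input_queue", lambda: input_queue_depth() < 10_000)
print(monitor.run_checks())  # {'input_queue': True}
print(monitor.meta_check())  # [] -- the checks themselves are fresh
```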
Introspection can (and should) take many different forms, not just health monitoring. System administration tools and diagnostics are one kind of introspection; single-step debugging is another; profiling is yet another. Dynamic linking is still another: the system needs to be able to (for instance) run bytecode verifiers to make sure the code being loaded passes basic security checks. Introspection usually comes with a performance penalty, so many programmers avoid it at runtime, or they make do with whatever minimal introspection facilities the underlying system provides (e.g. RTTI or Java's reflection facility). If you trade off introspection for speed, you're carving years or even decades off the lifespan of your software system. If you're a consultant, or you just want to get it working so you can move on to something else, then maybe it doesn't matter. But I think most programmers prefer to work on systems that will last a long time.
There's a long tail of other features shared by the best software systems, but at this juncture I think it's best to talk a bit about how they all derive from not rebooting, at which point you should be able to identify the other common features easily enough.
Rebooting is Dying
I can't do this topic justice today, partly because it's a big topic, and partly because I just don't understand it very well. So the best I can do is sketch the outlines of it, and hopefully you'll get the picture.
First, let's get some philosophical perspectives out of the way. I take a very broad view of software: most of it isn't man-made. It's only the man-made stuff that's crap. There's a lot of pretty good naturally-occurring software out there. I think that the workings of our brains can mostly be considered software, and the workings of our bodies are definitely software. (Heck, we can even see the assembly-language instructions in our DNA, although we haven't quite figured out the full code yet.) So people are carrying around at least two software systems: body and mind.
I also think of stable ecosystems as being software systems, and for that matter, so are stable governments. So, too, are organizations of people, e.g. companies like the one you work for. Unless I grossly misunderstood Turing's position, software is anything that can produce computation, and computation only requires a machine with some simple mechanisms for changing the machine's state in deterministic ways, including branches, jumps, and reading/writing of persistent state, plus some set of instructions (a program) for playing out these operations.
I'm assuming my definition of software here is self-evident enough that I don't need to defend the position that not all software is man-made.
So my first argument against rebooting is that in nature it doesn't happen. Or, more accurately, when it does happen it's pretty catastrophic. If you don't like the way a person works, you don't kill them, fix their DNA, and then regrow them. If you don't like the way a government works, you don't shut it down, figure out what's wrong, and start it back up again. Why, then, do we almost always develop software that way?
My next argument against rebooting is from plain old software, the kind programmers make today. Take off your programmer's hat for a moment, and think about rebooting as an end user. Do you like having to reboot your software? Of course not. It's inconvenient.
Do you remember way back, maybe ten years ago, when Microsoft made the decision to eliminate all reboots from Windows NT? I've never worked at Microsoft and I heard this thirdhand, so I don't know exactly how it went down, but my understanding is that some exec got pissed off that seemingly every configuration change required a system reboot, and he or she orchestrated a massive effort to remove them all. When I heard about it, they were down to just five scenarios where a full restart was required. Just think of the impact that change had on the U.S. economy — millions of people saving 5 to 30 minutes a day on downtime from Windows reboots. It's just staggering. If only they could have eliminated the blue screens, it would have been pretty darn good software, wouldn't it?
Linux has gone through some growing pains here, and it's finally getting better, but I would guess it still has many more reboot scenarios than Windows does today.
However, I think the reboot problem may be much deeper than a little inconvenience. Viewed from a radical, yet possibly defensible perspective, a reboot is a murder.
How so? Well, one of the biggest unsolved open questions of all time is the question of consciousness: what the heck is it? What does it mean to be conscious? What exactly does it mean to be self-aware? Are pets conscious? Insects? Venus fly-traps? Where do you draw the line? At various times in history, the question of consciousness has gone in and out of fashion, but for the past few decades it's been very much in, and philosophers and cognitive scientists are working together to try to figure it out. As far as I can tell, the prevailing viewpoint (or leading theory, or whatever) is something like this: consciousness is recursive self-awareness, and it's a gradient (as opposed to a simple on/off) that's apparently a function of how many inputs and/or recursive levels of self-awareness your brain can process simultaneously. So pets are conscious, just not as much as people are. At least that's what I've gathered from reading various books, papers and essays about it written over the past 30 years or so.
Also, a particular consciousness (i.e. a specific person) is localized, focused uniquely at any given time in a manner similar to the way your vision can only be focused on a specific location. That means if you clone it, you've created a separate person. At least that seems to be what the big thought leaders like Daniel Dennett are saying. I.e. if you were somehow able to upload your brain's software to some other blank brain and "turn it on", you'd have a person just like you, but it wouldn't be you, and that person would immediately begin to diverge from you by virtue of having different experiences.
Well, if any of this is true (and weenie academics are welcome to come poke holes in minutiae of my exposition, with the understanding that pedantry is unlikely to change the big picture here very much), then software can be conscious. It's debatable whether any software in the world today is conscious enough to present any ethical problems with shutting it off, but you have to wonder how far off it really is.
For instance, I used to think of my dog Cino (a shih-tzu) as a relatively simple state machine: pee, poo, sleep, eat, play, lick nose for hours. As I got to know him, I gradually discerned many more states, hundreds of them, maybe even thousands, but I'm still not entirely convinced that his behavior isn't deterministic. I love him to pieces, so I'll give him the benefit of the doubt and assume he has free will. But it's clear that you can predict his behavior at least 90% of the time without much effort. I imagine the same is true for me!
A dog seems more complex than a mouse, which is in turn probably a lot more complex than a cockroach, and at some point down the chain (maybe at ants? flatworms? single-celled organisms?) it seems like there's going to be a point where the animal is completely deterministic.
At least it seemed that way to me at first, but I now think otherwise. I personally believe that all creatures can exhibit some nondeterminism, precisely to the extent that they are composed of software rather than hardware. Your hardware is definitely deterministic. Anything Cino has done since he was born, without needing to be taught, is either hardware or firmware (i.e. built-in software that can't be unlearned or changed). Routines for going to pee (and knowing when it's time), sneezing and other reflexes, processing sensory information, digesting food, and thousands of others are all hardcoded in either his brain firmware or his body firmware, and it works predictably; any observable differences are attributable to his software responding to the current environment.
In other words, I think that both consciousness and free will (i.e. nondeterminism) are software properties.
Nobody's going to shed a tear over the deliberate slaying of an amoeba. Well, most people won't. Similarly, I don't think it makes sense to get worked up when your "Hello, World!" process commits ritual suicide by exiting main() after emitting its one message to the world. But I think we've established that each invocation of your "Hello, World" program is creating a separate instance of a minute consciousness.
Well... sort of. A "Hello, World" program, which has no loops or branches, can't exhibit any nondeterminism (unless it's imposed externally, e.g. by a random hardware error), so you can think of it as pure hardware implemented in software. But somewhere between Hello, World and HAL 9000 lies a minimal software program that can be considered to possess rudimentary consciousness, at which point turning it off is tantamount to killing it.
Do we have any software that complicated today? Complex enough and self-aware enough to be considered conscious from an ethical standpoint? Probably not. But I think we'll get there someday, and by then I sure hope we're not developing software the way we do today, with a compile/reboot cycle.
I think of most of the software we build today as being like setting up dominoes. It's Domino Design. You design it very carefully, and it runs once, during which time the dominoes all fall in a nice, predictable order, and if necessary you pick them all up again. The end result can be much fancier than you can achieve with dominoes — think of all the frames in the movie Toy Story, for instance. Not much different, but definitely fancier.
Even really complex pieces of software like search engines or e-commerce systems are generally approached using Domino Design. If you program primarily in C++ or Java, you're almost certainly guilty of it.
Rebooting a domino system is unavoidable. The notion of rebooting it for every change, every upgrade, is hardwired into the way we think about these systems. As for me, I don't think dominoes are very interesting. I'd personally rather play with an amoeba. And I'd think twice before squishing it and finding myself a new amoeba, if, for instance, it refused to learn to run a maze or something.
DWIM and QWAN
My thinking in this section is a bit fuzzy, so I'll keep it short. DWIM is a cute acronym meaning "Do What I Mean". It was coined by Warren Teitelman for Interlisp back in the 1960s, but these days it's the Perl community that bandies it about the most.
The idea behind DWIM is that perfect software is like a grateful djinn in a bottle who is granting you a wish. I'm not talking about those cheap-ass wishes you get from fountains or shooting stars, either. Everyone knows you have to phrase penny-wishes in legalese because the Wish Fairy gives you literally what you asked for, not what you meant. Just like all the software you use. In contrast, the DWIM genie is on your side, and is tolerant of grammatical and logical mistakes that might otherwise result in getting the wrong thing. ("A guy walks into a bar with a miniature piano and a twelve-inch pianist...")
DWIM is ostensibly what every programmer is trying to build into their software, but it's almost always done by guesswork, making assumptions about your end-users. Sometimes it's done by collaborative filtering or some other algorithmic approach, but that's error-prone as well. The only way to make DWIM appear more than fleetingly and accidentally is to create truly intelligent software — not just conscious, but intelligent, and hopefully also wise and perceptive and (gulp) friendly.
But that's where it starts to get a little worrisome, since everyone knows that "intelligent" doesn't necessarily imply sympathy to your cause, especially given that your cause may well be stupid.
So do we want DWIM or no? On the one hand, we want our software to be so good that it predicts your every desire and responds to all your needs, so you can live that hedonistic lifestyle you've always craved. On the other hand, it appears that the only way to do that is to create software that could easily be smarter than we are, at which point we worry about being marginalized at best, or enslaved and/or exterminated at worst.
I think this explains, at least in part, why most of our industry is trying to achieve pseudo-DWIM by building bigger and bigger dead systems. (And what better language for building big, dead systems than Java?) Dead systems are safely deterministic and controllable.
Let's be slightly less ambitious for a moment, and assume for the sake of argument that building conscious, intelligent software is unlikely to happen in our lifetime. Is it still possible to build software that's better than the run-of-the-mill crap most of us are churning out today?
There's another genie named QWAN, invented by the architect Christopher Alexander; it stands for "Quality Without a Name", and (much like the elusive thesis of this essay) it's something he's known about most of his life, and has been trying to both articulate it and find a way to produce it reliably for the entire time. In short, QWAN is an all-but intangible "I know it when I see it" property of certain spaces and/or structures that appears to be (a) most achievable via organic growth, and (b) mysteriously dependent on some hardware or firmware in the human brain, since it triggers warm fuzzies for just about everyone, but nobody really knows why.
At some point in the murky past, some software professionals realized that software can also have QWAN, and in the interim it's become clear that it's just as difficult to pin down in software as in physical architecture.
However, I'll assert that QWAN is most apparent in exactly the systems I listed as my favorites earlier in this essay, and it's mostly absent in the systems I said lacked the essential properties of living software. (I then enumerated a few of the big ones and did some vague hand-waving about the rest of them, in case you've forgotten.)
People have observed that Emacs has QWAN: a nice, organic, comfortable rightness that fits like a pair of old jeans, or a snug warm chair in a library by a fire. It's very right-brain stuff we're talking about here, all touchy and feely and sensitive: exactly the kind of thing that programmers are often so lousy at, so it's no wonder we don't know the recipe for building it into our software. But unlike with UI design, software QWAN can only come from the programmer, who is playing the roles of interior decorator, head chef, and ergonomic consultant for all the programmer-users of said software.
That's why I think most software is crap. I'm not talking about the end-user experience, I'm talking about the programmer experience (if it even offers one). Design is an art, not a science, and most of us aren't born artists. So it's actually pretty remarkable when QWAN evidences itself even a little in a software system.
Note, incidentally, that the end-user experience and programmer experience offered by a software system are almost totally orthogonal. Visual Studio actually offers a pretty slick end-user experience, where the end-user in question is a programmer trying to write some C++ or C# code (as opposed to trying to build tools to help write C++ or C# code, or to help automate away other common tasks.) Emacs offers a completely hostile end-user experience and a nearly unparalleled programmer experience. Eclipse is somewhere in the middle in both dimensions, but is definitely weighted more towards the end-user programmer than the programmer-user.
Finally, there are some aspects to QWAN that probably can't be captured in any recipe. Just adding my ingredients (a command shell, an extension language and advice system, etc.) will improve your software system, but it's no guarantee that QWAN will appear. Some of it just boils down to taste. And since tastes differ, and QWAN is nowhere near as smart as its cousin DWIM, one person's QWAN may be another's software hell.
I still think we should try to build it.
The Role of Type Systems
OK. I've finally built up enough context to be able to explain my position on type systems.
In short, type systems are for building hardware. Every software system has a machine beneath it — a stack of machines, actually, a whole heap of them, from the von Neumann machine with its CPUs and registers and buses down through the semiconductors and doping barriers to the atoms and quarks and their quantum-mechanical underpinnings.
But I take a broader view of hardware. Hardware is anything that produces computations deterministically and is so inflexible that you must either break it or replace it wholesale in order to change it. So it's not just the physical hardware made of atoms; in my view the hardware of a machine also encompasses any software that cannot be changed without a system reboot.
The notion of a type system is closely tied to the notions of "compile time" and "static checking". I tend to take a very broad view of type systems; I consider all constraint systems imposed on computation to be types of type systems, including things like scoping and visibility constraints (public/private/protected/friend/etc.), security constraints, even the fundamental computing architecture of the program as defined by its complete set of function signatures. My view of "static" typing actually includes some stuff that happens at runtime.
But most people, when they talk about static type systems, are talking about the kinds of validations that occur at compile time (with the help of type annotations that are part of the language syntax, and optionally type inference that's run on the IR built from the AST) and which evaporate at runtime.
Static type systems have several benefits over deferring the constraint checking to runtime. For one thing, they can result in faster code, because if you do your checking up front, and you can guarantee the system won't change at runtime, then you don't need to do the checks at runtime. For another, static constraints can often be examined and reported on by tools other than the compiler (e.g. your IDE), which can make it easier to follow the program flow.
And that's all just ducky.
There's nothing wrong with static type systems. You just have to realize that when you use them, you're building hardware, not software.
A static type system has absolutely no runtime effect on a correct program: exactly the same machine code is generated regardless of how many types you choose to model. Case in point: C++ code is no more efficient than C code, and C lacks any but the most primitive type system. C isn't dynamically typed, but it's not particularly statically typed either. It's possible to write fast, robust programs in straight C. (Emacs and the Linux kernel are two fine examples.)
The proper use of a static type system is to freeze software into hardware. Whenever a particular set of deterministic operations (e.g. OpenGL rendering primitives) becomes ubiquitous enough and stable enough that it's worth trading off some flexibility to create a high-performance machine layer out of the API, we move it into hardware. But it's mostly a performance optimization (and in very small part, a standardization move, I suppose) that incurs a dramatic penalty in flexibility.
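Python's optional type annotations are a modest illustration of this "selectively freeze it" idea, assuming you run an external static checker such as mypy over the code: the annotations change nothing when the program runs, but they let a checker treat the function as a fixed, verified piece of machinery. The shipping-cost function is invented for the example.

```python
# First pass: pure software. Nothing is frozen; anything goes at runtime.
def shipping_cost(weight, rate):
    return weight * rate

# Second pass: selectively freeze it. The annotations are ignored when the
# program runs (the body executes identically), but a static checker such
# as mypy will now reject callers that pass the wrong types.
def shipping_cost_frozen(weight_kg: float, rate_per_kg: float) -> float:
    return weight_kg * rate_per_kg

print(shipping_cost(2, 3))             # 6   -- works, nothing checked
print(shipping_cost_frozen(2.0, 3.0))  # 6.0 -- same behavior, now with a spec
```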
Because most programmers nowadays prefer to build marionettes rather than scary real children, most programming is oriented towards building layer upon layer of hardware. C++ and Java programmers (and I'd be willing to bet, C# programmers) are by default baking every line of code they write into hardware by modeling everything in the type system up front.
It's possible to write loosely-typed C++ and Java code using just arrays, linked lists, hashes, trees, and functions, but it's strongly deprecated (or at least frowned upon) in both camps, where strong typing and heavy-duty modeling have been growing increasingly fashionable for over a decade, to the detriment of productivity (which translates to software schedule) and flexibility (which also translates to schedule, via the ability to incorporate new features).
The devotees of the Hindley-Milner type system (which originated in academia and has only rarely escaped into the industry) like to believe that the H-M type system is far better than the Java or C++ type systems, because (among other reasons) it has fewer holes in it.
The reason H-M isn't used in the industry, folks, is that it doesn't have any holes. If you're trying to build software, but you believe (as most do) that you do it by repeatedly building and discarding hardware in a long, painful cycle until the hardware finally does what you want, then you need escape hatches: ways to tell the type system to shut the hell up, that you know what you're doing, that your program is in fact producing the desired computation. Type casts, narrowing and widening conversions, friend functions to bypass the standard class protections, stuffing minilanguages into strings and parsing them out by hand: there are dozens of ways to bypass the type systems in Java and C++, and programmers use them all the time, because (little do they know) they're actually trying to build software, not hardware.
H-M is very pretty, in a totally useless formal mathematical sense. It handles a few computation constructs very nicely; the pattern matching dispatch found in Haskell, SML and OCaml is particularly handy. Unsurprisingly, it handles some other common and highly desirable constructs awkwardly at best, but they explain those scenarios away by saying that you're mistaken, you don't actually want them. You know, things like, oh, setting variables. You don't want to do that, trust them on this. (OCaml lets you do it, but they look down their noses at such impurities.)
It's true that with some effort you can build beautiful, lithe marionettes with Haskell. But they will forever remain little wooden Pinocchios, never to be granted their wish for boyhood by the Good Fairy. Haskell has little or no notion of dynamic code loading or runtime reflection; it's such an alien concept to them that it's difficult even to discuss why they might be useful with a Haskell fan: their world-view just doesn't incorporate the idea of software that grows while it's alive and breathing.
Looking around at the somewhat shabby offerings available in the land of dynamic typing, it's not hard to see why so many programmers view dynamic languages as toys. The main crop of popular dynamic languages includes Perl, PHP, Python, Visual Basic, JavaScript, Ruby, Tcl, and Lua. With all due respect to the language designers, they all apparently flunked out of compiler school before starting work on their scripting languages. Case in point: they all screwed up lexical scoping. Ironically, having their scripting languages propelled into the limelight has turned all of them (Larry, Guido, Matz, Brendan, etc.) into world-class language designers, which makes them, Salieri-like, able to appreciate the weaknesses of their languages (which can now change only with great difficulty) in a way that most programmers will never know.
Scheme has great promise, but suffers from the fatal flaw that it can never grow, not in the way industrial-strength languages and platforms need to grow to sustain massive programmer populations. It has to stay small to keep its niche, which is in CS education.
Common Lisp is in many ways really ideal: it's a dynamically typed language with optional type annotations (i.e. you build software, then selectively turn it into hardware), lots of great tools and documentation, all of the essential features of living software I've enumerated in this essay, and a fair bit of QWAN thrown in for good measure. However, it has stopped growing, and programmers can sense momentum like a shark senses blood. Common Lisp has all of the exciting energy of a debate between William F. Buckley, Jr. and Charlton Heston. (I watched one once, and I'd swear they both slept through half of it.)
Plus Lisp lacks a C-like syntax. We will never be able to make real progress in computing and language design in our industry until C syntax is wholly eradicated: a task that could take fifty years. The only way to make progress in the meantime is to separate the model (the AST) and the presentation (the syntax) in programming languages, allow skinnable syntax, and let the C-like-syntax lovers continue to use it until they're all dead. Common Lisp could be enhanced to permit alternate syntaxes, but since it's not growing anymore, it's not going to happen. I would look elsewhere for the Next Big Thing.
Conclusion
It's 3:30am; I've overshot my monthly budget by over 2 hours. You might actually get a succinct blog out of me next month! In any case, I think I've got everything I wanted off my chest.
I think we want to build software that does what you mean. It will change society in ways that are difficult to predict, since DWIM can only really come from truly intelligent software. Nevertheless, I think it's what we want.
I doubt it will happen any time soon, because most programmers are relentlessly focused on building hardware, through diligent overapplication of their type systems to what would otherwise be perfectly good software.
In the interim between now and the emergence of DWIM from its magic lamp, I think we should build living software. These systems may only be as alive as a tree, but that should be sufficient to awaken QWAN in them, and all systems that have any QWAN at all are truly wonderful to use — at least as a programmer; their end-user quality varies widely.
Living software has a command shell, since you need a way to talk to it like a grown-up. It has an extension language, since you need a way to help it grow. It has an advice system, since you need a way to train and tailor it. It has a niche, since it needs users in order to thrive. It has a plug-in architecture, so you can dress it up for your party. And it is self-aware to the maximum extent possible given the external performance constraints. These features must be seamlessly and elegantly integrated, each subsystem implemented with the same care and attention to detail as the system as a whole.
And you shouldn't need to murder it every time you grow it. If you treat your software like a living thing, then eventually it just might become one.
If we ever wake up a real AI, I think it should be named Pinocchio.
It's about designing software. See, it seems like there's a good way to design software. A best way, even. And nobody does it. Well, a few people do, but even in those rare instances, I think it's accidental half the time.
I've been thinking about this problem on and off for quite a while, and wouldn't you know it, suddenly 18 years have gone by and I still can't quite articulate this... well, design principle, if that's what it is. But for the past month, I feel like I've been getting closer. Maybe you can help! I'll tell you what I know, and you can tell me what you know, and maybe we'll figure out something extraordinary.
By way of setting context for today's blog, I'll summarize everything I've ever written to date in one sentence: I think most software is crap. Well, that's not quite right. It's fairer to say that I think all software is crap. Yeah. There you have it, Stevey in a Nutshell: software is crap.
Even so, I think some software systems are better than others: the producer of the crap in question swallowed some pennies, maybe, so their crap is shiny in places. Once in a while someone will even swallow a ruby, and their crap is both beautiful and valuable, viewed from a certain, ah, distance. But a turd is still a turd, no matter how many precious stones someone ate to make it.
Lovely metaphor, eh? Made it myself!
Whenever I reflect on the software systems I like best — the ones that feel like old friends, like nice places to live — I see that they have some properties in common. I'll tell you what they are, at least the ones I've noticed, in a minute. Promise. But it's not the whole story. Once you add these properties to a software system, if you do them right, then you usually get a system that's as good as we can make them today. But they're still crap!
The real problem comes when I start thinking about what would happen if we could build software systems that aren't crappy. That thought exercise raises all sorts of interesting questions, and I don't know the answers to any of them. But I'll throw them out there too, maybe, as long as I don't go over my 4-hour time limit. Life beckons, and all that.
Favorite Systems
The big realization I had, sometime in the last month or so, is that all of the common properties of my favorite software systems can be derived from a single root cause: one property, or design principle, that if present will cause software to take on the right characteristics automatically.
What are my favorite software systems? Here are a few of the very best: Unix. Windows XP. Mac OS/X. Emacs. Microsoft Excel. Firefox. Ruby on Rails. Python. Ruby. Scheme. Common Lisp. LP Muds. The Java Virtual Machine.
A few more that just barely make the cut, for now: Microsoft Word. OmniGraffle Pro. JavaScript. Perforce.
Some that I think would make the cut if I learned how to use them effectively: The GIMP. Mathematica. VIM. Lua. Internet Explorer.
Most popular software systems out there don't make the cut. Most of them are quite useful, but I think they lack the essential properties of ideal software design. Examples: IntelliJ. Eclipse. Visual Studio. Java. C++. Perl. Nethack. Microsoft PowerPoint. All Nintendo and PlayStation console games. Nearly all PC games, with the notable exceptions of Doom and Quake. Most web applications, including highly useful ones like Amazon.com or Google Maps.
I won't keep you in suspense. I think the most important principle in all of software design is this: Systems should never reboot.
If you design a system so that it never needs to reboot, then you will eventually, even if it's by a very roundabout path, arrive at a system that will live forever.
All the systems I've listed need to reboot occasionally, which means their practical lifespan is anywhere from a dozen to a hundred more years. Some of them are getting up there — lots of them are in their twenties and thirties now, and there are even a few in their early forties. But they're all still a far cry from immortal.
I think the second most important design principle, really a corollary to the first, is that systems must be able to grow without rebooting. A system that can't grow over time is static, so it really isn't a system at all; it's a function. It might be a very complex function with lots of possible inputs and outputs. It might be a very useful function, and it might live for a long time. But functions are always either replaced or subsumed by systems that can grow. And I've come to believe, over nearly two decades of thinking about this, that systems that can grow and change without rebooting can live forever.
Essential Features
Here are some of the properties shared by the best software systems in the world today. Not all systems have all the properties I'll list here; I think very few of them have the full superset. I think you'll see that the more of the essential properties a system has, the more powerful, important, and long-lived it is.
Note: most of these are features for programmers. Features for non-technical end-users don't contribute to a system's lifespan. In the fullness of time, I believe programming fluency will become as ubiquitous as literacy, so it won't matter.
First: every great system has a command shell. It is always an integral part of the system. It's been there since the system was born. The designer of the system couldn't imagine life without a command shell. The command shell is a full interface to the system: anything you can do with the system in some other way can also be done in the command shell. Great command shells are a big topic in their own right (most of the essential properties of living systems are, come to think of it.) A rough sketch: a great command shell always has a command language, an interactive help facility, a scripting language, an extension system, a command-history facility, and a rich command-line editor. A truly great command shell is an example of a living system in its own right: it can survive its parent system and migrate elsewhere.
All existing command shells are crap, but they are an essential component of building the best software that can be built today.
Emacs can be thought of as the ultimate command shell: what happens when command shells are taken to their logical extreme, or at least as far as anyone has taken the idea to date.
Command shells aren't the only common feature of the greatest software systems in the world, though, so we'll have to leave them for now.
Great systems also have advice. There's no universally accepted name for this feature. Sometimes it's called hooks, or filters, or aspect-oriented programming. As far as I know, Lisp had it first, and it's called advice in Lisp. Advice is a mini-framework that provides before, around, and after hooks by which you can programmatically modify the behavior of some action or function call in the system. Not all advice systems are created equal. The more scope that is given to an advice system — that is, the more reach it has in the system it's advising — the more powerful the parent system will be.
Ruby on Rails has a minor advice system, which Rails calls "filters". It is (only) capable of advising controller actions, which are special functions that render page requests. You can't put a before-, after-, or around-filter on any old API function in the system, although Ruby itself makes this possible to some extent through its metaprogramming facilities. But it's insufficient for advice to be merely theoretically possible. Advice must be built into the system from the ground up, and must be exposed as a first-class, well-documented programmer interface. Advice is very, very powerful. Even the simple action-filtering system in Rails offers amazing flexibility; it's hard to imagine writing a Rails app without it.
Emacs has a sophisticated advice system. Common Lisp has, arguably, the most powerful advice system in the world, which contributes in no small part to the power of the language. Aspect-Oriented Programming is a herculean attempt to bring advice to the Java language, but due to fundamental limitations of Java, it has to be implemented as a language extension with its own compiler and other language tools, which has severely hampered adoption. Another impediment is that Java programmers prefer to write dead systems, and any hint of a breath of life in the system bothers them greatly.
To be sure, it bothers me too. Living software is a little scary. It's no surprise that most programmers prefer writing marionettes instead of real people; marionettes are a lot easier to manage. But I think living software is more interesting, and more useful, and quite frankly it's an inevitability in any event, so we might as well strive to understand it better. To me, that means building it.
Moving right along, world-class software systems always have an extension language and a plug-in system — a way for programmers to extend the base functionality of the application. Sometimes plugins are called "mods". It's a way for your users to grow the system in ways the designer didn't anticipate.
Microsoft Excel has an excellent mod system. It's quite a remarkable programming framework, verging on being a platform in its own right. Like all the best mod systems, it's tiered, with the easy entry point being Excel macros, working its way up through a full COM interface that can be scripted using VB or even Python or Ruby: any language with COM bindings.
One of the killer features that made Doom and Quake so popular was their mod systems. Doom had some basic facilities for scripting your own levels, sprites, and even game logic. Quake had QuakeC, which is still, I think, the gold standard for PC game scripting, but my info is a little dated, since for the past five years or so I've played mostly console games, which are sadly all deader than a coffin-nail.
The very best plug-in systems are powerful enough to build the entire application in its own plug-in system. This has been the core philosophy behind both Emacs and Eclipse. There's a minimal bootstrap layer, which as we will see functions as the system's hardware, and the rest of the system, to the greatest extent possible (as dictated by performance, usually), is written in the extension language.
Firefox has a plugin system. It's a real piece of crap, but it has one, and one thing you'll quickly discover if you build a plug-in system is that there will always be a few crazed programmers who learn to use it and push it to its limits. This may fool you into thinking you have a good plug-in system, but in reality it has to be both easy to use and possible to use without rebooting the system; Firefox breaks both of these cardinal rules, so it's in an unstable state: either it'll get fixed, or something better will come along and everyone will switch to that.
What's really amazing to me is that there isn't even a special bypass for Firefox extension developers. Their development cycle is amazingly painful, as they have to manually re-install the plugin (using the GUI) on every change. With some symlink trickery you can get around about half the steps, but the community actually frowns on this! They evidently feel that if the user is going to experience the installation pain one time, then the programmer must experience it every single time a line of code changes, as a perpetual reminder that Firefox has a crappy plug-in system that the programmer can't actually do anything about. Fun!
Firefox was given a gift by a programmer named Aaron Boodman, now at Google. The gift was GreaseMonkey, which provides an alternate way of writing Firefox extensions. Unlike Firefox's regular plug-in system, GreaseMonkey extensions can be installed and updated without rebooting Firefox, and they're relatively easy to write. This gift has given Firefox new legs. I doubt most Firefox developers (let alone the Firefox user community at large) fully appreciate the importance of GreaseMonkey to the long-term survival of Firefox.
Interestingly, GreaseMonkey is implemented as a Firefox plugin that offers its own plugin system. This is a common pattern in plug-in systems: some plugins will grow large and configurable enough to be viewed as standalone applications. Emacs has many such plugins: the advice package is a good example. For that matter, the Rails filtering system is also implemented in the style of an optional system extension. Plugins, like other software systems, also have a lifespan that's determined by how well they incorporate the features I'm discussing today. They eventually need command shells, advice, extension languages, and so on. However, plugins usually get these things for free by virtue of having been built within a living system.
Building extensible systems is much harder than the alternative. It's commonly estimated at being three to five times harder than building the equivalent non-extensible version of the system. Adding a plug-in system is much easier up front. Adding extensibility to an existing legacy system is fantastically difficult, and requires massive refactoring of the system. The refactoring needed is rarely of the pretty little automatable type currently so fashionable in Java circles. It typically requires effort of the same order as a full rewrite of the system, although as with all refactoring, the risk can be mitigated by putting thorough unit tests in place first.
Many software systems these days are accessible only remotely, over a network, e.g. via a browser and HTTP. Some of the bigger companies that have built such systems, including Yahoo!, Amazon.com and eBay, have begun to realize that programmer extensibility is a critical feature for the long-term survival of their systems. They have begun selectively opening up access to their internal systems, usually via web service interfaces. This provides a certain level of extensibility: in particular, it enables independent software developers to create their own interfaces to the system. Over time, I think a key differentiator in the web-application space will be the quality of the programmer access to the systems backing the web apps.
Plugin systems have security issues. They have usability issues. They have namespace issues. They have dependency-graph issues. They have backwards-compatibility issues. Plugin systems are damn hard to do at all, let alone do well. And they are absolutely essential for ensuring the long-term survival of software systems.
What other features do great software systems have?
I think one important element, at least today, is that they either have to be a killer app, or they need one. Every world-class software system is, by its very nature, a platform. If you have a command-shell, and an extension language with advice, and a plug-in architecture, then you've already got the makings of a platform. But you need to give users some reason for using it in the first place. So the GIMP is for editing images, Eclipse is for editing Java code, Emacs is for editing plain text, Rails is for building web apps, Firefox is for browsing, Python is for scripting, Lua is for embedding, Mathematica is for math, and so on. Each of them can be stretched into other domains, but they're strongest when they're being used in their niche.
The most generic software systems, I think, are operating systems and programming languages, and even they have a primary purpose. OSes are primarily for resource management, and the most successful programming languages have usually carved out some niche: you use C++ when you need speed, Perl when you need Unix system administration, and Java when you need an especially fat API to impress a client. JavaScript has a lock on browser programming, Lisp has been mostly carried by Emacs all these years, and Ruby has taken off mostly due to Rails.
So software systems need a niche. Someday there may be a generic software system so powerful and well-designed that it's the best system to use for literally everything. Maybe you'll be the one to build it.
The last big feature I'll enumerate today, and it's just as important as the rest, is that great software systems are introspective. You can poke around and examine them at runtime, and ideally they poke around and examine themselves as well. At the very least they should have some basic health monitoring in place. Even in a relatively small system with lots of static checking, you still need monitoring on things like input and output queues, and for large systems, you need to monitor just about everything, including the monitoring systems. (If you don't have meta-monitoring, one bad thing that can happen is that your system gets into a state where all the health checks are returning OK, but the system is in fact totally wedged.)
Introspection can (and should) take many different forms, not just health monitoring. System administration tools and diagnostics are one kind of introspection; single-step debugging is another; profiling is yet another. Dynamic linking is still another: the system needs to be able to (for instance) run bytecode verifiers to make sure the code being loaded passes basic security checks. Introspection usually comes with a performance penalty, so many programmers avoid it at runtime, or they make do with whatever minimal introspection facilities the underlying system provides (e.g. RTTI or Java's reflection facility). If you trade off introspection for speed, you're carving years or even decades off the lifespan of your software system. If you're a consultant, or you just want to get it working so you can move on to something else, then maybe it doesn't matter. But I think most programmers prefer to work on systems that will last a long time.
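Here's a tiny, entirely hypothetical sketch of the meta-monitoring point in Python: the watchdog doesn't just trust the health checks' answers, it also notices when a check has quietly stopped running, so "all green" can't silently mean "nobody has looked in an hour."

```python
# Health monitoring plus meta-monitoring: a stale check is treated as a
# failure, so the monitors themselves are being monitored.

import time

class HealthCheck:
    def __init__(self, name, probe):
        self.name = name
        self.probe = probe        # callable returning True when healthy
        self.last_run = None
        self.last_ok = False

    def run(self):
        self.last_ok = bool(self.probe())
        self.last_run = time.time()
        return self.last_ok

def watchdog(checks, max_staleness=60.0):
    """Meta-monitor: a check that never runs is as bad as one that fails."""
    now = time.time()
    problems = []
    for check in checks:
        if check.last_run is None or now - check.last_run > max_staleness:
            problems.append(check.name + ": check is stale or never ran")
        elif not check.last_ok:
            problems.append(check.name + ": check failed")
    return problems

queue_check = HealthCheck("input-queue", lambda: True)   # toy probe; a real one would ask the queue
queue_check.run()
print(watchdog([queue_check]))   # [] while the check is fresh and passing
```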
There's a long tail of other features shared by the best software systems, but at this juncture I think it's best to talk a bit about how they all derive from not rebooting, at which point you should be able to identify the other common features easily enough.
Rebooting is Dying
I can't do this topic justice today, partly because it's a big topic, and partly because I just don't understand it very well. So the best I can do is sketch the outlines of it, and hopefully you'll get the picture.
First, let's get some philosophical perspectives out of the way. I take a very broad view of software: most of it isn't man-made. It's only the man-made stuff that's crap. There's a lot of pretty good naturally-occurring software out there. I think that the workings of our brains can mostly be considered software, and the workings of our bodies are definitely software. (Heck, we can even see the assembly-language instructions in our DNA, although we haven't quite figured out the full code yet.) So people are carrying around at least two software systems: body and mind.
I also think of stable ecosystems as being software systems, and for that matter, so are stable governments. So, too, are organizations of people, e.g. companies like the one you work for. Unless I grossly misunderstood Turing's position, software is anything that can produce computation, and computation only requires a machine with some simple mechanisms for changing the machine's state in deterministic ways, including branches, jumps, and reading/writing of persistent state, plus some set of instructions (a program) for playing out these operations.
I'm assuming my definition of software here is self-evident enough that I don't need to defend the position that not all software is man-made.
So my first argument against rebooting is that in nature it doesn't happen. Or, more accurately, when it does happen it's pretty catastrophic. If you don't like the way a person works, you don't kill them, fix their DNA, and then regrow them. If you don't like the way a government works, you don't shut it down, figure out what's wrong, and start it back up again. Why, then, do we almost always develop software that way?
My next argument against rebooting is from plain old software, the kind programmers make today. Take off your programmer's hat for a moment, and think about rebooting as an end user. Do you like having to reboot your software? Of course not. It's inconvenient.
Do you remember way back, maybe ten years ago, when Microsoft made the decision to eliminate all reboots from Windows NT? I've never worked at Microsoft and I heard this thirdhand, so I don't know exactly how it went down, but my understanding is that some exec got pissed off that seemingly every configuration change required a system reboot, and he or she orchestrated a massive effort to remove them all. When I heard about it, they were down to just five scenarios where a full restart was required. Just think of the impact that change had on the U.S. economy — millions of people saving 5 to 30 minutes a day on downtime from Windows reboots. It's just staggering. If only they could have eliminated the blue screens, it would have been pretty darn good software, wouldn't it?
Linux has gone through some growing pains here, and it's finally getting better, but I would guess it still has many more reboot scenarios than Windows does today.
However, I think the reboot problem may be much deeper than a little inconvenience. Viewed from a radical, yet possibly defensible perspective, a reboot is a murder.
How so? Well, one of the biggest unsolved open questions of all time is the question of consciousness: what the heck is it? What does it mean to be conscious? What exactly does it mean to be self-aware? Are pets conscious? Insects? Venus fly-traps? Where do you draw the line? At various times in history, the question of consciousness has gone in and out of fashion, but for the past few decades it's been very much in, and philosophers and cognitive scientists are working together to try to figure it out. As far as I can tell, the prevailing viewpoint (or leading theory, or whatever) is something like this: consciousness is recursive self-awareness, and it's a gradient (as opposed to a simple on/off) that's apparently a function of how many inputs and/or recursive levels of self-awareness your brain can process simultaneously. So pets are conscious, just not as much as people are. At least that's what I've gathered from reading various books, papers and essays about it written over the past 30 years or so.
Also, a particular consciousness (i.e. a specific person) is localized, focused uniquely at any given time in a manner similar to the way your vision can only be focused on a specific location. That means if you clone it, you've created a separate person. At least that seems to be what the big thought leaders like Daniel Dennett are saying. I.e. if you were somehow able to upload your brain's software to some other blank brain and "turn it on", you'd have a person just like you, but it wouldn't be you, and that person would immediately begin to diverge from you by virtue of having different experiences.
Well, if any of this is true (and weenie academics are welcome to come poke holes in minutiae of my exposition, with the understanding that pedantry is unlikely to change the big picture here very much), then software can be conscious. It's debatable whether any software in the world today is conscious enough to present any ethical problems with shutting it off, but you have to wonder how far off it really is.
For instance, I used to think of my dog Cino (a shih-tzu) as a relatively simple state machine: pee, poo, sleep, eat, play, lick his nose for hours. As I got to know him, I gradually discerned many more states, hundreds of them, maybe even thousands, but I'm still not entirely convinced that his behavior isn't deterministic. I love him to pieces, so I'll give him the benefit of the doubt and assume he has free will. But it's clear that you can predict his behavior at least 90% of the time without much effort. I imagine the same is true for me!
A dog seems more complex than a mouse, which is in turn probably a lot more complex than a cockroach, and at some point down the chain (maybe at ants? flatworms? single-celled organisms?) it seems like there's going to be a point where the animal is completely deterministic.
At least it seemed that way to me at first, but I now think otherwise. I personally believe that all creatures can exhibit some nondeterminism, precisely to the extent that they are composed of software rather than hardware. Your hardware is definitely deterministic. Anything Cino has done since he was born, without needing to be taught, is either hardware or firmware (i.e. built-in software that can't be unlearned or changed). Routines for going to pee (and knowing when it's time), sneezing and other reflexes, processing sensory information, digesting food, and thousands of others are all hardcoded in either his brain firmware or his body firmware, and it works predictably; any observable differences are attributable to his software responding to the current environment.
In other words, I think that both consciousness and free will (i.e. nondeterminism) are software properties.
Nobody's going to shed a tear over the deliberate slaying of an amoeba. Well, most people won't. Similarly, I don't think it makes sense to get worked up when your "Hello, World!" process commits ritual suicide by exiting main() after emitting its one message to the world. But I think we've established that each invocation of your "Hello, World" program is creating a separate instance of a minute consciousness.
Well... sort of. A "Hello, World" program, which has no loops or branches, can't exhibit any nondeterminism (unless it's imposed externally, e.g. by a random hardware error), so you can think of it as pure hardware implemented in software. But somewhere between Hello, World and HAL 9000 lies a minimal software program that can be considered to possess rudimentary consciousness, at which point turning it off is tantamount to killing it.
Do we have any software that complicated today? Complex enough and self-aware enough to be considered conscious from an ethical standpoint? Probably not. But I think we'll get there someday, and by then I sure hope we're not developing software the way we do today, with a compile/reboot cycle.
I think of most of the software we build today as being like setting up dominoes. It's Domino Design. You design it very carefully, and it runs once, during which time the dominoes all fall in a nice, predictable order, and if necessary you pick them all up again. The end result can be much fancier than you can achieve with dominoes — think of all the frames in the movie Toy Story, for instance. Not much different, but definitely fancier.
Even really complex pieces of software like search engines or e-commerce systems are generally approached using Domino Design. If you program primarily in C++ or Java, you're almost certainly guilty of it.
Rebooting a domino system is unavoidable. The notion of rebooting it for every change, every upgrade, is hardwired into the way we think about these systems. As for me, I don't think dominoes are very interesting. I'd personally rather play with an amoeba. And I'd think twice before squishing it and finding myself a new amoeba, if, for instance, it refused to learn to run a maze or something.
DWIM and QWAN
My thinking in this section is a bit fuzzy, so I'll keep it short. DWIM stands for "Do What I Mean"; it's a cute acronym coined, I believe, by Warren Teitelman for his Interlisp facility of the same name, though these days it's the Perl community that bandies it about the most.
The idea behind DWIM is that perfect software is like a grateful djinn in a bottle who is granting you a wish. I'm not talking about those cheap-ass wishes you get from fountains or shooting stars, either. Everyone knows you have to phrase penny-wishes in legalese because the Wish Fairy gives you literally what you asked for, not what you meant. Just like all the software you use. In contrast, the DWIM genie is on your side, and is tolerant of grammatical and logical mistakes that might otherwise result in getting the wrong thing. ("A guy walks into a bar with a miniature piano and a twelve-inch pianist...")
DWIM is ostensibly what every programmer is trying to build into their software, but it's almost always done by guesswork, making assumptions about your end-users. Sometimes it's done by collaborative filtering or some other algorithmic approach, but that's error-prone as well. The only way to make DWIM appear more than fleetingly and accidentally is to create truly intelligent software — not just conscious, but intelligent, and hopefully also wise and perceptive and (gulp) friendly.
But that's where it starts to get a little worrisome, since everyone knows that "intelligent" doesn't necessarily imply sympathy to your cause, especially given that your cause may well be stupid.
So do we want DWIM or not? On the one hand, we want our software to be so good that it predicts our every desire and responds to all our needs, so we can live that hedonistic lifestyle we've always craved. On the other hand, it appears that the only way to do that is to create software that could easily be smarter than we are, at which point we worry about being marginalized at best, or enslaved and/or exterminated at worst.
I think this explains, at least in part, why most of our industry is trying to achieve pseudo-DWIM by building bigger and bigger dead systems. (And what better language for building big, dead systems than Java?) Dead systems are safely deterministic and controllable.
Let's be slightly less ambitious for a moment, and assume for the sake of argument that building conscious, intelligent software is unlikely to happen in our lifetime. Is it still possible to build software that's better than the run-of-the-mill crap most of us are churning out today?
There's another genie named QWAN, identified by the architect Christopher Alexander; it stands for "Quality Without a Name", and (much like the elusive thesis of this essay) it's something he has known about for most of his life and has spent all that time trying both to articulate and to produce reliably. In short, QWAN is an all-but-intangible "I know it when I see it" property of certain spaces and/or structures that appears to be (a) most achievable via organic growth, and (b) mysteriously dependent on some hardware or firmware in the human brain, since it triggers warm fuzzies for just about everyone, but nobody really knows why.
At some point in the murky past, some software professionals realized that software can also have QWAN, and in the interim it's become clear that it's just as difficult to pin down in software as in physical architecture.
However, I'll assert that QWAN is most apparent in exactly the systems I listed as my favorites earlier in this essay, and it's mostly absent in the systems I said lacked the essential properties of living software. (I then enumerated a few of the big ones and did some vague hand-waving about the rest of them, in case you've forgotten.)
People have observed that Emacs has QWAN: a nice, organic, comfortable rightness that fits like a pair of old jeans, or a snug warm chair in a library by a fire. It's very right-brain stuff we're talking about here, all touchy and feely and sensitive: exactly the kind of thing that programmers are often so lousy at, so it's no wonder we don't know the recipe for building it into our software. But unlike with UI design, software QWAN can only come from the programmer, who is playing the roles of interior decorator, head chef, and ergonomic consultant for all the programmer-users of said software.
That's why I think most software is crap. I'm not talking about the end-user experience, I'm talking about the programmer experience (if it even offers one). Design is an art, not a science, and most of us aren't born artists. So it's actually pretty remarkable when QWAN evidences itself even a little in a software system.
Note, incidentally, that the end-user experience and the programmer experience offered by a software system are almost totally orthogonal. Visual Studio actually offers a pretty slick end-user experience, where the end-user in question is a programmer trying to write some C++ or C# code (as opposed to trying to build tools to help write C++ or C# code, or to automate away other common tasks). Emacs offers a completely hostile end-user experience and a nearly unparalleled programmer experience. Eclipse is somewhere in the middle on both dimensions, but is definitely weighted more towards the end-user programmer than the programmer-user.
Finally, there are some aspects to QWAN that probably can't be captured in any recipe. Just adding my ingredients (a command shell, an extension language and advice system, etc.) will improve your software system, but it's no guarantee that QWAN will appear. Some of it just boils down to taste. And since tastes differ, and QWAN is nowhere near as smart as its cousin DWIM, one person's QWAN may be another's software hell.
I still think we should try to build it.
The Role of Type Systems
OK. I've finally built up enough context to be able to explain my position on type systems.
In short, type systems are for building hardware. Every software system has a machine beneath it — a stack of machines, actually, a whole heap of them, from the von Neumann machine with its CPUs and registers and buses down through the semiconductors and doping barriers to the atoms and quarks and their quantum-mechanical underpinnings.
But I take a broader view of hardware. Hardware is anything that produces computations deterministically and is so inflexible that you must either break it or replace it wholesale in order to change it. So it's not just the physical hardware made of atoms; in my view the hardware of a machine also encompasses any software that cannot be changed without a system reboot.
The notion of a type system is closely tied to the notions of "compile time" and "static checking". I tend to take a very broad view of type systems: I consider any constraint system imposed on computation to be a kind of type system, including things like scoping and visibility constraints (public/private/protected/friend/etc.), security constraints, even the fundamental computing architecture of the program as defined by its complete set of function signatures. My view of "static" typing actually includes some stuff that happens at runtime.
But most people, when they talk about static type systems, are talking about the kinds of validations that occur at compile time (with the help of type annotations that are part of the language syntax, and optionally type inference that's run on the IR built from the AST) and which evaporate at runtime.
Static type systems have several benefits over deferring the constraint checking to runtime. For one thing, they can result in faster code, because if you do your checking up front, and you can guarantee the system won't change at runtime, then you don't need to do the checks at runtime. For another, static constraints can often be examined and reported on by tools other than the compiler (e.g. your IDE), which can make it easier to follow the program flow.
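As a small illustration of both points, here's Python's optional annotation syntax standing in for a static type system (my stand-in, not something this essay prescribes): a checker such as mypy can reject the bad call without ever running the program, and at runtime the annotations are completely inert.

```python
# "Checked up front, evaporates at runtime": the annotations below are visible
# to tools like mypy or an IDE, but they change nothing about execution.

def total_cents(prices: list[float]) -> int:
    return round(sum(prices) * 100)

print(total_cents([1.25, 2.50]))   # 375: fine both statically and at runtime

# If uncommented, a static checker flags the next line before anything runs;
# plain Python would only discover the problem when the call executes.
# total_cents("not a list")
```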
And that's all just ducky.
There's nothing wrong with static type systems. You just have to realize that when you use them, you're building hardware, not software.
A static type system has absolutely no runtime effect on a correct program: exactly the same machine code is generated regardless of how many types you choose to model. Case in point: C++ code is no more efficient than C code, and C lacks any but the most primitive type system. C isn't dynamically typed, but it's not particularly statically typed either. It's possible to write fast, robust programs in straight C. (Emacs and the Linux kernel are two fine examples.)
The proper use of a static type system is to freeze software into hardware. When a particular set of deterministic operations (e.g. OpenGL rendering primitives) becomes ubiquitous enough and stable enough that it's worth trading off some flexibility to create a high-performance machine layer out of the API, we move it into hardware. But it's mostly a performance optimization (and in very small part a standardization move, I suppose) that incurs a dramatic penalty in flexibility.
Because most programmers nowadays prefer to build marionettes rather than scary real children, most programming is oriented towards building layer upon layer of hardware. C++ and Java programmers (and I'd be willing to bet, C# programmers) are by default baking every line of code they write into hardware by modeling everything in the type system up front.
It's possible to write loosely-typed C++ and Java code using just arrays, linked lists, hashes, trees, and functions, but it's strongly deprecated (or at least frowned upon) in both camps, where strong typing and heavy-duty modeling have been growing increasingly fashionable for over a decade, to the detriment of productivity (which translates to software schedule) and flexibility (which also translates to schedule, via the ability to incorporate new features).
The devotees of the Hindley-Milner type system (which originated in academia and has only rarely escaped into the industry) like to believe that the H-M type system is far better than the Java or C++ type systems, because (among other reasons) it has fewer holes in it.
The reason H-M isn't used in the industry, folks, is that it doesn't have any holes. If you're trying to build software, but you believe (as most do) that you do it by repeatedly building and discarding hardware in a long, painful cycle until the hardware finally does what you want, then you need escape hatches: ways to tell the type system to shut the hell up, that you know what you're doing, that your program is in fact producing the desired computation. Type casts, narrowing and widening conversions, friend functions to bypass the standard class protections, stuffing minilanguages into strings and parsing them out by hand: there are dozens of ways to bypass the type systems in Java and C++, and programmers use them all the time, because (little do they know) they're actually trying to build software, not hardware.
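The hatches listed above are Java and C++ idioms, but the same pressure shows up in any checked language that has to ship working code. As a hedged illustration in Python, whose gradual type system comes with its holes built in: Any, cast, and per-line suppressions exist precisely so you can tell the checker you know what you're doing.

```python
# Gradual typing's built-in escape hatches, the moral cousins of Java casts
# and C++ friend functions: ways to tell the checker to stand down.

from typing import Any, cast
import json

def load_config(path: str) -> dict[str, Any]:
    with open(path) as f:
        raw: Any = json.load(f)         # Any: "don't check anything about this value"
    return cast(dict[str, Any], raw)    # cast: "trust me, it really is a dict"

# And for everything else there's the bluntest hatch of all: a per-line
# "type: ignore" comment that silences the checker entirely.
```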
H-M is very pretty, in a totally useless formal mathematical sense. It handles a few computation constructs very nicely; the pattern matching dispatch found in Haskell, SML and OCaml is particularly handy. Unsurprisingly, it handles some other common and highly desirable constructs awkwardly at best, but they explain those scenarios away by saying that you're mistaken, you don't actually want them. You know, things like, oh, setting variables. You don't want to do that, trust them on this. (OCaml lets you do it, but they look down their noses at such impurities.)
It's true that with some effort you can build beautiful, lithe marionettes with Haskell. But they will forever remain little wooden Pinocchios, never to be granted their wish for boyhood by the Blue Fairy. Haskell has little or no notion of dynamic code loading or runtime reflection; these are such alien concepts to Haskell fans that it's difficult even to discuss why they might be useful: their world-view just doesn't incorporate the idea of software that grows while it's alive and breathing.
Looking around at the somewhat shabby offerings available in the land of dynamic typing, it's not hard to see why so many programmers view dynamic languages as toys. The main crop of popular dynamic languages includes Perl, PHP, Python, Visual Basic, JavaScript, Ruby, Tcl, and Lua. With all due respect to the language designers, they all apparently flunked out of compiler school before starting work on their scripting languages. Case in point: they all screwed up lexical scoping. Ironically, having their scripting languages propelled into the limelight has turned all of them (Larry, Guido, Matz, Brendan, etc.) into world-class language designers, which makes them, Salieri-like, able to appreciate the weaknesses of their languages (which can now change only with great difficulty) in a way that most programmers will never know.
Scheme has great promise, but suffers from the fatal flaw that it can never grow, not in the way industrial-strength languages and platforms need to grow to sustain massive programmer populations. It has to stay small to keep its niche, which is in CS education.
Common Lisp is in many ways really ideal: it's a dynamically typed language with optional type annotations (i.e. you build software, then selectively turn it into hardware), lots of great tools and documentation, all of the essential features of living software I've enumerated in this essay, and a fair bit of QWAN thrown in for good measure. However, it has stopped growing, and programmers can sense momentum like a shark senses blood. Common Lisp has all of the exciting energy of a debate between William F. Buckley, Jr. and Charlton Heston. (I watched one once, and I'd swear they both slept through half of it.)
Plus Lisp lacks a C-like syntax. We will never be able to make real progress in computing and language design in our industry until C syntax is wholly eradicated: a task that could take fifty years. The only way to make progress in the meantime is to separate the model (the AST) and the presentation (the syntax) in programming languages, allow skinnable syntax, and let the C-like-syntax lovers continue to use it until they're all dead. Common Lisp could be enhanced to permit alternate syntaxes, but since it's not growing anymore, it's not going to happen. I would look elsewhere for the Next Big Thing.
Conclusion
It's 3:30am; I've overshot my monthly budget by over two hours. You might actually get a succinct blog out of me next month! In any case, I think I've gotten everything I wanted to say off my chest.
I think we want to build software that does what you mean. It will change society in ways that are difficult to predict, since DWIM can only really come from truly intelligent software. Nevertheless, I think it's what we want.
I doubt it will happen any time soon, because most programmers are relentlessly focused on building hardware, through diligent overapplication of their type systems to what would otherwise be perfectly good software.
In the interim between now and the emergence of DWIM from its magic lamp, I think we should build living software. These systems may only be as alive as a tree, but that should be sufficient to awaken QWAN in them, and all systems that have any QWAN at all are truly wonderful to use — at least as a programmer; their end-user quality varies widely.
Living software has a command shell, since you need a way to talk to it like a grown-up. It has an extension language, since you need a way to help it grow. It has an advice system, since you need a way to train and tailor it. It has a niche, since it needs users in order to thrive. It has a plug-in architecture, so you can dress it up for your party. And it is self-aware to the maximum extent possible given the external performance constraints. These features must be seamlessly and elegantly integrated, each subsystem implemented with the same care and attention to detail as the system as a whole.
And you shouldn't need to murder it every time you grow it. If you treat your software like a living thing, then eventually it just might become one.
If we ever wake up a real AI, I think it should be named Pinocchio.